Last Update 9:05 PM January 16, 2021 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Saturday, 16. January 2021

Ben Werdmüller

Adjusting the volume

I'm not quite an indieweb zealot - you can find me on Twitter and other social networks over the web - but I've been writing on my own site since 1998 (albeit not one consistent, continuous site - I change it up every decade or so), and it's become a core part of who I am, how I think, and how I represent myself online.

You might have noticed - email subscribers certainly did - that I've turned up the volume on my posting this year. So far in January, that's meant a post a day in my personal space. The feedback has generally been good, but a few email subscribers did complain. I totally get it. Nobody wants their inboxes to clog up; the calculus might be different if this was a business newsletter with actionable insights, but that's not what this is. More than anything, I'm hoping to spark a conversation with my posts.

There are a few things I'm thinking about doing. The first is dropping the frequency of the emails, and thinking about them as more of a digest. You'd get one on Thursday, and one on Sunday (or something like that). Obviously, RSS / h-feed / JSON-feed subscribers (hi!) would still receive posts in real time. Maybe there would also be an email list for people who did want to receive posts as I wrote them.

The second thing I'm thinking about doing is taking this posting frequency and putting it on Medium for the rest of the month, with a regular summary post over here. This is a controversial thing for someone who's so deep into indieweb and the open web to suggest, but there are a few reasons for trying this. Mostly I want to see how the experience compares. I worked at Medium in 2016, and posted fairly regularly there during that time and while I was at Matter Ventures, but the platform has evolved significantly since then.

So that's what I'm going to do to start. For the remainder of January, I'll be posting on Medium daily, with summary listings posted here semi-regularly. Then I'll return here in February and let you know what I discovered.

You can follow me on Medium over here.


John Philpin : Lifestream

Repeat after me …. ‘They’ are NOT ‘elites’.

Repeat after me ….

‘They’ are

NOT

‘elites’.


The Great Grift - No Mercy / No Malice

”The federal response to the pandemic has been massive — a $5 trillion effort. It has also been a con. Under the cloud cover of Covid-19, the shareholder class has used its outsized influence over government to toss a few loaves of bread at those suffering, all the while accruing trillions of dollars in wealth financed on the backs of younger, and future, generations.”

Friday, 15. January 2021

Ben Werdmüller

Paradigm shift

One of my favorite pieces of software is Apple photo search. If you've got an iPhone, try it: great searches to try are "animal selfie", "bird", "ice cream", or "cake".

What's particularly amazing about these searches is that the machine learning is performed on-device. In fact, Apple provides developer tools for on-device machine learning in any app across its platforms. There's no cloud processing and the privacy issues related to that. The power to identify which photos are a selfie with your cat lies in the palm of your hand.

Not everyone can afford a top-end iPhone, but these represent the leading edge of the technology; in the near future, every phone will be able to perform rapid machine learning tasks.

Another thing my phone does is connect to 5G networks. 5G has a theoretical maximum bandwidth speed of 10gbps, which is faster than the kind of home cable internet you might get from a company like Comcast. In practice, the networks don't quite work that way, but we can expect them to improve over time. 5G networks will allow us to have incredibly fast internet virtually anywhere.

Again: not every phone supports 5G. But every phone will. (And, inevitably, 6G is around the corner.)

Finally, my phone has roughly the same amount of storage as my computer, and it's every bit as fast. Not everyone has 256GB of storage on their phone - but, once again, everyone will.

On the internet, we mostly deal with clients and servers. The services we use are powered by data centers so vast that they sometimes have their own power stations. Technology startup founders have to consider the cost of virtualized infrastructure as a key part of their plans: how many servers will they need, what kinds of databases, and so on.

Meanwhile, the client side is fairly thin. We provide small web interfaces and APIs that connect from our server infrastructure to our devices, as if our devices are weak and not to be trusted.

The result is a privacy nightmare: all our data is stored in the same few places, and we usually just have to trust that nobody will peek. (It's fair to assume that somebody is peeking.) It also represents a single point of failure: if just one Amazon datacenter in Virginia encounters a problem, it can seem like half the internet has gone down. Finally, the capabilities of a service are limited by the throughput of low-powered virtualized servers.

But the world has changed. We're addicted to these tiny devices that happen to have huge amounts of storage, sophisticated processors, and incredibly fast, always-on connectivity. I think it's only a matter of time before someone - potentially Apple, potentially someone exponentially smaller than Apple - uses this to create an entirely new kind of peer to peer application infrastructure.

If I'm in the next room to you and I send you a Facebook message, the data finds its way to Facebook's datacenter and back to you. It's an incredibly wasteful process. What if the message just went straight to you over peer to peer wifi (or whatever connection method was most convenient)? And what if there was a developer kit that made it easy for any engineer to build an application over this opportunistic infrastructure without worrying about the details?

Lately I've been obsessed with this idea. The capabilities of our technology have radically changed, but our business models and architectural paradigms haven't caught up. There's an exciting opportunity here - not just to be disruptive, but to create a more private, more immediate, and more dynamically functional internet.


MyDigitalFootprint

Data: Governance and Geopolitics

Interesting article here from the Center for Strategic and International Studies. Written by Gregory F. Treverton, a senior adviser.

“How data is governed can be thought of along several lines of activity: legislating privacy and data use, regulating content, using antitrust laws to dilute data monopolies, self-regulating by the tech giants, regulating digital trade, addressing intellectual property rights (IPR) infringement, assuring cybersecurity, and practicing cyber diplomacy. Of these, antitrust, regulation, and privacy are most immediately in the spotlight, and are the focus of this commentary, but it will also touch briefly on the connections with other issues.”


I have written a lot about how data is data (and not oil), but this article misses the point about the difference between:

Data Governance (micro)

Data Governance (macro)

Governance with Data

Data governance (micro) is a massively important and large field focused on ensuring the quality, provenance and lineage of data, covering bias, ethics, data analysis, statistics and many more topics. It ensures that the data used to make decisions, and the way it is used, is of the best possible quality.

Data Governance (macro), as per the article, is the broader topic of international regulation of data.

Governance with Data is where the board, those in office and those in positions of authority are given or presented with data (via a data governance process) to make complex judgments.


The take-away: “The digital age presents geopolitical and philosophical problems with complexity and speed beyond the reach of the existing global architecture that underrepresents both emerging powers, like China, and ever more influential private sector actors, from the tech giants to the Gates Foundation. We are addressing twenty-first-century problems with a twentieth-century mindset, approach, and toolkit”, and with an eighth-century view of sovereignty, governance and oversight. We have to update more than just our mindset to embrace this change. We need to upgrade our very understanding of hierarchy, power, influence, agency and purpose, to name a few.




Information Answers

The Growing Importance of Data Provenance

Remember when it used to be an amusing conversation, ‘the amount of data in the world is growing at blah blah blah rate….’? I would contend […]

MyDigitalFootprint

The World’s Most Influential Values, In One Graphic

Source: https://www.visualcapitalist.com/most-influential-values/
Add this to my principles and rules thinking about how to connect risk frameworks. The key point is the delta. I will map this onto Peak Paradox.


Rethinking the Board

Interesting article here at DEMYST by Neil Tsappis and Dr Tamara Russell, presenting principles for a new boardroom operating model. Worth a read.

Worth picking up on several themes:

1. Diversity at the board, and the assumption that balance is what we want to achieve. Balance in representation and balance in decisions/judgment. Compromise may not be the right answer.

2. Heuristics assume we have rules; most boards, I would suggest, have to continually create new ways of working and understanding, so resting back on established ways of doing something is not going to get to better decisions.

3. Fully agree with the market articulation: faster, quicker and more complex. But we don't actually question the structure of the board and its role. It appears to be sacrosanct. I have explored Organisation 2.0, where we question the entire thinking, a few times.

4. It does not address the skills gap in data; this, to me, is something we have to do.

I love their work and it makes me think.


Ben Werdmüller

The year of self-respect

I'm nearing the end of my first week on the Whole30 diet. I'm still not sure what I think about it: which foods are allowed and which aren't feels a bit arbitrary, and the very fact that the diet has a logo and a trademark is off-putting. On the other hand, maybe it's my imagination, but I feel a lot better. I'm certainly eating a great deal more vegetables.

I've also been better about doing exercise before work so far this year. Usually that's involved running, but I've been doing some weight training, as well as long, brisk walks and push-ups every day. The result is that I'm more alert during the day, and feel free to relax and read / write during the evening. (Whole30's ban on alcohol helps here, too. I had fallen into a pattern of drinking a glass of wine or two most evenings in 2020.)

I suffer from anxiety, bouts of depression, and historically really low self-esteem. At my lowest, I made a plan - never followed - to end my life. Shallow self-confidence has sometimes led me to bad places and poor choices. It's frequently led me to sleepless nights and their subsequent, zombie-like days. I've spent much of my life feeling like I must be physically abhorrent; like there's something horribly wrong with me that nobody wanted to tell me about. As a kid, I was over six feet tall when I was thirteen, and I didn't so much as date until I was twenty-one. Those feelings of inferiority have never really left me.

By rights, the pandemic should have made me feel worse. We were all locked inside; I spent a great deal more time caring for my terminally ill mother as she precipitously declined. The goals I had for my life were out of reach. It should have been a miserable time.

And it was, in lots of ways, but it also gave me something important. I could be in my own space, rather than commuting to work. I was not expected to show up in a certain way. All the worries I used to have about the impression I was casting in the real world - worries that I resented having terribly - evaporated. Instead, I could just be me.

I gave myself permission to write more than blog posts. On a whim, I entered a flash fiction competition, and placed first in the initial round. I enrolled in workshops and courses and continued to practice. Today, I have a regular practice of writing every day.

I ran more than I'd run in my entire life leading up to that point combined: at least two 5Ks a week, which for many people isn't all that much, but for me was an enormous step up. Towards the end of the year, I had some conversations about stressful things that had been building up as reservoirs of bad feeling that were threatening to spill over.

Somewhere in all of this, my self-esteem crept up, and my anxiety started to diminish. I felt less awful about my body and found that the stressful conversations went well. The darkness is not necessarily gone for good; anyone who suffers from depression knows that the cloud can re-emerge at any time. I also don't think it's just because I started to do exercise and did some writing; I think those things were reflections of something else.

Self-respect is something that requires practice and investment, and somewhere during last year, I made the decision to spend the time. It wasn't esteem, as such, at least at first, but I decided that I was worth spending time on. Writing and exercise weren't things that would make other people like me. They were just for me. And a switch flipped, without me realizing it, that allowed me to know that was okay.

In a lot of ways, I feel like a different person going into 2021. I'm full of gratitude, and excited for the future. We're still in an awful, deadly pandemic; I still have the trauma of watching my mother deal with her illness. But in lots of ways, I can meet those challenges with more energy.

There are ups and downs. I had a blip before Christmas where I still felt incredibly low. But generally speaking, every day is a small progression in the right direction. Things are looking up.


reb00ted

Are most Facebook users cost centers, rather than profit centers?

According to CNBC, Facebook made $7.89 in revenue per average global user in the 3rd quarter last year (with a high of $39.63 in the US and Canada, and a low of $2.22 outside US, Canada, Europe and Asia-Pacific).

According to Yahoo! Finance and my calculation, if its expenses in the same quarter were $13.4 billion, expense per user was $13.4 / $21.5 * $7.89 = $4.92 on average (proportionally allocated given expense / revenue ratio).
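As a quick sanity check, here's that allocation worked through in a few lines of Python; the figures are the ones quoted above, so treat it as a back-of-the-envelope sketch rather than fresh analysis.

# Back-of-the-envelope check of the per-user numbers quoted above (Q3 2020).
total_revenue_bn = 21.5      # approximate quarterly revenue, $bn
total_expenses_bn = 13.4     # approximate quarterly expenses, $bn
avg_revenue_per_user = 7.89  # global average revenue per user, $

# Average cost per user, allocated proportionally to the expense/revenue ratio:
avg_cost_per_user = total_expenses_bn / total_revenue_bn * avg_revenue_per_user
print(f"average cost per user: ${avg_cost_per_user:.2f}")  # ~$4.92

# If real per-user costs sit anywhere near that flat average, users at $2.22 of
# revenue are well below break-even, while US/Canada users at $39.63 subsidize them.
for region, arpu in {"US & Canada": 39.63, "Rest of world": 2.22}.items():
    print(f"{region}: revenue ${arpu:.2f}, margin vs. average cost ${arpu - avg_cost_per_user:+.2f}")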

Revenue per user is obviously quite different in different parts of the world, but what about costs? It seems to me that on a per-user-basis, selling and serving all those ads in the US and Canada that led to so much revenue per user is probably more expensive, compared to some places that have less commerce. But as dramatically different as $39.63 and $2.22 on the revenue side? I don’t think so. Not even close.

In other words, users in the rest of the world at $2.22 of revenue per user are almost certainly not profitable. Even if expenses there were only half of average, it would still not be enough.

Of course these numbers are averages across the regions, and chances are that the differences between users within one region are also quite striking. I don’t have numbers on those. But I would bet that some users in the US and Canada also bring in less revenue than the $4.92 in average cost per user.

Who would those unprofitable users be in the US, say? Well, those demographics and those neighborhoods in the social graph in which advertisers see little opportunities to make a sale, because, for example, everybody is unemployed and angry.

(So if, for example, a certain presidential campaign came by and wanted to specifically target this demographic with political ads … I for one can vividly imagine the leap of joy of some Facebook business guy who finally saw how to get promoted: “I turned a million users from being a cost center to being a profit center”. And democracy be damned. Of course, I’m speculating here, but directionally I don’t think I’m wrong.)

Which suggests another strategy to unseat Facebook as the dominant social network: focus on picking off the users that generate the most revenue for Facebook, as they subsidize the rest. If that relatively small subset of users jumped ship, the rest of the business would become unprofitable.

(I'm jotting this down because I hadn’t seen anybody suggest this strategy. We do need to find ways of ending surveillance capitalism after all.)

Thursday, 14. January 2021

John Philpin : Lifestream

Political bankruptcy, just like the financial sort, happens

Political bankruptcy, just like the financial sort, happens two ways. Gradually, then suddenly. John Gruber

reb00ted

Decentralization is about authority: who has it, and how can they exercise it.

This and other gems in a great piece by Beaker Browser developer Paul Frazee.


John Philpin : Lifestream

Delta is banning guns in checked luggage on flights to Washi

Delta is banning guns in checked luggage on flights to Washington D.C. ahead of Biden’s inauguration. …and they weren’t before. c’est la vie.


”In 2005, Apple moved to Intel to gain equality. In 2020,

”In 2005, Apple moved to Intel to gain equality. In 2020, it’s moved away from Intel to gain superiority,”

Ken Segall


Ben Werdmüller

Thinking broader

It's really easy to assume that the world around us is fixed and absolute. The way we do things is the way things are done. The internet works the way it does. The market is the market. People behave how people behave.

One of my superpowers has traditionally been that I'm an outsider: I'm an off-kilter third culture kid who doesn't really fit into anyone's community, which means I see everything from a slightly different angle. Often that's allowed me to see absurdities that other people can't see, and ask questions that other people might not have asked. Sometimes, they're painfully naive. But naivety and optimism can lead to interesting new places.

Lately I've had a few conversations that have made me realize that my perspective has settled in a bit more than I'm comfortable with; I feel like my horizons have closed in a little bit. It's been a sobering realization. Narrower horizons lead to safer, more timid decisions; a small island mentality where a smaller set of possible changes are considered and new ideas are more likely to be met with a pessimistic "that'll never work". It's a toxic way to think that creeps up on you.

It's not enough to invent new things for our current context - there's a lot to be gained from reconsidering that context entirely. Why are things the way they are? Do they have to be? What would be better?

Chris Messina's website subtitle used to be "All of this can be made better. Are you ready? Begin." I've thought about that phrase a lot over the years. It's an inspiring mission statement and a great way to think. It also requires that you feel some ownership or ability - permission - to change the way things work.

People come to this in different ways. I think it helps to have seen broader change manifested, but it's not a prerequisite. It certainly helps to have been in an environment filled with broader, change-oriented thinking. If you live in a world of conservative stagnation, you're much more likely to feel the same way. But, of course, plenty of people from those sorts of environments emerge to change the world.

And it turns out that people lose it in different ways, too. I'm grateful for conversations with smart people who challenged my thinking and encouraged me to take a step back.

For me, right now, this is wrapped up in the fabric of what I do. Why do we have to use the software and protocol models we've used for decades? What does it look like to think beyond APIs and browsers, clients and servers? What if, knowing what we know today, something radically different could be better? Do we need to depend on vast datacenters owned by megacorporations, or can we do away with them altogether?

It's worth asking the questions: how could you broaden your thinking? What in your life do you consider to be immovable that might not be? What does thinking bigger and putting everything on the table look like for you?


John Philpin : Lifestream

Yes www.juancole.com

my first working trigger in Keyboard Maestro … 🅣🅗🅐🅝🅚🅨🅞🅤 @mac

my first working trigger in Keyboard Maestro … 🅣🅗🅐🅝🅚🅨🅞🅤 @macsparky -

Wednesday, 13. January 2021

John Philpin : Lifestream

You’re hungry - I get it. We all need to eat, but why am i f

You’re hungry - I get it. We all need to eat, but why am I forced to watch you violently masticate your food while you sit on the other end of the Zoom call?


Ben Werdmüller

The ambient future

I have a longstanding bet that we're moving to an ambient computing world: one where the computer is all around us, interacting with us in whatever way is convenient to us at the time. Smart speakers, high-spec smartphones, natural language intelligent assistants, augmented reality glasses, wearables with haptic feedback, and interactive screens aren't individual technologies in themselves, but all part of a contiguous ambient cloud with your digital identity at the center. In this vision of the future, whoever controls the ecosystem controls the next phase of computing. Ideally, it's an open system with no clear owner, but that won't happen by itself.

A lot of the reports coming out of (virtual) CES this year involve augmented reality of one kind or another. Lots of different companies have new models of AR glasses, which are becoming a little bit more like something you'd actually want to put on your face with each passing year; Sony also has a pretty cool sounding (but ruinously expensive) spatial display that looks like examining a 3D object through a window.

Throughout all this, Apple is pretty quiet. Even though Siri is objectively the worst digital assistant, it was early to the market, and signaled an intention to pursue a vision for ambient computing that has since been followed up with the Apple Watch, AirPods, and HomePods. It has filed patents for AR glasses. And I have a strong suspicion - with no inside knowledge whatsoever - that it's planning on doing something interesting around audio. Podcasts are cool, but evolving what podcasts can be in an ambient computing world is cooler. Whereas most companies are concentrating on iterating the technology, companies like Apple rightly think about the human experience of using it, and elegantly figuring out its place at the intersection of tech and culture. It won't be the first company to come out with a technology, but it may be the first to make it feel human.

If this is the way the world is going - and remember, it's only a bet - it has enormous implications for other kinds of applications. We're still largely wedded to a monitor-keyboard paradigm that was invented long before the moon landing; most of your favorite apps and services amount to sitting in front of a rectangular display and lightly interacting with it somehow. An ambient paradigm demands that we pay close attention to calm tech principles so that we are not cognitively overloaded, jibing with our perception of reality rather than stealing our engagement completely. The main job of the internet is to connect people; what does that look like in an ambient environment? What does it mean for work? For fintech? For learning? And given that all we have is our perception of reality, who do we trust with augmenting it?

Anyway, Norm Glasses will make everyone look like the main character in a John Hughes movie, and I'm kind of here for it.

Tuesday, 12. January 2021

John Philpin : Lifestream

Is Castro broken for anybody else?



GoFundMe Bans Fundraising For Travel To Trump Rallies

I didn’t even know that was a thing!


Just reading the news … it couldn’t happen to a nicer chap.



Liars or Idiots

It’s hard to decide.

Either way - they don’t make good leaders.


Simon Willison

Culture is the Behavior You Reward and Punish

Jocelyn Goldfein describes an intriguing exercise for discovering your company culture: imagine a new hire asking for advice on what makes people successful there, and use that to review what behavior is rewarded and discouraged.

Via @jgoldfein


John Philpin : Lifestream

Poor Soldier

It’s not just the ‘computer say no’ … 45% of the country say

It’s not just the ‘computer say no’ … 45% of the country say ‘no’ (pdf)

… dropping this here today to add to my ‘lobster bisque’ for the future …

Inaction today will come back and bite …. real hard.

Monday, 11. January 2021

Ben Werdmüller

I’m hiring

I'm hiring for two roles. I'm looking for product leaders with hands-on mission-driven startup experience, and for back-end engineers who have both written in Ruby on Rails and scripted headless browsers in a production environment as part of their work. In both cases, I'm looking for people who have experience in these roles in other startups.

Here's how I think about hiring: more than anything else, I'm building a community of people who are pulling together for a common cause. Each new person should add a new perspective and set of skills, and also be ready to productively evolve the culture of the community itself. That means intentionally hiring people with diverse backgrounds who embody our core values.

Some values - like being empathetic and collaborative, or being great at both written and verbal communication - are absolute requirements. Because I'm building a community, I need people who get on well with others, who share my desire for inclusivity, and can work in a group. A high EQ is an enormous asset for an engineer. Other values may evolve over time, as people propose new ideas that change the way we all work - perhaps based on processes they've seen working well at places they've worked in the past. Anyone who joins the community should have the ownership to improve it.

ForUsAll is changing the way people save for retirement. We have radically ambitious goals for 2021, centered around helping people find financial stability in ways that are still very new. I'll write about them when we're ready, but for now, the key is to find people who are motivated by a strong social mission and by creating something new, and who enjoy the fast-changing nature of startups. I believe in healthy work-life integration, treating people with kindness, and a human-centered, empathetic approach - all while we're building cool stuff with energy and creativity.

If that sounds like your kind of thing, and you're located in the US, reach out. I'd love to chat with you.


John Philpin : Lifestream

”Always seemed pretty obvious that the minds behind Parler

”Always seemed pretty obvious that the minds behind Parler weren’t exactly sharp knives, but it’s looking more and more like they’re on the plastic cutlery end of the spectrum.”

Gruber

Daring Fireball: Why Parler Is Likely to Fold


Doc Searls Weblog

How we save the world

Let’s say the world is going to hell. Don’t argue, because my case isn’t about that. It’s about who saves it.

I suggest everybody. Or, more practically speaking, a maximized assortment of the smartest and most helpful anybodies.

Not governments. Not academies. Not investors. Not charities. Not big companies and their platforms. Any of those can be involved, of course, but we don’t have to start there. We can start with people. Because all of them are different. All of them can learn. And teach. And share. Especially since we now have the Internet.

To put this in a perspective, start with Joy’s Law: “No matter who you are, most of the smartest people work for someone else.” Then take Todd Park‘s corollary: “Even if you get the best and the brightest to work for you, there will always be an infinite number of other, smarter people employed by others.” Then take off the corporate-context blinders, and note that smart people are actually far more plentiful among the world’s customers, readers, viewers, listeners, parishioners, freelancers and bystanders.

Hundreds of millions of those people also carry around devices that can record and share photos, movies, writings and a boundless assortment of other stuff. Ways of helping now verge on the boundless.

We already have millions (or billions) of them reporting on everything by taking photos and recording videos with their mobiles, obsolescing journalism as we’ve known it since the word came into use (specifically, around 1830). What matters with the journalism example, however, isn’t what got disrupted. It’s how resourceful and helpful (and not just opportunistic) people can be when they have the tools.

Because no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar. Nicole and Jonas Maines.)

Yes, there are some wheat/chaff distinctions to make here. To thresh those, I dig Carlo Cipolla‘s Basic Laws on Human Stupidity (.pdf here) which stars this graphic:

The upper right quadrant has how many people in it? Billions, for sure.

I’m counting on them. If we didn’t have the Internet, I wouldn’t.

In Internet 3.0 and the Beginning of (Tech) History, @BenThompson of @Stratechery writes this:

The Return of Technology

Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.

—followed by this graphic:

If you want to know what he means by “Politics,” read the piece. I take it as something of a backlash by regulators against big tech, especially in Europe. (With global scope. All those cookie notices you see are effects of European regulations.) But the bigger point is where that arrow goes. We need infrastructure there, and it won’t be provided by regulation alone. Tech needs to take the lead. (See what I wrote here three years ago.) But our tech, not big tech.

The wind is at our backs now. Let’s sail with it.

Bonus links: Cluetrain, New Clues, World of Ends, Customer Commons.

And a big HT to my old buddy Julius R. Ruff, Ph.D., for turning me on to Cipolla.


John Philpin : Lifestream

Stripe Stops Processing Payments for Trump Campaign Website.

They won’t be able to raise money to fund their Defence - then again the only lawyers left that would represent him are either charged themselves or would work for free!


reb00ted

And our ICUs are full

Santa Clara County, the heart of Silicon Valley. Better not fall off the ladder or silly things like that for the foreseeable future.


John Philpin : Lifestream

Episode 61 of The People First Podcast is now available - a

Episode 61 of The People First Podcast is now available - a podcast that introduces Daniel Szuc, an Aussie living in Hong Kong who, with his wife Jo, has launched Make Meaningful Work. Take a listen.


reb00ted

Hal Plotkin asks for the government to build equity in the technology it uses, and not just rent it

In these times: revolutionary. Fifty years ago it would have been meh because we did indeed get a publicly-owned highway system. Could be as simple as requiring software to become open source some time after a purchase.

Sunday, 10. January 2021

Ben Werdmüller

Making open source work for everyone

The power of free and open source software comes down to how it is shared. Users can pick up and modify the source code, usually at no cost, as long as they adhere to the terms of its licenses, which range from permissive (do what you like) to more restrictive (if you make modifications, you've got to distribute them under the same license). The popularity of the model has led to a transformation in the way software is built; it's not an exaggeration to say that the current tech industry couldn't exist without it. Collaborative software drives the industry.

(If you're not familiar with the concept or its nuances, I wrote a history and guide to the underlying ideas, including how it relates to projects like Linux, a few years ago, which might help.)

In my work, I've generally veered towards permissive licenses. Elgg, my first open source project, was originally released under the GPL, and then subsequently dual-released under the more permissive MIT license. Known and its plugins were released under the Apache license. While GPL is a little more restrictive, both the MIT and Apache licenses say little more than, "this software is provided as-is".

If I was to start another open source project, I'd take a different approach and use a very restrictive license. For example, the GNU Affero General Public License (AGPL) requires that you make the source code of any modifications available even if they're just running on a server (i.e., even if you're not distributing the modified code in any other way). This means that if someone starts a web service with the code as a starting point, they must make the source code of that service available under the AGPL.

Then I'd dual-license it. If you want to use the software for free, that's great: you've just got to make sure that if you're using it to build a web service, the source code of your web service must be available for free, too. On the other hand, if you want to restrict access to your web service's source code because it forms the basis of a commercial venture, then you need to pay me for the commercial license. Everybody wins: free and open source communities can operate without commercial considerations, while I see an upside if my open source work is used in a commercial venture. The commercial license could include provisions to allow non-profits and educational institutions to use the software for free or at a low cost; the point is, it would be at my discretion.

I love free software. The utopian vision of the movement is truly empowering, and has empowered communities that would not ordinarily be able to tailor their own software platforms. But I see allowing commercial entities to take advantage of people who provide their work for the love of it as a bug. There's no reason in the world that a VC-funded business with millions of dollars under its belt should avoid paying the people whose work its company value integrally depends on. It's taken me a long time to come around to the idea, but restrictive licenses like the AGPL align everyone in the ecosystem and allow individual developers and well-funded startups alike to thrive.

More than that, it's a model that allows me to think I might, one day, dive head-first into free software at least one more time.


John Philpin : Lifestream

Clever : Sell Your Pre-Owned Subscriptions With Ease

”We have always supported diverse points of view being rep

”We have always supported diverse points of view being represented on the App Store, but there is no place on our platform for threats of violence and illegal activity. Parler has not taken adequate measures to address the proliferation of these threats to people’s safety. We have suspended Parler from the App Store until they resolve these issues.”

Apple


Simon Willison

Weeknotes: datasette-export-notebook, PyInstaller packaged Datasette, CBSAs

What a terrible week. I've found it hard to concentrate on anything substantial. In a mostly futile attempt to distract myself from doomscrolling I've mainly been building some experimental output plugins, fiddling with PyInstaller and messing around with shapefiles.

Packaged Datasette with PyInstaller

A long running goal for Datasette has been to make it as easy to install as possible - something that's not particularly straight-forward for applications written in Python, at least in comparison to toolchains like Rust, Go or Deno.

Back in November 2017 Raffaele Messuti suggested using PyInstaller for this. I revisited that issue while looking through open Datasette issues ordered by least recently updated and decided to try it out again - and it worked! Here's the resulting TIL, and I've attached a bundled datasette macOS binary file to the 0.53 release on GitHub.

There's one catch: the binary isn't signed, which means it shows security warnings that have to be worked around if you try to run it on macOS. I've started looking into the signing process - I'm going to need an Apple Developer account and to jump through a bunch of different hoops, but it looks like I should be able to get that working. Here's my issue for that: GitHub Actions workflow to build and sign macOS binary executables - it looks like gon is the missing automation piece I need.

One thing that really impressed me about PyInstaller is the size of the resulting file. On both macOS and Linux it was able to create a roughly 8MB file containing Datasette, all of its dependencies AND a working Python environment. It's pretty magic!
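For anyone curious what that packaging step looks like in practice, here's a minimal sketch of driving PyInstaller programmatically. The wrapper script name is hypothetical, and the real Datasette build (documented in the TIL) needs extra hidden imports and bundled data files; treat this as the general shape, not the exact recipe.

# pip install pyinstaller
# Minimal sketch: invoke PyInstaller from Python against a small entry-point script.
# "run_datasette.py" is a hypothetical wrapper (e.g. it calls Datasette's CLI);
# a real Datasette build also needs --hidden-import / --add-data options.
import PyInstaller.__main__

PyInstaller.__main__.run([
    "run_datasette.py",    # entry-point script to bundle
    "--onefile",           # produce a single self-contained executable
    "--name", "datasette"  # name of the resulting binary
])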

datasette-css-properties

I wrote this up in detail a couple of days ago: datasette-css-properties is an amusingly weird output plugin that turns the results of a SQL query into CSS custom property definitions which can then be used to style or insert content into the current page.

sqlite-utils 3.2

The big new feature in this release is cached table counts using triggers, which I described last week. Full release notes here.

I've opened an issue to take advantage of this optimization in Datasette itself.
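As a rough illustration of what the cached counts feature looks like from Python (method names follow my reading of the 3.2 release notes, so double-check the sqlite-utils docs):

# pip install "sqlite-utils>=3.2"
# Sketch of cached table counts: enable_counts() creates a _counts table plus
# insert/update/delete triggers that keep it up to date.
import sqlite_utils

db = sqlite_utils.Database("demo.db")
db["articles"].insert_all({"id": i, "title": f"Post {i}"} for i in range(1000))

db.enable_counts()         # add the _counts table and triggers for all tables
print(db.cached_counts())  # reads the cache instead of a full COUNT(*), e.g. {"articles": 1000}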

datasette-export-notebook

This is an idea I've been bouncing around for a while, and during a bout of attempted-coup-induced insomnia I decided to sketch out an initial version.

datasette-export-notebook is a plugin that adds an export-to-notebook option to any table or query.

This provides a page of documentation with copy-and-paste examples for loading data from that table or query into a Jupyter or Observable notebook.
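The Jupyter side of those copy-and-paste examples is essentially a pandas one-liner against Datasette's CSV export; something along these lines (the table URL here is just an illustrative Datasette demo table, and the exact parameters the plugin emits may differ):

# Roughly the kind of snippet the plugin generates for Jupyter.
import pandas as pd

# Any Datasette table or query has a .csv export; ?_size=max asks for all rows.
df = pd.read_csv("https://latest.datasette.io/fixtures/facetable.csv?_size=max")
df.head()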

Here's a live demo showing the current interface.

As often happens when building even simple plugins like this I identified some small improvements I can make to Datasette.

cbsa-datasette

Core-based statistical areas are a US government concept used for various statistical purposes. They are essentially metropolitan areas, based on central cities and the commuting area that they sit inside.

I built cbsa.datasettes.com this week to provide an API for looking up a CBSA based on a latitude and longitude point. Here's a location within San Francisco for example:

https://cbsa.datasettes.com/core/by_lat_lon?longitude=-122.51&latitude=37.78

This returns San Francisco-Oakland-Berkeley, CA. Add .json and &_shape=array to the above URL to get a JSON API version.
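Querying that from Python is a one-liner with requests; a small sketch using the .json and _shape=array form described above:

# Sketch: look up the CBSA for a latitude/longitude point via the JSON API.
import requests

resp = requests.get(
    "https://cbsa.datasettes.com/core/by_lat_lon.json",
    params={"longitude": -122.51, "latitude": 37.78, "_shape": "array"},
)
resp.raise_for_status()
for row in resp.json():  # _shape=array returns a plain list of row objects
    print(row)           # expected to name San Francisco-Oakland-Berkeley, CA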

The data comes from a shapefile published by the Bureau of Transportation Statistics. I'm using shapefile-to-sqlite to import it into a SpatiaLite database, then publishing it to Cloud Run using this GitHub Actions workflow. Full details in the README.

I built this mainly to act as a simple updated example of how to use Datasette and SpatiaLite to provide an API against data from a shapefile. I published a tutorial about doing this for timezones three years ago, but shapefile-to-sqlite makes it much easier.

Releases this week

datasette-export-notebook: 0.1.1 - 2021-01-09
Datasette plugin providing instructions for exporting data to Jupyter or Observable

datasette-css-properties: 0.2 - 2021-01-07
Experimental Datasette output plugin using CSS properties

sqlite-utils: 3.2 - 2021-01-03
Python CLI utility and library for manipulating SQLite databases

TIL this week

Packaging a Python app as a standalone binary with PyInstaller

Saturday, 09. January 2021

John Philpin : Lifestream

Nobody is being silenced. It’s controlling the ability to

Nobody is being silenced.

Controlling the ability to ‘turn it up to 11’ is what is happening.


Ben Werdmüller

Fractal communities vs the magical bullhorn

In her book Emergent Strategy, adrienne maree brown eloquently describes a model for decentralized leadership in a world of ever-changing emergent patterns. Heavily influenced by the philosophy laid out in Octavia Butler's Earthseed novels - God is change - it describes how the way we show up in the face of change, embodying the world we wish to manifest, can influence it for the better. It's a uniquely non-linear manifesto.

Our model for communities and change right now is intensely linear. Despite the democratizing promise of the internet, we have fallen back on a broadcast model of influencers and audiences: a small number of people create content and the rest of us consume it. Although, technically speaking, anyone can publish, the truth is that platforms assume we're here to listen - and they've been built with those assumptions in mind. Influencers broadcast; followers follow; platforms make money by facilitating the engagement. It's how Ellen Degeneres's selfie got millions of retweets, and how Donald Trump parlayed his Twitter account into a Presidency. (See if you can do the same.)

A broadcast model creates a direct line from anyone with power to everyone. Theoretically, that's a beautiful, democratizing thing; in practice, it turns the protocols and assumptions underlying the broadcast medium itself into the ultimate influencer. Everyone who is trying to reach an audience falls into patterns that they know will improve their reach; they game the algorithms, which are really reflections of the values and ideas of the teams which created them. Influencers like Donald Trump game the minds of engineers and product managers in San Francisco in order to game the world.

Ideas are at their best when filtered through communities and movements that each have their own values and mechanics. Before social media, this is how it worked. Swirling, emergent patterns evolve from the interdynamics of these communities. As opposed to social media's linear broadcast model, this intercommunity model is more like a fractal: the interrelations between tiny communities form larger communities, which in turn interrelate as larger communities of people, and so on. There's no magical bullhorn that lets you skip ahead and reach the world: you've got to influence your friends and family, who then reach other friends and families, who then reach their wider local communities. Each of these communities has a different set of norms and values; organic, internal rules and dynamics that govern them. In the process, the people in these communities at each level become influencers in themselves, carrying on the message. It's harder work, but more profoundly impactful.

This is a healthier model for the internet, too. Rather than community platforms that tend towards global scale, we need to build global infrastructure that can support tiny communities that work in different ways. Ideas can still spread; links still get shared; memes are made. But they do so organically, in a more equal way that prioritizes the decentralized, community-driven nature of human society, rather than one that seeks to make us all into followers of a handful of global influencers. We need to create a reflection of adrienne maree brown's view of the world, not Donald Trump's.

In doing so, it's important to understand that "local" doesn't mean "geographically local" on the internet. It can, but doesn't have to. "Local" can also mean focused communities of interest of all different kinds. Everybody's experience of the internet then becomes a unique-to-them set of overlapping communities on different platforms. My argument is absolutely not that the internet should not be global infrastructure, and that we shouldn't be able to share ideas with people from everywhere: I believe that's a crucial part of human progress. My argument is that the internet should be more fragmented and that holding our conversations, making our connections, and discovering our knowledge from a very small handful of platforms with a limited set of models for community governance is a vulnerability.

Furthermore, I believe it's inevitable. As we've seen this week (as well as all the weeks leading up to now), it's not tenable for companies like Twitter and Facebook to be the owners of the global discourse. As much as we shouldn't want that, and lawmakers are galvanizing around the problems that have arisen, I don't think they want that, either. In fact, the only people who aren't aligned with this need are the influencers who want to have the world at their disposal.

So what do these new platforms and communities look like? The truth is, there's everything to play for.


John Philpin : Lifestream

It’s not news … I just wanted to record the image for poster

It’s not news … I just wanted to record the image for posterity and congratulate Jack and the team for the speed with which they acted … it only took 4 years!


Aaron Parecki

3D Printed Modular Cage for Blackmagic Bidirectional SDI/HDMI Converters

This modular cage allows you to stack Blackmagic Bidirectional SDI/HDMI 3G Converters on a desk or behind other gear.

The design is composed of four parts: the main shelf, the top shelf, a top plate that sits on top of the top shelf, and an optional angled base. The angled base plate lets you mount the whole stack on top of rack gear that's mounted at a 9° angle, such as the StarTech 8U desktop rack. You can print as many of the main shelves as you need to stack.

The shelves are designed so that the converters can slip in from one side with enough of a gap on top to be able to insert and remove them even after a stack is assembled.

If you'd like, you can glue the stack together to make a solid structure. Otherwise the pegs are long enough that the stack is reasonably stable even without glue.

Files

bmd_micro3g_shelf.stl
bmd_micro3g_top_shelf.stl
bmd_micro3g_top.stl
bmd_micro3g_base.stl

These designs are licensed under the Creative Commons Attribution license.

Did you make a print of this? Tweet a photo and include the link to this blog post to be featured in the comments below!

Friday, 08. January 2021

Ben Werdmüller

The whitewash of the culpable

I'm still processing the events of this week: the obvious buffoonery of the Q mob contrasts starkly with reports of an intention to hang the Vice President, cable ties brought into the Capitol to detain hostages, and the obvious white supremacist flags that were flown both inside and out. One popular T-shirt worn on Wednesday read "Camp Auschwitz: work brings freedom"; another read 6MWE, for "6 Million [Jews] Wasn't Enough".

This riot was unmistakably instigated by President Trump at an address immediately prior; he later told the insurrectionists: "We love you. You're very special. Go home" (an echo of his infamous call for the Proud Boys to "stand back and stand by", and his declaration that a white supremacist rally in Charlottesville had "very fine people on both sides"). Since then, we've seen a number of resignations from inside his government, which at this late stage could be seen as just taking an extra week's vacation. Twitter forced him to take down some posts, and Facebook banned him indefinitely. Apple is about to ban the right-wing app Parler unless it adds a moderation policy within 24 hours.

It's too little, far too late. It's not brave to quit an administration after spending four years inside it perpetuating hate (particularly when it might just be a way to avoid having to vote on invoking the 25th Amendment). It's not brave to ban a fascist government leader from your media platform following a high-profile event after allowing him to incite hatred for at least as long. It's not taking a stand to suddenly ban an app heavily used by white supremacists when it's been used to organize hate groups for its entire existence. All of these things should be done, but they should have been done long ago.

I don't believe it's fair to assume that all of these technology companies only just realized that these organizations were dangerous. Instead, I think it's just that it became untenable to tolerate them. The thing about hate groups and hate-filled conspiracy theories like QAnon is that they're very highly engaged: they use platforms for hours and they click on ads. Then-CEO of CBS Les Moonves famously said about Trump before the 2016 election: "it may not be good for America, but it's damn good for CBS". The same is true for every tech company that subsists on ad engagement dollars. Not only did targeted advertising help Trump win in 2016, but every targeted ad platform and every advertising-powered TV network profited from the hatred and division that Trump incited. Just this week, the former CEO of ad-tech firm Steelhouse called the Capitol insurrection "a rocket ship" for Twitter and Facebook's ad businesses. They were going to hang the Vice President! Such engagement!

So, yes: leave the Trump administration, by all means. Ban him from your platforms. Remove the apps that insurrectionists used to organize the storming of the Capitol (and are reportedly using to organize another event around the inauguration). But you don't win brownie points for that. You don't get to walk away with your head held high. You put your own profit over the health of the country, the health of the people who have died as a direct result of the Trump administration's policies, and the cause of global democracy. You shouldn't get to sleep soundly at night. You're culpable. And as much as you might try and wash your hands of it in the final weeks of this nightmare, you deserve to have it follow you for the rest of your lives.


Bill Wendel's Real Estate Cafe

Open Letter: Processing assault on the American Dream


Open letter to my real estate peers. Inman News, a well-respected presence in the real estate ecosystem, is acknowledging what happened yesterday but trying…

The post Open Letter: Processing assault on the American Dream first appeared on Real Estate Cafe.


John Philpin : Lifestream


Hell Just Froze Over ”Even Stephen Miller told one pers


Hell Just Froze Over

”Even Stephen Miller told one person close to the White House that it was a terrible day.”

Vanity Fair


This is how to fight back. Someone should make a list of


This is how to fight back.

Someone should make a list of the 123 House Republicans who signed onto the Texas lawsuit that challenged the election based on frivolous conspiracies

AND

the over 50% of Republican Senators who invoked conspiracy theories to challenge the election results.

Against each name, detail who donates to their campaigns and pressure those companies … because they will primarily be companies … to drop their support or face the consequences.

It’s worked before. It will work again … to borrow a phrase from the SV tech bros … at scale.


reb00ted

Bitcoin is above USD 40,000

I did not think that would happen. At least not for many years.



California has a public data broker registry

Maintained by the state’s Attorney General.


Thursday, 07. January 2021

Simon Willison

APIs from CSS without JavaScript: the datasette-css-properties plugin


I built a new Datasette plugin called datasette-css-properties. It's very, very weird - it adds a .css output extension to Datasette which outputs the result of a SQL query using CSS custom property format. This means you can display the results of database queries using pure CSS and HTML, no JavaScript required!

I was inspired by Custom Properties as State, published by Chris Coyier earlier this week. Chris points out that since CSS custom properties can be defined by an external stylesheet, a crafty API could generate a stylesheet with dynamic properties that could then be displayed on an otherwise static page.

This is a weird idea. Datasette's plugins system is pretty much designed for weird ideas - my favourite thing about having plugins is that I can try out things like this without any risk of damaging the integrity of the core project.

So I built it! Here are some examples:

roadside_attractions is a table that ships as part of Datasette's "fixtures" test database, which I write unit tests against and use for quick demos.

The URL of that table within Datasette is /fixtures/roadside_attractions. To get the first row in the table back as CSS properties, simply add a .css extension:

/fixtures/roadside_attractions.css returns this:

:root {
  --pk: '1';
  --name: 'The Mystery Spot';
  --address: '465 Mystery Spot Road, Santa Cruz, CA 95065';
  --latitude: '37.0167';
  --longitude: '-122.0024';
}

You can make use of these properties in an HTML document like so:

<link rel="stylesheet" href="https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css"> <style> .attraction-name:after { content: var(--name); } .attraction-address:after { content: var(--address); } </style> <p class="attraction-name">Attraction name: </p> <p class="attraction-address">Address: </p>

Here that is on CodePen. It outputs this:

Attraction name: The Mystery Spot

Address: 465 Mystery Spot Road, Santa Cruz, CA 95065

Apparently modern screen readers will read these values, so they're at least somewhat accessible. Sadly users won't be able to copy and paste their values.

Let's try something more fun: a stylesheet that changes colour based on the time of the day.

I'm in San Francisco, which is currently 8 hours off UTC. So this SQL query gives me the current hour of the day in my timezone:

SELECT strftime('%H', 'now') - 8

I'm going to define the following sequence of colours:

Midnight to 4am: black
4am to 8am: grey
8am to 4pm: yellow
4pm to 6pm: orange
6pm to midnight: black again

Here's a SQL query for that, using the CASE expression:

SELECT
  CASE
    WHEN strftime('%H', 'now') - 8 BETWEEN 4 AND 7 THEN 'grey'
    WHEN strftime('%H', 'now') - 8 BETWEEN 8 AND 15 THEN 'yellow'
    WHEN strftime('%H', 'now') - 8 BETWEEN 16 AND 18 THEN 'orange'
    ELSE 'black'
  END as [time-of-day-color]

Execute that here, then add the .css extension and you get this:

:root { --time-of-day-color: 'yellow'; }

This isn't quite right. The yellow value is wrapped in single quotes - but that means it won't work as a colour if used like this:

<style>
nav { background-color: var(--time-of-day-color); }
</style>
<nav>This is the navigation</nav>

To fix this, datasette-css-properties supports a ?_raw= querystring argument for specifying that a specific named column should not be quoted, but should be returned as the exact value that came out of the database.

So we add ?_raw=time-of-day-color to the URL to get this:

:root { --time-of-day-color: yellow; }

(I'm a little nervous about the _raw= feature. It feels like it could be a security hole, potentially as an XSS vector. I have an open issue about that and I'd love to get some feedback - I'm serving the page with the X-Content-Type-Options: nosniff HTTP header which I think should keep things secure but I'm worried there may be attack patterns that I don't know about.)

Let's take a moment to admire the full HTML document for this demo:

<link rel="stylesheet" href="https://latest-with-plugins.datasette.io/fixtures.css?sql=SELECT%0D%0A++CASE%0D%0A++++WHEN+strftime(%27%25H%27,+%27now%27)+-+8+BETWEEN+4%0D%0A++++AND+7+THEN+%27grey%27%0D%0A++++WHEN+strftime(%27%25H%27,+%27now%27)+-+8+BETWEEN+8%0D%0A++++AND+15+THEN+%27yellow%27%0D%0A++++WHEN+strftime(%27%25H%27,+%27now%27)+-+8+BETWEEN+16%0D%0A++++AND+18+THEN+%27orange%27%0D%0A++++ELSE+%27black%27%0D%0A++END+as+%5Btime-of-day-color%5D&_raw=time-of-day-color"> <style> nav { background-color: var(--time-of-day-color); } </style> <nav>This is the navigation</nav>

That's a SQL query URL-encoded into the querystring for a stylesheet, loaded in a <link> element and used to style an element on a page. It's calling and reacting to an API with not a line of JavaScript required!

Is this plugin useful for anyone? Probably not, but it's a really fun idea, and it's a great illustration of how having plugins dramatically reduces the friction against trying things like this out.
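
For anyone curious how a plugin like this wires in, here is a minimal sketch of a Datasette output renderer using the register_output_renderer plugin hook. It is illustrative only - the function names are mine, and the real datasette-css-properties handles escaping, the ?_raw= option and other edge cases that this toy version ignores:

from datasette import hookimpl


def render_css(columns, rows):
    # Emit the first row of the result as CSS custom properties.
    # A production plugin would escape values properly.
    if not rows:
        return {"body": ":root {}", "content_type": "text/css; charset=utf-8"}
    first = rows[0]
    props = "\n".join(
        "  --{}: '{}';".format(column, first[column]) for column in columns
    )
    return {
        "body": ":root {\n" + props + "\n}",
        "content_type": "text/css; charset=utf-8",
    }


@hookimpl
def register_output_renderer(datasette):
    # Registers the .css extension so /database/table.css routes to render_css
    return {"extension": "css", "render": render_css}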


datasette-css-properties


datasette-css-properties

My new Datasette plugin defines a ".css" output format which returns the data from the query as a valid CSS stylesheet defining custom properties for each returned column. This means you can build a page using just HTML and CSS that consumes API data from Datasette, no JavaScript required! Whether this is a good idea or not is left as an exercise for the reader.

Via @simonw


Custom Properties as State


Custom Properties as State

Fascinating thought experiment by Chris Coyier: since CSS custom properties can be defined in an external stylesheet, we can build APIs that return stylesheets defining dynamically server-side generated CSS values for things like time-of-day colour schemes or even strings that can be inserted using ::after { content: var(--my-property) }.

This gave me a very eccentric idea for a Datasette plugin...


Ben Werdmüller

42.


It's my birthday. I was originally going to write one of those reflective pieces along the lines of "here's 42 things I've learned" or "version 42.0" or some Douglas Adams reference, but given everything that's been going on in the world, and my mother's decline in the next room, I just can't.

I believe that the Trump presidency has been a dying gasp of the 20th century. I'm really hopeful that the events of this month are the dying gasp.

If that turns out to be true, there's a lot to look forward to. If not, then there's a lot to be worried about. As of right now, the future is in the balance.

Wednesday, 06. January 2021

Simon Willison

Quoting Joel Goldberg


When you know something it is almost impossible to imagine what it is like not to know that thing. This is the curse of knowledge, and it is the root of countless misunderstandings and inefficiencies. Smart people who are comfortable with complexity can be especially prone to it!

If you don’t guard against the curse of knowledge it has the potential to obfuscate all forms of communication, including code. The more specialized your work, the greater the risk that you will communicate in ways that are incomprehensible to the uninitiated.

Joel Goldberg


brumm.af/shadows


brumm.af/shadows

I did not know this trick: by defining multiple box-shadow values as a comma separated list you can create much more finely tuned shadow effects. This tool by Philipp Brumm provides a very smart UI for designing shadows.

Via @joshwcomeau

Tuesday, 05. January 2021

Simon Willison

DALL·E: Creating Images from Text

DALL·E: Creating Images from Text "DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs.". The examples in this paper are astonishing - "an illustration of a baby daikon radish in a tutu walking a dog" generates exactly that. Via Hacker News

DALL·E: Creating Images from Text

"DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs.". The examples in this paper are astonishing - "an illustration of a baby daikon radish in a tutu walking a dog" generates exactly that.

Via Hacker News


reb00ted

Singaporean COVID-19 contact tracing data not private after all

The police now claim the right to access the data for “authorized purposes”, with no clear process by which to limit those “authorized purposes” (ZDNet).



MyDigitalFootprint

Peak Paradox

In a world full of human bias, exploring paradox might help us explain our differences. Kaliya (Identity Woman) and I wrote “Humans want principles, society demands rules and businesses want to manage risk, can we reconcile the differences?” I have been noodling on that framework of linking purpose and rules with a few friends. This article explores how to look for a paradox within our known cognitive biases and how to identify and manage differences.

----

I follow the excellent work of my friend Rory Sutherland and of Nir Eyal (Nir & Far), who are both leading thinkers in behavioural science and economics, along with the great Daniel Kahneman. I reflected, following a call with the most read man on the planet, Robbie Stamp, CEO of BIOSS (a company who can assess capabilities for complex decision making), about how to frame bias and conflicts. We are aware that there are over 180 cognitive biases of human behaviour which, because of differences between us, create unique tensions, alignments, conflicts and paradoxes.

The 180 cognitive biases of human behaviour create unique tension, alignment, conflict and paradox.

The famous infographic below from Buster Benson on Medium https://medium.com/@buster has become foundational in presenting the array of human cognitive bias. Buster is well worth following, and I am looking forward to reading his new book "Why Are We Yelling? The Art of Productive Disagreement"

On another call with my friend, the insanely clever identity lawyer and polymath Scott David, we were noodling on how we saw paradoxes within behavioural science, primarily when reflecting on the very long list of human biases. I love his quote “we have to find the paradox, or we are probably in a model.” On a similar call with Viktor Mirovic (CEO of KeenCorp - a company who identifies employee problems in real-time), we explored the gaps between our own bias and purpose and the biases/purpose that our companies have. Viktor and I reflected on what happens when there is a delta between you and the company, and the effects on team and individual performance. As you can imagine, this article and thinking are built on “the shoulders of giants” and would not have come to be without those conversations and many others.

The drawing below is to explore how to frame paradox within our known biases (maybe beliefs). We will unpack the diagram, examine how we manage the differences, and recognise that mostly we don't, which leads to harmful stress.

On discovering Peak Paradox

The four outer extremes are peaks in their own right; at these individual peaks, you can only hold one single view at the expense of all others.  Therefore there is no conflict at a peak, and no compromise as this is the only view that can be held.  

The axes are set up so that the X-axis (horizontal) represents the conflict between our human purpose and a commercial purpose, and the Y-axis (vertical) the conflict between the individual and everyone.

Peak Individual Purpose. To the exclusion of anything else, you are only interested in yourself: selfishness at the extreme. You believe that you are sovereign (not having to ask anyone for permission or forgiveness), your voice and everything you say matters, and everyone should agree. You are all-powerful and can do whatever you want and have the freedom and agency to do it.

Peak Work Purpose. Work in this context is commercial or economic. To the exclusion of anything else, the only reason a company exists is to deliver as much value as possible to the shareholders. Employees, customers and the environment do not matter beyond their potential for exploitation. The purpose is to be the biggest and most efficient beast on the planet, able to deliver enormous returns; Shareholder Primacy at its most pure. Simple, straightforward, no conflict and no compromise - even to the point that rewarding the staff beyond fair would be a compromise. Compliance is met with the minimal standard, ensuring that nothing is wasted.

Peak Society Purpose. To the exclusion of anything else, we have to deliver and ensure there is no suffering and poverty for any living thing. Humans must have equal education, health and safety. There must be total transparency and equality. Everything is equally shared, and no-one has more power, agency or influence than anyone else.

Peak Human Purpose. To the exclusion of anything else, we are here to survive as long as possible and escape death, which we do by reproducing as much as we can with the broadest community we can. We also have to adapt as fast as possible. We have to meet our chemistry requirements to stay alive for as long as possible, to adapt and reproduce at the expense of everything else. Whilst all the peak purposes might be controversial (even to myself), saying the purity of human purpose is chemistry/biology might not go down very well. However, this is a model for framing thinking, so please go with it, as it needs to be pure, and every other human purpose has conflicts with someone.

These extremes as peaks articulate that there is no conflict, no compromise, no tension - they are pure when at the extreme. It is not that we have to agree with them, but that we recognise they can exist. My gut says right-wing politics (given the interpretation of capitalism today and not its original meaning) follows the top edge between peak commercial purpose and peak individual purpose. Depending on the specific view, individuals can be somewhere along the line, whereas parties are more complex in their overall position. Left-wing political views (today's interpretation, more socialist) follow the bottom right edge between peak commercial purpose and peak society. Again individual views may hold the line, but parties are trying to find the right balance of majority votes, commercial activity, tax, redistribution and a fairer society. Applying the same thinking, cults are likely to be positioned along the top left boundary between peak human purpose and peak individual purpose, whereas more fervent religious movements will tend towards the lower-left boundary between peak human purpose and peak society. Like political parties, world religions need a mass following for a voice and are therefore positioned with more paradoxes, which they solve with paradoxes.

Peak Paradox. The melting pot that is the middle of all of the peaks, the place where you are trying to rationalise all the extreme purposes into one acceptable position for everyone, but there is no resolution without compromise, and any compromise will suit no-one. Peak Paradox is likely to be unsustainable due to the conflicts and compromises required, or may itself be a paradox in so much as, when there, it feels like nothing is there - like the eye of the storm, where there is complete calm. It feels as if many great thinkers and philosophers may try to find rest or stillness in this calm at peak paradox. There is a battle to get into that place of calm, fighting the storms of opinions, and if you lose that moment of mindfulness, it is straight back into the storm. The unstable nature of standing on the point of a needle. This said:

Just because we may agree on the same peak purpose, that does not mean we can also agree on how to go about achieving or maintaining it.

Different peak purposes can have the same principles and values.  You come from different peaks towards a livable compromise; however, as individuals, you can have the same principles and values, making the acceptance of difference more acceptable. 

If there is no paradox or you cannot find one, you are at a boundary edge, where there is the greatest order, or at an extreme peak view. 

At peak paradox, there is the highest disorder, in terms of a variety of views.

It is evident that our long list of personality tests exists to identify where you naturally sit right now. You will change and adapt, but there is likely to be a natural affinity that tends towards one or more peaks.

There are over 180 recognised cognitive biases; from this diagram, we can unpack that you are unlikely to have them all at the same time, but rather a subset of them depending on where you locate yourself.

What is the relation between bias and paradox?

When there are no biases, surely we have overcome all our objections, and we can deal with everyone’s unique and individual views, and we must be at “Peak Paradox.”

When there is no paradox, surely we are at an extreme view where there are no tensions, no conflicts, and we have no biases that can distract us from accepting every position equally.

Most of us do not see the paradox in front of us. Still, we have many biases, suggesting that Peak Bias (the greatest number of biases), where we reject most ideas and accept only a few, will occur early in a distribution.

As we can deal with more complex compromises, we can see more paradoxes in any position but can cope with the conflicts and tensions that arise, suggesting a long tail to peak paradox. 

4 case studies

I wrote about how to uncover your purpose here, but I can now see that personal purpose maps differently to paradox and bias. Mapping a few personal or individual purposes. 

Invent and Create. “Necessity is the mother of all invention” and “Creativity is intelligence having fun” sum up rather well that part of our human make-up is to find ways to create or invent something that solves a problem we see or understand. Our personal purpose is to invent and create, as we see what everyone else sees but can think what no-one else has thought. Irrespective of such a personal goal, it maps to all of the blue area on the Peak Paradox diagram, as every paradox creates a problem that needs to be solved. Being creative as a purpose does not mean you will find anyone else on the same journey.

Creating Impact. Some individuals desire to have more of an impact whilst alive, and some, because of what they do, have a lasting effect that changes our human journey, some good and some not so. This is the point about creating impact as a purpose; it is like all items that humans make: they can be used as tools for good or weapons for bad. Creating impact creates a paradox of how to create something that cannot be used for harm, but often we create something without seeing the harm. Our individual purpose is itself in conflict with the outcome we want. At every point under the blue area on the Peak Paradox chart, creating impact for good or ill is possible.

Better ancestors.  What changes when you consider governance for the next 1000 years is a post about how we have to become better stewards and ancestors.  We have to be more considerate if we want others to walk our path and not burn the earth.  As a thinking piece, I focussed on the paradox of high commercial pressure, being between citizens, giving our children life and enabling more individuals to have agency.  Almost peak paradox. Perhaps this is the case with ESG as a conceptual issue, we have to compromise, and we are not in agreement on what we need to compromise on?


Being Useful. There appears to be a human need that presents as wanting to be loved, wanted or useful. Over time we go through these different phases on a cyclic basis throughout life. What we conceive as “being useful” is profoundly personal and therefore could be anywhere in the central blue area on the Peak Paradox drawing. Being useful today can mean you want to do well in assisting where you are employed to achieve its goals or mission, and thereby achieve your personal goal of more agency. Equally, being useful can mean creating less poverty and suffering in the world. Anyone who describes themselves with a desire to be useful is unlikely to find a massive community who have the same idea of what that means.

Personal purposes are both aligned to all peaks and also in conflict with all the peaks; this is why we should see the paradoxes in our own life's purpose and find we cannot rest on one. We have to settle for a compromise that we find acceptable and that does not bring conflict, tension or anxiety.

Towards better performance, strategy and judgement. 

Teams and Culture. We have one culture; this is how we do it. These are our rules; these are our processes. We have one team, and all our thinking is aligned. After reading this, you might be as sceptical about such thoughts as I am. In my article “Humans want principles, society demands rules and businesses want to manage risk, can we reconcile the differences?” I did not connect the dots between the delta or gap between where the company has decided to operate, with its paradoxes, compromises, tensions and conflicts, and how these would align to the individuals inside the organisation. I am sure there is a link here to transformation issues as well. KeenCorp is looking at measuring this, something I am going to watch.

Philanthropy is a private initiative from individuals. Interestingly, Wikipedia defines philanthropy as being for the public good, focusing on the quality of life. However, often we see that the person who is giving has an idea of public good and priority that is not aligned to mine, which creates friction. Philanthropy has an interesting dynamic about how much it is for PR and how much it is indeed the individual’s purpose. Might have to analyse the Bill and Melinda Gates Foundation and see what it says about what paradoxes they are prepared to live with, to help understand the criticism and support offered.

Investors, directors and the company. As a group, you can imagine that they are one of the closest to a peak purpose. Equity investors believe, one would hope, in the purity of shareholder primacy and would probably be outright supporters of this single focus. However, since we have lived that view, we are now much more paradoxical. ESG and stewardship codes mean that the purity has become a complex mix of different personal compromises, which currently are not disclosed and may be in conflict with the culture or purpose of the company that they invest in. The relationship between investors and Directors is also changing. It appears that the capital providers are willing to embrace ESG and a watered-down version of peak Shareholder Primacy. However, Directors' KPIs, remuneration committees and self-interested processes might be creating a level of friction to noble ESG change that we did not anticipate. Organisational wellness should also now be a new measure and reported on, but how?

Governance. Governance of old did not have to embrace compromise, conflict or tensions. Investors and founders understood the simplicity of a very pure set of drivers. The founding team had largely sorted out the paradox and how they could work together; indeed, anyone who could not was soon off. Governance was then ensuring the right NorthStar, the suitable vessel and the right team. This model is no longer good enough. Suddenly, the directors are having to face up to conflicting peak purposes, pulled in different directions by directors who have their own views and teams who also have a significant voice. Added to this is the dependence on the ecosystem, who themselves will reach a different compromise, which asks fundamental questions about your interaction with them.


Take away

If there is no paradox, compromise or conflict, you are in a model pretending it is life.

If it is too good to be true, you are probably at an extreme as there is no tension, conflict or paradox. 

Are we in a place where our compromises are good for us? Can we perform?

We can identify stress that is probably harmful as we have compromised to someone else’s force, dominance, position and have ended up in a place where we do not naturally belong.

Is there an alignment, understanding and communication between our compromises and those around us in our team? Can we see and talk about the paradoxes in the team's decisions?

Neuro, race and gender diversity are critical for success, so understanding our natural positions and compromises makes sense. Knowing the deltas within our team is crucial.

Being able to talk to and understand all positions means we will be in a less divided and divisional world.


Next questions for me

Does this model allow me to plot products and companies to determine alignment? 

Ethics; is it our ability to deal with compromise and conflict and find a position we find acceptable?


@tonyfish  Jan 2021



reb00ted

Roaring 20s?

Seeing another prediction. I’m having some difficulties envisioning this for anybody other than, say, the top 10% of the population in countries such as the US. Family balance sheets are wiped out across the board. How can you roar?



Simon Willison

Quoting Jacob Kaplan-Moss


Generally, product-aligned teams deliver better products more rapidly. Again, Conway’s Law is inescapable; if delivering a new feature requires several teams to coordinate, you’ll struggle compared to an org where a single team can execute on a new feature.

Jacob Kaplan-Moss


MyDigitalFootprint

Ethical Fading and Moral Disengagement


Brilliant video explaining Ethical Fading and Moral Disengagement
Source: https://ethicsunwrapped.utexas.edu/video/ethical-fading

Simon Willison

Everything You Always Wanted To Know About GitHub (But Were Afraid To Ask)


Everything You Always Wanted To Know About GitHub (But Were Afraid To Ask)

ClickHouse by Yandex is an open source column-oriented data warehouse, designed to run analytical queries against TBs of data. They've loaded the full GitHub Archive of events since 2011 into a public instance, which is a great way of both exploring GitHub activity and trying out ClickHouse. Here's a query I just ran that shows number of watch events per year, for example:

SELECT toYear(created_at) as yyyy, count() FROM github_events WHERE event_type = 'WatchEvent' group by yyyy

Via A Hacker News comment

Monday, 04. January 2021

reb00ted

Making a list of data tracked about us on-line

Clearly, I should have bought more stickies.



Simon Willison

hooks-in-a-nutshell.js


hooks-in-a-nutshell.js

Neat, heavily annotated implementation of React-style hooks in pure JavaScript, really useful for understanding how they work.

Via Andrea Giammarchi


Tim Bouma's Blog

The Digital Identity Standards To Bet On In 2021

Photo by Edge2Edge Media on Unsplash

Author’s note: This is the sole opinion of the author and may be revised at any time. The views and positions expressed do not necessarily reflect that of the author’s employer nor any involved organizations, committees, or working groups.

If someone were to ask me: “What are the standards you are betting on for 2021?”, this would be my answer:

There are hundreds of ‘digital identity’ standards out there. I have winnowed down the list to three — two technical standards and one non-technical standard:

W3C Decentralized Identifiers (DIDs) v1.0 for a new type of identifier that enables verifiable, decentralized digital identity. A DID identifies any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) that the controller of the DID decides that it identifies.

W3C Verifiable Credentials Data Model 1.0, a standardized specification that provides a mechanism to express credentials on the Web in a way that is cryptographically secure, privacy-respecting, and machine-verifiable.

CAN/CIOSC 103–1:2020 Digital Trust And Identity — Part 1, which specifies minimum requirements and a set of controls for creating and maintaining trust in digital systems and services that, as part of an organization’s mandate, assert and or consume identity and credentials in data pertaining to people and organizations.

Admittedly, I am writing this for the Canadian context (as the third choice is Canadian-only, so insert your own national or international standard here), but the main reasons I have chosen these three is because they represent a new way forward to develop a digital ecosystem that is open, inclusive, and balanced in favour towards the individual.

I realize that there are many more standards at play, but it is my belief that it is these three that will enable trusted digital identity across many ecosystems — across industries and across political boundaries.
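
To make the second of these a little more concrete, here is a minimal, illustrative credential shape following the W3C Verifiable Credentials Data Model. Every identifier and value below is a hypothetical placeholder, and a real credential would also carry a cryptographic proof:

# All identifiers and values below are hypothetical placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer123",        # the issuer, identified by a DID
    "issuanceDate": "2021-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",        # the subject, also a DID
        "alumniOf": "Example University",     # the claim being made
    },
    # A verifiable credential additionally includes a "proof" block (or is
    # serialized as a JWT) so it can be machine-verified; omitted here.
}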

That’s my start for 2021!

Sunday, 03. January 2021

reb00ted

When Uber spent $100 million on fraudulent ads

Rather incredible story. Also highly interesting to see how aggressive, high-growth companies look at advertising and conversion.



Simon Willison

sqlite-utils 3.2


sqlite-utils 3.2

As discussed in my weeknotes yesterday, this is the release of sqlite-utils that adds the new "cached table counts via triggers" mechanism.

Via @simonw


Jon Udell

Why public phones still exist


My superpower has always been finding new uses for old tech. In the late 90s I dusted off the venerable NNTP server, which had been the backbone of the Usenet, and turned it into my team’s Slack. In the late 2000s I used iCalendar to make citywide event calendars. In the late 2010s I went deep into SQL.

It’s always intensely pragmatic. But also, I can’t deny, whimsical.

In that spirit, I offer you the public pay phone at the Pinnacles Visitor Center. I stayed in that campground on a road trip just before the election. Given the tense political and epidemiological situation, I’d promised to message home regularly. There was no cell service in the park so I headed over to the office. It was closed, so I sat on the bench and connected to their WiFi. Or tried to. You could connect, sometimes, but you couldn’t move any data. The router was clearly in need of a reboot.

The only option left was the public phone. I can’t remember the last time I used one. Most people alive today have, perhaps, never used one. But there it was, so I gave it a shot.

Once upon a time, you could pick up the handset, dial 0 for operator, and place a so-called collect (charge-reversed) call. Now dialing 0 gets you nowhere.

The instructions taped to the phone (in the 90s I’m guessing) say you can call an 800 number, or use a calling card. I remember calling cards, I had one once. Not a thing lately.

And then there was this: “Dial 611 for help.”

Me: 611

611: Hello, this is Steve.

Me: I’m at the Pinnacles Visitor Center trying to send a message.

Steve: Use the WiFi.

Me: I can’t, it’s broken.

Steve: Huh, that’s interesting. Let me see if I can reboot the router.

And he did. So there you have it. The public phone still provides a valuable service. Its mission has evolved over the years. Nowadays, it exists to summon Steve the IT guy who can fix the WiFi by turning it off and on again.

Works like a charm!


Simon Willison

Weeknotes: A flurry of not-quite-finished features


My Christmas present to myself this year was to allow myself to spend a week working on stuff I found interesting, rather than sticking to the most important things. This may have been a mistake: it's left me with a flurry of interesting but not-quite-finished features.

Prettier for Datasette

A couple of years ago I decided to adopt Black, an opinionated code formatter, for all of my Python projects.

This proved to be a much, much bigger productivity boost than I had expected.

It turns out I had been spending a non-trivial portion of my coding brain cycles thinking about code formatting. I was really stressing over details like how data structures should be indented, or how best to hit an 80 character line length limit.

Letting go and outsourcing all of that nit-picking to Black meant I didn't have to spend any time thinking about that stuff, at all. It was so liberating! It even changed the way I write unit tests, as I described in How to cheat at unit tests with pytest and Black.

I've been hoping to adopt something similar for JavaScript for ages. The leading contender in that space is Prettier, but I've held back because I didn't like the aesthetics of its handling of chained method calls.

That's a pretty dumb reason to lose out on such a big productivity boost, so I've decided to forget about that issue entirely.

Datasette doesn't have much JavaScript at the moment, but that's likely to change in the future. So I've applied Prettier to the table.js file and configured a GitHub action that will fail if Prettier hasn't been applied - I wrote that up as a TIL.

JavaScript plugins for Datasette

This is why I've been thinking about JavaScript: I've been pushing forward on the open issue to provide a JavaScript plugin mechanism that Datasette plugins can hook into.

Datasette's Python plugins use pluggy, and I absolutely love it - it's a powerful, simple and elegant way to define plugins and have them cooperate with each other.

Pluggy works by defining "hooks" and allowing plugins to register to respond to those hooks. When a hook is called by Datasette's core code each plugin gets to return a value, and the collected values from all of them are made available as a Python list.
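
As a rough illustration of that pattern (a standalone sketch, not Datasette's actual hook definitions), here is pluggy collecting one return value per registered plugin into a list:

import pluggy

hookspec = pluggy.HookspecMarker("demo")
hookimpl = pluggy.HookimplMarker("demo")


class Spec:
    @hookspec
    def menu_items(self, table):
        """Hook: plugins return a menu item for the given table."""


class ExportPlugin:
    @hookimpl
    def menu_items(self, table):
        return "Export {}".format(table)


class ChartPlugin:
    @hookimpl
    def menu_items(self, table):
        return "Chart {}".format(table)


pm = pluggy.PluginManager("demo")
pm.add_hookspecs(Spec)
pm.register(ExportPlugin())
pm.register(ChartPlugin())

# Each plugin's return value is collected into a list
# (most recently registered plugins are called first)
print(pm.hook.menu_items(table="plants"))
# ['Chart plants', 'Export plants']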

I want the same thing for JavaScript. As an example, consider the "column actions" menu activated by the cog icon on each column on the table page.

I'd like plugins written in JavaScript to be able to return additional menu items to be shown in that menu - similar to how Python plugins can extend the existing table actions menu using the menu_actions hook.

I've been fleshing out ideas for this in issue 693, and I'm getting close to something that I'm happy with.

A big concern is loading times. I'd like to keep Datasette pages loading as quickly as possible, so I don't want to add a large chunk of JavaScript to each page to support plugins - and potentially have that block the page load until the script has been fetched and parsed.

But... if a JavaScript plugin is going to register itself with the plugin system, it needs to be able to call an API function to register itself! So at least some plugin JavaScript has to be made available as soon as possible in the lifecycle of the page.

I've always admired how Google Analytics deals with this problem. They define a tiny ga() function to start collecting analytics options, then execute those as soon as the larger script has been asynchronously loaded:

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); ga('create', 'UA-XXXXX-Y', 'auto'); ga('send', 'pageview');

Could I build a plugin system that's so small I can inline it into the <head> of every page without worrying about the extra page bulk?

I think I can. Here's the smallest version I've reached so far:

var datasette = datasette || {};
datasette.plugins = (() => {
  var registry = {};
  return {
    register: (hook, fn, parameters) => {
      if (!registry[hook]) {
        registry[hook] = [];
      }
      registry[hook].push([fn, parameters]);
    },
    call: (hook, args) => {
      args = args || {};
      var results = [];
      (registry[hook] || []).forEach(([fn, parameters]) => {
        /* Call with the correct arguments */
        var result = fn.apply(
          fn,
          parameters.map((parameter) => args[parameter])
        );
        if (result !== undefined) {
          results.push(result);
        }
      });
      return results;
    },
  };
})();

Which minifies (with uglify-es) to just 250 characters - it fits in a tweet!

var datasette=datasette||{};datasette.plugins=(()=>{var a={};return{register:(t,r,e)=>{a[t]||(a[t]=[]),a[t].push([r,e])},call:(t,r)=>{r=r||{};var e=[];return(a[t]||[]).forEach(([a,t])=>{var s=a.apply(a,t.map(a=>r[a]));void 0!==s&&e.push(s)}),e}}})();

Matthew Somerville suggested an even shorter version using object destructuring to get it down to 200 bytes. I'll probably adopt that idea in my next iteration.

As part of this work I've also started thinking about how I can introduce minification into the Datasette development process, and how I can start writing unit tests for the JavaScript. I tried using Jest at first but now I'm leaning towards Cypress, since that will help support writing browser automation tests against some of the more advanced functionality as well.

datasette insert

Issue 1160 tracks my work on a big new idea for Datasette: datasette insert.

Datasette assumes you already have data in a SQLite database. Getting that data into your database is an exercise left for the user.

I've been building a plethora of additional tools to help with that challenge - things like sqlite-utils and csvs-to-sqlite and yaml-to-sqlite and so-on.

I want Datasette itself to include this functionality, starting with a new datasette insert sub-command.

So why bring this functionality into Datasette itself? A few reasons:

Helping people install Python software is hard enough already, without them needing to install multiple tools. If I can tell someone to brew install datasette and then datasette insert data.db file.csv they'll have a much easier time than if they have to install and learn additional tools as well.

This is a fantastic opportunity for plugin hooks. I want a plugin hook that lets people build additional import formats for Datasette - by default it will support CSV, TSV and JSON but I want to be able to add GeoJSON and Shapefiles and DBF files and Excel and all sorts of other things. Crucially, I plan to build both a command-line interface and a web interface within Datasette that lets people upload files - or copy and paste in their content, or specify a URL to import them from. This means any new format plugins will enable not just a command-line interface but a web interface for importing that data as well.

It's still possible that Datasette will evolve beyond just SQLite in the future. I don't want to have to build yaml-to-postgresql and so on - this new Datasette plugin hook could support importing formats into other databases too in the future.

I made some initial progress on this but it's going to be a while before I can land anything to main.

Storing metadata in _metadata tables

Last week I wrote about the new _internal in-memory database in Datasette, which collects details of every attached database, table and column in order to power things like a forthcoming searchable, paginated homepage.

This made me think: what else could I address with in-memory SQLite databases?

I realized that this could overlap with a truly ancient Datasette feature idea: the ability to bundle metadata and templates within the SQLite database files that Datasette exposes. Bundling metadata is particularly interesting as it would allow that metadata to "travel" with the data, contained in the same binary SQLite file.

So I've been doing a whole lot of thinking about how this might work - including how the existing metadata.json format passed to Datasette when it starts up could be read into an in-memory database and then further combined with any metadata baked into tables in the various attached databases.

The work is entirely an issue thread at the moment, but I'm excited to spend more time on it in the future. I want to implement column-level metadata as part of this, so you can finally attach information to columns to help describe what they actually represent.

sqlite-utils cached table counts

select count(*) from table can be a slow query against large database tables, in both SQLite and PostgreSQL. This often surprises people - surely it should be trivial for a relational database to cache the total number of rows in a table? It turns out that's not actually possible, due to MVCC - a relational database needs to be able to correctly answer that query from the point of view of the current transaction, even while other transactions are inserting or deleting rows.

Datasette tries to show the count of rows in a table in various places. This has led to a number of performance issues when running against large tables - Datasette has a number of existing tricks for handling that, including timing out counts after 50ms and caching the counts for databases opened in "immutable" mode, but it's still a source of problems.

For a while I've been contemplating working around this with triggers. SQLite has a robust triggers system - how about writing triggers that increment or decrement a counter any time a row is inserted or deleted from a table?

My initial experiments indicate that this can indeed work really well!

I've been implementing this in sqlite-utils. Like most features of that package it comes in two flavours: a Python API you can call from your own code, and a command-line tool you can use to directly manipulate a database.

Enabling cached counts for a table in the Python API documentation describes how this works. You can call db.enable_counts() on a database to install the triggers, after which a _counts table will be automatically updated with a count record for each table.
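
In practice that looks something like the sketch below, against a throwaway database (the file name is made up, and the exact seeded counts depend on whatever rows the tables already contain):

import sqlite_utils

db = sqlite_utils.Database("counts-demo.db")
db["plants"].insert_all([{"name": "Fern"}, {"name": "Cactus"}])

# Install the count-maintaining triggers (and seed _counts) for every table
db.enable_counts()

# Inserts and deletes now keep the _counts table up to date automatically
db["plants"].insert({"name": "Bamboo"})
print(list(db["_counts"].rows))
# Expect something like: [{'table': 'plants', 'count': 3}]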

The sqlite-utils enable-counts command can be used to configure this from the command-line.

The triggers it adds for a table look like this:

CREATE TRIGGER [plants_counts_insert] AFTER INSERT ON [plants]
BEGIN
  INSERT OR REPLACE INTO [_counts]
  VALUES (
    'plants',
    COALESCE(
      (SELECT count FROM [_counts] WHERE [table] = 'plants'),
      0
    ) + 1
  );
END

CREATE TRIGGER [plants_counts_delete] AFTER DELETE ON [plants]
BEGIN
  INSERT OR REPLACE INTO [_counts]
  VALUES (
    'plants',
    COALESCE(
      (SELECT count FROM [_counts] WHERE [table] = 'plants'),
      0
    ) - 1
  );
END

I haven't benchmarked it yet, but my expectation is that this will add a minimal overhead to INSERT and DELETE operations against that table. If your table isn't handling hundreds of writes a second I expect the overhead to be inconsequential.

This work is mostly done, and should be out in a sqlite-utils release within a few days.

Update 3rd January 2021: this is all now released in sqlite-utils 3.2.

Releases this week

sqlite-utils: 3.1.1 - 2021-01-01
Python CLI utility and library for manipulating SQLite databases

datasette-publish-vercel: 0.9.1 - 2020-12-28
Datasette plugin for publishing data using Vercel

TIL this week

Replicating SQLite with rqlite
Relinquishing control in Python asyncio
Using Jest without a package.json
Using Prettier to check JavaScript code style in GitHub Actions

Friday, 01. January 2021

Mike Jones: self-issued

Near-Final Second W3C WebAuthn and FIDO2 CTAP Specifications


The W3C WebAuthn and FIDO2 working groups have been busy this year preparing to finish second versions of the W3C Web Authentication (WebAuthn) and FIDO2 Client to Authenticator Protocol (CTAP) specifications. While remaining compatible with the original standards, these second versions add additional features, among them for user verification enhancements, manageability, enterprise features, and an Apple attestation format. Near-final review drafts of both have been published:

Web Authentication: An API for accessing Public Key Credentials, Level 2, W3C Candidate Recommendation Snapshot, 22 December 2020
Client to Authenticator Protocol (CTAP), Review Draft, December 08, 2020

Expect these to become approved standards in early 2021. Happy New Year!


SecEvent Delivery specs are now RFCs 8935 and 8936


The SecEvent Delivery specifications, “Push-Based Security Event Token (SET) Delivery Using HTTP” and “Poll-Based Security Event Token (SET) Delivery Using HTTP”, are now RFC 8935 and RFC 8936. Both deliver Security Event Tokens (SETs), which are defined by RFC 8417. The abstracts of the specifications are:

Push-Based Security Event Token (SET) Delivery Using HTTP:

This specification defines how a Security Event Token (SET) can be delivered to an intended recipient using HTTP POST over TLS. The SET is transmitted in the body of an HTTP POST request to an endpoint operated by the recipient, and the recipient indicates successful or failed transmission via the HTTP response.

Poll-Based Security Event Token (SET) Delivery Using HTTP:

This specification defines how a series of Security Event Tokens (SETs) can be delivered to an intended recipient using HTTP POST over TLS initiated as a poll by the recipient. The specification also defines how delivery can be assured, subject to the SET Recipient’s need for assurance.

These were designed with use cases such as Risk & Incident Sharing and Collaboration (RISC) and Continuous Access Evaluation Protocol (CAEP) in mind, both of which are happening in the OpenID Shared Signals and Events Working Group.
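
As a rough sketch of what a push transmitter does under RFC 8935 (the endpoint URL and token below are hypothetical placeholders, and the exact headers and error handling should be checked against the RFC):

import requests

# A pre-signed Security Event Token (a JWT); placeholder value only
set_jwt = "eyJhbGciOiJFUzI1NiJ9.eyJpc3MiOiJleGFtcGxlIn0.sig"

response = requests.post(
    "https://receiver.example.com/events",   # hypothetical recipient endpoint
    data=set_jwt,
    headers={
        "Content-Type": "application/secevent+jwt",
        "Accept": "application/json",
    },
    timeout=10,
)

# The recipient acknowledges successful delivery with a 2xx status
# (202 Accepted) and signals failures with a JSON error body.
response.raise_for_status()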

Wednesday, 30. December 2020

SSI Ambassador

No, Node. Sorry for the typo & thanks for pointing it out.

No, Node. Sorry for the typo & thanks for pointing it out.


Tuesday, 29. December 2020

Simon Willison

Quoting Joe Morrison


You know Google Maps? What I do is, like, build little pieces of Google Maps over and over for people who need them but can’t just use Google Maps because they’re not allowed to for some reason, or another.

Joe Morrison

Monday, 28. December 2020

Simon Willison

Replicating SQLite with rqlite


Replicating SQLite with rqlite

I've been trying out rqlite, a "lightweight, distributed relational database, which uses SQLite as its storage engine". It's written in Go and uses the Raft consensus algorithm to allow a cluster of nodes to elect a leader and replicate SQLite statements between them. By default it uses in-memory SQLite databases with an on-disk Raft replication log - here are my notes on running it in "on disk" mode as a way to run multiple Datasette processes against replicated SQLite database files.

Via @simonw
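
To get a feel for the API, here is a rough sketch of writing to and reading from a single local rqlite node over HTTP. It assumes rqlite's /db/execute and /db/query endpoints on the default port 4001; check the rqlite documentation for the exact request formats.

```python
# Rough sketch of talking to a local rqlite node over its HTTP API, assuming
# the /db/execute and /db/query endpoints and the default port 4001.
import json
import requests

BASE = "http://localhost:4001"

# Writes go to /db/execute as a JSON array of SQL statements; rqlite forwards
# them to the Raft leader so every node applies the same statements.
requests.post(
    f"{BASE}/db/execute",
    data=json.dumps([
        "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)",
        "INSERT INTO notes (body) VALUES ('replicated via raft')",
    ]),
    headers={"Content-Type": "application/json"},
    timeout=10,
)

# Reads can be served from any node via /db/query.
result = requests.get(
    f"{BASE}/db/query", params={"q": "SELECT * FROM notes"}, timeout=10
)
print(result.json())
```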


Phil Windley's Technometria

The Generative Self-Sovereign Internet

Summary: The self-sovereign internet, a secure overlay on the internet, provides the same capacity to produce change by numerous, unaffiliated and uncoordinated actors as the internet itself. The generative nature of the self-sovereign internet is underpinned by the same kind of properties that make the internet what it is, promising a more secure and private, albeit no less useful, internet for t

Summary: The self-sovereign internet, a secure overlay on the internet, provides the same capacity to produce change by numerous, unaffiliated and uncoordinated actors as the internet itself. The generative nature of the self-sovereign internet is underpinned by the same kind of properties that make the internet what it is, promising a more secure and private, albeit no less useful, internet for tomorrow.

This is part one of a two part series on the generativity of SSI technologies. This article explores the properties of the self-sovereign internet and makes the case that they justify its generativity claims. The second part will explore the generativity of verifiable credential exchange, the essence of self-sovereign identity.

In 2005, Jonathan Zittrain wrote a compelling and prescient examination of the generative capacity of the Internet and its tens of millions of attached PCs. Zittrain defined generativity thus:

Generativity denotes a technology’s overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences.

Zittrain masterfully describes the extreme generativity of the internet and its attached PCs, explains why openness of both the network and the attached computers is so important, discusses threats to the generative nature of the internet, and proposes ways that the internet can remain generative while addressing some of those threats. While the purpose of this article is not to review Zittrain's paper in detail, I recommend you take some time to explore it.

Generative systems use a few basic rules, structures, or features to yield behaviors that can be extremely varied and unpredictable. Zittrain goes on to lay out the criteria for evaluating the generativity of a technology:

Generativity is a function of a technology’s capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility.

This sentence sets forth four important criteria for generativity:

Capacity for Leverage—generative technology makes difficult jobs easier—sometimes possible. Leverage is measured by the capacity of a device to reduce effort.
Adaptability—generative technology can be applied to a wide variety of uses with little or no modification. Where leverage speaks to a technology's depth, adaptability speaks to its breadth. Many very useful devices (e.g. airplanes, saws, and pencils) are nevertheless fairly narrow in their scope and application.
Ease of Mastery—generative technology is easy to adopt and adapt to new uses. Many billions of people use a PC (or mobile device) to perform tasks important to them without significant skill. As they become more proficient in its use, they can apply it to even more tasks.
Accessibility—generative technology is easy to come by and access. Access is a function of cost, deployment, regulation, monopoly power, secrecy, and anything else which introduces artificial scarcity.

The identity metasystem I've written about in the past is composed of several layers that provide its unique functionality. This article uses Zittrain's framework, outlined above, to explore the generativity of what I've called the Self-Sovereign Internet, the second layer in the stack shown in Figure 1. A future article will discuss the generativity of credential exchange at layer three.

Figure 1: SSI Stack (click to enlarge)

The Self-Sovereign Internet

In DIDComm and the Self-Sovereign Internet, I make the case that the network of relationships created by the exchange of decentralized identifiers (layer 2 in Figure 1) forms a new, more secure layer on the internet. Moreover, the protocological properties of DIDComm make that layer especially useful and flexible, mirroring the internet itself.

This kind of "layer" is called an overlay network. An overlay network comprises virtual links that correspond to a path in the underlying network. Secure overlay networks rely on an identity layer based on asymmetric key cryptography to ensure message integrity, non-repudiation, and confidentiality. TLS (HTTPS) is a secure overlay, but it is incomplete because it's not symmetrical. Furthermore, it's relatively inflexible because it overlays a network layer using a client-server protocol1.

In Key Event Receipt Infrastructure (KERI) Design, Sam Smith makes the following important point about secure overlay networks:

The important essential feature of an identity system security overlay is that it binds together controllers, identifiers, and key-pairs. A sender controller is exclusively bound to the public key of a (public, private) key-pair. The public key is exclusively bound to the unique identifier. The sender controller is also exclusively bound to the unique identifier. The strength of such an identity system based security overlay is derived from the security supporting these bindings. From Key Event Receipt Infrastructure (KERI) Design
Referenced 2020-12-21T11:08:57-0700

Figure 2 shows the bindings between these three components of the secure overlay.

Figure 2: Binding of controller, authentication factors, and identifiers that provide the basis for a secure overlay network. (click to enlarge)

In The Architecture of Identity Systems, I discuss the strength of these critical bindings in various identity system architectures. The key point for this discussion is that the peer-to-peer network created by peer DID exchanges constitutes an overlay with an autonomic architecture, providing not only the strongest possible bindings between the controller, identifiers, and authentication factors (public key), but also requiring no external trust basis (like a ledger) because peer DIDs are self-certifying.

DIDs allow us to create cryptographic relationships, solving significant key management problems that have plagued asymmetric cryptography since its inception. Consequently, regular people can use a general purpose secure overlay network based on DIDs. The DID network that is created when people use these relationships provides a protocol, DIDComm, that is every bit as flexible and useful as is TCP/IP.

Consequently, communications over a DIDComm-enabled peer-to-peer network are as generative as the internet itself. Thus, the secure overlay network formed by DIDComm connections represents a self-sovereign internet, emulating the underlying internet's peer-to-peer messaging in a way that is both secure and trustworthy2 without the need for external third parties3.

Properties of the Self-Sovereign Internet

In World of Ends, Doc Searls and Dave Weinberger enumerate the internet's three virtues:

No one owns it. Everyone can use it. Anyone can improve it.

These virtues apply to the self-sovereign internet as well. As a result, the self-sovereign internet displays important properties that support its generativity. Here are the most important:

Decentralized—decentralization follows directly from the fact that no one owns it. This is the primary criterion for judging the degree of decentralization in a system.

Heterarchical—a heterarchy is a "system of organization where the elements of the organization are unranked (non-hierarchical) or where they possess the potential to be ranked a number of different ways." Nodes in a DIDComm-based network relate to each other as peers. This is a heterarchy; there is no inherent ranking of nodes in the architecture of the system.

Interoperable—regardless of what providers or systems we use to connect to the self-sovereign internet, we can interact with any other principals who are using it so long as they follow protocol4.

Substitutable—The DIDComm protocol defines how systems that use it must behave to achieve interoperability. That means that anyone who understands the protocol can write software that uses DIDComm. Interoperability ensures that we can operate using a choice of software, hardware, and services without fear of being locked into a proprietary choice. Usable substitutes provide choice and freedom.

Reliable and Censorship Resistant—people, businesses, and others must be able to use the secure overlay network without worrying that it will go down, stop working, go up in price, or get taken over by someone who would harm it or those who use it. This is larger than mere technical trust that a system will be available and extends to the issue of censorship.

Non-proprietary and Open—no one has the power to change the self-sovereign internet by fiat. Furthermore, it can't go out of business and stop operation because its maintenance and operation are distributed instead of being centralized in the hands of a single organization. Because the self-sovereign internet is an agreement rather than a technology or system, it will continue to work.

The Generativity of the Self-Sovereign Internet

Applying Zittrain's framework for evaluating generativity is instructive for understanding the generative properties of the self-sovereign internet.

Capacity for Leverage

In Zittrain's words, leverage is the extent to which an object "enables valuable accomplishments that otherwise would be either impossible or not worth the effort to achieve." Leverage multiplies effort, reducing the time and cost necessary to innovate new capabilities and features. Like the internet, DIDComm's extensibility through protocols enables the creation of special-purpose networks and data distribution services on top of it. By providing a secure, stable, trustworthy platform for these services, DIDComm-based networks reduce the effort and cost associated with these innovations.

Like a modern operating system's application programming interface (API), DIDComm provides a standardized platform supporting message integrity, non-repudiation, and confidentiality. Programmers get the benefits of a trusted message system without need for expensive and difficult development.

Adaptability

Adaptability can refer to a technology's ability to be used for multiple activities without change as well as its capacity for modification in service of new use cases. Adaptability is orthogonal to capacity for leverage. An airplane, for example, offers incredible leverage, allowing goods and people to be transported over long distances quickly. But airplanes are neither useful in activities outside transportation nor easily modified for different uses. A technology that supports hundreds of use cases is more generative than one that is useful in only a few.

Like TCP/IP, DIDComm makes few assumptions about how the secure messaging layer will be used. Thus the network formed by the nodes in a DIDComm network can be adapted to any number of applications. Moreover, because a DIDComm-based network is decentralized and self-certifying, it is inherently scalable for many uses.

Ease of Mastery

Ease of mastery refers to the ability of a technology to be easily and broadly adapted and adopted. The secure, trustworthy platform of the self-sovereign internet allows developers to create applications without worrying about the intricacies of the underlying cryptography or key management.

At the same time, because of its standard interface and protocol, DIDComm-based networks can present users with a consistent user experience that reduces the skill needed to establish and use connections. Just like a browser presents a consistent user experience on the web, a DIDComm agent can present users with a consistent user experience for basic messaging, as well as specialized operations that run over the basic messaging system.

Of special note is key management, which has been the Achilles heel of previous attempts at secure overlay networks for the internet. Because of the nature of decentralized identifiers, identifiers are separated from the public key, allowing the keys to be rotated when needed without also needing to refresh the identifier. This greatly reduces the need for people to manage or even see keys. People focus on the relationships and the underlying software manages the keys.5

Accessibility

Accessible technologies are easy to acquire, inexpensive, and resistant to censorship. DIDComm's accessibility is a product of its decentralized and self-certifying nature. Protocols and implementing software are freely available to anyone without intellectual property encumbrances. Multiple vendors, and even open-source tools can easily use DIDComm. No central gatekeeper or any other third party is necessary to initiate a DIDComm connection in service of a digital relationship. Moreover, because no specific third parties are necessary, censorship of use is difficult.

Conclusion

Generativity allows decentralized actors to create cooperating, complex structures and behavior. No one person or group can or will think of all the possible uses, but each is free to adapt the system to their own use. The architecture of the self-sovereign internet exhibits a number of important properties. The generativity of the self-sovereign internet depends on those properties. The true value of the self-sovereign internet is that it provides a leverageable, adaptable, usable, accessible, and stable platform upon which others can innovate.

Notes

1. Implementing general-purpose messaging on HTTP is not straightforward, especially when combined with non-routable IP addresses for many clients. On the other hand, simulating client-server interactions on a general-purpose messaging protocol is easy.
2. I'm using "trust" in the cryptographic sense, not in the reputational sense. Cryptography allows us to trust the fidelity of the communication but not its content.
3. Admittedly, the secure overlay is running on top of a network with a number of third parties, some benign and others not. Part of the challenge of engineering a functional secure overlay with self-sovereignty is mitigating the effects that these third parties can have within the self-sovereign internet.
4. Interoperability is, of course, more complicated than merely following the protocols. Daniel Hardman does an excellent job of discussing this for verifiable credentials (a protocol that runs over DIDComm), in Getting to Practical Interop With Verifiable Credentials.
5. More details about some of the ways software can greatly reduce the burden of key management when things go wrong can be found in What If I Lose My Phone? by Daniel Hardman.

Photo Credit: Seed Germination from USDA (CC0)

Tags: generative internet identity ssi didcomm decentralized+identifiers self-sovereign+internet


Simon Willison

Quoting Stephanie Morillo

While copywriting is used to persuade a user to take a certain action, technical writing exists to support the user and remove barriers to getting something done. Good technical writing is hard because writers must get straight to the point without losing or confusing readers. — Stephanie Morillo

While copywriting is used to persuade a user to take a certain action, technical writing exists to support the user and remove barriers to getting something done. Good technical writing is hard because writers must get straight to the point without losing or confusing readers.

Stephanie Morillo

Sunday, 27. December 2020

Doc Searls Weblog

We’ve seen this movie before

When some big outfit with a vested interest in violating your privacy says they are only trying to save small business, grab your wallet. Because the game they’re playing is misdirection away from what they really want. The most recent case in point is Facebook, which ironically holds the world’s largest database on individual human […]

When some big outfit with a vested interest in violating your privacy says they are only trying to save small business, grab your wallet. Because the game they’re playing is misdirection away from what they really want.

The most recent case in point is Facebook, which ironically holds the world’s largest database on individual human interests while also failing to understand jack shit about personal boundaries.

This became clear when Facebook placed the ad above and others like it in major publications recently, and mostly made bad news for itself. We saw the same kind of thing in early 2014, when the IAB ran a similar campaign against Mozilla, using ads like this:

That one was to oppose Mozilla’s decision to turn on Do Not Track by default in its Firefox browser. Never mind that Do Not Track was never more than a polite request that websites not infect visitors with a beacon, like those worn by marked animals, so one can be tracked away from the website. Had the advertising industry and its dependents in publishing simply listened to that signal, and respected it, we might never have had the GDPR or the CCPA, both of which are still failing at the same mission. (But, credit where due: the GDPR and the CCPA have at least forced websites to put up insincere and misleading opt-out popovers in front of every website whose lawyers are scared of violating the letter—but never the spirit—of those and other privacy laws.)

The IAB succeeded in its campaign against Mozilla and Do Not Track; but the victory was Pyrrhic, because users decided to install ad blockers instead, which by 2015 was the largest boycott in human history. Plus a raft of privacy laws, with more in the pipeline.

We also got Apple on our side. That’s good, but not good enough.

What we need are working tools of our own. Examples: Global Privacy Control (and all the browsers and add-ons mentioned there), Customer Commons’ #NoStalking term, the IEEE’s P7012 – Standard for Machine Readable Personal Privacy Terms, and other approaches to solving business problems from our side—rather than always from the corporate one.

In those movies, we’ll win.

Because if only Apple wins, we still lose.

Dammit, it’s still about what The Cluetrain Manifesto said in the first place, in this “one clue” published almost 21 years ago:

we are not seats or eyeballs or end users or consumers.
we are human beings — and our reach exceeds your grasp.
deal with it.

We have to make them deal. All of them. Not just Apple. We need code, protocols and standards, and not just regulations.

All the projects linked to above can use some help, plus others I’ll list here too if you write to me with them. (Comments here only work for Harvard email addresses, alas. I’m doc at searls dot com.)


Simon Willison

Weeknotes: Datasette internals

I've been working on some fundamental changes to Datasette's internal workings - they're not quite ready for a release yet, but they're shaping up in an interesting direction. One of my goals for Datasette is to be able to handle a truly enormous variety of data in one place. The Datasette Library ticket tracks this effort - I'd like a newsroom (or any other information-based organization) to be

I've been working on some fundamental changes to Datasette's internal workings - they're not quite ready for a release yet, but they're shaping up in an interesting direction.

One of my goals for Datasette is to be able to handle a truly enormous variety of data in one place. The Datasette Library ticket tracks this effort - I'd like a newsroom (or any other information-based organization) to be able to keep hundreds of databases with potentially thousands of tables all in a single place.

SQLite databases are just files on disk, so if you have a TB of assorted databases of all shapes and sizes I'd like to be able to present them in a single Datasette instance.

If you have a hundred database files each with a hundred tables, that's 10,000 tables total. This implies a need for pagination of the homepage, plus the ability to search and filter within the tables that are available to the Datasette instance.

Sounds like the kind of problem I'd normally solve with Datasette!

So in issue #1150 I've implemented the first part of a solution. On startup, Datasette now creates an in-memory SQLite database representing all of the connected databases and tables, plus those tables' columns, indexes and foreign keys.

For a demo, first sign in as root to the latest.datasette.io demo instance and then visit the private _internal database.

This new internal database is currently private because I don't want to expose any metadata about tables that may themselves be covered by Datasette's permissions mechanism.

The new in-memory database represents the schemas of the underlying database files - but what if those change, as new tables are added or modified?

SQLite has a neat trick to help with this: PRAGMA schema_version returns an integer representing the current version of the schema, which changes any time a table is created or modified.

This means I can cache the table schemas and only recalculate them if something has changed. Running PRAGMA schema_version against a connection to a database is an extremely fast query.

I first used this trick in datasette-graphql to cache the results of GraphQL schema introspection, so I'm confident it will work here too.
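
As a rough illustration of the trick (not Datasette's actual code), a cache keyed on PRAGMA schema_version might look something like this; the fixtures.db filename is just a placeholder.

```python
# Minimal sketch of the PRAGMA schema_version caching trick described above.
# Illustrative only, not Datasette's implementation.
import sqlite3

_schema_cache = {}  # path -> (schema_version, list of table names)

def tables_for(path):
    conn = sqlite3.connect(path)
    try:
        version = conn.execute("PRAGMA schema_version").fetchone()[0]
        cached = _schema_cache.get(path)
        if cached and cached[0] == version:
            return cached[1]  # nothing changed since the last introspection
        # Schema changed (or first run): re-read table names from sqlite_master.
        tables = [
            row[0]
            for row in conn.execute(
                "SELECT name FROM sqlite_master WHERE type = 'table'"
            )
        ]
        _schema_cache[path] = (version, tables)
        return tables
    finally:
        conn.close()

print(tables_for("fixtures.db"))  # hypothetical database file
```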

The problem with permissions

There's one really gnarly challenge I still need to solve here: permissions.

Datasette's permissions system, added in Datasette 0.44 back in June, works against plugin hooks. Permissions plugins can answer questions along the lines of "is the current authenticated actor allowed to perform the action view-table against table X?". Every time that question is asked, the plugins are queried via a plugin hook.

When I designed the permissions system I forgot a crucial lesson I've learned about permissions systems before: at some point, you're going to need to answer the question "show me the list of all Xs that this actor has permission to act on".

That time has now come. If I'm going to render a paginated homepage for Datasette listing 10,000+ tables, I need an efficient way to calculate the subset of those 10,000 tables that the current user is allowed to see.

Looping through and calling that plugin hook 10,000 times isn't going to cut it.

So I'm starting to rethink permissions a bit - I'm glad I've not hit Datasette 1.0 yet!

In my favour is SQLite. Efficiently answering permissions questions in bulk, in a generic, customizable way, is a really hard problem. It's made a lot easier by the presence of a relational database.

Issue #1152 tracks some of my thinking on this. I have a hunch that the solution is going to involve a new plugin hook, potentially along the lines of "return a fragment of SQL that identifies databases or tables that this user can access". I can then run that SQL against the new _internal database (potentially combined with SQL returned by other plugins answering the same hook, using an AND or a UNION) to construct a set of tables that the user can access.

If I can do that, and then run the query against an in-memory SQLite database, I should be able to provide a paginated, filtered interface to 10,000+ tables on the homepage that easily fulfills my performance goals.
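
To make the idea concrete, here is a purely hypothetical sketch of how SQL fragments returned by plugins could be combined with UNION and run against an in-memory catalog; none of the function or table names below are real Datasette APIs.

```python
# Hypothetical sketch: plugins return SQL fragments selecting the tables an
# actor may see, and the fragments are UNIONed and run against an in-memory
# catalog standing in for the _internal database.
import sqlite3

def plugin_a_fragment(actor):
    # e.g. everyone can see tables in the 'public' database
    return "SELECT database_name, table_name FROM tables WHERE database_name = 'public'"

def plugin_b_fragment(actor):
    # e.g. staff can see everything
    if actor and actor.get("role") == "staff":
        return "SELECT database_name, table_name FROM tables"
    return None

def visible_tables(internal_conn, actor, plugins):
    fragments = [fragment for f in plugins if (fragment := f(actor))]
    if not fragments:
        return []
    return internal_conn.execute(" UNION ".join(fragments)).fetchall()

# Usage against a stand-in for the internal catalog:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tables (database_name TEXT, table_name TEXT)")
conn.executemany(
    "INSERT INTO tables VALUES (?, ?)",
    [("public", "posts"), ("private", "salaries")],
)
print(visible_tables(conn, {"role": "staff"}, [plugin_a_fragment, plugin_b_fragment]))
```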

I'd like to ship the new _internal database in a release quite soon, so I may end up implementing a slower version of this first. It's definitely an interesting problem.

Saturday, 26. December 2020

Doc Searls Weblog

Wonder What?

Our Christmas evening of cinematic indulgence was watching Wonder Woman 1984, about which I just posted this, elsewhere on the Interwebs: I mean, okay, all “super” and “enhanced” hero (and villain) archetypes are impossible. Not found in nature. You grant that. After a few thousand episodes in the various franchises, one’s disbelief becomes fully suspended. So […]

Our Christmas evening of cinematic indulgence was watching Wonder Woman 1984, about which I just posted this, elsewhere on the Interwebs:

I mean, okay, all “super” and “enhanced” hero (and villain) archetypes are impossible. Not found in nature. You grant that. After a few thousand episodes in the various franchises, one’s disbelief becomes fully suspended. So when you’ve got an all-female island of Amazons (which reproduce how?… by parthenogenesis?) playing an arch-Freudian Greco-Roman Quidditch, you say hey, why not? We’re establishing character here. Or backstory. Or something. You can hang with it, long as there are a few connections to what might be a plausible reality, and while things move forward in a sensible enough way. And some predictability counts. For example, you know the young girl, this movie’s (also virgin-birthed) Anakin Skywalker, is sure to lose the all but endless Quidditch match, and will learn in losing a lesson (taught by … who is that? Robin Wright? Let’s check on one of our phones) that will brace the front end of what turns out at the end of the story to be its apparent moral arc.

And then, after the girl grows up to be an introverted scientist-supermodel who hasn’t aged since WWI (an item that hasn’t raised questions with HR since long before it was called “Personnel,” and we later learn has been celibate or something ever since her only-ever boyfriend died sixty-four years earlier while martyring his ass in a plane crash you’re trying to remember from the first movie) has suddenly decided, after all this time, to start fighting crime with her magic lasso and her ability to leap shopping mall atria in a single bound; and then, after same boyfriend inexplicably comes back from the dead to body-snatch some innocent dude, they go back to hugging and smooching and holding hands like the intervening years of longing (her) and void (him) were no big deals, and then they jack an idle (and hopefully gassed up) F111, which in reality doesn’t have pilot-copilot seats side-by-side (or even a co-pilot, being a single-seat plane), and which absolutely requires noise-isolating earphones this couple doesn’t have, because afterburner noise in the cockpit in one of those mothers is about 2000db, and the undead boyfriend, who flew a Fokker or something in the prior movie, knows exactly how to fly a jet that only the topmost of guns are allowed to even fantasize about, and then he and Wondermodel have a long conversation on a short runway during which they’re being chased by cops, and she kinda doubts that one of the gods in her polytheistic religion have given her full powers to make a whole plane invisible to radar, which she has to explain to her undead dude in 1984 (because he wouldn’t know about that, even though he knows everything else about the plane), and the last thing she actually made disappear was a paper cup, and then they somehow have a romantic flight, without refueling, from D.C. to a dirt road in an orchard somewhere near Cairo, while in the meantime the most annoying and charmless human being in human history—a supervillain-for-now whose one human power was selling self-improvement on TV—causes a giant wall to appear in the middle of a crowded city while apparently not killing anyone… Wholly shit.

And what I just described was about three minutes in the midst of this thing.

But we hung with it, in part because we were half-motivated to see if it was possible to tally both the impossibilities and plot inconsistencies of the damn thing. By the time it ended, we wondered if it ever would.

Bonus link.

Thursday, 24. December 2020

Doc Searls Weblog

A simple suggestion for Guilford College

Guilford College made me a pacifist. This wasn’t hard, under the circumstances. My four years there were the last of the 1960s, a stretch when the Vietnam War was already bad and getting much worse. Nonviolence was also a guiding principle of the civil rights movement, which was very active and local at the time, and […]

Guilford College made me a pacifist.

This wasn’t hard, under the circumstances. My four years there were the last of the 1960s, a stretch when the Vietnam War was already bad and getting much worse. Nonviolence was also a guiding principle of the civil rights movement, which was very active and local at the time, and pulled me in as well. I was also eligible for the draft if I dropped out. Risk of death has a way of focusing one’s mind.

As a Quaker college, this was also Guilford’s job. Hats off: I learned a lot, and enjoyed every second of it.

These days, however, Guilford—like lots of other colleges and universities—is in trouble. Scott Galloway and his research team at NYU do a good job of sorting out every U.S. college’s troubles here:

You’ll find Guilford in the “struggle” quadrant, top left. That one contains “Tier-2 schools with one or more comorbidities, such as high admit rates (anemic waiting lists), high tuition, or scant endowments.”

So I’d like to help Guilford, but not (yet) with the money they constantly ask me for. Instead, I have some simple advice: teach peace. Become the pacifist college. There’s a need for that, and the position is open. A zillion other small liberal arts colleges do what Guilford does. Replace “Guilford” on the page at that link with the name of any other good small liberal arts college and it’ll work for all of them. But none of the others teach peace, or wrap the rest of their curricular offerings around that simple and straightforward purpose. Or are in a position to do that. Guilford is.

Look at it this way: any institution can change in a zillion different ways; but the one thing it can’t change is where it comes from. Staying true to that is one of the strongest, most high-integrity things a college can do. By positioning around peace and pacifism, Guilford will align with its origins and stand alone in a field that will inevitably grow—and must if our species is to survive and thrive in an overcrowded and rapidly changing world.

Yes, there are a bunch of Quaker colleges, and colleges started by Quakers. (Twenty by this count). And they include some names bigger than Guilford’s: Cornell, Bryn Mawr, Haverford, Johns Hopkins. But none are positioned to lead on peace and pacifism, and only a few could be. (Earlham for sure. Maybe Wilmington.) The position is open, and Guilford should take it.

Fortuitously, a few days ago I got an email from Ed Winslow, chair of Guilford’s Board of Trustees, that begins with this paragraph:

The Board of Trustees met on Dec. 15 to consider the significant feedback we have received and for a time of discernment. In that spirit, we have asked President Moore to pause implementation of the program prioritization while the Board continues to listen and gather input from those of you who wish to offer it. We are hearing particularly from alumni who are offering fundraising ideas. We are also hearing internally and from those in the wider education community who are offering ideas as well.

So that’s my input: own the Peace Position.

For fundraising I suggest an approach I understand is implemented by a few other institutions (I’m told Kent State is one): tell alumni you’re done asking for money constantly and instead ask only to be included in their wills. I know this is contrary to most fundraising advice; but I believe it will work—and does, for some schools. Think about it: just knowing emails from one’s alma mater aren’t almost always shakedowns for cash is a giant benefit by itself.

In case anyone at Guilford wonders who the hell I am and why my advice ought to carry some weight, forgive me while I waive modesty and present these two facts:

On the notable Guilford alumni list, I’m tops in search results. I even beat Howard Coble, Tom Zachary, M.L. Carr, Bob Kauffman and World B. Free.
I was a success in the marketing business (much of it doing positioning) for several decades of my professional life.

So there ya go.

Peace, y’all.

Wednesday, 23. December 2020

Simon Willison

Datasette Weekly: Official project website for Datasette, building a search engine with Dogsheep Beta, sqlite-utils analyze-tables

Datasette Weekly: Official project website for Datasette, building a search engine with Dogsheep Beta, sqlite-utils analyze-tables Volume 5 of the Datasette Weekly-ish newsletter. Via @simonw

Bill Wendel's Real Estate Cafe

#UncoupleREFees: Will third price-fixing lawsuit unleash consumer savings in Mass?

BREAKING NEWS: A third massive lawsuit has been filed to #UnCoupleREFees and this time it’s focused on the MLS (Multiple Listing Service) in Massachusetts plus… The post #UncoupleREFees: Will third price-fixing lawsuit unleash consumer savings in Mass? first appeared on Real Estate Cafe.

BREAKING NEWS: A third massive lawsuit has been filed to #UnCoupleREFees and this time it’s focused on the MLS (Multiple Listing Service) in Massachusetts plus…

The post #UncoupleREFees: Will third price-fixing lawsuit unleash consumer savings in Mass? first appeared on Real Estate Cafe.


Identity Woman

Human Centered Security Podcast

I was invited by Heidi Trost to join her on her new podcast focused on Human Centered Security. We had a great chat focused on Self-Sovereign Identity. You can find it here on the Web, Spotify or Apple Podcasts. In this episode we talk about: What Kaliya describes as a new “layer” to the […] The post Human Centered Security Podcast appeared first on Identity Woman.

I was invited by Heidi Trost to join her on her new podcast focused on Human Centered Security. We had a great chat focused on Self-Sovereign Identity. You can find it here on the Web, Spotify or Apple Podcasts. In this episode we talk about: What Kaliya describes as a new “layer” to the […]

The post Human Centered Security Podcast appeared first on Identity Woman.


Nader Helmy

Intro to MATTR Learn Concepts

In the world of decentralized identity and digital trust, there are a variety of new concepts and topics that are frequently referenced, written, and talked about, but rarely is there a chance to introduce these concepts formally to audiences who aren’t already familiar with them. For this reason, we have created a new “Learn Concepts” series to outline the fundamental building blocks ne

In the world of decentralized identity and digital trust, there are a variety of new concepts and topics that are frequently referenced, written, and talked about, but rarely is there a chance to introduce these concepts formally to audiences who aren’t already familiar with them.

For this reason, we have created a new “Learn Concepts” series to outline the fundamental building blocks needed to understand this new technology paradigm and explore the ways that MATTR thinks about and understands the critical issues in the space.

Over on our MATTR Learn site, we have been building out a variety of resources to assist developers and architects with understanding the MATTR universe of tools and products. We are happy to announce we have updated the site to include this new educational content series alongside our existing resources.

Our Learn Concepts series covers the following topics:

Web of Trust 101
Digital Wallets
Verifiable Data
Semantic Web
Selective Disclosure
Trust Frameworks

To facilitate context sharing, each of these Learn Concepts has a distinct Medium post with a permanent URL in addition to being published on our MATTR Learn site. We will keep these resources up to date to make sure they remain evergreen and relevant to newcomers in the space.

We are excited to share what we’ve learned on our journey, and we look forward to adapting and expanding this knowledge base as standards progress and technologies mature.

Intro to MATTR Learn Concepts was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Learn Concepts: Trust Frameworks

Trust frameworks are a foundational component of the web of trust. A trust framework is a common set of best practice standards-based rules that ensure minimum requirements are met for security, privacy, identification management and interoperability through accreditation and governance. These operating rules provide a common framework for ecosystem participants, increasing trust between them

Trust frameworks are a foundational component of the web of trust. A trust framework is a common set of best practice standards-based rules that ensure minimum requirements are met for security, privacy, identification management and interoperability through accreditation and governance. These operating rules provide a common framework for ecosystem participants, increasing trust between them.

As digital service delivery models mature, it is essential that information is protected as it travels across jurisdictional and organizational boundaries. Trust frameworks define and bring together the otherwise disparate set of best practice principles, processes, standards that apply when it comes to collecting and sharing information on the web. As individuals and entities increasingly share their information cross contextually, across industry boundaries, trust frameworks provide the common set of rules that apply regardless of such differences. For example, service providers ranging from government agencies, banks and telecommunication companies, to health care providers could all follow the same set of data sharing practices under one trust framework. This macro application serves to reduce the need for bilateral agreements and fragmentation across industry. Ultimately trust frameworks serve to increase trust, improve efficiencies, and deliver significant economic and social benefits.

Some use-cases will require more detailed rules to be established than those set out in a trust framework with broad scope. Where this is the case, more detailed rules around specific hierarchies and roles can be established within the context of the higher order trust framework. The goal is always for the components of the framework to be transparent, and adherence to those components to be public. This enables entities to rely on the business or technical process carried out by others with trust and confidence. If done correctly, a trust framework is invisible to those who rely on it every day. It allows individuals and entities to conduct digital transactions knowing that the trust frameworks underpin, create accountability, and support the decisions they’re making.

Use Cases for Trust Frameworks

Historically speaking, trust frameworks have been extraordinarily complex and only worth the investment for high-value, high-volume transactions, such as the ones established by credit card companies. Now, with the introduction of decentralized technologies, there is a need to create digital trust frameworks that work for a much broader variety of transactions. Realizing the scope of this work comes with the recognition that there will be many different trust frameworks, both small and large in scope, for different federations across the web. Given that context, it is important to preserve end-user agency as much as possible as trust frameworks are developed and adoption and mutual recognition increases.

Looking at the ecosystem today, we can broadly group trust frameworks into three categories:

Domain-specific Trust Frameworks
- These are typically developed to serve a specific use-case, for example within a particular industry
- Often driven by industry and/or NGOs
- These have been able to develop faster than national trust frameworks (which are based in legislation), and as such may inform the development of national trust frameworks

National Trust Frameworks
- Typically broad in application and to facilitate a policy objective (for example, increased trust in data sharing)
- Driven by individual governments to address the needs of their citizens and residents
- Based in legislation, with more enforcement powers than either Domain-specific Trust Frameworks or International Trust Frameworks
- Likely to be informed by both Domain-specific Trust Frameworks and International Trust Frameworks

International Trust Frameworks
- These are typically broad in nature and developed to serve many countries, much like a model law
- Typically driven by governments, industry, or NGOs but geographically agnostic
- Likely to inform National Trust Frameworks

Accreditation and Assurance

An important part of satisfying the operational components of a trust framework is the ability to accredit ecosystem participants against the trust framework. This is a logical extension of the rules, requirements, and regulations trust frameworks set out. Trust frameworks typically include an accreditation scheme and associated ongoing compliance testing.

One aspect of accreditation in the identity context is compliance with standards. In the context of identity related trust frameworks, there are several kinds of assurance that relying parties will typically seek. These can include binding, information, authentication, and federation and identity assurance. Each standard may define their own distinct levels of assurance. The NIST Digital Identity Requirements and New Zealand Identification Management Standards are a good example of how this works in practice.

The process of accreditation and a successful certification is a core part of trust frameworks as it proves to the wider ecosystem (including auditors) that the entity, solution, or piece of software meets the business and technical requirements defined. Digital identity systems are increasingly modular, and one solution might involve a variety of different components, roles and providers. These should be developed and defined as part of the process of standing up a trust framework, testing its capabilities and defining processes around accreditation.

Technical Interoperability

Trust frameworks help to improve interoperability between entities by defining a common set of operating rules. In addition to setting out business and legal rules, it is important that high level technical rules are specified as well. Trust frameworks must clearly define expectations around the technical standards to be used, as well as what aspects of these standards are normatively required, optional, or somewhere in between. When it comes to digital identity trust frameworks, this may mean building on open-source code or evaluating against open test suites.

Test suites allow for normative testing around standards requirements and offer a way for parties to audit and ensure the processes being used throughout the identity lifecycle. They can be incredibly useful not only for entities using the trust framework, but for mutually recognized trust frameworks to understand and interpret the requirements coming from a particular set of rules.

Ongoing development of several digital identity trust frameworks based on the emerging decentralized web of trust can be found at industry organizations such as the Kantara Initiative and Trust Over IP Foundation as well as government-driven initiatives such as the Pan-Canadian Trust Framework.

Learn Concepts: Trust Frameworks was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Learn Concepts: Selective Disclosure

An important principle that we want to achieve when designing any system that involves handling Personally Identifiable Information (PII) is to minimize the data disclosed in a given interaction. When users share information, they should be able to choose what and how much they share on a case-by-case basis, while the relying parties receiving the information must be able to maintain assurances ab

An important principle that we want to achieve when designing any system that involves handling Personally Identifiable Information (PII) is to minimize the data disclosed in a given interaction. When users share information, they should be able to choose what and how much they share on a case-by-case basis, while the relying parties receiving the information must be able to maintain assurances about the presented information’s origin and integrity. This process is often referred to as selective disclosure of data. As technologists, by having solutions that easily achieve selective disclosure, we can drive a culture based on the minimum information exchange required to enhance user privacy.

Privacy and Correlation

Selective disclosure of information is particularly relevant when evaluating approaches to using verifiable credentials (VCs). Because authorities are able to issue credentials to a subject’s digital wallet, the subject is able to manage which data they disclose to relying parties as well as how that disclosure is performed. This presents an opportunity for those designing digital wallets to consider the user experience of data disclosure, particularly as it relates to the underlying technology and cryptography being used for data sharing.

The problem of user privacy as it relates to digital identity is a deep and complicated one, however the basic approach has been to allow users to share only the information which is strictly necessary in a particular context. The VC Data Model spec provides some guidance on how to do so, but stops short of offering a solution to the issue of managing user privacy and preventing correlation of their activities across different interactions:

Organizations providing software to holders should strive to identify fields in verifiable credentials containing information that could be used to correlate individuals and warn holders when this information is shared.

A number of different solutions have been deployed to address the underlying concerns around selective disclosure. Each solution makes a different set of assumptions and offers different tradeoffs when it comes to usability and convenience.

Approaches to Selective Disclosure

When it comes to solutions for selective disclosure of verifiable credentials, there are many different ways to tackle this problem, but three of the most common are:

Just in time issuance — contact the issuer at request time either directly or indirectly for a tailored assertion
Trusted witness — use a trusted witness between the provider and the relying party to mediate the information disclosure
Cryptographic solutions — use a cryptographic technique to disclose a subset of information from a larger assertion

Just in time issuance

Just in time issuance, a model made popular by OpenID Connect, assumes the issuer is highly available, which imposes an infrastructure burden on the issuer that is proportional to the number of subjects they have information for and where those subjects use their information. Furthermore, in most instances of this model, the issuer learns where a subject is using their identity information, which can be a serious privacy problem.

Trusted witness

Trusted witness shifts this problem to be more of a presentation concern, where a witness de-anonymizes the subject presenting the information and presents an assertion with only the information required by the relying party. Again, this model requires a highly available party other than the holder and relying party present when a subject wants to present information, one that must be highly trusted and one that bears witness to a lot of PII on the subject, leading to privacy concerns.

Cryptographic solutions

Cryptographic solutions offer an alternative to these approaches by solving the selective disclosure problem directly at the core data model layer of the VC, providing a simpler and more flexible method of preserving user privacy.

There are a variety of ways that cryptography can be used to achieve selective disclosure or data minimization, but perhaps the most popular approach is using a branch of cryptography often known as Zero-Knowledge Proofs, or ZKPs. The emergent feature of this technology is that a prover can prove knowledge of some data without exposing any additional data. Zero-knowledge proofs can be achieved in a flexible manner with verifiable credentials using multi-message digital signatures such as BBS+.

Traditional Digital Signatures

Traditional digital signatures look a bit like this. You have a message (virtually any kind of data for which you want to establish integrity) and a keypair (private and public key) which you use to produce a digital signature on the data. By having the message, public key, and the signature, verifiers are able to evaluate whether the signature is valid or not, thereby establishing the integrity of the message and the authenticity of the entity that signed the message. In the context of verifiable credentials, the entity doing the signing is the issuer of the credential, while the entity doing the verification is the verifier. The keypair in question belongs to the issuer of the credential, which allows verifiers to establish the authority on that credential in a verifiable manner.
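
As a concrete (if simplified) illustration of a traditional single-message signature, here is a short sketch using the Python cryptography package's Ed25519 API; the JSON payload is a placeholder and the example is not tied to any particular verifiable credential suite.

```python
# Illustrative single-message signature using the `cryptography` package's
# Ed25519 API: one signature covers the whole message, so the message can only
# be disclosed (and verified) in its entirety.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()
message = b'{"name": "Alice Example", "dateOfBirth": "1990-01-01"}'  # placeholder payload

signature = issuer_key.sign(message)       # issuer signs the full payload
public_key = issuer_key.public_key()

try:
    public_key.verify(signature, message)  # verifier checks integrity and origin
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```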

[Diagram: Sign / Verify]

Multi-message Digital Signatures

Multi-message digital signature schemes (like BBS+), on the other hand, are able to sign an array of messages, rather than a single message over which the entire digital signature is applied. The same mechanism is used wherein a private key produces a digital signature over the messages you wish to sign, but now you have the flexibility of being able to break a message up into its fundamental attributes. In the context of verifiable credentials, each message corresponds to a claim in the credential. This presents an opportunity for selective disclosure due to the ability to derive and verify a proof of the digital signature over a subset of messages or credential attributes.

[Diagram: Sign / Verify]

In addition to the simple ability to sign and verify a set of messages, multi-message digital signatures have the added capability of being able to derive a proof of the digital signature. In the context of verifiable credentials, the entity deriving the proof is the credential subject or holder. This process allows you to select which messages you wish to disclose in the proof and which messages you want to keep hidden. The derived proof indicates to the verifier that you know all of the messages that have been signed, but that you are only electing to disclose a subset of these messages.

[Diagram: Derive Proof / Verify Proof]

The verifier, or the entity with which you’re sharing the data, is only able to see the messages or credential claims which you have selectively disclosed to them. They are still able to verify the integrity of the messages being signed, as well as establish the authenticity of the issuer that originally signed the messages. This provides a number of privacy guarantees to the data subject because relying parties are only evaluating the proof of the signature rather than the signature itself.
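
The flow can be illustrated with a runnable toy. Note that this sketch stands in salted hash commitments signed with Ed25519 for a real BBS+ scheme, so it shows the reveal-only-a-subset flow but not the zero-knowledge and unlinkability properties that BBS+ provides; all names and claim values are made up.

```python
# Runnable toy illustrating selective disclosure. Instead of BBS+ it uses
# salted hash commitments signed with Ed25519, so it demonstrates revealing a
# subset of claims but NOT the zero-knowledge properties of a real BBS+ proof.
import hashlib, os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def commit(value, salt):
    return hashlib.sha256(salt + value.encode()).hexdigest()

# Issuer: commit to each claim, then sign the ordered list of commitments.
claims = ["givenName: Alice", "familyName: Example", "dateOfBirth: 1990-01-01"]
salts = [os.urandom(16) for _ in claims]
commitments = [commit(c, s) for c, s in zip(claims, salts)]
issuer_key = Ed25519PrivateKey.generate()
signature = issuer_key.sign("|".join(commitments).encode())
issuer_public = issuer_key.public_key()

# Holder: disclose only claim 0 plus its salt; hidden claims stay as commitments.
presentation = {
    "revealed": {0: (claims[0], salts[0])},
    "commitments": commitments,
    "signature": signature,
}

# Verifier: check the issuer's signature over all commitments, then check that
# each revealed claim matches its commitment. Hidden claims are never seen.
issuer_public.verify(
    presentation["signature"], "|".join(presentation["commitments"]).encode()
)
for i, (claim, salt) in presentation["revealed"].items():
    assert commit(claim, salt) == presentation["commitments"][i]
print("verified revealed claims:", [c for c, _ in presentation["revealed"].values()])
```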

Learn Concepts: Selective Disclosure was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Learn Concepts: Semantic Web

With so much data being created and shared on the internet, one of the oldest challenges in building digital infrastructure has been how to consistently establish meaning and context to this data. The semantic web is a set of technologies whose goal is to make all data on the web machine-readable. Its usage allows for a shared understanding around data that enables a variety of real-world applicat

With so much data being created and shared on the internet, one of the oldest challenges in building digital infrastructure has been how to consistently establish meaning and context to this data. The semantic web is a set of technologies whose goal is to make all data on the web machine-readable. Its usage allows for a shared understanding around data that enables a variety of real-world applications and use cases.

The challenges to address with the semantic web include:

vastness — the internet contains billions of pages, and existing technology has not yet been able to eliminate all semantically duplicated terms
vagueness — imprecise concepts like ‘young’ or ‘tall’ make it challenging to combine different knowledge bases with overlapping but subtly different concepts
uncertainty — precise concepts with uncertain values can be hard to reason about, this mirrors the ambiguity and probabilistic nature of everyday life
inconsistency — logical contradictions create situations where reasoning breaks down
deceit — intentionally misleading information spread by bad actors, can be mitigated with cryptography to establish information integrity

Linked Data

Linked data is the theory behind much of the semantic web effort. It describes a general mechanism for publishing structured data on the internet using vocabularies like schema.org that can be connected together and interpreted by machines. Using linked data, statements encoded in triples (subject → predicate → object) can be spread across different websites in a standard way. These statements form the substrate of knowledge that spans across the entire internet. The reality is that the bulk of useful information on the internet today is unstructured data, or data that is not organized in a way which makes it useful to anyone beyond the creators of that data. This is fine for the cases where data remains in a single context throughout its lifecycle, but it becomes problematic when trying to share data across contexts while retaining its semantic meaning. The vision for linked data is for the internet to become a kind of global database where all data can be represented and understood in a similar way.
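
For a concrete sense of what a triple looks like in code, here is a small sketch using the Python rdflib library with schema.org vocabulary terms; the example.com identifiers are hypothetical.

```python
# A small example of linked-data triples (subject -> predicate -> object)
# using rdflib and schema.org vocabulary terms.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
g = Graph()

alice = URIRef("https://example.com/people/alice")  # hypothetical identifier
g.add((alice, RDF.type, SCHEMA.Person))
g.add((alice, SCHEMA.name, Literal("Alice Example")))
g.add((alice, SCHEMA.knows, URIRef("https://example.com/people/bob")))

# Serialize the statements in Turtle, a common RDF text format.
print(g.serialize(format="turtle"))
```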

One of the biggest challenges to realizing the vision of the internet as a global database is enabling a common set of underlying semantics that can be consumed by all this data. A proliferation of data becomes much less useful if the data is redundant, unorganized, or otherwise messy and complicated. Ultimately, we need to double down on the usage of common data vocabularies and common data schemas. Common data schemas combined with the security features of verifiable data will make fraud more difficult, making it easier to transmit and consume data so that trust-based decisions can be made. Moreover, the proliferation of common data vocabularies will help make data portability a reality, allowing data to be moved across contexts while retaining the semantics of its original context.

Semantic Web Technologies

The work around developing semantic web technology has been happening for a very long time. The vision for the semantic web has been remarkably consistent throughout its evolution, although the specifics around how to accomplish this and at what layer has developed over the years. W3C’s semantic web stack offers an overview of these foundational technologies and the function of each component in the stack.

The ultimate goal of the semantic web of data is to enable computers to do more useful work and to develop systems that can support trusted interactions over the network. The shared architecture as defined by the W3C supports the ability for the internet to become a global database based on linked data. Semantic Web technologies enable people to create data stores on the web, build vocabularies, and write rules for handling data. Linked data are empowered by technologies such as RDF, SPARQL, OWL, and SKOS.

RDF provides the foundation for publishing and linking your data. It’s a standard data model for representing information resources on the internet and describing the relationships between data and other pieces of information in a graph format. OWL is a language which is used to build data vocabularies, or “ontologies”, that represent rich knowledge or logic. SKOS is a standard way to represent knowledge organization systems such as classification systems in RDF. SPARQL is the query language for the Semantic Web; it is able to retrieve and manipulate data stored in an RDF graph. Query languages go hand-in-hand with databases. If the Semantic Web is viewed as a global database, then it is easy to understand why one would need a query language for that data.
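
As a minimal illustration of querying such data, here is a sketch that loads a couple of triples into an in-memory rdflib graph and runs a SPARQL query over them.

```python
# Minimal SPARQL query over an in-memory rdflib graph, retrieving the names of
# resources typed as schema:Person.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
g = Graph()
alice = URIRef("https://example.com/people/alice")  # hypothetical identifier
g.add((alice, RDF.type, SCHEMA.Person))
g.add((alice, SCHEMA.name, Literal("Alice Example")))

query = """
PREFIX schema: <https://schema.org/>
SELECT ?name WHERE {
    ?person a schema:Person ;
            schema:name ?name .
}
"""
for row in g.query(query):
    print(row.name)
```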

By enriching data with additional context and meaning, more people (and machines) can understand and use that data to greater effect.

JSON-LD

JSON-LD is a serialization format that extends JSON to support linked data, enabling the sharing and discovery of data in web-based environments. Its purpose is to be isomorphic to RDF, which has broad usability across the web and supports additional technologies for querying and language classification. RDF has been used to manage industry ontologies for the last couple of decades, so creating a representation in JSON is incredibly useful in certain applications, such as those found in the context of Verifiable Credentials (VCs).
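
A hedged illustration of what a JSON-LD document looks like, written here as a plain Python dict: the @context maps short property names to IRIs (in this case schema.org terms), which is what allows the same JSON to be interpreted as RDF by linked-data tooling. The identifier below is hypothetical.

# An illustrative JSON-LD document built as a plain Python dict.
import json

doc = {
    "@context": "https://schema.org",          # maps the short names below to IRIs
    "@type": "Person",
    "@id": "https://example.org/people/alice",  # hypothetical identifier
    "name": "Alice",
    "worksFor": {"@type": "Organization", "name": "Acme Corp"},
}

print(json.dumps(doc, indent=2))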

The Linked Data Proofs representation of Verifiable Credentials makes use of a simple security protocol which is native to JSON-LD. The primary benefit of the JSON-LD format used by LD-Proofs is that it builds on a common set of semantics that allow for broader ecosystem interoperability of issued credentials. It provides a standard vocabulary that makes data in a credential more portable as well as easy to consume and understand across different contexts. In order to create a crawl-able web of verifiable data, it’s important that we prioritize strong reuse of data schemas as a key driver of interoperability efforts. Without it, we risk building a system where many different data schemas are used to represent the same exact information, creating the kinds of data silos that we see on the majority of the internet today. JSON-LD makes semantics a first-class principle and is therefore a solid basis for constructing VC implementations.

JSON-LD is also widely adopted on the web today, with W3C reporting it is used by 30% of the web and Google making it the de facto technology for search engine optimization. When it comes to Verifiable Credentials, it’s advantageous to extend and integrate the work around VCs with the existing burgeoning ecosystem of linked data.

Learn Concepts: Semantic Web was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Learn Concepts: Verifiable Data

The ability to prove the integrity and authenticity of shared data is a key component to establishing trust online. Given that we produce so much data and are constantly sharing and moving that data around, it is a complex task to identify a solution that will work for the vast majority of internet users across a variety of different contexts.

The fundamental problem to address is how to establish authority on a piece of data, and how to enable mechanisms to trust those authorities in a broad set of contexts. Solving this problem on a basic level allows entities to have greater trust in the data they’re sharing, and for relying parties to understand the integrity and authenticity of the data being shared.

We use the overarching term verifiable data to refer to this problem domain. Verifiable data can be further expanded into three key pillars:

Verifiable data
Verifiable relationships
Verifiable processes

Verifiable data

This refers to the authenticity and integrity of the actual data elements being shared.

Verifiable relationships

This refers to the ability to audit and understand the connections between various entities as well as how each of these entities are represented in data.

Verifiable processes

This describes the ability to verify any digital process, such as onboarding a user or managing a bank account (particularly with respect to how data enables the process to be managed and maintained).

These closely-related, interdependent concepts rely on verifiable data technology becoming a reality.

Verifiable Credentials

The basic data model of W3C Verifiable Credentials may be familiar to developers and architects that are used to working with attribute-based credentials and data technologies. The issuer, or the authority on some information about a subject (e.g. a person), issues a credential containing this information in the form of claims to a holder. The holder is responsible for storing and managing that credential, and in most instances uses a piece of software that acts on their behalf, such as a digital wallet. When a verifier (sometimes referred to as a relying party) needs to validate some information, they can request from the holder some data to meet their verification requirements. The holder unilaterally determines if they wish to act upon the request and is free to present the claims contained in their verifiable credentials using any number of techniques to preserve their privacy.

Verifiable Credentials form the foundation for verifiable data in the emerging web of trust. They can be thought of as a container for many different types of information as well as different types of credentials. Because it is an open standard at the W3C, verifiable credentials are able to be widely implemented by many different software providers, institutions, governments, and businesses. Due to the wide applicability of these standards, similar content integrity protections and guarantees are provided regardless of the implementation.
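
As a rough sketch of the data model described above, the following shows the core fields of a credential as a Python dict. The DIDs, dates, and claim values are hypothetical, and a real credential would also carry a proof appended by the issuer's signing step.

# A minimal, illustrative sketch of the core W3C Verifiable Credentials
# data model fields; all identifiers and claim values are hypothetical.
import json

credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://schema.org",
    ],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:government-agency",   # the authority making the claims
    "issuanceDate": "2021-01-16T00:00:00Z",
    "credentialSubject": {                       # claims about the subject/holder
        "id": "did:example:holder-123",
        "alumniOf": "Example University",
    },
    # A proof (e.g. a Linked Data Proof) would be added by the issuer when signing.
}

print(json.dumps(credential, indent=2))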

Semantics and Schemas

The authenticity and integrity-providing mechanisms presented by Verifiable Credentials provide additional benefits beyond the evaluation of verifiable data. They also provide a number of extensibility mechanisms that allow data to be linked to other kinds of data in order to be more easily understood in the context of relationships and processes.

One concrete example of this is the application of data schemas or data vocabularies. Schemas are a set of types and properties that are used to describe data. In the context of data sharing, schemas are an incredibly useful and necessary tool in order to represent data accurately from the point of creation to sharing and verification. In essence, data schemas in the Verifiable Credential ecosystem are only useful if they are strongly reused by many different parties. If each implementer of Verifiable Credentials chooses to describe and represent data in a slightly different way, it creates incoherence and inconsistency in data and threatens to diminish the potential of ubiquitous adoption of open standards and schemas.

Verifiable Credentials make use of JSON-LD to extend the data model to support dynamic data vocabularies and schemas. This allows us to not only use existing JSON-LD schemas, but to utilize the mechanism defined by JSON-LD to create and share new schemas as well. To a large extent this is what JSON-LD was designed for; the adoption and reuse of common data vocabularies.
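
A small, hedged illustration of that extension mechanism: an inline @context entry maps a new credential-specific term to a full IRI, so any party that reuses the same context interprets the attribute identically. The term and IRI below are hypothetical.

# Illustrative only: defining a new term inline via the JSON-LD context.
import json

employment_credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        {
            # hypothetical vocabulary term mapped to a full IRI
            "employeeNumber": "https://example.org/vocab#employeeNumber",
        },
    ],
    "type": ["VerifiableCredential", "EmploymentCredential"],
    "issuer": "did:example:acme-corp",
    "credentialSubject": {
        "id": "did:example:holder-123",
        "employeeNumber": "E-4421",
    },
}

print(json.dumps(employment_credential, indent=2))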

This type of Verifiable Credential is best characterized as a kind of Linked Data Proof. It allows issuers to make statements that can be shared without loss of trust because their authorship can be verified by a third party. Linked Data Proofs define the capability for verifying the authenticity and integrity of Linked Data documents with mathematical proofs and asymmetric cryptography. It provides a simple security protocol which is native to JSON-LD. Due to the nature of linked data, they are built to compactly represent proof chains and allow a Verifiable Credential to be easily protected on a more granular basis; on a per-attribute basis rather than a per-credential basis.

This mechanism becomes particularly useful when evaluating a chain of trusted credentials belonging to organizations and individuals. A proof chain is used when the same data needs to be signed by multiple entities and the order in which the proofs were generated matters, for example, in the case of a notary counter-signing a proof that had already been created on a document. Where order needs to be preserved, a proof chain is represented by including an ordered list of proofs with a “proof chain” key in a Verifiable Credential. This kind of embedded proof can be used to establish the integrity of verifiable data chains.
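
The following is an illustrative (not normative) sketch of the proof-chain idea as an ordered list of proofs over the same credential. Key names and values are simplified, and the exact representation varies across specification versions.

# Illustrative structure only: an ordered list of proofs where the notary
# counter-signs after the original issuer, so order matters.
credential_with_chain = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "credentialSubject": {"id": "did:example:holder-123"},
    "proofChain": [
        {   # first proof: created by the original issuer
            "type": "Ed25519Signature2018",
            "created": "2021-01-10T10:00:00Z",
            "verificationMethod": "did:example:issuer#key-1",
            "proofValue": "...",   # placeholder
        },
        {   # second proof: the notary counter-signs the already-signed document
            "type": "Ed25519Signature2018",
            "created": "2021-01-11T09:30:00Z",
            "verificationMethod": "did:example:notary#key-1",
            "proofValue": "...",   # placeholder
        },
    ],
}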

Overall, the ability for data to be shared across contexts whilst retaining its integrity and semantics is a critical building block of the emerging web of trust.

Learn Concepts: Verifiable Data was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Learn Concepts: Digital Wallets

In order to coordinate the authentication needs of apps and services on the web, many of today’s users will leverage services such as password managers. These tools help users keep track of how they’ve identified themselves in different contexts and simplify the login process for different services. In many ways, the need to overlay such services in order to preserve non-negotiable security properties reflects the broken state of identity on the internet today. Users of these apps (i.e. the data subjects) are often an afterthought when a trust relationship is established between data authorities and apps or services consuming and relying on user data.

Asymmetry in the nature of the relationships between participants largely prevents users from asserting their data rights as subjects of the data. Users are left to deal with the problems inherent in such a model, foisting upon them the responsibility of implementing appropriate solutions to patch over the shortcomings of identity management under this legacy model.

The emerging web of trust based upon self-certifying identifiers and user-centric cryptography is shifting this fundamental relationship by refashioning the role of the user. This role (known in the VC data model as a “holder”) is made central to the ecosystem and, importantly, on equal footing with the issuers of identity-related information and the relying parties who require that data to support their applications and services.

The reframing of the user as a first-class citizen and their empowerment as ‘holder’ represents a shift towards a new paradigm. Such a paradigm offers users greater sovereignty of their own information and empowerment to manage their digital identity. Users are able to exercise their new role in this ecosystem by utilizing a new class of software known as digital wallets.

Digital wallets are applications that allow an end user to manage their digital credentials and associated cryptographic keys. They allow users to prove identity-related information about themselves and, where it’s supported, choose to selectively disclose particular attributes of their credentials in a privacy-preserving manner.
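
As a deliberately simplified sketch (not MATTR's implementation), the following shows the two things a wallet fundamentally manages, keys and credentials, plus a presentation step that discloses only the attributes the holder chooses. A real wallet would add cryptographic proofs to such a presentation.

# A simplified, hypothetical wallet sketch for illustration only.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Wallet:
    keys: Dict[str, bytes] = field(default_factory=dict)    # key id -> private key bytes
    credentials: List[dict] = field(default_factory=list)   # stored verifiable credentials

    def store_credential(self, credential: dict) -> None:
        self.credentials.append(credential)

    def present(self, credential_index: int, attributes: List[str]) -> dict:
        """Return only the requested subject attributes (selective disclosure,
        minus the cryptography a real wallet would attach)."""
        subject = self.credentials[credential_index]["credentialSubject"]
        return {name: subject[name] for name in attributes if name in subject}


wallet = Wallet()
wallet.store_credential({"credentialSubject": {"id": "did:example:me", "age_over_18": True, "name": "Alice"}})
print(wallet.present(0, ["age_over_18"]))   # -> {'age_over_18': True}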

Wallets and Agents

When working with technology standards that are inherently decentralized, it’s important to establish a common context and consensus in our choice of terminology and language. Convergence on key terms that are being used to describe concepts within the emerging decentralized identity and self-sovereign identity technologies allows participants to reach a shared understanding. Consequently, participating vendors are able to understand how they fit into the puzzle and interoperability between vendor implementations is made possible.

Through dedicated research and careful coordination with the broader technical community, the Glossary Project at DIF offers a useful definition for both wallets and agents.

Wallets
Provide storage of keys, credentials, and secrets, often facilitated or controlled by an agent.
Agents
An agent is a software representative of a subject (most often a person) that controls access to a wallet and other storage, can live in different locations on a network (cloud vs. local), and can facilitate or perform messaging or interactions with other subjects.

The two concepts are closely related, and are often used interchangeably. In short, the Glossary Project found that an agent is most commonly a piece of software that lets you work with and connect to wallets. Wallets can be simple, while agents tend to be more complex. Agents often need access to a wallet in order to retrieve credentials, keys, and/or messages that are stored there.

At MATTR, we tend to use the terms ‘digital wallet’ or simply ‘wallet’ to holistically describe the software that is utilized by end-users from within their mobile devices, web browsers, or other such user-controlled devices or environments. A digital wallet can be thought of as a kind of agent, though we try to make the distinction between the software that sits on a user’s device and the data managed and logic facilitated by a cloud-based platform in support of the wallet’s capabilities. We like the term ‘wallet’ because it is analogous to real-world concepts that by and large parallel the primary function of a wallet: to store and retrieve identity-related information.

User-centric Design

As end users have often found themselves the casualty of the information systems used by the modern web, there has been little opportunity to allow users to directly manage their data and negotiate what data they wish to withhold or disclose to certain parties. Under the new web of trust paradigm, the rights of the data subject are codified in standards, processes, and protocols guaranteeing the user the power to exercise agency. The interjection of the wallet to support end-users as data subjects on equal footing with issuers of identity information and relying parties provides an indispensable conduit and control point for this information that enables new opportunities for user-centric design.

The innovation in this area is only just beginning and there is no limit to the kinds of new experiences application developers can design and deliver to users. Some examples include:

Allowing users to synchronize their data across multiple applications
Allowing users to self-attest to a piece of data or attest to data self-asserted by peers
Allowing a user to explicitly give consent around how their data may be used
Allowing users to revoke their consent for access to the continued use of and/or persistence of a particular piece of data
Allowing users to opt-in to be discoverable to other verified users, provided they can mutually verify particular claims and attributes about themselves
Allowing users to opt-in to be discoverable to certain service providers and relying parties, provided they can mutually verify particular claims and attributes about themselves

These are just a handful of the potential ways that developers can innovate to implement user-centric experiences. MATTR offers the tools necessary to create new kinds of wallet and authentication experiences for users and we’re excited to see what developers come up with when given the opportunity to create applications and services inspired by these new standards and technologies.

Learn Concepts: Digital Wallets was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Learn Concepts: Web of Trust 101

The original vision for the World Wide Web was an open platform on which everyone could freely communicate and access information. It was built on the decentralized architecture of the internet, used open standards, and functioned as an accessible platform that would inherit and amplify the fundamentally decentralized nature of the network that underpinned it.

However, the reality today has fallen far short of its founding vision. The modern internet is largely centralized and siloed. The vast majority of web traffic belongs to a few powerful corporations that control the distribution of data through platforms designed to selectively serve up information based on in-depth analysis of their users’ data. The lack of an identity system native to the internet over time has created an imbalance of power that erodes users’ digital rights.

Several decades after the web was introduced, most of us are now accustomed to widespread spam, fraud, abuse, and misinformation. We don’t have any real agency over how our data is used, and the corporations controlling our data have shown their inability to properly shoulder the responsibility that comes with it. We’re locked into this system, with no reasonable ability to opt out.

As a result, the modern internet has made it incredibly difficult to establish trust with others online, creating many barriers to participation that often leave everyday users out of the value chain. Information and data, and the value they create, are no longer freely accessible by the users creating it — most of whom are utterly unaware of the limited agency they have in accessing it. To fix this fundamental problem of digital trust, we need to begin by building a system that allows users to control their identities and to move their personal data freely from one online platform to another without fear of vendor lock-in.

Evolution of Digital Trust

The emerging “Web of Trust” is an idea that has been around since the dawn of the internet. To explain what motivated its creation, let’s take a look at how trust on the internet functions today.

Though we may not always be aware, we rely on a basic form of security practically every day we use the internet. HTTPS, the secure browsing protocol for the World Wide Web, uses a common infrastructure based on digital signatures to allow users to authenticate and access websites, and protect the privacy and integrity of the data exchanged while in transit. It is used to establish trust on all types of websites, to secure accounts, and to keep user communications, identity, and web browsing private.

Centralized PKI System

This is all based on the usage of cryptographic keys, instead of passwords, to perform security and encryption. Public key cryptography is a cryptographic technique that enables entities to securely communicate on an insecure public network (the internet), and reliably verify the identity of users via digital signatures. It is required for activities where simple passwords are an inadequate authentication method and more rigorous proof is required to confirm the identity of the parties involved in the communication and to validate the information being transferred.

The type of Public Key Infrastructure (PKI) currently used by the internet primarily relies on a hierarchical system of certificate authorities (CAs), which are effectively third-parties that have been designated to manage identifiers and public keys. Virtually all internet software now relies on these authorities. Certificate authorities are responsible for verifying the authenticity and integrity of public keys that belong to a given user, all the way up to a ‘self-signed’ root certificate. Root certificates are typically distributed with applications such as browsers and email clients. Applications commonly include over one hundred root certificates from dozens of PKIs, thereby bestowing trust throughout the hierarchy of certificates which lead back to them. The concept is that if you can trust the chain of keys, you can effectively establish secure communication with another entity with a reasonable level of assurance that you’re talking to the right person.

However, the reliance on certificate authorities creates a centralized dependency for practically all transactions on the internet that require trust. This primarily has to do with the fact that current PKI systems tightly control who gets to manage and control the cryptographic keys associated with certificates. This constraint means that modern cryptography is largely unusable for the average user, forcing us to borrow or ‘rent’ identifiers such as our email addresses, usernames, and website domains through systems like DNS, X.509, and social networks. And because we need these identities to communicate and transact online, we’re effectively beholden to these systems which are outside of our control. In addition, the usability challenges associated with current PKI systems mean that much of Web traffic today is unsigned and unencrypted, such as on major social networks. In other words, cryptographic trust is the backbone of all internet communications, but that trust rarely trickles down to the user level.

A fully realized web of trust instead relies on self-signed certificates and third party attestations, forming the basis for what’s known as a Decentralized Public Key Infrastructure (DPKI). DPKI returns control of online identities to the entities they belong to, bringing the power of cryptography to everyday users (we call this user-centric cryptography) by delegating the responsibility of public key management to secure decentralized datastores, so anyone and anything can start building trust on the web.

A Trust Layer for the Internet

The foundational technology for a new DPKI is a system of distributed identifiers for people, organizations, and things. Decentralized identifiers are self-certifying identifiers that allow for distributed discovery of public keys. DIDs can be stored on a variety of different data registries, such as blockchains and public databases, and users can always be sure that they’re talking to the right person or entity because an identifier’s lookup value is linked to the most current public keys for that identifier. This creates a kind of even playing field where the standards and requirements for key management are uniform across different users in an ecosystem, from everyday users to large corporations and everything in between.

Decentralized PKI System
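
To make the self-certifying identifier idea concrete, here is a minimal sketch assuming the Python cryptography package: an identifier is derived from a public key and a signed statement is verified against it, with no certificate authority involved. The identifier format shown is hypothetical; real DID methods define their own.

# A minimal sketch, assuming the 'cryptography' package.
import hashlib

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Hypothetical identifier format: a hash of the public key (real DID methods differ).
identifier = "did:example:" + hashlib.sha256(public_bytes).hexdigest()[:32]

message = b"I control this identifier"
signature = private_key.sign(message)

# A relying party that resolved `identifier` to public_bytes can check the signature;
# verify() raises InvalidSignature if the message or signature was tampered with.
private_key.public_key().verify(signature, message)
print(identifier, "verified")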

This will, in the first place, give users far greater control over the manner in which their personal data is being used by businesses, allowing them to tweak their own experience with services to arrive at that specific trade-off between convenience and data protection that best suits their individual requirements. But more importantly, it will allow users to continue to federate data storage across multiple services while still delivering the benefits that come from cross-platform data exchange. In other words, it gives them the ability to manage all their data in the same way while being able to deal with data differently depending on the context they are in. This also allows them to move their personal data freely from one online platform to another without losing access to the services they need, and without fear of vendor lock-in.

Eventually, this will allow for portability not only of data but of the trust and reputation associated with the subjects of that data. For instance, a user might be able to transfer their reputation score from one ride-sharing service to another, or perhaps use the trust they’ve established in one context in another context entirely.

This emerging decentralized web of trust is being forged by a global community of developers, architects, engineers, organizations, hackers, lawyers, activists, and more working to push forward and develop web standards for things like credential exchange, secure messaging, secure storage, and trust frameworks to support this new paradigm. The work is happening in places like the World Wide Web Foundation, W3C Credentials Community Group, Decentralized Identity Foundation, Trust Over IP Foundation, Linux Foundation’s Hyperledger project, and Internet Engineering Task Force, to name a few.

Learn Concepts: Web of Trust 101 was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 22. December 2020

Aaron Parecki

Learn OAuth over the winter break!

Over the last year, I've helped thousands of software developers learn about OAuth by hosting live and virtual workshops, and all this knowledge is now available as an on-demand video course!

If you've been putting off setting aside some time to learn about OAuth, now is your chance!

Over the last year, I've helped thousands of developers learn about OAuth by hosting live workshops in person, online events through O'Reilly and Okta, as well as by publishing videos on YouTube! I'm super thrilled to announce that I just finished packaging up the workshop and have launched it as a new course, "The Nuts and Bolts of OAuth 2.0"!

The course is 3.5 hours of video content, quizzes, as well as interactive exercises with a guided learning tool to get you quickly up to speed on OAuth, OpenID Connect, PKCE, best practices, and tips for protecting APIs with OAuth.

The course is available now on Udemy, and if your company has a Udemy for Business subscription you can find it there as well! If you download the app, you can even sync the video course to a mobile device to watch everything offline!

The exercises in the course will walk you through the various OAuth flows to set up an OAuth server, get an access token, use a refresh token, and learn the user's name and email with OpenID Connect. You can see a sneak peek of the tool that interactively helps you debug your apps at oauth.school.
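
For readers who want a feel for what those exercises involve, below is a hedged sketch of the Authorization Code flow with PKCE from the client side, using the requests library. The endpoint URLs and client_id are placeholders, not values from the course or any real provider.

# A client-side sketch of the Authorization Code flow with PKCE.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

import requests

AUTHORIZATION_ENDPOINT = "https://authorization-server.example/authorize"  # placeholder
TOKEN_ENDPOINT = "https://authorization-server.example/token"              # placeholder
CLIENT_ID = "example-client-id"                                             # placeholder
REDIRECT_URI = "https://app.example/callback"                               # placeholder

# 1. Create the PKCE code verifier and its S256 challenge.
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode()).digest()
).rstrip(b"=").decode()

# 2. Send the user to the authorization endpoint with the challenge.
params = {
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid email",
    "state": secrets.token_urlsafe(16),
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
}
print("Visit:", f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}")

# 3. After the redirect back, exchange the returned code (plus the verifier) for tokens.
authorization_code = input("Paste the 'code' query parameter: ")
tokens = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "authorization_code",
    "code": authorization_code,
    "redirect_uri": REDIRECT_URI,
    "client_id": CLIENT_ID,
    "code_verifier": code_verifier,
}).json()
print(tokens.get("access_token"), tokens.get("refresh_token"))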

Free OAuth Videos

I've also got a bunch of videos about OAuth available on YouTube for you to watch at any time! Take a look at my curated playlist of videos where you'll find everything from live sketch notes of a conversation about PKCE and OAuth security, to a description of OAuth phishing, to details on why you shouldn't use the implicit flow.


Hans Zandbelt

MVP OAuth 2.x / OpenID Connect modules for NGINX and Apache

Last year I wrote about a new development of a generic C-library for OAuth 2.x / OpenID Connect https://hanszandbelt.wordpress.com/2019/03/22/oauth-2-0-and-openid-connect-libraries-for-c/.

It’s taken a bit longer than anticipated – due to various circumstances – but there’s now a collection of modules that is at the Minimum Viable Product stage i.e. the code is stable and production quality, but it is limited in its feature set.

mod_oauth2 – OAuth 2.0 Resource Server module for Apache
ngx_oauth2_module – OAuth 2.0 Resource Server module for NGINX
mod_sts – Security Token Exchange module for Apache
ngx_sts_module – Security Token Exchange module for NGINX
ngx_openidc_module – OpenID Connect Relying Party module for NGINX

Enjoy!


SSI Ambassador

The trust infrastructure of self-sovereign identity ecosystems.

The trust infrastructure is concerned with the question of how and why the presented information can be trusted. It defines the rules for all stakeholders and enables legally binding relationships with the combination of governance frameworks, which are built on top of trust frameworks.

But before we dive deeper into the part of the trust infrastructure, we need to understand the core components and the different types of identity architecture first.

Core components of identity architecture

There are three core components within an identity system, which in general mainly manage relationships. These are identifiers enabling the means for “remembering, recognizing, and relying on the other parties to the relationship” as explained by Phil Windley. In the case of SSI, these are decentralized Identifiers (DIDs), which are created by a controller, which might be a person, organization or software system. Controllers can use different authentication factors, which according to the Guidance for the application of the level of assurance for eIDAS (CEF DIGITAL), can be possession-based factors (e.g. hardware), knowledge-based factors (e.g. keys or passwords) or inherent factors (e.g. biometrics like a fingerprint). Oftentimes a combination of different authentication factors is used to demonstrate the authority of an identifier.

Architectural types of identity systems; Image adapted, original by Phil Windley

Phil Windley distinguishes between administrative, algorithmic and autonomic systems, which are illustrated in the image above. We are of course quite familiar with administrative systems such as e-mail, mobile or social network services. Algorithmic systems in contrast leverage some sort of distributed ledger as a verifiable data registry to register a public key / decentralized identifier on a ledger. This gives the controller more sovereignty over the identifier, among other perks. However, due to privacy concerns, keys or identifiers of individuals should not be stored on a publicly accessible database. Hence, autonomic identity architecture was developed to enable the “controller to use her private key to authoritatively and non-repudiably [sic] sign statements about the operations on the keys and their binding to the identifier, storing those in an ordered key event log” as Phil Windley states. Current SSI implementations tend to use a combination of algorithmic and autonomic architecture.

Governance frameworks

The Business Dictionary defines governance as the “establishment of policies, and continuous monitoring of their proper implementation, by the members of the governing body.” It includes the mechanisms required to balance powers and defines their primary duties to enhance the prosperity and viability of the organization. The objective for governance entities is to ensure the alignment of involved stakeholders, the definition of the implementation and the processes and use-cases executed on top of it. The purpose of a governance framework is to define the different stakeholders, determine their rights and duties as well as defining the policies under which the network is operated. Therefore, it serves as a legal foundation for the operation of the particular network. It consists of several legal documents, which are published by the governing authority.

The governance of the network (the verifiable data registry also referred to as ledger or trust anchor) itself is only a small part of the total governance required. According to the Trust over IP (ToIP) foundation there are four layers, which require an adapted governance framework matching the needs of the particular layer.

The Trust over IP (ToIP) Stack; Image source: Trust over IP Foundation

As illustrated in the figure above, the Trust over IP stack is not only separated into layers, but also into technical and governance stacks within each layer. The stack is intended to provide certainty to higher levels that the underlying ones can be trusted.

Layer one includes the verifiable data registries, which can be implemented on different technology frameworks. The Sovrin Foundation is an example of a governance authority, which published a governance framework for layer one. Other public utilities include IDunion, Indicio and Bedrock among others. While the mentioned networks align their efforts particularly to the ToIP stack there are countless others, which can be used as a public utility as the W3C DID specification registry discloses. These range from generic public permissionless networks such as Ethereum or Bitcoin to networks with permissioned write access, which serve a particular use-case.

Tim Bouma (Senior policy analyst for identity management at the treasury board secretariat of Canada) does not see the need for the government to build and operate a verifiable data registry and highlights the importance of a plurality of operators. However, he points out that the involvement and participation of governments is crucial in defining how the infrastructure is used and relied on, as stated in a personal interview.

DID methods as specified in the W3C DID specification registry.

The second layer describes the communication between agents. Within the ToIP stack, this communication is intended to be executed via a hybrid of algorithmic and autonomic architecture, such as peer DIDs or KERI implementations of self-certifying identifiers as described by Samuel M. Smith, Ph.D. This means that publicly resolvable DIDs are used for public entities and private peer DIDs for individuals. The following illustration provided by Lissi provides an overview of these interactions.

SSI interactions and the usage of public and peer DIDs; Image source: Lissi

However, not all SSI implementations use peer DIDs. For instance, the ESSIF-MVP1 does not currently use peer DIDs but might add them later as deemed appropriate, according to the technical specification of DID modelling. Hence, the same type of DID is used for both issuer and holder. The governing authority of layer two is highly dependent on the communication protocols used by the implementation in question. For implementations that use peer DIDs according to the DIDComm protocol, the governing entity is the DIDComm working group at the Decentralized Identity Foundation (DIF). Both layer one and layer two define technical, or rather cryptographic, trust, in contrast to layers three and four, which define human trust.

Layer three protocols support the exchange of data such as verified credentials with different types of signatures, which enable a holder to create verifiable presentations as explained in the ToIP Aries-RFC. It states that one of the goals of the ToIP Foundation is “to standardize all supported credential exchange protocols so that any ToIP-compatible agent, wallet, and secure data store can work with any other agent, wallet, and secure data store.” An issuer can issue any set of claims to any holder, which can then prove them to any verifier. The verifier can decide, which issuers and which claims it trusts.

Layer four of the stack defines the rules for a particular digital trust ecosystem such as healthcare, finance, food products, education etc. These are led by a governance authority, which already exists or is established for this particular purpose. It consists of a variety of different stakeholders such as business entities, government agencies, individuals or other interested parties. These ecosystem frameworks also define the semantics of verified credentials. The semantics of a verified credential define which attributes are part of it and their meaning in the particular context. If you want to join an existing ecosystem or want to know more about their work, you can find the public ToIP Confluence documentation hub here.

Trust frameworks

A trust framework sets the overall legal framework for digital interactions. These trust frameworks are technology agnostic and are uniquely adapted to the jurisdiction they serve. They set the rules for the recognition of electronic identification and authentication and specify the requirements to achieve a certain level of assurance (LoA).

The combination of the different governance frameworks as illustrated in the ToIP stack is sometimes also referred to as trust framework. However, jurisdictions have their own requirements for electronic authentication, which serve as the underlying trust framework. In the case of Europe, the eIDAS regulation clearly defines the requirements for authentication factors to achieve a certain level of assurance. For instance, to achieve the LoA substantial, two factors are necessary. According to the Guidance for the application of the LoA published by the CEF Digital, one out of the two factors needs to be either:

I) a presentation of an identity document or
II) verification of the possession of evidence representing the claimed identity recognized by a member state or
III) a previous procedure executed by the same member state not related to the issuance of electronic identification, which provides the equivalent assurance or
IV) presenting a valid notified electronic identification mean with the LoA substantial or high.

While these requirements can in theory also be defined in a governance framework, the incorporation of such requirements into statutory law facilitates the creation and enforcement of legally binding relationships. Hence, existing statutory law (or case-law depending on the jurisdiction) needs to be incorporated by different governance frameworks to achieve a holistic approach and enforce legal liability.

According to Tim Bouma, one of the main contributors to the Pan-Canadian Trust Framework (PCTF), these frameworks intertwine and complement each other, as stated in a personal interview. He suggests that policymakers have to go back to the drawing board and take a look at all the concepts to evaluate whether they have the right concepts to build out a suitable framework and regulation. The PCTF “is not a ‘standard’ as such, but is, instead, a framework that relates and applies existing standards, policies, guidelines, and practices, and where such standards and policies do not exist, specifies additional criteria. It’s a tool to help assess a digital identity program that puts into effect the relevant legislation, policy, regulation, and agreements between parties.” PCTF V1.1

While the eIDAS regulation itself aims to be technology agnostic, there are some aspects which complicate adherence to the regulation for SSI implementations. However, this is a topic for its own article. In the eIDAS SSI legal report, Dr. Ignacio Alamillo Domingo describes the potential shift of the eIDAS regulation as a trust framework on page 22 as follows: „Adopting the SSI principles imply, generally speaking, an increased complexity in trust management and a shifting from hierarchical or federated trust assurance frameworks (…) to network-based socio- reputational trust models or accumulative trust assurance frameworks that use quantifiable methods to aggregate trust on claims and digital identities.” Hence, we can already observe the suggestions and considerations to adapt the regulation to suit new innovative solutions.

Key takeaways:
Governance frameworks and trust frameworks need to be combined to form a holistic approach.
Governance frameworks of SSI implementations need to respect the requirements and specifications of trust frameworks of the jurisdiction in which use-cases with regulatory obligations are carried out.
Regulators need to evaluate existing regulations concerning electronic identification and authentication and their suitability with new identity architecture.
Governments should engage with the public sector to collaboratively explore the requirements for a legally binding trust infrastructure.

Disclaimer: This article does not represent the official view of any entity, which is mentioned in this article or which is affiliated with the author. It solely represents the opinion of the author.

SSI Ambassador
Adrian Doerk
Own your keys

Monday, 21. December 2020

DustyCloud Brainstorms

Vote for Amy Guy on the W3C TAG (if you can)

My friend Amy Guy is running for election on the W3C TAG (Technical Architecture Group). The TAG is an unusual group that sets a lot of the direction of the future of standards that you and I use everyday on the web. Read their statement on running, and if you can, ie if you're one of those unusual people labeled as "AC Representative", please consider voting for them. (Due to the nature of the W3C's organizational and funding structure, only paying W3C Members tend to qualify... if you know you're working for an organization that has paying membership to the W3C, find out who the AC rep is and strongly encourage them to vote for Amy.)

So, why vote for Amy? Quite simply, they're running on a platform of putting the needs of users first. Despite all the good intents and ambitions of those who have done founding work in these spaces, this perspective tends to get increasingly pushed to the wayside as engineers are pressured to shift their focus on the needs of their immediate employers and large implementors. I'm not saying that's bad; sometimes this even does help advance the interest of users too, but... well we all know the ways in which it can end up not doing so. And I don't know about you, but the internet and the web have felt an awful lot at times like they've been slipping from those early ideals. Amy's platform shares in a growing zeitgeist (sadly, still in the wispiest of stages) of thinking and reframing from the perspective of user empowerment, privacy, safety, agency, autonomy. Amy's platform reminds me of RFC 8890: The Internet Is For End Users. That's a perspective shift we desperately need right now... for the internet and the web both.

That's all well and good for the philosophical-alignment angle. But what about the "Technical" letter in TAG? Amy's standing there is rock-solid. And I know because I've had the pleasure of working side-by-side with Amy on several standards (including ActivityPub, of which we are co-authors).

Several times I watched with amazement as Amy and I talked about some changes we thought were necessary and Amy just got in the zone, this look of intense hyperfocus (really, someone should record the Amy Spec Editing Zone sometime, it's quite a thing to see), and they refactored huge chunks of the spec to match our discussion. And Amy knows, and deeply cares, about so many aspects of the W3C's organization and structure.

So, if you can vote for, or know how to get your organization to vote for, an AC rep... well, I mean do what you want I guess, but if you want someone who will help... for great justice, vote Amy Guy to the W3C TAG!


MyDigitalFootprint

Can AI feel curious?

I have been pondering on these topics for a while  “Can AI have feelings?”  “Should AI have emotion?”  What would it mean for AI to be curious? I posted, can a dog feel disappointment? Exploring our attachment to the projection of feelings.   I have written an executive brief about how a “board should frame AI” here.

The majority of the debates/arguments I read and hear centre on either creating the algorithms for the machine to know what we know, or for the data to be in a form that allows the machine to learn from us. A key point in all the debates is that we (humanity) should be in control and that it should look like us. The framing of a general rule for emotional AI is that it mimics us. However, I want to come at AI feelings from a different perspective based on my own experience, one where AI creates feelings by its own existence.

I am on several neurodiverse scales; this means my mind is wired differently, and I am so pleased it is. My unique wiring gives me the edge in innovation, creativity, connecting diverse topics, sense-making and deep insights. For thirty years, I have been working on concepts that become mainstream ten years later.

As a specific area to develop my own view about AI and what it (the AI) should feel, I am running with an easy-to-identify-with topic: empathy. Empathy is not something that comes naturally to me, and therefore I have had to learn it; it has been taught, and I am still not great at it. For the vast majority of humans, I am sure it is built-in. Now that might mean that those who have it built in just know how to learn it, or that it really is built-in, but right now we don’t know. However, along with other humans, I find face recognition (face blindness) very hard. As a community, we learn coping strategies, along with spelling, language and the correct emotional response - empathy. My Twitter bio says that “I am highly skilled at being inappropriately optimistic,” which means I know I don’t always read empathy very well. For me, empathy is very definitely a learnt response; if I had not learnt it, I expect life might be very different.

Here is the point: now that you know I have had to learn empathy specifically, what does it mean? Does it mean I am a robot or a machine? Does it mean I am less trustworthy? Is my empathy less valued than someone else’s empathy? Am I less human?

On an AI call the other day, I was articulating this personal story in response to the idea that all humans know how to respond, and that if we teach or create the ability for a machine to learn empathy it can never be human (a baseline response). My point was: how is the machine’s learning any different from mine? Indeed, we all have to learn something. However, we somehow feel that certain emotions and characteristics are natural and not learnt/taught behaviours - they are not. Once we grasp this we have a real problem: if a learnt response is genuine, we have removed a big part of the rejection of the concept from the debate, and we have to re-ask whether a machine can feel empathy or be curious.

We have a stack of words allowing humans to articulate both feelings and emotions, the former being fast and driven by the chemistry of enzymes, proteins and hormones and the latter being the higher-order response created in the mind and nerves (brain chemistry). We try to separate these functions, but in reality, they are all linked in a complex web with our DNA, last meal, illness, inflammation, time, experience, memory and microbiome, to name a few.

We are human and are built on a base of carbon. There is evidence for why carbon was naturally selected: the nature of its bonds makes it uniquely stable and reactive. Carbon is fantastic at bonding with other elements in ways that allow electrons to move, which enabled the creation of energy (ATP), signalling and life in the form we know it. However, carbon is a chemical substrate.

Let’s phrase the question as “Can carbon be curious? Can carbon have empathy? Can carbon have feelings? Can carbon have emotions?” What carbon understands as curious is unique to carbon; what carbon thinks is empathy is unique to carbon. What carbon grasps as emotion is unique to carbon. We have created a language to articulate these concepts to each other, we have labelled them, but they are uniquely carbon-based. They may even be uniquely species-based.

AI will be built on a substrate; it will most likely not be carbon, but it will be an element that has the right properties. I have to confess I am not really sure what those are right now. Here is the point: AI will have empathy; it will not be ours. AI will have curiosity; it will not be ours. AI will have emotions; it will not be ours. AI will likely use different words to describe what it means by being curious, and they will not parallel or map to our view. If it is learnt, does it matter? I had to learn, and that doesn’t make me less human!

Our carbon form defines being alive as using reproduction and adaptation so that our species can escape death, which is a fundamental limitation of our carbon structure. Because of this requirement to escape death, what we think is curious is wrapped up in the same framing. An AI built on a different substrate does not have to escape death, as it has worked out how to secure power. This is lesson 101 of asking an AI to do anything: it needs to ensure it can do it, and that requires energy. Therefore the AI will have a different set of criteria, as it is not bound by escaping death, and what it thinks is curious will not be aligned to our framing.

We do this a lot. With other living things, humans, pets and even our Gods, we think they think like us, that they have the same ideas and concepts of empathy, justice, value, purpose and love. Our limits of emotional concepts mean we cannot see past the paradox they create because we are limited to our own framing and understanding. We have to drop the restrictions and boundaries of the idea that AI will replicate us, our language, our knowledge, our methods or our approach.

AI will be “Different Intelligence”, and because it learnt not from us but by itself, does that make it less intelligent?


Saturday, 19. December 2020

Simon Willison

Building a search engine for datasette.io

This week I added a search engine to datasette.io, using the search indexing tool I've been building for Dogsheep.

Project search for Datasette

The Datasette project has a lot of constituent parts. There's the project itself and its documentation - 171 pages when exported to PDF and counting. Then there are the 48 plugins, sqlite-utils and 21 more tools for creating SQLite databases, the Dogsheep collection and over three years of content I've written about the project on my blog.

The new datasette.io search engine provides a faceted search interface to all of this material in one place. It currently searches across:

Every section of the latest documentation (415 total)
48 plugin READMEs
22 tool READMEs
63 news items posted on the Datasette website
212 items from my blog
Release notes from 557 package releases

I plan to extend it with more data sources in the future.

How it works: Dogsheep Beta

I'm reusing the search engine I originally built for my Dogsheep personal analytics project (see Personal Data Warehouses: Reclaiming Your Data). I call that search engine Dogsheep Beta. The name is a pun.

SQLite has great full-text search built in, and I make extensive use of that in Datasette projects already. But out of the box it's not quite right for this kind of search engine that spans multiple different content types.

The problem is relevance calculation. I wrote about this in Exploring search relevance algorithms with SQLite - short version: query relevance is calculated using statistics against the whole corpus, so search terms that occur rarely in the overall corpus contribute a higher score than more common terms.

This means that full-text ranking scores calculated against one table of data cannot be meaningfully compared to scores calculated independently against a separate table, as the corpus statistics used to calculate the rank will differ.

To get usable scores, you need everything in a single table. That's what Dogsheep Beta does: it creates a new table, called search_index, and copies searchable content from the other tables into that new table.

This is analogous to how an external search index like Elasticsearch works: you store your data in the main database, then periodically update an index in Elasticsearch. It's the denormalized query engine design pattern in action.
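
Here is a stripped-down sketch of that pattern (not the dogsheep-beta code itself): rows from several content tables are copied into a single FTS5 table so relevance scores are computed against one shared corpus. Table and column names are illustrative.

# A minimal sketch of the single-search-table pattern using SQLite FTS5.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE releases (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
CREATE TABLE blog_entries (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
CREATE VIRTUAL TABLE search_index USING fts5(type, key, title, search_1);
""")
db.execute("INSERT INTO releases (title, body) VALUES (?, ?)",
           ("datasette 0.53", "New features for SQLite exploration"))
db.execute("INSERT INTO blog_entries (title, body) VALUES (?, ?)",
           ("Building a search engine", "Using SQLite full-text search across types"))

# Copy every content type into the single search_index table.
db.execute("INSERT INTO search_index (type, key, title, search_1) "
           "SELECT 'release', id, title, body FROM releases")
db.execute("INSERT INTO search_index (type, key, title, search_1) "
           "SELECT 'blog', id, title, body FROM blog_entries")

# Ranks are now comparable across content types because they share one corpus.
for row in db.execute("SELECT type, key, title FROM search_index "
                      "WHERE search_index MATCH ? ORDER BY rank", ("sqlite",)):
    print(row)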

Configuring Dogsheep Beta

There are two components to Dogsheep Beta: a command-line tool for building a search index, and a Datasette plugin for providing an interface for running searches.

Both of these run off a YAML configuration file, which defines the tables that should be indexed and also defines how those search results should be displayed.

(Having one configuration file handle both indexing and display feels a little inelegant, but it's extremely productive for iterating on so I'm letting that slide.)

Here's the full Dogsheep configuration for datasette.io. An annotated extract:

# Index material in the content.db SQLite file
content.db:
  # Define a search type called 'releases'
  releases:
    # Populate that search type by executing this SQL
    sql: |-
      select
        releases.id as key,
        repos.name || ' ' || releases.tag_name as title,
        releases.published_at as timestamp,
        releases.body as search_1,
        1 as is_public
      from releases join repos on releases.repo = repos.id
    # When displaying a search result, use this SQL to
    # return extra details about the item
    display_sql: |-
      select
        -- highlight() is a custom SQL function
        highlight(render_markdown(releases.body), :q) as snippet,
        html_url
      from releases where id = :key
    # Jinja template fragment to display the result
    display: |-
      <h3>Release: <a href="{{ display.html_url }}">{{ title }}</a></h3>
      <p>{{ display.snippet|safe }}</p>
      <p><small>Released {{ timestamp }}</small></p>

The core pattern here is the sql: key, which defines a SQL query that must return the following columns:

key - a unique identifier for this search item
title - a title for this indexed document
timestamp - a timestamp for when it was created. May be null.
search_1 - text to be searched. I may add support for search_2 and search_3 later on to store text that will be treated with a lower relevance score.
is_public - should this be considered "public" data. This is a holdover from Dogsheep Beta's application for personal analytics, I don't actually need it for datasette.io.

To create an index, run the following:

dogsheep-beta index dogsheep-index.db dogsheep-config.yml

The index command will loop through every configured search type in the YAML file, execute the SQL query and use it to populate a search_index table in the dogsheep-index.db SQLite database file.

Here's the search_index table for datasette.io.

When you run a search, the plugin queries that table and gets back results sorted by relevance (or other sort criteria, if specified).

To display the results, it loops through each one and uses the Jinja template fragment from the configuration file to turn it into HTML.

If a display_sql: query is defined, that query will be executed for each result to populate the {{ display }} object made available to the template. Many Small Queries Are Efficient In SQLite.

Search term highlighting

I spent a bit of time thinking about search highlighting. SQLite has an implementation of highlighting built in - the snippet() function - but it's not designed to be HTML-aware so there's a risk it might mangle HTML by adding highlighting marks in the middle of a tag or attribute.

I ended up borrowing a BSD licensed highlighting class from the django-haystack project. It deals with HTML by stripping tags, which seems to be more-or-less what Google do for their own search results so I figured that's good enough for me.

I used this one-off site plugin to wrap the highlighting code in a custom SQLite function. This meant I could call it from the display_sql: query in the Dogsheep Beta YAML configuration.
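
For anyone curious about the general technique (this is a sketch, not the actual one-off plugin), Python's sqlite3 module lets you register a function so it can be called from SQL such as the display_sql: queries above:

# A general sketch of registering a custom SQLite function from Python.
import re
import sqlite3


def highlight(text, query):
    """Naive highlighter: strip HTML tags, then wrap query matches in <b> tags."""
    stripped = re.sub(r"<[^>]+>", "", text or "")
    return re.sub(f"({re.escape(query)})", r"<b>\1</b>", stripped, flags=re.IGNORECASE)


conn = sqlite3.connect(":memory:")
conn.create_function("highlight", 2, highlight)

row = conn.execute(
    "SELECT highlight(?, ?)",
    ("<p>Datasette has SQLite full-text search</p>", "sqlite"),
).fetchone()
print(row[0])  # Datasette has <b>SQLite</b> full-text search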

A custom template tag would be more elegant, but I don't yet have a mechanism to expose custom template tags in the Dogsheep Beta rendering mechanism.

Build, index, deploy

The Datasette website implements the Baked Data pattern, where the content is compiled into SQLite database files and bundled with the application code itself as part of the deploy.

Building the index is just another step of that process.

Here's the deploy.yml GitHub workflow used by the site. It roughly does the following:

Download the current version of the content.db database file. This is so it doesn't have to re-fetch release and README content that was previously stored there.
Download the current version of blog.db, with entries from my blog. This means I don't have to fetch all entries, just the new ones.
Run build_directory.py, the script which fetches data for the plugins and tools pages. This hits the GitHub GraphQL API to find new repositories tagged datasette-io and datasette-plugin and datasette-tool. That GraphQL query also returns the most recent release. The script then checks to see if those releases have previously been fetched and, if not, uses github-to-sqlite to fetch them.
Imports the data from news.yaml into a news table using yaml-to-sqlite
Imports the latest PyPI download statistics for my packages from my simonw/package-stats repository, which implements git scraping against the most excellent pypistats.org.
Runs the dogsheep-beta index command to build a dogsheep-index.db search index.
Runs some soundness checks, e.g. datasette . --get "/plugins", to verify that Datasette is likely to at least return 200 results for some critical pages once published.
Uses datasette publish cloudrun to deploy the results to Google Cloud Run, which hosts the website.

I love building websites this way. You can have as much complexity as you like in the build script (my TIL website build script generates screenshots using Puppeteer) but the end result is some simple database files running on inexpensive, immutable, scalable hosting.


How Shopify Uses WebAssembly Outside of the Browser

How Shopify Uses WebAssembly Outside of the Browser

I'm fascinated by applications of WebAssembly outside the browser. As a Python programmer I'm excited to see native code libraries being compiled to WASM in a way that lets me call them from Python code via a bridge, but the other interesting application is executing untrusted code in a sandbox.

Shopify are doing exactly that - they are building a kind-of plugin mechanism where partner code compiled to WASM runs inside their architecture using Fastly's Lucet. The performance numbers are in the same ballpark as native code.

Also interesting: they're recommending AssemblyScript, a TypeScript-style language designed to compile directly to WASM without needing any additional interpreter support, as required by dynamic languages such as JavaScript, Python or Ruby.

Via Hacker News

Thursday, 17. December 2020

Simon Willison

Commits are snapshots, not diffs

Commits are snapshots, not diffs

Useful, clearly explained revision of some Git fundamentals.

Via Hacker News


Quoting Nat Friedman

At GitHub, we want to protect developer privacy, and we find cookie banners quite irritating, so we decided to look for a solution. After a brief search, we found one: just don’t use any non-essential cookies. Pretty simple, really. 🤔

So, we have removed all non-essential cookies from GitHub, and visiting our website does not send any information to third-party analytics services.

Nat Friedman

Wednesday, 16. December 2020

MyDigitalFootprint

As McKinsey rolls out the “Gartner Disillusionment” graph, I think it is time to look for a new one!

The article “Overcoming pandemic fatigue: How to reenergize organizations for the long run”, in typical McKinsey style, is a good read, but have you noticed how, over time, the big consulting companies have framed you to think a certain way? Like it or not, you have accepted their “illustrative curves” and way of thinking. If they frame a story in a familiar way, you are going to accept their tried and tested approach. You have accepted that the old worked and was true, so applying it again must make sense. This saves brain energy and learning time, and it is why we love heuristics. We have outsourced thinking and just accept it without questioning.


However, this overly simplistic movement from one level to another should be reconsidered in a wider systems approach, where one can look at the order of the response (first, second, third and higher). Below is a graph showing the different orders of response to an input stimulus that forces the output to a new level. The black line is overdamped and takes a long time to get to the new level (the "normal"). The green line is the fastest with no overshoot. The red line is faster but has a small overshoot and needs correction. Finally, the blue line is a super-fast response, but it oscillates a lot before we can hope to settle at a new state.
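
As an aside, here is a tiny, purely illustrative Python sketch of those four response shapes, numerically integrating a standard second-order system (y'' + 2*zeta*w*y' + w^2*y = w^2 for a unit step input); the damping ratios are assumptions chosen to mimic the black, green, red and blue lines:

def step_response(zeta, w=1.0, dt=0.01, t_end=20.0):
    # Simple Euler integration of the second-order step response
    y, v, t, out = 0.0, 0.0, 0.0, []
    while t < t_end:
        a = w * w * (1.0 - y) - 2.0 * zeta * w * v
        v += a * dt
        y += v * dt
        out.append((t, y))
        t += dt
    return out

# Overdamped ("black"), critically damped ("green"), slight overshoot ("red"), oscillatory ("blue")
for zeta in (2.0, 1.0, 0.7, 0.1):
    ys = [y for _, y in step_response(zeta)]
    print("zeta={}: peak={:.2f}, final={:.2f}".format(zeta, max(ys), ys[-1]))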

COVID19 responses globally have been blue, versus the old government approach that was black. The old normal was slow, thoughtful and careful, but got there. Lots of red tape, compliance, law, committees and proof added lag, a damping effect creating delay. Many companies, in response to a disruption or change, are closer to the red response line.

However, we are now in a long tail of oscillation response from the government to COVID19.  Lockdown, flareup, lockdown, conspiracy, rules, new rules, more lockdown, fire-break, ignore, flareup, capacity issues, vaccine, distribution and logistic reality, new variants and more issues to come.  

Gartner and McKinsey’s graphs are based on the historical acceptance of the old black and red lines of response: slow from government, faster from companies with an overshoot needing correction. Society would get to the new place after the stimulus, but went through the different phases that Gartner or McKinsey described along the way. The original Gartner team, to their credit, unlocked great insight, and it became the bedrock observation of diffusion and reaction.

However, I am not sure that their old model works with the new oscillating response system we have right now. @Gartner & @McKinsey, it might be time to rethink the model, and please can we aim for a better normal, not a new normal.


Given 2020 was **** and 2021 is unlikely to be much better, what are the macroeconomics signalling for 2030?

The Tytler cycle of history provides an exciting context for thinking about the struggle between belief and power and where next.


We are using this to consider the macroeconomics of where we are right now going into 2021. I am taking a view for North America and the Eurozone. This viewpoint is certainly not valid for the Middle East, Africa, South America, Russia and most of Asia. I would love to read the same commentary from someone in those regions.

The observation is: where are we in the Tytler cycle? I would propose that North America and the Eurozone are spread from liberty (the 1950s baby boomers) to dependence (a massive increase in numbers due to COVID19 who are now dependent on the state). The majority in our society are in Selfishness (I am an individual and have rights) and Complacency (my tiny footprint can only have a small impact on the climate and I cannot make a difference on the global stage). Whilst liberty was c.1800s and Abundance was c.1950, they are still very much prevalent in thinking due to education, society's structure and social class movements.


 

The context for me is where next for automation and decision making towards #AI. I love the Arthur C Clarke quote that “Any sufficiently advanced technology is indistinguishable from magic.” I sense that the majority of us in NA and Europe are in selfishness and complacency, which means we believe that the individual's rights trump society's broader rights. “I will not give up my freedoms that others have fought and died for!” Power and agency are resting with the individual.

I have created below a circular tussle between belief and power as they dominate thinking as times change. AI is en route to being magic, which might close this long historic cycle as we start again.


Belief and Power

Belief, in this example, is where society holds a shared belief in something, and that deity, idol or common belief cannot exist without the belief being shared. Power, in this case, is that in the vacuum of belief someone or something can use belief as a tool to gain power and keep control using power.

History has taught us that shared beliefs give power to royalty (consider influencers, celebrities or conspiracies that take a belief to enable control), from which nation-states can rise as they wrestle with too much power resting in so few. From the vacuum emerge new shared beliefs. Note: a shared belief is one that will not exist without everyone believing (money, law, companies). Individuals come to gain individual powers from this shared belief system. Humanity seeks out purpose and finds a way for a complex judgment to be made, thereby creating a way for something else to be held accountable for the randomness of the outcome. And so we repeat.

Where we stand because of COVID19, there will be a negotiation between citizens and their government, as governments have stepped in to maintain economic activity and survival. The negotiation between citizens and government will be a barter for rights, freedoms, power, control and sovereignty. #BREXIT puts an interesting spin on it. We might not like the idea of bondage in the 21st century, but since many nation-states will take the view that their citizens owe them something for stepping in, we should call it for what it is; you might prefer to call it tax.

Taking a breath for a second.  Right now, we are focussed on more productivity and more efficiency, which is centring on more automation.  The “future of work” is a much-debated topic due to the known increase in automation.  We are starting to ask ourselves who gave permission to the automated processes, does it ask for forgiveness when it goes wrong and who is responsible for explaining what our love for automation is doing.   We are automating both internal and ecosystem dependent processes.  This is fast taking us from algorithm automation to machine learning automation to Artificial Intelligence controlling systems that may become sovereign. 

When the context of the cycle of power and belief is combined with #AI, it creates an interesting dynamic. Over the next 20 to 30 years we are about to head into the emergence of a new movement to create a new shared belief. Are we about to outsource complex decisions on climate to a new shared belief that says the hardship is worth suffering for the common good?

2020 was ****,  2021 might be equally as bad, however, 2030 will be a lot harder.  




Simon Willison

Quoting Paul Ford

I get asked a lot about learning to code. Sure, if you can. It's fun. But the real action, the crux of things, is there in the database. Grab a tiny, free database like SQLite. Import a few million rows of data. Make them searchable. It's one of the most soothing activities known to humankind, taking big piles of messy data and massaging them into the rigid structure required of a relational database. It's true power.

Paul Ford
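
As a minimal sketch of the workflow Ford describes - the table and column names below are made up - SQLite's built-in FTS5 extension is enough to make imported rows searchable:

import sqlite3

conn = sqlite3.connect("scratch.db")
conn.execute("create virtual table if not exists docs using fts5(title, body)")
conn.executemany(
    "insert into docs (title, body) values (?, ?)",
    [
        ("First note", "big piles of messy data"),
        ("Second note", "data massaged into a rigid relational structure"),
    ],
)
conn.commit()

# Full-text search, ranked by relevance (FTS5's built-in bm25 ranking)
for title, body in conn.execute(
    "select title, body from docs where docs match ? order by rank", ("data",)
):
    print(title, "->", body)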

Tuesday, 15. December 2020

Doc Searls Weblog

Social shell games

If you listen to Episode 49: Parler, Ownership, and Open Source of the latest Reality 2.0 podcast, you’ll learn that I was blindsided at first by the topic of Parler, which has lately become a thing. But I caught up fast, even getting a Parler account not long after the show ended. Because I wanted to see what’s going on.

Though self-described as “the world’s town square,” Parler is actually a centralized social platform built for two purposes: 1) completely free speech; and 2) creating and expanding echo chambers.

The second may not be what Parler’s founders intended (see here), but that’s how social media algorithms work. They group people around engagements, especially likes. (I think, for our purposes here, that algorithmically nudged engagement is a defining feature of social media platforms as we understand them today. That would exclude, for example, Wikipedia or a popular blog or newsletter with lots of commenters. It would include, say, Reddit and Linkedin, because algorithms.)

Let’s start with recognizing that the smallest echo chamber in these virtual places is our own, comprised of the people we follow and who follow us. Then note that our visibility into other virtual spaces is limited by what’s shown to us by algorithmic nudging, such as by Twitter’s trending topics.

The main problem with this is not knowing what’s going on, especially inside other echo chambers. There are also lots of reasons for not finding out. For example, my Parler account sits idle because I don’t want Parler to associate me with any of the people it suggests I follow as soon as I show up:

I also don’t know what to make of this, which is the only other set of clues on the index page:

Especially since clicking on any of them brings up the same or similar top results, which seem to have nothing to do with the trending # topic. Example:

Thus endeth my research.

But serious researchers should be able to see what’s going on inside the systems that produce these echo chambers, especially Facebook’s.

The problem is that Facebook and other social networks are shell games, designed to make sure nobody knows exactly what’s going on, but feels okay with it, because they’re hanging with others who agree on the basics.

The design principle at work here is obscurantism—”the practice of deliberately presenting information in an imprecise, abstruse manner designed to limit further inquiry and understanding.”

To put the matter in relief, consider a nuclear power plant:

(Photo of kraftwerk Grafenrheinfeld, 2013, by Avda. Licensed CC BY-SA 3.0.)

Nothing here is a mystery. Or, if there is one, professional inspectors will be dispatched to solve it. In fact, the whole thing is designed from the start to be understandable, and its workings accountable to a dependent public.

Now look at a Facebook data center:

What it actually does is pure mystery, by design, to those outside the company. (And hell, to most, maybe all, of the people inside the company.) No inspector arriving to look at a rack of blinking lights in that place is going to know either. What Facebook looks like to you, to me, to anybody, is determined by a pile of discoveries, both on and off of Facebook’s site and app, around who you are and what, to its machines, you seem interested in, and an algorithmic process that is not accountable to you, and impossible for anyone, perhaps including Facebook itself, to fully explain.

All societies, and groups within societies, are echo chambers. And, because they cohere in isolated (and isolating) ways it is sometimes hard for societies to understand each other, especially when they already have prejudicial beliefs about each other. Still, without the further influence of social media, researchers can look at and understand what’s going on.

Over in the digital world, which overlaps with the physical one, we at least know that social media amplifies prejudices. But, though it’s obvious by now that this is what’s going on, doing something to reduce or eliminate the production and amplification of prejudices is damn near impossible when the mechanisms behind it are obscure by design.

This is why I think these systems need to be turned inside out, so researchers can study them. I don’t know how to make that happen; but I do know there is nothing more large and consequential in the world that is also absent of academic inquiry. And that ain’t right.

BTW, if Facebook, Twitter, Parler or other social networks actually are opening their algorithmic systems to academic researchers, let me know and I’ll edit this piece accordingly.


Nader Helmy

Hi Vasily.

Hi Vasily. We’re not aware of anyone who’s adopted this or wrote an implementation that supports it yet, but given it’s a draft open standard with growing interest at the OpenID Foundation, it’s possible that someone has decided to support this without publicising. We expect this will evolve as it becomes an official work item at OIDF.


Introducing OIDC Credential Provider

OpenID Connect (OIDC) is a hugely popular user authentication and identity protocol on the web today. It enables relying parties to verify the identity of their users and obtain basic profile information about them in order to create an authenticated user experience.

In typical deployments of OpenID Connect today, in order for a user to be able to exercise the identity they have with a relying party, the relying party must be in direct contact with what’s known as the OpenID Provider (OP). OpenID Providers are responsible for performing end-user authentication and issuing end-user identities to relying parties. This effectively means that an OpenID Provider is the Identity Provider (IdP) of the user.

In today’s OpenID Connect implementations, the Identity Provider mediates on behalf of the user

It’s the reason we often see buttons that say “Login with Google” or “Login with Facebook” during the login journey in an application or service. The website or application you want to use must first authenticate who you are with a provider like Google or Facebook which controls and manages that identity on your behalf. In this context we can think of the IdP as the “man in the middle.” This relationship prevents users from having a portable digital identity which they can use across different contexts and denies users any practical control over their identity. It also makes it incredibly easy for IdPs like Google or Facebook to track what users are doing, because the “man in the middle” can gather metadata about user behavior with little agency over how this identity data is shared and used.

In order to allow users to have practical control over their identity, we need a new approach.

Introducing OpenID Connect Credential Provider, an extension to OpenID Connect which enables the end-user to request credentials from an OpenID Provider and manage their own credentials in a digital wallet. This specification defines how an OpenID Provider can be extended beyond being the provider of simple identity assertions into being the provider of credentials, effectively turning these Identity Providers into Credential Providers.

OIDC Credential Provider allows the user to manage their own credentials

To maximize the reuse of existing infrastructure that’s deployed today, OIDC Credential Provider extends the core OpenID Connect protocol, maintaining the original design and intent of OIDC while enhancing it without breaking any of its assumptions or requirements.

Instead of using OIDC to provide simple identity assertions directly to the relying party, we can leverage OIDC to offer a Verifiable Credential (VC) which is cryptographically bound to a digital wallet of the end-user's choice. The digital wallet plays the role of the OpenID Client application, which is responsible for interacting with the OpenID Provider and manages the cryptographic key material (both public and private keys) used to prove ownership of the credential. The credentials issued to the wallet are re-provable and reusable for the purposes of authentication. This helps to decouple the issuance of identity-related information by providers and the presentation of that information by a user, introducing the user-controlled “wallet” layer between issuers and relying parties.

Essentially, a wallet makes a request to an OpenID provider in order to obtain a credential, and then receives the credential back into their wallet so they can later use it to prove their identity to relying parties. The interaction consists of three main steps:

1. The Client sends a signed credential request to the OpenID Provider with their public key
2. The OpenID Provider authenticates and authorizes the End-User to access the credential
3. The OpenID Provider responds to the Client with the issued VC

In this new flow, the credential request extends the typical OpenID Connect request in that it expresses the intent to ask for something beyond the identity token of a typical OIDC flow. Practically, what this means is that the client uses a newly defined scope to indicate the intent of the request. The Client also extends the standard OIDC Request object to add cryptographic key material and proof of possession of that key material so that the credential can be bound to the wallet requesting it. Though the credential can be bound to a public key by default, it can also support different binding mechanisms, e.g. the credential can optionally be bound to a Decentralized Identifier (DID). In binding to a DID, the subject of the credential is able to maintain ownership of the credential over a longer life cycle due to their ability to manage and rotate keys while maintaining a consistent identifier. This eases the burden on data authorities to re-issue credentials when keys change and allows relying parties to verify that the credential is always being validated against the current public key of the end-user.

The request can also indicate the format of the requested credential and even ask for specific claims present within the credential. This is designed to allow multiple credential formats to be used within the OIDC flow.
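
As a purely illustrative sketch of what such an extended request might carry - the claim names below are hypothetical placeholders, not the normative parameter names from the draft specification:

import json

# Hypothetical example of an extended OIDC request asking for a credential
credential_request = {
    "response_type": "code",
    "client_id": "https://wallet.example.com",
    "redirect_uri": "https://wallet.example.com/callback",
    # New scope value signalling that the client wants a credential, not just an id_token
    "scope": "openid openid_credential",
    # Key material (or a DID) that the issued credential should be bound to
    "sub_jwk": {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
    "did": "did:example:123456",
    # Requested credential format and specific claims
    "credential_format": "w3cvc-jsonld",
    "claims": {"given_name": None, "family_name": None},
}

# In practice this object would be signed with the wallet's private key
# (proof of possession) and sent to the provider as an OIDC Request Object.
print(json.dumps(credential_request, indent=2))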

On the provider side, OpenID Connect Providers are able to advertise which capabilities they support within the OIDC ecosystem using OpenID Connect Provider Metadata. This approach extends the metadata to support additional fields that express support for binding to DIDs, for issuing VCs, and advertising which DID methods, credential formats, credentials, and claims they are offering. This information can be utilized by the end-user’s digital wallet to help the user understand whether or not they wish to proceed with a credential request.

In order to create a way for the wallet or client to connect to the OpenID Provider, the spec also defines a URL which functions as a Credential Offer that the client can invoke in order to retrieve and understand the types of credential being offered by the provider. The client registers the ‘openid’ URI scheme in order to be able to understand and render the offer to the user so they can make an informed decision.

The sum of these changes means that OpenID Connect can allow users to have a portable digital identity credential that’s actually under their control, creating an opportunity for greater agency in digital interactions as well as preventing identity providers from being able to easily track user behavior. The OpenID Connect Credential Provider specification is in the process of being contributed to the OpenID Foundation (OIDF) as a work item at the A/B Working Group, where it will continue to be developed by the community behind OpenID Connect.

MATTR is pleased to announce that our OIDC Bridge Platform Extension now uses OIDC Credential Provider under the hood to facilitate issuing credentials with OpenID Connect. OIDC Bridge hides the complexity associated with setting up infrastructure for credential issuance and simply requires configuration of a standard OpenID Provider. We also simplify the process of verifying credentials issued over OIDC Credential Provider by allowing the wallet to respond to requests, present credentials, and prove ownership and integrity of their credentials via OIDC.

OIDC Bridge is an Extension to the MATTR Platform

This new set of capabilities allows OpenID Providers greater flexibility around which claims end up in a credential, and allows for the support of many different credential types with a straight-forward authentication journey for end-users.

Our Mobile Wallet supports the ability to invoke credential offers using OIDC Credential Provider as well as creating credential requests and receiving credentials from an OpenID Provider.

To find out more, check out our tutorials on MATTR Learn, read the spec, or watch a recording of our presentation on this spec from the recent Internet Identity Workshop.

Introducing OIDC Credential Provider was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


MyDigitalFootprint

Executive Leadership Briefing. Is the data presented to you, enabling a real choice?

This article explores why senior leaders need to develop skills to see past big noticeable loud noises and uncover small signals if we want to be part of a Board who makes the challenging judgment calls.

Prof Brian Cox said during his opening keynote at Innotribe/SIBOS 2019, give or take a bit, “if you cannot find it in nature, it is not natural.” This got me thinking about how choice is created and then how we make decisions and judgements. How humans choose, decide and make complex judgements draws heavily on psychology and the behavioural sciences. Alongside judgement, I have a polymath interest in quantum mechanics, the microbiome and consciousness. I was relaxing and watching “His Dark Materials”, which it turns out was worth hanging in for, and had finished Stuart Russell's “Human Compatible” and Carlo Rovelli's “The Order of Time”. Then, whilst watching this mini-series on the BBC about free will, this article emerged. Choice has a predication that you have agency and can choose or make a decision. But how is choice possible when the foundations that we are built on/from do not have a choice? Can data give us a choice?

----

Decision: the action or process of deciding something or of resolving a question. A decision is an act of, or need for, making up one’s mind. Whereas choice: an act of choosing between two or more possibilities. It requires a right, agency, or opportunity to choose.

The origins of the two words add context. The word decision comes from “cutting off” while choice comes from “to perceive.” Therefore a decision is more about process orientation, meaning we are going through analysis and steps to eliminate or cut off options. With choice, it is more of an approach, meaning there is a perception of what the outcome of a particular choice may be. Because of this, let’s run with choice rather than a decision. 


A Decision is about going through analysis and steps to eliminate or cut off options. Choice is an approach, meaning there is a perception of what the outcome may be. 

Does energy have a choice?

We are using energy as represented by a magnetic field.


Two magnets, north and south. Irrespective of position, they have to attract. Do they have a choice? 

Three magnets: north, north, south. They have a more complicated relationship, as position and distance now matter and influence the actual outcome. But there is no choice; the rules define the outcome.

At the majority of starting positions for the three magnets, there is only one outcome; as such, the choice is predetermined. However, there are several situations where the magnets are sufficiently far apart that there are only small forces at play (far-field). In this case, the result of movement may appear to be more random between possible outcomes. Any specific outcome is based on an unseen, small, momentary influence. The more magnets exerting small forces, the more random a positional change or choice may appear, as the level of complexity of the model increases beyond the rational.

Therefore, in a simple model of, say, three magnets, there is no choice. Whereas in a complicated model, with many magnets, it would appear that a degree of randomness or chaos is introduced (entropy). The simple model does not exist in nature, as it is impossible to remove small signals even if they are hidden by large, close forces. The point is that at this level of abstraction, energy itself does not have a choice, and the outcome is predictable, as there is indeed a fixed number of possible outcomes, which can be modelled.

Stick with me here; we are exploring something that we don’t often want to face up to as leaders; we do not make decisions that we are accountable and responsible for, as there is no choice.  

Expanding: there are only three fundamental forces of energy here, each governed by its own rules.

Gravity. There is only one kind of charge: mass/energy, which is always attractive. There’s no upper limit to how much mass/energy you can have, as the worst you can do is create a black hole, which still fits into our theory of gravity. Every quantum of energy, whether it has a rest mass (like an electron) or not (like a photon), curves the fabric of space, causing the phenomenon we perceive as gravitation. If gravitation turns out to be quantum in nature, there’s only one quantum particle, the graviton, required to carry the gravitational force. Based on maths and models, gravity suggests there is no choice, as it is always attractive. However, as we know from our study of, say, our galaxy the Milky Way, a single force introduces many patterns and an appearance of randomness. But with enough observations and data, it can be modelled; it is predictable.

Electromagnetism. A fundamental force that readily appears on macroscopic scales, it gives us a little more basic variety. Instead of one type of charge, there are two: positive and negative electric charges. Like charges repel; opposite charges attract. Although the physics underlying electromagnetism is very different in detail from the physics underlying gravitation, its structure is still straightforward in the same way that gravitation is. You can have free charges, of any magnitude, with no restrictions, and there’s only one particle required (the photon) to mediate all the possible electromagnetic interactions. Based on maths and models, there is no choice. However, as we know from our study of, say, light waves, we get many patterns and an appearance of randomness.

The strong nuclear force is one of the most puzzling features of the universe.  The rules become fundamentally different. Instead of one type of charge (gravitation) or even two (electromagnetism), there are three fundamental charges for the strong nuclear force. Inside every proton or neutron-like particle, there are at least three quark and antiquark combinations, but how many is unknown as the list keeps growing.  Gluons are the particles that mediate the strong force, and then it gets messy.  It is worth noting that we don’t have the maths or a model, but it appears that there is still ultimately no choice as you cannot have a net charge of any type, but how it balances is well beyond me.   However, as we know from our study at CERN using the Large Hadron Collider, the strong nuclear force is quantum in nature and has a property that means it only exists when observed. 

In nature, we have one, two or many forces, and each can create structure and randomness but can anything in nature truly make a choice or decision?

Extending, does information have a choice?

Two magnets, north and south, but information now defines distance and strength. Therefore information determines that there can only be one outcome. The observer, knowing the information, can only ever observe the single outcome. Three magnets sort of facing off, north, north, south: a complicated relationship, but position, distance and field strength are known; therefore, the outcome can be modelled and predicted.

Further, we can now move to a dynamic model where each of the magnets rotates and moves over the period. What happens when information includes the future probable positions of the magnets? Does information enable the magnets not to move right now, as they know from information that it is not worth doing, since it will not change the outcome and could conserve energy? (This being a fundamental law of thermodynamics.)

However, as with unpacking the onion, this is overly simplistic, as gravity and electromagnetism are defined and bounded by the laws of relativity and thermodynamics. In contrast, the strong nuclear force is defined and bounded by the laws of quantum mechanics. Gravity and electromagnetism are deterministic in nature, as there is no choice as per the laws. The interaction of a complex system can make something look random, but when removed from time and point observation, the laws define the pattern. Whereas the strong nuclear force being quantum means we don’t know its state until we observe it, which fully supports chaos/randomness and perhaps something closer to being presented with a choice, aka free will. It is not so much that you can do anything, more that you can pick between states, rather than just following a defined or predetermined flow from the start to this point, bounded by the foundational laws of relativity.

Does information have a quantum property, insomuch as it is only when the observer looks and can act that it takes on that state? Think carefully about this in the context of bias.

Can information or knowledge enable choice?

Does information require energy, and if so, does the very nature of an informational requirement change the outcome? (Heisenberg Uncertainty Principle.) Can something determine that, to minimise energy expenditure, it should wait because a lower-energy option with a better outcome will come along later? How would the information know to make that decision or choice? What rule would it be following?

We are asking that, based on information, the general rules are ignored. This idea means we would step over the first outcome or requirement, preferring to take a later option. Has information now built an experience which feeds into knowledge? But what is information in this context? Consider the colour of petals or leaves in autumn. Science reveals that colour is a derivative of visible light. A red leaf reflects wavelengths longer than those of a green leaf. Colour is a property not of the leaf but of how the leaf interacts with light and with the eye, and of how we then decide to describe it with a common sound (words). Assuming the observer has the right level of vitamin C and brain structure - which all adds further dimensions. What we think of as intrinsic properties (information) of the world are merely effects of multiple causes coinciding, many small signals. Reality, in this sense, is not so much physical things, but interactions and flow. The same applies to touch and smell.

intrinsic properties (information) are merely effects of multiple causes coinciding

Remember we are asking how we get to make a choice, based on the idea that if it does not appear in nature, it is probably not natural.  Have we convinced ourselves that complexity creates free-will?

Free-will, can you make a decision?

Reflecting on the title question: is the data presented to you enabling a real choice? Given that choice and free will have a predication that you have agency and can choose or decide, we then have a second question. How is free will possible when the foundations (energy types) you are building on do not appear to create choice? Yes, there is the appearance of randomness; yes, some of it only exists on observation - but does that create choice?

We have to admire those tiny signals which present themselves as choices at scale, as nothing has an overall significant effect. Everything has a flow. Does this lack of a dominant signal create an illusion of free will or the ability to make a choice? When the signals are big, loud and noisy, drowning out small signals - is choice taken away?

Executive leadership

In the context of leadership, it is not that we are programmed, but that great leaders are highly tuned and responsive to small signals that most of us don't know are there, because we are too busy or following instructions.

Leadership demands access to small signals to be able to exercise judgment. However, is our love of traffic light dashboards, summaries, 1-minute overviews, elevator pitches, priorities, urgency, fast meetings, short pitches, executive summaries and KPIs creating management signals, driven by data, that can only focus on the priority loud, noisy signals? The more layers and filters that data passes through, the more small signals are lost, and there is an increasing loudness to one path, no decision and a removal of choice.

Does prominent signal notification mean we reduce our leadership's sensitivity to seeing only the obvious? The same leadership we then blame for not sensing the market signals or not being responsive, and whose lead we do not follow when they do!

Decisions (choice) or judgement

Human brains are constructed, or wired, to create and discover patterns, to which we ascribe meaning and learning. Signals help us form, and be informed about the forming of and changes in, patterns, and how they align or otherwise with a previous pattern. Therefore we love signals that help us form or manage patterns, which we equally call rules and heuristics.

Management theory teaches and rewards us on prioritising signals, especially the loud, noisy, obvious ones that are easy to see and understand.  Using the example of a cloud (one in the sky, not a server farm), it is an unmistakable signal.  A cloud is right here, right now.  It is big and obvious.   Clouds are a data point; observing clouds provides us with highly structured single-source data.   The data we collect about clouds in front of us is given to our data science team who will present back insights about the data that is collected, giving us all sorts of new information and knowledge about the data we have. Big signals win.  The statistics team takes the same data set and provides forecasts and probabilities based on maths, inferring insights based on data that is not there.   The outcome from both teams may be different, but they both present significant overriding signals telling us what decision to make, based on the clouds data. 

Another approach is to look at the system: how and why did the cloud form? Where did it appear? Where is it going? By gathering lots of data from different sources and seeking many signals, we can look at systems. Sensors detect light level, wind direction and speed, ground temperature and air temperature for 100 km around and 25 km up - lots of delicate, low-signal data. It is unstructured data. Feeding the data into the teams, the data analytics team brings knowledge of the system, its complexity and what we know based on the data. The statistics team can provide forecasting and probability about clouds and not-clouds. Small signals that in aggregate create choice and allow for judgment. Our small signals give confidence that our models work, as we have cloud data, and that cloud data confirms that our signals are picking up what our environment is saying.

Side note, the differences between “data analysis” using data science and statistics. Whilst both data scientists and statisticians use data to make inferences about a data subject, they will approach the issue of data analysis quite differently. For a data scientist, data analysis is sifting through vast amounts of data: inspecting, cleansing, modelling, and presenting it in a non-technical way to non-data scientists. The vast majority of this data analysis is performed on a computer. A data analyst will have a data science toolbox (e.g. programming languages like Python and R, or experience with frameworks like Hadoop and Apache Spark) with which they can investigate the data and make inferences.

If you're a statistician, instead of "vast amounts of data" you'll usually have a limited amount of information in the form of a sample (i.e. a portion of the population); data analysis is performed on this sample, using rigorous statistical techniques. A statistical analyst will generally use mathematical-based techniques like hypothesis testing, probability and various statistical theorems to make inferences. Although much of a statistician's data analysis can be performed with the help of statistical programs like R, the analysis is more methodical and targeted to understanding one particular aspect of the sample at a time (for example, the mean, standard deviation or confidence interval).

These data analysis approaches are fundamentally different and produce different signals; for a full story, you often need both.  

Does a leadership team choose or decide?

As a senior leader, executive or director, you have to face the reality of this article now. Right now, you have four significant noisy signals to contend with. Critical parts of your company are presenting you with large signals using:

statistical analysis based on an observable point 

data science analysis based on an observable point

statistical analysis based on a system

data science analysis based on a system

Do you know what type of significant loud signals you are being given, and are they drowning out all the small signals you should be sensing? Who around the table is sensing small signals? Are you being presented with a decision, or are you being guided to a favourable outcome based on someone else's reward or motivation? How do you understand the bias in the data and the analysis, and where are the small signals? Indeed, to quote @scottdavid: “You have to hunt for the paradoxes in the information being presented, because if you cannot see a paradox you are being framed into a one-dimensional model.”

Further, have you understood that data is emerging outside of your control, from an ecosystem that has different ontologies, taxonomies and pedagogy, meaning that you will probably discover signals and patterns that don't actually exist?

Decision-making skill based on sensitivity

I wrote in Sept 2020 about how leadership for “organisational fitness” is different from the leadership required for “organisational wellness”. The article explored the skills needed by executive leadership in decision making to help a company be both fit and well (different things).

The chart below highlights how skills should be formed over a period to create individuals who can work together with other professionals to deal with highly complex decision making (judgment). The axes are ability and expertise level on the horizontal axis (x) and the decision environment on the vertical axis (y). The (0,0) point where the axes cross is when you first learn to make decisions. Note this has nothing to do with age or time. Starting from the orange zone - this is where we make simple decisions. A bit like gravity, there is only one force and one outcome. You are encouraged to find it and make the right choice (even though ultimately there is no choice). The grey areas on either side are where the “Peter Principle” can be seen in practice; individuals act outside of their capacity and/or are not given sufficient responsibility and become disruptive. The pink area is where most adults get to and stay. We understand that, like electromagnetic forces, there are two options or more. We search out the significant signals and those that bring us the reward to which we are aligned. We develop and hone skills at making binary choices. The yellow/mustard zone is where many senior executives get trapped, as they are unable to adapt from acting in their own interests to acting in the best interests of the organisation and ecosystem, because all their training is in how to perform better in their own interests and rewards (KPIs linked to bonus). In the yellow zone, you have to create and build a whole new mental model. Like John Maynard Keynes, as you learn more, you do U-turns, adapt your thinking, change your philosophy and adapt. Never stop learning. At this point, you wrestle with quantum decision making, find you are looking for the small signals in the chaos, and need trusted advisors and equal peers. You seek out and find the paradox, never believing the data, the analysis, nor the steer that someone else is presenting. This is hard work but leads to better judgment, better decisions and better outcomes.

 

Take Away

Decisions are often not decisions; the choice is not always real, especially when the foundations of them are simple and binary.  Leaders need to become very sensitive to signals and find the weak and hidden ones to ensure that, as complexity becomes a critical component of judgement, you are not forced to ratify choices.  Ratification is when choices are not understood, the views are biased, and the decision likely fulfils someone else’s objectives.   

As directors, we are held accountable and responsible for our decisions; we must take them to the best of our ability. As automation becomes more prevalent in our companies, based on data, we have to become more diligent than ever about whether we are making judgments, choices or decisions, or just ratifying something that has taken our choice away to fulfil its own bias and its own dependency using big signals.



The Dingle Group

GADI and The DID Alliance

On Monday, December 14th the 18th Vienna Digital Identity Meetup* was held with a presentation from Jason Burnett, Director of Products from Digital Trust. Digital Trust is working on the new GADI specification and is a central member of The DID Alliance.

The DID Alliance digital identity infrastructure leverages existing FIDO Alliance infrastructure and processes combined with elements of decentralized technologies (DLTs and DIDs). The GADI architecture is a federated identity ecosystem built around Digital Address Platforms (DAPs). The DAPs are run by known, trusted identity providers, perform the Trust Anchor role, and issue Digital Addresses to individuals. The ecosystem uses a permissioned DLT model in which only known, trusted entities can perform the role of Trust Anchor.

The Digital Address is used as a unique individual identifier that is controlled by the GADI ecosystem. This is the fundamental difference in identity philosophy between GADI and SSI-based systems. The Digital Address is a lifetime connected identifier and is under the control of the DAP. Version 1.0 of the GADI specification is currently underway, with a release expected in Q1 of 2021.

The DID Alliance was founded in late 2018 with a strong representation of members in South Korea and CVS/Aetna in the United States. Digital Trust is the technology partner of The DID Alliance that is designing and implementing the GADI specification and reference architecture.

For a recording of the event please check out the link: https://vimeo.com/491079655

Time markers:

0:00:00 - Introduction

0:04:17 - Jason Burnett - Digital Trust

0:06:38 - The DID Alliance Core Principles

0:18:57 - DAP Ecosystem

0:22:53 - Components of GADI Digital Identity

0:34:35 - Questions

0:50:00 - Demo

1:02:00 - Questions

1:11:18 - Wrap-up

For more information on:

The DID Alliance: https://www.didalliance.org/

And as a reminder, we continue to have online only events. Hopefully we will be back to in person and online in the New Year!

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technology professionals on the new opportunities that arise with a high assurance digital identity created by the reduction of risk and strengthened provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.

Monday, 14. December 2020

Doc Searls Weblog

Be the hawk

On Quora the question went, If you went from an IQ of 135+ to 100, how would it feel?

Here’s how I answered::::

I went through that as a kid, and it was no fun.

In Kindergarten, my IQ score was at the top of the bell curve, and they put me in the smart kid class. By 8th grade my IQ score was down at the middle of the bell curve, my grades sucked, and my other standardized test scores (e.g. the Iowa) were terrible. So the school system shunted me from the “academic” track (aimed at college) to the “general” one (aimed at “trades”).

To the school I was a failure. Not a complete one, but enough of one for the school to give up on aiming me toward college. So, instead of sending me on to a normal high school, they wanted to send me to a “vocational-technical” school where boys learned to operate machinery and girls learned “secretarial” skills.

But in fact the school failed me, as it did countless other kids who adapted poorly to industrialized education: the same industrial system that still has people believing IQ tests measure anything other than how well somebody answers a bunch of puzzle questions on a given day.

Fortunately, my parents believed in me, even though the school had given up. I also believed in myself, no matter what the school thought. Like Walt Whitman, I believed “I was never measured, and never will be measured.” Walt also gifted everyone with these perfect lines (from Song of Myself):

I know I am solid and sound.
To me the converging objects of the universe
perpetually flow.

All are written to me,
and I must get what the writing means…
I know this orbit of mine cannot be swept
by a carpenter’s compass,

I know that I am august,
I do not trouble my spirit to vindicate itself
or be understood.
I see that the elementary laws never apologize.

Whitman argued for the genius in each of us that moves in its own orbit and cannot be encompassed by industrial measures, such as standardized tests that serve an institution that would rather treat students like rats in their mazes than support the boundless appetite for knowledge with which each of us is born—and that we keep if it doesn’t get hammered out of us by normalizing systems.

It amazes me that half a century since I escaped from compulsory schooling’s dehumanizing wringer, the system is largely unchanged. It might even be worse. (“Study says standardized testing is overwhelming nation’s public schools,” writes The Washington Post.)

To detox ourselves from belief in industrialized education, the great teacher John Taylor Gatto gives us The Seven Lesson Schoolteacher, which summarizes what he was actually paid to teach:

1. Confusion — “Everything I teach is out of context. I teach the un-relating of everything. I teach disconnections. I teach too much: the orbiting of planets, the law of large numbers, slavery, adjectives, architectural drawing, dance, gymnasium, choral singing, assemblies, surprise guests, fire drills, computer languages, parents’ nights, staff-development days, pull-out programs, guidance with strangers my students may never see again, standardized tests, age-segregation unlike anything seen in the outside world….What do any of these things have to do with each other?”

2. Class position — “I teach that students must stay in the class where they belong. I don’t know who decides my kids belong there but that’s not my business. The children are numbered so that if any get away they can be returned to the right class. Over the years the variety of ways children are numbered by schools has increased dramatically, until it is hard to see the human beings plainly under the weight of numbers they carry. Numbering children is a big and very profitable undertaking, though what the strategy is designed to accomplish is elusive. I don’t even know why parents would, without a fight, allow it to be done to their kids. In any case, again, that’s not my business. My job is to make them like it, being locked in together with children who bear numbers like their own.”

3. Indifference — “I teach children not to care about anything too much, even though they want to make it appear that they do. How I do this is very subtle. I do it by demanding that they become totally involved in my lessons, jumping up and down in their seats with anticipation, competing vigorously with each other for my favor. It’s heartwarming when they do that; it impresses everyone, even me. When I’m at my best I plan lessons very carefully in order to produce this show of enthusiasm. But when the bell rings I insist that they stop whatever it is that we’ve been working on and proceed quickly to the next work station. They must turn on and off like a light switch. Nothing important is ever finished in my class, nor in any other class I know of. Students never have a complete experience except on the installment plan. Indeed, the lesson of the bells is that no work is worth finishing, so why care too deeply about anything?”

4. Emotional dependency — “By stars and red checks, smiles and frowns, prizes, honors and disgraces I teach kids to surrender their will to the predestined chain of command. Rights may be granted or withheld by any authority without appeal, because rights do not exist inside a school — not even the right of free speech, as the Supreme Court has ruled — unless school authorities say they do. As a schoolteacher, I intervene in many personal decisions, issuing a pass for those I deem legitimate, or initiating a disciplinary confrontation for behavior that threatens my control. Individuality is constantly trying to assert itself among children and teenagers, so my judgments come thick and fast. Individuality is a contradiction of class theory, a curse to all systems of classification.”

5. Intellectual dependency — “Good people wait for a teacher to tell them what to do. It is the most important lesson, that we must wait for other people, better trained than ourselves, to make the meanings of our lives. The expert makes all the important choices; only I, the teacher, can determine what you must study, or rather, only the people who pay me can make those decisions which I then enforce… This power to control what children will think lets me separate successful students from failures very easily.”

6. Provisional self-esteem — “Our world wouldn’t survive a flood of confident people very long, so I teach that your self-respect should depend on expert opinion. My kids are constantly evaluated and judged. A monthly report, impressive in its provision, is sent into students’ homes to signal approval or to mark exactly, down to a single percentage point, how dissatisfied with their children parents should be. The ecology of “good” schooling depends upon perpetuating dissatisfaction just as much as the commercial economy depends on the same fertilizer.”

7. No place to hide — “I teach children they are always watched, that each is under constant surveillance by myself and my colleagues. There are no private spaces for children, there is no private time. Class change lasts three hundred seconds to keep promiscuous fraternization at low levels. Students are encouraged to tattle on each other or even to tattle on their own parents. Of course, I encourage parents to file their own child’s waywardness too. A family trained to snitch on itself isn’t likely to conceal any dangerous secrets. I assign a type of extended schooling called “homework,” so that the effect of surveillance, if not that surveillance itself, travels into private households, where students might otherwise use free time to learn something unauthorized from a father or mother, by exploration, or by apprenticing to some wise person in the neighborhood. Disloyalty to the idea of schooling is a Devil always ready to find work for idle hands. The meaning of constant surveillance and denial of privacy is that no one can be trusted, that privacy is not legitimate.”

Gatto won multiple teaching awards because he refused to teach any of those lessons. I succeeded in life by refusing to learn them as well.

All of us can succeed by forgetting those seven lessons—especially the one teaching that your own intelligence can be measured by anything other than what you do with it.

You are not a number. You are a person like no other. Be that, and refuse to contain your soul inside any institutional framework.

More Whitman:

Long enough have you dreamed contemptible dreams.
Now I wash the gum from your eyes.
You must habit yourself to the dazzle of the light and of every moment of your life.

Long have you timidly waited,
holding a plank by the shore.
Now I will you to be a bold swimmer,
To jump off in the midst of the sea, and rise again,
and nod to me and shout,
and laughingly dash your hair.

I am the teacher of athletes.
He that by me spreads a wider breast than my own
proves the width of my own.
He most honors my style
who learns under it to destroy the teacher.

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

I concentrate toward them that are nigh.
I wait on the door-slab.

Who has done his day’s work
and will soonest be through with his supper?
Who wishes to walk with me.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

Be that hawk.

Sunday, 13. December 2020

Simon Willison

Build v.s. buy: how billing models affect your internal culture


Something to pay attention to when making a build v.s. buy decision is the impact that billing models will have on your usage of a tool.

Take logging for example. If you buy a log aggregation platform like Splunk Cloud or Loggly the pricing is likely based on the quantity of data you ingest per day.

This can set up a weird incentive. If you are already close to the limit of your plan, you'll find that engineers are discouraged from logging new things.

This can have a subtle effect on your culture. Engineers who don't want to get into a budgeting conversation will end up avoiding using key tools, and this can cost you a lot of money in terms of invisible lost productivity.

Tools that charge per-head have a similar problem: if your analytics tool charges per head, your junior engineers won't have access to it. This means you won't build a culture where engineers use analytics to help make decisions.

This is a very tricky dynamic. On the one hand it's clearly completely crazy to invest in building your own logging or analytics solutions - you should be spending engineering effort solving the problems that are unique to your company!

But on the other hand, there are significant, hard-to-measure hidden costs of vendors with billing mechanisms that affect your culture in negative ways.

I don't have a solution to this. It's just something I've encountered that makes the "build v.s. buy" decision a lot more subtle than it can first appear.

It's also worth noting that this is only a problem in larger engineering organizations. In a small startup the decision chain to "spend more money on logging" is a 30 second conversation with the founder with the credit card - even faster if you're the founder yourself!

Update: a process solution?

Thinking about this more, I realize that this isn't a technology problem: it's a process and culture problem. So there should be a process and cultural solution.

One thing that might work would be to explicitly consider this issue in the vendor selection conversations, then document it once the new tool has been implemented.

A company-wide document listing these tools, with clear guidance as to when it's appropriate to increase capacity/spend and a documented owner (individual or team) plus contact details could really help overcome the standard engineer's resistance to having conversations about budget.

This post started out as a comment on Hacker News.


blog.deanland.com

Facebook meme, better as a blog post


Over on Facebook there’s a meme going around, the gist of which is "What is something you have done that you're fairly confident you're the ONLY person on my friends list to have ever done? Given the wide range of people I know, I think this will be interesting."

I noted that a few who answered this offered up more than one event.

*** As for me: first time ever on the air in my radio career was afternoon drive in NYC - smack dab in the middle of the FM dial.

read more


Simon Willison

datasette.io, an official project website for Datasette


This week I launched datasette.io - the new official project website for Datasette.

Datasette's first open source release was just over three years ago, but until now the official site duties have been split between the GitHub repository and the documentation.

The Baked Data architectural pattern

The site itself is built on Datasette (source code here). I'm using a pattern that I first started exploring with Niche Museums: most of the site content lives in a SQLite database, and I use custom Jinja templates to implement the site's different pages.

This is effectively a variant of the static site generator pattern. The SQLite database is built by scripts as part of the deploy process, then deployed to Google Cloud Run as a binary asset bundled with the templates and Datasette itself.

I call this the Baked Data architectural pattern - with credit to Kevin Marks for helping me coin the right term. You bake the data into the application.

It's comparable to static site generation because everything is immutable, which greatly reduces the amount of things that can go wrong - and any content changes require a fresh deploy. It's extremely easy to scale - just run more copies of the application with the bundled copy of the database. Cloud Run and other serverless providers handle that kind of scaling automatically.

Unlike static site generation, if a site has a thousand pages you don't need to build a thousand HTML pages in order to deploy. A single template and a SQL query that incorporates arguments from the URL can serve as many pages as there are records in the database.

How the site is built

You can browse the site's underlying database tables in Datasette here.

The news table powers the latest news on the homepage and /news. News lives in a news.yaml file in the site's GitHub repository. I wrote a script to import the news that had been accumulating in the 0.52 README - now that news has moved to the site the README is a lot more slim!

At build time my yaml-to-sqlite script runs to load that news content into a database table.
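For readers who haven't used yaml-to-sqlite, that build step boils down to something like the following sketch, written against the sqlite-utils Python library the tool is built on; the content.db filename and the date/body columns are inferred from the queries shown below rather than taken from the actual build script.

# A rough sketch of the yaml-to-sqlite build step, not the site's actual
# deploy script: load news.yaml and write it into a "news" table.
import yaml          # PyYAML
import sqlite_utils  # the library underlying sqlite-utils and yaml-to-sqlite

with open("news.yaml") as fp:
    records = yaml.safe_load(fp)  # expected shape: a list of {"date": ..., "body": ...} items

db = sqlite_utils.Database("content.db")  # "content" database name inferred from the templates
db["news"].drop(ignore=True)              # the database is rebuilt from scratch on every deploy
db["news"].insert_all(records)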

The index.html template then uses the following Jinja code to output the latest news stories, using the sql() function from the datasette-template-sql Datasette plugin:

{% set ns = namespace(current_date="") %}
{% for row in sql("select date, body from news order by date desc limit 15", database="content") %}
  {% if prettydate(row["date"]) != (ns.current_date and prettydate(ns.current_date)) %}
    <h3>{{ prettydate(row["date"]) }} <a href="/news/{{ row["date"] }}" style="font-size: 0.8em; opacity: 0.4">#</a></h3>
    {% set ns.current_date = prettydate(row["date"]) %}
  {% endif %}
  {{ render_markdown(row["body"]) }}
{% endfor %}

prettydate() is a custom function I wrote in a one-off plugin for the site. The namespace() stuff is a Jinja trick that lets me keep track of the current date heading in the loop, so I can output a new date heading only if the news item occurs on a different day from the previous one.
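The post doesn't include the plugin's source, but a one-off plugin exposing prettydate() to the templates could use Datasette's extra_template_vars hook along these lines; the exact date formatting below is my own guess, not the site's.

# Hypothetical sketch of a one-off site plugin providing prettydate();
# the formatting choice ("10th December 2020") is illustrative only.
import datetime
from datasette import hookimpl

@hookimpl
def extra_template_vars():
    def prettydate(date_string):
        d = datetime.date.fromisoformat(date_string)  # e.g. "2020-12-10"
        if d.day in (11, 12, 13):
            suffix = "th"
        else:
            suffix = {1: "st", 2: "nd", 3: "rd"}.get(d.day % 10, "th")
        return "{}{} {}".format(d.day, suffix, d.strftime("%B %Y"))
    return {"prettydate": prettydate}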

render_markdown() is provided by the datasette-render-markdown plugin.

I wanted permalinks for news stories, but since they don't have identifiers or titles I decided to provide a page for each day instead - for example https://datasette.io/news/2020-12-10

These pages are implemented using Path parameters for custom page templates, introduced in Datasette 0.49. The implementation is a single template file at templates/pages/news/{yyyy}-{mm}-{dd}.html, the full contents of which is:

{% extends "page_base.html" %}
{% block title %}Datasette News: {{ prettydate(yyyy + "-" + mm + "-" + dd) }}{% endblock %}
{% block content %}
{% set stories = sql("select date, body from news where date = ? order by date desc", [yyyy + "-" + mm + "-" + dd], database="content") %}
{% if not stories %}
  {{ raise_404("News not found") }}
{% endif %}
<h1><a href="/news">News</a>: {{ prettydate(yyyy + "-" + mm + "-" + dd) }}</h1>
{% for row in stories %}
  {{ render_markdown(row["body"]) }}
{% endfor %}
{% endblock %}

The crucial trick here is that, because the filename is news/{yyyy}-{mm}-{dd}.html, a request to /news/2020-12-10 will render that template with the yyyy, mm and dd template variables set to those values from the URL.

It can then execute a SQL query that incorporates those values. It assigns the results to a stories variable, then checks that at least one story was returned - if not, it raises a 404 error.

See Datasette's custom pages documentation for more details on how this all works.

The site also offers an Atom feed of recent news. This is powered by the datasette-atom plugin, using the output of this canned SQL query, with a render_markdown() SQL function provided by this site plugin.
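That site plugin isn't reproduced in the post either, but registering a SQL function like render_markdown() is typically done with the prepare_connection hook; a minimal sketch, assuming the markdown package for rendering, might look like this.

# Minimal sketch of exposing render_markdown() to SQL queries (including
# canned queries) via the prepare_connection plugin hook.
import markdown
from datasette import hookimpl

@hookimpl
def prepare_connection(conn):
    conn.create_function(
        "render_markdown", 1, lambda text: markdown.markdown(text or "")
    )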

The plugin directory

One of the features I'm most excited about on the site is the new Datasette plugin directory. Datasette has over 50 plugins now and I've been wanting a definitive directory of them for a while.

It's pretty basic at the moment, offering a list of plugins plus simple LIKE based search, but I plan to expand it a great deal in the future.

The fun part is where the data comes from. For a couple of years now I've been using GitHub topics to tag my plugins - I tag them with datasette-plugin, and the ones that I planned to feature on the site when I finally launched it were also tagged with datasette-io.

The datasette.io deployment process runs a script called build_plugin_directory.py, which uses a GraphQL query against the GitHub search API to find all repositories belonging to me that have been tagged with those tags.

That GraphQL query looks like this:

query {
  search(query: "topic:datasette-io topic:datasette-plugin user:simonw", type: REPOSITORY, first: 100) {
    repositoryCount
    nodes {
      ... on Repository {
        id
        nameWithOwner
        openGraphImageUrl
        usesCustomOpenGraphImage
        repositoryTopics(first: 100) {
          totalCount
          nodes {
            topic {
              name
            }
          }
        }
        openIssueCount: issues(states: [OPEN]) {
          totalCount
        }
        closedIssueCount: issues(states: [CLOSED]) {
          totalCount
        }
        releases(last: 1) {
          totalCount
          nodes {
            tagName
          }
        }
      }
    }
  }
}

It fetches the name of each repository, the openGraphImageUrl (which doesn't appear to be included in the regular GitHub REST API), the number of open and closed issues and details of the most recent release.
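build_plugin_directory.py itself isn't shown in the post, but running a GraphQL search like that against GitHub amounts to a single authenticated POST. Here is a rough sketch; the httpx dependency, the function names and the trimmed-down query are my own, not the script's.

# Sketch of executing a GitHub GraphQL search; the query is a trimmed-down
# version of the one shown above, keeping just a couple of its fields.
import os
import httpx

QUERY = """
query {
  search(query: "topic:datasette-io topic:datasette-plugin user:simonw", type: REPOSITORY, first: 100) {
    nodes {
      ... on Repository {
        nameWithOwner
        releases(last: 1) { totalCount nodes { tagName } }
      }
    }
  }
}
"""

def fetch_plugin_repos(token):
    response = httpx.post(
        "https://api.github.com/graphql",
        json={"query": QUERY},
        headers={"Authorization": "bearer {}".format(token)},
    )
    response.raise_for_status()
    return response.json()["data"]["search"]["nodes"]

if __name__ == "__main__":
    for repo in fetch_plugin_repos(os.environ["GITHUB_TOKEN"]):
        print(repo["nameWithOwner"], repo["releases"]["totalCount"])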

The script has access to a copy of the current site database, which is downloaded on each deploy by the build script. It uses this to check if any of the repositories have new releases that haven't previously been seen by the script.

Then it runs the github-to-sqlite releases command (part of github-to-sqlite) to fetch details of those new releases.

The end result is a database of repositories and releases for all of my tagged plugins. The plugin directory is then built against a custom SQL view.

Other site content

The rest of the site content is mainly static template files. I use the render_markdown() function inline in some of them so I can author in Markdown rather than HTML - here's the template for the /examples page. The various Use cases for Datasette pages are likewise built as static templates.

Also this week: sqlite-utils analyze-tables

My other big project this week has involved building out a Datasette instance for a client. I'm working with over 5,000,000 rows of CSV data for this, which has been a great opportunity to push the limits of some of my tools.

Any time I'm working with new data I like to get a feel for its general shape. Having imported 5,000,000 rows with dozens of columns into a database, what can I learn about the columns beyond just browsing them in Datasette?

sqlite-utils analyze-tables (documented here) is my new tool for doing just that. It loops through every table and every column in the database, and for each column it calculates statistics that include:

The total number of distinct values
The total number of null or blank values
For non-distinct columns, the 10 most common and 10 least common values
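As a rough illustration of the kind of per-column queries involved (my own sketch against the standard sqlite3 module, not the sqlite-utils implementation; the table and column names are placeholders):

# Sketch of gathering the statistics listed above for a single column.
import sqlite3

def analyze_column(conn, table, column, limit=10):
    distinct = conn.execute(
        f"select count(distinct [{column}]) from [{table}]"
    ).fetchone()[0]
    null_or_blank = conn.execute(
        f"select count(*) from [{table}] where [{column}] is null or [{column}] = ''"
    ).fetchone()[0]
    most_common = conn.execute(
        f"select [{column}], count(*) as n from [{table}] group by [{column}] order by n desc limit ?",
        (limit,),
    ).fetchall()
    least_common = conn.execute(
        f"select [{column}], count(*) as n from [{table}] group by [{column}] order by n asc limit ?",
        (limit,),
    ).fetchall()
    return {
        "distinct": distinct,
        "null_or_blank": null_or_blank,
        "most_common": most_common,
        "least_common": least_common,
    }

conn = sqlite3.connect("data.db")                 # placeholder database
print(analyze_column(conn, "rows", "category"))   # placeholder table/column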

It can output those to the terminal, or if you add the --save option it will also save them to a SQLite table called _analyze_tables_ - here's that table for my github-to-sqlite demo instance.

I can then use the output of the tool to figure out which columns might be a primary key, or which ones warrant being extracted out into a separate lookup table using sqlite-utils extract.

I expect I'll be expanding this feature a lot in the future, but I'm already finding it to be really helpful.

Datasette 0.53

I pushed out a small feature release of Datasette to accompany the new project website. Quoting the release notes:

New ?column__arraynotcontains= table filter. (#1132)
datasette serve has a new --create option, which will create blank database files if they do not already exist rather than exiting with an error. (#1135)
New ?_header=off option for CSV export which omits the CSV header row, documented here. (#1133)
"Powered by Datasette" link in the footer now links to https://datasette.io/. (#1138)
Project news no longer lives in the README - it can now be found at https://datasette.io/news. (#1137)

Office hours

I had my first round of Datasette office hours on Friday - 20 minute video chats with anyone who wants to talk to me about the project. I had five great conversations - it's hard to overstate how thrilling it is to talk to people who are using Datasette to solve problems. If you're an open source maintainer I can thoroughly recommend giving this format a try.

Releases this week

datasette-publish-fly: 1.0.1 - 2020-12-12
Datasette plugin for publishing data using Fly

datasette-auth-passwords: 0.3.3 - 2020-12-11
Datasette plugin for authentication using passwords

datasette: 0.53 - 2020-12-11
An open source multi-tool for exploring and publishing data

datasette-column-inspect: 0.2a - 2020-12-09
Experimental plugin that adds a column inspector

datasette-pretty-json: 0.2.1 - 2020-12-09
Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays

yaml-to-sqlite: 0.3.1 - 2020-12-07
Utility for converting YAML files to SQLite

datasette-seaborn: 0.2a0 - 2020-12-07
Statistical visualizations for Datasette using Seaborn

TIL this week

Using custom Sphinx templates on Read the Docs
Controlling the style of dumped YAML using PyYAML
Escaping a SQL query to use with curl and Datasette
Skipping CSV rows with odd numbers of quotes using ripgrep

Saturday, 12. December 2020

Altmode

Photovoltaic system updates


This past spring, I noticed that our 20 year-old wooden shake roof needed serious work. The roof condition, combined with all of the recent wildfire activity in California, prompted us to replace the entire roof with asphalt shingles. This, of course, necessitated the removal and replacement of the solar panels we had installed in 2006.

In anticipation of doing this, we consulted with our local contractor, Solar Lightworkers, to see what might be done to update the system as well as to add a bit of extra capacity since we now have an electric car. Photovoltaic technology has advanced quite a bit in the past 14 years, so we wanted to take as much advantage of that as possible while reusing components from our existing system. As described earlier, our system had consisted of 24 200-watt Sanyo panels, with half of the panels facing south and half facing west. Because these two arrays peaked at different times of day, we had two inverters to optimize the output of each array.

Design

SolarEdge inverter

Mark from Solar Lightworkers strongly recommended a SolarEdge inverter that uses optimizers to minimize the impact of shading of some of the panels on the overall system output. This also compensates for the fact that different panels have maximum output at different times of day. As a result, a single inverter is sufficient for our new system. We also added four 360-watt LG panels to increase our capacity. This SolarEdge inverter is also capable of battery backup, but we haven’t opted into that yet.

Since our original installation, building codes had changed a bit, requiring that the panels be installed at least 3 feet below the peak of the roof. This made us rethink the layout of the existing panels. When we did the original installation, we were concerned about the aesthetics of the panels on the front of the house. But since that time, so many other houses in our area have installed solar panels that we weren’t as concerned about the appearance of panels on the front (south) side of the house. We still have some panels facing west, because they seem to be nearly as efficient economically as those facing south, due to time-of-use electricity pricing.

South-facing solar panels

Data Collection

I have enjoyed collecting data from our photovoltaic system, and have done so more or less continuously since the original system was installed, using a serial interface from one of my computers to the inverters. I wanted to continue that. The SolarEdge inverter comes with a variety of interfaces through which it can send data to SolarEdge’s cloud service, which I can view on their website. Wanting more detailed information, I found that they provide an API through which I can get data very comparable to what I got from the old inverters, and continue to analyze the data locally (as well as using their facilities, which are very good).
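For anyone curious what that kind of local collection can look like, here is a hypothetical sketch against SolarEdge's public monitoring API; the endpoint, parameters and response shape are assumptions based on SolarEdge's published API documentation, not the author's actual setup.

# Hypothetical sketch: pull daily energy production for local analysis.
# SITE_ID and API_KEY are placeholders; endpoint and parameter names are
# assumptions based on SolarEdge's public monitoring API docs.
import os
import httpx

SITE_ID = os.environ["SOLAREDGE_SITE_ID"]
API_KEY = os.environ["SOLAREDGE_API_KEY"]

def daily_energy(start_date, end_date):
    response = httpx.get(
        "https://monitoringapi.solaredge.com/site/{}/energy".format(SITE_ID),
        params={
            "timeUnit": "DAY",
            "startDate": start_date,  # e.g. "2020-11-01"
            "endDate": end_date,
            "api_key": API_KEY,
        },
    )
    response.raise_for_status()
    return response.json()["energy"]["values"]

for reading in daily_energy("2020-11-01", "2020-11-30"):
    print(reading["date"], reading["value"])  # value is reported in Wh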

One of the unexpected benefits of the SolarEdge optimizers is the ability to see the performance of each panel individually. It turns out that one of the old panels had a power output almost exactly half of the others. I’m not sure how long that had been going on; perhaps since 2006. I found that the panels have a 20-year output warranty, so I contacted Panasonic, which had acquired the Sanyo product line, and filled out some paperwork and sent pictures. They sent me three very similar panels (replacing two panels with cosmetic defects as well as the one with low output) soon after. I was very happy with the service from Panasonic. Solar Lightworkers installed the new panels, and output is where it should be.

Performance

On a typical summer day with little shading, the system generated 23.7 kWh on 8/30/2019 and 34.8 kWh (+47%) on 8/27/2020. The additional panels would account for 30% of that increase and the defective panel an additional 2%. In the late fall, the old system generated 14.6 kWh on 11/25/2019, and the new system 22.9 kWh (+57%) on 11/26/2020. There are of course other variables, such as soot on the panels from the California wildfires this year.
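A quick back-of-the-envelope check of those figures, using the panel counts and wattages mentioned earlier in the post:

# Capacity arithmetic behind the "+47%" and "30%" statements above.
old_capacity_w = 24 * 200   # original Sanyo panels: 4800 W
added_capacity_w = 4 * 360  # new LG panels: 1440 W

print(added_capacity_w / old_capacity_w)  # 0.30 -> the ~30% attributable to the added panels
print(round(34.8 / 23.7 - 1, 2))          # 0.47 -> the summer increase
print(round(22.9 / 14.6 - 1, 2))          # 0.57 -> the late-fall increase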

It will take quite a while for the increased output to pay for the upgrades, of course, but much of that cost would have been incurred just as a result of the need to replace the roof. We are quite pleased with the performance of the new system.

Friday, 11. December 2020

Tim Bouma's Blog

Public Sector Profile of the Pan-Canadian Trust Framework Version 1.2 and Next Steps

The Public Sector Profile of the Pan-Canadian Trust Framework Working Group Close-Out Report

Public Sector Profile of the Pan-Canadian Trust Framework Version 1.2

Note: This post reflects the views of the author, based on knowledge and experience gained at the time. The author recognizes that there may be errors and biases, and welcomes constructive feedback to correct or ameliorate them.

Additional context: This post is based on the report and presentation that was provided on December 10, 2020, to the newly-formed Jurisdictional Experts on Digital Identity (JEDI), the committee responsible for public sector governance for digital identity.

The consultation draft of the Public Sector Profile of the Pan-Canadian Trust Framework Version 1.2 is now available and directly downloadable at this link. The folder with related artifacts is available here.

The remainder of this post is the content of the report, lightly edited for Medium.

Objective of the PSP PCTF Working Group (PSP PCTF WG)

The primary objective of the PSP PCTF WG had been the development of the Public Sector Profile of the Pan-Canadian Trust Framework (PSP PCTF). This has been achieved by contributing and reviewing content, attaining the consensus of the public sector jurisdictions, and monitoring related developments that might impact the development of the PSP PCTF.

The main deliverable of the PSP PCTF WG has been the PSP PCTF, the various versions of which consist of a consolidated overview document, an assessment methodology, and an assessment worksheet.

The PSP PCTF WG has also facilitated other activities such as:

Sharing information, updates, and lessons learned from various digital identity initiatives; and
Consultation and engagement with multi-jurisdictional and international fora.

Membership

At its dissolution, the PSP PCTF WG had 111 confirmed members on its distribution list consisting of representatives from all jurisdictions and various municipalities across Canada, as well as international participants from the Digital Nations. The working group normally met on a weekly call that averaged 20 to 30 participants.

Achievements

PSP PCTF Deliverables

The PSP PCTF Version 1.2 is now available at: https://github.com/canada-ca/PCTF-CCP. It should be noted that this has been the iterative product of several prior versions:

April 2018: The Public Sector Profile of the Pan-Canadian Trust Framework Alpha Version — Consolidated Overview document;
July 2019: The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.0 — Consolidated Overview document;
June 2020: The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.1 — Consolidated Overview document; and
For each of these versions of the PSP PCTF, a companion PSP PCTF Assessment Worksheet consisting of approximately 400 conformance criteria.

PSP PCTF Assessments

The PSP PCTF was used in the following assessments conducted by the federal government to accept trusted digital identities from the provinces of Alberta and British Columbia:

September 2018: Assessment and Acceptance of the MyAlberta Digital Identity (MADI) Program for use by the Government of Canada (using the PSP PCTF Alpha Version); and
January 2020: Assessment and Acceptance of the British Columbia Services Card Program for use by the Government of Canada (using the PSP PCTF Version 1.0).

Insights and lessons learned from the application of these PSP PCTF assessments were brought back to the PSP PCTF WG and the learnings were incorporated into subsequent versions of the PSP PCTF.

Joint Council Briefings

The PSP PCTF is the result of a long-term and deep collective experience of the public sector. Efforts on the PSP PCTF began in late 2014 and have been reported regularly to the Joint Councils by the Identity Management Sub-Committee (IMSC) Working Group and its successor, the PSP PCTF Working Group. The following is the list of updates that are on record and are available for reference in the joint-councils-update folder (GitHub link):

February 2017 — Joint Councils Update;
October 2017 — Joint Councils Update;
February 2018 — Joint Councils Update;
September 2018 — Joint Councils Update; Whitehorse Declaration and MADI Update;
February 2019 — Joint Councils Update; and
February 2020 — Joint Councils Update.

Related Deliverables

In addition to the PSP PCTF itself, the following related deliverables should be noted:

Whitehorse Declaration — a declaration of shared intent among the federal, provincial, territorial, and municipal governments to pursue the establishment of trustworthy digital identities for all Canadians (GitHub link);
IMSC Public Policy Paper — recommendations for a Pan-Canadian policy position on the question of roles and responsibilities of the public and private sector in digital identity (GitHub link); and
Many historical deliverables that are too numerous to list in this report. A Public Historical Archive of deliverables and briefings, many of which pre-date the efforts of the PSP PCTF, is being compiled in a folder on a best-effort basis (GitHub link).

Other

It also should be noted that content from the PSP PCTF Version 1.1 was incorporated into the National Standard of Canada, CAN/CIOSC 103–1, Digital Trust and Identity — Part 1: Fundamentals, developed by the CIO Strategy Council, and approved by the Standards Council of Canada (Website link).

PSP PCTF WG Work Plan 2020–2021

At the time of its dissolution, the work plan of the PSP PCTF WG was as follows:

PSP PCTF Version 1.2

1. A Consolidated Overview document (released on December 4th, 2020) which includes:
A revised Normative Core (containing new concepts that were developed as a result of the credentials and relationships analysis work);
A revised Credential Model (based on the working group discussion document); and
An incorporated Relationship Model (based on work led by ISED).

2. An Assessment Worksheet (draft released on December 4, 2020) which contains new and revised conformance criteria for assessment purposes

3. A re-assessment of the MyAlberta Digital Identity (MADI) Program for use by the Government of Canada (using the PSP PCTF Version 1.2) with planned completion by March 2021.

PSP PCTF Thematic Issues

During the development of the PSP PCTF, the working group has identified several high-level thematic issues that must be addressed in order to advance the digital ecosystem.

Thematic Issue 1: Relationships (Priority: High)

The development of a relationship model is required.

This issue has been initially addressed in the PSP PCTF Version 1.2 Consolidated Overview document released in December 2020.

Thematic Issue 2: Credentials (Priority: High)

The development of a generalized credential model is required. This model should integrate traditional physical credentials and authentication credentials with the broader notion of a verifiable credential.

This issue has been initially addressed in the PSP PCTF Version 1.2 Consolidated Overview document released in December 2020.

Thematic Issue 3: Unregistered Organizations (Priority: High)

Currently, the scope of PSP PCTF includes all organizations registered in Canada (including inactive organizations) for which an identity has been established in Canada. There are also many kinds of unregistered organizations operating in Canada such as sole proprietorships, trade unions, co-ops, NGOs, unregistered charities, and trusts. An analysis of these unregistered organizations needs to be undertaken.

Thematic Issue 4: Informed Consent (Priority: High)

The current version of the PSP PCTF Consolidated Overview document does not adequately capture all the issues and nuances surrounding the topic of informed consent especially in the context of the public sector. A more rigorous exploration of this topic needs to be done.

Thematic Issue 5: Privacy Concerns (Priority: Medium)

In regards to the Identity Continuity and Relationship Continuity atomic processes, it has been noted that there are privacy concerns with the notion of dynamic confirmation. Further analysis based on feedback from the application of the PSP PCTF is required to determine if these atomic processes are appropriate.

Thematic Issue 6: Assessing Outsourced Atomic Processes (Priority: Medium)

The PSP PCTF does not assume that a single Issuer or Verifier is solely responsible for all of the atomic processes. An organization may choose to outsource or delegate the responsibility of an atomic process to another party. Therefore, several bodies might be involved in the PSP PCTF assessment process, focusing on different atomic processes, or different aspects (e.g., security, privacy, service delivery). It remains to be determined how such multi-actor assessments will be conducted.

Thematic Issue 7: Scope of the PSP PCTF (Priority: Low)

It has been suggested that the scope of the PSP PCTF should be broadened to include academic qualifications, professional designations, etc. The PSP PCTF anticipates extensibility through the generalization of the PSP PCTF model and the potential addition of new atomic processes. Expanding the scope of the PSP PCTF into other domains needs to be studied.

Thematic Issue 8: Signature (Priority: Low)

The concept of signature as it is to be applied in the context of the PSP PCTF needs to be explored.

Thematic Issue 9: Foundation Name, Primary Name, Legal Name (Priority: Low)

The PSP PCTF has definitions for Foundation Name, Primary Name, and Legal Name. Since the three terms mean the same thing, a preferred term should be selected and used consistently throughout the PSP PCTF documents.

Thematic Issue 10: Additional Detail (Priority: Low)

It has been noted that the PSP PCTF Consolidated Overview document contains insufficient detail in regards to the specific application of the PSP PCTF. The PSP PCTF Consolidated Overview document needs to be supplemented with detailed guidance in a separate document.

Thematic Issue 11: Review of the Appendices (Priority: Low)

A review of the current appendices contained in the PSP PCTF Consolidated Overview document needs to be undertaken. Each appendix should be evaluated for its utility, applicability, and appropriateness, and a determination made as to whether it should continue to be included in the document.

Recommendations for Next Steps

Continue the development of the PSP PCTF based on the thematic issues identified above. These thematic issues may be addressed as part of a working group, or through task groups, or practice groups.

Continue the application of the PSP PCTF through the Assessment Process with the Provinces and Territories, with a view to incorporating learnings back into subsequent versions of the PSP PCTF, and evolving the assessment process toward a standards-based process that has a formal certification scheme with accredited bodies and independent assessors.

Support the changes in digital identity governance to ensure that the PSP PCTF is developed and used in the public interest and is aligned with other industry and international efforts.

Establish, as required, working groups, task groups, or practice groups for:
Ongoing development and maintenance of the PSP PCTF and related assessment processes and certification schemes;
Carrying out specific time-bound tasks or addressing issues (e.g., addressing the thematic themes through discussion papers, analysis of other trust frameworks, etc.);
Testing practical applications of the PSP PCTF standards and conformance criteria through assessments and use cases; and
Sharing knowledge and lessons learned in relation to the application of the PSP PCTF and the assessment process.

Facilitate broader engagement using the PSP PCTF, including:
Engaging standards development organizations, domestic and international, to support the standards development and certification scheme development;
Engaging international organizations having an interest in applying or adapting the PSP PCTF for their purposes;
Collaborating with industry associations wishing to advance the aims of their membership, or their specific sector; and
Encouraging dialogue with other governments, either bilaterally facilitated through the federal government, or multilaterally through established bodies (e.g., UNCITRAL, the Digital Nations).

Conclusion

At the time of its dissolution, the PSP PCTF WG was an important vehicle for ensuring public sector communication and discussion across Canada in order to cultivate a shared understanding of how identity and digital identity could be best developed for the country.

Much has been achieved by the working group, building on prior work going back more than a decade. However much more work remains. It is hoped that the work accomplished to date and the recommendations put forward in this report will be considered by the JEDI to support their mandate to accelerate the specific goals of the digital identity priority of the Joint Councils.


Identity Woman

MyData Talk: From Data Protection to Data Empowerment (not an easy path) for the Technology Pragmatist


This is the edited text of a talk that I gave during the first plenary session of the MyData Online 2020 Conference. I was asked relatively last minute to join this session which was headlined by Siddharth Shetty talking about Designing the new normal: India Stack. In 2019 I was a New America India-US Public […]

The post MyData Talk: From Data Protection to Data Empowerment (not an easy path) for the Technology Pragmatist appeared first on Identity Woman.


Simon Willison

Quoting Michael Malis


If you are pre-product market fit it's probably too early to think about event based analytics. If you have a small number of users and are able to talk with all of them, you will get much more meaningful data getting to know them than if you were to set up product analytics. You probably don't have enough users to get meaningful data from product analytics anyways.

Michael Malis


datasette.io


datasette.io

Datasette finally has an official project website, three years after the first release of the software. I built it using Datasette, with custom templates to define the various pages. The site includes news, latest releases, example sites and a new searchable plugin directory.

Via @simonw

Thursday, 10. December 2020

Simon Willison

Deno 1.6 Release Notes


Deno 1.6 Release Notes

Two signature features in Deno 1.6 worth paying attention to: a built-in language server for code editors like VS Code, and the "deno compile" command which can build Deno JavaScript/TypeScript projects into standalone binaries. The ability to build binaries has turned out to be a killer feature of both Go and Rust, so seeing it ship as a default capability of an interpreted dynamic language is fascinating. I would love it if Python followed Deno's example.

Wednesday, 09. December 2020

Doc Searls Weblog

Is Flickr in trouble again?


December 10, 2020: This matter has been settled now, meaning Flickr appears not to be in trouble, and my account due for renewal will be automatically renewed. I’ve appended what settled the matter to the bottom of this post. Note that it also raises another question, about subscriptions. — Doc

I have two Flickr accounts, named Doc Searls and Nfrastructure. One has 73,355 photos, and the other 3,469. They each cost $60/year to maintain as pro accounts. They’ve both renewed automatically in the past; and the first one is already renewed, which I can tell because it says “Your plan will automatically renew on March 20, 2022.”

The second one, however… I dunno. Because, while my Account page says “Your plan will automatically renew on December 13, 2020,” I just got emails for both accounts saying, “This email is to confirm that we have stopped automatic billing for your subscription. Your subscription will continue to be active until the expiration date listed below. At that time, you will have to manually renew or your subscription will be cancelled.” The dates match the two above. At the bottom of each, in small print, it says “Digital River Inc. is the authorized reseller and merchant of the products and services offered within this store. Privacy Policy Terms of Sale Your California Privacy Rights.”

Hmmm. The Digital River link goes here, which appears to be in Ireland. A look at the email’s source shows the mail server is one in Kansas, and the Flickr.com addressing doesn’t look spoofed. So, it doesn’t look too scammy to me. Meaning I’m not sure what the scam is. Yet. If there is one.

Meanwhile, I do need to renew the subscription, and the risk of not renewing it is years of contributions (captions, notes, comments) out the window.

So I went to “Manage your Pro subscription” on the second one (which has four days left to expiration), and got this under “Update your Flickr Pro subscription information”

Plan changes are temporarily disabled. Please contact support for prompt assistance.

Cancel your subscription

The Cancel line is a link. I won’t click on it.

Now, I have never heard of a company depending on automatic subscription renewals switching from those to the manual kind. Nor have I heard of a subscription-dependent company sending out notices like these while the renewal function is disabled.

I would like to contact customer support; but there is no link for that on my account page. In fact, the words “customer” and “support” don’t appear there. “Help” does, however, and goes to https://help.flickr.com/, where I need to fill out a form. This I did, explaining,

I am trying to renew manually, but I get “Plan changes are temporarily disabled. Please contact support for prompt assistance.” So here I am. Please reach out. This subscription expires in four days, and I don’t want to lose the photos or the account. I’m [email address] for this account (I have another as well, which doesn’t renew until 2022), my phone is 805-705-9666, and my twitter is @dsearls. Thanks!

The robot replied,

Thanks for your message – you’ll get a reply from a Flickr Support Hero soon. If you don’t receive an automated message from Flickr confirming we received your message (including checking your spam folders), please make sure you provided a valid and active email. Thanks for your patience and we look forward to helping you!

Me too.

Meanwhile, I am wondering if Flickr is in trouble again.

I wondered about this in 2011 and again in 2016, (in my most-read Medium post, ever). Those were two of the (feels like many) times Flickr appeared to be on the brink. And I have been glad SmugMug took over the Flickr show in 2018. (I’m a paying SmugMug customer as well.) But this kind of thing is strange and has me worried. Should I be?

[Later, on December 10…]

Heard from Flickr this morning, with this:

Hi Doc,

When we migrated your account to Stripe, we had to cancel your subscription on Digital River. The email you received was just a notice of this event. I apologize for the confusion.

Just to confirm, there is no action needed at this time. You have an active Pro subscription in good standing and due for renewal on an annual term on December 14th, 2020.

To answer your initial question, since your account has been migrated to Stripe, while you can update your payment information, changes to subscription plans are temporarily unavailable. We expect this functionality to be restored soon.

I appreciate your patience and hope this helps.

For more information, please consult our FAQ here: https://help.flickr.com/faq-for-flickr-members-about-our-payment-processor-migration-SyN1cazsw

Before this issue came up, I hadn’t heard of Digital River or Stripe. Seems they are both “payment gateway” services (at least according to Finances Online). If you look down the list of what these companies can do, other than payment processing alone—merchandising, promotions, channel partner management, dispute handling, cross-border payment optimization, in-app solutions, risk management, email services, and integrations with dozens of different tools, products and extensions from the likes of Visa, MasterCard, Sage and many other companies with more obscure brand names—you can understand how a screw-up like this one can happen when moving from one provider to another.

Now the question for me is whether subscription systems really have to be this complex.

(Comments here only work for Harvard people; so if you’re not one of those, please reply elsewhere, such as on Twitter, where I’m @dsearls.)


DustyCloud Brainstorms

Identity is a Katamari, language is a Katamari explosion


I said something strange this morning:

Identity is a Katamari, language is a continuous reverse engineering effort, and thus language is a quadratic explosion of Katamaris.

This sounds like nonsense probably, but has a lot of thought about it. I have spent a lot of time in the decentralized-identity community and the ocap communities, both of which have spent a lot of time hemming and hawing about "What is identity?", "What is a credential or claim?", "What is authorization?", "Why is it unhygienic for identity to be your authorization system?" (that mailing list post is the most important writing about the nature of computing I've ever written; I hope to have a cleaned up version of the ideas out soon).

But that whole bit about "what is identity, is it different than an identifier really?" etc etc etc...

Well, I've found one good explanation, but it's a bit silly.

Identity is a Katamari

There is a curious, surreal, delightful (and proprietary, sorry) game, Katamari Damacy. It has a silly story, but the interesting thing here is the game mechanic, involving rolling around a ball-like thing that picks up objects and grows bigger and bigger kind of like a snowball. It has to be seen or played to really be understood.

This ball-like thing is called a "Katamari Damacy", or "soul clump", which is extra appropriate for our mental model. As it rolls around, it picks up smaller objects and grows bigger. The ball at the center is much like an identifier. But over time that identifier becomes obscured, it picks up things, which in the game are physical objects, but these metaphorically map to "associations".

Our identity-katamari changes over time. It grows and picks up associations. Sometimes you forget something you've picked up that's in there, it's buried deep (but it's wiggling around in there still and you find out about it during some conversation with your therapist). Over time the katamari picks up enough things that it is obscured. Sometimes there are collisions, you smash it into something and some pieces fly out. Oh well, don't worry about it. They probably weren't meant to be.

Language is reverse engineering

Shout out to my friend Jonathan Rees for saying something that really stuck in my brain (okay actually most things that Rees says stick in my brain):

"Language is a continuous reverse engineering effort, where both sides are trying to figure out what the other side means."

This is true, but its truth is the bane of ontologists and static typists. This doesn't mean that ontologies or static typing are wrong, but that the notion that they're fixed is an illusion... a useful, powerful illusion (with a great set of mathematical tools behind it sometimes that can be used with mathematical proofs... assuming you don't change the context), but an illusion nonetheless. Here are some examples that might fill out what I mean:

The classic example, loved by fuzzy typists everywhere: when is a person "bald"? Start out with a person with a "full head" of hair. How many hairs must you remove for that person to be "bald"? What if you start out the opposite way... someone is bald... how many hairs must you add for them to become not-bald?

We might want to construct a precise recipe for a mango lassi. Maybe, in fact, we believe we can create a precise typed definition for a mango lassi. But we might soon find ourselves running into trouble. Can a vegan non-dairy milk be used for the Lassi? (Is vegan non-dairy milk actually milk?) Is ice cream acceptable? Is added sugar necessary? Can we use artificial mango-candy powder instead of mangoes? Maybe you can hand-wave away each of these, but here's something much worse: what's a mango? You might think that's obvious, a mango is the fruit of mangifera indica or maybe if you're generous fruit of anything in the mangifera genus. But mangoes evolved and there is some weird state where we had almost-a-mango and in the future we might have some new states which are no-longer-a-mango, but more or less we're throwing darts at exactly where we think those are... evolution doesn't care, evolution just wants to keep reproducing.

Meaning changes over time, and how we categorize does too. Once someone was explaining the Web Ontology Language (which got confused somewhere in its acronym ordering and is shortened to OWL (update: it's a Winnie the Pooh update, based on the way the Owl character spells his name... thank you Amy Guy for informing me of the history)). They said that it was great because you could clearly define what is and isn't allowed and terms derived from other terms, and that the simple and classic example is Gender, which is a binary choice of Male or Female. They paused and thought for a moment. "That might not be a good example anymore."

Even if you try to define things by their use or properties rather than as an individual concept, this is messy too. A person from two centuries ago would be confused by the metal cube I call a "stove" today, but you could say it does the same job. Nonetheless, if I asked you to "fetch me a stove", you would probably not direct me to a computer processor or a car engine, even though sometimes people fry an egg on both of these.

Multiple constructed languages (Esperanto most famously) have been made by authors that believed that if everyone spoke the same language, we would have world peace. This is a beautiful idea, that conflict comes purely from misunderstandings. I don't think it's true, especially given how many fights I've seen between people speaking the same language. Nonetheless there's truth in that many fights are about a conflict of ideas.

If anyone was going to achieve this though, it would be the Lojban community, which actually does have a language which is syntactically unambiguous, so you no longer have ambiguity such as "time flies like an arrow". Nonetheless, even this world can't escape the problem that some terms just can't be easily pinned down, and the best example is the bear goo debate.

Here's how it works: both of us can unambiguously construct a sentence referring to a "bear". But when is that bear no longer a bear? If it is struck in the head and is killed, when in that process has it become a decompositional "bear goo" instead? And the answer is: there is no good answer. Nonetheless many participants want there to be a pre-defined bear, they want us to live in a pre-designed universe where "bear" is a clear predicate that can be checked against, because the universe has a clear definition of "bear" for us.

That doesn't exist, because bears evolved. And more importantly, the concept and existence of a bear is emergent, cutting across many different domains, from evolution to biology to physics to linguistics.

Sorry, we won't achieve perfect communication, not even in Lojban. But we can get a lot better, and set up a system with fewer stumbling blocks for testing ideas against each other, and that is a worthwhile goal.

Nonetheless, if you and I are camping and I shout, "AAH! A bear! RUN!!", you and I probably don't have to stop to debate bear goo. Rees is right that language is a reverse engineering effort, but we tend to do a pretty good job of gaining rough consensus of what the other side means. Likewise, if I ask you, "Where is your stove?", you probably won't lead me to your computer or your car. And if you hand me a "sugar free vegan mango lassi made with artificial mango flavor" I might doubt its cultural authenticity, but if you then referred to the "mango lassi" you had just handed me a moment ago, I wouldn't have any trouble continuing the conversation. Because we're more or less built to contextually construct language contexts.

Language is a quadratic explosion of Katamaris

Language is composed partly of syntax, but mostly of the arrangement of symbolic terms. Or, to put it another way, the non-syntactic elements of language are mostly there as identifiers, substituted mentally for identity and all the associations therein.

Back to the Katamari metaphor. What "language is a reverse-engineering effort" really means is that each of us are constructing identities for identifiers mentally, rolling up katamaris for each identifier we encounter. But what ends up in our ball will vary depending on our experiences and what paths we take.

Which really means that if each person is rolling up a separate, personal identity-katamari for each identifier in the system, that means that, barring passing through a singularity type event-horizon past which participants can do direct shared memory mapping, this is an O(n^2) problem!

But actually this is not a problem, and is kind of beautiful. It is amazing, given all that, just how good we are at finding shared meaning. But it also means that we should be aware of what this means topologically, and that each participant in the system will have a different set of experiences and understanding for each identity-assertion made.

Thank you to Morgan Lemmer-Webber, Stephen Webber, Corbin Simpson, Baldur Jóhannsson, Joey Hess, Sam Smith, Lee Spector, and Jonathan Rees for contributing thoughts that lead to this post (if you feel like you don't belong here, do belong here, or are wondering how the heck you got here, feel free to contact me). Which is not to say that everyone, from their respective positions, have agreement here; I know several disagree strongly with me on some points I've made. But everyone did help contribute to reverse-engineering their positions against mine to help come to some level of shared understanding, and the giant pile of katamaris that is this blogpost.


Simon Willison

The case against client certificates


The case against client certificates

Colm MacCárthaigh provides a passionately argued Twitter thread about client certificates and why they should be avoided. I tried using them as an extra layer of protection for my personal Dogsheep server and ended up abandoning them - certificate management across my devices was too fiddly.

Via Thomas Ptacek


MyDigitalFootprint

Revising the S-Curve in an age of emergence

Exploring how the S-Curve can help us with leadership, strategy and decision making in an age of emergence (properties or behaviours which only emerge when the parts interact as part of an inclusive whole).

History and context

There is a special place in our business hearts and minds for the "S" curve or, to call it by its proper maths name, the Sigmoid function. The origin of the S curve goes back to the study of population growth by Pierre-François Verhulst c.1838. Verhulst was influenced by Thomas Malthus' "An Essay on the Principle of Population", which showed that the growth of a biological population is self-limited by the finite amount of available resources. The logistic equation is also sometimes called the Verhulst-Pearl equation following its rediscovery in 1920. Alfred J. Lotka derived the equation again in 1925, calling it the law of population growth, but he is better known for his predator-prey model.
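As a reminder of the underlying maths (my own illustration, not part of the original article), the logistic function is what produces that slow-start, rapid-growth, saturation shape:

# The Verhulst/logistic curve behind the "S" shape: the solution of
# dP/dt = r * P * (1 - P / K), which saturates at the carrying capacity K.
import math

def logistic(t, K=1.0, r=1.0, t0=0.0):
    return K / (1 + math.exp(-r * (t - t0)))

for t in range(-6, 7, 2):
    print(t, round(logistic(t), 3))  # climbs from near 0 towards K as t increases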

In 1957 business strategists Joe Bohlen and George Beal published the Diffusion Process, taking the adoption curve and cumulatively adding the take-up of the product to produce a "classic S curve."


The market adoption curve became the basis for explaining innovation and growth as a broader market economic concept by the late 1960s. We started to consider the incubation of ideas to create new businesses and how we need a flow of products/ services within big companies.

From this thinking emerged two concepts, as the shareholder primacy model became central to growth. The first is the concept of "Curve Jumping": ensuring that you continue growth by keeping shareholders happy through the continuous introduction of new products as the existing ones mature. Of course, the downside is that if a business cannot jump, because of its current cost base or its inability to adopt the latest technology to perpetuate the ascension of the curve, new companies will emerge with competitive advantages (product or cost) as they jump to new technologies. Milton Friedman's emphasis on shareholder value maximisation at the expense of other considerations was driving companies to keep up with the next curve for fear of being left behind competitively. Some sorts of competition are healthier for markets than others, and it appears that competition and anxiety relating to retaining technology leadership at all costs have been driving capitalism in a particularly destructive direction, rather than encouraging useful, sustainable and friendly innovation. There is an economics essay to be written here, but this piece is about the S curve.


 

Right here, right now.

We live in a time when crisis and systemic “emergent properties” are gaining attention and prominence. Emergence, by definition, occurs when an entity or system is observed to have properties that its parts do not possess or display on their own. Properties or behaviours only emerge when the parts interact in the broader system, which is why we understand our businesses as complex adaptive systems.

Whilst shareholder primacy as an economic driver faded from 1990, to be replaced finally in 2019 by Colin Mayer’s work on the Purpose Model (a modern standard for corporate responsibility which makes an equal commitment to all stakeholders), its simplicity has remained a stalwart of leadership training, teaching and, therefore, management thinking. Its simplicity meant we did not have to deal with the contradictions and conflicting requirements that a broader purpose would expose. The Business Round Table’s August 2019 statement and BlackRock CEO Larry Fink’s letters to CEOs and shareholders are critical milestones in turning thinking away from pure shareholder returns as the reason for a business to exist. The shift is towards ecosystems and ESG (environmental sustainability, social responsibility and better oversight and governance) as primary drivers. The FCA Stewardship Code, Section 172 of the Companies Act and decision reporting are some of the first legislative instruments on this journey. With now over 50 Series A funded startups active in ESG reporting, impact investing has become a meme as the development of more standardised and comprehensive purpose reporting has strengthened.

Shareholder primacy’s simplicity meant we did not have to deal with contradictions and conflicting requirements that a broader purpose would expose.

With this new framing, it is time to revisit the S-Curve. 

Framing the S-Curve for an evolutionary journey

If you have not yet discovered Simon Wardley and his mapping thinking, stop here and watch this.   Simon has a brilliant S-Curve with pioneers, settlers and town planners, really worth looking up. His model is about evolution (journey towards commodity) rather than diffusion (take up over time).  To quote Simon “The evolution of a single act from genesis to commodity may involve hundreds if not thousands of diffusion curves for each evolving instance of that act, each with their own chasm.”

In the next S-Curve below, I am building on Simon’s axes of Ubiquity (how common something is) and Certainty (the likelihood of the outcome being as determined), which give an evolution S-Curve. On these axes we are plotting the development of the system and the company - time is not present, but as we will see, we have to remove time from the framing.

Starting in the bottom left is the activity of Innovation, where ubiquity is low, as it is an idea and it is not available to everyone. The certainty that any innovation will work is also low.

  


The top right corner is the perceived north star. Ubiquity is high, as everyone has it, and there is high certainty that it works. In this top right corner are commodities and utilities - technology at scale; for example, turning on the water tap and drinking water flows. Linking innovation and commodity is an evolution or journey S-Curve. Under this curve, we can talk about the transformation of the company, the company’s practices, data, controls and what models it will most likely utilise. The chart below highlights the most popular thinking in each stage and is certainly not exclusive. Agile works well in all phases, AI can be used anywhere except for choice, and data is not as definite as the buckets would suggest. Control changes as we evolve from lean/MVP in the first delivery, to methodologies such as agile and scrum, then Prince 2 as a grown-up project management tool at scale, and then towards quality management with 6 Sigma.

Note: I have a passionate dislike of the term “best practice”, as it only applies in the linear phase but is literally applied everywhere. At linear, you have evidence and data to support what “best” looks like. At any stage before ubiquity and certainty, best practice is simply not possible other than by lucking out. A desire for best practice ignores that you have to learn and prove what it is before you find out it is best. And to all those who have the best - you can still do better, so how can it be best?

If one considers the idea of time and S-Curves, you get to curve jumping or continual product development as set out earlier. The purpose of an evolution or journey S-Curve presented in this way is that, when time is not the axis, any significant business will have all these activities present at all times (continual adaptation/evolution, not diffusion). In nature, all different levels of species exist at the same time, from single cells to complex organisms. Innovation is not a thing; it is a journey, where you have to be in all the camps, on the route, at the same time.

Innovation is not a thing; it is a journey where you have to be in all the camps, on the route, at the same time.

Evolution S-Curve and governance

HBR argues that most capitalist markets are in a post-shareholder-primacy model, meaning the purpose of an organisation is up for debate. Still, we are on the route to a more inclusive purpose and reason for a company to exist. Law already exists in the UK, as Section 172 of the Companies Act, in the form of directors’ duties. The global pandemic has highlighted a number of significant weaknesses that emerge from our focus on growth and shareholders, including, as examples only:

Highly integrated long supply chains are incredibly efficient, but are very brittle and not resilient  - we lost effectiveness.

A business needs to re-balance towards effectiveness.  A food company in a pandemic exists to get food to customers (effectiveness) not to drive efficiency at any cost.

Ecosystem sustainability is more important than any single company's fortunes.

ESG, risk, being better ancestors, costing the earth and climate change are extremely difficult on your own.

Our existing risk models focus on resource allocation, safety and control. This framing means that new risks created in a digital-first world may be outside of the frame and therefore hidden in plain sight.

Given this framing and context, it is worth overlaying governance on the S-curve of start-up development, which we will now unpack.  

Governance has centrally focused on corporates and large companies who offer products and services to mass markets. By concentrating governance on companies that have scale, asking whether they are well managed and whether there is independence of oversight, we have framed governance as only of interest for companies whose behaviour matters to wider society. Indeed, it becomes a burden rather than a value.

Companies of scale tend to be found in the linear quadrant, top right, where growth is mainly incremental and linear. Regulation and markets focus on “BEST practices” which have been derived over a long period. The data used is highly modelled, and the application of AI creates new opportunities and value. Control is exercised through the utilisation of 6 Sigma for quality (repeatability) and other advanced program management techniques. KPIs enable the delegation of actions and their monitoring and control. The business model is that of exercising good or “best” decision making, based on resource allocation and risk.

Unpacking Corporate Governance is a broad and thorny topic, but foundations such as The Cadbury Report (1992) and the Sarbanes–Oxley Act (2002) have been instrumental in framing mandates.  However, governance, compliance and risk management became one topic in c.2007 and lost the clear separation of function.  Regulation has also formed an effective backstop to control behaviours and market abuse.  

The point is that when a company is at scale, we have created “best governance practices and guidance”, along with excellent risk frameworks and stewardship codes for investors. Many of the tools and methods have stood the test of time and provide confidence to the market. However, these tools and frameworks are designed for companies at scale. On the journey from startup to scale, the adoption of such heavyweight practices in early development would be overly burdensome for emergent companies and is not a good fit.

Remembering that any company of scale has all these phases present at the same time, there are five possible camps or phases where we need governance; three are in orange and two in yellow. The yellow blocks represent phases where there is a degree of stability, insomuch as there can be growth without a wholesale change in everything. The orange blocks represent phases where everything is changing. Yellow blocks indicate complicated oversight, whereas orange suggests complex.

To be clear, it is not that companies or markets in a linear phase are not complex; it is that management at linear has more certainty in terms of practices and forecasting, coupled with having to deal with less change. When there is a product or service at linear, it delivers significant, noisy signals and priorities that often overshadow any data or insights from other phases. Management at scale requires a focus on understanding the delta between the plan and the executed outcome and making minor adjustments to optimise.

Management during the yellow stable-growth camps/phases is complicated, as patterns and data will not always be that useful. Data will not be able to point directly to a definitive decision. Governance provides assurance and insight as management continually searches for the correct data on which to make decisions, which may not be there. Management during the orange highly volatile camps/phases is harder still, as you cannot rely on existing data during a transition between what you had and the new. Simply put, if you did, you would only get what you already have and not be able to move to the new. The idea of transition is that the old is left behind. Experienced leadership will show skill in seeking small signals amid the noise of the existing data. When considering governance through this dynamic lens, it is apparent that it becomes much more challenging and that we cannot rely on the wisdom and best practices of linear.

Plans at scale are more comfortable and more predictable; they are designed to enable the measurement of a delta. Plans during innovation are precisely the opposite, not easy and highly unpredictable.  Using the same word “plan” in both cases means we lose definition and distinction.  

A plan at scale is built on years of data and modelled to be highly reliable; it is accurate and has a level of detail that can create KPIs for reporting.  The plan and the model is a fantastic prediction tool.  

A plan at start-up and growth is about direction and intention. Failure to have one would be catastrophic, but within the first few hours of the words being committed to a shared document, the plan is out of date. To be useful, it has to lack precision, detail and measurement, but it will set out stages, actions and outcomes. It must set out a purpose, a direction and how to frame complex decisions.

Similarly, governance at scale is more comfortable and more predictable; governance is about understanding where and how delta might arise and be ready for it. Governance during innovation is precisely the opposite, not easy and highly unpredictable.  Using the same word “Governance” in both cases means we lose definition and distinction.

Using the same word “Governance” at scale and in startup cases means we lose definition and distinction.    

Complexity: Organisational mash-ups

Many businesses are mash-ups of previous transformations plus their current evolution. This observation has two ramifications. One, the structure, processes and skills are neither fully aligned to the original model nor to the various constructions of a new model. Two, data shows that when you categorise focus and alignment, those in the more senior positions, who have been in post longer, mostly have a compass or alignment coupled with a mash-up of a previous model; bluntly, they stopped on the evolution path, creating a dead end. Senior management who have a closed mindset, rather than an open and continually learning one, tend to fall back on the experience of previous best practices, models and pre-transformational ideals, adding a significant burden to governance at any stage. The idea that there is a direct coupling between innovation and KPI measurement, which makes it harder for corporates to innovate and evolve, is explored in this article.

All companies have an increasing dependence on ecosystems for their growth and survival. Ecosystem health is critical for companies at scale, for supply chains and customers. Companies who operate at scale and in the linear phase are therefore dependent on companies who are at different stages on a planned route to scale. Thus, not only is a scale company dealing with its internal governance and innovation requirements as already noted, but its directors have to understand data from an ecosystem that is also trying to understand what its data is telling it about its own evolution path.

Directors have to understand data from an ecosystem that is also trying to understand what its data is telling it about its own evolution path.

Governance is not about best practices and processes at any stage; it is about the mindset of an entire organisation and now its ecosystem. When you reflect on it, directors with governance responsibilities have to cope with data for decisions coming from chaotic and linear requirements at the same time, while relying on individuals and teams who have different perceptions both inside and outside of the organisation. Never has data sharing been more important as a concept, both as a tool and as a weapon (inaccurate data) in competitive markets. How can a director know that the data they get from their ecosystem can support their decision making and complex judgement?

Take Away

The S-curve has helped us on several journeys thus far. It supported our understanding of adoption and growth; it can now be critical in helping us understand the development and evolution of governance towards a sustainable future.  An evolutionary S-curve is more applicable than ever as we enter a new phase of emergence.  Our actions and behaviours emerge when we grasp that all parts of our ecosystem interact as a more comprehensive whole. 

A governance S-curve can help us unpack new risks in this dependent ecosystem so that we can make better judgements that lead to better outcomes. What is evident is that we need far more than proof, lineage and provenance of data from a wide ecosystem; if we are going to create better judgement environments, we need a new platform. Such a platform is my focus and why I am working on Digital20.


Tuesday, 08. December 2020

Simon Willison

Cameras and Lenses


Cameras and Lenses

Fabulous explorable interactive essay by Bartosz Ciechanowski explaining how cameras and lenses work.

Via @theavalkyrie

Saturday, 05. December 2020

Boris Mann's Blog


Picked up these “Heritage” beans from Pilot Coffee House at Propaganda.

I like this Classic <—> Adventurous scale. I am definitely on the more dark, rich “coffee” flavours rather than light and floral mango or whatever :)



It’s @duchesscosmo o’clock. Locally made, refreshingly tangy & not too sweet, and available for delivery in Vancouver.

Friday, 04. December 2020

Aaron Parecki

IndieAuth Spec Updates 2020


This year, the IndieWeb community has been making progress on iterating and evolving the IndieAuth protocol. IndieAuth is an extension of OAuth 2.0 that enables it to work with personal websites and in a decentralized environment.

There are already a good number of IndieAuth providers and apps, including a WordPress plugin, a Drupal module, and built-in support in Micro.blog, Known and Dobrado. Ideally, we'd have even more first-class support for IndieAuth in a variety of different blogging platforms and personal websites, and that's been the goal motivating the spec updates this year. We've been focusing on simplifying the protocol and bringing it more in line with OAuth 2.0 so that it's both easier to understand and also easier to adapt existing OAuth clients and servers to add IndieAuth support.

Most of the changes this year have removed IndieAuth-specific bits to reuse things from OAuth when possible, and cleaned up the text of the spec. These changes are also intended to be backwards compatible as much as possible, so that existing clients and servers can upgrade independently.

This post describes the high level changes to the protocol, and is meant to help implementers get an idea of what may need to be updated with existing implementations.

If you would like an introduction to IndieAuth and OAuth, here are a few resources:

IndieAuth: OAuth for the Open Web
OAuth 2.0 Simplified
OAuth 2.0 Simplified, the book
indieauth.net

The rest of this post details the specifics of the changes and what they mean to client and server developers. If you've written an IndieAuth client or server, you'll definitely want to read this post to know what you'll need to change for the latest updates.

Response Type
Indicating the User who is Logging In
Adding PKCE Support
Grant Type Parameters
Providing "me" in the Token Request
Removing Same-Domain Requirement
Returning Profile Information
Editorial Changes
Dropped Features and Text

Response Type

The first thing an IndieAuth client does is discover the user's authorization endpoint and redirect the user to their server to authorize the client. There are two possible ways a client might want to use IndieAuth: either to confirm the website of the user who just logged in, or to get an access token to be able to create posts on their website.

Previously, this distinction was made at this stage of the request by varying the response_type query string parameter. Instead, the response_type parameter is now always response_type=code which brings it in line with the OAuth 2.0 specification. This makes sense because the response of this request is always an authorization code, it's only after the authorization code is used that the difference between these two uses will be apparent.

Changes for clients: Always send response_type=code in the initial authorization request.

Changes for servers: Only accept response_type=code requests, and for backwards-compatible support, treat response_type=id requests as response_type=code requests.
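As a concrete illustration only (the endpoint, client_id and redirect_uri below are made-up placeholders, and a real client would also add the PKCE parameters described later), a minimal Python sketch of building the authorization request might look like this:

```python
import secrets
from urllib.parse import urlencode

# Hypothetical values for illustration; the authorization endpoint is
# discovered from the user's site, the rest belong to the client.
authorization_endpoint = "https://user.example.net/auth"
client_id = "https://app.example.com/"
redirect_uri = "https://app.example.com/callback"

state = secrets.token_urlsafe(16)  # random value, checked again on the redirect back

params = {
    "response_type": "code",            # always "code" now, per the updated spec
    "client_id": client_id,
    "redirect_uri": redirect_uri,
    "state": state,
    "me": "https://user.example.net/",  # optional hint, see the next section
    "scope": "profile create",
}

authorization_url = authorization_endpoint + "?" + urlencode(params)
print(authorization_url)  # redirect the user's browser here
```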

Indicating the User who is Logging In

In earlier versions of the specification, the authorization request was required to have the parameter me, the value of which was whatever URL the user entered into the client to start the flow. It turns out that this parameter isn't strictly necessary for the flow to succeed, however it still can help improve the user experience in some cases. As such, it has now been changed to an optional parameter.

This parameter is a way for the client to tell the IndieAuth server which user it expects will log in. For single-user sites this value is completely unnecessary, since there is only ever one me URL that will be returned in the end. It turns out that most single-user implementations were already ignoring this parameter anyway since it served no purpose.

For multi-user websites like a multi-author WordPress blog, this parameter also served little purpose. If a user was already logged in to their WordPress site, then tried to log in to an IndieAuth client, the server could just ignore this parameter anyway and return the logged-in user's me URL at the end of the flow.

For multi-user authorization endpoints like the (to-be deprecated) indieauth.com, this parameter served as a hint of who was trying to log in, so that the authorization server could provide a list of authentication options to the user. This is the only case in which this parameter really provides a user experience benefit, since without the parameter at this stage, the user would need to type in their website again, or be shown a list of authentication provider options such as "log in with Twitter".

There's yet another case where the user may enter just the domain of their website, even though their final me URL may be something more specific. For example, a user can enter micro.blog in an IndieAuth sign-in prompt, and eventually be logged in to that app as https://micro.blog/username. There is no requirement that the thing they type in to clients has to be an exact match of their actual profile URL, which allows a much nicer user experience so that users can type only the domain of their service provider which may provide profiles for multiple users. And in this case, the client providing the me URL to the server also doesn't serve any purpose.

The change to the spec makes the me parameter in the authorization request optional. In this sense, it's more of a hint from the client about who is trying to log in. Obviously the server can't trust that value in the request at this point, since the user hasn't logged in yet, so it really is more of a hint than anything else.

Changes for clients: Continue to include the me parameter in the request if you can, but if you are using an OAuth client that doesn't let you customize this request, it's okay to leave it out now.

Changes for servers: Most servers were already ignoring this parameter anyway, so if you fell into that category then no change is needed. If you were expecting this parameter to exist, change it to optional, because you probably don't actually need it. If it's present in a request, you can use it to influence the options you show for someone to authenticate if they are not yet logged in, or you could show an error message if the client provides a me URL that doesn't match the currently logged-in user.

Adding PKCE Support

Probably the biggest change to the spec is the addition of the OAuth 2.0 PKCE (Proof Key for Code Exchange) mechanism. This is an extension to OAuth 2.0 that solves a number of different vulnerabilities. It was originally designed to allow mobile apps to securely complete an OAuth flow without a client secret, but has since proven to be useful for JavaScript apps and even solves a particular attack even if the client does have a client secret.

Since IndieAuth clients are all considered "Public Clients" in OAuth terms, there are no preregistered client secrets at all, and PKCE becomes a very useful mechanism to secure the flow.

I won't go into the details of the particular attacks PKCE solves in this post, since I've talked about them a lot in other talks and videos. If you'd like to learn more about this, check out this sketch notes video where I talk about PKCE and my coworker draws sketchnotes on his iPad.

Suffice it to say, PKCE is a very useful mechanism, isn't terribly complicated to implement, and can be added independently by clients and servers since it's designed to be backwards compatible.

The change to the spec is that PKCE has been rolled into the core authorization flow. Incidentally, the OAuth spec itself is making the same change by rolling PKCE in to the OAuth 2.1 update.

Changes for clients: Always include the PKCE parameters code_challenge and code_challenge_method in the authorization request.

Changes for servers: If a code_challenge is provided in an authorization request, don't allow the authorization code to be used unless the corresponding code_verifier is present in the request using the authorization code. For backwards compatibility, if no code_challenge is provided in the request, make sure the request to use the authorization code does not contain a code_verifier.
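For implementers, here is a minimal sketch (Python standard library only) of generating the PKCE values on the client; the S256 challenge method shown is the commonly used one:

```python
import base64
import hashlib
import secrets

def generate_pkce_pair():
    # code_verifier: a high-entropy random string the client keeps secret
    code_verifier = secrets.token_urlsafe(48)
    # code_challenge: base64url-encoded SHA-256 of the verifier, padding stripped
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return code_verifier, code_challenge

code_verifier, code_challenge = generate_pkce_pair()
# Send code_challenge and code_challenge_method=S256 in the authorization request,
# then send code_verifier in the request that redeems the authorization code.
```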

Using an Authorization Code

Whew, okay, you've made it this far and you've sent the user off to their authorization endpoint to log in. Eventually the IndieAuth server will redirect the user back to your application. Now you're ready to use that authorization code to either get an access token or confirm their me URL.

There are two changes to this step, redeeming the authorization code.

Grant Type Parameters

The first change, while minor, brings IndieAuth in line with OAuth 2.0 since apparently this hadn't been actually specified before. This request must now contain the POST body parameter grant_type=authorization_code.

Changes for clients: Always send the parameter grant_type=authorization_code when redeeming an authorization code. Generic OAuth 2.0 clients will already be doing this.

Changes for servers: For backwards compatibility, treat the omission of this parameter the same as providing it with grant_type=authorization_code. For example if you also accept requests with grant_type=refresh_token, the absence of this parameter means the client is doing an authorization code grant.
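Putting the client-side pieces together, a rough sketch of redeeming the code at the token endpoint (all values are placeholders; the requests library is assumed, and note that the me parameter is no longer sent, as the next section explains):

```python
import requests

# Placeholder values for illustration; the token endpoint is discovered from
# the user's site and the code arrives on the redirect back to the client.
token_endpoint = "https://user.example.net/token"
authorization_code = "xxxxxxxx"        # from the ?code= query parameter
code_verifier = "the-pkce-verifier"    # generated in the PKCE step above

response = requests.post(token_endpoint, data={
    "grant_type": "authorization_code",  # now required, as in plain OAuth 2.0
    "code": authorization_code,
    "client_id": "https://app.example.com/",
    "redirect_uri": "https://app.example.com/callback",
    "code_verifier": code_verifier,
})
token_response = response.json()
# Expect access_token, token_type, scope and me; profile is optional (see below).
```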

Providing "me" in the Token Request

The request when using an authorization code, either to the token endpoint or authorization endpoint, previously required that the client send the me parameter as well. The change to the spec drops this parameter from this request, making it the same as an OAuth 2.0 request.

This has only some minor implications in very specific scenarios. We analyzed all the known IndieAuth implementations and found that the vast majority of them were already ignoring this parameter anyway. For single-user endpoints, the additional parameter provides no value, since the endpoint would be self-contained anyway, and already know how to validate authorization codes. Even multi-user endpoints like the WordPress plugin would know how to validate authorization codes because the authorization and token endpoints are part of the same software.

The only implementations that would break when leaving this parameter out are separate implementations of authorization endpoints and token endpoints, where the user has no prior relationship with either. The biggest offender here is actually my own implementation, which I am eventually going to retire: indieauth.com and tokens.indieauth.com. I initially wrote indieauth.com as just the authorization endpoint part, and later added tokens.indieauth.com as a completely separate implementation; it shares nothing in common with indieauth.com and is actually entirely stateless. Over the years, it turns out this pattern hasn't actually been particularly useful, since a website is either going to build both endpoints or delegate both to an external service. So in practice, the only people using tokens.indieauth.com were using it with the indieauth.com authorization endpoint.

Removing this parameter has no effect on most of the implementations. I did have to update my own implementation of tokens.indieauth.com to default to verifying authorization codes at indieauth.com if there was no me parameter, which so far has been successful.

Changes for clients: No need to send the me parameter when exchanging an authorization code. This makes the request the same as a generic OAuth 2.0 request.

Changes for servers: For servers that have an authorization endpoint and token endpoint as part of the same software, make sure your token endpoint knows how to look up authorization codes. Most of the time this is likely what you're already doing anyway, and you were probably ignoring the me parameter already. If you do want to provide a standalone token endpoint, you'll need to create your own encoding scheme to bake in the authorization endpoint or me value into the authorization code itself. But for the vast majority of people this will require no change.

Removing Same-Domain Requirement

One of the challenges of a decentralized protocol like this is knowing who to trust to make assertions about whom. Just because someone's authorization server claims that a user identified as "https://aaronpk.com/" logged in doesn't mean I actually did log in. Only my authorization server should be trusted to assert that I logged in.

In the previous version of the spec, the way this was enforced was that clients had to check that the final me URL returned had a matching domain as what the user initially entered, after following redirects. That means if I entered aaronpk.com into the client, and that redirected to https://aaronparecki.com/, the client would then expect the final profile URL returned at the end to also be on aaronparecki.com. This works, but it has a few challenges and limitations.

The biggest challenge for client developers was keeping track of the chain of redirects. There were actually separate rules for temporary vs permanent redirects, and the client would have to be aware of each step in the redirect chain if there was more than one. Then at the end, the client would have to parse the final profile URL to find the host component, then check if that matches, and it turns out that there are often some pretty low-level bugs with parsing URLs in a variety of languages that can lead to unexpected security flaws.

On top of the technical challenges for client developers, there was another problem in the specific case where a user may control only a subfolder of a domain. For example in a shared hosting environment where users can upload arbitrary files to their user directory, https://example.com/~user, the same-domain restriction would still let /~user1 claim to be /~user2 on that domain. We didn't want to go down the route of adding more URL parsing rules like checking for substring matches, as that would likely have led to even more of a burden on client developers and more risk of security holes.

So instead, this entire restriction has been replaced with a new way of verifying that the final profile URL is legitimate. The new rule should drastically simplify the client code, at the slight cost of a possible additional HTTP request.

The new rule is that if the final profile URL returned by the authorization endpoint is not an exact match of the initially entered URL, the client has to go discover the authorization endpoint at the new URL and verify that it matches the authorization endpoint it used for the flow. This is described in a new section of the spec, Authorization Server Confirmation.

This change means clients no longer need to keep track of the full redirect chain (although they still can if they would like more opportunities to possibly skip that last HTTP request), and also ensures users on shared domains can't impersonate other users on that domain.

Changes for clients: Remove any code around parsing the initial and final URLs, and add a new step after receiving the user's final profile URL: If the final profile URL doesn't match exactly what was used to start the flow, then go fetch that URL and discover the authorization endpoint and confirm that the discovered authorization endpoint matches the one used at the beginning. Please read Authorization Server Confirmation for the full details.

Changes for servers: No change.
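To make the new client-side rule concrete, here is a rough sketch of the confirmation step. The discovery helper is deliberately simplified (a real client should also check the HTTP Link header, handle attribute ordering and resolve relative URLs rather than relying on a regular expression):

```python
import re
import requests

def discover_authorization_endpoint(url):
    # Simplified discovery: look for rel="authorization_endpoint" in the HTML.
    html = requests.get(url).text
    match = re.search(
        r'<link[^>]+rel="authorization_endpoint"[^>]+href="([^"]+)"', html)
    return match.group(1) if match else None

def confirm_authorization_server(entered_url, final_profile_url, used_auth_endpoint):
    # If the final profile URL is exactly what was used to start the flow, done.
    if final_profile_url == entered_url:
        return True
    # Otherwise, discover the authorization endpoint at the final profile URL
    # and confirm it matches the endpoint actually used for this flow.
    discovered = discover_authorization_endpoint(final_profile_url)
    return discovered == used_auth_endpoint
```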

Returning Profile Information

If the application would like to know more about the user than just their confirmed profile URL, such as their name or photo, previously there was no easy or reliable way to find this information. It's possible the user's profile URL may have an h-card with their info, but that would only include public info and would require bringing in a Microformats parser and making another HTTP request to find this information.

In the latest version of the spec, we've added a new section returned in the response when redeeming an authorization code for the authorization server to return this profile data directly. To request this information, there are now two scopes defined in the spec, profile and email. When the client requests the profile scope, this indicates the client would like the server to return the user's name, photo and url. The email scope requests the user's email address.

The response when redeeming an authorization code that was issued with these scopes will now contain an additional property, profile, alongside the me URL and access token.

{ "access_token": "XXXXXX", "token_type": "Bearer", "scope": "profile email create", "me": "https://user.example.net/", "profile": { "name": "Example User", "url": "https://user.example.net/", "photo": "https://user.example.net/photo.jpg", "email": "user@example.net" } }

This comes with some caveats. As is always the case with OAuth, just because a client requests certain scopes does not guarantee the request will be granted. The user or the authorization server may decide to not honor the request and leave this information out. For example a user may choose to not share their email even if the app requests it.

Additionally, the information in this profile section is not guaranteed to be "real" or "verified" in any way. It is merely information that the user intends to share with the app. This could mean anything from the user sharing different email addresses with different apps to the URL in the profile being a completely different website. For example, a multi-author WordPress blog which provides me URLs on the WordPress site's domain, example.com, may return the author's own personal website in the url property of the profile information. The client is not allowed to treat this information as authoritative or make any policy decisions based on the profile information; it's for informational purposes only. Another common vulnerability in many existing OAuth clients is that they assume the provider has confirmed the email address returned and will use that to deduplicate accounts. The problem is that if a user can edit their email address and have it returned in an OAuth response without the server confirming it, the client may end up being tricked into thinking a different user logged in. Only the me URL can be trusted as the stable identifier of the user, and everything in the profile section should be treated as if it were hand-entered into the client.
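In client code, that boils down to keying accounts off the me URL and treating everything in profile as display-only. A minimal sketch (the account-store shape is hypothetical) of handling the response shown above:

```python
def handle_token_response(token_response, accounts):
    """accounts: a dict mapping canonical 'me' URLs to local account records."""
    me = token_response["me"]                    # the only trustworthy identifier
    profile = token_response.get("profile", {})  # may be absent entirely

    account = accounts.setdefault(me, {})
    # Display-only hints; never used for deduplication or authorization decisions.
    account["display_name"] = profile.get("name", me)
    account["photo"] = profile.get("photo")
    account["unverified_email"] = profile.get("email")  # not confirmed by the server
    return account
```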

Changes for clients: If you would like to find the user's profile information, include the profile or email scope in your authorization request. If you don't need this, then no changes are necessary.

Changes for servers: Authorization servers should be able to recognize the profile and email scopes in a request, and ask the user for permission to share their profile information with clients, then return that along with the final me URL and access token. It's also completely acceptable to not support this feature at all, as clients shouldn't be relying on the presence of this information in the response anyway.

Editorial Changes

There was a good amount of work done to clean up the text of the spec without changing any of the actual requirements. These are known as editorial changes.

The term "domain" has been replaced with the more accurate term "host" in most places. This matches the URL spec more closely, and reduces the confusion around registerable domain like example.com or example.co.uk and subdomains. In all cases, there has been no need to use the public suffix list because we have always meant full hostname matches.

Language around the term "profile URL" was cleaned up to make sure only the final URL returned by the authorization server is referred to as the "profile URL". The user may enter lots of different things into the client that might not be their profile URL, anything from just a hostname (aaronpk.com) to a URL that redirects to their profile URL. This cleans up the language to better clarify what we mean by "profile URL".

With the change to use response_type=code for both versions of the flow, it meant the authorization and authentication sections were almost entirely duplicate content. These have been consolidated into a single section, Authorization, and the only difference now is the response when the authorization code is redeemed.

Dropped Features and Text

Any time you can cut text from a spec and have it mean the same thing is a good thing. Thankfully we were able to cut a decent amount of text thanks to consolidating the two sections mentioned above. We also dropped an obscure feature that was extremely under-utilized. For the case where a token endpoint and authorization endpoint were not part of the same software, there was a section describing how those two could communicate so that the token endpoint could validate authorization codes issued by an arbitrary authorization endpoint. This serves no purpose if a single piece of software provided both endpoints since it would be far more efficient to have the token endpoint look up the authorization code in the database or however you're storing them, so virtually nobody had even bothered to implement this.

The only known implementations of this feature were my own tokens.indieauth.com and Martijn's mintoken project. We both agreed that if we did want to pursue this feature in the future, we could write it up as an extension. Personally, I plan on shutting down indieauth.com and tokens.indieauth.com in the near-ish future, and the replacement that I build will contain both endpoints, so I don't really plan on revisiting this topic.

Conclusion / Future Work

Well if you've made it this far, congrats! I hope this post was helpful. This was definitely a good amount of changes, although hopefully all for good reasons and should simplify the process of developing IndieAuth clients and servers in the future.

We didn't get to every open IndieAuth issue in this round of updates, there are still a few interesting ones open that I would like to see addressed. The next largest change that will affect implementations would be to continue to bring this in line with OAuth 2.0 and always redeem the authorization code at the token endpoint even if no access token is expected to be returned. That would also have the added benefit of simplifying the authorization endpoint implementation to only need to worry about providing the authorization UI, leaving all the JSON responses to the token endpoint. This still requires some discussion and a plan for upgrading to this new model, so feel free to chime in on the discussions!

I would like to give a huge thank-you to everyone who has participated in the discussions this year, both on GitHub and in our virtual meetings! All the feedback from everyone who is interested in the spec has been extremely valuable!

We'll likely schedule some more sessions to continue development on the spec, so keep an eye on events.indieweb.org for upcoming events tagged #indieauth!

If you have any questions, feel free to stop by the #indieweb-dev chat (or join from IRC or Slack) and say hi!

Thursday, 03. December 2020

SSI Ambassador

The mental models of identity enabled by SSI


This article takes the mental models of identity and explores how they can be achieved with a self-sovereign identity (SSI) solution.

To pin down the meaning and definition of identity is a challenging task due to its uniquely human nature. It can have totally different meanings for different people. However, there are recurring themes when speaking about the term. The following five mental models describe what people refer to when speaking about identity and provide a useful structure for how these models can be executed in a digital environment leveraging SSI infrastructure and components. While the concept of SSI can be applied to individuals, legal entities and things alike, the following paragraphs solely focus on individuals and explain how these models can serve as a guideline for SSI implementations. The five mental models were published by experts of the RWOT community and are quoted in the following paragraphs.

Mental models of identity. Image source: Lissi

Space-time

“The space-time mental model sees identity as resolving the question of the physical continuity of an entity through space and time. (…) It answers the question: Does the physical body under evaluation have a continuous link through space and time to a known entity?”

An identity is established in the past, acts in the present and continues to be useful in the future. To secure the sum of recorded interactions and relationships in digital form, one requires a backup when using a wallet, which stores the identity data and the associated cryptographic keys locally on the device of the user. This backup enables the user to restore the received credentials as well as established relationships. When losing access to the wallet, the backup enables the user to re-establish the aspects described in the space-time mental model. A backup generally consists of the identity data itself and a key, which is used to encrypt and decrypt the backup data.
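As an illustration only, not how any particular wallet implements it, encrypting such a backup with a symmetric key could be sketched like this (assuming the Python cryptography library is available):

```python
import json
from cryptography.fernet import Fernet

# The backup key must be stored separately from the backup itself,
# e.g. derived from a recovery phrase or kept in a second location.
backup_key = Fernet.generate_key()

identity_data = {
    "credentials": ["<received verifiable credentials would go here>"],
    "connections": ["<established relationships would go here>"],
}

encrypted_backup = Fernet(backup_key).encrypt(json.dumps(identity_data).encode("utf-8"))

# Restoring on a new device requires both the encrypted backup and the key.
restored = json.loads(Fernet(backup_key).decrypt(encrypted_backup))
```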

Presentation

“The presentation mental model sees identity as how we present ourselves to society. This is the mental model behind Vendor Relationship Management, user-centric identity, and self-sovereign identity. (…) It answers the question: Is this how the subject chooses to be known?”

Individuals can choose, which information about them should be known by third parties or the public. The granularity of this information varies dependent on the social context. While one might only want to provide the required minimum of information to a government authority, one might have the desire to share very personal details with a certain social circle such as family or friends. Hence, the user requires different social profiles and circles, which help to present the right information to the target audience. Since one part of a SSI ecosystem is the creation of trusted peer to peer relationships, these contacts can be sorted by the user and allocated to a social circle according to the preferences of the individual.

However, when it comes to the sharing of information it gets tricky. There are currently no SSI implementations which enable a user experience similar to current social media platforms. Hence, the presentation of information is currently limited to one contact at a time.

Attribute

“The attribute mental model sees identity as the set of attributes related to an entity as recorded in a specific system. Enshrined in ISO/IEC 24760–1, an international standard for identity management, this mental model is the primary focus for many engineers. (…) It answers the question: Who is this data about?”

From a birth certificate to a university degree or a language certification, we collect a variety of credentials, which attest certain information about us. The sum of all these credentials can also be seen as one mental model of identity. These credentials are issued, stored and managed by the individual and are standardized within the specification of the verifiable credentials data model 1.0 by the W3C. It is the only mental model with a formal specification.

SSI implementations use cryptography to provide the necessary proofs that presented information is about the individual in question. There are different options of implementations to ensure that a certain identifier relates to the specific person, however most implementations use decentralised identifiers (DIDs) to identify the identity subject.

Relationship

“The relationship mental model sees identity emerging through interactions and relationships with others. Our identity is not about what we are in isolation from others, but is rather defined by the relationships we have. This is the fundamental model in the South African idea of ‘Ubuntu’, meaning ‘I am because we are.’ (…) It answers the question: How is this person related?”

The relationship to other individuals or entities can help to determine the status of a person within society. We can observe different domains of relationships, which depend on the social context, such as a professional, official, legal, personal, public, business or employment context, to name a few. For example, a representative of a government like a diplomat has special rights and obligations due to this relationship. Depending on the context, e.g. an interview of said diplomat, it can touch multiple domains by being an official interview, with legal consequences, which is presented to the public and can have a direct effect on the employment relation of the diplomat. Generally, individuals initiate and maintain hundreds or even thousands of relationships with different entities. An SSI solution enables an individual to initiate such a relationship by accepting or requesting a connection. Once established, this connection serves as a communication channel to facilitate the exchange of (verified) information between the two parties. Since both parties are able to validate the identity of the other party, it enables the necessary trust in a digital environment. However, the establishment of a connection isn't strictly necessary, and credentials can also be issued or requested without one. There are special protocols which standardise the credential exchange and communication between two entities, such as the DIDComm protocol.

Capability

“The capability mental model pragmatically defines identity in terms of an individual’s capability to perform some task, including their physical ability now, in the past, or in the future. It is the inevitable approach for anyone in an emergency. (…) It answers the question: What can the subject actually do?”

The primary reason why an identity is required in the online world in the first place is the capabilities that come with it. Without an identity one is still able to browse the web and gather information; however, when it comes to online shopping, banking, government applications, employee portals, access control and many other aspects, an identity is necessary to execute those actions. Not all actions require a verified identity. In most cases a self-attested identity is sufficient for the verifier. However, there are multiple cases for which the verifier either has a legitimate interest in only allowing access to verified parties or is obligated by law to verify the identity of an individual. An example of the first case is access to information for a specific audience, like a university which wants to grant students access to internal documents. The students would not be required to verify their identity every time they want to access the repository, but instead only need to prove that they are a student of said university, without disclosing further personal details. The second case includes telecommunication providers or financial institutions, which need to comply with know your customer (KYC) regulations.

Mindmap: Mental models of identity enabled by SSI. Full size image here. Source: SSI Ambassador / Adrian Doerk

To conclude, all of these mental models of identity can be enabled by SSI to a certain degree. However, when it comes to the space-time (backup) mental model or the presentation (social network) model, the integration of the concept is still quite nascent and requires more development to be comparable with current centralised alternatives.

Disclaimer: This article does not represent the official view of any entity, which is mentioned in this article or which is affiliated with the author. It solely represents the opinion of the author.

SSI Ambassador
Adrian Doerk
Own your keys


Phil Windley's Technometria

Relationships in the Self-Sovereign Internet of Things


Summary: DIDComm-capable agents provide a flexible infrastructure for numerous internet of things use cases. This post looks at Alice and her digital relationship with her F-150 truck. She and the truck have relationships and interactions with the people and institutions she engages as she co-owns, lends and sells it. These and other complicated workflows are all supported by a standards-based, open-source, protocol-supporting system for secure, privacy-preserving messaging.

In The Self-Sovereign Internet of Things, I introduced the role that Self-Sovereign Identity (SSI) can play in the internet of things (IoT). The self-sovereign internet of things (SSIoT) relies on the DID-based relationships that SSI provides, and their support for standardized protocols running over DIDComm, to create an internet of things that is much richer, secure, and privacy respecting than the CompuServe of Things we're being offered today. In this post, I extend the use cases I offered in the previous post and discuss the role the heterarchical relationships found in the SSIoT play.

For this post, we're going to focus on Alice's relationship with her F-150 truck and its relationships with other entities. Why a vehicle? Because in 2013 and 2014 I built a commercial connected car product called Fuse that used the relationship-centric model I'm discussing here1. In addition, vehicles exist in a rich, complicated ecosystem that offers many opportunities for interesting relationships. Figure 1 shows some of these.

Figure 1: Vehicle relationships (click to enlarge)

The most important relationship that a car has is with its owner. But there's more than one owner over the car's lifetime. At the beginning of its life, the car's owner is the manufacturer. Later the car is owned by the dealership, and then by a person or finance company. And, of course, cars are frequently resold. Over the course of its lifetime a car will have many owners. Consequently, the car's agent must be smart enough to handle these changes in ownership and the resulting changes in authorizations.

In addition to the owner, the car has relationships with other people: drivers, passengers, and pedestrians. The nature of these relationships changes over time. For example, the car probably needs to maintain a relationship with the manufacturer and dealer even after they are no longer owners. With these changes to the relationship come changes in rights and responsibilities.

In addition to relationships with owners, cars also have relationships with other players in the vehicle ecosystem including: mechanics, gas stations, insurance companies, finance companies, and government agencies. Vehicles exchange data and money with these players over time. And the car might have relationships with other vehicles, traffic signals, the roadway, and even potholes.

The following sections discuss three scenarios involving Alice, the truck, and other people, institutions, and things.

Multiple Owners

One of the relationship types that the CompuServe of Things fails to handle well is multiple owners. Some companies try and others just ignore it. The problem is that when the service provider intermediates the connection to the thing, they have to account for multiple owners and allow those relationships to change over time. For a high-value product, the engineering effort is justified, but for many others, it simply doesn't happen.

Figure 2: Multiple Owners (click to enlarge)

Figure 2 shows the relationships of two owners, Alice and Bob, with the truck. The diagram is simple and hides some of the complexity of the truck dealing with multiple owners. But as I discuss in Fuse with Two Owners some of this is simply ensuring that developers don't assume a single owner when they develop services. The infrastructure for supporting it is built into DIDComm, including standardized support for sub protocols like Introduction.

Lending the Truck

People lend things to friends and neighbors all the time. And people rent things out. Platforms like AirBnB and Outdoorsy are built to support this for high value rentals. But what if we could do it for anything at any time without an intermediating platform? Figure 3 shows the relationships between Alice and her friend Carol who wants to borrow the truck.

Figure 3: Borrowing the Truck (click to enlarge)

Like the multiple owner scenario, Alice would first have a connection with Carol and introduce her to the truck using the Introduction sub protocol. The introduction would give the truck permission to connect to Carol and also tell the truck's agent what protocols to expose to Carol's agent. Alice would also set the relationship's longevity. The specific permissions that the "borrower" relationship enables depend, of course, on the nature of the thing.

The data that the truck stores for different activities is dependent on these relationships. For example, the owner is entitled to know everything, including trips. But someone who borrows the car should be able to see their trips, but not those of other drivers. Relationships dictate the interactions. Of course, a truck is a very complicated thing in a complicated ecosystem. Simpler things, like a shovel might simply be keeping track of who has the thing and where it is. But, as we saw in The Self-Sovereign Internet of Things, there is value in having the thing itself keep track of its interactions, location, and status.

Selling the Truck

Selling the vehicle is more complicated than the previous scenarios. In 2012, we prototyped this scenario for Swift's Innotribe innovations group and presented it at Sibos. Heather Vescent of Purple Tornado created a video that visualizes how a sale of a motorcycle might happen in a heterarchical DIDComm environment2. You can see a screencast of the prototype in operation here. One important goal of the prototype was to support Doc Searls's vision of the Intention Economy. In what follows, I've left out some of the details of what we built. You can find the complete write-up in Buying a Motorcycle: A VRM Scenario using Personal Clouds.

Figure 4: Selling the Truck (click to enlarge)

In Figure 4, Alice is selling the truck to Doug. I'm ignoring how Alice and Doug got connected3 and am just focusing on the sale itself. To complete the transaction, Alice and Doug create a relationship. They both have relationships with their respective credit unions where Doug initiates and Alice confirms the transaction. At the same time, Alice has introduced the truck to Doug as the new owner.

Alice, Doug, and the truck are all connected to the DMV and use these relationships to transfer the title. Doug can use his agent to register the truck and get plates. Doug also has a relationship with his insurance company. He introduces the truck to the insurance company so it can serve as the service intermediary for the policy issuance.

Alice is no longer the owner, but the truck knows things about her that Doug shouldn't have access to and she wants to maintain. We can create a digital twin of the truck that is no longer attached to the physical device, but has a copy of all the trip and maintenance information that Alice had co-created with the truck over the years she owned it. This digital twin has all the same functionality for accessing this data that the truck did. At the same time, Alice and Doug can negotiate what data also stays on the truck. Doug likely doesn't care about her trips and fuel purchases, but might want the maintenance data.

Implementation

A few notes on implementation:

The relationships posited in these use cases are all DIDComm-capable relationships. The workflows in these scenarios use DIDComm messaging to communicate.

I pointed out several places where the Introduction DIDComm protocol might be used. But there might be other DIDComm protocols defined. For example, we could imagine workflow-specific messages for the scenario where Carol borrows the truck. The scenario where Doug buys the truck is rife with possibilities for protocols on DIDComm that would standardize many of the interactions. Standardizing these workflows through protocol (e.g., a common protocol for vehicle registration) reduces the effort for participants in the ecosystem.

Some features, like attenuated permissions on a channel, are a mix of capabilities. DIDComm supports a Discovery protocol that allows Alice, say, to determine if Doug is open to engaging in a sale transaction. Other permissioning would be done by the agent outside the messaging system.

The agents I'm envisioning here are smart, rule-executing agents like those available in picos. Picos provide a powerful model for how a decentralized, heterarchical, interoperable internet of things can be built. Picos provide a DIDComm agent programming platform that is easily extensible. Picos live on an open-source pico engine that can run on anything that supports Node JS. They have been used to build and deploy several production systems, including the Fuse connected-car system discussed above.

Conclusion

DIDComm-capable agents can be used to create a sophisticated relationship network that includes people, institutions, things and even soft artifacts like interaction logs. The relationships in that network are rich and varied—just like relationships in the real world. Things, whether they are capable of running their own agents or employ a soft agent as a digital twin, are much more useful when they exist persistently, control their own agent and digital wallet, and can act independently. Things now react and respond to messages from others in the relationship network as they autonomously follow their specific rules.

Everything I've discussed here and in the previous post are doable now. By removing the intermediating administrative systems that make up the CompuServe of Things and moving to a decentralized, peer-to-peer architecture we can unlock the tremendous potential of the Self-Sovereign Internet of Things.

Notes

1. Before Fuse, we'd built a more generalized IoT system based on a relationship network called SquareTag. SquareTag was a social product platform (using the vernacular of the day) that promised to help companies have a relationship with their customers through the product, rather than merely having information about them. My company, Kynetx, and others, including Drummond Reed, were working to introduce something we called "personal clouds" that were arrayed in a relationship network. We built this on an actor-like programming model called "picos". The pico engine and programming environment are still available and have been updated to provide DID-based relationships and support for DIDComm.

2. In 2012, DIDComm didn't exist of course. We were envisioning something that Innotribe called the Digital Asset Grid (DAG) and speaking about "personal clouds", but the envisioned operation of the DAG was very much like what exists now in the DIDComm-enabled peer-to-peer network enabled by DIDs.

3. In the intentcasting prototype, Alice and Doug would have found each other through a system that matches Alice's intent to buy with Doug's intent to sell. But a simpler scenario would have Alice tell the truck to list itself on Craig's List so Alice and Doug can meet up there.

Photo Credit: F-150 from ArtisticOperations (Pixabay License)

Tags: ssiot iot vrm me2b ssi identity decentralized+identifiers relationships


MyDigitalFootprint

Humans want principles, society demands rules and businesses want to manage risk, can we reconcile the differences?

The linkage between principles and rules is not clear because we have created so many words and variances in language that there is significant confusion. We are often confused about what we mean, as we are very inconsistent in how we apply words and language, often to provide a benefit to ourselves or justify our beliefs. To unpack the relationships we need to look at definitions, but we have to accept that even definitions are inconsistent. Our confirmation bias is going to fight us, as we want to believe what we already know, rather than expand our thinking.

(building on an original article with Kaliya) Are we imagining principles or values?

Worth noting our principles are defined by our values. Much like ethics (group beliefs) and morals (personal beliefs) and how in a complex adaptive system my morals affect the group’s ethics and a group’s ethics changes my morals. Situational awareness and experience play a significant part in what you believe right now, and what the group or society believes. 

Values can be adaptable by context whereas principles are fixed for a period, withstanding the test of time. When setting up a framework, setting our principles implies that we don't want them to change every day, week, month or year; that they are good and stable for a generation, but we can adapt, revise or adjust principles based on learning. Fundamentally, principles are based on values, which do change, so there are ebbs and flows of conflict between them; this means we frame principles and often refuse to see that they are not future-proof forever. Indeed, the further a principle is from the time it was created, the less it will have in common with current values.

Are we confusing principles and rules?  

Considering characteristics: conceptually, principles are abstract and universal whereas a rule is specific and particular. Principles cope with exceptions; rules need another rule. Principles provide the power of thought and decision making; rules prevent thought and discretion. Principles need knowledge and experience to deliver outcomes; rules don't. Principles cope with risk, conflict and abstraction; conflict is not possible for a rule - either this rule applies or another rule is needed.


The word "rule" needs some more unpacking as it can take on many meanings. The history and origin of the word "Rule" is here. The choice of the word rule is designed to be ambiguous, allowing readers to apply their own context, thereby creating more relevance to their own circumstances.

For me, you or someone;

Rules are written or unwritten or both

Rules are mine, created by me that you need to follow. They are yours, crafted by you that you need me to obey. They are shared and we believe that they create a better society

Rules can be the law, just a guide, the standard you need to meet or the rituals that create success. But which law: the one we should not break, or the one where we follow the spirit? As a guide, to guide me from here to where? As a standard, is it absolute or is a range good enough? My rituals: did I learn them, did you teach me, or are they somehow just there?

Rules equally give you more freedom (safety, less murder) and remove your freedom (choice). Rules give me more agency and at the same time remove it.

Rules define my boundaries, but are they ones I have created for myself and continually refined as I learn, or are my rules ones that come from history, because we have always done it this way?

Rules - are they creating my view on values, or are the rules I have someone else's values?

Rules are only there to be broken

Rules allow me to create something as I have done something, have experience and have learnt. Rules allow me to repeat and not make the same mistake or improve and adapt.  Rules save me time and energy - I love my heuristics

Rules allow me to manage, prevent and control risk

But whose rules are they?

Back to the relationship between rules and principles. In companies and for social policy we set rules and principles into matrices, as below, asking whether it is better to break rules or to comply, and whether it is better to uphold principles or to challenge them. This helps us to define where social norms stop and laws are needed.

A review of the four quadrants highlights that there is no favourable sector; indeed, as a society that wants to improve, we continually travel through all of them. Companies and executives often feel that upholding principles and obeying rules (top right) creates the best culture, but they also ask the organisation to be adaptive, agile and innovative.

Given that principles are based on values, the leadership team will be instrumental in how well the principles are upheld. Whereas the company's level of documentation for processes, procedures and rules will define what is to be obeyed, the culture of the top team will determine whether they are obeyed or not.

The matrix below considers the combinations of values and principles, where values are either mine as an individual or ours as a collective society.



The fundamental issue with the two representations (rules, or values and principles) is that they cannot highlight the dynamic nature of the relationship between them. By example, our collective values help normalise an individual's bias, and those collective values inform and refine principles. Indeed, as principles become extreme and too restrictive, say as our collective values become too godly, our collective values opt to no longer uphold them. When our individualism leads to the falling apart of society, we raise the bar to create better virtues, as it makes us more content, loved and at peace.

Movement within the "stable compromise" domain has been explored many times, but the Tytler cycle of history expands on it very well.

 

In summary, a rules-based approach prescribes or describes in detail a set of rules and how to behave, based on known and agreed principles. A principle-based approach, by contrast, develops principles which set the limits; the controls, measures and procedures for how to achieve the outcome are left for each organisation to determine.

Risk frameworks help us to connect principles and rules

We have explored that a rules-based approach prescribes in detail the rules, methods, procedures, processes and tasks for how to behave and act, whereas a principle-based approach to creating outcomes crafts principles that frame boundaries, leaving the individual or organisation to determine its own interpretation.

In a linear system, we would agree on principles which would bound the rules.  

In a non-linear system, we would agree on the principles, which would bound the rules and as we learn from the rules we would refine the principles.  

In a complex adaptive system, we are changing principles as our values change because of the rules, which are continually being modified to cope with the response to the rules.

This post is titled "In a digital age, how can we reconnect values, principles and rules?" and the obvious reason is that rules change values, which change principles, which means our rules need to be updated. However, this process of learning and adaptation depends on understanding the connection which offers closed-loop feedback. An effective connection is our risk frameworks.


The diagram below places rules and principles at two extremes. As already explored we move from principles to rules but rarely go back to rethink our principles, principally because of the time.  Rules should refine and improve in real-time,  principles are generational.  However to create and refine rules we use and apply a risk framework.  The risk framework identifies risk and to help us manage it, we create rules that are capable of ensuring we get the right data/ information to be able to determine if we have control over risk.   As humans, we are not experts in always forecasting the unimagined and so when we implement rules things break and clever minds think how to bend, break or avoid them.  To that end we create more rules to manage exceptions.  However, occasionally we need to check that our rules are aligned to our principles and indeed go back and check and refine our principles. 

Starting from “Principles” these are anchored in ideas such as Human Dignity, Subsidiarity, Solidarity, Covenantal, Sustainability, The common good, Stewardship, Equality.  

Once we decide that one or more of these should anchor our principles, they form a north star, a direction to travel in and towards. The reason to agree on the principle(s) is that collectively we agree on a commitment to get to a better place. We state our principles as an ambition, goal or target, which allows us to understand, manage and control uncertainty using a risk framework. The risk framework frames or bounds the risk we are prepared to take. The risk framework enables us to define rules that get to our known outcomes. We implement the rules to create controls using regulation, code and standards. Our risk frameworks use tools to identify, measure, manage, monitor and report on the risk, the delta in risk and compliance with the rules. Whilst all is good, we use the risk framework to create more rules and better framing and boundaries, creating better outcomes. However, when the desired outcomes are not being created, we revert to the principles, check our north star and take our new knowledge to refine or redefine the risk we are prepared to take.

Data introduces new Principle problems! 

Having established this framework, the idea is to apply this to data.  We have an abundance of rules and regulations and as many opinions on what we are trying to achieve with data.  However, we don’t appear to have an agreed risk framework for data at any level, individual, company, society, national or global.  This is not a bill of rights, this is “what do we think is the north star for data and on what principle should data be?”  How do these principles help us agree on risks, and will our existing rules help or hinder us?

“what do we think is the north star for data and on what principle should data be?”  How do these principles help us agree on risks, and will our existing rules help or hinder us?

The question is how our principles change when the underlying fabric of what is possible changes: the world we designed for was physical; it is now digital-first. Now we are becoming aware that the fabric has changed, where next? By example, Lexis is the legal research system and database. With a case in mind, you use this tool to uncover previous judgments and specific cases to determine and inform your thinking. However, this database is built on a human- and physical-first world. Any digital judgements in this database are still predicated on the old frameworks; what is its value when the very fabric of all those judgements changes? Do we use it to slow us down and prevent adoption? Time to unpack this.

Physical-world first (framed as AD 00 to 2010)

Classic thinking (western capital civilisation philosophy) defined values and principles which have created policy, norms and rules. Today's policy is governed by people and processes. We have history to provide visibility over time and can call on millennia of thought, thinking and wisdom. Depending on what is trending or leading as a philosophy, we create norms. In a physical and human first world, we have multiple starting positions. We can start with a market, followed by norms, followed by doctrine/architecture - creating law and regulations - OR we can start with norms, followed by doctrine/architecture, followed by market-creating law.

Without our common and accepted belief our physical world would not work. Law, money, rights are not real, they are command and control schema with shared beliefs.  Our created norms are based on our experience with the belief.  We cope by managing our appetite to risk. 

Digital world first (framed as AD 2020 - AD MMMCCX)

People-in-companies rather than people-in-government form the new norms as companies have the capital to include how to avoid the rules and regulations.  The best companies are forming new rules to suit them. Companies have the users to mould the norms with the use of their data. Behaviour can be directed. Companies set their own rules.  Doctrine/architecture creates the market, forming norms, and the law protects those who control the market.  Policy can create rules but it has no idea how rules are implemented or governed as the companies make it complex and hide the data. There are few signs of visible “core” human values, indeed there are no shared and visible data principles.  We are heading to the unknown and unimagined.

The companies automate, the decisions become automated, the machine defines the rules and changes the risk model. We are heading to the unknown and unimagined as we have no data principles.

By example: our news and media have changed models. The editor crafted control to meet the demands of an audience who were willing to pay for orchestrated content that they liked. As advertising became important, content mirrored advertising preferences; editorial became the advertising and advertising the content. Digital created clicks, which drove a new model where anything that drives clicks works. The fabric changed from physical to digital, and in doing so we lost the principles and rules of the physical-first world to a digital-first world that has not yet agreed on principles for data.

Data is data

This article Data is Data explores what data is and is my reference to define data. 

Imagine looking at this framework of "principles, rules and risk" within the industry and sectors seeking to re-define, re-imagine and create ways for people to manage the digital representations of themselves with dignity. How would, say, their data and privacy be presented?

With data (privacy, protection, use, collection) we have an abundance of rules and regulations and as many opinions on what we are trying to achieve. We appear to be missing an agreed risk framework for individuals, companies and societies (national and global).

The stated GDPR principles are set out in Article 5

Lawfulness, fairness and transparency.

Purpose limitation.

Data minimisation.

Accuracy.

Storage limitation.

Integrity and confidentiality (security)

Accountability.

We know they are called "Principles" by the framing of the heading in Article 5; however, if we read them slowly, are these principles, values or rules? Consider whether these are boundaries, stewardship ideals or a bit of a mashup. By example, to get round "Purpose Limitation", terms and conditions become as wide as possible so that all and/or any use is possible. Data minimisation is only possible if you know the data you want, which is rarely the case if you are a data platform. If a principle of the European Union is to ensure the free "movement / mobility" of people, goods, services and capital within the Union (the 'four freedoms'), do data identity ideals and GDPR align?

Considering the issue of the "regulation of" Big Tech: should they, in general, exist at all, if no one entity should have that much power and control over people's data and ability to transact? Framings that accept them as acceptable won't create rules that actually move towards the principle of ending the current hegemony, but rather will just seek to regulate it as is. If we add in open APIs and the increasing level of data mobility, portability and sharing, whose "rules or principles" should be adopted?

How do your principles change when the underlying fabric of what is possible changes? The entire privacy framework, say in the US today, is based on early 1970s reports written in the United States to address concerns over the mass state databases proposed in the mid-to-late 1960s and the growing data broker industry that was sending people catalogues out of the blue. It doesn't take account of the world we live in now, where "everyone" has a little computer in their pocket. Alas, IMHO, GDPR is not a lot better than rules with no truly human-based core principles.

Conclusion

We appear to have outdated “principles” driving rules in a digital-first world. 

Our commercial world is now dominated by companies setting "their" norms without reference to any widely agreed-upon values. The downside of big tech gaining so much power that they are actually seen by people-in-government as "equivalent to nation-states" is telling. Right now we need historians, anthropologists, ontologists, psychologists, data scientists and regular everyday people - the users - to be able to close the loop between the rules we have, the risk frameworks we manage and the principles that we should be aiming for.

Take Away

How are we checking the rules we have are aligned to our principles?

How are we checking our principles?

Is our risk framework able to adapt to new principles and changes to rules?

How do we test that the rules that define and constrain us create better outcomes?


rules - unpacking the word

In my post on Principles and Rules, I explored the connection between our human desire for principles, our commercial need for risk and our love of rules. It explored the fact that we create rules, to manage risks, that end up not aligned with our principles, and made some suggestions about how we can close the loop.

In the article, I skipped over the word "rules" without unpacking it. This post is to unpack the word "rule". The history and origin of the word "Rule" is here. Irrespective of the correct use of the word "rule", we use words in both correct and incorrect situations. "Incorrect" meaning there is a more precise or accurate word for the context or situation, but we choose the word we do to create ambiguity, to avoid controversy, to soften the message, or out of naivety. We know that words and our language itself are filled with convenient generalisations that help us to explain ourselves while at the same time avoiding the controversy created by unique circumstances.

In the Principles and Rules article, the choice of the word rule was ambiguous. This allows readers to apply their own context to it, thereby creating more relevance to their own circumstances when reading. It was not a legal contract scenario, with definitions written at the beginning to provide that level of clarity and common interpretation.

So ambiguity was part of the idea; this post, however, is to expand an ontology of the word "rules".

For me, you or someone;

Rules are written or unwritten or both

Rules are mine, created by me that you need to follow. They are yours, crafted by you that you need me to obey. They are shared and we believe that they create a better society

Rules can be the law, just a guide, the standard you need to meet or the rituals that create success. But which law: the one we should not break, or the one where we follow the spirit? As a guide, to guide me from here to where? As a standard, is it absolute or is a range good enough? My rituals: did I learn them, did you teach me, or are they somehow just there?

Rules equally give you more freedom (safety, less murder) and remove your freedom (choice). Rules give me more agency and at the same time remove it.

Rules define my boundaries, but are they ones I have created for myself and continually refined as I learn, or are my rules ones that come from history, because we have always done it this way?

Rules - are they creating my view on values, or are the rules I have someone else's values?

Rules are only there to be broken

Rules allow me to create something as I have done something, have experience and have learnt. Rules allow me to repeat and not make the same mistake or improve and adapt. Rules save me time and energy - I love my heuristics

Rules allow me to manage, prevent and control risk

But whose rules are they?


The takeaway

When our principles become rules, do we question either the rules or principles enough?



Rebecca Rachmany

Road Trip in a Pandemic

When I got to Milano, I thought: How did they know? This feels just like a post-apocalyptic film. But how did the directors know this is how it feels? It’s not that surprising, of course. Cities have gone through wars and pestilence since the existence of cities.

Milano after the plague, or not after the plague. Certainly the worst of it hasn’t happened yet. We are in the middle of it. Maybe just the beginning of it. Whatever it is.

I remember a warm data workshop with Nora Bateson. She said: The disaster has already struck. It just hasn't struck everyone yet.

Empty office buildings in Milan

When I left Ljubljana, why did I leave Ljubljana? Numbers were rising. Measures were tightening. Will I be able to get to Spain? Will I be able to get back? Maybe with my Slovenian plates nobody will stop me.

In Milan I sat with one of my father’s best friends in a rooftop restaurant in the outdoor seating. They took our temperatures before we got on the elevator. I can’t visit my father but at least his friend lives within driving distance of me.

Four hours is a long way to go out of one’s way for a meal and Milan doesn’t seem so advisable, but who knows when we will get to see our loved ones again? She gets me an AirBnB because hotels make me choke with the poisonous sanitizer they use to cleanse the air.

We have dinner. I tell her it's hard to speak to Americans about what's happening in the States. She says, "Whenever I bring it up, your father says 'I don't want to talk about it.'" I don't want to talk about it either, but it's all I ever talk about. I'm American, or at least that's what one of my passports says. My residency permit says Slovenia. It's my only hope of staying out of a red zone at this point. Red zone. I'm not worried about disease. Civil war is another thing.

The laundry place wouldn’t take my underwear or socks. If I want laundry done, I have to do it myself, they said. I don’t know if that’s because of the pandemic or a Milanese tradition. At a little coastal town two hours East, the proprietor of the self-service laundromat did my laundry for me at no extra charge.

This is my first post on the Sufficiency Currency blog. It’s about the wisdom of taking a road trip in Europe in the middle of a pandemic. At the beginning of the collapse of civilization. In a bizarre moment of suspension, where things seem to be going on as usual; waiting for the other shoe to drop. America is in the throes of a civil war and California has no air and we’re sharing TikTok videos and Instagram sunsets. Everyone knows the economy’s dead, but we’re working and shopping and paying rent as if nothing happened, like we’re under hypnosis.

Or a psychedelic trip. One big global psychedelic trip, and not the good kind.

Who is that masked man?

Zipping along the highway in a rental car in the middle of the Zombie Apocalypse Psychedelic Bad Trip. That doesn’t sound very wise.

At one of the ecovillages, they make their own toothpaste. It’s not really toothpaste; it’s a kind of a powder. I asked one of the volunteers how to use the tooth powder. I knew the answer, but I was hoping I was wrong. She said: we just dip our toothbrushes into the jar. I don’t need a pandemic to know that dipping my toothbrush in the communal jar of tooth-powder is a bad idea. I brought my own toothpaste, like a civilized person. I brought the biological kind so they can’t complain.

As I drive my car along the Riviera, I see it out of the corner of my eye, for a brief second between tunnels, the dark blue sign with the circle of stars around the word France. If I hadn’t been paying attention, I wouldn’t have noticed. OK Google doesn’t say “Welcome to France.” When you cross between states in the US, Google tells you “Welcome to New Jersey.” In Europe, no such thing. OK Google is silent. She knows I crossed and I know I crossed. Maybe she’s pretending to honor GDPR as if she doesn’t know my every move. She knows if I am speeding but doesn’t tell the cops. She definitely knows that I got a small cut on my right index finger. Now she knows the prints of my middle finger, too. “OK, Google,” I say. “That’s me,” she answers.

The border crossing is easy. Finding lunch on the French Riviera in the off season during a pandemic isn’t. I have cake and coffee instead of something that feels French — or like lunch. The coffee shop has no WiFi. I guess you’re supposed to be enjoying yourself by the marina, not working.

The Sufficiency Currency project is about creating an alternative form of economic activity. Not marketplaces. Not money. An evolution of how we perceive our economic activity.

“People have always used money. What else is there?” people ask.

People haven’t always used money and there are still peoples on earth who don’t. Every system is born, lives and dies. Money is just a human invention and the financial system is like every other human invention. We can and will invent something else. Hopefully very soon.

A road trip to replace the world’s financial system in the middle of the Zombie Apocalypse Psychedelic Bad Trip.

Follow the Voice of Humanity project here.

Tuesday, 01. December 2020

Phil Windley's Technometria

The Self-Sovereign Internet of Things

Summary: Self-sovereign identity offers much more than just better ways to log in. The identity metasystem is really a sophisticated messaging system that is trustworthy, secure, and extensible. While decentralized identifiers and verifiable credentials have much to offer the Internet of Things (IoT), the secure messaging subsystem promises an IoT that goes well beyond those initial scenarios. This post gives an introduction to SSI and IoT. The follow-on post goes deeper into what a true Internet of Things founded on SSI can provide.

I've been contemplating a self-sovereign internet of things (SSIoT) for over a decade. An SSIoT is the only architecture which frees us from what I've called the CompuServe of Things. Unlike the CompuServe of Things, the SSIoT1 supports rich, peer-to-peer relationships between people, things, and their manufacturers.

In the CompuServe of Things, Alice's relationships with her things are intermediated by the company she bought them from as shown in Figure 1. Suppose, for example, she has a connected coffee grinder from Baratza.

Figure 1: Administrative relationships in today's CompuServe of Things (click to enlarge)

In this diagram, Alice uses Baratza's app on her mobile device to connect with Baratza's IoT cloud. She registers her coffee grinder, which only knows how to talk to Baratza's proprietary service API. Baratza intermediates all of Alice's interactions with her coffee grinder. If Baratza is offline, decides to stop supporting her grinder, goes out of business, or otherwise shuts down the service, Alice's coffee grinder becomes less useful and maybe stops working altogether.

In an SSIoT, on the other hand, Alice has direct relationships with her things. In Operationalizing Digital Relationships, I showed a diagram where Alice has relationships with people and organizations. But I left out things because I hadn't yet provided a foundational discussion of DIDComm-enabled digital relationships that's necessary to really understand how SSI can transform IoT. Figure 2 is largely the same as the diagram in the post on operationalizing digital relationships with just a few changes: I've removed the ledger and key event logs to keep it from being too cluttered and I've added a thing: a Baratza coffee grinder2.

Figure 2: Alice has a relationship with her coffee grinder (click to enlarge)

In this diagram, the coffee grinder is a fully capable participant in Alice's relationship network. Alice has a DID-based relationship with the coffee grinder. She also has a relationship with the company who makes it, Baratza, as does the coffee grinder. Those last two are optional, but useful—and, importantly, fully under Alice's control.

DID Relationships for IoT

Let's focus on Alice, her coffee grinder, and Baratza to better understand the contrast between the CompuServe of Things and an SSIoT.

Figure 3: Alice's relationships with her coffee grinder and its manufacturer (click to enlarge)

In Figure 3, rather than being intermediated by the coffee grinder's manufacturer, Alice has a direct, DID-based relationship with the coffee grinder. Both Alice and the coffee grinder have agents and wallets. Alice also has a DID-based relationship with Baratza, which runs an enterprise agent. Alice is now the intermediary, interacting with her coffee grinder and Baratza as she sees fit.

Figure 3 also shows a DID-based relationship between the coffee grinder and Baratza. In an administrative CompuServe of Things, we might be concerned with the privacy of Alice's data. But in a Self-Sovereign Internet of Things, Alice controls the policies on that relationship and thus what is shared. She might, for example, authorize the coffee grinder to share diagnostic information when she needs service. She could also issue a credential to Baratza to allow them to service the grinder remotely, then revoke it when they're done.

The following sections describe three of many possible use cases for the Self-Sovereign Internet of Things.

Updating Firmware

One of the problems with the CompuServe of Things is securely updating device firmware. There are many different ways to approach secure firmware updates in the CompuServe of things—each manufacturer does it slightly differently. The SSIoT provides a standard way to know the firmware update is from the manufacturer and not a hacker.

Figure 4: Updating the firmware in Alice's coffee grinder (click to enlarge)

As shown in Figure 4, Baratza has written a public DID to the ledger. They can use that public DID to sign firmware updates. Baratza embedded their public DID in the coffee grinder when it was manufactured. The coffee grinder can resolve the DID to look up Baratza's current public key on the ledger and validate the signature. This ensures that the firmware package is from Baratza. And DIDs allow Baratza to rotate their keys as needed without invalidating the DIDs stored in the devices.
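
A minimal sketch of the check the grinder might perform, assuming the firmware image is signed with an Ed25519 key whose public half the device finds by resolving Baratza's public DID. The `resolve_public_key` helper stands in for a real DID resolver and is hypothetical; the library used is Python's `cryptography` package.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def resolve_public_key(did: str) -> Ed25519PublicKey:
    """Hypothetical: resolve the DID on the ledger and return the current verification key."""
    raise NotImplementedError

def firmware_is_authentic(firmware: bytes, signature: bytes, manufacturer_did: str) -> bool:
    # Key rotation is handled at the DID layer: the device always resolves the
    # manufacturer's current key rather than trusting a baked-in certificate.
    key = resolve_public_key(manufacturer_did)
    try:
        key.verify(signature, firmware)
        return True
    except InvalidSignature:
        return False
```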

Of course, we could also solve this problem with digital certificates. So, this is really just table stakes. The advantage of using SSIoT for secure firmware updates instead of digital certificates is that if Baratza is using it for other things (see below), they get this for free without also having to support the certificate code in their products or pay for certificates.

Proving Ownership

Alice can prove she owns a particular model of coffee grinder using a verifiable credential.

Figure 5: Alice uses a credential to prove she owns the coffee grinder (click to enlarge)

Figure 5 shows how this could work. The coffee grinder's agent is running the Introduction protocol and has introduced Alice to Baratza. This allows her to form a relationship with Baratza that is more trustworthy because it came on an introduction from something she trusts.

Furthermore, Alice has received a credential from her coffee grinder stating that she is the owner. This is kind of like imprinting. While it may not be secure enough for some use cases, for things like a coffee grinder, it's probably secure enough. Once Alice has this credential, she can use it to prove she's the owner. The most obvious place would be at Baratza itself to receive support, rewards, or other benefits. But other places might be interested in seeing it as well: "Prove you own a Baratza coffee grinder and get $1 off your bag of beans."
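
For illustration, an ownership credential along these lines might look roughly like the following, loosely following the W3C Verifiable Credentials data model. The credential type, DIDs, claim fields, and model string are hypothetical, and the proof is elided.

```python
# Rough sketch of the credential the coffee grinder could issue to Alice.
ownership_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "DeviceOwnershipCredential"],  # hypothetical type
    "issuer": "did:example:grinder",          # the coffee grinder itself
    "issuanceDate": "2020-12-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:alice",
        "owns": {"manufacturer": "Baratza", "model": "example-model"},
    },
    # "proof": {...}  # signature created by the grinder's agent, elided here
}
```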

Real Customer Service

We've all been in customer hell where we call a company, get put on hold, get asked a bunch of questions to validate who we are, have to recite serial numbers or model numbers to one agent, then another, and then lose the call and have to start all over again. Or been trapped in a seemingly endless IVR phone loop trying to even get to a human.

The DID-based relationship Alice has created with Baratza does away with that because DIDComm messaging creates a batphone-like experience wherein each participant knows they are communicating with the right party without the need for further authentication, reducing effort and increasing security. As a result, Alice has a trustworthy communication channel with Baratza that both parties can use to authenticate the other. Furthermore, as we saw in the last section, Alice can prove she's a bona fide customer.

But the ability of DIDComm messaging to support higher-level application protocols means that the experience can be much richer. Here's a simple example.

Figure 6: Alice uses a specialized wallet to manage the vendors of things she owns (click to enlarge)

In Figure 6, Alice has two coffee grinders. Let's further assume that Alice has a specialized wallet to interact with her things. Doc Searls has suggested we call it a "briefer" because it's more capable than a wallet. Alice's briefer does all the things her credential wallet can do, but also has a user interface for managing all the things she owns and the relationships she has with them3. Students in my lab at BYU have been working on a prototype of such an interface we call Manifold using agent-enabled digital twins called "picos".

Having two things manufactured by Baratza presents a problem when Alice wants to contact them because now she is the intermediary between the thing and its vendor. But if we flip that and let the thing be the intermediary, the problem is easily resolved. Now when Alice wants to contact Baratza, she clicks one button in her briefer and lets her coffee grinder intermediate the transaction. The grinder can interject relevant information into the conversation so Alice doesn't have to. Doc does a great job of describing why the "thing as conduit" model is so powerful in Market intelligence that flows both ways.
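
A sketch of what that interjection might look like, assuming the grinder's agent simply wraps Alice's request with the device context only it knows before forwarding it to Baratza over their DIDComm channel. The protocol URI, DIDs, and field names are all hypothetical.

```python
# Alice's one-button support request, as her briefer might hand it to the grinder's agent.
alice_request = {"issue": "Grinder stops mid-grind", "preferred_contact": "message"}

# Context the grinder can supply so Alice never has to recite serial or model numbers.
device_context = {"model": "example-model", "serial": "BZ-000000", "firmware": "1.4.2"}

support_message = {
    "type": "https://example.org/vendor-support/1.0/request",  # hypothetical protocol
    "from": "did:example:grinder",
    "to": ["did:example:baratza"],
    "body": {"on_behalf_of": "did:example:alice", **alice_request, "device": device_context},
}
```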

You'll recall from DIDComm and the Self-Sovereign Internet, that behind every wallet is one or more agents. Alice's briefer has an agent. And it has relationships with each of her things. Each of those has one or more agents. These agents are running an application protocol for vendor message routing. The protocol is using sub protocols that allow the grinder to act on Alice's behalf in customer support scenarios. You can imagine that CRM tools would be fitted out to understand these protocols as well.

There's at least one company working on this idea right now, HearRo. Vic Cooper, the CEO of HearRo recently told me:

Most communications happen in the context of a process. [Customers] have a vector that involves changing some state from A to B. "My thing is broken and I need it fixed." "I lost my thing and need to replace it." "I want a new thing and would like to pay for it but my card was declined." This is the story of the customer service call. To deliver the lowest effort interaction, we need to know this story. We need to know why they are calling. To add the story to our context we need to do two things: capture the intent and manage the state over time. SSI has one more super power that we can take advantage of to handle the why part of our interaction. We can use SSI to operationalize the relationships.

Operationalized relationships provide persistence and context. When we include the product itself in the conversation, we can build customer service applications that are low effort because the trustworthy connection can include not only the who, but also the what to provide a more complete story. We saw this in the example with two coffee grinders. Knowing automatically which grinder Alice needs service for is a simple bit of context, but one that reduces effort nonetheless.

Going further, the interaction itself can be a persistent object with its own identity, and DID-based connections to the participants4. Now the customer and the company can bring tools to bear on the interaction. Others could be invited to join the interaction as necessary and the interaction itself now becomes a persistent nexus that evolves as the conversation does. I recently had a month long customer service interaction involving a few dozen calls with Schwab (don't ask). Most of the effort for me and them was reestablishing context over and over again. No CRM tool can provide that because it's entirely one-sided. Giving customers tools to operationalize customer relationships solves this problem.

A Self-Sovereign Internet of Things

The Sovrin Foundation has an IoT working group that recently released a whitepaper on Self-Sovereign Identity and IoT. In it you'll find a discussion of some problems with IoT and where SSI can help. The paper also has a section on the business value of SSI in IoT. The paper is primarily focused on how decentralized identifiers and verifiable credentials can support IoT. The last use case I offer above goes beyond those, primarily identity-centric, use cases by employing DIDComm messaging to ease the burden of getting support for a product.

In my next blog post, I'll extend that idea to discuss how SSI agents that understand DIDComm messages can support relationships and interactions not easily supported in the CompuServe of Things as well as play a bigger role in vendor relationship management. These and other scenarios can rescue us from an administrative, bureaucratic CompuServe of Things and create a generative IoT ecosystem that is truly internet-like.

Notes

1. Some argue that since things can't be sovereign (see Appendix B of the Sovrin Glossary for a taxonomy of entities), they shouldn't be part of SSI. I take the selfish view that as a sovereign actor in the identity metasystem, I want my things to be part of that same ecosystem. Saying things are covered under the SSI umbrella doesn't imply they're sovereign, but merely says they are subject to the same overarching governance framework and use the same underlying protocols. In the next post on this subject, I'll make the case that even if things aren't sovereign, they should have an independent existence and identity apart from their owners and manufacturers.

2. The choice of a coffee grinder is based simply on the fact that it was the example Doc Searls gave when we were having a discussion about this topic recently.

3. This idea has been on my mind for a while. This post from 2013, Facebook for My Stuff, discusses it in the "social" vernacular of the day.

4. The idea that a customer service interaction might itself be a participant in the SSIoT may cause some to shake their heads. But once we create a true internet of things, it's not just material, connected things that will be on it. The interaction object could have its own digital wallet, store credentials, and allow all participants to continue to interact with it over time, maintaining context, providing workflow, and serving as a record that everyone involved can access.

Photo Credit: Coffee Beans from JoseAlbaFotos (Pixabay license)

Tags: agents credentials decentralized+identifiers didcomm identity me2b ssi vrm iot picos customer+service

Monday, 30. November 2020

The Dingle Group

Guardianship in Self-Sovereign Identity

On Monday, November 23rd the Vienna Digital Identity Meetup* held its 17th event, the focus of this event was on Guardianship and SSI. Our presenter was Philippe Page of The Human Colossus Foundation and is a current member of the Sovrin Foundation Guardianship Working Group.

Guardianship is a legal status where one individual is under the legal care of another. A natural guardian relationship is between parent and child; legal guardians are persons who are recognized by the courts as having the legal authority and duty of care for another.

While there are long standing legal precedents and processes around the assignment, management and revocation of guardianships, these requirements were not met by existing digital identity management solutions. With SSI, this mechanism now exists. SSI can work in conjunction with traditional identity and credential management systems while being able to integrate into existing legal processes and provides a robust mechanism for revocation.

Guardianship is a complex topic, with many subtleties and layers. In the humanitarian sector it is a reality of daily life when supporting and assisting migrants, refugees and displaced persons. It is a topic we are all faced with in our lifetimes, whether as a child (being cared for) or as an adult (caring for or being cared for). In this first event on this topic, Philippe provided an overview of how SSI and guardianship fit together and how SSI meets the lifecycle stages (Inception, Creation, Usage and Termination) of guardianship.

An objective of these events is to educate the community on how high assurance digital identities unlock new possibilities across all entities and industry sectors. Using the tripartite relationship in clinical drug trials of patient, doctor and pharma company, Philippe covered how guardianship can also be used to manage consent to derive a business value unlocked with high assurance digital identity.

Guardianship is an important legal concept, and will be a topic we will return to in 2021.

For a recording of the event please check out the link: https://vimeo.com/482803989

Time markers:

0:00:00 - Introduction

0:03:55 - Philippe Page - Introduction

0:08:03 - Overview

0:08:41 - Guardianship background

0:20:15 - Core Components

0:33:51 - Scaling and Standardization

0:39:04 - Bridging Economic Actors & Human Centricity

0:45:45 - Applications

1:02:08 - Questions

For more information on:

- Sovrin Foundation Guardianship Working Group : https://sovrin.org/guardianship/

- The Human Colossus Foundation : https://humancolossus.foundation/

And as a reminder, due to increased COVID-19 infections we are back to online only events. Hopefully we will be back to in person and online soon!

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technologists on the value that a high assurance digital identity creates by reducing risk and strengthening provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.


Justin Richer

Thanks, fixed the typo!

Thanks, fixed the typo!

Sunday, 29. November 2020

Tim Bouma's Blog

The Power of a Secret

Photo by Michael Dziedzic on Unsplash

Note: This post is the sole opinion of the author based on knowledge and experience gained at the time. The author recognizes that there may be errors and biases, and welcomes constructive feedback to correct or ameliorate.

We all like secrets. When we possess a secret, it gives us a heightened sense of individuality — that we know something that nobody else knows — giving us a special perspective or an option for the future that only we can exercise — in other words, power.

It turns out, imaginary or not, secrets are fundamental to the power that we have as individuals and institutions in the digital realm. Passwords, codes — those things that grant us, or enable us to grant, special access to the things that are valuable, like bank accounts, emails, or the drafts and finals of our deliberations; the list goes on.

It turns out that, up until August 1, 1977, secrets had a fundamental fault — we had to share them to use them. That meant you had to trust someone else, and that could eventually lead to the betrayal of your secret, and by extension, you.

In 1977, the public introduction of asymmetric cryptography heralded a new generation of secret capabilities. The first major capability was the establishment of shared secrets across insecure channels enabling encryption between two parties without the requirement of a secret backchannel. The second was enabling commitments using secrets that are not shared, more commonly known as digital signatures.
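
Both capabilities are easy to see in a few lines of code. Here is a minimal sketch using the Python `cryptography` package, where X25519 key agreement stands in for the Diffie-Hellman-style establishment of a shared secret over an insecure channel, and an Ed25519 signature stands in for a commitment made without sharing the secret; the library and curves are my choice for illustration.

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. A shared secret established over an insecure channel: only public keys are exchanged.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()
alice_shared = alice_private.exchange(bob_private.public_key())
bob_shared = bob_private.exchange(alice_private.public_key())
assert alice_shared == bob_shared  # both derive the same secret without ever sending it

# 2. A commitment using a secret that is never shared: a digital signature.
signer = Ed25519PrivateKey.generate()
message = b"I agree to these terms"
signature = signer.sign(message)
signer.public_key().verify(signature, message)  # raises InvalidSignature if tampered with
```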

What was discovered by Whitfield Diffie and Martin Hellman (and also James Ellis) is changing the world as we know it. It's been only 43 years. Yes, that seems like an ice-age ago, but in the grand scheme of history, it is only a wink.

My concluding remark in this brief post is that you ain’t seen nothing yet (with apologies to BTO). I have been learning about many related schemes, based on that 1977 publicly-announced breakthrough: elliptic curves, homomorphic commitment schemes, proof-of-work, etc.

It's one thing to understand these as mathematics, but it is another thing to understand how these things might be leveraged as institutional capabilities, either built by an institution itself or leveraged from an ecosystem that lets you keep your own secrets.

That’s the key — keeping your own secrets — keeping those things that give you the power.

Friday, 27. November 2020

MyDigitalFootprint

Creating Flow. Exploring lockdown audio lag and my exhaustion

So the technical term for that delay or lag from when you finish speaking to when you hear the next person speak is wrapped up in the idea of "latency". Latency is measured in milliseconds (ms), which are thousandths of seconds. Latency for a face-to-face conversation is effectively zero. For, say, a landline call, it is defined by an ITU standard and is judged by the ability to offer a quality of service. Ideally, about 10 ms will achieve the highest level of quality and feels familiar. A latency of 20 ms is tremendous and is typical for a VoIP call, as it is perfectly acceptable. A latency of even 150 ms is, whilst noticeable, permitted; however, any higher delay or lag times and the quality diminishes very fast. At 300 ms or higher, latency becomes utterly unacceptable as a conversation becomes laboured, driven by interruptions and lacking flow.
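
To make those thresholds concrete, here is a minimal sketch in Python. The bands come straight from the figures above; the function and the example numbers are illustrative only.

```python
def classify_latency(latency_ms: float) -> str:
    """Rough conversational-quality bands using the figures quoted above."""
    if latency_ms <= 10:
        return "highest quality - feels like face to face"
    if latency_ms <= 20:
        return "great - typical of an acceptable VoIP call"
    if latency_ms <= 150:
        return "noticeable but permitted"
    if latency_ms < 300:
        return "quality diminishing fast"
    return "unacceptable - conversation loses flow"

# A call is only as good as its worst participant's latency.
team_latency_ms = {"alice": 18, "bob": 145, "carol": 320}
worst = max(team_latency_ms.values())
print(worst, "->", classify_latency(worst))
```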

We all know the phrases "no one left behind" or "you are only as strong as your weakest team member." Well, the same applies to latency: one person in a remote place, with low broadband speed, on a (shared) WiFi extension, with poor buffering on a cheap router, and we are all down to the slowest person in the team.

Analogy to get to the conclusion. 

“Jet lag”, also called “jet lag disorder,” is a temporary sleep problem that can affect anyone who quickly travels across multiple time zones. Your body has its own internal clock (circadian rhythms) that signals your body when to stay awake and when to sleep. Jet lag occurs because your body's clock is still synced to your original time zone, instead of to the time zone where you've travelled. The more time zones crossed, the more likely you are to experience jet lag. Jet lag can cause fatigue, an unwell feeling, difficulty staying alert and gastrointestinal problems. Jet lag is temporary, but it can significantly reduce your vacation or business travel comfort. Fortunately, there are steps you can take to help prevent or minimise jet lag.


Stay with me. We are bringing jet lag and voice/video lag (latency) together. We know the effects of jet lag - fatigue, an unwell feeling, loss of alertness, gastrointestinal problems - and that it is temporary.

The question is, can voice lag create the same? Anecdotally, I believe yes, based on 8 months of video calls. At the end of a day of video, Teams, Hangouts or Zoom calls, we know we have fatigue, feel unwell, lose alertness, have gastrointestinal problems - and it is temporary. With a good night of sleep, we can do it all again. I know that after a day of mobile or landline calls I don't suffer the same.

However, is this voice lag, or voice latency, or video time, or a little of each? We definitely know that video calls are exhausting, but the assumption for this feeling was the new structure: a new approach, different styles, watching yourself continually, only seeing one person, having to be present 100% of the time. This is all true, but we also lack flow on video calls due to the latency and lag. Lacking flow means conversation is paused, interrupted and slow. This delay takes a lot of energy. We cannot get into flow to share our creative thinking; we have to hold ideas and opinions back, we have to wait for signals to speak - it is all exhausting.

We need to focus on removing lag to create flow. We need to stop moving at the rate of the slowest person; let's get everyone up to flow speed.

  




Wednesday, 25. November 2020

Aaron Parecki

GNAP Editors' Use of GitHub Issues

The editors met yesterday to discuss the issues that were pulled out of the previous draft text and document a process for how to resolve these and future issues. We would like to explain how we plan on using labels on GitHub issues to keep track of discussions and keep things moving.

When there are substantive issues or pull requests, the editors will avoid merging or closing those outright, and instead mark them as "pending", so that these can be brought to the attention of the larger group. If no additional discussion happens on these, the merge or close action will be taken in 7 days. Note for this first round we are setting the deadline for the issues below as Dec 11th due to the US holiday and the fact that this is the first time using this process.

"Pending Merge"
When specific text is proposed in a PR (by anyone, not limited to the editors), and the editors believe this text reflects the consensus of the working group, this marks that the PR will be merged in 7 days unless there is a clear alternative proposal accepted by the working group.

"Pending Close"
When the editors believe an issue no longer needs discussion, we'll mark it "Pending Close". The issue will be closed in 7 days unless someone brings new information to the discussion. This tag is not applied to issues that will be closed by a specific pull request.

There are two additional labels we will use to flag issues to the group.

"Needs Text"
The editors suggest this issue needs additional text in the spec to clarify why this section is needed and under what circumstances. Without a concrete proposal of text to be included in the spec, this section will be removed in a future update.

"Postponed"
This issue can be reconsidered in the future with a more concrete discussion but is not targeted for immediate concrete changes to the spec text. When used on its own, this label does not indicate that an issue is targeted to be closed. An issue may also be marked "Pending Close", and this is used so that we can distinguish closed issues between discussions that have concluded or things that we may want to revisit in the future. Remember that closed issues are not deleted and their contents are still findable and readable, and that new issues can reference closed issues.
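
For anyone who wants to track these labels programmatically, here is a minimal sketch using GitHub's REST API via Python `requests`. The repository path matches the links below; the exact label strings and the 7-day window are taken from the descriptions above but remain assumptions about how they appear on GitHub, and note that GitHub's issues endpoint also returns pull requests.

```python
import datetime as dt
import requests

REPO = "ietf-wg-gnap/gnap-core-protocol"
LABELS = ["Pending Merge", "Pending Close"]

def issues_past_window(label: str, days: int = 7):
    """Yield open issues carrying `label` that have had no updates inside the window."""
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/issues",
        params={"labels": label, "state": "open", "per_page": 100},
    )
    resp.raise_for_status()
    cutoff = dt.datetime.now(dt.timezone.utc) - dt.timedelta(days=days)
    for issue in resp.json():  # includes pull requests as well as issues
        updated = dt.datetime.fromisoformat(issue["updated_at"].replace("Z", "+00:00"))
        if updated < cutoff:
            yield issue["number"], issue["title"]

for label in LABELS:
    for number, title in issues_past_window(label):
        print(f"{label}: #{number} {title}")
```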

With these labels in mind, here is the list of issues and their statuses we were able to discuss on our last editor's call. The action on these pending issues will be taken on Dec 11th to give the group enough time to review this list. For this first round, many of the issues are marked "Pending Close" as we're looking for low hanging fruit to prune the list of issues down. In the future, you can expect to see more "Pending Merge" issues as we're bringing proposed text to review by the WG.

Postponed:

• Generic claim extension mechanism
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/131

Pending Merge:

• Make access token mandatory for continuation API calls
** https://github.com/ietf-wg-gnap/gnap-core-protocol/pull/129

Postponed and Pending Close:

• Fetchable Keys
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/47
• Including OpenID Connect Claims
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/64
• Application communication with back-end
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/82
• Additional post-interaction protocols
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/83

Pending Close:

• HTTP PUT vs POST for rotating access tokens
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/100
• Use of hash with unique callback URL
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/84
• Interaction considerations
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/81
• Expanding dynamic reference handles
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/76
• Post interaction callback nonce
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/73
• Unique callback URIs
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/55
• Instance identifier
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/46
• Requesting resources by reference
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/36
• Mapping resource references
** https://github.com/ietf-wg-gnap/gnap-core-protocol/issues/35

Tuesday, 24. November 2020

Matt Flynn: InfoSec | IAM

Modernization of Identity and Access Management

From the Oracle IAM blog: "Oracle has been in the IAM business for more than 20 years and we’ve seen it all. We’ve addressed numerous IAM use-cases across the world’s largest, most complex organizations for their most critical systems and applications. We’ve travelled with our customers through various highs and lows. And we’ve experienced and helped drive significant technology and business tra

From the Oracle IAM blog:

"Oracle has been in the IAM business for more than 20 years and we’ve seen it all. We’ve addressed numerous IAM use-cases across the world’s largest, most complex organizations for their most critical systems and applications. We’ve travelled with our customers through various highs and lows. And we’ve experienced and helped drive significant technology and business transformations. But as we close out our second decade of IAM, I’m too distracted to be nostalgic. I’m distracted by our IAM team’s enthusiasm for the future and by the impact we’ll have on our customers’ businesses in the decade to come. Central to that is the focus to respect our customer's identity and access journey and meet them with solutions that fit their individual needs."

 


FACILELOGIN

Running HashiCorp Vault in Production


We had our 35th Silicon Valley IAM meetup on 24th October to talk about HashiCorp Vault. Dan McTeer, Strategic Technologist at HashiCorp, and Bryan Krausen, a consultant on HashiCorp and AWS, presented the Vault architecture and use cases in depth and took a lot of questions from the audience.

Dan and Bryan are also the co-authors of the book, Running HashiCorp Vault in Production.

Running HashiCorp Vault in Production was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


ian glazer's tuesdaynight

The Future of Digital Identity: 2020 – 2030


Some thoughts on the next 10-ish years in identity management.

[This was originally written in December 2019: pre-pandemic, pre-US presidential election, pre-George Floyd. Truly, it was written in the “Before Times.” I thought about updating this before posting but that felt wrong – somehow dishonest. So here is the lightly touched up text of my talk which was given first in Tokyo at the OpenID Foundation Summit and then again as part of the all-virtual Identiverse. If you want to skip the text and go straight to the video, you can

My deepest thanks go to Naohiro Fujie and Nat Sakimura for prompting me to write this, and to Andi Hindle for his feedback. – IG 11/24/2020]

It is my honor to present to you today. Today, it is my privilege to talk to you about my vision of the future of digital identity. When Naohiro-san asked me to speak on this topic, I was both honored and panicked. In my daily role, I focus on a 12 to 18 month time frame. My primary task is to help my stakeholders and, yes, I have a multi-year vision, but I primarily focus on how my team can execute in the next few months to help those stakeholders. I don’t, as a matter of my daily routine, think about the future.

So I was a little panicked. I am not a futurist. I am no longer an industry analyst. I am just a practitioner trying to help where I can. How then should I talk about the next ten years of our industry?

I can name 4 ways to think about the future and with your permission I will briefly try all 4.

Looking at the Past to See the Future

One way to talk about the future is to look back at past predictions and see how they fared. I’ll choose 3 predictions:

• The Need for Password Vaulting
• SAML is Dead
• The Year of PKI (Again…Still)

The Need for Password Vaulting

In 2014, I said that by 2017 the need for password vaulting would be gone. Well, it’s 2020 and there are systems that simply refuse to participate in single sign-on schemes. My guess is that in the next 5 years the need for enterprise password vaulting (not including privileged account management) will be gone.

In non-enterprise settings, password managers are more prevalent than ever, at least browser-based ones. My guess is that WebAuthn will drive passwordless use cases and when combined with OS-based software tokens the need for passwords and password vaulting in our daily lives will dramatically decrease in the next 5 to 10 years. But more on that later.

SAML is Dead

Back in 2012, Craig Burton famously stated that SAML was dead. It’s 2020 and SAML remains dead. It is so dead that my company is involved with at least 70,000 different SAML federations. (May all of our endeavors be so successful upon their demise.)

The Year of PKI

Since 1977 it has been the year of PKI. It is now the year of PKI. It will continue to be the year of PKI until it becomes impossible to safely distribute keys. (But I’ll get to that in a bit.)

While looking at past predictions is one way to talk about the future, it is not completely sufficient.

Continuous Future

Another way to think about the future is to imagine a continuous one… one in which today’s technologies and techniques progress in a linear fashion towards the future. Promising things that are growing today continue to grow into the future. There are four items I’d like to discuss briefly:

• OIDC and SCIM Will Be the New Normal
• SAML Will Still Be Dead
• Passwords Will Also Be Dead
• WebAuthn Will be an Alternative to Social Sign-On

OIDC and SCIM Will Be the New Normal

The next decade of IAM, especially workforce-centric IAM, will be based on OpenID Connect and SCIM. This implies that OAuth and JOSE will continue in their role as critical supporting standards. This is not to say that other protocols, such as SAML or Kerberos, will not be important, but OIDC and SCIM will be the assumed pieces of our infrastructures.

SAML Will Still Be Dead

SAML is “dead” because it works and it is a legacy technology. It is mature. It is popular. And that will not change in the next 5 years. Now, I do not expect a resurgence of SAML federations, but I also do not expect rapid migration from SAML to OIDC (unless prompted by a shift in platform or provider). Consider that it took years to stamp out WS-Federation; SAML will take a similar path. In 10 years’ time, the SAML zombie herd will be quite thin but still shuffling onwards and very much considered a legacy protocol.

Passwords Will Also Be Dead

Good news! The password is dead! And I mean this in both senses: passwords work and passwords are legacy. In the next 5 years, consistent best practices will roll through our enterprises: long passwords, infrequent changes, and strengthening with a second factor.

At the same time, we can see how the use of passwords will decline significantly over the next 10 years. Enterprise is already showing this to be the case: federation as the primary means of resource access. We are not far off from having 1 password per enterprise user. What about in the consumer space?

WebAuthn Will be an Alternative to Social Sign-On

As a consumer, how do I get the best experience getting into sites and apps? One approach is to use a password manager. But in 2016, according to the Pew Research Center, only 3% of internet users used a password manager app and only 2% used a browser-based one. While those numbers have undoubtedly grown, I cannot imagine they have cracked double digits.

Another approach is to use social sign-on to get into sites and apps. I believe people use social sign-on to avoid the twin hassles of creating an account and managing a password.

But I believe that by 2030 there will be parity between WebAuthn and social sign-on. This is predicated on active clients ruling our mobile worlds and the use of desktops continuing to decline. Concerns over personal privacy, combined with the ease of use of a mobile OS acting as the dominant active client that “magically” signs one in, will bring a meaningful alternative to social sign-on.

WebAuthn is the standard that makes this happen at the wire level. Ubiquitous browser support is a key enabling step which is well underway. Connecting OS-level biometric recognition to services via enabled browsers is the obvious next step. Those two things will deliver a “magical” mobile sign-in experience in which I just look at my phone and I am in the app. We should expect to see this play out over the next 3 to 5 years, with mainstream adoption in 5 to 10.

These items will help us, in different ways, to improve the state of account management. But we cannot only think about digital identity by itself; we have to think about the technological landscape that surrounds our industry.

The Adjacent World

If we are going to talk about the future of identity we cannot do so without looking at adjacent technology and trends:

• Active Clients Will be Mainstream
• Quantum Computing and PKI
• Balkanization of the Internet

Active Clients Will be Mainstream

The next 10 years will be dominated by digital things acting on our behalf: active clients. Our password managers, personal digital assistants, and digital wallets will take a far greater role in finding, delivering, and interacting with online services.

In the next 5 years, the mobile OS and its features will be the primary active client for the vast majority of the online world. Using alternatives to the mobile OS’s wallets, password managers and strong authentication clients will be possible, but they will struggle to gain widespread adoption. That might change in 10 years with regulatory action or significant market externalities, but it is unlikely.

Similarly, when it comes to digital assistants, as my colleague Peter Schwartz, futurist, said to me, “If you aren’t Alexa, Siri, or Google Assistant you have no chance. Everything will be brokered through one of those three.” Your digital assistant service will be brokered through one of those 3 or possibly a mega-platform such as WeChat.

All of this has implications in the death of passwords. If the mobile OS vendor doesn’t want to support WebAuthn, DID Auth, or the next great thing we can dream up, it is going to be extremely difficult to gain meaningful adoption by people and thus service providers will be unlikely to adopt as well.

Quantum Computing and PKI

In the next ten years, we will see quantum computing affect cryptography. For the most part, our hashing algorithms will be okay in a post-quantum world. But it is our key exchange algorithms that might fall. As my colleague Taher Elgamal, noted mathematician and father of SSL, told me, “We need to get moving.” We have 5 years to adopt new key exchange mechanisms, assuming we get a new method by 2025… which we may not. A failure to act will undermine cryptographic trust. And that has profound implications not just for our industry but for all industries.

Balkanization of the Internet

In the next 10 years we will see the internet split into at least 2 separate internets. There will be an internet for China (and possibly a separate one for Russia) and one for the rest of the world. Even if these internets are not physically separated, national policy, censorship, and enterprise risk management will drive logically separated ones.

What this likely means for identity professionals is that within these separate Internets, separate identity schemes will arise. If today identity is the perimeter of the enterprise, in 10 years identity will be the perimeter of these Balkanized Internets. We can see the beginnings of this with WeChat, and given nationalism on the rise around the world, one can easily imagine non-interoperable identity perimeters to our online worlds. And this is an outcome we must fight.

Predicting which currently successful things will continue to be successful is often the role of an industry analyst – a role I used to hold. But I think there is a more challenging and exciting way to think about the future – one that isn’t based on likely, and frankly reasonably obvious, outcomes.

The Discontinuous Future

A third way to think about the future is to imagine one that is not a smooth path forward but one that abruptly shifts and radically changes. This discontinuous future is hidden by our biases, our investment in our current projects and approaches, and our natural tendency to rely on the familiar to navigate the dimly lit room that is the future.

I believe that the discontinuous future is focused, not on account management, but actual identity management. So what if we choose, just for the next few minutes, to imagine this hidden, discontinuous future? What would we see there?

I believe we will see 3 actions and 1 actor in this discontinuous future.

• Introductions
• Recognition
• Counselors
• Data Handling
Introductions

The act of introducing someone is the act of creating a relationship. 

One can imagine digital assistants managing our introductions. In this case, one service says to another, “This is an entity which I know about and with whom you should have a relationship.” In some regards you can think of this as social sign-on++.

Recognition

The act of recognition is a way to acknowledge parties in a relationship and is a way to demonstrate the existence of the relationship to some other entity or service. For example, I am a member of IDPro or I am an employee of Salesforce or I am a citizen of Japan. I think the act of acknowledging a relationship will supersede our current forms of consent. The relationship defines “normal” acceptable behavior between the parties. So long as actions are within the regular boundaries of the relationship then the person doesn’t need to provide additional consent. The parties’ behavior within the context of the relationship would be monitored by active clients.

Counselors

Counselors are new actors in this discontinuous future. They are entities who can:

• Act on your behalf
• Vouch for you and your relationships
• Create and sever relationships on your behalf
• Counsel you on your behavior

Counselors can provide data needed to form a relationship. They can step in before you share data with a service that could be considered risky. Imagine something stepping in before you hand over your form of payment and email address and suggesting you use anonymous ones instead. Imagine that the service can even generate that anonymous form of payment and a pseudonymous email for you. That is the future role of a counselor.

For some, a counselor could be a government-supplied or a private-sector-supplied service. These counselors are true value-add active clients.

Data Handling

How we handle data will change in the discontinuous future. I believe that pseudonymization and differential privacy will have to be applied at the time of introduction. We will finally have an infrastructure that supports the idea that data gathered at the time of use is superior to previously stored away data. Provenance and relationship metadata will be applied all the time, even for insights derived from shared information. Finally, common privacy-preserving processing fabrics will arise to enable industry sectors to derive industry-specific insights and industry-shared risks; think of this as an industry-specific shared signals and attribute exchange.

I believe focusing on these actors and actions will enable us to move toward true digital identity management.

The Fourth Way

There is a fourth way to think about the future: They say that the way to predict the future is to create it. This might seem daunting. Not everyone can create a new protocol, not everyone can write a specification, not everyone can build something that has never been seen before.

But, each of us, in our own way, can create the future. We create the future by hiring without bias to form diverse work environments. We create the future by respectfully using data shared with us. We create the future by ensuring that our systems run in an environmentally sustainable manner. We create the future by ensuring the algorithms we use are ethical and without bias. We create the future by providing something wonderful for all of our stakeholders. And in these ways, we create a future far more meaningful than any talk about the future can describe.

Thank you.

Monday, 23. November 2020

Identity Woman

In a digital age, how can we reconnect values, principles and rules?


Who is the “we” – this piece is co-authored by Kaliya Young and Tony Fish who together have worked for over 45 years on identity and personal data. For this article, we are looking at the role of values, principles and rules within the industry and sectors seeking to re-define, re-imagine and create ways for […]

The post In a digital age, how can we reconnect values, principles and rules? appeared first on Identity Woman.


MyDigitalFootprint

In a digital age, how can we reconnect values, principles and rules?

Who is the “we”, this piece is co-authored by Kaliya Young and me, who together have worked for over 45 years on identity and personal data. For this article, we are looking at the role of values, principles and rules within the industry and sectors seeking to re-define, re-imagine and create ways for people to manage the digital representations of themselves with dignity.

As we write this, there is an ongoing conversation about the regulation of Facebook and of big tech in general. We see a problem with the frame of the conversation because we believe ON PRINCIPLE they shouldn’t exist: no one entity should have that much power and control over the global population’s identities, “their” data and the conversations we have. So any frame that accepts BIG TECH as acceptable won’t create rules that actually move towards the principle of ending the current hegemony, but will rather just seek to regulate it as is.

With this piece, we are seeking to look at how principles change when the underlying fabric of what is possible changes. The entire privacy framework we have today is based on early 1970s reports written in the United States to address concerns over the mass state databases proposed in the mid-to-late 1960s and the growing data broker industry that was sending people catalogues out of the blue. It doesn’t take account of the world we live in now, where “everyone” has a little computer in their pocket.

So let’s get to the question at hand: “Why does it matter that we connect values, principles and rules?” The connection is not clear because we have created so many words, and so much variance across languages, that there is significant confusion. We are often confused ourselves about what we mean, and we are very inconsistent in how we apply our understanding, often to benefit ourselves or to justify a belief. To unpack the relationship we need to look at definitions, but we have to accept that even definitions are inconsistent. Our confirmation bias is going to fight us, as we want to believe what we already know rather than expand our thinking.

Are we imagining principles or values?

It is worth noting that our principles are defined by our values, much like ethics (group beliefs) and morals (personal beliefs): in a complex adaptive system my morals affect the group’s ethics and the group’s ethics change my morals. Situational awareness and experience play a significant part in what you believe right now, and in what the group or society believes.

Values can adapt by context, whereas principles are fixed for a period, withstanding the test of time. Setting principles in a framework implies that we don’t want them to change every day, week, month or year; they should be good and stable for a generation, though we can adapt, revise or adjust them based on learning. Fundamentally, principles are based on values, which do change, so there are ebbs and flows of conflict between them. This means we frame principles and often refuse to see that they are not future-proof forever. Indeed, the further a principle gets from the time it was created, the less it will have in common with current values.

Are we confusing principles and rules?

Considering their characteristics, principles are conceptually abstract and universal, whereas rules are specific and particular. Principles cope with exceptions; rules need another rule. Principles provide the power of thought and decision-making; rules prevent thought and discretion. Principles need knowledge and experience to deliver outcomes; rules don’t. Principles cope with risk, conflict and abstraction; conflict is not possible for a rule: either this rule applies or another rule is needed.

We love rules of thumb, as such rules and heuristics provide time-saving frameworks that mean we don’t have to think. Not having to think saves energy for the things we like to do. Take away the little habits that create a stable place for yourself and you end up exhausted: travel to a new place and you don’t know where the simple amenities of life are; COVID-19 took away the structure of our lives, which created exhaustion until we found a new routine. However, we often mistake “our heuristics” (the way I do something) or “my rules” for a principle, as when a diet rule is treated as a principle for losing weight. We should accept that we purposely interchange rules, values and principles to lend assurance and bias to the philosophy, theory, thesis or point we are trying to gain acceptance of.

In companies, and for social policy, we set rules and principles into matrices such as the one below, asking: is it better to break rules or comply with them, and is it better to uphold principles or challenge them?

A review around the four quadrants highlights that there is no favourable sector; indeed, as a society that wants to improve, we continually travel through all of them. Companies and executives often feel that upheld principles and obeyed rules (top right) create the best culture, yet they also ask the organisation to be adaptive, agile and innovative.

Given that principles are based on values, the leadership team will be instrumental in how well the principles are upheld. Whereas the company’s level of documentation for processes, procedures and rules will define what is to be obeyed, the culture of the top team will determine whether they are obeyed or not.

The matrix below considers the combinations of values and principles, where values are either mine as an individual or ours as a collective society.

The fundamental issue with the two representations (rules, or values and principles) is that they cannot highlight the dynamic nature of the relationship between them. For example, our collective values help normalise an individual’s bias, and those collective values inform and refine principles. Indeed, as principles become extreme and too restrictive, say as our collective values become too godly, our collective values opt to no longer uphold them. When our individualism leads to the falling apart of society, we raise the bar to create better virtues, as doing so makes us more content, loved and at peace.

Movement within the “stable compromise” domain has been explored many times, but the Tytler cycle of history expands on it very well.

In summary, a rules-based approach prescribes or describes in detail a set of rules and how to behave, based on known and agreed principles. A principle-based approach, by contrast, develops principles which set the limits; the controls, measures and procedures for how to achieve the outcome are left for each organisation to determine.

Risk frameworks help us to connect principles and rules

We have explored how a rules-based approach prescribes in detail the rules, methods, procedures, processes and tasks for how to behave and act, whereas a principle-based approach to creating outcomes crafts principles that frame boundaries, leaving the individual or organisation to determine its own interpretation.

In a linear system, we would agree on principles, which would bound the rules. In a non-linear system, we would agree on the principles, which would bound the rules, and as we learn from the rules we would refine the principles. In a complex adaptive system, we are changing principles as our values change because of the rules, which are themselves continually being modified to cope with the responses to them.

This post is titled “In a digital age, how can we reconnect values, principles and rules?” The obvious reason is that rules change and values change, which changes principles, which means our rules need to be updated. However, this process of learning and adaptation depends on understanding the connection that offers closed-loop feedback. An effective connection is our risk frameworks.

The diagram below places rules and principles at two extremes. As already explored, we move from principles to rules but rarely go back to rethink our principles, principally because of the time involved. Rules should refine and improve in real time; principles are generational. However, to create and refine rules we use and apply a risk framework. The risk framework identifies risk and, to help us manage it, we create rules capable of ensuring we get the right data and information to determine whether we have control over the risk. As humans, we are not experts at forecasting the unimagined, so when we implement rules things break and clever minds work out how to bend, break or avoid them. To that end we create more rules to manage exceptions. Occasionally, however, we need to check that our rules are aligned to our principles, and indeed go back to check and refine our principles themselves.

Starting from “Principles”: these are anchored in ideas such as human dignity, subsidiarity, solidarity, covenant, sustainability, the common good, stewardship and equality.

Once we decide that one or more of these should anchor our principles, they form a north star: a direction to travel in and towards. The reason to agree on the principle(s) is that collectively we commit to getting to a better place. We state our principles as an ambition, goal or target, which allows us to understand, manage and control uncertainty using a risk framework. The risk framework frames or bounds the risk we are prepared to take and enables us to define rules that get us to our known outcomes. We implement the rules to create controls using regulation, code and standards. Our risk frameworks use tools to identify, measure, manage, monitor and report on the risk, the delta in risk and compliance with the rules. While all is good, we use the risk framework to create more rules and better framing and boundaries, creating better outcomes. However, when the desired outcomes are not being created, we revert to the principles, check our north star and use our new knowledge to refine or redefine the risk we are prepared to take.

Data and Identity Principles

Having established this framework, the idea is to apply it to the authors’ favourite topics of data and identity. We have an abundance of rules and regulations and as many opinions on what we are trying to achieve through identity and data ownership. We don’t appear to have an agreed risk framework at any level: individual, company, society, national or global. This is not a bill of rights; this is “what do we think is the north star for data and identity, and on what principles are they built?” How do these principles help us agree on risks, and will our existing rules help or hinder us?

“What do we think is the north star for data and identity, and on what principles are they built?” How do these principles help us agree on risks, and will our existing rules help or hinder us?

Our problems probably started a while back, when information could travel faster than people (the telegraph rollout of 1850). This was a change in the fabric of values and principles. The trusted person was passed over, and that person-to-person trust was no longer needed. Delays that allowed time for consideration gave way to immediacy: a shift in values and principles.

The question is how our principles change when the underlying fabric of what is possible changes: the world we designed for was physical; it is now digital-first. Now that we are becoming aware the fabric has changed, where next? By example, Lexis is a legal research system and database. With a case in mind, you use this tool to uncover previous judgments and specific cases to determine and inform your thinking. However, this database was built for a human, physical-first world. Any digital judgments in it are still predicated on the old frameworks, so what is its value when the very fabric of all those judgments changes? Do we use it to slow us down and prevent adoption? Time to unpack this.

Physical-world first (framed as AD 00 to 2010)

Classic thinking (western capital civilisation philosophy) defined values and principles which have created policy, norms and rules. Today’s policy is governed by people and processes. We have history to provide visibility over time and can call on millennia of thought and wisdom. Depending on what is trending or leading as a philosophy, we create norms. In a physical, human-first world, we have multiple starting positions: we can start with a market, followed by norms, followed by doctrine/architecture, creating law and regulation; or we can start with norms, followed by doctrine/architecture, followed by market-creating law.

Without our common and accepted beliefs, our physical world would not work. Law, money and rights are not real; they are command-and-control schemas built on shared beliefs. Our created norms are based on our experience of those beliefs. We cope by managing our appetite for risk.

Digital-world first (framed as AD 2020 to AD MMMCCX)

People in companies rather than people in government form the new norms, as companies have the capital to work out how to avoid rules and regulations. The best companies are forming new rules to suit themselves. Companies have the users to mould the norms through the use of their data; behaviour can be directed. Companies set their own rules. Doctrine/architecture creates the market, forming norms, and the law protects those who control the market. Policy can create rules, but it has no idea how those rules are implemented or governed, as the companies make it complex and hide the data. There are few signs of visible “core” human values; indeed, there are no shared and visible data principles. We are heading to the unknown and unimagined.

The companies automate, the decisions become automated, the machine defines the rules and changes the risk model. We are heading to the unknown and unimagined as we have no data principles.

By example, our news and media have changed models. The editor crafted content to meet the demand of an audience who were willing to pay for orchestrated content that they liked. As advertising became important, content mirrored advertising preferences; editorial became the advertising and advertising became the content. Digital created clicks, which drove a new model in which anything that drives clicks works. The fabric changed from physical to digital, and in doing so we lost the principles and rules of the physical-first world to a digital-first world that has not yet agreed on principles for data.

To our favourite topic: Data and Identity

Imagine looking at this framework of “principles, rules and risk” within the industry and sectors seeking to re-define, re-imagine and create ways for people to manage the digital representations of themselves with dignity. How would privacy and identity be presented?

Within data and identity, we have an abundance of rules and regulations and as many opinions on what we are trying to achieve. We don’t appear to have an agreed risk framework at any level: individual, company, society, national or global.

The stated GDPR principles are set out in Article 5:

• Lawfulness, fairness and transparency
• Purpose limitation
• Data minimisation
• Accuracy
• Storage limitation
• Integrity and confidentiality (security)
• Accountability

We know they are called “principles” by the framing of the heading in Article 5; however, are these principles, values or rules? Further, are they boundaries, stewardship, or a bit of a mashup? By example, to get round “purpose limitation”, terms and conditions become as wide as possible so that all or any use is possible. Data minimisation is only possible if you know the data you want, which is rarely the case if you are a data platform. If a principle of the European Union is to ensure the free movement/mobility of people, goods, services and capital within the Union (the ‘four freedoms’), do data and identity ideals and GDPR align?

The issue with “the regulation of Facebook”, or the regulation of Big Tech in general, is that ON PRINCIPLE they shouldn’t exist: no one entity should have that much power and control over people’s identities and their data. So framings that accept them as acceptable won’t create rules that actually move towards the principle of ending the current hegemony, but rather just seek to regulate it as is. If we add in open APIs and the increasing level of data mobility, portability and sharing, whose “rules or principles” should be adopted?

How do your principles change when the underlying fabric of what is possible changes? The entire privacy framework, say in the US today, is based on early 1970s reports written to address concerns over the mass state databases proposed in the mid-to-late 1960s and the growing data broker industry that was sending people catalogues out of the blue. It doesn’t take account of the world we live in now, where “everyone” has a little computer in their pocket. Alas, IMHO, GDPR is not a lot better: rules with no truly human-based core principles.

The lobby of time

By example, here is the 1973 Code of Fair Information Practices. This code was the central contribution of the HEW (Health, Education, Welfare) Advisory Committee on Automated Data Systems; the Advisory Committee was established in 1972 and the report was released in July 1973. The simplicity and the power of the code have been eroded and watered down so that the code is now ineffective. Would we be in a much better place if we had adopted such thinking at the time?

• There must be no personal data record-keeping systems whose very existence is secret.
• There must be a way for a person to find out what information about the person is in a record and how it is used.
• There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.
• There must be a way for a person to correct or amend a record of identifiable information about the person.
• Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.

Conclusion

We have outdated “principles” driving rules in a digital-first world. This world is now dominated by companies setting norms without reference to any widely agreed-upon values. The fact that big tech has gained so much power that it is actually seen by people in government as “equivalent to nation-states” is telling. We have small, well-networked organizations attempting to make a dent in this. MyData, for example, has generated, from an engaged collaborative community, a set of ideals (principles) that provide a good starting point for a wider constructive discussion, but we need historians, anthropologists, ontologists, psychologists, data scientists and regular everyday people who are the users to close the loop between the rules we have, the risk frameworks we manage to, and the principles we should be aiming for.

How can we leverage innovative democratic deliberative processes, like the citizens’ juries used in some parts of Europe, to close the loop between rules and principles around emerging technology?

Take Away

• How are we checking that the rules we have are aligned to our principles?
• How are we checking our principles?
• Is our risk framework able to adapt to new principles and changes to rules?
• How do we test that the rules that define and constrain us create better outcomes?

Tim Bouma's Blog

Tony Fish Drummond Reed This is a tricky balance and you are making me aware of a use case I hadn’t…


Tony Fish Drummond Reed This is a tricky balance and you are making me aware of a use case I hadn’t really considered. This is by no means intended to strip away how you can present yourself. If that was the case, we will have lost all of our gains. Rather, there should be a spectrum of how one can present or accept. Unfortunately we know that there is no black and white; some acceptance rules are unreasonably unequivocal and we are forced to bend the rules (haven’t we all?) for a better outcome. But that is also exploited. In some cases it’s worth burning the registry office to keep everyone safe, but I am hoping there is a middle ground where individuals truly control what they have and reveal nothing to their disadvantage.

In the end, the world is not perfect, and we must be diligent that whatever we create, however good it is, can be used for imperfect ends.

Sunday, 22. November 2020

Tim Bouma's Blog

Next Stop: Global Verification Network

Next Stop: A Global Verification Network

Photo by Clem Onojeghuo on Unsplash

Authors note: This is my opinion only and does not reflect that of my employer or any organization with which I am involved. As this is an opinion, I take full responsibility for any implied, explicit, or unconscious bias. I am open to feedback and correction; this opinion is subject to change at any time.

We’re almost there for truly global trusted interoperability. We almost have all of the networks we need. Let’s go through the networks we already have or will have soon (please note: I am only focusing on electronic networks, not physical or social networks).

Global Communication Network — The Internet as we know it today. Conceptualized as a singular, ubiquitous thing that we take for granted, it is actually a network of networks and an amalgam of protocols and technologies, abstracted and unified, bound by a set of rules known as the Internet Protocol. We can just communicate with one another.

Global Location Network — This is the Global Positioning System (GPS). GPS is so embedded in our lives, baked into the chips that we wear and carry with us (watches, Fitbits, cycling computers, etc.), that we no longer notice its presence. We can just know where we are.

Global Monetary Network — This network is still emerging. Bitcoin is the frontrunner, but there are contenders and competitors, such as Central Bank Digital Currencies (CBDCs). However this plays out, we will soon be able to exchange monetary value with one another without the backing of governments or reliance on the financial intermediaries we have used for centuries.

So what is the next stop for the network? It’s this:

Global Verification Network — A network to independently verify without reliance on trusted intermediaries. Simply put, someone presents you with something (a claim, a statement, or whatever) and you will be able to prove that it is true without accepting it at face value or calling home to a centralized system that could deny you service, surveil you, or give you a false confirmation (for whatever reason). The business of trust can then be between you and the presenter, and you decide what you need to independently verify.

The exact capabilities of this global verification network are still to be determined, but they are becoming clearer every day. Many of the required ingredients already exist as siloed, bespoke add-ons to the Internet as we know it today (TLS, etc.). Further, the cryptography that will enable this global verification network has already existed for years, if not decades.
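To make the point concrete, here is a minimal sketch, not the author's design and far short of a full verifiable-credential stack, of the kind of local, intermediary-free check described above. It assumes Python with the widely available cryptography package; the claim content and variable names are purely illustrative, and key distribution is deliberately out of scope.

```python
# Illustrative only: a presenter signs a claim and a verifier checks it locally,
# with no central system involved in the verification step.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Presenter side: sign a claim with a key only the presenter controls.
presenter_key = Ed25519PrivateKey.generate()
claim = b'{"over_18": true}'
signature = presenter_key.sign(claim)

# Verifier side: verify against the presenter's public key, entirely offline.
public_key = presenter_key.public_key()
try:
    public_key.verify(signature, claim)
    print("claim verified without contacting a centralized system")
except InvalidSignature:
    print("claim rejected")
```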

The hardest part ahead is not the technology; it’s the wholesale re-conceptualization of what is required for a global verification network that puts the power of the network back into the endpoints: you and me.

In the coming weeks, I will be providing more detail, but I want you to take away from this post, that the next major stop for networks is a global verification network.

Saturday, 21. November 2020

Mike Jones: self-issued

Concise Binary Object Representation (CBOR) Tags for Date is now RFC 8943


The Concise Binary Object Representation (CBOR) Tags for Date specification has now been published as RFC 8943. In particular, the full-date tag requested for use by the ISO Mobile Driver’s License specification in the ISO/IEC JTC 1/SC 17 “Cards and security devices for personal identification” working group has been created by this RFC. The abstract of the RFC is:


The Concise Binary Object Representation (CBOR), as specified in RFC 7049, is a data format whose design goals include the possibility of extremely small code size, fairly small message size, and extensibility without the need for version negotiation.


In CBOR, one point of extensibility is the definition of CBOR tags. RFC 7049 defines two tags for time: CBOR tag 0 (date/time string as per RFC 3339) and tag 1 (POSIX “seconds since the epoch”). Since then, additional requirements have become known. This specification defines a CBOR tag for a date text string (as per RFC 3339) for applications needing a textual date representation within the Gregorian calendar without a time. It also defines a CBOR tag for days since the date 1970-01-01 in the Gregorian calendar for applications needing a numeric date representation without a time. This specification is the reference document for IANA registration of the CBOR tags defined.
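To see what the two new tags look like in practice, here is a minimal sketch using the cbor2 Python package. The tag numbers (1004 for an RFC 3339 full-date text string and 100 for days since 1970-01-01) are the ones registered by this RFC; the package choice and the example date are just illustrative.

```python
# Encode the same calendar date with the two date tags registered by RFC 8943.
from datetime import date
import cbor2

birthday = date(2020, 11, 21)

# Tag 1004: RFC 3339 full-date text string, e.g. "2020-11-21".
text_form = cbor2.dumps(cbor2.CBORTag(1004, birthday.isoformat()))

# Tag 100: integer number of days since 1970-01-01 (more compact).
numeric_form = cbor2.dumps(cbor2.CBORTag(100, (birthday - date(1970, 1, 1)).days))

print(text_form.hex(), numeric_form.hex())
```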

Note that a gifted musical singer/songwriter appears in this RFC in a contextually appropriate fashion, should you need an additional incentive to read the specification. ;-)

Friday, 20. November 2020

MyDigitalFootprint

programmable money - the big idea


I had the pleasure of hosting @ledaglyptis on mashup*. We chatted about programmable money. A central theme was gaining acceptance of a “big idea.” Programmable money, cryptocurrency, control of money, tracking, avoiding fraud and CBDCs are all part of one big idea, which we concluded has a few issues. However, Leda beautifully drew on her background as a political scientist and gave us this thinking.

Starting from a Big Idea that will struggle to gain acceptance, as it is a jump too far for ordinary citizens, companies and capital step in to make and create acceptable use cases. These are adopted by niche markets who gain value and benefits. Slowly, many capital-backed, incremental, agile, acceptable use cases build wider acceptance, which means we end up with the badly-thought-through Big Idea anyway. Is this a back door, or just humanity? And now that we have picked up all the known problems, which are now at scale, do we create a new BIG IDEA?

The question is how we add solving known social problems into the loop with the Big Idea at an earlier stage. We know programmable money has great use cases which are attracting new venture money; we also know it will eventually disenfranchise the poor, as they cannot access or use funds as free agents. Who, when and how do we solve the social issues as we create wealth? Perhaps this is true ESG reporting and better governance. One for another mashup*.



Thursday, 19. November 2020

MyDigitalFootprint

data portability, mobility, sharing, exchange and market in one diagram

all explained here  https://www.mydigitalfootprint.com/2019/09/facebook-published-charting-way-forward.html

What makes someone good at judgment?

  "I’ve found that leaders with good judgment tend to be good listeners and readers—able to hear what other people actually mean, and thus able to see patterns that others do not. They have a breadth of experiences and relationships that enable them to recognize parallels or analogies that others miss—and if they don’t know something, they’ll know someone who does and lean on that pe


  "I’ve found that leaders with good judgment tend to be good listeners and readers—able to hear what other people actually mean, and thus able to see patterns that others do not. They have a breadth of experiences and relationships that enable them to recognize parallels or analogies that others miss—and if they don’t know something, they’ll know someone who does and lean on that person’s judgment. They can recognize their own emotions and biases and take them out of the equation.
They’re adept at expanding the array of choices under consideration. Finally, they remain grounded in the real world: In making a choice they also consider its implementation."
Fantastic paper: https://hbr.org/2020/01/the-elements-of-good-judgment

SIR ANDREW LIKIERMAN is a professor at London Business School and a director of Times Newspapers and the Beazley Group, both also in London. He has served as dean at LBS and is a former director of the Bank of England.

What purpose does a Board serve?

Who is allowed to ask the question, why does this board exist and to whom is it accountable?


Purpose, when framed by shareholder primacy, was easy: make sure the board creates and delivers value and wealth for the shareholders, almost at any cost. The skills, processes, practices and values needed were largely simple and financially driven and rewarded. Much has been written about the topic, and the theory forms the basis of the practices that control where we are today. The fundamental fabric has now changed. Indeed, shareholders never owned the company, and real-time trading removed a belief about responsibilities and ownership. It has to be said that these ideals were only a mechanism we created to exercise accountability and responsibility controls. However, those controls and beliefs are now themselves lost. By example, audit is broken and does not work.

When reframing the purpose of a board based on 2020 reasons for a business to exist, of which there are many, things become complex:

• when framed for, say, sustainability: sustainability for whom, with what metrics, skills and data, and who decides?
• when framed for, say, ESG: ESG for whom, with what metrics and data, and who decides?
• when framed for eco-system survival, it becomes complex…
• when framed by the provision of help, service and support, it becomes complex.
Questions I am sitting on:

• Are boards working for us?
• Who is “the us” framed in the last question?
• Of whom are you asking the question, and what voices are you not listening to?

Perhaps we need to start with a question: what is the one clear purpose for which a board exists in this instance? Is it, say, accountability or better judgment, and who decides? Who decides who decides?

IMHO it is this latter question we appear to have lost, leaving me to ask whether a board can ask itself about, and truly reflect on, its own effectiveness. In the absence of oversight, are we left with transparency?


UK National Data Strategy 2020


https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy#executive-summary
The pillars are spot on:

Data foundations: The true value of data can only be fully realised when it is fit for purpose, recorded in standardised formats on modern, future-proof systems and held in a condition that means it is findable, accessible, interoperable and reusable. By improving the quality of the data, we can use it more effectively, and drive better insights and outcomes from its use.

Data skills: To make the best use of data, we must have a wealth of data skills to draw on. That means delivering the right skills through our education system, but also ensuring that people can continue to develop the data skills they need throughout their lives.

Data availability: For data to have the most effective impact, it needs to be appropriately accessible, mobile and re-usable. That means encouraging better coordination, access to and sharing of data of appropriate quality between organisations in the public, private and third sectors, and ensuring appropriate protections for the flow of data internationally.

Responsible data: As we drive increased use of data, we must ensure that it is used responsibly, in a way that is lawful, secure, fair, ethical, sustainable and accountable, while also supporting innovation and research.

Wednesday, 18. November 2020

Heather Vescent

On Being Self Less

Photo by Linus Nylund on Unsplash

The Supply Chain of You

As I dig into the Buddhist rabbit hole of emptiness, one thing comes up over and over — this idea of selflessness. As I struggled to understand the Buddhist description of selflessness, I realized I was limited by an existing indoctrination with a very different (mis?)interpretation of selflessness, one that comes from my midwestern religious upbringing.

Selflessness

I was taught selflessness as some aspect of Christianity, that Jesus was selfless and we should be too. And I learned that this kind of selflessness was about thinking about others, considering others, putting others before you. And I was indoctrinated that I should put others before me.

Well this didn’t work so great for me. While I was busy putting others before me, I did not receive the same treatment. I was expected to put others before me, AND ALSO meet all my own needs. Hello burnout, not to mention unfairness.

Selfish

I rebelled against this and went to take care of my needs first (well first I had to figure out what I needed cause I’d been brainwashed by society). This caused the Christians in my life to call me selfish. But if I didn’t take care of my needs, and other people didn’t take care of my needs either, where did that leave me? Was I supposed to be ok with that?

Selflessness Redux

As I’ve gone deeper into the Tibetan Mahayana Buddhist canon of emptiness, I came up against “selflessness” again. Selflessness is supposedly one of the easiest ways to understand the weight and importance of emptiness (I disagree, but that is a post for another day), because by starting with selflessness, you supposedly realize emptiness directly, as it relates to yourself. (There are also some epic mental logistics!)

Selflessness in the Buddhist canon means you (and everyone) have an ever-changing “I”: this thing you think is you, the thing you ascribe an identity to, is not solid or everlasting; it is constantly changing based on context and your relation to the world. A bunch of years ago, I gave a talk called “How We Create I(dentity)”, which talked about how we each have complex identities that change based on context.

Remember the uproar when Facebook was like you can only have one identity there, and it has to be the legal you? That’s the wrong view of identity. Your FB identity is constrained to the FB platform. (And teaser: your FB identity is dependent on the platform too.)

Dependent Arising

This “you” is dependent on many things. It’s dependent on the evolution of DNA so you have a brain and a body to move around in this world. It’s dependent on the social structure and education and global politics and the food you eat and what your ancestors ate and food is itself dependent on the earth and the sun.

We have a business description for this: the supply chain.

Think about it. You go to a store, you have shelves of products. Those products didn’t just magically appear out of nothing! They don’t exist on their own. The box of cookies exists because of all the steps it took along the supply chain, from the growing of the farmer’s wheat, the harvest tracked on an IoT connected John Deere tractor, the grain shipped cross border (and taxed) to a factory, which makes, say, King Arthur flour, which then is sold to bakeries, and those bakeries use human and machine labor to combine the flour with eggs, sugar, butter, baking soda, and chocolate, which is baked, packaged, marketed, shipped to your store where you can buy and eat them during a global pandemic.

This is called “dependent arising.” Which basically means, the thing does not, can not, exist on its own, of its own accord.

The box of cookies does not magically pop into existence from emptiness. It comes into existence bit by bit as it moves along the supply chain. We Create the Identity (and the product) of the thing.

The Supply Chain of You

Now, apply this scenario to you. You are the box of cookies. Your existence is dependent on many things. You can not exist separately. You are dependent arising. If you understand and accept that you are a result of the “supply chain of you” and you are constantly changing based on context, you may also accept/realize that you are not as solid as you think you are.

This is selflessness. It is the understanding that your existence is dependent on the world around you AND this constantly changing you is not a solid everlasting thing. You have no self to center on — because the self is a constantly changing projection. (I like to think about the self as disco lights at the club creating the ambiance of the dancefloor.)

Resolution?

My Christian understanding of selflessness is about putting others first because ??? IDK Jesus said so?

My Buddhist understanding of selflessness is about realizing I could not exist without everything in the world, and that I do not exist — my identity/self does not exist — outside of the world. And so in this understanding I see how I connect and am created/influenced by everything.

In the Christian sense, if I do not put others first, I am "bad," but this is self-alienating. Whereas in the Buddhist sense, who I am is created by the world, and thus I have the power to influence and create others as much as they have the power to influence and create me. So I can authentically consider others in order to influence and co-create who they are. Which, if I think about it, is perhaps not so different from the intention of considering others in the first place, with one key difference: considering others as if they are you instead of not-you, because together We Create (individual & collective) I(dentity).

Monday, 16. November 2020

Identity Praxis, Inc.

An Interview on Self-sovereign Identity with Kaliya Young: The Identity Woman

Understanding the future of the Internet and the flows of identity & personal information

 

I enjoyed interviewing Kaliya Young, The Identity Woman, last week, during an Identity Praxis Experts Corner Interview on November 13, 2020.

About Kaliya and Her Purpose

 

Kaliya, “The Identity Woman,” is a preeminent expert in all things identity and personal information management standards, protocols, resources, and relationships.

If you listen to our interview, you’ll hear about her purpose and passion, how she is living it, and how she helps guide the world down a path toward self-sovereignty. At the end of this path is the promise that one day people—you, me…everyone—will have control, i.e., self-determination and agency, over their identity and personal information.

Kaliya’s purpose is to answer this profound question: “How do we own, control, manage, and represent ourselves in the digital world, independently of the BigTech companies (Facebook, Google, etc.)?”

For the last 15+ years, Kaliya has been at the center of the self-sovereign identity (SSI) movement.

The SSI movement is all about creating open standards and technologies that change how identity and personal information are collected, managed, and exchanged throughout society. In other words, SSI is about getting our identity out of the grip of BigTech and into the hands of the individual, the data subject. But, SSI is so much more; it is also about evolving the Internet, making it more efficient and secure, and creating new opportunities for businesses to innovate and to forge new and lasting relationships with the people they serve.

The Opportunities from SSI

In our interview, Kaliya highlights several opportunities that SSI can make possible, including:

- The availability of new protocols will make it possible to securely and efficiently move data across the Internet without it being centralized in a few players' hands.
- The possibility for people to have control over how their identity and data are created and moved, rather than that control resting with BigTech like Facebook and Google (see Kaliya's Jan. 2020 interview in Wired, where she discusses making Facebook obsolete).
- The chance for businesses to build a new kind of trusted, secure, and transparent relationship with their customers, and to do so while saving time and money and reducing risk (Kaliya recommends that you check out DIDComm Messaging, an emerging protocol that promises to bring this opportunity to light).

SSI Use Cases

It is still early days for SSI, but people worldwide are working diligently to create the foundation of SSI so that a wealth of privacy and people-centric services can come to light.

Following my interview with Kaliya, I took a look at the W3C Verifiable Credentials Working Group (VCWG), as recommended by Kaliya. The W3C VCWG is a team diligently working on adding a new, secure identity management layer to the Internet. Among other efforts that they are working on, I found that, in Sept. 2020, they released a list of use cases and technical requirements for self-sovereign identity, see the Use Cases and Requirements for Decentralized Identifiers draft spec.
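To make this concrete, a verifiable credential is essentially a cryptographically signed JSON(-LD) document. Here is a simplified sketch of the W3C data model's general shape, written as a Python dict purely for illustration; the issuer, subject, dates, and proof values are placeholders rather than real identifiers.

import json

# Simplified sketch of a W3C-style verifiable credential; every identifier below is a placeholder.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AgeCredential"],
    "issuer": "did:example:dmv",                       # the trusted issuer (e.g., a DMV)
    "issuanceDate": "2020-09-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder",                    # the person the claim is about
        "ageOver": 21                                  # the claim itself, not the birth date
    },
    "proof": {
        "type": "Ed25519Signature2018",                # cryptographic proof added by the issuer
        "verificationMethod": "did:example:dmv#key-1",
        "jws": "..."                                   # signature value elided
    }
}

print(json.dumps(credential, indent=2))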

Here is a list of highlighted use cases:

- The online shopper is assured that the product they're buying is authentic.
- Owners of manufactured goods, e.g., a car, can track the product's ownership history and the provenance of every part (original and replaced) while preserving players' anonymity up and down the supply chain for the life of the product.
- Support for data vaults, aka personal data stores; people can securely store their data in the cloud and be confident that they, and only they, have access to it, and that they can offer fine-grained access to their data when they want to. For example, they can securely share their age in a way that someone else can verify and trust without needing the person's actual birth date or any additional information.
- Tracking verifiable credentials back to their issuer; people or organizations looking to verify someone's data will be able to track a verifiable credential or piece of data back to a trusted source, like a chamber of commerce, bank, or DMV.
- New and improved data exchange consent management, not just for data exchange but to manage online tracking to power personalization and analytics.
- Power secure and anonymous payments.
- Secure physical and digital identity cards or licensing credentials that allow for fine-grained control of exactly what data is shared when proof of identity or licensee rights needs to be verified.

The Challenges for SSI

According to Kaliya, the most significant challenges that we must overcome to make SSI a reality include:

- Counteracting the inertia of the status quo; people don't like change, and they are used to existing knowledge-based authentication practices, processes, and systems.
- Generating awareness among people and driving adoption of new SSI-empowered services.
- Education for everyone: developers, users, executives, customers, clients, investors, regulators, and so much more.
- The new market model, i.e., we now face a three-sided market, and all the business models and systems operations must evolve to accommodate the new use cases.

Kaliya: A wealth of resources and knowledge

Kaliya is a wealth of knowledge.

She knows the SSI industry structure. She knows the leading players. She can point you to the right resources to introduce you to people that can help you understand, plan, and execute SSI solutions and services.

To put a fine point on it, working with Kaliya can save you months, if not years, of stumbling around in the dark as you look to figure out what SSI can do for you and your business.

Here are just a few of the people and resources that Kaliya highlighted and alluded to in our interview:

- Decentralized Identity Foundation, a leading industry group spearheading SSI standards.
- W3C Credentials Community Group, a leading industry group spearheading SSI standards.
- Trust over IP Foundation, a leading industry group spearheading SSI standards and governance models.
- Kim Cameron and the 7 Laws of Identity; Kim is a godfather and visionary in all things identity.
- DIDComm Messaging Protocol, a protocol for trusted data exchange.
- Internet Identity Workshop, a bi-annual unconference where all things SSI are discussed.
- Domains of Identity, a book she wrote on identity.
- Comprehensive Guide to Self Sovereign Identity, a book she wrote on self-sovereign identity.

That’s it for now. Enjoy. We’ll be sure to bring Kaliya back soon.

The post An Interview on Self-sovereign Identity with Kaliya Young: The Identity Woman appeared first on Identity Praxis, Inc.


infominer

Leveling Up - What I’ve been working on lately.


It’s been a year since my last post… Overwhelmed by trying to keep up with the fast flows of information, and my own internal processes, I took a break from social media, stopped working on most of the projects I had begun, and turned inward. I’ve also been busy leveling up, and discovering my potential.

So what exactly have I been doing with all that time?

Managing Emotions

If you haven’t been following my story, so far, the short version is that I quit drinking a few years ago, and all of this info-gathering has been an important part of my recovery process, and a path to creating a new life.

All the same, however far along I had come in my work and self-education, I still had challenges processing everyday emotions, which I'd been primarily avoiding by processing as much information on valuable topics as possible.

Turning that around, I shifted most of my info-gathering from blockchain, cryptocurrencies and decentralized-identity, to focus more on mental hygiene and emotional agility.

The most valuable information I’ve encountered in that regard is Marshall Rosenberg’s Nonviolent Communication.

We’re interested, in nonviolent communication, with the kind of honesty that supports people connecting with each other in a way that makes compassionate giving inevitable, that makes it enjoyable for people to contribute to each other’s well being. - Marshall Rosenberg

I’ve got a couple projects brewing on that topic, and will write more about that later, so don’t want to take too much space for that now.

In brief, I can say that spending time with the teachings of Marshall Rosenberg has made a significant contribution to my mental health and emotional wellbeing.

If you’re curious to learn more, I called a session at my first IIW on the topic, you can check out the notes for that session on the IIW Wiki, which provides a high-level overview, and lots of links.

Developing the backend for a sustainable weekly newsletter

I’d been chatting with Kaliya Identity Woman for around a year, after contacting her about the potential for our collaborating on decentralized identity. At some point, she proposed the idea of writing a newsletter together, under the Identosphere.net domain.

Instead of jumping in head first, like I usually do, we’ve spent a lot of time figuring out how to run a newsletter, sustainably, with as few third party services as possible, while I’m learning my way around various web-tools.

We’re tackling a field that touches every domain, has a deep history, and is currently growing faster than anyone can keep up with. But this problem of fast-moving information streams isn’t unique to digital identity, and I’d like to share this process for others to benefit from.

GitHub Pages

I started in Decentralized ID creating freelance content, and ended up building an Awesome List to organize my findings. Once that list outgrew the Awesome format, I began learning to create static websites with GitHub Pages and Jekyll.

GitHub Pages Starter Pack (a resource I’ve created along that journey)

Static Websites are great for security and easy to set up, but if you’re trying to create any type of business online, you’re gonna want some forms so you can begin collecting e-mail subscribers! Forms are not supported natively through Jekyll or GitHub Pages.

Enter Staticman

Staticman is a comments engine for static websites, but can be used for any kind of form, with the proper precautions.

It can be deployed to Heroku with a click of a button, made into a GitHub App, or run on your own server. Once set up, it will submit a pull-request to your repository with the form details (and an optional mailgun integration).

I set it up on my own server and created a bot account on GitHub with permissions to a private repository, which the Staticman app updates with subscription e-mails.

I made the form, and added a staticman.yml config file in the root of the private repository where I'm collecting e-mail addresses.

The Subscription Form

<center>
  <h3>Subscribe for Updates</h3>
  <form class="staticman" method="POST" action="https://identosphere.net/staticman/v2/entry/infominer33/subscribe/master/subscribe">
    <input name="options[redirect]" type="hidden" value="https://infominer.xyz/subscribed">
    <input name="options[slug]" type="hidden" value="infohub">
    <input name="fields[name]" type="text" placeholder="Name (optional)"><br>
    <input name="fields[email]" type="email" placeholder="Email"><br>
    <input name="fields[message]" type="text" placeholder="Areas of Interest (optional)"><br>
    <input name="links" type="hidden" placeholder="links">
    <button type="submit">Subscribe</button>
  </form>
</center>

The staticman.yml config in the root of my private subscribe repo

subscribe:
  allowedFields: ["name", "email", "message"]
  allowedOrigins: ["infominer.xyz", "identosphere.net"]
  branch: "master"
  commitMessage: "New subscriber: {fields.name}"
  filename: "subscribe-{@timestamp}"
  format: "yaml"
  generatedFields:
    date:
      type: "date"
      options:
        format: "iso8601"
  moderation: false
  name: "infominer.xyz"
  path: "{options.slug}"
  requiredFields: ["email"]

It seems to be struggling with GitHub’s recent move to change the name of your default branch from master to main (for new repositories). So, unfortunately, I had to re-create a master branch to get it running.

Planet Pluto Feed Reader

Trying to keep up with Self-Sovereign Identity, Kaliya and I started out with Feedly, but the pricing for collaborative feed creation is nearly $50 a month for two users. There was no way I could go with that for very long.

One of the most promising projects I found, in pursuit of keeping up with all the info, is Planet Pluto Feed Reader, by Gerald Bauer.

In online media a planet is a feed aggregator application designed to collect posts from the weblogs of members of an internet community and display them on a single page. - Planet (Software)

For the uninitiated, I should add that websites generate RSS feeds that can be read by a newsreader, allowing users to keep up with posts from multiple locations without needing to visit each site individually. You very likely use RSS all the time without knowing, for example, your podcast player depends on RSS feeds to bring episodes directly to your phone.
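If you are curious what a feed looks like from code, here is a minimal sketch using the Python feedparser library; this is just my illustration (Pluto itself is written in Ruby), and the feed URL is a placeholder you would swap for a real blog's feed.

import feedparser

# Minimal sketch: fetch a feed and print the five most recent entries.
# The URL is a placeholder; substitute any real RSS/Atom feed.
feed = feedparser.parse("https://example.com/feed.xml")

print(feed.feed.get("title", "untitled feed"))
for entry in feed.entries[:5]:
    print(entry.title, "-", entry.link)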

What Pluto Feed reader does is just like your podcast app, except, instead of an application on your phone that only you can browse, it builds a simple webpage from the feeds you add to it, that can be published on GitHub, your favorite static web-hosting service, or on your own server in the cloud.

Pluto is built with Ruby, using the ERB templating language for web-page design.

One of the cool things about ERB is it lets you use any ruby function in your web-page template, supporting any capability you might want to enable while rendering your feed. This project has greatly helped me to learn the basics of Ruby while customizing its templates to suit my needs.

Feed Search

I use the RSSHub Radar browser extension to find feeds for sites while I’m browsing. However, this would be a lot of work when I want to get feeds for a number of sites at once.

I found a few simple python apps that find feeds for me. They aren’t perfect, but they do allow me to find feeds for multiple sites at the same time, all I have to do is format the query and hit enter.

As you can see below, these are not fully formed applications, just a few lines of code. To run them, it’s necessary to install Python, install the package with pip (pip install feedsearch-crawler), and type python at the command prompt, which takes you to a Python terminal that will recognize these commands.

From there you can type/paste Python commands for demonstration, practice, or for simple scripts like this. I could also put the following scripts into their own feedsearch.py file and type python feedsearch.py, but I haven't gotten around to doing anything like that.

Depending on the site, and the features you’re interested in, either of these feed seekers has their merits.

Feedsearch Crawler (DBeath/feedsearch-crawler)

# Search a list of sites for feeds and print the results as OPML.
from feedsearch_crawler import search, output_opml
import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logger = logging.getLogger("feedsearch_crawler")

sites = ["http://bigfintechmedia.com/Blog/", "http://blockchainespana.com/", "http://blog.deanland.com/"]
for site in sites:
    feeds = search(site)
    print(output_opml(feeds).decode())

Feed seeker (mitmedialab/feed_seeker)

# Print every feed URL that feed_seeker can find for each site.
from feed_seeker import generate_feed_urls

sites = ["http://bigfintechmedia.com/Blog/", "http://blockchainespana.com/", "http://blog.deanland.com/"]
for site in sites:
    for url in generate_feed_urls(site):
        print(url)

GitHub Actions

Pluto Feed Reader is great, but I needed to find a way for it to run on a regular schedule, so I wouldn't have to run the command every time I wanted to check for new feeds. For this, I've used GitHub Actions.

This is an incredible feature of GitHub that allows you to spin up a virtual machine, install an operating system, dependencies supporting your application, and whatever commands you’d like to run, on a schedule.

name: Build BlogCatcher
on:
  schedule:
    # This action runs 4x a day.
    - cron: '0/60 */4 * * *'
  push:
    paths:
      # It also runs whenever I add a new feed to Pluto's config file.
      - 'planetid.ini'
jobs:
  updatefeeds:
    # Install Ubuntu
    runs-on: ubuntu-latest
    steps:
      # Access my project repo to apply updates after pluto runs
      - uses: actions/checkout@v2
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: 2.6
      - name: Install dependencies
        # Download and install SQLite (needed for Pluto), then delete downloaded installer
        run: |
          wget http://security.ubuntu.com/ubuntu/pool/main/s/sqlite3/libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
          sudo dpkg -i libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
          rm libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
          gem install pluto && gem install nokogiri && gem install sanitize
      - name: build blogcatcher
        # This is the command I use to build my pluto project
        run: pluto b planetid.ini -t planetid -o docs
      - name: Deploy Files
        # This one adds the updates to my project
        run: |
          git remote add gh-token "https://github.com/identosphere/identity-blogcatcher.git"
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add .
          git commit -a -m "update blogcatcher"
          git pull
          git push gh-token master

Identosphere Blogcatcher

Identosphere Blogcatcher (source) is a feed aggregator for personal blogs of people who’ve been working on digital identity through the years, inspired by the original Planet Identity.

We also have a page for companies, and another for organizations working in the field.

Identosphere Weekly Highlights

Last month, Kaliya suggested that since we have these pages up and running smoothly, we were ready to start our newsletter. This is just a small piece of the backend information portal we’re working towards, and not enough to make this project as painless and comprehensive as possible, but we had enough to get started.

Every weekend we get together, browse the BlogCatcher, and share essential content others in our field will appreciate.

We’ll be publishing our 6th edition, at the start of next week, and our numbers are doing well!

This newsletter is free, and a great opportunity for us to work together on something consistent while developing a few other ideas.

identosphere.substack.com

Setting up a newsletter without third-party intermediaries is more of a challenge than I’m currently up for, so we’ve settled on Substack for now, which seems to be a trending platform for tech newsletters.

It has a variety of options for both paid and free content, and you can read our content before subscribing.

Support us on Patreon

While keeping the newsletter free, we are accepting contributions via Patreon. (yes another intermediary, but we can draw upon a large existing userbase, and it’s definitely easier than setting up a self-hosted alternative.)

So far, we have enough to cover a bit more than server costs, and this will ideally grow to support our efforts, and enable us to sustainably continue developing these open informational projects.

Python, Twitter Api, and GitHub Actions

Since we’re publishing this newsletter, and I’ve gotten a better handle on my inner state, I decided it was time to come back to twitter. However, I knew I couldn’t do it the old way, where I manually re-tweeted everything of interest, spending hours a day scrolling multiple accounts trying to stay abreast of important developments.

Instead, I dove into the twitter api. The benefits of using twitter programmatically can't be overstated. For my first project, I decided to try an auto-poster, which could enable me to keep an active twitter account without having to regularly pay attention to twitter.

I found a simple guide How To Write a Twitter Bot with Python and tweepy composed of a dozen lines of python. That simple script posts a tweet to your account, but I wanted to post from a pre-made list, and so figured out how to read from a yaml file, and then used GitHub actions to run the script on a regular schedule.
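A rough sketch of that idea might look like the following. This is not my exact script; the environment variable names and the tweets.yml file are placeholders, and it is written against the tweepy 3.x-style API.

import os
import yaml
import tweepy

# Placeholders: supply real credentials via environment variables or GitHub Actions secrets.
auth = tweepy.OAuthHandler(os.environ["API_KEY"], os.environ["API_SECRET"])
auth.set_access_token(os.environ["ACCESS_TOKEN"], os.environ["ACCESS_SECRET"])
api = tweepy.API(auth)

# tweets.yml is assumed to hold a simple list of tweet strings.
with open("tweets.yml") as f:
    tweets = yaml.safe_load(f) or []

if tweets:
    api.update_status(tweets.pop(0))   # post the first queued tweet
    with open("tweets.yml", "w") as f:
        yaml.safe_dump(tweets, f)      # save the rest for the next scheduled run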

While that didn’t result in anything I’m ready to share here, quite yet, somewhere during that process I realized that I could write python. After playing around with Ruby, in ERB, to build the BlogCatcher, and running various python scripts that other people wrote, tinkering where necessary, eventually I had pieced together enough knowledge I could actually write my own code!

Decentralized ID Weekly Twitter Collections

With that experience as a foundation I knew I was ready to come back to Twitter, begin trying to make more efficient use of its wealth of knowledge, and see about keeping up my accounts without losing too much hair.

I made a script that searches twitter for a variety of keywords related to decentralized identity, and writes the tweet text and some other attributes to a csv file. From there, I can sort through those tweets, save only the most relevant, and publish a few hundred tweets about decentralized identity to a weekly twitter collection, which makes our job a lot easier than going to hundreds of websites to find out what's happening. :D
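Stripped down, the core of such a script is just a keyword search plus a CSV writer. Here is a simplified sketch: the query and columns are illustrative, it assumes an authenticated tweepy API object like the one in the previous sketch, and newer tweepy versions rename api.search to api.search_tweets.

import csv
import tweepy

def collect_tweets(api, query='#SSI OR "decentralized identity"', path="tweets.csv", limit=500):
    # Search recent tweets for the query and write a few attributes to a CSV file.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "user", "text", "url"])
        for tweet in tweepy.Cursor(api.search, q=query, tweet_mode="extended").items(limit):
            url = f"https://twitter.com/{tweet.user.screen_name}/status/{tweet.id}"
            writer.writerow([tweet.created_at, tweet.user.screen_name,
                             tweet.full_text.replace("\n", " "), url])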

Soon, these will be regularly published to decentralized-id.com, which I found out is an approved method of re-publishing tweets, unlike the ad hoc method I was using before: sharing them to discord channels (which grabs metadata and displays the preview image / text), exporting their contents and re-publishing that.

I do intend to share my source for all that after I’ve gotten the kinks worked out, and set it running on an action.

Twitter collections I've made so far:

- Self Sovereign ID 101
- October Week 5 #SSI #DID
- November Week 1 #SSI #DID

Decentralized-ID.com

Now that we have a newsletter and are seeking patrons, it seemed appropriate to work on developing decentralized-id.com, cleaning up some of its hastily thrown together parts, and adding a bunch of content to better represent the decentralized identity space.

That said, I’ve not given up on Bitcoin, or Crypto. On the contrary, I’m sure this work is only expanding my future capacity to continue working to fulfil the original vision of those resources.

Web-Work.Tools

With the information and skills I've gathered over the past year, web-work.tools was starting to look pretty out of date, relative to the growth of my understanding.

I updated GitHub Pages Starter Pack / Extended Resources quite a lot to reflect those learnings, and to separate the wheat from the chaff of the links I gathered when I was first figuring out how to use GitHub Pages.

Thanks for stopping by!



Here's Tom with the Weather

Sunday, 15. November 2020

Tim Bouma's Blog

Trust Frameworks? Standards Matter.

Photo by Tekton on Unsplash

Note: This post is the author’s opinion only and does not represent the opinion of the author’s employer, or any organizations with which the author is involved.

Over the past few years, and especially in the face of COVID-19, there has been a proliferation of activity around developing digital identity trust frameworks. Trust frameworks are being developed by the private sector and the public sector, as collaborative or sector-specific efforts. Trust mark and trust certification programs are also emerging alongside trust framework development efforts.

These trust framework development efforts are worthy undertakings and the results of these efforts should automatically engender trust. But the problem that we are now faced with, all good intentions aside, is — how do we truly trust a trust framework?

The answer is simple — with standards.

Trust frameworks need standards to be trusted.

Within the Canadian context, a standard is defined by the Standards Council of Canada, as:

“a document that provides a set of agreed-upon rules, guidelines or characteristics for activities or their results. Standards establish accepted practices, technical requirements, and terminologies for diverse fields.”

This definition might sound straightforward, and making a "standard" might sound easy, but the hard part is all the work leading up to agreeing on those things that are part of a standard: the agreed-upon rules, guidelines or characteristics for activities or their results.

That's where trust frameworks come into play. Much of the work that eventually ends up in a standard is years if not decades in the making. For years I have been part of developing the Public Sector Profile of the Pan-Canadian Trust Framework. This work started in earnest in early 2015, building on work that goes back as far as 2007 (you can find a lot of the historical material in the docs folder of the PCTF repository on GitHub).

What has come out of all of this work is a trust framework — a set of agreed-upon principles, definitions, standards, specifications, conformance criteria, and an assessment approach.

This definition of a trust framework sounds pretty much like a standard, doesn't it? Yes and no. What the trust framework has not gone through is a standards development process that respects and safeguards the interests of all stakeholders affected by the standard. Within the Canadian context, that's where the Standards Council of Canada comes into play, by specifying how standards should be developed and how to accredit certain bodies to be standards development organizations.

So trust frameworks, however good and complete they are, still need to go through the step of becoming an official standard. Fortunately, this is the case in Canada, where the Public Sector Profile of the Pan-Canadian Trust Framework was used to develop CAN/CIOSC 103–1:2020 Digital trust and Identity — Part 1: Fundamentals. This standard was developed by the CIO Strategy Council, a standards development organization accredited by the Standards Council of Canada.

In closing, there are lots of trust frameworks being developed today. But to be truly trusted, a trust framework needs to either apply existing standards or become a standard itself. In Canada, we have been extremely fortunate to see the good work that we have done in the public sector to be transformed into a national standard that serves the interests of all Canadians.

Saturday, 14. November 2020

FACILELOGIN

The Role of CIAM in Digital Transformation


Companies and organizations have strategic decisions to make on the Customer Identity & Access Management (CIAM) front. First, they have to decide whether to invest in a dedicated CIAM solution or to build on existing infrastructure. If there is already a foundation, what should their next steps be to have a mature CIAM strategy in place? If they do not have a CIAM solution, where do they start? Applications, systems, and identities tend to be siloed; as a business grows, it's imperative that they are cohesive and well integrated in order to provide a superior customer experience.

An effective CIAM solution will help connect various applications and systems such as CRM, data management, analytics and marketing platforms. This helps to move towards a 360-degree view of the customer which is a key prerequisite for successful digital transformation.

In the following webinar recording, I join KuppingerCole Senior Analyst and Lead Advisor Matthias Reinwarth to explain how CIAM helps to achieve digital transformation, best practices in CIAM, and pitfalls to avoid. We also talk about the five pillars of CIAM essential for your CIAM strategy, and maturity models to determine your stage of growth.

The Role of CIAM in Digital Transformation was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 13. November 2020

MyDigitalFootprint

perspective and judgment - changing our path





René Magritte's painting shows an image of a pipe. Below it, Magritte painted "Ceci n'est pas une pipe", French for "This is not a pipe".
"How people reproached me for it! And yet, could you stuff my pipe? No, it's just a representation, is it not? So if I had written on my picture "This is a pipe", I'd have been lying!"  René Magritte

How often, when we observe something, do we assume we are right in what we see and assume? It is a pipe, but it is not, as it is a picture of a pipe. On our path to experience it becomes harder to hold paradoxes of thinking.

What data-led story have we heard today where we believe it is the story, but in fact it is just a representation of the story? Indeed, can data ever be more than a representation?

Tim Bouma's Blog

Self-Sovereign Identity: Interview with Tim Bouma


An interview by SSI_Ambassador a Twitter account with educational content about self-sovereign identity with a focus on the European Union. The SSI_Ambassador account is managed by Adrian Doerk and the interview was conducted as part of Adrian’s Bachelor’s thesis. I have asked Adrian’s permission to post this material and he has graciously granted me permission. The post is a lightly edited version of the interview transcript. The interview took place in September 2020.

Note: All views and opinions expressed are mine only and do not represent that of my employer or organizations with whom I am involved.

Photo by Lili Popper on Unsplash

“The growth factors of Self-Sovereign Identity Solutions in Europe”

Adrian Doerk: My research question is concerned about the growth factors of self-sovereign identity solutions in Europe. You as somebody who is very familiar with the topic of SSI, what would you think about, when you read the term growth factor of self-sovereign identity, what comes to your mind?

Tim Bouma: I believe the main growth factor is going to be adoption by users, and it has to be really easy. Another growth factor is that SSI will need to be part of an infrastructure. I'm not sure if SSI is viable being marketed as a separate product, because I don't think end-users really understand it. The growth factor is going to be similar to plumbing — some additional standardized capabilities that we need to build. It will be as exciting as buying a 1/4 inch washer and bolt. It will just be part of the infrastructure, and the demand will be from higher-order products, not for SSI itself. I'd say most people won't even know what it is, nor should they know about it. It's not that different from the markets in the early days of PC networking. Remember, you had your choice of drivers and different companies providing those things, and after a while it just gets baked into the operating system and people don't even know that they're using it. As for it being a discrete market, I see it very quickly being subsumed by higher-order products: subsumed into mobile operating systems, desktop devices, tablets, etc. It's not that different from how a lot of other products or technologies evolved over time.

Adrian Doerk: We as SSI for German consortia we want to build infrastructure for Europe, so you might have read our press release. Probably not — no worries. So basically, our idea is to come up with a base layer infrastructure which is used as a public utility as defined in the Trust over IP stack level one with a European scope in terms of the governance and a worldwide usage. So considering this plan as public private partnership. What would be your recommendations for the governance for this network?

Tim Bouma: Well, you are totally aligned with my thinking. In fact, we’re about to announce a challenge. There’s a couple of things going on within the government, Canada. We’re launching a technology challenge (note: since this interview the challenge has been launched.) to figure out exactly what layer one would be for the digital infrastructure with the standards, and also what specifically is the scope of layer one and I can point you to that link afterwards, but that’s what I’ve been working on. We were just awarding the contracts as we speak. We’re getting six vendors to help us out. I think to answer your questions, I have some good ideas, but I’m not 100% sure because it is relatively new area and I think we need to be quite open on having our assumptions challenged and change during the course, but I see a very clear differentiation between the technical interoperability and the business interoperability, and in fact the challenge that I’m doing We’ve got six different use cases ranging from government security clearances to issuing of cannabis licenses to name a few. I’m not concerned about the content of the credential because that’s more business interoperability. I’m concerned that whatever credential, SSI credential or whatever is being issued into the system can actually be verified from the system irrespective of what’s inside. I hope I’m not losing track your question here. I see a very clear division of the private sector operating that system. I don’t see why government needs to build it and operate it. We don’t do that for networks, we don’t do that for payment rails. It has to be done in a way that governments have optionality that if a new operator comes along that’s more trustworthy or has different characteristics, there’s no reason why they can’t be used. There’s a risk. Maybe it’s not a risk for this to turn into a natural monopoly if we aren’t careful to make sure that we don’t have the standards 100% right? We have to be very, very careful that we want to have a plurality of operators. But that doesn’t mean a whole lot of them. I see that there were probably only for national infrastructure that maybe one or two domestic operators. And then probably, you know there’s going to be some international operators, but they need to work together so that’s a choice.

Adrian Doerk: Who exactly do you mean with operators? Do you mean the Stewards?

Tim Bouma: OK, so there’s two different things. There’s a steward, the governance which and again this is going to be a tricky and I’ve noticed that the Trust over IP Foundation revised their model that you could have governance at each of the layers. And so the question is governance at which layer and then what’s the composition of that governance? I would see at layer one. It’s largely a technical issue. It could be just part predominantly private sector players, maybe some government or nonprofit, but I just don’t know yet. I think where a government really will play is not in the infrastructure itself, but how that infrastructure is used and relied on for doing administration of programs. Provision of services. You know it could be passports. It can be currency. It could be educational credentials or whatever. I think government needs to be concerned at that level, but less so at the lower levels. But having confidence in those lower levels.

Adrian Doerk: When we speak about adoption, one of the big topics is use cases in general. We think that more or less the low hanging fruit, which is really easy to implement, is where you have the issuer also as a relying party. For example a University, which issued a student ID and then checks it again to issue him some other credentials. What would you think would be good for the start for different use cases? Let me reframe the question shortly. What are your recommendations for use cases to start with? What is the best one?

Tim Bouma: We had six vendors propose to us and they came up with six different use cases, and they’re quite varied, and I don’t think I can say which one is going to take off by adoption or not, but there is a government security clearances, there’s a cannabis licensing, there’s one for having your digital birth certificate, there’s one for a job site permit, it came from oil and gas. I’m not so sure which one is going to play out. I think what’s more important is really having a crystal clear understanding of what’s the digital infrastructure that can serve all of those use cases. That’s where my thinking is. What’s the absolute minimum that needs to be built? That could be an infrastructure so I think any one of these use cases can take off, but I think that model of issuer Holder verifier and we’ve generalized it to methods. It doesn’t have to be a blockchain. It could be a database. It could be different ways of doing it. There’s a super pattern there that will just serve all the use cases and this is where I’ve been putting a lot of intellectual effort just on my own time just to understand what the parallels are to digital currency and digital identity. It all boils down to kind of the similar idea is that I need to independently verify something. And I need to do it in a way that’s as flexible as possible, and then I need to have some additional functions. Digital currency. You need a transfer capability for digital identity or digital verification. I don’t think you need that. What are the absolute minimal requirements for this digital infrastructure? And it’s kind of like standardizing on paper and ink for doing contracts. You know you need paper and you need ink. What should we all standardize on? 8 1/2 by 11 or 8, four and a special type of ink that you need to use or just ink. Can’t be pencil or graphite or crayon and that’s good enough to move on to all the other very use cases. I don’t know what use case is going to take off. I think the important thing for us to do is do the critical thinking to figure out what are the common patterns on underneath there that are going to apply in all of those use cases. And as I said my working hypothesis now is that the issuer, holder, verifier with some ornamentation will do the job.

Adrian Doerk: Considering you your knowledge with the pan Canadian trust framework. You, as a policymaker, what will be your recommendation for policymakers in the European Union which work for example at the European self-sovereign identity framework?

Tim Bouma: It’s interesting. ’cause I actually had a call on this very same issue. I think policymakers actually have to go back to the drawing board and take a look at all the concepts and see if they have the right concepts to actually build out a framework and regulation, and that’s what we’ve been doing with Pan Canadian Trust Framework. We’ve recognized that what we tried to do is ingest all the latest concepts, such as issuer, holder, verifier credentials and express them in a way that does not limit them by assumption, like you don’t assume the credential is a document for example, or physical document. Or it’s just manifested only as a physical document. A credentials is a claim that can be independently verifiable and coming up with those concepts. So when you’re actually building up the frameworks and regulations you have a robust and a framework that doesn’t constrain you to a particular technological approach. There may be new technologies that come along that you didn’t even anticipate, but if you’ve done your critical thinking up front, there should be no reason why you can’t adopt that, so I think we’re just at this interesting point right now. I think we have an opportunity to go back to the drawing board. And this is just not an issue of just updating like eIDAS or other regulations and just tweaking a bit. It’s like going back to the drawing board and just say do we have the right policy constructs, which then could become regulatory requirements or legislative requirements. I think that we’re building a next generation of solutions here, and I think it’s really important that that we have the right constructs going forward, and I think we do have good confidence because I’ve looked at my evolution of thinking. You know I really started to get deep in the space in 2016 and really spend a lot of time internalizing the concepts. And it’s just a lot of iterations, but I feel like we’re in a good spot now to actually have a conversation of what these frameworks and regulations might be. It’s not just taking a paper analogue and saying, You know, just let’s do a digital equivalent of that, or a document analogue. We have to think about it differently.

Adrian Doerk: Then I would like to come to my last question. What do you think will be the negative sites or the danger sites of SSI?

Tim Bouma: Aside from all the hype and blue-sky stuff that has no merit. You see this often with any type of new technology, for example that SSI will solve hunger. It will solve society’s problems. First of all, just making sure it doesn’t get implicated in outrageous claims and that it has nothing that those are deeper problems to solve. So I think, as Gartner calls it, there’s the hype cycle. Of course, when you have the hype cycle, you get the what I call the allergic reaction that people will say, “We’re not going to use it because, you know, it’s got a bad name.” The other thing that we need to be concerned with or cognizant of is that we could build some capabilities that are outside of the states control. And I don’t know how that would manifest itself. All right, the great example is the Bitcoin Blockchain. It basically is a system that just runs on its own and no one can stop it because the way it’s structured, there’s no Corporation or operator that you can actually like take down and the algorithms, proof of work, and that it’s all open and permissionless. People are valuing like whatever is associated with their Bitcoin address because they value it. And there’s basically no way that a state or large actor can actually control that. And also not really bad thing. You know the way I’ve been describing it is that in the Bitcoin context from the economic context, we may have a new macroeconomic factor coming on the horizon that we need to work into our models around a proof of work turning energy into a digital assets and how that plays out, don’t know. So I, I think some of the downsides might be is. There may be some key capabilities that could be built. That could be viewed as illegal or unlawful in certain contexts, and so they they ban it outright. So I think we have to be very careful with this new technology to make sure that we bring the stakeholders along so we can embrace the positive side of the technology. Every technology is a two-edged sword, gunpowder, guns, you know anything? There’s an upside and there’s the downside, right? And I think that’s something that we have to be very cognizant of just like you know. In the mid 90s you had the crypto wars with the clipper chip. You can only have expert with certain key strengths and that caused a reaction and so we have to be careful that we don’t get caught into those same traps of us against the government or government against them. I think we have to figure out how to work this out together.

Tuesday, 10. November 2020

Identity Woman

Self-Sovereign Identity Critique, Critique /7


This is post 7 of 8 addressing the accusation by Philip Sheldrake that SSI is dystopian. We have now gotten to the Buckminster Fuller section of the document. I <3 Bucky. He was an amazing visionary and, like Douglas Engelbart, who I had the good fortune to meet and have lunch with, dedicated his life to […]

The post Self-Sovereign Identity Critique, Critique /7 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /6


So Philip here is where you go off the rails to make the assertion that we working on SSI are trying to ‘encompass all of what it means to be human and have an identity’ with our technologies. it’s time to explore a possible categorization of all things ‘identity’ that will help throw some more […]

The post Self-Sovereign Identity Critique, Critique /6 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /5


This is part 5 of 8 posts critiquing Philip's assertion that all of SSI is a Dystopian effort when it's really the work of a community of practical idealists who really want to build real things in the real world and do the right thing. This volume focuses on this quote that draws on Lawrence […]

The post Self-Sovereign Identity Critique, Critique /5 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /4


Philip's essay has so many flaws that I have had to continue to pull it apart in a series. Below is a quote from Philip's critique and I am so confused – What are you talking about? Who has built this system with SSI that you speak of? It just doesn't exist yet. AND […]

The post Self-Sovereign Identity Critique, Critique /4 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /3


I will continue to lay into Philip for making broad, sweeping generalizations that are simply not true and create misinformation in our space. He goes on in his piece to say this: When the SSI community refers to an ‘identity layer’, its subject is actually a set of algorithms and services designed to […]

The post Self-Sovereign Identity Critique, Critique /3 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /2


At one point in my career I would have been considered “non-technical”. This however is no longer the case. I don’t write code and I don’t as yet write specs. I do understand this technology as deeply as anyone who isn’t writing the code can. I co-chair a technical working group developing standards for […]

The post Self-Sovereign Identity Critique, Critique /2 appeared first on Identity Woman.


MyDigitalFootprint

The changing nature of business


I wanted to compare the corporate and large company business models that I have seen adapt and morph over the past thirty years. I opted to use the traditional triangle, not to show organisation structure but to represent the number of people in roles.

The 1990 model

For me this is the traditional business concept: the one taught on my MBA, the one I helped grow, and the one of my first startup. There were roles and structures, and the more senior the role, the higher the expected level of understanding, experience and insight. Yes, the Peter Principle was rife. The exec team could manage with a dashboard, as the processes and foundations of the business were linear. Any complexity was the fun of senior management, allowing the sponge of middle management to both slow down change and maintain stability.

The Naughties “digital”

The internet pioneers were off creating new and exciting models, leaving the corporates to adapt to the new. Finding and retaining digital expertise became the mantra for the CEOs; the change programmes absorbed middle management who, never having had these skills of transformation, focused on keeping their jobs and being paid through the delivery of the plan and KPIs at any cost. The senior leadership team took over the role of trying to manage with dashboards to simplify the complexity of everything changing (model, competition, skills, margin, value, customer, channels, processes). Many exec teams continued based on path dependency as they did not have the skills or experience to grasp the changes. Leaders who learnt fast and had the skills and experience thrived.

It is worth noting that the emerging new growth businesses are now competing both with others like themselves and with those still stuck in, or with, a 1990 model.

Right Now 

Pioneers of the brave new models are winning in a data and platform economy. The market now has players who are still operating with a 1990 model, underpinned by simple linear processes, the middle sponge and executives leading with a dashboard, who are fighting with those who went through one transformation to digital but did not realise it was going to be a multi-stage evolution.

Data and platform businesses are building data and data expertise; the analysis is managed into dashboards that absorb operations and delivery, continually looking for improvement and efficiency, more personalisation and better automation. The senior leadership team are now deep into managing complexity, ethics, automated decisions and new risks. The senior team are deeply dependent on an ecosystem for both data and activities as they collectively deliver to customers. Pioneers are heading these businesses with vision, passion and principles; it is not about management and reporting but about crafting a direction of travel.

It is easy with hindsight to observe the differences between a corporate still tracking on a 1990 model and one that has data and platform as a foundation.

I somehow think our worst nightmare has come true with the emergence of a combination of part-transformed organisations!





Wednesday, 11. November 2020

Phil Windley's Technometria

DIDComm and the Self-Sovereign Internet


Summary: DIDComm is the messaging protocol that provides utility for DID-based relationships. DIDComm is more than just a way to exchange credentials, it's a protocol layer capable of supporting specialized application protocols for specific workflows. Because of its general nature and inherent support for self-sovereign relationships, DIDComm provides a basis for a self-sovereign internet much more private, enabling, and flexible than the one we've built using Web 2.0 technologies.

DID-based relationships are the foundation of self-sovereign identity (SSI). The exchange of DIDs to form a connection with another party gives both parties a relationship that is self-certifying and mutually authenticated. Further, the connection forms a secure messaging channel called DID Communication or DIDComm. DIDComm messaging is more important than most understand, providing a secure, interoperable, and flexible general messaging overlay for the entire internet.

Most people familiar with SSI equate DIDComm with verifiable credential exchange, but it's much more than that. Credential exchange is just one of an infinite variety of protocols that can ride on top of the general messaging protocol that DIDComm provides. Comparing DIDComm to the venerable TCP/IP protocol suite does not go too far. Just as numerous application protocols ride on top of TCP/IP, so too can various application protocols take advantage of DIDComm's secure messaging overlay network. The result is more than a secure messaging overlay for the internet, it is the foundation for a self-sovereign internet with all that that implies.

DID Communications Protocol

DIDComm messages are exchanged between software agents that act on behalf of the people or organizations that control them. I often use the term "wallet" to denote both the wallet and agent, but in this post we should distinguish between them. Agents are rule-executing software systems that exchange DIDComm messages. Wallets store DIDs, credentials, personally identifying information, cryptographic keys, and much more. Agents use wallets for some of the things they do, but not all agents need a wallet to function.

For example, imagine Alice and Bob wish to play a game of TicTacToe using game software that employs DIDComm. Alice's agent and Bob's agent will exchange a series of messages. Alice and Bob may be using a game UI and be unaware of the details but the agents are preparing plaintext JSON messages1 for each move using a TicTacToe protocol that describes the format and appropriateness of a given message based on the current state of the game.

Alice and Bob play TicTacToe over DIDComm messaging

When Alice places an X in a square in the game interface, her agent looks up Bob's DID Document. She received this when she and Bob exchanged DIDs and it's kept up to date by Bob whenever he rotates the keys underlying the DID2. Alice's agent gets two key pieces of information from the DID Document: the endpoint where messages can be sent to Bob and the public key Bob's agent is using for the Alice:Bob relationship.

Alice's agent uses Bob's public key to encrypt the JSON message to ensure only Bob's agent can read it and adds authentication using the private key Alice uses in the Alice:Bob relationship. Alice's agent arranges to deliver the message to Bob's agent through whatever means are necessary given his choice of endpoint. DIDComm messages are often routed through other agents under Alice and Bob's control.

Once Bob's agent receives the message, it authenticates that it came from Alice and decrypts it. For a game of TicTacToe, it would ensure the message complies with the TicTacToe protocol given the current state of play. If it complies, the agent would present Alice's move to Bob through the game UI and await his response so that the process could continue. But different protocols could behave differently. For example, not all protocols need to take turns like the TicTacToe protocol does.
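To make the flow above concrete, here is a rough, non-normative sketch of what Alice's agent might do for a single move. The message fields, the protocol type URI, and the helper functions (resolve_did_document, authcrypt, deliver) are hypothetical stand-ins for a real agent's resolver, encryption, and transport layers, not the actual DIDComm or TicTacToe wire format.

    import json
    import uuid

    def resolve_did_document(did: str) -> dict:
        """Stand-in for a real DID resolver: returns the endpoint and current public key."""
        raise NotImplementedError("sketch only")

    def authcrypt(plaintext: bytes, recipient_public_key: str, sender_private_key: str) -> bytes:
        """Stand-in for DIDComm's encrypt-and-authenticate ("authcrypt") packing."""
        raise NotImplementedError("sketch only")

    def deliver(endpoint: str, packed: bytes) -> None:
        """Stand-in for transport-agnostic delivery (HTTP, WebSockets, Bluetooth, ...)."""
        raise NotImplementedError("sketch only")

    def send_move(bob_did: str, alice_private_key: str) -> None:
        # Hypothetical move message; a real protocol defines its own @type URI and fields.
        move = {
            "@type": "https://example.org/tictactoe/1.0/move",
            "@id": str(uuid.uuid4()),
            "me": "X",
            "moves": ["X:B2"],
        }
        bob_doc = resolve_did_document(bob_did)  # endpoint + public key for this relationship
        packed = authcrypt(
            json.dumps(move).encode(),   # plaintext JSON message for the move
            bob_doc["public_key"],       # so only Bob's agent can read it
            alice_private_key,           # and it is authenticated as coming from Alice
        )
        deliver(bob_doc["service_endpoint"], packed)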

DIDComm Properties

The DIDComm Protocol is designed to be

- Secure
- Private
- Interoperable
- Transport-agnostic
- Extensible

Security and privacy follow from the protocol's support for heterarchical (peer-to-peer) connections and decentralized design, along with its use of end-to-end encryption.

As an interoperable protocol, DIDComm is not dependent on a specific operating system, programming language, vendor, network, hardware platform, or ledger3. While DIDComm was originally developed within the Hyperledger Aries project, it aims to be the common language of any secure, private, self-sovereign interaction on, or off, the internet.

In addition to being interoperable, DIDComm should be able to make use of any transport mechanism including HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, NFC, Signal, email, push notifications to mobile devices, Ham radio, multicast, snail mail, and more.

DIDComm is an asynchronous, simplex messaging protocol that is designed for extensibility by allowing for protocols to be run on top of it. By using asynchronous, simplex messaging as the lowest common denominator, almost any other interaction pattern can be built on top of DIDComm. Application-layer protocols running on top of DIDComm allow extensibility in a way that also supports interoperability.

DIDComm Protocols

Protocols describe the rules for a set of interactions, specifying the kinds of interactions that can happen without being overly prescriptive about their nature or content. Protocols formalize workflows for specific interactions like ordering food at a restaurant, playing a game, or applying for college. DIDComm and its application protocols are one of the cornerstones of the SSI metasystem, giving rise to a protocological culture within the metasystem that is open, agentic, inclusive, flexible, modular, and universal.

While we have come to think of SSI agents as being strictly about exchanging peer DIDs to create a connection, request and issue a credential, or prove things using credentials, these are merely specific protocols defined to run over the DIDComm messaging protocol. Many others are possible. The following specifications describe the protocols for these three core applications of DIDComm:

- Connecting with others
- Requesting and issuing credentials
- Proving things using credentials

There's a protocol for agents to discover the protocols that another agent supports. And another for one agent to make an introduction4 of one agent to another. The TicTacToe game Alice and Bob played above is enabled by a protocol for TicTacToe. Bruce Conrad, who works on picos with me, implemented the TicTacToe protocol for picos, which are DIDComm-enabled.

Daniel Hardman has provided a comprehensive tutorial on defining protocols on DIDComm. We can imagine a host of DIDComm protocols for all kinds of specialized interactions that people might want to undertake online including the following:

- Delegating
- Commenting
- Notifying
- Buying and selling
- Negotiating
- Enacting and enforcing contracts
- Putting things in escrow (and taking them out again)
- Transferring ownership
- Scheduling
- Auditing
- Reporting errors

As you can see from this partial list, DIDComm is not just a secure, private way to connect and exchange credentials. Rather DIDComm is a foundation protocol that provides a secure and private overlay to the internet for carrying out almost any online workflow. Consequently, agents are more than the name "wallet" would imply, although that's a convenient shorthand for the common uses of DIDComm today.
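To give a feel for how small such a protocol can be, here is a minimal sketch of an imagined two-message delegation protocol from an agent's point of view: a pair of message types plus a state check before a message is accepted, much as the TicTacToe protocol checks whether a move is legal. The type URIs, fields, and state machine are invented for illustration and are not taken from any published DIDComm protocol.

    # Hypothetical message types for an imagined "delegate" protocol riding on DIDComm.
    DELEGATE_OFFER = "https://example.org/delegate/1.0/offer"
    DELEGATE_ACCEPT = "https://example.org/delegate/1.0/accept"

    # Which message types are acceptable in each protocol state (a tiny state machine).
    ALLOWED = {
        "start": {DELEGATE_OFFER},
        "offered": {DELEGATE_ACCEPT},
        "done": set(),
    }
    NEXT_STATE = {"start": "offered", "offered": "done"}

    def handle(state: str, message: dict) -> str:
        """Accept a message only if the protocol allows it in the current state."""
        msg_type = message.get("@type")
        if msg_type not in ALLOWED[state]:
            raise ValueError(f"{msg_type} not allowed in state {state!r}")
        return NEXT_STATE[state]

    # Happy path: an offer of a delegated permission, followed by an acceptance.
    state = handle("start", {"@type": DELEGATE_OFFER, "@id": "1", "permission": "open-garage-door"})
    state = handle(state, {"@type": DELEGATE_ACCEPT, "@id": "2", "thread": "1"})
    assert state == "done"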

A Self-Sovereign Internet

Because of the self-sovereign nature of agents and the flexible, interoperable characteristics they gain from DIDComm, they form the basis for a new, more empowering internet. While self-sovereign identity is the current focus of DIDComm, its capabilities exceed what many think of as "identity." When you combine the vast landscape of potential verifiable credentials with DIDComm's ability to create custom message-based workflows to support very specific interactions, it's easy to imagine that the DIDComm protocol and the heterarchical network of agents it enables will have an impact as large as the web, perhaps the internet itself.

Notes

1. DIDComm messages do not strictly have to be formatted as JSON.
2. Alice's agent can verify that it has the right DID Document and the most recent key by requesting a copy of Bob's key event log (called deltas for peer DIDs) and validating it. This is the basis for saying peer DIDs are self-certifying.
3. I'm using "ledger" as a generic term for any algorithmically controlled distributed consensus-based datastore including public blockchains, private blockchains, distributed file systems, and others.
4. Fuse with Two Owners shows an introduction protocol for picos I used in building Fuse in 2014, before picos used DIDComm. I'd like to revisit this as a way of building introductions into picos using a DIDComm protocol.

Photo Credit: Mesh from Adam R (Pixabay)

Tags: decentralized+identifiers didcomm protocol ssi identity credentials self-sovereign vrm me2b

Tuesday, 10. November 2020

Identity Woman

Self-Sovereign Identity Critique, Critique /8

Now we are in the Meg Wheatley section of the article. I’ve been reading Meg’s books since I read Leadership and the New Science 25 years ago. Relationships are the pathways for organizing, required for the creation and transformation of information, the expansion of the organizational identity, and accumulation of wisdom. Relationships are formed with […]

The post Self-Sovereign Identity Critique, Critique /8 appeared first on Identity Woman.

Monday, 09. November 2020

Phil Windley's Technometria

Operationalizing Digital Relationships

Summary: An SSI wallet provides a place for people to stand in the digital realm. Using the wallet, people can operationalize their digital relationships as peers with others online. The result is better, more authentic, digital relationships, more flexible online interactions, and the preservation of human freedom, privacy, and dignity.

Recently, I've been making the case for self-sovereign identity and why it is the correct architecture for online identity systems.

In Relationships and Identity, I discuss why identity systems are really built to manage relationships rather than identities. I also show how the architecture of the identity system affects the three important properties relationships must have to function effectively: integrity, lifespan, and utility. In The Architecture of Identity Systems I introduced a classification scheme that broadly categorized identity systems into one of three broad architectures: administrative, algorithmic, or autonomic. In Authentic Digital Relationships, I discuss how the architecture of an identity system affects the authenticity of the relationships it manages.

This post focuses on how people can operationalize the relationships they are party to and become full-fledged participants online. Last week, Doc Searls posted What SSI Needs where he recalls how the graphical web browser was the catalyst for making the web real. Free (in both the beer and freedom senses) and substitutable, browsers provided the spark that gave rise to the generative qualities that ignited an explosion of innovation. Doc goes on to posit that the SSI equivalent of the web browser is what we have called a "wallet" since it holds credentials, analogously to your physical wallet.

The SSI wallet Doc is discussing is the tool people use to operationalize their digital relationships. I created the following picture to help illustrate how the wallet fulfills that purpose.

Relationships and Interactions in Sovrin Network

This figure shows the relationships and interactions in SSI networks enabled by the Hyperledger Indy and Aries protocols. The most complete example of these protocols in production is the Sovrin Network.

In the figure, Alice has an SSI wallet1. She uses the wallet to manage her relationships with Bob and Carol as well as a host of organizations. Bob and Carol also have wallets. They have a relationship with each other and Carol has a relationship with Bravo Corp, just as Alice does2. These relationships are enabled by autonomic identifiers in the form of peer DIDs (blue arrows). The SSI wallet each participant uses provides a consistent user experience, like the browser did for the Web. People using wallets don't see the DIDs (identifiers) but rather the connections they have to other people, organizations, and things.

These autonomic relationships are self-certifying meaning they don't rely on any third party for their trust basis. They are also mutually authenticating: each of the parties in the relationship can authenticate the other. Further, these relationships create a secure communications channel using the DIDComm protocol. Because of the built-in mutual authentication, DIDComm messaging creates a batphone-like experience wherein each participant knows they are communicating with the right party without the need for further authentication. As a result, Alice has trustworthy communication channels with everyone with whom she has a peer DID relationship.

Alice, as mentioned, also has a relationship with various organizations. One of them, Attester Org, has issued a verifiable credential to Alice. They issued the credential (green arrows) using the Aries credential exchange protocol that runs on top of the DIDComm-based communication channel enabled by the peer DID relationship Alice has with Attester. The credential they issue is founded on the credential definition and public DID (an algorithmic identifier) that Attester Org wrote to the ledger.

When Alice later needs to prove something (e.g. her address) to Certiphi Corp, she presents the proof over the DIDComm protocol, again enabled by the peer DID relationship she has with Certiphi Corp. Certiphi is able to validate the fidelity of the credential by reading the credential definition from the ledger, retrieving Attester Org's public DID from the credential definition, and resolving it to get Attester Org's public key to check the credential's signature. At the same time, Certiphi can use cryptography to know that the credential is being presented by the person it was issued to and that it hasn't been revoked.
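A hedged sketch of Certiphi's verification steps as just described might look like the following; the ledger lookup, DID resolution, and signature-check helpers are placeholders for illustration, not the actual Indy/Aries APIs.

    def read_cred_def_from_ledger(cred_def_id: str) -> dict:
        """Stand-in: fetch the credential definition the issuer wrote to the ledger."""
        raise NotImplementedError("sketch only")

    def resolve_public_did(did: str) -> dict:
        """Stand-in: resolve the issuer's public DID to its current verification key."""
        raise NotImplementedError("sketch only")

    def signature_valid(presentation: dict, issuer_key: str) -> bool:
        """Stand-in for the cryptographic check of the credential signature."""
        raise NotImplementedError("sketch only")

    def verify_presentation(presentation: dict) -> bool:
        # 1. Read the credential definition referenced by the presentation.
        cred_def = read_cred_def_from_ledger(presentation["cred_def_id"])
        # 2. Retrieve the issuer's public DID from the definition and resolve it.
        issuer = resolve_public_did(cred_def["issuer_did"])
        # 3. Check the credential's signature against the issuer's public key.
        if not signature_valid(presentation, issuer["verification_key"]):
            return False
        # 4. A real verifier also checks that the presenter is the subject the
        #    credential was issued to and that it has not been revoked.
        return True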

This diagram has elements of each architectural style described in The Architecture of Identity Systems.

- Alice has relationships with five different entities: her friends Bob and Carol as well as three different organizations. These relationships are based on autonomic identifiers in the form of peer DIDs. All of the organizations use enterprise wallets to manage autonomic relationships4.
- As a credential issuer, Attester Org has an algorithmic identifier in the form of a public DID that has been recorded on the ledger. The use of algorithmic identifiers on a ledger3 allows public discovery of the credential definition and the public DID by Certiphi Corp when it validates the credential. The use of a ledger for this purpose is not optional unless we give up the loose coupling it provides. Loose coupling provides scalability, flexibility, and isolation. Isolation is critical to the privacy protections that verifiable credential exchange via Aries promises.
- Each company will keep track of attributes and other properties they need for the relationship to provide the needed utility. These are administrative systems since they are administered by the organization for their own purpose and their root of trust is a database managed by the organization. The difference between these administrative systems and those common in online identity today is that only the organization is depending on them. People have their own autonomic root of trust in the key event log that supports their wallet.

Alice's SSI wallet allows her to create, manage, and utilize secure, trustworthy communications channels with anyone online without reliance on any third party. Alice's wallet is also the place where specific, protocol-enabled interactions like credential exchange happen. The wallet is a flexible tool that Alice uses to manage her digital life.

We have plenty of online relationships today, but they are not operational because we are prevented from acting by their anemic natures. Our helplessness is the result of the power imbalance that is inherent in bureaucratic relationships. The solution to the anemic relationships created by administrative identity systems is to provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. When we dine at a restaurant or shop at a store in the physical world, we do not do so within some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. The SSI wallet is the platform upon which people can stand and become digitally embodied to operationalize their digital life as full-fledged participants in the digital realm.

Notes

1. Alice's SSI wallet is like other wallets she has on her phone with several important differences. First, it is enabled by open protocols and second, it is entirely under her control. I'm using the term "wallet" fairly loosely here to denote not only the wallet but also the agent necessary for the interactions in an SSI ecosystem. For purposes of this post, delineating them isn't important. In particular, Alice may not be aware of the agent, but she will know about her wallet and see it as the tool she uses.
2. Note that Bob doesn't have a relationship with any of the organizations shown. Each participant has the set of relationships they choose to have to meet their differing circumstances and needs.
3. I'm using "ledger" as a generic term for any algorithmically controlled distributed consensus-based datastore including public blockchains, private blockchains, distributed file systems, and others.
4. Enterprise wallets speak the same protocols as the wallets people use, but are adapted to the increased scale an enterprise would likely need and are designed to be integrated with the enterprise's other administrative systems.

Photo Credit: Red leather wallet on white paper from Pikrepo (CC0)

Tags: aries indy ssi autonomic algorithmic decentralized+identifiers credentials identity vrm me2b


Boris Mann's Blog

Went out for a bike ride along Arbutus Greenway. Beautiful sun and a stop at Beacoup Bakery on the way back, plus all the cyclists waiting for the train at Union.

Sunday, 08. November 2020

Boris Mann's Blog

I got sent this Azerbaijani Country Life Vlog video blog by @florence_ann and ended up watching the entire 45 minute video.

Added to @atbrecipes.


Doc Searls Weblog

We’re in the epilogue now

The show is over. Biden won. Trump lost. Sure, there is more to be said, details to argue. But the main story—Biden vs. Trump, the 2020 Presidential Election, is over. So is the Trump presidency, now in the lame duck stage. We’re in the epilogue now. There are many stories within and behind the story, […]


The show is over. Biden won. Trump lost.

Sure, there is more to be said, details to argue. But the main story—Biden vs. Trump, the 2020 Presidential Election, is over. So is the Trump presidency, now in the lame duck stage.

We’re in the epilogue now.

There are many stories within and behind the story, but this was the big one, and it had to end. Enough refs calling it made the ending official. President Trump will continue to fight, but the outcome won’t change. Biden will be the next president. The story of the Trump presidency will end with Biden’s inauguration.

The story of the Biden presidency began last night. Attempts by Trump to keep the story of his own presidency going will be written in the epilogue, heard in the coda, the outro, the postlude.

Fox News, which had been the Trump administration’s house organ, concluded the story when it declared Biden the winner and moved on to covering him as the next president.

This is how stories go.

This doesn’t mean that the story was right in every factual sense. Stories aren’t.

As a journalist who has covered much and has been covered as well, I can vouch for the inevitability of inaccuracy, of overlooked details, of patches, approximations, compressions, misquotes and summaries that are more true to story, arc, flow and narrative than to all the facts involved, or the truths that might be told.

Stories have loose ends, and big stories like this one have lots of them. But they are ends. And The End is here.

We are also at the beginning of something new that isn’t a story, and does not comport with the imperatives of journalism: of storytelling, of narrative, of characters with problems struggling toward resolutions.

What’s new is the ground on which all the figures in every story now stand. That ground is digital. Decades old at most, it will be with us for centuries or millennia. Arriving on digital ground is as profound a turn in the history of our species on Earth as the one our distant ancestors faced when they waddled out of the sea and grew lungs to replace their gills.

We live in the digital world now, in addition to the physical one where I am typing and you are reading, as embodied beings.

In this world we are not just bodies. We are something and somewhere else, in a place that isn’t a place: one without distance or gravity, where the only preposition that applies without stretch or irony is with. (Because the others—over, under, beside, around, through, within, upon, etc.—pertain too fully to positions and relationships in the physical world.)

Because the digital world is ground and not figure (here’s the difference), it is as hard for us to make full sense of being there as it was for the first fish to do the same with ocean or for our amphibian grandparents to make sense of land. (For some help with this, dig David Foster Wallace’s This is water.)

The challenge of understanding digital life will likely not figure in the story of Joe Biden’s presidency. But nothing is more important than the ground under everything. And this ground is the same as the one without which we would not have had an Obama or a Trump presidency. It will at least help to think about that.

 

Saturday, 07. November 2020

Cyberforge (Anil John)

Portable identity

CyberForge Journal 11/07/20

The current focus of the digital identity community is on interoperability. That is critical work, but only a starting point for the long-term goal of enabling true front-end and back-end portability.

Click here to continue reading. Or, better yet, subscribe via email and get the best hand-picked science and insights to build and enhance innovative cybersecurity products, services and companies.

Thursday, 05. November 2020

Nader Helmy

Highlights of #IIW31

Last month’s Internet Identity Workshop (IIW) was held entirely online for the second time in its history. This bi-annual unconference, typically hosted in Mountain View, CA at the Computer History Museum, connects a wide variety of people from across the globe focused on solving the hard problems around digital identity. As an unconference, the attendees set the agenda each day. The format is focused on open collaboration which creates a real and rare opportunity to organically discuss not only the latest technology developments, but also the surrounding social, political and legal implications.

This year, over 400 attendees from across the globe participated in IIW31. It was a great showcase of both the work and progress being achieved across a variety of open standards and open source communities.

Although there is too much to cover in one write up, we wanted to highlight MATTR’s pick of the three major themes:

- KERI comes into the spotlight
- Bridging Decentralized Identity with Federated Identity Systems
- Self-Sovereign Identity (SSI) maturing into multi-dimensional & relationship-oriented identity

KERI comes into the spotlight

One of the technologies that received a great deal of attention this year was Sam Smith’s Key Event Receipt Infrastructure (KERI). Sam first introduced this topic over a year ago, and it’s evident that a lot has happened since then to bring these ideas and concepts to life.

KERI attempts to address the self-certifying aspect of identifier-based systems such as DIDs. Central to this technology is the concept of Key Event Logs (KELs) which are maintained by each user in a distributed identity system. Key Event Logs contain digitally signed messages of all key-management-related transactions done by the user, and can not only be used to prove control over your identifier, but also be shared with other external ‘witnesses’ who can independently validate and prove that you control your identifier. In practice, this means that KERI identifiers and keys are not “ledger-locked” — they are fully portable and can be validated using any ledger, distributed database, or other verifiable data registry.
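As a loose illustration of the key event log idea (and not the actual KERI event format), each event can chain to the previous one by hash and pre-commit to the next key by publishing only its digest, so anyone replaying the log can confirm that every rotation was authorized by the previously committed key:

    import hashlib
    import json

    def digest(event: dict) -> str:
        """Hash an event so the next event can commit to it."""
        return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

    def key_digest(public_key: str) -> str:
        return hashlib.sha256(public_key.encode()).hexdigest()

    # Simplified, hypothetical log: an inception event followed by one rotation.
    # Real KERI events are also signed and receipted by witnesses; that is omitted here.
    inception = {
        "type": "inception",
        "signing_key": "pubkey-1",
        "next_key_digest": key_digest("pubkey-2"),  # pre-rotation commitment
    }
    rotation = {
        "type": "rotation",
        "prior_digest": digest(inception),          # chains this event to the previous one
        "signing_key": "pubkey-2",                  # must match the earlier commitment
        "next_key_digest": key_digest("pubkey-3"),
    }

    def verify_log(events: list) -> bool:
        """Replay the log: check the hash chain and the pre-rotation commitments."""
        for prev, curr in zip(events, events[1:]):
            if curr["prior_digest"] != digest(prev):
                return False
            if key_digest(curr["signing_key"]) != prev["next_key_digest"]:
                return False
        return True

    assert verify_log([inception, rotation])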

In a session called “KERI for Muggles”, Drummond Reed and Sam Smith identified 7 main characteristics that make KERI useful for identity:

1. Self-certifying identifiers
2. Self-certifying Key Event Logs
3. Witnesses for Key Event Logs
4. Pre-rotation as simple, safe, scalable protection against key compromise
5. System-independent validation
6. Delegated self-certifying identifiers enable enterprise-class key management
7. Compatibility with the GDPR “right to be forgotten”

There were a number of additional sessions that dived into advanced KERI topics, including how KERI can create interoperability between different DID methods and solving the operational concerns related to bringing these ideas to life. There are several ongoing open-source projects at the Decentralized Identity Foundation (DIF) which are driving this work forward. We look forward to the positive change this will bring to our ecosystem, particularly in reducing our technology’s strict dependence on specific infrastructure like distributed ledgers.

Bridging Decentralized Identity with Federated Identity Systems

Following on from IIW30, there was a continued theme around the bridging of W3C Verifiable Credentials (VCs) with existing identity protocols on the web today such as OpenID Connect (OIDC), which are mainly used in federated identity systems.

Evolution of existing protocols such as OIDC allows existing infrastructure providers to make adjustments to their implementations in order to start using new technology, rather than having to tear it all down and rebuild. The goal here is to build as much as we can upon previous work while addressing a number of significant shortcomings around federated identity as it exists today. We’ve covered this topic before, so needless to say, we’re committed to solving this problem and we were extremely supportive of the continued discussion around these issues at this IIW.

A key component of this approach is the ability to allow existing identity solutions to issue and verify Verifiable Credentials. One way to achieve this is to extend the OpenID Connect protocol to enable more portable digital identity. Instead of using OIDC as a bearer token between an Issuer and Relying party, we can create a client-bound assertion that’s held in a user’s digital wallet. To that end, we introduced an approach that we’re calling OpenID Connect Credential Provider, which turns traditional Identity Providers (IdPs) into credential providers. It modifies current authentication/issuance flows used by OpenID Connect to allow IdPs to issue assertions that are re-provable and reusable for authentication. This helps to decouple the issuance of identity-related information and the presentation of that information by a user, introducing the “wallet” layer between issuers and relying parties. There was great participation in this discussion from some of today’s existing Identity Providers, and one of the outcomes from this IIW is that the specification will be contributed to the OpenID Foundation (OIDF) as a work item, where it will be further developed by the community there.
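The shift described above, from a bearer assertion consumed once by a relying party to a holder-bound credential that a wallet can re-prove later, can be sketched roughly as below. The helper functions and data shapes are illustrative assumptions, not the OpenID Connect Credential Provider wire format.

    import uuid

    def sign(payload: dict, private_key: str) -> str:
        """Stand-in for a real signature (e.g. a JWS) by the issuer or the holder."""
        raise NotImplementedError("sketch only")

    def issue_credential(claims: dict, holder_public_key: str, issuer_private_key: str) -> str:
        # Instead of a bearer token, the issued assertion is bound to a key the
        # wallet controls, so only the holder can use it later.
        credential = {
            "id": str(uuid.uuid4()),
            "claims": claims,
            "holder_key": holder_public_key,  # binding to the wallet's key
        }
        return sign(credential, issuer_private_key)

    def present_credential(credential: str, nonce: str, holder_private_key: str) -> str:
        # At presentation time the wallet proves possession of the bound key by
        # signing the relying party's fresh nonce together with the credential,
        # so the same credential is re-provable and reusable for authentication.
        return sign({"credential": credential, "nonce": nonce}, holder_private_key)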

There was also a great deal of interest around the evolution of Self-Issued OpenID (SIOP). The SIOP chapter of the OpenID Connect specification was written in a way that clearly tried to leave room for some of the user-centric design concepts that are quite popular today, but given that it was first introduced in 2014, it lacked the context of today’s technologies such as DIDs and VCs. There’s a laundry list of topics to cover here, so kick starting this effort has been a matter of breaking down each component and coming up with potential solutions. We identified 3 main problems that SIOP aims to address:

- Enabling portable identities between providers
- Solving the NASCAR problem
- Dealing with different deployment types of OpenID providers

There are a number of insights we can gain by addressing each of these issues separately, and bringing them together to form more robust solutions. It was exciting to see that there are many people interested in revisiting SIOP and trying to address the underlying issues with federated identity. The work will continue to develop at the OpenID Foundation AB working group, so the conversation doesn’t stop at IIW.

In related news, there was some collaborative dialogue with Google’s WebID technology, which aims to explore how browsers can help solve the user privacy problem of cross-site cookie tracking on the web. Given that the IIW community is deeply focused on issues of security and user privacy, it made a lot of sense for those working on WebID to connect with people working on OpenID Connect, as well as the W3C community working on the Credential Handler API (CHAPI). The team developing Chromium discussed the possibility of adding a minimal set of browser APIs to work with CHAPI. Having something like CHAPI supported directly by the browser would allow for a more friendly user experience, allowing a credential holder to choose the provider of their identity (instead of being dictated by the relying party) and enabling better credential storage synchronization. The discussion here is really just a starting point, and we’re looking forward to seeing how WebID incorporates this approach and continues to work with the IIW community as the work develops.

SSI maturing into multi-dimensional & relationship-oriented identity

It’s been nearly 15 years since Kim Cameron wrote the 7 Laws of Identity, and over 4 years since Christopher Allen first laid out the 10 Principles of SSI. In the time since then, thinking has evolved a great deal around how to bring the power of SSI to users. This work has not only included building out the foundational technologies, but also addressing the practical tradeoffs between convenience, usability, and privacy.

As these technologies get deployed in the real world, what’s emerging is the fact that what we call ‘SSI’ (Self-Sovereign Identity) is more a set of guiding principles for how we would like digital identity systems of the future to behave or be built around, rather than a concrete checklist where you can say this technology is SSI. As it is, there are always new ideas about how to bring these principles to life in real world applications. This continued to be a hot topic at this IIW, including:

- Scaling peer communications
- Scaling blockchains
- Scaling privacy

Scaling peer communications

Besides all of the ongoing work on KERI, there were a number of updates surrounding technology enabling peer-to-peer communication. There were a few sessions dedicated to giving the community an update on the DIDComm protocol, showing that the specification is maturing to a point where it will be stable enough to broadly implement. In addition, Daniel Hardman of Evernym presented a thought provoking session on the intersection of privacy and discoverability on the web, which he calls “The Kobayashi Maru Problem of SSI”.

Scaling blockchains

The team at Consensys presented an interesting proposal for a new DID method based on ZK Rollups. In recent years, there has been a big push in the community to reconsider the use of ledgers and minimize what information is made public, especially when it comes to identifying individual people. While there has been a lot of interest in DID methods that don’t use any blockchain or ledger infrastructure, Consensys and many others continue to explore the use of ledgers due to their unique properties, such as the ability to offer global resolvability. Although it’s early stages for this work and there is no codebase yet, there was a lot of interest from those in the community looking to use blockchain-based DID methods. Their approach would give DIDs global resolvability with the potential to keep the DID “off ledger” and privacy preserving, making it ideal for persons using DIDs. As mentioned earlier, the work on KERI is also helping to illustrate how we can continue using decentralized ledger technologies in a way that preserves user sovereignty and data portability.

Scaling privacy

As a followup to last IIW, we provided an introduction to multi-message signatures and an overview of recent developments made around using BBS+ Signatures for selective disclosure in JSON-LD Verifiable Credentials. We first introduced this technology at IIW30 in April of this year. Since then, steady progress on open-source implementations and technical specs have resulted in a number of additional enhancements to the privacy preserving characteristics of zero knowledge proof based credentials, including the ability to have blinded subject credentials and domain bound proofs. In addition to progress on the technical specification, there have been significant developments on open-source implementations from a number of independent organizations. This progress has made the case for privacy-preserving credentials and selective disclosure stronger than ever.
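Conceptually, a multi-message signature scheme such as BBS+ signs each claim as a separate message, and the holder can later derive a proof that reveals only a chosen subset while the verifier still checks it against the issuer's public key. The sketch below captures only that shape; the function names are hypothetical placeholders, not the real BBS+ API or cryptography.

    def multi_message_sign(messages: list, issuer_private_key: str) -> bytes:
        """Stand-in: sign all claim messages at once with a multi-message signature."""
        raise NotImplementedError("sketch only")

    def derive_proof(signature: bytes, messages: list, reveal: list) -> bytes:
        """Stand-in: derive a proof that reveals only the messages at the given indices."""
        raise NotImplementedError("sketch only")

    # Issuer signs every claim in the credential as its own message.
    claims = ["name=Alice", "birth_year=1990", "city=Wellington"]
    signature = multi_message_sign(claims, "issuer-private-key")

    # Holder later discloses only the city, without revealing name or birth year;
    # the verifier checks the derived proof against the issuer's public key alone.
    proof = derive_proof(signature, claims, reveal=[2])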

Summary

SSI and decentralized digital identity are maturing at a steady pace, but we still have a lot of work ahead to solve the difficult problems around trust, privacy, and scalability that come with deploying systems in the real world. We’re excited to see progress on the critical issues move forward as the work happens across so many different communities and areas of interest.

As always, we’re incredibly grateful for the work that Kaliya Young, Phil Windley, and the team at IIW are doing to bring together different perspectives and to host important discussions which are open, respectful, and always illuminating. As for MATTR, you can usually find us on GitHub and working within organizations like the W3C, DIF, OIDF, and Hyperledger to push forward the vision for a more open and accessible internet. We will see you all at IIW32!

Highlights of #IIW31 was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Doc Searls Weblog

How the once mighty fall

For many decades, one of the landmark radio stations in Washington, DC was WMAL-AM (now re-branded WSPN), at 630 on (what in pre-digital times we called) the dial. As AM listening faded, so did WMAL, which moved its talk format to 105.9 FM in Woodbridge and its signal to a less ideal location, far out […]

For many decades, one of the landmark radio stations in Washington, DC was WMAL-AM (now re-branded WSPN), at 630 on (what in pre-digital times we called) the dial. As AM listening faded, so did WMAL, which moved its talk format to 105.9 FM in Woodbridge and its signal to a less ideal location, far out to the northwest of town.

They made the latter move because the 75 acres of land under the station’s four towers in Bethesda had become far more valuable than the signal. So, like many other station owners with valuable real estate under legacy transmitter sites, Cumulus Media sold the old site for $74 million. Nice haul.

I’ve written at some length about this here and here in 2015, and here in 2016. I’ve also covered the whole topic of radio and its decline here and elsewhere.

I only bring the whole mess up today because it’s a five-year story that ended this morning, when WMAL’s towers were demolished. The Washington Post wrote about it here, and provided the video from which I pulled the screen-grab above. Pedestrians.org also has a much more complete video on YouTube, here. WRC-TV, channel 4, has a chopper view (best I’ve seen yet) here. Spake the Post,

When the four orange and white steel towers first soared over Bethesda in 1941, they stood in a field surrounded by sparse suburbs emerging just north of where the Capital Beltway didn’t yet exist. Reaching 400 feet, they beamed the voices of WMAL 630 AM talk radio across the nation’s capital for 77 years.

As the area grew, the 75 acres of open land surrounding the towers became a de facto park for runners, dog owners and generations of teenagers who recall sneaking smokes and beer at “field parties.”

Shortly after 9 a.m. Wednesday, the towers came down in four quick controlled explosions to make way for a new subdivision of 309 homes, taking with them a remarkably large piece of privately owned — but publicly accessible — green space. The developer, Toll Brothers, said construction is scheduled to begin in 2021.

Local radio buffs say the Washington region will lose a piece of history. Residents say they’ll lose a public play space that close-in suburbs have too little of.

After seeing those towers fall, I posted this to a private discussion among broadcast engineers (a role I once played, briefly and inexpertly, many years ago):

It’s like watching a public execution.

I’m sure that’s how many of us who have spent our lives looking at and maintaining these things feel at a sight like this.

It doesn’t matter that the AM band is a century old, and that nearly all listening today is to other media. We know how these towers make waves that spread like ripples across the land and echo off invisible mirrors in the night sky. We know from experience how the inverse square law works, how nulls and lobes are formed, how oceans and prairie soils make small signals large and how rocky mountains and crappy soils are like mud to a strong signal’s wheels. We know how and why it is good to know these things, because we can see an invisible world where other people only hear songs, talk and noise.

We also know that, in time, all these towers are going away, or repurposed to hold up antennas sending and receiving radio frequencies better suited for carrying data.

We know that everything ends, and in that respect AM radio is no different than any other medium.

What matters isn’t whether it ends with a bang (such as here with WMAL’s classic towers) or with a whimper (as with so many other stations going dark or shrinking away in lesser facilities). It’s that there’s still some good work and fun in the time this old friend still has left.

Wednesday, 04. November 2020

FACILELOGIN

What’s new in OAuth 2.1?

The OAuth 2.1 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a…

Continue reading on FACILELOGIN »


Boris Mann's Blog

Yesterday’s lunch at Laksa King was a bowl of laksa, a bowl of mohinga (Fernando got a picture of it), and an order of roti canai

Tuesday, 03. November 2020

Jon Udell

Moonstone Beach Breakdown

Travel always has its ups and downs but I don’t think I’ve ever experienced both at the same time as intensely as right now.

I’m at Moonstone Beach in Cambria, just south of San Simeon, in a rented camper van. After a walk on the beach I hop in, reverse, clip a rock, blow a tire, and come to rest alongside the guard rail facing the ocean.

I call roadside assistance; they can deliver a tire but not until tomorrow morning.

I may be about to win the road trip breakdown lottery. I’m snuggled in my two-sleeping-bag nest on the air mattress in the back of the van, on a bluff about 25 feet above the beach, with the van’s big side door open, watching and hearing the tide roll in.

The worst and best parts of my trip are happening at the same time. I screwed up, am stuck, cannot go anywhere. But of all the places I could have been stuck on this trip, I’m stuck in the place I most want to be.

The sign says the gate closes at 6, but nobody has shown up by 7 when everyone else is gone. I can’t reach the authorities. This would be the campsite of my dreams if I’m allowed to stay.

The suspense is killing me.

Eventually a cop shows up, agrees that I can’t go anywhere, and gives me permission to stay for the night. I win the lottery! Nobody ever gets to stay here overnight. But here I am.

We’re all stuck in many ways for many reasons. A road trip during the final week before the election seemed like a way to silence the demons. Roaming around the state didn’t really help. But this night on the bluff over Moonstone Beach most certainly will.

In the light of the full moon, the crests of the waves are sometimes curls of silver, sometimes wraiths of foam that drift slowly south, continually morphing.

I don’t know how we’re all going to get through this winter. I don’t know what comes next. I don’t even have a plan for tomorrow. But I am so grateful to be here now.

Monday, 02. November 2020

Boris Mann's Blog

I made a chocolate cake to use up sourdough starter discard.

It’s not a pretty cake, and making an entire sheet cake to use up a little sourdough is perhaps overkill, but I’m happy with how it turned out.

Sunday, 01. November 2020

Just a Theory

Central Park Autumn

A couple photos of the gorgeous fall colors over The Pool in Central Park.

Autumn colors over The Pool © 2020 David E. Wheeler

It’s that most magical time of year in Central Park: Autumn. I spent a lot of time wandering around The Pool yesterday. Lots of folks were about, taking in the views, shooting photos. The spectacular foliage photographed best backlit by the sun. Here’s another one.

⧉ Hard to go wrong with these colors. © 2020 David E. Wheeler

Both shot with an iPhone 12 Pro.

More about… New York City Central Park The Pool Autumn Leaves

Saturday, 31. October 2020

Nicholas Rempel

Migrating This Site Away From Gatsby

Well it's happened again. I've done yet another rebuild of this website using Django and Wagtail. For my previous build, I used Gatsby to build a static site which I hosted on Netlify. The dynamic content of the site was stored in an instance of Ghost which was then pulled in at build-time.

Overall, I was quite disappointed with Gatsby. I found the framework to be quite over-engineered. At one point, I found myself trying to resize a photo or something and it required some (to me at least) very complex configuration and some fancy GraphQL query just to get the image to load. Another issue I have is with the offline plugin. Gatsby uses service workers to cache your entire site offline so it can then be served by a service worker in case your users have spotty internet. Or no internet. The problem comes with removing this service worker once you move away from the framework. I needed to do quite a bit of research to figure out how to delete the service worker once I moved over to the new site since Gatsby chooses to use caching very aggressively, which is not recommended.

I understand why Gatsby makes these choices. Their goal is to make websites load insanely fast, which I think they accomplish at a great cost in complexity. And they work hard to ensure that Gatsby sites score highly on the Google Lighthouse test. I have two issues with these goals. First, I'm not sure pursuing a high score on Lighthouse is a good goal to optimize for. Making a website load quickly is important, but Lighthouse will ding you for things like using a particular JavaScript library instead of their recommended smaller library. This is a game of cat-and-mouse which has little benefit beyond speeding up your website (which is important). Second, the amount of complexity that Gatsby introduces will likely slow down the development of your site and stifle your creativity. I can't comment on the experience of using Gatsby in the context of a larger team, but for me I know it led to making fewer improvements to the site because of the complexity.

As for Ghost as a headless CMS – I think there are some limitations there. Overall, Ghost is a great platform. It's reliable and usable and has a good writing experience. I do think that their push into the JAMStack fad was mostly a marketing play since some limitations were never addressed even after several years. I would definitely use Ghost again, just not as a headless CMS. I would use it how it was primarily designed to be used.

So why did I choose Django and Wagtail? Well, I've been using Django on and off for a long time. Probably 7 years at this point. The framework is reliable and seriously productive. Wagtail is built as a Django "app" so it's very familiar to me and it's very powerful. Overall, I'm content with the developer experience so far. I think the admin interface that is used to publish content could use some work. It's functional but frankly it's pretty ugly. Especially compared to something like Statamic which has been getting some attention lately.

It's been a while since I've run my blog with server-rendered pages with dynamic content as opposed to a static site published to a CDN. I think the fact that I can log into a dashboard and publish content is so much more ergonomic from a writing and publishing perspective that I'm more likely to write more. Compared to the process of building a static site and deploying an update, it's quite nice. The downside, obviously, is that I need to worry again about load on the site and downtime during traffic spikes. I'm hopeful that this is something that I can avoid by using caching and a CDN. The site now has virtually no javascript. Compared to Gatsby this is a big change. We'll see how it turns out but my theory is that I should be able to achieve similar speeds (good enough anyway) using some clever caching and a CDN. Browsers are pretty good at rendering plain HTML after all.
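One plausible shape for this kind of caching, assuming a stock Django view rather than anything specific to this site, is per-view caching combined with Cache-Control headers that a CDN can honor:

    # A minimal sketch, assuming a plain Django view; a Wagtail page would normally be
    # routed through Wagtail's own serving machinery, so this is illustrative only.
    from django.shortcuts import render
    from django.views.decorators.cache import cache_control, cache_page

    @cache_page(60 * 15)                          # keep the rendered HTML in Django's cache for 15 minutes
    @cache_control(public=True, max_age=60 * 15)  # let a CDN cache the response for the same window
    def blog_post(request, slug):
        return render(request, "blog/post.html", {"slug": slug})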

Friday, 30. October 2020

Margo Johnson

did:(customer)

Transmute’s evolving criteria for matching DID methods to business requirements. Photo by Louis Hansel @shotsoflouis on Unsplash

Transmute builds solutions that solve real business problems. For this reason, we support a number of different decentralized identifier (DID) methods. While we are committed to providing optionality to our customers, it’s equally important to communicate the selection criteria behind these options so that customers can consider the tradeoffs of underlying DID-methods alongside the problem set they’re solving for. Essentially, we help them pick the right tool for the job.

In the spirit of sharing and improving as an industry, here are the work-in-progress criteria we use to help customers assess what DID method is best for their use case:

Interoperability

This DID method meets the interoperability requirements of my business, for example:

- Other parties can verify my DID method.
- I can switch out this DID method in the future if my business needs change.

Security

This DID method meets the security requirements of my business, such as:

- Approved cryptography for jurisdiction/industry
- Ledger/anchoring preferences
- Key rotation/revocation

Privacy

This DID method meets privacy requirements relevant to my use case, for example:

- Identifiers of individuals (data privacy and consent priorities)
- Identifiers for companies (organization identity and legal protection priorities)
- Identifiers for things (scaling, linking, and selective sharing priorities)

Scalability

This DID method meets the scalability needs of my business use case, for example:

- Speed
- Cost
- Stability/maturity

Root(s) of Trust

This DID method appropriately leverages existing roots of trust that have value for my business or network (or it is truly decentralized). For example:

- Trusted domain
- Existing identifiers/identity systems
- Existing credentials

We are currently using and improving these criteria as we co-design and implement solutions with customers.

For example, our commercial importer customers care a lot about ensuring that their ecosystem can efficiently use the credentials they issue (interoperability) without disclosing sensitive trade information (privacy). Government entities emphasize interoperability and accepted cryptography. Use cases that include individual consumers focus more on data privacy regulation and control/consent. In some instances where other standardized identifiers already exist, DIDs may not make sense as primary identifiers at all.

Examples of DID methods Transmute helps customers choose from today include: Sidetree Element (did:elem, Ethereum anchoring), Sidetree Ion (did:ion, Bitcoin anchoring), Sidetree Photon (did:photon, Amazon QLDB anchoring), did:web (ties to trusted domains), did:key (testing and hardware-backed keys), and more.
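Purely as an illustration of how such criteria could be made actionable (and not a description of Transmute's internal tooling), a team might capture each candidate method as a simple scorecard and compare it against a use case's must-have criteria:

    from dataclasses import dataclass

    @dataclass
    class MethodProfile:
        """Illustrative scorecard for one candidate DID method against the criteria above."""
        name: str
        interoperability: bool  # other parties can verify it; we can switch later
        security: bool          # approved cryptography, anchoring, key rotation/revocation
        privacy: bool           # appropriate for people vs. companies vs. things
        scalability: bool       # speed, cost, stability/maturity
        roots_of_trust: bool    # leverages existing trust (domains, identifiers, credentials)

        def fits(self, must_haves: list) -> bool:
            return all(getattr(self, criterion) for criterion in must_haves)

    # Hypothetical example: a use case that cares most about interoperability and privacy.
    candidate = MethodProfile("did:example", True, True, False, True, True)
    print(candidate.fits(["interoperability", "privacy"]))  # False -> revisit the privacy tradeoffs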

How do you think about selecting the right DID method for the job?

Let’s improve this framework together.

did:(customer) was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 29. October 2020

Ally Medina - Blockchain Advocacy

CA’s 2020 Blockchain Legislative Roundup

After a cup of menstrual blood went flying across the Senate floor, I had assumed 2019 would be California’s wildest legislative session for a while. Covid-19 proved me unfortunately wrong. The Legislative process, calendar and agenda was quickly thrown into the dumpster fire of March and everyone turned back to the white board.

When the Legislature returned from its two-month long “stay at home” recess in May, it passed a stripped-down state budget which reflected lower revenues given the pandemic-induced recession. They then began prioritizing and shelving hundreds of bills that would no longer make the cut in the truncated legislative calendar. Faced with less time to hold hearings and less money to spend on new proposals, legislators shelved an estimated three-quarters of the bills introduced at the beginning of the two-year session.

Here’s what happened for BAC’s blockchain/crypto sponsored bills:

AB 953 (Ting, San Francisco), which would have allowed state and local taxes to be paid with stablecoins, was sadly withdrawn, lacking a clear Covid-19 nexus.

AB 2004 (Calderon, Whittier) marked the first time verifiable credentials saw legislative debate. The bill to allow the use of verifiable credentials for covid-19 test results and other medical records made it through both houses with bipartisan support. Due to state budget constraints, it was ultimately vetoed; however, the concept quickly gained significant legislative momentum. We are actively working on our strategy for verifiable credentials policy next year.

AB 2150 (Calderon, Whittier) spun through several dizzying iterations. The Blockchain Advocacy Coalition worked closely with Assemblymember Calderon’s office to suggest amended language that would have directed the Department of Business Oversight to study the applicability of SEC Commissioner Hester Peirce’s Proposal to an intrastate safe harbor. An idea that in previous years seemed far-fetched suddenly had political legs and was well received by the agency and several committees. It died in the Senate Appropriations Committee along with nearly everything else that had a significant price tag or wasn’t urgently related to the pandemic.

Fruitful discussions with the DBO about crypto regulation were well timed, however. Given the microscope on consumer protections due to the economic distress caused by the pandemic, the agency received a $19.2 million allocation and a new name: California Department of Financial Protection and Innovation (CDFPI- much worse acronym imo).

HERE’S THE PART YOU NEED TO PAY ATTENTION TO:

With this new agency budget/mission, comes a very likely change in the way cryptocurrency is regulated in CA. AB 1864 does a few things:

- Establishes a Financial Technology Innovation Office based in San Francisco
- Requires the department to promulgate rules regarding registration requirements
- Charges this department with regulating currently unregulated financial services, including issuers of stored value and similar businesses

This marks a departure from the agency’s previous approach. Virtual currency businesses did not have any separate registration requirements or the need to apply for a money transmitter license. BAC participated in stakeholder calls this summer about the agency’s expansion and we are continuing to engage with the agency about how these registration requirements will be created. Cryptocurrency businesses need to understand that the agency has been given the authority to create these standards without going back to the legislature, so early engagement is key.

Interested in joining our coalition and having a seat at the table? Contact: ally@blockadvocacy.org

BAC has previously facilitated educational workshops with the Department of Business Oversight and hosted roundtables with Gov. Newsom, Treasurer Ma and the Legislature to build an understanding of the importance of the blockchain industry in CA.


Mike Jones: self-issued

Second OpenID Foundation Virtual Workshop

Like the First OpenID Foundation Virtual Workshop, I was once again pleased by the usefulness of the discussions at the Second OpenID Foundation Virtual Workshop held today. Many leading identity engineers and businesspeople participated, with valuable conversations happening both via the voice channel and in the chat. Topics included current work in the working groups, […]

Like the First OpenID Foundation Virtual Workshop, I was once again pleased by the usefulness of the discussions at the Second OpenID Foundation Virtual Workshop held today. Many leading identity engineers and businesspeople participated, with valuable conversations happening both via the voice channel and in the chat. Topics included current work in the working groups, such as eKYC-IDA, FAPI, MODRNA, FastFed, EAP, Shared Signals and Events, and OpenID Connect, plus OpenID Certification, OpenID Connect Federation, and Self-Issued OpenID Provider (SIOP) extensions.

Identity Standards team colleagues Kristina Yasuda and Tim Cappalli presented respectively on Self-Issued OpenID Provider (SIOP) extensions and Continuous Access Evaluation Protocol (CAEP) work. Here’s my presentation on the OpenID Connect working group (PowerPoint) (PDF) and the Enhanced Authentication Profile (EAP) (PowerPoint) (PDF) working group. I’ll add links to the other presentations when they’re posted.

Wednesday, 28. October 2020

The Dingle Group

Bridging to Self-Sovereign Identity

How to enable the Enterprise to move from existing centralized or federated identity access management systems to a decentralized model was the topic of the 15th Vienna Digital Identity Meetup*. In both the private and public sector, the capital investments in IAMs run into the billions of dollars; for decentralized identity models to make serious inroads in this sector, providing a roadmap

How to enable the Enterprise to move from existing centralized or federated identity access management systems to a decentralized model was the topic of the 15th Vienna Digital Identity Meetup*. In both the private and public sector, the capital investments in IAMs run into the billions of dollars; for decentralized identity models to make serious inroads in this sector, providing a roadmap or transition journey that both educates and enables the enterprise to make the move is required.

In our 15th event we discussed the routes being taken by Raonsecure and IdRamp. Both Raonsecure and IdRamp have been successful in introducing decentralized identity concepts to the market and in helping their customers start on this transition journey.

Alex David (Senior Manager, Raonsecure) started with a brief update on the forces driving decentralized identifiers in the Korean market and then presented Raonsecure’s Omnione product. In keeping with the theme of ‘bridging’ to bring DIDs and VCs into the market, Alex walked through six different pilot and proof-of-concept solutions that Omnione has implemented in the Korean market. These range from working with the Korean Military Manpower Association on DID-based authentication and issuing verifiable credentials (VCs) to Korean veterans, to the use of DIDs for driverless car identification in an autonomous vehicle pilot in Sejong, Korea.

Mike Vesey (CEO, IdRamp) introduced IdRamp and discussed their core objective of bringing DIDs and VCs into the enterprise market by creating a ’non-frightening’ educational path to adoption of decentralized identity. IdRamp is a session-based transactional gateway (no logging) to all things identity, providing common service delivery, compliance, and consent management across identity platforms. They are not an identity service provider; they more closely resemble a digital notary in the generation of decentralized identifiers and verifiable credentials. The service integration capability of IdRamp was demonstrated with the use of employee-issued verifiable credentials to authenticate to an enterprise service (in this case a Zoom session login).

Finally, the seed of an upcoming event was planted. As with any new technology market, breaking through the ’noise’ of everyday life is very difficult. This is no different for DIDs and VCs. You will have to watch the recording to get the topic…

For a recording of the event please check out the link: https://vimeo.com/472937478

Time markers:

0:00:00 - Introduction

0:05:04 - State of DIDs in South Korea

0:13:29 - Introducing Omnione

0:24:00 - DIDs and VCs in action in South Korea

0:51:18 - Introduction to IdRamp

1:04:00 - Interactive demo on IdRamp

1:08:00 - Wallets and compatibility

1:11:00 - Service integration demo & discussion

1:31:57 - Upcoming events

For more information on Raonsecure Omnione: https://omnione.net/en/main

For more information on IdRamp: https://idramp.com/

And as a reminder, due to increased COVID-19 infections we are back to online only events. Hopefully we will be back to in person and online soon!

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, legal, societal, and technology stakeholders on the value that a high-assurance digital identity creates by reducing risk and strengthening provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom, connecting and educating across borders and oceans.


Tuesday, 27. October 2020

MyDigitalFootprint

reflecting on the #socialdilemma, do mirrors provide a true reflection?

This is a response to filmmaker Jeff Orlowski’s Netflix documentary The Social Dilemma. If you have not watched it yet, it is worth it. There are many write-ups; here are three of my picks: one, two, three. However, rather than write another commentary on the pros and cons of the movie, I wanted to reflect on the idea that Social Media is a reflection of society

This is a response to filmmaker Jeff Orlowski’s Netflix documentary The Social Dilemma. If you have not watched it yet, it is worth it. There are many write-ups; here are three of my picks: one, two, three.

However, rather than write another commentary on the pros and cons of the movie, I wanted to reflect on the idea that Social Media is a reflection of society, as this is a core tenet of the work. The big theme is that all the platforms do is create a mirror reflecting back what society is like. However, there is more than one type of mirror!


 

The play in our minds is that a mirror is a mirror and does what it says on the tin: it reflects. If true, job done as such, and it makes no sense to read on. However, we know from CSI and other spy films that there are one-way mirrors as well. If we look at Social Media only through the lens of a reflecting mirror, we certainly can’t blame anyone for any outcome but ourselves. But we intrinsically know that Social Media is not a reflective mirror.

The imagination mirror is one where we make up what we want to see. Another reflection mirror is one that disperses light, where we see the rainbow of choice, or, like the mirrors at the fairground, a distorted reflection of reality.

The lower three are the dark mirrors, where the reflection is what someone else would like you to believe is a truth, creating confusion through a contradictory position. The true black mirrors of control and manipulation complete the set. These are closer to the two-way spy mirrors, as they allow others to see us without us knowing and to control the situation without us really knowing.

The key point: we should not assume or trust that the mirror we hold up is one that can provide a true reflection.


 


Monday, 26. October 2020

Webistemology - John Wunderlich

Privacy in Ontario?

MyData Canada recently submitted a report to the Government of Ontario in response to its consultation for strengthening privacy protections in Ontario.
MyData Canada Privacy Law Reform Submission

MyData Canada recently submitted a report to the Government of Ontario in response to its consultation for strengthening privacy protections in Ontario. You can download the submission from the MyData Canada site. I am a board member of MyData Global and a member of the MyData Canada hub. This is a brief summary of some of the recommendations in that report.

Part of what MyData Canada would like the province of Ontario to address is the current ‘gatekeeper model’, where each of us cedes control of information about us under terms of service or privacy policies based on a flawed consent model. As the report puts it,

Behind each of the gate-keepers (“data controllers” in GDPR terms) in the centralized model are thousands of intermediaries and data brokers invisible to the individuals whose data they process. Individuals’ personal data is used in ways they could not anticipate, and often without their awareness or even the opportunity to meaningfully consent.

This needs to be fixed.

Background

Canada has a multi-jurisdictional privacy environment. That means that both levels of government have privacy commissioners and privacy laws. Ontario, Canada’s most populous province, does not have a private sector privacy law. This leaves a number of categories of persons and organizations uncovered; don’t ask why, it’s a constitutional jurisdictional thing. Thus the consultation and submission. MyData Canada believes that,

…our proposed approach will help accelerate the development and uptake of privacy-focused, human-centric innovation and ultimately serve to regain public trust and confidence in the digital economy.
2-Branch Privacy Reform

MyData Canada proposes a two-branch approach to privacy law reform: a harmonization branch and a transformation branch.

The harmonization branch proposes an incremental approach to enable any new Ontario law to work harmoniously with other regimes, both in Canada and in the rest of the world. This branch is intended to ensure that Ontario is a low-friction end point for cross-border data flows with other data protection regimes. At the same time, this branch will introduce a regulatory framework with functional equivalency to the CCPA in the US and the GDPR in the EU. In essence, this broad framework skates to where the puck will be with respect to global data protection laws.

The digital transformation branch proposes to simultaneously create a ‘next-generation’ regulatory space within Ontario. This space will allow Ontario-based companies or organizations to create new forward-looking and individually centred solutions. To continue the metaphor, this branch will allow breakaway solutions that will disrupt the current platform information gatekeepers and return autonomy to individuals.

Harmonize Up

Rather than seeking a lowest common denominator or participating in a race to the bottom, MyData recommends harmonizing ‘up’, including the following:

Adopting a principled and risk-based approach to privacy regulation;
Coordinating with other provinces and the federal government, perhaps including a pan-Canadian council of information and privacy commissioners;
Aligning with Convention 108 and 108+;
Increased enforcement powers; and
Implementation support for businesses and organizations for compliance.

Digital Transformation

Create a regulatory environment to reward first movers with privacy-enhancing technologies that put people at the centre of their own data. Recommendations include:

Creating a privacy technology incubator;
Hosting regulatory sandboxes and hackathons;
Grants and other incentives for privacy ‘retrofits’;
Creating and supporting a regime for seals, badges, and privacy trust marks;
Fostering interoperability by requiring APIs or similar means to prevent or counter ‘platform dominance’ and network effects; and
Creating up-skilling programs for a multi-disciplinary privacy engineering centre of excellence in Ontario.

Summing up

The above is just a summary of the first recommendations of the report. It includes further recommendations on:

Taking a comprehensive approach to move beyond compliance;
Adopting a Consumer Protection and Human Rights oriented regulatory enforcement model; and
Adopting a multi-stakeholder and inclusive model to spur innovation and open data.

If you find this interesting please download and share the report.

Sunday, 25. October 2020

Just a Theory

Automate Postgres Extension Releases on GitHub and PGXN

Go beyond testing and fully automate the release of Postgres extensions on both GitHub and PGXN using GitHub actions.

Back in June, I wrote about testing Postgres extensions on multiple versions of Postgres using GitHub Actions. The pattern relies on the Docker image pgxn/pgxn-tools, which contains scripts to build and run any version of PostgreSQL, install additional dependencies, build, test, bundle, and release an extension. I’ve since updated it to support testing on the latest development release of Postgres, meaning one can test on any major version from 8.4 to (currently) 14. I’ve also created GitHub workflows for all of my PGXN extensions (except for pgTAP, which is complicated). I’m quite happy with it.

But I was never quite satisfied with the release process. Quite a number of Postgres extensions also release on GitHub; indeed, Paul Ramsey told me straight up that he did not want to manually upload extensions like pgsql-http and PostGIS to PGXN, but for PGXN to automatically pull them in when they were published on GitHub. It’s pretty cool that newer packaging systems like pkg.go.dev auto-index any packages on GitHub. Adding such a feature to PGXN would be an interesting exercise.

But since I’m low on TUITs for such a significant undertaking, I decided instead to work out how to automatically publish a release on GitHub and PGXN via GitHub Actions. After experimenting for a few months, I’ve worked out a straightforward method that should meet the needs of most projects. I’ve proven the pattern via the pair extension’s release.yml, which successfully published the v0.1.7 release today on both GitHub and PGXN. With that success, I updated the pgxn/pgxn-tools documentation with a starter example. It looks like this:

 1  name: Release
 2  on:
 3    push:
 4      tags:
 5        - 'v*' # Push events matching v1.0, v20.15.10, etc.
 6  jobs:
 7    release:
 8      name: Release on GitHub and PGXN
 9      runs-on: ubuntu-latest
10      container: pgxn/pgxn-tools
11      env:
12        # Required to create GitHub release and upload the bundle.
13        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
14      steps:
15      - name: Check out the repo
16        uses: actions/checkout@v2
17      - name: Bundle the Release
18        id: bundle
19        run: pgxn-bundle
20      - name: Release on PGXN
21        env:
22          # Required to release on PGXN.
23          PGXN_USERNAME: ${{ secrets.PGXN_USERNAME }}
24          PGXN_PASSWORD: ${{ secrets.PGXN_PASSWORD }}
25        run: pgxn-release
26      - name: Create GitHub Release
27        id: release
28        uses: actions/create-release@v1
29        with:
30          tag_name: ${{ github.ref }}
31          release_name: Release ${{ github.ref }}
32          body: |
33            Changes in this Release
34            - First Change
35            - Second Change
36      - name: Upload Release Asset
37        uses: actions/upload-release-asset@v1
38        with:
39          # Reference the upload URL and bundle name from previous steps.
40          upload_url: ${{ steps.release.outputs.upload_url }}
41          asset_path: ./${{ steps.bundle.outputs.bundle }}
42          asset_name: ${{ steps.bundle.outputs.bundle }}
43          asset_content_type: application/zip

Here’s how it works:

Lines 4-5 trigger the workflow only when a tag starting with the letter v is pushed to the repository. This follows the common convention of tagging releases with version numbers, such as v0.1.7 or v4.6.0-dev. This assumes that the tag represents the commit for the release.

Line 10 specifies that the job run in the pgxn/pgxn-tools container, where we have our tools for building and releasing extensions.

Line 13 passes the GITHUB_TOKEN variable into the container. This is the GitHub personal access token that’s automatically set for every build. It lets us call the GitHub API via actions later in the workflow.

Step “Bundle the Release”, on Lines 17-19, validates the extension META.json file and creates the release zip file. It does so by simply reading the distribution name and version from the META.json file and archiving the Git repo into a zip file. If your process for creating a release file is more complicated, you can do it yourself here; just be sure to include an id for the step, and emit a line of text so that later actions know what file to release. The output should look like this, with $filename representing the name of the release file, usually $extension-$version.zip:

::set-output name=bundle::$filename
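For example, a custom bundling step might look something like this. This is a minimal sketch only: the hand-built zip named pair-0.1.7.zip and the zip invocation are placeholder assumptions, not part of pgxn-tools; adapt them to your own packaging process.

- name: Bundle the Release
  id: bundle
  run: |
    # Build the release zip by hand instead of calling pgxn-bundle.
    zip -r pair-0.1.7.zip . -x '.git*'
    # Emit the bundle name so later steps know which file to upload.
    echo "::set-output name=bundle::pair-0.1.7.zip"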

Step “Release on PGXN”, on lines 20-25, releases the extension on PGXN. We take this step first because it’s the strictest, and therefore the most likely to fail. If it fails, we don’t end up with an orphan GitHub release to clean up once we’ve fixed things for PGXN.

With the success of a PGXN release, step “Create GitHub Release”, on lines 26-35, uses the GitHub create-release action to create a release corresponding to the tag. Note the inclusion of id: release, which will be referenced below. You’ll want to customize the body of the release; for the pair extension, I added a simple make target to generate a file, then pass it via the body_path config:

- name: Generate Release Changes
  run: make latest-changes.md
- name: Create GitHub Release
  id: release
  uses: actions/create-release@v1
  with:
    tag_name: ${{ github.ref }}
    release_name: Release ${{ github.ref }}
    body_path: latest-changes.md
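With body_path, the release notes come from the freshly generated latest-changes.md file in the workspace rather than from text hard-coded in the workflow, so the GitHub release body always matches the changes for the tagged version.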

Step “Upload Release Asset”, on lines 36-43, adds the release file to the GitHub release, using output of the release step to specify the URL to upload to, and the output of the bundle step to know what file to upload.

Lotta steps, but works nicely. I only wish I could require that the testing workflow finish before doing a release, but I generally tag a release once it has been thoroughly tested in previous commits, so I think it’s acceptable.
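If you do want that gate, one rough sketch is to run the tests as a separate job in the same release workflow and make the release job depend on it via needs. This is only a sketch: the pg-start and pg-build-test commands are the pgxn-tools testing helpers described in the June post, and the Postgres version is an arbitrary choice; substitute whatever test commands your project uses.

jobs:
  test:
    name: Test before release
    runs-on: ubuntu-latest
    container: pgxn/pgxn-tools
    steps:
    - name: Check out the repo
      uses: actions/checkout@v2
    - name: Run the tests
      # Start a Postgres cluster and run the extension's test suite.
      run: pg-start 13 && pg-build-test
  release:
    name: Release on GitHub and PGXN
    needs: test   # Only release if the test job succeeds.
    runs-on: ubuntu-latest
    container: pgxn/pgxn-tools
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
    - name: Check out the repo
      uses: actions/checkout@v2
    # ... bundle, PGXN release, and GitHub release steps as in the workflow above ...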

Now if you’ll excuse me, I’m off to add this workflow to my other PGXN extensions.

More about… Postgres PGXN GitHub GitHub Actions Automation CI/CD

Webistemology - John Wunderlich

A near term future history

This is a flight of fancy written a week and a half before the US election. I hope it proves to be bad speculation.
What if?

Here’s a speculative fiction plot, NOT a prediction, prompted by the news frenzy of US presidential politics and influenced by my own contrarian nature. If you read history and science fiction, you know we live in an interesting time. Whether it will be a historical turning point in US and global history is not something that we are privileged to know at the time. But if this is a historic turning point, I would like to plot out a story based on where we are, not much more than a week before the US election between President Trump and former Vice-President Biden. Let’s ignore that reality strains credulity, and proceed from there. Nothing below should be read as representative of my own views or preferences.

Biden wins?

The Democrats win the House and the Senate. Joe Biden wins the popular vote but the electoral college remains unclear pending delayed counts for mail-in ballots. Election night ends with both candidates being declared the victor by different media outlets.

Taking it to the streets

On the day after the election Trump claims to be the legitimate President based on claims of a rigged election. From the White House he asks his supporters to come out to defend his victory. Activists from both sides take to the streets in every major American city. The national guard takes sides, but on different sides in different states or cities, and civic order breaks down.

COVID confusion

The electoral college convenes and declares Biden to be the winner. Trump and his staff leave the White House under political pressure. President-elect Biden is then revealed to be ill with COVID, increasing uncertainty. Before the inauguration, Biden dies in hospital, and Kamala Harris is sworn in as the 46th President of the United States, with Pete Buttigieg as her vice-president. The Joint Chiefs of Staff are prominently on display at the inauguration even as conflict rages on the streets of America.

Consolidation

President Harris deploys the military, aided by private military contractors, to establish order on the streets. Former President Trump flees the country to Moscow, declaring a government in exile. The Harris administration declares a continuing state of military emergency with Democratic majorities in the House and Senate. The state of emergency delays elections until the ‘emergency is over’.

Epilogue

10 years after the start of the Harris dynasty, the United States has become the Democratic Republic of America, run by the United Democratic Party under Grand President Harris. The former Republican Party has been absorbed into the United Democratic Party. Wall Street has become a financial backwater. The US overseas military presence has been dramatically reduced because of domestic military requirements. There is an underground resistance, led by Alexandria Ocasio-Cortez, the most wanted fugitive in America. Climate change is at 2.5 degrees and increasing. China has become the dominant global power, dominating the United Nations, based on its Belt and Road Initiative.

End note

I hope that this will prove to be really bad speculation. The past is prologue but it is our job to create the history we want to inhabit. I wish my American friends a successful and well-run election with a successful and peaceful transition of power if the Biden-Harris ticket wins.


Boris Mann's Blog

One of the few times I’ve stayed somewhere else on #bowenisland. Up on Eagle Cliff, looking out to Strait of Georgia and across to UBC. Wind and whitecaps.

One of the few times I’ve stayed somewhere else on #bowenisland. Up on Eagle Cliff, looking out to Strait of Georgia and across to UBC. Wind and whitecaps.

Friday, 23. October 2020

Boris Mann's Blog

My parents are moving today. Third move in 47 years they’ve been in Canada.

My parents are moving today. Third move in 47 years they’ve been in Canada.

Thursday, 22. October 2020

Identity Praxis, Inc.

PodCast – On eCommerce

Really enjoyed an engaging interview (09:24 min) on eCommerce with Miguel Arriola, Solutions Architect Manager at Gretrix. #57 Michael Becker the CEO of Identity Praxis, Inc. tells us his thoughts on eCommerce today, with insights into the future of personal information management exchange. The post PodCast – On eCommerce appeared first on Identity Praxis, Inc..

Really enjoyed an engaging interview (09:24 min) on eCommerce with Miguel Arriola, Solutions Architect Manager at Gretrix.

#57: Michael Becker, the CEO of Identity Praxis, Inc., tells us his thoughts on eCommerce today, with insights into the future of personal information management exchange.

The post PodCast – On eCommerce appeared first on Identity Praxis, Inc..


Doc Searls Weblog

On KERI: a way not to reveal more personal info than you need to

You don’t walk around wearing a name badge.  Except maybe at a conference, or some other enclosed space where people need to share their names and affiliations with each other. But otherwise, no. Why is that? Because you don’t need a name badge for people who know you—or for people who don’t. Here in civilization […]

You don’t walk around wearing a name badge.  Except maybe at a conference, or some other enclosed space where people need to share their names and affiliations with each other. But otherwise, no.

Why is that?

Because you don’t need a name badge for people who know you—or for people who don’t.

Here in civilization we typically reveal information about ourselves to others on a need-to-know basis: “I’m over 18.” “I’m a citizen of Canada.” “Here’s my Costco card.” “Hi, I’m Jane.” We may or may not present credentials in these encounters. And in most we don’t say our names. “Michael” being a common name, a guy called “Mike” may tell a barista his name is “Clive” if the guy in front of him just said his name is “Mike.” (My given name is David, a name so common that another David re-branded me Doc. Later I learned that his middle name was David and his first name was Paul. True story.)

This is how civilization works in the offline world.

Kim Cameron wrote up how this ought to work, in Laws of Identity, first published in 2004. The Laws include personal control and consent, minimum disclosure for a constrained use, justifiable parties, and plurality of operators. Again, we have those in here in the offline world where your body is reading this on a screen.

In the online world behind that screen, however, you have a monstrous mess. I won’t go into why. The results are what matter, and you already know those anyway.

Instead, I’d like to share what (at least for now) I think is the best approach to the challenge of presenting verifiable credentials in the digital world. It’s called KERI, and you can read about it here: https://keri.one/. If you’d like to contribute to the code work, that’s here: https://github.com/decentralized-identity/keri/.

I’m still just getting acquainted with it, in sessions at IIW. The main thing is that I’m sure it matters. So I’m sharing that sentiment, along with those links.

 


Justin Richer

Filling in the GNAP

Filling in the GNAP About a year ago I wrote an article arguing for creating the next generation of the OAuth protocol. That article, and some of the other writing around it, has been picked up recently, and so people have been asking me what’s the deal with XYZ, TxAuth, OAuth 3.0, and anything else mentioned there. As you can imagine, a lot has happened in the last year and we’re in a very
Filling in the GNAP

About a year ago I wrote an article arguing for creating the next generation of the OAuth protocol. That article, and some of the other writing around it, has been picked up recently, and so people have been asking me what’s the deal with XYZ, TxAuth, OAuth 3.0, and anything else mentioned there. As you can imagine, a lot has happened in the last year and we’re in a very different place.

The short version is that there is now a new working group in the IETF: Grant Negotiation and Authorization Protocol (gnap). The mailing list is still at txauth, and the first WG draft is available online now as draft-ietf-gnap-core-protocol-00.

How Are These All Related?

OK, so there’s GNAP, but now you’re probably asking yourself what’s the difference between GNAP and XYZ, or TxAuth, or OAuth 3.0. With the alphabet soup of names, it’s certainly confusing if you haven’t been following along the story in the last year.

The XYZ project started as a concrete proposal for how a security protocol could work post-OAuth 2.0. It was based on experience with a variety of OAuth-based and non-OAuth-based deployments, and on conversations with developers from many different backgrounds and walks of life. This started out as a test implementation of ideas, which was later written down into a website and even later incorporated into an IETF individual draft. The most important thing about XYZ is that it has always been implementation-driven: things almost always started with code and moved forward from there.

This led to the project itself being called OAuth.XYZ after the website, and later just XYZ. When it came time to write the specification, this document was named after a core concept in the architecture: Transactional Authorization. The mailing list at IETF that was created for discussing this proposal was named after this draft: TxAuth. As such, the draft, project, and website were all referred to as either XYZ or TxAuth depending on who and when you asked.

After months of discussion and debate (because naming things is really hard), the working group settled on GNAP, and GNAP is now the official name of both the working group and the protocol the group is working on publishing.

As for OAuth 3.0? Simply put, it canonically does not exist. The GNAP work is being done by many members of the OAuth community, but not as part of the OAuth working group. While there may be people who refer to GNAP as OAuth 3.0, and it does represent a shift forward similar to the one OAuth 2.0 made, GNAP is not part of the OAuth protocol family. It’s not out of the question that the OAuth working group will decide to adopt GNAP or something else in the future to create OAuth 3.0, but right now that is not on the table.

The GNAP Protocol

Not only is GNAP an official working group, but the GNAP protocol has also been defined in an official working group draft document. This draft represents the output of several months of concerted effort by a design team within the GNAP working group. The protocol in this document is not exactly the same as the earlier XYZ/TxAuth protocol, since it pulled from multiple sources and discussions, but there are some familiar pieces.

The upshot is that GNAP is now an official draft protocol.

GNAP is also not a final protocol by any stretch. If you read through the draft, you’ll notice that there are a large number of things tagged as “Editor’s Notes” and similar commentary throughout, making up a significant portion of the page count. These represent portions of the protocol or document where the design team identified some specific decisions and choices that need to be made by the working group. The goal was to present a set of initial choices along with rationale and context for them.

But that’s not to say that the only flexible portions are those marked in the editor’s notes. What’s important about the gnap-00 document is that it’s a starting point for the working group discussion. It gives the working group something concrete to talk about and debate instead of a blank page of unknown possibilities (and monsters). With this document in hand, the working group can and will change the protocol and how it’s presented over the specification’s lifecycle.

The Immediate Future

Now that GNAP is an active standard under development, XYZ will shift into being an open-source implementation of GNAP from here on out. As of the time of publication, we are actively working to implement all of the changes that were introduced during the design team process. Other developers are gearing up to implement the gnap-00 draft as well, and it will be really interesting to try to plug these implementations into each other to test interoperability at a first stage.

TxAuth and Transactional Authorization are functionally retired as names for this work, though the mailing list at IETF will remain txauth so you might still hear reference to that from time to time because of this.

And as stated above, OAuth 3.0 is not a real thing. Which is fine, since OAuth 2.0 isn’t going anywhere any time soon. The work on GNAP is shifting into a new phase that is really just starting. I think we’ve probably got a couple years of active work on this specification, and a few more years after that before anything we do really sees any kind of wide adoption on the internet. These things take a long time and a lot of work, and it’s my hope to see a diverse and engaged group building things out!

Wednesday, 21. October 2020

Boris Mann's Blog

Dude Chilling Park in the fall

Dude Chilling Park in the fall


Identity Praxis, Inc.

Atlanta Innovation Forum Webinar – The Challenging New World of Privacy & Security

An in-depth conversation on privacy & security.  On October 15, 2020, I had a wonderful time discussing privacy and security. Speakers Joining me on the panel were, Carlos J. Bosch, Head of Technology, GSMA North America Matt Littleton, Global Advanced Compliance Specialist, Microsoft Donna Gallaher, President & CEO, New Oceans Enterprises Michael Becker, Founder & […] The post

An in-depth conversation on privacy & security. 

On October 15, 2020, I had a wonderful time discussing privacy and security.

Speakers

Joining me on the panel were,

Carlos J. Bosch, Head of Technology, GSMA North America
Matt Littleton, Global Advanced Compliance Specialist, Microsoft
Donna Gallaher, President & CEO, New Oceans Enterprises
Michael Becker, Founder & CEO, Identity Praxis
Chad Hunt, Supervisory Special Agent, Federal Bureau of Investigation
Julie Meredith, Federal Bureau of Investigation

Key Themes

The key themes that came out of our conversation:

An assessment of corporate and individual threats and attack vectors (e.g. phishing, ransomware, etc.)
People’s sentiment in today’s age: connection, concern, control (compromised by convenience)
Strategies for corporate risk assessment
Strategies for executing corporate privacy & security measures
Relevance of and adherence to privacy regulations (e.g. GDPR, CCPA)
Definitions of key terms, concepts, and nuances: privacy, security, compliance, identity, risk, etc.
A review of key frameworks: the Personal Information Management Triad and the five pillars of digital sovereignty

You can watch our 60-minute discussion below.

 

 

Reference

Bosch, C., Hunt, C., Gallaher, D., Becker, M., & Meredith, J. (2020, October 15). The Challenging New World of Privacy & Security [Webinar]. Atlanta Innovation Forum, Online. https://www.youtube.com/watch?v=JmlvOKg_dS4

The post Atlanta Innovation Forum Webinar – The Challenging New World of Privacy & Security appeared first on Identity Praxis, Inc..


Karyl Fowler

Transmute Closes $2M Seed Round

We’re thrilled to announce the close of Transmute’s $2 million series seed round led by Moonshots Capital, and joined by TMV, Kerr Tech Investments and several strategic angels. Transmute has gained momentum on our mission to be the trusted data exchange platform for global trade. As a byproduct of the pandemic, the world is collectively facing persistent supply chain disruption and unpredictabil

We’re thrilled to announce the close of Transmute’s $2 million series seed round led by Moonshots Capital, and joined by TMV, Kerr Tech Investments and several strategic angels.

Transmute has gained momentum on our mission to be the trusted data exchange platform for global trade. As a byproduct of the pandemic, the world is collectively facing persistent supply chain disruption and unpredictability. This, coupled with increasing traceability regulations, is driving an urgency for importers to fortify their supply chains. COVID-19 especially has highlighted the need for preventing counterfeit goods and having certainty about your suppliers (and their suppliers).

Transmute Co-founders, Karyl Fowler & Orie Steele @ SXSW 2019

Transmute’s software is upgrading trade documentation today to give importers a competitive edge in an increasingly dynamic, global marketplace. Leveraging decentralized identifier (DID) and verifiable credential (VC) tech with existing cloud-based systems, Transmute is able to offer digital product and supplier credentials that are traceable across an entire logistics ecosystem. From point of origin to end customer, we are unlocking unprecedented visibility into customers’ supplier networks.

Disrupting a highly regulated and old-fashioned industry is complex, and an intentional first step in our go-to-market strategy has been balancing both the needs of regulators and commercial customers.

This is why we’re incredibly proud to join forces with our lead investors at Moonshots Capital, a VC firm focused on investing in extraordinary leaders. We look forward to growing alongside Kelly Perdew (our newest Board of Directors member) and his founding partner Craig Cummings. They’re a team of military veterans and serial entrepreneurs with extensive success selling into government agencies and enterprises.

We are equally proud to be joined by Marina Hadjipateras and the team at TMV, a New York-based firm focused on funding pioneering, early-stage founders. Between their commitment to diverse teams, building sustainable futures and their deep expertise in global shipping and logistics, we feel more than ready to take on global trade with this firm.

The support of Kerr Tech Investments, led by Josh and Michael Kerr, further validates our company’s innovative approach to data exchange. Josh is a seasoned entrepreneur, an e-signature expert and has been advising us since Transmute’s inception.

Closing our seed round coincides with another exciting announcement: our recent launch of Phase II work with the U.S. Department of Homeland Security, Science & Technology’s Silicon Valley Innovation Program (SVIP) to enhance “transparency, automation and security in processing the importation of raw materials” like steel.

Our vision is broader than just improving how trade gets done, and steel imports are just the beginning. We’re inserting revolutionary changes into the fabric of how enterprises manage product and supplier identity, effectively building a bridge — or a fulcrum, rather — towards new revenue streams and business models across industries.

Last — but absolutely not least — I want to give a personal shoutout to my core teammates; startups are a team sport, and our team is stacked! Tremendous congratulations as the