Last Update 2:11 AM November 29, 2022 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Tuesday, 29. November 2022

John Philpin : Lifestream

Two interesting links for the Mastodon world … as you / if you move out of Twitter

Mastodon servers

Unfollow Twitter users who don’t tweet often enough.


Just added myself to the wait list. Because I clearly have the time!


Simon Willison

Stable Diffusion 2.0 and the Importance of Negative Prompts for Good Results

Stable Diffusion 2.0 is out, and it's a very different model from 1.4/1.5. It's trained using a new text encoder (OpenCLIP, in place of OpenAI's CLIP) which means a lot of the old tricks - notably using "Greg Rutkowski" to get high quality fantasy art - no longer work. What DOES work, incredibly well, is negative prompting - saying things like "cyberpunk forest by Salvador Dali" but negative on "trees, green". Max Woolf explores negative prompting in depth in this article, including how to combine it with textual inversion.
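
To make the mechanics concrete, here is a minimal sketch of negative prompting with the Hugging Face diffusers library (my illustration, not from the post; the model id, prompts and CUDA device are assumptions):

import torch
from diffusers import StableDiffusionPipeline

# Model id, prompts and device are illustrative assumptions only.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="cyberpunk forest by Salvador Dali",
    # The sampler is steered away from anything matching the negative prompt.
    negative_prompt="trees, green",
).images[0]

image.save("cyberpunk-forest.png")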


Ben Werdmüller

Thinking about taking your computer to the repair shop? Be very afraid

“Researchers at University of Guelph in Ontario, Canada, recovered logs from laptops after receiving overnight repairs from 12 commercial shops. The logs showed that technicians from six of the locations had accessed personal data and that two of those shops also copied data onto a personal device. Devices belonging to females were more likely to be snooped on, and that snooping tended to seek more sensitive data, including both sexually revealing and non-sexual pictures, documents, and financial information.” #Technology

[Link]

Monday, 28. November 2022

John Philpin : Lifestream

I installed ‘one app’ to try it out.

It didn’t work out.

I have now uninstalled it - along with 10 others that got installed at the same time that you didn’t tell me about.

I’m looking at you Adobe.


Ben Werdmüller

Post, the latest Twitter alternative, is betting big on micropayments for news

“Consumers have changed their behavior. They want to consume their news in their feed. And so, obviously, consumption from a feed does not work with subscription. And social media networks, with their advertising-based model, promote the worst in us because it works. I mean, the algorithms are don’t really care. They just, you know, try to achieve the engagement at any cost, right?” OK, but open, non-proprietary feed technology is available and widely used … #Media

[Link]


John Philpin : Lifestream

Day 29: fish

As a kid in my German class I learnt:

“Fischers Fritze fischt frische Fische, frische Fische fischt Fischers Fritze”

Rough Translation:

“Fisher Fritz catches fresh fish. Fresh fish is what Fisher Fritz catches.”

All my MicroBlog Vember 2022 posts


I guess ‘Cyber Monday' explains why my junk folder has at least 5 times as many entries as usual?


Jon Udell

Autonomy, packet size, friction, fanout, and velocity

Nostalgia is a dangerous drug and it’s always risky to wallow in it. So those of us who fondly remember the early blogosphere, and now want to draw parallels to the fediverse, should do so carefully. But we do want to learn from history.

Here’s one way to compare five generations of social software along the five dimensions named in the title of this post.

             Autonomy   Packet Size   Friction   Fanout   Velocity
Usenet       medium     high          medium     medium   low
Blogosphere  high       high          high       low      low
Facebook     low        high          low        medium   high
Twitter      low        low           low        high     high
Fediverse    high       medium        high       medium   medium

These are squishy categories, but I think they surface key distinctions. Many of us who were active in the blogosphere of the early 2000s enjoyed a high level of autonomy. Our RSS readers were our Internet dashboards. We loaded them with a curated mix of official and individual voices. There were no limits on the size of packets exchanged in this network. You could write one short paragraph or a 10,000-word essay. Networking wasn’t frictionless because blog posts did mostly feel like essays, and because comments didn’t yet exist. To comment on my blog post you’d write your own blog post linking to it.

That friction limited the degree to which a post would fan out through the network, and the velocity of its propagation. The architecture of high friction, low fanout, and low velocity was a winning combination for a while. In that environment I felt connected but not over-connected, informed but not overloaded.

Twitter flipped things around completely. It wasn’t just the loss of autonomy as ads and algos took over. With packets capped at 140 characters, and tweets potentially seen immediately by everyone, friction went nearly to zero. The architecture of low friction created an addictive experience and enabled powerful effects. But it wasn’t conducive to healthy discourse.

The fediverse can, perhaps, strike a balance. Humans didn’t evolve to thrive in frictionless social networks with high fanout and velocity, and arguably we shouldn’t. We did evolve in networks governed by Dunbar’s number, and our online networks should respect that limit. We need less friction within communities of knowledge and practice, more friction between them. We want messages to fan out pervasively and rapidly within communities, but less so between them.

We’re at an extraordinary inflection point right now. Will the fediverse enable us to strike the right balance? I think it has the right architectural ingredients to land where I’ve (speculatively) placed it in that table. High autonomy. As little friction as necessary, but not too little. As much fanout and velocity as necessary, but not too much. Nobody knows how things will turn out, predictions are futile, behavior is emergent, but I am on the edge of my seat watching this all unfold.


John Philpin : Lifestream

Belief in nothing suggests that you believe in something. If only nothing.

I have a hard time believing in any kind of absolute. To do so suggests that my mind is made up.

That said, I love the debates.


Activist investors at Google are arguing that they pay people too much.

… let the ‘race to the bottom’ commence.


There’s a new ’thing’ circulating … that oft’ includes a statement that references some percentage of men and a lesser percentage of women not having close friends …

The percentage is very specific.

The definition of ‘close’: non-existent.


Red Hand Files | Issue 214 | Beautiful


Ben Werdmüller

With no child tax credit and inflation on the rise, families are slipping back into poverty

“U.S. households are having to pay between $300 to $400 more each month compared to last year because of inflation. Food insecurity is rising once again. Now, advocates are pointing to a growing body of work that shows how low-income and marginalized families relied on the program to survive.” #Society

[Link]


News Outlets Urge U.S. to Drop Charges Against Julian Assange

“In a joint open letter, The Times, The Guardian, Le Monde, Der Spiegel and El País said the prosecution of Mr. Assange under the Espionage Act “sets a dangerous precedent” that threatened to undermine the First Amendment and the freedom of the press. “Obtaining and disclosing sensitive information when necessary in the public interest is a core part of the daily work of journalists,” the letter said. “If that work is criminalized, our public discourse and our democracies are made significantly weaker.”” #Media

[Link]


Simon Willison

Quoting JWZ

If posts in a social media app do not have URLs that can be linked to and viewed in an unauthenticated browser, or if there is no way to make a new post from a browser, then that program is not a part of the World Wide Web in any meaningful way.

Consign that app to oblivion.

JWZ


John Philpin : Lifestream

Holding your keys wandering around the house trying to find your keys.

Searching for your glasses that are clearly on your head .. it’s just that you can’t see them up there.

Just went through a digital version of those two scenarios.

GAWD!


Ben Werdmüller

PSA: Do Not Use Services That Hate The Internet

“If posts in a social media app do not have URLs that can be linked to and viewed in an unauthenticated browser, or if there is no way to make a new post from a browser, then that program is not a part of the World Wide Web in any meaningful way. Consign that app to oblivion.” #Technology

[Link]


Damien Bod

Sharing Microsoft Graph permissions and solution Azure App Registrations

This article looks at using Microsoft Graph permissions in Azure App registrations: whether you should use Graph in specific Azure App registration types, and whether it is OK to expose Graph permissions together with other scopes and roles. Is it OK to expose Graph permissions in public Azure App registrations?

Using Graph with public applications

As a rule, I do not allow any Graph permissions to be assigned to an Azure App registration used by a public application, apart from the delegated User.Read permission or similarly low-risk delegated permissions. The problem with exposing a Graph permission in this way is that you share the full permission, not just the specific use case your application needs. For example, if I expose the User.ReadWrite.All permission in a public Azure App registration, anyone who acquires an access token for this permission can do everything it allows. My application might only need to let a user update the firstname and lastname properties, but with this token I could list all the users on the tenant and leak that data, or create and delete users.

If you find a public Azure App registration with Graph permissions other than the User.Read scope, you have probably found a possible security attack vector or an elevated-permissions problem.

A better setup is to separate the Graph permissions into a different Azure App registration and only allow this to be used in a confidential client.

Using Graph with application App Registrations

When using Graph application permissions, I create a specific Azure App registration to expose the permissions which require a secret or certificate to acquire the access token. You should not share this with an Azure App registration used to expose different APIs. If only using this inside a specific confidential Web client which is not used to expose further APIs, then it is ok to share the Graph permissions and the confidential client definitions in the same Azure App registration.
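
As a rough sketch of that separation - in Python with MSAL, rather than the .NET libraries used elsewhere on this blog - a dedicated confidential client acquires an application-permission Graph token using the client credentials flow. The tenant, client id and secret values are placeholders:

import msal

# Placeholder values for illustration only - in a real deployment these
# come from configuration or a key vault, never from source code.
TENANT_ID = "your-tenant-id"
CLIENT_ID = "graph-app-registration-client-id"
CLIENT_SECRET = "client-secret-or-certificate"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Application permissions use the .default scope: the token contains only
# the Graph app roles granted to this dedicated App registration.
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)

if "access_token" in result:
    print("Acquired a Graph access token for the confidential client")
else:
    print("Token request failed:", result.get("error_description"))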

I never mix application permissions and delegated permissions in the same Azure App registration. If you find yourself doing this, it is probably the result of an architecture error and should be reviewed.

Separating Graph permissions and solution App registrations

When using an Azure App registration to access a specific API role or scope, do not use the same one to expose a Graph permission. The intent of the Azure App registration is to do whatever the API exposed through it allows; if you also allow Graph permissions, the App registration can be used for two different purposes. The client using it could execute an elevation-of-privilege attack, as the same secret/certificate can be used to acquire both permissions - or, even worse, no secret at all is required to get an access token for the Graph API.

You might not want the client using the API to have full access to everything exposed by the Graph permission; you probably only want to expose a subset. This can be solved with an API that validates the specific request in a trusted environment: the Graph permissions from a separate Azure App registration are used there with a confidential client, and only the required subset of Graph features is exposed to the third party. A secret or certificate must be required to get the access token for the Graph permission, so it is not possible for the third-party application to acquire a Graph access token directly.

What about sharing Graph permissions with third-party applications?

Sharing Graph permissions with third-party clients is sharing trust. I would avoid this as much as possible and use a zero-trust strategy. Only expose or share what is required. How secure are the secrets or certificates in the other solutions? How easy is it to rotate the secrets?

Links:

https://learn.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api?view=aspnetcore-6.0

https://learn.microsoft.com/en-us/graph/tutorials

https://learn.microsoft.com/en-us/training/modules/msgraph-dotnet-core-show-user-emails/

https://developer.microsoft.com/en-us/graph/

https://github.com/AzureAD/microsoft-identity-web

https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow

https://learn.microsoft.com/en-us/graph/permissions-reference

https://posts.specterops.io/azure-privilege-escalation-via-azure-api-permissions-abuse-74aee1006f48

https://learn.microsoft.com/en-us/azure/active-directory/develop/secure-least-privileged-access

Using Microsoft Graph API in ASP.NET Core

John Philpin : Lifestream

Just received 2 AI files that I need to tweak. Not a designer, so don’t have Illustrator.

I went a-looking.

The application is definitely ‘locked down’.

Either Adobe or Corel are needed to edit the files.

Both require body part harvesting as part of the acquisition process.


Day 28: trend

The trend line of my daily posts on my micro blog has definitely been heading upwards as I participated in the daily post challenge that we all know as ‘MicroBlog Vember’.

All my MicroBlog Vember 2022 posts


🎶🎵 Today I learned that Zane Lowe is in New Zealand and opened for Groove Armada at their Auckland gig on Saturday.

I really need to pay more attention … AHEAD of time.

Sunday, 27. November 2022

John Philpin : Lifestream

Getting behind on my music podcast listening pleasure… This is after the culling.


Ben Werdmüller

I find it very emotionally hard when ...

I find it very emotionally hard when my baby is sad. I wish I could fix it for him. Poor little guy.


John Philpin : Lifestream

🎶🎵 The Sound Track Of My Life is another old post that popped up over the weekend when I was searching for other stuff.

It was meant to be a ‘living post’. It clearly died.

I merged it with other things I am writing in Obsidian.

One day it might make it back into the wild.


🎶🎵 7 years ago I was listening to The Loop, which is probably why I wrote A Letter To Jim Dalrymple and Merlin Mann. They were clearly chatting about music and I obviously took issue.

Never heard anything back.

Not sure why it was Merlin and not Dave on the show.


Day 27: motivation

Motivation is hard post ’Turkey Day’

All my MicroBlog Vember 2022 posts


Ben Werdmüller

Finding Affordable Child Care Has Never Been Harder

“America’s child-care infrastructure was broken before the pandemic, but the past few years have pushed it to the brink. Now, as more employers expect workers back in the office, a perfect storm of day-care closures, staffing shortages, and inflation have made finding affordable child care harder than ever.” #Society

[Link]


Bye, Twitter

“Speaking as a random was-successful-on-Twitter person, I can see no good arguments for redirecting my voice into anyone else’s for-profit venture-funded algorithm-driven engagement-maximizing wet dream.” #Twitter

[Link]

Saturday, 26. November 2022

John Philpin : Lifestream

Paging all CRAFT users.

I have been a happy user of CRAFT for nearly as long as they have been around, using their Personal Pro level.

They have a deal right now that allowed me to upgrade to their business level - which includes a custom domain - and I pay nothing extra.

Enjoy.


Ben Werdmüller

Defederation and governance processes

“To “keep things the way they are” is always an option, never the default. Framing this option as a default position introduces a significant conservative bias — listing it as an option removes this bias and keeps a collective evolutionary.” Some great thoughts on collective decision-making that pertain directly to open source. #Technology

[Link]


"Feedback is a gift but not a ...

"Feedback is a gift but not a demand for change," I tell my infant son, thinking I'm being very funny.

"Feedback is a gift but not a demand for change," I tell my infant son, thinking I'm being very funny.


Hip! Hip! Phooey!

I had no idea: “The phrase [hip hip hooray] does have anti-Semitic roots. Rioters in Europe sometimes shouted "Hep! Hep!" while on prowl for Jews, and mob harassment of Jews in Hamburg, Frankfurt, and other German cities in 1819 became known as the "Hep! Hep!" riots. Hitler's storm troopers adopted this jeer.” #Society

[Link]


Simon Willison

Coping strategies for the serial project hoarder

I gave a talk at DjangoCon US 2022 in San Diego last month about productivity on personal projects, titled "Massively increase your productivity on personal projects with comprehensive documentation and automated tests".

The alternative title for the talk was Coping strategies for the serial project hoarder.

I'm maintaining a lot of different projects at the moment. Somewhat unintuitively, the way I'm handling this is by scaling down techniques that I've seen working for large engineering teams spread out across multiple continents.

The key trick is to ensure that every project has comprehensive documentation and automated tests. This scales my productivity horizontally, by freeing me up from needing to remember all of the details of all of the different projects I'm working on at the same time.

You can watch the talk on YouTube (25 minutes). Alternatively, I've included a detailed annotated version of the slides and notes below.

This was the title I originally submitted to the conference. But I realized a better title was probably...

Coping strategies for the serial project hoarder

This video is a neat representation of my approach to personal projects: I always have a few on the go, but I can never resist the temptation to add even more.

My PyPI profile (which is only five years old) lists 185 Python packages that I've released. Technically I'm actively maintaining all of them, in that if someone reports a bug I'll push out a fix. Many of them receive new releases at least once a year.

Aside: I took this screenshot using shot-scraper with a little bit of extra JavaScript to hide a notification bar at the top of the page:

shot-scraper 'https://pypi.org/user/simonw/' \
  --javascript "
document.body.style.paddingTop = 0;
document.querySelector(
  '#sticky-notifications'
).style.display = 'none';
" --height 1000

How can one individual maintain 185 projects?

Surprisingly, I'm using techniques that I've scaled down from working at a company with hundreds of engineers.

I spent seven years at Eventbrite, during which time the engineering team grew to span three different continents. We had major engineering centers in San Francisco, Nashville, Mendoza in Argentina and Madrid in Spain.

Consider timezones: engineers in Madrid and engineers in San Francisco had almost no overlap in their working hours. Good asynchronous communication was essential.

Over time, I noticed that the teams that were most effective at this scale were the teams that had a strong culture of documentation and automated testing.

As I started to work on my own array of smaller personal projects, I found that the same discipline that worked for large teams somehow sped me up, when intuitively I would have expected it to slow me down.

I wrote an extended description of this in The Perfect Commit.

I've started structuring the majority of my work in terms of what I think of as "the perfect commit" - a commit that combines implementation, tests, documentation and a link to an issue thread.

As software engineers, it's important to note that our job generally isn't to write new software: it's to make changes to existing software.

As such, the commit is our unit of work. It's worth us paying attention to how we can make our commits as useful as possible.

Here's a recent example from one of my projects, Datasette.

It's a single commit which bundles together the implementation, some related documentation improvements and the tests that show it works. And it links back to an issue thread from the commit message.

Let's talk about each component in turn.

There's not much to be said about the implementation: your commit should change something!

It should only change one thing, but what that actually means varies on a case by case basis.

It should be a single change that can be documented, tested and explained independently of other changes.

(Being able to cleanly revert it is a useful property too.)

The goals of the tests that accompany a commit are to prove that the new implementation works.

If you apply the implementation the new tests should pass. If you revert it the tests should fail.

I often use git stash to try this out.

If you tell people they need to write tests for every single change they'll often push back that this is too much of a burden, and will harm their productivity.

But I find that the incremental cost of adding a test to an existing test suite keeps getting lower over time.

The hard bit of testing is getting a testing framework setup in the first place - with a test runner, and fixtures, and objects under test and suchlike.

Once that's in place, adding new tests becomes really easy.

So my personal rule is that every new project starts with a test. It doesn't really matter what that test does - what matters is that you can run pytest to run the tests, and you have an obvious place to start building more of them.
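
For example (my illustration, not from the talk), that very first test can be as trivial as this - its only job is to make pytest runnable and give later tests an obvious home:

# tests/test_basic.py - a deliberately trivial first test.
# Its only purpose is to prove that `pytest` runs and to give later,
# real tests an obvious place to live.


def test_placeholder():
    assert 1 + 1 == 2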

I maintain three cookiecutter templates to help with this, for the three kinds of projects I most frequently create:

simonw/python-lib for Python libraries
simonw/click-app for command line tools
simonw/datasette-plugin for Datasette plugins

Each of these templates creates a project with a setup.py file, a README, a test suite and GitHub Actions workflows to run those tests and ship tagged releases to PyPI.
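
As a small hedged sketch of what using one of these templates looks like, cookiecutter can be driven from its Python API as well as the command line; with no extra arguments it prompts interactively for the template variables:

# Sketch: create a new project skeleton from one of the templates above.
# Requires `pip install cookiecutter`; the "gh:" prefix fetches the
# template from GitHub.
from cookiecutter.main import cookiecutter

cookiecutter("gh:simonw/python-lib")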

I have a trick for running cookiecutter as part of creating a brand new repository on GitHub. I described that in Dynamic content for GitHub repository templates using cookiecutter and GitHub Actions.

This is a hill that I will die on: your documentation must live in the same repository as your code!

You often see projects keep their documentation somewhere else, like in a wiki.

Inevitably it goes out of date. And my experience is that if your documentation is out of date people will lose trust in it, which means they'll stop reading it and stop contributing to it.

The gold standard of documentation has to be that it's reliably up to date with the code.

The only way you can do that is if the documentation and code are in the same repository.

This gives you versioned snapshots of the documentation that exactly match the code at that time.

More importantly, it means you can enforce it through code review. You can say in a PR "this is great, but don't forget to update this paragraph on this page of the documentation to reflect the change you're making".

If you do this you can finally get documentation that people learn to trust over time.

Another trick I like to use is something I call documentation unit tests.

The idea here is to use unit tests to enforce that concepts introspected from your code are at least mentioned in your documentation.

I wrote more about that in Documentation unit tests.

Here's an example. Datasette has a test that scans through each of the Datasette plugin hooks and checks that there is a heading for each one in the documentation.

The test itself is pretty simple: it uses pytest parametrization to look through every introspected plugin hook name, and for each one checks that it has a matching heading in the documentation.
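
A minimal sketch of that pattern (not Datasette's actual test - the hook names and documentation path below are stand-ins):

from pathlib import Path

import pytest


def get_plugin_hook_names():
    # Stand-in for introspecting the real plugin hook registry.
    return ["startup", "render_cell", "extra_template_vars"]


@pytest.mark.parametrize("hook_name", get_plugin_hook_names())
def test_plugin_hook_is_documented(hook_name):
    # Every introspected hook name must be mentioned in the docs page.
    docs = Path("docs/plugin_hooks.rst").read_text()
    assert hook_name in docs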

The final component of my perfect commit is this: every commit must link to an issue thread.

I'll usually have these open in advance but sometimes I'll open an issue thread just so I can close it with a commit a few seconds later!

Here's the issue for the commit I showed earlier. It has 11 comments, and every single one of those comments is by me.

I have literally thousands of issues on GitHub that look like this: issue threads that are effectively me talking to myself about the changes that I'm making.

It turns out this a fantastic form of additional documentation.

What goes in an issue?

Background: the reasons for the change. In six months time you’ll want to know why you did this.
State of play before-hand: embed existing code, link to existing docs. I like to start my issues with "I’m going to change this code right here" - that way if I come back the next day I don’t have to repeat that little piece of research.
Links to things! Documentation, inspiration, clues found on StackOverflow. The idea is to capture all of the loose information floating around that topic.
Code snippets illustrating potential designs and false-starts.
Decisions. What did you consider? What did you decide? As programmers we make decisions constantly, all day, about everything. That work doesn’t have to be invisible. Writing them down also avoids having to re-litigate them several months later when you’ve forgotten your original reasoning.
Screenshots - of everything! Animated screenshots even better. I even take screenshots of things like the AWS console to remind me what I did there.
When you close it: a link to the updated documentation and demo.

The reason I love issues is that they're a form of documentation that I think of as temporal documentation.

Regular documentation comes with a big commitment: you have to keep it up to date in the future.

Issue comments skip that commitment entirely. They're displayed with a timestamp, in the context of the work you were doing at the time.

No-one will be upset or confused if you fail to keep them updated to match future changes.

So it's a commitment free form of documentation, which I for one find incredibly liberating.

I think of this approach as issue driven development.

Everything you are doing is issue-first, and from that you drive the rest of the development process.

This is how it relates back to maintaining 185 projects at the same time.

With issue driven development you don't have to remember anything about any of these projects at all.

I've had issues where I did a bunch of design work in issue comments, then dropped it, then came back 12 months later and implemented that design - without having to rethink it.

I've had projects where I forgot that the project existed entirely! But I've found it again, and there's been an open issue, and I've been able to pick up work again.

It's a way of working where you treat it like every project is going to be maintained by someone else, and it's the classic cliche here that the somebody else is you in the future.

It horizontally scales you and lets you tackle way more interesting problems.

Programmers always complain when you interrupt them - there's this idea of "flow state" and that interrupting a programmer for a moment costs them half an hour in getting back up to speed.

This fixes that! It's much easier to get back to what you are doing if you have an issue thread that records where you've got to.

Issue driven development is my key productivity hack for taking on much more ambitious projects in much larger quantities.

Another way to think about this is to compare it to laboratory notebooks.

Here's a page from one by Leonardo da Vinci.

Great scientists and great engineers have always kept detailed notes.

We can use GitHub issues as a really quick and easy way to do the same thing!

Another thing I like to use these for is deep research tasks.

Here's an example, from when I was trying to figure out how to run my Python web application in an AWS Lambda function:

Figure out how to deploy Datasette to AWS Lambda using function URLs and Mangum

This took me 65 comments over the course of a few days... but by the end of that thread I'd figured out how to do it!

Here's the follow-up, with another 77 comments, in which I figure out how to serve an AWS Lambda function with a Function URL from a custom subdomain.

I will never have to figure this out ever again! That's a huge win.

https://github.com/simonw/public-notes is a public repository where I keep some of these issue threads, transferred from my private notes repos using this trick.

The last thing I want to encourage you to do is this: if you do a project, tell people what it is you did!

This counts for both personal and work projects. It's so easy to skip this step.

Once you've shipped a feature or built a project, it's so tempting to skip the step of spending half an hour or more writing about the work you have done.

But you are missing out on so much of the value of your work if you don't give other people a chance to understand what you did.

I wrote more about this here: What to blog about.

For projects with releases, release notes are a really good way to do this.

I like using GitHub releases for this - they're quick and easy to write, and I have automation setup for my projects such that creating release notes in GitHub triggers a build and release to PyPI.

I've done over 1,000 releases in this way. Having them automated is crucial, and having automation makes it really easy to ship releases more often.

Please make sure your release notes have dates on them. I need to know when your change went out, because if it's only a week old it's unlikely people will have upgraded to it yet, whereas a change from five years ago is probably safe to depend on.

I wrote more about writing better release notes here.

This is a mental trick which works really well for me. "No project of mine is finished until I've told people about it in some way" is a really useful habit to form.

Twitter threads are (or were) a great low-effort way to write about a project. Build a quick thread with some links and images, and maybe even a video.

Get a little unit about your project out into the world, and then you can stop thinking about it.

(I'm trying to do this on Mastodon now instead.)

Even better: get a blog! Having your own corner of the internet to write about the work that you are doing is a small investment that will pay off many times over.

("Nobody blogs anymore" I said in the talk... Phil Gyford disagrees with that meme so much that he launched a new blog directory to show how wrong it is.)

The enemy of projects, especially personal projects, is guilt.

The more projects you have, the more guilty you feel about working on any one of them - because you're not working on the others, and those projects haven't yet achieved their goals.

You have to overcome guilt if you're going to work on 185 projects at once!

This is the most important tip: avoid side projects with user accounts.

If you build something that people can sign into, that's not a side-project, it's an unpaid job. It's a very big responsibility, avoid at all costs!

Almost all of my projects right now are open source things that people can run on their own machines, because that's about as far away from user accounts as I can get.

I still have a responsibility for shipping security updates and things like that, but at least I'm not holding onto other people's data for them.

I feel like if your project is tested and documented, you have nothing to feel guilty about.

You have put a thing out into the world, and it has tests to show that it works, and it has documentation that explains what it is.

This means I can step back and say that it's OK for me to work on other things. That thing there is a unit that makes sense to people.

That's what I tell myself anyway! It's OK to have 185 projects provided they all have documentation and they all have tests.

Do that and the guilt just disappears. You can live guilt free!

You can follow me on Mastodon at @simon@simonwillison.net or on GitHub at github.com/simonw. Or subscribe to my blog at simonwillison.net!

From the Q&A:

You've tweeted about using GitHub Projects. Could you talk about that?

GitHub Projects V2 is the perfect TODO list for me, because it lets me bring together issues from different repositories. I use a project called "Everything" on a daily basis (it's my browser default window) - I add issues to it that I plan to work on, including personal TODO list items as well as issues from my various public and private repositories. It's kind of like a cross between Trello and Airtable and I absolutely love it.

How did you move notes from the private to the public repo?

GitHub doesn't let you do this. But there's a trick I use involving a temp repo which I switch between public and private to help transfer notes. More in this TIL.

Question about the perfect commit: do you commit your failing tests?

I don't: I try to keep the commits that land on my main branch always passing. I'll sometimes write the failing test before the implementation and then commit them together. For larger projects I'll work in a branch and then squash-merge the final result into a perfect commit to main later on.

John Philpin : Lifestream

Crisis, 2021 - ★★★½

Alright. Not great.


Lesson Plan, 2022 - ★★

I know that there are only 7 stories in the world. My question is why is this the one that keeps getting told - and then badly. In fairness, I got to the end, so it wasn't a total dud, but if you want to watch this story sometime, choose a different movie.


Ben Werdmüller

Twitter Blesses Extremists With Paid 'Blue Checks'

“Hatewatch’s investigation of extremists’ use of Twitter Blue, based in part on a third-party public list of paid blue-check accounts, found that white nationalists, anti-LGBTQ extremists and other far-right individuals and groups now sport what was once a symbol of credibility on the platform.” #Twitter

[Link]


John Philpin : Lifestream

From Mars Edit 5 - Beta

Directly. No Gatekeeper.


Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching
Server Side Caching

… when will I remember!


Simon Willison

An Interactive Guide to Flexbox

An Interactive Guide to Flexbox

Joshua Comeau built this fantastic guide to CSS flexbox layouts, with interactive examples of all of the properties. This is a really useful tour of the layout model.


John Philpin : Lifestream

Ghost Reader Is Solving A Problem That A Lot of A.I.'s Don't Seem To

A lot of AIs seem to be focussed on how to create new things; the interesting thing that Readwise is up to is that they are focussed on using AI to better understand what is already there … I think that is an interesting problem to solve.

Ghost Reader, which is definitely NOT Ghost Rider, IS definitely interesting.

I have been playing round a little with it and decided that the best way of describing what it does is with a real world example.

I do like reading Napkin Math - but don’t always have the time.

Take a piece that came out this past week. At over 3,000 words it’s a 12 minute read - and THAT is if you get it first time round. I don’t always get it - and on a topic like this - I knew it would be a long job (I am slower than the average bear).

It was written by Evan Armstrong who summarized the article thus;

“Over the last few months, I’ve been obsessed with AI—chatting with AI researchers in London, talking product development with startups in Argentina, debating philosophy with AI safety groups in San Francisco.”

Sounds good. So here's what I did.

Evan’s original piece on the site is here.

I dropped it into Readwise and then applied Ghost Reader, which took about 20 seconds to come up with the summary. (My bold and line breaks added to help you understand how it broke down the article and make it easier to compare to the original.) AND - if you read the article you will see that my choice of the use case might not exactly be random, since Evan actually quotes Readwise's Daniel Doyon in item 4 - META!

Anyway, here’s the Ghost Reader summary …

“The text discusses six micro-theories about how artificial intelligence (AI) will change human society.
The first theory is that AI will make it easier for people to create new things.
The second theory is that companies will compete more on sales and marketing than on the quality of their AI models.
The third theory is that AI will make it easier for people to find and consume custom-generated content.
The fourth theory is that AI will amplify existing power law dynamics in the digital media world.
The fifth theory is that the most successful AI companies will be those that use AI to make their products more delightful.
The sixth and final theory is that AI will enable entirely new modalities of digital interactions.”

Compare: here are my one-line extracts copied out of the article

Fine-tuned models win battles, foundational models win wars

Long-term model differentiation comes from data-generating use cases

Open source makes AI startups into consulting shops, not SaaS companies

Most endpoints compete on GTM, not AI

AI will not disrupt the creator economy, it will only amplify existing power law dynamics

Invisible AI will be the most valuable deployment of AI

For a 20-second ‘read, analyze, write’ of a 3,000-word article - not a bad start. Not sure I think that it got to all the salient points - I wonder what Daniel and/or Evan think … but remember - as the article itself points out, MOST AIs appearing are focussed on how to create … the interesting thing that Readwise is up to is that they are focussed on using AI to better understand what is already there … and they are only just now getting going.

I think that is an interesting problem to solve.

What do you think?

Friday, 25. November 2022

John Philpin : Lifestream

Outlining and Bike

Dr. Drang neatly breaks down why Outlining - which he loves - somehow doesn’t do what he wants. He nails my thoughts.

Glad it’s not just me!


My thanks to @maique and @pratik - you KNOW what I am talking about!


Ben Werdmüller

How to Weave the Artisan Web

“Now, why should we bring back that artisan, hand-crafted Web? Oh, I don’t know. Wouldn’t it be nice to have a site that’s not run by an amoral billionaire chaos engine, or algorithmically designed to keep you doomscrolling in a state of fear and anger, or is essentially spyware for governments and/or corporations?” #Technology

[Link]


I really hate the pushback against “woke”. ...

I really hate the pushback against “woke”. Modern civil rights movements have felt like real progress that has been a long time coming. The backlash is disappointing and regressive. We do live in a fundamentally unequal society; it’s past time we did something to change it.


The cost of your cat pictures

“Anyone who thinks that social media sites are “public spaces” is welcome to propose that Congress gives Facebook $30,000,000,000 a year to keep up that infrastructure. Otherwise, no, it’s not. That’s $30,000,000,000 a year in private money being used to buy private property.” #Technology

[Link]

Thursday, 24. November 2022

@_Nat Zone

Performing “Michishirube” at the Setting of Violet Evergarden

This past August 6, the anniversary of the atomic bombing of Hiroshima, we held a cimbalom concert in the annex hall of the Museum of Kyoto, the building that served as the model for the C.H. Postal Company, the setting of “Violet Evergarden” (which airs today, November 25, on the Friday Road Show), and for the encore we performed “Michishirube”, the ending theme of the series’ first season. The work speaks to the misery of war, and the C.H. Postal Company is where Violet, carrying her PTSD, sets out in search of “I love you”. Thinking of this year, in which the war in Ukraine began; of the people of Kyoto Animation who were killed in the arson attack while this work was being made, and of those who have stood back up since; and of those who died in the atomic bombing remembered on that day - it seemed the perfect setting and the perfect piece for imagining a world without war.

At the risk of mild spoilers, for those who don’t know it: “Violet Evergarden” is an exceptionally beautiful anime produced by Kyoto Animation. A girl raised as a weapon, somewhere near Germany in what appears to be the First World War, has closed off her heart. Treated as a human being by the “Major”, her last commanding officer, she begins to recover her humanity, only to lose both arms on the battlefield; after the war she tries to go on living while searching for the meaning of “I love you”, the words the Major spoke to her just before his “death in battle”. Her lost arms are replaced with high-performance prosthetics that let her type, so in a society where many people are still illiterate she works at the C.H. Postal Company as an Auto Memory Doll (a ghostwriting service) while she searches for “I love you”. War PTSD, autism, typists at the then front line of women’s advance into white-collar work, the scars that war leaves on a society, what it means to live and to die - the drama layers all of this quietly, in Kyoto Animation’s luminously clear artwork. It is long, but it is an anime I very much hope you will watch.

For this performance we used the cimbalom, an instrument that is also often played in Ukraine, the country being invaded this year. The performer, 崎村潤子, played together with Ukrainian musicians in the Czech Republic this past October.

The cimbalom belongs to the dulcimer family of instruments found across Eastern and Central Europe. Smaller ones are also used in military bands, so Violet herself might well have heard one. Please keep that thought in mind as you listen.

If you enjoy it, please subscribe to the YouTube channel.

From “Sounds of Melancholy, Once More to Kyoto 2022”, a concert hosted by the Kansai Cimbalom Association
Piece: Michishirube (lyrics: 茅原実里; music: 菊田大介)
Cimbalom: 崎村潤子
Piano: 清水純子
Piano tuning: 篠原賢次
Cimbalom adjustment and tuning: 小出賢司
Cimbalom (owned by 塚原るみ): Bohák Luxury Prestige
Platinum sponsors: 塚原るみ, 小倉美佐子
Supported by the Honorary Consulate-General of Hungary in Osaka
Recording venue: Museum of Kyoto Annex
Recorded: August 6, 2022
Filming and editing: Tonescape

Bonus

Part of Leidenschaftlich appears to be modeled on Cochem, a town on the Moselle near Trier, the city of Roman heritage where an OAuth Workshop was once held. The area is famous for its scenic beauty. Heidelberg also seems to have been used as a model. I’d love to visit.


Ben Werdmüller

ConsenSys Under Fire for Collecting MetaMask Users’ Wallet and IP Addresses

““When you use Infura as your default RPC provider in MetaMask, Infura will collect your IP address and your Ethereum wallet address when you send a transaction,” Consensys said.” #Crypto

[Link]


Thankful

My favorite thing about Thanksgiving is the admittedly slightly hokey tradition of going around the table and saying what you’re thankful for. There’s a lot to be skeptical about - this is a holiday that celebrates colonization and the eradication of entire nations - but this one act, bringing gratitude front and center, is good.

As I write this, I want to acknowledge that my baby and I have directly benefited from the occupation of Ohlone, Wampanoag, Lenape, and Latgawa lands - and indirectly from the occupation of North America as a whole.

I have a lot to be thankful about.

This year, I’m thankful for my baby: his sly grins and the sense of humor I can already see develop give me life. He is responsible for my permanent state of absolute sleep deprivation, but also, far more importantly, for so much fun and purpose.

I’m thankful for my family and friends: my allies.

I’m thankful for all the wonderful people in my life who embody kindness, empathy, wisdom, mentorship, and knowledge.

I’m thankful that I live in a context of peace and democracy - however imperfect it might be, I did not wake up in fear of my life this morning, unlike so many other people today.

I’m thankful for my job: it’s a big deal to have found a place to do meaningful work that also happens to be full of empathetic, lovely people who genuinely care about the world and about each other. I’m thankful that it gives me space to be a three-dimensional human.

I’m thankful for the internet, and for the web. The real web, that is: the one that operates as a commons with no central ownership and is a bedrock for us all to build on, for all the definitions of building.

I’m thankful for everyone who is working towards a kinder, more equal world, even when so much is aligned around individualism over community, profit over equity, and exclusivity over inclusion. It’s often rough, thankless work, but it makes the world so much better - and it gives me hope that the world my baby grows up to inhabit might not be terrible after all.

I’m thankful to have health. I’m thankful for healthcare: for vaccines, for the miracle of transplantation, for genetic therapies, for mental health support, for ICUs and children’s ERs. I’m thankful for all the scientific research and testing that makes all of this possible.

I’m thankful you’re here. I’m thankful we’re all here, together, not just surviving but building something better.


Simon Willison

Microsoft Flight Simulator: WebAssembly

Microsoft Flight Simulator: WebAssembly

This is such a smart application of WebAssembly: it can now be used to write extensions for Microsoft Flight Simulator, which means you can run code from untrusted sources safely in a sandbox. I'm really looking forward to more of this kind of usage - I love the idea of finally having a robust sandbox for running things like plugins.

Via @simon

Wednesday, 23. November 2022

Ben Werdmüller

Someone keeps trying to sign up to ...


Someone keeps trying to sign up to my newsletter with my own email address. That's not how it works! But also, it's got me worried that there's a weird bug somewhere on my website. Let me know if you're trying to subscribe and it's not working.


Rebecca Rachmany

L1s Are Doomed Unless….

Reflections following Cardano Summit 2022

With Ethereum now functionally a censored network, many have been asking what L1 they should build on. With that in mind (plus an invitation from the magical Catalyst4Climate group), I attended the two-day Cardano Summit in Lausanne, and asked everyone I could the key question: “Does it work?”

Cardano Community: Less Shill, More Goodwill

Cardano has taken a slower route towards development of its solution, with more than 160 academic papers and the establishment of a line of studies at Edinburgh University to create and maintain a Decentralization Index. They eschew the philosophy of “move fast and break things” for a more methodical and reliable approach to building software.

The community surrounding Cardano reflects this philosophy. The dress was business casual rather than cypherpunk haphazard. Shilling was at a minimum, with Impact-related projects having equal weight with DeFi and technical sessions. The Cardano crowd is intentional about not speaking about other protocols as well as avoiding discussions of the ADA price. In short, compared to other crypto communities, the Cardano crowd came off as a group of mature professionals rather than revolutionaries or slick marketers.

On the one hand, the more mature feel was welcome, reflected in several serious projects with large bodies such as the UN and government institutions. On the downside, it seemed there were fewer truly cutting edge innovations. While Cardano hosts a number of delightful NFT projects, the more practical teams took the floor most often. Self Sovereign Identity projects were abundant, which gives Cardano a huge boost compared to crypto projects who assert that NFTs will somehow suffice as Verifiable Credentials. Live demos included using DID and VC for swapping business contacts and a working PoS system based on a Raspberry Pi.

What, No Elephant?

Remarkably, in the middle of the coldest crypto winter, there was no mention of any of the difficulties that have befallen the rest of the market. With Ethereum becoming a censored network, DAOs under legal scrutiny, Solana failing to live up to its promises, and centralized exchanges crashing the market, now is the perfect time to be speaking about lessons learned.

Yet, there was almost complete silence about the news in the industry and how to avoid problems such as:

- NFTs becoming “mutable” by OpenSea.
- Majority of Ethereum blocks censoring private transactions.
- Prosecution of DAOs under US regulations.
- Under-collateralization of assets on exchanges.
- Implications of the recent ruling against LBRY, declaring their tokens to be securities.

Of course, in the decentralized world, these are sticky problems. If you aren’t a centralized entity, it’s easy to argue that it’s not up to you who opens an NFT marketplace, where and how the validators choose to manage their operations, or how the regulators treat the projects built on your Layer 1.

Whether it’s up to you or not, however, these are real threats. Ignoring them won’t make them go away.

Regulation and Validators

Like every blockchain, Cardano does have a group working with regulators. They do have consortia of their Stake Pool Operators, and Delegators can stake on the nodes they consider most reliable. Unfortunately, this seems inadequate.

While 60% of the nodes in Cardano are on bare metal, 90% of the Relayers are hosted on the major cloud provider. Plus, it’s not clear what “bare metal” means in terms of jurisdiction. If someone has a fully-owned bare metal rack in their basement, and the government of their country says you can’t run Tornado Cash through your node, what do you do? We know the answer to that one.

Today, every L1 should be thinking very carefully about the companies hosting their nodes, as well as the jurisdictions where the servers are located.

Which brings us to regulation. It should be obvious at this point that the regulatory bodies are not friends of private blockchains. It should also be obvious why that is. Nation-states have no interest in a bunch of technologists offering an alternative to the nation’s sovereign monetary system.

You Are Not an Operating System

Cardano, like the other Layer 1s, sees itself as a kind of operating system. As such, L1s don’t govern what is built on them. They don’t decide on the direction of the community, how to prioritize traffic, or what projects to encourage on the platform. Except for the fact that, over time, it’s inevitable that L1s end up prioritizing DeFi because that’s where the money is. Also, individuals inevitably end up complying with laws, no matter how unjust they are, because the consequences of civil disobedience are too high for most people.

Why stand up for LBRY or Ooki? You’ve got your own problems and the regulators haven’t come after you yet. Why not let your nodes live on Alchemy or Infura? You’ll just annoy your community and limit who can be a host. Why speak out about centralized exchanges or disappearing NFTs on OpenSea? Where else will you get that volume of transaction fees?

All of these are examples of the trap of decentralization, free markets, and an industry based on individual game theory.

For any individual in the game, it’s easiest to host a node or relayer on Alchemy or Google. For any individual in the game, it’s easiest to manage their onboarding and offboarding in whatever legitimate or illegitimate way that works for them. For any individual project it’s easiest to call their token a “utility” or “governance” token based on whatever is legal at any given moment. For any individual in the game, it’s easiest to use the popular tools even if they aren’t as decentralized.

For the industry as a whole, these individual game theoretic choices lead to illegalization, censorship, and CBDCs. These individual choices are leading to a bunch of useless L1s and Bitcoin as the last chain standing.

The Bitcoin Maxis may have been right all along, but even the Bitcoin Maxis are hurting these days. The inability of L1s to deliver on their promises damages everyone.

Decentralized Governance: Can It Get Better?

Governance is how individuals do things together. From that perspective, DAOs have been another huge failing of the Web3 industry. Hard Forks and Rage Quits are the opposite of democracy. Rather than a Web3 movement, what we have is a lot of noise and competition.

In the next few blogs, I’ll be exploring how Layer 1s and the industry as a whole might approach some of these problems. I mean. Hopefully. Hopefully, I’ll have some practical and constructive ideas over the next few weeks. At this point, I have a lot more questions than answers, and I have a lot more faith than evidence that we as an industry can resolve these issues.

To be perfectly honest, I’m about as happy as a Bitcoin Maxi that I’ve been right all along about these things. Right but Rekt is not a good look. We can do better.


Ben Werdmüller

Let’s just say I’ll be very excited ...


Let’s just say I’ll be very excited when my baby can sleep through the night.


They sell boy pacifiers and girl pacifiers. ...


They sell boy pacifiers and girl pacifiers. Our baby is biologically male, and - this will shock you!! - sucks on girl pacifiers. The whole thing is ridiculous; even more ridiculously, some people would actually care about this.


The best TV show ever made is ...


The best TV show ever made is 59 years old today. Happy birthday, Doctor Who.


Amazon Is Gutting Its Voice Assistant Alexa


“By 2018, the division was already a money pit. That year, The New York Times reported that it lost roughly $5 billion. This year, an employee familiar with the hardware team said, the company is on pace to lose about $10 billion on Alexa and other devices.” The strategy depended on Alexa users paying for goods and services through the assistant - and they just didn’t. #Technology

[Link]


Socratic blogging


I like Substack’s emphasis on letters between publications: a way to have an in-depth conversation between two bloggers who have a different point of view on a different topic. It reminds me a little of CJR’s Galley site, which hosted some interesting conversations.

But of course, you don’t need Substack or to be in CJR’s circle to create a conversation in this way. All you need is to have a counterpart writer, a blog or a newsletter each, and a willingness to correspond over thoughtful, long-form posts on a single topic for around three posts each.

If you want to get technical, you can even use microformats u-in-reply-to syntax and webmentions to conversationally glue the blog posts together. But the most important thing is to write and explore an idea.
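To make that glue concrete, here is a rough Python sketch of sending a Webmention once your reply post is published. It is a minimal illustration, not any particular library's API: it only checks the HTTP Link header for endpoint discovery (a complete implementation would also parse the target page's HTML), and both URLs are placeholders.

import requests
from urllib.parse import urljoin

def send_webmention(source, target):
    # Discover the target's Webmention endpoint from its Link header.
    response = requests.get(target, timeout=10)
    endpoint = response.links.get("webmention", {}).get("url")
    if not endpoint:
        return None  # the target does not advertise an endpoint
    # Notify the endpoint that `source` links to (and replies to) `target`.
    return requests.post(
        urljoin(target, endpoint),
        data={"source": source, "target": target},
        timeout=10,
    )

send_webmention(
    "https://example.com/my-reply",       # hypothetical reply post
    "https://example.org/original-post",  # hypothetical post being replied to
)

On the writing side, the reply itself just needs to link to the original post with a u-in-reply-to class so parsers can stitch the two posts into one conversation.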

It’s a lovely way to dive deep into a contentious topic, and I’d love to see more of it.


We Can't Depend on Platforms Anymore


“For a solid decade, many media operators thought they could build a sustainable business on the backs of the platforms. Those days are dying. Owned audiences are the future like they always should have been.” Spoiler alert: we never could depend on platforms. #Media

[Link]


Simon Willison

Weeknotes: Implementing a write API, Mastodon distractions


Everything is so distracting at the moment. The ongoing Twitter catastrophe, the great migration (at least amongst most of the people I pay attention to) to Mastodon, the FTX calamity. It's been very hard to focus!

I've been continuing to work on the write API for Datasette that I described previously. I've decided that the first release to include that work will also be the first alpha version of Datasette 1.0 - you can see my progress towards that goal in the Datasette 1.0a0 milestone.

This alpha will be the first in a sequence of alphas. There's still a lot more work to do - most notably:

- Refactor Datasette's HTML templates to exclusively use values that are available in the API (including via a new ?_extra= mechanism). This will help achieve the goal of having those template contexts officially documented, such that custom template authors can depend on them being stable and not changing between dot-releases. This means some breaking API changes, which need to be documented and stable before 1.0.
- Finalize the design of the plugin hooks for 1.0.
- Change how metadata.json works - it's grown a whole bunch of functionality that has nothing to do with metadata, so I'd like to rename it.
- Review how authentication and permissions work - there may be some changes I can make here to improve their usability prior to 1.0.

I hope to put out alpha releases quite frequently as the different parts of 1.0 start to come together.

dclient

Designing a good API is difficult if you don't have anything that uses it! But you can't build things against an API that doesn't exist yet.

To help overcome this chicken-and-egg problem, I've started a new project: dclient.

dclient is the Datasette Client - it's a CLI utility for interacting with remote Datasette instances.

I'm planning to imitate much of the existing sqlite-utils design, which provides a CLI for manipulating local SQLite database files.

Eventually you'll be able to use dclient to authenticate with a remote Datasette instance and then do things like pipe CSV files into it to create new tables.

So far it has one, obvious feature: you can use it to run a SQL query against a remote Datasette instance:

dclient query \
  https://datasette.io/content \
  "select * from news limit 1"

Returns:

[ { "date": "2022-10-27", "body": "[Datasette 0.63](https://docs.datasette.io/en/stable/changelog.html#v0-63) is out. Here are the [annotated release notes](https://simonwillison.net/2022/Oct/27/datasette-0-63/)." } ]

It also supports aliases, so you can create an alias for a database like this:

dclient alias add content https://datasette.io/content

And then run the above query like this instead:

dclient query content "select * from news limit 1"

One fun additional feature: if you install dclient in the same virtual environment as Datasette itself it registers itself as a command plugin:

datasette install dclient

You can then access its functionality via datasette client instead:

datasette client query content \
  "select * from news limit 1"

A flurry of plugins

I also pushed out a flurry of plugin releases, listed below. Almost all of these are a result of a tiny change to how breadcrumbs work in Datasette 0.63 which turned out to break the display of navigation in a bunch of plugins. Details in this issue - thanks to Brian Grinstead for pointing it out.

Releases this week

- dclient: 0.1a2 - (3 releases total) - 2022-11-22
  A client CLI utility for Datasette instances
- datasette-graphql: 2.1.2 - (37 releases total) - 2022-11-19
  Datasette plugin providing an automatic GraphQL API for your SQLite databases
- datasette: 0.63.2 - (118 releases total) - 2022-11-19
  An open source multi-tool for exploring and publishing data
- datasette-edit-schema: 0.5.2 - (11 releases total) - 2022-11-18
  Datasette plugin for modifying table schemas
- datasette-indieauth: 1.2.2 - (11 releases total) - 2022-11-18
  Datasette authentication using IndieAuth and RelMeAuth
- datasette-import-table: 0.3.1 - (7 releases total) - 2022-11-18
  Datasette plugin for importing tables from other Datasette instances
- datasette-public: 0.2.1 - (3 releases total) - 2022-11-18
  Make specific Datasette tables visible to the public
- datasette-copyable: 0.3.2 - (5 releases total) - 2022-11-18
  Datasette plugin for outputting tables in formats suitable for copy and paste
- datasette-edit-templates: 0.2 - (3 releases total) - 2022-11-18
  Plugin allowing Datasette templates to be edited within Datasette
- datasette-configure-fts: 1.1.1 - (11 releases total) - 2022-11-18
  Datasette plugin for enabling full-text search against selected table columns
- datasette-socrata: 0.3.1 - (5 releases total) - 2022-11-18
  Import data from Socrata into Datasette
- datasette-ripgrep: 0.7.1 - (12 releases total) - 2022-11-18
  Web interface for searching your code using ripgrep, built as a Datasette plugin
- datasette-search-all: 1.1.1 - (9 releases total) - 2022-11-18
  Datasette plugin for searching all searchable tables at once

TIL this week

- Generating OpenAPI specifications using GPT-3
- JSON Pointer
- Writing tests with Copilot
- HTML datalist
- How to create a tarball of a git repository using "git archive"
- Verifying your GitHub profile on Mastodon
- Wider tooltip areas for Observable Plot
- Writing a CLI utility that is also a Datasette plugin

Ben Werdmüller

Doctor who provided abortion care to 10-year-old fights to protect medical records


“As a physician, I never imagined that I would be in the position to engage in a legal fight to protect the rights of women and girls to not have their private medical records released for political purposes. But nonetheless, I feel strongly that this fight — the fight for physicians to compassionately provide abortion care to every single person who needs their care and their patients to access safe, legal abortion care, free from fear of criminalization — is worth waging.” #Society

[Link]

Tuesday, 22. November 2022

Phil Windleys Technometria

A Healthcare Utopia of Rules


Summary: Verifiable credentials have a number of use cases in healthcare. Using them can reduce the administrative burden that people experience at the hands of the bureaucracies that inevitably develop.

I have a medical condition that requires that I get blood tests every three months. And, having recently changed jobs, my insurance, and thus the set of acceptable labs, changed recently. I know that this specific problem is very US-centric, but bear with me, I think the problems that I'll describe, and the architectures that lead to them, are more general than my specific situation.

My doctor sees me every 6 months, and so gives me two lab orders each time. Last week, I showed up at Revere Health's lab. They were happy to take my insurance, but not the lab order. They needed a date on it. So, I called my doctor and they said they'd fax over an order to the lab. We tried that three times but the lab never got it. So my doctor emailed it to me. The lab wouldn't take the electronic lab order from my phone, wouldn't let me email it to them (citing privacy issues with non-secure email), and couldn't let me print it there. I ended up driving to the UPS Store to print it, then return to the lab. Ugh.

This story is a perfect illustration of what David Graeber calls the Utopia of Rules. Designers of administrative systems do the imaginative work of defining processes, policies, and rules. But, as I wrote in Authentic Digital Relationships:

Because of the systematic imbalance of power that administrative ... systems create, administrators can afford to be lazy. To the administrator, everyone is structurally the same, being fit into the same schema. This is efficient because they can afford to ignore all the qualities that make people unique and concentrate on just their business. Meanwhile subjects are left to perform the "interpretive labor," as Graeber calls it, of understanding the system, what it allows or doesn't, and how it can be bent to accomplish their goals. Subjects have few tools for managing these relationships because each one is a little different from the others, not only technically, but procedurally as well. There is no common protocol or user experience [from one administrative system to the next].

The lab order format my doctor gave me was accepted just fine at Intermountain Health Care's labs. But Revere Health had different rules. I was forced to adapt to their rules, being subject to their administration.

Bureaucracies are often made functional by the people at the front line making exceptions or cutting corners. In my case no exceptions were made. They were polite, but ultimately uncaring and felt no responsibility to help me solve the problem. This is an example of the "interpretive labor" borne by the subjects of any administrative system.

Centralizing the system—such as having one national healthcare system—could solve my problem because the format for the order and the communication between entities could be streamlined. You can also solve the problem by defining cross-organization schema and protocols. My choice, as you might guess, would be a solution based on verifiable credentials—whether or not the healthcare system is centralized. Verifiable credentials offer a few benefits:

- Verifiable credentials can solve the communication problem so that everyone in the system gets authentic data.
- Because the credentials are issued to me, I can be a trustworthy conduit between the doctor and the lab.
- Verifiable credentials allow an interoperable solution with several vendors.
- The tools, software, and techniques for verifiable credentials are well understood.
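To make this concrete, here is an illustrative sketch (written as a Python dictionary purely to show the shape of the data) of what a lab-order credential might look like, loosely following the W3C Verifiable Credentials data model. The LabOrderCredential type and the fields under credentialSubject are invented for this example rather than taken from any published healthcare schema, and a real credential would also carry the issuer's cryptographic proof.

import json

lab_order_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "LabOrderCredential"],
    "issuer": "did:example:doctors-office",        # hypothetical issuer identifier
    "issuanceDate": "2022-11-22T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:patient",               # hypothetical patient identifier
        "orderedTests": ["comprehensive metabolic panel"],
        "validUntil": "2023-02-22",
    },
    # A real credential would include a "proof" section signed by the issuer,
    # which is what lets the lab verify that the order is authentic.
}

print(json.dumps(lab_order_credential, indent=2))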

Verifiable credentials don't solve the problem of the lab being able to understand the doctor's order or the order having all the right data. That is a governance problem outside the realm of technology. But because we've narrowed the problem to defining the schema for a given localized set of doctors, labs, pharmacies, and other health-care providers, it might be tractable.

Verifiable credentials are a no-brainer for solving problems in health care. Interestingly, many health care use cases already use the patient as the conduit for transferring data between providers. But they are stuck in a paper world because many of the solutions that have been proposed for solving it lead to central systems that require near-universal buy-in to work. Protocol-based solutions are the antidote to that and, fortunately, they're available now.

Photo Credit: Blood Draw from Linnaea Mallette (CC0 1.0)

Tags: identity agents ssi autonomic+identity


MyDigitalFootprint

Chaos and the abyss

This read describes the space between chaos and the abyss, where we find ourselves when we allow machines to make decisions without safeguarding collective criticism or realise they can change our minds.  

-----

There is a reality that we are not forced to recognise our collective ethical and own moral bias without others. However, these biases are the basis of our decision-making, so asking a machine to "take an unelected position of trust" and make a decision on our collective behalf creates a space we should explore as we move from human criticism to machine control.


Machines are making decisions.   

Automation is incredibly powerful and useful, and we continue to learn to reduce bias in automated decision-making by exploring data sets and understanding the outcomes by testing for bias.  As we continue testing, iterating and learning about using past data for future decisions, we expose many of our human frailties and faults.  


The decisions we ask machines to make today are easily compared to where we think we are going. However, since I cannot confirm or write down my own cognitive biases today (out of the 180-plus available), I could not tell you which of them will be the same tomorrow. Therefore, I am even less convinced that, as a team, we can agree on our team biases, as these will change as a new sun rises: we have all eaten our own choice of food, have different biology, chemistry and bacteria, and have had divergent experiences since the last sunrise.

AI and Hypocrisy

Hypocrisy is the practice of engaging in the same behaviour or activity for which one criticises another. Our past and present actions can differ: because of our past we have learnt and changed, but that does not mean we should not be able to call out someone who is making the same mistakes.


A defence used by those called out is to cry “hypocrisy”. Human rights issues and football spring to mind. How can you judge when you did the same? As a Brit, I belong to a nation responsible for some of the worst abuses of power and wrong thinking, but we are changing; I agree that it is not fast or far enough. However, the point here is that humans learn and can call something out to other humans if they are making the same mistakes. I accept we are not very good at either.


However, contemporary discourse is that if your past is flawed, you are not empowered to be critical of others.  However, if we ever believe that we are beyond criticism, fault or learning, surely we become delusional and unable to see the wrong we are doing, believing we are more moral or ethical.  But what about machines? When machines make a biased decision, who is there to be critical or will the AI call hypocrisy? 


I struggle with the idea that company values, purpose and culture are good predictors of the decision-making processes we have in place, because of bias. A good human culture can exist, and it is one of learning, but that does not mean the machine that powers the organisation is learning in the same direction.


This thinking about hypocrisy and culture creates gaps, voids and chasms filled with chaos between individuals' integrity, the integrity of the wider team/company, and the decisions we ask machines (automation) to make. This is not new; such gaps have been studied by philosophers and political scientists since Aristotle.


So how do we enable a machine to make a decision based on data but then allow other machines to see the inconsistency and defend hypocrisy?  This is the space between chaos and the abyss.

So how do we enable a machine to make a decision based on data but then allow other machines to see the inconsistency and defend hypocrisy? 
Being explainable is not the problem.

Explainable is in fashion in AI; however, the events of 2020 to 2022 (COVID lockdowns, the cost of living crisis, football World Cup hosting, COP28) have provided rich pickings to show that explainability is not much use when decisions impact humans. Equally, making an algorithm or the code behind it explainable does not solve the problem. Neural networks are accurate but un-interpretable, whereas Decision Trees are interpretable but inaccurate. I can explain an outcome, but that does not mean I can predict it. We can explain mass shootings, but that is of little value or comfort to those who lost a loved one.

Jumping into the abyss.

Machines with bias in decision-making are not new, nor is explainable AI thinking.  However, when we (humans) are criticised or called out, we often become defensive and don't change. Will machines be different?  Calling out that someone is wrong does not persuade them to follow a different path. Calling out a decision made by a machine is not going to change the machine's decision-making process.


Here is the final jump. How to change someone's mind is a super article from Manfred F. R. Kets de Vries at Insead. It sets down the Scheherazade method: seven steps to change a person’s mind. Now, when a machine learns that it is easier to change a human mind by following these steps, are we in danger of ceding the last part of independent thinking to the machine? We will not see the problems, as our minds will have been aligned to someone else's decision-making methods (calling out loyalty). It is why we need a void between morals and ethics and should celebrate unethical morals and immoral ethics, as they show us the tensions and gaps in our thinking and allow us to learn.

 


This is a nod to Brian Cork, based on his comment on a previous article on Fear. Thank you.

Monday, 21. November 2022

Simon Willison

Building a BFT JSON CRDT


Building a BFT JSON CRDT

Jacky Zhao describes their project to build a CRDT library for JSON data in Rust, and includes a thorough explanation of what CRDTs are and how they work. "I write this blog post mostly as a note to my past self, distilling a lot of what I’ve learned since into a blog post I wish I had read before going in" - the best kind of blog post!

Via Hacker News


Damien Bod

Use multiple Azure AD access tokens in an ASP.NET Core API


This article shows how to set up an ASP.NET Core application to authorize multiple access tokens from different Azure AD App registrations. Each endpoint can only accept a single AAD access token, and it is important that the other access tokens do not work on the incorrect API. ASP.NET Core Schemes and Policies are used to force the delegated authorization.

Code: https://github.com/damienbod/AspNetCoreApiAuthMultiIdentityProvider/tree/main/AadMultiApis

Setup

Azure Active Directory is used to implement the identity provider and is responsible for creating the access tokens. Two Azure App registrations are used to implement the APIs. One Azure App registration is created for application clients and accepts tokens from multiple tenants which have the correct roles claims. A secret is required to get an access token for the App registration. Any tenant could use this endpoint, so extra authorization might be required in case clients share the secrets, certificates or similar. It would probably make sense to validate the tenant used to acquire the access token. The delegated Azure App registration is implemented as a single tenant and can only be used in the second API.

A test application implemented as a server rendered UI confidential client is used to send the API calls. The application can acquire both types of access tokens and send the tokens to the correct endpoints. (Or incorrect endpoints for testing)

Implement the API

The AddMicrosoftIdentityWebApi method from the Microsoft.Identity.Web Nuget package is used to implement the AAD MSAL clients. Separate ASP.NET Core schemes are used for the different access tokens. The different tokens use different configurations and also use separate ASP.NET Core policies for forcing the authorization and the specific claims.

services.AddAuthentication(Consts.AAD_MULTI_SCHEME)
    .AddMicrosoftIdentityWebApi(Configuration, "AzureADMultiApi", Consts.AAD_MULTI_SCHEME);

services.AddAuthentication(Consts.AAD_SINGLE_SCHEME)
    .AddMicrosoftIdentityWebApi(Configuration, "AzureADSingleApi", Consts.AAD_SINGLE_SCHEME);

The Azure AD configurations are added to the app settings and are specific for each client.

"AzureADMultiApi": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodhotmail.onmicrosoft.com", "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "967925d5-87ea-46e6-b0eb-1223c001fd77" }, "AzureADSingleApi": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodhotmail.onmicrosoft.com", "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "b2a09168-54e2-4bc4-af92-a710a64ef1fa" },

The AddAuthorization method is used to add the ASP.NET Core policies which are specific to the schemes and not global. The policies validate the required claims. If using a multi-tenant App registration, you might need to validate the tenant used to acquire the access token as well. Access tokens for both clients must be acquired using a secret (or a certificate would also be ok, type 2). This is important for multi-tenant App registrations if allowing any enterprise application to use this.

services.AddAuthorization(policies =>
{
    policies.AddPolicy(Consts.MUTLI_AAD_POLICY, p =>
    {
        // application access token
        // "roles": [
        //   "application-api-role"
        // ],
        // "azp": "967925d5-87ea-46e6-b0eb-1223c001fd77",
        p.RequireClaim("azp", "967925d5-87ea-46e6-b0eb-1223c001fd77");
        // client secret = 1, 2 if certificate is used
        p.RequireClaim("azpacr", "1");
    });

    policies.AddPolicy(Consts.SINGLE_AAD_POLICY, p =>
    {
        // delegated access token => "scp": "access_as_user",
        // "azp": "46d2f651-813a-4b5c-8a43-63abcb4f692c",
        p.RequireClaim("azp", "46d2f651-813a-4b5c-8a43-63abcb4f692c");
        // client secret = 1, 2 if certificate is used
        p.RequireClaim("azpacr", "1");
    });
});

An authorization filter is added to the AddControllers method which requires one of our defined schemes.

services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .AddAuthenticationSchemes(
            Consts.AAD_MULTI_SCHEME,
            Consts.AAD_SINGLE_SCHEME)
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

The middleware is set up like in any ASP.NET Core application using authentication. You could add a RequireAuthorization method to the MapControllers method as well.

app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
});

The controllers used to expose the endpoints use both the scheme and the policy to validate the access token. It is important that the correct access token only works for the correct endpoint. Controllers support authorization using attributes in a developer-friendly way. You can develop secure endpoints really efficiently using this.

[Authorize(AuthenticationSchemes = Consts.AAD_MULTI_SCHEME, Policy = Consts.MUTLI_AAD_POLICY)]
[Route("api/[controller]")]
public class MultiController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "data 1 from the multi api", "data 2 from multi api" };
    }
}

Test Confidential Client

To test the API, I created an ASP.NET Core Razor page application which authenticates using a confidential OpenID Connect code flow client. The application acquires the different access tokens using services. The single tenant service gets a delegated access token to access the single tenant API.

using Microsoft.Identity.Web;
using System.Net.Http.Headers;

namespace RazorAzureAD;

public class SingleTenantApiService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;
    private readonly IConfiguration _configuration;

    public SingleTenantApiService(IHttpClientFactory clientFactory,
        ITokenAcquisition tokenAcquisition,
        IConfiguration configuration)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
        _configuration = configuration;
    }

    public async Task<List<string>> GetApiDataAsync(bool testIncorrectMultiEndpoint = false)
    {
        var client = _clientFactory.CreateClient();

        var scope = _configuration["AzureADSingleApi:ScopeForAccessToken"];
        var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(new[] { scope });

        client.BaseAddress = new Uri(_configuration["AzureADSingleApi:ApiBaseAddress"]);
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        HttpResponseMessage response;
        if (testIncorrectMultiEndpoint)
        {
            response = await client.GetAsync("api/Multi"); // must fail
        }
        else
        {
            response = await client.GetAsync("api/Single");
        }

        if (response.IsSuccessStatusCode)
        {
            var responseContent = await response.Content.ReadAsStringAsync();
            var data = System.Text.Json.JsonSerializer.Deserialize<List<string>>(responseContent);
            if (data != null) return data;
        }

        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    }
}

The MultiTenantApplicationApiService class is used to get an application access token using the OAuth client credentials flow. This requires a secret (or certificate) and no user is involved in this flow.

using Microsoft.Identity.Client;
using System.Net.Http.Headers;

namespace RazorAzureAD;

public class MultiTenantApplicationApiService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly IConfiguration _configuration;

    public MultiTenantApplicationApiService(IHttpClientFactory clientFactory,
        IConfiguration configuration)
    {
        _clientFactory = clientFactory;
        _configuration = configuration;
    }

    public async Task<List<string>> GetApiDataAsync(bool testIncorrectMultiEndpoint = false)
    {
        // 1. Create the client credentials client
        var app = ConfidentialClientApplicationBuilder
            .Create(_configuration["AzureADMultiApi:ClientId"])
            .WithClientSecret(_configuration["AzureADMultiApi:ClientSecret"])
            .WithAuthority(_configuration["AzureADMultiApi:Authority"])
            .Build();

        var scopes = new[] { _configuration["AzureADMultiApi:Scope"] }; // default scope

        // 2. Get access token
        var authResult = await app.AcquireTokenForClient(scopes)
            .ExecuteAsync();

        // 3. Use the access token to call the API
        var client = _clientFactory.CreateClient();

        client.BaseAddress = new Uri(_configuration["AzureADMultiApi:ApiBaseAddress"]);
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
        client.DefaultRequestHeaders.Accept
            .Add(new MediaTypeWithQualityHeaderValue("application/json"));

        HttpResponseMessage response;
        if (testIncorrectMultiEndpoint)
        {
            response = await client.GetAsync("api/Single"); // must fail
        }
        else
        {
            response = await client.GetAsync("api/Multi");
        }

        if (response.IsSuccessStatusCode)
        {
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }

        if (response.IsSuccessStatusCode)
        {
            var responseContent = await response.Content.ReadAsStringAsync();
            var data = System.Text.Json.JsonSerializer.Deserialize<List<string>>(responseContent);
            if (data != null) return data;
        }

        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    }
}

When the application is run, the APIs can be tested and validated. If running this locally, you need to set up your own Azure App registrations and change the configuration.

Links

https://github.com/AzureAD/microsoft-identity-web

https://learn.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core?view=aspnetcore-6.0

Sunday, 20. November 2022

Werdmüller on Medium

What is a globalist?


An overloaded term often used as a racist dog-whistle.

Continue reading on Medium »


Simon Willison

Tracking Mastodon user numbers over time with a bucket of tricks


Mastodon is definitely having a moment. User growth is skyrocketing as more and more people migrate over from Twitter.

I've set up a new git scraper to track the number of registered user accounts on known Mastodon instances over time.

It's only been running for a few hours, but it's already collected enough data to render this chart:

I'm looking forward to seeing how this trend continues to develop over the next days and weeks.

Scraping the data

My scraper works by tracking https://instances.social/ - a website that lists a large number (but not all) of the Mastodon instances that are out there.

That site publishes an instances.json array which currently contains 1,830 objects representing Mastodon instances. Each of those objects looks something like this:

{ "name": "pleroma.otter.sh", "title": "Otterland", "short_description": null, "description": "Otters does squeak squeak", "uptime": 0.944757, "up": true, "https_score": null, "https_rank": null, "ipv6": true, "openRegistrations": false, "users": 5, "statuses": "54870", "connections": 9821, }

I have a GitHub Actions workflow running approximately every 20 minutes that fetches a copy of that file and commits it back to this repository:

https://github.com/simonw/scrape-instances-social

Since each instance includes a users count, the commit history of my instances.json file tells the story of Mastodon's growth over time.

Building a database

A commit log of a JSON file is interesting, but the next step is to turn that into actionable information.

My git-history tool is designed to do exactly that.

For the chart up above, the only number I care about is the total number of users listed in each snapshot of the file - the sum of that users field for each instance.

Here's how to run git-history against that file's commit history to generate tables showing how that count has changed over time:

git-history file counts.db instances.json \
  --convert "return [
    {
        'id': 'all',
        'users': sum(d['users'] or 0 for d in json.loads(content)),
        'statuses': sum(int(d['statuses'] or 0) for d in json.loads(content)),
    }
  ]" --id id

I'm creating a file called counts.db that shows the history of the instances.json file.

The real trick here though is that --convert argument. I'm using that to compress each snapshot down to a single row that looks like this:

{ "id": "all", "users": 4717781, "statuses": 374217860 }

Normally git-history expects to work against an array of objects, tracking the history of changes to each one based on their id property.

Here I'm tricking it a bit - I only return a single object with the ID of all. This means that git-history will only track the history of changes to that single object.

It works though! The result is a counts.db file which is currently 52KB and has the following schema (truncated to the most interesting bits):

CREATE TABLE [commits] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [hash] TEXT,
   [commit_at] TEXT
);
CREATE TABLE [item_version] (
   [_id] INTEGER PRIMARY KEY,
   [_item] INTEGER REFERENCES [item]([_id]),
   [_version] INTEGER,
   [_commit] INTEGER REFERENCES [commits]([id]),
   [id] TEXT,
   [users] INTEGER,
   [statuses] INTEGER,
   [_item_full_hash] TEXT
);

Each item_version row will tell us the number of users and statuses at a particular point in time, based on a join against that commits table to find the commit_at date.
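Here is a minimal sketch of that join, run with Python's built-in sqlite3 module against a local copy of counts.db - the column names come straight from the schema above:

import sqlite3

conn = sqlite3.connect("counts.db")
rows = conn.execute(
    """
    SELECT commits.commit_at, item_version.users, item_version.statuses
    FROM item_version
    JOIN commits ON item_version._commit = commits.id
    ORDER BY commits.commit_at
    """
).fetchall()

for commit_at, users, statuses in rows:
    print(commit_at, users, statuses)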

Publishing the database

For this project, I decided to publish the SQLite database to an S3 bucket. I considered pushing the binary SQLite file directly to the GitHub repository but this felt rude, since a binary file that changes every 20 minutes would bloat the repository.

I wanted to serve the file with open CORS headers so I could load it into Datasette Lite and Observable notebooks.

I used my s3-credentials tool to create a bucket for this:

~ % s3-credentials create scrape-instances-social --public --website --create-bucket
Created bucket: scrape-instances-social
Attached bucket policy allowing public access
Configured website: IndexDocument=index.html, ErrorDocument=error.html
Created user: 's3.read-write.scrape-instances-social' with permissions boundary: 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
Attached policy s3.read-write.scrape-instances-social to user s3.read-write.scrape-instances-social
Created access key for user: s3.read-write.scrape-instances-social
{
    "UserName": "s3.read-write.scrape-instances-social",
    "AccessKeyId": "AKIAWXFXAIOZI5NUS6VU",
    "Status": "Active",
    "SecretAccessKey": "...",
    "CreateDate": "2022-11-20 05:52:22+00:00"
}

This created a new bucket called scrape-instances-social configured to work as a website and allow public access.

It also generated an access key and a secret access key with access to just that bucket. I saved these in GitHub Actions secrets called AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

I enabled a CORS policy on the bucket like this:

s3-credentials set-cors-policy scrape-instances-social

Then I added the following to my GitHub Actions workflow to build and upload the database after each run of the scraper:

- name: Build and publish database using git-history
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  run: |-
    # First download previous database to save some time
    wget https://scrape-instances-social.s3.amazonaws.com/counts.db
    # Update with latest commits
    ./build-count-history.sh
    # Upload to S3
    s3-credentials put-object scrape-instances-social counts.db counts.db \
      --access-key $AWS_ACCESS_KEY_ID \
      --secret-key $AWS_SECRET_ACCESS_KEY

git-history knows how to only process commits since the last time the database was built, so downloading the previous copy saves a lot of time.

Exploring the data

Now that I have a SQLite database that's being served over CORS-enabled HTTPS I can open it in Datasette Lite - my implementation of Datasette compiled to WebAssembly that runs entirely in a browser.

https://lite.datasette.io/?url=https://scrape-instances-social.s3.amazonaws.com/counts.db

Any time anyone follows this link their browser will fetch the latest copy of the counts.db file directly from S3.

The most interesting page in there is the item_version_detail SQL view, which joins against the commits table to show the date of each change:

https://lite.datasette.io/?url=https://scrape-instances-social.s3.amazonaws.com/counts.db#/counts/item_version_detail

(Datasette Lite lets you link directly to pages within Datasette itself via a #hash.)

Plotting a chart

Datasette Lite doesn't have charting yet, so I decided to turn to my favourite visualization tool, an Observable notebook.

Observable has the ability to query SQLite databases (that are served via CORS) directly these days!

Here's my notebook:

https://observablehq.com/@simonw/mastodon-users-and-statuses-over-time

There are only four cells needed to create the chart shown above.

First, we need to open the SQLite database from the remote URL:

database = SQLiteDatabaseClient.open( "https://scrape-instances-social.s3.amazonaws.com/counts.db" )

Next we need to use an Observable Database query cell to execute SQL against that database and pull out the data we want to plot - and store it in a query variable:

SELECT _commit_at as date, users, statuses FROM item_version_detail

We need to make one change to that data - we need to convert the date column from a string to a JavaScript date object:

points = query.map((d) => ({ date: new Date(d.date), users: d.users, statuses: d.statuses }))

Finally, we can plot the data using the Observable Plot charting library like this:

Plot.plot({ y: { grid: true, label: "Total users over time across all tracked instances" }, marks: [Plot.line(points, { x: "date", y: "users" })], marginLeft: 100 })

I added 100px of margin to the left of the chart to ensure there was space for the large (4,696,000 and up) labels on the y-axis.

A bunch of tricks combined

This project combines a whole bunch of tricks I've been pulling together over the past few years:

- Git scraping is the technique I use to gather the initial data, turning a static listing of instances into a record of changes over time
- git-history is my tool for turning a scraped Git history into a SQLite database that's easier to work with
- s3-credentials makes working with S3 buckets - in particular creating credentials that are restricted to just one bucket - much less frustrating
- Datasette Lite means that once you have a SQLite database online somewhere you can explore it in your browser - without having to run my full server-side Datasette Python application on a machine somewhere
- And finally, combining the above means I can take advantage of Observable notebooks for ad-hoc visualization of data that's hosted online, in this case as a static SQLite database file served from S3

Every remaining website using the .museum TLD


Every remaining website using the .museum TLD

Jonty did a survey of every one of the 1,134 domains using the .museum TLD, which dates back to 2001 and is managed by The Museum Domain Management Association.

Via @jonty@chaos.social

Saturday, 19. November 2022

Simon Willison

Quoting Andrew Godwin


... it [ActivityPub] is crucially good enough. Perfect is the enemy of good, and in ActivityPub we have a protocol that has flaws but, crucially, that works, and has a standard we can all mostly agree on how to implement - and eventually, I hope, agree on how to improve.

Andrew Godwin


Hyperonomy Digital Identity Lab

Facts, Opinions, and Folklore: A Preliminary Taxonomy


This preliminary taxonomy attempts to characterize the differences between facts, opinions, folklore, and related terminology. Your feedback is appreciated.

Facts
- True Facts – Truths – Hard Facts – 100% true
- False Beliefs – believed to be true but are, in fact, false
- Fake Facts – knowingly or purposely 0% true (100% false)

Perturbations of the Facts
- Misunderstood Facts
- Misconceived Facts
- Misstated or Miscommunicated Facts

Opinions
- Feedback that may or may not be true
- Vague
- Poorly recollected
- Subjective opinions
- Humorous/satirical

Folklore
- Feedback originating with a fourth party and passed on by a third party

Friday, 18. November 2022

Simon Willison

Datasette Lite: Loading JSON data


Datasette Lite: Loading JSON data

I added a new feature to Datasette Lite: you can now pass it the URL to a JSON file (hosted on a CORS-compatible hosting provider such as GitHub or GitHub Gists) and it will load that file into a database table for you. It expects an array of objects, but if your file has an object as the root it will search through it looking for the first key that is an array of objects and load those instead.

Via Issue 54: ?json=URL parameter for loading JSON data
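Roughly speaking, the fallback behaviour described above looks something like this Python sketch - an illustration of the logic, not Datasette Lite's actual implementation:

def rows_from_json(data):
    # An array of objects can be loaded directly as table rows.
    if isinstance(data, list):
        return data
    # Otherwise look for the first key whose value is an array of objects.
    if isinstance(data, dict):
        for value in data.values():
            if isinstance(value, list) and value and all(isinstance(v, dict) for v in value):
                return value
    raise ValueError("No array of objects found in this JSON document")

# Example: the "items" list is picked out and loaded as a table.
print(rows_from_json({"meta": {"count": 1}, "items": [{"id": 1, "name": "a"}]}))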


Jon Udell

Debuggable explanations


I’ve been reviewing Greg Wilson’s current book project, Software Design in Python. Like the earlier JavaScript-based Software Design by Example it’s a guided tour of tools, techniques, and components common to many software systems: testing frameworks, parsers, virtual machines, debuggers. Each chapter of each of these books shows how to build the simplest possible working version of one of these things.

Though I’ve used this stuff for most of my life, I’ve never studied it formally. How does an interpreter work? The chapter on interpreters explains the basic mechanism using a mixture of prose and code. When I read the chapter I can sort of understand what’s happening, but I’m not great at mental simulation of running code. I need to run the code in a debugger, set breakpoints, step through execution, and watch variables change. Then it sinks in.

The GitHub repo for the book includes all the text and all the code. I’d like to put them side-by-side, so that as I read the narrative I can run and debug the code that’s being described. Here’s how I’m doing that in VSCode.

This is pretty good! But it wasn’t dead simple to get there. In a clone of the repo, the steps included:

- Find the HTML file for chapter 3.
- Install a VSCode extension to preview HTML.
- Find the code for chapter 3.
- Adjust the code to not require command-line arguments.
- Arrange the text and code in side-by-side panes.

Though it’s all doable, the activation threshold is high enough to thwart my best intention of repeating the steps for every chapter.

Whether in VSCode or another IDE or some other kind of app, what would be the best way to lower that activation threshold?

Thursday, 17. November 2022

MyDigitalFootprint

We need more unethical morals!

I explore ethics, morals and integrity in the context of decision-making. This piece explores the void between ethics and morals and why we need this place to exist because it allows us to explore the reason why unethical morals force us to new thinking.

The difference in definition between Ethics and Morals

Definition: Ethics are guiding principles of conduct of an individual or group.

Definition: Morals are principles on which one’s judgments of right and wrong are based.

Therefore an important difference between ethics and morals is that ethics are relatively uniform within a group, whereas morals are individual and heavily influenced by local culture and beliefs.


How to change someone's mind is a super article from Manfred F. R. Kets de Vries at Insead.  It is important because if we want more people in the moral group, we need those with different ethics to change. And if we want to update our morals, we need to be able to change our ethics.

In Manfred’s article, I believe that ethics and morals become mixed up between what a writer means and what the reader understands. It is very confusing when a narrative uses the ideals of ethics and morals but only applies one of the words.

What we are aware of is that there is a dynamic relationship between what an individual thinks and what a group thinks: an individual can affect group thinking, and group thinking affects individuals. #Morals are based on our own #principles - which are influenced by our society's values. Those values create unique #ethics, which are "rules" a particular societal system gives to those in that place.

Because values are not principles, and rules are not values, friction is created which we see as moral and ethical voids - spaces where you can have unethical morals. Rules, the way we control ethical behaviour, are always a laggard, which means those at the forefront of change see the void and gaps between the perceived new values and the old rules.

From a linear world viewpoint, we understand this dynamic relationship between ethics and morals as they both challenge and refine each other for the betterment (we aspire to) of both, but there is a time lag. 

However, simple language and diagrams create this rather naive viewpoint because what we witness is that our morals are challenged by society, yet we often need a movement before our collective ethics create better moral outcomes and new rules. 

Therefore we have a time-lagged gap or void which prevents the full alignment of morals and ethics.  I never realised how important this time lag is. Without a time lag, which creates separation, we would never improve but rather get trapped in negative and unproductive ways because we all accept this moral or ethical behaviour as the best we can do.  

 It is in this void we find all the complexity of modern society and politics. 

This gap represents the tensions that boards and leadership teams have to face as they work out a strategy and route between the new thinking and the old rules.

COVID-19, climate change, sustainability and poverty are just some examples that have made us more aware of the gaps between different nations' rules, constraints and resources, which shape their ethics and their different mental models of a morally better society. Whether it is actually better or not is yet to be determined.

Therefore I believe we should be asking for more “unethical moral” and more “immoral ethical” dilemmas, as this will focus our attention on the void between our existing rule set and the new rules we need if we are to make the world more inclusive, accepting and transparent and less biased, cruel and prejudiced.

I repeat …. we need more unethical morals!


Wednesday, 16. November 2022

Simon Willison

Quoting Jack Clark

These kinds of biases aren’t so much a technical problem as a sociotechnical one; ML models try to approximate biases in their underlying datasets and, for some groups of people, some of these biases are offensive or harmful. That means in the coming years there will be endless political battles about what the ‘correct’ biases are for different models to display (or not display), and we can ultimately expect there to be as many approaches as there are distinct ideologies on the planet. I expect to move into a fractal ecosystem of models, and I expect model providers will ‘shapeshift’ a single model to display different biases depending on the market it is being deployed into. This will be extraordinarily messy.

Jack Clark


fasiha/yamanote

fasiha/yamanote

Yamanote is "a guerrilla bookmarking server" by Ahmed Fasih - it works using a bookmarklet that grabs a full serialized copy of the page - the innerHTML of both the head and body element - and passes it to the server, which stores it in a SQLite database. The files are then served with a Content-Security-Policy': `default-src 'self' header to prevent stored pages from fetching ANY external assets when they are viewed.

Via octodon.social/@22
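
Not code from Yamanote itself, but a minimal sketch of the same serving pattern (assuming Flask and the sqlite3 module; the table and route names are made up) showing how that CSP header stops archived pages from phoning home:

import sqlite3
from flask import Flask, Response, request  # assumption: Flask stands in for Yamanote's actual server

app = Flask(__name__)
DB = "bookmarks.db"

@app.route("/save", methods=["POST"])
def save():
    # The bookmarklet would POST the page URL plus the serialized head/body HTML here
    with sqlite3.connect(DB) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS pages (id INTEGER PRIMARY KEY, url TEXT, html TEXT)")
        conn.execute("INSERT INTO pages (url, html) VALUES (?, ?)", (request.form["url"], request.form["html"]))
    return "saved"

@app.route("/page/<int:page_id>")
def page(page_id):
    with sqlite3.connect(DB) as conn:
        (html,) = conn.execute("SELECT html FROM pages WHERE id = ?", (page_id,)).fetchone()
    # default-src 'self' blocks the archived page from fetching any external assets when viewed
    return Response(html, headers={"Content-Security-Policy": "default-src 'self'"})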


JSON Changelog with SQLite

JSON Changelog with SQLite

One of my favourite database challenges is how to track changes to rows over time. This is a neat recipe from 2018 which uses SQLite triggers and the SQLite JSON functions to serialize older versions of the rows and store them in TEXT columns.

Via fasiha/yamanote
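
The linked post has its own schema; as a rough illustration of the general idea (not the post's exact recipe), a trigger plus json_object() can capture the old version of a row on every update. A minimal sketch using Python's bundled sqlite3 module, assuming a SQLite build with the JSON1 functions:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
CREATE TABLE changelog (
    id INTEGER PRIMARY KEY,
    tbl TEXT,
    row_id INTEGER,
    old_row TEXT,             -- previous version of the row, serialized as JSON
    changed_at TEXT DEFAULT (datetime('now'))
);
CREATE TRIGGER people_update_log AFTER UPDATE ON people
BEGIN
    INSERT INTO changelog (tbl, row_id, old_row)
    VALUES ('people', OLD.id, json_object('name', OLD.name, 'age', OLD.age));
END;
""")
conn.execute("INSERT INTO people (id, name, age) VALUES (1, 'Cleo', 5)")
conn.execute("UPDATE people SET age = 6 WHERE id = 1")
print(conn.execute("SELECT tbl, row_id, old_row, changed_at FROM changelog").fetchall())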


Mike Jones: self-issued

OpenID Presentations at November 2022 OpenID Workshop and IIW

I gave the following presentation at the Monday, November 14, 2022 OpenID Workshop at VISA:

OpenID Connect Working Group (PowerPoint) (PDF)

I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, November 15, 2022:

Introduction to OpenID Connect (PowerPoint) (PDF)

Tuesday, 15. November 2022

Heres Tom with the Weather

Mastodon Discovery

Making notes is helpful when reading and running unfamiliar code for the first time. I usually start with happy paths. Here’s some notes I made while learning about Mastodon account search and discovery. It’s really cool to poke around the code that so many people are using every day to find each other.

When you search on an account identifier on Mastodon, your browser makes a request to your Mastodon instance:

/api/v2/search?q=%40herestomwiththeweather%40mastodon.social&resolve=true&limit=5

The resolve=true parameter tells your Mastodon instance to make a webfinger request to the target Mastodon instance if necessary. The search controller makes a call to the SearchService

def search_results
  SearchService.new.call(
    params[:q],
    current_account,
    limit_param(RESULTS_LIMIT),
    search_params.merge(resolve: truthy_param?(:resolve), exclude_unreviewed: truthy_param?(:exclude_unreviewed))
  )
end

and since resolve=true, SearchService makes a call to the ResolveAccountService

if options[:resolve]
  ResolveAccountService.new.call(query)

The purpose of ResolveAccountService is to “Find or create an account record for a remote user” and return an account object to the search controller. It includes WebfingerHelper which is a trivial module with just one one-line method named webfinger!()

module WebfingerHelper
  def webfinger!(uri)
    Webfinger.new(uri).perform
  end
end

This method returns a webfinger object. Rather than call it directly, ResolveAccountService invokes process_webfinger! which invokes it and then asks the returned webfinger object’s subject method for its username and domain and makes them instance variables of the service object.

def process_webfinger!(uri)
  @webfinger = webfinger!("acct:#{uri}")
  confirmed_username, confirmed_domain = split_acct(@webfinger.subject)

  if confirmed_username.casecmp(@username).zero? && confirmed_domain.casecmp(@domain).zero?
    @username = confirmed_username
    @domain = confirmed_domain
    return
  end

If the Mastodon instance does not already know about this account, ResolveAccountService invokes fetch_account! which calls the ActivityPub::FetchRemoteAccountService which inherits from ActivityPub::FetchRemoteActorService

@account = ActivityPub::FetchRemoteAccountService.new.call(actor_url, suppress_errors: @options[:suppress_errors])

The actor_url will look something like

https://mastodon.social/users/herestomwiththeweather
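
The webfinger exchange that produces an actor URL like this is easy to reproduce outside of Mastodon; here is a minimal sketch in Python (assuming the requests library):

import requests

def webfinger_actor_url(acct):
    """Resolve user@domain to its ActivityPub actor URL via webfinger."""
    username, domain = acct.lstrip("@").split("@")
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{username}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # The rel="self" link points at the ActivityPub actor document
    for link in data.get("links", []):
        if link.get("rel") == "self":
            return data["subject"], link["href"]
    return data["subject"], None

print(webfinger_actor_url("herestomwiththeweather@mastodon.social"))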

The ActivityPub::FetchRemoteActorService passes the actor_url parameter to fetch_resource to receive a json response for the remote account.

@json = begin
  if prefetched_body.nil?
    fetch_resource(uri, id)
  else

The response includes a lot of information including name, summary, publicKey, images and urls to fetch more information like followers and following.

Finally, the ActivityPub::FetchRemoteActorService calls the ActivityPub::ProcessAccountService, passing it the json response.

ActivityPub::ProcessAccountService.new.call(@username, @domain, @json, only_key: only_key, verified_webfinger: !only_key)

If the Mastodon instance does not know about the account, ActivityPub::ProcessAccountService invokes create_account and update_account to save the username, domain and all the associated urls from the json response to a new account record in the database.

create_account if @account.nil?
update_account

I have several questions about how following others works and will probably look at that soon. I may start out by reading A highly opinionated guide to learning about ActivityPub which I bookmarked a while ago.


Identity Woman

Identosphere

Infominer and I have been publishing the weekly Identosphere Newsletter and Summary of all that is happening Self-Sovereign and Decentralized Identity. These are ways you can contribute a one time end of the year contribution: Or subscribe with a contribution every month this button will take you to a page where you can pick a […]

The post Identosphere appeared first on Identity Woman.


Damien Bod

Create Azure App Registration for API using Powershell

This post shows how to set up an Azure App registration using Powershell for an application access token using an application role. In Azure, roles are used for app-only access and scopes are used for delegated flows (or roles for users). The Azure App registration uses OAuth2 with the client credentials flow. A secret and a client_id are used.

Code: https://github.com/damienbod/GrpcAzureAppServiceAppAuth

The AzureAD Powershell module is used to create a new Azure App registration. The New-AzureADApplication function creates a new Azure App registration with a secret on the defined tenant from the authentication flow.

$Guid = New-Guid
$startDate = Get-Date
$allowPassthroughUsers = $false

$PasswordCredential = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordCredential
$PasswordCredential.StartDate = $startDate
$PasswordCredential.EndDate = $startDate.AddYears(20)
$PasswordCredential.KeyId = $Guid
$PasswordCredential.Value = ([System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes(($Guid))))

if(!($myApp = Get-AzureADApplication -Filter "DisplayName eq '$($appName)'" -ErrorAction SilentlyContinue))
{
    $myApp = New-AzureADApplication -DisplayName $appName -PasswordCredentials $PasswordCredential -AllowPassthroughUsers $allowPassthroughUsers
    # Write-Host $myApp | Out-String | ConvertFrom-Json
}

We need an App Role and this is exposed in the access token. The App Role can be created using this function. See this link for the original:

https://stackoverflow.com/questions/51651889/how-to-add-app-roles-under-manifest-in-azure-active-directory-using-powershell-s

This Azure App registration is created for an Application client, ie no user. If creating this for delegated flows, the AllowedMemberTypes would need to be changed and no secret/certificate is required. A scope would probably be used as well instead of a Role, but this depends on the solution authorization architecture.

function CreateApplicationAppRole([string] $Name, [string] $Description)
{
    $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole
    $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]
    $appRole.AllowedMemberTypes.Add("Application");
    $appRole.DisplayName = $Name
    $appRole.Id = New-Guid
    $appRole.IsEnabled = $true
    $appRole.Description = $Description
    $appRole.Value = $Name;

    return $appRole
}

The Set-AzureADApplication function adds the roles to the Azure App registration.

$appRoles = $myApp.AppRoles
$newRole = CreateApplicationAppRole -Name $appRoleName -Description $appRoleName
$appRoles.Add($newRole)

Set-AzureADApplication -ObjectId $myApp.ObjectId -AppRoles $appRoles
$appRoleId = $newRole.Id

The App Role can now be used and exposed in the access token. This is added using the RequiredResourceAccess

$req = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess"
$acc1 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList $appRoleId,"Role"
$req.ResourceAccess = $acc1
$req.ResourceAppId = $myApp.AppId

Set-AzureADApplication -ObjectId $myApp.ObjectId -RequiredResourceAccess $req

For some unknown reason, the Powershell AzureAD module adds default Oauth2Permissions to the Azure App registration. This can be disabled. We have no scopes as this is an application client, i.e. AppOnly.

$Scopes = New-Object System.Collections.Generic.List[Microsoft.Open.AzureAD.Model.OAuth2Permission]
$Scope = $myApp.Oauth2Permissions | Where-Object { $_.Value -eq "user_impersonation" }
$Scope.IsEnabled = $false
$Scopes.Add($Scope)

Set-AzureADApplication -ObjectId $myApp.ObjectID -Oauth2Permissions $Scopes

The API IdentifierUris is added to the Azure App registration.

$apiUrl = "api://" + $myApp.AppId $IdentifierUris = New-Object System.Collections.Generic.List[string] $IdentifierUris.Add($apiUrl) Set-AzureADApplication -ObjectId $myApp.ObjectID -IdentifierUris $IdentifierUris

A service principal can be created for the Azure App Registration. This can then be used in the enterprise applications.

$createdServicePrincipal = New-AzureADServicePrincipal -AccountEnabled $true -AppId $myApp.AppId -DisplayName $appName

Graph application roles can also be added to the Azure App Registration if required. I usually separated this to a different Azure App registration.

$req = New-Object -TypeName "Microsoft.Open.AzureAD.Model.RequiredResourceAccess"
$acc1 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "62a82d76-70ea-41e2-9197-370581804d09","Role"
$acc2 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "5b567255-7703-4780-807c-7be8301ae99b","Role"
$acc3 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "9e3f62cf-ca93-4989-b6ce-bf83c28f9fe8","Role"
$acc4 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "741f803b-c850-494e-b5df-cde7c675a1ca","Role"
$acc5 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "df021288-bdef-4463-88db-98f22de89214","Role"
$acc6 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "7ab1d382-f21e-4acd-a863-ba3e13f7da61","Role"
$acc7 = New-Object -TypeName "Microsoft.Open.AzureAD.Model.ResourceAccess" -ArgumentList "19dbc75e-c2e2-444c-a770-ec69d8559fc7","Role"

$req.ResourceAccess = $acc1,$acc2,$acc3,$acc4,$acc5,$acc6,$acc7
$req.ResourceAppId = "00000003-0000-0000-c000-000000000000"

Add the item to the Azure App registration.

##################################
### Create an RequiredResourceAccess list
##################################
$requiredResourceAccessItems = New-Object System.Collections.Generic.List[Microsoft.Open.AzureAD.Model.RequiredResourceAccess]
$requiredResourceAccessItems.Add($req)

Set-AzureADApplication -ObjectId $myApp.ObjectId -RequiredResourceAccess $requiredResourceAccessItems

Version 2 access tokens are used. This needs to be set in the manifest. The property is called accessTokenAcceptedVersion in the portal and requestedAccessTokenVersion in Graph. Set it to 2 so that version 2 access tokens are issued.

$Body = @{
    api = @{
        requestedAccessTokenVersion = 2
    }
} | ConvertTo-Json -Compress | ConvertTo-Json

$null = az rest --method PATCH --uri "https://graph.microsoft.com/v1.0/applications/$($appRegObjectId)" --body $Body --headers "Content-Type=application/json"

Running the scripts

Install the required Azure AD Powershell module:

Install-Module AzureAD -AllowClobber

Connect to the correct tenant using an account which has the privileges to create App registrations:

Connect-AzureAD -TenantId 5698af84-5720-4ff0-bdc3-9d9195314244

Run the script replacing the tenantId and your Azure App Registration name:

.\app-reg-application-cc.ps1 -tenantId 5698af84-5720-4ff0-bdc3-9d9195314244 -appName AppRegTest

Login Azure CLI and Update access token version

az login --tenant 5698af84-5720-4ff0-bdc3-9d9195314244

You can read the id (ObjectId) from the manifest: “id”: “ba62783f-fb6b-48a9-ba51-f56355e84926”

.\update-access-token-version2.ps1 -TenantId 5698af84-5720-4ff0-bdc3-9d9195314244 -appRegObjectId ba62783f-fb6b-48a9-ba51-f56355e84926

Create new secret

You can read the id (ObjectId) from the manifest: “id”: “ba62783f-fb6b-48a9-ba51-f56355e84926”

.\app-new-secrets.ps1 -TenantId 5698af84-5720-4ff0-bdc3-9d9195314244 -appRegObjectId ba62783f-fb6b-48a9-ba51-f56355e84926

See the full scripts in the Github repository accompanying this blog.

Notes

We are using secrets in this demo. You can also switch to certificates instead of secrets, which then use client assertions on the access token request. I normally store the secret or certificate in an Azure Key Vault and use it directly from the applications and services. I would normally add this to DevOps and create a single script for all the infrastructure.

After the App registrations have been created, you need to grant consent before these can be used.
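
As a quick sanity check that the registration works, a client can request an app-only access token with the client credentials flow. This is not part of the scripts above - just a minimal sketch using the MSAL Python package, with placeholders for the values created by the script:

import msal

tenant_id = "<tenantId>"                          # the tenant used with Connect-AzureAD
client_id = "<AppId of the app registration>"
client_secret = "<secret created by the script>"

app = msal.ConfidentialClientApplication(
    client_id,
    client_credential=client_secret,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
)

# The .default scope requests the application roles granted to this client
result = app.acquire_token_for_client(scopes=[f"api://{client_id}/.default"])

if "access_token" in result:
    print(result["access_token"][:40], "...")
else:
    print(result.get("error"), result.get("error_description"))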

Links:

https://docs.microsoft.com/en-us/powershell/module/azuread/new-azureadapplication?view=azureadps-2.0

https://stackoverflow.com/questions/42164581/how-to-configure-a-new-azure-ad-application-through-powershell

Monday, 14. November 2022

Aaron Parecki

How to Build a Restreaming Server with a Raspberry Pi

First of all, what is a restreaming server? Sometimes you want to livestream video from a device like an ATEM Mini or OBS to multiple destinations. Many devices and software like this will let you push video to just one RTMP destination at a time.

To stream to multiple destinations, you need to use a restream server so that the device can stream the one stream to the server, and the restream server pushes to multiple destinations.

There are paid services you can use to restream for you, restream.io being one of the most well-known ones. This is a great solution too, and if you're just looking for a quick way to restream to multiple platforms, this is the easiest way to go.

(Note: the YoloBox does let you publish to multiple destinations with no extra setup, so if you're using that device, you can just ignore this whole tutorial!)

But, sometimes you want to do this yourself, avoid paying third party services, or you might need to restream to local devices that something on the public internet can't reach. That's what the rest of this blog post is about. I'll show you how to set up a Raspberry Pi (or really any other Linux computer) to restream your livestreams to multiple destinations.

Getting Started

Before we get into the details, you'll need to start with a Raspberry Pi or an Ubuntu server that's already set up and running. That should be as easy as following the official setup guide for Raspberry Pi. Also note that if you're comfortable with SSH, you can install the Raspberry Pi OS "Lite" without the desktop environment.

Install nginx

The magic that makes this all work is the nginx web server with a custom module that supports RTMP.

Install nginx and the rtmp module by running the following commands on the command line, either over SSH or by opening the Terminal on the desktop.

sudo apt update
sudo apt install nginx libnginx-mod-rtmp

Configure your Restream Server

Now we need to set up an RTMP server in nginx. Edit the main nginx config file:

sudo nano /etc/nginx/nginx.conf

Scroll all the way to the bottom and copy the below text into the config file:

rtmp {
    server {
        listen 1935;

        application restream {
            # Enable livestreaming
            live on;
            # Disable recording
            record off;

            # Allow only this machine to play back the stream
            allow play 127.0.0.1;
            deny play all;

            # Push your stream to one or more RTMP destinations
            push rtmp://a.rtmp.youtube.com/live2/XXXX-XXXX-XXXX-XXXX-XXXX;
            push rtmp://a.rtmp.youtube.com/live2/XXXX-XXXX-XXXX-XXXX-XXXX;
            push rtmp://live-cdg.twitch.tv/app/live_XXXXXXXX;
        }
    }
}

Save this file by pressing ctrl+X, then Y, then enter.

To test the config file for errors, type:

sudo nginx -t

If that worked, you can reload nginx to make your changes take effect:

sudo nginx -s reload

Start Streaming

At this point the Raspberry Pi is ready! You can now stream to this box and it will send a copy to each configured destination! Any stream key will work, and you can stream using any sort of device or software like OBS. You'll need to find the IP address of the Raspberry Pi which you can do by typing:

hostname -I

To stream to the Raspberry Pi, use the RTMP URL: rtmp://YOUR_IP_ADDRESS/restream and anything as the stream key.
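
One way to test the server without a hardware encoder is to push a local file at it with ffmpeg. This is just a sketch: it assumes ffmpeg is installed, that test.mp4 is already H.264/AAC, and it uses a made-up IP address and stream key.

import subprocess

rtmp_url = "rtmp://192.168.1.50/restream/anykey"  # your Pi's IP plus any stream key

# -re reads the file at its native frame rate, -c copy avoids re-encoding,
# and -f flv wraps the stream in the FLV container that RTMP expects
subprocess.run(
    ["ffmpeg", "-re", "-i", "test.mp4", "-c", "copy", "-f", "flv", rtmp_url],
    check=True,
)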

NOTE: The way this is set up, anyone can stream to this if they know the IP address since it will accept any stream key. If you want to restrict this, you can use a long random string in place of restream in the config. For example:

... application restream-ABCD-EFGH-IJKL-MNOP { ...

Now you are ready to stream! Start pushing an RTMP feed to your server and it will send a copy to each of your configured destinations!

If you want to stream to this from an ATEM Mini, you'll need to create a custom streaming config and load that in to the software control app. You can use this XML generator to create the configuration.

Fill out your server's IP address, and use either restream or restream-ABCD-EFGH-IJKL-MNOP as the path.

Further Reading

Now that you have the nginx RTMP module installed, there's a lot more things you can do! You can read the official documentation for a full list of other commands you can use. You can do things like:

Record a local copy of anything your RTMP server receives
Create multiple resolutions of your video and push different resolutions to different platforms
Create a vertical cropped version of your feed and send it to another RTMP destination
Notify external services when you start or stop streaming

Leave a comment below if you're interested in a tutorial on any of these other interesting features!


@_Nat Zone

A perfect use case for gBizID: Twitter to start verifying organization-affiliated accounts soon

On the morning of November 13, Twitter's new owner Elon Musk tweeted that Twitter will soon let organizations verify which accounts are actually associated with them.

Rolling out soon, Twitter will enable organizations to identify which other Twitter accounts are actually associated with them

— Elon Musk (@elonmusk) November 13, 2022

To do this, the organization's own legitimate Twitter account has to be identified first. On that point, Musk says he thinks there is ultimately no choice but for Twitter to be the final arbiter, though he is open to suggestions.

Ultimately, I think there is no choice but for Twitter to be the final arbiter, but I’m open to suggestions

— Elon Musk (@elonmusk) November 13, 2022

He also says organizations will be able to manage which users are shown as affiliated with them.

We will enable organizations to manage affiliations

— Elon Musk (@elonmusk) November 13, 2022

Identifying an organization and then designating who holds which authority within it is something for which each country has various existing mechanisms. In the UK, for example, such data is published as open data, and in Japan there is a system called gBizID.

gBizID is an identity provider whose ID register is built from registrations backed by a company's certificate of registered matters and the representative's registered seal, and it can expose that data in a verifiable form via OpenID. So if Twitter arranged to receive data from gBizID, it could get what it wants almost immediately.

The moment I saw the tweet I almost replied, "in Japan you could just do it this way", but that is really something the Digital Agency should do, so I uncharacteristically held back. Digital Agency, this is your chance!

Musk closes the thread by saying that increasing the granularity of what "verified" actually means is the right move. My thoughts exactly. I only hope that proper identity professionals are on the team looking into it.

Increasing granularity about what “verified” actually means is the right move

— Elon Musk (@elonmusk) November 13, 2022

Sunday, 13. November 2022

Simon Willison

Datasette is 5 today: a call for birthday presents

Five years ago today I published the first release of Datasette, in Datasette: instantly create and publish an API for your SQLite databases.

Five years, 117 releases, 69 contributors, 2,123 commits and 102 plugins later I'm still finding new things to get excited about with the project every single day. I fully expect to be working on this for the next decade-plus.

Datasette is the ideal project for me because it can be applied to pretty much everything that interests me - and I'm interested in a lot of things!

I can use it to experiment with GIS, explore machine learning data, catalog cryptozoological creatures and collect tiny museums. It can power blogs and analyze genomes and figure out my dog's favourite coffee shop.

The official Datasette website calls it "an open source multi-tool for exploring and publishing data". This definitely fits how I think about the project today, but I don't know that it really captures my vision for its future.

In "x for y" terms I've started thinking of it as Wordpress for Data.

Wordpress powers 39.5% of the web because its thousands of plugins let it solve any publishing problem you can think of.

I want Datasette to be able to do the same thing for any data analysis, visualization, exploration or publishing problem.

There's still so much more work to do!

Call for birthday presents

To celebrate this open source project's birthday, I've decided to try something new: I'm going to ask for birthday presents.

An aspect of Datasette's marketing that I've so far neglected is social proof. I think it's time to change that: I know people are using the software to do cool things, but this often happens behind closed doors.

For Datasette's birthday, I'm looking for endorsements and case studies and just general demonstrations that show how people are using it to do cool stuff.

So: if you've used Datasette to solve a problem, and you're willing to publicize it, please give us the gift of your endorsement!

How far you want to go is up to you:

Not ready or able to go public? Drop me an email. I'll keep it confidential but just knowing that you're using these tools will give me a big boost, especially if you can help me understand what I can do to make Datasette more useful to you.
Add a comment to this issue thread describing what you're doing. Just a few sentences is fine - though a screenshot or even a link to a live instance would be even better.
Best of all: a case study - a few paragraphs describing your problem and how you're solving it, plus permission to list your logo as an organization that uses Datasette. The most visible social proof of all!

I thrive on talking to people who are using Datasette, so if you want to have an in-person conversation you can sign up for a Zoom office hours conversation on a Friday.

I'm also happy to accept endorsements in replies to these posts on Mastodon or on Twitter.

Here's to the next five years of Datasette. I'm excited to see where it goes next!


@_Nat Zone

Elon Musk tells employees that Twitter will become a bank

Twitter will offer banking services

At his first all-hands meeting with employees, Twitter's new owner Elon Musk reportedly said that Twitter will become a banking and (international) payments company. PayPal 2.0, in other words.

Musk founded the online bank X.com in 1999 and merged it with Confinity the following year to create PayPal, so this is familiar territory for him. Back then there was only email, but with a DM platform that already has this many users as the base, he argues something better can be built. Having founded a bank before, he will presumably also connect it properly to existing financial institutions, probably in a way that is almost transparent to users. As a platform for transfers and payments it could become very convenient, and if international remittances become possible on it too, the impact would be considerable. KYC, however, remains a challenge.

Twitter will also offer video and give creators better ways to earn

It also sounds like longer videos will be supported. Today you can only upload to YouTube or similar and post a link, but the plan is to make this possible entirely within Twitter, and to let creators earn more than they can get from YouTube and other platforms. In terms of user experience, something like TikTok seems to be what he has in mind.

On remote work

He also talked about remote work: coming into the office is the default, but people who perform may work remotely. This policy is said to be shared with SpaceX and Tesla.

For details on the above, see The Verge's "Inside Elon Musk's first meeting with Twitter employees".

Saturday, 12. November 2022

Werdmüller on Medium

The fediverse is happening. Here’s how to take part

A guide to getting started with Mastodon

Continue reading on Medium »

Friday, 11. November 2022

Phil Windleys Technometria

Verifying Twitter

Summary: Elon has started a monthly $8 fee for verified twitter users. A verifiable credential-based solution would be a better way to increase trust in the platform by authenticating users as real people without attendant privacy concerns.

This thread from Matthew Yglesias concerning Twitter's decision to charge for the blue verification checkmark got me thinking. Matthew makes some good points:

Pseudonymity has value and offers protection to people who might not otherwise feel free to post if Twitter required real names like Facebook tries to.
Verification tells the reader that the account is run by a person.
There's value to readers in knowing the real name and professional affiliation of some accounts.

Importantly, the primary value accrues to the reader, not the tweeter. So, charging the tweeter $20/month (now $8) is charging the wrong party. In fact, more than the reader, the platform itself realizes the most value from verification because it can make the platform more trustworthy. Twitter will make more money if the verification system can help people understand the provenance of tweets because ads will become more valuable.

Since no one asked me, I thought I'd offer a suggestion on how to do this right. You won't be surprised that my solution uses verifiable credentials.

First, Twitter needs to make being verified worthwhile to the largest number of users possible. Maybe that means that tweets from unverified accounts are delayed or limited in some way. There are lots of options and some A/B testing would probably show what incentives work best.

Second, pick a handful (five springs to mind) of initial credential issuers that Twitter will trust and define the credential schema they'd prefer. Companies like Onfido can already do this. It wouldn't be hard for others like Equifax, ID.me, and GLEIF to issue credentials based on the "real person" or "real company" verifications they're already doing. These credential issuers could charge whatever the market would bear. Twitter might get some of this money.

Last, Twitter allows anyone with a "real person" credential from one of these credential issuers to verify their profile. The base verification would be for the holder to use zero-knowledge proof to prove they are a person or legal entity. If they choose, the credential holder might want to prove their real name and professional affiliation, but that wouldn't be required. Verifying these credentials as part of the Twitter profile would be relatively easy for Twitter to implement.

Twitter would have to decide what to do about accounts that are not real people or legal entities. Some of these bots have value. Maybe there's a separate verification process for these that requires that the holder of the bot account prove who they are to Twitter so they can be held responsible for their bot's behavior.

You might be worried that the verified person would sell their verification or verify multiple accounts. There are a number of ways to mitigate this. I explained some of this in Transferable Accounts Putting Passengers at Risk.

Real person verification using verifiable credentials has a number of advantages.

First, Twitter never knows anyone's real name unless that person chooses to reveal it. This means that Twitter can't be forced to reveal it to someone else. They just know they're a real person. This saves Twitter from being put in that position and building infrastructure and teams to deal with it. Yes, the police, for example, could determine who issued the Twitter Real Person credential and subpoena them, but that's the business these companies are in, so presumably they already have processes for doing this.

Another nice perk is that Twitter jump starts an ecosystem for real person credentials that might have uses somewhere else. This has the side benefit of making fraud less likely, since the more a person relies on a credential the less likely they are to use it for fraudulent purposes.

A big advantage is that Twitter can now give people peace of mind that the accounts they're following are controlled by real people. Tools might let people adjust their feed accordingly so they see more tweets by real people.

Twitter also can give advertisers comfort that their engagement numbers are closer to reality. Twitter makes more money.

Yglesias says:

Charging power users for features that most people don’t need or want makes perfect sense.

But verification isn’t a power user feature, it’s a terrible implementation of what’s supposed to be a feature for the everyday user. It should help newbies figure out what’s going on.

Verifiable credentials can help make Twitter a more trustworthy place by providing authentic data about people and companies creating accounts—and do it better than Twitter's current system. I'm pretty sure Twitter won't. Elon seems adamant that they are going to charge to get the blue checkmark. But, I can dream.

Bonus Link: John Bull's Twitter thread on Trust Thermoclines

Notes

Photo Credit: tree-nature-branch-bird-flower-wildlife-867763-pxhere.com from Unknown (CC0)

Tags: twitter identity verifiable+credentials


Simon Willison

Home invasion: Mastodon's Eternal September begins

Home invasion: Mastodon's Eternal September begins

Hugh Rundle's thoughtful write-up of the impact of the massive influx of new users from Twitter on the existing Mastodon community. If you're new to Mastodon (like me) you should read this and think carefully about how best to respectfully integrate with your new online space.

Wednesday, 09. November 2022

Simon Willison

PyScript Updates: Bytecode Alliance, Pyodide, and MicroPython

PyScript Updates: Bytecode Alliance, Pyodide, and MicroPython

Absolutely huge news about Python on the Web tucked into this announcement: Anaconda have managed to get a version of MicroPython compiled to WebAssembly running in the browser. Pyodide weighs in at around 6.5MB compressed, but the MicroPython build is just 303KB - the size of a large image. This makes Python in the web browser applicable to so many more potential areas.


Semantic text search using embeddings

Semantic text search using embeddings

Example Python notebook from OpenAI demonstrating how to build a search engine using embeddings rather than straight up token matching. This is a fascinating way of implementing search, providing results that match the intent of the search ("delicious beans" for example) even if none of the keywords are actually present in the text.
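
The notebook itself is the reference, but the core idea fits in a few lines: embed the documents and the query, then rank by cosine similarity. A rough sketch (assuming the pre-1.0 openai Python package with an API key configured, numpy, and an embedding model name that may differ from the one the notebook uses):

import numpy as np
import openai  # assumes openai.api_key is already set

MODEL = "text-embedding-ada-002"  # assumption: any OpenAI embedding model will do

def embed(texts):
    resp = openai.Embedding.create(input=texts, model=MODEL)
    return [np.array(item["embedding"]) for item in resp["data"]]

docs = [
    "sweet black beans slow cooked with garlic and cumin",
    "recommended oil change intervals for diesel engines",
]
doc_vectors = embed(docs)

def search(query):
    qv = embed([query])[0]
    # Cosine similarity ranks by meaning, so the query matches on intent
    # even with little keyword overlap
    scores = [float(qv @ dv / (np.linalg.norm(qv) * np.linalg.norm(dv))) for dv in doc_vectors]
    return sorted(zip(scores, docs), reverse=True)

print(search("delicious beans"))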


Inside the mind of a frontend developer: Hero section

Inside the mind of a frontend developer: Hero section

Ahmad Shadeed provides a fascinating, hyper-detailed breakdown of his approach to implementing a "hero section" component using HTML and CSS, including notes on CSS grids and gradient backgrounds.


Designing a write API for Datasette

Building out Datasette Cloud has made one thing clear to me: Datasette needs a write API for ingesting new data into its attached SQLite databases.

I had originally thought that this could be left entirely to plugins: my datasette-insert plugin already provides a JSON API for inserting data, and other plugins like datasette-upload-csvs also implement data import functionality.

But some things deserve to live in core. An API for manipulating data is one of them, because it can hopefully open up a floodgate of opportunities for other plugins and external applications to build on top of it.

I've been working on this over the past two weeks, in between getting distracted by Mastodon (it's just blogs!).

Designing the API

You can follow my progress in this tracking issue: Write API in Datasette core #1850. I'm building the new functionality in a branch (called 1.0-dev, because this is going to be one of the defining features of Datasette 1.0 - and will be previewed in alphas of that release).

Here's the functionality I'm aiming for in the first alpha:

API for writing new records (singular or plural) to a table
API for updating an existing record
API for deleting an existing record
API for creating a new table - either with an explicit schema or by inferring it from a set of provided rows
API for dropping a table

I have a bunch of things I plan to add later, but I think the above represents a powerful, coherent set of initial functionality.

In terms of building this, I have a secret weapon: sqlite-utils. It already has both a Python client library and a comprehensive CLI interface for inserting data and creating tables. I've evolved the design of those over multiple major versions, and I'm confident that they're solid. Datasette's write API will mostly implement the same patterns I've eventually settled on for sqlite-utils.

I still need to design the higher level aspects of the API though - the endpoint URLs and the JSON format that will be used.

This is still in flux, but my current design looks like this.

To insert records:

POST /database/table/-/insert

{
  "rows": [
    {"id": 1, "name": "Simon"},
    {"id": 2, "name": "Cleo"}
  ]
}

Or use "row": {...} to insert a single row.

To create a new table with an explicit schema:

POST /database/-/create

{
  "name": "people",
  "columns": [
    {
      "name": "id",
      "type": "integer"
    },
    {
      "name": "title",
      "type": "text"
    }
  ],
  "pk": "id"
}

To create a new table with a schema automatically derived from some initial rows:

POST /database/-/create

{
  "name": "my new table",
  "rows": [
    {"id": 1, "name": "Simon"},
    {"id": 2, "name": "Cleo"}
  ],
  "pk": "id"
}

To update a record:

POST /database/table/134/-/update

{
  "update": {
    "name": "New name"
  }
}

Where 134 in the URL is the primary key of the record. Datasette supports compound primary keys too, so this could be /database/docs/article,242/-/update for a table with a compound primary key.

I'm using a "update" nested object here rather than having everything at the root of the document because that frees me up to add extra future fields that control the update - "alter": true to specify that the table schema should be updated to add new columns, for example.

To delete a record:

POST /database/table/134/-/delete

I thought about using the HTTP DELETE verb here and I'm ready to be convinced that it's a good idea, but thinking back over my career I can't see any times where I've seen DELETE offer a concrete benefit over just sticking with POST for this kind of thing.

This isn't going to be a pure REST API, and I'm OK with that.

So many details

There are so many interesting details to consider here - especially given that Datasette is designed to support ANY schema that's possible in SQLite.

Should you be allowed to update the primary key of an existing record?
What happens if you try to insert a record that violates a foreign key constraint?
What happens if you try to insert a record that violates a unique constraint?
How should inserting binary data work, given that JSON doesn't have a binary type?
What permissions should the different API endpoints require (I'm looking to add a bunch of new ones)?
How should compound primary keys be treated?
Should the API return a copy of the records that were just inserted? Initially I thought yes, but it turns out to be a big impact on insert speeds, at least in SQLite versions before the RETURNING clause was added in SQLite 3.35.0 (in March 2021, so not necessarily widely available yet).
How should the interactive API explorer work? I've been building that in this issue.

I'm working through these questions in the various issues attached to my tracking issue. If you have opinions to share you're welcome to join me there!

Token authentication

This is another area that I've previously left to plugins. datasette-auth-tokens adds Authorization: Bearer xxx authentication to Datasette, but if there's a write API in core there really needs to be a default token authentication mechanism too.

I've implemented a default mechanism based around generating signed tokens, described in issue #1852 and in this in-progress documentation.

The basic idea is to support tokens that are signed JSON objects (similar to JWT but not JWT, because JWT is a flawed standard - I rolled my own using itsdangerous).

The signed content of a token looks like this:

{ "a": "user_id", "t": 1668022423, "d": 3600 }

The "a" field captures the ID of the user created that token. The token can then inherit the permissions of that user.

The "t" field shows when the token was initially created.

The "d" field is optional, and indicates after how many seconds duration the token should expire. This allows for the creation of time-limited tokens.

Tokens can be created using the new /-/create-token page or the new datasette create-token CLI command.

It's important to note that this is not intended to be the only way tokens work in Datasette. There are plenty of applications where database-backed tokens makes more sense, since it allows tokens to be revoked individually without rotating secrets and revoking every issued token at once. I plan to implement this pattern myself for Datasette Cloud.

But I think this is a reasonable default scheme to include in Datasette core. It can even be turned off entirely using the new --setting allow_signed_tokens off option.

I'm also planning a variant of these tokens that can apply additional restrictions. Let's say you want to issue a token that acts as your user but is only allowed to insert rows into the docs table in the primary database. You'll be able to create a token that looks like this:

{ "a": "simonw", "t": 1668022423, "r": { "t": { "primary: { "docs": ["ir"] } } } }

"r" means restrictions. The "t" key indicates per-table restrictions, and the "ir" is an acronym for the insert-row permission.

I'm still fleshing out how this will work, but it feels like an important feature of any permissions system. I find it frustrating any time I'm working with a system that doesn't allow me to create scoped-down tokens.

Releases this week

json-flatten: 0.3 - (2 releases total) - 2022-10-29
Python functions for flattening a JSON object to a single dictionary of pairs, and unflattening that dictionary back to a JSON object

datasette-edit-templates: 0.1 - (2 releases total) - 2022-10-27
Plugin allowing Datasette templates to be edited within Datasette

datasette: 0.63 - (116 releases total) - 2022-10-27
An open source multi-tool for exploring and publishing data

sqlite-utils: 3.30 - (104 releases total) - 2022-10-25
Python CLI utility and library for manipulating SQLite databases

datasette-indieauth: 1.2.1 - (10 releases total) - 2022-10-25
Datasette authentication using IndieAuth and RelMeAuth

shot-scraper: 1.0.1 - (24 releases total) - 2022-10-24

TIL this week

os.remove() on Windows fails if the file is already open
Finding the SQLite version used by Web SQL in Chrome
git bisect
The pdb interact command
GitHub Pages: The Missing Manual
Getting Mastodon running on a custom domain
Export a Mastodon timeline to SQLite

MyDigitalFootprint

Why does fear fill the gap?

In that moment of panic, we forget to reflect on what type of gap this is and why it has been filled with fear. Leadership is a recognition of the gaps, that not all gaps are the same and how to prevent fear being the first response.

Image source: Susan David, Ph.D (love her work)

Fear and Gaps 

Fear is an unpleasant emotion caused by the immediate or expected threat of danger, pain, or harm, but it is also so much more. We know fear sells in terms of marketing. We understand FOMO (fear of missing out) and the fear of failure (FOF) are significant drivers. We are aware that fear produces a unique reaction in the body, driven from the gut ahead of the brain (Antonio Damasio's research). Fear is a stimulus, but it is subjective, and how fear is perceived is different for everyone. Different types of fear spread at different speeds. Brands and the media use fear to create headlines and force change. COP27 and the climate change agenda are not averse to utilising this insight.

We should be aware that fear drives many decisions we make.  Therefore, the interesting question becomes, “Why is it that fear fills the gaps between what we know/ believe and the unknown/ uncertain?” A further question on the link between fear and trust is worth exploring, but it is beyond this post. 

Why is it that fear tends to be the feeling that fills the gaps between what we know/ believe and the unknown/ uncertain?
Peak Human Purpose 

In the Peak Paradox framework, one of the peaks to optimise for is “Peak Human Purpose”. Each of the four purposes of the framework exists at the exclusion of anything else - purity at an extreme. At peak human purpose, we are here (humans on earth) to escape ultimate death by reproducing as much as possible with the broadest community we can. We also have to adapt as fast as possible. We have to meet our chemistry requirements to stay alive for as long as possible, to adapt and reproduce at the expense of anything else. These form the most basic definitions of life with clarity and purity.

Whilst the purity of all the peak purposes might be controversial (even to myself), saying the purity of human purpose is chemistry/ biology does not go down very well; it is too simplistic. However, this is a model for framing thinking, so please go with it as it needs to be pure, and every other human purpose has conflicts with someone.  The point here is that when we realise that fear and/or anxiety fills gaps, we understand that we are optimising for something deeply human - life, survival, and thriving.   

The point here is that when we realise that fear/or anxiety fills gaps, we understand that we are optimising for something deeply human.

I am often questioned why I put “Human Purpose” as one of the peaks, and it is because of some deeply human traits of life that influence our decisions and create conflicts and tensions within us and our groups. Fear and anxiety are some of these feelings. I am neither an expert, counsellor nor theorist in human behaviour or psychology; however, that does not stop me from realising how much chemistry, biology and experience influence our decision-making, whether we want to realise it or not. These disciplines are currently undervalued, as is the fact that fear is baked into some systems of management, control and performance.

Different gaps have different fears

Different gaps have different fears sounds obvious, but it is not, as the only gap that fear is filling is the one in front of us right now. Fear steps in when there is a gap in our knowledge/ information. We hear a noise we cannot explain; someone is walking behind us, or we imagine a scenario. Fear is not limited to our personal lives and is an active component in the world of our daily commercial activities and actions. Geoffrey Moore's book “Crossing the Chasm” is a book that sells by creating fear. The book is much more important than that and is a fantastic insight into adoption - however, until you knew about the chasm, you did not fear it.

Without a doubt, life would be easier if only one gap and one fear existed. However, we have to contend with the fact that every moment we are dealing with different gaps (leadership, innovation, knowledge, information, experience) and different fears that come from the gaps we have right now and those we imagine in the future.

What do we imagine are the boundaries?

The image below illustrates two different gaps. The original thought was captured in an ESG session, so I am using this as an example. For some, ESG is a gap (think fear) between what is unknown and known and how we cross it (the gap on the left below). For others, ESG is a gap (think fear) between what is known and the action they need to take (the gap on the right below). The recent Sibos conference*, where this thinking emerged, included a debate about the role data has in ESG and whether data can ever be useful because there are two gaps. A good question to ask is: into which gap does your ESG data fall? This moves past arguments for and against, and forces you to determine which camp your data represents!


The fear in each gap is real and, depending on the persona and team, determines how you will cross your fear-filled gap. However, this model, whilst “obvious”, might not actually be a good representation of the issue.

Just Fix It

We (humans) tend to have an obsession with fixing things.  The majority who will read this realise that we cannot fix wicked problems, usually because we cannot understand them. Even our systems thinking and explanation have limits because of the boundary interconnection problem.  (the unknown consequences of your action on another system and vice-versa).  

When you hear someone bark an order to “fix something”, we know it resembles the old order of control, dogma and hierarchy.  There was a belief that in a more simple time, leadership should/could just fix everything. However, not everything is easy to fix (humans, climate, economy, inflation) and not all problems have solutions (my favourite is pure maths), and the majority of what we face every day requires us to walk past the ideal of a “quick fix” and wrestle with the complexity of wicked systems.   

We should not ignore the power and pull of a “fix it” mental model.  We all tend to do it as a first untrained response when faced with a fear, gap or problem. Because of the “fix-it” mental model, our gaps are mostly filled with fear because there is no immediate fix within our experience. Our early experience in education and business teaches us we can “fix it” by defining the problem and building a solution. To do this, we have to accept we must ignore critical facts that add complexity to the actual problem or lack experience to see such layers. 

Management, leadership and MBA courses all spend a lot of time teaching us to ask, “what is the problem to be solved?” Usually, so we can determine if the pre-packed solution on offer aligns with the problem at hand.  When we know the problem, we can write a plan.  This fix-it provides a perception that we know how to cross the two big chasms filled with fears. This is not true because a “Fix it” mentality and language ignore struggles, dilemmas, compromises and paradoxes.   


We, humans, and our environment are not a problem to be fixed but something to be crafted, shaped and moulded over time.

The purpose of the Peak Paradox framework is to embrace “fix it” thinking for simple things but then build a model that allows us to picture and imagine many of the complications of a dynamic interactive, interdependent system of systems.  Wicked problems.  

A single independent system can be modelled and might be fixable. A system of systems cannot be modelled or fixed as there are unknowns at the boundaries between the systems presenting unknown effects, dependencies and consequences.  Not all humans have the same motivations, incentives or desires - a core identification in the peak paradox framework.

Moving on from the “fix-it” model

When we take out the “fix-it” thinking and redraw the two chasms, we observe that it is critical that executives are able to cope with leading in uncertainty, and that management remains flexible so it can continually adapt the plan. I would argue this is why reporting and compliance boards fail and don't work for any stakeholder, as they focus on the wrong model - “fix-it”.

How does fear align with the Peak Paradox framework and thinking on sustainability? 

“Fix-it” thinking defines problems and solves them, or ends up with gaps filled with fears - the information gap. I see too many executive boards fixated on reporting, gaps and compliance, translating leadership into an instruction to fix it when there is a divergence between the plan and the actual. Leadership is surely about bringing vision, belief and skills to help bridge the gap, not by barking instructions to fix it but by providing the next base camp on an uncharted map. Stakeholders trust management to be flexible and adaptable so it can cope with change; the plan is there to change, not to manage to. The delta (the gap between plan and actual) is not to be feared but embraced and understood.

Dashboards are a leadership killer 

Humans and the earth (terra firma, water and climate) need to find a sustainable compromise and are the same in this respect. We don’t need to be fixed, and we don’t need fixing. What we do need is a map.  

COP27: “Fix-It” or map?

The obsession with 1.5 degrees is, to me, a problem. The earth will not end, but yes, it will definitely become far more difficult for humans on earth to thrive rather than just survive. The changes in temperature will affect some humans in some regions far more. Our favoured economic model is also likely to be tested to breaking point.

I am a massive supporter of the SDGs and of change, but my issue is that 1.5 degrees is a solution to a problem that we have not fully defined, and it depends on the “we can fix it” mental model. The same goes for Net Zero and ESG data. These are solutions to problems we don’t understand. These are wicked problems that should not be boiled down to a single number that no one can do anything about. 1.5 degrees is not a vision, a north star or a plan - it is a target. It should be the first camp on a long journey. However, fear fills the gaps and drives a model that drives more fear into making the gaps bigger.

Perhaps we should step back and agree on what fears and gaps we are actually talking about.


Thank you. 
* At Sibos 22 (the big banking, payments and finance conference), I had the joy of meeting a flock of old friends and meeting IRL some new ones I had only ever interacted with digitally. During one of the #Innotribe ESG sessions, it was good to interactively pen ideas based on the content as I sat with Yael Rozencwajg, which has become this post.  


Tuesday, 08. November 2022

Simon Willison

Mastodon is just blogs

And that's great. It's also the return of Google Reader!

Mastodon is really confusing for newcomers. There are memes about it.

If you're an internet user of a certain age, you may find an analogy that's been working for me really useful:

Mastodon is just blogs.

Every Mastodon account is a little blog. Mine is at https://fedi.simonwillison.net/@simon.

You can post text and images to it. You can link to things. It's a blog.

You can also subscribe to other people's blogs - either by "following" them (a subscribe in disguise) or - fun trick - you can add .rss to their page and subscribe in a regular news reader (here's my feed).
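
As a quick sketch of that RSS trick (assuming the third-party feedparser package is installed; the profile URL is just the one above), you can read a Mastodon account like any other feed:

import feedparser

# Any Mastodon profile URL with ".rss" appended serves a standard RSS feed
feed = feedparser.parse("https://fedi.simonwillison.net/@simon.rss")

for entry in feed.entries[:5]:
    # Each entry is one public post: a link plus an HTML summary
    print(entry.link)
    print(entry.summary)
    print("---")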

A Mastodon server (often called an instance) is just a shared blog host. Kind of like putting your personal blog in a folder on a domain on shared hosting with some of your friends.

Want to go it alone? You can do that: run your own dedicated Mastodon instance on your own domain (or pay someone to do that for you - I'm using masto.host).

Feeling really nerdy? You can build your own instance from scratch, by implementing the ActivityPub specification and a few others, plus matching some Mastodon conventions.

Differences from regular blogs

Mastodon (actually mostly ActivityPub - Mastodon is just the most popular open source implementation) does add some extra features that you won't get with a regular blog:

Follows: you can follow other blogs, and see who you are following and who is following you
Likes: you can like a post - people will see that you liked it
Retweets: these are called "boosts". They duplicate someone's post on your blog too, promoting it to your followers
Replies: you can reply to other people's posts with your own
Privacy levels: you can make a post public, visible only to your followers, or visible only to specific people (effectively a group direct message)

These features are what makes it interesting, and also what makes it significantly more complicated - both to understand and to operate.

Add all of these features to a blog and you get a blog that's lightly disguised as a Twitter account. It's still a blog though!

It doesn't have to be a shared host

This shared hosting aspect is the root of many of the common complaints about Mastodon: "The server admins can read your private messages! They can ban you for no reason! They can delete your account! If they lose interest the entire server could go away one day!"

All of this is true.

This is why I like the shared blog hosting analogy: the same is true there too.

In both cases, the ultimate solution is to host it yourself. Mastodon has more moving pieces than a regular static blog, so this is harder - but it's not impossibly hard.

I'm paying to host my own server for exactly this reason.

It's also a shared feed reader

This is where things get a little bit more complicated.

Do you still miss Google Reader, almost a decade after it was shut down? It's back!

A Mastodon server is a feed reader, shared by everyone who uses that server.

Users on one server can follow users on any other server - and see their posts in their feed in near-enough real time.

This works because each Mastodon server implements a flurry of background activity. My personal server, serving just me, already tells me it has processed 586,934 Sidekiq jobs since I started using it.

Blogs and feed readers work by polling for changes every few hours. ActivityPub is more ambitious: any time you post something, your server actively sends your new post out to every server that your followers are on.

Every time someone followed by you (or any other user on your server) posts, your server receives that post, stores a copy and adds it to your feed.
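
To make that delivery step concrete, here is a heavily simplified sketch of the kind of request a server sends when you post. The JSON shape follows the ActivityStreams vocabulary, but the actor, inbox URLs and note content are made up, and a real server signs every request with an HTTP Signature and queues deliveries as background jobs (both omitted here):

import json
import requests

# A hypothetical new post, wrapped in a Create activity
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "object": {
        "type": "Note",
        "content": "<p>Hello, fediverse!</p>",
        "attributedTo": "https://example.social/users/alice",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

# Deliver the activity to each follower's inbox
follower_inboxes = [
    "https://other.example/users/bob/inbox",
]
for inbox in follower_inboxes:
    requests.post(
        inbox,
        data=json.dumps(activity),
        headers={"Content-Type": "application/activity+json"},
    )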

Servers offer a "federated" timeline. That's effectively a combined feed of all of the public posts from every account on Mastodon that's followed by at least one user on your server.

It's like you're running a little standalone copy of the Google Reader server application and sharing it with a few dozen/hundred/thousand of your friends.

May a thousand servers bloom

If you're reading this with a web engineering background, you may be thinking that this sounds pretty alarming! Half a million Sidekiq jobs to support a single user? Huge amounts of webhooks firing every time someone posts?

Somehow it seems to work. But can it scale?

The key to scaling Mastodon is spreading the cost of all of that background activity across a large number of servers.

And unlike something like Twitter, where you need to host all of those yourself, Mastodon scales by encouraging people to run their own servers.

On November 2nd Mastodon founder Eugen Rochko posted the following:

199,430 is the number of new users across different Mastodon servers since October 27, along with 437 new servers. This bring last day's total to 608,837 active users, which is without precedent the highest it's ever been for Mastodon and the fediverse.

That's 457 new users for each new server.

Any time anyone builds something decentralized like this, the natural pressure is to centralize it again.

In Mastodon's case though, decentralization is key to getting it to scale. And the organization behind mastodon.social, the largest server, is a German non-profit with an incentive to encourage new servers to help spread the load.

Will it break? I don't think so. Regular blogs never had to worry about scaling, because that's like worrying that the internet will run out of space for new content.

Mastodon servers are a lot chattier and expensive to run, but they don't need to talk to everything else on the network - they only have to cover the social graph of the people using them.

It may prove unsustainable to run a single Mastodon server with a million users - but if you split that up into ten servers covering 100,000 users each I feel like it should probably work.

Running on multiple, independently governed servers is also Mastodon's answer to the incredibly hard problem of scaling moderation. There's a lot more to be said about this and I'm not going to try and do it justice here, but I recommend reading this Time interview with Mastodon founder Eugen for a good introduction.

How does this all get paid for?

One of the really refreshing things about Mastodon is the business model. There are no ads. There's no VC investment, burning early money to grow market share for later.

There are just servers, and people paying to run them and volunteering their time to maintain them.

Elon did us all a favour here by setting $8/month as the intended price for Twitter Blue. That's now my benchmark for how much I should be contributing to my Mastodon server. If everyone who can afford to do so does that, I think we'll be OK.

And it's very clear what you're getting for the money. How much each server costs to run can be a matter of public record.

The oldest cliche about online business models is "if you're not paying for the product, you are the product being sold".

Mastodon is our chance to show that we've learned that lesson and we're finally ready to pay up!

Is it actually going to work?

Mastodon has been around for six years now - and the various standards it is built on have been in development I believe since 2008.

A whole generation of early adopters have been kicking the tyres on this thing for years. It is not a new, untested piece of software. A lot of smart people have put a lot of work into this for a long time.

No-one could have predicted that Elon would drive it into hockeystick growth mode in under a week. Despite the fact that it's run by volunteers with no profit motive anywhere to be found, it's holding together impressively well.

My hunch is that this is going to work out just fine.

Don't judge a website by its mobile app

Just like blogs, Mastodon is very much a creature of the Web.

There's an official Mastodon app, and it's decent, but it suffers the classic problem of so many mobile apps in that it doesn't quite keep up with the web version in terms of features.

More importantly, its onboarding process for creating a new account is pretty confusing!

I'm seeing a lot of people get frustrated and write off Mastodon as completely impenetrable. I have a hunch that many of these are people whose only experience has come from downloading the official app.

So don't judge a federated web ecosystem exclusively by its mobile app! If you begin your initial Mastodon exploration on a regular computer you may find it easier to get started.

Other apps exist - in fact the official app is a relatively recent addition to the scene, just over a year old. I'm personally a fan of Toot! for iOS, which includes some delightful elephant animations.

The expanded analogy

Here's my expanded version of that initial analogy:

Mastodon is just blogs and Google Reader, skinned to look like Twitter.


Werdmüller on Medium

It’s time to be heard

Voting is not a right to take lightly.

Continue reading on Medium »

Monday, 07. November 2022

Simon Willison

Blessed.rs Crate List

Rust doesn't have a very large standard library, so part of learning Rust is figuring out which of the third-party crates are the best for tackling common problems. This here is an opinionated guide to crates, which looks like it could be really useful.

Via Hacker News

Sunday, 06. November 2022

Doc Searls Weblog

On Twitter 2.0

So far the experience of using Twitter under Musk is pretty much unchanged. Same goes for Facebook.

Yes, there is a lot of hand-wringing, and the stock market hates Meta (the corporate parent to which Facebook gave birth); but so far the experience of using both is pretty much unchanged.

This is aside from the fact that the two services are run by feudal overlords with crazy obsessions and not much feel for roads they both pave and ride.

As for Meta (and its Reality Labs division), virtual and augmented realities (VR and AR) via headgear are today where “Ginger” was before she became the Segway: promising a vast horizontal market that won’t materialize because its utilities are too narrow.

VR/AR will, like the Segway, find some niche uses. For Segway, it was warehouses, cops, and tourism. For VR/AR headgear it will be gaming, medicine, and hookups in meta-space. The porn possibilities are beyond immense.

As for business, both Twitter and Facebook will continue to be hit by a decline in personalized advertising and possibly a return to the old-fashioned non-tracking-based kind, which the industry has mostly forgotten how to do. But it will press on.

Not much discussed, but a real possibility is that advertising overall will at least partially collapse. This has been coming for a long time. (I’ve been predicting it at least since 2008.) First, there is near-zero (and widespread negative) demand for advertising on the receiving end. Second, Apple is doing a good job of working for its customers by providing ways to turn off or thwart the tracking that aims most ads online. And Apple, while not a monopoly, is pretty damn huge.

It may also help to remember that trees don’t grow to the sky. There is a life cycle for companies just as there is for living things.


Simon Willison

What to blog about

You should start a blog. Having your own little corner of the internet is good for the soul!

But what should you write about?

It's easy to get hung up on this. I've definitely felt the self-imposed pressure to only write something if it's new, and unique, and feels like it's never been said before. This is a mental trap that does nothing but hold you back.

Here are two types of content that I guarantee you can produce and feel great about producing: TILs, and writing descriptions of your projects.

Today I Learned

A TIL - Today I Learned - is the most liberating form of content I know of.

Did you just learn how to do something? Write about that.

Call it a TIL - that way you're not promising anyone a revelation or an in-depth tutorial. You're saying "I just figured this out: here are my notes, you may find them useful too".

I also like the humility of this kind of content. Part of the reason I publish them is to emphasize that even with 25 years of professional experience you should still celebrate learning even the most basic of things.

I learned the "interact" command in pdb the other day! Here's my TIL.

I started publishing TILs in April 2020. I'm up to 346 now, and most of them took less than 10 minutes to write. It's such a great format for quick and satisfying online writing.

My collection lives at https://til.simonwillison.net - which publishes content from my simonw/til GitHub repository.

Write about your projects

If you do a project, you should write about it.

I recommend adding "write about it" to your definition of "done" for anything that you build or create.

Like with TILs, this takes away the pressure to be unique. It doesn't matter if your project overlaps with thousands of others: the experience of building it is unique to you. You deserve to have a few paragraphs and a screenshot out there explaining (and quietly celebrating) what you made.

The screenshot is particularly important. Will your project still exist and work in a decade? I hope so, but we all know how quickly things succumb to bit-rot.

Even better than a screenshot: an animated GIF screenshot! I capture these with LICEcap. And a video is even better than that, but those take a lot more effort to produce.

It's incredibly tempting to skip the step where you write about a project. But any time you do that you're leaving a huge amount of uncaptured value from that project on the table.

These days I make myself do it: I tell myself that writing about something is the cost I have to pay for building it. And I always end up feeling that the effort was more than worthwhile.

Check out my projects tag for examples of this kind of content.

So that's my advice for blogging: write about things you've learned, and write about things you've built!

Saturday, 05. November 2022

Simon Willison

GOV.UK: Rules for getting production access

Fascinating piece of internal documentation on GOV.UK describing their rules, procedures and granted permissions for their deployment and administrative ops roles.

Via @kkremitzki


It looks like I'm moving to Mastodon

Elon Musk laid off about half of Twitter this morning. There are many terrible stories emerging about how this went down, but one that particularly struck me was that he laid off the entire accessibility team. For me this feels like a microcosm of the whole situation. Twitter's priorities are no longer even remotely aligned with my own.

I've been using Twitter since November 2006 - wow, that's 16 years! I've accumulated 42,804 followers there. It's been really good to me, and I've invested a lot of work generating content there to feed the machine.

I can't see myself putting the same work in to help the world's (current) richest man pay the billion dollar annual interest on the loans he took out to buy the place on a weird narcissistic whim.

So I've started to explore Mastodon - and so far it's exceeding all of my expectations.

My new profile is at https://fedi.simonwillison.net/@simon - you can follow @simon@simonwillison.net in your Mastodon client of choice.

Not ready to sign up for Mastodon? It turns out RSS support is baked in too - you can subscribe to https://fedi.simonwillison.net/@simon.rss in your feed reader (I really like NetNewsWire for macOS and iOS these days).

Why Mastodon?

The lesson I have learned from Twitter is that, even if a service you trust makes it past an IPO and becomes a public company, there's always a risk that it can be bought by someone who very much doesn't share your values.

Mastodon has been designed to avoid this from the start. It operates as a federated network of independent servers, each of which is run by a different person or organization with the ability to set their own rules and standards.

You can also host your own instance on your own domain.

My initial nudge to try this out was from Jacob and Andrew, who figured out how to do exactly that:

The Fediverse, And Custom Domains - Andrew Godwin
Setting up a personal Fediverse ID / Mastodon instance - Jacob Kaplan-Moss

Andrew and Jacob both opted to pay masto.host to run their instance for them. I've decided to do the same. It's on my domain, which means if I ever want to run it myself I can do so without any visible disruption.

I'm paying $9/month. I find it darkly amusing that this is a dollar more than Elon has been planning to charge for users to keep their verified status on Twitter!

If you don't want to use your own domain there are plenty of good free options, though I recommend reading Ash Furrow's post about his shutdown of mastodon.technology to help understand how much of a commitment it is for the admins who run a free instance.

This post by @klillington@mastodon.ie has some good links for getting started understanding the system. I particularly enjoyed Nikodemus’ Guide to Mastodon as it matched most closely the questions I had at first.

Initial impressions

Despite taking the second hardest route to joining Mastodon (the hardest route is spinning up a new server from scratch) it took me just less than an hour to get started. I wrote up a TIL describing what I did - more or less directly following the steps described by Andrew and Jacob.

I signed into my new account and started following people, by pasting in their full Mastodon names (mine is @simon@simonwillison.net). I was initially surprised that this did nothing: your timeline won't be populated until the people you follow have said something.

And then people started to toot, and my timeline slowly kicked into life.

And it was really, really pleasant.

My fear was that everyone on Mastodon would spend all of their time talking about Mastodon - especially given the current news. And sure, there's some of that. (I'm obviously guilty here.)

But there's lots of stuff that isn't that. The 500 character limit gives people a bit more space, and replies work much like they do on Twitter. I followed a bunch of people, replied to a few things, posted some pelican photos and it all worked pretty much exactly as I hoped it would.

It's also attracting very much the kind of people I want to hang out with. Mastodon is, unsurprisingly, entirely populated by nerds. But the variety of nerds is highly pleasing to me.

I've been checking in on the #introduction hashtag and I'm seeing artists, academics, writers, historians. It's not just programmers. The variety of interest areas on Twitter is the thing I'll miss most about it, so seeing that start to become true on Mastodon too is a huge relief.

Considering how complicated a federated network is, the fact that it's this smooth to use is really impressive. It helps that they've had six years to iron out the wrinkles - the network seems to be coping with the massive influx of new users over the past few days really well.

I'm also appreciating how much thought has been put into the design of the system. Quote tweeting isn't supported, for reasons explained by Eugen Rochko in this 2018 post:

Another feature that has been requested almost since the start, and which I keep rejecting is quoting messages. Coming back to my disclaimer, of course it’s impossible to prevent people from sharing screenshots or linking to public resources, but quoting messages is immediately actionable. It makes it a lot easier for people to immediately engage with the quoted content… and it usually doesn’t lead to anything good. When people use quotes to reply to other people, conversations become performative power plays. “Heed, my followers, how I dunk on this fool!” When you use the reply function, your message is broadcast only to people who happen to follow you both. It means one person’s follower count doesn’t play a massive role in the conversation. A quote, on the other hand, very often invites the followers to join in on the conversation, and whoever has got more of them ends up having the upper hand and massively stressing out the other person.

Mastodon so far feels much more chilled out than Twitter. I get the impression this is by design. When there's no profit motive to "maximize engagement" you can design features to optimize for a different set of goals.

And there's an API

Unsurprisingly, Mastodon has a powerful API. It's necessary for the system itself to work - those toots aren't going to federate themselves!

Poking around with it is really fun.

First, a friendly note. @pamela@bsd.network wrote the following:

Hacky folks, please resist finding ways to scrape the fediverse, build archives, automate tools and connect to people via bot without their consent.

[...]

Whatever your thing is, make it 100% opt-in. Make it appropriate for a significantly more at-risk user than you are. Make sure it forgets things, purges info about servers it can't contact, can't operate in any sort of logged-in mode where consent is an issue.

We will straight up help advertise your cool thing if it respects users properly and takes the time to consider the safety and preferences of every person involved. There are a lot of fun, thoughtfully-designed toys! And there are a lot of people really tired of having to come and tell you off when you wanted to help, honestly. Help yourself and ask around before you flip on your cool new thing, let folks point out what you're missing.

(Read the whole thing, it's great.)

So far I've done a couple of things.

I built a Git scraper to track the list of peer instances that various servers have picked up. This feels like a reasonable piece of public information to track, and it's a fun way to get a feel for how the network is growing.

I also figured out how to Export a Mastodon timeline to SQLite using the timelines API and my paginate-json and sqlite-utils CLI tools, so I could explore it in Datasette.
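
That TIL walks through the paginate-json and sqlite-utils route; purely as an illustration, here is a rough sketch of the same idea using only requests and the standard library sqlite3 module (the instance URL and access token are placeholders):

import json
import sqlite3
import requests

BASE = "https://fedi.example.net"     # your own instance
TOKEN = "YOUR_ACCESS_TOKEN"           # a token with read scope

# Fetch one page of the home timeline from the Mastodon API
resp = requests.get(
    f"{BASE}/api/v1/timelines/home",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"limit": 40},
)
statuses = resp.json()

# Store each status in a local SQLite database for exploration
db = sqlite3.connect("timeline.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS statuses "
    "(id TEXT PRIMARY KEY, created_at TEXT, account TEXT, content TEXT, raw TEXT)"
)
for status in statuses:
    db.execute(
        "INSERT OR REPLACE INTO statuses VALUES (?, ?, ?, ?, ?)",
        (
            status["id"],
            status["created_at"],
            status["account"]["acct"],
            status["content"],
            json.dumps(status),
        ),
    )
db.commit()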

Running my own instance means I have no ethical qualms at all about hammering away at my own API endpoint as fast as I like!

I like to follow a lot of different people, and I don't like to feel committed to reading everything that crosses my timeline - so I expect that the feature I'll miss most from Twitter will be the algorithmic timeline! This is very much not in the spirit of Mastodon, which is firmly committed to a reverse chronological sort order.

But with access to the raw data I can start experimenting with alternative timeline solutions myself.

I'm somewhat intrigued by the idea of iterating on my own algorithmic timeline, to try and keep the variety of content high while hopefully ensuring I'm most likely to catch the highlights (whatever that means.)

Past experience building recommendation systems has taught me that one of the smartest seeming things you can do is pick the top 100 most interesting looking things based on very loose criteria and then apply random.shuffle() to produce a final feed!
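
As a toy sketch of that trick (the score function here is a stand-in for whatever "interesting looking" means to you):

import random

def build_feed(posts, score, top_n=100):
    # Take the top N posts by some very loose notion of "interesting"...
    candidates = sorted(posts, key=score, reverse=True)[:top_n]
    # ...then shuffle them so the feed stays varied rather than rank-ordered
    random.shuffle(candidates)
    return candidates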

I have a hunch that this is going to be a lot of fun.


Nikodemus’ Guide to Mastodon

I've been reading a bunch of different Mastodon guides and this one had pretty much exactly the information I needed to see when I first started out.

Via @klillington@mastodon.ie

Friday, 04. November 2022

Simon Willison

Don't Read Off The Screen

Stuart Langridge provides a fantastic set of public speaking tips in a five minute lightning talk remix of Sunscreen. Watch with sound.

Via @Simonscarfe

Thursday, 03. November 2022

Identity Woman

Thoughtful Biometrics Workshop

It is happening again. February 13-17th. Registration will open soon. Two things happened today that solidified the decision to move forward with the event. I had a great conversation with a government of Canada official who started his career as an officer at a border crossing and is currently inside the government on modernization on […]

The post Thoughtful Biometrics Workshop appeared first on Identity Woman.


Phil Windleys Technometria

The Nature of Identity

Summary: This post is an excerpt from my upcoming book, Learning Digital Identity, which will be available January 2023.

Cogito, ergo sum.
—René Descartes

The Peace of Westphalia, which ended the Thirty Years' War in 1648, created the concept of Westphalian sovereignty: the principle of international law that "each state has sovereignty over its territory and domestic affairs, to the exclusion of all external powers, on the principle of non-interference in another country's domestic affairs, and that each state (no matter how large or small) is equal in international law."1

The ensuing century saw many of these states begin civil registration for their citizens, in an effort to turn their sovereignty over territory into governance over the people living in those lands. These registrations, from which our modern system of birth certificates springs, became the basis for personal identity and legal identity in a way that conflated these two concepts.

Birth certificates are a source of legal identity and a proof of citizenship, and thus the basis for individual identity in most countries. Civil registration has become the foundation for how states relate to their citizens. As modern nation-states have become more and more influential (and often controlling) in the lives of their citizens, civil registration and its attendant legal identity have come to play a larger and larger role in their lives. People present proof of civil registration for many purposes: to prove who they are and, springing from that, their citizenship.

Even so, Descartes did not say, "I have a birth certificate, therefore I am." When most people hear the word identity, they think about birth certificates, passports, driver's licenses, logins, passwords, and other sorts of credentials. But clearly, we are more than our legal identity. For most purposes and interactions, our identity is defined through our relationships. Even more deeply, we each experience these independently as an autonomous being with an individual perspective.

This dichotomy reflects identity's dual nature. While identity is something others assign to us, it is also something deep inside of us, reflecting what Descartes actually said: "I think, therefore I am."

A Bundle of Sticks?

Another way to think about the dual nature of identity is to ask, "Am I more than a set of attributes?" Property rights are often thought of as a "bundle of sticks": each right is separable from the rest and has value independent of the rest. Similarly, identity is often considered a bundle of attributes, each with independent value. This is known in philosophy as bundle theory, originated by David Hume.

Bundle theory puts attributes into a collection without worrying about what ties them together. As an example, you might identify a plum as purple, spherical, 5 centimeters in diameter, and juicy. Critics of bundle theory question how these attributes can be known to be related without knowing the underlying substance—the thing itself.

Substance theory, on the other hand, holds that attributes are borne by "an entity which exists in such a way that it needs no other entity to exist," according to our friend Descartes. Substance theory gives rise to the idea of persistence in the philosophy of personal identity. People, organizations, and things persist through time. In one sense, you are the same person who you were when you were 16. But in another, you are not. The thing that makes you the same person over your lifetime is substance. The thing that makes you different is the collection of ever-changing attributes you present to the outside world over time.

I'm no philosopher, but I believe both viewpoints are useful for understanding digital identity. For many practical purposes, viewing people, organizations, and things as bundles of attributes is good enough. This view is the assumption upon which the modern web is built. You log into different services and present a different bundle of attributes to each. There is no substance, at least in the digital sense, since the only thing tying them together is you, a decidedly nondigital entity.

This lack of a digital representation of you, that you alone control, is one of the themes I'll return to several times in my book. At present, you are not digitally embodied—your digital existence depends on other entities. You have no digital substance to connect the various attributes you present online. I believe that digital identity systems must embody us and give us substance if we are to build a digital future where people can operationalize their online existence and maintain their dignity as autonomous human beings.

Notes "Nation-States and Sovereignty," History Guild, accessed October 5, 2022. Substance theory has many more proponents than Descartes, but his definition is helpful in thinking through identity’s dual nature.

Photo Credit: Smoke sticks for honey harvesting from Lucy McHugh/CIFOR (CC BY-NC-ND 2.0, photo cropped vertically)

Tags: identity ldid book

Tuesday, 01. November 2022

reb00ted

California water prices have quadrupled

Why should other countries have all the fun with exploding prices for base resources, like heating in the UK, or all kinds of energy across Europe?

Nasdaq has an index for open-market wholesale prices for water in the US West, mostly California. Currently, it is on the order of $1,000 per acre-foot, while the non-drought price seems to be about $250.

Quadrupled.

Links: current prices, explanation.


Simon Willison

RFC 7807: Problem Details for HTTP APIs

This RFC has been brewing for quite a while, and is currently in last call (ends 2022-11-03). I'm designing the JSON error messages for Datasette at the moment so this could not be more relevant for me.
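
For illustration, a problem details body is just a JSON object served with the application/problem+json content type; the sketch below mirrors the RFC's own out-of-credit example, written here as a Python dict:

# An illustrative RFC 7807 response body; it would be returned with
# Content-Type: application/problem+json and an HTTP 403 status
problem = {
    "type": "https://example.com/probs/out-of-credit",
    "title": "You do not have enough credit.",
    "status": 403,
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",
}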

Via Nicolas Fränkel

Monday, 31. October 2022

Damien Bod

Switch tenants in an ASP.NET Core app using Azure AD with multi tenants

This article shows how to switch between tenants in an ASP.NET Core multi-tenant application using a multi-tenant Azure App registration to implement the identity provider. Azure roles are added to the Azure App registration, and these can be used in the separate enterprise applications created from the multi-tenant Azure App registration to assign users and groups.

Code: https://github.com/damienbod/AspNetCoreTenantSelect

Azure AD is used to implement the identity provider for the ASP.NET Core application. In the home tenant, an Azure App registration was created to support multiple tenants. Three roles for users and groups were created and added to the Azure App registration. The first time a user authenticates using the Azure App registration, an administrator can give consent for the tenant. This creates an Azure enterprise application inside the corresponding tenant. Users or groups can be assigned the roles from the Azure App registration. This is specific for the corresponding tenant only.

If a user exists in two separate tenants, the user needs an easy way to switch between the tenants without a logout and a login. The user can be assigned separate roles in each tenant. The email is used to identify the user, as separate OIDs are created for each tenant. The user can be added as an external user in multiple tenants with the same email.

The ASP.NET Core application uses the Azure App registration for authentication.

The ASP.NET Core application uses Microsoft.Identity.Web to implement the OpenID Connect client. This client uses MSAL. The user of the application needs a way to switch between the tenants. To do this, the specific tenant must be used in the authorize request of the OpenID Connect flow. If the common endpoint is used, which is the standard for a multi-tenant Azure App registration, the user cannot switch between the tenants without an account logout first or using a separate incognito browser.

A cache is used to store the preferred tenant of the authenticated user. The user of the application can select the required tenant, and that tenant is used for authentication. Before the authorize request is sent to Azure AD, the ProtocolMessage.IssuerAddress is updated with the correct tenant GUID identifier. The select_account prompt was added to the authorize request in the OpenID Connect flow so that the user will always be asked to choose an account. Most of us have multiple identities and accounts nowadays.

The application requires an authenticated user. The default authentication uses the common endpoint and no select account prompt.

There are different ways to implement the switch tenant logic. I have not focused on this. I just add the selected organization to an in-memory cache. You can, for example, keep a database of your specific allowed organizations and authorize this after a successful authentication using claims returned from the identity provider. You could also provide the organization as a query parameter in the URL. The Azure AD Microsoft.Identity.Web.UI client and the ASP.NET Core application require that the application starts the authentication flow from an HTTP GET, and not a redirect to a GET or a POST request.

services.AddTransient<TenantProvider>();
services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

WebApplication? app = null;

services.Configure<MicrosoftIdentityOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Prompt = "select_account";
    var redirectToIdentityProvider = options.Events.OnRedirectToIdentityProvider;
    options.Events.OnRedirectToIdentityProvider = async context =>
    {
        if(app != null)
        {
            var tenantProvider = app.Services.GetRequiredService<TenantProvider>();
            var email = context.HttpContext!.User.Identity!.Name;
            if (email != null)
            {
                var tenant = tenantProvider.GetTenant(email);
                var address = context.ProtocolMessage.IssuerAddress.Replace("common", tenant.Value);
                context.ProtocolMessage.IssuerAddress = address;
            }
        }
        await redirectToIdentityProvider(context);
    };
});

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

The TenantProvider service implements the tenant select logic so that a user can switch between tenants or accounts without signing out or switching browsers. This can be replaced with a database or logic as your business requires. I hard coded some test tenants for the organization switch. Some type of persistence or database would be better for this. An in-memory cache is used to persist the user and the preferred organization.

public class TenantProvider
{
    private static readonly SelectListItem _org1 = new("Org1", "7ff95b15-dc21-4ba6-bc92-824856578fc1");
    private static SelectListItem _org2 = new("Org2", "a0958f45-195b-4036-9259-de2f7e594db6");
    private static SelectListItem _org3 = new("Org3", "5698af84-5720-4ff0-bdc3-9d9195314244");
    private static SelectListItem _common = new("common", "common");
    private static readonly object _lock = new();
    private IDistributedCache _cache;
    private const int cacheExpirationInDays = 1;

    public TenantProvider(IDistributedCache cache)
    {
        _cache = cache;
    }

    public void SetTenant(string email, string org)
    {
        AddToCache(email, GetTenantForOrg(org));
    }

    public SelectListItem GetTenant(string email)
    {
        var org = GetFromCache(email);
        if (org != null) return org;

        return _common;
    }

    public List<SelectListItem> GetAvailableTenants()
    {
        return new List<SelectListItem> { _org1, _org2, _org3, _common };
    }

    private SelectListItem GetTenantForOrg(string org)
    {
        if (org == "Org1") return _org1;
        else if (org == "Org2") return _org2;
        else if (org == "Org3") return _org3;

        return _common;
    }

    private void AddToCache(string key, SelectListItem userActiveOrg)
    {
        var options = new DistributedCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

        lock (_lock)
        {
            _cache.SetString(key, JsonSerializer.Serialize(userActiveOrg), options);
        }
    }

    private SelectListItem? GetFromCache(string key)
    {
        var item = _cache.GetString(key);
        if (item != null)
        {
            return JsonSerializer.Deserialize<SelectListItem>(item);
        }

        return null;
    }
}

An ASP.NET Core Razor Page is used to implement the tenant switch UI logic. This just displays the available tenants and allows the user to choose a new tenant.

public class SwitchTenantModel : PageModel
{
    private readonly TenantProvider _tenantProvider;

    public SwitchTenantModel(TenantProvider tenantProvider)
    {
        _tenantProvider = tenantProvider;
    }

    [BindProperty]
    public string Domain { get; set; } = string.Empty;
    [BindProperty]
    public string TenantId { get; set; } = string.Empty;
    [BindProperty]
    public List<string> RolesInTenant { get; set; } = new List<string>();
    [BindProperty]
    public string AppTenantName { get; set; } = string.Empty;
    [BindProperty]
    public List<SelectListItem> AvailableAppTenants { get; set; } = new List<SelectListItem>();

    public void OnGet()
    {
        var name = User.Identity!.Name;
        if (name != null)
        {
            AvailableAppTenants = _tenantProvider.GetAvailableTenants();
            AppTenantName = _tenantProvider.GetTenant(name).Text;

            List<Claim> roleClaims = HttpContext.User.FindAll(ClaimTypes.Role).ToList();
            foreach (var role in roleClaims)
            {
                RolesInTenant.Add(role.Value);
            }

            TenantId = HttpContext.User.FindFirstValue("http://schemas.microsoft.com/identity/claims/tenantid");
        }
    }

    /// <summary>
    /// Only works from a direct GET, not a post or a redirect
    /// </summary>
    public IActionResult OnGetSignIn([FromQuery]string domain)
    {
        var email = User.Identity!.Name;
        if(email != null)
            _tenantProvider.SetTenant(email, domain);

        return Challenge(new AuthenticationProperties { RedirectUri = "/" },
            OpenIdConnectDefaults.AuthenticationScheme);
    }
}

The Index Razor page in the ASP.NET Core application displays the actual tenant, the organization and the roles for this identity in this tenant.

public void OnGet()
{
    var name = User.Identity!.Name;
    if(name != null)
    {
        AvailableAppTenants = _tenantProvider.GetAvailableTenants();
        AppTenantName = _tenantProvider.GetTenant(name).Text;

        List<Claim> roleClaims = HttpContext.User.FindAll(ClaimTypes.Role).ToList();
        foreach (var role in roleClaims)
        {
            RolesInTenant.Add(role.Value);
        }

        TenantId = HttpContext.User.FindFirstValue(
            "http://schemas.microsoft.com/identity/claims/tenantid");
    }
}

After a successful authentication using Azure AD and the multi-tenant Azure App registration, the user can see the assigned roles and the tenant.

The tenant switch is displayed in a HTML list and the authentication request with the select account prompt is sent to Azure AD.

The new tenant and the new corresponding roles for the authorization are displayed after a successful authentication.

Switching tenants is becoming a required feature in most applications now that we have access to multiple Azure AD tenants and domains using the same email. This makes using external identities for an Azure AD user in a multiple domain environment a little less painful.

Notes

If using this in an environment where not all tenants are allowed, the tid claim must be validated. You should always restrict the tenants in a multi-tenant application if possible. You could enforce this by adding a tenant requirement.

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        // Enable to force tenant restrictions
        .AddRequirements(new[] { new TenantRequirement() })
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

Links

https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/multi-tenant-user-management-introduction

https://github.com/AzureAD/microsoft-identity-web

https://learn.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app

https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal


Simon Willison

mitsuhiko/insta

I asked for recommendations on Twitter for testing libraries in other languages that would give me the same level of delight that I get from pytest. Two people pointed me to insta by Armin Ronacher, a Rust testing framework for "snapshot testing" which automatically records reference values to your repository, so future tests can spot if they change.

Via @david_raznick

Saturday, 29. October 2022

Simon Willison

The Perfect Commit

For the last few years I've been trying to center my work around creating what I consider to be the Perfect Commit. This is a single commit that contains all of the following:

The implementation: a single, focused change
Tests that demonstrate the implementation works
Updated documentation reflecting the change
A link to an issue thread providing further context

Our job as software engineers generally isn't to write new software from scratch: we spend the majority of our time adding features and fixing bugs in existing software.

The commit is our principal unit of work. It deserves to be treated thoughtfully and with care.

Implementation

Each commit should change a single thing.

The definition of "thing" here is left deliberately vague!

The goal is to have something that can be easily reviewed, and that can be clearly understood in the future when revisited using tools like git blame or git bisect.

I like to keep my commit history linear, as I find that makes it much easier to comprehend later. This further reinforces the value of each commit being a single, focused change.

Atomic commits are also much easier to cleanly revert if something goes wrong - or to cherry-pick into other branches.

For things like web applications that can be deployed to production, a commit should be a unit that can be deployed. Aiming to keep the main branch in a deployable state is a good rule of thumb for deciding if a commit is a sensible atomic change or not.

Tests

The ultimate goal of tests is to increase your productivity. If your testing practices are slowing you down, you should consider ways to improve them.

In the longer term, this productivity improvement comes from gaining the freedom to make changes and stay confident that your change hasn't broken something else.

But tests can help increase productivity in the immediate short term as well.

How do you know when the change you have made is finished and ready to commit? It's ready when the new tests pass.

I find this reduces the time I spend second-guessing myself and questioning whether I've done enough and thought through all of the edge cases.

Without tests, there's a very strong possibility that your change will have broken some other, potentially unrelated feature. Your commit could be held up by hours of tedious manual testing. Or you could YOLO it and learn that you broke something important later!

Writing tests becomes far less time consuming if you already have good testing practices in place.

Adding a new test to a project with a lot of existing tests is easy: you can often find an existing test that has 90% of the pattern you need already worked out for you.

If your project has no tests at all, adding a test for your change will be a lot more work.

This is why I start every single one of my projects with a passing test. It doesn't matter what this test is - assert 1 + 1 == 2 is fine! The key thing is to get a testing framework in place, such that you can run a command (for me that's usually pytest) to execute the test suite - and you have an obvious place to add new tests in the future.
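
To be concrete, that starting point can be as small as the sketch below: a first test file that pytest will collect automatically, doing nothing more than proving the suite runs.

# tests/test_basic.py - a deliberately trivial first test, just to get
# the test suite in place from day one
def test_initial():
    assert 1 + 1 == 2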

I use these cookiecutter templates for almost all of my new projects. They configure a testing framework with a single passing test and GitHub Actions workflows to exercise it all from the very start.

I'm not a huge advocate of test-first development, where tests are written before the code itself. What I care about is tests-included development, where the final commit bundles the tests and the implementation together. I wrote more about my approach to testing in How to cheat at unit tests with pytest and Black.

Documentation

If your project defines APIs that are meant to be used outside of your project, they need to be documented. In my work these projects are usually one of the following:

Python APIs (modules, functions and classes) that provide code designed to be imported into other projects.
Web APIs - usually JSON over HTTP these days - that provide functionality to be consumed by other applications.
Command line interface tools, such as those implemented using Click or Typer or argparse.

It is critical that this documentation must live in the same repository as the code itself.

This is important for a number of reasons.

Documentation is only valuable if people trust it. People will only trust it if they know that it is kept up to date.

If your docs live in a separate wiki somewhere it's easy for them to get out of date - but more importantly it's hard for anyone to quickly confirm if the documentation is being updated in sync with the code or not.

Documentation should be versioned. People need to be able to find the docs for the specific version of your software that they are using. Keeping it in the same repository as the code gives you synchronized versioning for free.

Documentation changes should be reviewed in the same way as your code. If they live in the same repository you can catch changes that need to be reflected in the documentation as part of your code review process.

And ideally, documentation should be tested. I wrote about my approach to doing this using Documentation unit tests. Executing example code in the documentation using a testing framework is a great idea too.

As with tests, writing documentation from scratch is much more work than incrementally modifying existing documentation.

Many of my commits include documentation that is just a sentence or two. This doesn't take very long to write, but it adds up to something very comprehensive over time.

How about end-user facing documentation? I'm still figuring that out myself. I created my shot-scraper tool to help automate the process of keeping screenshots up-to-date, but I've not yet found personal habits and styles for end-user documentation that I'm confident in.

A link to an issue

Every perfect commit should include a link to an issue thread that accompanies that change.

Sometimes I'll even open an issue seconds before writing the commit message, just to give myself something I can link to from the commit itself!

The reason I like issue threads is that they provide effectively unlimited space for commentary and background for the change that is being made.

Most of my issue threads are me talking to myself - sometimes with dozens of issue comments, all written by me.

Things that can go in an issue thread include:

Background: the reason for the change. I try to include this in the opening comment.
State of play before the change. I'll often link to the current version of the code and documentation. This is great for if I return to an open issue a few days later, as it saves me from having to repeat that initial research.
Links to things. So many links! Inspiration for the change, relevant documentation, conversations on Slack or Discord, clues found on StackOverflow.
Code snippets illustrating potential designs and false-starts. Use ```python ... ``` blocks to get syntax highlighting in your issue comments.
Decisions. What did you consider? What did you decide? As programmers we make hundreds of tiny decisions a day. Write them down! Then you'll never find yourself relitigating them in the future having forgotten your original reasoning.
Screenshots. What it looked like before, what it looked like after. Animated screenshots are even better! I use LICEcap to generate quick GIF screen captures or QuickTime to capture videos - both of which can be dropped straight into a GitHub issue comment.
Prototypes. I'll often paste a few lines of code copied from a Python console session. Sometimes I'll even paste in a block of HTML and CSS, or add a screenshot of a UI prototype.

After I've closed my issues I like to add one last comment that links to the updated documentation and ideally a live demo of the new feature.

An issue is more valuable than a commit message

I went through a several year phase of writing essays in my commit messages, trying to capture as much of the background context and thinking as possible.

My commit messages grew a lot shorter when I started bundling the updated documentation in the commit - since often much of the material I'd previously included in the commit message was now in that documentation instead.

As I extended my practice of writing issue threads, I found that they were a better place for most of this context than the commit messages themselves. They supported embedded media, were more discoverable and I could continue to extend them even after the commit had landed.

Today many of my commit messages are a single line summary and a link to an issue!

The biggest benefit of lengthy commit messages is that they are guaranteed to survive for as long as the repository itself. If you're going to use issue threads in the way I describe here it is critical that you consider their long term archival value.

I expect this to be controversial! I'm advocating for abandoning one of the core ideas of Git here - that each repository should incorporate a full, decentralized record of its history that is copied in its entirety when someone clones a repo.

I understand that philosophy. All I'll say here is that my own experience has been that dropping that requirement has resulted in a net increase in my overall productivity. Other people may reach a different conclusion.

If this offends you too much, you're welcome to construct an even more perfect commit that incorporates background information and additional context in an extended commit message as well.

One of the reasons I like GitHub Issues is that it includes a comprehensive API, which can be used to extract all of that data. I use my github-to-sqlite tool to maintain an ongoing archive of my issues and issue comments as a SQLite database file.

Not every commit needs to be "perfect"

I find that the vast majority of my work fits into this pattern, but there are exceptions.

Typo fix for some documentation or a comment? Just ship it, it's fine.

Bug fix that doesn't deserve documentation? Still bundle the implementation and the test plus a link to an issue, but no need to update the docs - especially if they already describe the expected bug-free behaviour.

Generally though, I find that aiming for implementation, tests, documentation and an issue link covers almost all of my work. It's a really good default model.

Write scrappy commits in a branch

If I'm writing more exploratory or experimental code it often doesn't make sense to work in this strict way. For those instances I'll usually work in a branch, where I can ship "WIP" commit messages and failing tests with abandon. I'll then squash-merge them into a single perfect commit (sometimes via a self-closed GitHub pull request) to keep my main branch as tidy as possible.

Some examples

Here are some examples of my commits that follow this pattern:

Upgrade Docker images to Python 3.11 for datasette #1853 - a pretty tiny change, but still includes tests, docs and an issue link.
sqlite-utils schema now takes optional tables for sqlite-utils #299
shot-scraper html command for shot-scraper #96
s3-credentials put-objects command for s3-credentials #68
Initial implementation for datasette-gunicorn #1 - this was the first commit to this repository, but I still bundled the tests, docs, implementation and a link to an issue.

Friday, 28. October 2022

Moxy Tongue

A Society Worth Contributing To

 [Authoritative Work In Progress..]


Thursday, 27. October 2022

Heres Tom with the Weather

RubyConf in Houston

Earlier this week, I signed up for RubyConf 2022 which is Nov. 29 - Dec. 1 in Houston. This is my first conference since the pandemic started and I was glad to see the safety precautions. The schedule also looks great! Please say “Hi!” if you see me there.


Tuesday, 25. October 2022

Heres Tom with the Weather

IndieAuth login history


In my last post, I mentioned that I planned to add login history to Irwin. As I was testing my code, I logged into indieweb.org and noticed that I needed to update my code to support 5.3.2 Profile URL Response of the IndieAuth spec as this IndieAuth client does not need an access token. Here’s what the history looks like on my IndieAuth server:

If I click on a login timestamp, I have the option to revoke the access token associated with the login if it exists and has not already expired. My next step is to test some other micropub servers than the one I use to see what interoperability updates I may need to make.

Monday, 24. October 2022

reb00ted

The Push-Pull Publish-Subscribe Pattern (PuPuPubSub)

Preface

The British government clearly has more tolerance for humor when naming important things than the W3C does. Continuing in the original fashion, thus this name.

The Problem

The publish-subscribe pattern is well known, but in some circumstances, it suffers from two important problems:

When a subscriber is temporarily not present, or cannot be reached, sent events are often lost. This can happen, for example, if the subscriber computer reboots, falls off the network, goes to sleep, has DNS problems and the like. Once the subscriber recovers, it is generally not clear what needs to happen for the subscriber to catch up to the events it may have missed. It is not even clear whether it has missed any. Similarly, it is unclear for how long the publisher needs to retry to send a message; it may be that the subscriber has permanently gone away.

Subscriptions are often set up as part of the following pattern:

1. A resource on the Web is accessed. For example, a user reads an article on a website, or a software agent fetches a document.
2. Based on the content of the obtained resource, a decision is made to subscribe to updates to that resource. For example, the user may decide that they are interested in updates to the article on the website they just read.
3. There is a time lag between the time the resource has been accessed and when the subscription becomes active, creating a race condition during which update events may be missed.

While these two problems are not always significant, there are important circumstances in which they are, and this proposal addresses those circumstances.

Approach to the solution

We augment the publish-subscribe pattern by:

having all events, as well as the content of the resource whose changes are supposed to be tracked, be time-stamped; alternatively a monotonically increasing sequence number could be used;

having the publisher store the history of events emitted so far; for efficiency reasons, this may be shortened to some time window reaching to the present, as appropriate for the application; for example, all events in the last month;

having the publisher provide a query interface to the subscriber to that history, with a “since” time parameter, so the subscriber can obtain the sequence of events emitted since a certain time;

providing to the publisher, when subscribing, not only the callback address but also a time stamp and a subscription id;

considering the actual sending of an event from the publisher to the subscriber to be a performance optimization which, when it fails, is only an inconvenience rather than a cause of lost data (a minimal publisher-side sketch follows below).
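
As a concrete illustration of the publisher side of this augmentation, here is a minimal, in-memory Python sketch. All of the names (Publisher, events_since, subscribe) are invented for illustration; a real implementation would deliver over HTTP callbacks and keep the event history in durable storage.

```python
import time
from dataclasses import dataclass


@dataclass
class Subscription:
    subscription_id: str
    callback: object   # stand-in for the subscriber's HTTP callback address


class Publisher:
    def __init__(self):
        self.history = []          # list of (timestamp, event) tuples
        self.subscriptions = {}    # subscription_id -> Subscription

    def publish(self, event):
        stamped = (time.time(), event)
        self.history.append(stamped)
        for sub in list(self.subscriptions.values()):
            self._try_deliver(sub, [stamped])   # best effort only

    def events_since(self, since):
        """The query interface: every event emitted after `since`."""
        return [item for item in self.history if item[0] > since]

    def subscribe(self, subscription_id, callback, since):
        sub = Subscription(subscription_id, callback)
        self.subscriptions[subscription_id] = sub
        # Close the race condition: catch the subscriber up from `since` right away.
        self._try_deliver(sub, self.events_since(since))

    def _try_deliver(self, sub, events):
        for timestamp, event in events:
            try:
                sub.callback(sub.subscription_id, timestamp, event)
            except Exception:
                # Delivery is only an optimization; the subscriber can catch up later.
                break
```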

Details

About the race condition

The future subscriber accesses resource R and finds time stamp T0. For example, a human reads a web page whose publication date is April 23, 2021, 23:00:00 UTC.

After some time passes, the subscriber decides to subscribe. It does this with the well-known subscription pattern, but in addition to providing a callback address, it also provides time stamp T0 and a unique (can be random) subscription id. For example, a human’s hypothetical news syndication app may provide an event update endpoint to the news website, plus time T0.

The publisher sets up the subscription, and immediately checks whether any events should have been sent between T0 and the present. If so, it emits those events to the subscriber, in sequence, before continuing with regular operations. As a result, there is no more race condition between subscription and event.

When sending an event, the publisher also sends the subscription id.

About temporary unavailability of the subscriber

After a subscription is active, the subscriber disappears and new events cannot be delivered. The publisher may continue to attempt to deliver events for as long as it likes, or stop immediately.

When the subscriber re-appears, it finds the time of the last event it had received from the publisher, say time T5. It queries the event history published by the publisher with parameter T5 to find out what events it missed. It can then re-subscribe with a later starting time stamp (say T10), or with T5, causing the publisher to send all events from T5. When it re-subscribes, it uses a different subscription id.

After the subscriber has re-appeared, it ignores/rejects all incoming events with the old subscription id.
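
The subscriber side of this recovery can be sketched in a few lines as well. Again the names are illustrative, and `publisher` stands in for a client of the publisher's query and subscribe interface (for instance the Publisher object sketched above).

```python
import uuid


class Subscriber:
    def __init__(self, publisher):
        self.publisher = publisher      # client for the publisher's interface
        self.subscription_id = None
        self.last_seen = 0.0            # time stamp of the last event processed

    def handle_event(self, subscription_id, timestamp, event):
        if subscription_id != self.subscription_id:
            return                      # ignore events sent under an old subscription id
        self.last_seen = timestamp
        print("received", event)

    def start(self, since):
        self.subscription_id = str(uuid.uuid4())   # fresh id invalidates older ones
        self.last_seen = max(self.last_seen, since)
        self.publisher.subscribe(self.subscription_id, self.handle_event, since)

    def recover(self):
        """After coming back online: query missed events, then re-subscribe."""
        for timestamp, event in self.publisher.events_since(self.last_seen):
            self.last_seen = timestamp
            print("caught up on", event)
        self.start(self.last_seen)
```

A purely polling subscriber could skip the callback entirely and just call events_since on a timer, which is the push/poll flexibility noted in the observations below.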

Observations

Publishers do not need to remember subscriber-specific state. (Thanks, Kafka, for showing us!) That makes it easy to implement the publisher side.

From the perspective of the publisher, delivery of events works both for clients that can receive callbacks and for those that need to poll. (It sort of emulates RSS, except that the starting time is a parameter provided by the client instead of a uniform window decided on by the publisher.)

Clients only need to keep a time stamp as state, something they probably have already anyway.

Clients can implement a polling or push strategy, or dynamically change between those, without the risk of losing data.

Feedback?

Would love your thoughts!

Friday, 21. October 2022

Werdmüller on Medium

The end of Twitter


Our online public squares are sunsetting. What’s next?

Continue reading on Medium »

Thursday, 20. October 2022

MyDigitalFootprint

How to build a #team fit for #uncertainty

The pandemic changed us, our views, what we value and how we work.  We might not recognise all the changes and hang on in the hope of a return to something we loved, but we must make the best of it now.  We should be aware that the change has not only affected us but also our teams. 

The Bruce Tuckman 1965 forming–storming–norming–performing model of group development is timeless: no one is likely to dissent that the phases remain necessary and inevitable for a team to grow, face challenges, tackle problems, find solutions, plan work, and deliver results.

However, because we have changed, so has the utility of the tools we apply to move along the journey from forming to performing. Tools learnt and built in stable and certain times have less applicability when we are faced with volatility and uncertainty.


It is the usefulness of tools we utilise that move us on the journey from forming to performing that has changed. 

More books and articles exist on “teams” and “leadership” than on almost any other management topic. However, how teams form today, and how to get them to perform, is continually changing. Below is a new framework for the tool bag. Adding a framework is like a tradesperson supplementing their faithful, well-used wooden screwdriver with a sophisticated modern electronic driver: both are needed, but the newer tool can be faster and has more applications.

Because this is not a book on team building, an overly simplistic view of the Tuckman model is to recognise that there are tensions and compromises in all new teams that have to be addressed (storming). When addressed, align the team to the reason this team exists (norming). Finally, focus on the tasks that are delegated to the team (performing). If only it were that simple or that quick!!! Critically, team leadership helps the team realise, faster than it would otherwise, that this team has an identity, purpose and unique capabilities.

The focus of the Peak-Paradox framework in this context is to aid in the unpacking of the storming part in the process. How to unpick and identify different tensions and compromises is essential, and there are many tools for this when the markets are stable and certain. The Peak Paradox framework has the most value in this part of a team's journey, especially when everything is changing.  Digital interactions, Gen Z and remote working mean we need new tools to supplement the existing ones.

The Peak Paradox framing makes us realise that teams and communities don’t naturally exist in all places equally. The purer the purpose that drives an individual, the less likely they can find others who will make a team and walk someone else's path.  Yes, there are certain exceptions, but those are not normal.  

Teams and communities don’t exist in all places equally.

At Peak Paradox, where you try to optimise for everything, teams will not naturally exist as those in the team cannot agree on what they are optimising for. Debate, argument and hostility remain forever, with a team never escaping the forming stage.  Indeed, forming a team with individuals who dwell at the two extremes (Peak Purpose and Peak Paradox) would appear to be futile. However, when humans' only mission is to survive, some of the best teams form.  Always an exception. 

Note: these comments should not be confused with applying the Peak Paradox framework to leadership or decision-making - the focus here is purely on team storming.


A team can start weak, messy or strong - the descriptions of the areas on the chart. They will all still follow the forming – storming – norming – performing model.  These three areas give rise to where teams form and where teams need to move if they are to perform.

Weak teams. In reality, the individuals are not weak; the team is weak. These teams are made of very strong-minded individuals who know what they want, can lead themselves and know what to do, but cannot work as a team at the outset. In this case, it is about unpacking what purpose they align to and how they would approach fulfilling the purpose this team has. This allows you to unpack what tensions and compromises they have to live with and how they will deal with them. The right question to ask each team member is, “Where do you naturally align on the four purposes?” Often these teams find they need to lose members or find a different leadership style to be able to move on.

Messy teams. This is because the individuals are messy: we come with bias, history, experience and incentives. Messy teams are full of individuals who grasp compromise and tension and live with it every day but have been unable to find the right place for them. They are drawn to optimise for many things at the same time; they comprehend ambiguity, complexity, volatility and uncertainty. Messy teams can be easily guided by strong leadership, but often the cracks appear much later, when an individual cannot live with the compromises that are now enforced. The right question to ask each team member is, “What is the one thing you will not compromise over because it creates tension you cannot live with?”

Strong teams.   Teams both start here and should come here to perform, as there is a balance between clarity of purpose, tensions and compromises - sufficient diversity of thought and experience means they can work through problems that occur on the journey the team travels in delivering.  

The team who starts here will still go on the same F-S-N-P path and, depending on the alignment the team has (if well selected), can get to performing fast. A misaligned team that was selected on the wrong criteria can fall apart and never get anywhere, because the individuals oppose the purpose and optimisation of others in the team. The division is divisive. Strong starting teams are not always the route to the best outcome. The trick here is to use the Peak Paradox model to select diverse team members who will be able to cope with the demands and requirements other team members place on the group. The right question to ask is, “What will you compromise on to make another team member more successful?”

The teams who come to this area from a different starting place, messy or weak, may take time; on that journey, some team members may have to be lost and others may be transformed, but they will arrive at strong alignment and cohesion by the time they are performing. This is a thing of beauty. The right question to ask when you get to norming is, “What sacrosanct thing can you compromise on to be part of the team?”


The value of the Peak Paradox framework to modern team building is it enables you to ask questions and plot where people are. This allows you to visualise gaps so you can work on how to bring a team together to perform.  This is very different to culture and style analysis.  

Why is this new? Because Gen-Z is much more opinionated and vocal about their purpose than previous generations, and far less likely to compromise to be in a team they don’t want to be in. Old tools still work, but new tools can help us get there faster.

The title is “How to build a #team fit for #uncertainty” 

When there is stability and certainty, teams can perform with a far narrower, more holistic and aligned view, principally because decision-making has more data, and history is a good predictor.  There is a greater demand for variance, tension and compromise in teams during instability and uncertainty.   Building teams that are resilient when all is changing demands a deeper understanding of what drives individuals and how they cope with new tensions and compromises.  To do that, we need new tools to help leadership and team builders visualise what they have and where they need to be.

My suggestion to start this, is that you

1. Plot where the team needs to be on the Peak Paradox map to deliver its objective.

2. Plot the individuals and determine if you have a team that can deliver or has too many tensions, which means compromise will not be sufficient to get them to perform.  

3. Do it again before each review or every time a new member joins or leaves.

4. Question yourself if the current team is too narrow and aligned or too divergent and divisive. 

Another reality: none of this is easy, and for anyone who entered the workplace post-1990, this is different to the other significant but localised market disruptions of 2001 and 2008.


Wednesday, 19. October 2022

Identity Praxis, Inc.

MEF Market Review: Personal Data and Identity Meeting of the Waters


The Mobile Ecosystem Forum released a report today—MEF Market Review: Personal Data and Identity Meeting of the Waters—that I’ve been working on for a while. You can download it for FREE here.

The report explains the current state of the personal data and identity market. Take a look. Let’s collaborate. 

The world is responding to the growing importance of personal data and identity. This response is reshaping the world’s markets. Regulatory, technological, cultural, and economic factors are shifting the context of personal data and identity: the what, when, why, who, where, and how. In light of these shifts, we’re witnessing the nature of personal data and identity change—i.e., the definition or lack thereof. We are seeing shifts in personal data control, i.e., from organizations to individuals. As a result of the Internet of Things (IoT) use, AI, and other technical advancements, personal data is exponentially growing in scope and scale. Many stakeholders are waking up to the value of personal data—not just the idea that it is the “new black gold” but something entirely different; that it is a non-rivalrous, non-depleting, regenerative asset. And finally, we’re seeing an explosion of people-centric regulations rolling out—by 2024, according to Gartner, 75% of the world’s population will be endowed with rights under one or more of these regulations, and organizations will be held accountable to a myriad of new obligations. This all means that we are witnessing the birth of the “personal data and identity meeting of the waters” and a new economy, the personal information economy, where individuals will have a legit seat at the economic table for personal data and identity.  

This report, the “MEF Market Report: Personal Data and Identity Meeting of the Waters,” provides a detailed overview of what’s happening to and with personal data and identity, why you should care, and what you—all of us—should consider doing to harness the power of personal data and identity responsibly. We hope that this report is used as a guide to help us come together to: 

Rebuild trusted relationships by inviting individuals to the table
Educate and empower all actors
Shape and reshape new and existing personal data and identity policies, frameworks, laws, and regulations
Attack cybercrime and enhance data stewardship practices
Lobby to address market failures and support people-centric infrastructure as a public utility
Consider interoperable technology standards and protocols
Envision new and evolved business models

The MEF Personal Data and Identity working group welcomes your feedback and contribution. Please message me and let’s discuss the world of personal data and identity. 

The post MEF Market Review: Personal Data and Identity Meeting of the Waters appeared first on Identity Praxis, Inc..


Doc Searls Weblog

The Rhetoric of War


I wrote this more than a quarter century ago when Linux Journal was the only publication that would have me, and I posted unsold essays and wannabe columns at searls.com. These postings accumulated in this subdirectory for several years before Dave Winer got me to blog for real, starting here.

Interesting how much has changed since I wrote this, and how much hasn’t. Everything I said about metaphor applies no less than ever, even as all the warring parties mentioned have died or moved on to other activities, if not battles. (Note that there was no Google at this time, and the search engines mentioned exist only as fossils in posts such as this one.)

Perhaps most interesting is the paragraph about MARKETS ARE CONVERSATIONS. While that one-liner had no effect at the time, it became a genie that would not return to its bottle after Chris Locke, David Weinberger, Rick Levine and I put it in The Cluetrain Manifesto in 1999. In fact, I had been saying “markets are conversations” to no effect at least since the 1980s. Now “join the conversation” is bullshit almost everywhere it’s uttered, but you can’t stop hearing it. Strange how that goes.

MAKE MONEY, NOT WAR
TIME TO MOVE PAST THE WAR METAPHORS OF THE INDUSTRIAL AGE

By Doc Searls
19 March 1997

“War isn’t an instinct. It’s an invention.”

“The metaphor is probably the most fertile power possessed by man.”

“Conversation is the socializing instrument par excellence.”

-José Ortega y Gasset

Patton lives

In the movie “Patton,” the general says, “Compared to war, all other forms of human endeavor shrink to insignificance.” In a moment of self-admonition, he adds, “God help me, I love it so.”

And so do we. For proof, all we have to do is pick up a trade magazine. Or better yet, fire up a search engine.

Altavista says more than one million documents on the Web contain the words Microsoft, Netscape, and war. Hotbot lists hundreds of documents titled “Microsoft vs. Netscape,” and twice as many titled “Netscape vs. Microsoft.”

It’s hard to find an article about the two companies that does not cast them as opponents battling over “turf,” “territory,” “sectors” and other geographies.

It’s also hard to start a conversation without using the same metaphorical premise. Intranet Design Magazine recently hosted a thread titled “Who’s winning?? Netscape vs. Microsoft.” Dave Shafer starts the thread with “Wondering what your informed opinion is on who is winning the internet war and what affects this will have on inter/intranet development.” The first respondent says, “sorry, i’m from a french country,” and “I’m searching for economical informations about the war between Microsoft and Netscape for the control of the WEB industrie.” Just as telling is a post by a guy named Michael, who says “Personaly I have both on my PC.”

So do I. Hey, I’ve got 80 megs of RAM and a 2 gig hard drive, so why not? I also have five ISPs, four word processors, three drawing programs, and two presentation packages. I own competing products from Apple, IBM, Microsoft, Netscape, Adobe, Yamaha, Sony, Panasonic, Aiwa, Subaru, Fisher Price and the University of Chicago — to name just a few I can see from where I sit. I don’t sense that buying and using any of these is a territorial act, a victory for one company, or a defeat for another.

But that doesn’t mean we don’t have those perceptions when we write and talk about companies and the markets where they compete. Clearly, we do, because we understand business — as we understand just about everything — in metaphorical terms. As it happens, our understanding of companies and markets is largely structured by the metaphors BUSINESS IS WAR and MARKETS ARE BATTLEFIELDS.

By those metaphors we share an understanding that companies fight battles over market territories that they attack, defend, dominate, yield or abandon. Their battlefields contain beachheads, bunkers, foxholes, sectors, streams, hills, mountains, swamps, streams, rivers, landslides, quagmires, mud, passages, roadblocks, and high ground. In fact, the metaphor BUSINESS IS WAR is such a functional conceptual system that it unconsciously pumps out clichés like a machine. And since sports is a sublimated and formalized kind of war, the distances between sports and war metaphors in business are so small that the vocabularies mix without clashing.

Here, I’ll pick up the nearest Business Week… it’s the January 13 issue. Let’s look at the High Technology section that starts on page 104. The topic is Software and the headline reads, “Battle stations! This industry is up for grabs as never before…” Here’s the first paragraph, with war and sports references capitalized: “Software was once an orderly affair in which a few PLAYERS called most of the shots. The industry had almost gotten used to letting Microsoft Corp. set the agenda in personal computing. But as the Internet ballooned into a $1 billion software business in 1996, HUGE NEW TERRITORIES came up for grabs. Microsoft enters the new year in a STRONG POSITION TO REASSERT CONTROL. But it will have to FIGHT OFF Netscape, IBM, Oracle and dozens of startups that are DESPERATELY STAKING OUT TURF on the Net. ‘Everyone is RACING TO FIND MARKET SPACE and get established…'”

Is this a good thing? Does it matter? The vocabularies of war and sports may be the most commonly used sources of metaphors, for everything from academic essays to fashion stories. Everybody knows war involves death and destruction, yet we experience little if any of that in the ordinary conduct of business, or even of violent activities such as sports.

So why should we concern ourselves with war metaphors, when we all know we don’t take them literally?

Two reasons. First, we do take them literally. Maybe we don’t kill each other, but the sentiments are there, and they do have influences. Second, war rarely yields positive sums, except for one side or another. The economy the Internet induces is an explosion of positive sums that accrue to many if not all participants. Doesn’t it deserve a more accurate metaphor?

For answers, let’s turn to George Lakoff.

The matter of Metaphor

“Answer true or false,” Firesign Theater says. “Dogs flew spaceships. The Aztecs invented the vacation… If you answered ‘false’ to any of these questions, then everything you know is wrong.”

This is the feeling you begin to get when you read George Lakoff, the foremost authority on the matter of metaphor. Lakoff is Professor of Linguistics and Cognitive Science at UC-Berkeley, the author of Women, Fire and Dangerous Things and Moral Politics: What Conservatives Know that Liberals Don’t. He is also co-author of Metaphors We Live By and More than Cool Reason. All are published by the University of Chicago Press.

Maybe that’s why they didn’t give us the real story in school. It would have been like pulling the pins out of a bunch of little hand grenades.

If Lakoff is right, the most important class you ignored in school was English — not because you need to know all those rules you forgot or books you never read, but because there’s something else behind everything you know (or think you know) and talk about. That something is a metaphor. (And if you think otherwise, you’re wrong.)

In English class — usually when the subject was poetry — they told us that meaning often arises out of comparison, and that three comparative devices are metaphor, simile, and analogy. Each compares one thing to another thing that is similar in some way:

Metaphors say one thing is another thing, such as “time is money,” “a computer screen is a desktop,” or (my favorite Burt Lancaster line) “your mind is a cookie of arsenic.”
Similes say one thing is like another thing, such as “gone like snow on the water” or “dumb as a bucket of rocks.”
Analogies suggest partial similarities between unalike things, as with “licorice is the liver of candy.”

But metaphor is the device that matters, because, as Lakoff says, “We may not always know it, but we think in metaphor.” And, more to the point, “Metaphors can kill.” Maybe that’s why they didn’t give us the real story in school. It would have been like pulling the pins out of a bunch of little hand grenades.

But now we’re adults, and you’d think we should know how safely to arm and operate a language device. But it’s not easy. Cognitive science is relatively new and only beginning to make sense of the metaphorical structures that give shape and meaning to our world. Some of these metaphors are obvious but many others are hidden. In fact, some are hidden so well that even a guru like Lakoff can overlook them for years.

Lakoff’s latest book, “Moral Politics: What Conservatives Know and Liberals Don’t,” was inspired by his realization that the reason he didn’t know what many conservatives were talking about was that, as a Liberal, he didn’t comprehend conservative metaphors. Dan Quayle’s applause lines went right past him.

After much investigation, Lakoff found that central to the conservative worldview was a metaphor of the state as a strict father and that the “family values” conservatives espouse are those of a strict father’s household: self-reliance, rewards and punishments, responsibility, respect for authority — and finally, independence. Conservatives under Ronald Reagan began to understand the deep connection between family and politics, while Liberals remained clueless about their own family metaphor — the “nurturant parent” model. Under Reagan, Lakoff says, conservatives drove the language of strict father morality into the media and the body politic. It won hearts and minds, and it won elections.

So metaphors matter, big time. They structure our perceptions, the way we make sense of the world, and the language we use to talk about things that happen in the world. They are also far more literal than poetry class would lead us to believe. Take the metaphor ARGUMENT IS WAR —

“It is important to see that we don’t just talk about arguments in terms of war. We can actually win or lose arguments. We see the person we are arguing with as an opponent. We attack his decisions and defend our own. We gain and lose ground. We plan and use strategies… Many of the things we do in arguing are partially structured by the concept of war.” (From Metaphors We Live By)

In our culture argument is understood and structured by the war metaphor. But in other cultures it is not. Lakoff invites us to imagine a culture where argument is viewed as dance, participants as performers and the goal to create an aesthetically pleasing performance.

Right now we understand that “Netscape is losing ground in the browser battle,” because we see the browser business as a territory over which Netscape and Microsoft are fighting a war. In fact, we are so deeply committed to this metaphor that the vocabularies of business and war reporting are nearly indistinguishable.

Yet the Internet “battlefield” didn’t exist a decade ago, and the software battlefield didn’t exist a decade before that. These territories were created out of nothingness. Countless achievements have been made on them. Victories have been won over absent or equally victorious opponents.

In fact, Netscape and Microsoft are creating whole new markets together, and both succeed mostly at nobody’s expense. Netscape’s success also owes much to the robust nature of the Windows NT Server platform.

The war stories we’re telling about the Internet are turning into epic lies.

At the same time Microsoft has moved forward in browsers, directory services, languages, object models and other product categories — mostly because it’s chasing Netscape in each of them.

Growing markets are positive-sum creations, while wars are zero-sum at best. But BUSINESS IS WAR is a massive metaphorical machine that works so well that business war stories almost write themselves. This wouldn’t be a problem if business was the same now as it was twenty or fifty years ago. But business is changing fast, especially where the Internet is involved. The old war metaphor just isn’t doing the job.

Throughout the Industrial Age, both BUSINESS IS WAR and MARKETS ARE BATTLEFIELDS made good structure, because most industries and markets were grounded in physical reality. Railroads, shipping, construction, automobiles, apparel and retail were all located in physical reality. Even the phone system was easily understood in terms of phones, wires and switches. And every industrial market contained finite labor pools, capital, real estate, opportunities and natural resources. Business really was war, and markets really were battlefields.

But the Internet is hardly physical and most of its businesses have few physical limitations. The Web doesn’t look, feel or behave like anything in the analog world, even though we are eager to describe it as a “highway” or as a kind of “space.” Internet-related businesses appear and grow at phenomenal rates. The year 1995 saw more than $100 billion in new wealth created by the Internet, most of it invested in companies that were new to the world, or close to it. Now new markets emerge almost every day, while existing markets fragment, divide and expand faster than any media can track them.

For these reasons, describing Internet business in physical terms is like standing at the Dawn of Life and describing new species in terms of geology. But that’s what we’re doing, and every day the facts of business and technology life drift farther away from the metaphors we employ to support them. We arrive at pure myth, and the old metaphors stand out like bones from a dry corpse.

Of course myths are often full of truth. Fr. Seán Olaoire says “there are some truths so profound only a story can tell them.” But the war stories we’re telling about the Internet are turning into epic lies.

Describing Internet business in physical terms is like standing at the Dawn of Life and describing new species in terms of geology.

What can we do about it?

First, there’s nothing we can do to break the war metaphor machine. It’s just too damn big and old and good at what it does. But we can introduce some new metaphors that make equally good story-telling machines, and tell more accurately what’s going on in this new business world.

One possibility is MARKETS ARE CONVERSATIONS. These days we often hear conversations used as synonyms for markets. We hear about “the privacy conversation” or “the network conversation.” We “talk up” a subject and say it has a lot of “street cred.” This may not be much, but it does accurately structure an understanding of what business is and how markets work in the world we are creating with the Internet.

Another is the CONDUIT metaphor. Lakoff credits Michael Reddy with discovering hidden in our discussions of language the implication of conduit structure:

Your thinking comes through loud and clear.
It’s hard to put my ideas into words.
You can’t stuff ideas into a sentence.
His words carry little meaning.

The Net facilitates communication, and our language about communication implies conduits through which what we say is conveyed. The language of push media suggests the Net is less a centerless network — a Web — than a set of channels through which stuff is sent. Note the preposition. I suggest that we might look more closely at how much the conduit metaphor is implicit in what we say about push, channels and related subjects. There’s something to it, I think.

My problem with both CONDUIT and CHANNEL is that they don’t clearly imply positive sums, and don’t suggest the living nature of the Net. Businesses have always been like living beings, but in the Net environment they enjoy unprecedented fecundity. What’s a good metaphor for that? A jungle?

Whatever, it’s clearly not just a battlefield, regardless of the hostilities involved. It’s time to lay down our arms and start building new conceptual machines. George Lakoff will speak at PC Forum next week. I hope he helps impart some mass to one or more new metaphorical flywheels. Because we need to start telling sane and accurate stories about our businesses and our markets.

If we don’t, we’ll go on shooting at each other for no good reason.

Links

Here are a few links into the worlds of metaphor and cognitive science. Some of this stuff is dense and heavy; but hey, it’s not an easy subject. Just an important one.

The University of Oregon Metaphor Center, which has piles of other links. A good place to start.
The Conceptual Metaphor Home Page, UC-Berkeley’s massive list of metaphor names, source domains and target domains. Oddly, neither business nor markets can be found on any of the three lists. Let’s get them in there.
Morality, Metaphor and Politics, Or, Why Conservatives Have Left Liberals In the Dust, by George Lakoff. This is Moral Politics condensed to an essay. An excellent introduction to conceptual metaphor, made vivid by a very hot topic.
Metaphor and War: The Metaphor System Used to Justify War in the Gulf. Be sure to look at both Part 1 and Part 2.
Conceptual Blending on the Information Highway: How Metaphorical Inferences Work, by Tom Rohrer. An exploration of the INFORMATION HIGHWAY metaphor for the Internet.

I also explored the issue of push media in Shoveling Push and When Push Becomes Shove. And I visited the Microsoft vs. Netscape “war” in Microsoft + Netscape: The Real Story. All three are in Reality 2.0.


@_Nat Zone

Public comment period opens for JIS X 9252, "Online privacy notices and consent"


The draft JIS X 9252 is a proposed JIS standard1 based on ISO/IEC 29184 Online privacy notice and consent. It describes how to write the document (the notice) that informs individuals about the collection and use of their personal information, commonly called a "privacy policy" in Japan, and, where consent is used as the basis for processing, how to obtain consent based on that notice. The parts on how to write and present the notice should be used even when consent is not the basis for processing.

ISO/IEC 29184 started from a guideline by Japan's Ministry of Economy, Trade and Industry (a document containing no "shall" requirements), but comments from the EDPB turned many of the "should"s into "shall"s, strengthening it considerably. Now it has finally been rendered into Japanese and is on its way to publication as a JIS standard.

I would like to link directly to the notice page, but I have not been able to make a direct link work. The list of notices can be reached from here → https://www.jisc.go.jp/app/jis/general/GnrOpinionReceptionNoticeList?show

Search for "JISX9252" there and you will reach a page that looks like this.

(Source) Japanese Industrial Standards Committee

From that page you can download the draft and submit comments.

This standard took quite a long time. I would like to express my heartfelt thanks to the secretariat2, the people at METI, the committee members, and the Japanese Standards Association for all their efforts.


Summary of Digital Minister Kono's October 13 press conference on the My Number Card, which covered several important topics beyond integration with the health insurance card, plus my personal impressions


On October 13 at around 10:00, Minister Kono held a press conference on efforts to spread the My Number Card. There were important announcements, so although a little late, here is a summary. The original press conference can be watched on Youtube.

I. Minister Kono's report to the Prime Minister

The My Number Card is to serve as a passport for building the new digital society. To expand its adoption and use, under instructions from the Prime Minister, Kono has chaired a liaison meeting of the relevant ministries since 9/29. The results of the ministries' deliberations have been compiled, and he reported to the Prime Minister on measures to accelerate acquisition and use of the card and on card-related measures in the economic stimulus package.

1. Integration of the My Number Card and the health insurance card
To accelerate the integration of the My Number Card and the health insurance card (already approved by the Cabinet), a supplementary budget request is planned so that home-visit medical care, massage and acupuncture practices, and the like can support the My Number Card.
After ensuring that everyone obtains a My Number Card and reviewing the card application procedures, the aim is to abolish the current health insurance card in the autumn of fiscal 2024.
2. Integration with the driver's license
Work with the National Police Agency to examine whether the integration with the driver's license, planned for the end of fiscal 2024, can be brought forward.
From fiscal 2023, online renewal courses (in some prefectures) will be extended from gold-license holders to ordinary drivers. (It will take a little longer to reach all 47 prefectures.)
3. Putting the My Number Card's digital certificates on smartphones
Make it possible to file online applications, log in to Mynaportal, obtain certificates at convenience stores, and so on from a smartphone. The system is currently being built; the service on Android smartphones will start on 2023-05-11.
4. A service providing the four basic attributes to financial institutions and other businesses
From 5/16 next year, a service will start that provides the four basic attributes, such as the address, to financial institutions and other businesses that use the public personal authentication service, on the premise of the individual's consent. This will allow financial institutions and other businesses to carry out ongoing customer due diligence efficiently and quickly.
5. Waiving the digital certificate usage fee for private-sector businesses for the time being
Checking the validity of a digital certificate currently costs 20 yen per check for a signing certificate and 2 yen per check for a user-verification certificate; both fees will be waived for the next three years, starting in January next year.

II. Instructions from the Prime Minister

Regarding the comprehensive economic stimulus package to be finalized this month, the Prime Minister instructed that the following three points be included:

1. accelerating the integration of various cards, such as the driver's license and the health insurance card, into the My Number Card; strategic publicity and support for municipalities to promote acquisition of the card; and waiving the digital certificate fees for private-sector businesses for the time being;
2. support for pilot projects on private-sector use of the card, and support for expanding use of the card by municipalities;
3. measures to expand the situations in which the card can be used.

In addition, regarding the integration of the My Number Card and the health insurance card, the Prime Minister gave the following instructions:

The environment needs to be prepared carefully, down to the finest details. The work must proceed attentively so that understanding is gained both from the citizens who receive medical care and from the medical institutions and other parties who provide it. For this reason, until the comprehensive economic package is decided, the Minister of Health, Labour and Welfare, the Digital Minister, and the Minister for Internal Affairs and Communications should work together on the final details, so that nothing is overlooked and the understanding of the parties concerned is obtained.

III. Personal impressions

I have not yet listened to the Q&A session, so these are interim impressions…

"5. Waiving the digital certificate usage fee for private-sector businesses" is quietly significant. This will make the certificates much easier to use.
"4. The service providing the four basic attributes to financial institutions and other businesses" is good: being able to track address changes should be very cost-effective, and it also improves convenience for users. I am curious how it will be implemented. It should be offered as an API… Given that it is consent-based and that access sources will be restricted, it will presumably be a mechanism like FAPI, as used in Open Banking.
For "the integration of the My Number Card and the health insurance card", I am curious how delegation use cases will be handled. A health insurance card is fundamentally an authorization token representing the right to access insurance services, not an identity document (an authentication token); it is a token with a limited scope. That property has made it relatively easy to hand the card over to, say, a home helper. If it is folded into the My Number Card, an almighty token, and the scope-limited token disappears, problems seem likely in these "hand it over" use cases. I am curious how this will be operated.
Also, in scenarios where the card is used as an insurance card, the holder will often be unable to use memory-based authentication, and the "face" modality may well be unavailable due to injury and the like. There are also so-called verifier-offline cases where the card reader has no power. What happens then? In the end, will it just be an insurance-card applet on the IC card? But in that case, the printed face of the card could no longer be used… Part of the attachment to a physical card is precisely the need to use the printed face. Perhaps this is what will feed into the "review of the card application procedures".1
"2. Integration with the driver's license" is welcome, as I argued for it at the stage when the scheme was being designed. Although what I argued for was using it as an issuance infrastructure.
As for "3. Putting the My Number Card's digital certificates on smartphones", I feel there is no need to force this to be an X.509 certificate. The card will still be issued, right? If so, it might be fine to go as light as putting purpose-specific tokens on the phone that have been "blessed" by the public personal authentication certificate on the card.

idcon vol.29 WebAuthn, Next Stage: a summary


On October 12, idcon vol.29 was held at Money Forward in Tamachi.1

Below is a summary of what I was tweeting in real time.

Passkey implementation on Android & Chrome, by @agektmr
Presentation overview: 1) Passkeys are a technology that replaces passwords; they also replace two-step verification and the like. 2) Authentication happens locally; biometrics are never sent over the network. 3) Once created, passkeys are synchronized across devices.
Passkeys are a coordinated effort by Google, Apple, and Microsoft.
What is great about passkeys: the technology is standardized by the @FIDOAlliance and the W3C WebAuthn WG. A passkey is two-factor by itself: possession plus knowledge or inherence. It is resistant to phishing attacks because a passkey is bound to a domain. If passkeys spread, passwords become unnecessary; only the public key sits on the server, so the risk is low even if it leaks.
Demo: in this example a prompt saying "Use your screen lock" appears and you sign in with biometrics.
Google's #passkey is implemented with discoverable credentials. With Apple, any FIDO credential you create is a passkey.
The scope of "passkey" differs subtly by company: Apple calls every #fido credential a passkey, while Google only calls the ones synchronized across devices passkeys. They are synchronized via Google Password Manager.
Why synchronization is the breakthrough: previously a credential was bound to the local device, so it could not be used on another device. You had to log in from the old device or fall back to SMS authentication, and with 100 devices…
Synchronization is protected by the device unlock code. Google Play services is required. On Android, #passkey can be used wherever Google Password Manager is supported.
Google Password Manager is not available on Mac/iOS/iPadOS/Windows; it only works on Android. In the Apple ecosystem the direction is to use iCloud Keychain, but at this stage there is no API.
Google also plans to allow third-party password managers.
Recommended #passkey UX: 1) Form autofill: taking into account users who still use passwords, this gives a seamless login (Conditional UI). Create the passkey right after the user's first login. 2) When crossing ecosystems, use the mode renamed from caBLE to Hybrid: display a QR code and scan it with the phone. Because proximity is verified over BLE, sending the QR code somewhere else does not let anyone log in.
Device public key: defined to answer the question of whether it is acceptable that the device-bound assumption no longer holds. A WebAuthn extension. Apple has no plans to support it; Android will support it, but without attestation at first.

Q&A

Q. Does #passkey synchronization really sync the private key?
A. Yes, in encrypted form. <The questioner: "That's bold."

Q. What about E2EE?
A. Don't know.

Q. With Conditional UI, if you try to use hybrid with another device, won't the account fail to be found?
A. There is room for improvement. Apple's approach seems good.

Passkey implementation on iOS, iPadOS, and macOS, by @nov
Presentation overview: #passkey can be used on iOS 16+, iPadOS 16+ (end of this month), and macOS 13+.
#passkey was originally announced at WWDC21, with simplicity as the selling point. Always with you (if you only use Apple devices).
Passkey + Autofill = new WebAuthn UX. We have been doing #fido for about ten years and nobody uses it. Nobody cares about security. Even if security drops a little, the UX has to be good.
Demo: log in from the browser using a QR code, with the iPhone as the key. There is also a link below the QR code to invoke a yubikey, but Conditional UI is not good for people who use a yubikey. Passkey synchronization completed in a few seconds.
Problems solved: anyone who owns Apple devices gets synchronization; where we used to do Face ID twice (email field fill-in and password fill-in), once is now enough.
Problems not solved: updating the email address stored in a discoverable credential is being discussed at the W3C but is not solved yet.
The case where you want to use Conditional UI for re-authentication: with Conditional UI you cannot specify which key to use, so multiple candidates show up in the password fill-in. This has now been merged into the spec.
How do you do sign-up with autofill? With passwords, the password manager just handles it. Please fix this somehow.
The case that crosses all three platforms is not solved; from that point of view it is not that much of a game changer.
On Apple, if iCloud Keychain is disabled, you cannot create a FIDO credential at all. It is disabled on corporate devices. What are we supposed to do? <However, roaming credentials still work.
If there were a POST /.well-known/webauthn-credentials, like POST /.well-known/change-password, users could get the job done without even noticing.

Q&A

Q. So if iCloud Keychain is disabled, are we supposed to use a Yubikey or similar?
A. Yes. Use an iPhone or a roaming credential. (Though I suspect Apple's real answer is "that's too much hassle, just use a password.")

An introduction to Yahoo! JAPAN's WebAuthn UX and thoughts on passkey-ready UX, by YumejiHattori (a junior colleague of @kura_lab)
Yahoo! JAPAN's login is a two-screen flow (identifier-first pattern). If FIDO is registered for the ID, WebAuthn fires automatically on the second screen. In that case, on a device where no credential is registered, a failure dialog appears.
This stems from FIDO deliberately not supporting something like credentials.exists(credentialId), because it could be abused as a SuperCookie. To work around it, they guess from the OS whether a credential is likely to exist.
Issue 1 introduced by passkeys: if the credential is registered on #iOS but not on #MacOS, #webauthn does not fire. If the BackupState (BS) flag in authenticatorData.flags is set, the credential may be synchronized between iOS and macOS, so WebAuthn can be triggered on the Mac as well (a small server-side sketch of this flag check appears after the Q&A below).
Issue 2 introduced by passkeys: caused by not supporting hybrid (caBLE). If you log in on Windows using iOS, you end up with Windows: registered, iOS: not registered. The service would like to suggest WebAuthn to the user, but…
Example approaches for full passkey support: force the user to choose an authentication method (e.g. GitHub), where "Use biometrics" is shown very prominently. But most users press that button whether or not they can actually use it, leading to a disappointing experience.
With Conditional UI, Touch ID appears as an option when a credential exists; when it does not, "use a password or another device…" appears. This is exactly what they wanted, but
Conditional UI issue: behavior when the credential is deleted on the service side: it is still usable on the device but the service no longer accepts it → a disappointing experience.
Given all this, what would a better UX look like? if re-authentication use case: auto-trigger WebAuthn based on the authentication history in cookies; elif new login: Conditional UI; else: WebAuthn is unlikely to succeed, so show a (low-key) WebAuthn selection screen, including hybrid.

Q&A

Q. When someone logs in from Windows via hybrid using iOS, can you not tell that iOS was used?
A. When registering via hybrid you cannot tell. Apple deliberately makes it unknowable.

Q. How do you bring a user who has fallen back to SMS back to WebAuthn?
A. Probably by having them register WebAuthn on the verification-code screen.

Q. If an operation on the RP side deletes the credential, the same thing happens with passwords. With a password, a change is reflected in the password manager. What about passkeys?
A. On Android, creating a new credential with the same user handle overwrites the old one. (by @agektmr)

Q. Couldn't AirDrop end up being used to share subscriptions?
A. On many sites, such as Netflix, sharing within a family already happens, so this case is not a problem.
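
The BackupState check described in the Yahoo! JAPAN talk above can be done server-side by reading the flags byte of the WebAuthn authenticator data. The following is a small Python sketch based on the flag bit positions in the WebAuthn spec; the function name is mine.

```python
# Flag bit positions from the WebAuthn authenticator data layout:
# a 32-byte rpIdHash is followed by a single flags byte.
FLAG_UP = 0x01  # user present
FLAG_UV = 0x04  # user verified
FLAG_BE = 0x08  # backup eligible
FLAG_BS = 0x10  # backup state (credential is currently backed up / synced)


def passkey_flags(authenticator_data: bytes) -> dict:
    """Return the flag bits relevant to deciding whether a credential may be a synced passkey."""
    flags = authenticator_data[32]
    return {
        "user_present": bool(flags & FLAG_UP),
        "user_verified": bool(flags & FLAG_UV),
        "backup_eligible": bool(flags & FLAG_BE),
        "backed_up": bool(flags & FLAG_BS),
    }
```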

General Q&A

Q. Is AirDrop phishing possible?
A. It only works if you are in each other's contacts, but it could conceivably be circumvented.

Q. Will the word "passkey" be used toward end users? The current UX differs from company to company.
A. It is not yet used in the UI itself, but it is used in help pages and the like.

Q. You can get prompted for a passkey in the email field and then again in the password field. The platforms' support timelines are all over the place.
A. Considering reverse brute force…

Q. For re-authentication, supporting #passkey makes you dependent on the risk of a platform account takeover; how do RPs think about that?
A. If the risk is unacceptable, don't use it. <A certain US financial institution groaned at this (by @_nat).

Q. For consumers, is the assumption that there is no attestation?
A. For a certain company, yes, the assumption is no attestation.

Q. Without attestation, couldn't a malicious authenticator cheat?
A. In theory, yes.
A2. A certain platform vendor in the US said nobody uses attestation. In fact, some do.
A3. Isn't it that what used to be trusted under a gentlemen's agreement has now explicitly become untrustworthy?

Q. Is it "passkey" or "passkeys"?
A. Treated the same as password/passwords.
A2. Incidentally, you can think of it as a better password: a password you know is being managed by a password manager.


Doc Searls Weblog

Places


Let’s say you want to improve the Wikipedia page for Clayton Indiana with an aerial photograph. Feel free to use the one above. That’s why I shot it, posted it, and licensed it permissively. It’s also why I put a helpful caption under it, and some call-outs in mouse-overs.

It’s also why I did the same with Danville, Indiana:

Also Brownsville, Indiana, featuring the Brickyard VORTAC station (a navigational beacon used by aircraft):

Eagle Creek Park, the largest in Indianapolis, and its Reservoir:

The district of Indianapolis charmlessly called Park 100:

The White River, winding through Indianapolis:

Where the White River joins the Wabash, which divides Southern Indiana from Southern Illinois (which is on the far side here, along with Mt. Carmel):

Among other places.

These were shot on the second leg of a United flight from Seattle to Indianapolis by way of Houston. I do this kind of thing on every flight I take. Partly it’s because I’m obsessed with geography, geology, weather, culture, industry, infrastructure, and other natural things. And partly it’s to provide a useful service.

I don’t do it for the art, though sometimes art happens. For example, with this shot of salt ponds at the south end of San Francisco Bay:

Airplane windows are not optically ideal for photography. On the contrary, they tend to be scratched, smudged, distorted, dirty, and worse. Most of the photos above were shot through a window that got frosty and gray at altitude and didn’t clear until we were close to landing. The air was also hazy. For cutting through that I can credit the dehaze slider in Adobe Photoshop 2021. I can also thank Photoshop for pulling out color and doing other things that make bad photos useful, if not good in the artsy sense. They fit my purpose, which is other people’s purposes.

In addition to Adobe, I also want to tip my hat toward Sony, for making the outstanding a7iv mirrorless camera and the 24-105mm f/4 FE G OSS lens I used on this flight. Also Flickr, which makes it easy to upload, organize, caption, tag, and annotate boundless quantities of full- (and other-) size photos—and to give them Creative Commons licenses. I’ve been using Flickr since it started in 2005, and remain a happy customer with two accounts: my main one, and another focused on infrastructure.

While they are no longer in a position to care, I also want to thank the makers of iView MediaPro, Microsoft Expressions and PhaseOne MediaPro for providing the best workflow software in the world, at least for me. Alas, all are now abandonware, and I don’t expect any of them to work on a 64-bit operating system, which is why, for photographic purposes, I’m still sitting on MacOS Mojave 10.14.6.

I’m hoping that I can find some kind of substitute when I get a new laptop, which will inevitably come with an OS that won’t run the oldware I depend on. But I’ll save that challenge for a future post.

Monday, 17. October 2022

Damien Bod

Is scanning QR Codes for authentication safe?


This article explains why cross device authentication has security issues as it is subject to phishing attacks unless further authentication is used in the client. Scanning QR Codes for authentication does not protect against phishing and leaves the users open to having their session stolen.

Phishing

There are many forms of phishing, and this is one of the biggest problems the industry needs to solve at present. As a service or software provider, it is not reasonable to expect your clients to be wise enough not to fall for a phishing attack. We need to create systems where phishing is not possible. At present the industry provides three solutions to protect against phishing: FIDO2, certificate authentication using PKI, and Windows Hello for Business authentication. FIDO2 will become much more user friendly once new hardware devices have built-in FIDO2 keys and the rollout of passkeys is supported by most of the big service providers.

When using cross device authentication, it is not possible to protect against phishing, or at least we have no known solution for this at present.

Phishing in Self sovereign identity

Users authenticating on HTTPS websites using verifiable credentials stored in their wallet are still vulnerable to phishing attacks. This can be improved by using FIDO2 as a second factor to the SSI authentication. The DIDComm communication between agents has strong protection against phishing, but this does not protect against phishing when the flow is started using a QR Code from a separate device.

Self sovereign identity phishing scenario using HTTPS websites:

1. User opens a phishing website in the browser using HTTPS and clicks the fake sign-in.
2. Phishing service initializes the authentication using a correct SSI sign-in on the correct website using HTTPS and gets presented with a QR Code (URI link) for a digital wallet to complete.
3. Phishing service presents this back on the phishing website to the victim.
4. Victim scans the QR Code using the digital wallet and completes the authentication using the agent in the digital wallet, DIDComm V2, trusted registry etc.
5. When the victim completes the sign-in using the out of band digital wallet and agent, the HTTPS website being used by the attacker gets updated with the session of the victim.

This can only be prevented by including the browser's client-side origin in the flow, signing it, and returning it to the server to be validated. Unless the origin from the client browser is used and validated in the authentication process, this type of phishing attack cannot be prevented. The browser client-side origin is not used in the SSI login.
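
As a rough illustration only, a server-side check along those lines might look like the sketch below, loosely modelled on how WebAuthn binds assertions to an origin. Every name here is hypothetical: it assumes the wallet signs a payload that includes the origin observed by the browser where the session lives, and that the server already knows the wallet's public key.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

EXPECTED_ORIGIN = "https://login.example.com"   # the site that issued the QR code (hypothetical)


def accept_cross_device_login(signed_payload: bytes, signature: bytes,
                              wallet_public_key: Ed25519PublicKey) -> bool:
    """Accept the QR-initiated login only if the wallet's signature verifies AND the
    origin captured by the browser running the session matches the expected origin."""
    try:
        wallet_public_key.verify(signature, signed_payload)
    except InvalidSignature:
        return False
    payload = json.loads(signed_payload)
    # A phishing site relaying the QR code would surface here with its own origin.
    return payload.get("origin") == EXPECTED_ORIGIN
```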

Here’s the same problem description: Risk Mitigation for Cross Device Flows

Phishing in cross device

In general, any cross device authentication does not protect against phishing. You need further authentication in the browser/agent client where the session is running.

Comparing apples and oranges

A lot of people pushing these systems compare this form of authentication with single-factor password authentication. This type of cross device authentication is indeed much better than password authentication, but the industry has already solved that problem: in any good enterprise or secure system, users can no longer rely on a single password. It is just not rolled out to the wider public in the consumer market. Introducing another system will not solve this either.

The cross device application running on the separate device can authenticate its part of the flow correctly. This step in the authentication process is fine and cannot be phished if implemented correctly, for example using DIDComm V2 and trust registries, but it does not solve the phishing weakness when the flow starts from an HTTPS website with a QR Code scanned from the original device.

Workarounds

Use a phishing resistant authentication like FIDO2, a PKI solution using certificates, or Windows Hello for Business. Once authenticated, the QR Code can be scanned in a safe way and the cross device authentication can be used, for example, to verify an identity. This works because you are already inside a secure session and that same session gets updated with the scanned data.

Links

https://datatracker.ietf.org/doc/draft-kasselman-cross-device-security/

Challenges to Self Sovereign Identity

https://learn.microsoft.com/en-us/azure/active-directory/develop/msal-authentication-flows

https://docs.oasis-open.org/esat/sqrap/v1.0/cs01/sqrap-v1.0-cs01.html#_Toc103082217

Sunday, 16. October 2022

Doc Searls Weblog

On digital distance



In July 2008, when I posted the photo above on this blog, some readers thought Santa Barbara Mission was on fire. It didn’t matter that I explained in that post how I got the shot, or that news reports made clear that the Gap Fire was miles away. The photo was a good one, but it also collapsed three dimensions into just two. Hence the confusion. If you didn’t know better, it looked like the building was on fire. The photo removed distance.

So does the Internet, at least when we are there. Let’s look at what there means.

Pre-digital media were limited by distance, and to a high degree defined by it. Radio and television signals degrade across distances from transmitters, and are limited as well by buildings, terrain, weather, and (on some frequency bands), ionospheric conditions. Even a good radio won’t get an absent signal. Nor will a good TV. Worse, if you live more than a few dozen miles from a TV station’s transmitter, you need a good antenna mounted on your roof, a chimney, or a tall pole. For signals coming from different locations, you need a rotator as well. Even on cable, there is still a distinction between local channels and cable-only ones. You pay more to get “bundles” of the latter, so there is a distance in cost between local and distant channel sources. If you get your TV by satellite, your there needs to be in the satellite’s coverage footprint.

But with the Internet, here and there are the same. Distance is gone, on purpose. Its design presumes that all physical and wireless connections are one, no matter who owns them or how they get paid to move chunks of Internet data. It is a world of ends meant to show nothing of its middles, which are countless paths the ends ignore. (Let’s also ignore, for the moment, that some countries and providers censor or filter the Internet, in some cases blocking access from the physical locations their systems detect. Those all essentially violate the Internet’s simple assumption of openness and connectivity for everybody and everything at every end.)

For people on the Internet, distance is collapsed to the height and width of a window. There is also no gravity because space implies three dimensions and your screen has only two, and the picture is always upright. When persons in Scotland and Australia talk, neither is upside down to the other. But they are present with each other and that’s what matters. (This may change in the metaverse, whatever that becomes, but will likely require headgear not everyone will wear. And it will still happen over the Internet.)

Digital life, almost all of which now happens on the Internet, is new to human experience, and our means of coping are limited. For example, by language. And I don’t mean different ones. I mean all of them, because they are made for making sense of a three-dimensional physical world, which the Internet is not.

Take prepositions. English, like most languages, has a few dozen prepositions, most of which describe placement in three-dimensional space. Over, around, under, through, beside, within, off, on, over, aboard… all presume three dimensions. That’s also where our bodies are, and it is through our bodies that we make sense of the world. We say good is light and bad is dark because we are diurnal hunters and gatherers, with eyes optimized for daylight. We say good is up and bad is down because we walk and run upright. We “grasp” or “hold on” to an idea because we have opposable thumbs on hands built to grab. We say birth is “arrival,” death is “departure” and careers are “paths,” because we experience life as travel.

But there are no prepositions yet that do justice to the Internet’s absence of distance. Of course, we say we are “on” the Internet like we say we are “on” the phone. And it works well enough, as does saying we are “on” fire or drugs. We just frame our understanding of the Internet in metaphorical terms that require a preposition, and “on” makes the most sense. But can we do better than that? Not sure.

Have you noticed that how we present ourselves in the digital world also differs from how we do the same in the physical one? On social media, for example, we perform roles, as if on a stage. We talk to an audience straight through a kind of fourth wall, like an actor, a lecturer, a comedian, musician, or politician. My point here is that the arts and methods of performing in the physical world are old, familiar, and reliant on physical surroundings. How we behave with others in our offices, our bedrooms, our kitchens, our clubs, and our cars are all well-practiced and understood. In social media, the sense of setting is much different and more limited.

In the physical world, much of our knowledge is tacit rather than explicit, yet digital technology is entirely explicit: ones and zeroes, packets and pixels. The tacit is there, but living in an on/off present/absent two-dimensional world is still new and lacking much of what makes life in the three-dimensional natural world so rich and subtle.

Marshall McLuhan says all media, all technologies, extend us. When we drive a car we wear it like a carapace or a suit of armor. We also speak of it in the first person possessive: my engine, my fenders, my wheels, much as we would say my fingers and my hair. There is distance here too, and it involves performance. A person who would never yell at another person standing in line at a theater might do exactly that at another car. That kind of distance is gone, or very different, in the digital world.

In a way we are naked in the digital world, and vulnerable. By that I mean we lack the rudimentary privacy tech we call clothing and shelter, which protect our private parts and spaces from observation and intrusion while also signaling the forms of observation and contact that we permit or welcome. The absence of this kind of privacy tech is why it is so easy for websites and apps to fill our browsers with cookies and other forms of tracking tech. In this early stage of life on the Internet, what’s called privacy is just the “choices” sites and services give us, none of which are recorded where we can easily find, audit or dispute them.

Can we get new forms of personal tech that truly extend and project our agency in the digital world? I think so, but it’s a good question because we don’t yet have an answer.

 

Saturday, 15. October 2022

Just a Theory

Collective Decision-Making with AHP

How the New York Times Identity team tried out the Analytic Hierarchy Process to select a user ID format.

Me, writing for NYT Open:

The Identity Team at the Times, responsible for building and maintaining identity and authentication services for all of our users, has embarked on an ambitious project to build a centralized identity platform. We’re going to make a lot of decisions, such as what languages we should use, what database, how we can best protect personal information, what the API should look like, and so much more. Just thinking about the discussions and consensus-building required for a project of this scope daunts even the most experienced decision-makers among us. Fortuitously, a presentation at StaffPlus NYC by Comcast Fellow John Riviello introduced a super fascinating approach to collective decision-making, the Analytic Hierarchy Process (AHP).

I quite enjoyed our experiment with AHP, a super useful tool for collective decision-making. For a less technical primer, Wikipedia has some great examples:

Choosing a leader for an organization
Buying a family car

More about… Decision Making AHP New York Times
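For a feel of the mechanics behind AHP, here is a minimal sketch of the textbook priority calculation: a reciprocal pairwise comparison matrix is reduced to priority weights using the geometric mean of each row. The criteria and judgments below are invented for illustration and have nothing to do with the Times team’s actual decision.

using System;
using System.Linq;

// Minimal AHP priority calculation: a[i,j] expresses how much more important
// criterion i is than criterion j (reciprocal matrix). The priority vector is
// approximated with the geometric mean of each row, then normalized.
class AhpSketch
{
    static void Main()
    {
        string[] criteria = { "Cost", "Reliability", "Developer familiarity" }; // hypothetical
        double[,] a =
        {
            { 1.0,     1.0 / 3, 2.0 },
            { 3.0,     1.0,     5.0 },
            { 1.0 / 2, 1.0 / 5, 1.0 },
        };

        int n = criteria.Length;
        var rowGeoMeans = new double[n];
        for (int i = 0; i < n; i++)
        {
            double product = 1.0;
            for (int j = 0; j < n; j++) product *= a[i, j];
            rowGeoMeans[i] = Math.Pow(product, 1.0 / n);
        }

        double sum = rowGeoMeans.Sum();
        for (int i = 0; i < n; i++)
            Console.WriteLine($"{criteria[i]}: {rowGeoMeans[i] / sum:P1}");
    }
}

Running this prints a normalized weight per criterion; the same calculation is repeated for the alternatives under each criterion and the results are combined.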

Thursday, 13. October 2022

Doc Searls Weblog

Community Governance Outside the Web’s Dictatorships

It’s one thing to move off centralized online spaces run by corporate giants, and another to settle the decentralized frontiers where we create new communities. As those communities get organized, forms of governance emerge. Or are deliberately chosen. Either way, the subject could hardly matter more, if those communities wish to persist and thrive. At this […]


It’s one thing to move off centralized online spaces run by corporate giants, and another to settle the decentralized frontiers where we create new communities. As those communities get organized, forms of governance emerge. Or are deliberately chosen. Either way, the subject could hardly matter more, if those communities wish to persist and thrive.

At this Beyond the Web salon, Nathan Schneider (@ntnsndr), a professor in media studies at the University of Colorado and a leading authority on cooperative governance, will present two prototypes that address what and how questions about governance, both online and off. In the manner of all our salons, a productive discussion will follow. So please come and participate. Register here.


Werdmüller on Medium

“Free speech” networks and anti-semitism

A growing extremist movement is hiding in plain sight. Continue reading on Medium »

A growing extremist movement is hiding in plain sight.

Continue reading on Medium »

Tuesday, 11. October 2022

Phil Windleys Technometria

Using OpenID4VC for Credential Exchange

Summary: Verifiable credentials have transport options other than DIDComm. In this post, I explore the OpenID4VC specification which allows credentials to be used in the OpenID ecosystem. In my discussions of verifiable credentials, I assume DIDs are the underlying identifier. This implies that DIDComm, the messaging protocol based on DIDs, underlies the exchange of verifiable credentials.

Summary: Verifiable credentials have transport options other than DIDComm. In this post, I explore the OpenID4VC specification which allows credentials to be used in the OpenID ecosystem.

In my discussions of verifiable credentials, I assume DIDs are the underlying identifier. This implies that DIDComm, the messaging protocol based on DIDs, underlies the exchange of verifiable credentials. This does not have to be the case.

The OpenID Foundation has defined protocols on top of OAuth for issuing and presenting credentials. These specifications support the W3C Verifiable Credentials data model and support both full credential and derived credential presentations. The OpenID specifications allow for other credential formats as well, such as the ISO mobile driver’s license.

In addition to defining specifications for issuing and presenting credentials, OpenID for Verifiable Credentials (OpenID4VC) introduces a wallet for holding and presenting credentials. OpenID Connect (OIDC) redirects interactions between the identity provider (IdP) and relying party (RP) through a user agent under a person’s control, but there was never an OIDC-specific user agent. The addition of a wallet allows OpenID4VC to break the link that has traditionally existed between the IdP and RP in the form of federation agreements and an interaction protocol wherein the IdP always knew when a person used OIDC to authenticate at the RP. OpenID4VC offers direct presentation using the wallet.
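To make direct presentation a little more concrete, here is a rough sketch of the kind of authorization request a verifier can hand to a wallet under OpenID4VP. The parameters are simplified, the presentation definition, URIs, and URI scheme are invented for illustration, and the details should be checked against the current specification.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;

// Rough sketch of an OpenID4VP-style authorization request a verifier (RP)
// might hand to a wallet, e.g. rendered as a QR code or deep link.
// Parameter names follow the OpenID4VP drafts but are simplified here.
class Openid4vpRequestSketch
{
    static void Main()
    {
        // Made-up presentation definition asking for a "ProofOfAgeCredential".
        string presentationDefinition =
            "{\"id\":\"age-check\",\"input_descriptors\":[{\"id\":\"age\",\"constraints\":{\"fields\":[{\"path\":[\"$.type\"],\"filter\":{\"const\":\"ProofOfAgeCredential\"}}]}}]}";

        var parameters = new Dictionary<string, string>
        {
            ["response_type"] = "vp_token",                 // ask for a verifiable presentation
            ["client_id"] = "https://verifier.example.com", // hypothetical RP
            ["nonce"] = Guid.NewGuid().ToString("N"),       // binds the presentation to this request
            ["presentation_definition"] = presentationDefinition,
        };

        var query = string.Join("&",
            parameters.Select(p => $"{p.Key}={WebUtility.UrlEncode(p.Value)}"));

        Console.WriteLine($"openid4vp://authorize?{query}"); // illustrative scheme only
    }
}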

Extending OAuth and OIDC to support the issuance and presentation of verifiable credentials provides for richer interactions than merely supporting authentication. All the use cases we’ve identified for verifiable credentials are available in OpenID4VC as well.

In addition to using the OpenID4VC wallet with a traditional OIDC IdP, OpenID has also added a specification for Self-issued OpenID Providers (SIOP). A SIOP is an IdP that is controlled by the entity who uses the wallet. A SIOP might use DIDs, KERI, or something else for identifiers. The SIOP allows Alice to control the identifiers and claims she releases to the RP. As with DIDs, a SIOP-based relationship between the RP and Alice is not intermediated by an external, federated IdP as it is in the traditional OIDC model.

When Alice uses a wallet that supports OpenID4VC and SIOP to present credentials to an RP, Alice has a relationship with the RP based on a self-issued identity token she creates. SIOP allows Alice to make presentations independent from any specific IdP. As a result, she can present credentials from any issuer to the RP, not just information from a single IdP as is the case in traditional OIDC.

Like any other credential presentation, the RP can verify the fidelity of the credential cryptographically. This can include knowing that it was issued to the wallet Alice is using to make the presentation. The RP also gets the identifier for the credential issuer inside the presentation and must decide whether to trust the information presented.

To make fidelity and provenance determinations for the credential, the RP will need the public key for the credential issuer as is the case with any credential verification. The verifiable data registry (VDR) in an OpenID4VC credential exchange might be a ledger or other decentralized data store if the presentation uses DIDs, or it might be obtained using PKI or web-pages accessible under a domain name controlled by the issuer. Depending on how this is done, the credential issuer might or might not know which credentials the RP is verifying. The design of the VDR plays a large role in whether credential exchange has all the properties we might desire.
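As one simplified illustration of that last option, an RP that trusts an issuer identified by a domain name could fetch the issuer’s public keys from a JWKS document hosted under that domain. The URL and the key-matching comment below are assumptions for illustration; they are not mandated by the OpenID4VC specifications.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Tokens;

// Simplified sketch: resolve an issuer's public keys from a JWKS document
// hosted under a domain the issuer controls. The URL is illustrative only.
class JwksResolverSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        var jwksJson = await http.GetStringAsync(
            "https://issuer.example.com/.well-known/jwks.json"); // hypothetical issuer

        var jwks = new JsonWebKeySet(jwksJson);
        foreach (var key in jwks.Keys)
        {
            // The RP would match the credential's "kid" header against these keys
            // before checking the signature.
            Console.WriteLine($"kid: {key.Kid}, kty: {key.Kty}");
        }
    }
}

Note that this approach lets the issuer’s web server see which RPs are fetching keys, which is exactly the observability trade-off the paragraph above describes.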

OpenID4VC is an important example of alternatives to DIDComm in verifiable credential exchange thanks to OIDC’s large deployment base and developers’ familiarity with its underlying protocols and procedures. Because the W3C specification for verifiable credentials does not specify an underlying mechanism for exchanging credentials, others are possible. If you find a need for an alternative, be sure to carefully vet its design to ensure it meets your privacy, authenticity, and confidentiality requirements.

Recall that OpenID Connect is based on OAuth. For details on OpenID4VC, I recommend the introductory whitepaper from the OpenID Foundation: OpenID for Verifiable Credentials: A Shift in the Trust Model Brought by Verifiable Credentials (June 23, 2022)

Tags: identity ssi openid verifiable+credentials


Doc Searls Weblog

The most important standard in development today

It’s P7012: Standard for Machine Readable Personal Privacy Terms, which “identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines.” P7012 is being developed by a working group of the IEEE. Founded in 1963, the IEEE is the largest association of technical professionals in the world […]

It’s P7012: Standard for Machine Readable Personal Privacy Terms, which “identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines.”

P7012 is being developed by a working group of the IEEE. Founded in 1963, the IEEE is the largest association of technical professionals in the world and is serious in the extreme.

This standard will guide the way the companies of the world agree to your terms. Not how you agree to theirs. We have the latter “system” right now and it is failing utterly, massively, and universally. Let me explain.

First, company privacy policies aren’t worth the pixels they’re printed on. They can change on a whim, and there is nothing binding about them anyway.

Second, the system of “agreements” we have today do nothing more than put fig leaves over the hard-ons companies have for information about you: information you give up when you agree to a consent notice.

Consent notices are those banners or pop-overs that site owners use to halt your experience and shake down consent to violations of your privacy. There’s usually a big button that says ACCEPT, and some smaller print with a link going to “settings.” Those urge you to switch on or off the “necessary,” “functional,” “performance,” and “targeting” or “marketing” cookies that the site would like to jam into your browser.

Regardless of what you “choose,” there are no simple or easy ways to discover or dispute violations of your “agreement” to anything. Worse, you have to do this with nearly every freaking website you encounter, universalizing the meaninglessness of the whole thing.

But what if sites and services agreed to your terms, soon as you show up?

We have that in the natural world, where it is impolite in the extreme to look under the personal privacy protections called clothing. Or to penetrate other personal privacy protections, such as shelter, doors, shades, and locks. Or to plant tracking beacons on people to follow them like marked animals. There are social contracts forbidding all of those. We expect that contract to be respected, and for the most part it is.

But we have no such social contracts on the Net. In fact, we have the opposite: a feeding frenzy on private information about us, made possible by our powerlessness to stop it, plus boundless corporate rationalization.

We do have laws meant to reduce that frenzy by making some of it illegal. Others are in the works, most notably in Europe. What they have done to stop it so far rounds to zero. In his latest book, ADSCAM: How Online Advertising Gave Birth to One of History’s Greatest Frauds, and Became a Threat to Democracy, Bob Hoffman has a much more sensible and effective policy suggestion than any others we’ve seen: simply ban tracking.

While we wait for that, we can use the same kind of tool that companies are using: a simple contract. Sign here. Electronically. That’s what P7012 will standardize.

There is nothing in the architecture of the Net or the Web to prevent a company from agreeing to personal terms.

In fact, at its base—in the protocol called TCP/IP—the Internet is a peer-to-peer system. It does not consign us to subordinate status as mere “users,” “consumers,” “eyeballs,” or whatever else marketers like to call us.

To perform as full peers in today’s online world, we need easy ways for company machines to agree to the same kind of personal terms we express informally in the natural world. That’s what P7012 will make possible.

I’m in that working group, and we’ve been at it for more than two years. We expect to have it done in the next few months. If you want to know more about it, or to help, talk to me.

And start thinking about what kind of standard-form and simple terms a person might proffer: ones that are agreeable to everyone. Because we will need them. And when we get them, surveillance capitalism can finally be replaced by a much larger and friendlier economy: one based on actual customer intentions rather than manipulations based on guesswork and horrible manners.

One candidate is #NoStalking, aka P2B1beta. #NoStalking was developed with help from the Cyberlaw Clinic at Harvard Law School and the Berkman Klein Center, and says “Just give me ads not based on tracking me.” In other words, it does permit advertising and welcomes sites and services making money that way. (This is how the advertising business worked for many decades before it started driving drunk on personal data.)
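P7012 is still in development and no wire format is published here, so the following is purely a thought experiment: a sketch of how a proffered personal term such as #NoStalking might be modeled as machine-readable data. Every field name is invented for this illustration and is not part of the standard.

using System;
using System.Text.Json;

// Purely illustrative model of a machine-readable personal privacy term.
// This is NOT the P7012 format (still being drafted); field names are invented.
record PersonalTerm(
    string Term,          // e.g. "#NoStalking"
    string Version,
    string Summary,       // human-readable meaning of the term
    bool AdsAllowed,      // advertising is fine...
    bool TrackingAllowed  // ...but tracking-based targeting is not
);

class TermSketch
{
    static void Main()
    {
        var noStalking = new PersonalTerm(
            Term: "#NoStalking",
            Version: "P2B1beta",
            Summary: "Just give me ads not based on tracking me.",
            AdsAllowed: true,
            TrackingAllowed: false);

        // A site agreeing to the term could countersign and store this record.
        Console.WriteLine(JsonSerializer.Serialize(noStalking,
            new JsonSerializerOptions { WriteIndented = true }));
    }
}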

Constructive and friendly agreements such as #NoStalking will help businesses withdraw from their addiction to tracking, and make it easier for businesses to hear what people actually want.

Monday, 10. October 2022

Damien Bod

Force phishing resistant authentication in an ASP.NET Core application using Azure AD

This article shows how to force a phishing resistant authentication for an ASP.NET Core application using Azure AD and a conditional access policy which forces a phishing resistant authentication using a conditional access authentication context. The ASP.NET Core application forces this by requiring the acrs claim in the id_token with the value of c4 which […]

This article shows how to force a phishing resistant authentication for an ASP.NET Core application using Azure AD and a conditional access policy which forces a phishing resistant authentication using a conditional access authentication context. The ASP.NET Core application forces this by requiring the acrs claim in the id_token with the value of c4 which is the authentication context created for this in this tenant.

Code https://github.com/damienbod/AspNetCoreAzureADCAE

The following five steps are required to set this up. I used Microsoft Entra for this, but Microsoft Graph can be used as well, which makes automation and infrastructure as code (IaC) deployments possible.

Step 1 Authentication method definition
Step 2 Create a CA Auth context
Step 3 Create a policy to use the Azure AD CA Auth context
Step 4 Define the Azure App registration
Step 5 Use inside an ASP.NET Core application

Step 1 Authentication method definition

Using https://entra.microsoft.com, you can create a custom authentication method, or re-use one of the given standard definitions. A phishing resistant authentication method is already defined in the portal. You can also create a custom method that, for example, only accepts your company-specific FIDO2 keys.

Creating a custom authentication method definition allows you to control exactly which authentication methods are accepted. The phishing resistant definition used here is one of the pre-defined options.

Step 2 Create a CA Auth context

An authentication context needs to be created to force a policy in the ASP.NET Core application. Create a new authentication context using Microsoft Entra.

Add the naming of the policy and the authentication context identifier. I used c4 for my phishing resistant requirement.

You could also create the authentication context using Microsoft Graph. I created the CaeAdministrationTool application in the demo repo showing how to implement this.
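For reference, creating the authentication context programmatically boils down to a single Microsoft Graph call against the conditional access authentication context collection. The sketch below reflects my reading of the Graph documentation; the endpoint, properties, and required permission (Policy.ReadWrite.ConditionalAccess) should be verified against the current reference, and token acquisition is omitted.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

// Rough sketch of creating/updating the "c4" authentication context via Microsoft Graph.
// Verify the endpoint and properties against the current Graph docs; acquiring the
// Graph access token is omitted here.
class CreateAuthContextSketch
{
    static async Task Main()
    {
        var graphAccessToken = "--acquired elsewhere--"; // placeholder

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", graphAccessToken);

        var body = JsonSerializer.Serialize(new
        {
            id = "c4",
            displayName = "Phishing resistant authentication",
            description = "Requires a phishing resistant authentication method",
            isAvailable = true
        });

        var response = await http.PatchAsync(
            "https://graph.microsoft.com/v1.0/identity/conditionalAccess/authenticationContextClassReferences/c4",
            new StringContent(body, Encoding.UTF8, "application/json"));

        Console.WriteLine(response.StatusCode);
    }
}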

Step 3 Create a policy to use the Azure AD CA Auth context

You can create the conditional access policy to use the authentication context and force the authentication method as required. Create a new policy.

Select the created authentication context in the Cloud apps or actions.

In the access controls, grant access to users that authenticate with a phishing resistant authentication.

Using this policy, a user must authenticate with one of the phishing resistant methods.

Step 4 Define the Azure App registration

The ASP.NET Core application requires an Azure App registration with the correct configuration to authenticate with Azure AD and use the conditional access policy. The application itself, and not a downstream API, must validate the CA requirement, which means we need to validate this in the id_token. This is done using the xms_cc claim, which you can add in the App registration manifest.

"optionalClaims": { "idToken": [ { "name": "xms_cc", "source": null, "essential": false, "additionalProperties": [] } ], "accessToken": [], "saml2Token": [] },

Microsoft.Identity.Web is used to implement the authentication for Azure AD. The app settings require the ClientCapabilities value set to cp1.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "damienbodsharepoint.onmicrosoft.com", "TenantId": "5698af84-5720-4ff0-bdc3-9d9195314244", "ClientId": "daffd2e8-3718-4ac4-b971-c7f1bb570375", "CallbackPath": "/signin-oidc", "ClientCapabilities": [ "cp1" ] //"ClientSecret": "--in-user-secrets-or-key-vault--" },

You can check some of the standard settings required for the Web application Azure App registration using the manifest definition. The docs for this can be found on the Microsoft.Identity.Web Github repo Wiki.

"oauth2AllowIdTokenImplicitFlow": true, "optionalClaims": { "idToken": [ { "name": "xms_cc", "source": null, "essential": false, "additionalProperties": [] } ], "accessToken": [], "saml2Token": [] }, "replyUrlsWithType": [ { "url": "https://localhost:44414/signin-oidc", "type": "Web" } ], "signInAudience": "AzureADMyOrg", Step 5 Use inside an ASP.NET Core application

The application must validate that the id_token and the corresponding claims identity contain the acrs claim with the value c4, which represents the authentication context created for the phishing resistant conditional access policy. If this is configured correctly, a phishing resistant authentication is required to use the application.

using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using System;
using System.Linq;

namespace RazorCaePhishingResistant;

/// <summary>
/// Claims challenges, claims requests, and client capabilities
///
/// https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge
///
/// Applications that use enhanced security features like Continuous Access Evaluation (CAE)
/// and Conditional Access authentication context must be prepared to handle claims challenges.
/// </summary>
public class CaeClaimsChallengeService
{
    private readonly IConfiguration _configuration;

    public CaeClaimsChallengeService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public string? CheckForRequiredAuthContextIdToken(string authContextId, HttpContext context)
    {
        if (!string.IsNullOrEmpty(authContextId))
        {
            string authenticationContextClassReferencesClaim = "acrs";

            if (context == null || context.User == null || context.User.Claims == null
                || !context.User.Claims.Any())
            {
                throw new ArgumentNullException(nameof(context),
                    "No Usercontext is available to pick claims from");
            }

            var acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim)
                .FirstOrDefault(x => x.Value == authContextId);

            if (acrsClaim?.Value != authContextId)
            {
                string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;

                var cae = "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\""
                    + authContextId + "\"}}}";

                return cae;
            }
        }

        return null;
    }
}
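The CaeClaimsChallengeService is injected into the Razor Pages, so it also needs to be registered with dependency injection next to the Microsoft.Identity.Web setup. The following is only a rough sketch of what that wiring could look like; the exact Program.cs in the demo repo may differ.

// Minimal sketch of wiring the service into an ASP.NET Core app that signs in
// with Azure AD via Microsoft.Identity.Web. The exact setup in the demo repo may differ.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;
using RazorCaePhishingResistant;

var builder = WebApplication.CreateBuilder(args);

// Reads the "AzureAd" section shown earlier (including ClientCapabilities "cp1").
builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration, "AzureAd");

builder.Services.AddScoped<CaeClaimsChallengeService>();

builder.Services.AddRazorPages().AddMicrosoftIdentityUI();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();
app.MapRazorPages();

app.Run();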

The admin page requires a strong authentication. If this is missing, a challenge is returned informing the identity provider (AAD) which claim is required to use the application. This is specified in the OpenID Connect shared signal and events specifications.

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using System.Collections.Generic;

namespace RazorCaePhishingResistant.Pages;

public class AdminModel : PageModel
{
    private readonly CaeClaimsChallengeService _caeClaimsChallengeService;

    public AdminModel(CaeClaimsChallengeService caeClaimsChallengeService)
    {
        _caeClaimsChallengeService = caeClaimsChallengeService;
    }

    [BindProperty]
    public IEnumerable<string>? Data { get; private set; }

    public IActionResult OnGet()
    {
        // if CAE claim missing in id token, the required claims challenge is returned
        // C4 is used in the phishing resistant policy
        var claimsChallenge = _caeClaimsChallengeService
            .CheckForRequiredAuthContextIdToken(AuthContextId.C4, HttpContext);

        if (claimsChallenge != null)
        {
            var properties = new AuthenticationProperties { RedirectUri = "/admin" };
            properties.Items["claims"] = claimsChallenge;
            return Challenge(properties);
        }

        Data = new List<string>() { "Admin data 1", "Admin data 2" };
        return Page();
    }
}

Now the admin page of the application requires the correct authentication. You could force this for the complete application or just a single page of the app. If the user does not authenticate using a phishing resistant authentication or cannot use this, the application page cannot be used. This is really useful when deploying administration applications to tenants which do not enforce this for all users.

Notes

This works really well, and you can force phishing resistant authentication from the application. MFA using the authenticator app will not work, and users of Azure AD and the applications using Azure AD get more protection.

Links

https://entra.microsoft.com/#view/Microsoft_AAD_IAM/AuthenticationMethodsMenuBlade/~/AuthStrengths

https://danielchronlund.com/2022/01/07/the-attackers-guide-to-azure-ad-conditional-access/

https://entra.microsoft.com

https://cloudbrothers.info/en/azure-attack-paths/

https://github.com/AzureAD/microsoft-identity-web

OpenID Connect Signals and Events

Sunday, 09. October 2022

Heres Tom with the Weather

Minimum Viable IndieAuth Server

One of the building blocks of the Indieweb is IndieAuth. Like many others, I bootstrapped my experience with indieauth.com but as Marty McGuire explains, there are good reasons to switch and even consider building your own. Because I wanted a server as simple to understand as possible but also wanted to be able to add features that are usually not available, I created a rails project called Irwin

One of the building blocks of the Indieweb is IndieAuth. Like many others, I bootstrapped my experience with indieauth.com but as Marty McGuire explains, there are good reasons to switch and even consider building your own. Because I wanted a server as simple to understand as possible but also wanted to be able to add features that are usually not available, I created a rails project called Irwin and recently configured my blog to use it.

This is not production ready code. While I know that the micropub server I use works with it, I expect others may not. Also, there is no support for refresh tokens and other things in the spec that I didn’t consider high priority. It does support PKCE but not the less useful “plain” method.
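For readers unfamiliar with PKCE, the S256 check an authorization server performs is small: hash the client's code_verifier with SHA-256, Base64URL-encode it, and compare the result with the code_challenge received earlier. A minimal C# sketch of that comparison follows (illustrative only; Irwin itself is a Rails project).

using System;
using System.Security.Cryptography;
using System.Text;

// Minimal sketch of the PKCE S256 check: code_challenge must equal
// BASE64URL(SHA256(code_verifier)). Illustrative only.
static class PkceSketch
{
    public static bool VerifyS256(string codeVerifier, string codeChallenge)
    {
        byte[] hash = SHA256.HashData(Encoding.ASCII.GetBytes(codeVerifier));

        // Base64URL encoding: '+' -> '-', '/' -> '_', no padding.
        string computed = Convert.ToBase64String(hash)
            .TrimEnd('=')
            .Replace('+', '-')
            .Replace('/', '_');

        return computed == codeChallenge;
    }
}

// Example usage:
// bool ok = PkceSketch.VerifyS256(codeVerifierFromTokenRequest, challengeFromAuthRequest);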

All of IndieAuth Spec Updates 2020 was very clear and helpful. In one case, I made the server probably too strict (as an easy way to curtail spam registrations). It requires that the hosts for a blog’s authorization endpoint and token endpoint match the host of the IndieAuth server before a user can register an account on the indieauth server.

I plan to add an option for a user to keep a history of logins to indieauth clients soon. Please let me know if you have any questions or suggestions.


Werdmüller on Medium

The future of cars

Towards the electric car as an open platform. Continue reading on Medium »

Towards the electric car as an open platform.

Continue reading on Medium »

Tuesday, 04. October 2022

Equals Drummond

Speaking Up

Over the past year, I have been so engrossed in building decentralized digital trust infrastructure that I haven’t made a single post on this blog. But this morning, in a newsletter from The Atlantic, I read an editorial by Tom … Continue reading →

Over the past year, I have been so engrossed in building decentralized digital trust infrastructure that I haven’t made a single post on this blog. But this morning, in a newsletter from The Atlantic, I read an editorial by Tom Nichols that struck me as so important that I am republishing it here in its entirety.

*****

I began the morning, as I often do, with a cup of coffee and a discussion with a friend. We were talking about last week’s nuclear warnings from Russian President Vladimir Putin, and while we were on the subject of unhinged threats, I mentioned Donald Trump’s bizarre statement over the weekend that Senate Minority Leader Mitch McConnell had a “DEATH WISH,” with a racist slam on McConnell’s wife, Elaine Chao, added in for good measure.

“Oh, yeah,” my friend said. “I’d forgotten about that.” To be honest, so had I. But when I opened Twitter today, The Bulwark publisher Sarah Longwell’s tweet that “we are still under-reacting to the threat of Trump” jumped out at me. She’s right.

We are also, in a way, underreacting to the war in Ukraine. Our attention, understandably, has become focused on the human drama. But we are losing our grip on the larger story and greater danger: Russia’s dictator is demanding that he be allowed to take whatever he wants, at will and by force. He is now, as both my colleague Anne Applebaum and I have written, at war not only with Ukraine, but with the entire international order. He (like his admirer Trump) is at war with democracy itself.

And somehow, we have all just gotten used to it.

We are inured to these events not because we are callous or uncaring. Rather, people such as Trump and Putin have sent us into a tailspin, a vortex of mad rhetoric and literal violence that has unmoored us from any sense of the moral principles that once guided us, however imperfectly, both at home and abroad. This is “the widening gyre” W. B. Yeats wrote about in 1919, the sense that “anarchy is loosed upon the world” as “things fall apart.”

For many years, I have often felt this way in the course of an ordinary day, when it seems as if I am living in a dystopian alternate universe. A time of hope and progress that began in the late 1980s was somehow derailed, perhaps even before the last chunks of the Berlin Wall’s corpse were being cleared from the Friedrichstrasse. (This was a time, for example, when we started taking people like Ross Perot seriously, which was an early warning sign of our incipient post–Cold War stupor.) Here are some of the many moments in which I have felt that sense of vertigo:

In my lifetime, I have seen polio defeated and smallpox eradicated. Now hundreds of thousands of Americans are dead—and still dying—because they refused a lifesaving vaccine as a test of their political loyalty to an ignoramus.

After living under the threat of Armageddon, I saw the Soviet flag lowered from the Kremlin and an explosion of freedom across Eastern Europe. An American president then took U.S. strategic forces off high alert and ordered the destruction of thousands of nuclear weapons with the stroke of a pen. Now, each day, I try to estimate the chances that Putin, one of the last orphans of the Soviet system, will spark a nuclear cataclysm in the name of his delusional attempt to turn the clock back 30 years.

As a boy in 1974, I delivered the newspaper that announced the resignation of Richard Nixon, who was driven from office in a political drama so wrenching that part of its name—Watergate—has become a suffix in our language for a scandal of any kind. Now the front-runner for the Republican presidential nomination is a former president who is a walking Roman candle of racist kookery and unhinged conspiracy theories, who has defied the law with malicious glee, and who has supported mobs that wanted to kill his vice president.

Against all this, how can we not be overwhelmed? We stand in the middle of a flood of horrendous events, shouted down by the outsize voices of people such as Trump and his stooges, enervated and exhausted by the dark threats of dictators such as Putin. It’s just too much, especially when we already have plenty of other responsibilities, including our jobs and taking care of our loved ones. We think we are alone and helpless, because there is nothing to convince us otherwise. How can anyone fight the sense that “the center cannot hold”?

But we are not helpless. The center can hold—because we are the center. We are citizens of a democracy who can refuse to accept the threats of mob bosses, whether in Florida or in Russia. We can and must vote, but that’s not enough. We must also speak out. By temperament, I am not much for public demonstrations, but if that’s your preferred form of expression, then organize and march. The rest of us, however, can act, every day, on a small scale.

Speak up. Do not stay silent when our fellow citizens equivocate and rationalize. Defend what’s right, whether to a friend or a family member. Refuse to laugh along with the flip cynicism that makes a joke of everything. Stay informed so that the stink of a death threat from a former president or the rattle of a nuclear saber from a Russian autocrat does not simply rush past you as if you’ve just driven by a sewage plant.

None of this is easy to do. But we are entering a time of important choices, both at home at the ballot box and abroad on foreign battlefields, and the center—the confident and resolute defense of peace, freedom, and the rule of law—must hold.

*****

Amen.

Monday, 03. October 2022

Damien Bod

Implement the On Behalf Of flow between an Azure AD protected API and an API protected using OpenIddict

This article shows how to implement the On Behalf Of flow between two APIs, one using Azure AD to authorize the HTTP requests and a second API protected using OpenIddict. The Azure AD protected API uses the On Behalf Of flow (OBO) to get a new OpenIddict delegated access token using the AAD delegated access […]

This article shows how to implement the On Behalf Of flow between two APIs, one using Azure AD to authorize the HTTP requests and a second API protected using OpenIddict. The Azure AD protected API uses the On Behalf Of flow (OBO) to get a new OpenIddict delegated access token using the AAD delegated access token. An ASP.NET Core Razor page application using a confidential client is used to get the Azure AD access token with an access_as_user scope. By using the OBO flow, mixing delegated and application authorization can be avoided and the trust required between the systems can be reduced.

Code: https://github.com/damienbod/OnBehalfFlowOidcDownstreamApi

Architecture Setup

We have a solution set up using OpenIddict as the identity provider. This could be any OpenID Connect server. We have APIs implemented, secured for users from this system, and protected with the OpenIddict identity provider. Due to some Azure AD system constraints and new collaboration requirements, we need to support users from Azure AD, and those users need to use the APIs protected with the OpenIddict identity provider. The On Behalf Of flow is used to support this.

A new Azure AD protected API accepts the Azure AD access token used for accessing the “protected” APIs. This token is used to get a new OpenIddict delegated access token which can be used for the existing APIs. For this to work, the users must exist in both systems. We match the accounts using an email. You could also use the Azure OID and save this to the existing system; it is harder to add custom IDs to Azure AD from other identity providers. A second way of implementing this would be to use OAuth token exchange (RFC 8693), which supports much more than we require. I based the implementation on the Microsoft documentation.

By using the OBO flow, delegated access tokens can be used everywhere and the trust required between the APIs can be reduced. I try to use delegated user access tokens whenever possible. Application access tokens should be avoided for UIs or user delegated flows.

Implement the OBO client

The Azure AD protected API uses the On Behalf Of flow to get a new access token to access the downstream API protected with the OpenIddict identity provider. The Azure AD delegated user access token is used to acquire a new access token.

[HttpGet]
public async Task<IEnumerable<string>?> Get()
{
    var scopeRequiredByApi = new string[] { "access_as_user" };
    HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);

    var aadBearerToken = Request.Headers[HeaderNames.Authorization]
        .ToString().Replace("Bearer ", "");

    var dataFromDownstreamApi = await _apiService.GetApiDataAsync(aadBearerToken);
    return dataFromDownstreamApi;
}

The GetApiTokenObo method is used to get the access token from a distributed cache or from the OpenIddict identity provider using the OBO flow. With the new user access token, the protected API can be used.

var client = _clientFactory.CreateClient();
client.BaseAddress = new Uri(_downstreamApi.Value.ApiBaseAddress);

var access_token = await _apiTokenClient.GetApiTokenObo(
    _downstreamApi.Value.ClientId,
    _downstreamApi.Value.ScopeForAccessToken,
    _downstreamApi.Value.ClientSecret,
    aadAccessToken
);

client.SetBearerToken(access_token);

var response = await client.GetAsync("api/values");
if (response.IsSuccessStatusCode)
{
    var data = await JsonSerializer.DeserializeAsync<List<string>>(
        await response.Content.ReadAsStreamAsync());

    if (data != null)
        return data;

    return new List<string>();
}
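The caching part of GetApiTokenObo is not shown in the post; a minimal sketch of what it could look like with IDistributedCache follows. The cache key scheme (hashing the incoming AAD token so the entry is per user) and the injected _cache field are assumptions, not necessarily how the demo repo implements it.

// Sketch only: assumes an injected IDistributedCache field named _cache and the
// GetApiTokenOboAad method shown below. Requires System.Security.Cryptography,
// System.Text, System.Text.Json and Microsoft.Extensions.Caching.Distributed.
public async Task<string> GetApiTokenObo(string clientId, string scope,
    string clientSecret, string aadAccessToken)
{
    // Hash the incoming AAD token so the cached OBO token is per user and per token.
    var tokenHash = Convert.ToBase64String(
        SHA256.HashData(Encoding.UTF8.GetBytes(aadAccessToken)));
    var cacheKey = $"obo_{clientId}_{scope}_{tokenHash}";

    var cached = await _cache.GetStringAsync(cacheKey);
    if (cached != null)
    {
        var cachedItem = JsonSerializer.Deserialize<AccessTokenItem>(cached);
        if (cachedItem != null && cachedItem.ExpiresIn > DateTime.UtcNow)
        {
            return cachedItem.AccessToken;
        }
    }

    var newToken = await GetApiTokenOboAad(clientId, scope, clientSecret, aadAccessToken);

    await _cache.SetStringAsync(cacheKey, JsonSerializer.Serialize(newToken),
        new DistributedCacheEntryOptions
        {
            AbsoluteExpiration = newToken.ExpiresIn // ExpiresIn is a DateTime in AccessTokenItem
        });

    return newToken.AccessToken;
}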

The GetApiTokenOboAad method uses the OBO helper library to acquire the new access token. The GetDelegatedApiTokenOboModel parameter class is used to define the required values.

private async Task<AccessTokenItem> GetApiTokenOboAad(string clientId, string scope,
    string clientSecret, string aadAccessToken)
{
    var oboHttpClient = _httpClientFactory.CreateClient();
    oboHttpClient.BaseAddress = new Uri(_downstreamApiConfigurations.Value.IdentityProviderUrl);

    var oboSuccessResponse = await RequestDelegatedAccessToken.GetDelegatedApiTokenObo(
        new GetDelegatedApiTokenOboModel
        {
            Scope = scope,
            AccessToken = aadAccessToken,
            ClientSecret = clientSecret,
            ClientId = clientId,
            EndpointUrl = "/connect/obotoken",
            OboHttpClient = oboHttpClient
        }, _logger);

    if (oboSuccessResponse != null)
    {
        return new AccessTokenItem
        {
            ExpiresIn = DateTime.UtcNow.AddSeconds(oboSuccessResponse.ExpiresIn),
            AccessToken = oboSuccessResponse.AccessToken
        };
    }

    _logger.LogError("no success response from OBO access token request");
    throw new ApplicationException("no success response from OBO access token request");
}

The GetDelegatedApiTokenObo method implements the OBO client request. The parameters are passed in the body of the HTTP POST request as defined on the Microsoft documentation. The requests returns a success response or an error response.

public static async Task<OboSuccessResponse?> GetDelegatedApiTokenObo(
    GetDelegatedApiTokenOboModel reqData, ILogger logger)
{
    if (reqData.OboHttpClient == null)
        throw new ArgumentException("Httpclient missing, is null");

    // Content-Type: application/x-www-form-urlencoded
    var oboTokenExchangeBody = new[]
    {
        new KeyValuePair<string, string>("grant_type", "urn:ietf:params:oauth:grant-type:jwt-bearer"),
        new KeyValuePair<string, string>("client_id", reqData.ClientId),
        new KeyValuePair<string, string>("client_secret", OboExtentions.ToSha256(reqData.ClientSecret)),
        new KeyValuePair<string, string>("assertion", reqData.AccessToken),
        new KeyValuePair<string, string>("scope", reqData.Scope),
        new KeyValuePair<string, string>("requested_token_use", "on_behalf_of"),
    };

    var response = await reqData.OboHttpClient.PostAsync(reqData.EndpointUrl,
        new FormUrlEncodedContent(oboTokenExchangeBody));

    if (response.IsSuccessStatusCode)
    {
        var tokenResponse = await JsonSerializer.DeserializeAsync<OboSuccessResponse>(
            await response.Content.ReadAsStreamAsync());

        return tokenResponse;
    }

    if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
    {
        // Unauthorized error
        var errorResult = await JsonSerializer.DeserializeAsync<OboErrorResponse>(
            await response.Content.ReadAsStreamAsync());

        if (errorResult != null)
        {
            logger.LogInformation("{error} {error_description} {correlation_id} {trace_id}",
                errorResult.error,
                errorResult.error_description,
                errorResult.correlation_id,
                errorResult.trace_id);
        }
        else
        {
            logger.LogInformation("RequestDelegatedAccessToken Error, Unauthorized unknown reason");
        }
    }
    else
    {
        // unknown error, log
        logger.LogInformation("RequestDelegatedAccessToken Error unknown reason");
    }

    return null;
}

The following HTTP POST request is sent to the identity provider supporting the OBO flow. As you can see, a secret (or certificate) is used so only a trusted application can implement an OBO client.

// Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer
&client_id={clientId}
&client_secret={clientSecret}
&assertion={accessToken}
&scope={scope}
&requested_token_use=on_behalf_of

Implement the OBO token exchange

The identity provider supporting the OBO flow needs to accept the POST request, validate it, and issue a new access token with the correct signature. It might be possible to integrate this flow more tightly into the different identity providers, but I kept it separate, which makes it easier to re-use with different IDPs. All you need is the correct signature created from the certificate used by the identity provider. The Exchange method needs to validate the request correctly. The access token sent in the assertion parameter needs full validation, including the signature, issuer and audience; you should not allow unspecific access tokens to be used. The method also validates the client ID and a client secret to verify the trusted application which sent the request. You could also use a certificate for this, which then allows client assertions as well.

Logging is added, and PII is also logged if this feature is activated. Once all the parameters are validated, the CreateDelegatedAccessTokenPayloadModel class is used to create the new user access token using the data from the payload and the original valid AAD access token.
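The OboPayload model bound from the form is not shown in the post. Based on the POST body above and the fields checked in the validation code below, it presumably has roughly this shape (the actual class in the repo may differ):

// Sketch of the form-bound request model, inferred from the POST body shown earlier.
// The property names must match the form field names exactly.
public class OboPayload
{
    public string grant_type { get; set; } = string.Empty;
    public string client_id { get; set; } = string.Empty;
    public string client_secret { get; set; } = string.Empty;
    public string assertion { get; set; } = string.Empty;
    public string scope { get; set; } = string.Empty;
    public string requested_token_use { get; set; } = string.Empty;
}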

[AllowAnonymous]
[HttpPost("~/connect/obotoken"), Produces("application/json")]
public async Task<IActionResult> Exchange([FromForm] OboPayload oboPayload)
{
    var (Valid, Reason) = ValidateOboRequestPayload.IsValid(oboPayload, _oboConfiguration);

    if (!Valid)
    {
        return UnauthorizedValidationParametersFailed(oboPayload, Reason);
    }

    // get well known endpoints and validate access token sent in the assertion
    var configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
        _oboConfiguration.AccessTokenMetadataAddress,
        new OpenIdConnectConfigurationRetriever());

    var wellKnownEndpoints = await configurationManager.GetConfigurationAsync();

    var accessTokenValidationResult = ValidateOboRequestPayload.ValidateTokenAndSignature(
        oboPayload.assertion,
        _oboConfiguration,
        wellKnownEndpoints.SigningKeys);

    if (!accessTokenValidationResult.Valid)
    {
        return UnauthorizedValidationTokenAndSignatureFailed(
            oboPayload, accessTokenValidationResult);
    }

    // get claims from aad token and re use in OpenIddict token
    var claimsPrincipal = accessTokenValidationResult.ClaimsPrincipal;

    var name = ValidateOboRequestPayload.GetPreferredUserName(claimsPrincipal);
    var isNameAnEmail = ValidateOboRequestPayload.IsEmailValid(name);
    if (!isNameAnEmail)
    {
        return UnauthorizedValidationPrefferedUserNameFailed();
    }

    // validate user exists
    var user = await _userManager.FindByNameAsync(name);
    if (user == null)
    {
        return UnauthorizedValidationNoUserExistsFailed();
    }

    // use data and return new access token
    var (ActiveCertificate, _) = await Startup.GetCertificates(_environment, _configuration);

    var tokenData = new CreateDelegatedAccessTokenPayloadModel
    {
        Sub = Guid.NewGuid().ToString(),
        ClaimsPrincipal = claimsPrincipal,
        SigningCredentials = ActiveCertificate,
        Scope = _oboConfiguration.ScopeForNewAccessToken,
        Audience = _oboConfiguration.AudienceForNewAccessToken,
        Issuer = _oboConfiguration.IssuerForNewAccessToken,
        OriginalClientId = _oboConfiguration.AccessTokenAudience
    };

    var accessToken = CreateDelegatedAccessTokenPayload.GenerateJwtTokenAsync(tokenData);

    _logger.LogInformation("OBO new access token returned sub {sub}", tokenData.Sub);

    if (IdentityModelEventSource.ShowPII)
    {
        _logger.LogDebug("OBO new access token returned for sub {sub} for user {Username}",
            tokenData.Sub,
            ValidateOboRequestPayload.GetPreferredUserName(claimsPrincipal));
    }

    return Ok(new OboSuccessResponse
    {
        ExpiresIn = 60 * 60,
        AccessToken = accessToken,
        Scope = oboPayload.scope
    });
}

The OboErrorResponse can be created using the error result from the validation. The PII data is only logged if the application is allowed to log this. This is not set in production deployments.

private IActionResult UnauthorizedValidationParametersFailed(OboPayload oboPayload, string Reason)
{
    var errorResult = new OboErrorResponse
    {
        error = "Validation request parameters failed",
        error_description = Reason,
        timestamp = DateTime.UtcNow,
        correlation_id = Guid.NewGuid().ToString(),
        trace_id = Guid.NewGuid().ToString(),
    };

    _logger.LogInformation("{error} {error_description} {correlation_id} {trace_id}",
        errorResult.error,
        errorResult.error_description,
        errorResult.correlation_id,
        errorResult.trace_id);

    if (IdentityModelEventSource.ShowPII)
    {
        _logger.LogDebug("OBO new access token returned for assertion {assertion}", oboPayload.assertion);
    }

    return Unauthorized(errorResult);
}

The GenerateJwtTokenAsync method creates a new OpenIddict user access token with the correct signature. We could also create a reference token or any other type of access token if required. The claims can be added as required and these claims are validated in the downstream API protected using OpenIddict.

public static string GenerateJwtTokenAsync(CreateDelegatedAccessTokenPayloadModel payload)
{
    SigningCredentials signingCredentials = new X509SigningCredentials(payload.SigningCredentials);
    var alg = signingCredentials.Algorithm;

    //{
    //  "alg": "RS256",
    //  "kid": "....",
    //  "typ": "at+jwt",
    //}

    var subject = new ClaimsIdentity(new[]
    {
        new Claim("sub", payload.Sub),
        new Claim("scope", payload.Scope),
        new Claim("act", $"{{ \"sub\": \"{payload.OriginalClientId}\" }}", JsonClaimValueTypes.Json)
    });

    if (payload.ClaimsPrincipal != null)
    {
        var name = ValidateOboRequestPayload.GetPreferredUserName(payload.ClaimsPrincipal);
        var azp = ValidateOboRequestPayload.GetAzp(payload.ClaimsPrincipal);
        var azpacr = ValidateOboRequestPayload.GetAzpacr(payload.ClaimsPrincipal);

        if (!string.IsNullOrEmpty(name))
            subject.AddClaim(new Claim("name", name));

        if (!string.IsNullOrEmpty(azp))
            subject.AddClaim(new Claim("azp", azp));

        if (!string.IsNullOrEmpty(azpacr))
            subject.AddClaim(new Claim("azpacr", azpacr));
    }

    var tokenHandler = new JwtSecurityTokenHandler();

    var tokenDescriptor = new SecurityTokenDescriptor
    {
        Subject = subject,
        Expires = DateTime.UtcNow.AddHours(1),
        IssuedAt = DateTime.UtcNow,
        Issuer = "https://localhost:44318/",
        Audience = payload.Audience,
        SigningCredentials = signingCredentials,
        TokenType = "at+jwt"
    };

    if (tokenDescriptor.AdditionalHeaderClaims == null)
    {
        tokenDescriptor.AdditionalHeaderClaims = new Dictionary<string, object>();
    }

    if (!tokenDescriptor.AdditionalHeaderClaims.ContainsKey("alg"))
    {
        tokenDescriptor.AdditionalHeaderClaims.Add("alg", alg);
    }

    var token = tokenHandler.CreateToken(tokenDescriptor);
    return tokenHandler.WriteToken(token);
}

The following token will be created:

{ "alg": "RS256", "kid": "5626CE6A8F4F5FCD79C6642345282CA76D337548", "x5t": "VibOao9PX815xmQjRSgsp20zdUg", "typ": "at+jwt" }.{ "sub": "8ec43e8d-1873-49ab-a4e2-744ed40586a2", "name": "-upn or email of user--", "scope": "--requested validated scope in OBO request--", "azp": "azp value from AAD token", "azpacr": "1", // AAD auth type, 1 == used client secret "act": { "sub": "--Guid from the AAD app registration client ID used for the API--" }, "nbf": 1664533888, "exp": 1664537488, "iat": 1664533888, "iss": "https://localhost:44318/", "aud": "--aud from configuration--" }.[Signature] Token Exchange validation

The validate class checks both the POST request parameters as well as the signature and the claims of the Azure AD access token. It is important to validate this fully.

public static (bool Valid, string Reason) IsValid(OboPayload oboPayload, OboConfiguration oboConfiguration)
{
    if (!oboPayload.requested_token_use.ToLower().Equals("on_behalf_of"))
    {
        return (false, "obo requested_token_use parameter has an incorrect value, expected on_behalf_of");
    }

    if (!oboPayload.grant_type.ToLower().Equals("urn:ietf:params:oauth:grant-type:jwt-bearer"))
    {
        return (false, "obo grant_type parameter has an incorrect value, expected urn:ietf:params:oauth:grant-type:jwt-bearer");
    }

    if (!oboPayload.client_id.Equals(oboConfiguration.ClientId))
    {
        return (false, "obo client_id parameter has an incorrect value");
    }

    if (!oboPayload.client_secret.Equals(OboExtentions.ToSha256(oboConfiguration.ClientSecret)))
    {
        return (false, "obo client secret parameter has an incorrect value");
    }

    if (!oboPayload.scope.ToLower().Equals(oboConfiguration.ScopeForNewAccessToken.ToLower()))
    {
        return (false, "obo scope parameter has an incorrect value");
    }

    return (true, string.Empty);
}

public static (bool Valid, string Reason, ClaimsPrincipal? ClaimsPrincipal) ValidateTokenAndSignature(
    string jwtToken,
    OboConfiguration oboConfiguration,
    ICollection<SecurityKey> signingKeys)
{
    try
    {
        var validationParameters = new TokenValidationParameters
        {
            RequireExpirationTime = true,
            ValidateLifetime = true,
            ClockSkew = TimeSpan.FromMinutes(1),
            RequireSignedTokens = true,
            ValidateIssuerSigningKey = true,
            IssuerSigningKeys = signingKeys,
            ValidateIssuer = true,
            ValidIssuer = oboConfiguration.AccessTokenAuthority,
            ValidateAudience = true,
            ValidAudience = oboConfiguration.AccessTokenAudience
        };

        ISecurityTokenValidator tokenValidator = new JwtSecurityTokenHandler();

        var claimsPrincipal = tokenValidator.ValidateToken(jwtToken, validationParameters, out var _);

        return (true, string.Empty, claimsPrincipal);
    }
    catch (Exception ex)
    {
        return (false, $"Access Token Authorization failed {ex.Message}", null);
    }
}

public static string GetPreferredUserName(ClaimsPrincipal claimsPrincipal)
{
    string preferredUsername = string.Empty;
    var preferred_username = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "preferred_username");
    if (preferred_username != null)
    {
        preferredUsername = preferred_username.Value;
    }

    return preferredUsername;
}

public static string GetAzpacr(ClaimsPrincipal claimsPrincipal)
{
    string azpacr = string.Empty;
    var azpacrClaim = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "azpacr");
    if (azpacrClaim != null)
    {
        azpacr = azpacrClaim.Value;
    }

    return azpacr;
}

public static string GetAzp(ClaimsPrincipal claimsPrincipal)
{
    string azp = string.Empty;
    var azpClaim = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "azp");
    if (azpClaim != null)
    {
        azp = azpClaim.Value;
    }

    return azp;
}

What about OAuth token exchange (RFC 8693)

OAuth token exchange (RFC 8693) can also be used to implement this feature, and it does not vary too much from this implementation. I will probably do a second demo using the OAuth specifications. The OBO flow is like a subset of the OAuth RFC, with some features and claims implemented differently. Every identity provider seems to implement these things differently or with different levels of support, so IDPs which can be adapted are usually the best choice. Interop between identity providers is not easy and not well documented. The OBO flow is a Microsoft-specific flow which is close to the OAuth token exchange RFC specification.

Improvements

I would love feedback on how to improve this or pull requests in the github repo.

Links

https://learn.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow

https://documentation.openiddict.com/configuration/application-permissions.html

https://datatracker.ietf.org/doc/html/rfc8693

https://stackoverflow.com/questions/61536846/difference-between-jwt-bearer-and-token-exchange-grant-types

Friday, 30. September 2022

Phil Windleys Technometria

A New Job at AWS

Summary: I started a new job! I've been dark for a few weeks. Things have been busy. I'm retiring from BYU (after 29 years, albeit with some interruptions) and starting a new job with Amazon Web Services (AWS). The job is in AWS Identity, and involves automated reasoning (formal methods). My Ph.D. dissertation was on using formal methods to verify the correctness of microprocessors. So t

Summary: I started a new job!

I've been dark for a few weeks. Things have been busy. I'm retiring from BYU (after 29 years, albeit with some interruptions) and starting a new job with Amazon Web Services (AWS). The job is in AWS Identity, and involves automated reasoning (formal methods). My Ph.D. dissertation was on using formal methods to verify the correctness of microprocessors. So the new job combines two things I've spent a good portion of my professional life working on. I'm loving it.

The name of what I and the larger group (Automated Reasoning Group) are doing is "provable security." AWS Provable Security automatically generates mathematical proofs to assert universal statements about the security properties of your AWS application. For example, Access Analyzer uses automated reasoning to analyze all public and cross-account access paths to your resources and provides comprehensive analysis of those paths, making statements like "None of your S3 buckets are publicly available."

Video: What is Automated Reasoning? How Is it Used at AWS?

To understand this better, and get a glimpse of where it could go, I recommend this talk from AWS re:inforce by Neha Rungta and Andrew Gacek.

Video: AWS re:Inforce 2022 - High assurance with provable security

When I was doing formal methods, we dreamed of the day when automated reasoning would handle real problems for people without access to highly trained Ph.D. researchers. Now, that's possible and available in 1-click, for many problems. I'm excited to be working on it.

Tags: identity aws security formal+methods automated+reasoning


Heres Tom with the Weather


Identity Praxis, Inc.

Aligning the 3 Pillars of the Personal Data and Identity Marketplace

Published by the Mobile Ecosystem Forum on September 29, 2022 For industry and society to function effectively, we need trust: “to have and maintain confidence in the honesty of another (an individual, system, service, or process) to meet their social, commercial and civic obligations.”. In the context of today’s digitally-driven society—with so many actors, systems, […] The post Aligning the 3

Published by the Mobile Ecosystem Forum on September 29, 2022

For industry and society to function effectively, we need trust: “to have and maintain confidence in the honesty of another (an individual, system, service, or process) to meet their social, commercial and civic obligations.” In the context of today’s digitally-driven society—with so many actors, systems, and processes in play with varying agendas—trust is hard to achieve, especially in light of the increasing sophistication of, and losses resulting from, cybercrime, which according to one analyst is estimated to cost the world $10.5 trillion annually by 2025.

To achieve trust, we must strive to align, and stay aligned, with the three personal data and identity marketplace pillars.

The Three Personal Data and Identity Marketplace Pillars

To understand the personal data and identity marketplace, it is helpful to first understand a handful of key concepts and their interplay—namely privacy, security, and compliance.

Privacy is a process related to an individual being in control of both their physical self (person, stewards, or property—house, cards, connected devices, etc.) and their digital self (i.e., their personal data). For an individual to have privacy they must be in a position to manage all five elements of privacy (the “5 Ws”). These are: who, what, when, where, and why. “Who” refers to the entity (e.g., another individual, enterprise, government, or machine) seeking to gain access to the individual. “What” refers to what an entity is looking to access, i.e., aspects of the individual’s physical or digital self. “When” refers to the timing of the access, i.e., when and for how long will the entity have physical or digital access to elements of the individual. “Where” refers to the location where the interaction, physical connection, or personal data exchange, will take place.

This could be in the real world, via mobile, in the cloud, locally on an individual’s device, etc. “Why” refers to the requesting entity’s intention and purpose for wanting access, e.g., what they are going to do with the individual’s data (and, to maintain trust, will they ensure there are no unauthorized secondary uses of the data).

Security, in the context of personal data and identity, refers to the state of a system or service being free from the threat of unauthorized access and ensuring all access control policies—also known as permissions and privileges—are fully operational. To put it another way, a system or service is considered secure when only authorized individuals can access it, i.e., log in, and said individuals can only access content and services in accordance with the privileges bestowed upon them by the service administrator.

Note: System and service administrators will have layers of identity management (i.e., authorization, identification, and verification) to assure, with the appropriate level of confidence (aka risk tolerance), that an individual (or at least the credentials the individual is using to access a system) is authorized and has not been compromised.

Compliance refers to the act of ensuring that all activities related to the legal (both commercial and civic) and regulatory (both industry self-regulatory and government regulatory) requirements are met by all actors involved in an exchange.

Two additional terms are relevant for this discussion: governance and cybersecurity.

Governance refers to the effort of providing oversight on the alignment and execution of all processes and actions necessary for adhering to compliance requirements and the delivery of services. Cybersecurity refers to the efforts undertaken to protect a system or service from cyber-attacks. In other words, this is the effort to protect all aspects of a system (incl. data, storage, network, devices) from unauthorized access and from the compromising of the system’s access and control policies, so that system processes are not overridden, systems are not physically damaged, and data is not hacked or leaked.

The figure (Figure 1: 3 Pillars of the Personal Data and Identity Marketplace) below illustrates the interplay between these three pillars.

Figure 1: 3 Pillars of the Personal Data and Identity Marketplace

Establishing and Maintaining Trust

When the three pillars of personal data and identity management are working in harmony, and continue to do so over time, all parties in a physical or digital exchange can build and maintain trust. This trust is reinforced by the interplay of the overlapping elements of the system: Faith (people have faith in systems to function properly), Ombudsman (people feel protected as there are institutions providing oversight of the systems on their behalf), and Accountability (public and private institutions are holding systems and their administrators accountable for complying with laws and regulations).

When any one of these pillars becomes unstable, trust is eroded, the marketplace becomes less efficient, and any number of harms can befall the various actors (individuals, private organizations, and public institutions).

Where Do We Go From Here?

Establishing and maintaining trust is paramount for a healthy society and economy. For the personal data and identity marketplace to continue to flourish, more is needed than what is depicted above.

First, individuals, the people, need to take more personal accountability for the flow of their identity and personal data; they should not just rely on the ombudsman and faith. They should educate themselves on both the opportunities that can be generated from actively managing their data and the risks when they do not. They should learn to exercise the digital rights afforded to them by current and impending regulations. And they should actively adopt passive and active technologies and services to protect themselves.

The enterprise must participate in the education and support of its constituents (aka prospects, users, customers, patients, voters, etc.) and continue to fortify its systems—which will include the adoption of emerging self-sovereign identity infrastructure, i.e., technology that puts the individual in control of their data.

Finally, government and non-governmental bodies must become more fluent in today’s technology and continue with their efforts to help establish policies and guidelines that support and protect the agency of the individual while supporting and stimulating innovation and local, regional, federal, international, and global market competition.

The post Aligning the 3 Pillars of the Personal Data and Identity Marketplace appeared first on Identity Praxis, Inc..


Call for Insights & White Paper Contributors: Is it rude to ask your age? Actually no. It is a legal requirement

Note: I co-authored this piece with Iain Corby, Director at Age Verification Providers Association (AVPA). Get ready; new and evolving personal data and identity management methods are causing a tsunami of change. In particular, two crucial areas driving change are the proper administration of the sale of age-restricted products and services and what businesses must do […] The post Call for

Note: I co-authored this piece with Iain Corby, Director at Age Verification Providers Association (AVPA).

Get ready; new and evolving personal data and identity management methods are causing a tsunami of change. In particular, two crucial areas driving change are the proper administration of the sale of age-restricted products and services and what businesses must do to responsibly and securely interact with children and other age-protected classes.

Do you have a view on how personal data and digital identity will evolve over the next 18-36 months, especially regarding age verification and age assurance? If you are working with and have insights into age verification and age assurance, we want to hear from you (message us on LinkedIn: Michael Becker and Iain Corby).

New Regulations and Technologies are Driving Change

Will big global platforms step into the market? Will users resist giving more personal data to the largest tech companies? Will they resist using a government-issued ID as they surf the net and interact with phygital experiences? Or will emerging people-centric approaches to identity and age verification and assurance lead the way?

The proposition of the Age Verification Providers Association (AVPA) and the Mobile Ecosystem Forum’s Personal Data & Identity (PD&I) Working Group is that age will be at the vanguard of digital identity execution and adoption.

There are few legal reasons to need a digital identity today – but there is a wide range of new regulations and initiatives driving the need to have a digital proof of age – GDPR, the EU Audio-Visual Media Services Directive, advertising restrictions, the forthcoming EU Digital Services Act, the UK Online Safety Bill, the U.S. Children’s Online Privacy Protection Rule, EU Electronic Identification and Trust Services (eIDAS 2.0), or the UK Digital ID and Attributes Trust Framework, as well as firmer enforcement of age-restricted online commerce.

The ability for businesses to prove the age of their customers, without their customers having to disclose their entire identity and complete age details, is going to be a ubiquitous business requirement for phygital commerce and online services.

A Call for Age Verification and Assurance Insights & White Paper Contributors

The MEF PD&I Working Group, in partnership with the AVPA, is drafting a white paper, tentatively titled “Is it rude to ask your age? Actually no. It is a legal requirement.” The age verification and age assurance landscape is being dramatically changed by new regulations, technology, business models, and growing consumer awareness around personal data & identity.

This white paper seeks to document the diverse views surrounding age verification and age assurance and to set a vision for the future in a crowded and fast-moving space. This effort is being led by the MEF’s PD&I Working Group. Iain Corby, Executive Director at AVPA is taking the lead on the project and is supported by Michael Becker, CEO of Identity Praxis and Chair of the MEF PD&I Working Group, and other working group members.

Our goal with this effort is to document the current state of the age verification and age assurances landscape and to educate business leaders on what they need to know and do to thrive in this new era of personal data and identity.

We are looking for contributions from business leaders, technologists, futurists, regulators, policy-makers, and academics. Specifically, we are looking for insights into age verification and age assurance,

Use cases
Technologies
Customer journeys and experiences
Business model impacts
Challenges that businesses are facing in complying with changes to the landscape, and their response
Consumer opinion and adoption
and more

MEF members are welcome to join in the effort and will be acknowledged as contributors to the paper; contact Michael Becker to get on the working group roster. Non-MEF members are welcome to contribute; their contributions will be included in the references if used.

If you have something to contribute, please get in touch with Iain Corby, Executive Director at AVPA, or Michael Becker via LinkedIn.

@Mobile Ecosystem Forum @Identity Praxis @Age Verification Providers Association #ageverification #ageassurance #identity #personaldata #ecommerce #regulation

The post Call for Insights & White Paper Contributors: Is it rude to ask your age? Actually no. It is a legal requirement appeared first on Identity Praxis, Inc..


Inviting People to the Table: Sourcing Personal Data From Individuals, Offering Value in Return

Authors: Drew Clabes, Marketing Manager, Identity Praxis, Inc. and Michael J. Becker, CEO, Identity Praxis, Inc. Published by the Mobile Ecosyestem Forum September 22, 2022. September 2022, Michael Becker, Co-Founder & CEO of Identity Praxis and Mobile Ecosystem Forum (MEF) PD&I Working Group Chair, held an engaging interview with Co-Founder, and CTO

Authors: Drew Clabes, Marketing Manager, Identity Praxis, Inc. and Michael J. Becker, CEO, Identity Praxis, Inc.

Published by the Mobile Ecosystem Forum September 22, 2022.

In September 2022, Michael Becker, Co-Founder & CEO of Identity Praxis and Mobile Ecosystem Forum (MEF) PD&I Working Group Chair, held an engaging interview with John Rizzo, Co-Founder and CTO of CYDigital. The two delve into the inspiration behind Marteq.io, a personal information management system (PIMS) that helps brands connect and reconnect with their customers on the principles of transparency, accountability, and individual data empowerment.

Click here to see the interview on YouTube (43:15 min)

Marteq.io Primary Sourcing Personal Data

Brands are pivoting to sourcing data directly from the individual. Why? To empower individuals. Build trust. Comply with regulations. Reduce risk. And more. As noted by CYDigital, by 2023 brand spending on user-generated data will outpace their spending on 3rd party data.

Marteq.io gives people a seat at the commercial table for their data. CYDigital’s Marteq.io is both a marketing automation offer platform and a PIMS, enabling brands to launch, connect, communicate, conduct commerce with, and reward their prospects and customers for sharing their data directly with the brand (a practice known as zero-party data sourcing).

Zero-party data helps regulate the transfer of personal data, as the individual–the zero party–is the source of their own data. With an application and service like Marteq.io, the on-and-off switch for sharing clickstream data is now at the fingertips of the individual. This benefits both sides. Consumers can generate passive rewards by sharing their behavior with trusted brands. Brands build trust and will have access to the insights they need to personalize and serve the connected individual with personalized ads, offers, and promotions, ultimately equating to increased revenues, reduced costs, and reduced regulatory compliance risk.

With Marteq.io, brands can invite consumers to share their browsing behavior and location with them, and in return, the brand can reward the individual with unique offers and promotions. What’s unique about Marteq.io, however, is that the individual is in control. Marteq.io gives people the power to securely and safely collect their desktop and mobile browsing behavior and then selectively give brands permission to access this data so that the brand can personalize their experience and offer them rewards. Again, the individual is in control. At any time, they can revoke the access permissions they’ve given a brand.

Side note: Marteq.io is not alone in this space; there are other players offering similar solutions, including dedicated browsers, mobile apps, and extensions. We’ll report on these later.

Understanding Types of Personal Data

According to Rizzo, data can be classified into two categories: declarative and behavioral. Declarative data is collected from consumers submitting forms and services, or sharing other interests and preferences, whereas behavioral data is data history–e.g., sites visited, media consumed, etc.–that reflects an individual’s activities and can be used to infer their interests. Acquiring this data from the individual source can have exceptional benefits for brands. CYDigital, in one study, found that engaging people through Marteq led to an impressive 21% conversion rate and a 134% increase in average order value.

The True Value of a Consumer

The value of the data generated by each connected individual today is being consolidated by players like Google and Facebook. Rizzo points out that these firms are generating an average of $1,300-1,500 in advertising revenue from each user. Becker added to this point by sharing estimates from Jaron Lanier, who suggests that the annual value of an individual’s data to marketers is worth about $20,000. Using a tool like Marteq, brands increase the ROI of their marketing spend by redirecting it to the individual.

Marteq.io December 2021 Adoption Study Results

In 2021, with Resondi, CYDigital conducted a Marteq.io adoption study with a group of 25,000 UK adults. They found that people are absolutely willing to share data with brands in return for value, and not just a few people: 74.7% of the people exposed to the study’s message of empowerment around data downloaded the Marteq.io browser extension. Of those, 62.6% activated their account, and 55.6% explicitly turned on data sharing. In the end, over 26% of the audience became active users of Marteq.io.

These results are backed up by other industry research, which suggests that people are ready and willing to share data when they are empowered and given the right offer: 89% of people are willing to share data with those they trust and for value, 71% are willing to share personal information about their likes, preferences, and needs for better recommendations, and 26% are likely to share their personal information. The skepticism lies in what exactly is being done with their data, but once consumers are aware, they are more likely to give companies their desired information.

Conclusion

CYDigital’s values will stand the test of time because, unlike other companies, Marteq.io entrusts personal data only to be handled by the software. The past has shown that human access to data has caused many problems in terms of trust and control. Marteq.io and other PIMS will be the future of personal data collection. Marteq is an example for others to follow. Valuing the consumer brings benefits for all.

RELEVANT LINKS

Michael Becker LinkedIn: https://www.linkedin.com/in/privacyshaman/
John Rizzo LinkedIn: https://www.linkedin.com/in/rizzo/
Identity Praxis: https://identitypraxis.com/
CyDigital: https://www.cyd.digital/
Mobile Ecosystem Forum Personal Data & Identity Working Group: https://mobileecosystemforum.com/personal-data-identity/
Marteq.io consumer adoption research results: https://www.cyd.digital/supporting-content
Tom Fishburne Marketoonist: https://marketoonist.com/2022/09/lifecycle-of-social-media.html
Data completeness a key to effective net-based customer service systems: https://dl.acm.org/doi/10.1145/777313.777339

The post Inviting People to the Table: Sourcing Personal Data From Individuals, Offering Value in Return appeared first on Identity Praxis, Inc..


Doc Searls Weblog

From Hollywood Park Racetrack to SoFi Stadium

Hollywood Park Racetrack is gone. In its place is SoFi Stadium, the 77,000-seat home of Los Angeles’ two pro football teams and much else, including the 6,000-seat YouTube Theater. There’s also more to come in the surrounding vastness of Hollywood Park, named after the racetrack. Wikipedia says the park— consists of over 8.5 million square feet (790,000 m2) […]

Hollywood Park Racetrack, 1938

Hollywood Park Racetrack is gone. In its place is SoFi Stadium, the 77,000-seat home of Los Angeles’ two pro football teams and much else, including the 6,000-seat YouTube Theater. There’s also more to come in the surrounding vastness of Hollywood Park, named after the racetrack. Wikipedia says the park—

consists of over 8.5 million square feet (790,000 m2) that will be used for office space and condominiums, a 12-screen Cinepolis movie theater, ballrooms, outdoor spaces for community programming, retail, a fitness center, a luxury hotel, a brewery, up-scale restaurants and an open-air shopping and entertainment complex.

The picture above (via this Martin Turnbull story) is an aerial view of the racetrack in 1938, shortly after it opened. Note the parking lot: immense and almost completely filled with cars. Perhaps this was the day Seabiscuit won his inaugural Gold Cup. Whether or not, few alive today remember when only baseball was more popular than horse racing in the U.S.

What interests me about this change is that I’ve enjoyed a bird’s-eye view of it, while approaching Los Angeles International Airport on commercial passenger planes. I’ve also photographed that change over the course of seventeen years, through those same windows. Between 2005 and 2022, I shot many dozens of photos of the racetrack site (along with the adjacent Hollywood Park Casino) from its last working days as a racetrack to the completion of SoFi Stadium (with the casino’s relocation to a corner of what had been the Racetrack’s parking lot).

In this album on Flickr are 91 photos of that change. Here I tell the story on one page. We’ll start in January 2005:

At this time the racetrack was long past its prime but still functioning along with the casino. (Look closely and you’ll see the word CASINO in red on the roof of the nearest grandstand. The casino itself is the gray building to its left.) In the distance, you can see the skyline of the West Wilshire region and the Hollywood Hills, topped by the HOLLYWOOD sign. (Hollywood Park is actually in Inglewood.)

This same year, Churchill Downs Incorporated sold the track to the Bay Meadows Land Company, owned by Stockbridge Capital Group, for $260 million in cash. This was good for the private capital business, but doom for the track. Bay Meadows, an equally famous racetrack just south of San Francisco, was also doomed.

This shot was taken seven months later, this time looking south:

Note the fountains in the ponds and the pavilion for members and special guests. Also, notice the separate grandstand for the Casino. The cars in the lots are almost certainly extras for LAX’s car rental companies, leasing unused parking spaces. But you can still see in the racetrack what (it says here) was “once described as too beautiful for words.”

The next photo is from April 2007:

Everything still appears operative. You can even see horses practicing on the dirt track. Also note The Forum across the street on the north side. Now the Kia Forum, its roof at various times also bore Great Western and Chase brand images. It was built in 1966 and is still going strong. During its prime, the Lakers in their Showtime era played there. (The team moved downtown to Staples Center in 1999.)

Next is this view, three months later in July 2007, looking south from the north side:

Note the stables between the racetrack and the practice track on the left. Also, note how the inner track, which had turned from dark brown to blue in prior photos, is now a light brown. It will later be green as well.

(Studying this a bit, I’ve learned that good horse race tracks are very deep flat-topped trenches filled with layers of special dirt that require constant grooming, much of which is devoted to making sure the surface is to some degree wet. In arid Los Angeles, this is a steep requirement. For more on how this works, this Wired story will help.)

Two months later, in September 2007, this view looking north takes in most of the Hollywood Park property, plus The Forum, Inglewood Cemetery, Baldwin Hills (beyond the cemetery and to the left or west):

The Hollywood Hills, with its white sign, is below the clouds, in the top middle, and the downtown Los Angeles skyline is in the top right.

Here on the Hollywood Park property, the casino will be rebuilt on the near edge of the property, along South Stadium Drive.

Here, a few months later, in February 2008, the inner track is once again blue:

This time take note of the empty areas of the parking lot, and how some regions are partitioned off. Ahead we’ll see these spaces variously occupied.

A few seconds after the shot above, I took this shot of the casino and club grounds:

The next shot comes a year and a half later, in September 2009:

Here the inner track has returned to green grass. In the far corner of the parking lot, across from The Forum, a partitioned section has activity involving at least six tents, plus other structures.

Almost three years passed before I got another view, in May 2012, this time looking south from the north side:

Here we get a nice view of the stables and the practice track. On the far side of both is a shopping center anchored by Home Depot and Target. (The white roofs are left and right.) Look in the coming shots at how those will change. Also, note the keystone-shaped fencing inside the practice track.

Here is the same scene one month later, in June 2012:

The keystone shape in the practice track is oddly green now, watered while the rest of the ground inside the track is not. A few seconds later I shot this:

Here the main change is the black-on-orange Betfair logo on the roof of the main grandstand. The paint job is new, but in fact, the racetrack became the Betfair Hollywood Park back in March of this year.

In December begins California’s short rainy season, which we see here in my last view of the racetrack in 2012:

It’s a bit hard to see that the main track is the outer one in dark brown. We also see that the inner track, which had been blue and then green, is now brown: dirt instead of grass. This is my last view before the racetrack got its death sentence. Wikipedia:

On May 9, 2013 in a letter to employees, Hollywood Park president F. Jack Liebau announced that the track would be closing at the end of their fall racing season in 2013. In the letter, Liebau stated that the 260 acres on which the track sits “now simply has a higher and better use”, and that “in the absence of a favorable change in racing’s business model, the ultimate development of the Hollywood property was inevitable”. It was expected that the track would be demolished and replaced by housing units, park land and an entertainment complex, while the casino would be renovated.

My next pass over the property was on June 16, 2013:

The racetrack here is still verdant and irrigated, as you can see from the sprays onto the inner track, which is grass again. The last race here would come six months later, and demolition would begin shortly after that.

One year later, in June 2014, we can see the practice track and the stables absent of any use or care, condemned:

Farther west we see the casino is still operative, with cars in the parking lot:

Racing is done, but some of the ponds are still filled.

Three months later, in September 2014, demolition has begun:

Half the stables are gone, and the whole racetrack area has been bulldozed flat. Two things to note here. First is the row of red trees on the slope at the near end of the track. I believe these are red maples, which turn color in Fall even this far away from their native range. They were a nice touch. Second is the pond at the far end of the track. This is where they will start to dig a vast bowl—a crater—that will become the playing field inside the new SoFi Stadium.

Two months later, in November 2014, all the stables are completely gone, and there is a road across a dirt pile that bridges the old outer track:

This shot looks northeast toward the downtown Los Angeles skyline, and you can see the Hollywood sign on the dark ridge at the left edge of the frame, below a bit of the plane’s wing. The blur at the bottom, across the parking lot, is from the plane’s engine exhaust. (One reason I prefer my windows forward of the wing.)

This next shot is another two months later, in January 2015:

The casino is still happening, but the grandstand is ready for demolition and the racetrack area is getting prepared for SoFi.

One month after that, in February 2015, we see how winter rains have turned some untouched areas green:

Only two of the red trees remain (or so it appears), and the grandstands are still there, along with an operative casino.

This next shot is eight months later, in October, 2015:

Now the grandstand is gone. It was demolished in May. Here is a KNBC/4 report on that, with a video. And here is a longer hand-held amateur video that also gets the whole thing with stereo sound. New construction is also happening on the left, next to the old casino. This is for the new casino and its parking garage.

The next shot is almost a year later, in September, 2016:

It was a gloomy and overcast day, but you can see the biggest changes starting to take shape. The new casino and its parking garage are all but done, digging of the crater that will become the SoFi stadium has started, and landscaping is also starting to take shape, with hills of dirt in the middle of what had been the racetrack.

Ten months later, in July 2017, the SoFi crater is dug, structural pieces are starting to stand up, the new casino is operating and the old casino is gone:

Here is a close-up of work in and around the SoFi crater, shot a few seconds earlier:

The cranes in the pale gray area stand where a pond will go in. It will be called Rivers Lake.

This shot a few seconds later shows the whole west end of what will become the Hollywood Park complex:

The area in the foreground will become a retail center. The buildings on the left (west) side of the site are temporary ones for the construction project. On the right is the one completed permanent structure: the casino and its parking garage.

Six months later, in January 2018, I flew over the site at night and got this one good shot (at 1/40th of a second moving at 200+mph):

Now they’re working day and night raising the SoFi structure in the crater. I share this to show how fast this work is going. You can see progress in this photo taken one month later, in February 2018, again at night:

More than a year went by before I passed over again. That was in August 2019. Here is my first shot on that pass:

Here you can see SoFi’s superstructure mostly framed up, and some of the seating put in place. Here is a wider view shot two seconds later, after I zoomed out a bit:

In both photos you see the word FORUM on The Forum’s roof. (It had previously said “Great Western” and “Chase.” It is now the Kia Forum.) You can also see the two ponds in full shape. The left one will be called Rivers Lake. The right one will pour into it over a waterfall. Cranes on the left stand in the outline of what will become an eight-story office building.

Three months later, in November 2019, the outside surfaces of the stadium are about halfway up:

We also see Rivers Lake lined, with its gray slopes and white bottom.

After this the Covid pandemic hit. I didn’t travel by air (or much at all) for almost two years, and most sporting events were canceled or delayed. So the next time I passed over the site in a position to shoot it was April 2022, when SoFi Stadium was fully operational, and the area around it mostly complete:

Here we see the shopping center in the foreground, now with the Target store showing its logo to the sky. The old practice track and stables have been replaced by parking. A few seconds later I zoomed in on the completed stadium:

We see Rivers Lake, the office building, and its parking structure are also done, as are the parking lots around the stadium. You can also see “SoFi Stadium” in raised lettering on the roof.

And that completes the series, for now.

There are a total of thirty-one photos above. All the links in the photos above will take you to a larger collection. Those in turn are a fraction among the hundreds I shot of the site. And those hundreds are among many thousands I’ve shot of ground and sky from passenger planes. So far I’ve posted over 42,000 photos tagged aerial or windowseat in my two Flickr accounts:

https://flickr.com/photos/docsearls/
https://flickr.com/photos/infrastructure/

Hundreds of those photos have also found their ways into Wikipedia, because I license nearly all my photos online to encourage cost-free re-use. So, when people with an interest in a topic search for usable pictures they’d like to see in Wikipedia, they often find some of mine and park them at Wikimedia Commons, which is Wikipedia’s library of available images. Of the hundreds you’ll find there in a search for “aerial” plus my name, one is the top photo in the Wikipedia article on Hollywood Park Racetrack. I didn’t put it there or in Wikimedia Commons. Randos did.

My purpose in putting up this post is to encourage documentation of many things: infrastructure changes, geological formations, and any other subject that tends to get overlooked. In other words, to be useful.

A friend yesterday said, “as soon as something becomes infrastructure, it becomes uninteresting.” But not unimportant. That’s one reason I hope readers will amplify or correct what I’ve written here. Blogging is good for that.

For the curious, the cameras I used (which Flickr will tell you if you go there), were:

Nikon Coolpix E5700 with a built-in zoom (2005)
Canon 30D with an 18-200 Tamron zoom (2005-2009)
Canon 5D with Canon 24-70mm, 24-85mm, and EF24-105mm f/4L zooms (2012-2015)
Canon 5D Mark III with the same EF24-105mm f/4L zoom (2016-2019)
Sony a7R with a Sony FE 24-105mm F4 G OSS zoom (2022)

I’m not a big spender, and photography is a sideline for me, so I tend to buy used gear and rent the good stuff. On that list, the only items I bought new were the Nikon Coolpix and the two 24-105 zooms. The Canon 5D cameras were workhorses, and so was the 24-105 f4L Canon zoom. The Sony a7R was an outgrown but loved gift from a friend, a fine art photographer who had moved on to newer (and also loved) Sony gear. Experience with that camera (which has since died) led me this June to buy a new Sony a7iv, which is a marvel. Though it has a few fewer pixels than the a7R, it still has 33 million of them, which is enough for most purposes. Like the a7R, it’s mirrorless, so what you see in the viewfinder or the display on the back is what you get. It also has a fully articulated rear display, which is great for shooting out the plane windows I can’t put my face in (and there are many of those). It’s like a periscope. So expect to see more and better shots from planes soon.

And, again, give me corrections and improvements on anything I’ve posted here.

 


Bill Wendels Real Estate Cafe

#GenerationPricedOut: Discuss right way to buy a home or Homebuyer Bill of Rights?

#DefensiveHomebuying:  Today, a leading buyer agent published a comprehensive article asking if there is a “right” way to buy a home. DIY homebuyers, if you’ve… The post #GenerationPricedOut: Discuss right way to buy a home or Homebuyer Bill of Rights? first appeared on Real Estate Cafe.

#DefensiveHomebuying:  Today, a leading buyer agent published a comprehensive article asking if there is a “right” way to buy a home. DIY homebuyers, if you’ve…

The post #GenerationPricedOut: Discuss right way to buy a home or Homebuyer Bill of Rights? first appeared on Real Estate Cafe.

Heres Tom with the Weather

Wednesday, 28. September 2022

Mike Jones: self-issued

The OpenID Connect Logout specifications are now Final Specifications

Thanks to all who helped us reach this important milestone! This was originally announced on the OpenID blog. These now Final specifications are: OpenID Connect Session Management 1.0 OpenID Connect Front-Channel Logout 1.0 OpenID Connect Back-Channel Logout 1.0 OpenID Connect RP-Initiated Logout 1.0 Don’t just sign in. Also sign out!

Thanks to all who helped us reach this important milestone! This was originally announced on the OpenID blog. These now Final specifications are:

OpenID Connect Session Management 1.0
OpenID Connect Front-Channel Logout 1.0
OpenID Connect Back-Channel Logout 1.0
OpenID Connect RP-Initiated Logout 1.0

Don’t just sign in. Also sign out!
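
For readers who have not implemented it, here is a minimal sketch of what RP-Initiated Logout looks like from the relying party's side: the RP redirects the user's browser to the OP's end_session_endpoint with a few query parameters defined in the specification. The endpoint URL and client values below are placeholders, not values taken from the specifications.

# Minimal sketch of an OpenID Connect RP-Initiated Logout request (Python).
# The endpoint and client values are placeholders; real values come from the
# OP's discovery document and your client registration.
from urllib.parse import urlencode

end_session_endpoint = "https://op.example.com/connect/endsession"  # placeholder

params = {
    "id_token_hint": "eyJ...",  # ID token previously issued to this RP
    "client_id": "my-client-id",  # placeholder client identifier
    "post_logout_redirect_uri": "https://rp.example.com/signed-out",  # must be registered with the OP
    "state": "af0ifjsldkj",  # opaque value echoed back after logout
}

logout_url = f"{end_session_endpoint}?{urlencode(params)}"
print(logout_url)  # redirect the user's browser here to sign out at the OP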

Tuesday, 27. September 2022

Timothy Ruff

Zero Trust, Web5, and GLEIF’s vLEI

My business partner at Digital Trust Ventures, Dr. Samuel Smith, happens to be the smartest human I’ve met, and through my line of work I’ve been fortunate to meet some smart ones. In an email exchange during the last 72 hours, Sam opined about the McKinsey Technology Trends Report for 2022 (the full report), which strongly touts both self-sovereign identity (SSI) — which I now believe shoul

My business partner at Digital Trust Ventures, Dr. Samuel Smith, happens to be the smartest human I’ve met, and through my line of work I’ve been fortunate to meet some smart ones.

In an email exchange during the last 72 hours, Sam opined about the McKinsey Technology Trends Report for 2022 (the full report), which strongly touts both self-sovereign identity (SSI) — which I now believe should be considered as part of Web5 — and zero trust architecture (ZTA). As happens often, I found Sam’s private comments insightful, but this time so much so that I’m making them immediately public, with his permission.

I’ve not changed a word, other than adding [Web5] and emphasis where appropriate.

I think a useful insight with regards the McKinsey report is that the GLEIF vLEI is leveraging a zero trust architecture (ZTA) to provide digital identity. This means that the benefits of both trends are realized in the vLEI. Moreover, once an enterprise adopts a ZTA for digital identity, using ZTA for adjacent functions to digital identity becomes easier. Indeed, fully decentralized ZTAs fall short unless they include a zero trust digital identity system as the basis for verification (which verification is essential to the function of ZTA). The two go hand-in-glove.
All forms of zero trust require some form of access control which in turn requires some form of digital identity. But centralized digital identity has large trust surfaces (things that must be trusted and therefore can’t be verified). But truly fully end-verifiable zero trust is the gold standard for ZTA and end-verifiable is largely incompatible with anything but decentralized digital identity. Decentralized digital identity makes zero trust more secure because it both increases the surface of what can be verified and decreases the surface of what must be trusted without verification.
Autonomous control over data and relationships [Web5] is best enabled by end-verifiable mechanisms where the parties involved in any interaction are enabled to choose their trust surfaces and trust anchors. GLEIF is an end-verifiable trust anchor. GLEIF vLEI credentials enable a party to an interaction to present trustworthy end-verifiable artifacts (AKA vLEIs) to bootstrap trust in a decentralized way.
Decentralized means control is diffuse. Control structures are distributed amongst parties. Autonomous control means each party picks its control structure. If my identifiers are portable across platforms and I get to pick what trust anchors I trust then I have a high degree of autonomous control. Each side of an interaction/transaction gets to decide what they will trust and what they will not trust without verification. The transaction is not finalized until both sides are satisfied with the other side’s trustworthiness according to each’s own criteria for sufficient trustworthiness.
I guess what I am saying is that ZTAs all require digital identity but end-verifiable digital identity requires a ZTA. But not all forms of digital identity require ZTAs.

If you’re like most people, you might want (or need) to read what Sam writes at least twice, slowly. My opinion: Sam Smith is the TBL of “the authentic web”, and I believe his short missive above contains some critical insights about the surprisingly strong relationship between Web5, decentralization, and the globally recognized gold standard of cybersecurity, zero trust architecture.

Sam’s insights are worth paying close attention to. For those who don’t know, here’s a bit about Sam’s accomplishments in this space.

Sam is the originator of the blockchain-based issue-hold-verify model that the SSI/Web5 industry now embraces. Evernym hired Sam as a consultant in 2015, having discovered him from his early 2015 paper, Open Reputation. At the time he was the only one we could find who was talking (writing) smartly about how blockchain might be used for identity, and he happened to live 30 minutes away.

Sam and Jason Law, my co-founder at Evernym, soon figured out how to “give people their stuff” through the use of digital wallets, rather than “put it on the blockchain” as everyone else was doing at the time (some still do!). Sam authored Identity System Essentials along with Dmitry Khovratovich, a world-class cryptographer we’d engaged, and the issue-hold-verify model using blockchain was born, resulting in Sovrin, with Hyperledger Indy, Aries, and Ursa under the hood.

But that impressive innovation pales in comparison to what he’s invented since with KERI, ACDCs, CESR and more: making SSI possible (and better) without blockchain (KERI); solving decentralized (and centralized) key management by solving the key rotation dilemma (KERI); making verifiable credentials chain-able, highly secure, and ultra lightweight to implement (ACDCs); making event streaming ultra-compact and switchable between text and binary without sacrificing security (CESR); and more.

My advice: if you’re interested in digital trust, ZTA, digital identity, and all things Web5, when Sam writes or speaks: listen, learn, and act accordingly, and you’ll end up toward the front of this train rather than the caboose. His written and oral communication is often thick and technical, but hang in there and let us know what needs better ‘translation’ to simpler language, and we’ll do our best.

This short post is just a taste of what’s to come on these rich topics. We’ll be writing (and podcasting) much more about zero trust, the vLEI, and of course Web5 over the coming weeks and months, so stay tuned.

Sunday, 25. September 2022

Jon Udell

Curating the Studs Terkel archive

I can read much faster than I can listen, so I rarely listen to podcasts when I’m at home with screens on which to read. Instead I listen on long hikes when I want to shift gears, slow down, and absorb spoken words. Invariably some of those words strike me as particularly interesting, and I … Continue reading Curating the Studs Terkel archive

I can read much faster than I can listen, so I rarely listen to podcasts when I’m at home with screens on which to read. Instead I listen on long hikes when I want to shift gears, slow down, and absorb spoken words. Invariably some of those words strike me as particularly interesting, and I want to capture them. Yesterday, what stuck was these words from a 1975 Studs Terkel interview with Muhammad Ali:

Everybody’s watching me. I was rich. The world saw me, I had lawyers to fight it, I was getting credit for being a strong man. So that didn’t really mean nothing. What about, I admire the man that had to go to jail named Joe Brown or Sam Jones, who don’t nobody know who’s in the cell, you understand? Doing his time, he got no lawyers’ fees to pay. And when he get out, he won’t be praised for taking a stand. So he’s really stronger than me. I had the world watching me. I ain’t so great. I didn’t do nothing that’s so great. What about the little man don’t nobody know? He’s really the one.

I heard these words on an episode of Radio OpenSource about the Studs Terkel Radio Archive, an extraordinary compilation of (so far) about a third of his 5600 hours of interviews with nearly every notable person during the latter half of the twentieth century.

If you weren’t aware of him, the Radio OpenSource episode, entitled Studs Terkel’s Feeling Tone, is the perfect introduction. And it’s delightful to hear one great interviewer, Chris Lydon, discuss with his panel of experts the interviewing style of perhaps the greatest interviewer ever.

Because I’d heard Muhammad Ali’s words on Radio OpenSource, I could have captured them in the usual way. I always remember where I was when I heard a segment of interest. If that was 2/3 of the way through my hike, I’ll find the audio at the 2/3 mark on the timeline. I made a tool to help me capture and share a link to that segment, but it’s a clumsy solution.
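
For illustration only (a sketch, not the tool described above), a link of that rough-and-ready kind can be approximated with a W3C Media Fragment, using a #t=start,end fragment to point a player at a start time computed from the fraction of the hike; the episode URL below is a placeholder.

# Rough sketch: build a shareable link to an audio segment using a
# W3C Media Fragment (#t=start,end). The episode URL is a placeholder.
def segment_link(audio_url: str, duration_s: float, fraction: float, window_s: float = 60.0) -> str:
    start = max(0.0, duration_s * fraction - window_s / 2)
    end = min(duration_s, start + window_s)
    return f"{audio_url}#t={start:.0f},{end:.0f}"

# e.g., a segment heard about 2/3 of the way through an hour-long episode
print(segment_link("https://example.com/episode.mp3", 3600, 2 / 3))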

What you’d really rather do is search for the words in a transcript, select surrounding context, use that selection to define an audio segment, and share a link to both text and audio. That’s exactly what I did to produce this powerful link, courtesy of WFMT’s brilliant remixer, which captures both the written words I quoted above and the spoken words synced to them.

That’s a dream come true. Thank you! It’s so wonderful that I hesitate to ask, but … WFMT, can you please make the archive downloadable? I would listen to a lot of those interviews if I could put them in my pocket and take them with me on hikes. Then I could use your remixer to help curate the archive.

Tuesday, 20. September 2022

MyDigitalFootprint

The Gap between #Purpose and #How is filled with #Paradox

Data would indicate that our global interest in purpose is growing. In truth, searching on Google for purpose is probably not the best place to start, and I write a lot about how to use data to frame an argument as this viewpoint highlights that the gap between purpose and how is filled with paradox. Source: Google Trends The Peak Paradox framework can be viewed from many different perspecti
Data would indicate that our global interest in purpose is growing. In truth, searching Google for “purpose” is probably not the best place to start, and I write a lot about how to use data to frame an argument; as this viewpoint highlights, the gap between purpose and how is filled with paradox.

Source: Google Trends


The Peak Paradox framework can be viewed from many different perspectives. In this 3-minute read, I want to focus on the gap between “Purpose” and “How.”

For example, Robin Hood's (as in the legendary heroic outlaw originally depicted in English folklore and subsequently featured in literature and film - not the stock trading company) purpose was “The redistribution of wealth.” He and his band of merry fellows implemented the purpose by any means, mainly robbing the rich and giving to the poor (how they did it). The Purpose was not wrong, but How was an interesting take on roles in society.

Google’s Purpose is to “Organise the world's information.” How it does this is by collection and ownership of data. The Purpose is not wrong, but How is an interesting take on data ownership.

Facebook's Purpose is “To connect every person in the world.” It does this by using your data to manipulate you and your network. The Purpose is not wrong, but How Facebook does it is an interesting take on control and the distribution of value.

Reviewing the world's top businesses' mission/purpose statements, we will conclude that, in general, “purpose is good.” When we question “How” the purpose is implemented, we shine a light on the incentives, motivations and methods.

“How” provides insights into the means; however, we should not be too quick to judge how. Consider the Suffragette mission, Climate Change movements or Anti-Apartheid. Sometimes “how” is left with fewer choices or options.

Apple’s Purpose is “To bring the best personal computing experience to students, educators, creative professionals, and consumers around the world through its innovative hardware, software, and internet offerings.” How Apple does this is to make you dependent on them, their products, and their services. Apple now positions their devices (the Apple Watch) as if you might not make it out alive without one. The “How” is to exploit users via lock-in without them realising (they are not alone).

We should question what Principles “How” aligns to

In political systems we see structural tensions. Different sides of political systems don’t demand worse security, degrading healthcare, more poverty or less education. Fundamentally, everyone’s shared purpose is a better society for all, and that is broadly accepted; however, #how individuals believe a policy can be delivered creates tension, factions and division, along with the allocation and prioritisation of resources created by scarcity. #decisions

The Peak Paradox framework positions different ideological purposes to expose the conflicts that we all face, as we cannot optimise for one thing to the exclusion of others; we have to find the place where we feel at peace, and better still if we can find a team who also find rest within the same compromises. That does not mean we have to agree or go unchallenged, as the world will throw enough challenges at us.

Take Away

As you move towards making decisions, the decisions become about the details needed to realise your balanced purpose. The HOW implementation decisions need to align with principles, and this is where we see the gaps. The gap is not in the purpose but in the misalignment of the implementation with the principles we would believe in if we were asked to deliver the purpose.


If we were asked to deliver the purpose, a gap appears in the misalignment of the implementation with principles: between what we believe in and what others align to.


 




Monday, 19. September 2022

Damien Bod

ASP.NET Core Api Auth with multiple Identity Providers

This article shows how an ASP.NET Core API can be secured using multiple access tokens from different identity providers. ASP.NET Core schemes and policies can be used to set this up. Code: https://github.com/damienbod/AspNetCoreApiAuthMultiIdentityProvider The ASP.NET Core API has a single API and needs to accept access tokens from three different identity providers. Auth0, OpenIddict and […]

This article shows how an ASP.NET Core API can be secured using multiple access tokens from different identity providers. ASP.NET Core schemes and policies can be used to set this up.

Code: https://github.com/damienbod/AspNetCoreApiAuthMultiIdentityProvider

The ASP.NET Core solution has a single API and needs to accept access tokens from three different identity providers. Auth0, OpenIddict and Azure AD are used as identity providers. OAuth2 is used to acquire the access tokens. I used self-contained access tokens which are only signed, not encrypted. This can be changed and would result in changes to the ForwardDefaultSelector implementation. Each of the access tokens needs to be validated fully, including its signature. How to validate a self-contained JWT access token is documented in the OAuth2 best practices. We use an ASP.NET Core authorization handler to validate the specific claims from the different identity providers.

The authentication is added like any API implementation, except the default scheme is set up to a new value which is not used by any of the specific identity providers. This scheme is used to implement the ForwardDefaultSelector switch. When the API receives an HTTP request, it must decide which identity provider issued the token and apply that provider's token validation. The Auth0 token validation is implemented using the standard AddJwtBearer, which validates the issuer, audience and the signature.

services.AddAuthentication(options =>
{
    options.DefaultScheme = "UNKNOWN";
    options.DefaultChallengeScheme = "UNKNOWN";
})
.AddJwtBearer(Consts.MY_AUTH0_SCHEME, options =>
{
    options.Authority = Consts.MY_AUTH0_ISS;
    options.Audience = "https://auth0-api1";
    options.TokenValidationParameters = new TokenValidationParameters
    {
        ValidateIssuer = true,
        ValidateAudience = true,
        ValidateIssuerSigningKey = true,
        ValidAudiences = Configuration.GetSection("ValidAudiences").Get<string[]>(),
        ValidIssuers = Configuration.GetSection("ValidIssuers").Get<string[]>()
    };
})

AddJwtBearer is also used to implement the Azure AD access token validation. I normally use Microsoft.Identity.Web for Microsoft Azure AD access tokens, but this adds some extra magic overwriting the default middleware and preventing the other identity providers from working. This is where client security gets really complicated, as each identity provider vendor pushes their own client solution with different methods and different implementations hiding the underlying OAuth2 implementation. If the identity provider vendor-specific client does not override the default schemes or policies of the ASP.NET Core middleware, then it is ok to use. I like to implement as little as possible, as this makes it easier to maintain over time. Creating these wrapper solutions hiding some of the details probably makes the whole security story more complicated. If these wrappers were compatible with 80% of non-vendor-specific solutions, then the clients would be good.

.AddJwtBearer(Consts.MY_AAD_SCHEME, jwtOptions =>
{
    jwtOptions.MetadataAddress = Configuration["AzureAd:MetadataAddress"];
    jwtOptions.Authority = Configuration["AzureAd:Authority"];
    jwtOptions.Audience = Configuration["AzureAd:Audience"];
    jwtOptions.TokenValidationParameters = new TokenValidationParameters
    {
        ValidateIssuer = true,
        ValidateAudience = true,
        ValidateIssuerSigningKey = true,
        ValidAudiences = Configuration.GetSection("ValidAudiences").Get<string[]>(),
        ValidIssuers = Configuration.GetSection("ValidIssuers").Get<string[]>()
    };
})

I also used AddOpenIddict to implement the JWT access token validation from OpenIddict. In this example, I use self-contained, unencrypted access tokens, so I disable the default, more secure solution using introspection and encrypted access tokens (reference). This would also need to be changed on the IDP. I used the vendor-specific client here because it does not override the ASP.NET Core default middleware and so does not break the validation from the other vendors. You could also validate this access token like above with plain JWT OAuth.

// Register the OpenIddict validation components.
// Scheme = OpenIddictValidationAspNetCoreDefaults.AuthenticationScheme
services.AddOpenIddict()
    .AddValidation(options =>
    {
        // Note: the validation handler uses OpenID Connect discovery
        // to retrieve the address of the introspection endpoint.
        options.SetIssuer("https://localhost:44318/");
        options.AddAudiences("rs_dataEventRecordsApi");

        // Configure the validation handler to use introspection and register the client
        // credentials used when communicating with the remote introspection endpoint.
        //options.UseIntrospection()
        //    .SetClientId("rs_dataEventRecordsApi")
        //    .SetClientSecret("dataEventRecordsSecret");

        // disable access token encryption for this
        options.UseAspNetCore();

        // Register the System.Net.Http integration.
        options.UseSystemNetHttp();

        // Register the ASP.NET Core host.
        options.UseAspNetCore();
    });

The AddPolicyScheme method is used to implement the ForwardDefaultSelector switch. The default scheme is set to UNKNOWN, so by default access tokens will use this first. Depending on the issuer, the correct scheme is set and the access token is fully validated using the correct signatures etc. You could also implement logic here for reference tokens using introspection, or cookie authentication etc. This implementation will always be different depending on how you secure the API. Sometimes you use cookies, sometimes reference tokens, sometimes encrypted tokens, and so you need to identify the identity provider somehow and forward this on to the correct validation.

.AddPolicyScheme("UNKNOWN", "UNKNOWN", options =>
{
    options.ForwardDefaultSelector = context =>
    {
        string authorization = context.Request.Headers[HeaderNames.Authorization];
        if (!string.IsNullOrEmpty(authorization) && authorization.StartsWith("Bearer "))
        {
            var token = authorization.Substring("Bearer ".Length).Trim();
            var jwtHandler = new JwtSecurityTokenHandler();
            // it's a self contained access token and not encrypted
            if (jwtHandler.CanReadToken(token))
            {
                var issuer = jwtHandler.ReadJwtToken(token).Issuer;
                if(issuer == Consts.MY_OPENIDDICT_ISS) // OpenIddict
                {
                    return OpenIddictValidationAspNetCoreDefaults.AuthenticationScheme;
                }

                if (issuer == Consts.MY_AUTH0_ISS) // Auth0
                {
                    return Consts.MY_AUTH0_SCHEME;
                }

                if (issuer == Consts.MY_AAD_ISS) // AAD
                {
                    return Consts.MY_AAD_SCHEME;
                }
            }
        }

        // We don't know what it is
        return Consts.MY_AAD_SCHEME;
    };
});

Now that the signature, issuer and the audience are validated, specific claims can also be checked using an ASP.NET Core policy and a handler. The AddAuthorization method is used to add this.

services.AddSingleton<IAuthorizationHandler, AllSchemesHandler>();

services.AddAuthorization(options =>
{
    options.AddPolicy(Consts.MY_POLICY_ALL_IDP, policyAllRequirement =>
    {
        policyAllRequirement.Requirements.Add(new AllSchemesRequirement());
    });
});

The handler checks the specific identity provider access claims using the iss claim as the switch information. You can add scopes, roles or whatever, and this is identity provider specific. All do this differently.

using Microsoft.AspNetCore.Authorization;

namespace WebApi;

public class AllSchemesHandler : AuthorizationHandler<AllSchemesRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, AllSchemesRequirement requirement)
    {
        var issuer = string.Empty;
        var issClaim = context.User.Claims.FirstOrDefault(c => c.Type == "iss");
        if (issClaim != null)
            issuer = issClaim.Value;

        if (issuer == Consts.MY_OPENIDDICT_ISS) // OpenIddict
        {
            var scopeClaim = context.User.Claims.FirstOrDefault(c => c.Type == "scope" && c.Value == "dataEventRecords");
            if (scopeClaim != null)
            {
                // scope": "dataEventRecords",
                context.Succeed(requirement);
            }
        }

        if (issuer == Consts.MY_AUTH0_ISS) // Auth0
        {
            // add require claim "gty", "client-credentials"
            var azpClaim = context.User.Claims.FirstOrDefault(c => c.Type == "azp" && c.Value == "naWWz6gdxtbQ68Hd2oAehABmmGM9m1zJ");
            if (azpClaim != null)
            {
                context.Succeed(requirement);
            }
        }

        if (issuer == Consts.MY_AAD_ISS) // AAD
        {
            // "azp": "--your-azp-claim-value--",
            var azpClaim = context.User.Claims.FirstOrDefault(c => c.Type == "azp" && c.Value == "46d2f651-813a-4b5c-8a43-63abcb4f692c");
            if (azpClaim != null)
            {
                context.Succeed(requirement);
            }
        }

        return Task.CompletedTask;
    }
}

An Authorize attribute can be added to the controller exposing the API, and the policy is applied there. The AuthenticationSchemes property is used to add a comma-separated string of all the supported schemes.

[Authorize(AuthenticationSchemes = Consts.ALL_MY_SCHEMES, Policy = Consts.MY_POLICY_ALL_IDP)]
[Route("api/[controller]")]
public class ValuesController : Controller
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "data 1 from the api", "data 2 from the api" };
    }
}

This works well, and you can enforce the authentication at the application level. Using this, you can implement a single API that accepts multiple access tokens, but this does not mean that you should do this. I would always separate the APIs and identity providers to different endpoints if possible. Sometimes you need this, and ASP.NET Core makes it easy as long as you use the standard implementations. If you use vendor-specific client libraries to implement the security, then you need to understand what the wrappers do and how the schemes and policies in the ASP.NET Core middleware are implemented. Setting the default scheme affects all the clients and not just the specific vendor implementation.

Links

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/policyschemes

Sunday, 18. September 2022

Doc Searls Weblog

Attention is not a commodity

In one of his typically trenchant posts, titled Attentive, Scott Galloway (@profgalloway) compares human attention to oil, meaning an extractive commodity: We used to refer to an information economy. But economies are defined by scarcity, not abundance (scarcity = value), and in an age of information abundance, what’s scarce? A: Attention. The scale of the […]

In one of his typically trenchant posts, titled Attentive, Scott Galloway (@profgalloway) compares human attention to oil, meaning an extractive commodity:

We used to refer to an information economy. But economies are defined by scarcity, not abundance (scarcity = value), and in an age of information abundance, what’s scarce? A: Attention. The scale of the world’s largest companies, the wealth of its richest people, and the power of governments are all rooted in the extraction, monetization, and custody of attention.

I have no argument with where Scott goes in the post. He’s right about all of it. My problem is with framing it inside the ad-supported platform and services industry. Outside of that industry is actual human attention, which is not a commodity at all.

There is nothing extractive in what I’m writing now, nor in your reading of it. Even the ads you see and hear in the world are not extractive. They are many things for sure: informative, distracting, annoying, interrupting, and more. But you do not experience some kind of fungible good being withdrawn from your life, even if that’s how the ad business thinks about it.

My point here is that reducing humans to beings who are only attentive—and passively so—is radically dehumanizing, and it is important to call that out. It’s the same reductionism we get with the word “consumers,” which Jerry Michalski calls “gullets with wallets and eyeballs”: creatures with infinite appetites for everything, constantly swimming upstream through a sea of “content.” (That’s another word that insults the infinite variety of goods it represents.)

None of us want our attention extracted, processed, monetized, captured, managed, controlled, held in custody, locked in, or subjected to any of the other verb forms that the advertising world uses without cringing. That the “attention economy” produces $trillions does not mean we want to be part of it, that we like it, or that we wish for it to persist, even though we participate in it.

Like the economies of slavery, farming, and ranching, the advertising economy relies on mute, passive, and choice-less participation by the sources of the commodities it sells. Scott is right when he says “You’d never say (much of) this shit to people in person.” Because shit it is.

Scott’s focus, however, is on what the big companies do, not on what people can do on their own, as free and independent participants in networked whatever—or as human beings who don’t need platforms to be social.

At this point in history it is almost impossible to think outside of platformed living. But the Internet is still as free and open as gravity, and does not require platforms to operate. And it’s still young: at most only decades old. In how we experience it today, with ubiquitous connectivity everywhere there’s a cellular data connection, it’s a few years old, tops.

The biggest part of that economy extracts personal data as a first step toward grabbing personal attention. That is the actual extractive part of the business. Tracking follows it. Extracting data and tracking people for ad purposes is the work of what we call adtech. (And it is very different from old-fashioned brand advertising, which does want attention, but doesn’t track or target you personally. I explain the difference in Separating Advertising’s Wheat and Chaff.)

In How the Personal Data Extraction Industry Ends, which I wrote in August 2017, I documented how adtech had grown in just a few years, and how I expected it would end when Europe’s GDPR became enforceable starting the next May.

As we now know, GDPR enforcement has done nothing to stop what has become a far more massive, and still growing, economy. At most, the GDPR and California’s CCPA have merely inconvenienced that economy, while also creating a second economy in compliance, one feature of which is the value-subtract of websites worsened by insincere and misleading consent notices.

So, what can we do?

The simple and difficult answer is to start making tools for individuals, and services leveraging those tools. These are tools empowering individuals with better ways to engage the world’s organizations, especially businesses. You’ll find a list of fourteen different kinds of such tools and services here. Build some of those and we’ll have an intention economy that will do far more for business than what it’s getting now from the attention economy, regardless of how much money that economy is making today.

Friday, 16. September 2022

Timothy Ruff

What is Web5?

Last week I published “Web3, Web5, & SSI” which argued “why the SSI community should escape Web3 and follow Jack Dorsey and Block into a Web5 big tent, with a common singular goal: the autonomous control of authentic data and relationships”. In this short post I’m proposing a definition for Web5 and providing an example list of Web5 Technologies that I think satisfy the definition. There is n

Last week I published “Web3, Web5, & SSI” which argued “why the SSI community should escape Web3 and follow Jack Dorsey and Block into a Web5 big tent, with a common singular goal: the autonomous control of authentic data and relationships”.

In this short post I'm proposing a definition for Web5 and providing an example list of Web5 Technologies that I think satisfy the definition. There is no naming authority to appeal to for WebX definitions; they materialize from how they're used. Since Web5 is still quite new, a lasting definition is still to be determined (pun intended).

TBD’s Definition of Web5

I should first give credit where it is due: the TBD initiative at Block, headed up by Daniel Buchner and initiated by Jack Dorsey, coined the term Web5. On the front page of their website introducing Web5 to the world, here is how they define it:

WEB5: AN EXTRA DECENTRALIZED WEB PLATFORM
Building an extra decentralized web that puts you in control of your data and identity.

All true and all good, but I would aim for a definition that captures more of the desired results of Web5, not its implementation methods. To me, “an extra decentralized web” is foundational to Web5 but it is a means, not an end. The word and principle “decentralize” is a means toward the end of greater empowerment of individuals.

The phrase “puts you in control of your data and identity” is accurate and speaks to that empowerment, but IMO is lacking crucial references to “authenticity” and “relationships” that I think are equally important for the reasons explained in the next section.

Kudos to the TBD team for the phrase “data and identity”, because I also believe Web5 is about all authentic data, not just identity data. (In last week’s piece there’s a section titled “It’s Not Just About Identity” that elaborates on this point.)

My Proposed Definition of Web5

After discussion with dozens of SSI pros (listed in last week’s post), I’ve discovered a surprising amount of agreement — though not unanimity — with this proposed definition for Web5:

The autonomous control of authentic data and relationships.

It’s not perfect. It’s too long for some uses, too short for others, and undoubtedly some will take issue with my word choices (and already have). Sometimes there’s just not enough words in the dictionary for all this new tech, but I think this definition captures the key desired objectives of Web5 and meaningfully separates us from all other “Webs”.

Each word was chosen carefully for its depth, accuracy, and importance:

"Autonomous" has a hint of "self-sovereign" but with more of an air of neutrality and independence than authority or defiance. It is accurate while less provocative than "self-sovereign". It implies decentralization without using the word (which is also a tad provocative). It works well for IoT applications. Critically, autonomy is the element that makes it difficult for big tech platforms to be part of Web5, at least until they allow users to migrate their data and their relationships away to competing platforms.

"Control" is also a neutral but accurate term, and it is important that those in the decentralization camp begin to use it in place of "own" when referring to "our" data. Data ownership is a trickier topic than most realize, as expertly explained by Elizabeth Renieris in this piece¹. Having "control" in a Web5 context implies a right to control, regardless of where lines of literal 'ownership' may be drawn. When coupled with "autonomous", "control" can be exercised without the invited involvement of or interference from third parties, which is precisely what's intended. "Control" also means the power to delegate authority and/or tasks to other people, organizations and things, and to revoke delegation when desired.

"Authentic": We simply cannot achieve the aim of individual autonomy without verifiable authenticity of our data and relationships; indeed, it is that authenticity that can break the chains of our current captivity to Web2. The intermediaries of Web2 and even Web3 provide the trust mechanisms — within their walled gardens — that enable digital interactions to proceed. Without a comparable or superior means of authenticating data and relationships when interacting peer-to-peer, we'll not be able to escape the confines of these 'trusted' intermediaries.

I propose that, in the context of Web5, the word “authentic” always means two things:

1. having verifiable provenance (who issued/signed it);

2. having verifiable integrity (it hasn’t been altered, revoked, or expired).

When a piece of data is authentic, I know who issued/signed it and I know it is still valid. Whether I choose to trust the signer and what I do with the signed content — it could be untrue, not useful, or gibberish — are separate, secondary decisions.

Authentic relationships are similar to data: I know who (or what) is on the other side of a connection and I know that my connection to them/it is still valid.

"Data" conveys that we're referring to digital things, not physical (though physical things will increasingly have their digital twins). With Web5 all data of import can be digitally, non-repudiably signed both in transit and at rest. Every person, organization and thing can digitally sign and every person, organization and thing can verify what others have signed. It's ubiquitous Zero Trust computing. For privacy purposes, all the capabilities invented in the SSI community still apply: pseudonymity, pairwise relationships, and selective disclosure can minimize correlatability when needed.

"Relationships" means the secure, direct, digital connections between people, organizations, and things. Autonomous relationships are the 'sleeper' element of Web5, the thing that seems simple and innocuous at first glance but in time becomes most important of all. Authentic autonomous relationships will finally free people, organizations and things from the captivity of big tech platforms. (I'm working on a separate piece dedicated to Web5-enabled autonomous relationships; it's an oxymoronic mind-bender and a very exciting topic for SSI enthusiasts.)

Web5 Technologies

I originally grouped this list by tech stack (Ion, Aries, KERI, etc.), but since several items were used by more than one stack (VCs, DIDs, etc.), it’s now simply alphabetical.

Autonomic Identifiers (AIDs)

Authentic Chained Data Containers (ACDCs)

BBS+ Signatures

Composable Event Streaming Representation (CESR)

Decentralized Identifiers (DIDs)

Decentralized Web Apps (DWAs)

Decentralized Web Nodes (DWNs)

Decentralized Web Platform (DWP)

DIDComm

GLEIF Verifiable LEI (vLEI)

Hyperledger Aries

Hyperledger Indy

Hyperledger Ursa

Key Event Receipt Infrastructure (KERI)

Out-of-band Introduction (OOBI)

Sidetree/Ion

Soulbound Tokens (SBTs)

Universal Resolver

Verifiable Credentials (VCs)

Wallets

Zero Knowledge Proofs (ZKPs)

Some of these things are not like the others, and the list is only representative, not exhaustive. The point is, each of these technologies exists to pursue some aspect of the endgame of autonomous control of authentic data and relationships.

What About Blockchain?

Blockchain can enable “autonomous control of authentic data and relationships”, which is why we used it when we conceived and wrote Hyperledger Indy, Aries, and Ursa and built Sovrin. Blockchain underpins most of the Web5 Technologies listed above, so it certainly has its place within Web5. That said, with Web3 — which I define as the decentralized transfer of value — blockchain technology is required due to its double-spend proof and immutability characteristics, whereas with Web5 blockchain is useful, but not required. Therefore, I consider blockchain to be primarily a Web3 technology because Web3 couldn’t exist without it.

It’s Up to You

Anyone who reads my last piece and this one will get the clear feeling that I like both the label and vision of Web5, and my affinity for it has only grown as I write about and use it in conversation. It just works well in conveying a nice grouping of all these abstract concepts, and in ways that the comparable mess of prior terms did not.

But it won’t go anywhere if TBD and I are the only ones using it, it needs to catch on to be used, and be used to catch on. If you like the basic definition I’ve proposed above, even with a tweak or two, I invite you to consider using “Web5” to describe your activities in this space.

¹When a doctor writes down a note about my condition, who ‘owns’ that note… me, the doctor, or the hospital who employs the doctor? The fact is that all three have rights to the data; no party singly ‘owns’ it.


Heres Tom with the Weather


Doc Searls Weblog

Because We Still Have Net 1.0

That’s the flyer for the first salon in our Beyond the Web Series at the Ostrom Workshop, here at Indiana University. You can attend in person or on Zoom. Register here for that. It’s at 2 PM Eastern on Monday, September 19. And yes, all those links are on the Web. What’s not on the Web—yet—are all […]


That’s the flyer for the first salon in our Beyond the Web Series at the Ostrom Workshop, here at Indiana University. You can attend in person or on Zoom. Register here for that. It’s at 2 PM Eastern on Monday, September 19.

And yes, all those links are on the Web. What’s not on the Web—yet—are all the things listed here. These are things the Internet can support, because, as a World of Ends (defined and maintained by TCP/IP), it is far deeper and broader than the Web alone, no matter what version number we append to the Web.

The salon will open with an interview of yours truly by Dr. Angie Raymond, Program Director of Data Management and Information Governance at the Ostrom Workshop, and Associate Professor of Business Law and Ethics in the Kelley School of Business (among too much else to list here), and quickly move forward into a discussion. Our purpose is to introduce and talk about these ideas:

1. That free customers are more valuable—to themselves, to businesses, and to the marketplace—than captive ones.

2. That the Internet's original promises of personal empowerment, peer-to-peer communication, free and open markets, and other utopian ideals, can actually happen without surveillance, algorithmic nudging, and capture by giants, all of which have become norms in these early years of our digital world.

3. That, since the admittedly utopian ambitions behind 1 and 2 require boiling oceans, it's a good idea to try first proving them locally, in one community, guided by Ostrom's principles for governing a commons. Which we are doing with a new project called the Byway.

This is our second Beyond the Web Salon series. The first featured David P. Reed, Ethan Zuckerman, Robin Chase, and Shoshana Zuboff. Upcoming in this series are:

Nathan Schneider on October 17

Roger McNamee on November 14

Vinay Gupta on December 12

Mark your calendars for those.

And, if you’d like homework to do before Monday, here you go:

Beyond the Web (with twelve vexing questions that cannot be answered on the Web as we know it). An earlier and longer version is here.

The Cluetrain Manifesto (published in 1999), and New Clues (published in 2015). Are these true yet? Why not?

Customer Commons. Dig around. See what we're up to there.

A New Way, Byway, and Byway FAQ. All are at Customer Commons and are works in progress. The main thing is that we are now starting work toward actual code doing real stuff. It's exciting, and we'd love to have your help.

Ostrom Workshop history. Also my lecture on the 10th anniversary of Elinor Ostrom's Nobel Prize. Here's the video (start at 11:17), and here's the text.

Privacy Manifesto. In wiki form, at ProjectVRM. Here's the whole project's wiki. And here's its mailing list, active since I started the project at Harvard's Berkman Klein Center (which kindly still hosts it) in 2006.

See you there!


Aaron Parecki

New Draft of OAuth for Browser-Based Apps (Draft -11)

With the help of a few kind folks, we've made some updates to the OAuth 2.0 for Browser-Based Apps draft as discussed during the last IETF meeting in Philadelphia.

With the help of a few kind folks, we've made some updates to the OAuth 2.0 for Browser-Based Apps draft as discussed during the last IETF meeting in Philadelphia.

You can find the current version, draft 11, here:

https://www.ietf.org/archive/id/draft-ietf-oauth-browser-based-apps-11.html

The major changes in this version are adding two new architecture patterns, the "Token Mediating Backend" pattern based on the TMI-BFF draft, and the "Service Worker" pattern of using a Service Worker as the OAuth client. I've also done a fair amount of rearranging of various parts of the document to hopefully make more sense.

Obviously there is no clear winner in terms of which architecture pattern is best, so instead of trying to make a blanket recommendation, the goal of this draft is to document the pros and cons of each. If you have any input into either benefits or drawbacks that aren't mentioned yet in any of the patterns discussed, please feel free to chime in so we can add them to the document! You're welcome to either reply on the list, open an issue on the GitHub repository, or contact me directly. Keep in mind that only comments on the mailing list are part of the official record.

Thursday, 15. September 2022

MyDigitalFootprint

How we value time frames our outcomes and incentives.

I am aware that a human limitation is our understanding of time.  Time itself is something that humans have created to help us comprehend the rules that govern us.  Whilst society is exceptionally good at managing short-time frames (next minute, hour, day and week),  it is well established that humans are very bad at comprehending longer time frames (decades, centuries and millenni

I am aware that a human limitation is our understanding of time. Time itself is something that humans have created to help us comprehend the rules that govern us. Whilst society is exceptionally good at managing short time frames (the next minute, hour, day and week), it is well established that humans are very bad at comprehending longer time frames (decades, centuries and millennia). Humans are proficient at overestimating what we can do in the next year and wildly underestimating what we can achieve in 10 years. (Gates Law)

Therefore, I know there is a problem when we consider how we should value the next 50,000 years. However, we are left with fewer short-term options each year and are forced to consider longer and bigger time frames - the very thing we are less capable of.

Why 50,000 years

The orange circle below represents the 6.75 trillion people (UN figure) who will be born in the next 50,000 years. The small grey circle represents the 100 billion dead who have already lived on earth in the past 50,000 years. Those alive today number 7.7 billion, the small dot between the grey and the orange.

The 100 billion dead lived on a small blue planet (below), which we have dug up, cut down, moved, built and created waste - in about equal measures. These activities have been the foundation of our economy and how we have created wealth thus far. We realise, in hindsight, that our activities have not always been done in a long-term sustainable way. Sustainable here should be interpreted as the avoidance of the depletion of natural resources in order to maintain an ecological balance.

Should we wish for future generations also to enjoy the small blue planet, a wealth or value calculation based on time will not shift the current economic model. We need to move from the existing financial model based on exploiting "free" resources to new models focused on rewarding reuse and renovation. How we use discount rates as a primary function for justifying financial decisions works in the short term but increasingly looks broken, because it does not support long-term sustainability thinking as part of any justification. Time breaks the simplicity of the decision.

Therefore, we appear to face a choice; that can be summarised as follows:

If we frame #esg and #climate in the language of money and finance, the obvious question is how do we value the next 50,000 years, as this would change how we calculate value - in the hope that it would move the dial in terms of investment.  We need to upgrade the existing tools to keep them relevant.  

If we frame #esg, #climate and our role in caring for our habitat as a circular economy (a wicked problem) and not a highly efficient/ effective finance-driven supply chain, we need a different way to calculate value over time economically.

Should we search for the obvious over wicked complexity?

Many deeply intellectual and highly skilled disciplines instinctively favour complex explanations that make their voice sound more valuable and clever. (Accounting, AI, Quantum Physics, Data Analysts, Statisticians, human behaviours). A better solution may lie in something seemingly obvious or trivial, which is therefore perceived to have less value in the eyes of the experts.   We can make #ESG and ecosystem thinking very complex, very quickly, but should we rather be asking better questions? 

We know that better science and more data will not resolve the debate. When we use data and science in our arguments and explain the problem, individuals will selectively credit and discredit information in patterns that reflect their commitment to certain values. 

Many of the best insights and ideas are obvious, but only in retrospect as they have been pointed out, like the apocryphal story of "The Egg of Columbus". Once revealed, they seem self-evident or absurdly simplistic, but that does not prevent their significance from being widely overlooked for years. 

Can data lead to better humanity?

We are (currently) fixated on data, but "data" has morphed from "useful information" to refer to that unrepresentative informational lake which happens to have a numerical expression for everything and is hence capable of aggregation and manipulation into a model that predicts whatever outcome you want. It is very clever, as are the people who manage and propagate it.

ESG, climate and the valuation of nature have become "data" - in the hope that it will improve decision-making and outcomes. Using data, we have already been able to put a value on most of our available natural resources, including human life (life insurance). Whilst we can value risk and price in uncertainty, we have not found or agreed on how to measure or value "the quality of life".

Right now, businesses can only decide based on their data and financial skills. This makes leadership look clever and capable in the eyes of an economist, but we are at risk of acting dumb in the eyes of an ecologist, anthropologist and biologist. Reliance on data can make you blind to the obvious. Jeff Bezos commented, "When the anecdotes and the data disagree, the anecdotes are usually right". Our climate challenge might not be about data but about reframing to make the obvious - well, obvious. Perhaps we should stop asking leadership to decide everything based on data or value and instead demand they care for our natural world as if it were their only job!

Perhaps we should stop asking leadership to decide everything based on data or value and instead demand they care for our natural world as if it were their only job!

Imagine two models; in the first, you have one pen for life, and in the second, you have a new pen for every event. One model supports "growth", and the other a more sustainable ecology. One helps support secondary markets such as a gift economy and can make an individual feel valued. The other is a utility, where the pen has no intrinsic economic value beyond function but has an unmeasurable sentimental value.

If the framing is growth and profitability, then the latter model appeals to investors, as there is a growing demand for more pens. This also drives innovation and differentiation, and so an entire pen industry emerges. (Telegraph Road, Dire Straits.) Employment and opportunity are abundant, but so is the consumption of the earth's resources, and no one cares if a pen is lost or thrown away. Related industries benefit, from gifts to delivery - a thriving, interconnected economy.

If the framing is a sustainable ecology, pens are just a means for an entire ecosystem to develop. One pen for life means it is maintained, treasured and repaired. There is no growth in the market, and innovation is hard; there are few suppliers with cartel-type controls. Transparency and costs might not be a priority. However, if those who made the pens could also benefit from the content/value/wealth created by the pen, such that the pen is no longer stand-alone but earns tiny incremental payments over its life - everything changes. Value is created by what people do with the pen, not the pen itself. Pickaxes in the wild-west railway boom are a well-documented example: what would have happened if the toolmakers had been given a percentage of train ticket prices?

Where does this shine a light?

It appears that decision-making and the tools we need for a "sustainable" age are different from what we have today.  First, we need to ask questions to realise the tools we have are broken; however, all our biases prevent us from asking questions we don't want the answer to.

Below are five typical questions we often utilise as aids to improve decision-making. What the commentary after each question suggests is that our framing means we don't ask the right questions, even though we are taught that these are the right questions.

Have I considered all of the options or choices? Too often, we assume that there are no additional alternatives and, therefore, that the decision has to be made from the choices we have right now. We also discount alternatives that appear more difficult. If the choice we want is available, we ignore others.

Do I have evidence from the past that could help me make an informed decision? We will never change if we depend on history, because we reinforce past experiences as confirmatory guidance and reassurance.

Will I align with this decision in the future? To answer this, you need to have imagined and articulated your future and then tested it to check that it is still valid against how others see the future. It is far too easy to make wild assumptions and live a fantasy. How you see the future might be wildly different from how it is seen by those whose support you need to deliver that future.

What does this decision require from me right now? This is a defensive question that tries to find the route of least resistance and lowest energy. We use it as a way to justify efficiency and effectiveness over efficacy.

Is this a decision or a judgement? Sometimes the facts and data point to a route we find too hard, so we ignore them and say it is a judgement call or a gut feeling. When data shows us a path we don't like, we tend to find reasons to take the path we want. (Re-read that Jeff Bezos quote above.)

These simple questions highlight that we have significant built-in path dependency, which makes asking new questions and seeing the limitation of our existing tools hard.  A wicked problem emerges because our existing tools and questions largely frame outcomes to remain the same. 

It follows that purpose should bound strategy, and both should frame structure. Without purpose, strategy is just a list of tasks and activities, and determining whether what you do leads to a better outcome is somewhat impossible. We obviously value time, as it frames our strategy, outcomes and incentives.

Therefore the question to comment on is, "how should we measure the quality of life today, and how will our measures change over the next 100 years?"

Wednesday, 14. September 2022

@_Nat Zone

The formation of the OpenWallet Foundation has been announced

On the evening of the 14th, at the Open Source Summit held in Dublin…

On the evening of the 14th, the formation of the OpenWallet Foundation was announced at the Open Source Summit held in Dublin. The OpenWallet Foundation is a project to build an open source wallet engine that follows standard protocols. It is an open source project and will not develop standards itself; when new standards are needed, it will take that work to other standardization bodies.

BIG NEWS AT #OSSummit: OpenWallet has announced intent to form the OpenWallet Foundation, under the umbrella of the Linux Foundation!#OpenWallet #OpenSource pic.twitter.com/ybIMRND5eo

— The Linux Foundation (@linuxfoundation) September 14, 2022
This is something I too have been pushing forward over the past several months together with many colleagues. Those "colleagues" include credit card schemes, automakers, IT companies, government-backed open source efforts such as IndiaStack and MOSIP, and standards bodies.

The press release from the Linux Foundation is here, but the parties that appear in it with "words of support" are only a small portion of those involved. More will gradually become clear from here on. I will also mention that, among those not listed there, Mastercard and Microsoft took part in the panel discussion. It will be worth keeping a close eye on how this unfolds.

To keep up with the latest developments, register on the OpenWallet Foundation website https://openwallet.foundation/.

@goldscheider is giving a keynote introducing the OpenWallet Foundation at #ossummit Europe. #OpenWalletFoundation pic.twitter.com/3f8KXH5lR7

— Mike Dolan (@mdolan) September 14, 2022

Tuesday, 13. September 2022

Werdmüller on Medium

The world is not designed for equitable parenting

Why does there need to be one primary carer? Continue reading on Medium »

Why does there need to be one primary carer?

Continue reading on Medium »

Monday, 12. September 2022

Damien Bod

Setup application client in Azure App Registration with App roles to use a web API

In Azure AD, a client application with no user (daemon client) which uses an access token to access an API protected with Microsoft Identity needs to use an Azure API Registration with App Roles. Scopes are used for delegated flows (with a User and a UI login). This is Azure AD specific not OAuth2. This […]

In Azure AD, a client application with no user (a daemon client) which uses an access token to access an API protected with Microsoft Identity needs to use an Azure App Registration with App Roles. Scopes are used for delegated flows (with a user and a UI login). This is Azure AD specific, not OAuth2. This post shows the portal steps to set up an Azure App Registration with Azure App roles.

Code: https://github.com/damienbod/GrpcAzureAppServiceAppAuth

This is a follow up post to this article:

https://damienbod.com/2022/08/29/secure-asp-net-core-grpc-api-hosted-in-a-linux-kestrel-azure-app-service/

To set this up, an Azure App Registration needs to be created. We would like to allow only single tenant Azure AD access, although this setting is not really used, as the client authenticates with the client credentials flow using a secret or a certificate. No platform needs to be selected as this is only an API.

In the App Registration Expose an API blade, the Application ID URI must be set.

Now create an App role for the application and set the Allowed member types to Applications. The role is then added to the claims in the access token.

In the Manifest, set the accessTokenAcceptedVersion to 2 and save the manifest.

Now the App Registration needs to allow the App role. In the API permissions blade, search for the name of the Azure App Registration in the APIs tab and select the new App Role you created as an allowed permission. You must then grant admin consent for this.

You need to use a secret or a certificate to acquire the access token. This needs to be added to the Azure App Registration. Some type of rotation needs to be implemented for this.
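As a rough sketch (not from the post), a daemon client could acquire such an access token with Microsoft.Identity.Client using the client credentials flow; the tenant ID, client ID, secret and Application ID URI below are placeholders.

using Microsoft.Identity.Client;

// Placeholder values from the daemon client App Registration
var tenantId = "your-tenant-id";
var clientId = "your-daemon-client-id";
var clientSecret = "your-client-secret";

var app = ConfidentialClientApplicationBuilder
    .Create(clientId)
    .WithClientSecret(clientSecret)
    .WithAuthority(new Uri($"https://login.microsoftonline.com/{tenantId}"))
    .Build();

// The /.default scope requests the app roles granted to this client on the API App Registration
var scopes = new[] { "api://your-api-application-id-uri/.default" };

var result = await app.AcquireTokenForClient(scopes).ExecuteAsync();
Console.WriteLine(result.AccessToken);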

You can validate the app role in the access token claims. If the roles claim is not included in the access token, the API will return a 401 to the client without a good log message.
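One way to enforce the app role in the API is an authorization policy; a minimal sketch, assuming a role value of "access_as_application" (use whatever value you defined for the App role in the App Registration):

using Microsoft.AspNetCore.Authorization;

var builder = WebApplication.CreateBuilder(args);

// "access_as_application" is a placeholder and must match the App role value
// defined in the App Registration.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("RequireAppRole", policy =>
        // Depending on how claims are mapped, the role may surface as the "roles" claim
        // (checked here) or via the identity role claim type, in which case
        // policy.RequireRole("access_as_application") can be used instead.
        policy.RequireClaim("roles", "access_as_application"));
});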

Next steps would be to automate this using PowerShell or Terraform, and to solve the secret or certificate rotation.

Links:

https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-daemon-overview

https://damienbod.com/2020/10/01/implement-azure-ad-client-credentials-flow-using-client-certificates-for-service-apis/

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi

Secure ASP.NET Core GRPC API hosted in a Linux kestrel Azure App Service

Saturday, 10. September 2022

Bill Wendels Real Estate Cafe

Use 27th anniversary of Real Estate Cafe to mobilize BILLIONS in consumer savings “a la carte”

“Find me a buyer who paid their realtor (directly) & out of their pocket. Can anyone name one – just one?” That challenge was issued… The post Use 27th anniversary of Real Estate Cafe to mobilize BILLIONS in consumer savings “a la carte” first appeared on Real Estate Cafe.

“Find me a buyer who paid their realtor (directly) & out of their pocket. Can anyone name one – just one?” That challenge was issued…

The post Use 27th anniversary of Real Estate Cafe to mobilize BILLIONS in consumer savings “a la carte” first appeared on Real Estate Cafe.

Werdmüller on Medium

A letter to my mother on the event of my child’s birth

Dear Ma, Continue reading on Medium »

Friday, 09. September 2022

Doc Searls Weblog

Of Waste and Value

One morning a couple months ago, while I was staying at a friend’s house near Los Angeles, I was surprised to find the Los Angeles Times still being delivered there. The paper was smaller and thinner than it used to be, with minimized news, remarkably little sports, and only two ads in the whole paper. […]

One morning a couple months ago, while I was staying at a friend's house near Los Angeles, I was surprised to find the Los Angeles Times still being delivered there. The paper was smaller and thinner than it used to be, with minimized news, remarkably little sports, and only two ads in the whole paper. One was for Laemmle Theaters. The other was for a law firm. No inserts from grocery stores. No pitches for tires in the sports section, for clothing in the culture section, for insurance in the business section, or for events in the local section. I don't even recall if those sections still existed, because the paper itself had been so drastically minimized.

Economically speaking, a newspaper has just two markets: advertisers and readers. The photo above says what one advertiser thinks: that ads in print are a waste—and so is what they’re printed on, including the LA Times. The reader whose house I stayed in has since canceled her subscription. She also isn’t subscribing to the online edition. She also subscribes to no forms of advertising, although she can hardly avoid ads online, or anywhere outside her home.

Many years ago, Esther Dyson said the challenge for business isn’t to add value but to subtract waste. So I’m wondering how much time, money, and effort Pavillions is wasting by sending ads to people—even to those who scan that QR code.

Peter Drucker said “the purpose of a business is to create a customer.” So, consider the difference between a customer created by good products and services and one created by coupons and “our weekly ad in your web browser.”

A good example of the former is Trader Joe's, which has no loyalty program, no stuff "on sale," no human-free checkout, almost no advertising—and none of the personal kind. Instead, Trader Joe's creates customers with good products, good service, good prices, and helpful human beings. It never games customers with what Doug Rauch, retired president of Trader Joe's, calls "gimmicks."*

I actually like Pavillions. But only two things make me a Pavillions customer. One is their location (slightly closer than Trader Joe's), and the other is that they carry bread from LaBrea Bakery.

While I would never sign up for a weekly ad from Pavillions, I do acknowledge that lots of people love coupons and hunting for discounts.

But how much of that work is actually waste as well, with high cognitive and operational overhead for both sellers and buyers? How many CVS customers like scanning their loyalty card or punching in their phone number when they check out of the store—or actually using any of the many discounts printed on the store’s famous four-foot-long receipts? (Especially since many of those discounts are for stuff the customer just bought? Does CVS, which is a good chain with locations everywhere, actually need those gimmicks?)

Marketers selling services to companies like Pavillions and CVS will tell you, with lots of supporting stats, that coupons and personalized (aka “relevant” and “interest-based”) ads and promos do much to improve business. But what if a business is better to begin with, so customers come there for that reason, rather than because they’re being gamed with gimmicks?

Will the difference ever become fully obvious? I hope so, but I don’t know.

One thing I do know is that there is less and less left of old-fashioned brand advertising: the kind that supported newspapers in the first place. That kind of advertising was never personal (that was the job of “direct response marketing”). It was meant instead for populations of possible customers and carried messages about the worth of a brand.

This is the kind of advertising we still see on old-school TV, radio and billboards. Sometimes also in vertical magazines. (Fashion, for example.) But not much anymore in newspapers.

Why this change? Well, as I put it in Separating Advertising’s Wheat and Chaff, “Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.”

That was seven years ago. A difference now is that it’s clearer than ever that digital tech and the Internet are radically changing every business, every institution, and every person who depends on it. Everywhere you drive in Los Angeles today, there are For Lease signs on office buildings. The same is true everywhere now graced with what Bob Frankston calls “ambient connectivity.” It’s a bit much to say nobody is going back to the office, but it’s obvious that you need damn good reasons for going there.

Meanwhile, I’m haunted by knowing a lot of real value is being subtracted as waste. (I did a TEDx talk on this topic, four years ago.) And that we’re not going back.

*I tell more of the Trader Joe’s story, with help from Doug, in The Intention Economy.

Thursday, 08. September 2022

Identity Woman

FTC on Commercial Surveillance and Data Security Rulemaking

Today, Sept 8th, the FTC held a Public Forum on commercial surveillance and data security and I made a public comment that you can find below. I think the community focused on SSI should collaborate together on some statements to respond to the the FTC advance notice of proposed rulemaking related to this and has […] The post FTC on Commercial Surveillance and Data Security Rulemaking appeared f

Today, Sept 8th, the FTC held a Public Forum on commercial surveillance and data security and I made a public comment that you can find below. I think the community focused on SSI should collaborate on some statements to respond to the FTC advance notice of proposed rulemaking related to this and has […]

The post FTC on Commercial Surveillance and Data Security Rulemaking appeared first on Identity Woman.


Kyle Den Hartog

A response to Identity Woman’s recent blog post about Anoncreds

Kaliya has done a great job of honestly and fairly distilling a nuanced technical discussion about the Hyperledger SSI stack

Now that I'm not involved in DIDs and VCs full time, I tend not to find myself engaging with this space as much, but I definitely want to call this essay out as a MUST read for those exploring the SSI space.

Kaliya IdentityWoman Young has really taken the time to break down the nuances of a very technical conversation and distill it for non-technical readers in an honest and fair assessment of Hyperledger Indy, Aries, and more specifically Anoncreds.

I know and have been heavily involved with many of the people who have driven this work forward for years. When I was at Evernym I had the privilege of helping to build out the Aries stack and get that project running. I also spent a small portion of time working on the token that was meant to be launched for the Sovrin network. I have a massive amount of respect for the work and commitment put in by various leaders in this portion of the community. Without their hard work to spread the word, spend millions of investment on shipping code, and drive the standards forward, the SSI community wouldn't exist in the form it does today.

With that in mind, I think it's important that these points are raised so that good discussion of progress can occur. Some aspects of the Hyperledger SSI stack are useful and will exist in more useful forms than others. For example, the Sovrin ledger has been a well known implementation of SSI that has struggled in the past, and with the immense amount of time and effort invested that ledger is now in a much better state compared to when I worked at Evernym. I've witnessed many of the changes, discussions, and late night efforts by many people to keep the Indy node codebase working in production, and they've done an amazing job at it. It's only when I started to take a step back that I realized that the architecture of Indy being a private, permissioned ledger leaves it heading in the same direction as many large corporations' now extinct browser and intranet projects, for many of the same reasons. Private networks lead to infrastructure that is hard to maintain (read: expensive), and the value they create often doesn't outweigh the cost of maintaining them. It's only through sharing in an open network that we can leverage economies of scale to make the value outweigh the costs, because the costs become shared. That's why the internet has filled the space where the corporate intranets have failed.

With respect to Anoncreds, there have been some incredibly well intentioned desires to deploy private and secure solutions for VCs that can be enforced by cryptography. To my knowledge this was the original reason why CL-Signatures were chosen, and it remains one of the most motivating factors for that community. They truly believe that privacy is a first class feature of any SSI system, and I 100% agree with the principle, but not the method chosen (anymore - at one point I thought it was the best approach, but with time comes wisdom). Privacy erosions aren't going to happen because selective disclosure wasn't enabled by the signature scheme chosen. They're going to happen at the legal layer where terms of service and business risk are evaluated and enforced. The classic example of presenting a driver's license to prove you're over 21 is a perfect example of this. Sure, I could share only that I'm over 21 and that I had the driver's license issued by a valid issuer. However, this opens up the possibility that I could be presenting someone else's credential, so I also need to present additional correlatable information (such as a photo or a name) to prove I am the person the credential was intended for. This is because authentication is inherently a correlatable event, where with a greater correlation of information the risk of a false positive (e.g. authenticating the wrong person) is reduced. So, businesses trying to validate a driver's license are naturally going to trend towards requiring a greater amount of information to reduce their legal risks. The same issues are even more pervasive in the KYC space. So, in my opinion selective disclosure is a useful optimization tool to assist, but it's only with proper differential privacy analysis of the information gathered, legal enforcement, and insurance to offset the costs of risk that we'll achieve better privacy with these systems.

All in all, I think this is an important discussion to bring up, especially as the VC v2 work gets underway and explores where it's important to converge the options between a variety of different deployed systems. There are bound to be tough discussions, with some defenses resting on sunk cost fallacies, but for the good of the SSI community I think these things need to be discussed openly. It's only when we start having the truly hard discussions about deciding what optional features and branches of code and specs can be cut that we can start to converge on an interoperable solution that's useful globally. Thank you Kaliya for resurfacing this discussion in a more public place!

Wednesday, 07. September 2022

Identity Woman

Being “Real” about Hyperledger Indy & Aries / Anoncreds

Executive Summary This article surfaces a synthesis of challenges / concerns about Hyperledger Indy & Aries / Anoncreds, the most marketed Self-Sovereign Identity technical stack. It is aimed to provide both business and technical decision makers a better understanding of the real technical issues and related business risks of Hyperledger Indy & Aries / Anoncreds, […] The post Being “Rea

Executive Summary This article surfaces a synthesis of challenges / concerns about Hyperledger Indy & Aries / Anoncreds, the most marketed Self-Sovereign Identity technical stack. It is aimed to provide both business and technical decision makers a better understanding of the real technical issues and related business risks of Hyperledger Indy & Aries / Anoncreds, […]

The post Being “Real” about Hyperledger Indy & Aries / Anoncreds appeared first on Identity Woman.


Timothy Ruff

Web3, Web5 & SSI

Why the SSI community should escape Web3 and follow Jack Dorsey and Block into a Web5 big tent, with a common singular goal: the autonomous control of authentic data and relationships. TL;DR As a ten-year veteran of the SSI space, my initial reaction to Jack Dorsey’s (Block’s) announcement of Web5 — which is purely SSI tech — was allergic. After further thought and discussion with SSI pros, I

Why the SSI community should escape Web3 and follow Jack Dorsey and Block into a Web5 big tent, with a common singular goal: the autonomous control of authentic data and relationships.

TL;DR

As a ten-year veteran of the SSI space, my initial reaction to Jack Dorsey's (Block's) announcement of Web5 — which is purely SSI tech — was allergic. After further thought and discussion with SSI pros, I now see Web5 as an opportunity to improve adoption for all of SSI and Verifiable Credentials, not just for Block.

SSI adoption would benefit by separating from two things: 1. the controversies of Web3 (cryptocurrency, smart contracts, NFTs, Defi, blockchain); 2. the term "self-sovereign identity".

Let 'crypto' have the "Web3" designation. SSI will be bigger than crypto/Web3 anyway and deserves its own 'WebX' bucket. Web5 can be that bucket, bringing all SSI stacks — Ion, Aries, KERI, etc. — into one big tent.

Web5 should be about "autonomous control of authentic data and relationships" and it should welcome any and all technical approaches that aim for that goal. I think a strong, inclusive and unifying designation is "Web5 technologies".

I love the principle of self-sovereignty and will continue to use the term SSI in appropriate conversations, but will begin to use Web5 by default. I invite others to do the same.

Web5 Resistance

Jack Dorsey, of Twitter and Block (formerly Square) fame, has recently introduced to the world what he calls Web5, which he predicts will be “his most important contribution to the internet”. Web5 appears to be purely about SSI, and Dorsey’s favored approach to it. He leaves Web3 focused on crypto — cryptocurrencies, NFTs, Defi, etc. — and nothing more. Web5 separates SSI and verifiable credentials into their own, new bucket, along with the personal datastores and decentralized apps individuals will need to use them.

Sounds okay, but when I first heard about Web5 I had a rather allergic reaction…

Where’s Web4?

Isn’t SSI already part of Web3? What’s wrong with Web3, that SSI isn’t/shouldn’t be part of it?

The initial Web5 material is too centered around Block and their favored technical approach…

Web5 just sounds like a rebranding/marketing ploy for the BlueSky project Jack launched at Twitter…

And so on. I’ve since learned the thinking behind skipping Web4: Web2 + Web3 = Web5 (duh), but that question was the least of my concerns.

As I began to write this piece in a rather critical fashion, my desire to have a ‘Scout Mindset’ kicked in and I started to think about Web5 in a new light. I floated my new perspectives by several people I respect greatly, including Sam Smith (DTV); Stephan Wolf (CEO of GLEIF); Daniel Hardman and Randy Warshaw (Provenant); Nick Ris, Jamie Smith, Richard Esplin and Drummond Reed (Avast); James Monaghan; Dr. Phil Windley; Doc and Joyce Searls; Nicky Hickman; Karyl Fowler and Orie Steele (Transmute); Riley Hughes (Trinsic); Andre Kudra (esatus); Dan Gisolfi (Discover); Fraser Edwards (cheqd); the verifiable credentials team at Salesforce; and dozens of fine folks at the Internet Identity Workshop. Everyone seemed to nod heads in agreement with my key points — especially about the need for separation from the controversies of Web3 — without poking any meaningful holes. I also confirmed a few things with Daniel Buchner, the SSI pro Jack nabbed from Microsoft who leads the Block team that conceived the Web5 moniker, just to be sure I wasn’t missing anything significant.

The result is this post, and though it’s not what I originally intended — a takedown of Web5 — it presents something far more important: an opportunity for all SSI communities and technologies to remove major impediments to adoption and to unify around a clear, singular goal: the autonomous control of authentic data and relationships.

Controversies Inhibiting SSI Adoption

While I disagree with some of the specifics of Block’s announced Web5 technical approach to SSI, I really liked how they’d made a clean separation from two different controversies that I think have bogged down SSI adoption for years…

Controversy #1: “Crypto”

By “crypto” I mean all the new tech in the cryptocurrency (not cryptography) space: cryptocurrencies, smart contracts, NFTs, Defi, and… blockchain.

To be sure, what Satoshi Nakamoto ushered in with his/her/their 2008 bitcoin white paper has changed the world forever. The tech is extraordinary and the concepts are liberating and intoxicating, there's no doubt about that. But there is doubt about how far crypto could or should go in its current form, and about what threats it represents to security, both monetary and cyber, and the overall economic order of things.

I’m not arguing those points one way or the other, but I am asserting that cryptocurrency remains highly controversial and NFTs and ‘Defi’ are comparably so. Even the underlying blockchain technology, once the darling of forward-thinking enterprises and governments the world over, has quietly fallen out of favor with many of those same institutions. IBM, which once declared blockchain one of their three strategic priorities, has apparently cut back 90% on it, not seeing the promised benefits materialize.

The term Web3 itself is becoming increasingly toxic, as even the 'inventor of the world-wide-web' prefers to distance himself from it.

Again, I’m not jumping on the anti-crypto bandwagon or speculating about why these awesome technologies are now so controversial, I’m simply making the point that they are now controversial, which harms the adoption of associated technologies.

Controversy #2: “Self-Sovereign”

When properly defined, SSI shouldn’t be controversial to anyone: it’s the ability for individuals to create direct digital relationships with other people, organizations, and things and to carry and control digital artifacts (often about themselves) and disclose anything about those artifacts if, when, and however they please. The “sovereignty” means the ability to control and/or carry those artifacts plus the liberty to determine disclosure; it does not mean an ability to challenge the sovereignty of authority.

But many ears in authority never hear that clarification, and to those ears the words “self-sovereign identity” sound like a challenge to their authority, causing them to stop listening and become unwilling to learn further. In the EU, for example, critics use the term SSI literally in their attempts to scare those in authority from considering it seriously. Their critique is logical; the impetus behind Web3 has been decentralization, self-determination, and a lessening of governmental power and control. The raw fact that SSI technology doesn’t accomplish those ends — despite both its name and its association with Web3 implying that it attempts precisely that — becomes lost in the noise.

Large enterprises who’ve aggressively delved into SSI and VC technologies, such as IBM and Microsoft, have avoided the term altogether, preferring “decentralized identity”. Why? Because they perceive “self-sovereign identity” as benefitting the individual and not the enterprise, whereas “decentralized identity” leaves room for both.

Regardless of the specifics behind any controversy, if it’s controversial, it’s a problem. If broad adoption by the very places we’d want to accept our verifiable credentials — government and enterprise — is inhibited by a term they find distasteful, it’s time to look for another term.

It’s Not Just About Identity

Another issue I see with “SSI”, though more confusing than controversial, is the laser focus on identity. What does “identity” even mean? The word is harder to define than many realize. To some, identity is your driver’s license or passport, to others it’s your username and password or your certificates, achievements and other entitlements. Dr. Phil Windley, the co-founder of IIW, persuasively argues that identity includes all the relationships that it’s used with, because without relationships you don’t need identity.

Who am I to disagree with the author of Digital Identity? He’s probably right, which kinda proves my first point: the definition of identity is an amorphous moving target.

My second and larger point is this: many use cases I now deal with have an element of identity but are more about other data that may be adjacent to it. Using SSI technologies, all data of import — identity and otherwise — can be digitally signed and provably authentic, both in transit and at rest, opening a broad swath of potential use cases that organizations would pay handsomely to solve.

One example: a digitally signed attestation that certain work has been performed. It could include full details about the work and every sub-task with separate sub-signatures from all involved parties, resulting in a digital, machine readable, auditable record that can be securely shared with other, outside parties. Even when shared via insecure means (e.g. the Internet), all parties can verify the provenance and integrity of the data.

Other examples: invoices that are verifiably authentic, saving billions in fraud each year; digitally signed tickets, vouchers, coupons; proof-of-purchase receipts; etc. The list is practically unending. My bottom line: SSI tech is about all authentic data and relationships, not just identity.

(If you’re still thinking “SSI” technologies are only for individuals, think again… the technologies that underlie SSI enable authentic data and relationships everywhere, solving previously intractable problems and providing arguably more benefits for organizations than for individuals…)

Let ‘Crypto’ Have “Web3”

If you ask most people who’ve heard of Web3 what it is, they’ll likely mention something about cryptocurrency. A more informed person might mention smart contracts, NFTs, Defi and blockchain. A few might even mention guiding principles like “decentralization” or “individual control of digital assets”. Almost no one, outside of the SSI space, would mention SSI, decentralized identity, or verifiable credentials as part of Web3. Andreessen Horowitz, the largest Web3 investor with $7.6 billion invested so far, recently published their 2022 outlook on Web3 without mentioning SSI or “identity” even once.

The bald truth: at present the SSI community is on the outside looking in on Web3, saying the equivalent of “hey, me too!” while Web3 crypto stalwarts sometimes respond with, “yes, you too, we do need identity.” But the situation is clear: SSI and VCs are second- or even third-class citizens of Web3, only mentioned as an afterthought upon the eventual realization of how critical accurate, secure attribution (sloppily, “identity”) really is.

Web5 says — and I now agree — let ’em have it. Let the crypto crowd own the Web3 moniker lock, stock, and barrel, and let’s instead use an entirely separate ‘WebX’ designation for all SSI technologies, which are more impactful anyway.

SSI is Bigger Than Crypto

If crypto is big enough to be worthy of its own WebX designation, SSI technologies (VCs, DIDs, KERI, etc.) are even more so; my crystal ball says that SSI will be bigger than crypto will be. It’s not a competition, but the comparison is relevant when considering whether SSI should be a part of crypto or separate from it.

Having SSI as a bolt-on to Web3 — or Web2 for that matter — severely under-appreciates SSI’s eventual impact on the world. One indicator of that eventual impact is that AML (Anti-Money Laundering) compliance will continue to be required in all significant financial transactions anywhere on the planet, crypto or otherwise; every industrialized nation in the world agrees on this. The only technologies I’m aware of that can elegantly balance the minimum required regulatory oversight with the maximum possible privacy are SSI technologies. Web3 simply cannot achieve its ultimate goals of ubiquity without SSI tech embedded pretty much everywhere.

Another, even larger reason why SSI will be more impactful than crypto: SSI technologies will pervade most if not all digital interactions in the future, not just those where money/value is transferred. In sum: Web3 is about the decentralized transfer of value; SSI/Web5 is about verifiable authenticity of all data and all digital interactions.

Enter “Web5 Technologies”

SSI is about autonomous control of authentic data and relationships; this is the endgame that Web5 should be about, regardless of which architecture is used to get there.

Block’s preferred SSI/Web5 architecture relies on Ion/Sidetree, which depends on the Bitcoin blockchain. Fine with me, as long as it results in the autonomous control of authentic data and relationships. My preferred approach does not rely on shared ledgers; it utilizes the IETF KERI protocol instead. As long as the result is self-sovereignty, the autonomous control of authentic data and relationships, Block should be all for it.

I’ve spoken with Daniel Buchner about this, twice, just to be sure; they are.

But Ion and KERI aren’t the only games in town; there are also Hyperledger Aries-based stacks that use W3C Verifiable Credentials and DIDs but eschew Decentralized Web Nodes, and use blockchains other than Bitcoin/Ion. I understand that other approaches are also emerging, each with their own tech and terminology. To each I say: if your aim is also the autonomous control of authentic data and relationships, Welcome! The point here is that every Web5 community would benefit from separation from Web3 and SSI controversies, and from closer association and collaboration with sibling communities with which they’ll eventually need to interoperate.

Can’t We All Just Get Along?

Though I’m not directly involved in the SSI standards communities, I’ve heard from several who are that tensions over technical differences are rather high right now. I’ve found in my life that when disagreements get heated, it helps to take a step back and rediscover common ground.

One big, overarching thing that unites the various approaches to SSI is the sincere desire for autonomous control of authentic data and relationships. Indeed, the different approaches to SSI are sibling technologies in that they aim for the same endgame, but with the added pressure that the endgame cannot be reached without ultimately achieving interoperability between competing actors.

Not only should we get along, we must get along to reach our common goal.

Sometimes families need reunions, to reconnect over shared goals and experiences. Perhaps reconvening under the guise of “Web5 Technologies” can help scratch that family itch, and reduce the temperature of conversation among friends a few degrees.

S̶S̶I̶ Web5

As a word, I see Web5 as neutral and with little inherent meaning — just something newer and more advanced than Web3. I’m aware that Web5 was originally conceived as a meme, a troll-ish response from Jack to the Web3 community to convey his disappointment in what he asserts is a takeover of Web3 by venture capitalists. Regardless of those light-hearted origins, Jack and company ran with Web5 in all seriousness, throwing their considerable weight and resources behind its launch and the development of their preferred architecture.

As an evolution, however, Web5 could represent something far more powerful than a catchy new label, it could help organize, distill, propel, and realize the ultimate aspirations of the SSI community. My friend Kalin Nicolov defines Web5 forcefully, especially how he sees that it differs from Web1/2/3:

“Web5 will be the first true evolution of the internet. Web3, like those before it, was seeking to build platforms on top of the internet — centralized walled gardens, owned and controlled by the few. Web5 is harking back to the true spirit of TimBL’s vision of the web. Having learned the hard way, the Web5 /digital trust/ community is trying to create protocols that are complementary and symbiotic, interoperable and composable.
While Web1/2/3 were web-as-a-platform, Web5 is the first to be web-as-interoperable-set-of-protocols, i.e. serving agency to the edge as opposed to the ridiculous concentration of Web2 (hello AWS) and the aspiring oligopoly of Web3 (hello CZ, hello Coinbase and third a16z fund)”

To me, switching my default terminology to “Web5” is simply pragmatic; it creates useful separation from Web3/crypto technologies and controversies and the problematic perception of the phrase “self-sovereign identity”. Any term that doesn’t start with “Web” leaves the impression that SSI is still part of Web3; the only way to make a clean break is with a new WebX designation, so it might as well be Web5. A subtle switch to using Web5 won’t be some whiz-bang exciting change like an announcement from Jack or some new Web3 shiny thing, but greater industry clarity and collaboration could still be useful toward two critically important ends: adoption and interoperability.

So while I’ve used and loved the term SSI for the better part of a decade now, and will continue to use it in conversation, I’ll now begin to use the term “Web5 Technologies” instead more often than not. I’ll also use both terms in tandem — “SSI/Web5” — until it catches on.

A change in terminology can only happen through use, so I invite you to join me. Thanks to those who’ve already begun.

Monday, 05. September 2022

Damien Bod

Implement a GRPC API with OpenIddict and the OAuth client credentials flow

This post shows how to implement a GRPC service hosted in an ASP.NET Core Kestrel service. The GRPC service is protected using an access token. The client application uses the OAuth2 client credentials flow with introspection, and the reference token is used to get access to the GRPC service. The GRPC API uses introspection […]

This post shows how to implement a GRPC service hosted in an ASP.NET Core Kestrel service. The GRPC service is protected using an access token. The client application uses the OAuth2 client credentials flow with introspection, and the reference token is used to get access to the GRPC service. The GRPC API uses introspection to validate and authorize the access. OpenIddict is used to implement the identity provider.

Code: https://github.com/damienbod/AspNetCoreOpeniddict

The applications are set up with an identity provider implemented using OpenIddict, a GRPC service implemented using ASP.NET Core, and a simple console application to request the token and use the API.

Setup GRPC API

The GRPC API service needs to add services and middleware to support introspection and to authorize the reference token. I use the AddOpenIddict method from the OpenIddict client Nuget package, but any client package which supports introspection could be used. If you decide to use a self-contained JWT bearer token, the standard JWT bearer token middleware could be used instead; this only works if the tokens are not encrypted and are self-contained JWTs. The audience (aud) is defined as well as the required claims. A secret is required to use introspection.

GRPC is added and Kestrel is set up to support HTTP/2. For local debugging, UseHttps is added. You should always develop with HTTPS and never HTTP, as the dev environment should be as close as possible to the target system, and you should not deploy insecure HTTP services even when these are hidden behind a WAF.

using GrpcApi;
using OpenIddict.Validation.AspNetCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = OpenIddictValidationAspNetCoreDefaults.AuthenticationScheme;
});

builder.Services.AddOpenIddict()
    .AddValidation(options =>
    {
        // Note: the validation handler uses OpenID Connect discovery
        // to retrieve the address of the introspection endpoint.
        options.SetIssuer("https://localhost:44395/");
        options.AddAudiences("rs_dataEventRecordsApi");

        // Configure the validation handler to use introspection and register the client
        // credentials used when communicating with the remote introspection endpoint.
        options.UseIntrospection()
            .SetClientId("rs_dataEventRecordsApi")
            .SetClientSecret("dataEventRecordsSecret");

        // disable access token encryption for this
        options.UseAspNetCore();

        // Register the System.Net.Http integration.
        options.UseSystemNetHttp();

        // Register the ASP.NET Core host.
        options.UseAspNetCore();
    });

builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("dataEventRecordsPolicy", policyUser =>
    {
        policyUser.RequireClaim("scope", "dataEventRecords");
    });
});

builder.Services.AddGrpc();

// Configure Kestrel to listen on a specific HTTP port
builder.WebHost.ConfigureKestrel(options =>
{
    options.ListenAnyIP(8080);
    options.ListenAnyIP(7179, listenOptions =>
    {
        listenOptions.UseHttps();
        listenOptions.Protocols = Microsoft.AspNetCore.Server.Kestrel.Core.HttpProtocols.Http2;
    });
});

The middleware is added as in any secure API; GRPC endpoints are mapped instead of controllers or pages.

var app = builder.Build();

app.UseHttpsRedirection();
app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<GreeterService>();

    endpoints.MapGet("/", async context =>
    {
        await context.Response.WriteAsync("GRPC service running...");
    });
});

app.Run();

The GRPC service is secured using the authorize attribute with a policy checking the scope claim.

using Grpc.Core;
using Microsoft.AspNetCore.Authorization;

namespace GrpcApi;

[Authorize("dataEventRecordsPolicy")]
public class GreeterService : Greeter.GreeterBase
{
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply
        {
            Message = "Hello " + request.Name
        });
    }
}

A proto3 file is used to define the API. This is just the simple example from the Microsoft ASP.NET Core GRPC documentation.

syntax = "proto3";

option csharp_namespace = "GrpcApi";

package greet;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}

Setup OpenIddict client credentials flow with introspection

We use OpenIddict to implement the client credentials flow with introspection. The client uses the grant type ClientCredentials and a secret to acquire the reference token.

// API application CC
if (await manager.FindByClientIdAsync("CC") == null)
{
    await manager.CreateAsync(new OpenIddictApplicationDescriptor
    {
        ClientId = "CC",
        ClientSecret = "cc_secret",
        DisplayName = "CC for protected API",
        Permissions =
        {
            Permissions.Endpoints.Authorization,
            Permissions.Endpoints.Token,
            Permissions.GrantTypes.ClientCredentials,
            Permissions.Prefixes.Scope + "dataEventRecords"
        }
    });
}

static async Task RegisterScopesAsync(IServiceProvider provider)
{
    var manager = provider.GetRequiredService<IOpenIddictScopeManager>();

    if (await manager.FindByNameAsync("dataEventRecords") is null)
    {
        await manager.CreateAsync(new OpenIddictScopeDescriptor
        {
            DisplayName = "dataEventRecords API access",
            DisplayNames =
            {
                [CultureInfo.GetCultureInfo("fr-FR")] = "Accès à l'API de démo"
            },
            Name = "dataEventRecords",
            Resources =
            {
                "rs_dataEventRecordsApi"
            }
        });
    }
}

The AddOpenIddict method is used to define the supported features of the OpenID Connect server. Per default, encryption is used as well as introspection. The AllowClientCredentialsFlow method is used to add support for the OAuth client credentials flow.

services.AddOpenIddict()
    .AddCore(options =>
    {
        options.UseEntityFrameworkCore()
            .UseDbContext<ApplicationDbContext>();
        options.UseQuartz();
    })
    .AddServer(options =>
    {
        options.SetAuthorizationEndpointUris("/connect/authorize")
            .SetLogoutEndpointUris("/connect/logout")
            .SetIntrospectionEndpointUris("/connect/introspect")
            .SetTokenEndpointUris("/connect/token")
            .SetUserinfoEndpointUris("/connect/userinfo")
            .SetVerificationEndpointUris("/connect/verify");

        options.AllowAuthorizationCodeFlow()
            .AllowHybridFlow()
            .AllowClientCredentialsFlow()
            .AllowRefreshTokenFlow();

        options.RegisterScopes(Scopes.Email, Scopes.Profile, Scopes.Roles, "dataEventRecords");

        // Register the signing and encryption credentials.
        options.AddDevelopmentEncryptionCertificate()
            .AddDevelopmentSigningCertificate();

        options.UseAspNetCore()
            .EnableAuthorizationEndpointPassthrough()
            .EnableLogoutEndpointPassthrough()
            .EnableTokenEndpointPassthrough()
            .EnableUserinfoEndpointPassthrough()
            .EnableStatusCodePagesIntegration();
    })

You also need to update the Account controller exchange method to support the OAuth2 client credentials (CC) flow. See the OpenIddict samples for reference.
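
For orientation, here is a minimal sketch of what that client credentials branch can look like, loosely based on the public OpenIddict samples rather than on the exact code in the repository above; the subject claim, attribute list and resource name are illustrative.

// Minimal sketch of the client credentials branch of the Exchange action,
// loosely based on the OpenIddict samples; the sample application may differ in detail.
// Assumes: using System.Security.Claims; using OpenIddict.Abstractions;
//          using OpenIddict.Server.AspNetCore;
//          using static OpenIddict.Abstractions.OpenIddictConstants;
[HttpPost("~/connect/token"), IgnoreAntiforgeryToken, Produces("application/json")]
public async Task<IActionResult> Exchange()
{
    var request = HttpContext.GetOpenIddictServerRequest() ??
        throw new InvalidOperationException("The OpenID Connect request cannot be retrieved.");

    if (request.IsClientCredentialsGrantType())
    {
        // OpenIddict has already validated the client id and secret at this point,
        // so the action only has to build the principal used to create the access token.
        var identity = new ClaimsIdentity(OpenIddictServerAspNetCoreDefaults.AuthenticationScheme);
        identity.AddClaim(new Claim(Claims.Subject, request.ClientId!));

        var principal = new ClaimsPrincipal(identity);
        principal.SetScopes(request.GetScopes());
        principal.SetResources("rs_dataEventRecordsApi");

        return SignIn(principal, OpenIddictServerAspNetCoreDefaults.AuthenticationScheme);
    }

    // Other grant types (authorization code, refresh token, ...) are handled here as well.
    throw new InvalidOperationException("The specified grant type is not supported.");
}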

Implementing the GRPC client

The client gets an access token and uses this to request the data from the GRPC API. The ClientCredentialAccessTokenClient class requests the access token using a secret, client ID and a scope. In a real application, you should cache the access token and only request a new one when it has expired or is about to expire (a minimal caching sketch follows the class below).

using IdentityModel.Client;
using Microsoft.Extensions.Configuration;

namespace GrpcAppClientConsole;

public class ClientCredentialAccessTokenClient
{
    private readonly HttpClient _httpClient;
    private readonly IConfiguration _configuration;

    public ClientCredentialAccessTokenClient(
        IConfiguration configuration,
        HttpClient httpClient)
    {
        _configuration = configuration;
        _httpClient = httpClient;
    }

    public async Task<string> GetAccessToken(
        string api_name,
        string api_scope,
        string secret)
    {
        try
        {
            var disco = await HttpClientDiscoveryExtensions.GetDiscoveryDocumentAsync(
                _httpClient,
                _configuration["OpenIDConnectSettings:Authority"]);

            if (disco.IsError)
            {
                Console.WriteLine($"disco error Status code: {disco.IsError}, Error: {disco.Error}");
                throw new ApplicationException($"Status code: {disco.IsError}, Error: {disco.Error}");
            }

            var tokenResponse = await HttpClientTokenRequestExtensions.RequestClientCredentialsTokenAsync(
                _httpClient,
                new ClientCredentialsTokenRequest
                {
                    Scope = api_scope,
                    ClientSecret = secret,
                    Address = disco.TokenEndpoint,
                    ClientId = api_name
                });

            if (tokenResponse.IsError)
            {
                Console.WriteLine($"tokenResponse.IsError Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
                throw new ApplicationException($"Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
            }

            return tokenResponse.AccessToken;
        }
        catch (Exception e)
        {
            Console.WriteLine($"Exception {e}");
            throw new ApplicationException($"Exception {e}");
        }
    }
}
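
As a side note on the caching advice above, a small wrapper along the following lines could be used. The class and the fixed lifetime are illustrative assumptions, not part of the original sample; a real implementation would read the expires_in value from the token response.

// Illustrative token cache around the ClientCredentialAccessTokenClient above.
// The fixed 55 minute lifetime is an assumption; a real implementation would use
// the expires_in value returned with the token response.
public class CachedAccessTokenProvider
{
    private readonly ClientCredentialAccessTokenClient _client;
    private string? _accessToken;
    private DateTimeOffset _expiresAt = DateTimeOffset.MinValue;

    public CachedAccessTokenProvider(ClientCredentialAccessTokenClient client)
    {
        _client = client;
    }

    public async Task<string> GetAccessTokenAsync(string apiName, string apiScope, string secret)
    {
        // Re-use the cached token while it is still valid, with a one minute safety margin.
        if (_accessToken != null && DateTimeOffset.UtcNow < _expiresAt.AddMinutes(-1))
        {
            return _accessToken;
        }

        _accessToken = await _client.GetAccessToken(apiName, apiScope, secret);
        _expiresAt = DateTimeOffset.UtcNow.AddMinutes(55);
        return _accessToken;
    }
}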

The console application uses the access token to request the GRPC API data using the proto3 definition.

using Grpc.Net.Client;
using GrpcApi;
using Microsoft.Extensions.Configuration;
using Grpc.Core;
using GrpcAppClientConsole;

var builder = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json");

var configuration = builder.Build();

var clientCredentialAccessTokenClient = new ClientCredentialAccessTokenClient(configuration, new HttpClient());

// 2. Get access token
var accessToken = await clientCredentialAccessTokenClient.GetAccessToken(
    "CC",
    "dataEventRecords",
    "cc_secret"
);

if (accessToken == null)
{
    Console.WriteLine("no auth result... ");
}
else
{
    Console.WriteLine(accessToken);

    var tokenValue = "Bearer " + accessToken;
    var metadata = new Metadata
    {
        { "Authorization", tokenValue }
    };

    var handler = new HttpClientHandler();

    var channel = GrpcChannel.ForAddress(
        configuration["ProtectedApiUrl"],
        new GrpcChannelOptions
        {
            HttpClient = new HttpClient(handler)
        });

    CallOptions callOptions = new(metadata);
    var client = new Greeter.GreeterClient(channel);

    var reply = await client.SayHelloAsync(
        new HelloRequest { Name = "GreeterClient" }, callOptions);

    Console.WriteLine("Greeting: " + reply.Message);
    Console.WriteLine("Press any key to exit...");
    Console.ReadKey();
}
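
The console client reads its settings from appsettings.json, which is not shown in the post. A minimal sketch of that file, with values inferred from the issuer and the Kestrel HTTPS port used above, could look like this:

{
  "OpenIDConnectSettings": {
    "Authority": "https://localhost:44395"
  },
  "ProtectedApiUrl": "https://localhost:7179"
}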

GRPC in ASP.NET Core works really well with any OAuth2 or OpenID Connect server. This is my preferred way to secure GRPC services, and I only use certificate authentication if it is required, due to the extra effort of setting up the hosted environments and deploying the client and server certificates.

Links

https://github.com/grpc/grpc-dotnet/

https://docs.microsoft.com/en-us/aspnet/core/grpc

https://documentation.openiddict.com/

https://github.com/openiddict/openiddict-samples

https://github.com/openiddict/openiddict-core

Tuesday, 30. August 2022

Bill Wendels Real Estate Cafe

Use massive class action lawsuits to mobilize Consumer Movement in Real Estate!

Any doubt that real estate is the Sleeping Giant of the Consumer Movement? For decades, #RECALL – Real Estate Consumer Alliance – has estimated homebuyers & sellers could save… The post Use massive class action lawsuits to mobilize Consumer Movement in Real Estate! first appeared on Real Estate Cafe.

Any doubt that real estate is the Sleeping Giant of the Consumer Movement? For decades, #RECALL – Real Estate Consumer Alliance – has estimated homebuyers & sellers could save…

The post Use massive class action lawsuits to mobilize Consumer Movement in Real Estate! first appeared on Real Estate Cafe.

reb00ted

The 5 people empowerment promises of web3

Over at Kaleido Insights, Jessica Groopman, Jaimy Szymanski, and Jeremiah Owyang (the former Forrester “Open Social” analyst) describe Web3 Use Cases: Five Capabilities Enabling People. I don’t think this post has gotten the attention it deserves. At the least, it’s a good starting framework to understand why so many people are attracted to the otherwise still quite underdefined web3

Over at Kaleido Insights, Jessica Groopman, Jaimy Szymanski, and Jeremiah Owyang (the former Forrester “Open Social” analyst) describe Web3 Use Cases: Five Capabilities Enabling People.

I don’t think this post has gotten the attention it deserves. At the least, it’s a good starting framework to understand why so many people are attracted to the otherwise still quite underdefined web3 idea. Hint: it’s not just getting rich quick.

I want to riff on this list a bit, by interpreting some of the categories just a tad differently, but mostly by comparing and contrasting to the state of the art (“web2”) in consumer technology.

Governance
State of the art ("web2"): How much say do you, the user, have in what the tech products do that you use? What about none! The developing companies do what they please, and very often the opposite of what their users want.
The promise ("web3"): Users are co-owners of the product, and have a vote through mechanisms such as DAOs.

Identity
State of the art ("web2"): You generally need at least an e-mail address hosted by some big platform to sign up for anything. Should the platform decide to close your account, even mistakenly, your identity effectively vanishes.
The promise ("web3"): Users are self-asserting their identity in a self-sovereign manner. We used to call this "user-centric identity", with protocols such as my LID or OpenID before they were eviscerated or co-opted by the big platforms. Glad to see the idea is making a come-back.

Content ownership
State of the art ("web2"): Practically, you own very little to none of the content you put on-line. While theoretically you keep copyright of your social media posts, for example, today it is practically impossible to quit social media accounts without losing at least some of your content. Similarly, you are severely limited in your options for privacy, meaning where your data goes and does not go.
The promise ("web3"): You, and only you, decide where and how to use your content and all other data. It is not locked into somebody else's system.

Ability to build
State of the art ("web2"): Ever tried to add a feature to Facebook? It's almost a ridiculous proposition. Of course they won't let you. Other companies are no better.
The promise ("web3"): Everything is open, and composable, so everybody can build on each other's work.

Exchange of value
State of the art ("web2"): Today's mass consumer internet is largely financed through Surveillance Capitalism, in the form of targeted advertising, which has led to countless ills. Other models generally require subscriptions and credit cards and only work in special circumstances.
The promise ("web3"): Exchange of value as fungible and non-fungible tokens is a core feature and available to anybody and any app. An entirely new set of business models, in addition to established ones, have suddenly become possible or even easy.

As Jeremiah pointed out when we bumped into each other last night, public discussion of “web3” is almost completely focused on this last item: tokens, and the many ill-begotten schemes that they have enabled.

But that is not web3’s lasting attraction. The other four promises – participation in governance, self-sovereign identity, content ownership and the freedom to build – are very appealing. In fact, it is hard to see how anybody (other than an incumbent with a turf to defend) could possibly argue against any of them.

If you don’t like the token part? Just don’t use it. 4 out of the 5 web3 empowerment promises for people, ain’t bad. And worth supporting.

Monday, 29. August 2022

Altmode

Early Fenton Ancestry

I have been researching my ancestry over the past 20 years or so. I have previously written about my Fenton ancestors who had a farm in Broadalbin, New York. I am a descendant of Robert Fenton, one of the early Connecticut Fentons. Since one of his other descendants was Reuben Eaton Fenton, a US Senator […]

I have been researching my ancestry over the past 20 years or so. I have previously written about my Fenton ancestors who had a farm in Broadalbin, New York.

I am a descendant of Robert Fenton, one of the early Connecticut Fentons. Since one of his other descendants was Reuben Eaton Fenton, a US Senator and New York governor, this branch of the family has been well researched and documented, in particular with the publication in 1867 of A genealogy of the Fenton family : descendants of Robert Fenton, an early settler of ancient Windham, Conn. (now Mansfield) by William L. Weaver1. Here’s what Weaver has to say about Robert Fenton’s origin:

Robert Fenton, who is first heard of at Woburn, Mass., in 1688, was the common ancestor of the Connecticut Fentons. We can learn nothing in regard to his parentage, birthplace, or nationality. The records of Woburn shed no light on the subject; and we can find no trace of him elsewhere, previous to his appearance in that town.

The genealogy goes on to relate an old tradition that Robert Fenton had come from Wales, but I have been unable to find any basis for that tradition.

Somewhere in my research, I came upon a reference to a Robert Fenton in The Complete Book of Emigrants, 1661-1699 by Peter Wilson Coldham2. It said that on 17 July 1682, a number of Midland Circuit prisoners had been reprieved to be transported to America, including Robert Fenton of Birmingham, and included a reference number for the source material. The timing was about right, but there was still the question whether this is the same Robert Fenton that appeared in Woburn six years later.

In 2018 I was going to London for a meeting, and wondered if I could find out any more about this. I found out that the referenced document was available through The National Archives, and I could place a request to view the document. But first I needed to complete online training for a “reader’s ticket”. This was a short video class on the handling of archival documents and other rules, some unexpected (no erasers are allowed in the reading room). I completed the training and arranged for the document to be available when I was in London.

The National Archives is located on an attractive campus in Kew, just west of London. My wife and I went to the reading room and were photographed for our reader’s tickets (actually plastic cards) and admitted to the room. We checked out the document and took it to a reading desk. On opening the box, we had quite a surprise: the document was a scroll!

We opened one of the scrolls carefully (using the skills taught in the online course) and started to examine it. The writing was foreign to us, but scanning through it we quickly found what appeared to be “Ffenton”. This seemed to be the record we were looking for. We photographed the sections of the scroll that contained several mentions of “Ffenton” and examined some of the rest before carefully rerolling and returning it. What an experience it was to actually touch 335-year-old records relating to an ancestor!

[Images: Midland Circuit pardon, parts 1–4]

When we returned home, we intended to figure out what the document actually said, but it became clear (over the next few years!) that this required an expert. I contacted a professor at Stanford that specializes in paleography to see if he could offer help or a referral, and he gave me a general idea of the document and that it was written in Latin (also that the Ff was just the old way of writing F). Eventually I was referred to Peter Foden, an archival researcher located in Wales, one of whose specialties is transcribing and translating handwritten historical documents (Latin to English). I sent copies of the document pictures to him, and was able to engage his services to transcribe and translate the document.

Peter’s translation of the document is as follows:

The King gives greeting to all to whom these our present letters shall come.
Know that we motivated purely by our pity, of our especial grace and knowledge of the matter by the certification and information of our beloved and faithful Thomas Raymond, knight, one of our Justices assigned for Pleas to be held before us, and Thomas Streete, knight, one of the Lords Justices of our Exchequer, assigned for Gaol Delivery of our Gaols in Lincolnshire, Nottinghamshire, Derbyshire, the City of Coventry, Warwickshire and Northamptonshire, for prisoners being in the same, have pardoned, forgiven and released and by these presents for ourselves, our heirs and successors, do pardon, forgive and release, Robert Sell late of Derby in the County of Derbyshire, labourer, by whatsoever other names or surnames or additional names or nicknames of places, arts or mysteries, the same Robert Sell may be listed, called, or known, or was lately listed, called or known, or used to be listed, called or known, the felony of murder, the felony of killing and slaying of a certain Dorothy Middleton however done, committed or perpetrated, for which the same Robert Sell stands indicted, attainted or judged. And furthermore, out of our more abundant special grace, we have pardoned, forgiven, and released and by these presents for ourselves our heirs and successors we do pardon, forgive and release Henry Ward late of the town of Nottingham in the County of Nottinghamshire, labourer, Thomas Letherland, late of the town of Northampton in the County of Northampton, John Pitts of the same place, labourer, Samuel Shaw the younger of the same place, labourer, John Attersley late of Spalding in the county of Lincolnshire, labourer, John Brewster, late of of Grantham in the County of Lincolnshire, labourer, Peter Waterfall late of Derby in the County of Derbyshire, labourer, John Waterfall of the same place, labourer, John White late of the City of Coventry in the County of the same, labourer, Joseph Veares late of Birmingham in the county of Warwickshire, labourer, Edward Cooke of the same place, labourer, Robert Fenton of the same place, labourer, Mary Steers of the same place, spinster, Thomas Smith of the same place, labourer, Humfrey Dormant late of the Borough of Warwick in the county of Warwick, labourer, Edward Higgott late of Derby in the county of Derbyshire, labourer, Eliza Massey of the same place, widow, and Jeremy Rhodes late of Worcester in the County of Worcester, labourer, or by whatsoever other names or surnames or additional names or surnames or names of places, arts or mysteries, the same Henry Ward, Thomas Letherland, John Pitts, Samuel Shaw, John Attersley, John Brewster, Peter Waterfall, John Waterfall, John White, Joseph Veares, Edward Cooke, Robert Fenton, Mary Steares, Thomas Smith, Humfrey Dormant, Edward Higgott, Eliza Massey and Jeremy Rhodes may be listed, called or known, or were lately listed, called or known, or any of them individually or collectively was or were listed, called or known, of every and every kind of Treason and crimes of Lese Majeste of and concerning clipping, washing, forgeries and other falsehoods of the money of this Kingdom of England or of whatsoever other kingdoms and dominions, and also every and every kind of concealments, treasons, and crimes of lese majeste of and concerning the uttering of coinage being clipped, filed and diminished, by whomsoever (singular or plural) the said coinage was clipped, filed and diminished, and also every and every kind of felonies, homicides, burglaries and trespasses whatsoever by them or any of them done, committed 
or perpetrated, whereof the same Robert Sell, Henry Ward, Thomas Letherland, John Pitts, Samuel Shaw, John Attersley, John Brewster, Peter Waterfall, John Waterfall, John White, Joseph Veares, Edward Cooke, Robert Fenton, Mary Steares, Thomas Smith, Humfrey Dormant, Edward Higgott, Eliza Massey and Jeremy Rhodes, are indicted, attainted, or judged, or are not indicted, attainted, or judged, and the accessories of each of them, and the escapes made thereupon, and also all and singular the indictments, judgments, fines, condemnations, executions, bodily penalties, imprisonments, punishments, and all other things about these matters, that we or our heirs or successors in any way had, may have or in the future shall have, also Outlawries if pronounced or to be pronounced against them or any of them by reason of these matters, and all and all kinds of lawsuits, pleas, and petitions and demands whatsoever which belong, now or int the future, to us against them or any of them, by reason or occasion of these matters, or of any of them, and we give and grant unto them and unto each of them by these presents our firm peace, so that nevertheless they and each them should stand (singular and plural) righteously in our Court if any anyone if anyone summons them to court concerning these matters or any of them, if they cannot find good and sufficient security for their good behaviour towards us our heirs and successors and all our people, according to the form of a certain Act of Parliament of the Lord Edward the Third late King of England, our ancestor, edited and provided at Westminster in the tenth year of his reign. And furthermore, of our abundant special grace, and certain knowledge and pure motives, for us our heirs and successors, we will and grant that they shall have letters of pardon and all and singular matters contained in the same shall stand well, firmly, validly, sufficiently and effectually in Law and shall be allowed by all and all kinds of our Officers and Servants and those of our heirs and successors, notwithstanding the Statute in the Parliament of the Lord Richard the Second late King of England held at Westminster in the thirteenth year of his reign, or any other Statute, Act, Order or Provision made to the contrary in any manner, provided always that if the said Henry Ward, Thomas Letherland, John Pitts, Samuel Shaw, John Attersley, John Brewster, Peter Waterfall, John Waterfall, John White, Joseph Veares, Edward Cooke, Robert Fenton, Mary Steares, Thomas Smith, and Humfrey Dormant, do not leave the Kingdom of England to cross the sea towards some part of America now settled by our subjects, within the space of six months next after the date of these presents, or if they remain or return within seven years immediately following the six months after the date of these presents, or any of them shall return within the space of seven years next after the date of these presents, that then this our pardon be and shall be wholly void and of none effect in respect of Henry Ward, Thomas Letherland, John Pitts, Samuel Shaw, John Attersley, Peter Waterfall, John Waterfall, John White, Joseph Veares, Edward Cooke, Robert Fenton, Mary Steeres and Humfrey Dormant and each of them, notwithstanding anything in these presents to the contrary thereof. 
We wish however that this our pardon be in all respects firm, valid and sufficient for the same Henry Ward, Thomas Letherland, John Pitts, Samuel Shaw, John Attersley, Peter Waterfall, John Waterfall, John White, Joseph Veares, Edward Cooke, Robert Fenton, Mary Steeres and Humfrey Dormant and each of them, if they shall perform and fulfil or any of them shall perform and fulfil the said conditions. And we furthermore also wish that after the issue of this our pardon, the said Henry Ward and all other persons named in the previous condition here mentioned to be pardoned under the same condition shall remain in the custody of our Sheriffs in our said Gaols where they are now detained until they and each of them be transported (singular or plural) to the aforementioned places beyond the seas, according to the said Condition. In witness of which, the King is witness, at Westminster on the fourteenth day of July by the King himself.

So it appears that Robert Fenton was convicted of treason “concerning clipping, washing, forgeries and other falsehoods of the money of this Kingdom of England.” He was pardoned on condition that he leave for America within six months and stay no less than seven years. While I have found no record of Robert having practiced forgery in America, his son Francis was nicknamed “Moneymaker” because of his well-known forgery escapades3. One of those events caused the Fenton River in Connecticut to be named after him. Francis might have learned “the family business” from his father, so this strengthens the likelihood that this Robert Fenton from Birmingham is my ancestor.

Appendix

For reference, here is the Latin transcription, referenced to the parts of the document pictured above. The line breaks match the original:

(part 1)

Rex &c Omnibus ad quos presentes littere nostre pervenerint Salutem Sciatis quod
nos pietate moti de gratia nostra speciali ac exita scientat & mero motu
nostris ex cirtificatione & relatione dilectorum & fidelium nostrorum Thome Raymond
militis unius Justiciorum nostrorum ad placita coram nobis tenenda Assignatorum & Thome
Streete militis unius Baronum Scaccarii nostri Justiciorum nostrorum Gaolas nostras
lincolniensis Nott Derb Civitatis Coventrie Warr & Northton de prisonibus
in eadem existentibus deliberandum assignatorum Pardonavimus remissimus et
relaxavimus ac per presentes pro nobis heredibus & Successoribus nostris
pardonamus remittimus & relaxamus Roberto Sell nuper de Derb in Comitatu
Derb laborarium seu quibuscumque aliis nominibus vel cognominibus seu additionibus
nominium vel cognominium locorum artium sive misteriorum idem Robertus
Sell cenceatur vocetur sive nuncupetur aut nuper cencebatur
vocabatur sive nuncupabatur feloniam mortemnecem feloniam
interfectionem & occisionem cuiusdam Dorothee Middleton qualitercumque
factam comissam sive perpetratam unde idem Robertus Sell indictatus attinctus
sive adiudicatus existit Et ulterius de uberiori gratia nostra speciali
pardonavimus remissimus & relaxavimus ac per presentest pro nobis heredibus
& Successoribus nostris pardonamus remittimus & relaxamus Henrico Ward
nper de Villa Nott in Comitatu Nott laborario Thome Letherland nuper
de villa Northton in Comitatu Northton laborario Johanni Pitts de eadem
laborario Samueli Shaw junioris de eadem laborario Johanni Attersley nuper de
Spalding in Comitatu Lincoln laborario Johanni Brewster nuper de Grantham in
Comitatu Lincoln laborario Petro Waterfall nuper de Derb in Comitatu Derb laborario
Johanni Waterfall de eadem laborario Johanni White nuper de Civitate Coventr’
in Comitatu eiusdem laborario Josepho veares nuper de Birmingham in Comitatu
Warr laborario Edwardo Cooke de eadem laborario Roberto Fenton de eadem laborario
Marie Steers de eadem Spinster Thome Smith de eadem laborario
Humfrido Dormant nuper de Burgo Warr in Comitatu Warr laborario Edwardo
Higgott nuper de Derb in Comitatu Derb laborario Elize Massey de eadem vidue

(part 2)
& Jeremie Rhodes nuper de Wigorn in Comitatu Wigorn laborario seu quibuscumque
aliis nominibus vel cognominibus seu additionibus nominium vel cognominium
locorum artium sive misteriorum iidem Henricus Ward Thomas
Letherland Johannes Pitts Samuel Shaw Johannes Attesley Johannes
Brewster Petrus Waterfall Johannes Waterfall Johannes White Josephus
Veares Edwardus Cooke Robertus Fenton Maria Steares Thomas
Smith Humfrius Dormant Edwardus Higgott Eliza Massey et
Jeremias Rhodes cenceantur vocentur sive nuncupentur aut nuper
cencebantur vocabantur sive nuncupabantur aut eorum aliquis
cenceatur vocetur sive nuncupetur aut nuper cencebatur vocabatur
sive nuncupabatur omnes & omnimodos proditiones & crimina lese maiestatis
de & concernentes tonsura lotura falsis Fabricationibus & aliis falsitatibus monete
huius Regni Anglie aut aliorum Regnorum & Dominiorum quorumcumque necnon
omnes & omnimodos misprisones proditiones & criminis lese maiestatis de et
concernentes utteratione pecunie existentes tonsurates filate & diminute oien per
quos vel per quem pecuniam predictam tonsuram filatam & diminutam fuit acetiam
omnes & omnimoda felonias homicidas Burglarias & transgressas quascumque
per ipsos vel eorum aliquem qualitercumque factas commissas sive perpetratas
unde iidem Robertus Sell Henricus Ward Thomas Letherland
Johannes Pitt Samuel Shaw Johannes Attersley Johannes Brewster Petrus
Waterfall Johannes Waterfall Johannes White Josephus Veares
Edwardus Cooke Robertus Fenton Maria Steeres Thomas Smith
Humfrius Dormant Edwardus Higgott Eliza Massey & Jeremias
Rhodes indictati convicti attincti sive adiudicati existunt
vel non indictati convicti attincti sive adiudicati existunt

(part 3)
ac accessares eorum cuiuslibet & fugam & fugas super inde facta acetiam
omnia & singula Indictamenta Judicia fines Condemnationes
executiones penas corporales imprisonamenta punitiones & omnes alia
seu eorum aliquem per premissis vel aliquo premissorum habuimus habuemus vel in
futuro habere poterimus aut heredes seu Successores nostri ullo modo habere
poterint Necnon utlagarii si quo versus ipsos seu eorum aliquem occasione
premissorum sunt promulgata seu fiunt promulganda & omnes & omnimodas
sectas querelas & impetitiones & demandas quecumque que nos versus
ipsos seu eorum aliquem pertinent seu pertinere poterint ratione vel occasione
premissorum seu eorum alicuius & firmam pacem nostram eis & eorum cuilibet
damus & concedimus per presentes Ita tamen quod ipsi & eorum quilibet stent
& stet recte in Curia nostra si quis versus ipsos seu eorum aliquem loqui
voluint de premissis vel aliquo premissorum licet quod ipsi vel ipse et
eorum quilibet bonam & sufficientem securitatem non inveniunt de se bene
gerendo erga nos heredes & Successores nostros & cunctum populum nostrum
iuxta formam cuiusdam Actus Parliamenti Domini Edwardi nuper Regis
Anglie tertii progenitoris nostri Anno Regni sui decimo apud Westmonasterium
editi & provisi Et ulterius de uberiori gratia nostra speciali ac ex certa
scientia & mero motu nostris pro nobis heredibus & succcessoribus nostris volumus &
concedimus quod habere littere pardinationis & omnia & singula in eisdem
contenta bone firme valide sufficienter & effectuale in lege stabunt
& existunt & per omnes & omnimodos Officiarios & Ministros nostros & heredes &
Successores nostrorum allocentur Statuto in Parliamento Domini Ricardi nuper
Regis Anglie secundi Anno regni siu decimo tertio apud Westmonasterium tenti
aut aliquo alio Statuto Actu ordinacione vel provisione in contrario

(part 4)
inde facto in aliquo non obstante Proviso tamen quod si predicti
Henricus Ward Thomas Letherland Johannes Pitts Samuel Shaw
Johannes Attersley Petrus Waterfall Johannes Waterfall Johannes White
Josephus Veares Edwardus Cooke Robertus Fenton Maria Steeres
& Humfrius Dormant non exibunt extra Regnum Anglie transituri
extra mare versus aliqui partem Americe modo inhabitatum per subitos
nostros infra spacium sex mensium proximas post datum presentium aut si ipsi
infra septem Annos immediate sequentes sex menses post datum
presentium remanent aut remanebunt aut redibunt aut eorum aliquis
redibit in Angliam infra spacium septem Annorum proximos post datum presentium
quod tunc hec nostra pardonatus sit & erit omnino vacua & nulli vigoris
quoad ipsos Henricum Ward Thomam Letherland Johannem Pitts
Samuel Shaw Johannem Attersley Petrum Waterfall Johannem Waterfall
Johannem White Josephum Veaeres Edwardum Cooke Robertum Fenton
Mariam Steeres & Humfrium Dormant & quemlibet eorum aliquid in
hiis presentibus in contrario inde non obstante volumus tamen quod hec
nostra pardonatio sit in omnibus firma valida & sufficiens eiusdem
Henrico Ward Thome Letherland Johanni Pitts Samueli Shaw Johanni
Attersley Petro Waterfall Johanni Waterfall Johanni White Josepho
Veares Edwardo Cooke Roberto Fenton Marie Steeres & Humfrio
Dormant & cuilibet eorum si performabunt & perimplebunt aut aliquis eorum
perimplebit & performabit conditiones predictos volumus etiam ulterius quod
post allocationem huius pardonationis nostre predictus Henricus Ward & omnes
alii persones in conditione predicto nominati preantea hic mentionati sub
eadem conditione fore pardonati remanebunt sub Custodia vicecomitium nostrorum
in Gaolis nostris predictis ubi modo detenti sunt quousque ipsi & ipsei
ac eorum quilibet & earum quilibet transportati fuint vel transportati
fuit in partibus transmarinis prementionatis secundum Conditionem predictam
In cuius &c [rei testimonium] Teste Rege apud Westmonasterium decimo quarto die Julii
per ipsum Regem

References

1. Weaver, William L. (William Lawton). A Genealogy of the Fenton Family : Descendants of Robert Fenton, an Early Settler of Ancient Windham, Conn. (Now Mansfield). Willimantic, Conn. : [s.n.], 1867. http://archive.org/details/genealogyoffento05weav.

2. Coldham, Peter Wilson. The Complete Book of Emigrants, 1661-1699. Baltimore, Maryland: Genealogical Publishing Co., Inc., 1990.

3. Weaver, p. 7

Saturday, 27. August 2022

Jon Udell

GitHub for English teachers

I’ve long imagined a tool that would enable a teacher to help students learn how to write and edit. In Thoughts in motion I explored what might be possible in

I’ve long imagined a tool that would enable a teacher to help students learn how to write and edit. In Thoughts in motion I explored what might be possible in Federated Wiki, a writing tool that keeps version history for each paragraph. I thought it could be extended to enable the kind of didactic editing I have in mind, but never found a way forward.

In How to write a press release I tried bending Google Docs to this purpose. To narrate the process of editing a press release, I dropped a sample release into a GDoc and captured a series of edits as named versions. Then I captured the versions as screenshots and combined them with narration, so the reader of the blog post can see each edit as a color-coded diff with an explanation.

The key enabler is GDoc’s File -> Version history -> Name current version, along with File -> See version history’s click-driven navigation of the set of diffs. It’s easy to capture a sequence of editing steps that way.

But it’s much harder to present those steps as I do in the post. That required me to make, name, and organize a set of images, then link them to chunks of narration. It’s tedious work. And if you want to build something like this for students, that’s work you shouldn’t be doing. You just want to do the edits, narrate them, and share the result.

This week I tried a different approach when editing a document written by a colleague. Again the goal was not only to produce an edited version, but also to narrate the edits in a didactic way. In this case I tried bending GitHub to my purpose. I put the original doc in a repository, made step-by-step edits in a branch, and created a pull request. We were then able to review the pull request, step through the changes, and review each as a color-coded diff with an explanation. No screenshots had to be made, named, organized, or linked to the narration. I could focus all my attention on doing and narrating the edits. Perfect!

Well, perfect for someone like me who uses GitHub every day. If that’s not you, could this technique possibly work?

In GitHub for the rest of us I argued that GitHub’s superpowers could serve everyone, not just programmers. In retrospect I felt that I’d overstated the case. GitHub was, and remains, a tool that’s deeply optimized for programmers who create and review versioned source code. Other uses are possible, but awkward.

As an experiment, though, let’s explore how awkward it would be to recreate my Google Docs example in GitHub. I will assume that you aren’t a programmer, have never used GitHub, and don’t know (or want to know) anything about branches or commits or pull requests. But you would like to be able to create a presentation that walks a learner though a sequence of edits, with step-by-step narration and color-coded diffs. At the end of this tutorial you’ll know how to do that. The method isn’t as straightforward as I wish it were. But I’ll describe it carefully, so you can try it for yourself and decide whether it’s practical.

Here’s the final result of the technique I’ll describe.

If you want to replicate that, and don’t already have a GitHub account, create one now and log in.

Ready to go? OK, let’s get started.

Step 1: Create a repository

Click the + button in the top right corner, then click New repository.

Here’s the next screen. All you must do here is name the repository, e.g. editing-step-by-step, then click Create repository. I’ve ticked the Add a README file box, and chosen the Apache 2.0 license, but you could leave the defaults — box unchecked, license None — as neither matters for our purpose here.

Step 2: Create a new file

On your GitHub home page, click the Repositories tab. Your new repo shows up first. Click its link to open it, then click the Add file dropdown and choose Create new file. Here’s where you land.

Step 3: Add the original text, create a new branch, commit the change, and create a pull request

What happens on the next screen is bewildering, but I will spare you the details because I’m assuming you don’t want to know about branches or commits or pull requests, you just want to build the kind of presentation I’ve promised you can. So, just follow this recipe.

Name the file (e.g. sample-press-release.txt)
Copy/paste the text of the document into the edit box
Select Create a new branch for this commit and start a pull request
Name the branch (e.g. edits)
Click Propose new file

On the next screen, title the pull request (e.g. edit the press release) and click Create pull request.

Step 4: Visit the new branch and begin editing

On the home page of your repo, use the main dropdown to open the list of branches. There are now two: main and edits. Select edits.

Here’s the next screen.

Click the name of the document you created (e.g. sample-press-release.txt) to open it.

Click the pencil icon’s dropdown, and select Edit this file.

Make and preview your first edit. Here, that’s my initial rewrite of the headline. I’ve written a title for the commit (Step 1: revise headline), and I’ve added a detailed explanation in the box below the title. You can see the color-coded diff above, and the rationale for the change below.

Click Commit changes, and you’re back in the editor ready to make the next change.

Step 5: Visit the pull request to review the change

On your repo’s home page (e.g. https://github.com/judell/editing-step-by-step), click the Pull requests button. You’ll land here.

Click the name of the pull request (e.g. edit the press release) to open it. In the rightmost column you’ll see links with alphanumeric labels.

Click the first one of those to land here.

This is the first commit, the one that added the original text. Now click Next to review the first change.

This, finally, is the effect we want to create: a granular edit, with an explanation and a color-coded diff, encapsulated in a link that you can give to a learner who can then click Next to step through a series of narrated edits.

Lather, rinse, repeat

To continue building the presentation, repeat Step 4 (above) once per edit. I’m doing that now.

… time passes …

OK, done. Here’s the final edited copy. To step through the edits, start here and use the Next button to advance step-by-step.

If this were a software project you’d merge the edits branch into the main branch and close the pull request. But you don’t need to worry about any of that. The edits branch, with its open pull request, is the final product, and the link to the first commit in the pull request is how you make it available to a learner who wants to review the presentation.

GitHub enables what I’ve shown here by wrapping the byzantine complexity of the underlying tool, Git, in a much friendlier interface. But what’s friendly to a programmer is still pretty overwhelming for an English teacher. I still envision another layer of packaging that would make this technique simpler for teachers and learners focused on the craft of writing and editing. Meanwhile, though, it’s possible to use GitHub to achieve a compelling result. Is it practical? That’s not for me to say, I’m way past being able to see this stuff through the eyes of a beginner. But if that’s you, and you’re motivated to give this a try, I would love to know whether you’re able to follow this recipe, and if so whether you think it could help you to help learners become better writers and editors.

Friday, 26. August 2022

Phil Windleys Technometria

ONDC: An Open Network for Ecommerce

Summary: Open networks provide the means for increased freedom and autonomy as more of our lives move to the digital realm. ONDC is an experiment launching in India that is hoping to bring these benefits to shoppers and merchants. I read about the Open Network for Digital Commerce (ONDC) on Azeem Azhar's Exponential View this week and then saw a discussion of it on the VRM mailing

Summary: Open networks provide the means for increased freedom and autonomy as more of our lives move to the digital realm. ONDC is an experiment launching in India that is hoping to bring these benefits to shoppers and merchants.

I read about the Open Network for Digital Commerce (ONDC) on Azeem Azhar's Exponential View this week and then saw a discussion of it on the VRM mailing list. I usually take multiple hits on the same thing as a sign I ought to dig in a little more.

Open Network for Digital Commerce is a non-profit established by the Indian government to develop open ecommerce. The goal is to end platform monopolies in ecommerce using an open protocol called Beckn. I'd never heard of Beckn before. From the reaction on the VRM mailing list, not many there had either.

This series of videos by Ravi Prakash, the architect of Beckn, is a pretty good introduction. The first two are largely tutorials on open networks and protocols and their application to commerce. The real discussion of Beckn starts about 5'30" into the second video. One of Beckn's core features is a way for buyers to discover sellers and their catalogs. In my experience with decentralized systems, discovery is one of the things that has to work well.

The README on the specifications indicates that buyers (identified as BAPs) address a search to a Beckn gateway of their choice. If the search doesn't specify a specific seller, then the gateway broadcasts the request to multiple sellers (labeled BPPs) whose catalogs match the context of the request. Beckn's protocol routes these requests to the sellers who they believe can meet the intent of the search. Beckn also includes specifications for ordering, fulfillment, and post-fulfillment activities like ratings, returns, and support.

Beckn creates shared digital infrastructure

ONDC's goal is to allow small merchants to compete with large platforms like Amazon, Google, and Flipkart. Merchants would use one of several ONDC-compatible clients to list their catalogs. When a buyer searches, products from their catalog would show up in search results. Small and medium merchants have long held the advantage in being close to the buyer, but lacked ways to easily get their product offerings in front of online shoppers. Platforms hold these merchants hostage because of their reach, but often lack local options. ONDC wants to level that playing field.

Will the big platforms play? The India Times interviewed Manish Tiwary, country Manager for Amazon's India Consumer Business. In the article he says:

I am focused on serving the next 500 million customers. Therefore, I look forward to innovations, which will lift all the boats in the ecosystem.

At this stage, we are engaging very closely with the ONDC group, and we are quite committed to what the government is wanting to do, which is to digitize kiranas, local stores...I spoke about some of our initiatives, which are preceding even ONDC... So yes, excited by what it can do. It's a nascent industry, we will work closely with the government.

From Open Network for Digital Commerce a fascinating idea; excited about prospects: Amazon India exec
Referenced 2022-08-15T10:24:19-0600

An open network for ecommerce would change how we shop online. There are adoption challenges. Not the least of which is getting small merchants to list what they have for sale and keep inventory up to date. Most small merchants don't have sophisticated software systems to interface for automatic updates—they'll do it by hand. If they don't see the sales, they'll not spend the time maintaining their catalog. Bringing the tens of millions of small merchants in India online will be a massive effort.

I'm fascinated by efforts like these. I spend most of my time right now writing about open networks for identity as I wrap up my forthcoming O'Reilly book. I'm not sure anyone really knows how to get them going, so it takes a lot of work with more misses than hits. But I remain optimistic that open networks will ultimately succeed. Don't ask me why. I'm not sure I can explain it.

Photo Credit: Screenshots from Beckn tutorial videos from Ravi Prakash (CC BY-SA 4.0)

Tags: ecommerce protocol ondc beckn vrm

Thursday, 25. August 2022

@_Nat Zone

9/6: Appearing at Identity Week Asia in Singapore

With just ten days to go, I will be appearing at Identity Week Asia, held in Singapore …

With just ten days to go, I will be appearing at Identity Week Asia, held in Singapore.

UPDATE (8/31): In addition to appearing as a panelist, I will now also be moderating this entire track.

Identity Week Asia 2022 Day 1 (2022-09-06) @ 11:20 Panel: Future of identity and authentication in the financial services

This panel is sponsored by Pindrop and brings together perspectives from across Asia to discuss authentication in financial services.

Nat Sakimura, Chairman, OpenID Foundation
Kendrick Lee, Director, National Digital Identity, GovTech Singapore
Tim Prugar, Technical Advisor to the CTO, Pindrop
Sugandhi Govil, VP, Compliance APAC, Genesis Asia Pacific
Jaebeom Kim, Principal Researcher, Telecommunications Technology Association

9/15 (Thu) from 14:00: Bank of Japan-hosted "ISO Panel (6th session): Online Identity Verification (eKYC): Overview and Potential Applications of the New International Standard ISO 5158"


On Thursday, September 15, 2022, from 14:00 to 15:30, the Bank of Japan's Payment and Settlement Systems Department will host "ISO Panel (6th session): Online Identity Verification (eKYC): Overview and Potential Applications of the New International Standard ISO 5158."

The main theme is the new international standard for customer identification in mobile financial services (Mobile financial services – Customer identification guidelines, ISO 5158).

The program is as follows:

Presentation: "Customer Identification and Authentication Technology: Overview of ISO 5158 and Related Technologies" (tentative), by 橋本 崇, Secretary General of the ISO/TC 68 national committee (Payment and Settlement Systems Department, Bank of Japan)
Presentation: "Biometric and Related Standards Relevant to ISO 5158" (tentative), by 山田 朝彦, expert for ISO/IEC JTC 1/SC 27/WG 3 and 5 and SC 37/WG 2, 4, 5, and 6, and member of the ISO/TC 68 national committee
Panel discussion: "Online Identity Verification (eKYC): Potential Applications of the New International Standard ISO 5158", with 志手 啓祐 (Head of Operations, LINE Pay Corporation), 新崎 卓 (former chair of the ISO/IEC JTC 1/SC 37 national committee; Founder, Cedar Inc.), 肥後 彰秀 (Director, TRUSTDOCK Inc.), and 福田 好郞 (Secretary General, Payments Japan Association)

Details are available on the Bank of Japan's page. The registration deadline is September 11. I have registered as well.1 Please feel free to join.

Introducing the new international standard "ISO 5158: Mobile financial services – Customer identification guidelines." We will deepen the discussion of its future applications with the panelists who took part in drafting the standard. Please feel free to apply. #ISOパネル #eKYC #認証 #本人確認 https://t.co/OOqFb6x69c pic.twitter.com/lK3V91KSWL

— Bank of Japan (@Bank_of_Japan_j) August 5, 2022

With more and more payments, such as for online shopping, being made on smartphones, identity verification is an important issue. A new ISO standard for this kind of identity verification is being created. We will explain it in plain terms and hold a panel discussion. Please feel free to apply. #認証 #eKYC #モバイル金融 https://t.co/OOqFb6xDYK pic.twitter.com/V9b9OaBenQ

— Bank of Japan (@Bank_of_Japan_j) August 24, 2022

Thursday, 18. August 2022

Werdmüller on Medium

What is a man?

And why does it matter?

Continue reading on Medium »

Tuesday, 16. August 2022

Werdmüller on Medium

Neumann Owns

Flow has nothing to do with the housing crisis.

Continue reading on Medium »

Monday, 15. August 2022

reb00ted

Levels of information architecture


I’ve been reading up on what is apparently called information architecture: the “structural design of shared information environments”.

A quite fascinating discipline, and sorely needed as the amount of information we need to interact with on a daily basis keeps growing.

I kind of think of it as “the structure behind the design”. If design is what you see when looking at something, information architecture is the beams and struts and foundations etc that keep the whole thing standing and comprehensible.

Based on what I’ve read so far, however, it can be a bit myopic, focusing just on “what’s inside the app”. That’s most important, obviously, but insufficient in the age of IoT – where some of the “app” is actually controllable and observable through physical items – and the expected coming wave of AR applications. Even here and now, many flows start with QR codes printed on walls or scanned from other people’s phones, and we miss something in the “design of shared information environments” if we don’t make those in scope.

So I propose this outermost framework to help us think about how to interact with shared information environments:

Universe-level: Focuses on where on the planet a user could conceivably be, and how that changes how they interact with the shared information environment. For example, functionality may be different in different regions, use different languages or examples, or not be available at all.
Environment-level: Focuses on the space in which the user is currently located (like sitting on their living room couch), or that they can easily reach, such as a bookshelf in the same room. Here we can have a discussion about, say, whether the user will pick up their Apple remote, run the virtual remote app on their iOS device, or walk over to the TV to turn up the volume.
Device-level: Once the user has decided which device to use (e.g. their mobile phone, their PC, their AR goggles, a button on the wall etc), this level focuses on what the user does at the top level of that device. On a mobile phone or PC, that would be the operating-system-level features such as which app to run (not the content of the app, that’s the next level down), or home screen widgets. Here we can discuss how the user interacts with the shared information space given that they also do other things on their device; how to get back and forth; integrations and so forth.
App-level: The top-level structure inside an app. For example, an app might have 5 major tabs reflecting 5 different sets of features.
Page-level: The structure of pages within an app. Do they have commonalities (such as a title at the top, or a toolbox to the right), and how are they structured?
Mode-level: Some apps have “modes” that change how the user interacts with what is shown on a page. Most notably: drawing apps, where the selected tool (like drawing a circle vs erasing) determines different interaction styles.

I’m just writing this down for my own purposes, because I don’t want to forget it and want to refer to it when thinking about design problems. And perhaps it is useful for you, the reader, as well. If you think it can be improved, let me know!

Saturday, 13. August 2022

Jon Udell

How to rewrite a press release: a step-by-step guide


As a teaching fellow in grad school I helped undergrads improve their expository writing. Some were engineers, and I invited them to think about writing and editing prose in the same ways they thought about writing and editing code. Similar rules apply, with different names. Strunk and White say “omit needless words”; coders say “DRY” (don’t repeat yourself.) Writers edit; coders refactor. I encouraged students to think about writing and editing prose not as a creative act (though it is one, as is coding) but rather as a method governed by rules that are straightforward to learn and mechanical to apply.

This week I applied those rules to an internal document that announces new software features. It’s been a long time since I’ve explained the method, and thanks to a prompt from Greg Wilson I’ll give it a try using another tech announcement I picked at random. Here is the original version.

I captured the transformations in a series of steps, and named each step in the version history of a Google Doc.

Step 1

The rewritten headline applies the following rules.

Lead with key benefits. The release features two: support for diplex-matched antennas and faster workflow. The original headline mentions only the first, I added the second.

Clarify modifiers. A phrase like “diplex matched antennas” is ambiguous. Does “matched” modify “diplex” or “antennas”? The domain is unfamiliar to me, but I suspected it should be “diplex-matched” and a web search confirmed that hunch.

Omit needless words. The idea of faster workflow appears in the original first paragraph as “new efficiencies aimed at streamlining antenna design workflows and shortening design cycles.” That’s a long, complicated, yet vague way of saying “enables designers to work faster.”

Step 2

The original lead paragraph was now just a verbose recap of the headline. So poof, gone.

Step 3

The original second paragraph, now the lead, needed a bit of tightening. Rules in play here:

Strengthen verbs. “NOUN is a NOUN that VERBs” weakens the verb. “NOUN, a NOUN, VERBs” makes it stronger.

Clarify modifiers. “matching network analysis” -> “matching-network analysis”. (As I look at it again now, I’d revise to “analysis of matching networks.”)

Break up long, weakly-linked sentences. The original was really two sentences linked weakly by “making it,” so I split them.

Omit needless words. A word that adds nothing, like “applications” here, weakens a sentence.

Strengthen parallelism. If you say “It’s ideal for X and Y” there’s no problem. But when X becomes “complex antenna designs that involve multi-state and multi-port aperture or impedance tuners,” and Y becomes “corporate feed networks with digital phase shifters,” then it helps to make the parallelism explicit: “It’s ideal for X and for Y.”

Step 4

Omit needless words. “builds on the previous framework with additional” -> “adds”.

Simplify. “capability to connect” -> “ability to connect”.

Show, don’t tell. A phrase like “time-saving options in the schematic editor’s interface” tells us that designers save time but doesn’t show us how. That comes next: “the capability to connect two voltage sources to a single antenna improves workflow efficiency.” The revision cites that as a shortcut.

Activate the sentence. “System and radiation efficiencies … can be effortlessly computed from a single schematic” makes efficiencies the subject and buries the agent (the designer) who computes them. The revision activates that passive construction. Similar rules govern the rewrite of the next paragraph.

Step 5

When I reread the original fourth paragraph I realized that the release wasn’t only touting faster workflow, but also better collaboration. So I adjusted the headline accordingly.

Step 6

Show, don’t tell. The original version tells, the new one shows.

Simplify. “streamline user input” -> “saves keystrokes” (which I might further revise to “clicks and keystrokes”).

Final result

Here’s the result of these changes.

I haven’t fully explained each step, and because the domain is unfamiliar I’ve likely missed some nuance. But I’m certain that the final version is clearer and more effective. I hope this step-by-step narration helps you see how and why the method works.

Friday, 12. August 2022

Mike Jones: self-issued

Publication Requested for OAuth DPoP Specification


Brian Campbell published an updated OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) draft addressing the shepherd review comments received. Thanks to Rifaat Shekh-Yusef for his useful review!

Following publication of this draft, Rifaat also created the shepherd write-up, obtained IPR commitments for the specification, and requested publication of the specification as an RFC. Thanks all for helping us reach this important milestone!

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-11.html

ian glazers tuesdaynight

Lessons on Salesforce’s Road to Complete Customer MFA Adoption


What follows is a take on what I learned as Salesforce moved to require all of its customers to use MFA. There’s plenty more left on the cutting room floor but it will definitely give you a flavor for the experience. If you don’t want to read all this you can check out the version I delivered at Identiverse 2022.


Thank you.

It is an honor and a privilege to be here on the first day of Identiverse. I want to thank Andi and the entire program team for allowing me to speak to you today.

This talk is an unusual one for me. I have had the pleasure and privilege to be here on stage before. But in all the times that I have spoken to you, I have been wearing my IDPro hat. I have never had the opportunity to represent my day job and talk about what my amazing team does. So today I am here to talk to you as a Salesforce employee.

And because of that you’re going to note a different look and feel for this presentation. Very different. I get to use the corporate template and I am leaning in hard to that.

Salesforce is a very different kind of company and that shows up in many different ways. Including the fact that, yes, there’s a squirrel-like thing on this slide. That’s Astro – they are one of our mascots. Let’s just get one thing out of the way up front – yes, they have their own backstories and different pronouns; no, they do not all wear pants. Let’s move on.

So the reason why I am here today is to talk to you about Salesforce’s journey towards complete customer adoption of MFA. There are 2 key words in this: Customer and Journey.

‘Customer’ is a key word here because the journey we are on is to drive our customers’ users to use MFA. This is not going to be a talk about how we enable our workforce to use MFA. Parenthetically we did that a few years ago and got ~95% of all employees enrolled in MFA in under 48 hours. Different talk another time. We are focused on raising the security posture of our customers with their help.

Journey is the other key word here. The reason why I want to focus on the Journey is because I believe there is something for everyone to take away and apply in their own situations. And I want to tell this Journey as a way of sharing the lessons I have learned, my team has learned, to help avoid the mistakes we made along the way.

The Journey Begins

So the Journey towards complete customer MFA. 

It starts in the Fall of 2019. Our CISO at the time makes a pronouncement. Because MFA is the single most effective way our customers could protect themselves and their customers, we wanted to drive more use of MFA. So the pronouncement was simple: in 3 months’ time every one of our (at the time) 10 product groups (known as our Clouds) will adopt a common MFA service (which was still in development at the time, btw), and by February 1 of 2021, 100% of end users of our well over 150,000 customers will use MFA or SSO on every login. Again, this is Salesforce changing all of our customers’ and all of their users’ behavior across all of our products in roughly a year’s time.

That means in a year’s time we are going to change the way every single person logs into a Salesforce product. And let’s be honest with ourselves, fellow identity nerds, this is what people think of MFA:

100% service adoption in 3 months. 100% user penetration within about 1 year.

100%

Of all end users.

All of them.

100%

There’s laughing from the audience. There’s some whispering to neighbors. I assume this is your reaction to the low bar that the CISO set for us… a trivial thing to achieve.

Oh wait no… the opposite. You, like I did at the time, reacted to the absolute batshit nutsery of that goal. What the CISO is proposing is to tell customers, WHO PAY US, here is the minimum bar for your users’ security posture, and you must change behaviors and potentially technologies if you don’t currently meet that bar and want to use our services.

100%… oh hell no.

I reacted like most 5 year olds would. I stomped my feet. I pulled the covers over my head thinking if the monsters couldn’t see me, they couldn’t get me. If I just didn’t acknowledge the CISO’s decree, it would somehow not apply. Super mature response. Lasted for about a week. Then I learned that the CISO committed all this to our Board of Directors. So… the chances of ignoring this were zero. But still, I fought against the tide. I was difficult. I was difficult to the program team and to my peer in Security. That was immature and just wasted time. I spent time rebuilding those relationships during the first 6 months of the program.

Step 0: Get a writer and a data person

What would you do, hotshot?
If you got this decree, what would be the first thing you’d do? Come on – shout them out! (Audience shouts out answers.) All good ideas… but the first thing you should do is hire the best tech writer you can. Trust me, you are going to need that person in the 2nd and 3rd acts, and it’s gonna take them a bit of time to come up to speed… so get going, hire a writer!

(It’s also not a bad idea to get data people on the team. If you are going to target 100% rollout then you need good ways to measure your progress. And you’ll want to slice and dice that to better understand where you need more customer outreach and which regions or business are doing well.)

Step 1: Form a program with non-SMEs

Ok probably the next thing you’d do is get a program running which is what we did. That program was and is run by non-identity people. Honestly, my first reaction was that this was going to be a problem. What I foresaw was a lot of explaining the “basics” of identity and MFA and SSO to the program team and not a lot of time left to do the work.

I was right and I was wrong. I was correct in that I and my team did spend a lot of time explaining identity concepts to the program team. I was wrong in that the work of explaining was actually the work that needed to be done. The program team were not identity people and we were asking them to do identity stuff and this was just like the admins at our customers. They were not identity people and we were now asking them to do identity stuff.

So having a program team of non-subject matter experts was a great feature not a bug. As the SMEs, my team spent hours explaining so many things to the program team and it turned out that the time we spent there was a glimpse of what the entire program would need to do with our customers.

Not only did we have a program team staffed with non-subject matter experts, we also formed a steering committee staffed, in part, with non-subject matter experts. The Steerco was headed by a representative from Security, Customer Success, and Product. This triumvirate helped us to balance the desires of Security with the realities of the customers with our ability to deliver needed features. 

Step 2: Find the highest ranking exec you can and use them as persuaders as needed

Next up – if we needed all of our clouds to use MFA, we needed to actually get their commitment to do so. The program dutifully relayed the CISO’s decree to the general managers of all the clouds. Understand that Salesforce’s fiscal year starts Feb 1, so we were just entering Q4, and here comes the program team telling the GMs, “yeah, on top of all your revenue goals for the year, you need to essentially drop everything and integrate with the MFA service,” which again wasn’t GA yet.

We were asking the GMs to change their Q4 and next fiscal year plans by adding a significant Trust-related program. And at Salesforce Trust is our number 1 value which means that this program had to go to the top of every cloud’s backlog. As a product manager, if someone told me “hey Ian, this thing that you really had no plans to do now has to be done immediately” I would take it poorly. Luckily, we have our CISO with the support of our Co-CEOs and Board to persuade the GMs.

Step 3: Get, maintain, and measure alignment using cultural and operational norms

So we got GM commitments but needed a way to keep them committed in the forthcoming years of the program. We used our execs to help do this and we relied on a standard Salesforce planning mechanism: the V2MOM. V2MOM stands for Vision, Values, Methods, Obstacles, and Measures. Essentially, where do you want to go, what is important to you in that journey, what are the things you are going to do get to that destination, what roadblocks do you expect, and how will you measure your progress. V2MOMs are ingrained in Salesforce culture and operations. Specific to MFA, we made sure that service adoption and customer MFA adoption measures were in the very first Method of every Cloud’s V2MOM and we used the regular review processes within Salesforce to monitor our progress.

Do not create something new! Find whatever your organization uses to gain and monitor alignment and progress and use it!

Lesson 1: Service delivery without adoption is the same thing as no service delivery

Round about this time I made the first of many mistakes. We had just GA’ed the new MFA service and I wanted to publish a congratulatory note and get all the execs to pile on. Keep in mind that the release was an MVP release and exactly zero clouds had adopted it. My boss stopped me from sending the note. Instead of a congratulatory piling on from the execs, I got a piling on from the CISO for the lack of features and adoption.

I am a product manager and live in a product org… not an internal IT org, not the Security org. My world is about shipping features… my world was about to get rocked. I had lost sight of the most important thing, especially to the execs: adoption.

Thus service delivery without adoption is the same thing as no service delivery.

Lesson 2: Plan to replan

At this point it is roughly February 2020 and no clouds had adopted the MFA service and we had just started to get metrics from the clouds as to their existing MFA and SSO penetration. It wasn’t pretty but at least we knew where we stood. And where we stood made it pretty clear to see that we were not going to be in a position to drive customer adoption of MFA and certainly not achieve 100% user coverage within the original year’s time.

We needed to reset our timeline and in doing so we had to draw up two new sets of plans: one for our clouds adopting the MFA service and one for our customer adoption. In that process, we moved the dates out for both. We gave our clouds more time to adopt the MFA service and moved the date for 100% customer end-user adoption to February 1 2022.

No matter how prepared you are at the beginning of a program like this, there will always be externalities that force you to adapt. 

Continue onwards

So with our new plans in hand and a reasonably well-oiled program in place, we began to roll out communications to customers in April of 2020. We explained what we wanted them to do (100% MFA usage) and why: MFA is the single best control they could employ to protect themselves, their customers, and their data against things like credential stuffing and password reuse. And we let them know about the deadline of February 1 2022. We did this in the clearest ways we knew how to express ourselves. We did it in multiple formats, languages, and media. We had teams of people calling customers and making them aware of the MFA requirements.

Remember when I said hire a writer early… yeah, that. Clear comms is crucial. Clear comms about identity stuff to non-identity people is really difficult and crucial to get as right as possible (and then iterate… a lot.)

Gain traction; get feedback

The program team we had formed was based on a template for a feature adoption team. Years ago, Salesforce released a fundamental change to its UX tier which had a profound impact on how our customers built and interacted with apps on our platform. To drive adoption of the new UX tier, we put together an adoption team… and we lifted heavily from that team and their approach.

Using the wisdom of those people, we knew that we were going to have to meet our customers where they were. First and foremost, we needed a variety of ways to get the MFA message out. We used both email and in-app messages along with good ol’ fashioned phone calls – yes, we called our customer admins. Besides a microsite, we built ebooks and an FAQ. We put on multiple webinars and found space in our in-person events to spread the word. We even built some specialized apps in some of our products to drive MFA awareness.

And we listened… our Customer Success Group brought back copious notes from their interactions with customers. We opened a dedicated forum in our Trailblazer Community. We trained a small army of people to respond to customer questions. We tracked customer escalations and sentiment and reported all of this to the CISO and other senior execs.

Wobbler #1

In our leadership development courses at Salesforce, we do a business simulation. This simulation puts attendees in the shoes of executives of a mythical company and they are asked to make resource allocation and other decisions. Over the course of the classes, you compete with fellow attendees and get to see the impact of your decisions. It’s a lot of fun. 

One consistent thing in all of the simulations is “The Wobbler.” The Wobbler is an externality thrown at you and your teammates. They can be intense; they can definitely knock a winning team out of contention. And so you can say to a colleague, “We were doing great until this wobbler” and they totally know what you mean.

Predictably, the MFA program was due for a wobbler. This one came in from a discrepancy in what we were communicating and the CISO noticed it first. Despite the many status briefings. Despite having one of his trusted deputies as part of the steering committee for the MFA program. There was a big disconnect. The MFA Program was telling our customers “By February 1 2022 you need to be doing MFA or SSO.” The CISO thought we were telling customers “MFA or SSO with MFA.” 

There are probably a few MBA classes on executive communication that could be written about this “little” disconnect. There was going to be no changing the CISO’s mind; the program team simply needed to start communicating the requirement of MFA or SSO with MFA.

From our customers’ perspective, Salesforce was moving the goal posts. They were stressed enough as it was, and this eroded trust. Our poor lead writer had a very bad week. The customer success teams doing outreach and talking to customers had very bad weeks. My teams had to redo their release plans to pull forward instrumentation to log and surface whether an SSO login used MFA upstream.

A word from our speaker

And now a word from a kinda popular SaaS Service Provider: “Hi, are you like me? Are you a service provider just trying to make the internet a safer place and increase the security posture of your customers but are thwarted by the lack of insight into the upstream authentication event? Isn’t that frustrating? But don’t worry, we have standards and things like AuthnContext in SAML and AMR claims in OIDC. Now if only on-prem and IDaaS IDPs would populate those claims consistently as well as consistently use the same values in those claims. If we could do that, it would make the world a better place. Don’t let this guy down.”

Ok I know this isn’t sexy stuff but please please please! It is damn hard as an SP to consistently get any insight into the upstream user authentication event. I know my own services can do better here when we act as an IDP. Please, industry peers, please please make this data available to downstream SPs. And, standards nerds, I know it ain’t sexy but can we please standardize or at least normalize not only the values in those claims but the order and meaning of the order of values within those claims. Pretty please? (include scrolling spreadsheet of all the amr values we’ve seen)
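To make that ask concrete, here is a minimal sketch (mine, not Salesforce's) of the kind of check an SP wants to run against the upstream IDP's signals once an OIDC ID token has been validated. The accepted amr values and the acr convention below are assumptions; as the rant above notes, real IDPs populate these claims inconsistently, which is exactly the problem.

```python
# Minimal sketch: after validating an OIDC ID token's signature, inspect the
# amr/acr claims to decide whether the upstream authentication included MFA.
# The accepted values below are assumptions; real IDPs vary widely.
MFA_AMR_VALUES = {"mfa", "otp", "hwk", "swk", "sms", "fido"}

def upstream_login_used_mfa(id_token_claims: dict) -> bool:
    """Return True if the ID token claims suggest MFA happened upstream."""
    amr = id_token_claims.get("amr", [])          # e.g. ["pwd", "otp"]
    if set(amr) & MFA_AMR_VALUES:
        return True
    # Some IDPs signal MFA via acr instead; this suffix check is illustrative.
    acr = id_token_claims.get("acr", "")
    return acr.endswith("multi-factor")

# Example with illustrative claims:
claims = {"sub": "user-1", "amr": ["pwd", "otp"], "acr": ""}
print(upstream_login_used_mfa(claims))  # True
```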

Step 4: Accommodate the hard use case

The wheels had begun to gain traction, so to speak. We heard from customer CISOs who were thrilled about our MFA requirements – it gave them the justification they were looking for to go much bigger with MFA. But we also heard from customers with hard use cases for whom there aren’t always great answers. For example, we have customers who use 3rd parties to administer and modify their Salesforce environments. Getting MFA into those people’s hands is tricky. Another example: people doing robotic process automation or running UX test suites struggle to meet the requirement of MFA on every UI-based login. Those users look like “regular” human users and have access to customer data. They need MFA. And yet the support for MFA in those areas is spotty at best.

We had another source of challenging use cases – brought to us by our ISV and OEM partners. These vital parts of our business have a unique relationship with our products and our customers and the challenges that our customers feel are amplified for our ISVs and OEMs.

What we learned was that there are going to be use cases that are just damn hard to deal with. 3rd party call centers. RPA tools. Managed service providers. The lesson here is: it’s okay. Your teams are made up of smart people, and even still there is no way to know all of these use cases at the onset of such a program. Find the flexibility to meet the customers where they are… and that includes negotiated empathy with your executives and stakeholders. I truly believe there is always a path forward, but it does require flexibility.

Wobbler #2

At this point the clouds have mostly adopted the service, and people are rolling out MFA controls in their products. Customer adoption of MFA and SSO is climbing and we are feeling good. And, predictably, the universe decided to take us down a peg. Enter Wobbler #2 – outages.

Raise your hand if you know the people that maintain the DNS infrastructure at your company… if you don’t know them, find them. Bring them chocolate, whiskey, aspirin… DNS is hard. And when DNS goes squirrely it tends to have a massive blast radius. Salesforce had a DNS-related outage and the MFA service that most of our clouds had just adopted was impacted. 

And a few weeks after we recovered from that, the MFA service suffered a second outage due to a regional failover process not failing over in a predicted manner. 

We recovered, we learned, we strengthened the service, we strengthened ourselves. 

So when things are going well, just assume that Admiral Ackbar is going to appear in your life… “It’s a trap.” 

Step 5: Address the long tail

So where are we today? Well, while we found lots of MFA and SSO adoption in our largest customers (especially SSO), we have a lot of customers with fewer than 100 users, and their adoption rates were low. One concerning thing about these customers is that the proportion of users with admin rights is very high. Where privileged users might make up a low single-digit percentage of the total user population in larger tenants, the proportion was much, much higher in smaller ones. Although we had a great outreach program, there are literally tens of thousands of tenants and thus tens of thousands of customers whose login configurations and behaviors we had to change.

And here is where we learned that we had to enlist automation, and that is where our teams are focused today: building ways to ensure that new tenants have MFA turned on by default, that customers have ways of opting out specific users such as their UX testing users, and that we have means to turn on MFA for all customers, not just new ones, without breaking those that put in the effort to do MFA previously. That takes time, but it is time well spent – we are going to automatically change the behavior of the system, which directly impacts our customers’ users – and that is not something one does lightly (one does not simply turn on MFA meme).

Lesson 3: Loving 100% percent

Standing here today, I can say that I really like the 100% goal. As I wrote this talk, I looked back at some of my email from the beginning of the project… and I am a little ashamed. I really fought the 100% goal hard… it wasn’t a good look. It wasn’t the right thing to do. The reason I like the goal is that although we are at roughly 80% of our monthly active users using MFA or SSO, had we not made 100% the goal then we’d have achieved less and been fine with where we are. Without that goal we wouldn’t have pushed to address the long tail of customers; we would not have innovated to find better solutions for both our customers and ourselves. Would I have liked our CISO to deliver the goal in a different way? Sure. But I have become a fan of a seemingly impossible goal… so long as it is expressed with empathy and care.

Step 6: Re-remember the goal

We ended last fiscal year with about 14 million monthly active users of MFA or SSO with MFA. They represent 14M people who are habituated; the identity ceremonies they perform include MFA.

And that has a huge knock-on effect. They bring that ceremony, inclusive of MFA, home with them. They bring the awareness of MFA to their families and friends. And this helps keep them safer in their business and their personal lives. The growth of MFA use in a business context is a huge deal professionally speaking. As I tell the extended team, what they have done and are doing is resume-building work: they rolled out and drove adoption of MFA across multiple lines of business at a 200 billion dollar company. That is no small feat!

But that knock-on effect – that those same users are going to bring MFA home with them and look to use it in their family lives… that, as an identity practitioner, is just as big of a deal. That makes the journey worth it.

Thank you.

Thursday, 11. August 2022

Mike Jones: self-issued

JWK Thumbprint URI is now RFC 9278


The JWK Thumbprint URI specification has been published as RFC 9278. Congratulations to my co-author, Kristina Yasuda, on the publication of her first RFC!

The abstract of the RFC is:


This specification registers a kind of URI that represents a JSON Web Key (JWK) Thumbprint value. JWK Thumbprints are defined in RFC 7638. This enables JWK Thumbprints to be used, for instance, as key identifiers in contexts requiring URIs.

The need for this arose during specification work in the OpenID Connect working group. In particular, JWK Thumbprint URIs are used as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs in the Self-Issued OpenID Provider v2 specification.
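As a rough illustration of what this enables, here is a sketch that computes an RFC 7638 thumbprint for a JWK and expresses it as an RFC 9278 JWK Thumbprint URI. The sample key coordinates are made up, so the resulting hash is only for show; consult the RFCs for the required members of each key type.

```python
# Sketch: compute an RFC 7638 JWK Thumbprint and express it as an RFC 9278
# JWK Thumbprint URI. The sample EC key below is made up for illustration.
import base64
import hashlib
import json

def jwk_thumbprint_uri(jwk: dict, required_members: tuple) -> str:
    # RFC 7638: JSON object containing only the required members, keys in
    # lexicographic order, no insignificant whitespace, hashed with SHA-256,
    # then base64url-encoded without padding.
    subset = {k: jwk[k] for k in required_members}
    canonical = json.dumps(subset, separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    thumbprint = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return "urn:ietf:params:oauth:jwk-thumbprint:sha-256:" + thumbprint

# Required members depend on key type, e.g. ("crv", "kty", "x", "y") for EC.
ec_jwk = {"kty": "EC", "crv": "P-256", "x": "made-up-x", "y": "made-up-y"}
print(jwk_thumbprint_uri(ec_jwk, ("crv", "kty", "x", "y")))
```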

Tuesday, 09. August 2022

SeanBohan.com

The Panopticon is (going to be) Us


I originally wrote this on the ProjectVRM mailing list in January of 2020. I made some edits to fix errors and clunky phrasing I didn’t like. It is a rant and a series of observations and complaints derived from after-dinner chats/walks with my significant other (who is also a nerd). This is a weak-tea attempt at the kind of amazing threads Cory Doctorow puts out.

I still hold out hope (for privacy, for decentralized identity, for companies realizing their user trust is worth way more than this quarter’s numbers). But unless there are changes across the digital world (people, policy, corps, orgs), it is looking pretty dark.

TLDR: 

There is a reason why AR is a favorite technology for Black Mirror screenwriters. 

Where generally available augmented reality and anonymity in public is going is bad and it is going to happen unless the users start demanding better and the Bigs (GAMAM+) decide that treating customers better is a competitive priority. 

My (Dark) Future of AR:

Generally available Augmented Reality will be a game changer for user experience, utility and engagement. The devices will be indistinguishable from glasses and everyone will wear them. 

The individual will wear their AR all the time, capturing sound, visuals, location and other data points at all times as they go about their day. They will only very rarely take it off (how often do you turn off your mobile phone?), capturing what they see, maybe what they hear, and everyone around them in the background, geolocated and timestamped. 

Every user of this technology will have new capabilities (superpowers!):

Turn by turn directions in your field of view
Visually search their field of view during the time they were in a gallery a week ago (time travel!)
Find live performance details from a band’s billboard (image recognition!)
Product recognition on the shelves of the grocery store (computer-vision driven dynamic shopping lists!)
Know when someone from your LinkedIn connections is also in a room you are in, along with where they are working now (presence! status! social!).

Data (images, audio, location, direction, etc.) will be directly captured. Any data exhaust (metadata, timestamps, device data, sounds in the background, individuals and objects in the background) will be hoovered up by whoever is providing you the “service”. All of this data (direct and indirect) will probably be out of your control or awareness. Compare it to the real world: do you know every organization that has data *about* you right now? What happens when that is 1000x?

Thanks to all of this data being vacuumed up and processed and parsed and bought and sold, Police (state, fed, local, contract security, etc.) WILL get new superpowers too. They can and will request all of the feeds from Amazon and Google and Apple for a specific location at a specific time. Because your location is in public, all three will have a harder time resisting (no expectation of privacy, remember?). Most of these requests will be completely legitimate and focused on crime or public safety. There will definitely be requests that are unethical, invalid and illegal, and citizens will rarely find out about these. Technology can and will be misused in banal and horrifying ways.

GAMAM make significant revenue from advertising. AR puts commercial realtime data collection on steroids.

“What product did he look at? For how long? Where was he? Let’s offer him a discount in realtime!”

The negative impacts won’t be for everyone, though. If I had a million dollars I would definitely take the bet where Elon Musk, Eric Schmidt, the Collisons, Sergey, Larry, Bezos and Tim Cook and other celebrities will all have the ability to “opt out” of being captured and processed. The rest of us will not get to opt out unless we pay $$$ – continuing to bring the old prediction “it isn’t how much privacy you have a right to, it is how much privacy you can afford” to life. 

You won’t know who is recording you and have to assume it is happening all of the time. 

We aren’t ready

Generally available augmented reality has societal / civil impacts we aren’t prepared for. We didn’t learn any lessons over the last 25 years regarding digital technology and privacy. AR isn’t the current online world where you can opt out, run an adblocker, run a VPN, not buy from Amazon, delete a social media account, compartmentalize browsers (one for work, one for research, one for personal), etc. AR is an overlay onto the real world, where everyone will be indirectly watching everyone else… for someone else’s benefit. I used the following example discussing this challenge with a friend:

2 teens took a selfie on 37th street and 8th avenue in Manhattan to celebrate their trip to NYC.
In the background of their selfie, a recovering heroin addict steps out of a methadone clinic on the block. His friends and coworkers don’t know he has a problem but he is working hard to get clean.
The teens posted the photo online.
That vacation photo was scraped by ClearView AI or another company using similar tech with less public exposure.
Once captured, it would be trivial to identify him.
Months or years later (remember, there is no expiration date on this data and data gets cheaper and cheaper every day) he applies for a job and is rejected during the background check.
Why? Because the background check vendor used by his prospective employer pays for a service that compares his photo to an index of “questionable locations and times/dates” including protest marches, known drug locations, riots, and methadone clinics. That data is then processed by an algorithm that scores him as a risk and he doesn’t get the job.

“Redlining” isn’t just a horrible practice of the past; with AR we can do it in new and awful ways.

Indirect data leakage is real: we leak other people’s data all the time. With AR, the panopticon is us: you and me and everyone around us who will be using this tech in their daily lives. This isn’t the state or Google watching us – AR is tech where the surveillance is user generated from my being able to get turn by turn directions in my personal HeadsUp Display. GAFAM are downstream and will exploit all that sweet sweet data. 

This is going from Surveillance to “Sous-veillance”… but on steroids, because we can’t opt out of going to work, or walking down the street, or running to the grocery, or riding the subway to a job interview, or going to a protest march, or going to an AA meeting, or, or, or living our lives. A rebuttal to the “I don’t have to worry about surveillance because I have nothing to hide” argument is that *we all* have to fight for privacy and reduced surveillance, especially those who have nothing to hide, because some of our fellow humans are in marginalized communities who cannot fight for themselves and because this data can and will impact us in ways we can’t identify. The convenience of reading emails while walking to work shouldn’t possibly out someone walking into an AA meeting, or walking out of a halfway house, etc.

No consumer, once they get the PERSONAL, INTIMATE value and the utility out of AR, will want to have the functionality of their AR platform limited or taken away by any law about privacy – even one that protects *their* privacy. This very neatly turns everyone using generally available AR technology into a surveillance node. 

The panopticon is us. 

There is a reason AR is a favorite plot device in Black Mirror. 

It is going to be up to us. 

For me, AR is the most “oh crap” thing out there, right now. I love the potential of the technology, yet I am concerned about how it will be abused if we aren’t VERY careful and VERY proactive; and based on how things have been going for the last 20+ years, I have a hard time being positive about where this is going to go.

There are a ton of people working on privacy in AR/VR/XR. The industry is still working on the “grammar” or “vocabulary” for these new XR-driven futures, and there are a lot of people and organized efforts to prevent some of the problems mentioned above. We don’t have societal-level agreements on what is and is not acceptable when it comes to personal data NOW. In a lot of cases the industry is ham-handedly trying to stuff Web2, Web1, and pre-Web business models (advertising) into this very sleek, new, super-powered platform. Governments love personal data even as they legislate on it (in some effective and some not effective ways).

The tech (fashion, infrastructure) is moving much faster than culture and governance can react. 

My belief, with respect to generally available Augmented Reality and its potential negative impacts on the public, is that we are all in this together and the solution isn’t a tech or policy or legislative or user solution but a collective one. We talk about privacy a lot and what THEY (govs, adtech, websites, hardware, IoT, services, etc.) are doing to US, but what about what we are doing to each other? Yup, individuals need to claim control over their digital lives, selves and data. Yes, Self-Sovereign Identity as the default state would help.

To prevent the potential dystopias mentioned above, we need aggressive engagement by Users. ALL of us need to act in ways that protect our privacy/identity/data/digital self as well as those around us. WE are leaking our friends’ identity, correlated attributes, and data every single day. Not intentionally, but via our own digital (and soon physical thanks to AR) data exhaust. We need to demand to be treated better by the companies, orgs and govs we interact with on a daily basis. 

Governments need to get their act together in regards to policy and legislation. There need to be real consequences for bad behavior and poor stewardship of users’ data.

Businesses need to start listening to their customers and treating them like customers and not sheep to be shorn. Maybe companies like AVAST can step up and bring their security/privacy know-how to help users level up. Maybe a company like Facebook can pivot and “have the user’s back” in this future.

IIW, MyData, CustomerCommons, VRM, and the Decentralized/Self-Sovereign Identity communities are all working towards changing this for the good of everyone.

At the end of the day, we also need a *Digital Spring* where people stand up and say “no more” to all the BS happening right now (adtech, lack of agency, abysmal data practices, lack of liberty for digital selves) before we get to a world where user-generated surveillance is commonplace.

(Yes dear reader, algorithms are a big part of this issue and I am only focused on the AR part of the problem with this piece. The problem is a big awful venn diagram of issues and actors with different incentives).

The post The Panopticon is (going to be) Us first appeared on SeanBohan.com.

Monday, 08. August 2022

Just a Theory

RFC: Restful Secondary Key API

A RESTful API design conundrum and a proposed solution.

I’ve been working on a simple CRUD API at work, with an eye to making a nicely-designed REST interface for managing a single type of resource. It’s not a complicated API, following best practices recommended by Apigee and Microsoft. It features exactly the sorts of APIs you’d expect if you’re familiar with REST, including:

POST /users: Create a new user resource
GET /users/{uid}: Read a user resource
PUT /users/{uid}: Update a user resource
DELETE /users/{uid}: Delete a user resource
GET /users?{params}: Search for user resources

If you’re familiar with REST, you get the idea.

There is one requirement that proved a bit of a design challenge. We will be creating a canonical ID for all resources managed by the service, which will function as the primary key. The APIs above reference that key by the {uid} path variable. However, we also need to support fetching a single resource by a number of existing identifiers, including multiple legacy IDs, and natural keys like, sticking to the users example, usernames and email addresses. Unlike the search API, which returns an array of resources, we need a nice single API like GET /users/{uid} that returns a single resource, but for a secondary key. What should it look like?

None of my initial proposals were great (using username as the sample secondary key, though again, we need to support a bunch of these):

GET /users?username={username} — consistent with search, but does it return a collection like search or just a single entry like GET /users/{uid}? Would be weird to return an array or not based on which parameters were used.
GET /users/by/username/{username} — bit weird to put a preposition in the URL. Besides, it might conflict with a planned API to fetch subsets of info for a single resource, e.g., GET /users/{uid}/profile, which might return just the profile object.
GET /user?username={username} — too subtle to have the singular rather than plural, but perhaps the most REST-ish.
GET /lookup?obj=user&username={username} — uses a special verb; not very RESTful.

I asked around a coding Slack, posting a few possibilities, and friendly API designers suggested some others. We agreed it was an interesting problem, easily solved if there was just one alternate that never conflicts with the primary key ID, such as GET /users/{uid || username}. But of course that’s not the problem we have: there are a bunch of these fields, and they may well overlap!

There was some interest in GET /users/by/username/{username} as an aesthetically-pleasing URL, plus it allows for

/by => list of unique fields
/by/username/ => list of all usernames?

But again, it runs up against the planned use of subdirectories to return sub-objects of a resource. One other option I played around with was GET /users/user?username={username}: the user sub-path indicates we want just one user much more than /by does, and it’s unlikely we’d ever use user to name an object in a user resource. But still, it overloads the path to mean one thing when it’s user and another when it’s a UID.

Looking back through the options, I realized that what we really want is an API that is identical to GET /users/{uid} in its behaviors and response, just with a different key. So what if we just keep using that, as originally suggested by a colleague as GET /users/{uid || username} but instead of just the raw value, we encode the key name in the URL. Turns out, colons (:) are valid in paths, so I defined this route:

GET /users/{key}:{value}: Fetch a single resource by looking up the {key} with the {value}. Supported {key} params are legacy_id, username, email_address, and even uid. This then becomes the canonical “look up a user resource by an ID” API.

The nice thing about this API is that it’s consistent: all keys are treated the same, as long as no key name contains a colon. Best of all, we can keep the original GET /users/{uid} API around as an alias for GET /users/uid:{value}. Or, better, continue to refer to it as the canonical path, since the PUT and DELETE actions map only to it, and document the GET /users/{key}:{value} API as accessing an alias or symlink for GET /users/{uid}. Perhaps return a Location header to the canonical URL, too?
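To see how this might look in practice, here is a minimal sketch using Flask (my choice of framework, not the post's); the in-memory store, the supported key names, and the redirect behavior are stand-ins for whatever the real service does.

```python
# Minimal sketch of the GET /users/{key}:{value} pattern using Flask.
# The data store, key names, and redirect choice are illustrative assumptions.
from flask import Flask, abort, jsonify, redirect

app = Flask(__name__)

# Pretend store keyed by canonical uid, plus the secondary keys we support.
USERS = {"u-1": {"uid": "u-1", "username": "theory",
                 "email_address": "theory@example.com", "legacy_id": "42"}}
LOOKUP_KEYS = {"uid", "username", "email_address", "legacy_id"}

@app.get("/users/<key_value>")
def get_user(key_value: str):
    if ":" in key_value:
        key, value = key_value.split(":", 1)   # /users/{key}:{value}
    else:
        key, value = "uid", key_value          # canonical /users/{uid}
    if key not in LOOKUP_KEYS:
        abort(400, f"unsupported lookup key: {key}")
    match = next((u for u in USERS.values() if u.get(key) == value), None)
    if match is None:
        abort(404)
    if key != "uid":
        # Treat secondary keys as aliases: point back at the canonical URL.
        return redirect(f"/users/{match['uid']}", code=307)
    return jsonify(match)
```

Whether a secondary-key hit should redirect (as here) or return the resource directly with a Location header pointing at the canonical URL is exactly the judgment call the post raises.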

In any event, as far as I can tell this is a unique design, so maybe it’s too weird or not properly RESTful? Would love to know of any other patterns designed to solve the problem of supporting arbitrarily-named secondary unique keys. What do you think?

Update: Aristotle Pagaltzis started a discussion on this pattern in a Gist.

More about… REST API Secondary Key RFC

Monday, 08. August 2022

Jon Udell

The Velvet Bandit’s COVID series


The Velvet Bandit is a local street artist whose work I’ve admired for several years. Her 15 minutes of fame happened last year when, as reported by the Press Democrat, Alexandria Ocasio-Cortez wore a gown with a “Tax the Rich” message that closely resembled a VB design.

I have particularly enjoyed a collection that I think of as the Velvet Bandit’s COVID series, which appeared on the boarded-up windows of the former Economy Inn here in Santa Rosa. The building is now under active renovation and the installation won’t last much longer, so I photographed it today and made a slideshow.

I like this image especially, though I have no idea what it means.

If you would like to buy some of her work, it’s available here. I gather sales have been brisk since l’affaire AOC!

Sunday, 07. August 2022

reb00ted

An autonomous reputation system


Context: We never built an open reputation system for the internet. This was a mistake, and that’s one of the reasons why we have so much spam and fake news.

But now, as governance takes an ever-more prominent role in technology, such as for the ever-growing list of decentralized projects e.g. DAOs, we need to figure out how to give more power to “better” actors within a given community or context, and disempower or keep out the detractors and direct opponents. All without putting a centralized authority in place.

Proposal: Here is a quite simple, but I think rather powerful, proposal. We use an on-line discussion group as an example, but this is a generic protocol that should be applicable to many other applications that can use reputation scores of some kind.

Let’s call the participants in the reputation system Actors. As this is a decentralized, non-hierarchical system without a central power, there is only one class of Actor. In the discussion group example, each person participating in the discussion group is an Actor.

An Actor is a person, or an account, or a bot, or anything really that has some ability to act, and that can be uniquely identified with an identifier of some kind within the system. No connection to the “real” world is necessary, and it could be as simple as a public key. There is no need for proving that each Actor is a distinct person, or that a person controls only one Actor. In our example, all discussion group user names identify Actors.

The reputation system manages two numbers for each Actor, called the Reputation Score S and the Rating Tokens Balance R. It does this in such a way that it is impossible for those numbers to be changed outside of this protocol.

For example, these numbers could be managed by a smart contract on a blockchain which cannot be modified except through the outlined protocol.

The Reputation Score S is the current reputation of some Actor A, with respect to some subject. In the example discussion group, S might express the quality of content that A is contributing to the group.

If there is more than one reputation subject we care about, there will be an instance of the reputation system for each subject, even if it covers the same Actors. In the discussion group example, the reputation of contributing good primary content might be different from reputation for resolving heated disputes, for example, and would be tracked in a separate instance of the reputation system.

The Reputation Score S of any Actor automatically decreases over time. This means that Actors have a lower reputation if they were rated highly in the past, than if they were rated highly recently.

There’s a parameter in the system, let’s call it αS, which reflects S’s rate of decay, such as 1% per month.

Actors rate each other, which means that they take actions, as a result of which the Reputation Score of another Actor changes. Actors cannot rate themselves.

It is out of scope for this proposal to discuss what specifically might cause an Actor to decide to rate another, and how. This tends to be specific to the community. For example, in a discussion group, ratings might often happen if somebody reads newly posted content and reacts to it; but it could also happen if somebody does not post new content because the community values community members who exercise restraint.

The Rating Tokens Balance R is the set of tokens an Actor A currently has at their disposal to rate other Actors. Each rating that A performs decreases their Rating Tokens Balance R, and increases the Reputation Score S of the rated Actor by the same amount.

Every Actor’s Rating Tokens Balance R gets replenished on a regular basis, such as monthly. The regular increase in R is proportional to the Actor’s current Reputation Score S.

In other words, Actors with high reputation have a high ability to rate other Actors. Actors with a low reputation, or zero reputation, have little or no ability to rate other Actors. This is a key security feature inhibiting the ability for bad actors to take over.

The Rating Token Balance R is capped to some maximum value Rmax, which is a percentage of the current reputation of the Actor.

This prevents passive accumulation of rating tokens that then could be unleashed all at once.

The overall number of new Ratings Tokens that is injected into the system on a regular basis as replenishment is determined as a function of the desired average Reputation Score of Actors in the system. This enables Actors’ average Reputation Scores to be relatively constant over time, even as individual reputations increase and decrease, and Actors join and leave the system.

For example, if the desired average Reputation Score is 100 in a system with 1000 Actors, if the monthly decay reduced the sum of all Reputation Scores by 1000, 10 new Actors joined over the month, and 1000 Rating Tokens were eliminated because of the cap, 3000 new Rating Tokens (or something like that, my math may be off – sorry) would be distributed, proportional to their then-current Reputation Scores, to all Actors.

Optionally, the system may allow downvotes. In this case, the rater’s Rating Token Balance still decreases by the number of Rating Tokens spent, while the rated Actor’s Reputation also decreases. Downvotes may be more expensive than upvotes.

There appears to be a dispute among reputation experts on whether downvotes are a good idea, or not. Some online services support them, some don’t, and I assume for good reasons that depend on the nature of the community and the subject. Here, we can model this simply by introducing another coefficient between 0 and 1, which reflects the decrease of reputation of the downvoted Actor given the number of Rating Tokens spent by the downvoting Actor. In case of 1, upvotes cost the same as downvotes; in case of 0, no amount of downvotes can actually reduce somebody’s score.

To bootstrap the system, an initial set of Actors who share the same core values about the to-be-created reputation each gets allocated a bootstrap Reputation Score. This gives them the ability to receive Rating Tokens with which they can rate each other and newly entering Actors.
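To make the mechanics above concrete, here is a minimal, illustrative sketch in Ruby. The class and method names, the parameter values, and the exact replenishment formula (topping scores back up toward the desired average) are my assumptions, not part of the proposal; it only shows the flow of decay, rating by spending Rating Tokens, optional downvotes with a cost coefficient, and replenishment proportional to current scores, capped at Rmax.

class Autorep
  Actor = Struct.new(:score, :tokens)

  def initialize(decay: 0.01, cap_fraction: 0.5, downvote_factor: 1.0, target_avg: 100.0)
    @decay = decay                     # alpha_S: e.g. 1% reputation decay per period
    @cap_fraction = cap_fraction       # Rmax as a fraction of the Actor's current reputation
    @downvote_factor = downvote_factor # 0..1: how strongly downvotes reduce reputation
    @target_avg = target_avg           # desired average Reputation Score
    @actors = {}
  end

  def add_actor(id, bootstrap_score = 0.0)
    @actors[id] = Actor.new(bootstrap_score, 0.0)
  end

  # Rating spends the rater's tokens and moves the rated Actor's score by the same
  # amount (scaled by the downvote coefficient for downvotes). Self-rating is forbidden.
  def rate(rater_id, rated_id, amount, downvote: false)
    raise ArgumentError, "Actors cannot rate themselves" if rater_id == rated_id
    rater = @actors.fetch(rater_id)
    rated = @actors.fetch(rated_id)
    spend = [amount, rater.tokens].min
    rater.tokens -= spend
    rated.score += downvote ? -spend * @downvote_factor : spend
  end

  # One replenishment period: decay all scores, then distribute new Rating Tokens
  # in proportion to current scores so the average score stays near the target,
  # capping each balance at Rmax.
  def replenish
    @actors.each_value { |a| a.score *= (1.0 - @decay) }
    total = @actors.values.sum(&:score)
    deficit = [@target_avg * @actors.size - total, 0.0].max
    @actors.each_value do |a|
      share = total.zero? ? 0.0 : a.score / total
      a.tokens = [a.tokens + deficit * share, @cap_fraction * a.score].min
    end
  end
end

In a real deployment these rules would live in something tamper-proof, such as the smart contract mentioned above, rather than in an in-memory object; the sketch is only meant to make the token flow easy to follow.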

Some observations:

Once set up, this system can run autonomously. No oversight is required, other than perhaps adjusting some of the numeric parameters until enough experience is gained about what those parameters should be in real-world operation.

Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high reputation scores. Note they can only acquire reputation by being good Actors in the eyes of already-good Actors. So in this respect this system favors the status quo and community consensus over facilitating revolution, which is probably desirable: we don’t want a reputation score for “verified truth” to be easily hijackable by “fake news”, for example.

Anybody creating many accounts (aka Actors) has only a very limited ability to increase the total reputation they control across all of their Actors.

This system appears to be generally applicable. We discussed the example of rating "good" contributions to a discussion group, but it appears this could also be applied to things such as "good governance", where Actors who consistently perform activities that others believe are good for governance are rated higher; their governance reputation score could then be used to give them more votes in governance decisions (such as adjusting the free numeric parameters, or other governance activities of the community).

Known issues:

This system does not distinguish between reputation for the desired value (like posting good content) and reputation for rating other Actors (e.g. the difference between driving a car well and being able to judge others' driving ability, as needed for driving instructors; I can imagine that there are some bad drivers who are good at judging others' driving abilities, and vice versa). This could probably be solved with two instances of the system that are suitably connected (details tbd).

There is no privacy in this system. (This may be a feature or a problem depending on where it is applied.) Everybody can see everybody else’s Reputation Score, and who rated them how.

If implemented on a typical blockchain, the financial incentives are backwards: it would cost to rate somebody (a modifying operation to the blockchain) but it would be free to obtain somebody’s score (a read-only operation, which is typically free). However, rating somebody does not create immediate benefit, while having access to ratings does. So a smart contract would have to be suitably wrapped to present the right incentive structure.

I would love your feedback.

This proposal probably should have a name. Because it can run autonomously, I’m going to call it Autorep. And this is version 0.5. I’ll create new versions when needed.

Wednesday, 03. August 2022

Phil Windleys Technometria

The Path to Redemption: Remembering Craig Burton

Summary: Last week I spoke at the memorial service for Craig Burton, a giant of the tech industry and my close friend. Here are, slightly edited, my remarks. When I got word that Craig Burton had died, the news wasn't unexpected. He'd been ill with brain cancer for a some time and we knew his time was limited. Craig is a great man, a good person, a valued advisor, and a fabulous frien

Summary: Last week I spoke at the memorial service for Craig Burton, a giant of the tech industry and my close friend. Here are, slightly edited, my remarks.

When I got word that Craig Burton had died, the news wasn't unexpected. He'd been ill with brain cancer for some time and we knew his time was limited. Craig is a great man, a good person, a valued advisor, and a fabulous friend. Craig's life is an amazing story of success, challenge, and overcoming.

I first met Craig when I was CIO for Utah and he was the storied co-founder of Novell and the Burton Group. Dave Politis calls Craig "one of Utah's tech industry Original Gangsters". I was a bit intimidated. Craig was starting a new venture with his longtime friend Art Navarez, and wanted to talk to me about it. That first meeting was where I came to appreciate his famous wit and sharp, insightful mind. Over time, our relationship grew and I came to rely on him whenever I had a sticky problem to unravel. One of Craig's talents was throwing out the conventional thinking and starting over to reframe a problem in ways that made solutions tractable. That's what he'd done at Novell when he moved up the stack to avoid the tangle of competing network standards and create a market in network services.

When Steve Fulling and I started Kynetx in 2007 we knew we needed Craig as an advisor. He mentored us—sometimes gently and sometimes with a swift kick. He advised us. He dove into the technology and developed applications, even though he wasn't a developer. He introduced us to one of our most important investors, and now good friend, Roy Avondet. He was our biggest cheerleader and we were grateful for his friendship and help. Craig wasn't just an advisor. He was fully engaged.

One of Craig's favorite words was "ubiquity" and he lived his life consistent with that philosophy. Let me share three stories about Craig from the Kynetx days that I hope show a little bit of his personality:

Steve, Craig, and I had flown to Seattle to meet with Microsoft. Flying with Craig is always an adventure, but that's another story. We met with some people on Microsoft's identity team including Kim Cameron, Craig's longtime friend and Microsoft's Chief Identity Architect. During the meeting someone, a product manager, said something stupid and you could just see Craig come up in his chair. Kim, sitting in the corner, was trying not to laugh because he knew what was coming. Craig, very deliberately and logically, took the PM's argument apart. He wasn't mean; he was patient. But his logic cut like a knife. He could be direct. Craig always took charge of a room.

Craig's trademark look

We hosted a developer conference at Kynetx called Impact. Naturally, Craig spoke. But Craig couldn't just give a standard presentation. He sat, in a big chair on the stage and "held forth". He even had his guitar with him and sang during the presentation. Craig loved music. The singing was all Craig. He couldn't just speak, he had to entertain and make people laugh and smile.

Craig and me at Kynetx Impact in 2011

At Kynetx, we hosted Free Lunch Friday every week. We'd feed lunch to our team, developers using our product, and anyone else who wanted to come visit the office. We usually brought in something like Jimmy Johns, Costco pizza, or J Dawgs. Not Craig. He and Judith took over the entire break room (for the entire building), brought in portable burners, and cooked a multi-course meal. It was delicious and completely over the top. I can see him with his floppy hat and big, oversized glasses, flamboyant and happy. Ubiquity!

Craig with Britt Blaser at IIW

I've been there with Craig in some of the highest points of his life and some of the lowest. I've seen him meet his challenges head on and rise above them. Being his friend was hard sometimes. He demanded much of his friends. But he returned help, joy, and, above all, love. He regretted that his choices hurt others besides himself. Craig loved large and completely.

The last decade of Craig's life was remarkable. Craig, in 2011, was a classic tragic hero: noble, virtuous, and basking in past success but with a seemingly fatal flaw. But Craig's story didn't end in 2011. Drummond Reed, a mutual friend and fellow traveler wrote this for Craig's service:

Ten years ago, when Craig was at one of the lowest points in his life, I had the chance to join a small group of his friends to help intervene and steer him back on an upward path. It was an extraordinary experience I will never forget, both because of what I learned about Craig's amazing life, and what it proved about the power of love to change someone's direction. In fact Craig went on from there not just to another phase of his storied career, but to reconnect and marry his high school sweetheart.

Craig and his crew: Doc Searls, me, Craig, Craig's son Alex, Drummond Reed, and Steve Fulling

Craig found real happiness in those last years of his life—and he deserved it.

Craig Burton was a mountain of a man, and a mountain of mind. And he moved the mountains of the internet for all of us. The digital future will be safer, richer, and more rewarding for all of us because of the gifts he gave us.

Starting with that intervention, Craig began a long, painful path to eventual happiness and redemption.

Craig overcame his internal demons. This was a battle royale. He had help from friends and family (especially his sisters), but in the end, he had to make the change, tamp down his darkest urges, and face his problems head on. His natural optimism and ability to see things realistically helped. When he finally turned his insightful mind on himself, he began to make progress.

Craig had to live and cope with chronic health challenges, many of which were the result of decisions he'd made earlier in his life. Despite the limitations they placed on him, he met them with his usual optimism and love of life.

Craig refound his faith. I'm not sure he ever really lost it, but he couldn't reconcile some of his choices with what he believed his faith required of him. In 2016, he decided to rejoin the Church of Jesus Christ of Latter-Day Saints. I was privileged to be able to baptize him. A great honor, that he was kind enough to give me.

Craig also refound love and his high school sweetheart, Paula. The timing couldn't have been more perfect. Earlier and Craig wouldn't have been ready. Later and it likely would have been too late. They were married in 2017 and later had the marriage sealed in the Seoul Korea Temple. Craig and Paula were living in Seoul at the time, engaged in another adventure. While Craig loved large, I believe he may have come to doubt that he was worthy of love himself. Paula gave him love and a reason to strive for a little more in the last years of his life.

Craig and Paula

As I think about the last decade of Craig's life and his hard work to set himself straight, I'm reminded of the parable of the Laborers in the Vineyard. In that parable, Jesus compares the Kingdom of Heaven to a man hiring laborers for his vineyard. He goes to the marketplace and hires some, promising them a penny. He goes back later, at the 6th and 9th hours, and hires more. Finally he hires more laborers in the 11th hour. When it comes time to pay them, he gives everyone the same wage—a penny. The point of the parable is that it doesn't matter so much when you start the journey, but where you end up.

I'm a believer in Jesus Christ and the power of his atonement and resurrection. I know Craig was too. He told me once that belief had given him the courage and hope to keep striving when all seemed lost. Craig knew the highest of the highs. He knew the lowest of the lows. The last few years of his life were among the happiest I ever saw him experience. He was a new man. In the end, Craig ended up in a good place.

I will miss my friend, but I'm eternally grateful for his life and example.

Other Tributes and Remembrances
Craig Burton Obituary
Remembering Craig Burton by Doc Searls
Doc Searls photo album of Craig
In Honor of Craig Burton from Jamie Lewis
Silicon Slopes Loses A Tech Industry OG: R.I.P., Craig Burton by David Politis

Photo Credits: Craig Burton, 1953-2022 from Doc Searls (CC BY 2.0)

Tags: identity iiw novell kynetx utah


@_Nat Zone

From Centralized ID to Decentralized ID, History Repeats Itself | Nikkei xTECH

Continuing from last week, the series "In-depth Consideration: Will Blockchain…

Continuing from last week, the second installment on the history of decentralized ID has been published in Nikkei xTECH as the third article in the series "In-depth Consideration: Will Blockchain Make Humanity Happier?". It is titled "From Centralized ID to Decentralized ID, History Repeats Itself". The previous article compared W3C DID with XRI; this time the topic is finally OpenID. Very little has been written about the philosophy behind OpenID, so if you are inclined to argue that OpenID is centralized, I would ask you to read this first and think it over a little before making that claim.

The table of contents is roughly as follows.

(Introduction)
The OpenID philosophy embodying "self-sovereignty" and "independence"
Account URI: the problem that people can type email addresses but not URLs
Overview of the 'acct' URI
Resolution of the 'acct' URI
The relationship between OpenID Connect and the 'acct' URI
SIOP and algorithmically generated metadata documents
Does DID contribute to decentralizing power? History repeats itself

Note that the first page currently contains the notation

YADIS(Yet Another Distributed Identity Systemと、もう1つの分散アイデンティティーシステム)

(Source) https://xtech.nikkei.com/atcl/nxt/column/18/02132/072900003/

However, the "と" ("and") in that notation is a typo. The Japanese phrase is itself the system's name, meaning "yet another distributed identity system", not a second system joined by "and", so a notation like the following is correct. A correction request is currently pending.

YADIS(Yet Another Distributed Identity System、「もう1つの分散アイデンティティーシステム」)

(Source) the author

Note: I confirmed the correction at 17:59 on the 3rd.

With that, please enjoy the article.

Nikkei BP: From Centralized ID to Decentralized ID, History Repeats Itself

Tuesday, 02. August 2022

Heres Tom with the Weather

Monday, 01. August 2022

Jon Udell

Subtracting devices

People who don’t listen to podcasts often ask people who do: “When do you find time to listen?” For me it’s always on long walks or hikes. (I do a lot of cycling too, and have thought about listening then, but wind makes that impractical and cars make it dangerous.) For many years my trusty … Continue reading Subtracting devices

People who don’t listen to podcasts often ask people who do: “When do you find time to listen?” For me it’s always on long walks or hikes. (I do a lot of cycling too, and have thought about listening then, but wind makes that impractical and cars make it dangerous.) For many years my trusty podcast player was one or another version of the Creative Labs MuVo which, as the ad says, is “ideal for dynamic environments.”

At some point I opted for the convenience of just using my phone. Why carry an extra, single-purpose device when the multi-purpose phone can do everything? That was OK until my Quixotic attachment to Windows Phone became untenable. Not crazy about either of the alternatives, I flipped a coin and wound up with an iPhone. Which, of course, lacks a 3.5mm audio jack. So I got an adapter, but now the setup was hardly “ideal for dynamic environments.” My headset’s connection to the phone was unreliable, and I’d often have to stop walking, reseat it, and restart the podcast.

If you are gadget-minded you are now thinking: “Wireless earbuds!” But no thanks. The last thing I need in my life is more devices to keep track of, charge, and sync with other devices.

I was about to order a new MuVo, and I might still; it’s one of my favorite gadgets ever. But on a recent hike, in a remote area with nobody else around, I suddenly realized I didn’t need the headset at all. I yanked it out, stuck the phone in my pocket, and could hear perfectly well. Bonus: Nothing jammed into my ears.

It’s a bit weird when I do encounter other hikers. Should I pause the audio or not when we cross paths? So far I mostly do, but I don’t think it’s a big deal one way or another.

Adding more devices to solve a device problem amounts to doing the same thing and expecting a different result. I want to remain alert to the possibility that subtracting devices may be the right answer.

There’s a humorous coda to this story. It wasn’t just the headset that was failing to seat securely in the Lightning port. Charging cables were also becoming problematic. A friend suggested a low-tech solution: use a toothpick to pull lint out of the socket. It worked! I suppose I could now go back to using my wired headset on hikes. But I don’t think I will.


Mike Jones: self-issued

JSON Web Proofs BoF at IETF 114 in Philadelphia

This week at IETF 114 in Philadelphia, we held a Birds-of-a-Feather (BoF) session on JSON Web Proofs (JWPs). JSON Web Proofs are a JSON-based representation of cryptographic inputs and outputs that enable use of Zero-Knowledge Proofs (ZKPs), selective disclosure for minimal disclosure, and non-correlatable presentation. JWPs use the three-party model of Issuer, Holder, and Verifier […]

This week at IETF 114 in Philadelphia, we held a Birds-of-a-Feather (BoF) session on JSON Web Proofs (JWPs). JSON Web Proofs are a JSON-based representation of cryptographic inputs and outputs that enable use of Zero-Knowledge Proofs (ZKPs), selective disclosure for minimal disclosure, and non-correlatable presentation. JWPs use the three-party model of Issuer, Holder, and Verifier utilized by Verifiable Credentials.

The BoF asked to reinstate the IETF JSON Object Signing and Encryption (JOSE) working group. We asked for this because the JOSE working group participants already have expertise creating simple, widely-adopted JSON-based cryptographic formats, such as JSON Web Signature (JWS), JSON Web Encryption (JWE), and JSON Web Key (JWK). The JWP format would be a peer to JWS and JWE, reusing elements that make sense, while enabling use of new cryptographic algorithms whose inputs and outputs are not representable in the existing JOSE formats.

Presentations given at the BoF were:

Chair Slides – Karen O’Donoghue and John Bradley
The need: Standards for selective disclosure and zero-knowledge proofs – Mike Jones
What Would JOSE Do? Why re-form the JOSE working group to meet the need? – Mike Jones
The selective disclosure industry landscape, including Verifiable Credentials and ISO Mobile Driver Licenses (mDL) – Kristina Yasuda
A Look Under the Covers: The JSON Web Proofs specifications – Jeremie Miller
Beyond JWS: BBS as a new algorithm with advanced capabilities utilizing JWP – Tobias Looker

You can view the BoF minutes at https://notes.ietf.org/notes-ietf-114-jwp. A useful discussion ensued after the presentations. Unfortunately, we didn’t have time to finish the BoF in the one-hour slot. The BoF questions unanswered in the time allotted would have been along the lines of “Is the work appropriate for the IETF?”, “Is there interest in the work?”, and “Do we want to adopt the proposed charter?”. Discussion of those topics is now happening on the jose@ietf.org mailing list. Join it at https://www.ietf.org/mailman/listinfo/jose to participate. Roman Danyliw, the Security Area Director who sponsored the BoF, had suggested that we hold a virtual interim BoF to complete the BoF process before IETF 115 in London. Hope to see you there!

The BoF Presenters:

The BoF Participants, including the chairs:

Friday, 29. July 2022

Heres Tom with the Weather

P-Hacking Example

One of the most interesting lessons from the pandemic is the harm that can be caused by p-hacking. A paper with errors related to p-hacking that hasn’t been peer-reviewed is promoted by one or more people with millions of followers on social media and then some of those followers suffer horrible outcomes because they had a false sense of security. Maybe the authors of the paper did not even real

One of the most interesting lessons from the pandemic is the harm that can be caused by p-hacking. A paper with errors related to p-hacking that hasn’t been peer-reviewed is promoted by one or more people with millions of followers on social media and then some of those followers suffer horrible outcomes because they had a false sense of security. Maybe the authors of the paper did not even realize the problem but for whatever reason, the social media rock stars felt the need to spread the misinformation. And another very interesting lesson is that the social media rock stars seem to almost never issue a correction after the paper is reviewed and rejected.

To illustrate p-hacking with a non-serious example, I am using real public data with my experience attending drop-in hockey.

I wanted to know if goalies tended to show up more or less frequently on any particular day of the week because it is more fun to play when at least one goalie shows up. I collected 85 independent samples.

For all days, there were 27 days with 1 goalie and 27 days with 2 goalies and 31 days with 0 goalies.

Our test procedure will define the test statistic X = the number of days that at least one goalie registered.

I am not smart so instead of committing to a hypothesis to test prior to looking at the data, I cheat and look at the data first and notice that the numbers for Tuesday look especially low. So, I focus on goalie registrations on Tuesdays. Using the data above for all days, the null hypothesis is that the probability that at least one goalie registered on a Tuesday is 0.635.

For perspective, taking 19 samples for Tuesday would give an expected value of 12 samples where at least 1 goalie registered.

Suppose we wanted to propose an alternative hypothesis that p < 0.635 for Tuesday. What is the rejection region of values that would refute the null hypothesis (p=0.635)?

Let’s aim for α = 0.05 as the level of significance. This means that (pretending that I had not egregiously cherry-picked data beforehand) we want there to be less than a 5% chance that the experimental result would occur inside the rejection region if the null hypothesis was true (Type I error).

For a binomial random variable X, the pmf b(x; n, p) is

def factorial(n)
  (1..n).inject(:*) || 1
end

def combination(n,k)
  factorial(n) / (factorial(k)*factorial(n-k))
end

def pmf(x,n,p)
  combination(n,x) * (p ** x) * ((1 - p) ** (n-x))
end

The cdf B(x; n, p) = P(X ≤ x) is

def cdf(x,n,p)
  (0..x).map {|i| pmf(i,n,p)}.sum
end

For n=19 samples, if x ≤ 9 was chosen as the rejection region, then α = P(X ≤ 9 when X ~ Bin(19, 0.635)) = 0.112

2.4.10 :001 > load 'stats.rb'
 => true
2.4.10 :002 > cdf(9,19,0.635)
 => 0.1121416295262306

This choice is not good enough because even if the null hypothesis is true, there is a large 11% chance (again, pretending I had not cherry-picked the data) that the test statistic falls in the rejection region.

So, if we narrow the rejection region to x ≤ 8, then α = P(X ≤ 8 when X ~ Bin(19, 0.635)) = 0.047

2.4.10 :003 > cdf(8,19,0.635)
 => 0.04705965393607316

This rejection region satisfies the requirement of a 0.05 significance level.

The n=19 samples for Tuesday are [0, 0, 0, 1, 0, 0, 0, 0, 1, 2, 0, 1, 1, 0, 1, 2, 2, 0, 0].
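Continuing with the same Ruby helpers, the observed test statistic is simply the count of Tuesdays on which at least one goalie registered (the variable names here are mine):

tuesdays = [0, 0, 0, 1, 0, 0, 0, 0, 1, 2, 0, 1, 1, 0, 1, 2, 2, 0, 0]
x = tuesdays.count { |goalies| goalies >= 1 }  # => 8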

Since x=8 falls within the rejection region, the null hypothesis is (supposedly) rejected for Tuesday samples. So I announce to my hockey friends on social media “Beware! Compared to all days of the week, it is less likely that at least one goalie will register on a Tuesday!”

Before addressing the p-hacking, let’s first address another issue. The experimental result was x = 8, which gave us a 0.047 probability of obtaining 8 or fewer days in a sample of 19, assuming that the null hypothesis (p=0.635) is true. This result makes the 0.05 cutoff by the skin of its teeth. So just saying that the null hypothesis was refuted with α = 0.05 does not reveal that it was barely refuted. It is therefore much more informative to report the p-value of 0.047, which also does not impose a particular α on readers who want to draw their own conclusions.

Now let’s discuss the p-hacking problem. I gave myself the impression that there was only a 5% chance that I would see a significant result even if the null hypothesis (p=0.635) were true. However, since there is data for 5 days (Monday, Tuesday, Wednesday, Thursday, Friday), I could have performed 5 different tests. If I chose that same p < 0.635 alternative hypothesis for each, then there would similarly be a 5% chance of a significant result for each test. The probability that all 5 tests would not be significant would be 0.95 * 0.95 * 0.95 * 0.95 * 0.95 = 0.77. Therefore, the probability that at least one test would be significant is 1 - 0.77 = 0.23 (the Family-wise error rate) which is much higher than 0.05. That’s like flipping a coin twice and getting two heads which is not surprising at all. We should expect such a result even if the null hypothesis is true. Therefore, there is not a significant result for Tuesday.
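The family-wise error rate above is easy to check in the same Ruby session (my snippet, using the numbers from the text):

alpha = 0.05                    # per-test significance level
tests = 5                       # one test per weekday I could have picked
p_none = (1 - alpha) ** tests   # probability no test is "significant" => ~0.77
fwer   = 1 - p_none             # probability at least one is => ~0.23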

I was inspired to write this blog post after watching Dr. Susan Oliver’s Antivaxxers fooled by P-Hacking and apples to oranges comparisons. The video references the paper The Extent and Consequences of P-Hacking in Science (2015).

Wednesday, 27. July 2022

MyDigitalFootprint

Can frameworks help us understand and communicate?

I have the deepest respect and high regard for Wardley Maps and the Cynefin framework.  They share much of the same background and evolution. Both are extremely helpful and modern frameworks for understanding, much like Porter’s five forces model was back in the 1980s.  I adopted the same terminology (novel, emergent, good and best) when writing about the development of governance

I have the deepest respect and high regard for Wardley Maps and the Cynefin framework.  They share much of the same background and evolution. Both are extremely helpful and modern frameworks for understanding, much like Porter’s five forces model was back in the 1980s. 





I adopted the same terminology (novel, emergent, good and best) when writing about the development of governance for 2050. In the article Revising the S-Curve in an age of emergence, I used the S-curve as it has helped us on several previous journeys. It supported our understanding of adoption and growth; it can now be critical in helping us understand the development and evolution of governance towards a sustainable future. An evolutionary S-curve is more applicable than ever as we enter a new phase of emergence. Our actions and behaviours emerge when we grasp that all parts of our ecosystem interact as a more comprehensive whole.

A governance S-curve can help us unpack new risks in this dependent ecosystem so that we can make better judgments that lead to better outcomes. What is evident is that we need far more than proof, lineage and provenance of data from a wide ecosystem if we are going to create better judgement environments; we need a new platform.

The image below takes the same terminology again but moves the Cynefin framework from the four quadrant domains to consider what happens when you have to optimise for more things - as in the Peak Paradox model.  


The yellow outer disc is about optimising for single outcomes and purposes.  In so many ways, this is simple as there is only one driving force, incentive or purpose, which means the relationship between cause and effect is obvious. 

The first inner purple ring recognises that some decision-making has a limited number of dependent variables.  System thinking is required to unpick, but it is possible to come up with an optimal outcome.

The pink inner ring is the first level where the relationship between cause and effect requires analysis or some form of investigation and/ or the application of expert knowledge.  This is difficult and requires assumptions, often leading to tension and conflict.   Optimising is not easy, if at all possible.

The inner black circle - where peak paradox exists. Complexity thrives as the relationship between cause and effect can only be perceived in hindsight. Models can post-justify outcomes but are unlikely to scale or be repeatable. There is a paradox because the same information can have two (or more) meanings and outcomes.

The joy of any good framework is that it can always give new understanding and insight. What a Wardley Map then adds is movement, changing of position from where you are to where you will be. 

Why does this matter?

Because what we choose to optimise for is different from what a team of humans or a company will optimise for. Note I use “optimise”, but it could equally be “maximise”. These are the yin/yang of effectiveness and efficiency, a continual movement. The purpose ideals are like efficacy - are you doing the right thing?

What we choose to optimise for is different from what a team of humans or a company will optimise for.

We know that it is far easier to make a decision when there is clarity of purpose. However, when we have to optimise for different interests that are both dependent and independent, decision-making enters zones that are hard and difficult. It requires judgement. Complexity is where leadership can shine, as leaders can move from simple and obvious decision-making in the outer circle to utilising the collective intelligence of the wider team as the decisions become more complex. Asking “what is going on here” and understanding it is beyond a single person's reach. High-functioning and diverse teams are critical for decisions where paradoxes may exist.

When it gets towards the difficult areas, leadership will first determine if they are being asked to optimise for a policy or to align to an incentive; this shines the first spotlight on a zone where they need to be.   





reb00ted

Is this the end of social networking?

Scott Rosenberg, in a piece with the title “Sunset of the social network”, writes at Axios: Mark last week as the end of the social networking era, which began with the rise of Friendster in 2003, shaped two decades of internet growth, and now closes with Facebook’s rollout of a sweeping TikTok-like redesign. A sweeping statement. But I think he’s right: Facebook is fundamentally an adve

Scott Rosenberg, in a piece with the title “Sunset of the social network”, writes at Axios:

Mark last week as the end of the social networking era, which began with the rise of Friendster in 2003, shaped two decades of internet growth, and now closes with Facebook’s rollout of a sweeping TikTok-like redesign.

A sweeping statement. But I think he’s right:

Facebook is fundamentally an advertising machine. Like other Meta products are. They aren’t really about “technologies that bring the world closer together”, as the Meta homepage has it. At least not primarily.

This advertising machine has been amazingly successful, leading to a recent quarterly revenue of over $50 per user in North America (source). And Meta certainly has driven this hard, otherwise it would not have been in the news for overstepping the consent of its users year after year, scandal after scandal.

But now a better advertising machine is in town: TikTok. This new advertising machine is powered not by friends and family, but by an addiction algorithm. This addiction algorithm figures out your points of least resistance, and pours one advertisement after another down your throat. And as soon as you have swallowed one more, you scroll a bit more, and by doing so, you are asking for more advertisements, because of the addiction. This addiction-based advertising machine is probably close to the theoretical maximum of how many advertisements one can pour down somebody’s throat. An amazing work of art; as an engineer I have to admire it. (Of course that admiration quickly changes into some other emotion of the disgusting sort, if you have any kind of morals.)

So Facebook adjusts, and transitions into another addiction-based advertising machine. Which does not really surprise anybody I would think.

And because it was never about “bring[ing] the world closer together”, they drop that mission as if they never cared. (That’s because they didn’t. At least MarkZ didn’t, and he is the sole, unaccountable overlord of the Meta empire. A two-class stock structure gives you that.)

With the giant putting its attention elsewhere, where does this leave social networking? Because the needs and the wants to “bring the world closer together”, and to catch up with friends and family, are still there.

I think it leaves social networking, or what will replace it, in a much better place. What if, this time around, we build products whose primary focus is actually the stated mission? Share with friends and family and the world, to bring it together (not divide it)! Instead of something unrelated, like making lots of ad revenue! What a concept!

Imagine what social networking could be!! The best days of social networking are still ahead. Now that the pretenders are leaving, we can actually start solving the problem. Social networking is dead. Long live what will emerge from the ashes. It might not be called social networking, but it will be, just better.

Tuesday, 26. July 2022

reb00ted

A list of (supposed) web3 benefits

I’ve been collecting a list of the supposed benefits of web3, to understand how the term is used these days. Might as well post what I found: better, fairer internet wrest back power from a small number of centralized institutions participate on a level playing field control what data a platform receives all data (incl. identities) is self-sovereign and secure high-quality informatio

I’ve been collecting a list of the supposed benefits of web3, to understand how the term is used these days. Might as well post what I found:

better, fairer internet
wrest back power from a small number of centralized institutions
participate on a level playing field
control what data a platform receives
all data (incl. identities) is self-sovereign and secure
high-quality information flows
creators benefit
reduced inefficiencies
fewer intermediaries
transparency
personalization
better marketing
capture value from virtual items
no censorship (content, finance etc)
democratized content creation
crypto-verified information correctness
privacy
decentralization
composability
collaboration
human-centered
permissionless

Some of this is clearly aspirational, perhaps on the other side of likely. Also not exactly what I would say if asked. But nevertheless an interesting list.


The shortest definition of Web3

web1: read web2: read + write web3: read + write + own Found here, but probably lots of other places, too.
web1: read
web2: read + write
web3: read + write + own

Found here, but probably lots of other places, too.

Thursday, 21. July 2022

MyDigitalFootprint

Why do we lack leadership?

Because when there is a leader, we look to them to lead, and they want us to follow their ideas. If you challenge the leader, you challenge leadership, and suddenly, you are not in or on the team. If you don’t support the leader, you are seen as a problem and are not a welcome member of the inner circle. If you bring your ideas, you are seen to be competitive to the system and not aligned.


Because when there is a leader, we look to them to lead, and they want us to follow their ideas. If you challenge the leader, you challenge leadership, and suddenly, you are not in or on the team. If you don’t support the leader, you are seen as a problem and are not a welcome member of the inner circle. If you bring your ideas, you are seen to be competitive to the system and not aligned.  If you don’t bring innovation, you are seen to lack leadership potential. 

The leader sets the rules unless and until the leader loses authority, or until it is evident that their ideas don’t add up, at which point a challenge to leadership and a demonstration of leadership skills becomes valid.

We know this leadership model is broken and based on old command and control thinking inherited from models of war. We have lots of new leadership models, but leaders who depend on others for ideas, skills and talent, are they really the inspiration we are seeking?  

Leadership is one of the biggest written-about topics, but it focuses on the skills/ talents you need to be a leader and the characteristics you need as a leader. 

So I am stuck thinking …..

in a world where war was not a foundation, what would have been a natural or dominant model for leadership?

do we lack leaders because we have leaders - because of our history?

do we love the idea of leaders more than we love leaders?

do we have leaders because of a broken model for accountability and responsibility?

do we like leadership because it is not us leading?

do we find it easier to be critical than be criticised?

is leadership sustainable? 

if care for our natural world was our only job, what would leadership look like?


Tuesday, 19. July 2022

MyDigitalFootprint

A problem of definitions in economics that creates conflicts

A problem of definitions As we are all reminded of inflation and its various manifestations, perhaps we also need to rethink some of them.  The reason is that in economics, inflation is all about a linear scale. Sustainable development does not really map very well to this scale. In eco-systems, it is about balance.  Because of the way we define growth - we aim for inflation and need t

A problem of definitions

As we are all reminded of inflation and its various manifestations, perhaps we also need to rethink some of them.  The reason is that in economics, inflation is all about a linear scale. Sustainable development does not really map very well to this scale. In eco-systems, it is about balance.  Because of the way we define growth - we aim for inflation and need to control it.  However, this scale thinking then frames how we would perceive sustainability as the framing sets these boundaries.   What happens if we change it round?


What we have today are definitions that create conflicts, and we therefore have to ask whether they are useful for a sustainable future, or whether we are trying to fit a square peg into a round hole.

The table has three columns: the economics term, its definition, and the perception from the sustainability community with the long-term impact.

Hyperinflation
Definition: Hyperinflation is a period of fast-rising inflation. An increase in prices drives for more efficiency to control pricing, use of scale to create damping effects, and use of global supply to counter the effects.
Sustainability perception / long-term impact: Rapid and irreparable damage.

Inflation
Definition: Inflation is the rate at which the overall level of prices for various goods and services in an economy rises over a period of time. It drives growth, which is an increase in the amount of goods and services produced per head of the population over a period of time.
Sustainability perception / long-term impact: Significant damage and changes to eco-systems and habitat.

Stagflation
Definition: Stagflation is characterised by slow economic growth and relatively high unemployment—or economic stagnation—which is at the same time accompanied by rising prices (i.e., inflation). Stagflation can be alternatively defined as a period of inflation combined with a decline in the gross domestic product (GDP).
Sustainability perception / long-term impact: Unstable balance but repairable damage possible.

Recession/ deflation
Definition: Deflation is when prices drop significantly due to too large a money supply or a slump in consumer spending; lower costs mean companies earn less and may institute layoffs.
Sustainability perception / long-term impact: Stable and sustainable.

Contraction
Definition: Contraction is a phase of the business cycle in which the economy is in decline. A contraction generally occurs after the business cycle peaks before it becomes a trough.
Sustainability perception / long-term impact: Expansion of the ecosystem and improving habitats.



Perhaps the following is what we need and want if we are to remove the tensions from the ideals of growth and have a sustainable future.


This table maps sustainable-development states to their economics equivalents.

Sustainable development: Unstable balance and damage creates change. Economics: Rapid growth.

Sustainable development: Out of balance, but repairable damage possible. Economics: Unco-ordinated growth.

Sustainable development: Stable and sustainable. Requires a lot of work and investment into projects to maintain stability and sustainability; projects are long-term and vast; requires global accord and loss of intra-Varlas protections; no sovereign states are needed, as everyone must be held accountable. Economics: Growth, but without intervention it would not be sustainable.

Sustainable development: Expansion of ecosystem and improving habitats. Goldilocks zone - improving quality of life and lifestyles but not at the expense of reducing the habitable area of the earth. Economics: Slow growth in terms of purity of economics and GDP measurements.

Sustainable development: Stable and sustainable. Requires a lot of work and investment into projects to maintain stability and sustainability; projects are long-term and vast; requires global accord and loss of intra-Varlas protections; no sovereign states are needed, as everyone must be held accountable. Economics: Shrinking, and without intervention it would not be sustainable.

Sustainable development: Out of balance, but repairable damage possible. Economics: Unco-ordinated decline.

Sustainable development: Unstable balance and damage creates change. Economics: Rapid decline.




Wednesday, 13. July 2022

Ludo Sketches

ForgeRock Directory Services 7.2 has been released

ForgeRock Directory Services 7.2 was and will be the last release of ForgeRock products that I’ve managed. It was finished when I left the company and was released to the public a few days after. Before I dive into the… Continue reading →

ForgeRock Directory Services 7.2 was and will be the last release of ForgeRock products that I’ve managed. It was finished when I left the company and was released to the public a few days after. Before I dive into the changes available in this release, I’d like to thank the amazing team that produced this version, from the whole Engineering team led by Matt Swift, to the Quality engineering led by Carole Forel, the best and only technical writer Mark Craig, and also our sustaining engineer Chris Ridd who contributed some important fixes to existing customers. You all rock and I’ve really appreciated working with you all these years.

So what’s new and exciting in DS 7.2?

First, this version introduces a new type of index: the Big Index. This type of index is meant to optimize search queries that are expected to return a large number of results from an even much larger number of entries. For example, suppose you have an application that searches for all users in the USA that live in a specific state. In a population of hundreds of millions of users, you may have millions that live in one particular state (let’s say Ohio). With previous versions, searching for all users in Ohio would be unindexed, and the search, if allowed, would scan the whole directory data to identify the ones in Ohio. With 7.2, the state attribute can be indexed as a Big Index, and the same search query would be considered indexed, only going through the reduced set of users that have Ohio as the value of the state attribute.

Big Indexes can have a lesser impact on write performance than regular indexes, but they tend to have a higher on-disk footprint. As usual, choosing to use a Big Index type is a matter of trade-off between read and write performance, but also disk space occupation, which may itself have some impact on performance. It is recommended to test and run benchmarks in development or pre-production environments before using them in production.

The second significant new feature in 7.2 is the support of the HAProxy Protocol for LDAP and LDAPS. When ForgeRock Directory Services is deployed behind a software load-balancer such as HAProxy, NGINX or Kubernetes Ingress, it’s not possible for DS to know the IP address of the client application (the only IP address known is that of the load-balancer); therefore, it is not possible to enforce specific access controls or limits based on the application. By supporting the HAProxy Protocol, DS can decode a specific header sent by the load-balancer and retrieve some information about the client application, such as its IP address but also some TLS-related information if the connection between the client and the load-balancer is secured by TLS, and DS can use this information in access controls, logging, limits… You can find more details about DS support of the Proxy Protocol in the DS documentation.

In DS 7.2, we have added a new option for securing and hashing passwords: Argon2. When enabled (which is the default), this allows importing users with Argon2-hashed passwords and letting them authenticate immediately. Argon2 may also be selected as the default scheme for hashing new passwords, by associating it with a password policy (such as the default password policy). The Argon2 password scheme has several parameters that control the cost of the hash: version, number of iterations, amount of memory to use, and parallelism (aka number of threads used). While Argon2 is probably today the best algorithm to secure passwords, it can have a very big impact on the server’s performance, depending on the Argon2 parameters selected. Remember that DS encrypts the entries on disk by default, and therefore the risk of exposing hashed passwords at rest is extremely low (if not null).

Also new is the ability to search for attributes with a DistinguishedName syntax using pattern matching. DS 7.2 introduces a new matching rule named distinguishedNamePatternMatch (defined with the OID 1.3.6.1.4.1.36733.2.1.4.13). It can be used to search for users with a specific manager for example with the following filter “(manager:1.3.6.1.4.1.36733.2.1.4.13:=uid=trigden,**)” or a more human readable form “(manager:distinguishedNamePatternMatch:=uid=trigden,**)”, or to search for users whose manager is part of the Admins organisational unit with the following filter “(manager:1.3.6.1.4.1.36733.2.1.4.13:=*,ou=Admins,dc=example,dc=com)”.

ForgeRock Directory Services 7.2 includes several minor improvements:

Monitoring has been improved to include metrics about index use in searches and about the processed entry size (the latter is also written in the access logs). The output of the index troubleshooting attribute “DebugSearchIndex” has been revised to provide better details for the query plan. Alert notifications are raised when backups are finished. The REST2LDAP service includes several enhancements that make some queries easier.

As with every release, there have been several performance optimizations and improvements, and many minor issues have been corrected.

You can find the full details of the changes in the Release Notes.

I hope you will enjoy this latest release of ForgeRock Directory Services. If not, don’t reach out to me, I’m no longer in charge.


Phil Windleys Technometria

The Most Inventive Thing I've Done

Summary: I was recently asked to respond in writing to the prompt "What is the most inventive or innovative thing you've done?" I decided to write about picos. In 2007, I co-founded a company called Kynetx and realized that the infrastructure necessary for building our product did not exist. To address that gap, I invented picos, an internet-first, persistent, actor-model programmin

Summary: I was recently asked to respond in writing to the prompt "What is the most inventive or innovative thing you've done?" I decided to write about picos.

In 2007, I co-founded a company called Kynetx and realized that the infrastructure necessary for building our product did not exist. To address that gap, I invented picos, an internet-first, persistent, actor-model programming system. Picos are the most inventive thing I've done. Being internet-first, every pico is serverless and cloud-native, presenting an API that can be fully customized by developers. Because they're persistent, picos support databaseless programming with intuitive data isolation. As an actor-model programming system, different picos can operate concurrently without the need for locks, making them a natural choice for easily building decentralized systems.

Picos can be arranged in networks supporting peer-to-peer communication and computation. A cooperating network of picos reacts to messages, changes state, and sends messages. Picos have an internal event bus for distributing those messages to rules installed in the pico. Rules in the pico are selected to run based on declarative event expressions. The pico matches events on its bus with event scenarios declared in each rule's event expression. The pico engine schedules any rule whose event expression matches the event for execution. Executing rules may raise additional events which are processed in the same way.
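As an illustration of that event-processing loop, here is a toy sketch in Ruby (not KRL and not the actual pico engine; all class, method, and event names are mine): a rule is installed with a simple domain/type matcher standing in for a declarative event expression, every matching rule runs against each event on the internal bus, and a rule's action may return follow-on events that are processed the same way.

class TinyPico
  Rule = Struct.new(:name, :matcher, :action)

  def initialize
    @rules = []
    @state = {}   # the pico's persistent state, closed over by its rules
  end

  # The matcher is a simple domain/type test standing in for a KRL event expression.
  def install_rule(name, domain:, type:, &action)
    @rules << Rule.new(name, ->(e) { e[:domain] == domain && e[:type] == type }, action)
  end

  # Events raised on the internal bus are matched against each rule's matcher;
  # every matching rule runs and may return follow-on events (as an array).
  def raise_event(event)
    queue = [event]
    until queue.empty?
      current = queue.shift
      @rules.select { |r| r.matcher.call(current) }.each do |r|
        queue.concat(Array(r.action.call(current, @state)))
      end
    end
  end
end

vehicle = TinyPico.new   # a toy "digital twin" for one vehicle
vehicle.install_rule("record_trip", domain: "car", type: "trip_ended") do |event, state|
  (state[:trips] ||= []) << event[:attrs]
  [{ domain: "fleet", type: "trip_recorded", attrs: event[:attrs] }]  # raise a follow-on event
end
vehicle.raise_event({ domain: "car", type: "trip_ended", attrs: { miles: 12 } })

A real pico additionally persists its state, exposes an API over channels, and runs rules written in KRL; the sketch only illustrates the event-bus and rule-selection idea described above.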

As Kynetx reacted to market forces and trends, like the rise of mobile, the product line changed, and picos evolved and matured to match those changing needs, becoming a system that was capable of supporting complex Internet-of-Things (IoT) applications. For example, we ran a successful Kickstarter campaign in 2013 to build a connected car product called Fuse. Fuse used a cellular sensor connected to the vehicle's on-board diagnostics port (OBD2) to raise events from the car's internal bus to a pico that served as the vehicle's digital twin. Picos allowed Fuse to easily provide an autonomous processing agent for each vehicle and to organize those into fleets. Because picos support peer-to-peer architectures, putting a vehicle in more than one fleet or having a fleet with multiple owners was easy.

Fuse presented a conventional IoT user experience using a mobile app connected to a cloud service built using picos. But thanks to the inherently distributed nature of picos, Fuse offered owner choice and service substitutability. Owners could choose to move the picos representing their fleet to an alternate service provider, or even self-host if they desired without loss of functionality. Operationally, picos proved more than capable of providing responsive, scalable, and resilient service for Fuse customers without significant effort on my part. Fuse ultimately shut down because the operator of the network supplying the OBD2 devices went out of business. But while Fuse ran, picos provided Fuse customers with an efficient, capable, and resilient infrastructure for a valuable IoT service with unique characteristics.

The characteristics of picos make them a good choice for building distributed and decentralized applications that are responsive, resilient to failure, and respond well to uneven workloads. Asynchronous messaging and concurrent operation make picos a great fit for modern distributed applications. For example, picos can synchronously query other picos to get data snapshots, but this is not usually the most efficient interaction pattern. Instead, because picos support lock-free asynchronous concurrency, a system of picos can efficiently respond to events to accomplish a task using reactive programming patterns like scatter-gather.

The development of picos has continued, with the underlying pico engine having gone through three major versions. The current version is based on NodeJS and is open-source. The latest version was designed to operate on small platforms like a Raspberry PI as well as cloud platforms like Amazon's EC2. Over the years hundreds of developers have used picos for their programming projects. Recent applications include a proof-of-concept system supporting intention-based ecommerce by Customer Commons.

The architecture of picos was a good fit for Customer Commons' objective to build a system promoting user autonomy and choice because picos provide better control over apps and data. This is a natural result of the pico model where each pico represents a closure over services and data. Picos cleanly separate the data for different entities. Picos, representing a specific entity, and rulesets representing a specific business capability within the pico, provide fine grained control over data and its processing. For example, if you sell a car represented in Fuse, you can transfer the vehicle pico to the new owner, after deleting the Trips application, and its associated data, while leaving untouched the maintenance records, which are isolated inside the Maintenance application in the pico.

I didn't start out in 2007 to write a programming language that naturally supports decentralized programming using the actor-model while being cloud-native, serverless, and databaseless. Indeed, if I had, I likely wouldn't have succeeded. Instead picos evolved from a simple rule language for modifying web pages to a powerful, general-purpose programming system for building any decentralized application. Picos are easily the most important technology I've invented.

Tags: picos kynetx fuse

Thursday, 07. July 2022

Pulasthi Mahawithana

10 Ways to Customize Your App’s Login Experience with WSO2 — Part 1

10 Ways to Customize Your App’s Login Experience with WSO2 — Part 1 In this series I’ll go through 10 different ways you can customize your application authentication experience with WSO2 Identity Server’s adaptive authentication feature. To give some background, WSO2 Identity Server(IS) is an open-source Identity and Access Management(IAM) product. One of its main use is to be used as an i
10 Ways to Customize Your App’s Login Experience with WSO2 — Part 1

In this series I’ll go through 10 different ways you can customize your application authentication experience with WSO2 Identity Server’s adaptive authentication feature.

To give some background, WSO2 Identity Server (IS) is an open-source Identity and Access Management (IAM) product. One of its main uses is to act as an identity provider for your applications. It can support multi-factor authentication, social login, and single sign-on based on several widely adopted protocols like OAuth/OIDC, SAML, WS-Federation, etc.

Adaptive authentication is a feature where you can move away from static authentication methods to support dynamic authentication flows. For example, without adaptive authentication, you can configure an application to authenticate with username and password as the first step and with either SMS OTP or TOTP as the second step, where all users will need to use that authentication method no matter who they are or what they are going to do with the application. With adaptive authentication, you can make this dynamic to offer a better experience and/or better security. In the above example, we may use adaptive authentication to make the second factor required only when the user is trying to log in to the application from a new device which they haven’t used before. That way the user will have a better user experience, while keeping the required security.

Traditional Vs Adaptive Authentication

With adaptive authentication, the login experience can be customized to almost anything that will give the best user experience to the user. Following are 10 high-level use cases you can achieve with WSO2 IS’s adaptive authentication.

1. Conditionally Stepping up the Authentication — Instead of statically having a pre-defined set of authentication methods, we can step the authentication up or down based on several factors. A few such factors include the roles/attributes of the user, the device, the user’s activity, and the user store (in case of multiple user stores).
2. Conditional Authorization — Similar to stepping the authentication up or down, we can authorize or deny the login to the application based on similar factors.
3. Dynamic Account Linking — A physical user may have multiple identities provided by multiple external providers (e.g. Google, Facebook, Twitter). With adaptive authentication, you can verify and link those at authentication time.
4. User attribute enrichment — During a login flow, the user attributes may be provided from multiple sources, in different formats. However, the application may require those attributes in a different way, due to which they can’t be used straight away. Adaptive authentication can be used to enrich such attributes as needed.
5. Improve login experience — Depending on different factors (as mentioned in the first point), the login experience can be customized to look different, or to avoid any invalid authentication methods being offered to the user.
6. Sending Notifications — Can be used to trigger different events and send email notifications during the authentication flow in case of unusual or unexpected behaviour.
7. Enhance Security — Enforce security policies and the level of assurance required by the application or by the organization.
8. Limit/Manage Concurrent Sessions — Limit the number of sessions a user may have for the application concurrently, based on security requirements or business requirements (like subscription tiers).
9. Auditing/Analytics — Publish useful stats to the analytics servers or gather data for auditing purposes.
10. Bring your own functionality — In a business there are so many variables based on the domain, country/region, security standards, competitors etc. All these can’t be generalized, and hence there will be certain things which you will specifically require. Adaptive authentication provides the flexibility to define your own functionality, which you can use to make your application authentication experience user-friendly, secure and unique.

In the next posts, I’ll go through each of the above with example scenarios and how to achieve them with WSO2 IS.


MyDigitalFootprint

Mind the Gap - between short and long term strategy

Mind the Gap This article addresses a question that ESG commentators struggle with: “Is ESG a model, a science, a framework, or a reporting tool?    Co-authored @yaelrozencwajg  and @tonyfish An analogy. Our universe is governed by two fundamental models, small and big. The gap between Quantum Physics (small) and The Theory of Relativity (big) is similar to the issues betwee

Mind the Gap

This article addresses a question that ESG commentators struggle with: “Is ESG a model, a science, a framework, or a reporting tool?” Co-authored by @yaelrozencwajg and @tonyfish.




An analogy: our universe is governed by two fundamental models, small and big. The gap between Quantum Physics (small) and The Theory of Relativity (big) is similar to the gap between how we frame and deliver short- and long-term business planning. We can model and master the small (short) and the big (long), but there is a chasm between them, which means we fail to understand why the modelling and outcomes of one theory don't enlighten us about the other. The mismatches and gaps between our models create uncertainty and ambiguity, leading to general confusion and questionable incentives.

In physics, quantum mechanics is about understanding the small nuclear forces. However, our understanding of the interactions and balances between the fundamental elements that express those forces does not let us predict the movement of planets in the solar system. Vice versa, our model of gravity allows us to understand and predict motion in space and time, enabling us to model and know the exact position of Voyager 1 since 1977, yet it does not help in any way to understand fundamental particle interactions. There remains a gap between the two models. The proposed bridge is marketed as "The Theory of Everything": a hypothetical, singular, all-encompassing, coherent theoretical framework of physics that thoroughly explains and links together all physical aspects of the universe. It closes the gaps because we want everything to be explainable.

In business, we have worked out that, based on experience, probability and confidence, the past makes a reasonable predictive model for the short term (say the next three years), especially if the assumptions rest on a stable system (maintaining one-sigma variance). If a change occurs, we see it as a delta between the plan and reality, because the future does not play out as the short-term model predicted.

We have improved our capabilities in predicting the future by developing frameworks, scenario planning and game theory. We can reduce risks and model scenarios by understanding the current context. The higher the level of detail and understanding we have about the present, the better we are able to model the next short period of time. However, whilst we have learnt that our short-term models can be representative and provide a sound basis, there is always a delta to understand and manage. No matter how big and complex our model is, it doesn't fare well with a longer time horizon; short-term models are not helpful for long-term strategic planning.

Long-term planning is not based on a model but instead on a narrative about how the active players' agency, influence and power will change. We are better able to think about global power shifts over the next 50 to 100 years than we are to perceive what anything will look like in 10 years. We are bounded by Gates' Law: “Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

For every view of the long-term future, there is a supporting opinion.  There is an alternative view of the future for every opinion, and neither follows the short-term model trajectory.

There is a gap between the two models (small and big, short and long). The gap is a fog-covered chasm that we have so far failed to work out how to cross using models, theories or concepts from either the short- or the long-term position. This fog-filled zone demands critical thinking, judgement and leadership. The most critical aspects of modern humanity's ability to thrive sit in this zone: climate, eco-systems, geopolitics, global supply chains, digital identity, privacy, poverty, energy and water.


ESG has become the latest victim stuck in the foggy chasm.

ESG will be lost if we cannot agree now to position it as both a short-term model and a long-term value framework. ESG has to have a foot in each camp and requires a simple linear narrative connecting the two, one that avoids getting lost in the foggy chasm that sits between them and sucks the life out of progress.

ESG as a short-term model must have data with a level of accuracy for reporting transparently.  However, no matter how good this data is or the reporting frameworks, it will not predict or build a sustainable future. 

ESG demands a long-term framework for a sustainable future, but we need globally agreed policies, perhaps starting from many of the UN SDG ideals. How can we realistically create standards, policies and regulations when, because of our geographical biases, we cannot agree on what we want the future to look like? We know that a long-term vision does not easily translate into practical short-term actions and will not deliver immediate impact, but without a purpose, a north star and governance, we are likely to get more of the same.

If the entire ESG eco-system had focussed on only one (short or long), it would have alienated the other, but right now I fear that ESG has ended up unable to talk about or deliver either. The media has firmly put ESG in the foggy gap because that is what works best for its advertising-driven model. As a community, we appear unable to use our language or models to show how to criss-cross the chasm. Indeed, our best ideas and technologies are being used to create division and separation. For example, climate technologies such as carbon capture and storage had long-term thinking in their justification, yet they have become a short-term profit centre and tax escape for the oil and gas extraction industries. "The Great Carbon Capture Scam" by Greenpeace does a deep dive on the topic.

As humans, we desperately need ESG to deliver a long-term sustainable future, but this is easy to ignore because anyone and everyone can have an opinion. If ESG becomes only a short-term deliverable and reporting tool, it is likely to fail, as the data quality is poor and there is a lack of transparency. Whilst that level of narrative interruption is what marketing demands, we will likely destroy our habitat before we acknowledge it and move on to the next global threat.

Repeating the opening question: “Is ESG a model, a science, a framework, or a reporting tool?” On reflection, it is a fair question, as ESG has to be all of these. However, we appear to lack the ability to provide clarity on each element, or a holistic vision for unity. In ESG science alone, there is science that can be used to defend any climate point of view you want. Therefore, maybe a better question is, “Does ESG have a Social Identity Crisis?” If so, what can we do to solve it?

Since: 

there is no transparency about why a party supports a specific outcome, deliverable, standard or position;

the intrinsic value of ESG is unique by context and even to the level of a distinct part of an organisation;

we cannot agree if ESG investors are legit;

we cannot agree on standards or timeframes;

practitioners do not declare how or by whom they are paid or incentivised;

And bottom line, we have not agreed on what we are optimising for!

Whilst we know that we cannot please everyone all of the time, how would you approach this thorny debate as a thought leader? 




Tuesday, 05. July 2022

Phil Windleys Technometria

Decentralized Systems Don't Care

Summary: I like to remind my students that decentralized systems don't care what they (or anyone else thinks). The paradox is that they care very much what everyone thinks. We call that coherence and it's what makes decentralized systems maddeningly frustrating to understand, architect, and maintain. I love getting Azeem Azhar's Exponential View each week. There's always a few t

Summary: I like to remind my students that decentralized systems don't care what they (or anyone else) think. The paradox is that they care very much what everyone thinks. We call that coherence and it's what makes decentralized systems maddeningly frustrating to understand, architect, and maintain.

I love getting Azeem Azhar's Exponential View each week. There's always a few things that catch my eye. Recently, he linked to a working paper from Alberto F. Alesina et al. called Persistence Through Revolutions (PDF). The paper looks at the fate of the children and grandchildren of the landed elite who were systematically persecuted during the cultural revolution (1966 to 1976) in an effort to eradicate wealth and educational inequality. The paper found that the grandchildren of these elite have recovered around two-thirds of the pre-cultural revolution status that their grandparents had. From the paper:

[T]hree decades after the introduction of economic reforms in the 1980s, the descendants of the former elite earn a 16–17% higher annual income than those of the former non-elite, such as poor peasants. Individuals whose grandparents belonged to the pre-revolution elite systematically bounced back, despite the cards being stacked against them and their parents. They could not inherit land and other assets from their grandparents, their parents could not attend secondary school or university due to the Cultural Revolution, their parents were unwilling to express previously stigmatized pro-market attitudes in surveys, and they reside in counties that have become more equal and more hostile toward inequality today. One channel we emphasize is the transmission of values across generations. The grandchildren of former landlords are more likely to express pro-market and individualistic values, such as approving of competition as an economic driving force, and willing to exert more effort at work and investing in higher education. In fact, the vertical transmission of values and attitudes — "informal human capital" — is extremely resilient: even stigmatizing public expression of values may not be sufficient, since the transmission in the private environment could occur regardless. From Persistence Through Revolutions
Referenced 2022-06-27T11:13:05-0600

There are certainly plenty of interesting societal implications to these findings, but I love what they tell us about the interplay between institutions, even very powerful ones, and more decentralized systems like networks and tribes1. The families function as tribes, but there's likely a larger social network in play as well, made from connections, relatives, and friends. This decentralized social structure of tribes and networks proved resilient even in the face of some of the most coercive and overbearing actions that a seemingly all-powerful state could take.

In a more IT-related story, I also recently read this article, Despite ban, Bitcoin mining continues in China. The article stated:

Last September, China seemed to finally be serious about banning cryptocurrencies, leading miners to flee the country for Kazakhstan. Just eight months later, though, things might be changing again.

Research from the University of Cambridge's Judge Business School shows that China is second only to the U.S. in Bitcoin mining. In December 2021, the most recent figures available, China was responsible for 21% of the Bitcoin mined globally (compared to just under 38% in the U.S.). Kazakhstan came in third.

From Despite ban, Bitcoin mining continues in China
Referenced 2022-06-27T11:32:29-0600

When China instituted the crackdown, some of my Twitter friends, who are less than enthusiastic about crypto, reacted with glee, believing this would really hurt Bitcoin. My reaction was "Bitcoin doesn't care what you think. Bitcoin doesn't care if you hate it."

What matters is not what actions institutions take against Bitcoin2 (or any other decentralized system), but whether or not Bitcoin can maintain coherence in the face of these actions. Social systems that are enduring, scalable, and generative require coherence among participants. Coherence allows us to manage complexity. Coherence is necessary for any group of people to cooperate. The coherence necessary to create the internet came in part from standards, but more from the actions of people who created organizations, established those standards, ran services, and set up exchange points.

Bitcoin's coherence stems from several things including belief in the need for a currency not under institutional control, monetary rewards from mining, investment, and use cases. The resilience of Chinese miners, for example, likely rests mostly on the monetary reward. The sheer number of people involved in Bitcoin gives it staying power. They aren't organized by an institution; they're organized around the ledger and how it operates. Bitcoin core developers, mining consortiums, and BTC holders are powerful forces that balance the governance of the network. The soft and hard forks that have happened over the years represent an inefficient, but effective, governance reflecting the core beliefs of these powerful groups.

So, what should we make of the recent crypto sell-off? I think price is a reasonable proxy for the coherence of participants in the social system that Bitcoin represents. As I said, people buy, hold, use, and sell Bitcoin for many different reasons. Price lets us condense all those reasons down to just one number. I've long maintained that stable decentralized systems need a way to transfer value from the edge to the center. For the internet, that system was telcos. For Bitcoin, it's the coin itself. The economic strength of a decentralized system (whether the internet or Bitcoin) is a good measure of how well it's faring.

Comparing Bitcoin's current situation to Ethereum's is instructive. If you look around, it's hard to find concrete reasons for Bitcoin's price doldrums other than the general miasma that is affecting all assets (especially risk assets) because of fears about recession and inflation. Ethereum is different. Certainly, there's a set of investors who are selling for the same reasons they're selling BTC. But Ethereum is also undergoing a dramatic transition, called "the merge", that will move the underlying ledger from proof-of-work to proof-of-stake. These kinds of large scale transitions have a big impact on a decentralized system's coherence since there will inevitably be people very excited about it and some who are opposed—winners and losers, if you will.

Is the design of Bitcoin sufficient for it to survive in the long term? I don't know. Stable decentralized systems are hard to get right. I think we got lucky with the internet. And even the internet is showing weakness against the long-term efforts of institutional forces to shape it in their image. Like the difficulty of killing off decentralized social and cultural traditions and systems, decentralized technology systems can withstand a lot of abuse and still function. Bitcoin, Ethereum, and a few other blockchains have proven that they can last for more than a decade despite challenges, changing expectations, and dramatic architectural transitions. I love the experimentation in decentralized system design that they represent. These systems won't die because you (or various governments) don't like them. The paradox is that they don't care what you think, even as they depend heavily on what everyone thinks.

Notes
1. To explore this categorization further, see this John Robb commentary on David Ronfeldt's Rand Corporation paper "Tribes, Institutions, Markets, Networks" (PDF).
2. For simplicity, I'm just going to talk about Bitcoin, but my comments largely apply to any decentralized system.

Photo Credit: Ballet scene at the Great Hall of the People attended by President and Mrs. Nixon during their trip to Peking from Byron E. Schumaker (Public Domain)

Tags: decentralization legitimacy coherence

Sunday, 03. July 2022

reb00ted

What is a DAO? A non-technical definition

Definitions of “DAO” (short for Decentralized Autonomous Organization) usually start with technology, specifically blockchain. But I think that actually misses much of what’s exciting about DAOs, a bit like if you were to explain why your smartphone is great by talking about semiconductor circuits. Let’s try to define DAO without starting with blockchain. For me: A DAO is… a distributed

Definitions of “DAO” (short for Decentralized Autonomous Organization) usually start with technology, specifically blockchain. But I think that actually misses much of what’s exciting about DAOs, a bit like if you were to explain why your smartphone is great by talking about semiconductor circuits. Let’s try to define DAO without starting with blockchain.

For me:

A DAO is…

a distributed group with a common cause of consequence that governs itself, does not have a single point of failure, and that is digital-native.

Let’s unpack this:

A group: a DAO is a form of organization. It is usually a group of people, but it could also be a group of organizations, a group of other DAOs (yes!) or any combination.

This group is distributed: the group members are not all sitting around the same conference table, and may never. The members of many DAOs have not met in person, and often never will. From the get-go, DAO members may come from around the globe. A common jurisdiction cannot be assumed, and as DAO membership changes, over time it may be that most members eventually come from a very different geography than where the DAO started.

With a common cause: DAOs are organized around a common cause, or mission, like “save the whales” or “invest in real-estate together”. Lots of different causes are possible, covering most areas of human interest, including “doing good”, “not for profit” or “for profit”.

This cause is of consequence to the members, and members are invested in the group. Because of that, members will not easily abandon the group. So we are not talking about informal pop-in-and-out-groups where maybe people have a good time but don’t really care whether the group is successful, but something where success of the group is important to the members and they will work on making the group successful.

That governs itself: it’s not a group that is subservient to somebody or some other organization or some other ruleset. Instead, the members of the DAO together make the rules, including how to change the rules. They do not depend on anybody outside of the DAO for that (unless, of course, they decide to do that). While some DAOs might identify specific members with specific roles, a DAO is much closer to direct democracy than representative democracy (e.g. as in traditional organization where shareholders elect directors who then appoint officers who then run things).

That does not have a single point of failure and is generally resilient: no single point of failure should exist, whether in people who are "essential" and cannot be replaced or in tools (like specific websites). In a DAO context this is often described as "sufficient decentralization".

And that is digital-native: a DAO usually starts on-line as a discussion group, and over time, as its cause, membership and governance become more defined, gradually turns into a DAO. At all stages members prefer digital tools and digital interactions over traditional tools and interactions. For example, instead of having an annual membership meeting at a certain place and time, they will meet online. Instead of filling out paper ballots, they will vote electronically, e.g. on a blockchain. (This is where having a blockchain is convenient, but there are certainly other technical ways voting could be performed.)

Sounds … very broad? It is! For me, that’s one of the exciting things about DAOs. They come with very little up-front structure, so the members can decide what and how they want to do things. And if they change their minds, they change their minds and can do that any time, collectively, democratically!

Of course, all this freedom means more work because a lot of defaults fall away and need to be defined. Governance can fail in new and unexpected ways because we don’t have hundreds of years of precedent in how, say, Delaware corporations work.

As an inventor and innovator, I'm perfectly fine with that. The things I tend to invent – in technology – are also new and fail in unexpected ways. Of course, there are many situations where that would be unacceptable: when operating a nuclear power plant, for example. So DAOs definitely aren't for everyone and everything. But where existing structures of governance are found to be lacking, here is a new canvas for you!

Wednesday, 29. June 2022

Mike Jones: self-issued

OAuth DPoP Presentation at Identiverse 2022

Here’s the DPoP presentation that Pieter Kasselman and I gave at the 2022 Identiverse conference: Bad actors are stealing your OAuth tokens, giving them control over your information – OAuth DPoP (Demonstration of Proof of Possession) is what we’re doing about it (PowerPoint) (PDF) A few photographs that workation photographer Brian Campbell took during the […]

Here’s the DPoP presentation that Pieter Kasselman and I gave at the 2022 Identiverse conference:

Bad actors are stealing your OAuth tokens, giving them control over your information – OAuth DPoP (Demonstration of Proof of Possession) is what we’re doing about it (PowerPoint) (PDF)

A few photographs that workation photographer Brian Campbell took during the presentation follow.

Mike Presenting:

Who is that masked man???

Pieter Presenting:

Monday, 27. June 2022

Phil Windleys Technometria

Fixing Web Login

Summary: Like the "close" buttons for elevator doors, "keep me logged in" options on web-site authentication screens feel more like a placebo than something that actually works. Getting rid of passwords will mean we need to authenticate less often, or maybe just don't mind as much when we do. You know the conventional wisdom that the "close" button in elevators isn't really hooked up to a

Summary: Like the "close" buttons for elevator doors, "keep me logged in" options on web-site authentication screens feel more like a placebo than something that actually works. Getting rid of passwords will mean we need to authenticate less often, or maybe just don't mind as much when we do.

You know the conventional wisdom that the "close" button in elevators isn't really hooked up to anything? That it's just there to make you feel good? "Keep me logged in" is digital identity's version of that button. Why is using an authenticated service on the web so unpleasant?

Note that I'm specifically talking about the web, as opposed to mobile apps. As I wrote before, compare your online, web experience at your bank with the mobile experience from the same bank. Chances are, if you're like me, that you pick up your phone and use a biometric authentication method (e.g. FaceId) to open it. Then you select the app and the biometrics play again to make sure it's you, and you're in.

On the web, in contrast, you likely end up at a landing page where you have to search for the login button which is hidden in a menu or at the top of the page. Once you do, it probably asks you for your identifier (username). You open up your password manager (a few clicks) and fill the username and only then does it show you the password field1. You click a few more times to fill in the password. Then, if you use multi-factor authentication (and you should), you get to open up your phone, find the 2FA app, get the code, and type it in. To add insult to injury, the ceremony will be just different enough at every site you visit that you really don't develop much muscle memory for it.

As a consequence, when most people need something from their bank, they pull out their phone and use the mobile app. I think this is a shame. I like the web. There's more freedom on the web because there are fewer all-powerful gatekeepers. And, for many developers, it's more approachable. The web, by design, is more transparent in how it works, inspiring innovation and accelerating its adoption.

The core problem with the web isn't just passwords. After all, most mobile apps authenticate using passwords as well. The problem is how sessions are set up and refreshed (or not, in the case of the web). On the web, sessions are managed using cookies, or correlation identifiers. HTTP cookies are generated by the server and stored on the browser. Whenever the browser makes a request to the server, it sends back the cookie, allowing the server to correlate all requests from that browser. Web sites, over the years, have become more security conscious and, as a result, most set expirations for cookies. When the cookie has expired, you have to log in again.
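For concreteness, here is a minimal sketch of a server issuing an expiring session cookie; it assumes Node.js with Express, and the cookie name, lifetime, and user are placeholders rather than anything prescribed by the post:

```javascript
// Minimal sketch of cookie-based session setup (Express assumed; values are placeholders).
const express = require('express');
const crypto = require('crypto');

const app = express();
const sessions = new Map(); // session id -> user info (in-memory, for illustration only)

app.post('/login', (req, res) => {
  // ...authenticate the user here...
  const sessionId = crypto.randomBytes(32).toString('hex');
  sessions.set(sessionId, { user: 'alice' }); // placeholder user

  // The browser stores this cookie and returns it with every request,
  // letting the server correlate them; once it expires, the user must log in again.
  res.cookie('sessionId', sessionId, {
    httpOnly: true,
    secure: true,
    maxAge: 30 * 60 * 1000, // 30-minute session
  });
  res.sendStatus(204);
});

app.listen(3000);
```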

Now, your mobile app uses HTTP as well, and so it also uses cookies to link HTTP requests and create a session. The difference is in how you're authenticated. Mobile apps (speaking generally) are driven by APIs. The app makes an HTTP request to the API and receives JSON data in return which it then renders into the screens and buttons you interact with. Most API access is protected by an identity protocol called OAuth.

Getting an access token from the authorization server (click to enlarge)
Using a token to request data from an API (click to enlarge)

You've used OAuth if you've ever used any kind of social login like Login with Apple, or Google sign-in. Your mobile app doesn't just ask for your user ID and password and then log you in. Rather, it uses them to authenticate with an authorization server for the API using OAuth. The standard OAuth flow returns an access token that the app stores and then returns to the server with each request. Like cookies, these access tokens expire. But, unlike cookies, OAuth defines a refresh token mechanism that the app can use to get a new access token. Neat, huh?
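As a rough sketch of what that refresh looks like on the wire, the app posts the refresh token back to the authorization server's token endpoint and stores the new access token it gets back. The parameters follow the standard OAuth 2.0 refresh_token grant; the endpoint URL and client credentials are placeholders:

```javascript
// Exchange a refresh token for a new access token (OAuth 2.0 refresh_token grant).
// The token endpoint and client credentials below are placeholders.
async function refreshAccessToken(refreshToken) {
  const response = await fetch('https://auth.example.com/oauth2/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'refresh_token',
      refresh_token: refreshToken,
      client_id: 'my-client-id',         // placeholder
      client_secret: 'my-client-secret', // placeholder; public clients omit this
    }),
  });
  if (!response.ok) {
    throw new Error('Token refresh failed: ' + response.status);
  }
  // Typical response: { access_token, token_type, expires_in, refresh_token? }
  return response.json();
}
```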

The problem with using OAuth on the web is that it's difficult to trust browsers:

Some are in public places and people forget to log out.
A token in the browser can be attacked with techniques like cross-site scripting.
Browser storage mechanisms are also subject to attack.

Consequently, storing the access token, refresh token, and developer credentials that are used to carry out an OAuth flow is hard—maybe impossible—to do securely.

This problem probably won't be solved by fixing browser security and then deciding to use OAuth in the browser. A more likely approach is to get rid of passwords and make repeated authentication much less onerous. Fortunately, solutions are at hand. Most major browsers on most major platforms can now be used as FIDO platform authenticators. This is a fancy way of saying you can use the same mechanisms you use to authenticate to the device (touch ID, face ID, or even a PIN) to authenticate to your favorite web site as well. Verifiable credentials are another up-and-coming technology that promises to significantly reduce the burdens of passwords and multi-factor authentication.
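As a sketch of what the FIDO/WebAuthn option looks like in the browser, a site can ask the platform authenticator to sign a server-issued challenge instead of prompting for a password. The navigator.credentials.get call is the standard WebAuthn API; the challenge and credential id must come from the server that registered the credential, so the parameters here are placeholders:

```javascript
// Ask the browser's platform authenticator (Touch ID, Face ID, Windows Hello, a PIN)
// to sign a challenge issued by the server. challengeBytes and credentialIdBytes
// are placeholders; in practice both come from the relying party's server.
async function signInWithPlatformAuthenticator(challengeBytes, credentialIdBytes) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeBytes, // Uint8Array issued by the server for this login
      allowCredentials: [{ type: 'public-key', id: credentialIdBytes }],
      userVerification: 'required', // force biometric or PIN verification
      timeout: 60000,
    },
  });
  // Send assertion.response (authenticatorData, clientDataJSON, signature)
  // back to the server for verification to complete the login.
  return assertion;
}
```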

I'm hopeful that we may really be close to the end for passwords. I think the biggest obstacle to adoption is likely that these technologies are so slick that people won't believe they're really secure. If we can get adoption, then maybe we'll see a resurgence of web-based services as well.

Notes
1. This is known as "identifier-first authentication". By asking for the identifier, the authentication service can determine how to authenticate you. So, if you're using a token authentication instead of passwords, it can present the right option. Some places do this well, merely hiding the password field using Javascript and CSS, so that password managers can still fill the password even though it's not visible. Others don't, and you have to use your password manager twice for a single login.

Photo Credit: Dual elevator door buttons from Nils R. Barth (CC0 1.0)

Tags: identity web mobile oauth