Last Update 6:48 PM August 18, 2022 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Thursday, 18. August 2022

Ben Werdmüller

What is a man?

The only answer I really care about is “whatever you want it to be”. Like all men, I’ve spent my life in a context of weirdly reductive, gender essentialist expectations - a man is physically strong, competitive, aggressive, stoic - that I couldn’t live up to because, generally speaking, that’s not what I am. Am I less of a man because I’m not aggressive, and because I prefer collaboration to competition? I don’t think so, but there are certainly plenty of people who do.

The reason this matters for me now is not my own experience. I’ve found my way to a kind of self-acceptance, although my teenage years and most of my twenties were pretty rough: a mix of hating my body and receiving hate for not being what people expected me to be. I definitely have some pretty strong character flaws (non-confrontation and people-pleasing among them), which I’m trying to work on. But I feel some degree of pride about who I am, what I’ve managed to do, and the effect I have on the communities I’m a part of. Honestly? I’m glad I don’t adhere to the gender stereotype, even if it’s also true that I couldn’t if I wanted to.

But now I’m going to have a son (or at least, a baby who will be assigned male at birth), who will be subject to all of the same pressures and expectations, even in his first few years. There will be people who will be upset if he plays with dolls; there will be people who want to direct his interests to sports and trucks and whatever-else boys are supposed to like. There’s a fine line to walk here, because if he comes to those interests naturally, there’s nothing wrong with them! And those interests shouldn’t be gendered in the first place! I don’t want to dissuade any of his interests. But I worry about him getting there through external pressure, both explicitly and implicitly. The pressure to conform to someone else’s standard can only lead to anxiety and unhappiness; not to mention the impact it has on perpetuating gender inequality, and how he shows up for other people later in life.

To be clear, I don’t know what I’m doing. I’m not an expert in gender, or parenting, or really anything else. But I want to show up well as a parent, and I want him to show up well in the world (which are two expressions of the same thing). I just want him to be whoever he is, without regard for who other people expect him to be. That goes for every aspect of his (or her! or their!) identity. And I want the experience of that self-expression to be better than mine was, and better than so many people’s are, without fear or friction or conflict.

I guess what I’m really saying is, I don’t care what a man is, or what a boy is. I care who my child is. And that’s all that matters.

 

Photo by Annie Spratt on Unsplash

Wednesday, 17. August 2022

Simon Willison

Building games and apps entirely through natural language using OpenAI’s code-davinci model

A deeply sophisticated example of using prompts to generate entire working JavaScript programs and games using the new code-davinci OpenAI model.


Crunchy Data: Learn Postgres at the Playground

Crunchy Data have a new PostgreSQL tutorial series, with a very cool twist: they have a build of PostgreSQL compiled to WebAssembly which runs in the browser, so each tutorial is accompanied by a working psql terminal that lets you try out the tutorial contents interactively. It even has support for PostGIS, though that particular tutorial has to load 80MB of assets in order to get it to work!

Via Craig Kerstiens


Plugin support for Datasette Lite

I've added a new feature to Datasette Lite, my distribution of Datasette that runs entirely in the browser using Python and SQLite compiled to WebAssembly. You can now install additional Datasette plugins by passing them in the URL.

Datasette Lite background

Datasette Lite runs Datasette in the browser. I initially built it as a fun technical proof of concept, but I'm increasingly finding it to be a genuinely useful tool for quick ad-hoc data analysis and publication. Not having any server-side components at all makes it effectively free to use without fear of racking up cloud computing costs for a throwaway project.

You can read more about Datasette Lite in these posts:

Datasette Lite: a server-side Python web application running in a browser

Joining CSV files in your browser using Datasette Lite

Scraping data into Datasette Lite shows an example project where I scraped PSF board resolutions, stored the results in a CSV file in a GitHub Gist and then constructed this URL to open the result in Datasette Lite and execute a SQL query.

Adding plugins to Datasette Lite

One of Datasette's key features is support for plugins. There are over 90 listed in the plugin directory now, with more emerging all the time. They're a fantastic way to explore new feature ideas and extend the software to handle non-default use cases.

Plugins are Python packages, published to PyPI. You can add them to Datasette Lite using the new ?install=name-of-plugin query string parameter.

Here's an example URL that loads the datasette-jellyfish plugin, which adds new SQL functions for calculating distances between strings, then executes a SQL query that demonstrates that plugin:

https://lite.datasette.io/?install=datasette-jellyfish#/fixtures?sql=SELECT%0A++++levenshtein_distance%28%3As1%2C+%3As2%29%2C%0A++++damerau_levenshtein_distance%28%3As1%2C+%3As2%29%2C%0A++++hamming_distance%28%3As1%2C+%3As2%29%2C%0A++++jaro_similarity%28%3As1%2C+%3As2%29%2C%0A++++jaro_winkler_similarity%28%3As1%2C+%3As2%29%2C%0A++++match_rating_comparison%28%3As1%2C+%3As2%29%3B&s1=barrack+obama&s2=barrack+h+obama

That URL uses ?install=datasette-jellyfish to install the plugin, then executes the following SQL query:

SELECT levenshtein_distance(:s1, :s2), damerau_levenshtein_distance(:s1, :s2), hamming_distance(:s1, :s2), jaro_similarity(:s1, :s2), jaro_winkler_similarity(:s1, :s2), match_rating_comparison(:s1, :s2);

It sets s1 to "barrack obama" and s2 to "barrack h obama".
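
To show how these pieces fit together, here is a minimal Python sketch (my own illustration, not part of Datasette Lite) that assembles a lite.datasette.io URL from a plugin name, a SQL query and its named parameters. The helper name and the shortened query are assumptions for the example:

from urllib.parse import urlencode

# Hypothetical helper: build a Datasette Lite URL that installs a plugin
# and opens a SQL query against the bundled "fixtures" database.
def datasette_lite_url(plugin, sql, **params):
    base = "https://lite.datasette.io/"
    query = urlencode({"install": plugin})
    fragment = "/fixtures?" + urlencode({"sql": sql, **params})
    return f"{base}?{query}#{fragment}"

print(datasette_lite_url(
    "datasette-jellyfish",
    "SELECT jaro_winkler_similarity(:s1, :s2);",
    s1="barrack obama",
    s2="barrack h obama",
))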

Plugin compatibility

Unfortunately, many existing Datasette plugins aren't yet compatible with Datasette Lite. Most importantly, visualization plugins such as datasette-cluster-map and datasette-vega don't work.

This is because I haven't yet solved the challenge of loading additional JavaScript and CSS into Datasette Lite - see issue #8.

Here's the full list of plugins that I've confirmed work with Datasette Lite so far:

datasette-packages - Show a list of currently installed Python packages - demo
datasette-dateutil - dateutil functions for Datasette - demo
datasette-schema-versions - Datasette plugin that shows the schema version of every attached database - demo
datasette-debug-asgi - Datasette plugin for dumping out the ASGI scope. - demo
datasette-query-links - Turn SELECT queries returned by a query into links to execute them - demo
datasette-json-html - Datasette plugin for rendering HTML based on JSON values - demo
datasette-haversine - Datasette plugin that adds a custom SQL function for haversine distances - demo
datasette-jellyfish - Datasette plugin that adds custom SQL functions for fuzzy string matching, built on top of the Jellyfish Python library - demo
datasette-pretty-json - Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays. - demo
datasette-yaml - Export Datasette records as YAML - demo
datasette-copyable - Datasette plugin for outputting tables in formats suitable for copy and paste - demo

How it works

The implementation is pretty simple - it can be seen in this commit. The short version is that ?install= options are passed through to the Python web worker that powers Datasette Lite, which then runs the following:

for install_url in install_urls:
    await micropip.install(install_url)

micropip is a component of Pyodide which knows how to install pure Python wheels directly from PyPI into the browser's emulated Python environment. If you open up the browser devtools networking panel you can see that in action!

Since the ?install= parameter is being passed directly to micropip.install() you don't even need to provide names of packages hosted on PyPI - you could instead provide the URL to a wheel file that you're hosting elsewhere.
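
As a rough sketch of that flow (the function and variable names here are my own, not the actual Datasette Lite worker code), the query-string parsing plus the micropip loop could look like this inside the Pyodide environment:

from urllib.parse import parse_qs, urlparse

import micropip  # provided by Pyodide

async def install_plugins(page_url):
    # Collect every ?install= value: a PyPI package name or a wheel URL.
    install_urls = parse_qs(urlparse(page_url).query).get("install", [])
    for install_url in install_urls:
        await micropip.install(install_url)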

This means you can use ?install= as a code injection attack - you can install any Python code you want into the environment. I think that's fine - the only person who will be affected by this is the user who is viewing the page, and the lite.datasette.io domain deliberately doesn't have any cookies set that could cause problems if someone were to steal them in some way.

Tuesday, 16. August 2022

Simon Willison

Efficient Pagination Using Deferred Joins

Surprisingly simple trick for speeding up deep OFFSET x LIMIT y pagination queries, which get progressively slower as you paginate deeper into the data. Instead of applying them directly, apply them to a "select id from ..." query to fetch just the IDs, then either use a join or run a separate "select * from table where id in (...)" query to fetch the full records for that page.
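
To make the query shape concrete, here is a minimal sketch using Python's sqlite3 module; the items table and id column are illustrative, and the technique pays off most on engines where the inner id-only query can be served from a narrow index:

import sqlite3

# Build a throwaway table so the sketch actually runs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)", [("row %d" % i,) for i in range(200_000)])

per_page, page = 20, 5000
offset = page * per_page

# Naive deep pagination: the database walks and discards 100,000 full rows
# before returning the 20 we want.
slow_sql = "SELECT * FROM items ORDER BY id LIMIT ? OFFSET ?"

# Deferred join: apply LIMIT/OFFSET to an id-only subquery,
# then join back to fetch the full rows for just that page.
fast_sql = """
SELECT items.*
FROM items
JOIN (SELECT id FROM items ORDER BY id LIMIT ? OFFSET ?) AS page_ids
  ON items.id = page_ids.id
ORDER BY items.id
"""
rows = conn.execute(fast_sql, (per_page, offset)).fetchall()
print(rows[0])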

Via Introducing FastPage: Faster offset pagination for Rails apps


John Philpin : Lifestream

Human remains reportedly found in suitcases bought at New Zealand auction.

Gulp ..

Now that’s a headline you don’t often read … New Zealand?


Werdmüller on Medium

Neumann Owns

Flow has nothing to do with the housing crisis.

Continue reading on Medium »


Ben Werdmüller

Neumann Owns

This morning, Andreessen Horowitz announced that it had invested $350M into Adam “WeWork” Neumann’s new startup, Flow. Whereas WeWork revolutionized the commercial real estate business and made ad-hoc office space easier for startups, Flow attempts to do the same for residential real estate.

A lot of ink has been spilled on whether it’s okay for A16Z to have invested this money given Neumann’s well-documented, disastrous track record with WeWork, in an environment where lots of other people find it hard to raise even a tiny fraction of this amount. I agree with these comments in the sense that it’s obviously unfair: a sign of an unequal system. It just is.

But for a moment, look at it from a mercenary venture capitalist’s perspective. WeWork is everywhere, which happened under Neumann’s watch - and although Neumann is not the one doing it, it’s finally approaching profitability.

And then there’s housing, which is in need of major reform. I’m not going to shed any tears at the loss of today’s batch of rental agencies and real estate management firms, which have helped hike rents up to astronomical levels, and have often lobbied for preferential legislation that hurts ordinary renters. At the same time, investment properties leave many homes completely vacant in the middle of a housing crisis that is leaving millions experiencing housing insecurity.

The trouble is, Flow is highly unlikely to help with any of that. Marc Andreessen’s announcement hints at as much:

Many people are voting with their feet and moving away from traditional economic hub cities to different cities, towns, or rural areas, with no diminishment of economic opportunity. […] The residential real estate world needs to address these changing dynamics. And yet virtually no aspect of the modern housing market is ready for these changes.

Based on these words, Flow is gentrification as a service: a way for the technorati to rent cushy spaces in lower-cost parts of the country and build community with each other without having to engage with the people who are already there. It’s not a stretch to see what the racial and socioeconomic dynamics might be here, and the effect it might have on local economies. Low-cost housing for people who need it this is not.

“Our nation has a housing crisis,” Andreessen says. But he also said this, as reported by Jerusalem Demsas over in the Atlantic:

I am writing this letter to communicate our IMMENSE objection to the creation of multifamily overlay zones in Atherton … Please IMMEDIATELY REMOVE all multifamily overlay zoning projects from the Housing Element which will be submitted to the state in July. They will MASSIVELY decrease our home values, the quality of life of ourselves and our neighbors and IMMENSELY increase the noise pollution and traffic.

He doesn’t care about the housing crisis. What he does care about is making money, and in Neumann, he likely sees someone who already knows the real estate market well and has the ability to grow a business in the space very quickly. I’m sure we’ll see Flow communities all over the country within the next few years.

Where will he start? We can look at public records. The New York Times points out that he’s now going to donate substantial real estate holdings to Flow. Back in January, the Wall Street Journal reported that he’d bought over a billion dollars of apartments in the South:

Entities tied to Mr. Neumann have been quietly acquiring majority stakes in more than 4,000 apartments valued at more than $1 billion in Miami, Atlanta, Nashville, Tenn., Fort Lauderdale, Fla., and other U.S. cities.

Quoted in the same article, his family office made this statement:

“Since the spring of 2020, we have been excited about multifamily apartment living in vibrant cities where a new generation of young people increasingly are choosing to live, the kind of cities that are redefining the future of living. We’re excited to play a role in that future.”

None of this dissuades me from my original suspicion: this is a place to live for the people who WeWork was originally built for. It’s for young, affluent knowledge workers who want to live somewhere cheaper but don’t care to actually know their communities. It’ll transform residential real estate in the sense that it’ll out-compete all those cookie cutter apartment buildings set up for that same market.

Andreessen is likely to make a fortune.

And what about the actual housing crisis? The one that’s making people housing-insecure?

At best it does nothing for them. At worst, it helps hike up rents in parts of the country that remain affordable. Those people, the ordinary people who make up most of the country, who are struggling to keep a roof over their heads, don’t even make it into the pitch deck.

 

PHOTOGRAPH BY STUART ISETT/Fortune Brainstorm TECH


John Philpin : Lifestream

“The big shift we see right now with teens that we’re very focused on is this desire to share more privately,”

A Product Manager At Facebook Meta

Monday, 15. August 2022

John Philpin : Lifestream

What rock have I been living under?

A podcast with Nitin Sawhney

I had never heard of him until today - and WOW.

What an artist. What a career. What a lovely highly talented person.


“Capital is that part of wealth which is devoted to obtaining further wealth.”

💬 Alfred Marshall


“The most valuable of all capital is that invested in human beings.”

💬 Alfred Marshall


“We might as well reasonably dispute whether it is the upper or the under blade of a pair of scissors that cuts a piece of paper, as whether value is governed by demand or supply.”

💬 Alfred Marshall


Published two years and two days ago - written about a man from England that I met in Taupo, New Zealand - and we just got talking.

“It’s not what you know - It’s who you know.”


Apple plans offering more advertising to users via apps.

Dalyrymple has been on about ads in music for a while. This suggests that it is going to get worse.

I’m with Jim. I am paying for these services with cold hard cash.

If ads appear I will need to reconsider my options.


From a random Tumblr.


Exclusive or not, this is one Clubhouse I was happy to leave

I never rated Clubhouse. .. which I wrote even before I had got in - and nothing convinced me that I was wrong even when I did.


The Machine Internet

Bad News is Good News.

“I started talking about the Machine Internet nearly 20 years ago, before it was even called “the Internet of Things.” Sensors and motes, tied to an analysis by wireless networks. Anything whose performance can be adjusted can now be monitored and controlled by a central intelligence. You can monitor the condition of parts and schedule maintenance before things break. This matters if the part is your heart, and its breaking could kill you.”

💬 Dana Blankenhorn

I’ll take that claim and raise you by a few years …

A group of us started talking about the topic in the mid 90s.

We had the obligatory startup … Flypaper … with a back end we called ‘The Megaserver’ - and without getting into the details - the system allowed us to render a website - ‘just in time’ - by assembling components out of a database. Our message at the time … the website only exists when somebody is visiting it.

It led to some interesting developments and conversations.

I can still remember one scenario where we envisaged every car having its own website, assembled from thousands of components that each mapped to specific things and functions in the car … this is mid-90s… people didn’t get it.

And then came the DOT BOMB. And oblivion swooped in.

Still very proud of what ‘we’ built.

Definitely ahead of its time … I mean think about the tools we had in the mid-90s to build this thing!


I have left the tracking data in this URL for a reason

Workplace Productivity: Are You Being Tracked?

It’s the NYT - but comes as a ‘gift post’ from Dave Pell and Next Draft.

It’s a topic near and dear to my heart … having refused to join a company that was doing exactly this a few years ago. And they haven’t stopped …. I saw an ad just last week where they were hyping a $400K a year salary for some kind of VP Product role … remote work!! (Their modus operandi long before COVID)

The small print of the offer is built on what the NYT is writing about.


WeWork co-founder lines up $350 million A16Z investment for a new billion-dollar real estate venture.

There is so much wrong with this story.

Apart from someone like Adam Neumann rising again …. it’s the old VC trick again…

The 350 Million ‘values’ the company at 1 billion - and so far it has done nothing. But of course - if you put 350 million in - and get - say 30 to 35% of the company in return - then the whole company MUST be worth over a billion … right?

Andreessen himself is of course hot on property … so this is right in his wheelhouse.

His father-in-law, John Arrillaga, who passed away earlier this year, is famous in the valley for being the guy that owns ‘all the property’ that these businesses are built on.

Beyond that - following in the footsteps of Oracle’s Ellison … he has been buying up Malibu property … last year - in the space of three months - he spent around 1/4 billion on THREE properties in Malibu.

Of course - his main house is in Atherton (the most expensive zip code in America), just down the road from his father-in-law’s estate in Portola Valley.

Carry On.


And The Answer Is ... Nick Clegg!

Zuckerberg’s New No. 2 Faces an Old Dilemma: Distrust of Facebook

Seriously? That’s the problem?

The new ‘number 2’ is Nick Clegg

Wikipedia

During the party’s time in coalition, the Liberal Democrats saw a significant drop in support and the 2015 general election left the party with just 8 seats, which resulted in Clegg’s ousting as Deputy Prime Minister and his resignation as party leader. In 2016, following a referendum in which a majority supported leaving the European Union, Clegg returned to the Liberal Democrat frontbench, concurrently serving as Spokesperson for Exiting the European Union and for International Trade from July 2016 to June 2017. In the 2017 general election, Clegg was defeated in his constituency of Sheffield Hallam by Jared O’Mara. After losing his seat, Clegg moved to the United States after he was appointed by Mark Zuckerberg as Vice-President for Global Affairs and Communications of Facebook, Inc.

‘#Winning


Damien Bod

Creating dotnet solution and project templates

This article shows how to create and deploy dotnet templates which can be used from the dotnet CLI or from Visual Studio.

Code: https://github.com/damienbod/Blazor.BFF.OpenIDConnect.Template

Folder Structure

The template folder structure is important when creating dotnet templates. The .template.config folder must be created inside the content folder. This folder has a template.json file and an icon.png image which is displayed inside Visual Studio once installed. The template.json file then has a few required objects and properties. You can create different types of templates.

The Blazor.BFF.OpenIDConnect.Template project is an example of a template I created for a Blazor ASP.NET Core solution with three projects, implementing the backend for frontend (BFF) security architecture using OpenID Connect.

{ "author": "damienbod", "classifications": [ "AspNetCore", "WASM", "OpenIDConnect", "OAuth2", "Web", "Cloud", "Console", "Solution", "Blazor" ], "name": "ASP.NET Core Blazor BFF hosted WASM OpenID Connect", "identity": "Blazor.BFF.OpenIDConnect.Template", "shortName": "blazorbffoidc", "tags": { "language": "C#", "type":"solution" }, "sourceName": "BlazorBffOpenIDConnect", "preferNameDirectory": "true", "guids": [ "CFDA20EC-841D-4A9C-A95C-2C674DA96F23", "74A2A84B-C3B8-499F-80ED-093854CABDEA", "BD70F728-398A-4A88-A7C7-A3D9B78B5AE6" ], "symbols": { "HttpsPortGenerated": { "type": "generated", "generator": "port", "parameters": { "low": 44300, "high": 44399 } }, "HttpsPortReplacer": { "type": "generated", "generator": "coalesce", "parameters": { "sourceVariableName": "HttpsPort", "fallbackVariableName": "HttpsPortGenerated" }, "replaces": "44348" } } }

The tags property

The tags object must be set correctly for Visual Studio to display the template. The type property must be set to solution, project or item. If the type property is not set to a correct value, the template will not be visible inside Visual Studio, even though it will still install and run from the CLI.

"tags": { "language": "C#", "type":"solution" // project, item },

HTTP Ports

I like to update the HTTP ports when creating a new solution or project from the template. I do not want to add a parameter for the HTTP port, because then the user would be required to enter a value in Visual Studio; if the user enters nothing, the template creates nothing, without an error. Anywhere the port 44348 is found in a launchSettings.json file, it will be updated with a new value inside the range. This only works if the port number already exists in the template, so it must match your content!

"symbols": { "HttpsPortGenerated": { "type": "generated", "generator": "port", "parameters": { "low": 44300, "high": 44399 } }, "HttpsPortReplacer": { "type": "generated", "generator": "coalesce", "parameters": { "sourceVariableName": "HttpsPort", "fallbackVariableName": "HttpsPortGenerated" }, "replaces": "44348" } }

Solution GUIDs

The GUIDs are used to replace the existing solution GUIDs from the solution file with new random GUIDs when creating a new solution using the template. The GUIDs must exist in your solution file, otherwise there is nothing to replace.

"guids": [ "CFDA20EC-841D-4A9C-A95C-2C674DA96F23", "74A2A84B-C3B8-499F-80ED-093854CABDEA", "BD70F728-398A-4A88-A7C7-A3D9B78B5AE6" ],

Namespaces, sourceName, project names

The sourceName value is replaced with the name passed as the -n parameter (or the project name entered inside Visual Studio). Everywhere that value appears in the content, the project names and the namespaces are replaced with the passed parameter. It is important that, when you create the content for the template, the namespaces and the project names use the same value as the sourceName value.

classifications

The classifications value is important when using the template inside Visual Studio. You can use it to filter and find your template in the “create new solution/project” UI.

Create a template Nuget package

I deploy the template as a NuGet package, using a nuspec file. This is used to create a nupkg file which can be uploaded to NuGet. You could also create the package from a dotnet project file.

<?xml version="1.0" encoding="utf-8"?> <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd"> <metadata> <id>Blazor.BFF.OpenIDConnect.Template</id> <version>1.2.6</version> <title>Blazor.BFF.OpenIDConnect.Template</title> <license type="file">LICENSE</license> <description>Blazor backend for frontend (BFF) template for WASM ASP.NET Core hosted</description> <projectUrl>https://github.com/damienbod/Blazor.BFF.OpenIDConnect.Template</projectUrl> <authors>damienbod</authors> <owners>damienbod</owners> <icon>./BlazorBffOpenIDConnect/.template.config/icon.png</icon> <language>en-US</language> <tags>Blazor BFF WASM ASP.NET Core</tags> <requireLicenseAcceptance>false</requireLicenseAcceptance> <copyright>2022 damienbod</copyright> <summary>This template provides a simple Blazor template with BFF server authentication WASM hosted</summary> <releaseNotes>Improved template with http port generator, update packages</releaseNotes> <repository type="git" url="https://github.com/damienbod/Blazor.BFF.OpenIDConnect.Template" /> <packageTypes> <packageType name="Template" /> </packageTypes> </metadata> </package>

Installing

The template can be installed using the dotnet CLI. The name of the template is defined in the template file. The dotnet new command can then be used to create a new solution or project, depending on your template type. The -n parameter defines the name of the projects and the namespaces.

// install
dotnet new -i Blazor.BFF.OpenIDConnect.Template

// run
dotnet new blazorbffoidc -n YourCompany.Bff

Visual Studio

After installing using the dotnet CLI, the template will be visible inside Visual Studio, if the tags type is set correctly. The icon will be displayed if the correct icon.png is added to the .template.config folder.

Notes

Creating and using templates with the dotnet CLI is really powerful and very simple. There are a few restrictions which must be followed and the docs are a bit light. This github repo is a great starting point and is where I would go to learn and create your first template. If deploying the template to Visual Studio as well as using it in the dotnet CLI, you need to test both. Entering HTTP port parameters does not work so well in Visual Studio, as no default value is set if the user does not enter one. I was not able to get the VSIX extensions to work within a decent time limit but will probably come back to this at some stage. I had many problems with the target type, XML errors when deploying and so on. The dotnet CLI works great and can be used anywhere, and the templates can be used in Visual Studio as well; this is enough for me. I think the dotnet CLI templates feature is great and makes it really easy to get started faster when creating software solutions.

Links:

https://github.com/sayedihashimi/template-sample

https://www.nuget.org/packages/Blazor.BFF.OpenIDConnect.Template/

https://dotnetnew.azurewebsites.net/

https://devblogs.microsoft.com/dotnet/how-to-create-your-own-templates-for-dotnet-new/

https://docs.microsoft.com/en-us/dotnet/core/tools/custom-templates

https://github.com/dotnet/aspnetcore/tree/main/src/ProjectTemplates

https://json.schemastore.org/template

https://docs.microsoft.com/en-us/dotnet/core/tutorials/cli-templates-create-item-template


John Philpin : Lifestream

Has anybody seen the film ‘Luce’.

Good? Bad?


reb00ted

Levels of information architecture

I’ve been reading up on what is apparently called information architecture: the “structural design of shared information environments”.

A quite fascinating discipline, and sorely needed as the amount of information we need to interact with on a daily basis keeps growing.

I kind of think of it as “the structure behind the design”. If design is what you see when looking at something, information architecture is the beams and struts and foundations etc that keep the whole thing standing and comprehensible.

Based on what I’ve read so far, however, it can be a bit myopic in terms of focusing just on “what’s inside the app”. That’s most important, obviously, but insufficient in the age of IoT – where some of the “app” is actually controllable and observable through physical items – and the expected coming wave of AR applications. Even here and now many flows start with QR codes printed on walls or scanned from other people’s phones, and we miss something in the “design of shared information environments” if we don’t make those in-scope.

So I propose this outermost framework to help us think about how to interact with shared information environments:

Universe-level: Focuses on where on the planet a user could conceivably be, and how that changes how they interact with the shared information environment. For example, functionality may be different in different regions, use different languages or examples, or not be available at all.

Environment-level: Focuses on the space in which the user is currently located (like sitting on their living room couch), or that they can easily reach, such as a bookshelf in the same room. Here we can have a discussion about, say, whether the user will pick up their Apple remote, run the virtual remote app on their iOS device, or walk over to the TV to turn up the volume.

Device-level: Once the user has decided which device to use (e.g. their mobile phone, their PC, their AR goggles, a button on the wall etc), this level focuses on what the user does at the top level of that device. On a mobile phone or PC, that would be the operating-system-level features such as which app to run (not the content of the app, that’s the next level down), or home screen widgets. Here we can discuss how the user interacts with the shared information space given that they also do other things on their device; how to get back and forth; integrations and so forth.

App-level: The top-level structure inside an app. For example, an app might have 5 major tabs reflecting 5 different sets of features.

Page-level: The structure of pages within an app. Do they have commonalities (such as a title at the top, or a toolbox to the right), and how are they structured?

Mode-level: Some apps have “modes” that change how the user interacts with what is shown on a page. Most notably: drawing apps, where the selected tool (like drawing a circle vs erasing) determines different interaction styles.

I’m just writing this down for my own purposes, because I don’t want to forget it and refer to it when thinking of design problems. And perhaps it is useful for you, the reader, as well. If you think it can be improved, let me know!

Sunday, 14. August 2022

John Philpin : Lifestream

In case you were thinking that what you really need is another micro blogging platform to add to your arsenal …

Nicheless | Share your thoughts. Write away.


Ben Werdmüller

Finding ethical eyewear

I ordered new glasses recently. At some point over the last few months, I accidentally slept on my main pair and bent them out of shape; although I tried my best to put them back, they’ve been a little bit crooked ever since.

I’ve been a die-hard Zenni Optical customer for years, because their frames are affordable, relatively well-made, and can be engraved with my website address. (Yes, I’ve been wearing “werd.io” on my face for the best part of a decade.) But this adherence means I’ve been wearing the same black frames forever, and hey, why not change it up?

I wish Genusee made prescription glasses: they’re made from water bottles in Flint, Michigan, and can be recycled back into the same material stream. I like everything about their mission - but unfortunately, I need prescription glasses to see.

Sunglasses by Pala Eyewear fund eye care across Africa, but the company is based in the UK, so I’d need to order pairs from overseas.

Solo makes its sunglasses from repurposed wood, bamboo, cellulose acetate and recycled plastic. Great, but while they mention that their frames are prescription-ready, they don’t actually seem to offer prescriptions.

Reader, I gave up and followed the stereotypical Silicon Valley path into Warby Parker. They felt well-made, which turns out to be all I can ask for. But I’m still looking for the right place to get prescription sunglasses.

Perhaps the most sustainable route would be to get laser eye surgery and dispense with the need for glasses at all. I’ve thought about it, but to be honest, despite my understanding of the low risk involved, the idea of lasers cutting away at my eyeballs doesn’t have me running towards a surgeon with money in hand.

If you wear prescription glasses and care about the ethics of the products you buy, have you found an adequate solution? I’d love to learn from you.

 

Photo by Bud Helisson on Unsplash

Saturday, 13. August 2022

John Philpin : Lifestream

When did Zoom change its terms?

I used to talk for unlimited time with 1 person with a 40-minute limit for 3 or more people.

Now that 40-minute limit is universal.

Poll

Is this going to incentivize more people to pay or will they simply find alternative solutions?


Jon Udell

How to rewrite a press release: a step-by-step guide

As a teaching fellow in grad school I helped undergrads improve their expository writing. Some were engineers, and I invited them to think about writing and editing prose in the same ways they thought about writing and editing code. Similar rules apply, with different names. Strunk and White say “omit needless words”; coders say “DRY” (don’t repeat yourself.) Writers edit; coders refactor. I encouraged students to think about writing and editing prose not as a creative act (though it is one, as is coding) but rather as a method governed by rules that are straightforward to learn and mechanical to apply.

This week I applied those rules to an internal document that announces new software features. It’s been a long time since I’ve explained the method, and thanks to a prompt from Greg Wilson I’ll give it a try using another tech announcement I picked at random. Here is the original version.

I captured the transformations in a series of steps, and named each step in the version history of a Google Doc.

Step 1

The rewritten headline applies the following rules.

Lead with key benefits. The release features two: support for diplex-matched antennas and faster workflow. The original headline mentions only the first, I added the second.

Clarify modifiers. A phrase like “diplex matched antennas” is ambiguous. Does “matched” modify “diplex” or “antennas”? The domain is unfamiliar to me, but I suspected it should be “diplex-matched” and a web search confirmed that hunch.

Omit needless words. The idea of faster workflow appears in the original first paragraph as “new efficiencies aimed at streamlining antenna design workflows and shortening design cycles.” That’s a long, complicated, yet vague way of saying “enables designers to work faster.”

Step 2

The original lead paragraph was now just a verbose recap of the headline. So poof, gone.

Step 3

The original second paragraph, now the lead, needed a bit of tightening. Rules in play here:

Strengthen verbs. “NOUN is a NOUN that VERBs” weakens the verb. “NOUN, a NOUN, VERBs” makes it stronger.

Clarify modifiers. “matching network analysis” -> “matching-network analysis”. (As I look at it again now, I’d revise to “analysis of matching networks.”)

Break up long, weakly-linked sentences. The original was really two sentences linked weakly by “making it,” so I split them.

Omit needless words. A word that adds nothing, like “applications” here, weakens a sentence.

Strengthen parallelism. If you say “It’s ideal for X and Y” there’s no problem. But when X becomes “complex antenna designs that involve multi-state and multi-port aperture or impedance tuners,” and Y becomes “corporate feed networks with digital phase shifters,” then it helps to make the parallelism explicit: “It’s ideal for X and for Y.”

Step 4

Omit needless words. “builds on the previous framework with additional” -> “adds”.

Simplify. “capability to connect” -> “ability to connect”.

Show, don’t tell. A phrase like “time-saving options in the schematic editor’s interface” tells us that designers save time but doesn’t show us how. That comes next: “the capability to connect two voltage sources to a single antenna improves workflow efficiency.” The revision cites that as a shortcut.

Activate the sentence. “System and radiation efficiencies … can be effortlessly computed from a single schematic” makes efficiencies the subject and buries the agent (the designer) who computes them. The revision activates that passive construction. Similar rules govern the rewrite of the next paragraph.

Step 5

When I reread the original fourth paragraph I realized that the release wasn’t only touting faster workflow, but also better collaboration. So I adjusted the headline accordingly.

Step 6

Show, don’t tell. The original version tells, the new one shows.

Simplify. “streamline user input” -> “saves keystrokes” (which I might further revise to “clicks and keystrokes”).

Final result

Here’s the result of these changes.

I haven’t fully explained each step, and because the domain is unfamiliar I’ve likely missed some nuance. But I’m certain that the final version is clearer and more effective. I hope this step-by-step narration helps you see how and why the method works.


John Philpin : Lifestream

Four Nil.

Against Brentford?

FTW!



Simon Willison

Bypassing macOS notarization

Useful tip from the geckodriver docs: if you've downloaded an executable file through your browser and now cannot open it because of the macOS quarantine feature, you can run "xattr -r -d com.apple.quarantine path-to-binary" to clear that flag so you can execute the file.

Via davidbgk on Datasette Discord

Friday, 12. August 2022

Mike Jones: self-issued

Publication Requested for OAuth DPoP Specification

Brian Campbell published an updated OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) draft addressing the shepherd review comments received. Thanks to Rifaat Shekh-Yusef for his useful review!

Following publication of this draft, Rifaat also created the shepherd write-up, obtained IPR commitments for the specification, and requested publication of the specification as an RFC. Thanks all for helping us reach this important milestone!

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-11.html

ian glazers tuesdaynight

Lessons on Salesforce’s Road to Complete Customer MFA Adoption

What follows is a take on what I learned as Salesforce moved to require all of its customers to use MFA. There’s plenty more left on the cutting room floor but it will definitely give you a flavor for the experience. If you don’t want to read all this you can check out the version I delivered at Identiverse 2022.

Thank you.

It is an honor and a privilege to be here on the first day of Identiverse. I want to thank Andi and the entire program team for allowing me to speak to you today.

This talk is an unusual one for me. I have had the pleasure and privilege to be here on stage before. But in all the times that I have spoken to you, I have been wearing my IDPro hat. I have never had the opportunity to represent my day job and talk about what my amazing team does. So today I am here to talk to you as a Salesforce employee.

And because of that you’re going to note a different look and feel for this presentation. Very different. I get to use the corporate template and I am leaning in hard to that.

Salesforce is a very different kind of company and that shows up in many different ways. Including the fact that, yes, there’s a squirrel-like thing on this slide. That’s Astro – they are one of our mascots. Let’s just get one thing out of the way up front – yes, they have their own backstories and different pronouns; no, they do not all wear pants. Let’s move on.

So the reason why I am here today is to talk to you about Salesforce’s journey towards complete customer adoption of MFA. There are 2 key words in this: Customer and Journey.

‘Customer’ is a key word here because the journey we are on is to drive our customers’ users to use MFA. This is not going to be a talk about how we enable our workforce to use MFA. Parenthetically we did that a few years ago and got ~95% of all employees enrolled in MFA in under 48 hours. Different talk another time. We are focused on raising the security posture of our customers with their help.

Journey is the other key word here. The reason why I want to focus on the Journey is because I believe there is something for everyone to take away and apply in their own situations. And I want to tell this Journey as a way of sharing the lessons I have learned, my team has learned, to help avoid the mistakes we made along the way.

The Journey Begins

So, the Journey towards complete customer MFA adoption.

It starts in the Fall of 2019. Our CISO at the time makes a pronouncement. Because MFA is the single most effective way our customers could protect themselves and their customers, we wanted to drive more use of MFA. So the pronouncement was simple: in 3 months’ time every one of our (at the time) 10 product groups (known as our Clouds) will adopt a common MFA service (which was still in development at the time, btw), and by February 1 of 2021, 100% of end users of our well over 150,000 customers will use MFA or SSO on every login. Again, this is Salesforce changing all of our customers’ and all of their users’ behavior across all of our products in roughly a year’s time.

That means in a year’s time we are going to change the way every single person logs into a Salesforce product. And let’s be honest with ourselves, fellow identity nerds, this is what people think of MFA:

100% service adoption in 3 months. 100% user penetration within about 1 year.

100%

Of all end users.

All of them.

100%

There’s laughing from the audience. There’s some whispering to neighbors. I assume this is your reaction to the low bar that the CISO set for us… a trivial thing to achieve.

Oh wait no… the opposite. You, like I did at the time reacted to the absolute batshit nutsery of that goal. What the CISO is proposing is to tell customers, WHO PAY US, here is the minimum bar for your users’ security posture and you must change behaviors and potentially technologies if you don’t currently meet that bar and want to use our services.

100%… oh hell no.

I reacted like most 5 year olds would. I stomped my feet. I pulled the covers over my head thinking if the monsters couldn’t see me, they couldn’t get me. If I just didn’t acknowledge the CISO’s decree, it would somehow not apply. Super mature response. Lasted for about a week. Then I learned that the CISO committed all this to our Board of Directors. So… the chances of ignoring this were zero. But still, I fought against the tide. I was difficult. I was difficult to the program team and to my peer in Security. That was immature and just wasted time. I spent time rebuilding those relationships during the first 6 months of the program.

Step 0: Get a writer and a data person

What would you do hotshot?
If you got this decree, what would be the first thing you’d do? Come on – shout them out! (Audience shouts out answers.) All good ideas… but the first thing you should do is hire the best tech writer you can. Trust me, you are going to need that person in the 2nd and 3rd acts and it’s gonna take them a bit of time to come up to speed… so get going, hire a writer!

(It’s also not a bad idea to get data people on the team. If you are going to target 100% rollout then you need good ways to measure your progress. And you’ll want to slice and dice that to better understand where you need more customer outreach and which regions or businesses are doing well.)

Step 1: Form a program with non-SMEs

Ok probably the next thing you’d do is get a program running which is what we did. That program was and is run by non-identity people. Honestly, my first reaction was that this was going to be a problem. What I foresaw was a lot of explaining the “basics” of identity and MFA and SSO to the program team and not a lot of time left to do the work.

I was right and I was wrong. I was correct in that I and my team did spend a lot of time explaining identity concepts to the program team. I was wrong in that the work of explaining was actually the work that needed to be done. The program team were not identity people and we were asking them to do identity stuff and this was just like the admins at our customers. They were not identity people and we were now asking them to do identity stuff.

So having a program team of non-subject matter experts was a great feature not a bug. As the SMEs, my team spent hours explaining so many things to the program team and it turned out that the time we spent there was a glimpse of what the entire program would need to do with our customers.

Not only did we have a program team staffed with non-subject matter experts, we also formed a steering committee staffed, in part, with non-subject matter experts. The Steerco was headed by a representative from Security, Customer Success, and Product. This triumvirate helped us to balance the desires of Security with the realities of the customers with our ability to deliver needed features. 

Step 2: Find the highest ranking exec you can and use them as persuaders as needed

Next up – if we needed all of our clouds to use MFA, we need to actually get their commitment to do so. The program dutifully relayed the CISO’s decree to the general managers of all the clouds. Understand that Salesforce’s financial year starts Feb 1, so we were just entering Q4 and here comes the program team telling the GMs, “yeah on top of all your revenue goals for the year, you need to essentially drop everything and integrate with the MFA service,” which again wasn’t GA yet. 

We were asking the GMs to change their Q4 and next fiscal year plans by adding a significant Trust-related program. And at Salesforce Trust is our number 1 value which means that this program had to go to the top of every cloud’s backlog. As a product manager, if someone told me “hey Ian, this thing that you really had no plans to do now has to be done immediately” I would take it poorly. Luckily, we have our CISO with the support of our Co-CEOs and Board to persuade the GMs.

Step 3: Get, maintain, and measure alignment using cultural and operational norms

So we got GM commitments but needed a way to keep them committed in the forthcoming years of the program. We used our execs to help do this and we relied on a standard Salesforce planning mechanism: the V2MOM. V2MOM stands for Vision, Values, Methods, Obstacles, and Measures. Essentially, where do you want to go, what is important to you in that journey, what are the things you are going to do get to that destination, what roadblocks do you expect, and how will you measure your progress. V2MOMs are ingrained in Salesforce culture and operations. Specific to MFA, we made sure that service adoption and customer MFA adoption measures were in the very first Method of every Cloud’s V2MOM and we used the regular review processes within Salesforce to monitor our progress.

Do not create something new! Find whatever your organization uses to gain and monitor alignment and progress and use it!

Lesson 1: Service delivery without adoption is the same thing as no service delivery

Round about this time I made the first of many mistakes. We had just GA’ed the new MFA service and I wanted to publish a congratulatory note and get all the execs to pile on. Keep in mind that the release was an MVP release and exactly zero clouds had adopted it. My boss stopped me from sending the note. Instead of a congratulatory piling on from the execs, I got a piling on from the CISO for the lack of features and adoption.

I am a product manager and live in a product org… not an internal IT org, not the Security org. My world is about shipping features… my world was about to get rocked. I had lost sight of the most important thing, especially to the execs: adoption.

Thus service delivery without adoption is the same thing as no service delivery.

Lesson 2: Plan to replan

At this point it was roughly February 2020; no clouds had adopted the MFA service and we had just started to get metrics from the clouds as to their existing MFA and SSO penetration. It wasn’t pretty but at least we knew where we stood. And where we stood made it pretty clear that we were not going to be in a position to drive customer adoption of MFA and certainly not achieve 100% user coverage within the original year’s time.

We needed to reset our timeline and in doing so we had to draw up two new sets of plans: one for our clouds adopting the MFA service and one for our customer adoption. In that process, we moved the dates out for both. We gave our clouds more time to adopt the MFA service and moved the date for 100% customer end-user adoption to February 1 2022.

No matter how prepared you are at the beginning of a program like this, there will always be externalities that force you to adapt. 

Continue onwards

So with our new plans in hand and a reasonably well-oiled program in place, we began to roll out communications to customers in April of 2020. We explained what we wanted them to do – 100% MFA usage – and why: MFA is the single best control they could employ to protect themselves, their customers, and their data against things like credential stuffing and password reuse. And we let them know about the deadline of February 1 2022. We did this in the clearest ways we knew how to express ourselves. We did it in multiple formats, languages, and media. We had teams of people calling customers and making them aware of the MFA requirements.

Remember when I said hire a writer early… yeah, that. Clear comms is crucial. Clear comms about identity stuff to non-identity people is really difficult and crucial to get as right as possible (and then iterate… a lot.)

Gain traction; get feedback

The program team we had formed was based on a template for a feature adoption team. Years ago, Salesforce released a fundamental change to its UX tier which had a profound impact on how our customers built and interacted with apps on our platform. To drive adoption for the new UX tier, we put together an adoption team… and we lifted heavily from that team and their approach.

Using the wisdom of those people, we knew that we were going to have to meet our customers where they were. First and foremost, we needed a variety of ways to get the MFA message out. We used both email and in-app messages along with good ole’ fashioned phone calls – yes, we called our customer admins. Besides a microsite, we built ebooks and an FAQ. We put on multiple webinars and found space in our in-person events to spread the word. We even built some specialized apps in some of our products to drive MFA awareness.

And we listened… our Customer Success Group brought back copious notes from their interactions with customers. We opened a dedicated forum in our Trailblazer Community. We trained a small army of people to respond to customer questions. We tracked customer escalations and sentiment and reported all of this to the CISO and other senior execs.

Wobbler #1

In our leadership development courses at Salesforce, we do a business simulation. This simulation puts attendees in the shoes of executives of a mythical company and they are asked to make resource allocation and other decisions. Over the course of the classes, you compete with fellow attendees and get to see the impact of your decisions. It’s a lot of fun. 

One consistent thing in all of the simulations is “The Wobbler.” The Wobbler is an externality thrown at you and your teammates. They can be intense; they can definitely knock a winning team out of contention. And so you can say to a colleague, “We were doing great until this wobbler” and they totally know what you mean.

Predictably, the MFA program was due for a wobbler. This one came in from a discrepancy in what we were communicating and the CISO noticed it first. Despite the many status briefings. Despite having one of his trusted deputies as part of the steering committee for the MFA program. There was a big disconnect. The MFA Program was telling our customers “By February 1 2022 you need to be doing MFA or SSO.” The CISO thought we were telling customers “MFA or SSO with MFA.” 

There are probably a few MBA classes on executive communication that could be written about this “little” disconnect. There was going to be no changing the CISO’s mind; the program team simply needed to start communicating the requirement of MFA or SSO with MFA.

From our customers’ perspective, Salesforce was moving the goal posts. They were stressed enough as it was and this eroded trust. Our poor lead writer had a very bad week. The customer success teams doing outreach and talking to customers had very bad weeks. My teams had to redo their release plans to pull forward instrumentation to log and surface whether an SSO login used MFA upstream.

A word from our speaker

And now a word from a kinda popular SaaS Service Provider: “Hi, are you like me? Are you a service provider just trying to make the internet a safer place and increase the security posture of your customers but are thwarted by the lack of insight into the upstream authentication event? Isn’t that frustrating? But don’t worry, we have standards and things like AuthnContext in SAML and AMR claims in OIDC. Now if only on-prem and IDaaS IDPs would populate those claims consistently as well as consistently use the same values in those claims. If we could do that, it would make the world a better place. Don’t let this guy down.”

Ok I know this isn’t sexy stuff but please please please! It is damn hard as an SP to consistently get any insight into the upstream user authentication event. I know my own services can do better here when we act as an IDP. Please, industry peers, please please make this data available to downstream SPs. And, standards nerds, I know it ain’t sexy but can we please standardize or at least normalize not only the values in those claims but the order and meaning of the order of values within those claims. Pretty please? (include scrolling spreadsheet of all the amr values we’ve seen)
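To make that ask concrete, here is a minimal sketch (my own illustration, not Salesforce code) of the kind of check a downstream SP could run if IDPs populated the OIDC amr claim consistently. The claim values come from RFC 8176; which of them should count as MFA is an assumption baked into the example.

MFA_METHODS = {"mfa", "otp", "hwk", "sms", "swk"}  # assumption: amr values treated as MFA

def upstream_login_used_mfa(id_token_claims: dict) -> bool:
    """Return True if the decoded ID token's amr claim suggests MFA happened upstream."""
    amr = id_token_claims.get("amr", [])
    return any(method in MFA_METHODS for method in amr)

# Example: an upstream IDP that reports password plus one-time-password factors.
claims = {"sub": "user-123", "iss": "https://idp.example.com", "amr": ["pwd", "otp"]}
print(upstream_login_used_mfa(claims))  # True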

Step 4: Accommodate the hard use case

The wheels had begun to gain traction, so to speak. We heard from customer CISOs who were thrilled about our MFA requirements – it gave them the justification they were looking for to go much bigger with MFA. But we also heard from customers with hard use cases for whom there aren’t always great answers. For example, we have customers who use 3rd parties to administer and modify their Salesforce environments. Getting MFA into those peoples’ hands is tricky. Another example: people doing robotic process automation or running UX test suites struggle to meet the requirement of MFA on every UI-based login. Those users look like “regular” human users and have access to customer data. They need MFA. And yet the support for MFA in those areas is spotty at best.

We had another source of challenging use cases – brought to us by our ISV and OEM partners. These vital parts of our business have a unique relationship with our products and our customers and the challenges that our customers feel are amplified for our ISVs and OEMs.

What we learned was that there are going to be use cases that are just damn hard to deal with. 3rd party call centers. RPA tools. Managed service providers. The lesson here is – it’s okay. Your teams are made up of smart people, and even still there is no way to know all of these use cases at the outset of such a program. Find the flexibility to meet the customers where they are… and that includes negotiated empathy with your executives and stakeholders. I truly believe there is always a path forward but it does require flexibility.

Wobbler #2

At this point we had the clouds mostly adopted and people were rolling out MFA controls in their products. Customer adoption of MFA and SSO was climbing and we were feeling good. And, predictably, the universe decided to take us down a peg. Enter Wobbler #2 – outages.

Raise your hand if you know the people that maintain the DNS infrastructure at your company… if you don’t know them, find them. Bring them chocolate, whiskey, aspirin… DNS is hard. And when DNS goes squirrely it tends to have a massive blast radius. Salesforce had a DNS-related outage and the MFA service that most of our clouds had just adopted was impacted. 

And a few weeks after we recovered from that, the MFA service suffered a second outage due to a regional failover process not failing over in a predicted manner. 

We recovered, we learned, we strengthened the service, we strengthened ourselves. 

So when things are going well, just assume that Admiral Ackbar is going to appear in your life… “It’s a trap.” 

Step 5: Address the long tail

So where are we today? Well, while we found lots of MFA and SSO adoption in our largest customers – especially SSO – we have a lot of customers with fewer than 100 users, and their adoption rates were low. One concerning thing about these customers is that the ratio of users with admin rights to general users is very high. Where privileged users might make up just low single-digit percentages of the total user population in larger tenants, that share was much, much higher in smaller ones. Although we had a great outreach program, there are literally tens of thousands of tenants and thus tens of thousands of customers whose login configurations and behaviors we had to change.

And here is where we learned that we had to enlist automation, and that is where our teams are focused today: building ways to ensure that new tenants have MFA turned on by default, that customers have ways of opting out specific users such as their UX testing users, and that we have the means to turn on MFA for all customers – not just new ones – without breaking those that put in the effort to do MFA previously. That takes time, but it is time well spent – we are going to automatically change the behavior of the system, which directly impacts our customers’ users – it is not something one does lightly (one does not simply turn on mfa meme)

Lesson 3: Loving 100%

Standing here today, I can say that I really like the 100% goal. As I wrote this talk, I looked back at some of my email from the beginning of the project… and I am a little ashamed. I really fought the 100% goal hard… it wasn’t a good look. It wasn’t the right thing to do. The reason I like the goal is that although we are at roughly 80% of our monthly active users using MFA or SSO, had we not made 100% the goal then we’d have achieved less and been fine with where we are. Without that goal we wouldn’t have pushed to address the long tail of customers; we would not have innovated to find better solutions for both our customers and ourselves. Would I have liked our CISO to deliver the goal in a different way? Sure. But I have become a fan of a seemingly impossible goal… so long as it is expressed with empathy and care.

Step 6: Re-remember the goal

We ended last fiscal year with about 14 million monthly active users of MFA or SSO with MFA. They represent 14M people who are habituated; the identity ceremonies they perform include MFA.

And that has a huge knock-on effect. They bring that ceremony inclusive of MFA home with them. They bring the awareness of MFA to their families and friends. And this helps keep them safer in their business and their personal lives. The growth of MFA use in a business context is a huge deal professionally speaking. As I tell the extended team, what they have done and are doing is resume-building work: they rolled out and drove adoption of MFA across multiple lines of business at a 200 billion dollar company. That is no small feat!

But that knock-on effect – that those same users are going to bring MFA home with them and look to use it in their family lives… that, as an identity practitioner, is just as big of a deal. That makes the journey worth it.

Thank you.


Werdmüller on Medium

10 things I’m worrying about on the verge of new parenthood

I’m terrified. This is a subset of my anxieties.

Continue reading on Medium »


Ben Werdmüller

10 things I'm worrying about on the verge of new parenthood

One. Is it even ethical to bring a child into the world right now? During their lifetime we’ll see water scarcity and an increase in global conflict as a result of climate change. It’s going to get worse before it gets better. How, in good conscience, can I bring a new human into that?

It’s an imperfect answer, but I’ve arrived at this: what would the world look like if only the people who didn’t believe in climate change had children? Yes, they’re going to need to be part of the solution, because everyone will need to be. It’s a tough ask for a human who didn’t ask to be born. But I’m confident they’ll be an asset to the future.

Two. What does nationality look like? It’s important to me, but why, exactly?

I’m a third culture kid: living in the US is the first time I’ve spent an extended time in a place where I was a citizen. I’ve written before about how I consider myself to have no nationality and no religion.

The thing is, that’s not quite right: I am a product of all the nationalities and cultures that led up to me. That I don’t exactly identify with any of them doesn’t mean that they don’t belong to me.

But that’s just me: Erin, as the mother, carries a full half of their context, and has a different background to me. How do you honor both backgrounds and contexts, while also downplaying the importance of nationality and patriotism overall?

What’s important to me is that they know they’re from multiple places, and they know that the world is their oyster. Rather than patriotism, I want them to feel proud to be human, and to feel connected to all humans. I want them to have broad horizons and an inclusively global mindset. They can go anywhere and do anything they want: the world is awash with possibilities. And at the same time, everyone, everywhere matters, and people who are more local to them do not matter more than people who are more remote. I want my child to have the privilege of openness and connectedness.

Three. I want to keep them healthy and happy. That, in itself, is incredibly daunting. What if I hurt them somehow?

Four. I’ve spent my life in front of screens. I literally learned to write on a Sinclair ZX81, writing stories that incorporated the BASIC shortcuts on its keyboard. Characters would GOTO places a lot; they would RUN; THEN they would do something else. One of my first memories is watching the animated interstitial network announcements on our little TV in Amsterdam.

What should their relationship with devices be? The going advice is that introducing screens too early can interfere with their development. And at the same time, my dad in particular deliberately allowed me to play with everything. I took apart radios; I mucked around with computers with impunity; I developed, early on, a complete lack of fear of technology. And that’s served me very well.

I’ll admit to feeling a bit judgmental of parents of those toddlers out in the world who have iPads in carry-cases. But what right do I have to feel that way? I’ve never had a child, until now. Maybe I’ll feel completely different.

And actually, I feel very strongly about tablets and phones themselves. I didn’t have a device that couldn’t be hacked until I was in my twenties. Everything could be taken apart, programmed for, adjusted. There were no games consoles in our household, and cellphones weren’t really available until I was older. I like that philosophy: open technology only. Teach them early to be a maker, not a consumer.

Five. Should I buy a domain name for them? Reserve a Twitter username? Is that self-indulgent?

Six. It’s important to make sure babies interact with a wide range of people while they’re very little, to allow them to develop an understanding that every type of face is part of their circle. Infants learn about race in their first year; by 9 months old, they recognize faces from their own race better than others. By 6 months old, they may exhibit racial bias. So it’s incredibly important that their circles are diverse.

While this cognitive wiring is established early, developmental changes obviously continue throughout childhood. For these reasons - and also just because they’re better places to live for all kinds of reasons - it’s important to be in a cosmopolitan, diverse, open-minded location. Homogenous towns and cities are not what I want, both as a person, and for my child.

Seven. No, I don’t think ideological diversity is anywhere near as important as actual intersectional diversity. And I have no intention to allow bigotry or small-mindedness to enter their worldview.

Empathy, inclusion, love, understanding, and connectedness must be core values. Change is inevitable and to be embraced. Difference is beautiful. The world is to be explored and embraced.

Eight. I like the idea and philosophy of free-range parenting. Let the child explore and learn on their own terms, for crying out loud. Let them ride bikes in the neighborhood and hang out with their friends and generally live out The Goonies.

But that seems to be out of vogue? There’s a trend of helicopter parents who schedule their child’s every moment? The idea seems repellant to me - doesn’t it mean that they miss out on developing a degree of autonomy? - but am I right to feel that way?

Nine. My parents made friends through pre-natal and baby classes - and that’s where a lot of my early friends came from, too. Everything’s online now because of covid. Where are baby-friends supposed to come from?

Ten. How do we share photos and information with family and friends without compromising on privacy? Social media sites like Instagram and Facebook will be data-mined; email feels insecure because I don’t know who will end up seeing photos and messages. The really private tools and services are too hard to use for a lot of people. What’s the best practice? What does baby infosec look like?

 

Photo by Kelli McClintock on Unsplash

Thursday, 11. August 2022

John Philpin : Lifestream

FTC weighs new rules to protect Americans’ personal data.

FTC weighs new rules to protect Americans’ personal data. Here is a good place to start … Meta injecting code into websites visited by its users to track them.

Simon Willison

Litestream backups for Datasette Cloud (and weeknotes)

My main focus this week has been adding robust backups to the forthcoming Datasette Cloud.

Datasette Cloud is a SaaS service for Datasette. It allows people to create a private Datasette instance where they can upload data, visualize and transform it and share it with other members of their team. You can join the waiting list to try it out using this form.

I'm building Datasette Cloud on Fly, specifically on Fly Machines.

Security is a big concern for Datasette Cloud. Teams should only be able to access their own data - bugs where users accidentally (or maliciously) access data for another team should be protected against as much as possible.

To help guarantee that, I've designed Datasette Cloud so that each team gets their own, dedicated instance, running in a Firecracker VM managed by Fly. Their data lives in a dedicated volume.

Fly volumes already implement snapshot backups, but I'm interested in defence in depth. This is where Litestream comes in (coincidentally now part of Fly, although it wasn't when I first selected it as my backup strategy).

I'm using Litestream to constantly backup the data for each Datasette Cloud team to an S3 bucket. In the case of a complete failure of a volume, I can restore data from a backup that should be at most a few seconds out of date. Litestream also gives me point-in-time backups, such that I can recover a previous version of the data within a configurable retention window.

Keeping backups isolated

Litestream works by writing a constant stream of pages from SQLite's WAL (Write-Ahead Log) up to an S3 bucket. It needs the ability to both read and write from S3.

This requires making S3 credentials available within the containers that run Datasette and Litestream for each team account.

Credentials in those containers are not visible to the users of the software, but I still wanted to be confident that if the credentials leaked in some way the isolation between teams would be maintained.

Initially I thought about having a separate S3 bucket for each team, but it turns out AWS has a default limit of 100 buckets per account, and a hard limit of 1,000. I aspire to have more than 1,000 customers, so this limit makes a bucket-per-team seem like the wrong solution.

I've learned an absolute ton about S3 and AWS permissions building my s3-credentials tool for creating credentials for accessing S3.

One of the tricks I've learned is that it's possible to create temporary, time-limited credentials that only work for a prefix (effectively a folder) within an S3 bucket.

This means I can run Litestream with credentials that are specific to the team - that can read and write only from the team-ID/ prefix in the S3 bucket I am using to store the backups.

Obtaining temporary credentials

My s3-credentials tool can create credentials for a prefix within an S3 bucket like this:

s3-credentials create my-bucket-for-backus \
  --duration 12h \
  --prefix team-56/

This command uses the sts.assume_role() AWS method to create credentials that allow access to that bucket, attaching this generated JSON policy to it in order to restrict access to the provided prefix.
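For a rough sense of what that looks like under the hood, here is a sketch of the general approach: STS AssumeRole with an inline session policy scoped to a single prefix. The role ARN, function name and exact policy statements are my own illustration, not the actual s3-credentials internals.

import json
import boto3

def temporary_credentials_for_team(bucket: str, team_id: str, duration: int = 12 * 60 * 60):
    # Inline session policy that only allows access to the team-ID/ prefix
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/{team_id}/*"],
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
                "Condition": {"StringLike": {"s3:prefix": [f"{team_id}/*"]}},
            },
        ],
    }
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/backup-writer",  # hypothetical role
        RoleSessionName=f"litestream-{team_id}",
        Policy=json.dumps(policy),
        DurationSeconds=duration,
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken and Expiration
    return response["Credentials"]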

I extracted the relevant Python code from s3-credentials and used it to create a private API endpoint in my Datasette Cloud management server which could return the temporary credentials needed by the team container.

With the endpoint in place, my code for launching a team container can do this:

Create the volume and machine for that team (if they do not yet exist)
Generate a signed secret token that the machine container can exchange for its S3 credentials
Launch the machine container, passing it the secret token
On launch, the container runs a script which exchanges that secret token for its 12 hour S3 credentials, using the private API endpoint I created
Those credentials are used to populate the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN environment variables used by Litestream
Start Litestream, which then starts Datasette

Restarting every 12 hours

You may be wondering why I bothered with that initial secret token - why not just pass the temporary AWS credentials to the container when I launch it?

The reason for this is that I need to be able to obtain fresh credentials every 12 hours.

A really neat feature of Fly Machines is that they support scale-to-zero. You can stop them, and Fly will automatically restart them the next time they receive traffic.

All you need to do is call sys.exit(0) in your Python code (or the equivalent in any other language) and Fly will stop your container... and then restart it again with a couple of seconds of cold start time the next time an HTTP request for your container hits the Fly router.

So far I'm mainly using this to avoid the cost of running containers when they aren't actually in use. But there's a neat benefit when it comes to Litestream too.

I'm using S3 credentials which expire after 12 hours. This means I need to periodically refresh the credentials and restart Litestream or it will stop being able to write to the S3 bucket.

After considering a few ways of doing this, I selected the simplest to implement: have Datasette call sys.exit(0) after ten hours, and let Fly restart the container causing my startup script to fetch freshly generated 12 hour credentials and pass them to Litestream.

I implemented this by adding it as a new setting to my existing datasette-scale-to-zero plugin. You can now configure that with "max-age": "10h" and it will shut down Datasette once the server has been running for that long.

Why does this require my own secret token system? Because when the container is restarted, it needs to make an authenticated call to my endpoint to retrieve those fresh S3 credentials. Fly persists environment variable secrets to the container between restarts, so that secret can be long-lived even while it is exchanged for short-term S3 credentials.
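Here is a minimal sketch of what such a startup script could look like; the token variable name, endpoint URL, database path and port are invented for illustration and are not the real Datasette Cloud code.

import os
import subprocess

import httpx  # any HTTP client would do

def fetch_s3_credentials() -> dict:
    # The long-lived secret is injected by Fly as an environment variable secret
    token = os.environ["CREDENTIALS_EXCHANGE_TOKEN"]  # hypothetical name
    response = httpx.post(
        "https://management.internal/api/s3-credentials",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return response.json()

def main():
    creds = fetch_s3_credentials()
    # Litestream picks up its AWS credentials from these environment variables
    os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
    os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
    os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]
    # Litestream replicates the SQLite file and runs Datasette as a child process
    subprocess.run([
        "litestream", "replicate",
        "-exec", "datasette serve /data/data.db --port 8080",
    ])

if __name__ == "__main__":
    main()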

I only just put the new backup system in place, so I'm exercising it a bit before I open things up to trial users - but so far it's looking like a very robust solution to the problem.

s3-ocr improvements

I released a few new versions of s3-ocr this week, as part of my ongoing project working with the San Francisco Microscopical Society team to release a searchable version of their scanned document archives.

The two main improvements are:

A new --dry-run option to s3-ocr start which shows you what the tool will do without making any changes to your S3 bucket, or triggering any OCR jobs. #22

s3-ocr start used to fail with an error if running it would create more than 100 (or 600 depending on your region) concurrent OCR jobs. The tool now knows how to identify that error and pause and retry starting the jobs instead. #21

The fix that took the most time is this: installations of the tool no longer arbitrarily fail to work depending on the environment you install them into!

Solving this took me the best part of a day. The short version is this: Click 8.1.0 introduced a new feature that lets you use @cli.command as a decorator instead of @cli.command(). This meant that installing s3-ocr in an environment that already had a previous version of Click would result in silent errors.

The solution is simple: pin to click>=8.1.0 in the project dependencies if you plan to use this new syntax.

If I'd read the Click changelog more closely I would have saved myself a whole lot of time.

Issues #25 and #26 detail the many false turns I took trying to figure this out.

More fun with GPT-3 and DALL-E

This tweet scored over a million impressions on Twitter:

New hobby: prototyping video games in 60 seconds using a combination of GPT-3 and DALL-E

Here's "Raccoon Heist" pic.twitter.com/xQ3Vm8p2XW

- Simon Willison (@simonw) August 5, 2022

As this got retweeted outside of my usual circles it started confusing people who thought the "prototype" was a working game, as opposed to a fake screenshot and a paragraph of descriptive text! I wasn't kidding when I said I spent 60 seconds on this.

I also figured out how to use GPT-3 to write jq one-liners. I love jq but I have to look up how to use it every time, so having GPT-3 do the work for me is a pretty neat time saver. More on that in this TIL: Using GPT-3 to figure out jq recipes

Releases this week

s3-ocr: 0.6.3 - (9 releases total) - 2022-08-10
Tools for running OCR against files stored in S3

datasette-scale-to-zero: 0.2 - (4 releases total) - 2022-08-05
Quit Datasette if it has not received traffic for a specified time period

shot-scraper: 0.14.3 - (18 releases total) - 2022-08-02
A command-line utility for taking automated screenshots of websites

s3-credentials: 0.12.1 - (13 releases total) - 2022-08-01
A tool for creating credentials for accessing S3 buckets

datasette-sqlite-fts4: 0.3.2 - (2 releases total) - 2022-07-31

TIL this week

Related content with SQLite FTS and a Datasette template function
Using boto3 from the command line
Trying out SQLite extensions on macOS
Mocking a Textract LimitExceededException with boto
Using GPT-3 to figure out jq recipes

Ben Werdmüller

Equality on the ballot: a free event with Stacey Abrams

This is one of those times I can’t believe I get to work at The 19th. We’re putting on a free event on voter equality with a roster of very smart speakers headlined by Stacey Abrams, in partnership with Live Nation Women and Teen Vogue, live in Atlanta or free to watch afterwards online. Go register!

This is one of those times I can’t believe I get to work at The 19th. We’re putting on a free event on voter equality with a roster of very smart speakers headlined by Stacey Abrams, in partnership with Live Nation Women and Teen Vogue, live in Atlanta or free to watch afterwards online.

Go register!


Mike Jones: self-issued

JWK Thumbprint URI is now RFC 9278

The JWK Thumbprint URI specification has been published as RFC 9278. Congratulations to my co-author, Kristina Yasuda, on the publication of her first RFC! The abstract of the RFC is: This specification registers a kind of URI that represents a JSON Web Key (JWK) Thumbprint value. JWK Thumbprints are defined in RFC 7638. This enables […]

The JWK Thumbprint URI specification has been published as RFC 9278. Congratulations to my co-author, Kristina Yasuda, on the publication of her first RFC!

The abstract of the RFC is:


This specification registers a kind of URI that represents a JSON Web Key (JWK) Thumbprint value. JWK Thumbprints are defined in RFC 7638. This enables JWK Thumbprints to be used, for instance, as key identifiers in contexts requiring URIs.

The need for this arose during specification work in the OpenID Connect working group. In particular, JWK Thumbprint URIs are used as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs in the Self-Issued OpenID Provider v2 specification.


Simon Willison

datasette on Open Source Insights

datasette on Open Source Insights

Open Source Insights is "an experimental service developed and hosted by Google to help developers better understand the structure, security, and construction of open source software packages". It calculates scores for packages using various automated heuristics. A JSON version of the resulting score card can be accessed using https://deps.dev/_/s/pypi/p/{package_name}/v/

Via sethmlarson/pypi-data


sethmlarson/pypi-data

sethmlarson/pypi-data

Seth Michael Larson uses GitHub releases to publish a ~325MB (gzipped to ~95MB) SQLite database on a roughly monthly basis that contains records of 370,000+ PyPI packages plus their OpenSSF score card metrics. It's a really interesting dataset, but also a neat way of packaging and distributing data - the scripts Seth uses to generate the database file are included in the repository.

Via @sethmlarson

Wednesday, 10. August 2022

John Philpin : Lifestream

Liz Truss facing first sleaze investigation over ‘murky dona

Liz Truss facing first sleaze investigation over ‘murky donations’

Two ways to take the headline …

first … and the second, third, fourth charges are due any moment.

first … oh my goodness.

I wonder which it is?


Kevin Spacey ordered to pay $31m to House of Cards producers

Kevin Spacey ordered to pay $31m to House of Cards producers.

… the award seems a little out of wack compared to Alex Jones.

No?


Truth in humour … You’re an asset waiting to happen.

Truth in humour … You’re an asset waiting to happen.

Have to say, the ad is so famous with much written about it,

Have to say, the ad is so famous with much written about it, that I thought I knew all the ‘nuances’ - at least those nuances in the public domain. Turns out I was wrong.

Why 1984 Debuted in 1983


“Find a writer you like and read them. If you can’t find t

“Find a writer you like and read them. If you can’t find the writer whose work you want to read, become that writer. That’s what I did. It’s great.”

💬 Cory Doctorow

The Full Story


Simon Willison

Let websites framebust out of native apps

Let websites framebust out of native apps

Adrian Holovaty makes a compelling case that it is Not OK that we allow native mobile apps to embed our websites in their own browsers, including the ability for them to modify and intercept those pages (it turned out today that Instagram injects extra JavaScript into pages loaded within the Instagram in-app browser). He compares this to frame-busting on the regular web, and proposes that the X-Frame-Options: DENY header which browsers support to prevent a page from being framed should be upgraded to apply to native embedded browsers as well.

I'm not convinced that reusing X-Frame-Options: DENY would be the best approach - I think it would break too many existing legitimate uses - but a similar option (or a similar header) specifically for native apps which causes pages to load in the native OS browser instead sounds like a fantastic idea to me.

Via @adrianholovaty


Introducing sqlite-http: A SQLite extension for making HTTP requests

Introducing sqlite-http: A SQLite extension for making HTTP requests

Characteristically thoughtful SQLite extension from Alex, following his sqlite-html extension from a few days ago. sqlite-http lets you make HTTP requests from SQLite - both as a SQL function that returns a string, and as a table-valued SQL function that lets you independently access the body, headers and even the timing data for the request.

This write-up is excellent: it provides interactive demos but also shows how additional SQLite extensions such as the new-to-me "define" extension can be combined with sqlite-http to create custom functions for parsing and processing HTML.

Via @agarcia_me


How SQLite Helps You Do ACID

How SQLite Helps You Do ACID

Ben Johnson's series of posts explaining the internals of SQLite continues with a deep look at how the rollback journal works. I'm learning SO much from this series.

Via @benbjohnson


curl-impersonate

curl-impersonate "A special build of curl that can impersonate the four major browsers: Chrome, Edge, Safari & Firefox. curl-impersonate is able to perform TLS and HTTP handshakes that are identical to that of a real browser." I hadn't realized that it's become increasingly common for sites to use fingerprinting of TLS and HTTP handshakes to block crawlers. curl-impersonate attempts to impe

curl-impersonate

"A special build of curl that can impersonate the four major browsers: Chrome, Edge, Safari & Firefox. curl-impersonate is able to perform TLS and HTTP handshakes that are identical to that of a real browser."

I hadn't realized that it's become increasingly common for sites to use fingerprinting of TLS and HTTP handshakes to block crawlers. curl-impersonate attempts to impersonate browsers much more accurately, using tricks like compiling with Firefox's nss TLS library and Chrome's BoringSSL.

Via Ask HN: What are the best tools for web scraping in 2022?

Tuesday, 09. August 2022

Simon Willison

sqlite-zstd: Transparent dictionary-based row-level compression for SQLite

sqlite-zstd: Transparent dictionary-based row-level compression for SQLite

Interesting SQLite extension from phiresky, the author of that amazing SQLite WASM hack from a while ago which could fetch subsets of a large SQLite database using the HTTP range header. This extension, written in Rust, implements row-level compression for a SQLite table by creating compression dictionaries for larger chunks of the table, providing better results than just running compression against each row value individually.


SeanBohan.com

The Panopticon is (going to be) Us

I originally wrote this on the ProjectVRM mailing list in January of 2020. I made some edits to fix errors and clunky phrasing I didn’t like. It is a rant and a series of observations and complaints derived from after dinner chats/walks with my significant other (who is also a nerd). This is a weak-tea attempt at the kind of amazing threads Cory Doctorow puts out.

I still hold out hope (for privacy, for decentralized identity, for companies realizing their user trust is worth way more than this quarter’s numbers). But unless there are changes across the digital world (people, policy, corps, orgs), it is looking pretty dark.

TLDR: 

There is a reason why AR is a favorite technology for Black Mirror screenwriters. 

Where generally available augmented reality and anonymity in public is going is bad and it is going to happen unless the users start demanding better and the Bigs (GAMAM+) decide that treating customers better is a competitive priority. 

My (Dark) Future of AR:

Generally available Augmented Reality will be a game changer for user experience, utility and engagement. The devices will be indistinguishable from glasses and everyone will wear them. 

The individual will wear their AR all the time, capturing sound, visuals, location and other data points at all times as they go about their day. They will only very rarely take it off (how often do you turn off your mobile phone?), capturing what they see, maybe what they hear, and everyone around them in the background, geolocated and timestamped. 

Every user of this technology will have new capabilities (superpowers!):

Turn by turn directions in your field of view
Visually search their field of view during the time they were in a gallery a week ago (time travel!)
Find live performance details from a band’s billboard (image recognition!)
Product recognition on the shelves of the grocery store (computer-vision driven dynamic shopping lists!)
Know when someone from your LinkedIn connections is also in a room you are in, along with where they are working now (presence! status! social!)

Data (images, audio, location, direction, etc.) will be directly captured. Any data exhaust (metadata, timestamps, device data, sounds in the background, individuals and objects in the background) will be hoovered up by whoever is providing you the “service”. All of this data (direct and indirect) will probably be out of your control or awareness. Compare it to the real world: do you know every organization that has data *about* you right now? What happens when that is 1000x. 

Thanks to all of this data being vacuumed up and processed and parsed and bought and sold, Police (state, fed, local, contract security, etc.) WILL get new superpowers too. They can and will request all of the feeds from Amazon and Google and Apple for a specific location at a specific time, Because your location is in public, all three will have a harder time resisting (no expectation of privacy, remember?). Most of these requests will be completely legitimate and focused on crime or public safety. There will definitely be requests that are unethical, invalid and illegal and citizens will rarely find out about these. Technology can and will be misused in banal and horrifying ways.

GAMAM* make significant revenue from advertising. AR puts commercial realtime data collection on steroids.

“What product did he look at? For how long? Where was he? Let’s offer him a discount in realtime!”

The negative impacts won’t be for everyone, though. If I had a million dollars I would definitely take the bet where Elon Musk, Eric Schmidt, the Collisons, Sergey, Larry, Bezos and Tim Cook and other celebrities will all have the ability to “opt out” of being captured and processed. The rest of us will not get to opt out unless we pay $$$ – continuing to bring the old prediction “it isn’t how much privacy you have a right to, it is how much privacy you can afford” to life. 

You won’t know who is recording you and have to assume it is happening all of the time. 

We aren’t ready

Generally available augmented reality has societal / civil impacts we aren’t prepared for. We didn’t learn any lessons over the last 25 years regarding digital technology and privacy. AR isn’t the current online world where you can opt out, run an adblocker, run a VPN, not buy from Amazon, delete your social media account, compartmentalize browsers (one for work, one for research, one for personal), etc. AR is an overlay onto the real world, where everyone will be indirectly watching everyone else… for someone else’s benefit. I used the following example discussing this challenge with a friend:

2 teens took a selfie on 37th street and 8th avenue in Manhattan to celebrate their trip to NYC.
In the background of their selfie a recovering heroin addict steps out of a methadone clinic on the block. His friends and coworkers don’t know he has a problem but he is working hard to get clean.
The teens posted the photo online.
That vacation photo was scraped by ClearView AI or another company using similar tech with less public exposure.
Once captured, it would be trivial to identify him.
Months or years later (remember, there is no expiration date on this data and data gets cheaper and cheaper every day) he applies for a job and is rejected during the background check.
Why? Because the background check vendor used by his prospective employer pays for a service that compares his photo to an index of “questionable locations and times/dates” including protest marches, known drug locations, riots, and methadone clinics. That data is then processed by an algorithm that scores him as a risk and he doesn’t get the job.

“Redlining” isn’t a horrible practice of the past, with AR we can do it in new and awful ways. 

Indirect data leakage is real: we leak other people’s data all the time. With AR, the panopticon is us: you and me and everyone around us who will be using this tech in their daily lives. This isn’t the state or Google watching us – AR is tech where the surveillance is user generated from my being able to get turn by turn directions in my personal HeadsUp Display. GAFAM are downstream and will exploit all that sweet sweet data. 

This is going from Surveillance to “Sous-veillance”… but on steroids because we can’t opt out of going to work, or walking down the street, or running to the grocery, or riding the subway to a job interview or, or going to a protest march, or going to an AA meeting, or, or, or living our lives. A rebuttal to the “I don’t have to worry about surveillance because I have nothing to hide” argument is that *we all* have to fight for privacy and reduced surveillance, especially those who have nothing to hide, because some of our fellow humans are in marginalized communities who cannot fight for themselves and because this data can and will impact us in ways we can’t identify. The convenience of reading emails while walking to work shouldn’t possibly out someone walking into an AA meeting, or walking out of a halfway house, etc.

No consumer, once they get the PERSONAL, INTIMATE value and the utility out of AR, will want to have the functionality of their AR platform limited or taken away by any law about privacy – even one that protects *their* privacy. This very neatly turns everyone using generally available AR technology into a surveillance node. 

The panopticon is us. 

There is a reason AR is a favorite plot device in Black Mirror. 

It is going to be up to us. 

For me, AR is the most “oh crap” thing out there, right now. I love the potential of the technology, yet I am concerned about how it will be abused if we aren’t VERY careful and VERY proactive, and based on how things have been going for the last 20+ years. I have a hard time being positive on where this is going to go. 

There are a ton of people working on privacy in AR/VR/XR. The industry is still working on the “grammar” or “vocabulary” for these new XR-driven futures and there are a lot of people and organized efforts to prevent some of the problems mentioned above. We don’t have societal-level agreements on what is and is not acceptable when it comes to personal data NOW. In a lot of cases the industry is looking forward to ham-handedly trying to stuff Web2, Web1 and pre-Web business models (advertising) into this very sleek, new, super-powered platform. Governments love personal data even though they are legislating on it (in some effective and not effective ways). 

The tech (fashion, infrastructure) is moving much faster than culture and governance can react. 

My belief, in respect to generally available Augmented Reality and the potential negative impacts on the public, is we are all in this together and the solution isn’t a tech or policy or legislative or user solution but a collective one. We talk about privacy a lot and what THEY (govs, adtech, websites, hardware, iot, services, etc.) are doing to US, but what about what we are doing to each other? Yup, individuals need to claim control over their digital lives, selves and data. Yes, Self Sovereign Identity as default state would help. 

To prevent the potential dystopias mentioned above, we need aggressive engagement by Users. ALL of us need to act in ways that protect our privacy/identity/data/digital self as well as those around us. WE are leaking our friends’ identity, correlated attributes, and data every single day. Not intentionally, but via our own digital (and soon physical thanks to AR) data exhaust. We need to demand to be treated better by the companies, orgs and govs we interact with on a daily basis. 

Governments need to get their act together in regards to policy and legislation. There needs to be real consequences for bad behavior and poor stewardship of users data. 

Businesses need to start listening to their customers and treating them like customers and not sheep to be shorn. Maybe companies like AVAST can step up and bring their security/privacy know-how to help users level up. Maybe a company like Facebook can pivot and “have the user’s back” in this future.

IIW, MyData, CustomerCommons, VRM, and the Decentralized/Self Sovereign Identity communities are all working towards changing this for the good of everyone.

At the end of the day, we also need a *Digital Spring* where people stand up and say “no more” to all the BS happening right now (adtech, lack of agency, abysmal data practices, lack of liberty for digital selves) before we get to a world where user-generated surveillance is commonplace.

(Yes dear reader, algorithms are a big part of this issue and I am only focused on the AR part of the problem with this piece. The problem is a big awful venn diagram of issues and actors with different incentives).

The post The Panopticon is (going to be) Us first appeared on SeanBohan.com.

Monday, 08. August 2022

Just a Theory

RFC: Restful Secondary Key API

A RESTful API design conundrum and a proposed solution.

I’ve been working on a simple CRUD API at work, with an eye to making a nicely-designed REST interface for managing a single type of resource. It’s not a complicated API, following best practices recommended by Apigee and Microsoft. It features exactly the sorts of APIs you’d expect if you’re familiar with REST, including:

POST /users: Create a new user resource
GET /users/{uid}: Read a user resource
PUT /users/{uid}: Update a user resource
DELETE /users/{uid}: Delete a user resource
GET /users?{params}: Search for user resources

If you’re familiar with REST, you get the idea.

There is one requirement that proved a bit of a design challenge. We will be creating a canonical ID for all resources managed by the service, which will function as the primary key. The APIs above reference that key by the {uid} path variable. However, we also need to support fetching a single resource by a number of existing identifiers, including multiple legacy IDs, and natural keys like, sticking to the users example, usernames and email addresses. Unlike the search API, which returns an array of resources, we need a nice single API like GET /users/{uid} that returns a single resource, but for a secondary key. What should it look like?

None of my initial proposals were great (using username as the sample secondary key, though again, we need to support a bunch of these):

GET /users?username={username} — consistent with search, but does it return a collection like search or just a single entry like GET /users/{uid}? Would be weird to return an array or not based on which parameters were used.

GET /users/by/username/{username} — bit weird to put a preposition in the URL. Besides, it might conflict with a planned API to fetch subsets of info for a single resource, e.g., GET /users/{uid}/profile, which might return just the profile object.

GET /user?username={username} — Too subtle to have the singular rather than plural, but perhaps the most REST-ish.

GET /lookup?obj=user&username={username} — Uses a special verb, not very RESTful.

I asked around a coding Slack, posting a few possibilities, and friendly API designers suggested some others. We agreed it was an interesting problem, easily solved if there was just one alternate that never conflicts with the primary key ID, such as GET /users/{uid || username}. But of course that’s not the problem we have: there are a bunch of these fields, and they may well overlap!

There was some interest in GET /users/by/username/{username} as an aesthetically-pleasing URL, plus it allows for

/by => list of unique fields
/by/username/ => list of all usernames?

But again, it runs up against the planned use of subdirectories to return sub-objects of a resource. One other I played around with was: GET /users/user?username={username}: The user sub-path indicates we want just one user much more than /by does, and it’s unlikely we’d ever use user to name an object in a user resource. But still, it overloads the path to mean one thing when it’s user and another when it’s a UID.

Looking back through the options, I realized that what we really want is an API that is identical to GET /users/{uid} in its behaviors and response, just with a different key. So what if we just keep using that, as originally suggested by a colleague as GET /users/{uid || username} but instead of just the raw value, we encode the key name in the URL. Turns out, colons (:) are valid in paths, so I defined this route:

GET /users/{key}:{value}: Fetch a single resource by looking up the {key} with the {value}. Supported {key} params are legacy_id, username, email_address, and even uid. This then becomes the canonical “look up a user resource by an ID” API.

The nice thing about this API is that it’s consistent: all keys are treated the same, as long as no key name contains a colon. Best of all, we can keep the original GET /users/{uid} API around as an alias for GET /users/uid:{value}. Or, better, continue to refer to it as the canonical path, since the PUT and DELETE actions map only to it, and document the GET /users/{key}:{value} API as accessing an alias or symlink for GET /users/{uid}. Perhaps return a Location header to the canonical URL, too?
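For illustration only, here is a tiny sketch of how a handler might parse that path segment; the key names and the bare-value default simply mirror the proposal above, and none of this is the actual implementation.

ALLOWED_KEYS = {"uid", "legacy_id", "username", "email_address"}

def parse_lookup_segment(segment: str) -> tuple[str, str]:
    """Split 'username:theory' into ('username', 'theory'); a bare value is treated as a uid."""
    if ":" in segment:
        key, value = segment.split(":", 1)
    else:
        key, value = "uid", segment  # keeps GET /users/{uid} working as an alias
    if key not in ALLOWED_KEYS:
        raise ValueError(f"unsupported lookup key: {key}")
    return key, value

print(parse_lookup_segment("username:theory"))  # ('username', 'theory')
print(parse_lookup_segment("8c8a9f3e"))         # ('uid', '8c8a9f3e')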

In any event, as far as I can tell this is a unique design, so maybe it’s too weird or not properly RESTful? Would love to know of any other patterns designed to solve the problem of supporting arbitrarily-named secondary unique keys. What do you think?

More about… REST API Secondary Key RFC

Damien Bod

Debug Logging Microsoft.Identity.Client and the MSAL OAuth client credentials flow

This post shows how to add debug logging to the Microsoft.Identity.Client MSAL client which is used to implement an OAuth2 client credentials flow using a client assertion. The client uses the MSAL nuget package. PII logging was activated and the HttpClient was replaced to log all HTTP requests and responses from the MSAL package.

Code: ConfidentialClientCredentialsCertificate

The Microsoft.Identity.Client is used to implement the client credentials flow. A known certificate is used to implement the client authentication using a client assertion in the token request. The IConfidentialClientApplication uses a standard client implementation with two extra extension methods, one to add the PII logging and a second to replace the HttpClient used for the MSAL requests and responses. The certificate is read from Azure Key Vault using the Azure SDK and managed identities on a deployed instance.

// Use Key Vault to get certificate
var azureServiceTokenProvider = new AzureServiceTokenProvider();

// Get the certificate from Key Vault
var identifier = _configuration["CallApi:ClientCertificates:0:KeyVaultCertificateName"];
var cert = await GetCertificateAsync(identifier);

var scope = _configuration["CallApi:ScopeForAccessToken"];
var authority = $"{_configuration["CallApi:Instance"]}{_configuration["CallApi:TenantId"]}";

// client credentials flows, get access token
IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
    .Create(_configuration["CallApi:ClientId"])
    .WithAuthority(new Uri(authority))
    .WithHttpClientFactory(new MsalHttpClientFactoryLogger(_logger))
    .WithCertificate(cert)
    .WithLogging(MyLoggingMethod, Microsoft.Identity.Client.LogLevel.Verbose,
        enablePiiLogging: true, enableDefaultPlatformLogging: true)
    .Build();

var accessToken = await app.AcquireTokenForClient(new[] { scope }).ExecuteAsync();

The GetCertificateAsync method loads the certificate from an Azure key vault. This is slow in local development, and you could replace it with a host-installed certificate for development.

private async Task<X509Certificate2> GetCertificateAsync(string identifier)
{
    var vaultBaseUrl = _configuration["CallApi:ClientCertificates:0:KeyVaultUrl"];
    var secretClient = new SecretClient(vaultUri: new Uri(vaultBaseUrl),
        credential: new DefaultAzureCredential());

    // Read the certificate (stored as a secret) using the secret client.
    var secretName = identifier;
    //var secretVersion = "";
    KeyVaultSecret secret = await secretClient.GetSecretAsync(secretName);

    var privateKeyBytes = Convert.FromBase64String(secret.Value);
    var certificateWithPrivateKey = new X509Certificate2(privateKeyBytes, string.Empty,
        X509KeyStorageFlags.MachineKeySet);

    return certificateWithPrivateKey;
}

WithLogging

The WithLogging method is used to add the PII logging and to change the log level. You should never do this on a production deployment, as all the PII data would get logged and saved to the logging persistence. This includes access tokens from all users or application clients using the client package. It is great for development, if you need to see why an access token does not work with an API and to check the claims inside the access token. The MyLoggingMethod method is used in the WithLogging extension method.

void MyLoggingMethod(Microsoft.Identity.Client.LogLevel level, string message, bool containsPii)
{
    _logger.LogInformation("MSAL {level} {containsPii} {message}", level, containsPii, message);
}

The WithLogging can be used as follows:

.WithLogging(MyLoggingMethod, Microsoft.Identity.Client.LogLevel.Verbose, enablePiiLogging: true, enableDefaultPlatformLogging: true)

Now all logs will be logged for this client.

WithHttpClientFactory

I would also like to see how the MSAL package implements the OAuth client credentials flow, and what is sent in the requests and the corresponding responses. I replaced the MsalHttpClientFactory with my MsalHttpClientFactoryLogger implementation and logged everything.

.WithHttpClientFactory(new MsalHttpClientFactoryLogger(_logger))

To implement this, I used a DelegatingHandler implementation. This logs the full HTTP requests and responses for the MSAL client.

using Microsoft.Extensions.Logging;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace ServiceApi.HttpLogger;

public class MsalLoggingHandler : DelegatingHandler
{
    private ILogger _logger;

    public MsalLoggingHandler(HttpMessageHandler innerHandler, ILogger logger)
        : base(innerHandler)
    {
        _logger = logger;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var builder = new StringBuilder();
        builder.AppendLine("MSAL Request: {request}");
        builder.AppendLine(request.ToString());

        if (request.Content != null)
        {
            builder.AppendLine();
            builder.AppendLine(await request.Content.ReadAsStringAsync());
        }

        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);

        builder.AppendLine();
        builder.AppendLine("MSAL Response: {response}");
        builder.AppendLine(response.ToString());

        if (response.Content != null)
        {
            builder.AppendLine();
            builder.AppendLine(await response.Content.ReadAsStringAsync());
        }

        _logger.LogDebug(builder.ToString());
        return response;
    }
}

The message handler is used in the IMsalHttpClientFactory implementation. I pass the default ILogger into the method and use this to log. In the source code, Serilog is used.

Do not use this in production as everything gets logged and persisted to the server. This is good to see how the client is implemented.

using Microsoft.Extensions.Logging;
using Microsoft.Identity.Client;
using System.Net.Http;

namespace ServiceApi.HttpLogger;

public class MsalHttpClientFactoryLogger : IMsalHttpClientFactory
{
    private static HttpClient _httpClient;

    public MsalHttpClientFactoryLogger(ILogger logger)
    {
        if (_httpClient == null)
        {
            _httpClient = new HttpClient(new MsalLoggingHandler(new HttpClientHandler(), logger));
        }
    }

    public HttpClient GetHttpClient()
    {
        return _httpClient;
    }
}

OAuth client credentials with client assertion

I then ran the extra logging with an OAuth2 client credentials flow that uses client assertions for the client authentication.

The discovery endpoint is called first from the MSAL client for the Azure App registration used to configure the client. This returns all the well known endpoints.

MSAL Request: {request}
Method: GET, RequestUri: 'https://login.microsoftonline.com/common/discovery/instance?api-version=1.1&authorization_endpoint=https%3A%2F%2Flogin.microsoftonline.com%2F7ff95b15-dc21-4ba6-bc92-824856578fc1%2Foauth2%2Fv2.0%2Fauthorize'

MSAL Response: {response}
{
  "tenant_discovery_endpoint": "https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/v2.0/.well-known/openid-configuration",

The token is requested using a client_assertion parameter with a signed JWT token using the client certificate created for this application. Only this client knows the private key and the public key was uploaded to the Azure App registration. See the following link for the spec details:

https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication

If the JWT has the correct claims and is signed with the correct certificate, an access token is returned for the application confidential client. This request can only be created and sent by an application in possession of the certificate's private key. This is more secure than the same OAuth flow using client secrets, as any client can send a token request once the secret is shared. Using a client assertion with a signed JWT request achieves better client authentication. The request cannot be used twice; a correct implementation enforces this by validating the jti claim in the signed JWT. The token must only be used once.
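To make the shape of that client_assertion concrete, here is a rough, hand-rolled sketch of the signed JWT that MSAL builds internally, using the System.IdentityModel.Tokens.Jwt package. This is my own illustration, not code from the post: iss and sub are the client_id, aud is the tenant's token endpoint, and jti is the single-use identifier mentioned above.

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography.X509Certificates;
using Microsoft.IdentityModel.Tokens;

static string CreateClientAssertion(string clientId, string tokenEndpoint, X509Certificate2 cert)
{
    var handler = new JwtSecurityTokenHandler();
    var descriptor = new SecurityTokenDescriptor
    {
        // iss and sub are both the client_id, aud is the token endpoint
        Issuer = clientId,
        Audience = tokenEndpoint,
        Subject = new ClaimsIdentity(new[]
        {
            new Claim("sub", clientId),
            new Claim("jti", Guid.NewGuid().ToString()) // single-use identifier
        }),
        Expires = DateTime.UtcNow.AddMinutes(5),
        SigningCredentials = new X509SigningCredentials(cert, SecurityAlgorithms.RsaSha256)
    };

    // Compact, signed JWT that goes into the client_assertion form parameter
    return handler.CreateEncodedJwt(descriptor);
}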

2022-08-03 20:09:40.364 +02:00 [DBG] MSAL Request: {request}
Method: POST, RequestUri: 'https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/oauth2/v2.0/token', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
  Content-Type: application/x-www-form-urlencoded
}
client_id=b178f3a5-7588-492a-924f-72d7887b7e48
&client_assertion_type=urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion-type%3Ajwt-bearer
&client_assertion=eyJhbGciOiJSUzI1...
&scope=api%3A%2F%2Fb178f3a5-7588-492a-924f-72d7887b7e48%2F.default
&grant_type=client_credentials

MSAL Response: {response}
StatusCode: 200, ReasonPhrase: 'OK', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
{
  Content-Type: application/json; charset=utf-8
}
{
  "token_type":"Bearer",
  "expires_in":3599,"ext_expires_in":3599,
  "access_token":"eyJ0eXAiOiJKV..."
}

The signed JWT client assertion contains the claims required by the OpenID Connect specification; further optional claims can be included in the request JWT as required. Microsoft.Identity.Client supports adding custom claims if required.

By adding PII logs and logging all requests and responses from the MSAL client, it is possible to see exactly how the client was implemented and works without having to reverse engineer the code. Do not use this in production!

For clients without a user, you should implement the client credentials flow using certificates whenever possible, as this is more secure than the same flow using client secrets.

Links

https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-net-client-assertions

https://docs.microsoft.com/en-us/azure/architecture/multitenant-identity/client-certificate

https://jwt.ms/

https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication


Ben Werdmüller

Newsletter housekeeping

If you’re subscribing via email, heads up that I’m thinking about changing my newsletter engine, possibly to Buttondown. You shouldn’t see anything particularly different - and if you’re subscribing via RSS, nothing will change at all. But, to be honest, I’ll be paying a lot less money for a lot more power. As always, I really appreciate it when people share around my posts, or let me know if t

If you’re subscribing via email, heads up that I’m thinking about changing my newsletter engine, possibly to Buttondown. You shouldn’t see anything particularly different - and if you’re subscribing via RSS, nothing will change at all. But, to be honest, I’ll be paying a lot less money for a lot more power.

As always, I really appreciate it when people share around my posts, or let me know if they’ve disagreed with something I’ve written. Your time and attention are limited; thanks for sticking around.


Jon Udell

The Velvet Bandit’s COVID series

The Velvet Bandit is a local street artist whose work I’ve admired for several years. Her 15 minutes of fame happened last year when, as reported by the Press Democrat, Alexandria Ocasio-Cortez wore a gown with a “Tax the Rich” message that closely resembled a VB design. I have particularly enjoyed a collection that I … Continue reading The Velvet Bandit’s COVID series

The Velvet Bandit is a local street artist whose work I’ve admired for several years. Her 15 minutes of fame happened last year when, as reported by the Press Democrat, Alexandria Ocasio-Cortez wore a gown with a “Tax the Rich” message that closely resembled a VB design.

I have particularly enjoyed a collection that I think of as the Velvet Bandit’s COVID series, which appeared on the boarded-up windows of the former Economy Inn here in Santa Rosa. The building is now under active renovation and the installation won’t last much longer, so I photographed it today and made a slideshow.

I like this image especially, though I have no idea what it means.

If you would like to buy some of her work, it’s available here. I gather sales have been brisk since l’affaire AOC!


Ben Werdmüller

Performative productivity and building a culture that matters

I recently heard a story about a company that, when determining who would be laid off in a downsizing event, asked team leads to rank their teams based on who would be most likely to work on the weekend. Mind-blowingly, while it’s obviously (or hopefully obviously) immoral, this practice appears to be legal in the US, which has at-will employment in every state. This is one of the many contra

I recently heard a story about a company that, when determining who would be laid off in a downsizing event, asked team leads to rank their teams based on who would be most likely to work on the weekend.

Mind-blowingly, while it’s obviously (or hopefully obviously) immoral, this practice appears to be legal in the US, which has at-will employment in every state. This is one of the many contrasts between European and US employment law, which were the biggest culture shock for me when I moved to the US eleven years ago. In Europe, employers must ensure that employees don’t work more than 48 hours during a week and the minimum vacation allocation beyond statutory holidays is four weeks. In America, it’s often seen as a badge of honor to work 70 hour weeks, and it’s one of the few countries in the world with 0 mandatory vacation days.

Perhaps my concerns around compassionate employment are irredeemably European, but as I’ve written before, long hours with little rest are counter-productive. Environments that want you to work weekends and evenings in addition to standard office hours tend to value performative productivity over actual results, or adhere to a kind of religious belief around work ethic. If these employers paid attention to the research and data, they wouldn’t do it.

This is perhaps even more of a problem in flexible and work from home settings. In my previous piece, I quoted a French member of Parliament:

“Employees physically leave the office, but they do not leave their work. They remain attached by a kind of electronic leash — like a dog. The texts, the messages, the emails — they colonize the life of the individual to the point where he or she eventually breaks down.”

I heard recently about another company where the CEO regularly shouts at their team, and if a team lead suggests that a goal can’t be reached, retorts that they’ll find a team who can achieve it instead. These companies share a common trait: a fundamental lack of respect for the expertise and the lives of the people they’ve hired. It’s as if there’s some inherent value to this kind of work, and that the exchange of time for money obviates the need for human care.

It’s cultural. “I don’t think you should work while you have covid,” I told someone recently, mindful of the research about recovery times and long covid. “Maybe I learned the wrong lessons from my Dad,” he replied.

As well as being health and quality issues, these kinds of attitudes compound inclusion problems: only certain kinds of people can work all hours. Carers and parents, and particularly people from lower-income backgrounds, are more likely to have other commitments.

A company’s culture is really hard to change once it’s been set in motion. Either you care about creating a place that cares for its employees or you don’t, and these values affect the choices that are made by founders from the very first day. It’s impossible to do it piecemeal, too: every aspect of a company has to share the same cultural underpinnings. The entire community has to pull together or resentments and friction build.

This cultural problem doesn’t apply evenly across the country, either. While it is a uniquely American problem, not every American company behaves this way: to be frank, most of the successful ones don’t. One of the most promising aspects of the organization I’m at, The 19th, is its excellent, intentionally inclusive culture; it’s among the best, but not the first time I’ve felt valued at work. And it doesn’t take a people ops superhero to understand that people who feel valued do better work overall.

While I think these problems are best solved through legislation and unionization, competitive forces can be a useful fallback. Not everyone has the luxury of being discerning about their employer, but each of the companies I’ve mentioned is in the tech sector: a world where knowledge workers often do have the privilege of choice. Again, it doesn’t take an empath to understand that, given the choice between two otherwise similar firms, employees are more likely to choose to work at the one with a more supportive culture. It goes without saying that yelling doesn’t make for a productive workplace, but if you want to hire the best people, you’ve got to be the best place for them to work, and understand that, past a point, people are motivated by meaning, not money.

From a prospective employee standpoint, if you’re looking for a job, it helps to understand that you have every right to ask for and expect a better, more supportive culture. Having strong standards here makes the employment experience better for everyone, and helps even the worst employers understand that they need to change if they want to be successful.

But make no mistake: the onus is not on employees here. Employers - and the legislators that govern them in the United States - need to drag themselves into the twenty-first century and learn that a strong culture of support in turn makes for strong companies, and strong countries.

Sunday, 07. August 2022

reb00ted

An autonomous reputation system

Context: We never built an open reputation system for the internet. This was a mistake, and that’s one of the reasons why we have so much spam and fake news. But now, as governance takes an ever-more prominent role in technology, such as for the ever-growing list of decentralized projects e.g. DAOs, we need to figure out how to give more power to “better” actors within a given community or conte

Context: We never built an open reputation system for the internet. This was a mistake, and that’s one of the reasons why we have so much spam and fake news.

But now, as governance takes an ever-more prominent role in technology, such as for the ever-growing list of decentralized projects e.g. DAOs, we need to figure out how to give more power to “better” actors within a given community or context, and disempower or keep out the detractors and direct opponents. All without putting a centralized authority in place.

Proposal: Here is a quite simple, but as I think rather powerful proposal. We use an on-line discussion group as an example, but this is a generic protocol that should be applicable to many other applications that can use reputation scores of some kind.

Let’s call the participants in the reputation system Actors. As this is a decentralized, non-hierarchical system without a central power, there is only one class of Actor. In the discussion group example, each person participating in the discussion group is an Actor.

An Actor is a person, or an account, or a bot, or anything really that has some ability to act, and that can be uniquely identified with an identifier of some kind within the system. No connection to the “real” world is necessary, and it could be as simple as a public key. There is no need for proving that each Actor is a distinct person, or that a person controls only one Actor. In our example, all discussion group user names identify Actors.

The reputation system manages two numbers for each Actor, called the Reputation Score S and the Rating Tokens Balance R. It does this in a way that makes it impossible for those numbers to be changed outside of this protocol.

For example, these numbers could be managed by a smart contract on a blockchain which cannot be modified except through the outlined protocol.

The Reputation Score S is the current reputation of some Actor A, with respect to some subject. In the example discussion group, S might express the quality of content that A is contributing to the group.

If there is more than one reputation subject we care about, there will be an instance of the reputation system for each subject, even if it covers the same Actors. In the discussion group example, the reputation of contributing good primary content might be different from reputation for resolving heated disputes, for example, and would be tracked in a separate instance of the reputation system.

The Reputation Score S of any Actor automatically decreases over time. This means that Actors have a lower reputation if they were rated highly in the past, than if they were rated highly recently.

There’s a parameter in the system, let’s call it αS, which reflects S’s rate of decay, such as 1% per month.

Actors rate each other, which means that they take actions, as a result of which the Reputation Score of another Actor changes. Actors cannot rate themselves.

It is out of scope for this proposal to discuss what specifically might cause an Actor to decide to rate another, and how. This tends to be specific to the community. For example, in a discussion group, ratings might often happen if somebody reads newly posted content and reacts to it; but it could also happen if somebody does not post new content because the community values community members who exercise restraint.

The Rating Tokens Balance R is the set of tokens an Actor A currently has at their disposal to rate other Actors. Each rating that A performs decreases their Rating Tokens Balance R, and increases the Reputation Score S of the rated Actor by the same amount.

Every Actor’s Rating Tokens Balance R gets replenished on a regular basis, such as monthly. The regular increase in R is proportional to the Actor’s current Reputation Score S.

In other words, Actors with high reputation have a high ability to rate other Actors. Actors with a low reputation, or zero reputation, have little or no ability to rate other Actors. This is a key security feature inhibiting the ability for bad actors to take over.

The Rating Token Balance R is capped to some maximum value Rmax, which is a percentage of the current reputation of the Actor.

This prevents passive accumulation of rating tokens that then could be unleashed all at once.

The overall number of new Rating Tokens that is injected into the system on a regular basis as replenishment is determined as a function of the desired average Reputation Score of Actors in the system. This enables Actors’ average Reputation Scores to remain relatively constant over time, even as individual reputations increase and decrease, and Actors join and leave the system.

For example, if the desired average Reputation Score is 100 in a system with 1000 Actors, if the monthly decay reduced the sum of all Reputation Scores by 1000, 10 new Actors joined over the month, and 1000 Rating Tokens were eliminated because of the cap, 3000 new Rating Tokens (or something like that, my math may be off – sorry) would be distributed, proportional to their then-current Reputation Scores, to all Actors.

Optionally, the system may allow downvotes. In this case, the rater’s Rating Token Balance still decreases by the number of Rating Tokens spent, while the rated Actor’s Reputation also decreases. Downvotes may be more expensive than upvotes.

There appears to be a dispute among reputation experts on whether downvotes are a good idea, or not. Some online services support them, some don’t, and I assume for good reasons that depend on the nature of the community and the subject. Here, we can model this simply by introducing another coefficient between 0 and 1, which reflects the decrease of reputation of the downvoted Actor given the number of Rating Tokens spent by the downvoting Actor. In case of 1, upvotes cost the same as downvotes; in case of 0, no amount of downvotes can actually reduce somebody’s score.

To bootstrap the system, an initial set of Actors who share the same core values about the to-be-created reputation each gets allocated a bootstrap Reputation Score. This gives them the ability to receive Rating Tokens with which they can rate each other and newly entering Actors.
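To make the mechanics concrete, here is a rough sketch of the bookkeeping described above. All names and parameter values are my own illustration, not part of the proposal, the replenishment rule is simplified, and a real deployment would sit behind a smart contract so the numbers cannot be changed outside the protocol.

using System;
using System.Collections.Generic;
using System.Linq;

public class Actor
{
    public string Id { get; init; } = "";
    public double ReputationScore;  // S
    public double RatingTokens;     // R
}

public class ReputationSystem
{
    // Illustrative parameter values only.
    public double DecayPerPeriod = 0.01;       // αS: 1% decay per period
    public double MaxTokensFraction = 0.25;    // Rmax as a fraction of S (assumed)
    public double DownvoteCoefficient = 0.5;   // 0..1; effect of a downvote per token spent
    public double DesiredAverageScore = 100;

    public readonly Dictionary<string, Actor> Actors = new();

    // Spending Rating Tokens moves value from the rater's balance R
    // into the rated Actor's Reputation Score S.
    public void Rate(string raterId, string ratedId, double tokens, bool downvote = false)
    {
        if (raterId == ratedId)
            throw new InvalidOperationException("Actors cannot rate themselves.");
        var rater = Actors[raterId];
        var rated = Actors[ratedId];
        if (tokens <= 0 || tokens > rater.RatingTokens)
            throw new ArgumentOutOfRangeException(nameof(tokens));

        rater.RatingTokens -= tokens;
        rated.ReputationScore += downvote ? -DownvoteCoefficient * tokens : tokens;
        if (rated.ReputationScore < 0) rated.ReputationScore = 0;
    }

    // Periodic maintenance: decay S, cap R, then replenish R in proportion to S.
    public void EndOfPeriod()
    {
        foreach (var a in Actors.Values)
        {
            a.ReputationScore *= 1 - DecayPerPeriod;
            a.RatingTokens = Math.Min(a.RatingTokens, MaxTokensFraction * a.ReputationScore);
        }

        // Inject enough new tokens to steer the average score back toward the target
        // (a simplification of the replenishment rule sketched in the post).
        double totalScore = Actors.Values.Sum(a => a.ReputationScore);
        double injection = Math.Max(0, DesiredAverageScore * Actors.Count - totalScore);
        if (totalScore <= 0) return;
        foreach (var a in Actors.Values)
            a.RatingTokens += injection * a.ReputationScore / totalScore;
    }
}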

Some observations:

Once set up, this system can run autonomously. No oversight is required, other than perhaps adjusting some of the numeric parameters before enough experience is gained what those parameters should be in a real-world operation.

Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high reputation scores. Note they can only acquire reputation by being good Actors in the eyes of already-good Actors. So in this respect this system favors the status quo and community consensus over facilitating revolution, which is probably desirable: we don’t want a reputation score for “verified truth” to be easily hijackable by “fake news”, for example.

Anybody creating many accounts aka Actors has only very limited ability to increase the total reputation they control across all of their Actors.

This system appears to be generally-applicable. We discussed the example of rating “good” contributions to a discussion group, but it appears this could also be applied to things such as “good governance”, where Actors rate higher who consistently perform activities others believe are good for governance; their governance reputation score could then be used to get them more votes in governance votes (such as to adjust the free numeric parameters, or other governance activities of the community).

Known issues:

This system does not distinguish reputation on the desired value (like posting good content) vs reputation in rating other Actors (e.g. the difference between driving a car well, and being able to judge others' driving ability, such as needed for driving instructors. I can imagine that there are some bad drivers who are good at judging others’ driving abilities, and vice versa). This could probably be solved with two instances of the system that are suitably connected (details tbd).

There is no privacy in this system. (This may be a feature or a problem depending on where it is applied.) Everybody can see everybody else’s Reputation Score, and who rated them how.

If implemented on a typical blockchain, the financial incentives are backwards: it would cost to rate somebody (a modifying operation to the blockchain) but it would be free to obtain somebody’s score (a read-only operation, which is typically free). However, rating somebody does not create immediate benefit, while having access to ratings does. So a smart contract would have to be suitably wrapped to present the right incentive structure.

I would love your feedback.

This proposal probably should have a name. Because it can run autonomously, I’m going to call it Autorep. And this is version 0.5. I’ll create new versions when needed.

Saturday, 06. August 2022

Simon Willison

Microsoft® Open Source Software (OSS) Secure Supply Chain (SSC) Framework Simplified Requirements

Microsoft® Open Source Software (OSS) Secure Supply Chain (SSC) Framework Simplified Requirements This is really good: don't get distracted by the acronyms, skip past the intro and head straight to the framework practices section, which talks about things like keeping copies of the packages you depend on, running scanners, tracking package updates and most importantly keeping an inventory of the

Microsoft® Open Source Software (OSS) Secure Supply Chain (SSC) Framework Simplified Requirements

This is really good: don't get distracted by the acronyms, skip past the intro and head straight to the framework practices section, which talks about things like keeping copies of the packages you depend on, running scanners, tracking package updates and most importantly keeping an inventory of the open source packages you work with so you can quickly respond to things like log4j.

I feel like I say this a lot these days, but if you had told teenage-me that Microsoft would be publishing genuinely useful non-FUD guides to open source supply chain security by 2022 I don't think I would have believed you.

Thursday, 04. August 2022

Werdmüller on Medium

The corpus bride

Adventures with DALL-E 2 and beyond. Continue reading on Medium »

Adventures with DALL-E 2 and beyond.

Continue reading on Medium »


Ben Werdmüller

The corpus bride

I got my beta invitation to DALL-E 2, which creates art based on text prompts. You’ve probably seen them floating around the internet by now: surrealist, AI-drawn illustrations in a variety of styles. Another tool, Craiyon (formerly DALL-E Mini), had been doing the rounds as a freely-available toy. It’s fun too, but DALL-E’s fidelity is impressive enough to be almost indistinguishable from ma

I got my beta invitation to DALL-E 2, which creates art based on text prompts. You’ve probably seen them floating around the internet by now: surrealist, AI-drawn illustrations in a variety of styles.

Another tool, Craiyon (formerly DALL-E Mini), had been doing the rounds as a freely-available toy. It’s fun too, but DALL-E’s fidelity is impressive enough to be almost indistinguishable from magic.

I can’t claim to fully understand its algorithm, but DALL-E is ultimately based on a huge corpus of information: OpenAI created a variation of GPT-3 that follows human-language instructions well enough to sift through collected data and create new works based on what it’s learned. OpenAI claims to have guarded against hateful or infringing use cases, but it can never be perfect at this, and will only ever be as sensitive to these issues as the team that builds it.

These images are attention-grabbing, but the technology has lots of different applications. Some are benign: the team found that AI-generated critiques helped human writers find flaws in their work, for example. GitHub uses OpenAI’s libraries to help engineers write code, using a feature called Copilot. There’s a Figma plugin that will mock up a website based on a text description. But it’s obvious that there are military and intelligence applications for this technology, too.

If I was a science fiction writer - and at night, I am! - I would ask myself what I could create if the corpus was everything. If an AI algorithm was fed with every decision made by every person in the world - our movements via our cellphones, our intentions via our searches, our actions via our purchases and interactions - what might it be able to say about us? Could it predict what we would do next? Could it determine how to influence us to take certain actions?

Yes - but “yes” wouldn’t make for a particularly compelling story in itself. Instead, I’d want to drill a level deeper and remind myself that any technology is a reflection of the people who built it. So even if all those datapoints were loaded into the system, a person who fell outside of the parameters the designers thought to measure or look for might not be as predictable in the system. The designer’s answer, in turn, might be to incentivize people to act within the frameworks they’d built: to make them conform to the data model. (Modern marketing already doesn’t stray too far from this idea.) The people who are not compliant, who resist those incentives, are the only ones who can bring down the system. In the end, only the non-conformists, in this story and in life, are truly free, and are the flag-bearers of freedom for everyone else.

The corpus of images used to power DALL-E 2 is scraped from the internet; the corpus of code for GitHub Copilot is scraped from open source software. There are usage implications here, of course: I did not grant permission for my code, my drawings, or my photographs to form the basis of someone else’s work. But a human artist also draws on everything they’ve encountered, and we tend not to worry about that (unless the re-use becomes undeniably obviously centered on one work in particular). An engineer relies on “best practices” and “patterns” that were developed by others, and we actively encourage that (unless, again, it turns the corner and becomes plagiarism of a single source). Where should we draw the line, legally and conceptually?

I think there is a line, and it’s in part because OpenAI is building a commercial, proprietary platform. The corpus of work translates into profit for them; if OpenAI’s software does wind up powering military applications, or if my mini science fiction story partially becomes true, it could also translate into real harm. The ethical considerations there can’t be brushed away.

What I’m not worried about: I don’t think AI is coming for the jobs of creative people. The corpus requires new art. I do think we will see AI-produced news stories, which are a natural evolution of the content aggregator and cheap reblogging sites we see today, but there will always be a need for deeply-reported journalism. I don’t think we’ll see AI-produced novels and other similar content, although I can imagine writers using them to help with their first drafts before they revise. Mostly, for creatives, this will be a tool rather than a replacement. At least, for another generation or so.

In the meantime, here’s a raccoon in a cowboy hat singing karaoke:


Simon Willison

Quoting Ken Williams

Your documentation is complete when someone can use your module without ever having to look at its code. This is very important. This makes it possible for you to separate your module's documented interface from its internal implementation (guts). This is good because it means that you are free to change the module's internals as long as the interface remains the same. Remember: the documentatio

Your documentation is complete when someone can use your module without ever having to look at its code. This is very important. This makes it possible for you to separate your module's documented interface from its internal implementation (guts). This is good because it means that you are free to change the module's internals as long as the interface remains the same.

Remember: the documentation, not the code, defines what a module does.

Ken Williams

Wednesday, 03. August 2022

Simon Willison

Introducing sqlite-html: query, parse, and generate HTML in SQLite

Introducing sqlite-html: query, parse, and generate HTML in SQLite Another brilliant SQLite extension module from Alex Garcia, this time written in Go. sqlite-html adds a whole family of functions to SQLite for parsing and constructing HTML strings, built on the Go goquery and cascadia libraries. Once again, Alex uses an Observable notebook to describe the new features, with embedded interactive

Introducing sqlite-html: query, parse, and generate HTML in SQLite

Another brilliant SQLite extension module from Alex Garcia, this time written in Go. sqlite-html adds a whole family of functions to SQLite for parsing and constructing HTML strings, built on the Go goquery and cascadia libraries. Once again, Alex uses an Observable notebook to describe the new features, with embedded interactive examples that are backed by a Datasette instance running in Fly.

Via My TIL on Trying out SQLite extensions on macOS


Phil Windleys Technometria

The Path to Redemption: Remembering Craig Burton

Summary: Last week I spoke at the memorial service for Craig Burton, a giant of the tech industry and my close friend. Here are, slightly edited, my remarks. When I got word that Craig Burton had died, the news wasn't unexpected. He'd been ill with brain cancer for a some time and we knew his time was limited. Craig is a great man, a good person, a valued advisor, and a fabulous frien

Summary: Last week I spoke at the memorial service for Craig Burton, a giant of the tech industry and my close friend. Here are, slightly edited, my remarks.

When I got word that Craig Burton had died, the news wasn't unexpected. He'd been ill with brain cancer for some time and we knew his time was limited. Craig is a great man, a good person, a valued advisor, and a fabulous friend. Craig's life is an amazing story of success, challenge, and overcoming.

I first met Craig when I was CIO for Utah and he was the storied co-founder of Novell and the Burton Group. Dave Politis calls Craig "one of Utah's tech industry Original Gangsters". I was a bit intimidated. Craig was starting a new venture with his longtime friend Art Navarez, and wanted to talk to me about it. That first meeting was where I came to appreciate his famous wit and sharp, insightful mind. Over time, our relationship grew and I came to rely on him whenever I had a sticky problem to unravel. One of Craig's talents was throwing out the conventional thinking and starting over to reframe a problem in ways that made solutions tractable. That's what he'd done at Novell when he moved up the stack to avoid the tangle of competing network standards and create a market in network services.

When Steve Fulling and I started Kynetx in 2007 we knew we needed Craig as an advisor. He mentored us—sometimes gently and sometimes with a swift kick. He advised us. He dove into the technology and developed applications, even though he wasn't a developer. He introduced us to one of our most important investors, and now good friend, Roy Avondet. He was our biggest cheerleader and we were grateful for his friendship and help. Craig wasn't just an advisor. He was fully engaged.

One of Craig's favorite words was "ubiquity" and he lived his life consistent with that philosophy. Let me share three stories about Craig from the Kynetx days that I hope show a little bit of his personality:

Steve, Craig, and I had flown to Seattle to meet with Microsoft. Flying with Craig is always an adventure, but that's another story. We met with some people on Microsoft's identity team including Kim Cameron, Craig's longtime friend and Microsoft's Chief Identity Architect. During the meeting someone, a product manager, said something stupid and you could just see Craig come up in his chair. Kim, sitting in the corner, was trying not to laugh because he knew what was coming. Craig, very deliberately and logically, took the PM's argument apart. He wasn't mean; he was patient. But his logic cut like a knife. He could be direct. Craig always took charge of a room.

Craig's trademark look

We hosted a developer conference at Kynetx called Impact. Naturally, Craig spoke. But Craig couldn't just give a standard presentation. He sat in a big chair on the stage and "held forth". He even had his guitar with him and sang during the presentation. Craig loved music. The singing was all Craig. He couldn't just speak, he had to entertain and make people laugh and smile.

Craig and me at Kynetx Impact in 2011

At Kynetx, we hosted Free Lunch Friday every week. We'd feed lunch to our team, developers using our product, and anyone else who wanted to come visit the office. We usually brought in something like Jimmy Johns, Costco pizza, or J Dawgs. Not Craig. He and Judith took over the entire break room (for the entire building), brought in portable burners, and cooked a multi-course meal. It was delicious and completely over the top. I can see him with his floppy hat and big, oversized glasses, flamboyant and happy. Ubiquity!

Craig with Britt Blaser at IIW

I've been there with Craig in some of the highest points of his life and some of the lowest. I've seen him meet his challenges head on and rise above them. Being his friend was hard sometimes. He demanded much of his friends. But he returned help, joy, and, above all, love. He regretted that his choices hurt others besides himself. Craig loved large and completely.

The last decade of Craig's life was remarkable. Craig, in 2011, was a classic tragic hero: noble, virtuous, and basking in past success but with a seemingly fatal flaw. But Craig's story didn't end in 2011. Drummond Reed, a mutual friend and fellow traveler wrote this for Craig's service:

Ten years ago, when Craig was at one of the lowest points in his life, I had the chance to join a small group of his friends to help intervene and steer him back on an upward path. It was an extraordinary experience I will never forget, both because of what I learned about Craig's amazing life, and what it proved about the power of love to change someone's direction. In fact Craig went on from there not just to another phase of his storied career, but to reconnect and marry his high school sweetheart.

Craig and his crew: Doc Searls, me, Craig, Craig's son Alex, Drummond Reed, and Steve Fulling

Craig found real happiness in those last years of his life—and he deserved it.

Craig Burton was a mountain of a man, and a mountain of mind. And he moved the mountains of the internet for all of us. The digital future will be safer, richer, and more rewarding for all of us because of the gifts he gave us.

Starting with that intervention, Craig began a long, painful path to eventual happiness and redemption.

Craig overcame his internal demons. This was a battle royale. He had help from friends and family (especially his sisters), but in the end, he had to make the change, tamp down his darkest urges, and face his problems head on. His natural optimism and ability to see things realistically helped. When he finally turned his insightful mind on himself, he began to make progress.

Craig had to live and cope with chronic health challenges, many of which were the result of decisions he'd made earlier in his life. Despite the limitations they placed on him, he met them with his usual optimism and love of life.

Craig refound his faith. I'm not sure he ever really lost it, but he couldn't reconcile some of his choices with what he believed his faith required of him. In 2016, he decided to rejoin the Church of Jesus Christ of Latter-Day Saints. I was privileged to be able to baptize him. A great honor, that he was kind enough to give me.

Craig also refound love and his high school sweetheart, Paula. The timing couldn't have been more perfect. Earlier and Craig wouldn't have been ready. Later and it likely would have been too late. They were married in 2017 and later had the marriage sealed in the Seoul Korea Temple. Craig and Paula were living in Seoul at the time, engaged in another adventure. While Craig loved large, I believe he may have come to doubt that he was worthy of love himself. Paula gave him love and a reason to strive for a little more in the last years of his life.

Craig and Paula

As I think about the last decade of Craig's life and his hard work to set himself straight, I'm reminded of the parable of the Laborers in the Vineyard. In that parable, Jesus compares the Kingdom of Heaven to a man hiring laborers for his vineyard. He goes to the marketplace and hires some, promising them a penny. He goes back later, at the 6th and 9th hours, and hires more. Finally he hires more laborers in the 11th hour. When it comes time to pay them, he gives everyone the same wage—a penny. The point of the parable is that it doesn't matter so much when you start the journey, but where you end up.

I'm a believer in Jesus Christ and the power of his atonement and resurrection. I know Craig was too. He told me once that belief had given him the courage and hope to keep striving when all seemed lost. Craig knew the highest of the highs. He knew the lowest of the lows. The last few years of his life were among the happiest I ever saw him experience. He was a new man. In the end, Craig ended up in a good place.

I will miss my friend, but I'm eternally grateful for his life and example.

Other Tributes and Remembrances

Craig Burton Obituary
Remembering Craig Burton by Doc Searls
Doc Searls photo album of Craig
In Honor of Craig Burton from Jamie Lewis
Silicon Slopes Loses A Tech Industry OG: R.I.P., Craig Burton by David Politis

Photo Credits: Craig Burton, 1953-2022 from Doc Searls (CC BY 2.0)

Tags: identity iiw novell kynetx utah


@_Nat Zone

From centralized IDs to decentralized IDs, history repeats itself | Nikkei xTECH

Continuing on from last week, the series “A thorough examination: will blockchain…

Continuing on from last week, the second installment on the history of decentralized identity has been published in Nikkei xTECH as the third article in the series “A thorough examination: will blockchain make humanity happy?” It is titled “From centralized IDs to decentralized IDs, history repeats itself.” Last time we looked at the comparison between W3C DID and XRI; this time it is finally the story of OpenID. Very little has been written about the thinking behind OpenID, so if you are someone who claims that OpenID is centralized, I would ask you to first read this, think it over a little, and then make your argument.

The table of contents looks roughly like this:

(Introduction)
The OpenID philosophy embodying “self-sovereignty” and “independence”
Account URI: the problem that people can type email addresses but not URLs
Overview of the ‘acct’ URI
Resolution of the ‘acct’ URI
The relationship between OpenID Connect and the ‘acct’ URI
SIOP and algorithmically generated metadata documents
Does DID contribute to decentralizing power? History repeats itself

Note that the first page currently contains the notation

YADIS(Yet Another Distributed Identity Systemと、もう1つの分散アイデンティティーシステム)

(Source) https://xtech.nikkei.com/atcl/nxt/column/18/02132/072900003/

but the 「と」 (“and”) in this is a typo. YADIS is the name of a system meaning “yet another distributed identity system,” so a notation along the following lines is correct. A correction request is currently in progress.

YADIS(Yet Another Distributed Identity System、「もう1つの分散アイデンティティーシステム」)

(Source) the author

* Confirmed the correction at 17:59 on the 3rd.

And with that, please enjoy.

Nikkei BP: From centralized IDs to decentralized IDs, history repeats itself

Tuesday, 02. August 2022

Simon Willison

How I Used DALL·E 2 to Generate The Logo for OctoSQL

How I Used DALL·E 2 to Generate The Logo for OctoSQL Jacob Martin gives a blow-by-blow account of his attempts at creating a logo for his OctoSQL project using DALL-E, spending $30 of credits and making extensive use of both the "variations" feature and the tool that lets you request modifications to existing images by painting over parts you want to regenerate. Really interesting to read as an

How I Used DALL·E 2 to Generate The Logo for OctoSQL

Jacob Martin gives a blow-by-blow account of his attempts at creating a logo for his OctoSQL project using DALL-E, spending $30 of credits and making extensive use of both the "variations" feature and the tool that lets you request modifications to existing images by painting over parts you want to regenerate. Really interesting to read as an example of a "real world" DALL-E project.

Via Hacker News


Ben Werdmüller

Comments are hard

Building a comments system is really hard. I tried to build one for Known, which powers my website, but found that spammers circumvented it surprisingly easily. You can flag spam using Akismet (which was built for WordPress but works across platforms), but this process tends to require you to pre-screen comments and make them public after the fact. That’s a fair amount of work and a fair amount

Building a comments system is really hard. I tried to build one for Known, which powers my website, but found that spammers circumvented it surprisingly easily. You can flag spam using Akismet (which was built for WordPress but works across platforms), but this process tends to require you to pre-screen comments and make them public after the fact. That’s a fair amount of work and a fair amount of unnecessary friction for building community.

If you have a blog - you do have a blog, don’t you? - you can post a response to one of my posts and send a webmention. But not everybody has their own website, and the barrier to entry for sending webmentions is pretty high.

So I’ve been looking for something else.

Fred Wilson gave up on comments and asks people to discuss on Twitter. That works pretty well, but I’m not really into forcing people to use a particular service. That’s also why I’m not particularly into using Disqus embeds, which also unnecessarily track you across sites. Finally, for a while I was using Cactus Comments, which is based on the decentralized Matrix network, but it occasionally seemed to break in ways that were disconcerting for site visitors. (It’s still a very cool project.)

I love comments, and I guess that means I’m writing my own system again. To do so means getting into an arms race with spammers, which I’m not very excited about, but I don’t see an alternative that I’m completely happy about.

Do you run a blog with comments? How do you deal with these issues? I’d love to learn from you.


Werdmüller on Medium

Do we really need private schools?

A fully-public education system would benefit everybody. Continue reading on Medium »

A fully-public education system would benefit everybody.

Continue reading on Medium »


Ben Werdmüller

Do we really need private schools?

One of my most controversial opinions is that private schools should not be allowed. Quite how controversial is always a surprise to me: from my perspective it feels very straightforward. In a nutshell, my argument comes down to the following complementary ideas: Every child deserves to have an equal start in life. As a society, we are better off if people from different backgrounds mix,

One of my most controversial opinions is that private schools should not be allowed. Quite how controversial is always a surprise to me: from my perspective it feels very straightforward.

In a nutshell, my argument comes down to the following complementary ideas:

Every child deserves to have an equal start in life.
As a society, we are better off if people from different backgrounds mix, interact, and get to know each other as early as possible.
Every system of inequality is built around disenfranchisement and blocking access to resources. Giving everyone access to the same education and the connections that inevitably develop while attending an institution helps dismantle these systems.
If the rich are forced to use the same system as the poor, the overall standard of education will rise for everyone.
Education is a human right.

Does this fly in the face of American individualism? Sure, probably. Will it result in a society that is both culturally and financially richer? I think so.

As far as I can tell, the arguments for private education come down to the perceived right to perpetuate inequality by gating a special education system for people with wealth, a defense of American individualism at the expense of community, and sometimes the adjoining right to perpetuate exclusionary values systems. I’m not particularly interested in protecting any of those things.

It’s certainly true that public education needs more funding, more resources, and stronger frameworks around (for example) special needs education. I don’t think the answer to these problems is private alternatives: instead, I think we solve them by providing stronger support for public infrastructure. And one of the ways we guarantee this support is by forcing people with wealth and resources to use the same infrastructure as everybody else.

 

Photo by ROBIN WORRALL on Unsplash


Damien Bod

Disable Azure AD user account using Microsoft Graph and an application client

This post shows how to enable, disable or remove Azure AD user accounts using Microsoft Graph and a client credentials client. The Microsoft Graph client uses an application scope and application client. This is also possible using a delegated client. If using an application which has no user, an application scope is used to authorize […]

This post shows how to enable, disable or remove Azure AD user accounts using Microsoft Graph and a client credentials client. The Microsoft Graph client uses an application scope and application client. This is also possible using a delegated client. If using an application which has no user, an application scope is used to authorize the client. Using a delegated scope requires a user and a web authentication requesting the required scope and a user consent.

History

2022-08-02 : Fixed incorrect conclusion about application client and AccountEnabled permission, feedback from Stephan van Rooij

Image: https://docs.microsoft.com/en-us/graph/overview

Microsoft Graph with an application scope can be used to update, change, or edit user accounts in an Azure AD tenant. The MsGraphService class is used to implement the Graph client using OAuth client credentials. This requires a user secret or a client certificate. The client uses the default scope, and no user consent is required (or even possible) because no user is involved.

public MsGraphService(IConfiguration configuration, ILogger<MsGraphService> logger)
{
    _groups = configuration.GetSection("Groups").Get<List<GroupsConfiguration>>();
    _logger = logger;

    string[]? scopes = configuration.GetValue<string>("AadGraph:Scopes")?.Split(' ');
    var tenantId = configuration.GetValue<string>("AadGraph:TenantId");

    // Values from app registration
    var clientId = configuration.GetValue<string>("AadGraph:ClientId");
    var clientSecret = configuration.GetValue<string>("AadGraph:ClientSecret");

    _federatedDomainDomain = configuration.GetValue<string>("FederatedDomain");

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantId, clientId, clientSecret, options);

    _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
}

Option 1 Update User AccountEnabled property

By setting the AccountEnabled property, a user account can be updated and can be enabled or disabled. The User.ReadWrite.All permission is required for this.

user.GivenName = userModel.FirstName;
user.Surname = userModel.LastName;
user.DisplayName = $"{userModel.FirstName} {userModel.LastName}";
user.AccountEnabled = userModel.IsActive;

await _msGraphService.UpdateUserAsync(user);

The Graph user can then be updated.

public async Task<User> UpdateUserAsync(User user)
{
    return await _graphServiceClient.Users[user.Id]
        .Request()
        .UpdateAsync(user);
}

Option 2 Delete User

A user can also be completely removed and deleted from the Azure AD tenant.

Disadvantages

User is deleted and would need to add the account again if the user account is “reactivated”

The user can be deleted using the following code:

public async Task DeleteUserAsync(string userId)
{
    await _graphServiceClient.Users[userId]
        .Request()
        .DeleteAsync();
}

Option 3 Remove Security groups for user and leave the account enabled

Deleting or disabling a user is normally not an option because a user might and probably does have access to further applications. After deleting a user, the user cannot be reactivated, but must sign up again. Setting AccountEnabled to false disables the whole account.

Another option instead of deleting/disabling the user is to use group memberships for access to different Azure services. When access to different services need to be removed or disabled, the user can be removed from the groups which are required to access service X or whatever. This will only work if groups are used to control the access to the different services in AAD or Office etc. The following code checks if the user has access to the explicit groups and removes the memberships if required. This works well and does not change the user settings for further services in Azure AD, or office which are outside the scope.

public async Task RemoveUserFromAllGroupMemberships(string userId)
{
    var currentGroupIds = await GetGraphUserMemberGroups(userId);
    var currentGroupIdsList = currentGroupIds.ToList();

    // Only delete specific groups we defined in this app.
    foreach (var group in _groups)
    {
        if (currentGroupIdsList.Contains(group.GroupId))
            // remove group
            await RemoveUserFromGroup(userId, group.GroupId);

        currentGroupIds.Remove(group.GroupId);
    }
}
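The GetGraphUserMemberGroups helper is not shown in the snippet above; a minimal sketch of how it might look using the Graph SDK's getMemberGroups action is below. This is an assumption on my part, not necessarily the post's implementation, and it only reads the first page of results.

// Hypothetical helper: returns the IDs of the groups the user is a member of.
private async Task<List<string>> GetGraphUserMemberGroups(string userId)
{
    // securityEnabledOnly: true limits the result to security groups
    var memberGroups = await _graphServiceClient.Users[userId]
        .GetMemberGroups(true)
        .Request()
        .PostAsync();

    // Only the first page is handled in this sketch.
    return memberGroups.ToList();
}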

The group membership for the user is deleted.

private async Task RemoveUserFromGroup(string userId, string groupId)
{
    try
    {
        await _graphServiceClient.Groups[groupId]
            .Members[userId]
            .Reference
            .Request()
            .DeleteAsync();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "{Error} RemoveUserFromGroup", ex.Message);
    }
}

Disadvantages

Applications must use security groups for access control

For this to work, the groups must be used to force the authorization. This requires some IT management.

Links

https://docs.microsoft.com/en-us/graph/api/user-delete

https://docs.microsoft.com/en-us/graph/api/resources/groups-overview

https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/users-default-permissions#compare-member-and-guest-default-permissions

https://docs.microsoft.com/en-us/graph/overview



Monday, 01. August 2022

Simon Willison

storysniffer

storysniffer Ben Welsh built a small Python library that guesses if a URL points to an article on a news website, or if it's more likely to be a category page or /about page or similar. I really like this as an example of what you can do with a tiny machine learning model: the model is bundled as a ~3MB pickle file as part of the package, and the repository includes the Jupyter notebook that was

storysniffer

Ben Welsh built a small Python library that guesses if a URL points to an article on a news website, or if it's more likely to be a category page or /about page or similar. I really like this as an example of what you can do with a tiny machine learning model: the model is bundled as a ~3MB pickle file as part of the package, and the repository includes the Jupyter notebook that was used to train it.

Via @palewire


Werdmüller on Medium

Building an inclusive, independent, open newsroom

Working at the intersection of news and open source. Continue reading on Medium »

Working at the intersection of news and open source.

Continue reading on Medium »


Jon Udell

Subtracting devices

People who don’t listen to podcasts often ask people who do: “When do you find time to listen?” For me it’s always on long walks or hikes. (I do a lot of cycling too, and have thought about listening then, but wind makes that impractical and cars make it dangerous.) For many years my trusty … Continue reading Subtracting devices

People who don’t listen to podcasts often ask people who do: “When do you find time to listen?” For me it’s always on long walks or hikes. (I do a lot of cycling too, and have thought about listening then, but wind makes that impractical and cars make it dangerous.) For many years my trusty podcast player was one or another version of the Creative Labs MuVo which, as the ad says, is “ideal for dynamic environments.”

At some point I opted for the convenience of just using my phone. Why carry an extra, single-purpose device when the multi-purpose phone can do everything? That was OK until my Quixotic attachment to Windows Phone became untenable. Not crazy about either of the alternatives, I flipped a coin and wound up with an iPhone. Which, of course, lacks a 3.5mm audio jack. So I got an adapter, but now the setup was hardly “ideal for dynamic environments.” My headset’s connection to the phone was unreliable, and I’d often have to stop walking, reseat it, and restart the podcast.

If you are gadget-minded you are now thinking: “Wireless earbuds!” But no thanks. The last thing I need in my life is more devices to keep track of, charge, and sync with other devices.

I was about to order a new MuVo, and I might still; it’s one of my favorite gadgets ever. But on a recent hike, in a remote area with nobody else around, I suddenly realized I didn’t need the headset at all. I yanked it out, stuck the phone in my pocket, and could hear perfectly well. Bonus: Nothing jammed into my ears.

It’s a bit weird when I do encounter other hikers. Should I pause the audio or not when we cross paths? So far I mostly do, but I don’t think it’s a big deal one way or another.

Adding more devices to solve a device problem amounts to doing the same thing and expecting a different result. I want to remain alert to the possibility that subtracting devices may be the right answer.

There’s a humorous coda to this story. It wasn’t just the headset that was failing to seat securely in the Lightning port. Charging cables were also becoming problematic. A friend suggested a low-tech solution: use a toothpick to pull lint out of the socket. It worked! I suppose I could now go back to using my wired headset on hikes. But I don’t think I will.


Mike Jones: self-issued

JSON Web Proofs BoF at IETF 114 in Philadelphia

This week at IETF 114 in Philadelphia, we held a Birds-of-a-Feather (BoF) session on JSON Web Proofs (JWPs). JSON Web Proofs are a JSON-based representation of cryptographic inputs and outputs that enable use of Zero-Knowledge Proofs (ZKPs), selective disclosure for minimal disclosure, and non-correlatable presentation. JWPs use the three-party model of Issuer, Holder, and Verifier […]

This week at IETF 114 in Philadelphia, we held a Birds-of-a-Feather (BoF) session on JSON Web Proofs (JWPs). JSON Web Proofs are a JSON-based representation of cryptographic inputs and outputs that enable use of Zero-Knowledge Proofs (ZKPs), selective disclosure for minimal disclosure, and non-correlatable presentation. JWPs use the three-party model of Issuer, Holder, and Verifier utilized by Verifiable Credentials.

The BoF asked to reinstate the IETF JSON Object Signing and Encryption (JOSE) working group. We asked for this because the JOSE working group participants already have expertise creating simple, widely-adopted JSON-based cryptographic formats, such as JSON Web Signature (JWS), JSON Web Encryption (JWE), and JSON Web Key (JWK). The JWP format would be a peer to JWS and JWE, reusing elements that make sense, while enabling use of new cryptographic algorithms whose inputs and outputs are not representable in the existing JOSE formats.

Presentations given at the BoF were:

Chair Slides – Karen O’Donoghue and John Bradley
The need: Standards for selective disclosure and zero-knowledge proofs – Mike Jones
What Would JOSE Do? Why re-form the JOSE working group to meet the need? – Mike Jones
The selective disclosure industry landscape, including Verifiable Credentials and ISO Mobile Driver Licenses (mDL) – Kristina Yasuda
A Look Under the Covers: The JSON Web Proofs specifications – Jeremie Miller
Beyond JWS: BBS as a new algorithm with advanced capabilities utilizing JWP – Tobias Looker

You can view the BoF minutes at https://notes.ietf.org/notes-ietf-114-jwp. A useful discussion ensued after the presentations. Unfortunately, we didn’t have time to finish the BoF in the one-hour slot. The BoF questions unanswered in the time allotted would have been along the lines of “Is the work appropriate for the IETF?”, “Is there interest in the work?”, and “Do we want to adopt the proposed charter?”. Discussion of those topics is now happening on the jose@ietf.org mailing list. Join it at https://www.ietf.org/mailman/listinfo/jose to participate. Roman Danyliw, the Security Area Director who sponsored the BoF, had suggested that we hold a virtual interim BoF to complete the BoF process before IETF 115 in London. Hope to see you there!

The BoF Presenters:

The BoF Participants, including the chairs:

Sunday, 31. July 2022

Simon Willison

Cleaning data with sqlite-utils and Datasette

Cleaning data with sqlite-utils and Datasette I wrote a new tutorial for the Datasette website, showing how to use sqlite-utils to import a CSV file, clean up the resulting schema, fix date formats and extract some of the columns into a separate table. It's accompanied by a ten minute video originally recorded for the HYTRADBOI conference. Via @simonw

Cleaning data with sqlite-utils and Datasette

I wrote a new tutorial for the Datasette website, showing how to use sqlite-utils to import a CSV file, clean up the resulting schema, fix date formats and extract some of the columns into a separate table. It's accompanied by a ten minute video originally recorded for the HYTRADBOI conference.

Via @simonw
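
For a rough idea of what that workflow looks like from Python (the tutorial itself drives the sqlite-utils command-line tool; the database, file, table and column names below are made up for illustration), a minimal sketch is:

import csv
import sqlite_utils

# Load a CSV file into a new SQLite table
db = sqlite_utils.Database("tutorial.db")
with open("incidents.csv", newline="") as fp:
    db["incidents"].insert_all(csv.DictReader(fp))

# Clean up a date column in place, then pull a repeated column
# out into its own lookup table with a foreign key
db["incidents"].convert("date", lambda value: value.replace("/", "-"))
db["incidents"].extract("agency")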


Doc Searls Weblog

The Empire Strikes On

Twelve years ago, I posted The Data Bubble. It began, The tide turned today. Mark it: 31 July 2010. That’s when The Wall Street Journal published The Web’s Gold Mine: Your Secrets, subtitled A Journal investigation finds that one of the fastest-growing businesses on the Internet is the business of spying on consumers. First in a series. It has ten […]

Twelve years ago, I posted The Data Bubble. It began,

The tide turned today. Mark it: 31 July 2010.

That’s when The Wall Street Journal published The Web’s Gold Mine: Your Secrets, subtitled A Journal investigation finds that one of the fastest-growing businesses on the Internet is the business of spying on consumers. First in a series. It has ten links to other sections of today’s report. It’s pretty freaking amazing — and amazingly freaky when you dig down to the business assumptions behind it. Here is the rest of the list (sans one that goes to a link-proof Flash thing):

Personal Details Exposed Via Biggest U.S. Websites
The largest U.S. websites are installing new and intrusive consumer-tracking technologies on the computers of people visiting their sites—in some cases, more than 100 tracking tools at a time.
See the Database at WSJ.com
Follow @whattheyknow on Twitter
What They Know About You
Your Questions on Digital Privacy
Analyzing What You Have Typed
Video: A Guide to Cookies
What They Know: A Glossary
The Journal’s Methodology

Here’s the gist:

The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.

It gets worse:

In between the Internet user and the advertiser, the Journal identified more than 100 middlemen—tracking companies, data brokers and advertising networks—competing to meet the growing demand for data on individual behavior and interests. The data on Ms. Hayes-Beaty’s film-watching habits, for instance, is being offered to advertisers on BlueKai Inc., one of the new data exchanges. “It is a sea change in the way the industry works,” says Omar Tawakol, CEO of BlueKai. “Advertisers want to buy access to people, not Web pages.”

The Journal examined the 50 most popular U.S. websites, which account for about 40% of the Web pages viewed by Americans. (The Journal also tested its own site, WSJ.com.) It then analyzed the tracking files and programs these sites downloaded onto a test computer.

As a group, the top 50 sites placed 3,180 tracking files in total on the Journal’s test computer. Nearly a third of these were innocuous, deployed to remember the password to a favorite site or tally most-popular articles. But over two-thirds—2,224—were installed by 131 companies, many of which are in the business of tracking Web users to create rich databases of consumer profiles that can be sold.

Here’s what’s delusional about all this: There is no demand for tracking by individual customers. All the demand comes from advertisers — or from companies selling to advertisers. For now.

Here is the difference between an advertiser and an ordinary company just trying to sell stuff to customers: nothing. If a better way to sell stuff comes along — especially if customers like it better than this crap the Journal is reporting on — advertising is in trouble.

In fact, I had been calling the tracking-based advertising business (now branded adtech or ad-tech) a bubble for some time. For example, in Why online advertising sucks, and is a bubble (31 October 2008) and After the advertising bubble bursts (23 March 2009). But I didn’t expect my own small voice to have much effect. This was different, though. What They Know was written by a crack team of writers, researchers, and data visualizers, led by Julia Angwin, and was truly Pulitzer-grade stuff. It was so well done, so deep, and so sharp, that I posted a follow-up report three months later, called The Data Bubble II. In that one, I wrote,

That same series is now nine stories long, not counting the introduction and a long list of related pieces. Here’s the current list:

The Web’s Gold Mine: What They Know About You
Microsoft Quashed Bid to Boost Web Privacy
On the Web’s Cutting Edge: Anonymity in Name Only
Stalking by Cell Phone
Google Agonizes Over Privacy
Kids Face Intensive Tracking on Web
‘Scrapers’ Dig Deep for Data on the Web
Facebook in Privacy Breach
A Web Pioneer Profiles Users By Name

Related pieces—

Personal Details Exposed Via Biggest U.S. Websites
The largest U.S. websites are installing new and intrusive consumer-tracking technologies on the computers of people visiting their sites—in some cases, more than 100 tracking tools at a time.
See the Database at WSJ.com
What They Know: A Glossary
The Journal’s Methodology
The Tracking Ecosystem
Your Questions on Digital Privacy
Analyzing What You Have Typed
Video: A Guide to Cookies
App Developers Weigh Business Models
How the Leaks Happen
Some Apps Return After Breach
Facebook Faces Lawsuit
Social Networks Weigh Privacy vs. Profits
Four Aspects of Online Data Privacy
How to Protect Your Child’s Privacy
How to Avoid Prying Eyes
Graphic: Google’s Widening Reach
Digits Live Show: How RapLeaf Mines Data Online
Digits: Escaping the Scrapers
Privacy Advocate Withdraws From RapLeaf Advisory Board
Candidate Apologizes for Using RapLeaf to Target Ads
Preview: Facebook Leads Ad Recovery
How to Get Out of RapLeaf’s System
The Dangers of Web Tracking (by Nicholas Carr)
Why Tracking Isn’t Bad (by Jim Harper)
Follow @whattheyknow on Twitter

Two things I especially like about all this. First, Julia Angwin and her team are doing a terrific job of old-fashioned investigative journalism here. Kudos for that. Second, the whole series stands on the side of readers. The second person voice (you, your) is directed to individual persons—the same persons who do not sit at the tables of decision-makers in this crazy new hyper-personalized advertising business.

To measure the delta of change in that business, start with John Battelle’s Conversational Marketing series (post 1, post 2, post 3) from early 2007, and then his post Identity and the Independent Web, from last week. In the former he writes about how the need for companies to converse directly with customers and prospects is both inevitable and transformative. He even kindly links to The Cluetrain Manifesto (behind the phrase “brands are conversations”).

It was obvious to me that this fine work would blow the adtech bubble to a fine mist. It was just a matter of when.

Over the years since, I’ve retained hope, if not faith. Examples: The Data Bubble Redux (9 April 2016), and Is the advertising bubble finally starting to pop? (9 May 2016, and in Medium).

Alas, the answer to that last one was no. By 2016, Julia and her team had long since disbanded, and the original links to the What They Know series began to fail. I don’t have exact dates for which failed when, but I do know that the trusty master link, wsj.com/wtk, began to 404 at some point. Fortunately, Julia has kept much of it alive at https://juliaangwin.com/category/portfolio/wall-street-journal/what-they-know/. Still, by the late Teens it was clear that even the best journalism wasn’t going to be enough—especially since the major publications had become adtech junkies. Worse, covering their own publications’ involvement in surveillance capitalism had become an untouchable topic for journalists. (One notable exception is Farhad Manjoo of The New York Times, whose coverage of the paper’s own tracking was followed by a cutback in the practice.)

While I believe that most new laws for tech mostly protect yesterday from last Thursday, I share with many a hope for regulatory relief. I was especially jazzed about Europe’s GDPR, as you can read in GDPR will pop the adtech bubble (12 May 2018) and Our time has come (16 May 2018 in ProjectVRM).

But I was wrong then too. Because adtech isn’t a bubble. It’s a death star in service of an evil empire that destroys privacy through every function it funds in the digital world.

That’s why I expect the American Data Privacy and Protection Act (H.R. 8152), even if it passes through both houses of Congress at full strength, to do jack shit. Or worse, to make our experience of life in the digital world even more complicated, by requiring us to opt-out, rather than opt-in (yep, it’s in the law—as a right, no less), to tracking-based advertising everywhere. And we know how well that’s been going. (Read this whole post by Tom Fishburne, the Marketoonist, for a picture of how less than zero progress has been made, and how venial and absurd “consent” gauntlets on websites have become.) Do a search for https://www.google.com/search?q=gdpr+compliance to see how large the GDPR “compliance” business has become. Nearly all your 200+ million results will be for services selling obedience to the letter of the GDPR while death-star laser beams blow its spirit into spinning shards. Then expect that business to grow once the ADPPA is in place.

There is only one thing that will save us from adtech’s death star.

That’s tech of our own. Our tech. Personal tech.

We did it in the physical world with the personal privacy tech we call clothing, shelter, locks, doors, shades, and shutters. We’ve barely started to make the equivalents for the digital world. But the digital world is only a few decades old. It will be around for dozens, hundreds, or thousands of decades to come. And adtech is still just a teenager. We can, must, and will do better.

All we need is the tech. Big Tech won’t do it for us. Nor will Big Gov.

The economics will actually help, because there are many business problems in the digital world that can only be solved from the customers’ side, with better signaling from demand to supply than adtech-based guesswork can ever provide. Customer Commons lists fourteen of those solutions, here. Privacy is just one of them.

Use the Force, folks.

That Force is us.

Saturday, 30. July 2022

Simon Willison

GAS-ICS-Sync

GAS-ICS-Sync Google Calendar can subscribe to ICS calendar feeds... but polls for updates less than once every 24 hours (as far as I can tell) greatly limiting their usefulness. Derek Antrican wrote a script using Google App Script which fixes this by polling calendar URLs more often and writing them to your calendar via the write API. Via Kevin Marks

GAS-ICS-Sync

Google Calendar can subscribe to ICS calendar feeds... but polls for updates less than once every 24 hours (as far as I can tell) greatly limiting their usefulness. Derek Antrican wrote a script using Google App Script which fixes this by polling calendar URLs more often and writing them to your calendar via the write API.

Via Kevin Marks


GPSJam

GPSJam John Wiseman's "Daily maps of GPS interference" - a beautiful interactive globe (powered by Mapbox GL) which you can use to see points of heaviest GPS interference over a 24 hour period, using data collected from commercial airline radios by ADS-B Exchange. "From what I can tell the most common reason for aircraft GPS systems to have degraded accuracy is jamming by military systems. At l

GPSJam

John Wiseman's "Daily maps of GPS interference" - a beautiful interactive globe (powered by Mapbox GL) which you can use to see points of heaviest GPS interference over a 24 hour period, using data collected from commercial airline radios by ADS-B Exchange. "From what I can tell the most common reason for aircraft GPS systems to have degraded accuracy is jamming by military systems. At least, the vast majority of aircraft that I see with bad GPS accuracy are flying near conflict zones where GPS jamming is known to occur."

Via @lemonodor


Introducing sqlite-lines - a SQLite extension for reading files line-by-line

Introducing sqlite-lines - a SQLite extension for reading files line-by-line Alex Garcia wrote a brilliant C module for SQLite which adds functions (and a table-valued function) for efficiently reading newline-delimited text into SQLite. When combined with SQLite's built-in JSON features this means you can read a huge newline-delimited JSON file into SQLite in a streaming fashion so it doesn't e

Introducing sqlite-lines - a SQLite extension for reading files line-by-line

Alex Garcia wrote a brilliant C module for SQLite which adds functions (and a table-valued function) for efficiently reading newline-delimited text into SQLite. When combined with SQLite's built-in JSON features this means you can read a huge newline-delimited JSON file into SQLite in a streaming fashion so it doesn't exhaust memory for a large file. Alex also compiled the extension to WebAssembly, and his post here is an Observable notebook that lets you exercise the code directly.

Via @agarcia_me


Weeknotes: Joining the board of the Python Software Foundation

A few weeks ago I was elected to the board of directors for the Python Software Foundation. I put myself up for election partly because I've found myself saying "I wish the PSF would help with ..." a few times over the years, and realized that joining the board could be a more useful way to actively participate, rather than shouting from the sidelines. I was quite surprised to win. I wrote up

A few weeks ago I was elected to the board of directors for the Python Software Foundation.

I put myself up for election partly because I've found myself saying "I wish the PSF would help with ..." a few times over the years, and realized that joining the board could be a more useful way to actively participate, rather than shouting from the sidelines.

I was quite surprised to win. I wrote up a short manifesto - you can see that here - but the voting system lets voters select as many candidates as they like, so it's possible I got in more on broad name recognition among the voters than based on what I wrote. I don't think there's a way to tell one way or the other.

I had my first board meeting on Wednesday, where I formally joined the board and got to vote on my first resolutions. This is my first time as a board member for a non-profit and I have learned a bunch already, with a lot more to go!

Board terms last three years. I expect it will take me at least a few months to get fully up to speed on how everything works.

As a board member, my primary responsibilities are to show up to the meetings, vote on resolutions, act as an ambassador for the PSF to the Python community and beyond and help both set the direction for the PSF and ensure that the PSF meets its goals and holds true to its values.

I'm embarrassed to admit that I wrote my election manifesto without a deep understanding of how the PSF operates and how much is possible for it to get done. Here's the section I wrote about my goals should I be elected:

I believe there are problems facing the Python community that require dedicated resources beyond volunteer labour. I'd like the PSF to invest funding in the following areas in particular:

Improve Python onboarding. In coaching new developers I've found that the initial steps to getting started with a Python development environment can be a difficult hurdle to cross. I'd like to help direct PSF resources to tackling this problem, with a goal of making the experience of starting to learn Python as smooth as possible, no matter what platform the learner is using. Make Python a great platform for distributing software. In building my own application, Datasette, in Python I've seen how difficult it can be to package up a Python application so that it can be installed by end-users, who aren't ready to install Python and learn pip in order to try out a new piece of software. I've researched solutions for this for my own software using Homebrew, Docker, an Electron app and WASM/Pyodide. I'd like the PSF to invest in initiatives and documentation to make this as easy as possible, so that one of the reasons to build with Python is that distributing an application to end-users is already a solved problem.

I still think these are good ideas, and I hope to make progress on them during my term as a director - but I'm not going to start arguing for new initiatives until I've learned the ropes and fully understood the PSF's abilities, current challenges and existing goals.

In figuring out how the board works, one of the most useful pages I stumbled across was this list of resolutions voted on by the board, dating back to 2001. There are over 1,600 of them! Browsing through them gave me a much better idea of the kind of things the board has the authority to do.

Scraping data into Datasette Lite

Because everything looks like a nail when you have a good hammer, I explored the board resolutions by loading them into Datasette. I tried a new trick this time: I scraped data from that page into a CSV file, then loaded up that CSV file in Datasette Lite via a GitHub Gist.

My scraper isn't perfect - it misses about 150 resolutions because they don't exactly fit the format it expects, but it was good enough for a proof of concept. I wrote that in a Jupyter Notebook which you can see here.
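
For illustration only, here is a very rough sketch of that kind of scraper in Python. This is not the code from my notebook: the regular expressions and the columns are invented, and the real page needs more careful parsing.

import csv
import re

def parse_resolutions(html):
    # Pull out "RESOLVED, ..." sentences plus any dollar amount mentioned in them.
    rows = []
    for match in re.finditer(r"RESOLVED,.*?\.", html, re.DOTALL):
        text = " ".join(match.group(0).split())
        dollars = re.search(r"\$([\d,]+)", text)
        rows.append({
            "text": text,
            "dollars": int(dollars.group(1).replace(",", "")) if dollars else None,
        })
    return rows

def write_csv(rows, path="psf-resolutions.csv"):
    # Write the scraped rows out as a CSV, ready to paste into a Gist
    with open(path, "w", newline="") as fp:
        writer = csv.DictWriter(fp, fieldnames=["text", "dollars"])
        writer.writeheader()
        writer.writerows(rows)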

Here's the CSV in a Gist. The great thing about Gists is that GitHub serve those files with the access-control-allow-origin: * HTTP header, which means you can load them cross-domain.

Here's what you get if you paste the URL to that CSV into Datasette Lite (using this new feature I added last month):

And here's a SQL query that shows the sum total dollar amount from every resolution that mentions "Nigeria":

with filtered as (
  select * from [psf-resolutions]
  where "dollars" is not null
  and "text" like '%' || :search || '%'
)
select
  'Total: $' || printf('%,d', sum(dollars)) as text,
  null as date
from filtered
union all
select text, date from filtered;

I'm using a new-to-me trick here: I use a CTE to filter down to just the rows I am interested in, then I create a first row that sums the dollar amounts as the text column and leaves the date column null, then unions that against the rows from the query.

Important note: These numbers aren't actually particularly useful. Just because the PSF board voted on a resolution does not mean that the money made it to the grantee - there are apparently situations where the approved grant may not be properly claimed and transferred. Also, my scraper logic isn't perfect. Plus the PSF spends a whole lot of money in ways that don't show up in these resolutions.

So this is a fun hack, and a neat way to start getting a general idea of how the PSF works, but any numbers it produces should not be taken as the absolute truth.

As a general pattern though, I really like this workflow of generating CSV files, saving them to a Gist and then opening them directly in Datasette Lite. It provides a way to use Datasette to share and explore data without needing to involve any server-side systems (other than GitHub Gists) at all!

Big-endian bugs in sqlite-fts4

sqlite-fts4 is a small Python library I wrote that adds SQLite functions for calculating relevance scoring for full-text search using the FTS4 module that comes bundled with SQLite. I described that project in detail in Exploring search relevance algorithms with SQLite.

It's a dependency of sqlite-utils so it has a pretty big install base, despite being relatively obscure.

This week I had a fascinating bug report from Sarah Julia Kriesch: Test test_underlying_decode_matchinfo fails on PPC64 and s390x on openSUSE.

The s390x is an IBM mainframe architecture and it uses a big-endian byte order, unlike all of the machines I use which are little-endian.

This is the first time I've encountered a big-endian v.s. little-endian bug in my entire career! I was excited to dig in.

Here's the relevant code:

import struct

def decode_matchinfo(buf):
    # buf is a bytestring of unsigned integers, each 4 bytes long
    return struct.unpack("I" * (len(buf) // 4), buf)

SQLite FTS4 provides a matchinfo binary string which you need to decode in order to calculate the relevance score. This code uses the struct standard library module to unpack that binary string into a list of integers.

My initial attempt at fixing this turned out to be entirely incorrect.

I didn't have a big-endian machine available for testing, and I assumed that the problem was caused by Python interpreting the bytes as the current architecture's byte order. So I applied this fix:

return struct.unpack("<" + ("I" * (len(buf) // 4)), buf)

The < prefix there ensures that struct will always interpret the bytes as little-endian. I wrote up a TIL and shipped 1.0.2 with the fix.

Sarah promptly got back to me and reported some new failing tests.

It turns out my fix was entirely incorrect - in fact, I'd broken something that previously was working just fine.

The clue is in the SQLite documentation for matchinfo (which I really should have checked):

The matchinfo function returns a blob value. If it is used within a query that does not use the full-text index (a "query by rowid" or "linear scan"), then the blob is zero bytes in size. Otherwise, the blob consists of zero or more 32-bit unsigned integers in machine byte-order (emphasis mine).

Looking more closely at the original bug report, the test that failed was this one:

@pytest.mark.parametrize(
    "buf,expected",
    [
        (
            b"\x01\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00",
            (1, 2, 2, 2),
        )
    ],
)
def test_underlying_decode_matchinfo(buf, expected):
    assert expected == decode_matchinfo(buf)

That test hard-codes a little-endian binary string and checks the output of my decode_matchinfo function. This is obviously going to fail on a big-endian system.

So my original behaviour was actually correct: I was parsing the string using the byte order of the architecture, and SQLite was providing the string in the byte order of the architecture. The only bug was in my test.

I reverted my previous fix and fixed the test instead:

@pytest.mark.parametrize(
    "buf,expected",
    [
        (
            b"\x01\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00"
            if sys.byteorder == "little"
            else b"\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x02",
            (1, 2, 2, 2),
        )
    ],
)
def test_underlying_decode_matchinfo(buf, expected):
    assert expected == decode_matchinfo(buf)

sys.byteorder reports the byte order of the host system, so this test now passes on both little-endian and big-endian systems.
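
As a standalone illustration of the difference (not code from the library), here is how the same four bytes decode under native, forced little-endian and forced big-endian interpretations:

import struct
import sys

buf = b"\x01\x00\x00\x00"

print(sys.byteorder)             # "little" on x86 and ARM, "big" on s390x
print(struct.unpack("I", buf))   # native order: (1,) on little-endian machines, (16777216,) on big-endian
print(struct.unpack("<I", buf))  # forced little-endian: always (1,)
print(struct.unpack(">I", buf))  # forced big-endian: always (16777216,)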

There was one remaining challenge: how to test this? I wasn't going to make the same mistake of shipping a fix that hadn't actually been exercised on the target architecture a second time.

After quite a bit of research (mainly throwing the terms docker and s390x into the GitHub code search engine and seeing what I could find) I figured out a fix. It turns out you can use Docker and QEMU to run an emulated s390x system - both on a Mac laptop and in GitHub Actions.

Short version:

docker run --rm --privileged multiarch/qemu-user-static:register --reset
docker run -it multiarch/ubuntu-core:s390x-focal /bin/bash

For the longer version, check my TIL: Emulating a big-endian s390x with QEMU.

Releases this week

sqlite-fts4: 1.0.3 - (5 releases total) - 2022-07-30
Custom Python functions for working with SQLite FTS4

shot-scraper: 0.14.2 - (17 releases total) - 2022-07-28
A command-line utility for taking automated screenshots of websites

datasette-sqlite-fts4: 0.3.1 - 2022-07-28
Datasette plugin that adds custom SQL functions for working with SQLite FTS4

datasette-publish-vercel: 0.14.1 - (22 releases total) - 2022-07-23
Datasette plugin for publishing data using Vercel

datasette-insert: 0.8 - (8 releases total) - 2022-07-22

TIL this week

Using pytest and Playwright to test a JavaScript web application
Deploying a redbean app to Fly
Testing things in Fedora using Docker
struct endianness in Python
Migrating a GitHub wiki from one repository to another
Emulating a big-endian s390x with QEMU

Doc Searls Weblog

On windowseat photography

A visitor to aerial photos on my Flickr site asked me where one should sit on a passenger plane to shoot pictures like mine. This post expands on what I wrote back to him. Here’s the main thing: you want a window seat on the side of the plane shaded from the Sun, and away […]

A visitor to aerial photos on my Flickr site asked me where one should sit on a passenger plane to shoot pictures like mine. This post expands on what I wrote back to him.

Here’s the main thing: you want a window seat on the side of the plane shaded from the Sun, and away from the wing. Sun on plane windows highlights all the flaws, scratches, and dirt that are typical features of airplane windows. It’s also best to have a clear view of the ground. In front of the wing is also better than behind, because jet engine exhaust at low altitudes distorts the air, causing blur in a photo. (At high altitudes this problem tends to go away.) So, if you are traveling north in the morning, you want a seat on the left side of the plane (where the seat is usually called A). And the reverse if you’re flying south.

Here in North America, when flying west I like to be on the right side, and when flying east I like to be on the left, because the whole continent is far enough north of the Equator for the Sun, at least in the middle hours of the day, to be in the south. (There are exceptions, however, such as early and late in the day in times of year close to the Summer Solstice, when the Sun rises and sets far north of straight east and west.) This photo, of massive snows atop mountains in Canada’s arctic Baffin Island, was shot on a flight from London to Denver, with the sun on the left side of the plane. I was on the right:

As for choosing seats, the variety of variables is extreme. That’s because almost every airline flies different kinds of planes, and even those that fly only one kind of plane may fly many different seat layouts. For example, there are thirteen different variants of the 737 model, across four generations. And, even within one model of plane, there may be three or four different seat layouts, even within one airline. For example, United flies fifteen different widebody jets: four 767s, six 777s, and four 787s, each with a different seat layout. It also flies nineteen narrowbody jets, five regional jets, and seven turboprops, all with different seat layouts as well.

So I always go to SeatGuru.com for a better look at the seat layout for a plane than what United (or any airline) will tell me on their seat selection page when I book a flight online. On the website, you enter the flight number and the date, and SeatGuru will give you the seat layout, with a rating or review for every seat.

This is critical because some planes’ window seats are missing a window, or have a window that is “misaligned,” meaning it faces the side of a seat back, a bulkhead, or some other obstruction. See here:

Some planes have other challenges, such as the electrically dimmable windows on Boeing 787 “Dreamliners.” I wrote about the challenges of those here.

Now, if you find yourself with a seat that’s over the wing and facing the Sun, good photography is still possible, as you see in this shot of this sunset at altitude:

One big advantage of life in our Digital Age is that none of the airlines, far as I know, will hassle you for shooting photos out windows with your phone. That’s because, while in the old days some airlines forbade photography on planes, shooting photos with phones, constantly, is now normative in the extreme, everywhere. (It’s still bad form to shoot airline personnel in planes, though, and you will get hassled for that.)

So, if you’re photographically inclined, have fun.

Friday, 29. July 2022

Simon Willison

Packaging Python Projects with pyproject.toml

Packaging Python Projects with pyproject.toml I decided to finally figure out how packaging with pyproject.toml works - all of my existing projects use setup.py. The official tutorial from the Python Packaging Authority (PyPA) had everything I needed.

Packaging Python Projects with pyproject.toml

I decided to finally figure out how packaging with pyproject.toml works - all of my existing projects use setup.py. The official tutorial from the Python Packaging Authority (PyPA) had everything I needed.
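
For reference, the kind of minimal file the tutorial builds up looks roughly like this (assuming the setuptools backend; the project name, version and description are placeholders):

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "example-package"
version = "0.0.1"
description = "A small example package"
requires-python = ">=3.7"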


Heres Tom with the Weather

P-Hacking Example

One of the most interesting lessons from the pandemic is the harm that can be caused by p-hacking. A paper with errors related to p-hacking that hasn’t been peer-reviewed is promoted by one or more people with millions of followers on social media and then some of those followers suffer horrible outcomes because they had a false sense of security. Maybe the authors of the paper did not even real

One of the most interesting lessons from the pandemic is the harm that can be caused by p-hacking. A paper with errors related to p-hacking that hasn’t been peer-reviewed is promoted by one or more people with millions of followers on social media and then some of those followers suffer horrible outcomes because they had a false sense of security. Maybe the authors of the paper did not even realize the problem but for whatever reason, the social media rock stars felt the need to spread the misinformation. And another very interesting lesson is that the social media rock stars seem to almost never issue a correction after the paper is reviewed and rejected.

To illustrate p-hacking with a non-serious example, I am using real public data with my experience attending drop-in hockey.

I wanted to know if goalies tended to show up more or less frequently on any particular day of the week because it is more fun to play when at least one goalie shows up. I collected 85 independent samples.

For all days, there were 27 days with 1 goalie and 27 days with 2 goalies and 31 days with 0 goalies.

Our test procedure will define the test statistic X = the number of days that at least one goalie registered.

I am not smart so instead of committing to a hypothesis to test prior to looking at the data, I cheat and look at the data first and notice that the numbers for Tuesday look especially low. So, I focus on goalie registrations on Tuesdays. Using the data above for all days, the null hypothesis is that the probability that at least one goalie registered on a Tuesday is 0.635.

For perspective, taking 19 samples for Tuesday would give an expected value of 12 samples where at least 1 goalie registered.

Suppose we wanted to propose an alternative hypothesis that p < 0.635 for Tuesday. What is the rejection region of values that would refute the null hypothesis (p=0.635)?

Let’s aim for α = 0.05 as the level of significance. This means that (pretending that I had not egregiously cherry-picked data beforehand) we want there to be less than a 5% chance that the experimental result would occur inside the rejection region if the null hypothesis was true (Type I error).

For a binomial random variable X, the pmf b(x; n, p) is

def factorial(n)
  (1..n).inject(:*) || 1
end

def combination(n,k)
  factorial(n) / (factorial(k)*factorial(n-k))
end

def pmf(x,n,p)
  combination(n,x) * (p ** x) * ((1 - p) ** (n-x))
end

The cdf B(x; n, p) = P(X ≤ x) is

def cdf(x,n,p)
  (0..x).map { |i| pmf(i,n,p) }.sum
end

For n=19 samples, if x ≤ 9 was chosen as the rejection region, then α = P(X ≤ 9 when X ~ Bin(19, 0.635)) = 0.112

2.4.10 :001 > load 'stats.rb'
 => true
2.4.10 :002 > cdf(9,19,0.635)
 => 0.1121416295262306

This choice is not good enough because even if the null hypothesis is true, there is a large 11% chance (again, pretending I had not cherry-picked the data) that the test statistic falls in the rejection region.

So, if we narrow the rejection region to x ≤ 8, then α = P(X ≤ 8 when X ~ Bin(19, 0.635)) = 0.047

2.4.10 :003 > cdf(8,19,0.635)
 => 0.04705965393607316

This rejection region satisfies the requirement of a 0.05 significance level.
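
If you would rather cross-check those two α values without the Ruby helpers, the same binomial CDF is a few lines of Python 3.8+ (not part of the original write-up):

from math import comb

def binom_cdf(x, n, p):
    # P(X <= x) for X ~ Bin(n, p)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

print(binom_cdf(9, 19, 0.635))  # ~0.112, matching the irb output above
print(binom_cdf(8, 19, 0.635))  # ~0.047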

The n=19 samples for Tuesday are [0, 0, 0, 1, 0, 0, 0, 0, 1, 2, 0, 1, 1, 0, 1, 2, 2, 0, 0].

Since x=8 falls within the rejection region, the null hypothesis is (supposedly) rejected for Tuesday samples. So I announce to my hockey friends on social media “Beware! Compared to all days of the week, it is less likely that at least one goalie will register on a Tuesday!”

Before addressing the p-hacking, let’s first address another issue. The experimental result was x = 8, which gave us a 0.047 probability of obtaining 8 or fewer days in a sample of 19, assuming that the null hypothesis (p=0.635) is true. This result only barely makes the 0.05 cutoff. So, just saying that the null hypothesis was refuted with α = 0.05 does not reveal that it was barely refuted. It is therefore much more informative to report the p-value of 0.047, which also does not impose a particular α on readers who want to draw their own conclusions.

Now let’s discuss the p-hacking problem. I gave myself the impression that there was only a 5% chance that I would see a significant result even if the null hypothesis (p=0.635) were true. However, since there is data for 5 days (Monday, Tuesday, Wednesday, Thursday, Friday), I could have performed 5 different tests. If I chose that same p < 0.635 alternative hypothesis for each, then there would similarly be a 5% chance of a significant result for each test. The probability that all 5 tests would not be significant would be 0.95 * 0.95 * 0.95 * 0.95 * 0.95 = 0.77. Therefore, the probability that at least one test would be significant is 1 - 0.77 = 0.23 (the Family-wise error rate) which is much higher than 0.05. That’s like flipping a coin twice and getting two heads which is not surprising at all. We should expect such a result even if the null hypothesis is true. Therefore, there is not a significant result for Tuesday.

I was inspired to write this blog post after watching Dr. Susan Oliver’s Antivaxxers fooled by P-Hacking and apples to oranges comparisons. The video references the paper The Extent and Consequences of P-Hacking in Science (2015).

Thursday, 28. July 2022

@_Nat Zone

W3C standardizes the Decentralized ID specification: a look back at the history behind it

Last week the W3C published a Recommendation for DID (Decen…

I wrote the article named in the title for Nikkei XTECH, about DID (Decentralized Identifiers), for which the W3C published a Recommendation last week.1 It covers the background of a much-discussed technology that also relates to Web 3.0 / web3 and Web5. I think even most people working on DIDs do not know this history, so you should enjoy it (though the second half is behind the paywall).

This article is the first of the second and third installments in the series "A thorough examination: will blockchain make humanity happy?". Page 1 gives an overview of DIDs, and page 2 covers the historical background, including the prior technologies that led up to them, along with some analysis. The important parts, from page 2 onward, are paywalled, so you cannot read them without paying... I have written out just the table of contents below. If you have not subscribed yet, I would appreciate it if you did so and read it.2

(Introduction)
What is a DID? (free)
- The DID format
- Verifiable registries
- DID documents
- DID controllers
- How these relate to each other
The idea behind DIDs actually existed more than 20 years ago: the history of XRI (paid)
- The history around XRI
- The XRI format
- XRI registries
- XRDS documents
- How these relate to each other
What XRI had that DID does not (paid)
What DID has that XRI did not (paid)
Dictatorship vs. ancient republics vs. modern democracy (paid)

While I was writing it, this article easily grew to about four times its current length; I shortened it by cutting out episodes and surrounding history. The full version features an assassination, among other things,3 and is (possibly) quite thrilling, so I hope to write it up somewhere else at some point.

Note that the second half of the article will be published next week. The historical background and philosophy of OpenID will appear there too, so stay tuned. There is material in it that will force anyone who claims Web 2.0 is centralized to think again.

That said, I really would like to publish this series in English in near real time as well.

And with that, please enjoy.

Nikkei XTECH: W3C standardizes the Decentralized ID specification: a look back at the history behind it (part 1)

Wednesday, 27. July 2022

Simon Willison

Fastest way to turn HTML into text in Python

Fastest way to turn HTML into text in Python A light benchmark of the new-to-me selectolax Python library shows it performing extremely well for tasks such as extracting just the text from an HTML string, after first manipulating the DOM. selectolax is a Python binding over the Modest and Lexbor HTML parsing engines, which are written in no-outside-dependency C. Via Found selectolax in re

Fastest way to turn HTML into text in Python

A light benchmark of the new-to-me selectolax Python library shows it performing extremely well for tasks such as extracting just the text from an HTML string, after first manipulating the DOM. selectolax is a Python binding over the Modest and Lexbor HTML parsing engines, which are written in no-outside-dependency C.

Via Found selectolax in readthedocs/embed/v3/views.py
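
As a rough sketch of that kind of extraction, based on my reading of the selectolax API rather than on code from the benchmark post, stripping non-text tags before grabbing the text looks something like this:

from selectolax.parser import HTMLParser

def html_to_text(html):
    tree = HTMLParser(html)
    # Drop tags whose contents are not human-readable text
    for tag in ("script", "style"):
        for node in tree.css(tag):
            node.decompose()
    # tree.body can be None for fragments, so fall back to the root node
    body = tree.body or tree.root
    return body.text(separator="\n")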


Doc Searls Weblog

Remembering Craig Burton

I used to tell Craig Burton there was no proof that he could be killed, because he came so close, so many times. But now we have it. Cancer got him, a week ago today. He was sixty-seven. So here’s a bit of back-story on how Craig and I became great friends. In late 1987, […]

I used to tell Craig Burton there was no proof that he could be killed, because he came so close, so many times. But now we have it. Cancer got him, a week ago today. He was sixty-seven.

So here’s a bit of back-story on how Craig and I became great friends.

In late 1987, my ad agency, Hodskins Simone & Searls, pulled together a collection of client companies for the purpose of creating what we called a “connectivity consortium.” The idea was to evangelize universal networking—something the world did not yet have—and to do it together.

The time seemed right. Enterprises everywhere were filling up with personal computers, each doing far more than mainframe terminals ever did. This explosion of personal productivity created a massive demand for local area networks, aka LANs, on which workers could share files, print documents, and start to put their companies on a digital footing. IBM, Microsoft, and a raft of other companies were big in the LAN space, but one upstart company—Novell—was creaming all of them. It did that by embracing PCs, Macs, makers of hardware accessories such as Ethernet cards, plus many different kinds of network wiring and communications protocols.

Our agency was still new in Silicon Valley, and our clients were relatively small. To give our consortium some heft, we needed a leader in the LAN space. So I did the audacious thing, and called on Novell at Comdex, which was then the biggest trade show in tech. My target was Judith Clarke, whose marketing smarts were already legendary. For example, while all the biggest companies competed to out-spend each other with giant booths on the show floor, Judith had Novell rent space on the ground floor of the Las Vegas Hilton, turning that space into a sales office for the company, a storefront on the thickest path for foot traffic at the show.

So I cold-called on Judith at that office. Though she was protected from all but potential Novell customers, I cajoled a meeting, and Judith said yes. Novell was in.

The first meeting of our connectivity consortium was in a classroom space at Novell’s Silicon Valley office. One by one, each of my agency’s client companies spoke about what they were bringing to our collective table, while a large unidentified dude sat in the back of the room, leaning forward, looking like a walrus watching fish. After listening patiently to what everyone said, the big dude walked up to the blackboard in front and chalked out diagrams and descriptions of how everything everyone was talking about could actually work together. He also added a piece nobody had brought up yet: TCP/IP, the base protocol for the Internet. That one wasn’t represented by a company, mostly because it wasn’t invented for commercial purposes. But, the big guy said, TCP/IP was the protocol that would, in the long run, make everything work together.

I was of the same mind, so quickly the dude and I got into a deep conversation during which it was clear to me that I was being both well-schooled about networking, yet respected for what little new information I brought to the conversation. After a while, Judith leaned in to tell us that this dude was Craig Burton, and that it was Craig’s strategic vision that was busy guiding Novell to its roaring success.

Right after that meeting, Craig called me just to talk, because he liked how the two of us could play “mind jazz” together, co-thinking about the future of a digital world that was still being born. Which we didn’t stop doing for the next thirty-four years.

So much happened in that time. Craig and Judith† had an affair, got exiled from Novell, married each other and built The Burton Group with another Novell alum, Jamie Lewis. It was through The Burton Group that I met and became good friends with Kim Cameron, who also passed too early, in November of last year. Both were also instrumental in helping start the Internet Identity Workshop, along with too many other things to mention. (Here are photos from the first meeting of what was then the “Identity Gang.”)

If you search for Craig’s name and mine together, you’ll find more than a thousand results. I’ll list a few of them later, and unpack their significance. But instead for now, I’ll share what I sent for somebody to use at the service for Craig today in Salt Lake City:

In a more just and sensible world, news of Craig Burton’s death would have made the front page of the Deseret News, plus the obituary pages of major papers elsewhere—and a trending topic for days in social media.*

If technology had a Hall of Fame, Craig would belong in it. And maybe some day, that will happen.

Because Craig was one of the most important figures in the history of the networked world where nearly all of us live today. Without Craig’s original ideas, and guiding strategic hand, Novell would not have grown from a small hardware company into the most significant networking company prior to the rise of the Internet itself. Nor would The Burton Group have helped shape the networking business as well, through the dawn of the Internet Age.

In those times and since, Craig’s thinking has often been so deep and far-reaching that I am sure it will be blowing minds for decades to come. Take, for example, what Craig said to me in  a 2000 interview for Linux Journal. (Remember that this was when the Internet was still new, and most homes were connected by dial-up modems.)

I see the Net as a world we might see as a bubble. A sphere. It’s growing larger and larger, and yet inside, every point in that sphere is visible to every other one. That’s the architecture of a sphere. Nothing stands between any two points. That’s its virtue: it’s empty in the middle. The distance between any two points is functionally zero, and not just because they can see each other, but because nothing interferes with operation between any two points. There’s a word I like for what’s going on here: terraform. It’s the verb for creating a world. That’s what we’re making here: a new world.

Today, every one of us with a phone in our pocket or purse lives on that giant virtual world, with zero functional distance between everyone and everything—a world we have barely started to terraform.

I could say so much more about Craig’s original thinking and his substantial contributions to developments in our world. But I also need to give credit where due to the biggest reason Craig’s heroism remains mostly unsung, and that’s Craig himself. The man was his own worst enemy: a fact he admitted often, and with abiding regret for how his mistakes hurt others, and not just himself.

But I also consider it a matter of answered prayer that, after decades of struggling with alcohol addiction, Craig not only sobered up, but stayed that way, married his high school sweetheart and returned to the faith into which he was born.

Now it is up to people like me—Craig’s good friends still in the business—to make sure Craig’s insights and ideas live on.

Here is a photo album of Craig. I’ll be adding to it over the coming days.

†Judith died a few years ago, at just 66. Her heroism as a marketing genius is also mostly unsung today.

*Here’s a good one, in Silicon Slopes.


MyDigitalFootprint

Can frameworks help us understand and communicate?

I have the deepest respect and high regard for Wardley Maps and the Cynefin framework.  They share much of the same background and evolution. Both are extremely helpful and modern frameworks for understanding, much like Porter’s five forces model was back in the 1980s.  I adopted the same terminology (novel, emergent, good and best) when writing about the development of governance

I have the deepest respect and high regard for Wardley Maps and the Cynefin framework.  They share much of the same background and evolution. Both are extremely helpful and modern frameworks for understanding, much like Porter’s five forces model was back in the 1980s. 





I adopted the same terminology (novel, emergent, good and best) when writing about the development of governance for 2050. In the article Revising the S-Curve in an age of emergence, I used the S-curve as it has helped us on several previous journeys. It supported our understanding of adoption and growth; it can now be critical in helping us understand the development and evolution of governance towards a sustainable future. An evolutionary S-curve is more applicable than ever as we enter a new phase of emergence. Our actions and behaviours emerge when we grasp that all parts of our ecosystem interact as a more comprehensive whole.

A governance S-curve can help us unpack new risks in this dependent ecosystem so that we can make better judgments that lead to better outcomes. What is evident is that we need far more than proof, lineage and provenance of data from a wide ecosystem if we are going to create better judgement environments; we need a new platform.

The image below takes the same terminology again but moves the Cynefin framework from the four quadrant domains to consider what happens when you have to optimise for more things - as in the Peak Paradox model.  


The yellow outer disc is about optimising for single outcomes and purposes.  In so many ways, this is simple as there is only one driving force, incentive or purpose, which means the relationship between cause and effect is obvious. 

The first inner purple ring recognises that some decision-making has a limited number of dependent variables. Systems thinking is required to unpick them, but it is possible to come up with an optimal outcome.

The pink inner ring is the first level where the relationship between cause and effect requires analysis or some form of investigation and/ or the application of expert knowledge.  This is difficult and requires assumptions, often leading to tension and conflict.   Optimising is not easy, if at all possible.

The inner black circle is where peak paradox exists. Complexity thrives, as the relationship between cause and effect can only be perceived in hindsight. Models can post-justify outcomes but are unlikely to scale or be repeatable. There is a paradox because the same information can have two (or more) meanings and outcomes.

The joy of any good framework is that it can always give new understanding and insight. What a Wardley Map then adds is movement: a change of position from where you are to where you will be.

Why does this matter?

Because what we choose to optimise for is different from what a team of humans or a company will optimise for. Note I use “optimise”, but it could equally be “maximise”. These are the yin/yang of effectiveness and efficiency, a continual movement. The purpose ideals are like efficacy - are you doing the right thing?

What we choose to optimise for is different from what a team of humans or a company will optimise for.

We know that it is far easier to make a decision when there is clarity of purpose. However, when we have to optimise for different interests that are both dependent and independent, decision-making enters zones that are hard and difficult. It requires judgement. Complexity is where leadership can shine, as leaders can move from the simple and obvious decision-making of the outer circle to utilising the collective intelligence of the wider team as decisions become more complex. Asking “what is going on here” and understanding it is outside a single person's reach. High-functioning and diverse teams are critical for decisions where paradoxes may exist.

When it gets towards the difficult areas, leadership will first determine if they are being asked to optimise for a policy or to align to an incentive; this shines the first spotlight on a zone where they need to be.   





Simon Willison

SQLite Internals: Pages & B-trees

SQLite Internals: Pages & B-trees Ben Johnson provides a delightfully clear introduction to SQLite internals, describing the binary format used to store rows on disk and how SQLite uses 4KB pages for both row storage and for the b-trees used to look up records. Via @benbjohnson

SQLite Internals: Pages & B-trees

Ben Johnson provides a delightfully clear introduction to SQLite internals, describing the binary format used to store rows on disk and how SQLite uses 4KB pages for both row storage and for the b-trees used to look up records.

Via @benbjohnson
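
One easy way to poke at this yourself is to read the page size straight out of a database file's 100-byte header, where it is stored at byte offset 16 as a 2-byte big-endian integer (a standalone snippet, not from Ben's post; "my.db" is a placeholder path):

import struct

with open("my.db", "rb") as fp:
    header = fp.read(100)

assert header[:16] == b"SQLite format 3\x00"
(page_size,) = struct.unpack(">H", header[16:18])
if page_size == 1:
    # The value 1 is how SQLite encodes the maximum page size of 65536
    page_size = 65536
print(page_size)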


reb00ted

Is this the end of social networking?

Scott Rosenberg, in a piece with the title “Sunset of the social network”, writes at Axios: Mark last week as the end of the social networking era, which began with the rise of Friendster in 2003, shaped two decades of internet growth, and now closes with Facebook’s rollout of a sweeping TikTok-like redesign. A sweeping statement. But I think he’s right: Facebook is fundamentally an adve

Scott Rosenberg, in a piece with the title “Sunset of the social network”, writes at Axios:

Mark last week as the end of the social networking era, which began with the rise of Friendster in 2003, shaped two decades of internet growth, and now closes with Facebook’s rollout of a sweeping TikTok-like redesign.

A sweeping statement. But I think he’s right:

Facebook is fundamentally an advertising machine. Like other Meta products are. They aren’t really about “technologies that bring the world closer together”, as the Meta homepage has it. At least not primarily.

This advertising machine has been amazingly successful, leading to a recent quarterly revenue of over $50 per user in North America (source). And Meta certainly has driven this hard, otherwise it would not have been in the news for overstepping the consent of its users year after year, scandal after scandal.

But now a better advertising machine is in town: TikTok. This new advertising machine is powered not by friends and family, but by an addiction algorithm. This addiction algorithm figures out your points of least resistance, and pours one advertisement after another down your throat. And as soon as you have swallowed one more, you scroll a bit more, and by doing so, you are asking for more advertisements, because of the addiction. This addiction-based advertising machine is probably close to the theoretical maximum of how many advertisements one can pour down somebody’s throat. An amazing work of art; as an engineer I have to admire it. (Of course that admiration quickly changes into some other emotion of the disgusting sort, if you have any kind of morals.)

So Facebook adjusts, and transitions into another addiction-based advertising machine. Which does not really surprise anybody I would think.

And because it was never about “bring[ing] the world closer together”, they drop that mission as if they never cared. (That’s because they didn’t. At least MarkZ didn’t, and he is the sole, unaccountable overlord of the Meta empire. A two-class stock structure gives you that.)

With the giant putting their attention elsewhere, where does this leave social networking? Because the needs and the wants to “bring[ing] the world closer together”, and to catch up with friends and family are still there.

I think it leaves social networking, or what will replace it, in a much better place. What about this time around we build products whose primary focus is actually the stated mission? Share with friends and family and the world, to bring it together (not divide it)! Instead of something unrelated, like making lots of ad revenue! What a concept!

Imagine what social networking could be!! The best days of social networking are still ahead. Now that the pretenders are leaving, we can actually start solving the problem. Social networking is dead. Long live what will emerge from the ashes. It might not be called social networking, but it will be, just better.

Tuesday, 26. July 2022

Simon Willison

Cosmopolitan: Compiling Python

Cosmopolitan: Compiling Python Cosmopolitan is Justine Tunney's "build-once run-anywhere C library" - part of the αcτµαlly pδrταblε εxεcµταblε effort, which produces wildly clever binary executable files that work on multiple different platforms, and is the secret sauce behind redbean. I hadn't realized this was happening but there's an active project to get Python to work as this format, produc

Cosmopolitan: Compiling Python

Cosmopolitan is Justine Tunney's "build-once run-anywhere C library" - part of the αcτµαlly pδrταblε εxεcµταblε effort, which produces wildly clever binary executable files that work on multiple different platforms, and is the secret sauce behind redbean. I hadn't realized this was happening but there's an active project to get Python to work as this format, producing a new way of running Python applications as standalone executables, only these ones have the potential to run unmodified on Windows, Linux and macOS.


@_Nat Zone

[7/26 21:00] BGIN Block #6 IKP WG Plenary: SBT, Selective Disclosure, and More

Today (July 26, 2022), from 21:00 Japan time (Zu…

Today (July 26, 2022), from 21:00 Japan time (14:00 Zurich time), the IKP WG Plenary will be held at BGIN Block #6. The agenda is as follows:

Overview of the session (10 minutes)
Ransomware report presentation and discussion (15 minutes): Jessica Mila Schutzman, (co-editor and presenter)
Selective Disclosure: (20 minutes) Kazue Sako (editor and presenter)
Soul Bound Token (SBT) : (45 minutes) Michi Kakebayashi (presenter) Tetsu Kurumizawa (presenter)
AOB: Ronin network incident, etc

Remote registration should still be possible via the official site below. Please stop by if you are interested.

Blockchain Governance Initiative Network (BGIN Block #6) @Zurich [Hybrid]

[7/26 22:30] Q&A with the authors of the SBT (Soul Bound Token) paper

Today (7/26), from 22:30 Japan time (15:30…

Today (7/26), from 22:30 Japan time (15:30 CET), BGIN Block #6 will host a session interviewing the authors of the SBT (Soul Bound Token) paper, which drew attention because Vitalik Buterin was a co-author.

15:30 – 16:00 (CET): Digital Identity
Moderator: Michi Kakebayashi and Tetsu Kurumisawa
E. Glen Weyl, Puja Ohlhaver: Soul Bound Token (Interviewing)

The link to the SBT paper is here → https://t.co/nLhD4gbMk9

Ahead of this, there will also be a session explaining the SBT paper, as part of the IKP WG session starting at 21:00 Japan time.

14:00 – 15:30 (CET): IKP WG Editing Session
Chair: Nat Sakimura
Jessica Mila Schutzman
Michi Kakebayashi
Tetsu Kurumizawa
Kazue Sako
IKP Working Group Editing Session
– Overview of the session (Chair: Nat Sakimura)
– Ransomware report (Jessica Mila Schutzman)
– Soul Bound Token (SBT)
– Selective disclosure
– AOB

You can register for BGIN Block #6 via the official site below.

Blockchain Governance Initiative Network (BGIN Block #6) @Zurich [Hybrid]

[7/26 17:45] Web5, Decentralized Identity, and its Ecosystem

Today from 17:45, "Web 5, Decent…

Today from 17:45, Daniel Buchner (Head of Decentralized Identity at Block), one of the people behind Web5, which became famous through Jack Dorsey's tweet, will give a keynote at BGIN Block #6 titled "Web 5, Decentralized Identity and its ecosystem".

this will likely be our most important contribution to the internet. proud of the team. #web5

(RIP web3 VCs )https://t.co/vYlVqDyGE3 https://t.co/eP2cAoaRTH

— jack (@jack) June 10, 2022
Jack Dorsey's tweet announcing Web5: rest in peace, web3 venture capital

For an overview of Web5, the official site has the details.

Daniel has joined my Identiverse sessions before in the context of decentralized identity. That said, since it will be the middle of the night for him, I am a little nervous he might not show up.

Registration for BGIN Block #6 still seems to be possible via the official site below, so please join if you are interested.

Blockchain Governance Initiative Network (BGIN Block #6) @Zurich [Hybrid]

reb00ted

A list of (supposed) web3 benefits

I’ve been collecting a list of the supposed benefits of web3, to understand how the term is used these days. Might as well post what I found: better, fairer internet wrest back power from a small number of centralized institutions participate on a level playing field control what data a platform receives all data (incl. identities) is self-sovereign and secure high-quality informatio

I’ve been collecting a list of the supposed benefits of web3, to understand how the term is used these days. Might as well post what I found:

better, fairer internet
wrest back power from a small number of centralized institutions
participate on a level playing field
control what data a platform receives
all data (incl. identities) is self-sovereign and secure
high-quality information flows
creators benefit
reduced inefficiencies
fewer intermediaries
transparency
personalization
better marketing
capture value from virtual items
no censorship (content, finance etc)
democratized content creation
crypto-verified information correctness
privacy
decentralization
composability
collaboration
human-centered
permissionless

Some of this is clearly aspirational, perhaps on the other side of likely. Also not exactly what I would say if asked. But nevertheless an interesting list.


The shortest definition of Web3

web1: read web2: read + write web3: read + write + own Found here, but probably lots of other places, too.
web1: read
web2: read + write
web3: read + write + own

Found here, but probably lots of other places, too.


Simon Willison

viewport-preview

viewport-preview I built a tiny tool which lets you preview a URL in a bunch of different common browser viewport widths, using iframes. Via @simonw

viewport-preview

I built a tiny tool which lets you preview a URL in a bunch of different common browser viewport widths, using iframes.

Via @simonw
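A rough sketch of the same idea, assuming nothing about Simon's implementation: generate a static HTML page that loads the target URL into iframes at a handful of common viewport widths (sites that send X-Frame-Options or frame-ancestors headers will refuse to render).

WIDTHS = [320, 375, 414, 768, 1024, 1280]  # common phone/tablet/desktop widths

def preview_page(url):
    frames = "\n".join(
        f'<figure><figcaption>{w}px</figcaption>'
        f'<iframe src="{url}" width="{w}" height="600"></iframe></figure>'
        for w in WIDTHS
    )
    return f"<!DOCTYPE html><html><body><h1>Previews of {url}</h1>{frames}</body></html>"

if __name__ == "__main__":
    with open("preview.html", "w") as f:
        f.write(preview_page("https://example.com/"))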

Monday, 25. July 2022

Simon Willison

Reduce Friction

Reduce Friction Outstanding essay on software engineering friction and development team productivity by C J Silverio: it explains the concept of "friction" (and gives great definitions of "process", "ceremony" and "formality" in the process) as it applies to software engineering, lays out the challenges involved in getting organizations to commit to reducing it and then provides actionable advic

Reduce Friction

Outstanding essay on software engineering friction and development team productivity by C J Silverio: it explains the concept of "friction" (and gives great definitions of "process", "ceremony" and "formality" in the process) as it applies to software engineering, lays out the challenges involved in getting organizations to commit to reducing it and then provides actionable advice on how to get consensus and where to invest your efforts in order to make things better.

Sunday, 24. July 2022

Simon Willison

Sqitch tutorial for SQLite

Sqitch tutorial for SQLite Sqitch is an interesting implementation of database migrations: it's a command-line tool written in Perl with an interface similar to Git, providing commands to create, run, revert and track migration scripts. The scripts themselves are written as SQL in whichever database engine you are using. The tutorial for SQLite gives a good idea as to how the whole system works.

Sqitch tutorial for SQLite

Sqitch is an interesting implementation of database migrations: it's a command-line tool written in Perl with an interface similar to Git, providing commands to create, run, revert and track migration scripts. The scripts themselves are written as SQL in whichever database engine you are using. The tutorial for SQLite gives a good idea as to how the whole system works.

Via Piers Cawley

Thursday, 21. July 2022

MyDigitalFootprint

Why do we lack leadership?

Because when there is a leader, we look to them to lead, and they want us to follow their ideas. If you challenge the leader, you challenge leadership, and suddenly, you are not in or on the team. If you don’t support the leader, you are seen as a problem and are not a welcome member of the inner circle. If you bring your ideas, you are seen to be competitive to the system and not aligned.


Because when there is a leader, we look to them to lead, and they want us to follow their ideas. If you challenge the leader, you challenge leadership, and suddenly, you are not in or on the team. If you don’t support the leader, you are seen as a problem and are not a welcome member of the inner circle. If you bring your ideas, you are seen to be competitive to the system and not aligned.  If you don’t bring innovation, you are seen to lack leadership potential. 

The leader sets the rules unless and until the leader loses authority, or it becomes evident that their ideas don’t add up; only then does a challenge to leadership, and a demonstration of leadership skills, become valid.

We know this leadership model is broken and based on old command and control thinking inherited from models of war. We have lots of new leadership models, but leaders who depend on others for ideas, skills and talent, are they really the inspiration we are seeking?  

Leadership is one of the biggest written-about topics, but it focuses on the skills/ talents you need to be a leader and the characteristics you need as a leader. 

So I am stuck thinking …..

in a world where war was not a foundation, what would have been a natural or dominant model for leadership?

do we lack leaders because we have leaders - because of our history?

do we love the idea of leaders more than we love leaders?

do we have leaders because of a broken model for accountability and responsibility?

do we like leadership because it is not us leading?

do we find it easier to be critical than be criticised?

is leadership sustainable? 

if care for our natural world was our only job, what would leadership look like?


Tuesday, 19. July 2022

MyDigitalFootprint

A problem of definitions in economics that create conflicts

A problem of definitions As we are all reminded of inflation and its various manifestations, perhaps we also need to rethink some of them.  The reason is that in economics, inflation is all about a linear scale. Sustainable development does not really map very well to this scale. In eco-systems, it is about balance.  Because of the way we define growth - we aim for inflation and need t

A problem of definitions

As we are all reminded of inflation and its various manifestations, perhaps we also need to rethink some of them.  The reason is that in economics, inflation is all about a linear scale. Sustainable development does not really map very well to this scale. In eco-systems, it is about balance.  Because of the way we define growth - we aim for inflation and need to control it.  However, this scale thinking then frames how we would perceive sustainability as the framing sets these boundaries.   What happens if we change it round?


What we have today, in terms of definitions, creates conflicts, and therefore we have to ask: is this useful for a sustainable future, or are we trying to fit a square peg into a round hole?

Economics term | Definition | Perception from the sustainability community and long-term impact

Hyperinflation
Definition: a period of fast-rising inflation. An increase in prices drives for more efficiency to control pricing, the use of scale to create damping effects, and the use of global supply to counter the effects.
Long-term impact: rapid and irreparable damage.

Inflation
Definition: the rate at which the overall level of prices for various goods and services in an economy rises over a period of time. It drives growth, which is an increase in the amount of goods and services produced per head of the population over a period of time.
Long-term impact: significant damage and changes to eco-systems and habitat.

Stagflation
Definition: characterised by slow economic growth and relatively high unemployment (economic stagnation), accompanied at the same time by rising prices (i.e. inflation). Stagflation can alternatively be defined as a period of inflation combined with a decline in gross domestic product (GDP).
Long-term impact: unstable balance, but repairable damage possible.

Recession / deflation
Definition: deflation is when prices drop significantly due to too large a money supply or a slump in consumer spending; lower costs mean companies earn less and may institute layoffs.
Long-term impact: stable and sustainable.

Contraction
Definition: a phase of the business cycle in which the economy is in decline. A contraction generally occurs after the business cycle peaks, before it becomes a trough.
Long-term impact: expansion of the ecosystem and improving habitats.



Perhaps the following is what we need and want if we are to remove the tensions from the ideals of growth and have a sustainable future.


Sustainable development | Economics

Unstable balance and damage creates change | Rapid growth
Out of balance, but repairable damage possible | Unco-ordinated growth
Stable and sustainable (requires a lot of work and investment into projects to maintain stability and sustainability; projects are long-term and vast; requires global accord and the loss of intra-Varlas protections; no sovereign states are needed, as everyone must be held accountable) | Growth, but without intervention it would not be sustainable
Expansion of the ecosystem and improving habitats (the Goldilocks zone: improving quality of life and lifestyles, but not at the expense of reducing the habitable area of the earth) | Slow growth in terms of purity of economics and GDP measurements
Stable and sustainable (requires a lot of work and investment into projects to maintain stability and sustainability; projects are long-term and vast; requires global accord and the loss of intra-Varlas protections; no sovereign states are needed, as everyone must be held accountable) | Shrinking, and without intervention it would not be sustainable
Out of balance, but repairable damage possible | Unco-ordinated decline
Unstable balance and damage creates change | Rapid decline




Friday, 15. July 2022

Doc Searls Weblog

Subscriptification

via Nick Youngson CC BY-SA 3.0 Pix4free.org Let’s start with what happened to TV. For decades, all TV signals were “over the air,” and free to be watched by anyone with a TV and an antenna. Then these things happened:  Community Antenna TeleVision, aka CATV, gave us most or all of our free over-the-air channels, plus many […]

via Nick Youngson CC BY-SA 3.0 Pix4free.org

Let’s start with what happened to TV.

For decades, all TV signals were “over the air,” and free to be watched by anyone with a TV and an antenna. Then these things happened:

Community Antenna TeleVision, aka CATV, gave us most or all of our free over-the-air channels, plus many more—for a monthly subscription fee. They delivered this service, literally, through a cable connection—one that looked like the old one that went to an outside antenna, but instead went back to the cable company’s local headquarters.

Then premium TV (aka “pay,” “prestige” and “subscription” TV) arrived, alongside one’s cable channel selection. This started with HBO and Showtime. It cost additional subscription fees but was inside your cable channel selection and your monthly cable bill.

Then streaming services (aka Video on Demand, or VoD) showed up over the Internet, through media players you could hook up to your TV through an input (usually HDMI) aside from the one from your cable box and your cable service—even if your Internet service was provided by the cable company. This is why the cable industry called all of these services “over the top,” or OTT. The main brands here were Amazon Fire, Apple TV, Google Chromecast, and Roku. Being delivered over the Internet rather than lumped in with all those cable channels, higher resolutions were possible. At best most cable services are “HD,” which was fine a decade ago, but is now quite retro. Want to watch TV in 4K, HDR, and all that? Subscribe through your smart OTT media intermediary.

And now media players are baked into TVs. Go to Best Buy, Costco, Sam’s Club, Amazon, or Walmart, and you’ll see promos for “smart” Google, Fire (Amazon), Roku, webOS, and Tizen TVs—rather than just Sony, LG, Samsung, and other brands. Relatively cheap brands, such as Vizio, TCL, and Hisense, are essentially branded media players with secondary brand names on the bezel.

Economically speaking, all that built-in smartness is about two things. One is facilitating subscriptions, and the other is spying on you for the advertising business. But let’s table the latter and focus just on subscriptions, because that’s the way the service world is going.

More and more formerly free stuff on the Net is available only behind paywalls. Newspapers and magazines have been playing this game for some time. But, now that Substack is the new blogging, many writers there are paywalling their stuff as well. Remember SlideShare? Now it’s “Read free for 60 days.”

Podcasting is drifting in that direction too. SiriusXM and Spotify together paid over a half $billion to put a large mess of popular podcasts into subscription-based complete (SiriusXM) or partial (Spotify) paywall systems, pushing podcasting toward the place where premium TV has already sat for years—even though lots of popular podcasts are still paid for by advertising.

I could add a lot of data here, but I’m about to leave on a road trip. So I’ll leave it up to you. Look at what you’re spending now on subscriptions, and how that collection of expenses is going up. Also, take a look at how much of what was free on the Net and the Web is moving to a paid subscription model. The trend is not small, and I don’t see it stopping soon.

 

Wednesday, 13. July 2022

Ludo Sketches

ForgeRock Directory Services 7.2 has been released

ForgeRock Directory Services 7.2 was and will be the last release of ForgeRock products that I’ve managed. It was finished when I left the company and was released to the public a few days after. Before I dive into the… Continue reading →

ForgeRock Directory Services 7.2 was and will be the last release of ForgeRock products that I’ve managed. It was finished when I left the company and was released to the public a few days after. Before I dive into the changes available in this release, I’d like to thank the amazing team that produced this version, from the whole Engineering team led by Matt Swift, to the Quality engineering led by Carole Forel, the best and only technical writer Mark Craig, and also our sustaining engineer Chris Ridd who contributed some important fixes to existing customers. You all rock and I’ve really appreciated working with you all these years.

So what’s new and exciting in DS 7.2?

First, this version introduces a new type of index: the Big Index. This type of index is meant to optimize search queries that are expected to return a large number of results from an even much larger number of entries. For example, suppose you have an application that searches for all users in the USA that live in a specific state. In a population of hundreds of millions of users, you may have millions that live in one particular state (let's say Ohio). With previous versions, searching for all users in Ohio would be unindexed, and the search, if allowed, would scan the whole directory data to identify the ones in Ohio. With 7.2, the state attribute can be indexed as a Big Index, and the same search query would be considered indexed, only going through the reduced set of users that have Ohio as the value of the state attribute.

Big Indexes can have a smaller impact on write performance than regular indexes, but they tend to have a larger on-disk footprint. As usual, choosing the Big Index type is a matter of trade-off between read and write performance, but also disk space occupation, which may itself have some impact on performance. It is recommended to test and run benchmarks in development or pre-production environments before using them in production.

The second significant new feature in 7.2 is the support of the HAProxy Protocol for LDAP and LDAPS. When ForgeRock Directory Services is deployed behind a software load-balancer such as HAProxy, NGINX or Kubernetes Ingress, it's not possible for DS to know the IP address of the client application (the only IP address known is that of the load-balancer); therefore, it is not possible to enforce specific access controls or limits based on the applications. By supporting the HAProxy Protocol, DS can decode a specific header sent by the load-balancer and retrieve some information about the client application, such as its IP address, but also some TLS-related information if the connection between the client and the load-balancer is secured by TLS. DS can use this information in access controls, logging, limits… You can find more details about DS support of the Proxy Protocol in the DS documentation.

In DS 7.2, we have added a new option for securing and hashing passwords: Argon2. When enabled (which is the default), this allows importing users with Argon2-hashed passwords and letting them authenticate immediately. Argon2 may also be selected as the default scheme for hashing new passwords, by associating it with a password policy (such as the default password policy). The Argon2 password scheme has several parameters that control the cost of the hash: version, number of iterations, amount of memory to use and parallelism (aka number of threads used). While Argon2 is probably the best algorithm today for securing passwords, it can have a very big impact on the server's performance, depending on the Argon2 parameters selected. Remember that DS encrypts the entries on disk by default, and therefore the risk of exposing hashed passwords at rest is extremely low (if not null).
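To make those cost parameters concrete, here is a small sketch using the argon2-cffi Python library rather than DS itself (the library and the example numbers are my choice, not ForgeRock's defaults): time_cost is the number of iterations, memory_cost is the memory in KiB, and parallelism is the number of threads.

from argon2 import PasswordHasher

# Example parameters only; tune them against your own performance budget.
ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=4)

digest = ph.hash("correct horse battery staple")
print(digest)  # $argon2id$v=19$m=65536,t=3,p=4$...
print(ph.verify(digest, "correct horse battery staple"))  # True; raises on mismatch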

Also new is the ability to search for attributes with a DistinguishedName syntax using pattern matching. DS 7.2 introduces a new matching rule named distinguishedNamePatternMatch (defined with the OID 1.3.6.1.4.1.36733.2.1.4.13). It can be used to search for users with a specific manager for example with the following filter “(manager:1.3.6.1.4.1.36733.2.1.4.13:=uid=trigden,**)” or a more human readable form “(manager:distinguishedNamePatternMatch:=uid=trigden,**)”, or to search for users whose manager is part of the Admins organisational unit with the following filter “(manager:1.3.6.1.4.1.36733.2.1.4.13:=*,ou=Admins,dc=example,dc=com)”.
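As a hedged illustration of that filter in use (the ldap3 Python client, host, credentials and base DN here are placeholders of mine, not part of the release notes):

from ldap3 import Server, Connection, SUBTREE

server = Server("ldaps://ds.example.com:1636", use_ssl=True)
conn = Connection(server, "uid=admin", "password", auto_bind=True)

# All entries whose manager is uid=trigden, anywhere under the base DN.
conn.search(
    search_base="ou=people,dc=example,dc=com",
    search_filter="(manager:distinguishedNamePatternMatch:=uid=trigden,**)",
    search_scope=SUBTREE,
    attributes=["cn", "manager"],
)
for entry in conn.entries:
    print(entry.entry_dn)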

ForgeRock Directory Services 7.2 includes several minor improvements:

Monitoring has been improved to include metrics about index use in searches and about the processed entry size (the latter is also written in the access logs).
The output of the index troubleshooting attribute "DebugSearchIndex" has been revised to provide better details for the query plan.
Alert notifications are raised when backups are finished.
The REST2LDAP service provides several enhancements making several queries easier.

As with every release, there have been several performance optimizations and improvements, and many minor issues have been corrected.

You can find the full details of the changes in the Release Notes.

I hope you will enjoy this latest release of ForgeRock Directory Services. If not, don’t reach out to me, I’m no longer in charge.


Phil Windleys Technometria

The Most Inventive Thing I've Done

Summary: I was recently asked to respond in writing to the prompt "What is the most inventive or innovative thing you've done?" I decided to write about picos. In 2007, I co-founded a company called Kynetx and realized that the infrastructure necessary for building our product did not exist. To address that gap, I invented picos, an internet-first, persistent, actor-model programmin

Summary: I was recently asked to respond in writing to the prompt "What is the most inventive or innovative thing you've done?" I decided to write about picos.

In 2007, I co-founded a company called Kynetx and realized that the infrastructure necessary for building our product did not exist. To address that gap, I invented picos, an internet-first, persistent, actor-model programming system. Picos are the most inventive thing I've done. Being internet-first, every pico is serverless and cloud-native, presenting an API that can be fully customized by developers. Because they're persistent, picos support databaseless programming with intuitive data isolation. As an actor-model programming system, different picos can operate concurrently without the need for locks, making them a natural choice for easily building decentralized systems.

Picos can be arranged in networks supporting peer-to-peer communication and computation. A cooperating network of picos reacts to messages, changes state, and sends messages. Picos have an internal event bus for distributing those messages to rules installed in the pico. Rules in the pico are selected to run based on declarative event expressions. The pico matches events on its bus with event scenarios declared in each rule's event expression. The pico engine schedules any rule whose event expression matches the event for execution. Executing rules may raise additional events which are processed in the same way.
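Picos themselves are programmed in KRL on the pico engine; purely to illustrate the event-bus-plus-rules idea described above, here is a small Python sketch of my own (not KRL, and not the actual engine) in which rules are selected by matching an event's domain and type, and executing rules can raise further events.

from collections import deque

class Pico:
    def __init__(self):
        self.state = {}
        self.rules = []     # (event_domain, event_type, action) triples
        self.bus = deque()  # internal event bus

    def rule(self, domain, etype):
        def register(action):
            self.rules.append((domain, etype, action))
            return action
        return register

    def raise_event(self, domain, etype, **attrs):
        self.bus.append((domain, etype, attrs))

    def process(self):
        while self.bus:
            domain, etype, attrs = self.bus.popleft()
            for d, t, action in self.rules:
                if d == domain and t == etype:   # crude stand-in for an event expression
                    action(self, attrs)          # may change state or raise more events

vehicle = Pico()  # a digital twin for one vehicle, Fuse-style

@vehicle.rule("obd2", "trip_ended")
def record_trip(pico, attrs):
    pico.state.setdefault("trips", []).append(attrs["miles"])
    pico.raise_event("fleet", "trip_recorded", miles=attrs["miles"])

@vehicle.rule("fleet", "trip_recorded")
def update_total(pico, attrs):
    pico.state["total_miles"] = pico.state.get("total_miles", 0) + attrs["miles"]

vehicle.raise_event("obd2", "trip_ended", miles=12.4)
vehicle.process()
print(vehicle.state)  # {'trips': [12.4], 'total_miles': 12.4}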

As Kynetx reacted to market forces and trends, like the rise of mobile, the product line changed, and picos evolved and matured to match those changing needs, becoming a system that was capable of supporting complex Internet-of-Things (IoT) applications. For example, we ran a successful Kickstarter campaign in 2013 to build a connected car product called Fuse. Fuse used a cellular sensor connected to the vehicle's on-board diagnostics port (OBD2) to raise events from the car's internal bus to a pico that served as the vehicle's digital twin. Picos allowed Fuse to easily provide an autonomous processing agent for each vehicle and to organize those into fleets. Because picos support peer-to-peer architectures, putting a vehicle in more than one fleet or having a fleet with multiple owners was easy.

Fuse presented a conventional IoT user experience using a mobile app connected to a cloud service built using picos. But thanks to the inherently distributed nature of picos, Fuse offered owner choice and service substitutability. Owners could choose to move the picos representing their fleet to an alternate service provider, or even self-host if they desired without loss of functionality. Operationally, picos proved more than capable of providing responsive, scalable, and resilient service for Fuse customers without significant effort on my part. Fuse ultimately shut down because the operator of the network supplying the OBD2 devices went out of business. But while Fuse ran, picos provided Fuse customers with an efficient, capable, and resilient infrastructure for a valuable IoT service with unique characteristics.

The characteristics of picos make them a good choice for building distributed and decentralized applications that are responsive, resilient to failure, and respond well to uneven workloads. Asynchronous messaging and concurrent operation make picos a great fit for modern distributed applications. For example, picos can synchronously query other picos to get data snapshots, but this is not usually the most efficient interaction pattern. Instead, because picos support lock-free asynchronous concurrency, a system of picos can efficiently respond to events to accomplish a task using reactive programming patterns like scatter-gather.

The development of picos has continued, with the underlying pico engine having gone through three major versions. The current version is based on NodeJS and is open-source. The latest version was designed to operate on small platforms like a Raspberry PI as well as cloud platforms like Amazon's EC2. Over the years hundreds of developers have used picos for their programming projects. Recent applications include a proof-of-concept system supporting intention-based ecommerce by Customer Commons.

The architecture of picos was a good fit for Customer Commons' objective to build a system promoting user autonomy and choice because picos provide better control over apps and data. This is a natural result of the pico model where each pico represents a closure over services and data. Picos cleanly separate the data for different entities. Picos, representing a specific entity, and rulesets representing a specific business capability within the pico, provide fine grained control over data and its processing. For example, if you sell a car represented in Fuse, you can transfer the vehicle pico to the new owner, after deleting the Trips application, and its associated data, while leaving untouched the maintenance records, which are isolated inside the Maintenance application in the pico.

I didn't start out in 2007 to write a programming language that naturally supports decentralized programming using the actor-model while being cloud-native, serverless, and databaseless. Indeed, if I had, I likely wouldn't have succeeded. Instead picos evolved from a simple rule language for modifying web pages to a powerful, general-purpose programming system for building any decentralized application. Picos are easily the most important technology I've invented.

Tags: picos kynetx fuse

Monday, 11. July 2022

Damien Bod

Invite external users to Azure AD using Microsoft Graph and ASP.NET Core

This post shows how to invite new Azure AD external guest users and assign the users to Azure AD groups using an ASP.NET Core APP Connector to import or update existing users from an external IAM and synchronize the users in Azure AD. The authorization can be implemented using Azure AD groups and can be […]

This post shows how to invite new Azure AD external guest users and assign the users to Azure AD groups using an ASP.NET Core APP Connector to import or update existing users from an external IAM and synchronize the users in Azure AD. The authorization can be implemented using Azure AD groups and can be imported or used in the ASP.NET Core API.

Setup

The APP Connector or IAM connector is implemented using ASP.NET Core and Microsoft Graph. Two Azure App registrations are used: one for the external application and a second for the Microsoft Graph access. Both applications use an application client and can be run as background services, console applications or whatever. Only the APP Connector has access to the Microsoft Graph API, and the Graph application permissions are allowed only for this client. This way, the Microsoft Graph client can be controlled, as a lot of privileges are required to add, update and delete users or add and remove group assignments. We only allow the client explicit imports or updates for guest users. The APP Connector sends invites to the new external guest users and the users can then authenticate using an email code. The correct groups are then assigned to the user depending on the API payload. With this, it is possible to keep the external user accounting and manage the external identities in AAD without having to migrate the users. One unsolved problem with this solution is single sign-on (SSO). It would be possible to achieve this if all the external users came from the same domain and the external IAM system supported SAML. AAD does not support OpenID Connect for this.

Microsoft Graph client

A confidential client is used to get an application access token for the Microsoft Graph API calls. The .default scope is used to request the access token using the OAuth client credentials flow. The Azure SDK ClientSecretCredential is used to authorize the client.

public MsGraphService(IConfiguration configuration,
    IOptions<GroupsConfiguration> groups,
    ILogger<MsGraphService> logger)
{
    _groups = groups.Value;
    _logger = logger;

    string[]? scopes = configuration.GetValue<string>("AadGraph:Scopes")?.Split(' ');
    var tenantId = configuration.GetValue<string>("AadGraph:TenantId");

    // Values from app registration
    var clientId = configuration.GetValue<string>("AadGraph:ClientId");
    var clientSecret = configuration.GetValue<string>("AadGraph:ClientSecret");

    _federatedDomainDomain = configuration.GetValue<string>("FederatedDomain");

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantId, clientId, clientSecret, options);

    _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
}

The following permissions were added to the Azure App registration for the Graph requests. See the Microsoft Graph documentation to see what permissions are required for what API request. All permissions are application permissions as an application access token was requested. No user is involved.

Directory.Read.All
Directory.ReadWrite.All
Group.Read.All
Group.ReadWrite.All
// if role assignments are used in the Azure AD group
RoleManagement.ReadWrite.Directory
User.Read.All
User.ReadWrite.All

ASP.NET Core application client

The external identity system also uses the client credentials to access the APP Connector API, this time using an access token from the second Azure App registration. This is separated from the Azure App registration used for the Microsoft Graph requests. The scope is defined to use the “.default” value, which requires no consent.

// 1. Client credentials client
var app = ConfidentialClientApplicationBuilder
    .Create(configuration["AzureAd:ClientId"])
    .WithClientSecret(configuration["AzureAd:ClientSecret"])
    .WithAuthority(configuration["AzureAd:Authority"])
    .Build();

var scopes = new[] { configuration["AzureAd:Scope"] };

// 2. Get access token
var authResult = await app.AcquireTokenForClient(scopes)
    .ExecuteAsync();

Implement the user invite

I decided to invite the users from the external identity providers in the Azure AD as guest users. At present, the default authentication sends a code to the user email which can be used to sign-in. You could create new AAD users, or even a federated AAD user. Single sign on will only work for google, facebook or a SAML domain federation where users come from the same domain. I wish for an OpenID Connect external authentication button in my sign-in UI where I can decide which users and from what domain authenticate in my AAD. This is where AAD is really lagging behind other identity providers.

/// <summary>
/// Graph invitations only works for Azure AD, not Azure B2C
/// </summary>
public async Task<Invitation?> InviteUser(UserModel userModel, string redirectUrl)
{
    var invitation = new Invitation
    {
        InvitedUserEmailAddress = userModel.Email,
        InvitedUser = new User
        {
            GivenName = userModel.FirstName,
            Surname = userModel.LastName,
            DisplayName = $"{userModel.FirstName} {userModel.LastName}",
            Mail = userModel.Email,
            UserType = "Guest", // Member
            OtherMails = new List<string> { userModel.Email },
            Identities = new List<ObjectIdentity>
            {
                new ObjectIdentity
                {
                    SignInType = "federated",
                    Issuer = _federatedDomainDomain,
                    IssuerAssignedId = userModel.Email
                },
            },
            PasswordPolicies = "DisablePasswordExpiration"
        },
        SendInvitationMessage = true,
        InviteRedirectUrl = redirectUrl,
        InvitedUserType = "guest" // default is guest,member
    };

    var invite = await _graphServiceClient.Invitations
        .Request()
        .AddAsync(invitation);

    return invite;
}

Adding, Removing AAD users and groups

Once the users exist in the AAD tenant, you can assign the users to groups, remove assignments, remove users or update users. If a user is disabled in the external IAM system, you cannot disable the user in the AAD with an application permission, you can only delete the user. You can assign security groups or M365 groups to the AAD guest user. With this, the AAD IT admin can manage guest users and assign the group of guests to any AAD application.

public async Task AddRemoveGroupMembership(string userId, List<string>? accessRolesPermissions,
    List<string> currentGroupIds, string groudId, string groupType)
{
    if (accessRolesPermissions != null && accessRolesPermissions.Any(g => g.Contains(groupType)))
    {
        await AddGroupMembership(userId, groudId, currentGroupIds);
    }
    else
    {
        await RemoveGroupMembership(userId, groudId, currentGroupIds);
    }
}

private async Task AddGroupMembership(string userId, string groupId, List<string> currentGroupIds)
{
    if (!currentGroupIds.Contains(groupId))
    {
        // add group
        await AddUserToGroup(userId, groupId);
        currentGroupIds.Add(groupId);
    }
}

private async Task RemoveGroupMembership(string userId, string groupId, List<string> currentGroupIds)
{
    if (currentGroupIds.Contains(groupId))
    {
        // remove group
        await RemoveUserFromGroup(userId, groupId);
        currentGroupIds.Remove(groupId);
    }
}

public async Task<User?> UserExistsAsync(string email)
{
    var users = await _graphServiceClient.Users
        .Request()
        .Filter($"mail eq '{email}'")
        .GetAsync();

    if (users.CurrentPage.Count == 0)
        return null;

    return users.CurrentPage[0];
}

public async Task DeleteUserAsync(string userId)
{
    await _graphServiceClient.Users[userId]
        .Request()
        .DeleteAsync();
}

public async Task<User> UpdateUserAsync(User user)
{
    return await _graphServiceClient.Users[user.Id]
        .Request()
        .UpdateAsync(user);
}

public async Task<User> GetGraphUser(string userId)
{
    return await _graphServiceClient.Users[userId]
        .Request()
        .GetAsync();
}

public async Task<IDirectoryObjectGetMemberGroupsCollectionPage> GetGraphUserMemberGroups(string userId)
{
    var securityEnabledOnly = false;

    return await _graphServiceClient.Users[userId]
        .GetMemberGroups(securityEnabledOnly)
        .Request()
        .PostAsync();
}

private async Task RemoveUserFromGroup(string userId, string groupId)
{
    try
    {
        await _graphServiceClient.Groups[groupId]
            .Members[userId]
            .Reference
            .Request()
            .DeleteAsync();
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "{Error} RemoveUserFromGroup", ex.Message);
    }
}

private async Task AddUserToGroup(string userId, string groupId)
{
    try
    {
        var directoryObject = new DirectoryObject
        {
            Id = userId
        };

        await _graphServiceClient.Groups[groupId]
            .Members
            .References
            .Request()
            .AddAsync(directoryObject);
    }
    catch (Exception ex)
    {
        _logger.LogError(ex, "{Error} AddUserToGroup", ex.Message);
    }
}

Create a new guest user with group assignments

I created a service which creates a user and assigns the defined groups to the user using the Graph services defined above. You cannot select newly created users or groups for some seconds after creating them. It is important to use the result of the create requests; otherwise you will have to implement the follow-up tasks in a worker process or poll the Graph API until a get returns the updated user or group.

public async Task<(UserModel? UserModel, string Error)> CreateUserAsync(UserModel userModel)
{
    var emailValid = _msGraphService.IsEmailValid(userModel.Email);
    if (!emailValid)
    {
        return (null, "Email is not valid");
    }

    var user = await _msGraphService.UserExistsAsync(userModel.Email);
    if (user != null)
    {
        return (null, "User with this email already exists in AAD tenant");
    }

    var result = await _msGraphService.InviteUser(userModel,
        _configuration["InviteUserRedirctUrl"]);

    if (result != null)
    {
        await AssignmentGroupsAsync(result.InvitedUser.Id,
            userModel.AccessRolesPermissions,
            new List<string>());
    }

    return (userModel, string.Empty);
}

The UpdateAssignmentGroupsAsync and AssignmentGroupsAsync methods map the API definition to the configured Azure AD groups and remove or add the groups as defined.

private async Task UpdateAssignmentGroupsAsync(string userId, List<string>? accessRolesPermissions)
{
    var currentGroupIds = await _msGraphService.GetGraphUserMemberGroups(userId);
    var currentGroupIdsList = currentGroupIds.ToList();
    await AssignmentGroupsAsync(userId, accessRolesPermissions, currentGroupIdsList);
}

private async Task AssignmentGroupsAsync(string userId, List<string>? accessRolesPermissions,
    List<string> currentGroupIds)
{
    await _msGraphService.AddRemoveGroupMembership(userId, accessRolesPermissions,
        currentGroupIds, _groups.UserWorkspace, Consts.USER_WORKSPACE);
    await _msGraphService.AddRemoveGroupMembership(userId, accessRolesPermissions,
        currentGroupIds, _groups.AdminWorkshop, Consts.ADMIN_WORKSPACE);
}

The service method can then be made public in a Web API which requires the AAD application access token. This access token will only work for the API. The Graph API access token, which has a lot of permissions, is never made public.

[HttpPost("Create")] [ProducesResponseType(StatusCodes.Status201Created, Type = typeof(UserModel))] [ProducesResponseType(StatusCodes.Status400BadRequest)] [SwaggerOperation(OperationId = "Create-AAD-guest-Post", Summary = "Creates an Azure AD guest user with assigned groups")] public async Task<ActionResult<UserModel>> CreateUserAsync( [FromBody] UserModel userModel) { var result = await _userGroupManagememtService .CreateUserAsync(userModel); if (result.UserModel == null) return BadRequest(result.Error); return Created(nameof(UserModel), result.UserModel); }

Update or delete a guest User with group assignments

The UpdateDeleteUserAsync method deletes the AAD user if the user is not active in the external identity system. If the user is still active, the AAD user gets updated. This will not take effect until the next authentication of the user, or you could implement a policy to force a re-authentication. This depends on the use case; it is not a great experience if the user is forced to re-authenticate during a session, unless of course permissions were removed. The user gets assigned to or removed from groups depending on the external authentication and authorization definitions.

public async Task<(CreateUpdateResult? Result, string Error)> UpdateDeleteUserAsync(UserUpdateModel userModel)
{
    var emailValid = _msGraphService.IsEmailValid(userModel.Email);
    if (!emailValid)
    {
        return (null, "Email is not valid");
    }

    var user = await _msGraphService.UserExistsAsync(userModel.Email);
    if (user == null)
    {
        return (null, "User with this email does not exist");
    }

    if (userModel.IsActive)
    {
        user.GivenName = userModel.FirstName;
        user.Surname = userModel.LastName;
        user.DisplayName = $"{userModel.FirstName} {userModel.LastName}";

        await _msGraphService.UpdateUserAsync(user);
        await UpdateAssignmentGroupsAsync(user.Id, userModel.AccessRolesPermissions);

        return (new CreateUpdateResult
        {
            Succeeded = true,
            Reason = $"{userModel.Email} {userModel.Username} updated"
        }, string.Empty);
    }
    else // not active, remove
    {
        await UpdateAssignmentGroupsAsync(user.Id, null);
        await _msGraphService.DeleteUserAsync(user.Id);

        return (new CreateUpdateResult
        {
            Succeeded = true,
            Reason = $"{userModel.Email} {userModel.Username} removed"
        }, string.Empty);
    }
}

The service implementation method can be made public in a secure Web API. This is not a simple update but an API operation which updates or deletes a user and also assigns or removes groups for that user. I used an HTTP POST for this.

[HttpPost("UpdateUser")] [ProducesResponseType(StatusCodes.Status200OK, Type = typeof(CreateUpdateResult))] [ProducesResponseType(StatusCodes.Status400BadRequest)] [SwaggerOperation(OperationId = "Update-AAD-guest-Post", Summary = "Updates or deletes an Azure AD guest user and assigned groups")] public async Task<ActionResult<CreateUpdateResult>> UpdateUserAsync([FromBody] UserUpdateModel userModel) { var update = await _userGroupManagememtService .UpdateUserAsync(userModel); if (update.Result == null) return BadRequest(update.Error); return Ok(update.Result); }

Testing the API using a Console application

Any trusted application can be used to implement the client. The client application must be a trusted application because a secret is required to access the web API. If you use a non-trusted client, then a UI authentication user flow with delegated permissions must be used. The Graph API access is not made public to this client either way.

I implemented a test client in .NET Core. Any API call could look something like this:

static async Task<HttpResponseMessage> CreateUser(IConfigurationRoot configuration, AuthenticationResult authResult)
{
    var client = new HttpClient
    {
        BaseAddress = new Uri(configuration["AzureAd:ApiBaseAddress"])
    };

    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
    client.DefaultRequestHeaders.Accept
        .Add(new MediaTypeWithQualityHeaderValue("application/json"));

    var response = await client.PostAsJsonAsync("AadUsers/Create", new UserModel
    {
        Username = "paddy@test.com",
        Email = "paddy@test.com",
        FirstName = "Paddy",
        LastName = "Murphy",
        AccessRolesPermissions = new List<string> { "UserWorkspace" }
    });

    return response;
}

Notes

One problem with this system is that the user does not get single sign-on. Azure AD does not support this for multiple domains. It is a real pity that you cannot define an external identity provider in Azure AD which is then displayed in the Azure AD sign-in UI. To make single sign-on with federation work in Azure AD, you must use Azure AD as the main accounting database. If all your external users have the same domain, then you could set up a SAML federation for this domain. If the users from the external system have different domains, Azure AD does not support this. This is a big problem if you cannot migrate the existing identity providers and their accounting to AAD and you require applications which require AAD authentication.

Links

https://docs.microsoft.com/en-us/azure/active-directory/external-identities/what-is-b2b

https://docs.microsoft.com/en-us/azure/active-directory/external-identities/redemption-experience


Werdmüller on Medium

My indieweb real estate website

How I rolled my own website to sell my home. Continue reading on Medium »

How I rolled my own website to sell my home.

Continue reading on Medium »

Thursday, 07. July 2022

Pulasthi Mahawithana

10 Ways to Customize Your App’s Login Experience with WSO2 — Part 1

10 Ways to Customize Your App’s Login Experience with WSO2 — Part 1 In this series I’ll go through 10 different ways you can customize your application authentication experience with WSO2 Identity Server’s adaptive authentication feature. To give some background, WSO2 Identity Server(IS) is an open-source Identity and Access Management(IAM) product. One of its main use is to be used as an i
10 Ways to Customize Your App’s Login Experience with WSO2 — Part 1

In this series I’ll go through 10 different ways you can customize your application authentication experience with WSO2 Identity Server’s adaptive authentication feature.

To give some background, WSO2 Identity Server (IS) is an open-source Identity and Access Management (IAM) product. One of its main uses is as an identity provider for your applications. It can support multi-factor authentication, social login, and single sign-on based on several widely adopted protocols like OAuth/OIDC, SAML, WS-Federation, etc.

Adaptive authentication is a feature where you can move away from static authentication methods and support dynamic authentication flows. For example, without adaptive authentication, you can configure an application to authenticate with username and password as the first step and with either SMS OTP or TOTP as the second option, where all users will need to use those authentication methods no matter who they are or what they are going to do with the application. With adaptive authentication, you can make this dynamic to offer better experience and/or security. In the above example we may use adaptive authentication to make the second factor required only when the user is trying to log in to the application from a new device which they haven't used before. That way the user gets a better user experience, while keeping the required security.
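WSO2 IS adaptive authentication scripts are written in JavaScript against the product's own functions; the sketch below is only the decision logic from that example, in Python with stubbed-out helpers, to make the step-up idea concrete.

KNOWN_DEVICES = {"alice": {"laptop-1"}}  # devices each user has signed in from before

def first_factor_ok(username, password):
    return password == "correct horse"  # stand-in for a real username/password check

def second_factor_ok(username):
    return True  # stand-in for SMS OTP / TOTP verification

def authenticate(username, password, device_id):
    if not first_factor_ok(username, password):
        return False
    # Step up only when this device is new for the user.
    if device_id not in KNOWN_DEVICES.get(username, set()):
        if not second_factor_ok(username):
            return False
        KNOWN_DEVICES.setdefault(username, set()).add(device_id)
    return True

print(authenticate("alice", "correct horse", "laptop-1"))  # known device: no second factor
print(authenticate("alice", "correct horse", "phone-9"))   # new device: second factor required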

Traditional Vs Adaptive Authentication

With adaptive authentication, the login experience can be customized to almost anything that will give the best user experience to the user. Following are 10 high-level use cases you can achieve with WSO2 IS’s adaptive authentication.

1. Conditionally Stepping up the Authentication — Instead of statically having a pre-defined set of authentication methods, we can step the authentication up or down based on several factors. A few such factors include roles/attributes of the user, the device, the user’s activity, and the user store (in case of multiple user stores).
2. Conditional Authorization — Similar to stepping up or down the authentication, we can authorize or deny the login to the application based on similar factors.
3. Dynamic Account Linking — A physical user may have multiple identities provided by multiple external providers (e.g. Google, Facebook, Twitter). With adaptive authentication, you can verify and link those at authentication time.
4. User Attribute Enrichment — During a login flow, the user attributes may be provided from multiple sources, in different formats. However, the application may require those attributes in a different way, due to which they can’t be used straight away. Adaptive authentication can be used to enrich such attributes as needed.
5. Improve Login Experience — Depending on different factors (as mentioned in the first point), the login experience can be customized to look different, or to avoid any invalid authentication methods being offered to the user.
6. Sending Notifications — Can be used to trigger different events and send email notifications during the authentication flow in case of unusual or unexpected behaviour.
7. Enhance Security — Enforce security policies and the level of assurance required by the application or by the organization.
8. Limit/Manage Concurrent Sessions — Limit the number of sessions a user may have for the application concurrently, based on security requirements or business requirements (like subscription tiers).
9. Auditing/Analytics — Publish useful stats to the analytics servers or gather data for auditing purposes.
10. Bring Your Own Functionality — In a business there are so many variables based on the domain, country/region, security standards, competitors etc. All of these can’t be generalized, and hence there will be certain things which you will specifically require. Adaptive authentication provides a lot of flexibility to define your own functionality, which you can use to make your application’s authentication experience user-friendly, secure and unique.

In the next posts, I’ll go through each of the above with example scenarios and how to achieve them with WSO2 IS.


MyDigitalFootprint

Mind the Gap - between short and long term strategy

Mind the Gap This article addresses a question that ESG commentators struggle with: “Is ESG a model, a science, a framework, or a reporting tool?    Co-authored @yaelrozencwajg  and @tonyfish An analogy. Our universe is governed by two fundamental models, small and big. The gap between Quantum Physics (small) and The Theory of Relativity (big) is similar to the issues betwee

Mind the Gap

This article addresses a question that ESG commentators struggle with: “Is ESG a model, a science, a framework, or a reporting tool?” Co-authored @yaelrozencwajg and @tonyfish




An analogy. Our universe is governed by two fundamental models, small and big. The gap between Quantum Physics (small) and The Theory of Relativity (big) is similar to the issues between how we frame and deliver short- and long-term business planning. We can model and master the small (short) and the big (long), but there is a chasm between them, which means we fail to understand why the modelling and outcomes of one theory don’t enlighten us about the other. The mismatch or gaps between our models create uncertainty and ambiguity, leading to general confusion and questionable incentives.

In physics, quantum mechanics is about understanding the small nuclear forces. However, based on our understanding of the interactions and balances between fundamental elements that express small nuclear forces, we cannot predict the movement of planets in the solar system. Vice versa, our model of gravity allows us to understand and predict motion in space and time, enabling us to model and know the exact position of Voyager 1 since 1977, yet it does not help in any way to understand fundamental particle interactions. There remains a gap between the two models. What is marketed as “The Theory of Everything” is a hypothetical, singular, all-encompassing, coherent theoretical framework of physics that thoroughly explains and links together all physical aspects of the universe - it closes the gaps, as we want it all to be explainable.

In business, we worked out that based on experience, probability, and confidence, using the past makes a reasonable predictive model in the short term (say the next three years), especially if the assumptions are based on a stable system (maintaining one sigma variance). If a change occurs, we will see it as a delta between the plan and reality as the future does not play out as the short-term model predicted. 

We have improved our capabilities in predicting the future by developing frameworks, scenario planning and game theory. We can reduce risks and model scenarios by understanding the current context. The higher the level of detail and understanding we have about the present, the better we are able to model the next short period of time. However, whilst we have learnt that our short-term models can be representative and provide a sound basis, there is always a delta to understand and manage. No matter how big and complex our model is, it doesn't fare well with a longer time horizon, as short-term models are not helpful for long-term strategic planning.

Long-term planning is not based on a model but instead on a narrative about how the active players' agency, influence and power will change. We are better able to think about global power shifts in the next 50 to 100 years than we are to perceive what anything will look like in 10 years. We are bounded by Gates' Law: “Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

For every view of the long-term future, there is a supporting opinion.  There is an alternative view of the future for every opinion, and neither follows the short-term model trajectory.

There is a gap between the two models (small and big, short and long). The gap is a fog-covered chasm that we have so far failed to work out how to cross using models, theories or concepts from either the short- or long-term position. This zone, where the fog-filled chasm sits, demands critical thinking, judgement and leadership. The most critical aspects of modern humanity's ability to thrive sit in this zone: climate, eco-systems, geopolitics, global supply chains, digital identity, privacy, poverty, energy and water.


ESG has become the latest victim stuck in the foggy chasm.

ESG will be lost if we cannot agree now to position it as both a short-term model and a long-term value framework. ESG has to have an equal foot in each camp and requires a simple linear narrative which connects the two, avoiding getting lost in the foggy chasm between them that sucks the life out of progress.

ESG as a short-term model must have data with a level of accuracy for reporting transparently.  However, no matter how good this data is or the reporting frameworks, it will not predict or build a sustainable future. 

ESG demands a long-term framework for a sustainable future, but we need globally agreed policies, perhaps starting from many of the UN SDG ideals. How can we realistically create standards, policies and regulations when we cannot agree on what we want the future to look like because of our geographical biases? We know that a long-term vision does not easily translate into practical short-term actions and will not help deliver immediate impact, but without a purpose, a north star and governance, we are likely to get more of the same.

If the entire ESG eco-system had only focussed on one (short or long), it would have alienated the other, but right now, I fear that ESG has ended up being unable to talk about or deliver either.  The media has firmly put ESG in the foggy gap because it is the best for its advertising-driven model. As a community, we appear unable to use our language or models to show how to criss-cross the chasm. Indeed our best ideas and technologies are being used to create division and separation. For example, climate technologies such as “carbon capture and storage models” had long-term thinking in the justification. Still, it has become a short-term profit centre and tax escape for the oil and gas extraction industries. "The Great Carbon Capture Scam" by Greenpeace does a deep dive on the topic. 

As humans, we desperately need ESG to deliver a long-term sustainable future, but this is easy to ignore, as anyone and everyone can have an opinion. If ESG becomes only a short-term deliverable and reporting tool, it is likely to fail, as the data quality is poor and there is a lack of transparency. Whilst the level of narrative interruption is one that marketing demands, we will likely destroy our habitat before we acknowledge it and end up in the next global threat.

Repeating the opening question: “Is ESG a model, a science, a framework, or a reporting tool?” On reflection, it is a fair question, as ESG has to be all of them. However, we appear to lack the ability to provide clarity on each element or a holistic vision for unity. In ESG science alone, there is science that can be used to defend any climate point of view you want. Therefore, maybe, a better question is, “Does ESG have a Social Identity Crisis?” If so, what can we do to solve it?

Since: 

there is no transparency about why a party supports a specific outcome, deliverable, standard or position;

the intrinsic value of ESG is unique by context and even to the level of a distinct part of an organisation;

we cannot agree if ESG investors are legit;

we cannot agree on standards or timeframes;

practitioners do not declare how or by whom they are paid or incentivised;

And bottom line, we have not agreed on what we are optimising for!

Whilst we know that we cannot please everyone all of the time, how would you approach this thorny debate as a thought leader? 




Wednesday, 06. July 2022

Phil Windleys Technometria

Using a Theory of Justice to Build a Better Web3

Summary: Building a better internet won't happen by chance or simply maximizing freedom. We have to build systems that support justice. How can we do that?

Philosophy discussions are the black hole of identity. Once you get in, you can't get out. Nevertheless, I find that I'm drawn to them. I'm a big proponent of self-sovereign identity (SSI) precisely because I believe that autonomy and agency are a vital part of building a new web that works for everyone. Consequently, I read Web3 Is Our Chance to Make a Better Internet with interest because it applied John Rawls's thought experiment known as the "veil of ignorance1," from his influential 1971 work A Theory of Justice to propose three things we can do in Web3 to build a more fair internet:

Promote self-determination and agency
Reward participation, not just capital
Incorporate initiatives that benefit the disadvantaged

Let's consider each of these in turn.

Promoting Self-Determination and Agency

As I wrote in Web3: Self-Sovereign Authority and Self-Certifying Protocols,

Web3, self-sovereign authority enabled by self-certifying protocols, gives us a mechanism for creating a digital existence that respects human dignity and autonomy. We can live lives as digitally embodied beings able to operationalize our digital relationships in ways that provide rich, meaningful interactions. Self-sovereign identity (SSI) and self-certifying protocols provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. When we dine at a restaurant or shop at a store in the physical world, we do not do so within some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. Web3, built in this way, allows people to act as full-fledged participants in the digital realm.

There are, of course, ways to screw this up. Notably, many Web3 proponents don't really get identity and propose solutions to identity problems that are downright dangerous and antithetical to their aim of self-determination and agency. Writing about Central Bank Digital Currencies (CBDCs), Dave Birch said this:

The connection between digital identity and digital currency is critical. We must get the identity side of the equation right before we continue with the money side of the equation. As I told the Lords' committee at the very beginning of my evidence, "I am a very strong supporter of retail digital currency, but I am acutely aware of the potential for a colossal privacy catastrophe". From Identity And The New Money
Referenced 2022-05-18T16:14:50-0600

Now, whether you see a role for CBDCs in Web3 or see them as the last ditch effort of the old guard to preserve their relevance, Dave's points about identity are still true regardless of what currency systems you support. We don't necessarily want identity in Web3 for anti-money laundering and other fraud protection mechanisms (although those might be welcomed in a Web3 world that isn't a hellhole), but because identity is the basis for agency. And if we do it wrong, we destroy the very thing we're trying to promote. Someone recently said (I wish I had a reference) that using your Ethereum address for your online identity is like introducing yourself at a party using your bank balance. A bit awkward at least.

Rewarding Participation

If you look at the poster children of Web3, cryptocurrencies and NFTs, the record is spotty for how well these systems reward participation rather than rewarding early investors. But that doesn't have to be the case. In Why Build in Web3, Jad Esber and Scott Duke Kominers describe the "Adam Bomb" NFT:

For example, The Hundreds, a popular streetwear brand, recently sold NFTs themed around their mascot, the "Adam Bomb." Holding one of these NFTs gives access to community events and exclusive merchandise, providing a way for the brand's fans to meet and engage with each other — and thus reinforcing their enthusiasm. The Hundreds also spontaneously announced that it would pay royalties (in store credit) to owners of the NFTs associated to Adam Bombs that were used in some of its clothing collections. This made it roughly as if you could have part ownership in the Ralph Lauren emblem, and every new line of polos that used that emblem would give you a dividend. Partially decentralizing the brand's value in this way led The Hundreds's community to feel even more attached to the IP and to go out of their way to promote it — to the point that some community members even got Adam Bomb tattoos. From Why Build in Web3
Referenced 2022-05-17T14:42:53-0600

NFTs are a good match for this use case because they represent ownership and are transferable. The Hundreds doesn't likely care if someone other than the original purchaser of an Adam Bomb NFT uses it to get a discount so long as they can authenticate it. Esber and Kominers go on to say:

Sharing ownership allows for more incentive alignment between products and their derivatives, creating incentives for everyone to become a builder and contributor.

NFTs aren't the only way to reward participation. Another example is the Helium Network. Helium is a network of more than 700,000 LoRaWAN hotspots around the world. Operators of the hotspots, like me, are rewarded in HNT tokens for providing the hotspot and network backhaul using a method called "proof of coverage" that ensures the hotspot is active in a specific geographic area. The reason the network is so large is precisely because Helium uses its cryptocurrency to reward participants for the activities that grow the network and keep it functioning.

Building web3 ecosystems that reward participation is in stark contrast to Web 2.0 platforms that treat their participants as mere customers (at best) or profit from surveillance capitalism (at worst).

Incorporating Initiatives that Benefit the Disadvantaged

The HBR article acknowledges that this is the hardest one to enable using technology. That's because this is often a function of governance. One of the things we tried to do at Sovrin Foundation was live true to the tagline: Identity for All. We spent a lot of time on governance for just this reason: many of the participants in the Foundation worked on initiatives like financial inclusion and guardianship to ensure the systems we were building and promoting worked for everyone. These efforts cost us the support of some of our more "business-oriented" partners and stewards who just wanted to get to the business of quickly building a credential system that worked for their needs. But we let them walk away rather than cutting back on governance efforts in support of identity for all.

The important parts of Web3 aren't as sexy as ICOs and bored apes, but they are what will ensure we build something that supports a digital life worth living. Web 2.0 didn't do so well in the justice department. I believe Web3 is our chance to build a better internet, but only if we promote self-determination, reward participation, and build incentives that benefit the disadvantaged as well as those better off.

Notes The "veil of ignorance" asks a system designer to consider what system they would design if they were in a disadvantaged situation, rather than their current situation. For example, if you're designing a cryptocurrency, assume you're one of the people late to the game. What design decisions would make the system fair for you in that situation?

Photo Credit: Artists-impressions-of-Lady-Justice from Lonpicman (CC BY-SA 3.0)

Tags: web3 freedom agency ssi

Tuesday, 05. July 2022

Phil Windleys Technometria

Decentralized Systems Don't Care

Summary: I like to remind my students that decentralized systems don't care what they (or anyone else thinks). The paradox is that they care very much what everyone thinks. We call that coherence and it's what makes decentralized systems maddeningly frustrating to understand, architect, and maintain.

I love getting Azeem Azhar's Exponential View each week. There are always a few things that catch my eye. Recently, he linked to a working paper from Alberto F. Alesina, et al. called Persistence Through Revolutions (PDF). The paper looks at the fate of the children and grandchildren of the landed elite who were systematically persecuted during the cultural revolution (1966 to 1976) in an effort to eradicate wealth and educational inequality. The paper found that the grandchildren of these elite have recovered around two-thirds of the pre-cultural revolution status that their grandparents had. From the paper:

[T]hree decades after the introduction of economic reforms in the 1980s, the descendants of the former elite earn a 16–17% higher annual income than those of the former non-elite, such as poor peasants. Individuals whose grandparents belonged to the pre-revolution elite systematically bounced back, despite the cards being stacked against them and their parents. They could not inherit land and other assets from their grandparents, their parents could not attend secondary school or university due to the Cultural Revolution, their parents were unwilling to express previously stigmatized pro-market attitudes in surveys, and they reside in counties that have become more equal and more hostile toward inequality today. One channel we emphasize is the transmission of values across generations. The grandchildren of former landlords are more likely to express pro-market and individualistic values, such as approving of competition as an economic driving force, and willing to exert more effort at work and investing in higher education. In fact, the vertical transmission of values and attitudes — "informal human capital" — is extremely resilient: even stigmatizing public expression of values may not be sufficient, since the transmission in the private environment could occur regardless. From Persistence Through Revolutions
Referenced 2022-06-27T11:13:05-0600

There are certainly plenty of interesting societal implications to these findings, but I love what it tells us about the interplay between institutions, even very powerful ones, and more decentralized systems like networks and tribes1. The families are functioning as tribes, but there's also a larger social network in play, made from connections, relatives, and friends. The decentralized social structure of tribes and networks proved resilient even in the face of some of the most coercive and overbearing actions that a seemingly all-powerful state could take.

In a more IT-related story, I also recently read this article, Despite ban, Bitcoin mining continues in China. The article stated:

Last September, China seemed to finally be serious about banning cryptocurrencies, leading miners to flee the country for Kazakhstan. Just eight months later, though, things might be changing again.

Research from the University of Cambridge's Judge Business School shows that China is second only to the U.S. in Bitcoin mining. In December 2021, the most recent figures available, China was responsible for 21% of the Bitcoin mined globally (compared to just under 38% in the U.S.). Kazakhstan came in third.

From Despite ban, Bitcoin mining continues in China
Referenced 2022-06-27T11:32:29-0600

When China instituted the crackdown, some of my Twitter friends, who are less than enthusiastic about crypto, reacted with glee, believing this would really hurt Bitcoin. My reaction was "Bitcoin doesn't care what you think. Bitcoin doesn't care if you hate it."

What matters is not what actions institutions take against Bitcoin2 (or any other decentralized system), but whether or not Bitcoin can maintain coherence in the face of these actions. Social systems that are enduring, scalable, and generative require coherence among participants. Coherence allows us to manage complexity. Coherence is necessary for any group of people to cooperate. The coherence necessary to create the internet came in part from standards, but more from the actions of people who created organizations, established those standards, ran services, and set up exchange points.

Bitcoin's coherence stems from several things including belief in the need for a currency not under institutional control, monetary rewards from mining, investment, and use cases. The resilience of Chinese miners, for example, likely rests mostly on the monetary reward. The sheer number of people involved in Bitcoin gives it staying power. They aren't organized by an institution; they're organized around the ledger and how it operates. Bitcoin core developers, mining consortiums, and BTC holders are powerful forces that balance the governance of the network. The soft and hard forks that have happened over the years represent an inefficient but effective governance reflecting the core beliefs of these powerful groups.

So, what should we make of the recent crypto sell-off? I think price is a reasonable proxy for the coherence of participants in the social system that Bitcoin represents. As I said, people buy, hold, use, and sell Bitcoin for many different reasons. Price lets us condense all those reasons down to just one number. I've long maintained that stable decentralized systems need a way to transfer value from the edge to the center. For the internet, that system was telcos. For Bitcoin, it's the coin itself. The economic strength of a decentralized system (whether the internet or Bitcoin) is a good measure of how well it's faring.

Comparing Bitcoin's current situation to Ethereum's is instructive. If you look around, it's hard to find concrete reasons for Bitcoin's price doldrums other than the general miasma that is affecting all assets (especially risk assets) because of fears about recession and inflation. Ethereum is different. Certainly, there's a set of investors who are selling for the same reasons they're selling BTC. But Ethereum is also undergoing a dramatic transition, called "the merge", that will move the underlying ledger from proof-of-work to proof-of-stake. These kinds of large scale transitions have a big impact on a decentralized system's coherence since there will inevitably be people very excited about it and some who are opposed—winners and losers, if you will.

Is the design of Bitcoin sufficient for it to survive in the long term? I don't know. Stable decentralized systems are hard to get right. I think we got lucky with the internet. And even the internet is showing weakness against the long-term efforts of institutional forces to shape it in their image. Like the difficulty of killing off decentralized social and cultural traditions and systems, decentralized technology systems can withstand a lot of abuse and still function. Bitcoin, Ethereum, and a few other blockchains have proven that they can last for more than a decade despite challenges, changing expectations, and dramatic architectural transitions. I love the experimentation in decentralized system design that they represent. These systems won't die because you (or various governments) don't like them. The paradox is that they don't care what you think, even as they depend heavily on what everyone thinks.

Notes
1. To explore this categorization further, see this John Robb commentary on David Ronfeldt's Rand Corporation paper "Tribes, Institutions, Markets, Networks" (PDF).
2. For simplicity, I'm just going to talk about Bitcoin, but my comments largely apply to any decentralized system.

Photo Credit: Ballet scene at the Great Hall of the People attended by President and Mrs. Nixon during their trip to Peking from Byron E. Schumaker (Public Domain)

Tags: decentralization legitimacy coherence

Monday, 04. July 2022

Damien Bod

Add Fido2 MFA to an OpenIddict identity provider using ASP.NET Core Identity

This article shows how to add Fido2 multi-factor authentication to an OpenID Connect identity provider using OpenIddict and ASP.NET Core Identity. OpenIddict implements the OpenID Connect standards and ASP.NET Core Identity is used for the user accounting and persistence of the identities.

Code: https://github.com/damienbod/AspNetCoreOpeniddict

I began by creating an OpenIddict web application using ASP.NET Core Identity. See the OpenIddict samples for getting started.

I use the fido2-net-lib Fido2 Nuget package, which can be used to add support for Fido2 in .NET Core applications. You can add this to the web application used for the identity provider.

<PackageReference Include="Fido2" Version="3.0.0-beta6" />

Once added, you need to add the API controllers for the webAuthn API calls and the persistence classes using the Fido2 Nuget package. I created a set of classes which you can copy into your project. You need to replace the ApplicationUser class with IdentityUser if you are not extending ASP.NET Core Identity. I use the ApplicationUser class in this example.

https://github.com/damienbod/AspNetCoreOpeniddict/tree/main/OpeniddictServer/Fido2
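
For reference, here is a minimal sketch of what the ApplicationUser class is assumed to look like; it simply extends IdentityUser, so if you are not extending ASP.NET Core Identity you can use IdentityUser directly instead.

using Microsoft.AspNetCore.Identity;

namespace OpeniddictServer.Data;

// Minimal sketch (an assumption, not necessarily the repository's exact class):
// the Fido2 flow only needs a user type; add custom profile properties here if required.
public class ApplicationUser : IdentityUser
{
}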

In the ApplicationDbContext, a DbSet for the FidoStoredCredential entity is added to persist the Fido2 data.

using Fido2Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

namespace OpeniddictServer.Data;

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options)
    {
    }

    public DbSet<FidoStoredCredential> FidoStoredCredential => Set<FidoStoredCredential>();

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<FidoStoredCredential>().HasKey(m => m.Id);
        base.OnModelCreating(builder);
    }
}

The Fido2 Identity services are added to the application. I use SQL Server to persist the data. The ApplicationUser is used for the ASP.NET Core Identity services. The Fido2UserTwoFactorTokenProvider class is used to add Fido2 as a new MFA provider to ASP.NET Core Identity. A session is used to store the webAuthn requests, and the Fido2 store is added for persistence.

services.AddDbContext<ApplicationDbContext>(options =>
{
    // Configure the context to use Microsoft SQL Server.
    options.UseSqlServer(Configuration
        .GetConnectionString("DefaultConnection"));

    // Register the entity sets needed by OpenIddict.
    // Note: use the generic overload if you need
    // to replace the default OpenIddict entities.
    options.UseOpenIddict();
});

services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddDefaultUI()
    .AddTokenProvider<Fido2UserTwoFactorTokenProvider>("FIDO2");

services.Configure<Fido2Configuration>(
    Configuration.GetSection("fido2"));

services.AddScoped<Fido2Store>();

services.AddDistributedMemoryCache();

services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromMinutes(2);
    options.Cookie.HttpOnly = true;
    options.Cookie.SameSite = SameSiteMode.None;
    options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
});
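
The "fido2" section bound above supplies the relying-party settings. As an illustration only, the same values could be configured in code roughly as follows; the property names come from the fido2-net-lib Fido2Configuration class and may differ between package versions, and the values are placeholders.

// Illustrative sketch: equivalent in spirit to binding the "fido2" configuration section.
// Property names follow fido2-net-lib's Fido2Configuration and may vary by package version.
services.Configure<Fido2Configuration>(options =>
{
    options.ServerDomain = "localhost";           // relying-party ID: host name, no scheme or port
    options.ServerName = "OpeniddictServer";      // human-readable relying-party name
    options.TimestampDriftTolerance = 300000;     // allowed clock drift in milliseconds
});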

The Fido2UserTwoFactorTokenProvider implements the IUserTwoFactorTokenProvider interface which can be used to add additional custom 2FA providers to ASP.NET Core Identity.

using Microsoft.AspNetCore.Identity;
using OpeniddictServer.Data;
using System.Threading.Tasks;

namespace Fido2Identity;

public class Fido2UserTwoFactorTokenProvider : IUserTwoFactorTokenProvider<ApplicationUser>
{
    public Task<bool> CanGenerateTwoFactorTokenAsync(
        UserManager<ApplicationUser> manager, ApplicationUser user)
    {
        return Task.FromResult(true);
    }

    public Task<string> GenerateAsync(string purpose,
        UserManager<ApplicationUser> manager, ApplicationUser user)
    {
        return Task.FromResult("fido2");
    }

    public Task<bool> ValidateAsync(string purpose, string token,
        UserManager<ApplicationUser> manager, ApplicationUser user)
    {
        return Task.FromResult(true);
    }
}
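
Because this provider always reports success, the real security check is the server-side verification of the WebAuthn assertion. As a rough sketch (an assumption about the surrounding sign-in code, not the post's exact implementation), the login flow could then complete the ASP.NET Core Identity two-factor sign-in like this:

// Sketch: after fido2-net-lib has verified the WebAuthn assertion on the server,
// finish the Identity 2FA sign-in with the "FIDO2" provider registered above.
// _signInManager is the injected SignInManager<ApplicationUser>; the token value is
// ignored because Fido2UserTwoFactorTokenProvider always validates.
var result = await _signInManager.TwoFactorSignInAsync(
    "FIDO2",
    string.Empty,
    isPersistent: false,
    rememberClient: false);

if (!result.Succeeded)
{
    return Unauthorized();
}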

The session middleware is added along with the standard middleware.

app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.UseSession();

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapDefaultControllerRoute();
    endpoints.MapRazorPages();
});

The Javascript classes that implement the webAuthn standard API calls and the backend API calls need to be added to the wwwroot of the project. These are added here:

https://github.com/damienbod/AspNetCoreOpeniddict/tree/main/OpeniddictServer/wwwroot/js

One js file implements the Fido2 registration process and the other implements the login.

Now we need to implement the Fido2 bits in the ASP.NET Core Identity UI and add the Javascript scripts to these pages. I usually scaffold in the required ASP.NET Core pages and extend these with the FIDO2 implementations for the MFA.

The following Identity Pages need to be created or updated:

Account/Login
Account/LoginFido2Mfa
Account/Manage/Disable2fa
Account/Manage/Fido2Mfa
Account/Manage/TwoFactorAuthentication
Account/Manage/ManageNavPages

The Fido2 registration is implemented in the Account/Manage/Fido2Mfa Identity Razor page. The Javascript files are added to this page.

@page "/Fido2Mfa/{handler?}" @using Microsoft.AspNetCore.Identity @inject SignInManager<ApplicationUser> SignInManager @inject UserManager<ApplicationUser> UserManager @inject Microsoft.AspNetCore.Antiforgery.IAntiforgery Xsrf @functions{ public string? GetAntiXsrfRequestToken() { return Xsrf.GetAndStoreTokens(this.HttpContext).RequestToken; } } @model OpeniddictServer.Areas.Identity.Pages.Account.Manage.MfaModel @{ Layout = "_Layout.cshtml"; ViewData["Title"] = "Two-factor authentication (2FA)"; ViewData["ActivePage"] = ManageNavPages.Fido2Mfa; } <h4>@ViewData["Title"]</h4> <div class="section"> <div class="container"> <h1 class="title is-1">2FA/MFA</h1> <div class="content"><p>This is scenario where we just want to use FIDO as the MFA. The user register and logins with their username and password. For demo purposes, we trigger the MFA registering on sign up.</p></div> <div class="notification is-danger" style="display:none"> Please note: Your browser does not seem to support WebAuthn yet. <a href="https://caniuse.com/#search=webauthn" target="_blank">Supported browsers</a> </div> <div class="columns"> <div class="column is-4"> <h3 class="title is-3">Add a Fido2 MFA</h3> <form action="/Fido2Mfa" method="post" id="register"> <input type="hidden" id="RequestVerificationToken" name="RequestVerificationToken" value="@GetAntiXsrfRequestToken()"> <div class="field"> <label class="label">Username</label> <div class="control has-icons-left has-icons-right"> <input class="form-control" type="text" readonly placeholder="email" value="@User.Identity?.Name" name="username" required> </div> </div> <div class="field" style="margin-top:10px;"> <div class="control"> <button class="btn btn-primary">Add FIDO2 MFA</button> </div> </div> </form> </div> </div> <div id="fido2mfadisplay"></div> </div> </div> <div style="display:none" id="fido2TapYourSecurityKeyToFinishRegistration">FIDO2_TAP_YOUR_SECURITY_KEY_TO_FINISH_REGISTRATION</div> <div style="display:none" id="fido2RegistrationError">FIDO2_REGISTRATION_ERROR</div> <script src="~/js/helpers.js"></script> <script src="~/js/instant.js"></script> <script src="~/js/mfa.register.js"></script>

The Account/LoginFido2Mfa Razor Page implements the Fido2 login.

@page
@using Microsoft.AspNetCore.Identity
@inject SignInManager<ApplicationUser> SignInManager
@inject UserManager<ApplicationUser> UserManager
@inject Microsoft.AspNetCore.Antiforgery.IAntiforgery Xsrf
@functions{
    public string? GetAntiXsrfRequestToken()
    {
        return Xsrf.GetAndStoreTokens(this.HttpContext).RequestToken;
    }
}
@model OpeniddictServer.Areas.Identity.Pages.Account.MfaModel
@{
    ViewData["Title"] = "Login with Fido2 MFA";
}

<h4>@ViewData["Title"]</h4>

<div class="section">
    <div class="container">
        <h1 class="title is-1">2FA/MFA</h1>
        <div class="content"><p>This is scenario where we just want to use FIDO as the MFA. The user register and logins with their username and password. For demo purposes, we trigger the MFA registering on sign up.</p></div>
        <div class="notification is-danger" style="display:none">
            Please note: Your browser does not seem to support WebAuthn yet.
            <a href="https://caniuse.com/#search=webauthn" target="_blank">Supported browsers</a>
        </div>
        <div class="columns">
            <div class="column is-4">
                <h3 class="title is-3">Fido2 2FA</h3>
                <form action="/LoginFido2Mfa" method="post" id="signin">
                    <input type="hidden" id="RequestVerificationToken" name="RequestVerificationToken" value="@GetAntiXsrfRequestToken()">
                    <div class="field">
                        <div class="control">
                            <button class="btn btn-primary">2FA with FIDO2 device</button>
                        </div>
                    </div>
                </form>
            </div>
        </div>
        <div id="fido2logindisplay"></div>
    </div>
</div>

<div style="display:none" id="fido2TapKeyToLogin">FIDO2_TAP_YOUR_SECURITY_KEY_TO_LOGIN</div>
<div style="display:none" id="fido2CouldNotVerifyAssertion">FIDO2_COULD_NOT_VERIFY_ASSERTION</div>
<div style="display:none" id="fido2ReturnUrl">@Model.ReturnUrl</div>

<script src="~/js/helpers.js"></script>
<script src="~/js/instant.js"></script>
<script src="~/js/mfa.login.js"></script>

The other ASP.NET Core Identity files are extended to implement the 2FA provider logic.

I extended the _Layout file to include the sweetalert2 js package, which is used to implement the UI popups as in the passwordless demo from the Fido2 Nuget package. You do not need this and can change the js files to use something else.

<head>

    // ...

    <script type="text/javascript" src="https://cdn.jsdelivr.net/npm/sweetalert2"></script>
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/limonte-sweetalert2/6.10.1/sweetalert2.min.css" />
</head>

I used the Feitian Fido2 keys to test the implementation. I find these keys excellent and robust. I used 2 mini and 2 NFC standard keys to test. You should add at least 2 keys per identity. I usually use three keys for all my accounts.

https://www.ftsafe.com/products/FIDO

Once you create an account and log in, you can go to the user settings and set up 2FA. Choose Fido2, then register a key for your account.

Next time you log in, you will be required to authenticate using Fido2 as a second factor.

With Fido2 protecting the accounts against phishing and a solid implementation of OpenID Connect, you have a great start to implementing a professional identity provider.

Links:

https://github.com/abergs/fido2-net-lib

https://webauthn.io/

https://github.com/damienbod/AspNetCoreIdentityFido2Mfa

https://documentation.openiddict.com/

https://github.com/openiddict/openiddict-core

https://github.com/openiddict/openiddict-samples

Sunday, 03. July 2022

reb00ted

What is a DAO? A non-technical definition

Definitions of “DAO” (short for Decentralized Autonomous Organization) usually start with technology, specifically blockchain. But I think that actually misses much of what’s exciting about DAOs, a bit like if you were to explain why your smartphone is great by talking about semiconductor circuits. Let’s try to define DAO without starting with blockchain.

For me:

A DAO is…

a distributed group with a common cause of consequence that governs itself, does not have a single point of failure, and that is digital-native.

Let’s unpack this:

A group: a DAO is a form of organization. It is usually a group of people, but it could also be a group of organizations, a group of other DAOs (yes!) or any combination.

This group is distributed: the group members are not all sitting around the same conference table, and may never. The members of many DAOs have not met in person, and often never will. From the get-go, DAO members may come from around the globe. A common jurisdiction cannot be assumed, and as DAO membership changes, over time it may be that most members eventually come from a very different geography than where the DAO started.

With a common cause: DAOs are organized around a common cause, or mission, like “save the whales” or “invest in real-estate together”. Lots of different causes are possible, covering most areas of human interest, including “doing good”, “not for profit” or “for profit”.

This cause is of consequence to the members, and members are invested in the group. Because of that, members will not easily abandon the group. So we are not talking about informal pop-in-and-out-groups where maybe people have a good time but don’t really care whether the group is successful, but something where success of the group is important to the members and they will work on making the group successful.

That governs itself: it’s not a group that is subservient to somebody or some other organization or some other ruleset. Instead, the members of the DAO together make the rules, including how to change the rules. They do not depend on anybody outside of the DAO for that (unless, of course, they decide to do that). While some DAOs might identify specific members with specific roles, a DAO is much closer to direct democracy than representative democracy (e.g. as in a traditional organization where shareholders elect directors who then appoint officers who then run things).

That does not have a single point of failure and is generally resilient: no single point of failure should exist in terms of people who are “essential” and cannot be replaced, or tools (like specific websites). This is often described in a DAO context as “sufficient decentralization”.

And that is digital-native: a DAO usually starts on-line as a discussion group, and over time, as its cause, membership and governance become more defined, gradually turns into a DAO. At all stages members prefer digital tools and digital interactions over traditional tools and interactions. For example, instead of having an annual membership meeting at a certain place and time, they will meet online. Instead of filling out paper ballots, they will vote electronically, e.g. on a blockchain. (This is where having a blockchain is convenient, but there are certainly other technical ways voting could be performed.)

Sounds … very broad? It is! For me, that’s one of the exciting things about DAOs. They come with very little up-front structure, so the members can decide what and how they want to do things. And if they change their minds, they change their minds and can do that any time, collectively, democratically!

Of course, all this freedom means more work because a lot of defaults fall away and need to be defined. Governance can fail in new and unexpected ways because we don’t have hundreds of years of precedent in how, say, Delaware corporations work.

As an inventor and innovator, I’m perfectly fine with that. The things I tend to invent – in technology – are also new and fail in unexpected ways. Of course, there are many situations where that would be unacceptable: when operating a nuclear power plant, for example. So DAOs definitely aren’t for everyone and everything. But where existing structures of governance are found to be lacking, here is a new canvas for you!

Wednesday, 29. June 2022

Mike Jones: self-issued

OAuth DPoP Presentation at Identiverse 2022

Here’s the DPoP presentation that Pieter Kasselman and I gave at the 2022 Identiverse conference:

Bad actors are stealing your OAuth tokens, giving them control over your information – OAuth DPoP (Demonstration of Proof of Possession) is what we’re doing about it (PowerPoint) (PDF)

A few photographs that workation photographer Brian Campbell took during the presentation follow.

Mike Presenting:

Who is that masked man???

Pieter Presenting:

Tuesday, 28. June 2022

@_Nat Zone

Global Identity GAINs Global Interoperability

On Friday morning, after Identiverse's traditionally hard party the night before (I didn't go), people gathered at 8:30 a.m. for the keynote session of the final day. It began with an introduction by Andi Hindle, followed by Don Thibeau's announcement of the OpenID Foundation Kim Cameron Award (which I couldn't watch because I was backstage), and finally our panel, which ran for about 30 minutes.

8:45 a.m. to 9:15 a.m.: Keynote

The panelists were:

Drummond Reed, Director of Trust Services • Avast
Daniel Goldscheider, Co-founder & CEO • Yes.com
Sanjay Jain, Chief Innovation Officer; Partner • CIIE Co.; Bharat Innovation Fund
Nat Sakimura, Chairman • OpenID Foundation

The discussion was essentially about interoperability between trusted networks. The idea of leveraging existing networks seems to be enormously popular in many jurisdictions.

Network of networks, given first in my presentation at EIC 2021 Keynote

I also touched on the point that we need to consider not only the "rich people's computing" use case, where everyone has a smartphone, but also cases where people have no access to a smartphone and where connectivity and electricity are sporadic.

I will update this article with more details once the Identiverse 2022 archive is published.

(Photo credit: Brian Campbell)

Monday, 27. June 2022

Phil Windleys Technometria

Fixing Web Login

Summary: Like the "close" buttons for elevator doors, "keep me logged in" options on web-site authentication screens feel more like a placebo than something that actually works. Getting rid of passwords will mean we need to authenticate less often, or maybe just don't mind as much when we do. You know the conventional wisdom that the "close" button in elevators isn't really hooked up to a

Summary: Like the "close" buttons for elevator doors, "keep me logged in" options on web-site authentication screens feel more like a placebo than something that actually works. Getting rid of passwords will mean we need to authenticate less often, or maybe just don't mind as much when we do.

You know the conventional wisdom that the "close" button in elevators isn't really hooked up to anything, that it's just there to make you feel good? "Keep me logged in" is digital identity's version of that button. Why is using an authenticated service on the web so unpleasant?

Note that I'm specifically talking about the web, as opposed to mobile apps. As I wrote before, compare your online, web experience at your bank with the mobile experience from the same bank. Chances are, if you're like me, that you pick up your phone and use a biometric authentication method (e.g. FaceId) to open it. Then you select the app and the biometrics play again to make sure it's you, and you're in.

On the web, in contrast, you likely end up at a landing page where you have to search for the login button which is hidden in a menu or at the top of the page. Once you do, it probably asks you for your identifier (username). You open up your password manager (a few clicks) and fill the username and only then does it show you the password field1. You click a few more times to fill in the password. Then, if you use multi-factor authentication (and you should), you get to open up your phone, find the 2FA app, get the code, and type it in. To add insult to injury, the ceremony will be just different enough at every site you visit that you really don't develop much muscle memory for it.

As a consequence, when most people need something from their bank, they pull out their phone and use the mobile app. I think this is a shame. I like the web. There's more freedom on the web because there are fewer all-powerful gatekeepers. And, for many developers, it's more approachable. The web, by design, is more transparent in how it works, inspiring innovation and accelerating its adoption.

The core problem with the web isn't just passwords. After all, most mobile apps authenticate using passwords as well. The problem is how sessions are set up and refreshed (or not, in the case of the web). On the web, sessions are managed using cookies, or correlation identifiers. HTTP cookies are generated by the server and stored on the browser. Whenever the browser makes a request to the server, it sends back the cookie, allowing the server to correlate all requests from that browser. Web sites, over the years, have become more security conscious and, as a result, most set expirations for cookies. When the cookie has expired, you have to log in again.
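
As an illustration (a sketch in ASP.NET Core, not something from the post), this is roughly how a site caps the lifetime of its session cookie, which is what eventually forces you to log in again:

// Sketch: bounding the session cookie's lifetime in ASP.NET Core Identity.
// Once the cookie expires, the server no longer recognizes the session and the user must log in again.
services.ConfigureApplicationCookie(options =>
{
    options.ExpireTimeSpan = TimeSpan.FromMinutes(60); // cookie is valid for an hour after issue or refresh
    options.SlidingExpiration = true;                  // activity pushes the expiration out
    options.Cookie.HttpOnly = true;                    // keep the cookie out of reach of page scripts
});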

Now, your mobile app uses HTTP as well, and so it also uses cookies to link HTTP requests and create a session. The difference is in how you're authenticated. Mobile apps (speaking generally) are driven by APIs. The app makes an HTTP request to the API and receives JSON data in return which it then renders into the screens and buttons you interact with. Most API access is protected by an identity protocol called OAuth.

Getting an access token from the authorization server
Using a token to request data from an API

You've used OAuth if you've ever used any kind of social login like Login with Apple, or Google sign-in. Your mobile app doesn't just ask for your user ID and password and then log you in. Rather, it uses them to authenticate with an authorization server for the API using OAuth. The standard OAuth flow returns an access token that the app stores and then returns to the server with each request. Like cookies, these access tokens expire. But, unlike cookies, OAuth defines a refresh token mechanism that the app can use to get a new access token. Neat, huh?
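
A minimal sketch of that refresh mechanism, the standard OAuth 2.0 refresh-token grant; the token endpoint URL and client ID below are placeholders, not anything from the post:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TokenRefresher
{
    // Exchanges a refresh token for a new access token (OAuth 2.0 refresh_token grant).
    public static async Task<string> RefreshAsync(HttpClient http, string refreshToken)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "refresh_token",
            ["refresh_token"] = refreshToken,
            ["client_id"] = "example-mobile-app" // hypothetical client registration
        });

        // No user interaction is needed; the app silently obtains a fresh access token.
        var response = await http.PostAsync("https://auth.example.com/token", form);
        response.EnsureSuccessStatusCode();

        // The JSON body typically contains access_token, expires_in and, often, a new refresh_token.
        return await response.Content.ReadAsStringAsync();
    }
}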

The problem with using OAuth on the web is that it's difficult to trust browsers:

Some are in public places and people forget to log out.
A token in the browser can be attacked with techniques like cross-site scripting.
Browser storage mechanisms are also subject to attack.

Consequently, storing the access token, refresh token, and developer credentials that are used to carry out an OAuth flow is hard—maybe impossible—to do securely.

Solving this problem by hardening the browser enough to trust it with OAuth probably won't happen. A more likely approach is to get rid of passwords and make repeated authentication much less onerous. Fortunately, solutions are at hand. Most major browsers on most major platforms can now be used as FIDO platform authenticators. This is a fancy way of saying you can use the same mechanisms you use to authenticate to the device (touch ID, face ID, or even a PIN) to authenticate to your favorite web site as well. Verifiable credentials are another up-and-coming technology that promises to significantly reduce the burdens of passwords and multi-factor authentication.
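
To make the platform-authenticator idea concrete, here is a sketch of how a relying party could ask for one when registering a credential, reusing the fido2-net-lib package from the OpenIddict article above; this is an illustration under that assumption, not the approach described in this post, and the variables are placeholders.

// Sketch: request a built-in (platform) authenticator such as Touch ID, Face ID or
// Windows Hello when creating WebAuthn registration options with fido2-net-lib.
// _fido2 is an injected IFido2 instance; user is a Fido2User; excludeCredentials is a
// List<PublicKeyCredentialDescriptor> of credentials the account already registered.
var authenticatorSelection = new AuthenticatorSelection
{
    AuthenticatorAttachment = AuthenticatorAttachment.Platform, // the device's own authenticator
    UserVerification = UserVerificationRequirement.Required     // require a biometric or PIN check
};

var options = _fido2.RequestNewCredential(
    user,
    excludeCredentials,
    authenticatorSelection,
    AttestationConveyancePreference.None);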

I'm hopeful that we may really be close to the end for passwords. I think the biggest obstacle to adoption is likely that these technologies are so slick that people won't believe they're really secure. If we can get adoption, then maybe we'll see a resurgence of web-based services as well.

Notes
1. This is known as "identifier-first authentication". By asking for the identifier, the authentication service can determine how to authenticate you. So, if you're using token authentication instead of passwords, it can present the right option. Some places do this well, merely hiding the password field using Javascript and CSS, so that password managers can still fill the password even though it's not visible. Others don't, and you have to use your password manager twice for a single login.

Photo Credit: Dual elevator door buttons from Nils R. Barth (CC0 1.0)

Tags: identity web mobile oauth cookies


Kerri Lemole

JFF & VC-EDU Plugfest #1:

JFF & VC-EDU Plugfest #1: Leaping Towards Interoperable Verifiable Learning & Employment Records

Plugfest #1 Badge Image

Digital versions of learning and employment records (LER) describe a person’s learning and employment experiences and are issued or endorsed by entities making claims about these experiences. The advantage over paper documents is that LERs can contain massive amounts of useful data that describe the experiences, skills and competencies applied, and may even include assets like photos, videos, or content that demonstrate the achievement. The challenge is that this data needs to be understandable and it should be in the hands of those that the data is about so that they have the power to decide who or what has access to it much like they do with their watermarked and notarized paper documents.

LERs that are issued, delivered, and verified according to well-established and supported standards with syntactic, structural, and semantic similarities, can be understood and usable across many systems. This can provide individuals with direct, convenient, understandable, and affordable access to their records (Read more about interoperable verifiable LERs).

To encourage the development of a large and active marketplace of interoperable LER-friendly technology, tools, and infrastructure, Jobs for the Future (JFF), in collaboration with the W3C Verifiable Credentials Education Task Force (VC-EDU) is hosting a series of interoperability plugfests. These plugfests are inspired by the DHS Plugfests and provide funding to vendors that can demonstrate the use of standards such as W3C Verifiable Credentials (VC), and Decentralized Identifiers (DIDs). The first plugfest set the stage for the others by introducing VC wallet vendors to an education data standard called Open Badges and introducing Open Badges platforms to VCs.

Over the past year, the community at VC-EDU and 1EdTech Open Badges members have been working towards an upgrade of Open Badges to 3.0 which drops its web server hosted verification in favor of the VC cryptographic verification method. Open Badges are digital credentials that can represent any type of achievement from micro-credentials to diplomas. Until this upgrade, they have been used more as human verifiable credentials shared on websites and social media than machine verifiable ones. This upgrade increases the potential for machines to interact with these credentials giving individuals more opportunities to decide to use them in educational and employment situations that use computers to read and analyze the data.

Plugfest #1 requirements were kept simple in order to welcome as many vendors as possible. It required that vendors be able to display an Open Badge 3.0 including a badge image, issuer name, achievement name, achievement description, and achievement criteria. Optionally they could also display an issuer logo and other Open Badges 3.0 terms. For a stretch goal, vendors could demonstrate that they verified the badge prior to accepting and displaying it in their wallet app. Lastly, the participants were required to make a 3–5 minute video demonstrating what they’d done.

There were 20 participants from around the world at various stages in their implementation (list of participants). They were provided with a web page listing resources and examples of Open Badges. Because work on Open Badges 3.0 was still in progress, a sample context file was hosted at VC-EDU that would remain unchanged during the plugfest. Open discussion on the VC-EDU email list was encouraged so that they could be archived and shared with the community. These were the first Open Badges 3.0 to be displayed and there were several questions about how to best display them in a VC wallet. As hoped, the cohort worked together to answer these questions in an open conversation that the community could access and learn from.

The timeline to implement was a quick three weeks. Demo day was held on June 6, 2022, the day before the JFF Horizons conference in New Orleans. The videos were watched in batches by the participants and observers who were in person and on Zoom. Between batches, there were questions and discussions.

A complete list of the videos is available on the list of participants.

Plugfest #1 succeeded in familiarizing VC wallet vendors with an education data standard and education/workforce platforms with VCs. The participants were the first to issue & display Open Badges 3.0 or for that matter any education standard as a VC. It revealed new questions about displaying credentials and what onboarding resources will be useful.

Each of the participating vendors that met the requirements will be awarded the Plugfest #1 badge (image pictured above). With this badge, they qualify to participate in Plugfest #2, which will focus on issuing and displaying LER VCs. Plugfest #2 will take place in November 2022, with plans to meet in person on November 14, the day before the Internet Identity Workshop in Mountain View, CA. Vendors that are interested in Plugfest #2 but didn't participate in Plugfest #1 still have an opportunity to do so by fulfilling the same requirements listed above, including the video, and earning a Plugfest #1 badge.

To learn more, join VC-EDU which meets online most Mondays at 8 am PT/11 am ET/5 pm CET. Meeting connection info and archives can be found here. Subscribe to the VC-EDU mailing list by sending an email to public-vc-edu-request@w3.org with the subject “subscribe” (no email message needed).

Thursday, 23. June 2022

Phil Windleys Technometria

Transferable Accounts Putting Passengers at Risk

Summary: The non-transferability of verifiable credentials is one of their super powers. This post examines how that super power can be used to reduce fraud and increase safety in a hired car platform.

Bolt is a hired-car service like Uber or Lyft. Bolt is popular because its commissions are lower than those of other ride-sharing platforms. In Bolt drivers in Nigeria are illicitly selling their accounts, putting passengers at risk, Rest of World reports on an investigation showing that Bolt drivers in Nigeria (and maybe other countries) routinely sell verified accounts to third parties. The results are just what you'd expect:

Adede Sonaike is another Lagos-based Bolt user since 2018, and said she gets frequently harassed and shouted at by its drivers over even the simplest of issues, such as asking to turn down the volume of the car stereo. Sonaike said these incidents have become more common and that she anticipates driver harassment on every Bolt trip. But on March 18, she told Rest of World she felt that her life was threatened. Sonaike had ordered a ride, and confirmed the vehicle and plate number before entering the car. After the trip started, she noticed that the driver’s face didn’t match the image on the app. “I asked him why the app showed me a different face, and he said Bolt blocked his account and that [he] was using his brother’s account, and asked why I was questioning him,” she recalled. She noticed the doors were locked and the interior door handle was broken, and became worried. Sonaike shared her ride location with her family and asked the driver to stop, so she could end the trip. He only dropped her off after she threatened to break his windows. From Bolt drivers in Nigeria are illicitly selling their accounts
Referenced 2022-06-09T09:44:24-0400

The problem is that accounts are easily transferable, and reputations tied to transferable accounts can't be trusted since they don't reflect the actions of the person currently using the account. Making accounts non-transferable using traditional means is difficult because they're usually protected by something you know (e.g., a password) and that can be easily changed and exchanged. Even making the profile picture difficult to change (like Bolt apparently does) isn't a great solution since people may not check the picture, or may fall for stories like the one the driver gave the passenger in the preceding quote.

Verifiable credentials are a better solution because they're designed to not be transferable1. Suppose Bob wants to sell his Bolt account to Malfoy. Alice, a rider, wants to know the driver is really the holder of the account. Bolt issued a verifiable credential (VC) to Bob when he signed up. The VC issuing and presenting protocols cryptographically combine a non-correlatable identifier and a link secret and use zero-knowledge proofs (ZKPs) to present the credential. ZKP-based credential presentations have a number of methods that can be used to prevent transferring the credential. I won't go into the details, but the paper I link to provides eight techniques that can be used to prevent the transfer of a VC. We can be confident the VC was issued to the person presenting it.

Bolt could require that Bob use the VC they provided when he signed up to log into his account each time he starts driving. They could even link a bond or financial escrow to the VC to ensure it's not transferred. To prevent Bob from activating the account for Malfoy at the beginning of each driving period, Alice, and other passengers could ask drivers for proof that they're a legitimate Bolt driver by requesting a ZKP from the Bolt credential. Their Bolt app could do this automatically and even validate that the credential is from Bolt.

Knowing that the credential was issued to the person presenting it is one of the four cryptographic cornerstones of credential fidelity. The Bolt app can ensure the provenance of the credential Bob presents. Alice doesn't have to trust Bob or know very much about Bob personally, just that he really is the driver that Bolt has certified.

The non-transferability of verifiable credentials is one of their super powers. A lot of the talk about identity in Web3 has focused on NFTs. NFTs are, for the most part, designed to be transferable2. In that sense, they're no better than a password-protected account. Identity relies on knowing that the identifiers and attributes being presented are worthy of confidence and can be trusted. Otherwise, identity isn't reducing risk the way it should. That can't happen with transferable identifiers—whether they're password-based accounts or even NFTs. There's no technological barrier to Bolt implementing this solution now...and they should for the safety of their customers.

Notes
1. I'm speaking of features specific to the Aries credential exchange protocol in this post.
2. Recently Vitalik et al. proposed what they call a soul-bound token as a non-transferable credential type for Web3. I'm putting together my thoughts on that for a future post.

Photo Credit: A Buenos Aires taxi ride from Phillip Capper (CC BY 2.0)

Tags: identity ssi verifiable+credentials reputation

Wednesday, 22. June 2022

Kerri Lemole

Interoperability for Verifiable Learning and Employment Records

“real-world slide together” by fdecomite is licensed under CC BY 2.0.

in·ter·op·er·a·ble
/ˌin(t)ərˈäp(ə)rəb(ə)l/
adjective

(of computer systems or software) able to exchange and make use of information. (Oxford Dictionary)

if two products, programs, etc. are interoperable, they can be used together. (Cambridge Dictionary)

It’s no surprise that digital versions of learning and employment records (LERs) like certifications, licenses, and diplomas can introduce new worlds of opportunity and perspective. If they are issued, delivered, and verified according to well-established and supported standards, computers are able to exchange and use this information securely and interoperably. This practice of technical interoperability could also precipitate an increase in systemic interoperability by providing more individuals with direct, convenient, understandable, and affordable access to their confirmable LERs that are syntactically, structurally, and semantically similar. This can make digital credentials useful across many different systems.

Interoperability of digital LERs has three primary aspects:

Verification describes when the claims were made, who the credentials are from, who they are about, and provides methods to prove these identities and that the claim data have remained unchanged since issuance.
Delivery describes how the LERs move from one entity to another; overlaps with the verification layer.
Content describes what each claim is and is also referred to as the credential subject.

Verification
At the Worldwide Web Consortium (W3C) there’s a standard called Verifiable Credentials (VC) that describes how claims can be verified. It’s being used for claims that require unmitigated proof like government credentials, identity documents, supply chain management, and education credentials. A diploma issued as a VC by a university would contain content representing the diploma and would be digitally signed by the university. The identities of the university and the student could be represented by Decentralized Identifiers (DIDs), also a recommendation developed at the W3C for cryptographically verifiable identities. The diploma could be stored in a digital wallet app where the student would have access to their cryptographically verifiable digital diploma at a moment’s notice. Verifiers, such as employers, who understand the VC and DID standards could verify the diploma efficiently without notifying the university. Digitally, this resembles how watermarked and notarized documents are handled offline.

Delivery
The connections between the wallet, the university credential issuing system, the student, and the verifier encompass the delivery of VCs. This overlaps with verification because DIDs and digital signature methods must be taken into consideration when the LERs are issued and transported. There are a handful of ways to accomplish this and several efforts aiming towards making this more interoperable including W3C CCG VC HTTP API and DIF Presentation Exchange.

Content
Verifiers can recognize that a VC is a diploma, certification, or a transcript because there are many semantic standards with vocabularies that describe learning and employment records like these. Open Badges and the Comprehensive Learner Record (CLR) at 1edtech (formerly IMS Global) provide descriptions of credentials and the organizations that issue them. Both of these standards have been working towards upgrades (Open Badges 3.0 and CLR 2.0) to use the W3C Verifiable Credential model for verification. They provide a structural and semantic content layer that describes the claim as a type of achievement, the criteria met, and a potential profile of the issuer.

Another standard is the Credential Transparency Description Language (CTDL) at Credential Engine, which provides a more in-depth vocabulary to describe organizations, skills, jobs, and even pathways. When LER VCs contain CTDL content on its own or in addition to Open Badges or CLR, the rich data source can precisely describe who or what is involved in an LER, providing additional context and taxonomy that can be aligned with opportunities.

Standards groups continue to seek ways to meet the needs of issuing services, wallet vendors, and verifying services that are coming to market. The Credentials Community Group (CCG) is a great place to get acquainted with the community working on this. The W3C Verifiable Credentials for Education Task Force (VC-EDU) is a subgroup of the CCG that is exploring how to represent education, employment, and achievement verifiable credentials. This includes pursuing data model recommendations, usage guidelines, and best practices. Everyone at every stage of technology understanding is welcome to join in because we are all learning and every perspective increases understanding. VC-EDU meets online most Mondays at 8 am PT/11 am ET/5 pm CET. Meeting connection info and archives can be found here. Subscribe to the VC-EDU mailing list by sending an email to public-vc-edu-request@w3.org with the subject “subscribe” (no email message needed).

Tuesday, 21. June 2022

Doc Searls Weblog

What shall I call my newsletter?

I’ve been blogging since 1999, first at weblog.searls.com, and since 2007 here. I also plan to continue blogging here* for the rest of my life. But it’s clear now that newsletters are where it’s at, so I’m going to start one of those. The first question is, What do I call it? The easy thing, and […]

I’ve been blogging since 1999, first at weblog.searls.com, and since 2007 here. I also plan to continue blogging here* for the rest of my life. But it’s clear now that newsletters are where it’s at, so I’m going to start one of those.

The first question is, What do I call it?

The easy thing, and perhaps the most sensible, is Doc Searls Newsletter, or Doc Searls’ Newsletter, in keeping with the name of this blog. In branding circles, they call this line extension.

Another possibility is Spotted Hawk. This is inspired by Walt Whitman, who wrote,

The spotted hawk swoops by and accuses me,
he complains of my gab and my loitering.
I too am not a bit tamed.
I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

In the same spirit I might call the newsletter Barbaric Yawp. But ya kinda gotta know the reference, which even English majors mostly don’t. Meanwhile, Spotted Hawk reads well, even if the meaning is a bit obscure. Hell, the Redskins or the Indians could have renamed themselves the Spotted Hawks.

Yet barbaric yawping isn’t my style, even if I am untamed and sometimes untranslatable.

Any other suggestions?

As a relevant but unrelated matter, I also have to decide how to produce it. The easy choice is to use Substack, which all but owns the newsletter platform space right now. But Substack newsletters default to tracking readers, and I don’t want that. I also hate paragraph-long substitutes for linked URLs, and tracking cruft appended to the ends of legible URLs. (When sharing links from newsletters, always strip that stuff off. Pro tip: the cruft usually starts with a question mark.) I’m tempted by Revue, entirely because Julia Angwin and her team at The Markup went through a similar exercise in 2019 and chose Revue for their newsletter. I’m already playing with that one. Other recommendations are welcome. Same goes for managing the mailing list if I don’t use a platform. Mailman perhaps?

*One reason I keep this blog up is that Harvard hosts it, and Harvard has been around since 1636. I also appreciate deeply its steady support of what I do here and at ProjectVRM, which also manifests as a blog, at the Berkman Klein Center.


Identity Woman

Seeing Self-Sovereign Identity in Historical Context

Abstract A new set of technical standards called Self-Sovereign Identity (SSI) is emerging, and it reconfigures how digital identity systems work. My thesis is that the new configuration aligns better with the emergent ways our social systems in the west have evolved identity systems to  work at a mass scale and leverage earlier paper-based technologies. […] The post Seeing Self-Sovereign I

Abstract A new set of technical standards called Self-Sovereign Identity (SSI) is emerging, and it reconfigures how digital identity systems work. My thesis is that the new configuration aligns better with the emergent ways our social systems in the west have evolved identity systems to  work at a mass scale and leverage earlier paper-based technologies. […]

The post Seeing Self-Sovereign Identity in Historical Context appeared first on Identity Woman.

Sunday, 19. June 2022

Werdmüller on Medium

Tech on Juneteenth

Some tech firms perpetuate modern-day slavery by using prison labor. Continue reading on Medium »

Some tech firms perpetuate modern-day slavery by using prison labor.

Continue reading on Medium »

Friday, 17. June 2022

Ludo Sketches

The end of a chapter

After almost 12 years, I’ve decided to close the ForgeRock chapter and leave the company. Now that the company has successfully gone public, and has been set on a trajectory to lead the Identity industry, it was time for me… Continue reading →

After almost 12 years, I’ve decided to close the ForgeRock chapter and leave the company.

Now that the company has successfully gone public, and has been set on a trajectory to lead the Identity industry, it was time for me to pause and think about what matters to me in life. So I’ve chosen to leverage the exciting experience I’ve gained with ForgeRock and to start giving back to the startups in my local community.

But what an incredible journey it has been! I joined the company when it had a dozen employees. I was given the opportunity to found the French subsidiary, to start an engineering center, to build an amazing team of developers, and to deliver some rock-solid, highly scalable products. For this opportunity, I will always be thankful to the amazing 5 Founders of ForgeRock.

The ForgeRock Founders: Hermann, Victor, Lasse, Steve, Jonathan.

I have nothing but good memories of all those years and of the amazing events organized for all the employees or for our customers. There have been many IdentityLive events (formerly known as Identity Summits), and fewer but so energizing Company Meetings, in Portugal, Greece, the USA, and Italy.

I’ve worked with a team of rock-star product managers, from which I’ve learnt so much:

I’ve hired and built a team of talented software engineers, some of whom I’ve known for 20 years:

I don’t have enough space to write about all the different things we’ve done together, at work, outside work… But yeah, we rocked!

Overall, those 12 years have been an incredible and exciting journey, but what made the journey so exceptional is all the people who have come along. Without you, nothing would have been the same. Thank you! Farewell, but I’m sure we will meet again.

Thursday, 16. June 2022

reb00ted

A new term: "Platform DAO"

I usually invent technology, but today I present you with a new term: Platform DAO. Web searches don’t bring anything meaningful up, so I claim authorship on this term. Admittedly the amount of my inventiveness here is not very large. Trebor Scholz coined the term “Platform Cooperative” in an article in 2014 (according to Wikipedia). He started with the long established term of a “cooperativ

I usually invent technology, but today I present you with a new term:

Platform DAO.

Web searches don’t bring anything meaningful up, so I claim authorship on this term.

Admittedly the amount of my inventiveness here is not very large. Trebor Scholz coined the term “Platform Cooperative” in an article in 2014 (according to Wikipedia). He started with the long established term of a “cooperative”, and applied it to an organization that creates and maintains a software platform. So we get a “Platform Co-op”.

I’m doing the exact same thing: a “Platform DAO” is a Decentralized Autonomous Organization, a DAO, that creates and maintains a software platform. Given that DAOs largely are the same as Co-ops, except that they use technology in order to automate, and reduce the cost of some governance tasks – and also use technology for better transparency – it seems appropriate to create that parallel.

Why is this term needed? This is where I think things get really interesting.

The Platform co-op article on Wikipedia lists many reasons why platform co-ops could deliver much more societal benefits than traditional vendor-owned tech platforms can. But it also points out some core difficulties, which is why we haven’t seen too many successful platform co-ops. At the top of which is the difficulty of securing early-stage capital.

Unlike in co-ops, venture investors these days definitely invest in DAOs.

Which means we might see the value of “Platform Co-ops” realized in their form as “Platform DAOs” as venture investment would allow them to compete at a much larger scale.

Imagine if today, somebody started Microsoft Windows. As a DAO. Where users, and developers, and the entire VAR channel, are voting members of the DAO. This DAO would be just as valuable as Microsoft – in fact I would argue it would be more valuable than Microsoft – with no reason to believe it would deliver fewer features or lower quality, but lots of reasons to believe that the ecosystem would rally around it in a way that it would never rally around somebody else’s company.

Want to help? (No, I’m not building a Windows DAO. But a tech platform DAO that could be on that scale.) Get in touch!

Wednesday, 15. June 2022

Moxy Tongue

Self-Administered Governance In America

"We the people" have created a new living master; a bureaucratic machine, not "for, by, of" our control as people. This bureaucratic system, protected by a moat of credentialed labor certification processes and costs, is managed via plausible deniability practices now dominating the integrity of the civil systems which a civil society is managed by. Living people, now legal "people", function as as
"We the people" have created a new living master; a bureaucratic machine, not "for, by, of" our control as people. This bureaucratic system, protected by a moat of credentialed labor certification processes and costs, is managed via plausible deniability practices now dominating the integrity of the civil systems which a civil society is managed by. Living people, now legal "people", function as assets under management and social liabilities leveraged for the development of budget expenditures not capable of self-administration by the people they exist to serve. This "bureaucratic supremacy" in governed process has rendered words meaningless in practice, and allowed a new Administrative King to rule the Sovereign territory where American self-governance was envisioned and Constituted, once upon a time.
President after President, one precedent after another is used to validate actions that lack integrity under inspection. "Civil Rights Laws", suspended by bureaucratic supremacy alone, allow a President to nominate and hire a Supreme Court Justice on the stated basis of gender, skin color and qualifications. In lieu of a leader demonstrating what self-governance is capable of, "we the people" are rendered spectators of lifetime bureaucrats demonstrating their bureaucratic supremacy upon their "people". 
Throw all the words away; democracy, republic, authoritarian dictatorship, gone. None matter, none convey meaningful distinctions.
You can either self-administer your role in a "civil society", or you can not. If you can not, it need not matter what you call your Government, or what form of "voting" is exercised. In the end, you are simply data. You are data under management, a demographic to be harvested. You will either be able to self-administer your participation, or you will ask for endless permission of your bureaucratic masters who fiddle with the meaning of those permissions endlessly. In this context, a bureaucratic process like gerrymandering is simply an exercise in bureaucratic fraud, always plausibly deniable. 
Read all the history books you like; dead history is dead.
Self-administered governance of a civil society is the basis of the very existence of a "civil society" derived "of, by, for" people. People, Individuals All, living among one another, expressing the freedom of self-administration, is the only means by which a computationally accurate Constitution can exist. The imperfection of politics, driven by cult groupings of people practicing group loyalty for leverage in governed practices is itself a tool of leverage held exclusively by the bureaucracy. Self-administration of one's vote, held securely in an authenticated custodial relationship as an expression of one's authority in Government, is the means by which a Government derived "of, by, for" people comes into existence, and is sustained. Bureaucratic processes separating such people from the self-administration of their participation Constitutes a linguistic and legal ruse perpetrated upon people, Individuals all.
Plato, John Locke, Adam Smith... put down the books & seminal ideas.
Self-Administration of human authority, possessed equally by all living Individuals who choose civil participation as a method of Governance derived "of, by, for" people, begins and ends with the structural accuracy of words, and their functional practices. 






Tuesday, 14. June 2022

reb00ted

Impressions from the Consensus conference, Austin 2022

This past weekend I went to the Consensus conference in Austin. I hadn’t been to another, so I can’t easily compare this year with previous years. But here are my impressions, in random order: The show was huge. Supposedly 20,000 in-person attendees. Just walking from one presentation to another at the other end of the conference took a considerable amount of time. And there were other locat

This past weekend I went to the Consensus conference in Austin. I hadn’t been to another, so I can’t easily compare this year with previous years. But here are my impressions, in random order:

The show was huge. Supposedly 20,000 in-person attendees. Just walking from one presentation to another at the other end of the conference took a considerable amount of time. And there were other locations distributed all over downtown Austin.

Lots and lots of trade show booths with lots of traffic.

In spite of “crypto winter”, companies still spent on the trade show booths. (But then, maybe they committed to the expense before the recent price declines)

Pretty much all sessions were “talking heads on stage”. They were doing a good job at putting many women on. But only “broadcast from the experts to the (dumb) audience”? This feels out of touch in 2022, and particularly because web3/crypto is all supposed to be giving everyday users agency, and a voice. Why not practice what you promote, Consensus? Not even an official hash tag or back channel.

Frances Haugen is impressive.

No theme emerged. I figured that there would be one, or a couple, of “hot topics” that everybody talked about and would be excited about. Instead, I didn’t really see anything that I hadn’t heard about for some years.

Some of the demos at some of the booths, and some of the related conversations, were surprisingly bad. Without naming names, for example, what would you expect if somebody’s booth promises you some kind of “web3 authentication”? What I wouldn’t expect is that the demo consists of clicking a button labeled “Log in with Google”, and that, when I voiced surprise, the presenter handwaved about something with split keys, without being able to explain, or show, it at all.

I really hate it if I ask “what does the product do?” and the answer is “60,000 people use it”. This kind of response is of course not specific to crypto, but either the sales guy doesn’t actually know what the product does – which happens surprisingly often – or simply doesn’t care at all that somebody asked a question. Why are you going to trade shows again?

The refrain “it’s early days for crypto” is getting a bit old. Yes, other industries have been around for longer, but one should be able to see a few compelling, deployed solutions for real-world problems that are touching the real world outside of crypto. Many of those that I heard people pitch were often some major distance away from being realistic. For example, if somebody pitches tokenizing real estate, I would expect them to talk about the value proposition for, say, realtors, how they are reaching them and converting them, or how there is a new title insurance company based on blockchain that is growing very rapidly because it can provide better title insurance at much lower cost. Things like that. But no such conversation could be heard – well, at least not by me – and that indicates to me that the people pitching this haven’t really really encountered the market yet.

An anonymous crypto whale/investor – I think – who I chatted with over breakfast pretty much confirmed this: he basically complained that so many pitches he’s getting are on subjects the entrepreneurs know nothing about. So real domain knowledge is missing for too many projects. (Which would explain many things, including why so many promising projects have run out of steam when it is time to actually deliver on the lofty vision.)

The crypto “market” still seems to mostly consist of a bunch of relatively young people who have found a cool new technology, and are all in, but haven’t either felt the need to, nor have been successful at applying it to the real world. I guess billions of dollars of money flowing in crypto coins allowed them to ignore this so far. I wonder whether this attitude can last in this “crypto winter”.

But this is also a great opportunity. While 90% of what has been pitched in web3/crypto is probably crap and/or fraudulent (your number may be lower, or higher), it is not 100% and some things are truly intriguing. My personal favorites are DAOs, which have turned into this incredible laboratory for governance innovations. Given that we still vote – e.g. in the US – in largely the same way as hundreds of years ago, innovation in democratic governance has been glacial. All of a sudden we have groups that apply liquid democracy, and quadratic voting, and weigh votes by contributions, and lots of other ideas. It’s like somebody turned on the water in the desert, and instead of governance being all the same sand as always, there are now flowers of a 1000 different kinds that you have never seen before, blooming all over. (Of course many of those will not survive, as we don’t know how to do governance differently, but the innovation is inspiring.)

In my personal view, the potential of crypto technologies is largely all about governance. The monetary uses should be considered a side effect of new forms of governance, not the other way around. Of course, almost nobody – technologist or not – has many thoughts on novel, better forms of governance, because we have been so trained into believing that “western style democracy” cannot be improved on. Clearly, that is not true, and there are tons of spaces that need better governance than we have – my favorite pet peeve is the rules about the trees on my street – so all innovations in governance are welcome. If we could govern those trees better, perhaps we could also have a street fund to pay for their maintenance – which would be a great example for a local wallet with a “multisig”. Certainly it convinces me much more than some of the examples that I heard about at Consensus.

I think the early days are ending. The crypto winter will have a bunch of projects die, but the foundation has been laid for some new projects that could take over the world overnight, by leading with governance of an undergoverned, high-value space. Now what was yours truly working on again? :-)

Monday, 13. June 2022

Doc Searls Weblog

Why the Celtics will win the NBA finals

Marcus Smart. Photo by Eric Drost, via Wikimedia Commons. Back in 2016, I correctly predicted that the Cleveland Cavaliers would win the NBA finals, beating the heavily favored Golden State Warriors, which had won a record 73 games in the regular season. In 2021, I incorrectly predicted that the Kansas City Chiefs would beat the Tampa […]

Marcus Smart. Photo by Eric Drost, via Wikimedia Commons.

Back in 2016, I correctly predicted that the Cleveland Cavaliers would win the NBA finals, beating the heavily favored Golden State Warriors, which had won a record 73 games in the regular season. In 2021, I incorrectly predicted that the Kansas City Chiefs would beat the Tampa Bay Buccaneers. I based both predictions on a theory: the best story would win. And maybe Tom Brady proved that anyway: a relative geezer who was by all measures the GOAT, proved that label.

So now I’m predicting that the Boston Celtics will win the championship because they have the better story.

Unless Steph Curry proves that he’s the GSOAT: Greatest Shooter Of All Time. Which he might. He sure looked like it in Game Four. That’s a great story too.

But I like the Celtics’ story better. Here we have a team of relative kids who were average at best by the middle of the season, but then, under their rookie coach, became a defensive juggernaut, racking up the best record through the remainder of the season, then blowing through three playoff rounds to get to the Finals. In Round One, they swept Kevin Durant, Kyrie Irving and the Brooklyn Nets, who were pre-season favorites to win the Eastern Conference. In Round Two, they beat Giannis Antetokounmpo and the Milwaukee Bucks, who were defending champs, in six games. In Round Three, they won the conference championship by beating the Miami Heat, another great defensive team, and the one with the best record in the conference, in seven games. Now the Celtics are tied, 2-2, with the Western Conference champs, the Golden State Warriors, with Steph Curry playing his best, looking all but unbeatable, on a team playing defense that’s pretty much the equal of Boston’s.

Three games left, two at Golden State.

But I like the Celtics in this. They seem to have no problem winning on the road, and I think they want it more. And maybe even better.

May the best story win.

[Later…] Well, c’est le jeu. The Celtics lost the next two games, and the Warriors took the series.

After it was over, lots of great stories were told about the Warriors: the team peaked at the right time, they were brilliantly coached (especially on how to solve the Celtics), Steph moved up in all-time player rankings (maybe even into the top ten), Wiggins finally looked like the #1 draft choice he was years ago, the Dynasty is back. Long list, and it goes on. But the Celtics still had some fine stories of their own, especially around how they transformed from a mediocre team at mid-season to a proven title contender that came just two games away from winning it all. Not bad.


Ludo Sketches

ForgeRock Identity Live, Austin TX

A few weeks ago, ForgeRock organised the first Identity Live event of the season, in Austin TX. With more than 300 registered guests, an impeccable organisation by our Marketing team, the event was a great success. The first day was… Continue reading →

A few weeks ago, ForgeRock organised the first Identity Live event of the season, in Austin TX.

With more than 300 registered guests, an impeccable organisation by our Marketing team, the event was a great success.

The first day was sales oriented, with company presentations, roadmaps, and product demonstrations, but also testimony from existing customers. The second day focused on the technical side of the ForgeRock solutions, in an unconference format, where Product Managers, Technical Consultants and Engineers shared their experience and knowledge with the audience.

It was great to meet so many colleagues, partners, and customers again, and to have lively conversations about the products, the projects, and the overall direction of Identity technology.

You can find more photos of the event in the dedicated album.


Damien Bod

Force MFA in Blazor using Azure AD and Continuous Access

This article shows how to force MFA from your application using Azure AD and a continuous access auth context. When producing software which can be deployed to multiple tenants, instead of hoping IT admins configure this correctly in their tenants, you can now force this from the application. Many tenants do not force MFA. Code: […]

This article shows how to force MFA from your application using Azure AD and a continuous access auth context. When producing software which can be deployed to multiple tenants, instead of hoping IT admins configure this correctly in their tenants, you can now force this from the application. Many tenants do not force MFA.

Code: https://github.com/damienbod/AspNetCoreAzureADCAE

Blogs in this series

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API
Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API
Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core
Force MFA in Blazor using Azure AD and Continuous Access

Steps to implement

Create an authentication context in Azure for the tenant (using Microsoft Graph).
Add a CA policy which uses the authentication context.
Implement an authentication challenge using the claims challenge in the Blazor WASM.

Creating a conditional access authentication context

A continuous access (CA) authentication context was created using Microsoft Graph and a policy was created to use this. See the first blog in this series for details on setting this up.

Force MFA in the Blazor application

Now that the continuous access (CA) authentication context is set up and a policy requiring MFA is created, the application can check that the required acrs claim with the correct value is present in the id_token. We do this in two places: in the login of the account controller and in the OpenID Connect event sending the authorize request. The account controller Login method can be used to set the claims parameter with the required acrs value. By requesting this, the Azure AD policy auth context is forced.

[HttpGet("Login")] public ActionResult Login(string? returnUrl, string? claimsChallenge) { // var claims = // "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}"; // var claims = // "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}"; var redirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/"; var properties = new AuthenticationProperties { RedirectUri = redirectUri }; if(claimsChallenge != null) { string jsonString = claimsChallenge.Replace("\\", "") .Trim(new char[1] { '"' }); properties.Items["claims"] = jsonString; } else { // lets force MFA using CAE for all sign in requests. properties.Items["claims"] = "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}"; } return Challenge(properties); }

In the application an ASP.NET Core authorization policy can be implemented to force the MFA. All requests require a claim type acrs with the value c1, which we created in the Azure tenant using Microsoft Graph.

services.AddMicrosoftIdentityWebAppAuthentication(configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph("https://graph.microsoft.com/v1.0", scopes)
    .AddInMemoryTokenCaches();

services.AddAuthorization(options =>
{
    options.AddPolicy("ca-mfa", policy =>
    {
        policy.RequireClaim("acrs", AuthContextId.C1);
    });
});

By using the account controller Login method, only the login request forces the auth context. If the context needs to be forced everywhere, the OnRedirectToIdentityProvider event can be used to add the extra request parameter to every OIDC authorize request that has not already set the claims parameter. You could also use this on its own, without the login implementation in the account controller.

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .RequireClaim("acrs", AuthContextId.C1)
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

services.Configure<MicrosoftIdentityOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events.OnRedirectToIdentityProvider = context =>
    {
        if (!context.ProtocolMessage.Parameters.ContainsKey("claims"))
        {
            context.ProtocolMessage.SetParameter(
                "claims",
                "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}");
        }
        return Task.FromResult(0);
    };
});

Now all requests require the auth context which is used to require the CA MFA policy.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345


Heres Tom with the Weather

Good time to wear N95s

A few weeks ago, I ordered a box of N95 masks as I had been following the rising positivity rate. Both “Vaccines may not prevent many symptoms of long covid, study suggests” and “1 Of 5 With Covid May Develop Long Covid, CDC Finds” were also persuasive.

A few weeks ago, I ordered a box of N95 masks as I had been following the rising positivity rate. Both “Vaccines may not prevent many symptoms of long covid, study suggests” and “1 Of 5 With Covid May Develop Long Covid, CDC Finds” were also persuasive.

Sunday, 12. June 2022

@_Nat Zone

Identity in Conflict: In the Wake of the Invasion of Ukraine

On the coming June 21, near Denver, Colorado, USA, …

On the coming June 21, I will be holding a workshop titled “Identity in Conflict” at Identiverse 2022, which takes place near Denver, Colorado, USA.

Identity in Conflict
Tuesday, June 21, 11:30 am – 12:20 pm MDT (2:30 AM – 3:20 AM Japan time)
In times of instability and uncertainty, the reliability and trustworthiness of our identity systems become especially important. This workshop examines two areas in particular—identity management for displaced people, and the protection of government identity systems—and seeks to establish some ground rules to ensure that critical identity systems are robust and fit for purpose.

This session came about when I proposed it to the organizers in response to the invasion of Ukraine that began on February 24. I am nothing but grateful to them for squeezing it into an already full program.

The challenges of identity in conflict fall broadly into the following two areas.

Identity management for displaced people: how to deliver aid and other services (such as banking) to them, and how to protect displaced people and those around them from targeted disinformation.
Identity management for government-related systems: how to withstand enemy attacks and protect the systems of governments and aid organizations, and business continuity and recovery strategies.

Each of these is a topic one could talk about endlessly, but unfortunately we only have 50 minutes, and since someone responsible for defending government systems has arranged on short notice to travel to the U.S. and join this session, I expect we will mainly examine the second area.

Since this invasion began, phishing and other attacks against the Ukrainian government have increased by 3,000%. Help has been arriving from many quarters in response, such as Yubico shipping 20,000 YubiKeys. On the other hand, the cryptographic algorithms currently used by the Ukrainian government were developed by GOST (the Russian counterpart of NIST), and putting that together with information from a certain source that almost all government systems have already been hacked gives one quite a lot to think about.

If you are coming to Identiverse, please do join us.

Saturday, 11. June 2022

Heres Tom with the Weather

Memory Fail

A long time ago, I read The Language Instinct. Inside the back page of my book are notes with page numbers. This is a practice I learned from a book by James Michener. At some point, I started sharing in conversations something I learned. Unfortunately, I had not made a note for this that I could check and the information was more complex than I remembered. Since I had shared this more than o

A long time ago, I read The Language Instinct. Inside the back page of my book are notes with page numbers. This is a practice I learned from a book by James Michener. At some point, I started sharing in conversations something I learned. Unfortunately, I had not made a note for this that I could check, and the information was more complex than I remembered. Since I had shared this more than once, I thought I should really find the reference. It was not easy, but I found it on page 293. The first part I had right.

In sum, acquisition of a normal language is guaranteed for children up to the age of six, is steadily compromised from then until shortly after puberty, and is rare thereafter.

Here is the part I screwed up.

We do know that the language-learning circuitry of the brain is more plastic in childhood; children learn or recover language when the left hemisphere of the brain is damaged or even surgically removed (though not quite at normal levels), but comparable damage in an adult usually leads to permanent aphasia.

While this itself is fascinating to me, I had been embellishing the story to say language is acquired in the brain’s right hemisphere in children and the left for adults. Now that I’m rereading it after so many years, it is clear that the book says this can happen but is not necessarily so.

Thursday, 09. June 2022

MyDigitalFootprint

Predator-prey models to model users

Predator-prey models are helpful and are often used in environmental science because they allow researchers to both observe the dynamics of animal populations and make predictions as to how they will develop/ change over time. I have been quiet as we have been unpacking an idea that with a specific data set, we can model user behaviour based on a dynamic competitive market. This Predator-prey

Predator-prey models are helpful and are often used in environmental science because they allow researchers to both observe the dynamics of animal populations and make predictions as to how they will develop/ change over time.

I have been quiet as we have been unpacking an idea that with a specific data set, we can model user behaviour based on a dynamic competitive market. This Predator-prey method, when applied to understand why users are behaving in a certain way, opens up a lot of questions we don’t have answers to.  

As a #CDO, we have to remain curious, and this is curious. 

Take the example of the rabbit and the fox. We know that there is a lag between growth in a rabbit population and the increase in a fox population. The lag varies on each cycle, as do the peak and minimum of each population. We know that there is a lag between minimal rabbits and minimal foxes, as foxes can find other food sources and rabbits die of other causes.

Some key observations.  

The cycles, whilst they look similar, are very different because of externalities - and even over many time cycles where we end up with the same starting conditions, we get different outcomes. Starting at any point and using the data from a different cycle creates different results; it is not a perfect science even with the application of, say, Euler's method or Bayesian network models. Indeed, we appear to have divergence rather than convergence between what we expect and what we see, even though in reality, over a long time, the numbers remain within certain boundaries.

Each case begins with a set of initial conditions at a certain point in the cycle that will produce different outcomes for the function of the population of rabbits and foxes over a long period (100 years) - or user behaviours. 

This creates a problem: the data and models look good in slide form, as we can fix one model into a box that makes everyone feel warm and fuzzy. Yet with the same model and different starting parameters, the outcome does not match the plan. Decision-making is not always easier with data!
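To make the sensitivity point concrete, here is a minimal sketch (not from the original post) of the classic Lotka-Volterra predator-prey equations integrated with Euler's method. The parameter values and starting populations are arbitrary assumptions, chosen only to show how two nearby starting points drift apart over time while both stay bounded.

# Minimal Lotka-Volterra predator-prey simulation using Euler's method.
# All parameters and initial populations are arbitrary, illustrative values.

def simulate(rabbits, foxes, steps=5000, dt=0.01,
             alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    """Euler integration of dR/dt = alpha*R - beta*R*F and dF/dt = delta*R*F - gamma*F."""
    history = []
    for _ in range(steps):
        dr = (alpha * rabbits - beta * rabbits * foxes) * dt
        df = (delta * rabbits * foxes - gamma * foxes) * dt
        rabbits, foxes = rabbits + dr, foxes + df
        history.append((rabbits, foxes))
    return history

# Two runs with nearly identical starting conditions...
run_a = simulate(rabbits=10.0, foxes=5.0)
run_b = simulate(rabbits=10.5, foxes=5.0)   # slightly more rabbits to start

# ...end up in noticeably different places, even though both remain bounded.
print("run A ends at", run_a[-1])
print("run B ends at", run_b[-1])

Plotting the two histories side by side makes the same point: the cycles look similar in shape, but the peaks, troughs, and lags land in different places.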

As a CEO

How do you test that the model being presented flexes and responds sensibly to changes, rather than producing wildly different outcomes? It is easy to frame a model to give the outcome we want.






Wednesday, 08. June 2022

Just a Theory

Bryce Canyon 1987

Back in 1987 I made a photo at the Bryce Canyon Park. And now I’m posting it, because it’s *spectacular!*

The hoodoos of Bryce Canyon National Park. Photo made in the summer of 1987. © 1987 David E. Wheeler

Back in 1987, my mom and I went on a trip around the American Southwest. I was 18, freshly graduated from high school. We had reservations to ride donkeys down into the Grand Canyon, but sadly I got the flu, which kept us in the hotel along the rim.

The highlight of the trip turned out to be Bryce Canyon, where I made this photo of its famous hoodoos. Likely shot with Kodachrome 64, my go-to for sunny summer shots at the time, on a Pentax ME Super SLR with, as I recall, a 28-105mm lens. Mom asked me yesterday if I’d scanned photos from that trip and, digging into my scans, the deeply saturated colors with those lovely evergreens took my breath away.

More about… Bryce Canyon Utah Hoodoo

Monday, 06. June 2022

Heather Vescent

NFTs, ICOs and Ownership of Creative Ideas

Photo by Artur Aldyrkhanov on Unsplash In my March 2022 Biometric Update column, I explained that NFTs are a big deal because they are a unique digital identity layer native to the Ethereum blockchain. This is exciting because it can be used to track the ownership of digital items. While NFTs are not without their problems, there is a growing appetite to explore the possibilities thanks to a c
Photo by Artur Aldyrkhanov on Unsplash

In my March 2022 Biometric Update column, I explained that NFTs are a big deal because they are a unique digital identity layer native to the Ethereum blockchain. This is exciting because it can be used to track the ownership of digital items. While NFTs are not without their problems, there is a growing appetite to explore the possibilities thanks to a culture that views the world in a fresh way.

ICOs

To understand the importance of NFTs, we need to understand the context of the world when NFTs were originally designed. In 2018, there was a lot of energy around exploring alternate currencies as a funding mechanism. The term ICO — or initial coin offering — was a method to raise funds for a new venture. But the vision of ICOs wasn’t only to raise money, but also to create a community with shared values. It’s similar to an IPO or Kickstarter, but with one key difference: the community had its own currency that could be used for transactions. Many of the ICO projects used a cryptocurrency as part of the product design — a financial mechanism to enable cooperation in a complementary financial system (see Bernard Lietaer’s work on complementary currency systems). But an ICO was equally a signaling of belief in the project and a desire to innovate existing economic models.

ICOs were problematic for many reasons, but one thing legit ICO creators wanted was the ability to issue a receipt or stock-like token to show that you are part of this community. This functionality was not possible with the existing transactional tokens. Different business models became available with a token that had a unique identity and ran on the same infrastructure as transactional tokens.

ICOs combined crowd-funding and cryptocurrency, and challenged economics as we knew it at the time. Not all ICOs succeeded, and there were scams. But ICOs are a successful innovation, making a funding mechanism that was previously only available to an elitist few, available more broadly. And it paved the way for NFTs which extend the transactional nature of tokens to enable unique identity while using the same ETH rails. These innovations enable new business models.

Artists and Ownership of Ideas

Artists are explorers pushing the boundaries of what we think technology can be used for and how it can be used. There are many challenges of being an artist. Not only do you have to successfully mine the well of creativity to create your art; you have to have some business acumen and be lucky to find success. Often the financial success of artists comes late in their career, and many die before they see the impact they’ve had on human society.

I was just commenting about this in the Mac Cosmetics store last week, when I saw Mac’s new Keith Haring Viva Glam series. Keith Haring was a revolutionary artist who died of AIDS before he could even see the influence of his work. And one of my first future scenarios explored this idea, through an alternate currency created specifically so creators could pay for longevity treatments to live longer and see the impact of their lives.

Photo by Danny Lines on Unsplash

But Artists can be jerks. There are countless stories of lesser talented but more well known artists, stealing ideas from unknown geniuses. Yayoi Kusama’s earliest ideas were stolen and utilized by Andy Warhol, Claes Oldenburg and Lucas Samaras, the results of which made them famous, while Kusama still struggled. Seeing the success of her stolen ideas under a different name almost destroyed her. There was no way for her to prove creative provenance, especially when credit was not given.

Jodorowsky’s Dune influenced the entire Sci-Fi industry, including the Star Wars, Alien and Blade Runner franchises. But none of this was known until relatively recently.

Then there are artists like Banksy, who create art in public spaces, on brick walls and billboards. I remember driving down La Brea in Los Angeles one morning in 2011 seeing the latest Banksy graffiti on a recent billboard, only to hear that the billboard owner took it down a few hours later– in order to capitalize on it! In another case a concrete wall was removed because of the Banksy piece on it.

Photo by Nicolas J Leclercq on Unsplash

This illustrates the problems of creative provenance and ownership. Creative Commons licenses were created to provide a mechanism to license one’s work and allow others to use and remix it with attribution. But there aren’t good options for creators to protect against more powerful and better-resourced people who can execute on their (borrowed or stolen) idea.

For artists who do sell their work, there is another conundrum. Artists only get paid on the first sale of their work, but their art can be sold at auctions later at an increased value. This makes art collectors in many ways investors, but the actual creator doesn’t get to benefit from the increased value after the art has left their hands. In some cases, the holder of the piece of art can make many millions more than the actual creator of the piece. This use case inspired the creation of the ERC-2981 royalty standard, where artists can specify a royalty amount paid back to them when the digital item is transferred.

Artists, on one hand, don’t always care about the ownership of ideas. On the other hand, you have to have money to live and keep making art. For anyone who has experienced someone taking their ideas and executing on them, perhaps not even realizing the ideas were not their own, it is extremely painful. But the problem with ideas is that, if you want them to catch on, they have to become someone else’s. Unfortunately, those with additional resources benefit when “borrowing” someone’s idea simply because they have the resources to execute on it.

Do NFTs give control?

NFTs sell the dream that artists can have control over what others can do with their creations. If you put an NFT on something to show provenance, that could help, but only if the laws around IP and ownership change too. Culture needs to change too. We’re all standing on the shoulders of the past, whether we acknowledge it or not.

We shouldn’t be surprised at the explosion of NFT art — artists always use technology in novel ways that can be quite different from the use cases of the original creators. Traditional economic models haven’t supported creative efforts. And isn’t the point of artists to challenge the traditional ways we see the world? NFTs are an economic innovation that promises to give a tiny bit of power back to the artist.

I want to believe NFTs will help solve this problem, and I think they can partially address it. NFTs are a mechanism to give an artist more control and enable others to directly support them. But there is still the larger problem of living in a world that doesn’t often value creative expression in economic terms. And of those who have more power and resources utilizing the ideas of others for their own gain.


Damien Bod

Using math expressions in github markdown

This blog explores using and creating some standard mathematical expressions using github markdown. I was motivated to try this out after reading this blog. If you know the TEX Commands available in MathJax, then creating math documentation in github is a breeze. Github markdown uses MathJax. Code: https://github.com/damienbod/math-docs I decided to try out some basic […]

This blog explores using and creating some standard mathematical expressions using github markdown. I was motivated to try this out after reading this blog. If you know the TEX Commands available in MathJax, then creating math documentation in github is a breeze. Github markdown uses MathJax.

Code: https://github.com/damienbod/math-docs

I decided to try out some basic math functions.

Quadratic equations examples

$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}$$

$(a+b)^2$

$$\eqalign{
(a+b)^2 &= (a+b)(a+b) \\
&= a^2 + ab + ba + b^2 \\
&= a^2 + 2ab + b^2
}$$

$(a-b)^2$

$$\eqalign{
(a-b)^2 &= (a-b)(a-b) \\
&= a^2 - ab - ba + b^2 \\
&= a^2 - 2ab + b^2
}$$

$(a-b)(a+b)$

$$\eqalign{
(a+b)(a-b) &= a^2 - ab + ba - b^2 \\
&= a^2 - b^2
}$$

Produces https://github.com/damienbod/math-docs#quadratic-equations

Functions

$$ f(x) = {\sqrt{5x^2+2x-1}+(x-2)^2 } $$

$$ g(x) = {\frac{a}{1-a^2} }$$

$$ f(x) = {(x + 2) \over (2x + 1)} $$

$$ f(x) = { \sqrt[3]{x^2} }$$

$$ \sqrt[5]{34}$$

Trigonometry examples

$$ cos^2 \theta + sin^2 \theta = 1 $$

$$ tan 2 \theta = {2tan \theta \over 1 - tan^2 \theta} $$

$$\eqalign{
cos 2 \theta &= cos^2 \theta - sin^2 \theta \\
&= 2 cos^2 \theta -1 \\
&= 1 - 2sin^2 \theta
}$$

Prove $ \sqrt{ 1 - cos^2 \theta \over 1- sin^2 \theta} = tan \theta $

$$ \sqrt{ 1 - cos^2 \theta \over 1- sin^2 \theta} = \sqrt{ sin^2 \theta \over cos^2 \theta} = {sin \theta \over cos \theta} = tan \theta $$

Calculus examples

$$\eqalign{
f(x) = {3x^4} \implies {dy \over dx} = 12x^3
}$$

$$\eqalign{
f(x) = {2x^{-3/2}} \implies {dy \over dx} = -3x^{-5/2} &= -{3 \over \sqrt{x^5}}
}$$

If $x = 2t + 1$ and $ y = t^2$ find ${dy \over dx}$ ?

$$\eqalign{
x = 2t + 1 \implies {dx \over dt} = 2 \\
y = t^2 \implies {dy \over dt} = 2t \\
{dy \over dx} = {dy \over dt} \div {dx \over dt} \\
\implies 2t \div 2 = t
}$$

Integration examples

Evaluate $\int_1^2 (x + 4)^2 dx $

$$\eqalign{
\int_1^2 (x + 4)^2 dx = \int_1^2 (x^2 + 8x + 16) dx \\
&= \left\lbrack {x^3 \over 3} + {8x^2 \over 2} + 16x \right\rbrack_1^2 \\
&= \left\lbrack {8 \over 3} + {8 * 4 \over 2} + 16 * 2 \right\rbrack - \left\lbrack {1 \over 3} + {8 \over 2} + 16 \right\rbrack
}$$

Matrix example

$$ {\left\lbrack \matrix{2 & 3 \cr 4 & 5} \right\rbrack} * \left\lbrack \matrix{1 & 0 \cr 0 & 1} \right\rbrack = \left\lbrack \matrix{2 & 3 \cr 4 & 5} \right\rbrack $$

Sum examples

$$\sum_{n=1}^n n = {n \over 2} (n + 1) $$

$$\sum_{n=1}^n n^2 = {n \over 6} (n + 1)(2n + 1) $$

Notes

It is super easy to create great mathematical documentation now using github markdown. This is super useful for many use cases: schools, students, and reports. This blog provides great help in finding the correct code to produce the docs.

Links

Render mathematical expressions in Markdown

https://www.mathjax.org/

http://docs.mathjax.org/en/latest/

https://docs.mathjax.org/en/v2.7-latest/tex.html

https://www.onemathematicalcat.org/MathJaxDocumentation/TeXSyntax.htm

Sunday, 05. June 2022

Identity Praxis, Inc.

MEF CONNECTS Personal Data & Identity Event & The Personal Data & Identity Meeting of The Waters: Things are Just Getting Started

This article was published by the MEF, on June 3, 2022.   Early last month, the MEF held its first-ever event dedicated to personal data and identity event: MEF CONNECTS Personal Data & Identity Hybrid, on May 10 and 11th, in London (watch all the videos here). It was unquestionably a huge success. Hundreds of people […] The post MEF CONNECTS Personal Data & Identity Event &

This article was published by the MEF, on June 3, 2022.

 

Early last month, the MEF held its first-ever event dedicated to personal data and identity event: MEF CONNECTS Personal Data & Identity Hybrid, on May 10 and 11th, in London (watch all the videos here). It was unquestionably a huge success. Hundreds of people came together to learn, interact, and make an impact.

A Transformative Agenda

The event covered a wide range of strategic, tactical, and technical topics. In addition to recruiting the speakers and programming the event, I spoke on The Personal Data & Identity Meeting of the Waters and introduced The Identity Nexus, an equation that illustrates the social and commercial personal data and identity equilibrium. Together we discussed:

Leading identification, authentication, and verification strategies
The pros and cons and comparisons of various biometrics methods, including FaceID and VeinID
People’s attitudes and sentiments at the nexus of personal data, identity, privacy and trust across 10-markets and U.S. undergraduate students
Passwordless authentication and approaches to self-sovereign identity and personal data management
Where personal data and identity meet physical and eCommerce retail, financial services, insurance, automotive, the U.S. military, and healthcare
The role of carriers and data brokers in improving the customer experience along the customer journey, identification, and combating fraud
Strategies for onboarding the over billion people today without an ID, let alone a digital ID
The rise of the personal information economy and seven different approaches to empowering individuals to give them agency, autonomy, and control over their personal data and identity
Zero-party data strategies
Demonstrable strategies for securing IoT data traffic
Environment, Social, and Governance (ESG) investment, personal data, and identity investment strategies
Emergent people-centric/human-centric business models
The rise of new regulations, including GDPR, CCPA, and new age-verification and age-gating regulations, and the impact they’ll have on every business
Frameworks to help business leaders at every level figure out what to keep doing, start doing, or do differently


MEF CONNECTS Personal Data & Identity 2022 Wordcloud

By the Numbers

The numbers tell it all –

Over 300 people engaging in-person and online
26 sessions
11 hours and 50 minutes of recorded on-demand video content
43 senior leaders speaking on a far range of topics:
  11 CEOs and Presidents (Inc. Lieutenant Colonel, U.S. Army (Ret.))
  4 C-suite executives (Strategy, Commercial, Marketing)
  3 Executive Directors & Co-founders
  5 SVPs and VPs
  7 Department Heads
  13 SEMs
37 companies and brands represented on stage – MyDex, Mobile Ecosystem Forum, Mercedes-Benz Cars, Identity Praxis, Inc., IG4Capital, Identity Woman, Leading Points, Women In Identity, Vodafone, ZARIOT, Spokeo, Sinch, AerPass, Hitachi Europe Ltd, Infobip, Insights Angels, Nickey Hickman, British Telecom, Cheetah Digital, Dataswift, Digi.me, Fingo, Assurant, Age Verification Providers Association, Twilio, Volvo, Ctrl-Shift, IPification, Pool Data, NatWest, iProov Ltd, Visa, XConnect, polyPoly, Skechers, Global Messaging Service [GMS], World Economic Forum
5 Sponsors – Assurant, XConnect, Infobip, Cheetah Digital, and Sinch
1 book announcement, the pre-release announcement of my new book – “The Personal Data & Identity Meeting of the Waters: A Global Market Assessment”

It was an honor to share the stage with so many talented people, and a huge shout-out needs to be given to our sponsors and to the Mobile Ecosystem Forum team, who executed flawlessly.

We’re Just Getting Started and I’m Here for You

The global data market generates $11.7 trillion annually for the global economy. Industry experts forecast that efficient use of personal data and identity (not including the benefits of innovation, improving mental health and social systems, IoT interactions, banking and finance, road safety, reducing multi-trillion-dollar cybercrime losses, and more) can add one percent to thirteen percent to a country’s gross domestic product (GDP). And we’re just getting started. The personal data and identity tsunami is just now reaching and washing over the shores of every society and economy. No region, no community, no country, no government, no enterprise, no individual, no thing, is immune to its effects.

I’m here to help. I can help you get involved with the MEF Personal Data & Identity Working Group, understand the global and regional personal data and identity market, build and execute a balanced personal data & identity strategy and products, build people-centric customer experiences at every touchpoint along your customer journey, meet new people and identify and source partners, educate your team, impact global regulations, standards, and protocols, and identify programs and events that can help you and your organization learn, grow, and make a difference. Connect with me on LinkedIn or schedule a call with me here.

Meet with Me In Person in June

I’ll be speaking at the MyData 2022 in Helsinki on June 20~23. If you can make it, please connect with me, and let’s meet up (ping me if you need a discount code to attend).

#APIs #AgeAssurance #AgeEstimation #AgeGating #AgeVerification #Agency #Aggregator #Assurance #Authentication #AuthenticatorApp #Biometrics #C #CampaignRegistry #Carrier #CarrierIdentification #Compliance #ConnectedCar #ConsumerSentiment #Control #CustomerRelationships #Data #DataBroker #DataCooperative #DataTrust #DataUnion #DecentralizedIdentity #DigitalID #ESG #Econometrics #End-To-EndEncryption #EnvironmentalSocialGovernance #FaceID #FinancialServices #FingerprintID #Fraud #FraudMitigation #Identity #Infomediary #Investing #IoT #LoyaltyProgram #MEF #MarTech #MeetingOfWaters #MeetingofWaters #MeetingoftheWaters #Messaging #MilitaryConnect #MobileCarrier #MobileOperator #NumberInteligence #Organizational-CentricApproach #OrganizationalCentricApproach #PasswordlessAuthentication #People-CentricApproach #PeopleCentricApproach #Pers #PersonalControl #PersonalData #PersonalDataStore #Personalization #Privacy #RAISEFramework #Regulation #Research #Retail #SMS #SMSOTP #Self-regulation #SelfSovereignty #ServiceProvider #TheIdentityNexus #Trust #TrustFramework #TrustbutVerify #VeinID #ZeroKnowledgeAuthentication #ZeroPartyData #data #decentrlaizedidentity #eCommerce #eIDAS #euConsent #identity #personaldata #privacy #selfsovereignidentity #usercentricidentity @AegisMobile @AerPass @AgeVerificationProvidersAssocation @Assurant @BritishTelecom @CheetahDigital @Ctrl-Shift @Dataswift @Digi.me @Fingo @Freelancer @GlobalMessagingService[GMS] @HitachiEuropeLtd @IG4Capital @IPification @IdentityPraxis @IdentityPraxis,Inc. @IdentityWoman @Infobip @InsightsAngels @LeadingPoints @Mercedes-BenzCars @MobileEcosystemForum @MyDex @NatWest @PoolData @Sinch @Skechers @Spokeo @Twilio @Visa @Vodafone @Volvo @WomenInIdentity @WorldEconomicForum @XConnect @ZARIOT @iProovLtd @polyPoly @WEF

The post MEF CONNECTS Personal Data & Identity Event & The Personal Data & Identity Meeting of The Waters: Things are Just Getting Started appeared first on Identity Praxis, Inc..


Jon Udell

What happened to simple, basic web hosting?

For a friend’s memorial I signed up to make a batch of images into a slideshow. All I wanted was the Simplest Possible Thing: a web page that would cycle through a batch of images. It’s been a while since I did something like this, so I looked around and didn’t find anything that seemed … Continue reading What happened to simple, basic web hosting?

For a friend’s memorial I signed up to make a batch of images into a slideshow. All I wanted was the Simplest Possible Thing: a web page that would cycle through a batch of images. It’s been a while since I did something like this, so I looked around and didn’t find anything that seemed simple enough. The recipes I found felt like overkill. Here’s all I wanted to do:

Put the images we’re gathering into a local folder
Run one command to build slideshow.html
Push the images plus slideshow.html to a web folder

Step 1 turned out to be harder than expected because a bunch of the images I got are in Apple’s HEIC format, so I had to find a tool that would convert those to JPG. Sigh.
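For what it’s worth, one way to script that conversion is sketched below. It assumes the third-party Pillow and pillow-heif packages (pip install pillow pillow-heif); neither is mentioned in the post, so treat this as one possible approach rather than the tool actually used.

import os
from PIL import Image
from pillow_heif import register_heif_opener

register_heif_opener()  # lets Pillow open .heic files

# Convert every HEIC image in the current folder to a JPG alongside it.
for name in os.listdir():
    if name.lower().endswith('.heic'):
        img = Image.open(name)
        img.convert('RGB').save(os.path.splitext(name)[0] + '.jpg', 'JPEG')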

For step 2 I wrote the script below. A lot of similar recipes you’ll find for this kind of thing will create a trio of HTML, CSS, and JavaScript files. That feels to me like overkill for something as simple as this. I want as few moving parts as possible, so the Python script bundles everything into slideshow.html, which is the only thing that needs to be uploaded (along with the images).

Step 3 was simple: I uploaded the JPGs and slideshow.html to a web folder.

Except, whoa, not so fast there, old-timer! True, it’s easy for me, I’ve maintained a personal web server for decades and I don’t think twice about pushing files to it. Once upon a time, when you signed up with an ISP, that was a standard part of the deal: you’d get web hosting, and would use an FTP client — or some kind of ISP-provided web app — to move files to your server.

As I realized a few years ago, that’s now a rare experience. It seems that for most people, it’s far from obvious how to push a chunk of basic web stuff to a basic web server. People know how to upload stuff to Google Drive, or WordPress, but those are not vanilla web hosting environments.

It’s a weird situation. The basic web platform has never been more approachable. Browsers have converged nicely on the core standards. Lots of people could write a simple app like this one. Many more could at least /use/ it. But I suspect it will be easier for many nowadays to install Python and run this script than to push its output to a web server.

I hate to sound like a Grumpy Old Fart. Nobody likes that guy. I don’t want to be that guy. So I’ll just ask: What am I missing here? Are there reasons why it’s no longer important or useful for most people to be able to use the most basic kind of web hosting?

import os

l = [i for i in os.listdir() if i.endswith('.jpg')]

divs = ''
for i in l:
    divs += f"""
<div class="slide">
  <img src="{i}">
</div>
"""

# Note: In a Python f-string, CSS/JS squiggies ({}) need to be doubled

html = f"""
<html>
  <head>
    <title>My Title</title>
    <style>
      body {{ background-color: black }}
      .slide {{ text-align: center; display: none; }}
      img {{ height: 100% }}
    </style>
  </head>
  <body>
    <div id="slideshow">
      <div role="list">
        {divs}
      </div>
    </div>
    <script>
      const slides = document.querySelectorAll('.slide')
      const time = 5000
      slides[0].style.display = 'block';
      let i = 0
      setInterval( () => {{
        i++
        if (i === slides.length) {{
          i = 0
        }}
        for (let j = 0; j <= i; j++ ) {{
          if ( j === i ) {{
            slides[j].style.display = 'block'
          }} else {{
            slides[j].style.display = 'none'
          }}
        }}
      }}, time)
    </script>
  </body>
</html>
"""

with open('slideshow.html', 'w') as f:
    f.write(html)

Monday, 30. May 2022

Phil Windleys Technometria

Twenty Years of Blogging

Summary: Blogging has been good to me. Blogging has been good for me. Leslie Lamport said "If you think you understand something, and don’t write down your ideas, you only think you’re thinking." I agree wholeheartedly. I often think "Oh, I get this" and then go to write it down and find all kinds of holes in my understanding. I write to understand. Consequently, I write my blog fo

Summary: Blogging has been good to me. Blogging has been good for me.

Leslie Lamport said "If you think you understand something, and don’t write down your ideas, you only think you’re thinking." I agree wholeheartedly. I often think "Oh, I get this" and then go to write it down and find all kinds of holes in my understanding. I write to understand. Consequently, I write my blog for me. But I hope you get something out of it too!

I started blogging in May 2002, twenty years ago today. I'd been thinking about blogging for about a year before that, but hadn't found the right tool. Jon Udell, who I didn't know then, mentioned his blog in an InfoWorld column. He was using Dave Winer's Radio Userland, so I downloaded it and started writing. At the time I was CIO for the State of Utah, so I garnered a bit of notoriety as a C-level blogger. And I had plenty of things to blog about.

Later, I moved to MovableType and then, like many developers who blog, wrote my own blogging system. I was tired of the complexity of blogging platforms that required a database. I didn't want the hassle. I write the body of each post in Emacs using custom macros I created. Then my blogging system generates pages from the bodies using a collection of templates. I use rsync to push them up to my server on AWS. Simple, fast, and completely under my control.
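
For readers curious what that kind of pipeline looks like, here is a minimal sketch of a template-and-generate step in Python. It illustrates the general idea only, not Phil's actual system, and the posts/ and public/ folder names are made up for the example:

from pathlib import Path

# One page template; a real system would have several (index, archive, feed, ...).
TEMPLATE = """<html>
<head><title>{title}</title></head>
<body><article>{body}</article></body>
</html>"""

def build(src_dir: str = "posts", out_dir: str = "public") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for post in Path(src_dir).glob("*.html"):
        page = TEMPLATE.format(title=post.stem.replace("-", " "), body=post.read_text())
        (out / post.name).write_text(page)

if __name__ == "__main__":
    build()
    # The generated pages can then be pushed to the server, for example with rsync.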

Along the way, I've influenced my family to blog. My wife, Lynne, built a blog to document her study abroad to Europe in 2019. My son Bradford has a blog where he publishes short stories. My daughter Alli is a food blogger and entrepreneur with a large following. My daughter Samantha is an illustrator and keeps her portfolio on her blog.

Doc Searls, another good friend who I met through blogging, says you can make money from your blog or because of it. I'm definitely in the latter camp. Because I write for me, I don't want to do the things necessary to grow an audience and make my blog pay. But my life and bank account are richer because I blog. Jon, Dave, and Doc are just a few of countless friends I've made blogging. I wouldn't have written my first book if Doug Kaye, another blogging friend, hadn't suggested it. I wouldn't have started Internet Identity Workshop or been the Executive Producer of IT Conversations. I documented the process of creating my second startup, Kynetx, on my blog. And, of course, I've written a bit (402 posts so far, almost 10% of the total) on identity. I've been invited to speak, write, consult, and travel because of what I write.

After 20 years, blogging has become a way of life. I think about things to write all the time. I can't imagine not blogging. Obviously, I recommend it. You'll become a better writer if you blog regularly. And you'll better understand what you write about. Get a domain name so you can move it, because you will, and you don't want to lose what you've written. You may not build a brand, but you'll build yourself and that's the ultimate reward for blogging.

Photo Credit: MacBook Air keyboard 2 from TheumasNL (CC BY-SA 4.0)

Tags: blogging


Damien Bod

Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core

This post shows how to force an Azure AD policy using Azure AD Continuous Access (CA) in an ASP.NET Core Blazor application. An authentication context is used to require MFA. The “acrs” claim in the id_token is used to validate whether or not an Azure AD CAE policy has been fulfilled. If the claim is […]

This post shows how to force an Azure AD policy using Azure AD Continuous Access (CA) in an ASP.NET Core Blazor application. An authentication context is used to require MFA. The “acrs” claim in the id_token is used to validate whether or not an Azure AD CAE policy has been fulfilled. If the claim is missing, an OpenID Connect challenge is sent to the Azure AD identity provider to request and require this. In this sample, MFA is required.

Code: https://github.com/damienbod/AspNetCoreAzureADCAE

Blogs in this series

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API
Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API
Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core
Force MFA in Blazor using Azure AD and Continuous Access

Steps to implement

Create an authentication context in Azure for the tenant (using Microsoft Graph)
Add a CA policy which uses the authentication context.
Implement the Blazor backend to handle the CA validation correctly
Implement an authentication challenge using the claims challenge in the Blazor WASM

Setup overview

A Blazor WASM application is implemented and hosted in an ASP.NET Core application. This is a single application, also known as a server-rendered application. The single application is secured using a single confidential client and the security is implemented in the trusted backend with no sensitive token data stored in the browser. Cookies are used to store the sensitive data. Microsoft.Identity.Web is used to implement the security. The Microsoft.Identity.Web lib is an OpenID Connect client wrapper from Microsoft with some Microsoft Azure specifics.

Creating a conditional access authentication context

A continuous access evaluation (CAE) authentication context was created using Microsoft Graph and a policy was created to use this. See the first blog in this series for details on setting this up.
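
As a rough illustration of the Microsoft Graph side of that setup, the call looks something like the sketch below; the beta endpoint path, the payload fields, and the way the Graph token is obtained are assumptions to verify against the Graph documentation and the first post in the series:

import requests

def define_auth_context(graph_token: str, context_id: str = "c1") -> None:
    # Upsert an authentication context class reference (assumed beta endpoint).
    url = ("https://graph.microsoft.com/beta/identity/conditionalAccess/"
           f"authenticationContextClassReferences/{context_id}")
    body = {
        "id": context_id,
        "displayName": "Require MFA",
        "description": "Authentication context used by the CAE sample apps",
        "isAvailable": True,
    }
    resp = requests.patch(url, json=body,
                          headers={"Authorization": f"Bearer {graph_token}"})
    resp.raise_for_status()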

Validate the CAE in the Blazor backend

The CaeClaimsChallengeService class is used to implement the CAE check in the application. The class checks for the acrs claim and returns a claims challenge requesting the claim if this is missing.

namespace BlazorBffAzureAD.Server;

/// <summary>
/// Claims challenges, claims requests, and client capabilities
///
/// https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge
///
/// Applications that use enhanced security features like Continuous Access Evaluation (CAE)
/// and Conditional Access authentication context must be prepared to handle claims challenges.
///
/// This class is only required if using a standalone AuthContext check
/// </summary>
public class CaeClaimsChallengeService
{
    private readonly IConfiguration _configuration;

    public CaeClaimsChallengeService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public string? CheckForRequiredAuthContextIdToken(string authContextId, HttpContext context)
    {
        if (!string.IsNullOrEmpty(authContextId))
        {
            string authenticationContextClassReferencesClaim = "acrs";

            if (context == null || context.User == null || context.User.Claims == null
                || !context.User.Claims.Any())
            {
                throw new ArgumentNullException(nameof(context),
                    "No Usercontext is available to pick claims from");
            }

            var acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim)
                .FirstOrDefault(x => x.Value == authContextId);

            if (acrsClaim?.Value != authContextId)
            {
                string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;

                var cae = "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\""
                    + authContextId + "\"}}}";

                return cae;
            }
        }

        return null;
    }
}

The AdminApiCallsController is used to provide data for the Blazor WASM UI. If the identity does not have the required authorization, an unauthorized response is returned to the UI with the claims challenge.

using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace BlazorBffAzureAD.Server.Controllers;

[ValidateAntiForgeryToken]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class AdminApiCallsController : ControllerBase
{
    private readonly CaeClaimsChallengeService _caeClaimsChallengeService;

    public AdminApiCallsController(CaeClaimsChallengeService caeClaimsChallengeService)
    {
        _caeClaimsChallengeService = caeClaimsChallengeService;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // if CAE claim missing in id token, the required claims challenge is returned
        var claimsChallenge = _caeClaimsChallengeService
            .CheckForRequiredAuthContextIdToken(AuthContextId.C1, HttpContext);

        if (claimsChallenge != null)
        {
            return Unauthorized(claimsChallenge);
        }

        return Ok(new List<string>() { "Admin data 1", "Admin data 2" });
    }
}

Handling the authentication challenge in the Blazor WASM client

The Blazor WASM client handles the unauthorized response by authenticating again using Azure AD. If the claims challenge is returned, a step up authentication is sent to Azure AD with the challenge. The CaeStepUp method is used to implement the UI part of this flow.

using System.Net;

namespace BlazorBffAzureAD.Client.Services;

// orig src https://github.com/berhir/BlazorWebAssemblyCookieAuth
public class AuthorizedHandler : DelegatingHandler
{
    private readonly HostAuthenticationStateProvider _authenticationStateProvider;

    public AuthorizedHandler(HostAuthenticationStateProvider authenticationStateProvider)
    {
        _authenticationStateProvider = authenticationStateProvider;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var authState = await _authenticationStateProvider.GetAuthenticationStateAsync();
        HttpResponseMessage responseMessage;

        if (authState.User.Identity != null && !authState.User.Identity.IsAuthenticated)
        {
            // if user is not authenticated, immediately set response status to 401 Unauthorized
            responseMessage = new HttpResponseMessage(HttpStatusCode.Unauthorized);
        }
        else
        {
            responseMessage = await base.SendAsync(request, cancellationToken);
        }

        if (responseMessage.StatusCode == HttpStatusCode.Unauthorized)
        {
            var content = await responseMessage.Content.ReadAsStringAsync();

            // if server returned 401 Unauthorized, redirect to login page
            if (content != null && content.Contains("acr")) // CAE
            {
                _authenticationStateProvider.CaeStepUp(content);
            }
            else // standard
            {
                _authenticationStateProvider.SignIn();
            }
        }

        return responseMessage;
    }
}

The CaeStepUp navigates to the authorization URL of the backend application with the claims challenge passed as a parameter.

public void CaeStepUp(string claimsChallenge, string? customReturnUrl = null)
{
    var returnUrl = customReturnUrl != null
        ? _navigation.ToAbsoluteUri(customReturnUrl).ToString()
        : null;

    var encodedReturnUrl = Uri.EscapeDataString(returnUrl ?? _navigation.Uri);

    var logInUrl = _navigation.ToAbsoluteUri(
        $"{LogInPath}?claimsChallenge={claimsChallenge}&returnUrl={encodedReturnUrl}");

    _navigation.NavigateTo(logInUrl.ToString(), true);
}

The Login method checks for a claims challenge and starts an authentication process using Azure AD and the Microsoft.Identity.Web client.

[Route("api/[controller]")]
public class AccountController : ControllerBase
{
    [HttpGet("Login")]
    public ActionResult Login(string? returnUrl, string? claimsChallenge)
    {
        // var claims = "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}";
        // var claims = "{\"id_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}";

        var redirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/";

        var properties = new AuthenticationProperties
        {
            RedirectUri = redirectUri
        };

        if (claimsChallenge != null)
        {
            string jsonString = claimsChallenge.Replace("\\", "")
                .Trim(new char[1] { '"' });
            properties.Items["claims"] = jsonString;
        }

        return Challenge(properties);
    }
}

Using CAE is a useful way to force authorization or policies in Azure applications. This can be implemented easily in an ASP.NET Core application.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345

Shared Signals and Events – A Secure Webhooks Framework

Sunday, 29. May 2022

Heres Tom with the Weather

Linking to OpenLibrary for read posts

My blog is now linking to openlibrary.org for read posts. If you have the book’s ISBN, then it is trivial to link to openlibrary’s page for your book. It would be cool if those pages accepted webmention so that you could see who is reading the book.

My blog is now linking to openlibrary.org for read posts. If you have the book’s ISBN, then it is trivial to link to openlibrary’s page for your book. It would be cool if those pages accepted webmention so that you could see who is reading the book.



Rebecca Rachmany

Tokenomics: Three Foundations for Creating a Token Economy

Requests for tokenomics consulting have been bombarding me lately. My theory is that it’s because almost everyone recognizes that nothing any tokenomics expert has told them makes any sense. Quite a few clients have reported that all the tokenomics experts tell them it’s about the marketing and the burn rate and they, as founders, can’t understand where the real value is. In other words, tok

Requests for tokenomics consulting have been bombarding me lately. My theory is that it’s because almost everyone recognizes that nothing any tokenomics expert has told them makes any sense. Quite a few clients have reported that all the tokenomics experts tell them it’s about the marketing and the burn rate and they, as founders, can’t understand where the real value is.

In other words, tokenomics seems to be complete baloney to many founders. And they’re not wrong. In this post I’m going to go into three considerations in constructing your tokenomics model and how each one should affect the model. I won’t go into any fancy mathematical formulae, just the basic logic.

The three considerations, in order of importance are:

1. The founders’ goals and desires.
2. What “the market/ investors” will invest in.
3. Sustainable and logical tokenomics that move the project forward.

Just separating these three out is a revelation for many founders. It may be completely possible to raise a lot of funds on a model that is completely at odds with the sustainability of the project. It’s equally easy to raise a lot of money and have a successful company and not accomplish any of your personal goals. Many entrepreneurs go into business thinking that once they succeed, they’ll have more money and time for the things they really want to do. How’s that working out?

What do you REALLY want?

Issuing a token seems to be a fast way to big money, and there’s also some stuff about freedom and democracy, so blockchain naturally attracts a huge crowd. Let’s assume that you do raise the money you want for your project.

What do you really want as a founder?

To create a successful long-term business that contributes value to the world?
To expand or get a better valuation for an existing company?
To build a better form of democracy?
To build cool tech stuff?
To rescue the rain forests?
To prove yourself in the blockchain industry so you’ll have a future career?
To have enough money to buy an island and retire?
To provide a way for poor people to make a living in crypto?
To get rich and show everyone they were wrong about how crypto is a bubble?
To get out of the existing rat race before the economy completely collapses?
To save others from the collapse by getting everyone a bitcoin wallet and a few satoshis?

Usually you and the other founders will have a combination of personal goals, commitments to your family, values that you want to promote, and excitement about a particular project.

The tokenomics should align with your goals. Generally speaking:

There are serious legal implications and potential repercussions to raising money through a token launch. If you have an existing, profitable business, you do have something to lose by getting it wrapped up in crypto.
Projects do need money and pretending you have a good tokenomics model can get you there.
If you have an idea for a blockchain project, chances are 98% that someone else has already done something similar. Ask yourself honestly why you aren’t just joining them. If you think you can do it better, ask yourself why you don’t just help them be better. Do your research to understand the challenges they are facing, because you are about to face them.
If your main inspiration is building a great business or getting career experience, joining a project that already raised money might get you there faster.
If you are doing a “social good” project, monetary incentives will corrupt the project.
If you love DeFi, yield farming, and all that stuff, and just want to make money, you probably will do better working hard and investing in the right projects rather than taking the legal and personal risks involved in your own token.

Personally, my core reason for writing whitepapers and consulting in Web3 is because I love helping people accomplish their goals. I’ve been working with entrepreneurs for 30 years, and nothing beats the satisfaction of watching people accomplish their dreams.

Investors, what’s an investor?

The second consideration is what the “investors” will perceive as a good tokenomics model. If you’ve gotten this far, you’ve already decided to raise money through a token sale. The only way to do that is to create tokenomics that investors will love.

It does not matter if the tokenomics model makes sense. It does not matter if the tokenomics model works in practice. It does not matter if the tokenomics model works in theory.

All of those models of burn rate and stuff do not matter for the purposes of selling your token EXCEPT that they need to align with what the investor-de-jour thinks the calculation-du-jour should be. All of those charts and models are pure poppycock. With rare exception, none of the people who are modelling are central bankers, monetary theory experts, mathematicians or data scientists. If they were, they would either tell you it doesn’t work or that it’s speculative and unproven, or they would create something that would never pass your legal team.

The good news is that you don’t have to understand tokenomics or make something sensible to create a useful tokenomics model. You just have to copy the thing-du-jour and have good marketing.

After all, these people aren’t really investors, are they? They are going to dump your coin as soon as they can. They aren’t going to use the token for the “ecosystem”. They aren’t going to advise or critique you on anything beyond the token price. They are in for the quick profits. Your job is to figure out how to pump the coin for as long as possible and do whatever it was you planned in step one (what you want) with the money you raised. Nobody is being fooled around here. Let’s not pretend that there was actually some project you wanted to do with the money. If there was, also fine, and you just keep doing that with the funds raised, and if it succeeds, that’s a bonus, but it doesn’t matter to these “investors”. They weren’t hodling for the long term.

If you have real investors who are in it for the long term, BTW, you might be coming to me to write a whitepaper for you, but you wouldn’t be reading my advice on tokenomics. You’ve probably got those people on board already.

To summarize, in today’s market, what you are trying to create is a speculative deflationary model that you can market for as long as possible. This is not sustainable for the actual product, as I’ll cover in the next section.

What would actually work?

As far as I can tell based on my experience working with more than 300 projects, there is no empirical evidence that any of the tokenomics models work, other than Security Tokens where you really give investors equity in the project.

Token models are not designed to give the token holders profit. So far, all of the cryptocurrency market is based on speculation. You can potentially argue that Bitcoin and a few other coins are really useful as a store and exchange of value, but it is too late to invent Bitcoin again.

Let me clarify that, because it’s not customary for a tokenomics expert to say “none of the token models work.” Let me discuss Ethereum as one of the best possible outcomes, but one which also triggers a failure mode.

Ethereum is one of the very few cryptocurrencies that works as advertised in the whitepaper: it is used as the utility token on the Ethereum network. It’s also a great investment, because it has generally gone up in value. So it “works” as advertised, but the failure mode is that it’s gotten too damn expensive. Yes, it rose in value, which is great for investors. But using the Ethereum network is prohibitively expensive. The best thing to do with ETH is hodl, not utilize. For your project, think of the following models:

You create the BeanCoin which allows the holder to get a bushel of beans for 1 coin. You are successful and the BeanCoin is now worth $500, but nobody wants to spend $500 for a bushel of beans. Investors are happy but the coin is useless.
You create the PollCoin, a governance token that allows the holder to vote for community proposals and elections. You are successful and the PollCoin is worth $500 and now it costs a minimum of $500 to become a citizen of the community and $2500 to submit a proposal to the community. The best companies/people to do the work don’t want to submit a proposal because the risk is too high of losing that money. Anyone who bought in early to PollCoin sells because they would rather have money than a vote in a community of elitist rich people with poor execution because nobody wants to submit proposals to do work.

In other words, when you create a deflationary coin or token, by default, the success of the token is also its failure.

But what about Pay-to-Earn models? Haven’t they been a success? How about DAOs where the community does the work?

First of all, any project younger than 3 years old can’t be considered as a model for long-term tokenomics success. The best we can say is “good so far”. Secondly, nobody has ever been able to explain to me how “giving away money” is a potential long-term business model.

A token for everyone

A surprising number of people who contact me have not thought deeply about what they want on a six-month, two-year, or ten-year scale when they launch these projects. Many people think a token is an easy way to raise money, which it is, relative to many other ways of raising money. But keep in mind that every step you take in your entrepreneurial journey is just a step closer to the next, usually bigger, problem. As you launch your token, make sure to check in with yourself and your other founders that you’re ready for the next challenge down the pike.

Monday, 23. May 2022

Damien Bod

Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API

This article shows how to implement Azure AD Continuous Access (CA) in a Blazor application which uses a Web API. The API requires an Azure AD conditional access authentication context. In the example code, MFA is required to use the external API. If a user requests data from the API using the required access token […]

This article shows how to implement Azure AD Continuous Access (CA) in a Blazor application which uses a Web API. The API requires an Azure AD conditional access authentication context. In the example code, MFA is required to use the external API. If a user requests data from the API using the required access token without the required acr claim, an unauthorized response is returned with the missing claims. The Blazor application returns the claims challenge to the WASM application and the application authenticates again with the step up claims challenge. If the user has authenticated using MFA, the authentication is successful and the data from the API can be retrieved.

Code: https://github.com/damienbod/AspNetCoreAzureADCAE

Blogs in this series

Implement Azure AD Continuous Access in an ASP.NET Core Razor Page app using a Web API
Implement Azure AD Continuous Access (CA) step up with ASP.NET Core Blazor using a Web API
Implement Azure AD Continuous Access (CA) standalone with Blazor ASP.NET Core
Force MFA in Blazor using Azure AD and Continuous Access

Steps to implement

Create an authentication context in Azure for the tenant (using Microsoft Graph).
Add a CA policy which uses the authentication context.
Implement the CA Azure AD authentication context authorization in the API.
Implement the Blazor backend to handle the CA unauthorized responses correctly.
Implement an authentication challenge using the claims challenge in the Blazor WASM.

Setup overview

Creating a Conditional access Authentication Context

A continuous access (CA) authentication context was created using Microsoft Graph and a policy was created to use this. See the previous blog for details on setting this up.

External API setup

The external API is set up to validate Azure AD JWT Bearer access tokens and to validate that the required continuous access evaluation (CAE) policy is fulfilled. The CAE policy uses the authentication context required for this API. If the user is authorized, the correct claims need to be presented in the access token.

[Authorize(Policy = "ValidateAccessTokenPolicy",
    AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
[ApiController]
[Route("[controller]")]
public class ApiForUserDataController : ControllerBase
{
    private readonly CAEClaimsChallengeService _caeClaimsChallengeService;

    public ApiForUserDataController(
        CAEClaimsChallengeService caeClaimsChallengeService)
    {
        _caeClaimsChallengeService = caeClaimsChallengeService;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        // returns unauthorized exception with WWW-Authenticate header
        // if CAE claim missing in access token
        // handled in the caller client exception with challenge returned
        // if not ok
        _caeClaimsChallengeService
            .CheckForRequiredAuthContext(AuthContextId.C1, HttpContext);

        return new List<string>
        {
            "admin API CAE protected data 1",
            "admin API CAE protected data 2"
        };
    }
}

The CaeClaimsChallengeService class implements the CAE requirement to use the API. If the user access token has insufficient claims, an unauthorized response is returned to the application requesting data from the API. The WWW-Authenticate header is set with the correct data as defined in the OpenID Connect signals and events specification.

using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;
using System;
using System.Globalization;
using System.Linq;
using System.Net;
using System.Text;

namespace AdminCaeMfaRequiredApi;

/// <summary>
/// Claims challenges, claims requests, and client capabilities
///
/// https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge
///
/// Applications that use enhanced security features like Continuous Access Evaluation (CAE)
/// and Conditional Access authentication context must be prepared to handle claims challenges.
/// </summary>
public class CaeClaimsChallengeService
{
    private readonly IConfiguration _configuration;

    public CaeClaimsChallengeService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    /// <summary>
    /// Retrieves the acrsValue from database for the request method.
    /// Checks if the access token has acrs claim with acrsValue.
    /// If does not exists then adds WWW-Authenticate and throws UnauthorizedAccessException exception.
    /// </summary>
    public void CheckForRequiredAuthContext(string authContextId, HttpContext context)
    {
        if (!string.IsNullOrEmpty(authContextId))
        {
            string authenticationContextClassReferencesClaim = "acrs";

            if (context == null || context.User == null || context.User.Claims == null
                || !context.User.Claims.Any())
            {
                throw new ArgumentNullException(nameof(context),
                    "No Usercontext is available to pick claims from");
            }

            var acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim)
                .FirstOrDefault(x => x.Value == authContextId);

            if (acrsClaim?.Value != authContextId)
            {
                if (IsClientCapableofClaimsChallenge(context))
                {
                    string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;

                    var base64str = Convert.ToBase64String(Encoding.UTF8.GetBytes(
                        "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"" + authContextId + "\"}}}"));

                    context.Response.Headers.Append("WWW-Authenticate",
                        $"Bearer realm=\"\", authorization_uri=\"https://login.microsoftonline.com/common/oauth2/authorize\", client_id=\"" + clientId + "\", error=\"insufficient_claims\", claims=\"" + base64str + "\", cc_type=\"authcontext\"");

                    context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;

                    string message = string.Format(CultureInfo.InvariantCulture,
                        "The presented access tokens had insufficient claims. Please request for claims requested in the WWW-Authentication header and try again.");

                    context.Response.WriteAsync(message);
                    context.Response.CompleteAsync();

                    throw new UnauthorizedAccessException(message);
                }
                else
                {
                    throw new UnauthorizedAccessException("The caller does not meet the authentication bar to carry our this operation. The service cannot allow this operation");
                }
            }
        }
    }

    /// <summary>
    /// Evaluates for the presence of the client capabilities claim (xms_cc) and accordingly returns a response if present.
    /// </summary>
    public bool IsClientCapableofClaimsChallenge(HttpContext context)
    {
        string clientCapabilitiesClaim = "xms_cc";

        if (context == null || context.User == null || context.User.Claims == null
            || !context.User.Claims.Any())
        {
            throw new ArgumentNullException(nameof(context),
                "No Usercontext is available to pick claims from");
        }

        var ccClaim = context.User.FindAll(clientCapabilitiesClaim)
            .FirstOrDefault(x => x.Type == "xms_cc");

        if (ccClaim != null && ccClaim.Value == "cp1")
        {
            return true;
        }

        return false;
    }
}

The API can be used by any application and user which presents the correct access token including the claims required by the CAE.

Using Continuous Access Evaluation (CAE) in an ASP.NET Core hosted Blazor application.

The Blazor application is authenticated using MSAL and the backend for frontend (BFF) architecture. The Blazor application administrator page uses data from the CAE protected API. The Blazor ASP.NET Core hosted WASM application is protected using a MSAL confidential client.

services.AddMicrosoftIdentityWebAppAuthentication(
        Configuration, "AzureAd",
        subscribeToOpenIdConnectMiddlewareDiagnosticsEvents: true)
    .EnableTokenAcquisitionToCallDownstreamApi(
        new[] { "api://7c839e15-096b-4abb-a869-df9e6b34027c/access_as_user" })
    .AddMicrosoftGraph(Configuration.GetSection("GraphBeta"))
    .AddDistributedTokenCaches();

The AdminApiClientService class is used to request data from the external API. The http client uses an Azure AD user delegated access token. If the API returns an unauthorized response, a WebApiMsalUiRequiredException is created with the WWW-Authenticate header payload.

public class AdminApiClientService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;

    public AdminApiClientService(
        ITokenAcquisition tokenAcquisition,
        IHttpClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
    }

    public async Task<IEnumerable<string>?> GetApiDataAsync()
    {
        var client = _clientFactory.CreateClient();

        var scopes = new List<string> { "api://7c839e15-096b-4abb-a869-df9e6b34027c/access_as_user" };
        var accessToken = await _tokenAcquisition
            .GetAccessTokenForUserAsync(scopes);

        client.BaseAddress = new Uri("https://localhost:44395");
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept
            .Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetAsync("ApiForUserData");
        if (response.IsSuccessStatusCode)
        {
            var stream = await response.Content.ReadAsStreamAsync();
            var payload = await JsonSerializer.DeserializeAsync<List<string>>(stream);
            return payload;
        }

        throw new WebApiMsalUiRequiredException(
            $"Unexpected status code in the HttpResponseMessage: {response.StatusCode}.",
            response);
    }
}

The AdminApiCallsController implements the API used by the Blazor WASM client. This is protected using cookies. The controller would return an unauthorized response with the claims challenge, if the WebApiMsalUiRequiredException is thrown.

public class AdminApiCallsController : ControllerBase
{
    private readonly AdminApiClientService _userApiClientService;

    public AdminApiCallsController(
        AdminApiClientService userApiClientService)
    {
        _userApiClientService = userApiClientService;
    }

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        try
        {
            return Ok(await _userApiClientService.GetApiDataAsync());
        }
        catch (WebApiMsalUiRequiredException hex)
        {
            var claimChallenge = WwwAuthenticateParameters
                .GetClaimChallengeFromResponseHeaders(hex.Headers);
            return Unauthorized(claimChallenge);
        }
    }
}

In the Blazor WASM client, an AuthorizedHandler is implemented to handle the unauthorized response from the API. If the “acr” claim is returned, the CAE step method is called.

public class AuthorizedHandler : DelegatingHandler
{
    private readonly HostAuthenticationStateProvider _authenticationStateProvider;

    public AuthorizedHandler(
        HostAuthenticationStateProvider authenticationStateProvider)
    {
        _authenticationStateProvider = authenticationStateProvider;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var authState = await _authenticationStateProvider
            .GetAuthenticationStateAsync();
        HttpResponseMessage responseMessage;

        if (authState.User.Identity != null && !authState.User.Identity.IsAuthenticated)
        {
            // if user is not authenticated,
            // immediately set response status to 401 Unauthorized
            responseMessage = new HttpResponseMessage(
                HttpStatusCode.Unauthorized);
        }
        else
        {
            responseMessage = await base.SendAsync(
                request, cancellationToken);
        }

        if (responseMessage.StatusCode == HttpStatusCode.Unauthorized)
        {
            var content = await responseMessage.Content.ReadAsStringAsync();

            // if server returned 401 Unauthorized, redirect to login page
            if (content != null && content.Contains("acr")) // CAE
            {
                _authenticationStateProvider.CaeStepUp(content);
            }
            else // standard
            {
                _authenticationStateProvider.SignIn();
            }
        }

        return responseMessage;
    }
}

The CaeStepUp method is implemented in the Blazor WASM client and creates a claims challenge with the defined claims challenge and the URL of the WASM client page for the redirect.

public void CaeStepUp(string claimsChallenge, string? customReturnUrl = null)
{
    var returnUrl = customReturnUrl != null
        ? _navigation.ToAbsoluteUri(customReturnUrl).ToString()
        : null;

    var encodedReturnUrl = Uri.EscapeDataString(returnUrl ?? _navigation.Uri);

    var logInUrl = _navigation.ToAbsoluteUri(
        $"{LogInPath}?claimsChallenge={claimsChallenge}&returnUrl={encodedReturnUrl}");

    _navigation.NavigateTo(logInUrl.ToString(), true);
}

The account login sends a challenge to Azure AD to request the claims for the CAE.

[HttpGet("Login")]
public ActionResult Login(string? returnUrl, string? claimsChallenge)
{
    //var claims = "{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"c1\"}}}";

    var redirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/";

    var properties = new AuthenticationProperties
    {
        RedirectUri = redirectUri
    };

    if (claimsChallenge != null)
    {
        string jsonString = claimsChallenge.Replace("\\", "")
            .Trim(new char[1] { '"' });
        properties.Items["claims"] = jsonString;
    }

    return Challenge(properties);
}

The Microsoft.Identity.Web client package requires the cp1 ClientCapabilities.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]", "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Azure portal. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]", "ClientId": "[Enter the Client Id (Application ID obtained from the Azure portal), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]", "ClientSecret": "[Copy the client secret added to the app from the Azure portal]", "ClientCertificates": [ ], // the following is required to handle Continuous Access Evaluation challenges "ClientCapabilities": [ "cp1" ], "CallbackPath": "/signin-oidc" },

To test, both the Blazor server application and the API can be started; the CAE claims are then required to use the API.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345

Shared Signals and Events – A Secure Webhooks Framework

Sunday, 22. May 2022

Just a Theory

Feynman’s Genius

A while back I reviewed James Gleick's "Genius" on Goodreads. It died along with my Goodreads account. Now it's back!

Yours truly, in a 2018 review of Genius, by James Gleick:

Because our ways of understanding the universe are not the universe itself. They’re explanatory tools we develop, use, and sometimes discard in favor of newer, more effective tools. They’re imperfect, products of their times and cultures. But sometimes, in the face of an intractable problem, a maverick mind, cognizant of this reality, will take the radical step of discarding some part of the prevailing doctrine in an attempt to simplify the problem, or just to see what might happen. Feynman was such a mind, as Gleick shows again and again.

In case you’re wondering why I’m linking to my own blog, while this piece dates from 2018, I posted it only a few weeks ago. Originally I posted it on Goodreads, but when Goodreads unceremoniously deleted my account I thought it was gone for good. But two months later, Goodreads sent me my content. I was back in business! With my data recovered and added to my StoryGraph profile, I also took the opportunity to post the one review I had put some effort into on my own site. So here we are.

In other words, I’m more likely to post book reviews on Just a Theory from here on, but meanwhile, I’d be happy to be your friend on StoryGraph.

More about… Books James Gleick Richard Feynman Genius

Friday, 20. May 2022

MyDigitalFootprint

Great leadership is about knowing what to optimise for & when.

I participated in a fantastic talk in May 2022 on “Ideological Polarization and Extremism in the 21st Century” led by Jonathan Leader Maynard who is a Lecturer in International Politics at King's College London.  The purpose here focuses on a thought I took from Jonathan's talk and his new book, “Ideology and Mass Killing: The Radicalized Security Politics of Genocides and Deadly Atrocities

I participated in a fantastic talk in May 2022 on “Ideological Polarization and Extremism in the 21st Century” led by Jonathan Leader Maynard, who is a Lecturer in International Politics at King's College London.  The purpose here focuses on a thought I took from Jonathan's talk and his new book, “Ideology and Mass Killing: The Radicalized Security Politics of Genocides and Deadly Atrocities,” published by Oxford University Press.

When I started thinking about writing about Peak Paradox, it was driven by a desire to answer a core question I asked myself, individuals, boards and teams: “what are we/you optimising for?”  It has become my go-to question when I want to explore the complexity of decision making and team dynamics, as the timeframe (tactical vs strategic) is determined by the person answering the question. Ultimately individuals in a team, which give the team its capabilities, are driven by different purposes and ideals, which means incentives work differently as each person optimises for different things in different ways in different time frames.

Nathalie Oestmann put up this post with the diagram below, talking about communication and everyone having the same message.  My comment was that if you want stability, this is good thinking; if you need change, it will be less so, as it will build resistance.   If you want everyone to have the same message, then again this is helpful thinking; but if you need innovation, alignment is less useful.  When we all optimise for one thing and do the same thing, what do we become?  A simple view is that 91 lines in the final idea become one, as we will perpetuate the same, building higher walls with confirmatory incentives, feedback loops and echo chambers to ensure that the same is defensible.

What we become when we optimise for one thing was also set out by Jonathan in his talk. He effectively said (to me) that if you optimise for one thing, you are an extremist.  You have decided that this one thing is (or very few things are) more important than anything else.   We might *not* like to think of ourselves as extremists, but it is in fact what we are when we optimise for a single goal.  Nathalie’s post confirms that if you have enough people optimising for one thing, you have a tribe, movement, power, and voice.  The very action of limiting optimisation from many things to a single thing creates bias and framing.

Extremism can be seen as a single optimisation when using Peak Paradox 

Bill George wrote in support of the 24-hour rule in this INC article: essentially, whatever happens, good or bad, you have 24 hours to celebrate or stew. Tomorrow, it’s a new day. It’s a good way to stay focused on the present.  The problem is that this optimisation appears good at one level, but for improving leadership decision making and judgement skills, moving on without much stewing or celebrating removes critical learning. Knowing when to use the 24-hour rule is perfect; applying it as a blanket rule is probably less useful. Leadership needs to focus on tomorrow's issues based on yesterday's learning whilst ensuring it survives today, to get to tomorrow.

So much of what we love (embrace/take on/follow) boils ideas down to a single optimisation: diets, games, books, movies, etc.  Is it that we find living with complexity and optimising for more than one thing difficult or exhausting, or that one thing is so easy that, given our preference for conserving energy, there is a natural bias to simplification? Religion and faith, politics, science, maths, friends, family and life in general, however, require us to optimise for opposing ideas at the same time, creating tensions and compromises.

Implication to boards, leadership and management

For way too long, the mantra for the purpose of a business was to “maximise the wealth for the shareholders.”  This was the singular objective and optimisation of the board, and the instruction to management was to “maximise profits.”   We have come a little way since then but, as I have written before, the single purity of Jensen and Meckling's work in 1976 left a legacy of incentives and education to optimise for one thing, ignoring other thinking such as Peter Drucker’s 1973 insight that the only valid purpose of a firm is to create a customer, which itself has problems.

We have shifted to “optimising for stakeholders”, but is that really a shift on the peak paradox framework, or a dumbing down of one purpose, one idea, one vision?  A movement from the simple and singular to a nuanced paradoxical mess?  Yes, it is a move from the purity of “shareholder primacy”, but does it really engage in optimising for the other peaks on the map?  What does become evident is that when we move from the purity of maximising shareholder value, decision making becomes more complex.  I am not convinced that optimising for all stakeholders really embraces the requirement of optimising for sustainability; it is a washed-out north star where we are likely to deliver neither.

Here is the issue for the leadership.  

Optimising for one thing is about being extreme.  When we realise we are being extreme it is not a comfortable place to be and it naturally drives out creativity, innovation and change.  

The pandemic made us focus on one issue, and it showed us that when we, irrespective of industry, geography and resources, had to focus on just one thing, we could make the amazing happen.  Often cited is the 10 years of progress in 10 months, especially in digital and changing work patterns.  However, we did not consider the tensions, compromises or unintended consequences; we just acted.  Doing one thing, and being able to do just one thing, is extreme.

Extreme is at the edges of the peak paradox model.  When we move from optimising for one thing to a few things, we struggle to determine which is the priority. This is the journey from the edges to the centre of the peak paradox model.  When we have to optimise for many things, we are at Peak Paradox. We know that optimising for many things is tiring, and the volume of data/information we need for decision making increases by the square of the number of factors you want to optimise for.   It is here that we find complexity but also realise that we cannot optimise or drive for anything.  Whilst living with complexity is a critical skill for senior teams, it is here that we can appear to be a ship adrift in a storm, being pulled everywhere, with no direction or clarity of purpose.  A true skill of great leadership is knowing what to optimise for & when. Given the ebbs and flows of a market, there is a time to dwell and live with complexity, optimising for many things, but knowing when to draw out, provide clarity, and optimise for one thing is critical.

The questions we are left to reflect on are:

how far from a single optimisation do your skills enable you to move?

how far from a single optimisation do your team's collective skills enable you to collectively move?

when optimising for conflicting purposes, how do you make decisions?

when optimising for conflicting purposes, how does the team make collective decisions?

When we finally master finding clarity to optimise for one thing, and equally living with the tensions, conflicts and compromises of optimising for many things, we move from average to outperforming in terms of delivery, and from decision making to judgment.

Great leaders and teams appear to be able to exist equally in the optimisation for both singular and complex purposes at the same time.

This viewpoint suggests that optimising for a singular focus, such as a three-word mission and purpose statement that provides perfect clarity of purpose, is actually only half the capability that modern leadership needs to demonstrate.


Tuesday, 17. May 2022

Doc Searls Weblog

A thermal theory of basketball

Chemistry is a good metaphor for how teams work—especially when times get tough, such as in the playoffs happening in the NBA right now. Think about it. Every element has a melting point: a temperature above which solid turns liquid. Basketball teams do too, only that temperature changes from game to game, opponent to opponent, and […]

Chemistry is a good metaphor for how teams work—especially when times get tough, such as in the playoffs happening in the NBA right now.

Think about it. Every element has a melting point: a temperature above which solid turns liquid. Basketball teams do too, only that temperature changes from game to game, opponent to opponent, and situation to situation. Every team is a collection of its own human compounds of many elements: physical skills and talents, conditioning, experience, communication skills, emotional and mental states, beliefs, and much else.

Sometimes one team comes in pre-melted, with no chance of winning. Bad teams start with a low melting point, arriving in liquid form and spilling all over the floor under heat and pressure from better teams.

Sometimes both teams might as well be throwing water balloons at the hoop.

Sometimes both teams are great, neither melts, and you get an overtime outcome that’s whatever the score said when the time finally ran out. Still, one loser and one winner. After all, every game has a loser, and half the league loses every round. Whole conferences and leagues average .500. That’s their melting point: half solid, half liquid.

Yesterday we saw two meltdowns, neither of which was expected and one of which was a complete surprise.

First, the Milwaukee Bucks melted under the defensive and scoring pressures of the Boston Celtics. There was nothing shameful about it, though. The Celtics just ran away with the game. It happens. Still, you could see the moment the melting started. It was near the end of the first half. The Celtics’ offense sucked, yet they were still close. Then they made a drive to lead going into halftime. After that, it became increasingly and obviously futile to expect the Bucks to rally, especially when it was clear that Giannis Antetokounmpo, the best player in the world, was clearly less solid than usual. The team melted around him while the Celtics rained down threes.

To be fair, the Celtics also melted three times in the series, most dramatically at the end of game five, on their home floor. But Marcus Smart, who was humiliated by a block and a steal in the closing seconds of a game the Celtics had led almost all the way, didn’t melt. In the next two games, he was more solid than ever. So was the team. And they won—this round, at least. Against the Miami Heat? We’ll see.

Right after that game, the Phoenix Suns, by far the best team in the league through the regular season, didn’t so much play the Dallas Mavericks as submit to them. Utterly.

In chemical terms, the Suns showed up in liquid form and turned straight into gas. As Arizona Sports put it, “We just witnessed one of the greatest collapses in the history of the NBA.” No shit. Epic. Nobody on the team will ever live this one down. It’s on their permanent record. Straight A’s through the season, then a big red F.

Talk about losses: a mountain of bets on the Suns also turned to vapor yesterday.

So, what happened? I say chemistry.

Maybe it was nothing more than Luka Dončić catching fire and vaporizing the whole Suns team. Whatever, it was awful to watch, especially for Suns fans. Hell, they melted too. Booing your team when it needs your support couldn’t have helped, understandable though it was.

Applying the basketball-as-chemistry theory, I expect the Celtics to go all the way. They may melt a bit in a game or few, but they’re more hardened than the Heat, which comes from having defeated two teams—the Atlanta Hawks and the Philadelphia 76ers—with relatively low melting points. And I think both the Mavs and the Warriors have lower melting points than either the Celtics or the Heat.

But we’ll see.

Through the final two rounds, look at each game as a chemistry experiment. See how well the theory works.

 

 

Monday, 16. May 2022

Mike Jones: self-issued

JWK Thumbprint URI Draft Addressing IETF Last Call Comments

Kristina Yasuda and I have published a new JWK Thumbprint URI draft that addresses the IETF Last Call comments received. Changes made were: Clarified the requirement to use registered hash algorithm identifiers. Acknowledged IETF Last Call reviewers. The specification is available at: https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-02.html

Kristina Yasuda and I have published a new JWK Thumbprint URI draft that addresses the IETF Last Call comments received. Changes made were:

Clarified the requirement to use registered hash algorithm identifiers. Acknowledged IETF Last Call reviewers.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-02.html
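
To make the mechanics concrete, here is a small Python sketch of what the specification describes: compute the RFC 7638 JWK thumbprint over the key's required members, base64url-encode the SHA-256 hash, and prefix it with the URN defined in the draft. The example key at the bottom uses placeholder coordinate values for illustration only:

import base64
import hashlib
import json

REQUIRED_MEMBERS = {
    "RSA": ("e", "kty", "n"),
    "EC": ("crv", "kty", "x", "y"),
    "OKP": ("crv", "kty", "x"),
    "oct": ("k", "kty"),
}

def jwk_thumbprint_uri(jwk: dict) -> str:
    # Canonical JSON: required members only, lexicographic order, no whitespace (RFC 7638).
    members = {name: jwk[name] for name in REQUIRED_MEMBERS[jwk["kty"]]}
    canonical = json.dumps(members, separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    thumbprint = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return "urn:ietf:params:oauth:jwk-thumbprint:sha-256:" + thumbprint

example_key = {  # hypothetical EC P-256 public key, placeholder coordinates
    "kty": "EC",
    "crv": "P-256",
    "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
    "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0",
}
print(jwk_thumbprint_uri(example_key))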

Phil Windleys Technometria

Decentralizing Agendas and Decisions

Summary: Allowing groups to self-organize, set their own agendas, and decide without central guidance or planning requires being vulnerable and trusting. But the results are worth the risk. Last month was the 34th Internet Identity Workshop (IIW). After doing the last four virtually, it was spectacular to be back together with everyone at the Computer History Museum. You could almo

Summary: Allowing groups to self-organize, set their own agendas, and decide without central guidance or planning requires being vulnerable and trusting. But the results are worth the risk.

Last month was the 34th Internet Identity Workshop (IIW). After doing the last four virtually, it was spectacular to be back together with everyone at the Computer History Museum. You could almost feel the excitement in the air as people met with old friends and made new ones. Rich the barista was back, along with Burrito Wednesday. I loved watching people in small groups having intense conversations over meals, drinks, and snacks.

Also back was IIW's trademark open space organization. Open space conferences are workshops that don't have pre-built agendas. Open space is like an unconference with a formal facilitator trained in using open space technology. IIW is self-organizing, with participants setting the agenda every morning before we start. IIW has used open space for part or all of the workshop since the second workshop in 2006. Early on, Kaliya Young, one of my co-founders (along with Doc Searls), convinced me to try open space as a way of letting participants shape the agenda and direction. For an event this large (300-400 participants), you need professional facilitation. Heidi Saul has been doing that for us for years. The results speak for themselves. IIW has nurtured many of the ideas, protocols, and trends that make up modern identity systems and thinking.

[Photos: Welcome to IIW 34; mDL Discussion at IIW 34; Agenda Wall at IIW 34 (Day 1)]

Last month was the first in-person CTO Breakfast since early 2020. CTO Breakfast is a monthly gathering of technologists in the Provo-Salt Lake City area that I've convened for almost 20 years. Like IIW, CTO Breakfast has no pre-planned agenda. The discussion is freewheeling and active. We have just two rules: (1) no politics and (2) one conversation at a time. Topics from the last meeting included LoRaWAN, Helium network, IoT, hiring entry-level software developers, Carrier-Grade NATs, and commercial real estate. The conversation goes where it goes, but is always interesting and worthwhile.

When we built the University API at BYU, we used decentralized decision making to make key architecture, governance, and implementation decisions. Rather than a few architects deciding everything, we had many meetings, with dozens of people in each, over the course of a year hammering out the design.

What all of these have in common is decentralized decision making by a group of people that results in learning, consensus, and, if all goes well, action. The conversation at IIW, CTO Breakfast, and BYU isn't the result of a few smart people deciding what the group needed to hear and then arranging meetings to push it at them. Instead, the group decides. Empowering the group to make decisions about the very nature and direction of the conversation requires trust, and trust always implies vulnerability. But I've become convinced that it's really the best way to achieve real consensus and make progress in heterogeneous groups. Thanks Kaliya!

Tags: decentralization iiw cto+breakfast byu university+api


Damien Bod

Using multiple Azure B2C user flows from ASP.NET Core

This article shows how to use multiple Azure B2C user flows from a single ASP.NET Core application. Microsoft.Identity.Web is used to implement the authentication in the client. This is not so easy to implement with multiple schemes as the user flow policy is used in most client URLs and the Microsoft.Identity.Web package overrides a lot […]

This article shows how to use multiple Azure B2C user flows from a single ASP.NET Core application. Microsoft.Identity.Web is used to implement the authentication in the client. This is not so easy to implement with multiple schemes because the user flow policy appears in most client URLs and the Microsoft.Identity.Web package overrides a lot of the default settings. I solved this by implementing an account controller that handles the initial request for the Azure B2C signup user flow and sets the Azure B2C policy. Splitting the user flows in the client application can be useful if the application's users need clearer guidance.

Code https://github.com/damienbod/azureb2c-fed-azuread

The Azure B2C user flows can be implemented as simple user flows. I used a signup flow and a signin, signup flow.

The AddMicrosoftIdentityWebAppAuthentication is used to implement a standard Azure B2C client. There is no need to implement a second scheme and override the default settings of the Microsoft.Identity.Web client because we use a controller to select the flow.

string[]? initialScopes = Configuration.GetValue<string>(
    "UserApiOne:ScopeForAccessToken")?.Split(' ');

services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C")
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddInMemoryTokenCaches();

services.Configure<MicrosoftIdentityOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Events.OnTokenValidated = async context =>
        {
            if (ApplicationServices != null && context.Principal != null)
            {
                using var scope = ApplicationServices.CreateScope();
                context.Principal = await scope.ServiceProvider
                    .GetRequiredService<MsGraphClaimsTransformation>()
                    .TransformAsync(context.Principal);
            }
        };
    });

The AzureAdB2C app settings configure the sign-in, sign-up flow. The SignUpSignInPolicyId setting is used to configure the default user flow policy.

"AzureAdB2C": { "Instance": "https://b2cdamienbod.b2clogin.com", "ClientId": "8cbb1bd3-c190-42d7-b44e-42b20499a8a1", "Domain": "b2cdamienbod.onmicrosoft.com", "SignUpSignInPolicyId": "B2C_1_signup_signin", "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath ": "/signout-callback-oidc", // "ClientSecret": "--use-secrets--" },

The AccountSignUpController is used to set the policy of the flow we would like to use. The SignUpPolicy method just challenges the Azure B2C OpenID Connect server with the correct policy.

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace Microsoft.Identity.Web.UI.Areas.MicrosoftIdentity.Controllers;

[AllowAnonymous]
[Route("MicrosoftIdentity/[controller]/[action]")]
public class AccountSignUpController : Controller
{
    [HttpGet("{scheme?}")]
    public IActionResult SignUpPolicy(
        [FromRoute] string scheme,
        [FromQuery] string redirectUri)
    {
        scheme ??= OpenIdConnectDefaults.AuthenticationScheme;

        string redirect;
        if (!string.IsNullOrEmpty(redirectUri) && Url.IsLocalUrl(redirectUri))
        {
            redirect = redirectUri;
        }
        else
        {
            redirect = Url.Content("~/")!;
        }

        var properties = new AuthenticationProperties { RedirectUri = redirect };
        properties.Items[Constants.Policy] = "B2C_1_signup";
        return Challenge(properties, scheme);
    }
}

The Razor page opens a link to the new controller and challenges the OIDC server with the correct policy.

<li class="nav-item"> <a class="nav-link text-dark" href="/MicrosoftIdentity/AccountSignUp/SignUpPolicy">Sign Up</a> </li>

With this approach, an ASP.NET Core application can be extended with multiple user flows and the UI can be improved for the end user as required.

Links

https://docs.microsoft.com/en-us/azure/active-directory-b2c/overview

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-local

https://docs.microsoft.com/en-us/azure/active-directory/

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/azure-ad-b2c

https://github.com/azure-ad-b2c/azureadb2ccommunity.io

https://github.com/azure-ad-b2c/samples

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api

https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS#client-credentials-provider

https://docs.microsoft.com/en-us/graph/api/user-post-users?view=graph-rest-1.0&tabs=csharp https://docs.microsoft.com/en-us/graph/api/invitation-post?view=graph-rest-1.0&tabs=csharp

Sunday, 15. May 2022

Werdmüller on Medium

A quiet morning in America

A disconnect that gets under your skin Continue reading on Human Parts »

A disconnect that gets under your skin

Continue reading on Human Parts »

Friday, 13. May 2022

Jon Udell

Appreciating “Just Have a Think”

Just Have a Think, a YouTube channel created by Dave Borlace, is one of my best sources for news about, and analysis of, the world energy transition. Here are some hopeful developments I’ve enjoyed learning about. Solar Wind and Wave. Can this ocean hybrid platform nail all three? New energy storage tech breathing life and … Continue reading Appreciating “Just Have a Think”

Just Have a Think, a YouTube channel created by Dave Borlace, is one of my best sources for news about, and analysis of, the world energy transition. Here are some hopeful developments I’ve enjoyed learning about.

Solar Wind and Wave. Can this ocean hybrid platform nail all three?

New energy storage tech breathing life and jobs back into disused coal power plants

Agrivoltaics. An economic lifeline for American farmers?

Solar PV film roll. Revolutionary new production technology

All of Dave’s presentations are carefully researched and presented. A detail that has long fascinated me: how the show displays source material. Dave often cites IPCC reports and other sources that are, in raw form, PDF files. He spices up these citations with some impressive animated renderings. Here’s one from the most recent episode.

The progressive rendering of the chart in this example is an even fancier effect than I’ve seen before, and it prompted me to track down the original source. In that clip Dave cites IRENA, the International Renewable Energy Agency, so I visited their site, looked for the cited report, and found it on page 8 of World Energy Transitions Outlook 2022. That link might or might not take you there directly, if not you can scroll to page 8 where you’ll find the chart that’s been animated in the video.

The graphical finesse of Just Have a Think is only icing on the cake. The show reports a constant stream of innovations that collectively give me hope we might accomplish the transition and avoid worst-case scenarios. But still, I wonder. That’s just a pie chart in a PDF file. How did it become the progressive rendering that appears in the video?

In any case, and much more importantly: Dave, thanks for the great work you’re doing!

Thursday, 12. May 2022

Doc Searls Weblog

Lens vs. Camera

I did a lot of shooting recently with a rented Sony FE 70-200mm F2.8 GM OSS II lens, mounted on my 2013-vintage Sony a7r camera. One result was the hummingbird above, which you’ll find among the collections here and here. Also, here’s a toddler… …and a grandma (right after she starred as the oldest alumnus at a […]

I did a lot of shooting recently with a rented Sony FE 70-200mm F2.8 GM OSS II lens, mounted on my 2013-vintage Sony a7r camera. One result was the hummingbird above, which you’ll find among the collections here and here. Also, here’s a toddler…

…and a grandma (right after she starred as the oldest alumnus at a high school reunion where I took hundreds of other shots):

This lens is new, sharp, versatile, earns good reviews (e.g. here) and is so loved already that it’s hard to get, despite the price: more than $3k after taxes. And, though it’s very compact and light (2.3 lbs) for what it is and does, the thing is big:

So I ordered one, which Amazon won’t charge me for before it ships, on May 23, for delivery on the 24th.

But I’m having second, third, and fourth thoughts, which I just decided to share here.

First, I’m not a fine art photographer. I’m an amateur who mostly shoots people and subjects that interest me, such as what I can see out airplane windows, or choose to document for my own odd purposes—such as archiving photos of broadcast towers and antennas, most of which will fall out of use over the next two decades, after being obsolesced by the Internet, wi-fi and 5G.

All the photos I publish are Creative Commons licensed to encourage use by others, which is why more than 1600 of them have found their way into Wikimedia Commons. Some multiple of those accompany entries in Wikipedia. This one, for example, is in 9 different Wikipedia entries in various languages:

Here is the original, shot with a tiny Canon pocket camera I pulled from the pocket of my ski jacket.

In other words, maybe I’ll be better off with a versatile all-in-one camera that will do much of what this giant zoom does, plus much more.

After much online research, I’ve kind of settled on considering the Sony Cyber-shot DSC-RX10 IV. It has a smaller sensor than I’d like, but it is exceptionally versatile and gets great reviews. While my Sony a7r with its outstanding 24-105mm f/4 FE G OSS lens is versatile as well, and light for a full-frame DSLR, I really need a long lens for a lot of the stuff I shoot. And I suspect this “bridge” camera will do the job.

So here is the choice:

1. Leave the order stand, and pay $3k for a fully fabulous 70-200 zoom that I'm sure to love but will be too big to haul around in many of the settings where I'll be shooting.
2. Cancel that order, and instead pay half that for the DSC-RX10 IV—and get it in time for my trip to Hawaii next week.

[Later…] I decided to let the order stand. Two reasons. First, I’ve shot a couple thousand photos so far with the 70-200 zoom, and find it a near-flawless instrument that I enjoy playing. One reason I do is that it’s as close to uncompromising as a lens can be—especially a zoom, which by design involves many compromises. Second, I’ve never played with the DSC-RX10 IV, and that’s kind of a prerequisite. I also know that one of its compromises I won’t be able to overcome is the size of its sensor. I know megapixels are a bit of a head trip, but they do matter, and 36.4 Mpx vs 20.1 “effective” Mpx is non-trivial.

Additionally, I may choose in the long run to also get an a7iv camera, so my two lenses will have two bodies. We’ll see.

 

 

Wednesday, 11. May 2022

Heather Vescent

Six insights about the Future of Biometrics

Photo by v2osk on Unsplash Biometrics are seen as a magic bullet to uniquely identify humans — but it is still new technology. Companies can experience growing pains and backlash due to incomplete thinking prior to implementation. Attackers do the hard work of finding every crack and vulnerability. Activists point out civil liberty and social biases. This shows how our current solutions are no
Photo by v2osk on Unsplash

Biometrics are seen as a magic bullet to uniquely identify humans — but it is still new technology. Companies can experience growing pains and backlash due to incomplete thinking prior to implementation. Attackers do the hard work of finding every crack and vulnerability. Activists point out civil liberty and social biases. This shows how our current solutions are not always secure or equitable. In the end, each criminal, activist, and product misstep inspires innovation and new solutions.

1. The benefit of biometrics is they are unique and can be trusted to be unique. It's not impossible, but it is very hard for someone to spoof a biometric. Using a biometric raises the bar a bit, and makes that system less attractive to target — up to a point.
2. Any data is only as secure as the system in which it is stored. Sometimes these systems can be easily penetrated due to poor identity and access management protocols. This has nothing to do with the security of biometrics — that has to do with the security of stored data.
3. Apple FaceID is unbelievably convenient! Once I set up FaceID to unlock my phone, I can configure it to unlock other apps — like banking apps. Rather than typing in or selecting my password from a password manager — I just look at my phone! This makes it easy for me to access my sensitive data. From a user experience perspective, this is wonderful, but I have to trust Apple's locked down tech stack.
4. The first versions of new technologies will still have issues. All new technology is antifragile, and thus will have more bugs. As the technology is used, the bugs are discovered (thanks hackers!) and fixed, and the system becomes more secure over time. Attackers will move on to more vulnerable targets.
5. Solve for every corner case and you'll have a rigid yet secure system that probably doesn't consider the human interface very well. Leave out a corner case and you might be leaving an open door for attack. Solving for the "right" situation is a balance. Which means, either extreme can be harmful to different audiences.
6. Learn from others, share and collaborate on what you have learned. Everyone has to work together to move the industry forward.

Curious to learn more insights about the Future of Digital Identity? I’ll be joining three speakers on the Future of Digital Identity Verification panel.

Tuesday, 10. May 2022

@_Nat Zone

Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If there is One) Web3 World

Keynote Panel at the Euro…

Keynote Panel at the European Identity and Cloud Conference, Friday, May 13, 2022 09:20—09:40 Location: BCC Berlin, C01

This announcement is a little late, but this Friday (5/13) I will appear on a keynote panel at the European Identity & Cloud Conference in Berlin. The English title is "Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If there is One) Web3 World".

The link to the details is https://www.kuppingercole.com/sessions/5092/1. The session description follows in translation.

The internet was built without an identity layer, leaving authentication, authorization, privacy, and access up to websites and applications. Usernames and passwords are still the dominant paradigm and, more importantly, users cannot control the information that identifies them personally. The risks of data misuse, hacking, and manipulation have become critical challenges, and in the era of web3 and its core function of transferring value, a new approach is needed. Will decentralized, DLT-based identity ultimately be the solution that enables DeFi, NFTs, and DAOs? Join this great keynote panel for a lively debate on the topic.

(Source) https://www.kuppingercole.com/sessions/5092/1

Panelists

André Durand, CEO, Ping Identity
Martin Kuppinger, CEO, KuppingerCole
Nat Sakimura, Chairman, OpenID Foundation
Drs. Jacoba C. Sieders, Advisory board member, EU SSIF-lab


Doc Searls Weblog

Laws of Identity

When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of […]

When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of Kim’s laws at KuppingerCole‘s EIC conference in Berlin. We’ll only have a few minutes each, however, so I’d like to visit the subject in a bit more depth here.

To understand why these laws are so important and effective, it will help to know where Kim was coming from in the first place. It wasn't just his work as the top architect for identity at Microsoft (where he arrived when his company was acquired). Specifically, Kim was coming from two places. One was the physical world where we live and breathe, and identity is inherently personal. The other was the digital world where what we call identity is how we are known to databases. Kim believed the former should guide the latter, and that nothing like that had happened yet, but that we could and should work for it.

Kim’s The Laws of Identity paper alone is close to seven thousand words, and his IdentityBlog adds many thousands more. But his laws by themselves are short and sweet. Here they are, with additional commentary by me, in italics.

1. User Control and Consent

Technical identity systems must only reveal information identifying a user with the user’s consent.

Note that consent goes in the opposite direction from all the consent “agreements” websites and services want us to click on. This matches the way identity works in the natural world, where each of us not only chooses how we wish to be known, but usually with an understanding about how that information might be used.

2. Minimum Disclosure for a Constrained Use

The solution which discloses the least amount of identifying information and best limits its use is the most stable long term solution.

There is a reason we don’t walk down the street wearing name badges: because the world doesn’t need to know any more about us than we wish to disclose. Even when we pay with a credit card, the other party really doesn’t need (or want) to know the name on the card. It’s just not something they need to know.

3. Justifiable Parties

Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.

If this law applied way back when Kim wrote it, we wouldn’t have the massive privacy losses that have become the norm, with unwanted tracking pretty much everywhere online—and increasingly offline as well. 

4. Directed Identity

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

All brands, meaning all names of public entities, are “omni-directional.” They are also what Kim calls “beacons” that have the opposite of something to hide about who they are. Individuals, however, are private first, and public only to the degrees they wish to be in different circumstances. Each of the first three laws is “unidirectional.”

5. Pluralism of Operators and Technologies

A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.

This law expresses learnings from Microsoft’s failed experiment with Passport and a project called “Hailstorm.” The idea with both was for Microsoft to become the primary or sole online identity provider for everyone. Kim’s work at Microsoft was all about making the company one among many working in the same broad industry.

6. Human Integration

The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.

As Kim put it in his 2019 (and final) talk at EIC, we need to turn the Web “right side up,” meaning putting the individual at the top rather than the bottom, with each of us in charge of our lives online, in distributed homes of our own. That’s what will integrate all the systems we deal with. (Joe Andrieu first explained this in 2007, here.)

7. Consistent Experience Across Contexts

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

So identity isn’t just about corporate systems getting along with each other. It’s about giving each of us scale across all the entities we deal with. Because it’s our experience that will make identity work right, finally, online. 

I expect to add more as the conference goes on; but I want to get this much out there to start with.

By the way, the photo above is from the first and only meeting of the Identity Gang, at Esther Dyson’s PC Forum in 2005. The next meeting of the Gang was the first Internet Identity Workshop, aka IIW, later that year. We’ve had 34 more since then, all with hundreds of participants, all with great influence on the development of code, standards, and businesses in digital identity and adjacent fields. And all guided by Kim’s Laws.

 

Thursday, 05. May 2022

Hans Zandbelt

A WebAuthn Apache module?

It is a question that people (users, customers) ask me from time to time: will you develop an Apache module that implements WebAuthn or FIDO2. Well, the answer is: “no”, and the rationale for that can be found below. At … Continue reading →

It is a question that people (users, customers) ask me from time to time: will you develop an Apache module that implements WebAuthn or FIDO2. Well, the answer is: “no”, and the rationale for that can be found below.

At first glance it seems very useful to have an Apache server that authenticates users using a state-of-the-art authentication protocol that is implemented in modern browsers and platforms. Even more so, that Apache server could function as a reverse proxy in front of any type of resources you want to protect. This will allow for those resources to be agnostic to the type of authentication and its implementation, a pattern that I’ve been promoting for the last decade or so.

But in reality the functionality that you are looking for already exists…

The point is that deploying WebAuthn means that you’ll not just be authenticating users, you’ll also have to take care of signing up new users and managing credentials for those users. To that end, you’ll need to facilitate an onboarding process and manage a user database. That type of functionality is best implemented in a server-type piece of software (let’s call it “WebAuthn Provider”) written in a high-level programming language, rather than embedding it in a C-based Apache module. So in reality it means that any sensible WebAuthn/FIDO2 Apache module would rely on an externally running “Provider” software component to offload the heavy-lifting of onboarding and managing users and credentials. Moreover, just imagine the security sensitivity of such a software component.

Well, all of the functionality described above is exactly something that your average existing Single Sign On Identity Provider software was designed to do from the very start! And even more so, those Identity Providers typically already support WebAuthn and FIDO2 for (“local”) user authentication and OpenID Connect for relaying the authentication information to (“external”) Relying Parties.

And yes, one of those Relying Parties could be mod_auth_openidc, the Apache module that enables users to authenticate to an Apache webserver using OpenID Connect.

So there you go: rather than implementing WebAuthn or FIDO2 (and user/credential management…) in a single Apache module, or writing a dedicated WebAuthn/FIDO2 Provider alongside of it and communicating with that using a proprietary protocol, the more sensible choice is to use the already existing OpenID Connect protocol. The Apache OpenID Connect module (mod_auth_openidc) will send users off to the OpenID Connect Provider for authentication. The Provider can use WebAuthn or FIDO2, as a single factor, or as a 2nd factor combined with traditional methods such as passwords or stronger methods such as PKI, to authenticate users and relay the information about the authenticated user back to the Apache server.

To summarise: using WebAuthn or FIDO2 to authenticate users to an Apache server/reverse-proxy is possible today by using mod_auth_openidc's OpenID Connect implementation. This module can send users off for authentication towards a WebAuthn/FIDO2 enabled Provider, such as Keycloak, Okta, Ping, ForgeRock etc. This setup allows for a very flexible approach that leverages existing standards and implementations to their maximum potential: OpenID Connect for (federated) Single Sign On, WebAuthn and FIDO2 for (centralized) user authentication.
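To make the recommended setup concrete, here is a rough sketch of the Apache side using mod_auth_openidc; every hostname, identifier, and path below is a placeholder, and your OpenID Provider's documentation governs the actual values.

# mod_auth_openidc acts as the OpenID Connect Relying Party; the Provider
# (Keycloak, Okta, Ping, ForgeRock, etc.) performs the WebAuthn/FIDO2 login.
OIDCProviderMetadataURL https://op.example.com/.well-known/openid-configuration
OIDCClientID apache-rp
OIDCClientSecret replace-with-a-real-secret
OIDCRedirectURI https://www.example.com/protected/redirect_uri
OIDCCryptoPassphrase replace-with-a-random-passphrase

<Location /protected/>
    AuthType openid-connect
    Require valid-user
</Location>

The important point is that the module only speaks OpenID Connect; whether the Provider used WebAuthn, FIDO2, or a password to authenticate the user is invisible to Apache.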

Wednesday, 04. May 2022

Mike Jones: self-issued

OAuth DPoP Specification Addressing WGLC Comments

Brian Campbell has published an updated OAuth DPoP draft addressing the Working Group Last Call (WGLC) comments received. All changes were editorial in nature. The most substantive change was further clarifying that either iat or nonce can be used alone in validating the timeliness of the proof, somewhat deemphasizing jti tracking. As Brian reminded us […]

Brian Campbell has published an updated OAuth DPoP draft addressing the Working Group Last Call (WGLC) comments received. All changes were editorial in nature. The most substantive change was further clarifying that either iat or nonce can be used alone in validating the timeliness of the proof, somewhat deemphasizing jti tracking.
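For context on those claims, here is a minimal Python sketch of constructing a DPoP proof (PyJWT and the cryptography package assumed); the claim and header names follow the draft, while the target URL and key handling are purely illustrative.

import base64, time, uuid
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import serialization

def b64url(n: int) -> str:
    return base64.urlsafe_b64encode(n.to_bytes(32, "big")).rstrip(b"=").decode()

# Ephemeral client key; its public part travels in the proof's "jwk" header.
key = ec.generate_private_key(ec.SECP256R1())
pub = key.public_key().public_numbers()
public_jwk = {"kty": "EC", "crv": "P-256", "x": b64url(pub.x), "y": b64url(pub.y)}

pem = key.private_bytes(serialization.Encoding.PEM,
                        serialization.PrivateFormat.PKCS8,
                        serialization.NoEncryption())

# The server checks timeliness via "iat", or via a server-provided "nonce" claim.
proof = jwt.encode(
    {"jti": str(uuid.uuid4()),                   # unique per proof
     "htm": "POST",                              # HTTP method being proven
     "htu": "https://server.example.com/token",  # illustrative target URI
     "iat": int(time.time())},
    pem, algorithm="ES256",
    headers={"typ": "dpop+jwt", "jwk": public_jwk})

The resulting JWT is sent in the DPoP request header alongside the request it covers.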

As Brian reminded us during the OAuth Security Workshop today, the name DPoP was inspired by a Deutsche POP poster he saw on the S-Bahn during the March 2019 OAuth Security Workshop in Stuttgart:

He considered it an auspicious sign seeing another Deutsche PoP sign in the Vienna U-Bahn during IETF 113 the same day WGLC was requested!

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html

Wednesday, 04. May 2022

Identity Woman

The Future of You Podcast with Tracey Follows

Kaliya Young on the Future of You Podcast with the host Tracey Follows and a fellow guest Lucy Yang, to dissect digital wallets, verifiable credentials, digital identity and self-sovereignty. The post The Future of You Podcast with Tracey Follows appeared first on Identity Woman.

Kaliya Young on the Future of You Podcast with the host Tracey Follows and a fellow guest Lucy Yang, to dissect digital wallets, verifiable credentials, digital identity and self-sovereignty.

The post The Future of You Podcast with Tracey Follows appeared first on Identity Woman.

Monday, 02. May 2022

Phil Windleys Technometria

Is an Apple Watch Enough?

Summary: If you're like me, your smartphone has worked its tentacles into dozens, even hundreds, of areas in your life. I conducted an experiment to see what worked and what didn't when I ditched the phone and used an Apple Watch as my primary device for two days. Last week, I conducted an experiment. My phone battery needed to be replaced and the Authorized Apple Service Center wa

Summary: If you're like me, your smartphone has worked its tentacles into dozens, even hundreds, of areas in your life. I conducted an experiment to see what worked and what didn't when I ditched the phone and used an Apple Watch as my primary device for two days.

Last week, I conducted an experiment. My phone battery needed to be replaced and the Authorized Apple Service Center was required to keep it while they ordered the new battery from Apple (yeah, I think that's a stupid policy too). I was without my phone for 2 days and decided it was an excellent time to see if I could get by using my Apple Watch as my primary device. Here's how it went.

First things first. For this to be any kind of success you need a cellular plan for your watch and a pair of Airpods or other bluetooth earbuds.

- The first thing I noticed is that the bathroom, standing in the checkout line, and other places are boring without the distraction of my phone to read news, play Wordle, or whatever.
- Siri is your friend. I used Siri a lot more than normal due to the small screen.
- I'd already set up Apple Pay and while I don't often use it from my watch under normal circumstances, it worked great here.
- Answering the phone means keeping your Airpods in or fumbling for them every time there's a call. I found I rejected a lot of calls to avoid the hassle. (But never yours, Lynne!) Still, I was able to take and make calls just fine without a phone.
- Voicemail access is a problem. You have to call the number and retrieve them just like it's 1990 or something. This messed with my usual strategy of not answering calls from numbers I don't recognize and letting them go to voicemail, then reading the transcript to see if I want to call them back.
- Normal texts don't work that I could tell, but Apple Messages do. I used voice transcription almost exclusively for sending messages, but read them on the watch.
- Most crypto wallets are unusable without the phone.
- For the most part, I just used the Web for banking as a substitute for mobile apps and that worked fine. The one exception was USAA. The problem with USAA was 2FA. Watch apps for 2FA are "companion apps" meaning they're worthless without the phone. For TOTP 2FA, I'd mirrored to my iPad, so that worked fine. I had to use the pre-set tokens for Duo that I'd gotten when I set it up. USAA uses Verisign's VIP. It can't be mirrored. What's more, USAA's recovery relies on SMS. I didn't have my phone, so that didn't work. I was on the phone with USAA for an hour trying to figure this out. Eventually USAA decided it was hopeless and told me to conduct banking by voice. Ugh.
- Listening to music on the watch worked fine.
- I read books on my Kindle, so that wasn't a problem.
- There are a number of things I fell back to my iPad for. I've already mentioned 2FA; another is maps. Maps don't work on the watch.
- I didn't realize how many pictures I take in a day, sometimes just for utility. I used the iPad when I had to.
- Almost none of my IoT services or devices did much with the watch beyond issuing a notification. None of the Apple HomeKit stuff worked that I could see. For example, I often use a HomeKit integration with my garage door opener. That no longer worked without a phone.
- Battery life on the watch is more than adequate in normal situations. But hour-long phone calls and listening to music challenge battery life when it's your primary device.
- I didn't realize how many things are tied just to my phone number.

Using just my Apple Watch with some help from my iPad was mostly doable, but there are still rough spots. The Watch is a capable tool for many tasks, but it's not complete. I can certainly see leaving my phone at home more often now since most things work great—especially when you know you can get back to your phone when you need to. Not having my phone with me feels less scary now.

Photo Credit: IPhone 13 Pro and Apple Watch from Simon Waldherr (CC BY-SA 4.0)

Tags: apple watch iphone

Wednesday, 27. April 2022

Mike Jones: self-issued

OpenID Presentations at April 2022 OpenID Workshop and IIW

I gave the following presentations at the Monday, April 25, 2022 OpenID Workshop at Google: OpenID Connect Working Group (PowerPoint) (PDF) OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF) I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, April 26, 2022: Introduction to OpenID Connect (PowerPoint) […]

I gave the following presentations at the Monday, April 25, 2022 OpenID Workshop at Google:

OpenID Connect Working Group (PowerPoint) (PDF)
OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, April 26, 2022:

Introduction to OpenID Connect (PowerPoint) (PDF)

Tuesday, 26. April 2022

Phil Windleys Technometria

We Need a Self-Sovereign Model for IoT

Summary: The Internet of Things is more like the CompuServe of Things. We need a new, self-sovereign model to protect us from proprietary solutions and unlock IoT's real potential. Last week Insteon, a large provider of smart home devices, abruptly closed its doors. While their web site is still up and advertises them as "the most reliable and simplest way to turn your home into a sm

Summary: The Internet of Things is more like the CompuServe of Things. We need a new, self-sovereign model to protect us from proprietary solutions and unlock IoT's real potential.

Last week Insteon, a large provider of smart home devices, abruptly closed its doors. While their web site is still up and advertises them as "the most reliable and simplest way to turn your home into a smart home," the company seems to have abruptly shut down their cloud service without warning or providing a way for customers to continue using their products, which depend on Insteon's private cloud. High-ranking Insteon execs even removed their affiliation with Insteon from their LinkedIn profiles. Eek!

Fortunately, someone reverse-engineered the Insteon protocol a while back and there are some open-source solutions for people who are able to run their own servers or know someone who can do it for them. Home Assistant is one. OpenHAB is another.

Insteon isn't alone. Apparently iHome terminated its service on April 2, 2022. Other smarthome companies or services who have gone out of business include Revolv, Insignia, Wink, and Staples Connect.

The problem with Insteon, and every other IoT and Smarthome company I'm aware of is that their model looks like this:

Private cloud IoT model; grey box represents domain of control

In this model, you:

1. Buy the device
2. Download their app
3. Create an account on the manufacturer's private cloud
4. Register your device
5. Control the device from the app

All the data and the device are inside the manufacturer's private cloud. They administer it all and control what you can do. Even though you paid for the device, you don't own it because it's worthless without the service the manufacturer provides. If they take your account away (or everyone's account, in the case of Insteon), you're out of luck. Want to use your motion detector to turn on the lights? Good luck unless they're from the same company [1]. I call this the CompuServe of Things.

The alternative is what I call the self-sovereign IoT (SSIoT) model:

Self-sovereign IoT model; grey box represents domain of control

Like the private-cloud model, in the SSIoT model, you also:

1. Buy the device
2. Download an app
3. Establish a relationship with a compatible service provider
4. Register the device
5. Control the device using the app

The fact that the flows for these two models are the same is a feature. The difference lies elsewhere: in SSIoT, your device, the data about you, and the service are all under your control. You might have a relationship with the device manufacturer, but you and your devices are not under their administrative control. This might feel unworkable, but I've proven it's not. Ten years ago we built a connected-car platform called Fuse that used the SSIoT model. All the data was under the control of the person or persons who owned the fleet and could be moved to an alternate platform without loss of data or function. People used the Fuse service that we provided, but they didn't have to. If Fuse had gotten popular, other service providers could have provided the same or similar service based on the open-model and Fuse owners would have had a choice of service providers. Substitutability is an indispensable property for the internet of things.

All companies die. Some last a long time, but even then they frequently kill off products. Having to buy all your gear from a single vendor and use their private cloud puts your IoT project at risk of being stranded, like Insteon customers have been. Hopefully, the open-source solutions will provide the basis for some relief to them. But the ultimate answer is interoperability and self-sovereignty as the default. That's the only way we ditch the CompuServe of Things for a real internet of things.

Notes

[1] Apple HomeKit and Google Home try to solve this problem, but you're still dependent on the manufacturer to provide the basic service. And making the administrative domain bigger is nice, but doesn't result in self-sovereignty.

Tags: picos iot interoperability cloud fuse ssi

Saturday, 16. April 2022

Jon Udell

Capture the rain

It’s raining again today, and we’re grateful. This will help put a damper on what was shaping up to be a terrifying early start of fire season. But the tiny amounts won’t make a dent in the drought. The recent showers bring us to 24 inches of rain for the season, about 2/3 of normal. … Continue reading Capture the rain

It’s raining again today, and we’re grateful. This will help put a damper on what was shaping up to be a terrifying early start of fire season. But the tiny amounts won’t make a dent in the drought. The recent showers bring us to 24 inches of rain for the season, about 2/3 of normal. But 10 of those 24 inches came in one big burst on Oct 24.

Here are a bunch of those raindrops sailing down the Santa Rosa creek to the mouth of the Russian River at Jenner.

With Sam Learner’s amazing River Runner we can follow a drop that fell in the Mayacamas range as it makes its way to the ocean.

Until 2014 I’d only ever lived east of the Mississippi River, in Pennsylvania, Michigan, Maryland, Massachusetts, and New Hampshire. During those decades there may never have been a month with zero precipitation.

I still haven’t adjusted to a region where it can be dry for many months. In 2017, the year of the devastating Tubbs Fire, there was no rain from April through October.

California relies heavily on the dwindling Sierra snowpack for storage and timed release of water. Clearly we need a complementary method of storage and release, and this passage in Kim Stanley Robinson’s Ministry for the Future imagines it beautifully.

Typically the Sierra snowpack held about fifteen million acre-feet of water every spring, releasing it to reservoirs in a slow melt through the long dry summers. The dammed reservoirs in the foothills could hold about forty million acre-feet when full. Then the groundwater basin underneath the central valley could hold around a thousand million acre-feet; and that immense capacity might prove their salvation. In droughts they could pump up groundwater and put it to use; then during flood years they needed to replenish that underground reservoir, by capturing water on the land and not allow it all to spew out the Golden Gate.

Now the necessity to replumb the great valley for recharge had forced them to return a hefty percentage of the land to the kind of place it had been before Europeans arrived. The industrial agriculture of yesteryear had turned the valley into a giant factory floor, bereft of anything but products grown for sale; unsustainable ugly, devastated, inhuman, and this in a place that had been called the “Serengeti of North America,” alive with millions of animals, including megafauna like tule elk and grizzly bear and mountain lion and wolves. All those animals had been exterminated along with their habitat, in the first settlers’ frenzied quest to use the valley purely for food production, a kind of secondary gold rush. Now the necessity of dealing with droughts and floods meant that big areas of the valley were restored, and the animals brought back, in a system of wilderness parks or habitat corridors, all running up into the foothills that ringed the central valley on all sides.

The book, which Wikipedia charmingly classifies as cli-fi, grabbed me from page one and never let go. It’s an extraordinary blend of terror and hope. But this passage affected me in the most powerful way. As Marc Reisner’s Cadillac Desert explains, and as I’ve seen for myself, we’ve already engineered the hell out of California’s water systems, with less than stellar results.

Can we redo it and get it right this time? I don’t doubt our technical and industrial capacity. Let’s hope it doesn’t take an event like the one the book opens with — a heat wave in India that kills 20 million people in a week — to summon the will.


Werdmüller on Medium

Elon, Twitter, and the future of social media

There’s no world where nationalists get what they want. Continue reading on Medium »

There’s no world where nationalists get what they want.

Continue reading on Medium »

Wednesday, 13. April 2022

Habitat Chronicles

Game Governance Domains: a NFT Support Nightmare

“I was working on an online trading-card game in the early days that had player-to-player card trades enabled through our servers. The vast majority of our customer »»

“I was working on an online trading-card game in the early days that had player-to-player card trades enabled through our servers. The vast majority of our customer support emails dealt with requests to reverse a trade because of some kind of trade scams. When I saw Hearthstone’s dust system, I realized it was genius; they probably cut their support costs by around 90% with that move alone.”

Ian Schreiber
A Game’s Governance Domain

There have always been key governance requirements for object trading economies in online games, even before user-generated-content enters the picture.  I call this the game’s object governance domain.

Typically, an online game object governance domain has the following features (amongst others omitted for brevity):

- There is usually at least one fungible token currency
- There is often a mechanism for player-to-player direct exchange
- There is often one or more automatic markets to exchange between tokens and objects
  - May be player to player transactions
  - May be operator to player transactions (aka vending and recycling machinery)
- Managed by the game operator
  - There is a mechanism for reporting problems/disputes
  - There is a mechanism for adjudicating conflicts
  - There are mechanisms for resolving disputes, including:
    - Reversing transactions
    - Destroying objects
    - Minting and distributing objects
    - Minting and distributing tokens
    - Account, Character, and Legal sanctions
    - Rarely: Changes to TOS and Community Guidelines


In short, the economy is entirely in the ultimate control of the game operator. In effect, anything can be “undone” and injured parties can be “made whole” through an entire range of solutions.

Scary Future: Crypto? Where’s Undo?

Introducing blockchain tokens (BTC, for example) means that certain transactions become “irreversible”, since all transactions on the chain are 1) Atomic and 2) Expensive. In contrast, many thousands of credit-card transactions are reversed every minute of every day (accidental double charges, stolen cards, etc.) Having a market to sell an in-game object for BTC will require extending the governance domain to cover very specific rules about what happens when the purchaser has a conflict with a transaction. Are you really going to tell customers “All BTC transactions are final. No refunds. Even if your kid spent the money without permission. Even if someone stole your wallet”?

Nightmare Future: Game UGC & NFTs? Ack!

At least with your own game governance domain, you had complete control over IP presented in your game and some control, or at least influence, over the game's economy. But it gets pretty intense to think about objects/resources created by non-employees being purchased/traded on markets outside of your game governance domain.

When your game allows content that was not created within that game’s governance domain, all bets are off when it comes to trying to service customer support calls. And there will be several orders of magnitude more complaints. Look at Twitter, Facebook, and Youtube and all of the mechanisms they need to support IP-related complaints, abuse complaints, and robot-spam content. Huge teams of folks spending millions of dollars in support of Machine Learning are not able to stem the tide. Those companies’ revenue depends primarily on UGC, so that’s what they have to deal with.

NFTs are no help. They don’t come with any governance support whatsoever. They are an unreliable resource pointer. There is no way to make any testable claims about any single attribute of the resource. When they point to media resources (video, jpg, etc.) there is no way to verify that the resource reference is valid or legal in any governance domain. Might as well be whatever someone randomly uploaded to a photo service – oh wait, it is.

NFTs have been stolen, confused, hijacked, phished, rug-pulled, wash-traded, etc. NFT Images (like all internet images) have been copied, flipped, stolen, misappropriated, and explicitly transformed. There is no undo, and there is no governance domain. OpenSea, because they run a market, gets constant complaints when there is a problem, but they can’t reverse anything. So they madly try to “prevent bad listings” and “punish bad accounts” – all closing the barn door after the horse has left. Oh, and now they are blocking IDs/IPs from sanctioned countries.

So, even if a game tries to accept NFT resources into their game – they end up in the same situation as OpenSea – inheriting all the problems of irreversibility, IP abuse, plus new kinds of harassment with no real way to resolve complaints.

Until blockchain tokens have RL-bank-style undo, and decentralized trading systems provide mechanisms for a reasonable standard of governance, online games should probably just stick with what they know: “If we made it, we’ll deal with any governance problems ourselves.”








Monday, 11. April 2022

Justin Richer

The GNAPathon

At the recent IETF 113 meeting in Vienna, Austria, we put the GNAP protocol to the test by submitting it as a Hackathon project. Over the course of the weekend, we built out GNAP components and pointed them at each other to see what stuck. Here’s what we learned. Our Goals GNAP is a big protocol, and there was no reasonable way for us to build out literally every piece and option of it in o

At the recent IETF 113 meeting in Vienna, Austria, we put the GNAP protocol to the test by submitting it as a Hackathon project. Over the course of the weekend, we built out GNAP components and pointed them at each other to see what stuck. Here’s what we learned.

Our Goals

GNAP is a big protocol, and there was no reasonable way for us to build out literally every piece and option of it in our limited timeframe. While GNAP’s transaction negotiation patterns make the protocol fail gracefully when two sides don’t have matching features, we wanted to aim for success. As a consequence, we decided to focus on a few key interoperability points:

- HTTP Message Signatures for key proofing, with Content Digest for protecting the body of POST messages.
- Redirect-based interaction, to get there and back.
- Dynamic keys, not relying on pre-registration at the AS.
- Single access tokens.

While some of the components built out did support additional features, these were the ones we chose as a baseline to make everything work as best as it could. We laid out our goals to get these components to talk to each other in increasingly complete layers.

Our goal for the hackathon wasn't just to create code; we wanted to replicate a developer's experience when approaching GNAP for the first time. Wherever possible, we tried to use libraries to cover existing functionality, including HTTP Signatures, cryptographic primitives, and HTTP Structured Fields. We also used the existing XYZ Java implementation of GNAP to test things out.

New Clients

With all of this in hand, we set about building some clients from scratch. Since we had a functioning AS to build against, focusing on the clients allowed us to address different platforms and languages than we otherwise had. We settled on three very different kinds of client software:

- A single page application, written in JavaScript with no backend components.
- A command line application, written in PHP.
- A web application, written in PHP.

By the end of the weekend, we were able to get all three of these working, and the demonstration results are available as part of the hackathon readout. This might not seem like much, but the core functionality of all three clients was written completely from scratch, including the HTTP Signatures implementation.

Getting Over the Hump

Importantly, we also tried to work in such a way that the different components could be abstracted out after the fact. While we could have written very GNAP-specific code to handle the key handling and signing, we opted to instead create generic functions that could sign and present any HTTP message. This decision had two effects.

First, once we had the signature method working, the rest of the GNAP implementation went very, very quickly. GNAP is designed in such a way as to leverage HTTP, JSON, and security layers like HTTP Message Signatures as much as it can. What this meant for us during implementation is that getting the actual GNAP exchange to happen was a simple set of HTTP calls and JSON objects. All the layers did their job appropriately, keeping abstractions from leaking between them.

Second, this will give us a chance to extract the HTTP Message Signature code into truly generic libraries across different languages. HTTP Message Signatures is used in places other than GNAP, and so a GNAP implementor is going to want to use a dedicated library for this core function instead of having to write their own like we did.

We had a similar reaction to elements like structured field libraries, which helped with serialization and message-building, and cryptographic functions. As HTTP Message Signatures in particular gets built out more across different ecosystems, we’ll see more and more support for fundamental tooling.
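To illustrate the kind of generic signing helper described above, here is a minimal Python sketch that builds an HTTP Message Signatures signature base and the corresponding headers for a POST; the covered components, signature label, and key id are illustrative, and a production implementation should follow the specification's serialization rules exactly (Ed25519 via the cryptography package assumed).

import base64, hashlib, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def b64(data: bytes) -> str:
    return base64.b64encode(data).decode()

def sign_request(key: Ed25519PrivateKey, method: str, target_uri: str, body: bytes,
                 keyid: str = "client-key", label: str = "sig1") -> dict:
    # Content-Digest protects the POST body.
    content_digest = f"sha-256=:{b64(hashlib.sha256(body).digest())}:"

    created = int(time.time())
    params = f'("@method" "@target-uri" "content-digest");created={created};keyid="{keyid}"'

    # Signature base: one line per covered component, then the signature params.
    base = (f'"@method": {method}\n'
            f'"@target-uri": {target_uri}\n'
            f'"content-digest": {content_digest}\n'
            f'"@signature-params": {params}')

    signature = key.sign(base.encode("utf-8"))
    return {
        "Content-Digest": content_digest,
        "Signature-Input": f"{label}={params}",
        "Signature": f"{label}=:{b64(signature)}:",
    }

# Example: sign a JSON POST to an AS (URL and body are illustrative).
headers = sign_request(Ed25519PrivateKey.generate(), "POST",
                       "https://as.example.com/gnap", b'{"client": "..."}')

A verifier reconstructs the same base from the received request and checks the signature against the client's public key.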

Bug Fixes

Another important part of the hackathon was the discovery and patching of bugs in the existing XYZ authorization server and Java Servlet web-based client code. At the beginning of the weekend, these pieces of software worked with each other. However, it became quickly apparent that there were a number of issues and assumptions in the implementation. Finding things like this is one of the best things that can come out of a hackathon — by putting different code from different developers against each other, you can figure out where code is weak, and sometimes, where the specification itself is unclear.

Constructing the Layers

Probably the most valuable outcome of the hackathon, besides the working code itself, is a concrete appreciation of how clear the spec is from the eyes of someone trying to build to it. We came out of the weekend with a number of improvements that need to be made to GNAP and HTTP Message Signatures, but also ideas on what additional developer support there should be in the community at large. These things will be produced and incorporated over time, and hopefully make the GNAP ecosystem brighter and stronger as a result.

In the end, a specification isn’t real unless you have running code to prove it. Even more if people can use that code in their own systems to get real work done. GNAP, like most standards, is just a layer in the internet stack. It builds on technologies and technologies will be built on it.

Our first hackathon experience has shown this to be a pretty solid layer. Come, build with us!


Doc Searls Weblog

What’s up with Dad?

My father was always Pop. He was born in 1908. His father, also Pop, was born in 1863. That guy’s father was born in 1809, and I don’t know what his kids called him. I’m guessing, from the chart above, it was Pa. My New Jersey cousins called their father Pop. Uncles and their male […]

My father was always Pop. He was born in 1908. His father, also Pop, was born in 1863. That guy’s father was born in 1809, and I don’t know what his kids called him. I’m guessing, from the chart above, it was Pa. My New Jersey cousins called their father Pop. Uncles and their male contemporaries of the same generation in North Carolina, however, were Dad or Daddy.

To my kids, I’m Pop or Papa. Family thing, again.

Anyway, I’m wondering what’s up, or why’s up, with Dad?

 


reb00ted

Web2's pervasive blind spot: governance

What is the common theme in these commonly stated problems with the internet today? Too much tracking you from one site to another. Wrong approach to moderation (too heavy-handed, too light, inconsistent, contextually inappropriate etc). Too much fake news. Too many advertisements. Products that make you addicted, or are otherwise bad for your mental health. In my view, the common

What is the common theme in these commonly stated problems with the internet today?

- Too much tracking you from one site to another.
- Wrong approach to moderation (too heavy-handed, too light, inconsistent, contextually inappropriate etc).
- Too much fake news.
- Too many advertisements.
- Products that make you addicted, or are otherwise bad for your mental health.

In my view, the common theme underlying these problems is: “The wrong decisions were made.” That’s it. Not technology, not product, not price, not marketing, not standards, not legal, nor whatever else. Just that the wrong decisions were made.

Maybe it was:

The wrong people made the decisions. Example: should it really be Mark Zuckerberg who decides which of my friends’ posts I see?

The wrong goals were picked by the decisionmakers and they are optimizing for those. Example: I don’t want to be “engaged” more and I don’t care about another penny per share for your earnings release.

A lack of understanding or interest in the complexity of a situation, and inability for the people with the understanding to make the decision instead. Example: are a bunch of six-figure Silicon Valley guys really the ones who should decide what does and does not inflame religious tensions in a low-income country half-way around the world with a societal structure that’s fully alien to liberal Northern California?

What do we call the thing that deals with who gets to decide, who has to agree, who can keep them from doing bad things and the like? Yep, it’s “governance”.

Back in the 1980s and 90s, all we cared about was code. So when the commercial powers started abusing their power, in the minds of some users, those users pushed back with projects such as GNU and open-source.

But we’ve long moved on from there. In one of the defining characteristics of Web2 over Web1, data has become more important than the code.

Starting about 15 years ago, it was suddenly the data scientists and machine learning people who started getting the big bucks, not the coders any more. Today the fight is not about who has the code any more; it is about who has the data.

Pretty much the entire technology industry understands that now. What it doesn't understand yet is that the consumer internet crisis we are in is best understood as a need to add another layer to the sandwich: not just the right code, not just the right data, but also the right governance: have the right people decide for the right reasons, and the mechanisms to get rid of the decisionmakers if the affected community decides they made the wrong decisions or had the wrong reasons.

Have you noticed that pretty much all senior technologists who dismiss Web3 (usually in highly emotional terms) completely ignore that pretty much all the genuinely interesting innovations in the Web3 world are governance innovations? (Never mind blockchain; it’s just a means to an end for those innovators.)

If we had governance as part of the consumer technology sandwich, then:

Which of my friends’ posts I see should be a decision that I make with my friends, and nobody else gets a say.

Whether a product optimizes for this or that should be a decision that is made by its users, not some remote investors or power-hungry executives.

A community of people half-way around the world should determine, on its own for its own purposes, what is good for its members.

(If we had a functioning competitive marketplace, Adam Smith-style, then we would probably get this, because products that do what the customers want win over products that don’t. But we have monopolies instead, which cement the decisionmaking in the wrong places for the wrong reasons. A governance problem, in other words.)

If you want to get ahead of the curve, pay attention to this. All the genuinely new stuff in technology that I’ve seen for a few years has genuinely new ideas about governance. It’s a complete game changer.

Conversely, if you build technology with the same rudimentary, often dictatorial and almost always dysfunctional governance we have had for technology in the Web1 and Web2 world, you are fundamentally building a solution for the past, not for the future.

To be clear, better governance for technology is in the pre-kindergarten stage. It’s like the Apple 1 of the personal computer – assembly required – or the Archie stage of the internet. But we would have been wrong to dismiss those as mere fads then, and it would be wrong to dismiss the crucial importance of governance now.

That, for me, is the essence of how the thing after Web2 – and we might as well call it Web3 – is different. And it is totally exciting! Because “better governance” is just another way to say: the users get to have a say!!

Thursday, 07. April 2022

Identity Woman

Media Mention: MIT Technology Review

I was quoted in the article in MIT Technology Review on April 6, 2022, “Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users.” Worldcoin, a startup built on a promise of a fairly-distributed, cryptocurrency-based universal basic income, is building a biometric database by collecting data from the financially […] The post Media Mention: MIT

I was quoted in the article in MIT Technology Review on April 6, 2022, “Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users.” Worldcoin, a startup built on a promise of a fairly-distributed, cryptocurrency-based universal basic income, is building a biometric database by collecting data from the financially […]

The post Media Mention: MIT Technology Review appeared first on Identity Woman.

Monday, 04. April 2022

Randall Degges

Real Estate vs Stocks

As I’ve mentioned before, I’m a bit of a personal finance nerd. I’ve been carefully tracking my spending and investing for many years now. In particular, I find the investing side of personal finance fascinating. For the last eight years, my wife and I have split our investments roughly 50⁄50 between broadly diversified index funds and real estate (rental properties). Earlier this week,

As I’ve mentioned before, I’m a bit of a personal finance nerd. I’ve been carefully tracking my spending and investing for many years now. In particular, I find the investing side of personal finance fascinating.

For the last eight years, my wife and I have split our investments roughly 50⁄50 between broadly diversified index funds and real estate (rental properties).

Earlier this week, I was discussing real estate investing with some friends, and we had a great conversation about why you might even consider investing in real estate in the first place. As I explained my strategy to them, I thought it might make for an interesting blog post (especially if you’re new to the world of investing).

Please note that I’m not an expert, just an enthusiastic hobbyist. Like all things I work on, I like to do a lot of research, experimentation, etc., but don’t take this as financial advice.

Why Invest in Stocks

Before discussing whether real estate or stocks is the better investment, let’s talk about how stocks work. If you don’t understand how to invest in stocks (and what rewards you can expect from them), the comparison between real estate and stocks will be meaningless.

What is a Stock?

Stocks are the simplest form of investment you can make. If you buy one share of Tesla stock for $100, you’re purchasing one tiny sliver of the entire company and are now a part-owner!

Each stock you hold can either earn or lose money, depending on how the company performs. For example, if Tesla doesn’t sell as many vehicles as the prior year, it’s likely that the company will not make as much money and will therefore be worth less than it was a year ago, so the value of the stock might drop. In this case, the one share of Tesla stock you purchased for $100 might only be worth $90 (a 10% drop in value!).

But, stocks can also make you money. If Tesla sells more vehicles than anyone expected, the company might be worth more, and now your one share of Tesla stock might be worth $110 (a 10% gain!). This gain is referred to as appreciation because the value of your stock has appreciated.

In addition to appreciation, you can also make money through dividends. While some companies choose to take any profits they make and reinvest them into the business to make more products, conduct research, etc., some companies take their profits and split them up amongst their shareholders. We call this distribution a dividend. When a dividend is paid, you’ll receive a set amount of money per share as a shareholder. For example, if Tesla issues a 10 cent dividend per share, you’ll receive $0.10 of spending money as the proud owner of one share of Tesla stock!

But here’s the thing: investing in stocks is RISKY. It’s risky because companies make mistakes, and even the most highly respected and valuable companies today can explode overnight and become worthless (Enron, anyone?). Because of this, generally speaking, it’s not advisable to ever buy individual stocks.

Instead, the best way to invest in stocks is by purchasing index funds.

What is an Index Fund?

Index funds are stocks you buy that are essentially collections of other stocks. If you invest in Vanguard’s popular VTSAX index fund, for example, you’re buying a small amount of all publicly traded companies in the US.

This approach is much less risky than buying individual stocks because VTSAX is well-diversified. If any of the thousands of companies in the US goes out of business, it doesn’t matter to you because you only own a very tiny amount of it.

The way index funds work is simple: if the value of the index as a whole does well (the US economy in our example), the value of your index fund rises. If the value of the index as a whole does poorly, the value of your index fund drops. Simple!

How Well Do Index Funds Perform?

Let’s say you invest your money into VTSAX and now own a small part of all US companies. How much money can you expect to make?

While there’s no way to predict the future, what we can do is look at the past. By looking at the average return of the stock market since 1926 (when the first index was created), you can see that the average return of the largest US companies has been ~10% annually (before inflation).

If you were to invest in VTSAX over a long period of time, it’s historically likely that you’ll earn an average of 10% per year. And understanding that the US market averages 10% per year is exciting because if you invest a little bit of money each month into index funds, you’ll become quite wealthy.

If you plug some numbers into a compound interest calculator, you’ll see what I mean.

For example, if you invest $1,000 per month into index funds for 30 years, you’ll end up with $2,171,321.10. If you start working at 22, then by the time you’re 52, you’ll have over two million dollars: not bad!
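
If you want to check that arithmetic yourself, here is a minimal Python sketch. It assumes annual compounding with the $12,000 of yearly contributions made at the start of each year, which is one common calculator convention and happens to reproduce the figure above; a calculator that compounds monthly will land on a somewhat different number.

```python
def future_value(annual_contribution: float, annual_return: float, years: int) -> float:
    """Future value of a fixed yearly contribution, compounded annually.

    Each contribution is assumed to be made at the start of the year
    (an annuity-due), which is one common calculator convention.
    """
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution) * (1 + annual_return)
    return balance


# $1,000/month is $12,000/year; 10% average annual return for 30 years.
print(f"${future_value(12_000, 0.10, 30):,.2f}")  # ~$2,171,321.10
```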

How Much Money Do I Need to Retire if I Invest in Index Funds?

Now that you know how index funds work and how much they historically earn, you might be wondering: how much money do I need to invest in index funds before I can retire?

As it turns out, there’s a simple answer to this question, but before I give you the answer, let’s talk about how this works.

Imagine you have one million dollars invested in index funds that earn an average of 10% yearly. You could theoretically sell 10% of your index funds each year and never run out of money in this scenario. Or at least, this makes sense at first glance.

Unfortunately, while it’s true that the market has returned a historical average of 10% yearly, this is an average, and actual yearly returns vary significantly by year. For example, you might be up 30% one year and down 40% the next.

This unpredictability year-over-year makes it difficult to safely withdraw money each year without running out of money due to sequence of return risk.

Essentially, while it’s likely that you’ll earn 10% per year on average if you invest in a US index fund, you will likely run out of money if you sell 10% of your portfolio per year due to fluctuating returns each year.

Luckily, a lot of research has been done on this topic, and the general consensus is that if you only withdraw 4% of your investments per year, you’ll have enough money to last you a long time (a 30-year retirement). This is known as the 4% rule and is the gold standard for retirement planning.
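
To make the sequence-of-returns point concrete, here is a small Python sketch using made-up numbers of my own (an alternating +40%/-20% sequence, which still averages 10% arithmetically but is volatile): a fixed withdrawal of 10% of the starting balance runs out of money well before 30 years, while a fixed 4% withdrawal survives the full period.

```python
def years_survived(start_balance: float, annual_withdrawal: float,
                   returns: list[float], max_years: int = 30) -> int:
    """How many years a portfolio lasts with a fixed dollar withdrawal.

    The withdrawal is taken at the start of each year, then that year's
    return (cycling through `returns`) is applied to whatever remains.
    """
    balance = start_balance
    for year in range(1, max_years + 1):
        balance -= annual_withdrawal
        if balance <= 0:
            return year
        balance *= 1 + returns[(year - 1) % len(returns)]
    return max_years


# Alternating +40% / -20% years: a 10% arithmetic average, but volatile.
volatile = [0.40, -0.20]
print(years_survived(1_000_000, 100_000, volatile))  # 10% withdrawal: money gone in under 20 years
print(years_survived(1_000_000, 40_000, volatile))   # 4% withdrawal: lasts the full 30 years
```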

Using the 4% rule as a baseline, you can quickly determine how much money you need to invest to retire with your desired spending.

For example, let’s say you want to retire and live off $100k per year. In this case, $100k is 4% of $2.5m, so you’ll need at least $2.5m invested to retire safely.

PRO TIP: You can easily calculate how much you need invested to retire if you simply take your desired yearly spend and multiply it by 25. For example, $40k * 25 = $1m, $100k * 25 = $2.5m, etc.

By only withdrawing 4% of your total portfolio per year, it’s historically likely that you’ll never run out of money over 30 years. Need a longer retirement? You may want to aim for a 3.5% withdrawal rate (or lower).
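
The same shortcut in code, for anyone who prefers it spelled out (a trivial sketch; the withdrawal rates are the 4% and 3.5% figures discussed above):

```python
def nest_egg_needed(annual_spend: float, withdrawal_rate: float = 0.04) -> float:
    """Portfolio size at which `annual_spend` equals `withdrawal_rate` of it."""
    return annual_spend / withdrawal_rate


print(f"${nest_egg_needed(40_000):,.0f}")           # $1,000,000 (the multiply-by-25 shortcut)
print(f"${nest_egg_needed(100_000):,.0f}")          # $2,500,000
print(f"${nest_egg_needed(100_000, 0.035):,.0f}")   # ~$2,857,143 at a more conservative 3.5%
```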

Should I Invest in Index Funds?

I’m a big fan of index fund investing, which is why my wife and I put 50% of our money into index funds.

Index funds are simple to purchase and sell (you can do it in seconds using an investment broker like Vanguard)

Index funds have an excellent historical track record (10% average yearly returns is fantastic!)

Index funds are often tax-advantaged (they are easy to purchase through a company 401k plan, IRA, or other tax-sheltered accounts)

Why Invest in Real Estate?

Now that we’ve discussed index funds, how they work, what returns you can expect if you invest in index funds, and how much money you need to invest to retire using index funds, we can finally talk about real estate.

What Qualifies as a Real Estate Investment?

Like stocks and other types of securities, there are multiple ways to invest in real estate. I’m going to cover the most basic form of real estate investing here, but know that there are many other ways to invest in real estate that I won’t cover today due to how complex it can become.

At a basic level, investing in real estate means you’re purchasing a property: a house, condo, apartment building, piece of land, commercial building, etc.

How Do Real Estate Investors Make Money?

There are many ways to make money through investing in real estate. Again, I’m only going to cover the most straightforward ways here due to the topic’s complexities.

Let’s say you own an investment property. The typical ways you might make money from this investment are:

Renting the property out for a profit

Owning the property as its value rises over time. For example, if you purchased a house ten years ago for $100k that is worth $200k today, you’ve essentially “earned” $100k in profit, even if you haven’t yet sold the property. This is called appreciation.

Simple, right?

What’s One Major Difference Between Index Funds and Real Estate?

One of the most significant differences between real estate investing and index fund investing is leverage.

When you invest in an index fund like VTSAX, you’re buying a little bit of the index using your own money directly. This means if you purchase $100k of index funds and earn 10% on your money, you’ll have $110k of investments.

On the other hand, real estate is often purchased using leverage (aka: bank loans). It’s common to buy an investment property and only put 20-25% of your own money into the investment while seeking a mortgage from a bank to cover the remaining 75-80%.

The benefit of using leverage is that you can stretch your money further. For example, let’s say you have $100k to invest. You could put this $100k into VTSAX or purchase one property worth $500k (20% down on a $500k property means you only need $100k as a down payment).

Imagine these two scenarios:

Scenario 1: You invest $100k in VTSAX and earn precisely 10% per year

Scenario 2: You put a $100k down payment on a $500k property that you rent out for a profit of $500 per month after expenses (we call this cash flow), and this property appreciates at a rate of 6% per year. Also, assume that you can secure a 30-year fixed-rate loan for the remaining $400k at a 4.5% interest rate.

After ten years, in Scenario 1, you’ll have $259,374.25. Not bad! That’s a total profit of $159,374.25.

But what will you have after ten years in Scenario 2?

In Scenario 2, you’ll have:

A property whose value has increased from $500k to $895,423.85 (an increase of $395,423.85)

Cash flow of $60k

A total remaining mortgage balance of $320,357.74 (a decrease of $79,642.26)

If you add these benefits up, in Scenario 2 you’ve essentially ballooned your original $100k investment into a total gain of $535,066.11. That’s more than three times the gain you would have gotten had you simply invested your $100k into VTSAX!
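
Here is a Python sketch that reproduces those Scenario 1 and Scenario 2 numbers, assuming a standard monthly-amortized 30-year fixed loan for the mortgage balance (small rounding differences aside):

```python
def remaining_balance(principal: float, annual_rate: float,
                      years_total: int, years_elapsed: int) -> float:
    """Remaining balance on a fixed-rate, monthly-amortized mortgage."""
    r = annual_rate / 12
    n = years_total * 12
    m = years_elapsed * 12
    payment = principal * r / (1 - (1 + r) ** -n)
    return principal * (1 + r) ** m - payment * ((1 + r) ** m - 1) / r


years = 10

# Scenario 1: $100k in an index fund earning exactly 10% per year.
index_value = 100_000 * 1.10 ** years                        # ~$259,374

# Scenario 2: $100k down on a $500k rental appreciating 6% per year,
# $500/month cash flow, and a $400k loan at 4.5% over 30 years.
property_value = 500_000 * 1.06 ** years                     # ~$895,424
cash_flow = 500 * 12 * years                                 # $60,000
loan_balance = remaining_balance(400_000, 0.045, 30, years)  # ~$320,358
principal_paid = 400_000 - loan_balance                      # ~$79,642

total_gain = (property_value - 500_000) + cash_flow + principal_paid
print(f"Index fund gain:  ${index_value - 100_000:,.2f}")
print(f"Real estate gain: ${total_gain:,.2f}")               # ~$535,066
```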

There are a lot of variables at play here, but you get the general idea. While investing in index funds is profitable and straightforward, if you’re willing to learn the business and put in the work, you can often make higher returns through real estate investing over the long haul.

How Difficult is Real Estate Investing?

Real estate investing is complicated. It requires a lot of knowledge, effort, and ongoing work to run a successful real estate investing operation. Among other things, you need to know:

How much a potential investment property will rent for

How much a potential investment property will appreciate

What sort of mortgage rates you can secure

What your expenses will be each month

How much property taxes will cost

How much insurance will cost

Etc.

All of the items above are variables that can dramatically impact whether or not a particular property is a good or bad investment. And this doesn’t even begin to account for the other things you need to do on an ongoing basis: manage the property, manage your accounts/taxes, follow all relevant laws, etc.

In short: investing in real estate is not simple and requires a lot of knowledge to do successfully. But, if you’re interested in running a real estate business, it can be a fun and profitable venture.

How We Invest in Real Estate

As I mentioned earlier, my wife and I split our investable assets 50⁄50 between index funds and real estate. The reason we do this is twofold:

It’s easy (and safe) for us to invest money in index funds

It’s hard for us to invest in real estate (it took a lot of time and research to get started), but we generally earn greater returns on our real estate investments than we do on our index investments

Our real-estate investing criteria are pretty simple.

We only purchase residential real estate that we rent out to long-term tenants. We do this because it’s relatively low-risk, low-maintenance, and straightforward.

We only purchase rental properties that generate a cash-on-cash return of 8% or greater. For example, if we buy a $200k property with a $40k down payment, we need to earn $3,200 per year in profit ($3,200 is 8% of $40k) for the deal to make sense.

We don’t factor appreciation into our investment calculations as we plan to hold these rental properties long-term and never sell them. The rising value of the rental properties we acquire isn’t as beneficial to us as the cash flow. Over time, the properties pay themselves off, and once they’re free and clear, we’ll have a much larger monthly profit.

Why did we choose an 8% cash-on-cash return as our target metric for rental property purchases? In short, it’s because that 8% is roughly twice the safe withdrawal rate of our index funds.
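
As a small illustration of that screen, here is a sketch of the cash-on-cash check using the $200k/$40k example above (the buy/pass threshold is my own shorthand for the 8% target):

```python
def cash_on_cash(annual_profit: float, cash_invested: float) -> float:
    """Annual pre-tax cash flow divided by the cash actually put into the deal."""
    return annual_profit / cash_invested


# The example from the post: $40k down on a $200k property needs at least
# $3,200/year in profit to hit the 8% target.
rate = cash_on_cash(3_200, 40_000)
print(f"{rate:.1%}")                       # 8.0%
print("buy" if rate >= 0.08 else "pass")   # meets the 8% screen
```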

I figured early on that if I was going to invest a ton of time and energy into learning about real estate investing, hunting down opportunities, etc., I’d have to make it worthwhile by at least doubling the safe withdrawal rate of our index funds. Otherwise, I could simply invest our money into VTSAX and never think about taking on extra work or risk.

Today, my wife and I own a small portfolio of single-family homes that we rent out to long-term tenants, each earning roughly 8% cash-on-cash return yearly.

Should I Invest in Stocks or Real Estate?

As you’ve seen by now, there isn’t a clear answer here. To sum it up:

If you’re looking for the most straightforward path to retirement, invest your money in well-diversified index funds like VTSAX. Index funds will allow you to retire with a 4% safe withdrawal rate and slowly build your wealth over time.

If you’re interested in real estate and are willing to put in the time and effort to learn about it, you can potentially make greater returns, but it’s a lot of work.

Or, if you’re like me, why not both? This way, you get the best of both worlds: a bit of simple, reliable index investments and a bit of riskier, more complex, and more rewarding real estate investments.

Does Music Help You Focus?

I’ve always been the sort of person who works with music in the background. Ever since I was a little kid writing code in my bedroom, I’d routinely listen to my favorite music while programming. Over the last 12 years, as my responsibilities have shifted from purely writing code to writing articles, recording videos, and participating in meetings, my habits have changed. Out of necessity, I

I’ve always been the sort of person who works with music in the background. Ever since I was a little kid writing code in my bedroom, I’d routinely listen to my favorite music while programming.

Over the last 12 years, as my responsibilities have shifted from purely writing code to writing articles, recording videos, and participating in meetings, my habits have changed. Out of necessity, I’m unable to work with music most of the time, but when I have an hour or so of uninterrupted time, I still prefer to put music on and use it to help me crank through whatever it is I’m focusing on.

However, I’ve been doing some experimentation over the last few months. My goal was to determine how much music helped me focus. I didn’t have a precise scientific way of measuring this except to track whether or not I felt my Pomodoro sessions were productive.

To keep score, I kept a simple Apple Notes file that contained a running tally of whether or not I felt each recently finished Pomodoro session was productive. And while this isn’t the most scientific way to measure, I figured it was good enough for my purposes.

Over the last three months, I logged 120 completed Pomodoro sessions. Of those, roughly half (58 sessions) were completed while listening to music, and the other half (62 sessions) were completed without music.

To my surprise, when tallying up the results, it appears that listening to music is a distraction for me, causing me to feel like my sessions weren’t very productive. Of the 58 Pomodoro sessions I completed while listening to music, I rated only ~20% (12 sessions) as productive, versus ~60% (37 of the 62 sessions) of those completed without music.

60% vs. 20% is a significant difference, which is especially surprising since I genuinely enjoy working with music. When I started this experiment, I expected that music would make me more, not less productive.

So what’s the takeaway here? For me, it’s that despite how much I enjoy listening to music while working, it’s distracting.

Am I going to give up listening to music while trying to focus? Not necessarily. As I mentioned previously, I still love working with music. But, I’ll undoubtedly turn the music off if I’m trying to get something important done and need my time to be as productive as possible.

In the future, I’m also planning to run this experiment separately to compare the impact of instrumental vs. non-instrumental music on my productivity. I typically listen to music with lyrics (hip-hop, pop, etc.), which makes me wonder if the lyrics are distracting or just the music itself.

I’m also curious as to whether or not lyrics in a language I don’t understand would cause a similar level of distraction or not (for example, maybe I could listen to Spanish music without impacting my productivity since I don’t understand the language).

Regardless of my results, please experiment for yourself! If you’re trying to maximize productivity, you might be surprised what things are impacting your focus levels.

Friday, 01. April 2022

reb00ted

What can we do with a DAO that cannot be done with other organizational forms?

Decentralized Autonomous Organizations (DAOs) are something new enabled by crypto and blockchain technologies. We are only at the beginning of understanding what they can do and what not. So I asked my social network: “What can we do with a DAO that cannot be done with other organizational forms?” Here is a selected set of responses, mostly from this Twitter thread and this Facebook thread. Th

Decentralized Autonomous Organizations (DAOs) are something new enabled by crypto and blockchain technologies. We are only at the beginning of understanding what they can do and what not.

So I asked my social network: “What can we do with a DAO that cannot be done with other organizational forms?”

Here is a selected set of responses, mostly from this Twitter thread and this Facebook thread. They are both public, so I’m attributing:

Kurt Laitner: “They enable dynamic equity and dynamic governance”

Vittorio Bertocci: “Be robbed without any form of recourse, appeal or protection? 😛 I kid, I kid 🙂”

Dan Lyke: “…they create a gameable system that has less recourse to the law than a traditional system … [but] the immutable public ledger of all transactions may provide a better audit trail”

David Mason: “Lock yourself into a bad place without human sensibility to bail you out.”

Adam Lake: “We already have cooperatives, what is the value add?”

Phill Hallam-Baker: “Rob people who don’t understand that the person who creates them controls them absolutely.”

Jean Russell: “Act like you have a bank account as a group regardless of the jurisdictions of the members.”

David Berlind: “For now (things are changing), a DAO can fill a gap in international business law.”

Follow the links above, there are more details in the discussions.

I conclude: there is no consensus whatsoever :-) That may be because there is such a large range of setups under that term today.

Wednesday, 23. March 2022

MyDigitalFootprint

Will decision making improve if we understand the bias in the decision making unit?

As a human I know we all have biases, and we all have different biases. We expose certain biases based on context, time, and people. We know that bias forms because of experience, and we are sure that social context reinforces perceived inconstancy.  Bias is like a mirror and can show our good and bad sides. As a director, you have to have experience before taking on the role, even as a fou

As a human I know we all have biases, and we all have different biases. We expose certain biases based on context, time, and people. We know that bias forms because of experience, and we are sure that social context reinforces perceived inconstancy.  Bias is like a mirror and can show our good and bad sides.

As a director, you have to have experience before taking on the role, even as a founder director. This thought-piece asks if we know where our business biases start from and what direction of travel they create. Business bias is the bias you have right now that affects your choice, judgment and decision making. Business bias is something that our data cannot tell us. Data can tell me if your incentive removes choice or aligns with an outcome.

At the most superficial level, we know that the expectations of board members drive decisions.  The decisions we take link to incentives, rewards and motivations and our shared values. 

If we unpack this simple model, we can follow the blue arrows in the diagram below, which say that your expectations build shared values that focus/highlight the rewards and motivations (as a group) we want. These, in turn, drive new expectations.

However, equally, we could follow the orange arrows and observe that expectations search for and align with the rewards and motivations we are given; this exposes our shared values, which create new expectations for us.



Whilst individual bias is complex, board or group bias adds an element of continuous dynamic change. We have observed and been taught this based on the “forming, storming, norming, performing” model of group development first proposed by Bruce Tuckman in 1965, who said that these phases are all necessary and inevitable for a team to grow, face up to challenges, tackle problems, find solutions, plan work, and deliver results.


The observation here is that whilst we might all follow the Tuckman ideals of “time”, in terms of the process of getting to performing (for which there is lots of supporting data), his model ignores the process of self-discovery we pass through during each phase. It assumes that we align during the storming (conflicts and tensions) phase but ignores that we fundamentally have different approaches. Do you follow the blue or orange route, and from where did you start?

This is never more evident than when you get a “board with mixed experience”; in this case, the diversity of experience is a founder, a family business leader and a promoted leader. The reason is that if you add their starting positions to the map, we tend to find they start from different biased positions and may be travelling in different directions. Thank you to Claudia Heimer for stimulating this thought. The storming phase may align the majority around the team but will not change the underlying ideals and biases in the individuals, which means we don’t expose the paradoxes in decision making.


What does this all mean? As a CDO, we are tasked with finding data to support decisions. Often leadership will not follow the data, and we are left with questions. Equally, some leaders blindly follow the data without questioning it. Maybe it is time to collect smaller data at the board to uncover how we work and expose the bias in our decision making.





Monday, 21. March 2022

Heather Vescent

Beyond the Metaverse Hype

Seven Reflections Photo by Harry Quan on Unsplash On March 11, 2022, I was a panelist on The Metaverse: The Emperor’s New Clothes panel at the Vancouver International Privacy & Security Summit’s panel. Nik Badminton set the scene and led a discussion with myself, James Hursthouse and Kharis O’Connell. Here are seven reflections. Games are a playful way to explore who we are, to process
Seven Reflections

Photo by Harry Quan on Unsplash

On March 11, 2022, I was a panelist on The Metaverse: The Emperor’s New Clothes panel at the Vancouver International Privacy & Security Summit’s panel. Nik Badminton set the scene and led a discussion with myself, James Hursthouse and Kharis O’Connell. Here are seven reflections.

Games are a playful way to explore who we are, to process and interact with people in a way we can’t do IRL. Games are a way to try on other identities, to create or adjust our mental map of the world.

Companies won’t protect me. I’m concerned we are not fully aware of the data that can be tracked with VR hardware. From a quantified self perspective, I would love to know more information about myself to be a better human; but I don’t trust companies. Companies will weaponize any scrap of data to manipulate you and I into buying something (advertising), and even believing something that isn’t true (disinformation).

Privacy for all. We need to shift thinking around privacy and security. It’s not something we each should individually have to fight for — for one of us to have privacy, all of us must have privacy. I wrote some longer thoughts in this article.

Capitalism needs Commons. Capitalism can’t exist without a commons to exploit. And commons will dry up if they are not replenished or created anew. So we need to support the continuity and creation of commons. Governments traditionally are in the role of protecting commons. But people can come together to create common technological languages, like technology standards to enable interoperable technology “rails” that pave the way for an open marketplace.

We need new business models. The point of a business model is profit first. This bias has created the current set of problems. In order to solve the world’s problems, we must wean ourselves off profit as the primary objective. I’m not saying that making money isn’t important, it is. But profit at all costs is what has got us into the current set of world problems.

Appreciate the past. I’m worried too much knowledge about how we’ve done things in the past is being lost. But not everything needs to go into the future. Identify what has worked and keep doing it. Identify what hasn’t worked and iterate to improve on it. This is how you help build on the past and contribute to the future.

Things will fail. There is a lot of energy (and money) in the Metaverse, and I don’t see it going away. That said, there will be failures. If the experimentation fails, is that so bad? In order to understand what is possible, we have to venture a bit into the realm of what’s impossible.

Watch the whole video for the thought-provoking conversation.

Thank you to Nik, Kharis, James and everyone at the Vancouver International Privacy & Security Summit!

Friday, 11. March 2022

@_Nat Zone

Memories of the Great East Japan Earthquake: My Tweets from March 11, 2011

Today marks 11 years since the Great East Japan Earthquake. So that the memory does not fade…

Today marks 11 years since the Great East Japan Earthquake. So that the memory does not fade, I am pasting my tweets from that day below, in their original chronological order. They bring those moments back as if in real time. Both the Japanese and English tweets are included.

日本語のツイート 英語のツイート 日本語のツイート @kthrtty STAATSKAPELLE BERLIN もなかなか。 posted at 00:35:28 無常社会の例。現代のジャンバルジャン RT @masanork: 副業で続けてたんだったらともかく、辞めさせる必要ないと思うんだけど / 科学の先生は「大ポルノ女優」! 生徒がビデオ見つけて大騒動に(夕刊フジ) htn.to/t8n8AJ posted at 09:50:16 えーと。 RT @47news: ウイルス作成に罰則 関連法案を閣議決定 bit.ly/dFndIC posted at 09:51:53 地震だ! posted at 14:48:22 緊急地震速報。30秒以内に大きな揺れが来ます。 posted at 14:48:56 ビルがバキバキ言っている。出口は確保されている。 posted at 14:51:13 エレベーターは非常停止中。 posted at 14:52:59 余震なう。縦揺れ。 posted at 15:17:58 Tsunami 10m H. posted at 15:26:33 九段会館ホール天井崩落600人巻き込み。けが人多数。 posted at 15:49:02 横浜駅前ボーリング場天井崩落。10人が下敷き。神奈川県庁外壁剥がれ落ち。 posted at 15:50:25 RT @motoyaKITO: これやばいぞ RT @OsakaUp: どなたか、助けてあげて下さい!東京都台東区花川戸1-11-7 ギークハウス浅草 301号RT @itkz 地震が起きた時、社内サーバールームにいたのだが、ラックが倒壊した。 … posted at 16:25:52 汐留シティーセンター、津波対策のために地下2階出入口封鎖。出入りには1F、地下1Fを利用のこと。 posted at 16:50:24 Earthquake in Japan. Richter Scale 8.4. posted at 17:01:27 「こちらは汐留シティーセンター防災センターです。本日は地震のため、17時半にて営業を終了しました。」え?! posted at 17:32:27 「訂正します。店舗の営業を終了しました。」そりゃそうだよねw RT @_nat: 「こちらは汐留シティーセンター防災センターです。本日は地震のため、17時半にて営業を終了しました。」え?! posted at 17:42:42 another shake coming in a minute. posted at 17:44:03 Fukushima Nuclear Power Plant’s cooling system not working. Emergency state announced. 1740JST #earthquakes posted at 17:50:53 本当に?津波は川も上って来るはずだけと大丈夫?安全な場所で待機が基本のはずだけど。 RT @CUEICHI: こういうときは、動けるとおもった瞬間に、迷わず移動しないと、後になればなるほど身動きとれなくなります。 posted at 18:07:47 政府の17:40の指示は、待機。RT @CUEICHI: こういうときは、動けるとおもった瞬間に、迷わず移動しないと、後になればなるほど身動きとれなくなります。 posted at 18:09:21 Finally could get in touch with my daughter. posted at 18:32:37 RT @hitoshi: 【帰宅困難の方】毛布まではさすがに用意できませんし、ゆっくり寝るようなスペースは取れないかもしれませんが、店長が泊まることになっていますので、避難してきた方は明日の朝まで滞在可能です。豚組しゃぶ庵 港区六本木7-5-11-2F posted at 18:54:42 RT @UstreamTech_JP: このたびの地震災害報道に関して、NHK様より、放送をUSTREAM上で再配信をすることについて許諾いただきました。 posted at 18:57:00 RT @oohamazaki: 【東京23区内にいる帰宅難民へ】避難場所を公開しているところを可能なかぎりGoogle Maps でまとめました。リアルタイム更新します!bit.ly/tokyohinan posted at 19:51:50 食料班が帰還 posted at 19:53:49 @night_in_tunisi ありがとうございます! posted at 19:54:45 なんと。 RT @hiroyoshi: 霞ヶ関の各庁舎には講堂がある。なぜ帰宅難民に開放しない? posted at 20:29:22 こりゃぁ、都と国とで、大分対応が分かれるなぁ。 posted at 20:50:31 RT @fu4: RT @kazu_fujisawa: 4時ぐらいの携帯メールが今ごろたくさん届いた。TwitterとGmailだけ安定稼働したな。クラウドの信頼性は専用回線より劣るというのは、嘘だという事が判明した。 posted at 20:59:35 チリの友人からその旨連絡ありました。 RT @Y_Kaneko: チリは既に警戒していると聞きました。 RT @marinepolaris: ハワイや米国西海岸、南米チリペルーの在留邦人に津波の情報をお願いします。到達確実なので。 posted at 21:50:11 @iglazer Thanks. Yes, they are fine. posted at 22:10:17 チリ政府も支援体制を整えたそうです。 @trinitynyc posted at 22:12:35 NHK 被災人の知恵 www.nhk.or.jp/hisaito2/chie/… posted at 22:16:27 市原、五井のプラント火災、陸上からは近づけず。塩釜市石油コンビナートで大規模な火災。爆発も。 posted at 22:19:21 東京都、都立高校、すべて開放へ。受け入れ準備中。 posted at 22:20:55 RT @inosenaoki: 都営新宿線は21時45分に全線再開。他の都営地下鉄はすでに再開。ただし本数はまだ少ない。 posted at 22:22:36 RT @fujita_nzm: 【お台場最新情報】台場駅すぐの「ホテル日航東京」さんでは温かいコーンスープと、冷たいウーロン茶の無料サービスが始まり、喝采を浴びています。みなさん落ち着きを取り戻し、疲れて寝る方も増えてきました。 posted at 22:24:07 For earthquake info in English/Chinese etc., tune to 963 for NHK Radio. posted at 22:25:20 RT @genwat: 福島原発は報道されているとおりです。 電源車がつけばよし、つかなければ予想は難しいです。一気にメルトダウンというものではありません。デマにまぎらわされず、推移を見守りましょう。BWR=沸騰水型軽水炉なので、汚染黒鉛を吹いたりするタイプではありません posted at 22:26:25 English Earthquake Information site for the evacuation center etc. Plz RT. ht.ly/4cqaj posted at 22:30:20 [22:39:52] =nat: 宮城県警察本部:仙台市若林区荒浜で200人~300人の遺体が見つかった。 22:40 posted at 22:41:42 RT @tokyomx: 鉄道情報。本日の運転を終日見合わせを決めたのは次のとおり。JR東日本、ゆりかもめ、東武伊勢崎線、東武東上線、京王電鉄、京成電鉄。(現在情報です) posted at 22:44:05 @mayumine よかった! posted at 22:50:58 I’m at 都営浅草線 新橋駅 (新橋2-21-1, 港区) 4sq.com/hvEZ7Z posted at 23:39:48 @ash7 汐留はだめ。浅草線はOk posted at 23:44:31 英語のツイート Big Earthquake in Japan right now. 
posted at 14:54:37 Earthquake Intensity in Japan. ow.ly/i/921g posted at 14:59:14 All the trains in Tokyo are stopped. posted at 15:08:32 Still Shaking. posted at 15:08:48 It is one of the biggest shake that Japan had. @shita posted at 15:13:41 Tsunami 10m H. posted at 15:26:33 90min past the shake and it is still shaking in Tokyo. posted at 16:25:50 Earthquake in Japan. Richter Scale 8.4. posted at 17:01:28 another shake coming in a minute. posted at 17:44:03 Well, it is 8.8. RT @judico: OMG, 7.9 in Japan. Be safe @_nat_en! #earthquakes posted at 17:48:43 Fukushima Nuclear Power Plant’s cooling system not working. Emergency state announced. 1740JST #earthquakes posted at 17:50:54 Now it is corrected to be 8.8. RT @domcat: Earthquake in Japan. Richter Scale 8.4. (via @_nat_en) posted at 17:54:26 Finally could get in touch with my daughter. posted at 18:32:38 @rachelmarbus Thanks! posted at 18:36:40 Fukushima Nuclear Power Plant – If we can re-install power for the cooling system within 8 hours, it will be ok. #earthquakes posted at 18:39:30 @helena_arellano We still have 7 hours to install power for the cooling system. posted at 19:32:26 Tsunami is approaching Hawaii now. posted at 22:21:37 English Earthquake Information site for the evacuation center etc. Plz RT. ht.ly/4cqam posted at 22:30:20 According to the Miyagi Policy, 200-300 bodies found in the Arahama beach. posted at 22:43:00

Wednesday, 09. March 2022

Heres Tom with the Weather

C. Wright Mills and the Battalion

On Monday, there were a few people in my Twitter feed sharing Texas A&M’s Battalion article about The Rudder Association. While Texas A&M has improved so much over the years, this stealthy group called the Rudder Association is now embarrassing the school. I was glad to read the article and reassured that the kids are alright. I couldn’t help but be reminded of the letters written t

On Monday, there were a few people in my Twitter feed sharing Texas A&M’s Battalion article about The Rudder Association. While Texas A&M has improved so much over the years, this stealthy group called the Rudder Association is now embarrassing the school. I was glad to read the article and reassured that the kids are alright. I couldn’t help but be reminded of the letters written to the Battalion in 1935 by a freshman named C. Wright Mills.

College students are supposed to become leaders of thought and action in later life. It is expected they will profit from a college education by developing an open and alert mind to be able to cope boldly with everyday problems in economics and politics. They cannot do this unless they learn to think independently for themselves and to stand fast for their convictions. Is the student at A and M encouraged to do this? Is he permitted to do it? The answer is sadly in the negative.

Little did he know that current students would be dealing with this shit 85 years later with a group of former students with nothing better to do than infiltrate student-run organizations from freshman orientation to the newspaper. But shocking no one, they were too incompetent to maintain the privacy of the school regents who met with them.

According to meeting minutes from Dec. 1, 2020, the Rudder Association secured the attendance of four members of the A&M System Board of Regents. The meeting minutes obtained by The Battalion were censored by TRA to remove the names of the regents in the meeting as well as other “highly sensitive information.”

“DO NOT USE THEIR NAMES BEYOND THE RUDDER BOARD. They do not wish to be outed,” the minutes read on the regents in attendance.

Further examination by The Battalion revealed, however, that the censored text could be copied and pasted into a text document to be viewed in its entirety due to TRA using a digital black highlighter to censor.

Well done, Battalion.

(photo is from C. Wright Mills: Letters and autobiographical writings)

Sunday, 06. March 2022

Mike Jones: self-issued

Two new COSE- and JOSE-related Internet Drafts with Tobias Looker

This week, Tobias Looker and I submitted two individual Internet Drafts for consideration by the COSE working group. The first is “Barreto-Lynn-Scott Elliptic Curve Key Representations for JOSE and COSE“, the abstract of which is: This specification defines how to represent cryptographic keys for the pairing-friendly elliptic curves known as Barreto-Lynn-Scott (BLS), for use with […]

This week, Tobias Looker and I submitted two individual Internet Drafts for consideration by the COSE working group.

The first is “Barreto-Lynn-Scott Elliptic Curve Key Representations for JOSE and COSE“, the abstract of which is:


This specification defines how to represent cryptographic keys for the pairing-friendly elliptic curves known as Barreto-Lynn-Scott (BLS), for use with the key representation formats of JSON Web Key (JWK) and COSE (COSE_Key).

These curves are used in Zero-Knowledge Proof (ZKP) representations for JOSE and COSE, where the ZKPs use the CFRG drafts “Pairing-Friendly Curves” and “BLS Signatures“.

The second is “CBOR Web Token (CWT) Claims in COSE Headers“, the abstract of which is:


This document describes how to include CBOR Web Token (CWT) claims in the header parameters of any COSE structure. This functionality helps to facilitate applications that wish to make use of CBOR Web Token (CWT) claims in encrypted COSE structures and/or COSE structures featuring detached signatures, while having some of those claims be available before decryption and/or without inspecting the detached payload.

JWTs define a mechanism for replicating claims as header parameter values, but CWTs have been missing the equivalent capability to date. The use case is the same as that which motivated Section 5.3 of JWT “Replicating Claims as Header Parameters” – encrypted CWTs for which you’d like to have unencrypted instances of particular claims to determine how to process the CWT prior to decrypting it.
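
To illustrate the JWT-side mechanism being mirrored here, the following is a minimal structural sketch in Python (not taken from the draft): the claims an application needs before decryption are copied into the protected header of an encrypted token, while the full claim set stays in the encrypted payload. The algorithm names and claim values are illustrative only.

```python
# Structural sketch of RFC 7519 Section 5.3 ("Replicating Claims as Header
# Parameters") for an encrypted JWT; the values below are illustrative.
claims = {
    "iss": "https://issuer.example",
    "sub": "user-1234",
    "exp": 1700000000,
    "private_detail": "only visible after decryption",
}

protected_header = {
    "alg": "RSA-OAEP",   # illustrative JWE key-management algorithm
    "enc": "A128GCM",    # illustrative content-encryption algorithm
    # Replicated claims: readable before decryption, and required to match
    # the corresponding claims inside the encrypted payload.
    "iss": claims["iss"],
    "sub": claims["sub"],
}

# A JWE library would then encrypt `claims` as the payload under
# `protected_header`; the draft above defines the analogous pattern for
# carrying CWT claims in COSE header parameters.
```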

We plan to discuss both with the COSE working group at IETF 113 in Vienna.


Kyle Den Hartog

Convergent Wisdom

Convergent Wisdom is utilizing the knowledge gained from studying multiple solutions that approach a similar outcome in different ways in order to choose the appropriate solution for the problem at hand.

I was recently watching an MIT OpenCourseWare video on YouTube titled “Introduction to ‘The Society of Mind’”, which is a series of lectures (or, as the author refers to them, “seminars”) by Marvin Minsky. While watching the first episode of this course, the professor puts forth an interesting theory about what grants humans the capability to handle a variety of problems while machines remain limited in their capacity to generically compute solutions to problems. In this theory he alludes to the concept that humans’ “resourcefulness” is what grants us this capability, which, to paraphrase, is the ability for humans to leverage a variety of different paths to identify a variety of solutions to the same problem, all of which can be used in a variety of different situations in order to develop a solution to the generic problem at hand. While he was describing this theory he made an offhand comment about the choice of the word “resourcefulness”, positing whether there was a shorter word to describe the concept.

This got me thinking about the linguistic precision needed to describe the concept, and I came across a very fulfilling suggestion on Stack Exchange that does just that. They suggested the word “equifinality”, which is incredibly precise, but also a bit of a pompous choice for a general audience (albeit great for the audience he was addressing). The second suggestion sent me down a tangent of thought that I find very enticing, though. “Convergent” is a word that’s commonly used to describe this in common tongue today and, more importantly, can be paired with wisdom to describe a new concept. I’m choosing to define the concept of “convergent wisdom” as utilizing the knowledge gained from studying multiple solutions that approach the same outcome in different ways in order to choose the appropriate solution for the problem at hand.

What’s interesting about the concept of convergent wisdom is that it suitably describes the feedback loop that humans exploit in order to gain the capability of generalizable problem solving. For example, in chemical synthesis the ability to understand the pathway for creating an exotic compound is nearly as important as the compound itself because it can affect the feasibility of mass production of the compound. Similarly, in manufacturing there are numerous instances of giant discoveries occurring (battery technology is the one that comes to mind first) which then fall short when it comes time to manufacture the product. In both of these instances the ability to understand the chosen path is nearly as important as the solution itself.

So why does this matter, and why define the concept? This concept seems incredibly important to the ability to build generically intelligent machines. Today, it seems much of the artificial intelligence field focuses primarily on the outcome while treating the process as a hidden and unimportant afterthought, up until the point at which the algorithm starts to produce ethically dubious outcomes.

Through the study of not only the inputs and outputs, but also the pathway by which the outcome is achieved, I believe the same feedback loop may be formed to produce generalizable computing in machines. Unfortunately, I’m no expert in this space and have tons of reading to do on the topic. So now that I’ve been able to describe and define the topic, can anyone point me to the area of study or academic literature that focuses on this aspect of AI?

Saturday, 05. March 2022

Just a Theory

How Goodreads Deleted My Account

Someone stole my Goodreads account; the company failed to recover it, then deleted it. It was all too preventable.

On 12:31pm on February 2, I got an email from Goodreads:

Hi David,

This is a notice to let you know that the password for your account has been changed.

If you did not recently reset or change your password, it is possible that your account has been compromised. If you have any questions about this, please reach out to us using our Contact Us form. Alternatively, visit Goodreads Help.

Since I had not changed my password, I immediately hit the “Goodreads Help” link (not the one in the email, mind you) and reported the issue. At 2:40pm I wrote:

I got an email saying my password had been changed. I did not change my password. I went to the site and tried go log in, but the login failed. I tried to reset my password, but got an email saying my email is not in the system.

So someone has compromised the account. Please help me recover it.

I also tried to log in, but failed. I tried the app on my phone, and had been logged out there, too.

The following day at 11:53am, Goodreads replied asking me for a link to my account. I had no idea what the link to my account was, and since I assumed that all my information had been changed by the attackers, I didn’t think to search for it.

Three minutes later, at 11:56, I replied:

No, I always just used the domain and logged in, or the iOS app. I’ve attached the last update email I got around 12:30 EST yesterday, in case that helps. I’ve also attached the email telling me my password had been changed around 2:30 yesterday. That was when I became aware of the fact that the account was taken over.

A day and a half later, at 5:46pm on the 4th, Goodreads support replied to say that they needed the URL in order to find it and investigate, and asked if I remembered the name on the account. This seemed odd to me, since until at least February 2nd it was associated with my name and email address.

I replied 3 minutes later at 5:49:

The name is mine. The username maybe? I’m usually “theory”, “itheory”, or “justatheory”, though if I set up a username for Goodreads it was ages ago and never really came up. Where could I find an account link?

Over the weekend I can log into Amazon and Facebook and see if I see any old integration messages.

The following day was Saturday the fifth. I logged into Facebook to see what I could find. I had deleted the link to Goodreads in 2018 (when I also ceased to use Facebook), but there was still a record of it, so I sent the link ID Facebook had. I also pointed out that my email address had been associated with the account for many years until it was changed on Feb 2. Couldn’t they find it in the history for the account?

I still didn’t know the link to my account, but forwarded the marketing redirect links that had been in the password change email, as well as an earlier email with a status on my reading activity.

After I sent the email, I realized I could ask some friends who I knew followed me on Goodreads to see if they could dig up the link. Within a few minutes my pal Travis had sent it to me, https://www.goodreads.com/user/show/7346356-david-wheeler. I was surprised, when I opened it, to see all my information there as I’d left it, no changes. I still could not log in, however. I immediately sent the link to Goodreads support (at 12:41pm).

That was the fifth. I did not hear back again until February 9th, when I was asked if I could provide some information about the account so they could confirm it was me. The message asked for:

Any connected apps or devices

Pending friend requests to your account

Any accounts linked to your Goodreads account (Goodreads accounts can be linked to Amazon, Apple, Google, and/or Facebook accounts)

The name of any private/secret groups of which you are a part

Any other account-specific information you can recall

Since I of course had no access to the account, I replied 30 minutes later with what information I could recall from memory: my devices, Amazon Kindle connection (Kindle would sometimes update my reading progress, though not always), membership in some groups that may or may not have been public, and the last couple books I’d updated.

Presumably, most of that information was public, and the devices may have been changed by the hackers. I heard nothing back. I sent followup inquiries on February 12th and 16th but got no replies.

On February 23rd I complained on Twitter. Four minutes later @goodreads replied and I started to hope there might be some progress again. They asked me to get in touch with Support again, which I did at 10:59am, sending all the previous information and context I could.

Then, at 12:38am, this bombshell arrived in my inbox from Goodreads support:

Thanks for your patience while we looked into this. I have found that your account was deleted due to suspected suspicious activity. Unfortunately, once an account has been deleted, all of the account data is permanently removed from our database to comply with the data regulations, which means that we are unable to retrieve your account or the related data. I know that’s not the news you wanted and I am sincerely sorry for the inconvenience. Please let me know if there’s anything else I can assist you with.

I was stunned. I mean of course there was suspicious activity, the account was taken over 19 days previously! As of the 5th when I found the link it still existed, and I had been in touch a number of times previously. Goodreads knew that the account had been reported stolen and still deleted it?

And no chance of recovery due to compliance rules? I don’t live in the EU, and even if I was subject to the GDPR or CCPA, there is no provision to delete my data unless I request it.

WTAF.

So to summarize:

Someone took control of my account on February 2

I reported it within hours

On February 5 my account was still on Goodreads

We exchanged a number of messages

By February 23 the account was deleted with no chance of recovery due to suspicious activity

Because of course there was suspicious activity. I told them there was an issue!

How did this happen? What was the security configuration for my account?

I created an entry for Goodreads in 1Password on January 5, 2012. The account may have been older than that, but for at least 10 years I’ve had it, and used it semi-regularly.

The password was 16 random ASCII characters generated by 1Password on October 27, 2018. I create unique random passwords for all of my accounts, so it would not be found in a breached database (and I have updated all breached accounts 1Password has identified).

The account had no additional factors of authentication or fallbacks to something like SMS, because Goodreads does not offer them. There was only my email address and password.

On February 2nd someone changed my password. I had clicked no links in emails, so phishing is unlikely. Was Goodreads support social-engineered to let someone else change the password? How did this happen?

I exchanged multiple messages with Goodreads support between February 2 and 23rd, to no avail. By February 23rd, my account was gone with all my reviews and reading lists.

Unlike Nelson, whose account was also recently deleted without chance of recovery, I had not been making any backups of my data. It never occurred to me, perhaps because I never put a ton of effort into my Goodreads account, mostly just tracked reading and a few brief reviews. I’ll miss my reading list the most. Will have to start a new one on my own machines.

Through all this, Goodreads support was polite but not particularly responsive. Days and then weeks went by without a response. The company deleted the account for suspicious activity and claims no path to recovery for the original owner. Clearly the company doesn’t give its support people the tools they need to adequately handle cases such as this.

I can think of a number of ways in which these situations can be better handled and even avoided. In fact, given my current job designing identity systems I’m going to put a lot of thought into it.

But sadly I’ll be trusting third parties less with my data in the future. Redundancy and backups are key, but so is adequate account protection. Letterboxd, for example, has no multifactor authentication features, making it vulnerable should someone decide it’s worthwhile to steal accounts to spam reviews or try to artificially pump up the scores for certain titles. Just made a backup.

You should, too, and backup your Goodreads account regularly. Meanwhile, I’m on the lookout for a new social reading site that supports multifactor authentication. But even with that, in the future I’ll post reviews here on Just a Theory and just reference them, at best, from social sites.

Update April 3, 2022: This past week, I finally got some positive news from Goodreads, two months after this saga began:

The Goodreads team would like to apologize for your recent poor experience with your account. We sincerely value your contribution to the Goodreads community and understand how important your data is to you. We have investigated this issue and attached is a complete file of your reviews, ratings, and shelvings.

And that’s it, along with some instructions for creating a new account and loading the data. Still no account recovery, so my old URL is dead and there is no information about my Goodreads friends. Still, I’m happy to at least have my lists and reviews recovered. I imported them into a new Goodreads account, then exported them again and imported them into my new StoryGraph profile.


Thursday, 03. March 2022

Mike Jones: self-issued

Minor Updates to OAuth DPoP Prior to IETF 113 in Vienna

The editors have applied some minor updates to the OAuth DPoP specification in preparation for discussion at IETF 113 in Vienna. Updates made were: Renamed the always_uses_dpop client registration metadata parameter to dpop_bound_access_tokens. Clarified the relationships between server-provided nonce values, authorization servers, resource servers, and clients. Improved other descriptive wording.

The editors have applied some minor updates to the OAuth DPoP specification in preparation for discussion at IETF 113 in Vienna. Updates made were:

Renamed the always_uses_dpop client registration metadata parameter to dpop_bound_access_tokens.

Clarified the relationships between server-provided nonce values, authorization servers, resource servers, and clients.

Improved other descriptive wording.
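
For context, a hypothetical client registration document using the renamed parameter might look like the following sketch (the client name and redirect URI are placeholders, not values from the specification):

```python
import json

# Hypothetical dynamic client registration metadata; only the
# dpop_bound_access_tokens parameter comes from the specification.
client_metadata = {
    "client_name": "example-client",                 # placeholder
    "redirect_uris": ["https://client.example/cb"],  # placeholder
    "dpop_bound_access_tokens": True,                # was: always_uses_dpop
}

print(json.dumps(client_metadata, indent=2))
```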

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-06.html

Wednesday, 02. March 2022

Heres Tom with the Weather

Good Paper on Brid.gy

I read Bridging the Open Web and APIs: Alternative Social Media Alongside the Corporate Web because it was a good opportunity to fill some holes in my knowledge about the Indieweb and Facebook. Brid.gy enables people to syndicate their posts from their own site to large proprietary social media sites. Although I don’t use it myself, I’m often impressed when I see all the Twitter “likes” and

I read Bridging the Open Web and APIs: Alternative Social Media Alongside the Corporate Web because it was a good opportunity to fill some holes in my knowledge about the Indieweb and Facebook.

Brid.gy enables people to syndicate their posts from their own site to large proprietary social media sites.

Although I don’t use it myself, I’m often impressed when I see all the Twitter “likes” and responses that are backfed by brid.gy to the canonical post on a personal website.

The paper details the challenging history of providing the same for Facebook (in which even Cambridge Analytica plays a part) and helped me appreciate why I never see similar responses from Facebook on personal websites these days.

It ends on a positive note…

while Facebook’s API shutdown led to an overnight decrease in Bridgy accounts (Barrett, 2020), other platforms with which Bridgy supports POSSE remain functional and new platforms have been added, including Meetup, Reddit, and Mastodon.

Monday, 28. February 2022

Randall Degges

Journaling: The Best Habit I Picked Up in 2021

2021 was a challenging year in many ways. Other than the global pandemic, many things changed in my life (some good, some bad), and it was a somewhat stressful year.

In March of 2021, I almost died due to a gastrointestinal bleed (a freak accident caused by a routine procedure). Luckily, I survived the incident due to my amazing wife calling 911 at the right time and the fantastic paramedics and doctors at my local hospital, but it was a terrifying ordeal.

While I was in recovery, I spent a lot of time thinking about what I wanted to do when feeling better. How I wanted to spend the limited time I have left. There are lots of things I want to spend my time doing: working on meaningful projects, having fun experiences with family and friends, going on camping and hiking trips, writing, etc.

The process of thinking through everything I wanted to do was, in and of itself, incredibly cathartic. The more time I spent reflecting on my thoughts and life, the better I felt. There’s something magical about taking dedicated time out of your day to write about your thoughts and consider the big questions seriously.

Without thinking much about it, I found myself journaling every day.

It’s been just about a year since I first started journaling, and since then, I’ve written almost every day with few exceptions. In this time, journaling has made a tremendous impact on my life, mood, and relationships. Journaling has quickly become the most impactful of all the habits I’ve developed over the years.

Benefits of Journaling

There are numerous reasons to journal, but these are the primary benefits I’ve personally noticed after a year of journaling.

Journaling helps clear your mind.

I have a noisy inner monologue, and throughout the day, I’m constantly being interrupted by ideas, questions, and concerns. When I take a few minutes each day to write these thoughts down and think through them, it puts my brain at ease and allows me to relax and get them off my mind.

Journaling helps put things in perspective.

I’ve often found myself upset or frustrated about something, only to realize later in the day, while writing, how insignificant the problem is. The practice of writing things down brings a certain level of rationality to your thoughts that isn’t always immediately apparent.

I often discover that even the “big” problems in my life have obvious solutions I would never have noticed had I not journaled about them.

Journaling preserves memories.

My memory is terrible. If you asked me what I did last month, I’d have absolutely no idea.

Before starting a journal, the only way I could reflect on memories was to look through photos. The only problem with this is that often, while I can remember bits and pieces of what was going on at the time, I can’t remember everything.

As I’m writing my daily journal entry, I’ll include any relevant photos and jot down some context around them – I’ve found that looking back through these entries, with both pictures and stories, allows me to recall everything.

And… As vain as it is, I hope that someday I’ll be able to pass these journals along to family members so that, if they’re interested, they can get an idea of what sort of person I was, what I did, and the types of things I thought about.

Journaling helps keep your goals on track.

It’s really easy to set a personal goal and forget about it – I’ve done it hundreds of times. But, by writing every day, I’ve found myself sticking to my goals more than ever.

I think this boils down to focus. It would be hard for me to journal every day without writing about my goals and how I’m doing, and that little bit of extra focus and attention goes a long way towards helping me keep myself honest.

It’s fun!

When I started journaling last year, I didn’t intend to do it every day. It just sort of happened.

Each day I found myself wanting to write down some thought or idea, and the more I did it, the more I enjoyed it. Over time, I noticed that I found myself missing it on the few occasions I didn’t journal.

Now, a year in, I look forward to writing a small journal entry every day. It’s part of my wind-down routine at night, and I love it.

Keeping a Digital and Physical Journal

Initially, when I started keeping a journal, I had a few simple goals:

I wanted to be able to quickly write (and ideally include photos) in my journal
I wanted it to be easy to write on any device (phone, laptop, iPad, etc.)
I wanted some way to physically print my journal each year so that I could have a physical book to look back at any time I want – as well as to preserve the memories as digital stuff tends to disappear eventually

With these requirements in mind, I did a lot of research, looking for a suitable solution. I looked at various journaling services and simple alternatives (physical journals, Google Docs, Apple Notes, etc.).

In the end, I decided to start using the Day One Mac app (works on all Apple devices). I cannot recommend it highly enough if you’re an Apple user.

NOTE: I have no affiliation whatsoever with the Day One app. But it’s incredible.

The Day One app looks beautiful, syncs your journals privately using iCloud, lets you embed photos (and metadata) into entries in a stylish and simple way, makes it incredibly easy to have multiple journals (by topic), track down any entries you’ve previously created, and a whole lot more.

For me, the ultimate feature is the ability to easily create a beautiful looking physical journal whenever I want. Here’s a picture of my journal from 2021.

It’s a bound book with high-quality photos, layouts, etc. It looks astounding. You can customize the book’s cover, include select entries, and make a ton of other customizations I won’t expand on here.

So, my recommendation is that if you’re going to start a journal and want to print it out eventually, use the Day One app – it’s been absolutely 10⁄10 incredible.

Wednesday, 23. February 2022

MyDigitalFootprint

Ethics, maturity and incentives: plotting on Peak Paradox.

Ethics, maturity and incentives may not appear obvious or natural bedfellows.  However, if someone else’s incentives drive you, you are likely on a journey from immaturity to Peak Paradox.  A road from Peak Paradox towards a purpose looks like maturity as your own incentives drive you. Of note, ethics change depending on the direction of travel.  

----

In psychology, maturity can be operationally defined as the level of psychological functioning one can attain, after which the level of psychological functioning no longer increases with age.  Maturity is the state, fact, or period of being mature.

Whilst "immature" means not fully developed, or having an emotional or intellectual development appropriate to someone younger, I want to use the state of immaturity: the state where one is not yet fully mature.

Incentives are a thing that motivates or encourages someone to do something.

Peak Paradox is where you try to optimise for everything but cannot achieve anything as you do not know what drives you and are constantly conflicted. 

Ethics is a branch of philosophy that "involves systematising, defending, and recommending concepts of right and wrong behaviour".  Ethical and moral principles govern a person's behaviour.


The Peak Paradox framework is below.

 


When we start our life journey, we travel from being immature to mature.  Depending on your context, e.g. location, economics, social, political and legal, you will naturally be associated with one of the four Peak Purposes. It might not be extreme, but you will be framed towards one of them (bias).  This is the context you are in before determining your own purpose, mission or vision.  Being born in a place with little food and water, there is a natural affinity to survival.  Being born in a community that values everyone and everything, you will naturally align to a big society.  Born to the family of a senior leader in a global industry, you will be framed to a particular model.  Being a child of Murdoch, Musk, Zuckerberg, Trump, Putin, Rothschild, Gates,  etc. - requires assimilation to a set of beliefs. 

Children of celebrities break from their parents' thinking, as have we and as do our children. Political and religious chats with teenagers are always enlightening. As children, we travel from the contextual purpose we are born into and typically head towards reaching Peak Paradox - on this journey, we are immature. (Note: it is possible to go the other way and become more extreme in purpose than your parents.) Later in life and with the benefits of maturity, we observe this journey from the simplicity of binary choices (black and white ethics) towards a more nuanced mess at Peak Paradox. At Peak Paradox, we sense a struggle to make sense of the different optimisations, drivers, purposes, incentives and rewards that others have. This creates anxiety, tension and conflict within us. During this journey from a given purpose to Peak Paradox, the incentives given to you are designed to maintain the original purpose, to keep you following that ideal. Incentives frame us and keep us in a model which is sometimes hard to break.

It is only when we live with complexity and are able to appreciate others' purposes, optimisations, and drivers that we will also gain clarity on our own passion, purpose or mission. By living with Peak Paradox, we change from being driven by others' incentives to uncovering our own affinity; this is where we start to align to what we naturally believe in and find what fits our skin. 

I have written before that we have to move from Peak Paradox towards a purpose if we want to have clarity of purpose and achieve something.

Enter Ethics

Suppose maturity is the transition from our actions being determined by others and following their ethical or moral code to determining what type of world or society we want to be part of. In that case, we need to think about the two journeys.  

On the route from birth to Peak Paradox, I have framed this as a route from immaturity to living with complexity. On the route in, we live by others' moral and ethical codes and are driven by their incentives. As we leave Peak Paradox and head to a place where we find less tension, conflict and anxiety, a place with a natural affinity to what we believe, we create our own moral and ethical compass/code and become driven by our own motivations.

We should take a fresh perspective on ethics and first determine which direction someone is heading before we make a judgement.  This is increasingly important in #cyberethics and #digitalethics as we only see the point and have no ability to create a bearing or direction.

The purpose of Peak Paradox

As maturity heads towards being mature, we move in and out of living at Peak Paradox from different "Purposes"; I am sure this is an iterative process. The purpose of Peak Paradox is to be in a place where you are comfortable with complexity, but it is not a place to stay. It is like a holiday home: good to go there now and again, but it does not represent life. The question we have is how we know when we are at Peak Paradox, probably because our north star has become a black hole! The key message here is that some never escape the Peak Paradox black hole, finding they live in turmoil (driven by others' incentives that are designed to keep them there) and never finding their own passion, incentive or motivation. The complexity death vortex is where you endlessly search for a never reachable explanation of everything, as it is all too interconnected and interdependent to unravel. Leaders come out from Peak Paradox knowing why they have a purpose and a direction.

The Journey

Imagine you are born into a celebrity household. Over time you see the world is more complicated; you work for a company believing that money, bonuses and incentives matter. Over time you come to understand the tensions such a narrow view brings. You search for something better, committing to living a simpler life, changing your ethics and moral code. This still creates tension, and you search for peace and harmony, where you find less tension and more alignment; when you arrive there, you have a unique code because you live with complexity and understand different optimisations. It does not scale, and few can grasp your message.


Where are you on the journey?  




Monday, 21. February 2022

Mike Jones: self-issued

Four Months of Refinements to OAuth DPoP

A new draft of the OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP) specification has been published that addresses four months’ worth of great review comments from the working group. Refinements made were:

Added Authorization Code binding via the dpop_jkt parameter.
Described the authorization code reuse attack and how dpop_jkt mitigates it.
Enhanced description of DPoP proof expiration checking.
Described nonce storage requirements and how nonce mismatches and missing nonces are self-correcting.
Specified the use of the use_dpop_nonce error for missing and mismatched nonce values.
Specified that authorization servers use 400 (Bad Request) errors to supply nonces and resource servers use 401 (Unauthorized) errors to do so.
Added a bit more about ath and pre-generated proofs to the security considerations.
Mentioned confirming the DPoP binding of the access token in the list in (#checking).
Added the always_uses_dpop client registration metadata parameter.
Described the relationship between DPoP and Pushed Authorization Requests (PAR).
Updated references for drafts that are now RFCs.

I believe this brings us much closer to a final version.

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-05.html

Sunday, 20. February 2022

Heres Tom with the Weather

Friday, 18. February 2022

Identity Woman

Event Series: Making the Augmented Social Network Vision a Reality

This series began in November with Logging Off Facebook: What Comes Next? The 2nd event will be March 4th online Both events are going to be Open Space Technology for three sessions. We will co-create the agenda the opening hour. The 3rd Event will be April 1 online. Building on the previous one we will […]

The post Event Series: Making the Augmented Social Network Vision a Reality appeared first on Identity Woman.

Wednesday, 16. February 2022

Mike Jones: self-issued

JWK Thumbprint URI Draft Addressing Working Group Last Call Comments

Kristina Yasuda and I have published an updated JWK Thumbprint URI draft that addresses the OAuth Working Group Last Call (WGLC) comments received. Changes made were:

Added security considerations about multiple public keys corresponding to the same private key.
Added hash algorithm identifier after the JWK thumbprint URI prefix to make it explicit in a URI which hash algorithm is used (an example is sketched below).
Added reference to a registry for hash algorithm identifiers.
Added SHA-256 as a mandatory to implement hash algorithm to promote interoperability.
Acknowledged WGLC reviewers.
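For illustration (this example is mine rather than quoted from the draft, and it reuses the example JWK thumbprint value from RFC 7638), a JWK Thumbprint URI carrying an explicit hash algorithm identifier would look something like:

urn:ietf:params:oauth:jwk-thumbprint:sha-256:NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs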

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-01.html

Tuesday, 15. February 2022

MyDigitalFootprint

How do you recognise when your north star has become a black hole?

This post is about being lost — without realising it.

source: https://earthsky.org/space/x9-47-tucanae-closest-star-to-black-hole/

I have my NorthStar, and I am heading for it, but somehow the gravitational pull of a black hole we did not know existed got me without realising it! I am writing about becoming lost on a journey as I emerge from working from home, travel restrictions, lockdowns and masks; to find nothing has changed, but everything has changed.

The hope of a shake-up or wake-up call from something so dramatic as a global pandemic is immediately lost as we re-focus on how to pay for the next meal, drink, ticket, bill, rent, mortgage, school fee or luxury item. Have we become so wedded to an economic model that we cannot see that we will not get to our imagined NorthStar?

I feel right now that I have gone into a cul-de-sac and cannot find the exit. The road I was following had a shortcut, but my journey planner assumed I was walking and could hop over the gate onto the public path, not the reality that I was in my car.

I wrote about “The New Fatigue — what is this all about?” back in Feb 2021. I could not pinpoint how I was productive, maintained fitness, and ate well, but something was missing — human contact and social and chemistry-based interactions. I posted a view about the 7 B’s and how we were responding to a global pandemic; we lost #belonging. I wrote more on this under a post about Isolation — the 8th deadly sin.

Where am I going with this? We want a radical change masked as a "New Normal, something better", but we are already finding nothing has actually changed on the journey we have been on, and I am now questioning whether the bright north star I had has lost its sparkle!

I have used heuristics and rules to help me for the longest time; anyone on the neuro-diverse spectrum has to have them because without them surviving becomes exhausting. However, these shortcuts (when created and learnt) also mean I stopped questioning why. Now that the very fabric that set up my heuristics has changed, those rules don’t necessarily work or apply. We love a shortcut because it gets us out of trouble, we love the quick route because it works, we love an easy known trusted route because we don’t have to think. We use them all the time in business to prioritise. “What is the ROI on this?” In truth, we either don’t have the resources or cannot be bothered to spend the time to look in detail, so we use the blunt tool (ROI) to make a decision.

My tools don’t work (as well or at all)

I found my NorthStar with my tools. I was navigating to the north star with my tools. My tools did not tell me I was heading past a black hole that could suck me in. I am not sensing I am lost as my tools are not telling me; all the things we did pre-pandemic don’t work as well on the other side — but nothing other than feeling lost is telling me this. We have not gone back to everything working and still have not created enough solid ground to build new rules, so we are now lost, looking for a new NorthStar with tools that do not work.

Our shortcuts sucked us in and took away the concept that we need to dwell, be together, work it out, and take time. Our tools and shortcuts reduced our time frames and tricked us into thinking they would work forever. The great quote from Anthony Zhou below assumes you know where you are going. That is not true.

How do I recognise that my north star has become a black hole because my shortcuts and rules no longer work, creating fatigue I cannot describe, and I feel lost? There is a concept of anchor points in philosophy, and it is a cognitive bias. When you lose your anchor in a harbour, you drift (ignoring sea anchors for those who sail). The same can be said when you lose your own personal anchor points that have provided the grounding for your decision making. Routines and experience are not anchor points. But the pandemic looks to have cut the ties we had to anchor points, so we feel all somewhat lost and drifting. The harder we try to re-apply the old rules, the more frustrated we become that nothing works. Perhaps it is time to make some new art such that we can discover the new rules and find some new anchor points. Then, maybe I will feel less lost?


Tuesday, 08. February 2022

MyDigitalFootprint

Hostile environments going in the right direction; might be the best place to work?

Whilst our universe is full of hostile places, and they are engaging in their own right, I want to unpack the thinking and use naturally occurring hostile environments as an analogy to help unpack complex decision making in hostile to non-hostile work environments.

----

I enjoyed reading Anti-Fragile in 2013; it is the book about things that gain from disorder by Nassim Nicholas Taleb. "Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better." When writing this, I have the same problem looking for a direct opposite of a Hostile Environment, as in an extreme ecosystem (ecology),  and whilst I have opted for a non-hostile environment, anti-hostile would be better.

In nature, a hostile or extreme environment is a habitat characterised by harsh environmental conditions, beyond the optimal range for the development of humans, for example, pH 2 or 11, −20°C or 113°C, saturating salt concentrations, high radiation, 200 bars of pressure, among others. These hostile places are super interesting because life is possible there, and it is from such environments that life emerged. I will extend the thinking and use this as an analogy to help unpack complex decision making by comparing hostile to non-hostile, extreme to amicable.

 

Time apparently moves more slowly in a hostile environment than in a non-hostile one. Time, in this case, is about the period between changes. Change in a hostile environment is challenging and has to be done in tiny steps over a long period of time. Rapid change increases the risk of death and non-survival. To survive in a hostile environment, the living thing needs stability and resilience. The processes (chemistry) and methods (biology) must be finely adjusted to become incredibly efficient and effective - efficacy matters. Change is incremental and very slow. To survive, sharing is a better option; having facts (data) matters, you survive together, and there are few paradoxes. Of note is that hostile environments will become less hostile; the earth, which moved from an acidic creation to our current diversity, is a case in point.

In a non-hostile environment, time can move fast between iteration/ adaptation.  The risk of a change leading to death is lower as the environment is tolerant of change.   The jumps can be more significant with fewer threats.  Because the environment has a wide tolerance, it is less sensitive to risk (5 sigma deviation is acceptable); therefore, you can have large scale automation, programmable algorithms and less finely tuned processes, methods and rules.  Innovation and change will come thick and fast as the environment is amiable and amicable.  The time between this innovation and adaptation is fast.   The environment creates more volatility, uncertainty, complexity and ambiguity.   Politics and power focus on control over survival.  The world is full of paradoxes.  Non-hostile environments will become more hostile. 

----

Yes, there are problems with all analogies, and this one breaks down; however, there is a principle here worth thinking about. In hostile environments, there are fewer paradoxes. I would argue that this is because survival is one driving purpose. Survival is one of the four opposing purposes in the Peak Paradox model. In non-hostile environments, you can optimise for more than one thing. Indeed you can have many purposes all competing. This leads to many paradoxes. In work environments where senior leadership is unable to comprehend paradoxes, I also observe hostile environments (different to natural ones but just as toxic). Where I find teams that embrace VUCA, innovation and change, and can see the paradoxes in their data, facts and knowledge, I observe congenial, amenable, non-hostile and anti-hostile environments.

The key here is not this observation but the direction of travel. Knowing which camp you are in is essential, but so too is knowing the direction. How to measure Peak-Hostility is going to be another post. Non-hostile places tend to become more hostile because of politics and power dynamics; dealing with paradoxes is hard work. Because they demand working together and clarity of purpose, hostile environments can become less hostile.

If we plot this thinking on the Peak Paradox framework, I believe it will be difficult to escape the dynamics of Peak Human Purpose (survival) until scarcity is resolved. At Peak Individual Purpose, the few will control, but this creates a hostile environment for the majority. At Peak Work, we observe fierce competition between the two camps, where hostile can win by focussing on costs, but non-hostile wins through innovation. At Peak Society Purpose, there is something unique, as non-hostile could lead to anti-hostile.

As to decision-making, what becomes critical is whether your decision-making processes match your (hostile/non-hostile) environment and direction, as they demand very different approaches. Hostile, in many ways, is more straightforward, as there is a much more defined purpose to which decisions can be aligned. Non-hostile environments introduce paradoxes, optimisation and complexity into the processes, with many interested stakeholders. If there is a mismatch in methods, this can be destructive. There is much more to think about.

 

 

 


Monday, 07. February 2022

Werdmüller on Medium

The web is a miracle

Not everything has to be a business.

Continue reading on Medium »

Thursday, 03. February 2022

Altmode

Chasing Power Anomalies

Recently, we and a number of our neighbors have been noticing our lights flickering in the evening and early morning. While we have considered it to be mostly an annoyance, this has bothered some of our neighbors enough that they have opened cases with the utility and began raising the issue on our street mailing list.

Pacific Gas and Electric (PG&E) responded to these customers with a visit, and in some cases replaced the service entrance cable to the home. In at least one case PG&E also said they might need to replace the pole transformer, which would take a few months to complete. I have heard no reports that these efforts have made any difference.

This isn’t our first recent challenge with voltage regulation in our neighborhood. Our most recent issue was a longer-term voltage regulation problem that occurred on hot days, apparently due to load from air conditioners and the fact that our neighborhood is fed by older 4-kilovolt service from the substation. This is different, and raised several questions:

How local are the anomalies? Are neighbors on different parts of the street seeing the same anomalies, or are they localized to particular pole transformers or individual homes?
What is the duration and nature of the anomalies? Are they only happening in the evening and early morning, or do we just notice them at these times?

To try to answer these questions, I found a test rig that I built several years ago when we were noticing some dimming of our lights, apparently due to neighbors’ air conditioners starting on summer evenings. The test rig consists of a pair of filament transformers: 110 volt to 6 volt transformers that were used in equipment with electronic tubes, which typically used 6 volts to heat the tube’s filament. The transformers are connected in cascade to reduce the line voltage to a suitable level for the line-in audio input on a computer. An open-source audio editing program, Audacity, is used to record the line voltage. I often joke that this is a very boring recording: mostly just a continuous 60 hertz tone.
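As a back-of-the-envelope check of that step-down (my numbers, not the post's: ideal transformer ratios and a 120 V RMS nominal line voltage are assumed), cascading two 110 volt to 6 volt transformers scales the line voltage by roughly (6/110) squared:

awk 'BEGIN { ratio = (6/110) * (6/110); printf "%.2f V at the audio line-in for a 120 V RMS input\n", 120 * ratio }'

That lands around a third of a volt, which is comfortably within a typical line-in range.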

At the same time, I started recording the times our lights flickered (or my uninterruptable power supply clicked, another symptom). I asked my neighbors to record when they see their lights flicker and report that back to me.

I created a collection of 24-hour recordings of the power line, and went looking for the reported power anomalies. It was a bit of a tedious process, because not everyone’s clocks are exactly synchronized. But I was successful in identifying several power anomalies that were observed by neighbors on opposite ends of the street (about three blocks). Here’s a typical example:

Typical power anomaly

As you can see, the problem is very short in duration, about 60 milliseconds or so.

I was getting a lot of flicker reports, and as I mentioned, searching for these anomalies was tedious. So I began looking at the analysis capabilities of Audacity. I noticed a Silence Finder plug-in and attempted to search for the anomalies using that tool. But Silence Finder is designed to find the kind of silence that one might find between tracks on an LP: very quiet for a second or so. Not surprisingly, Silence Finder didn’t find anything for me.

I noticed that Silence Finder is written in a specialized Lisp-like signal processing language known as Nyquist. So I had a look at the source code, which is included with Audacity, and was able to understand quite a bit of what was going on. For efficiency reasons, Silence Finder down-samples the input data so it doesn’t have to deal with as much data. In order to search for shorter anomalies, I needed to change that, as well as the user interface limits on minimum silence duration. Also, the amplitude of the silence was expressed in dB, which makes sense for audio but I needed more sensitivity to subtle changes in amplitude. So I changed the silence amplitude from dB to a linear voltage value.
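For reference, the dB-to-linear amplitude conversion involved in that last change is linear = 10^(dB/20); for example (the -30 dB value here is only an illustration, not a threshold taken from the plug-in):

awk -v db=-30 'BEGIN { printf "%.4f\n", 10 ^ (db / 20) }'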

The result was quite helpful. The modified plug-in, which I called “Glitch Finder”, was able to quite reliably find voltage anomalies. For example:

Power recording 1/29/2022-1/30/2022

The label track generated by Glitch Finder points out the location of the anomalies (at 17:05:12, 23:00:12, and 7:17:56 the next morning), although they’re not visible at this scale. Zoom in a few times and they become quite obvious:

Power anomaly at 1/30/2022 7:17:56

Thus far I have reached these tentative conclusions:

The power problems are primarily common to the neighborhood, and unlikely to be caused by a local load transient such as plugging an electric car in.
They seem to be concentrated mainly in the evening (4-11 pm) and morning (6-10 am). These seem to be times when power load is changing, due to heating, cooking, lighting, and home solar power systems going off and on at sunset and sunrise.
The longer term voltage goes up or down a bit at the time of a power anomaly. This requires further investigation, but may be due to switching activity by the utility.

Further work

As usual, a study like this often raises new questions about as quickly as it answers questions. Here are a few that I’m still curious about.

What is the actual effect on lights that causes people to notice these anomalies so easily? I currently have an oscilloscope connected to a photoelectric cell, set to trigger when the lights flash. It will be interesting to see how that compares with the magnitude of the anomaly.
Do LED lights manifest this more than incandescent bulbs? It seems unlikely that such a short variation would affect the filament temperature of an incandescent bulb significantly.
Do the anomalies correlate with any longer-term voltage changes? My test rig measures long-term voltage in an uncalibrated way, but the processing I’m currently doing doesn’t make it easy to look at longer-term voltage changes as well.

Wednesday, 02. February 2022

Moxy Tongue

Bureaucratic Supremacy

The fight against "Bureaucratic Supremacy" affects us all. Time for unity beyond the dysfunctional cult politics driving people apart from their independence.
Words are thinking tools; used wrongly, contrived inappropriately, disseminated poorly, words can cause great harm to people and society. This is being demonstrated for all people to witness in 2020++
History is long; let the dead past bury its dead. You are here, now, and the structure of your participation in this life, the administration of your Rights in civil society, matters a great deal for the results both are capable of rendering.
"We the people" is an example of how words crafted by intent, can be manipulated over time to render outcomes out-of-step with their original intent. Once upon a time.. "people" was an easy word to define. An experiment in self-governance, unique in all the world and history, arrived because people made it so. Fast forward to 2022, and people no longer function as "people" under the law; in lieu of basic observable fact, a bureaucratic interpretation and abstraction of intent has been allowed to take root among people.
Oft confused in 2020++ with phrases like "white supremacy", or "Institutional racism", methods of administrative bureaucracy have taken a supreme role in defining and operationalizing Rights defined for "people". This "bureaucratic supremacy" has allowed abstraction of words like "people" to render a Government operated by bureaucrats, not under the authority of the people "of, by, for" whom it was originally conceived and instantiated, but instead under methods of processing bureaucratic intent. From the point-of-view of the oppressed historically, the bureaucracy has skin in the game, and domination is absolute. But, from the point-of-view of the unexperienced future derived "of, by, for" people, skin has nothing to do with it. History's leverage is one of administrative origin.
Pandemics will come and go in time; experiences of the administrative machinery that guarantees the integrity, security and continued self-governance of society by people should never be overlooked. Especially in the context of now "computational" Constitutional Rights for people, (not birth certificates - vaccination passport holders - or ID verification methods & artifacts poorly designed) where operational structure determines the integrity of operational run time results. Literature might say "Freedom of Speech" is a Right, but if the administrative system does not compute said Rights, then they cease to exist. 
"Bureaucratic Supremacy" has a predictable moat surrounding its practices; credentialed labor. Employees with labor certifications are only as useful as the validity of the credential under inspection and in practice. Administering the permission to be hired, work, contribute value and extend a meaningful voice into a civil system is easily sequestered if/when that credential is meaningless under inspection, and is only used as means of identifying bureaucratic compliance.
Bureaucratic supremacy is the direct result of bureaucratic compliance; people, functioning as "people" willing to cede their inalienable rights in exchange for a paycheck, yield a systemic approach to human management that often counters the initial intent and integrity of a systems existence. Often heard when something happens that lacks systemic integrity, "I was only doing my job" represents an output of bureaucratic fraud, whereby people claim plausible deniability of responsibility and accountability based on the structure of their working efforts. Corporate law is founded on the premise of "liability control", whereby a resulting bureaucracy allows real human choices to function as bureaucratic outcomes lacking any real direct human definition. People are no longer operating as "people" when abstracted by the law in such ways, and the world over, systems of bureaucracy with historic significance control and confuse the interpretation of results that such a system of labor induces.
Rooted at the origin of a self-governed civil society is an original act of human Sovereignty. In America, this act is writ large by John Hancock for a King's benefit, as well as every administrative bureaucracy the world will ever come to experience. People declare independence from bureaucracies by personal Sovereign authority. This is the root of Sovereign authority, and can never be provisioned by a bureaucracy. Bureaucratic supremacy is a perversion of this intent, and labor credentials make it so.
Where do "we" go from here?
People, Individuals all, is the only living reality of the human species that will ever actually exist. People, living among one another, never cease to function as Individuals, and any systemic process that uses a literary abstraction, or computational abstraction to induce "we" into a bureaucratic form is an aggressive act of fraud against Humanity, and the Sovereignty defined "of, by, for" such people.
In America, people are not the dog of their Government; self-governance is man's best friend. The order of operations in establishing such a "more perfect Union" is critical for its sustained existence. Be wary of listening to lifetime bureaucrats; they will speak with words that are no longer tools for human advancement, but instead, are designed to reinforce the "bureaucratic supremacy" of the authority derived by their labor credentials. Inspect those credentials directly to ensure they are legitimate, for false labor credentials are endemic.
Structure yields results; fraud by bureaucracy and Rights for people are juxtapositional and never exist in the same place at the same time. 
Think About It.. More Individuals in Civil Society Needed: https://youtu.be/KHbzSif78qQ

Tuesday, 01. February 2022

Heres Tom with the Weather

Although it makes a good point, the "False balance" article seems to accept the widely held assumption that Rogan is just "letting people voice their views" without interrupting them but he did so recently with guest Josh Szeps to wrongly argue against covid myocarditis evidence.

Monday, 31. January 2022

Identity Woman

Reality 2.0 Podcast: ID.me Vs. The Alternatives

I chatted with Katherine Druckman and Doc Searls of Reality 2.0 about the dangers of ID.me, a national identity system created by the IRS and contracted out to one private company, and the need for the alternatives, decentralized systems with open standards. 

The post Reality 2.0 Podcast: ID.me Vs. The Alternatives appeared first on Identity Woman.

Sunday, 30. January 2022

Jon Udell

Life in the neighborhood

I’ve worked from home since 1998. All along I’ve hoped many more people would enjoy the privilege and share in the benefits. Now that it’s finally happening, and seems likely to continue in some form, let’s take a moment to reflect on an underappreciated benefit: neighborhood revitalization.

I was a child of the 1960s, and spent my grade school years in a newly-built suburb of Philadelphia. Commuter culture was well established by then, so the dads in the neighborhood were gone during the day. So were some of the moms, mine included, but many were at home and were able to keep an eye on us kids as we played in back yards after school. And our yards were special. A group of parents had decided not to fence them, thus creating what was effectively a private park. The games we played varied from season to season but always involved a group of kids roaming along that grassy stretch. Nobody was watching us most of the time. Since the kitchens all looked out on the back yards, though, there was benign surveillance. Somebody’s mom might be looking out at any given moment, and if things got out of hand, somebody’s mom would hear that.

For most kids, a generation later, that freedom was gone. Not for ours, though! They were in grade school when BYTE Magazine ended and I began my remote career. Our house became an after-school gathering place for our kids and their friends. With me in my front office, and Luann in her studio in the back, those kids enjoyed a rare combination of freedom and safety. We were mostly working, but at any given moment we could engage with them in ways that most parents never could.

I realized that commuter culture had, for several generations, sucked the daytime life out of neighborhoods. What we initially called telecommuting wasn’t just a way to save time, reduce stress, and burn less fossil fuel. It held the promise of restoring that daytime life.

All this came back to me powerfully at the height of the pandemic lockdown. Walking around the neighborhood on a weekday afternoon I’d see families hanging out, kids playing, parents working on landscaping projects and tinkering in garages, neighbors talking to one another. This was even better than my experience in the 2000s because more people shared it.

Let’s hold that thought. Even if many return to offices on some days of the week, I believe and hope that we’ve normalized working from home on other days. By inhabiting our neighborhoods more fully on weekdays, we can perhaps begin to repair a social fabric frayed by generations of commuter culture.

Meanwhile here is a question to ponder. Why do we say that we are working from and not working at home?


Randall Degges

How to Calculate the Energy Consumption of a Mac

I’m a bit of a sustainability nerd. I love the idea of living a life where your carbon footprint is neutral (or negative) and you leave the world a better place than it was before you got here.

While it’s clear that there’s only so much impact an individual can have on carbon emissions, I like the idea of working to minimize my personal carbon footprint. This is a big part of the reason why I live in a home with solar power, drive an electric vehicle, and try to avoid single-use plastics as much as possible.

During a recent impact-focused hackathon at work (come work with me!), I found myself working on an interesting sustainability project. Our team’s idea was simple: because almost all Snyk employees work remotely using a Mac laptop, could we measure the energy consumption of every employee’s Mac laptop to better understand how much energy it takes to power employee devices, as well as the amount of carbon work devices produce?

Because we know (on average) how much carbon it takes to produce a single kilowatt-hour (kWh) of electricity in the US (0.85 pounds of CO2 emissions per kWh), if we could figure out how many kWh of electricity were being used by employee devices, we’d be able to do some simple math and figure out two things:

How much energy is required to power employee devices
How much carbon is being put into the atmosphere by employee devices

Using this data, we could then donate money to a carbon offsetting service to “neutralize” the impact of our employee’s work devices.
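As a rough sketch of that arithmetic (the 0.85 lb/kWh figure is the average quoted above; the kWh total is a made-up example, not a real measurement):

kwh=12.5   # hypothetical total energy used by employee devices
awk -v kwh="$kwh" 'BEGIN { printf "%.2f lb of CO2\n", kwh * 0.85 }'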

PROBLEM: Now, would this be a perfectly accurate way of measuring the true carbon impact of employees? Absolutely not – there are obviously many things we cannot easily measure (such as the amount of energy of attached devices, work travel, food consumption, etc.), but the idea of being able to quantify the carbon impact of work laptops was still interesting enough that we decided to pursue it regardless.

Potential Energy Tracking Solutions

The first idea we had was to use smart energy monitoring plugs that employees could plug their work devices into while charging. These plugs could then store a tally of how much energy work devices consume, and we could aggregate that somewhere to get a total amount of energy usage.

I happen to have several of the Eve Energy smart plugs around my house (which I highly recommend if you use Apple’s HomeKit) that I’ve been using to track my personal energy usage for a while now.

While these devices are incredible (they work well, come with a beautiful app, etc.), unfortunately, they don’t have any sort of publicly accessible API you can use to extract energy consumption data.

We also looked into various other types of smart home energy monitoring plugs, including the Kasa Smart Plug Mini, which does happen to have an API.

Unfortunately, however, because Snyk is a global company with employees all over the world, hardware solutions were looking less and less appealing, since to do what we wanted we’d need to:

Ship country-specific devices to each new and existing employee
Include setup instructions for employees (how to configure the plugs, how to hook them up to a home network, etc.)
Instruct employees to always plug their work devices into these smart plugs, which many people may forget to do

Is It Possible to Track Mac Energy Consumption Using Software?

When someone on the team proposed using software to track energy consumption, I thought it’d be a simple task. I assumed there were various existing tools we could easily leverage to grab energy consumption data. But boy, oh boy, I was wrong!

As it turns out, it’s quite complicated to figure out how many watt-hours of electricity your Mac laptop is using. To the best of my knowledge, there are no off-the-shelf applications that do this.

Through my research, however, I stumbled across a couple potential solutions.

Using Battery Metrics to Calculate Energy Consumption

The first idea I had was to figure out the size of the laptop’s battery (in milliamp-hours (mAh)), as well as how many complete discharge cycles the battery has been through (how many times has the battery been fully charged and discharged).

This information would theoretically allow us to determine how much energy a Mac laptop has ever consumed by multiplying the size of the battery in mAh by the number of battery cycles. We could then simply convert the number of mAh -> kWh using a simple formula.
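As a rough illustration of that formula (every number here is hypothetical, and the nominal battery pack voltage is my assumption rather than a value read from the system):

capacity_mah=5103   # example battery capacity in mAh
cycles=312          # example battery cycle count
voltage=11.4        # assumed nominal pack voltage in volts
awk -v mah="$capacity_mah" -v n="$cycles" -v v="$voltage" \
  'BEGIN { printf "%.2f kWh consumed over the battery lifetime\n", mah * v * n / 1000000 }'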

After a lot of Google-fu and command-line scripting, I was able to get this information using the ioreg command-line tool, but in the process, I realized that there was a critical problem with this approach.

The problem is that while the variables I mentioned above will allow you to calculate the energy consumption of your laptop over time, when your laptop is fully charged and plugged into a wall outlet it isn’t drawing down energy from the battery – it’s using the electricity directly from your wall.

This means that the measuring approach above will only work if you never use your laptop while it is plugged into wall chargers – you’d essentially need to keep your laptop shut down while charging and only have it turned on while on battery power. Obviously, this is not very realistic.

Using Wall Adapter Information to Calculate Energy Consumption

After the disappointing battery research, I decided to take a different approach. What if there was a way to extract how much energy your laptop was pulling from a wall adapter?

If we were able to figure out how many watts of electricity, for example, your laptop was currently drawing from a wall adapter, we could track this information over time to determine the amount of watt-hours of electricity being consumed. We could then easily convert this number to kWh or any other desired measure.

And… After a lot of sifting through ioreg output and some help from my little brother (an engineer who helps build smart home electric panels), I was able to successfully extract the amount of watts being pulled from a plugged-in wall adapter! Woo!

The Final Solution: How to Calculate the Energy Consumption of Your Mac Using Software

After many hours of research and playing around, what I ended up building was a small shell script that parses through ioreg command-line output and extracts the amount of watts being pulled from a plugged-in wall adapter.

This shell script runs on a cron job once a minute, logging energy consumption information to a file. This file can then be analyzed to compute the amount of energy consumed by a Mac device over a given time period.
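For reference, a minimal crontab entry for this kind of once-a-minute sampling might look like the following (the script path and log file are placeholders, not the actual paths used in the project):

* * * * * /usr/local/bin/log-power.sh >> "$HOME/power.log" 2>&1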

I’ve packaged this solution up into a small GitHub project you can check out here.

The command I’m using to grab the wattage information is the following:

/usr/sbin/ioreg -rw0 -c AppleSmartBattery | grep BatteryData | grep -o '"AdapterPower"=[0-9]*' | cut -c 16- | xargs -I % lldb --batch -o "print/f %" | grep -o '$0 = [0-9.]*' | cut -c 6-

Here it is broken down with a brief description of what these commands are doing:

/usr/sbin/ioreg -rw0 -c AppleSmartBattery |     # retrieve power data
  grep BatteryData |                            # filter it down to battery stats
  grep -o '"AdapterPower"=[0-9]*' |             # extract adapter power info
  cut -c 16- |                                  # extract power info number
  xargs -I % lldb --batch -o "print/f %" |      # convert power info into an IEEE 754 float
  grep -o '$0 = [0-9.]*' |                      # extract only the numbers
  cut -c 6-                                     # remove the formatting

The output of this command is a number which is the amount of watts currently being consumed by your laptop (I verified this by confirming it with hardware energy monitors). In order to turn this value into a usable energy consumption metric, you have to sample it over time. After thinking this through, here was the logging format I came up with to make tracking energy consumption simple:

timestamp=YYYY-MM-DDTHH:MM:SSZ wattage=<num> wattHours=<num> uuid=<string>

This format allows you to see:

The timestamp of the log
The amount of watts being drawn from the wall at the time of measurement (wattage)
The number of watt hours consumed at the time of measurement (wattHours), assuming this measurement is taken once a minute, and
The unique Mac UUID for this device. This is logged to help with deduplication and other statistics in my case.

Here’s an example of what some real-world log entries look like:

timestamp=2022-01-30T23:41:00Z wattage=8.37764739 wattHours=.13962745650000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:42:01Z wattage=8.7869072 wattHours=.14644845333333333333 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:43:00Z wattage=9.16559505 wattHours=.15275991750000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:44:00Z wattage=8.49206352 wattHours=.14153439200000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:45:00Z wattage=7.45262718 wattHours=.12421045300000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999

To sum up the amount of energy consumption over time, you can then parse this log file and sum up the wattHours column over a given time period. Also, please note that the script I wrote will NOT log energy consumption data to the file if there is no energy being consumed (aka, your laptop is not plugged into a wall adapter).
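A minimal sketch of that parsing step (the log path is a placeholder and the date filter is just an example; judging by the sample entries above, each per-minute wattHours value appears to be the sampled wattage divided by 60):

grep 'timestamp=2022-01' "$HOME/power.log" \
  | awk -F'wattHours=' '{ split($2, f, " "); total += f[1] } END { printf "%.3f watt-hours\n", total }'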

PROBLEMS: While this is the final solution we ended up going with, it still has one fatal flaw: this approach only works if the script is run once a minute. This means that if your laptop is shut down or sleeping and this code is not running, there will be no way to log energy consumption data.

What I Learned About Tracking Energy Consumption on Macs

While building our short sustainability-focused hackathon project, I learned a lot about tracking energy consumption on Macs.

Your laptop doesn’t always use its battery as a power source, so tracking battery metrics is not an ideal solution
It’s possible to track energy consumption by measuring the draw from wall adapters, although this approach isn’t perfect as it requires your computer to be on and running code on a regular interval
While using hardware energy trackers isn’t convenient in our case, this is certainly the simplest (and probably the best) option for personal energy tracking

If you’d like to see the software-based energy tracking solution I built, please check it out on GitHub.

I’m currently in the process of following up with Snyk’s IT department to see if this is something we could one day roll out automatically to employee devices. I still think it would be incredibly interesting to see a central dashboard of how much energy Snyk employees are using to “power” their work, and what that amount of carbon looks like.

PS: The creation of this blog post took precisely 19.972951810666647 watt-hours of electricity and generated .016977009039067 pounds of CO2.

Saturday, 29. January 2022

Mike Jones: self-issued

Working Group Adoption of the JWK Thumbprint URI Specification

The IETF OAuth working group has adopted the JWK Thumbprint URI specification. The abstract of the specification is:

This specification registers a kind of URI that represents a JSON Web Key (JWK) Thumbprint value. JWK Thumbprints are defined in RFC 7638. This enables JWK Thumbprints to be used, for instance, as key identifiers in contexts requiring URIs.

The need for this arose during specification work in the OpenID Connect working group. In particular, JWK Thumbprint URIs are used as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs in the Self-Issued OpenID Provider v2 specification.
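
For intuition, here is a rough sketch (not taken from the specification) of how such an identifier can be built: an RFC 7638 thumbprint is the base64url-encoded SHA-256 hash of the key's required members, serialized in lexicographic order with no whitespace, and the URI then simply prepends a registered prefix. The URN prefix used below is illustrative; the authoritative value is whatever the draft registers:

import base64
import hashlib
import json

# RFC 7638: required members per key type, serialized in lexicographic order.
REQUIRED_MEMBERS = {
    "EC": ("crv", "kty", "x", "y"),
    "RSA": ("e", "kty", "n"),
    "oct": ("k", "kty"),
}

def jwk_thumbprint_uri(jwk: dict) -> str:
    members = {name: jwk[name] for name in REQUIRED_MEMBERS[jwk["kty"]]}
    canonical = json.dumps(members, sort_keys=True, separators=(",", ":")).encode("utf-8")
    digest = hashlib.sha256(canonical).digest()
    thumbprint = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    # Illustrative prefix; the registered URI prefix is defined by the draft itself.
    return "urn:ietf:params:oauth:jwk-thumbprint:sha-256:" + thumbprint

Because such an identifier always starts with a fixed prefix, it can be syntactically distinguished from other URI-based key identifiers, which is the property the Self-Issued OpenID Provider v2 work needed.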

Given that the specification does only one simple thing in a straightforward manner, we believe that it is ready for working group last call.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-00.html

Aaron Parecki

Stream a USB webcam to HDMI on a Raspberry Pi

This post exists to collect my notes on displaying a USB webcam on the Raspberry Pi HDMI outputs. This is not the same as streaming the webcam (easy), and this is not for use with the Raspberry Pi camera module. This is specifically for USB UVC webcams.


Note: Do not actually do this, it's terrible.

Install Raspberry Pi OS Lite; you don't want the full desktop environment.

Once you boot the Pi, install VLC and the X windows environment:

sudo apt install vlc xinit

Configure your Pi to boot to the command line already logged in, using the tool raspi-config.

Create the file ~/.bash_profile with the following contents which will start X on boot:

# Start X automatically when logging in on the first virtual console
if [ -z "$DISPLAY" ] && [ "$(tty)" = "/dev/tty1" ]
then
  startx
fi

Create the file ~/.xinitrc to launch VLC streaming the webcam when X launches:

#!/bin/bash
# Play the first V4L2 webcam device (/dev/video0) using the command-line VLC interface
cvlc v4l2:// :v4l2-dev=/dev/video0

Now you can reboot the Pi with a webcam plugged in and you'll get a full screen view of the camera.

If your webcam isn't recognized when it first boots up, you'll need to quit VLC and start it again. You can quit by pressing ctrl-Q, then type startx to restart it after you plug the camera back in. If that doesn't work, you might have to SSH in and kill the process that way.

There are many problems with this approach:

- It seems VLC is not hardware accelerated so there is pretty bad tearing of the image
- Sometimes the webcam isn't recognized when the Pi boots up and I have to unplug it and plug it back in when it boots and restart the script
- The image tearing and stuttering is completely unusable for pretty much anything

Do you know of a better solution? Let me know!

So far I haven't found anything that actually works; I've searched all the forums and tried all the solutions with guvcview and omxplayer with no luck.

For some other better solutions, check out my blog post and video How to Convert USB Webcams to HDMI.


Hans Zandbelt

OpenID Connect for Oracle HTTP Server

Over the past years ZmartZone enabled a number of customers to migrate their Single Sign On (SSO) implementation from proprietary Oracle HTTP Server components to standards-based OpenID Connect SSO. Some observations about that: Oracle Webgate and mod_osso are SSO plugins … Continue reading →

Over the past years ZmartZone enabled a number of customers to migrate their Single Sign On (SSO) implementation from proprietary Oracle HTTP Server components to standards-based OpenID Connect SSO. Some observations about that:

- Oracle Webgate and mod_osso are SSO plugins (aka agents) for the Oracle HTTP Server (OHS) that implement a proprietary (Oracle) SSO/authentication protocol that provides authentication (only) against Oracle Access Manager
- the said components are closed-source implementations owned by Oracle
- these components leverage a single domain-wide SSO cookie, which has known security drawbacks, especially in today's distributed and delegated (cloud and hybrid) application landscape, see here
- ZmartZone supports builds of mod_auth_openidc that can be used as plugins into Oracle HTTP Server (11 and 12), thus implementing standards-based OpenID Connect for OHS with an open source component
- those builds are a drop-in replacement into OHS that can even be used to set the same headers as mod_osso/Webgate does/did
- mod_auth_openidc can be used to authenticate to Oracle Access Manager but also to (both commercial and free) alternative Identity Providers such as PingFederate, Okta, Keycloak etc.
- when required, Oracle HTTP Server can be replaced with stock Apache HTTPd
- the Oracle HTTP Server builds of mod_auth_openidc come as part of a light-weight commercial support agreement on top of the open source community support channel

In summary: modern OpenID Connect-based SSO for Oracle HTTP Server can be implemented with open source mod_auth_openidc following a fast, easy and lightweight migration plan.
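
As a rough sketch only (not taken from this post; the provider URL, client credentials, and protected path below are placeholders), a mod_auth_openidc setup in Apache httpd/OHS configuration typically comes down to a handful of directives:

# Placeholders throughout; adjust for your Identity Provider and application.
OIDCProviderMetadataURL https://idp.example.com/.well-known/openid-configuration
OIDCClientID my-client-id
OIDCClientSecret my-client-secret
OIDCRedirectURI https://app.example.com/protected/redirect_uri
OIDCCryptoPassphrase change-me

# Expose the authenticated user and claims to the application as headers,
# similar to the header-based integration that mod_osso/Webgate setups expect.
OIDCRemoteUserClaim sub
OIDCPassClaimsAs headers

<Location /protected>
    AuthType openid-connect
    Require valid-user
</Location>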

See also:
https://hanszandbelt.wordpress.com/2021/10/28/mod_auth_openidc-vs-legacy-web-access-management
https://hanszandbelt.wordpress.com/2019/10/23/replacing-legacy-enterprise-sso-systems-with-modern-standards/

Friday, 28. January 2022

Identity Woman

Exploring Social Technologies for Democracy with Kaliya Young, Heidi Nobuntu Saul, Tom Atlee

We see democracy as ideally a process of co-creating the conditions of our shared lives, solving our collective problems, and learning about life from and with each other. Most of the social technologies for democracy we work with are grounded in conversation – discussion, dialogue, deliberation, choice-creating, negotiation, collective visioning, and various forms of council, […]

We see democracy as ideally a process of co-creating the conditions of our shared lives, solving our collective problems, and learning about life from and with each other. Most of the social technologies for democracy we work with are grounded in conversation – discussion, dialogue, deliberation, choice-creating, negotiation, collective visioning, and various forms of council, […]

The post Exploring Social Technologies for Democracy with Kaliya Young, Heidi Nobuntu Saul, Tom Atlee appeared first on Identity Woman.

Monday, 24. January 2022

Jon Udell

Remembering Diana

The other day Luann and I were thinking of a long-ago friend and realized we’d forgotten the name of that friend’s daughter. Decades ago she was a spunky blonde blue-eyed little girl; we could still see her in our minds’ eyes, but her name was gone. “Don’t worry,” I said confidently, “it’ll come back to … Continue reading Remembering Diana

The other day Luann and I were thinking of a long-ago friend and realized we’d forgotten the name of that friend’s daughter. Decades ago she was a spunky blonde blue-eyed little girl; we could still see her in our minds’ eyes, but her name was gone.

“Don’t worry,” I said confidently, “it’ll come back to one of us.”

Sure enough, a few days later, on a bike ride, the name popped into my head. I’m sure you’ve had the same experience. This time around it prompted me to think about how that happens.

To me it feels like starting up a background search process that runs for however long it takes, then notifies me when the answer is ready. I know the brain isn’t a computer, and I know this kind of model is suspect, so I wonder what’s really going on.

– Why was I so sure the name would surface?

– Does a retrieval effort kick off neurochemical change that elaborates over time?

– Before computers, what model did people use to explain this phenomenon?

So far I’ve only got one answer. That spunky little girl was Diana.


Hyperonomy Digital Identity Lab

Trusted Digital Web (TDW2022): Characteristic Information Scopes

Figure 1. Trusted Digital Web (TDW2022): Characteristic Information Scopes (based on the Social Evolution Model)

Sunday, 23. January 2022

Moxy Tongue

Rough Seas Ahead People

The past is dead.  You are here now. The future will be administered. Data is not literature, it is structure. Data is fabric. Data is blood. Automated data will compete with humans in markets, governments, and all specialty fields of endeavor that hold promise for automated systems to function whereas.  Whereas human; automated human process. Automate human data extraction.
The past is dead. 
You are here now.
The future will be administered. Data is not literature, it is structure. Data is fabric. Data is blood. Automated data will compete with humans in markets, governments, and all specialty fields of endeavor that hold promise for automated systems to function whereas. 
Whereas human; automated human process. Automate human data extraction. Automate human data use.
I am purposefully vague -> automate everything that can be automated .. this is here, now.
What is a Constitution protecting both "Human Rights" and "Civil Rights"? 
From the view of legal precedent and human intent actualized, it is a document, a work of literary construct, and its words are utilized to determine meaning in legal concerns where the various Rights of people are concerned. Imperfect words of literature, implemented in their time and place. And of those words, a Governing system of defense for the benefit "of, by, for" the people Instituting such Governance.
This is the simple model, unique in the world, unique in history as far as is known to storytellers the world over. A literary document arriving here and now as words being introduced to their data manifestations. Data loves words. Data loves numbers. Data loves people the most. Why?
Data is "literally" defined as "data" in relation to the existence of Humanity. That which has no meaning to Humanity is not considered "data" being utilized as such. Last time I checked, Humanity did not know everything, yet. Therefore much "data" has barely been considered as existing, let alone being understood in operational conditions called "real life", or "basic existence" by people. 
This is our administrative problem; words are not being operationalized accurately as data. The relationship between "words" and "data" as operational processes driving the relationship between "people" and "Government Administration" has not been accurately structured. In other words, words are not being interpreted as data accurately enough, if at all.
A governed system derived "of, by, for" the people creating and defending such governed process, has a basic starting point. It seems obvious, but many are eager to acquiesce to something else upon instantiation of a service relationship, when easy or convenient enough, so perhaps "obvious" is just a word. "Of, By, For" people means that "Rights" are for people, not birth certificates. 
Consider how you administer your own life. Think back to last time you went to the DMV. Think back to last time you filed taxes and something went wrong that you needed to fix. Think back to when you registered your child for kindergarten. Think back to the last time you created an online bank account. 
While you are considering these experiences, consider the simultaneous meaning created by the words "of, by, for" and whether any of those experiences existed outside of your Sovereign Rights as a person.
Humanity does not come into existence inside a database. The American Government does not come into authority "of, by, for" database entries. 
Instead, people at the edges of society, in the homes of our towns derive the meaning "of, by, for" their lawful participation. Rights are for people, not birth certificates. People prove birth certificates, birth certificates do not prove people. If an administrative process follows the wrong "administrative precedent" and logic structure, then "words" cease meaning what they were intended to mean.
This words-to-data sleight of hand is apparently easy to run on people. The internet, an investment itself of Government created via DARPA and made public via NSF, showcases daily the mis-construed meaning of "words" as "data". People are being surveilled, tracked and provisioned access to services based on having their personal "ID:DATA" leveraged. In some cases, such as the new ID.me services being used at Government databases, facial scans are being correlated to match people as "people" operating as "data". The methods used defy "words" once easily accessible, and have been replaced by TOSDR higher up the administrative supply chain as contracts of adhesion.
Your root human rights, the basic meaning of words with Constitutional authority to declare war upon the enemies of a specific people in time, have been usurped, and without much notice, most all people have acquiesced to the "out-of-order" administrative data flows capturing their participation. Freedom can not exist on such an administrative plantation, whereby people are captured as data for use by 2nd and 3rd parties without any root control provided to the people giving such data existence and integrity.
People-backwards-authority will destroy this world. America can not be provisioned from a database. People possess root authority in America. America is the leader of the world, and immigrants come to America because "people possess root authority" in America. "Of, By, For" People in America, this is the greatest invention of America. Owning your own authority, owning root authority as a person expressing the Sovereign structure of your Rights as a person IS the greatest super power on planet Earth.
The American consumer marketplace is born in love with the creative spirit of Freedom. The American Dream lures people from the world over to its shores. A chance to be free, to own your own life and express your freedom in a market of ideas, where Rights are seen, protected, and leveraged for the benefit of all people. A place where work is honored, and where ladders may be climbed by personal effort and dedication in pursuit of myriad dreams. A land honored by the people who sustain its promise, who guard its shores, and share understanding of how American best practices can influence and improve the entire world.
It all begins with you.
If I could teach you how to do it for yourself I would. I try. My words here are for you to use as you wish. I donate them with many of my efforts sustained over many years. This moment (2020-2022) has been prepared for by many for many many years. A populace ignorant of how data would alter the meaning of words in the wrong hands was very predictable. Knowing what words as data meant in 1992 was less common. In fact, getting people to open ears, or an email, was a very developmental process. Much hand-holding, much repetition. I have personally shared words the world over, and mentored 10's of thousands over the past 25 years. To what end?
I have made no play to benefit from the ignorance of people. I have sought to propel conversation, understanding, skill, and professional practices. By all accounts, I have failed at scale. The world is being over-run by ignorance, and this ignorance is being looted, and much worse, it is being leveraged against the best interest of people, Individuals all.
"We the people" is a literary turn-of-hand in data terms; People, Individuals All. The only reality of the human species that matters is the one that honors what people actually are. Together, each of us as Individual, living among one another.. is the only reality that will ever exist. "We" is a royal construct if used to instantiate an Institutional outcome not under the control of actual people as functioning Individuals, and instead abstracts this reality via language, form, contract or use of computer science to enable services to be rendered upon people rather than "of, by, for" people.
The backwards interpretation of words as data process is the enemy of Humanity. Simple as that.
You must own root authority; Americans, People. 

Read Next: Bureaucratic Supremacy



Tuesday, 18. January 2022

Kerri Lemole

W3C Verifiable Credentials Education Task Force 2022 Planning

At the W3C VC-EDU Task Force we’ve been planning meeting agendas and topics for 2022. We’ve been hard at work writing use cases, helping education standards organizations understand and align with VCs, and we’ve been heading towards a model recommendation doc for the community. In 2022 we plan on building upon this and are ramping up for an exciting year of pilots.

At the W3C VC-EDU Task Force we’ve been planning meeting agendas and topics for 2022. We’ve been hard at work writing use cases, helping education standards organizations understand and align with VCs, and we’ve been heading towards a model recommendation doc for the community. In 2022 we plan on building upon this and are ramping up for an exciting year of pilots.

To get things in order, we compiled a list of topics and descriptions in this sheet and have set up a ranking system. This ranking system is open until January 19 at 11:59pm ET and anyone is invited to weigh in. The co-chairs will evaluate the results and we’ll discuss them at the January 24th VC-EDU Call (call connection info).

It’s a lengthy and thought-provoking list and I hope we have the opportunity to dig deep into each of these topics and maybe more. I reconsidered my choices quite a few times before I landed on these top 5:

1. Verifiable Presentations (VPs) vs (nested) Verifiable Credentials (VCs) in the education context — How to express complex nested credentials (think full transcript). The description references full transcript but this topic is also related to presentation of multiple single achievements by the learner. I ranked this first because presentations are a core concept of VCs and very different from how the education ecosystem is accustomed to sharing their credentials. VPs introduce an exchange of credentials in response to a verifiable request versus sharing a badge online or emailing a PDF. Also, there’s been quite a bit of discussion surrounding more complex credentials such as published transcripts that we can get into here.
2. Integration with Existing Systems — Digitizing existing systems, vs creating; existing LMSes; bridging; regulatory requirements — ex: licensing, PDFs needing to be visually inspected. To gain some traction with VCs, we need to understand how systems work now and what can be improved upon using VCs but also, how do we make VCs work with what is needed now?
3. Bridging Tech. This ties into integrating with existing systems above. We are accustomed to the tech we have now and it will be with us for some time. For instance, email will still be used for usernames and identity references even when Decentralized Identifiers start gaining traction. They will coexist and it can be argued that compromises will need to be made (some will argue against this).
4. Protocols — Much of the work in VC-EDU so far has been about the data model. But what about the protocols — what do we /do/ with the VCs once we settle on the format? (How to issue, verify, exchange, etc). This made my top five because as the description notes, we’re pretty close to a data model but we need to understand more about the protocols that deliver, receive, and negotiate credential exchanges. Part of what we do in VC-EDU is learn more about what is being discussed and developed in the broader ecosystem and understanding protocols will help the community with implementation.
5. Context file for VC-EDU — Create a simple context file to describe an achievement claim. There are education standards organizations like IMS Global (Open Badges & CLR) that are working towards aligning with VC-EDU but having an open, community-created description of an achievement claim, even if it reuses elements from other vocabularies, will provide a simple a