Last Update 8:48 PM May 18, 2021 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Tuesday, 18. May 2021

John Philpin : Lifestream

Bookshelves .. or shelves … or maybe just ‘lists’ … isn’t th


Bookshelves .. or shelves … or maybe just ‘lists’ … isn’t that really what they are … I don’t have a shelf of books at home that I want to read … I mean I haven’t even got them yet?

// @manton @danielpunkass


“The difference between America and England is that Americ


“The difference between America and England is that Americans think 100 years is a long time, while the English think 100 miles is a long way.”

Earle Hitchner


Kierkegaard Goes to Therapy - Existential Comics


“Once we’re thrown off our habitual paths, we think all is


“Once we’re thrown off our habitual paths, we think all is lost, but it’s only here that the new and the good begins.”

Leo Tolstoy


Ben Werdmüller

First run in a few weeks. Feels ...

First run in a few weeks. Feels both horrendous and excellent.



Simon Willison

Weeknotes: Velma, more Django SQL Dashboard


Matching locations for Vaccinate The States, fun with GeoJSON and more improvements to Django SQL Dashboard.

Velma

I described a few weeks ago part of the process we've been using to build Vaccinate The States - a map of every COVID vaccine location in the USA (now at just over 70,000 markers and counting).

Short version: we have scrapers and data ingesters for a whole bunch of different sources (see the vaccine-feed-ingest repository).

Part of the challenge here is how to deal with duplicates - with multiple sources of data, chances are high that the same location will show up in more than one of our input feeds.

So in the past weeks we've been building a new tool code-named Velma to help handle this. It shows our volunteers a freshly scraped location and asks them to either match it to one of our existing locations (based on automated suggestions) or use it to create a brand new location in our database.

I've been working exclusively on the backend APIs for Velma: APIs that return new scraped data and accept and process the human matching decisions from our volunteers.

This week we've been expanding Velma to also cover merging potential duplicate locations within our existing corpus, so I've been building out the APIs for that effort as well.

I've also been working on new export code for making our entire set of locations available to partners and interested outside developers. We hope to launch that fully in the next few days.

geojson-to-sqlite

One of the export formats we are working with is GeoJSON. I have a tool called geojson-to-sqlite which I released last year: this week I released an updated version with the ability to create SpatiaLite indexes and a --nl option for consuming newline-delimited GeoJSON, contributed by Chris Amico.

I've also been experimenting with SpatiaLite's KNN mechanism using geojson-to-sqlite to load in data - here's a TIL showing how to use those tools together.

Django SQL Dashboard

I released the first non-alpha version of this last week and it's started to gain some traction: I've heard from a few people who are trying it out on their projects and it seems to work, so that's good!

I released version 0.14 yesterday with a bunch of fixes based on feedback from users, plus a security fix that closes a hole where users without the execute_sql permission but with access to the Django Admin could modify the SQL in saved dashboards and hence execute their own custom queries.

I also made a bunch of improvements to the documentation, including adding screenshots and demo links to the widgets page.

TIL this week

The Wikipedia page stats API
Vega-Lite bar charts in the same order as the data
Enabling a gin index for faster LIKE queries
KNN queries with SpatiaLite
Django data migration using a PostgreSQL CTE

Releases this week

geojson-to-sqlite: 0.3 - (6 releases total) - 2021-05-17
CLI tool for converting GeoJSON files to SQLite (with SpatiaLite)
django-sql-dashboard: 0.14 - (28 releases total) - 2021-05-16
Django app for building dashboards using raw SQL queries

Monday, 17. May 2021

Simon Willison

No feigning surprise


Don't feign surprise if someone doesn't know something that you think they should know. Even better: even if you are surprised, don't let them know! "When people feign surprise, it’s usually to make them feel better about themselves and others feel worse."

Via @cameronbardell


Damien Bod

Securing OAuth Bearer tokens from multiple Identity Providers in an ASP.NET Core API


This article shows how to secure and use different APIs in an ASP.NET Core API which supports OAuth access tokens from multiple identity providers. Access tokens from Azure AD and from Auth0 can be used to access data from the service. Each API only supports a specific token from the specific identity provider. Microsoft.Identity.Web is used to implement the access token authorization for the Azure AD tokens, and the default authorization is used to support the Auth0 access tokens.

Code: https://github.com/damienbod/SeparatingApisPerSecurityLevel

Blogs in this series

Securing multiple Auth0 APIs in ASP.NET Core using OAuth Bearer tokens
Securing OAuth Bearer tokens from multiple Identity Providers in an ASP.NET Core API

Setup

An ASP.NET Core API application is created to implement the multiple APIs and accept access tokens created by Auth0 and Azure AD. The access tokens need to be validated and should only work for the intended purpose for which the access token was created. The Azure AD API is used by an ASP.NET Core Razor page application which requests a user access token with the correct scope to access the API. Two Azure AD App registrations are used to define the Azure AD setup. The Auth0 application is implemented using a Blazor server hosted application and accesses the two Auth0 APIs. See the previous post for details.

To support the multiple identity providers, multiple schemes are used. The Auth0 APIs use the default scheme definition for JWT Bearer tokens and Azure AD uses a custom named scheme. It does not matter which scheme is used for which as long as the correct scheme is defined on the controller securing the API. The AddMicrosoftIdentityWebApiAuthentication method takes the scheme and the configuration name as optional parameters. The Azure AD configuration is defined like any standard Azure AD API in ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
    // Adds Microsoft Identity platform (AAD v2.0)
    // support to protect this Api
    services.AddMicrosoftIdentityWebApiAuthentication(
        Configuration, "AzureAd", "myADscheme");

    // Auth0 API configuration => default scheme
    services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    }).AddJwtBearer(options =>
    {
        options.Authority = "https://dev-damienbod.eu.auth0.com/";
        options.Audience = "https://auth0-api1";
    });

    services.AddSingleton<IAuthorizationHandler, UserApiScopeHandler>();

    // authorization definitions for the multiple Auth0 tokens
    services.AddAuthorization(policies =>
    {
        policies.AddPolicy("p-user-api-auth0", p =>
        {
            p.Requirements.Add(new UserApiScopeHandlerRequirement());
            // Validate id of application for which the token was created
            p.RequireClaim("azp", "AScjLo16UadTQRIt2Zm1xLHVaEaE1feA");
        });

        policies.AddPolicy("p-service-api-auth0", p =>
        {
            // Validate id of application for which the token was created
            p.RequireClaim("azp", "naWWz6gdxtbQ68Hd2oAehABmmGM9m1zJ");
            p.RequireClaim("gty", "client-credentials");
        });
    });

    services.AddControllers(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    });
}
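
The UserApiScopeHandler and UserApiScopeHandlerRequirement types registered above are not shown in this excerpt. A minimal sketch of what such an authorization handler could look like follows; the claim check and the scope value "auth0-user-api-one" are assumptions for illustration, not the article's actual implementation.

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

public class UserApiScopeHandlerRequirement : IAuthorizationRequirement { }

public class UserApiScopeHandler : AuthorizationHandler<UserApiScopeHandlerRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        UserApiScopeHandlerRequirement requirement)
    {
        // Auth0 puts the granted scopes in a space-separated "scope" claim;
        // succeed only if the scope this API expects is present (scope name assumed).
        var scopeClaim = context.User.Claims.FirstOrDefault(c => c.Type == "scope");

        if (scopeClaim != null &&
            scopeClaim.Value.Split(' ').Contains("auth0-user-api-one"))
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}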

The Configure method uses the UseAuthentication method to add the middleware for the APIs.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // ...

    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

The AzureADUserOneController class is used to implement the API for the Azure AD access tokens. The AuthorizeForScopes attribute from Microsoft.Identity.Web is used to validate the Azure AD App registration access token and define the scheme required for the validation. The scope name must match the Azure App registration definition.

using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Microsoft.Identity.Web;

namespace MyApi.Controllers
{
    [AuthorizeForScopes(
        Scopes = new string[] { "api://72286b8d-5010-4632-9cea-e69e565a5517/user_impersonation" },
        AuthenticationScheme = "myADscheme")]
    [ApiController]
    [Route("api/[controller]")]
    public class AzureADUserOneController : ControllerBase
    {
        private readonly ILogger<UserOneController> _logger;

        public AzureADUserOneController(ILogger<UserOneController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new List<string> { "AzureADUser one data" };
        }
    }
}

The UserOneController implements the Auth0 user access token API. Since the default scheme is used, no scheme definition is required. The authorization policy is used to secure the API which validates the scope and the claims for this API.

using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace MyApi.Controllers
{
    [Authorize(Policy = "p-user-api-auth0")]
    [ApiController]
    [Route("api/[controller]")]
    public class UserOneController : ControllerBase
    {
        private readonly ILogger<UserOneController> _logger;

        public UserOneController(ILogger<UserOneController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new List<string> { "user one data" };
        }
    }
}

When the API application is started, the APIs can be used. A Swagger UI implemented using Swashbuckle was created to display the different APIs. Each API will only work with the correct access token. The different UIs can use the APIs and data is returned.
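
As a rough illustration of how a client could call these APIs once it has obtained an access token from the matching identity provider, consider the following sketch; the base address and the route are assumptions, and acquiring the token itself is out of scope here.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ApiClientSketch
{
    static async Task Main()
    {
        // Placeholder: a token previously acquired from Auth0 (for UserOneController)
        // or from Azure AD (for AzureADUserOneController).
        string accessToken = "<access token from the matching identity provider>";

        // Assumed local address of the API application.
        using var client = new HttpClient { BaseAddress = new Uri("https://localhost:44390/") };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // A token from the "wrong" identity provider is rejected, because each
        // controller only accepts its own scheme and authorization policy.
        HttpResponseMessage response = await client.GetAsync("api/UserOne");
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}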

Links

https://auth0.com/docs/quickstart/webapp/aspnet-core

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction

Open ID Connect

Securing Blazor Web assembly using Cookies and Auth0


Simon Willison

geocode-sqlite


Neat command-line Python utility by Chris Amico: point it at a SQLite database file and it will add latitude and longitude columns and populate them by geocoding one or more of the other fields, using your choice from four currently supported geocoders.

Sunday, 16. May 2021

Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-15 & 16

Hardwired ThinkPad laptop: 81.01% downtime (same laptop used for prior NUM reports)
WiFi Wireless Lenovo laptop: 3.78% downtime

John Philpin : Lifestream

Netanyahu on Jewish-Arab violence: ‘Anyone who acts as a ter

Netanyahu on Jewish-Arab violence: ‘Anyone who acts as a terrorist will be treated as such’. No irony. None.

Currently reading: Sapiens - Graphic Novel 01 by Yuval Noa


Currently reading:
Sapiens - Graphic Novel 01
by Yuval Noah Harari
📚 🎵

Notes:
I mean - don’t not read the original book, but this is a fun way to engage with Harari’s thinking.


Finished reading: Working Class Boy by Jimmy Barnes 📚 🎵


Finished reading:
Working Class Boy
by Jimmy Barnes

📚 🎵

Notes:

Bookshelf now enabled on Microblog.
Some clean up and updating in front of me - meanwhile - a test post.

Saturday, 15. May 2021

John Philpin : Lifestream

“I won’t insult your intelligence by suggesting that you r


“I won’t insult your intelligence by suggesting that you really believe what you just said.”

William F. Buckley, Jr.


” It’s so beautifully arranged on the plate — you know som


“It’s so beautifully arranged on the plate — you know someone’s fingers have been all over it.”

Julia Child (on Nouvelle Cuisine)


”Frequently on administrative forms, the options for marit


“Frequently on administrative forms, the options for marital status are single, married, and divorced. (How is ‘divorced’ a status? Isn’t that just single?)”

Scott Galloway


🎬 It is clear that Netanyahu just watched ‘Wag The Dog’

🎬 It is clear that Netanyahu just watched ‘Wag The Dog’



Today’s DayONE prompt … and my response.

Today’s DayONE prompt … and my response.



Apple Podcasts App is so BAD at managing space on your drive

Apple Podcasts App is so BAD at managing space on your drive - total hog.


Friday, 14. May 2021

Doc Searls Weblog

How the cookie poisoned the Web


Have you ever wondered why you have to consent to terms required by the websites of the world, rather than the other way around? Or why you have no record of what you have accepted or agreed to?

Blame the cookie.

Have you wondered why you have no more privacy on the Web than what other parties grant you (which is none at all), and that you can only opt in or out of choices that others provide—while the only controls you have over your privacy are to skulk around like a criminal (thank you, Edward Snowden and Russell Brand, for that analogy) or to stay offline completely?

Blame the cookie.

And have you paused to wonder why Europe’s GDPR regards you as a mere “data subject” while assuming that the only parties qualified to be “data controllers” and “data processors” are the sites and services of the world, leaving you with little more agency than those sites and services allow, or provide you?

Blame the cookie.

Or why California’s CCPA regards you as a mere “consumer” (not a producer, much less a complete human being), and only gives you the right to ask the sites and services of the world to give back data they have gathered about you, or not to “sell” that personal data, whatever the hell that means?

Blame the cookie.

There are more examples, but you get the point: this situation has become so established that it’s hard to imagine any other way for the Web to operate.

Now here’s another point: it didn’t have to be that way.

The World Wide Web that Tim Berners-Lee invented didn’t have cookies. It also didn’t have websites. It had pages one could publish or read, at any distance across the Internet.

This original Web was simple and peer-to-peer. It was meant to be personal as well, meaning an individual could publish with a server or read with a browser. One could also write pages easily with an HTML editor, which was also easy to invent and deploy.

It should help to recall that the Apache Web server, which has published most of the world’s Web pages across most of the time the Web has been around, was meant originally to work as a personal server. That’s because the original design assumption was that anyone, from individuals to large enterprises, could have a server of their own, and publish whatever they wanted on it. The same went for people reading pages on the Web.

Back in the 90s my own website, searls.com, ran on a box under my desk. It could do that because, even though my connection was just dial-up speed, it was on full time over its own static IP address, which I easily rented from my ISP. In fact, I had sixteen of those addresses, so I could operate another server in my office for storing and transferring articles and columns I wrote for Linux Journal. Every night a cron utility would push what I wrote to the magazine itself. Both servers ran Apache. And none of this was especially geeky. (I’m not a programmer and the only code I know is Morse.)

My point here is that the Web back then was still peer-to-peer and welcoming to individuals who wished to operate at full agency. It even stayed that way through the Age of Blogs in the early ’00s.

But gradually a poison disabled personal agency. That poison was the cookie.

Technically a cookie is a token—a string of text—left by one computer program with another, to help the two remember each other. These are used for many purposes in computing.

But computing for the Web got a special kind of cookie called the HTTP cookie. This, Wikipedia says (at that link)

…is a small piece of data stored on the user’s computer by the web browser while browsing a website. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user’s browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember pieces of information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers.

It also says,

Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with.

This, however, was not the original idea, which Lou Montulli came up with in 1994. Lou’s idea was just for a server to remember the last state of a browser’s interaction with it. But that one move—a server putting a cookie inside every visiting browser—crossed a privacy threshold: a personal boundary that should have been clear from the start but was not.

Once that boundary was crossed, and the number and variety of cookies increased, a snowball started rolling, and whatever chance we had to protect our privacy behind that boundary was lost.

Today that snowball is so large that nearly all personal agency on the Web happens within the separate silos of every website, and is compromised by whatever countless cookies and other tracking methods are used to keep track of, and to follow, the individual.

This is why most of the great stuff you can do on the Web is by grace of Google, Apple, Facebook, Amazon, Twitter, WordPress and countless others, including those third parties.

Bruce Schneier calls this a feudal system:

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals.

Bruce wrote that in 2012, about the time we invested hope in Do Not Track, which was designed as a polite request one could turn on in a browser, and servers could obey.

Alas, the tracking-based online advertising business and its dependents in publishing dismissed Do Not Track with contempt.

Starting in 2013, we serfs fought back, by the hundreds of millions, blocking ads and tracking: the biggest boycott in world history. This, however, did nothing to stop what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity.

Today our poisoned minds can hardly imagine having native capacities of our own that can operate at scale across all the world’s websites and services. To have that ability would also be at odds with the methods and imperatives of personally targeted advertising, which requires cookies and other tracking methods. One of those imperatives is making money: $Trillions of it.

The business itself (aka adtech) is extremely complex and deeply corrupt: filled with fraud, botnets and malware. Most of the money spent on adtech also goes to intermediaries and not to the media you (as they like to say) consume. It’s a freaking fecosystem, and every participant’s dependence on it is extreme.

Take, for example, Vizio TVs. As Samuel Axon puts it in Ars Technica, Vizio TV buyers are becoming the product Vizio sells, not just its customers: Vizio’s ads, streaming, and data business grew 133 percent year over year.

Without cookies and the cookie-like trackers by which Vizio and its third parties can target customers directly, that business wouldn’t be there.

As a measure of how far this poisoning has gone, dig this: FouAnalytics PageXray says the Ars Technica story above comes to your browser with all this spyware you don’t ask for or expect when you click on that link:

Adserver Requests: 786
Tracking Requests: 532
Other Requests: 112

I’m also betting that nobody reporting for a Condé Nast publication will touch that third rail, which I have been challenging journalists to do in 139 posts, essays, columns and articles, starting in 2008.

(Please prove me wrong, @SamuelAxon—or any reporter other than Farhad Manjoo, who so far is the only journalist from a major publication I know to have bitten the robotic hand that feeds them. I also note that the hand in his case is The New York Times’, and that it has backed off a great deal in the amount of tracking it does. Hats off for that.)

At this stage of the Web’s moral devolution, it is nearly impossible to think outside the cookie-based fecosystem. If we could, we would get back the agency we lost, and the regulations we’re writing would respect and encourage that agency as well.

But that’s not happening, in spite of all the positive privacy moves Apple, Brave, Mozilla, Consumer Reports, the EFF and others are making.

My hat’s off to all of them, but let’s face it: the poisoning is too far advanced. After fighting it for more than 22 years (dating from publishing The Cluetrain Manifesto in 1999), I’m moving on.

To here.


Ben Werdmüller

Vaccinated


I’m fully-vaccinated today: I got my second Pfizer jab two weeks ago. According to new guidance from the CDC, I can go without a mask in most situations. The official CDC page is really clear, and reporting has been generally good. I feel safe - but like many people, I will still choose to wear one, even in situations where I am not required to, for a while.

A lot of people aren’t so lucky. In India, where I have friends and coworkers, everyone I speak to seems to have lost a friend or relative. The official numbers woefully undercount the dead: conservative estimates put it at twice the official number, and I’ve heard as high as ten times.

Broken medical supply chains have left families to source oxygen for themselves; even empty oxygen canisters, which can be refilled, are in short supply. My friend Padmini Ray Murray has set up a COVID-19 and oxygen supply resource page for Bangalore, and is helping to crowdsource oxygen availability in the region.

Meanwhile, the United States has been hoarding vaccines, while countries like India may not get vaccinated until 2023. COVAX, a global vaccine initiative, has been underfunded, and rich countries didn’t arm it with the vaccine supplies it needed. Manufacturing capacity is bottlenecked. And even though some countries (including, to its credit, the US) have agreed to waive vaccine patent rights, the tests and technology transfers involved are also bottlenecked. More help is needed, and quickly; without meaningful assistance, vaccine waivers and COVAX pledges start to look more like PR for rich countries than an actual effort to vaccinate the world.

Some have argued that vaccine patent waivers should not be issued, because of the effect on innovation. I, and others, think this falls squarely into the bucket of solvable problems: information sharing mechanisms and economic incentives can be provided in other ways. The focus right now must be on saving lives, not saving capitalism.

It’s also common in a global crisis for the burden to be placed on individuals: in this case, there are plenty of community fundraisers for COVAX. I’ve donated and, if you have the means, I recommend that you do too: buying a single dose for someone in need costs $7. But the focus should be on governments and large corporations to donate and help as much as they can; our focus should be at least as much on pressuring them to do the right thing as convincing our friends and neighbors to put some money in.

I have both friends and family who still don’t believe that COVID-19 is a real threat; who don’t trust the vaccine; who don’t believe in the science or the reporting. In the midst of a genuinely global crisis, not having the real-world effect of watching your friends and family succumbing to the disease is a kind of privilege. Elsewhere, they would not have the luxury of being so ignorant.

And I wouldn’t have the luxury of feeling the freedom I do today. I’m excited to be able to see my friends again; to travel; to eat at a restaurant; to gather and share and be social. I hope the whole world is able to share in this freedom. We are no more deserving than they are.

 

Photo by Spencer Davis on Unsplash


Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-14 AM

Hardwired ThinkPad laptop: 77.96% downtime (same laptop used for prior NUM reports)
WiFi Wireless Lenovo laptop: 2.99% downtime

Simon Willison

Powering the Python Package Index in 2021


PyPI now serves "nearly 900 terabytes over more than 2 billion requests per day". Bandwidth is donated by Fastly, a value estimated at 1.8 million dollars per month! Lots more detail about how PyPI has evolved over the past years in this post by Dustin Ingram.

Thursday, 13. May 2021

John Philpin : Lifestream

Trading up: one woman’s quest to swap a hairpin for a house.

Trading up: one woman’s quest to swap a hairpin for a house. Talking of sequels - nothing like an original idea.

Palestinian Family Who Lost Home In Airstrike Takes Comfort

Palestinian Family Who Lost Home In Airstrike Takes Comfort In Knowing This All Very Complicated Indeed.

Amazon’s Massive Tracking Network Is Turned On By Default. H


Amazon’s Massive Tracking Network Is Turned On By Default. Here’s How to Turn It Off.

”To be fair, there’s a good reason it did.”

It’s not ‘fair’. It’s scummy … it should be an opt-in choice.


Simon Willison

Quoting Brian LeRoux


Folks think s3 is static assets hosting but really it's a consistent and highly available key value store with first class blob support

Brian LeRoux


Here's Tom with the Weather

Betty's Funeral Service


My mother Betty passed away Friday night and her service is Saturday at 2pm at the Kingwood Funeral Home. This is one of my favorite pictures of her with her mother and my brother David in downtown San Francisco.

Growing up, I always remember that there was a set of golf clubs in the garage but it seemed like they were never used. Luckily, this week, I listened to a Christmas audio message from 1971 that my parents had sent to my grandparents. My mom said that the day before, Baxter had taken her to the golf course at Heather Farm and they golfed nine holes. She said on the first hole, she amused several bystanders trying to get out of the sand trap and they decided to quit keeping score from there. She said she learned golf was not her sport and would find something else. She did find tennis. I was glad to play that sport with her and the family.


Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-13 AM – 88.81%


Over the last approximately 7.5 hours, the NETAGO internet service in Bindloss, Alberta was down 88.81% of the time according to the Net Uptime Monitor (NUM) app. This is not acceptable.

Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Michael Herman

=======================================

2021-05-12 10:19:45 PM Log Start

Failure Start Length
2021-05-12 10:20:33 PM 0:00:05
2021-05-12 10:20:52 PM 0:03:23
2021-05-12 10:24:47 PM 0:01:32
2021-05-12 10:26:54 PM 0:04:16
2021-05-12 10:31:32 PM 0:07:08
2021-05-12 10:38:48 PM 0:01:30
2021-05-12 10:40:26 PM 0:10:28
2021-05-12 10:51:22 PM 0:03:33
2021-05-12 10:55:03 PM 0:06:38
2021-05-12 11:01:50 PM 0:13:03
2021-05-12 11:16:13 PM 0:09:05
2021-05-12 11:26:56 PM 0:10:07
2021-05-12 11:37:12 PM 0:04:32
2021-05-12 11:44:15 PM 0:12:41
2021-05-12 11:58:03 PM 0:03:08
2021-05-13 12:05:52 AM 0:04:00
2021-05-13 12:10:00 AM 0:09:12
2021-05-13 12:19:20 AM 0:03:58
2021-05-13 12:23:26 AM 0:06:59
2021-05-13 12:32:56 AM 0:13:24
2021-05-13 12:47:53 AM 0:12:56
2021-05-13 1:00:57 AM 0:01:34
2021-05-13 1:03:24 AM 0:01:54
2021-05-13 1:07:04 AM 0:21:49
2021-05-13 1:29:07 AM 0:00:23
2021-05-13 1:29:39 AM 0:02:10
2021-05-13 1:31:57 AM 0:01:56
2021-05-13 1:34:01 AM 0:01:06
2021-05-13 1:35:15 AM 0:00:30
2021-05-13 1:36:19 AM 0:05:39
2021-05-13 1:42:07 AM 0:00:36
2021-05-13 1:42:51 AM 0:12:26
2021-05-13 1:56:36 AM 0:04:57
2021-05-13 2:01:41 AM 0:11:08
2021-05-13 2:12:58 AM 0:04:35
2021-05-13 2:20:10 AM 0:21:12
2021-05-13 2:41:44 AM 0:06:45
2021-05-13 2:48:44 AM 0:06:04
2021-05-13 2:57:25 AM 0:05:49
2021-05-13 3:04:02 AM 0:00:33
2021-05-13 3:04:56 AM 0:00:05
2021-05-13 3:05:23 AM 0:04:44
2021-05-13 3:10:22 AM 0:07:22
2021-05-13 3:17:52 AM 0:07:44
2021-05-13 3:27:15 AM 0:18:57
2021-05-13 3:46:20 AM 0:09:16
2021-05-13 3:58:39 AM 0:00:51
2021-05-13 4:01:55 AM 0:06:26
2021-05-13 4:08:42 AM 0:19:29
2021-05-13 4:28:20 AM 0:10:05
2021-05-13 4:40:36 AM 0:11:36
2021-05-13 4:52:21 AM 0:02:01
2021-05-13 4:54:36 AM 0:03:54
2021-05-13 4:58:39 AM 0:03:10
2021-05-13 5:01:57 AM 0:07:02
2021-05-13 5:09:33 AM 0:07:25
2021-05-13 5:17:19 AM 0:02:02
2021-05-13 5:19:36 AM 0:00:33
2021-05-13 5:20:17 AM 0:01:09
2021-05-13 5:23:24 AM 0:01:32
2021-05-13 5:25:05 AM 0:03:57
2021-05-13 5:29:23 AM 0:01:28
2021-05-13 5:30:59 AM 0:10:35
2021-05-13 5:42:40 AM 0:01:57
2021-05-13 5:46:10 AM 0:00:51
2021-05-13 5:47:09 AM 0:00:13

2021-05-13 5:48:22 AM 0:08:06

2021-05-13 5:56:36 AM Log End


Monitor Duration 7:36:50
Failure Summary:
Count 67
Total Downtime 6:45:44
% Downtime 88.81
Minimum Length 0:00:05
Maximum Length 0:21:49
Average Length 0:06:03

Wednesday, 12. May 2021

John Philpin : Lifestream

A different way to look at it.

A different way to look at it.



Facebook Moderator Says Karaoke Recommended for Traumatized


Facebook Moderator Says Karaoke Recommended for Traumatized Workers

When you look around the world - oh, say, like what’s happening in Gaza / Israel - this makes me puke.

Those poor poor Facebook workers that are so traumatized … they don’t even know what it means.


🎥🎬The cast for the first Knives Out sequel is shaping up to


🎥🎬The cast for the first Knives Out sequel is shaping up to be pretty awesome.

The ‘first’ sequel? A sequel is worrying enough - that there are more in the hopper is deeply concerning.

Such a crap movie … I guess they are doubling down.


McCarthy is actually claiming no one is ‘questioning the leg

McCarthy is actually claiming no one is ‘questioning the legitimacy’ of the election. Filed in the ‘drive’ bucket.

WeWork’s CEO said people who are most comfortable working fr


WeWork’s CEO said people who are most comfortable working from home are the ‘least engaged’ with their job.

… and there I was thinking that the company might have learned a thing or two … oh well.


Simon Willison

Quoting Using async and await in Flask 2.0


Async functions require an event loop to run. Flask, as a WSGI application, uses one worker to handle one request/response cycle. When a request comes in to an async view, Flask will start an event loop in a thread, run the view function there, then return the result.

Each request still ties up one worker, even for async views. The upside is that you can run async code within a view, for example to make multiple concurrent database queries, HTTP requests to an external API, etc. However, the number of requests your application can handle at one time will remain the same.

Using async and await in Flask 2.0


New Major Versions Released! Flask 2.0, Werkzeug 2.0, Jinja 3.0, Click 8.0, ItsDangerous 2.0, and MarkupSafe 2.0


Huge set of releases from the Pallets team. Python 3.6+ required and comprehensive type annotations. Flask now supports async views, Jinja async templates (used extensively by Datasette) "no longer requires patching", Click has a bunch of new code around shell tab completion, ItsDangerous supports key rotation and so much more.


MyDigitalFootprint

who wins when our diversity creates less diversity?

The Media wins by playing with us
When Education wins, everyone wins
I can win, but the self-interest destroys more
We win by being one together in our diversity
It is not that we lose by doing nothing; someone else gains more
Our paradox is that the more diversity we have, the less diverse we become.

Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-12 Early Morning – 93.6%


Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Michael Herman

=======================================

2021-05-12 3:24:40 AM Log Start

Failure Start Length
2021-05-12 3:26:48 AM 0:00:05
2021-05-12 3:27:08 AM 0:00:16
2021-05-12 3:28:06 AM 0:06:22
2021-05-12 3:34:36 AM 0:01:04
2021-05-12 3:35:48 AM 0:05:23
2021-05-12 3:43:30 AM 0:27:34
2021-05-12 4:11:13 AM 0:00:29
2021-05-12 4:11:56 AM 0:02:18
2021-05-12 4:14:55 AM 0:02:05
2021-05-12 4:17:09 AM 0:05:56
2021-05-12 4:23:13 AM 0:56:42
2021-05-12 5:20:23 AM 0:00:23
2021-05-12 5:21:46 AM 0:00:33
2021-05-12 5:22:27 AM 0:07:16
2021-05-12 5:29:51 AM 0:04:38
2021-05-12 5:34:38 AM 0:17:12
2021-05-12 5:51:58 AM 0:07:41
2021-05-12 5:59:54 AM 0:00:15
2021-05-12 6:02:15 AM 0:00:27
2021-05-12 6:02:50 AM 0:04:07
2021-05-12 6:07:37 AM 0:21:43

2021-05-12 6:29:28 AM 0:07:42

2021-05-12 6:37:22 AM Log End


Monitor Duration 3:12:41
Failure Summary:
Count 22
Total Downtime 3:00:21
% Downtime 93.60
Minimum Length 0:00:05
Maximum Length 0:56:42
Average Length 0:08:11


Net Uptime Monitor (NUM)


The Net Uptime Monitor


What it does…


Is your internet connection unreliable? You’ve probably called your internet provider’s support line and maybe they were able to help you, maybe they even sent out a tech to look at it. But all too often the response is “Well, it’s working fine now!”


The Net Uptime Monitor alerts you to failures in your internet connection and documents the exact time and length of those failures. This failure log will help your provider troubleshoot the problem – after it helps you convince them it’s not your imagination! Net Uptime Monitor is designed to be as simple as possible and accomplish this one purpose accurately and thoroughly with the least effort from you.


How it works…


Net Uptime Monitor (NUM) uses the “Ping” command to test the response from three public servers operated by Google, Level 3, and OpenDNS. (See “What’s a Ping?” below for an explanation.) Each server is pinged in turn at an interval that you can set – normally five seconds. By default, NUM waits 200 milliseconds (2/10 of a second) for the server to respond – at least 3 times as long as a typical broadband internet connection should take.


NUM pings one server at a time; if the server responds, NUM waits the test interval, then pings the next server. If the server doesn’t respond, NUM immediately tries the next server, then the next. If any of the servers respond, then your connection must be working. Only when all three servers fail to respond does NUM determine that your connection is down.


By using three servers, NUM ensures that the problem isn’t just with the server or with some connection on the way to that server, or that the server isn’t momentarily slow or congested.
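
As a rough sketch of the behaviour described above (not NUM’s actual source code), a loop like the following pings a set of public servers and only reports an outage when none of them answer; the server addresses, interval, and timeout values are assumptions based on the description.

using System;
using System.Net.NetworkInformation;
using System.Threading;

class ConnectionMonitorSketch
{
    // Assumed addresses for the Google, Level 3 and OpenDNS public resolvers.
    static readonly string[] Servers = { "8.8.8.8", "4.2.2.2", "208.67.222.222" };

    static void Main()
    {
        using var pinger = new Ping();
        const int timeoutMs = 200;    // "Wait for Ping Response"
        const int intervalMs = 5000;  // "Test Interval"

        while (true)
        {
            bool anyResponded = false;
            foreach (var server in Servers)
            {
                try
                {
                    // If a server answers, the connection is up and we stop early;
                    // if not, fall through immediately to the next server.
                    PingReply reply = pinger.Send(server, timeoutMs);
                    if (reply.Status == IPStatus.Success)
                    {
                        anyResponded = true;
                        break;
                    }
                }
                catch (PingException)
                {
                    // Treated the same as a timeout: try the next server.
                }
            }

            if (!anyResponded)
            {
                // Only when all servers fail to respond is the connection
                // considered down (NUM would also time and log the failure).
                Console.WriteLine($"{DateTime.Now}: connection appears to be down");
            }

            Thread.Sleep(intervalMs);
        }
    }
}

Unlike this simplified loop, NUM rotates through one server per test interval rather than trying all three in a single pass.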


NUM can detect failures as short as a couple of seconds in length, but you can decide how long a failure must be before it really counts. A very short failure of a second or so is not likely to affect your use of the net and isn’t of any real concern. You can set how long a failure must be before NUM alerts you to it and records the failure in its failure log.


Connection is up, no previous failures…

Connection is down, one previous failure…

The display shows the names and IP addresses of each server. The indicator “light” flashes yellow when the ping is sent and shows green for a successful response. The response time of the last ping is shown. When the response time exceeds the time set for “Wait for Ping Response”, the indicator turns red to show no response from that server.


If your connection fails, the current fail length is displayed in red. When the length of the failure exceeds your setting for “Log Failure If Longer Than”, NUM plays an alert sound and writes the failure information into its log.


The display also shows the monitored time (how long the monitor has been running), the time since the last logged failure (up time), the start time and length of the last logged failure, and the total count of logged failures since NUM was started. The current settings for the test interval and the minimum failure length to be logged are shown at the bottom of the display.


Click the minimize button on the NUM window to hide the display. NUM disappears into your system tray in the “notifications area”. The NUM icon is shown in the notification – you can hover over the icon to see the current time since the last failure (“Up Time”) or click the icon to restore the display. In the Settings, you can choose to have a “failure alert” sound play, and/or have the NUM window “pop up”, if a connection failure longer than your minimum setting occurs.


The Log


NUM keeps a log of results in a text file. You can view the current log at any time by clicking the “View Log” button. The log is displayed in a separate window. NUM will continue to update the log even while you are viewing it.
Because the log is a plain text file, you can open it outside of the NUM program. It will open in Notepad or your default text editor, so you can easily edit or print the log.


The log records the start and end time of the monitoring and each failure start time and length. A summary shows the total monitoring time, failure count, total down time, percentage of down time, and the minimum, maximum, and average failure lengths. Here’s an example:

Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Example User

=======================================

8/17/2015 8:44:28 AM Log Start

Failure Start Length
8/17/2015 1:44:25 PM 0:00:44
8/17/2015 1:49:53 PM 0:00:36

8/17/2015 1:52:39 PM 0:01:59

8/18/2015 12:13:17 AM Log End
Monitor Duration 15:28:46
Failure Summary:
Count 3
Total Downtime 0:03:20
% Downtime 0.36
Minimum Length 0:00:36
Maximum Length 0:01:59

Average Length 0:01:06

The example shows date and time in US English format; your log will use the format for your region.
The log files are saved in a folder of your choice; the default is your Documents folder. You can choose a different folder in the Settings.


Also in the Settings, there are two options for the log file:
1) New File Each Run
A new file is created each time NUM starts. Each log file is named with the date and time NUM was started so that they will appear in your directory in chronological order. The file name is in the form of “NetUptime 20150810 134243.txt”. In this example, the date is August 10, 2015 – 20150810 – and the time is 1:42:43 PM – 134243. (A short sketch of this naming pattern follows after these options.)
2) Add to Existing File
Each new log is added to the same single file. The file name is always NetUptime.txt. As long as that file exists in the folder where you have chosen to save the log file, NUM will keep adding to it. If the file doesn’t exist, i.e. it’s been deleted, moved, or renamed, NUM will start a new file.


The Settings

Click the “Change Settings” button on the NUM display to open the Settings window. There are several settings available:


Startup Settings…


· Start when Windows Starts? – Check the box and NUM will automatically start when your computer starts. Uncheck the box and you can start NUM when you want by clicking its desktop icon. The default on installation is checked – NUM starts automatically.
· Start Minimized in Tray? – Check the box and NUM will be minimized in the system tray automatically when it starts. The default on installation is unchecked – NUM starts with the main form displayed.
Test Settings…
· Test Interval – how many seconds between ping tests when the servers are responding. Five seconds is the default. It is possible that NUM will miss a failure that is shorter than the time between tests, so if your connection has very frequent failures of just a few seconds you might choose a shorter test interval. If you don’t have many failures, you may want to test less often. Most connection problems result in less frequent but longer failures, so five seconds is a good choice for most users.
· Wait for Ping Response – the length of time NUM waits for a response after sending a ping. The default setting of 200 milliseconds is recommended for normal situations. If you have a slower internet connection, such as a dialup or mobile connection, or are in a remote area where response is typically slow, you can set the wait time for up to 1500 milliseconds (1.5 seconds). To help you find the best setting for your situation, set the wait time to 1500 milliseconds and observe the ping response times NUM displays when your connection is working normally. Set the wait time to about 1.5 times the typical ping response times you see for efficient detection of outages.
· Change Target Servers – click to open the Target Servers window.

You can edit the IP Address and Name of any of the three servers. Click the Test button to try that server, verifying that it responds and checking the response time.


The default target servers (Google, Level 3, OpenDNS) were selected for their performance and very high reliability. You should only use a different server if you find that one of these servers does not respond reliably in your particular situation. Click “Restore Defaults” to reset the Target Servers to their original values. Changes to the Target Servers take effect the next time the program starts.


Alert and Log Settings…


· Pop Up on Failure? – Check the box and the NUM form will pop up from the system tray when there is a failure. Uncheck the box and NUM will continue to log and alert but it will stay minimized during a failure. The default on installation is checked – if NUM is minimized to the system tray, the main NUM form will be displayed when a failure is logged.
· Alert and Log Failure If Longer Than – the minimum failure length that will be counted, both for the log and the alert of a failure. Five seconds is the default setting.
· Log File Location – the folder where the logs will be stored. Click the “Select Folder” button to browse to the folder you want. The log for the current run of NUM is already started, so a change in this setting will not take effect until the next time you run NUM.
· Log File Option – New File Each Run (the default) or Add to Existing File. See previous section “The Log” for a more detailed explanation.
· Choose Failure Alert Sound – choose the sound NUM makes when a failure is counted. The sound plays when you choose its button so you can preview each one. Choose “None” to silence the alert. Choose “Custom” and click the Select File button to use any .WAV sound file on your system. The default on installation is the “Short” sound.
· Play Reconnect Sound – NUM can play a sound when your internet reconnects after a failure. Choose “None” to silence the reconnect sound. Choose “Custom” and click the Select File button to use any .WAV sound file on your system.


Combine Settings for “Invisible” Operation


NUM can do its job without showing itself or alerting the user to its operation in any way. Choose these settings:
· Start when Windows Starts? – checked.
· Start Minimized in Tray? – checked.
· Pop Up On Failure – unchecked.
· Choose Failure Alert Sound – None.
· Choose Reconnect Sound – None.
With this combination of settings, the user need never be aware of NUM. This is useful in a support situation where you are installing NUM on a computer you aren’t personally using.


What’s a Ping?


“Ping” is a command available on all kinds of computers that tests whether another computer on the network will respond to your computer. It’s named after the sound of submarine sonar systems – they send out a “ping” sound which bounces off their target and they listen for that echo, locating their target. The internet “ping” works in a similar way. You name your target, an internet server, and “ping” it. The ping command and response looks like this (in a DOS command window):


C:\ ping google.com

Pinging google.com [74.125.224.84] with 32 bytes of data:
Reply from 74.125.224.84: bytes=32 time=30ms TTL=54
Reply from 74.125.224.84: bytes=32 time=31ms TTL=54
Reply from 74.125.224.84: bytes=32 time=31ms TTL=54
Reply from 74.125.224.84: bytes=32 time=31ms TTL=54

Ping statistics for 74.125.224.84:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 30ms, Maximum = 31ms, Average = 30ms

A ping command actually generates four requests and the server replies four times. Each response is timed in thousandths of a second (ms = milliseconds). Here we see that the server at google.com responded in about 31/1000 or 3/100 of a second. The internet is fast! – when everything’s working.


Licensing


A license for Net Uptime Monitor removes the time limits from the trial version and lets you use the full program on one computer. To purchase a license or register your license, just click “Trial Version – Click to Register or Purchase License” at the bottom of the NUM main form. If you have your license, enter the License Key code you’ve received and click Register. If you need a license, click Purchase a License to visit our web site and make your purchase.
If you have already registered your copy of NUM, your name and email are shown on the main form. Click the License Info button to see your license key.


Moving to a New Computer or Installing a New Operating System


You must unregister your license before you replace your computer or install a new version of Windows. This will make your license key available again to use on your new system. Just click License Info, click Print This Form to make sure you’ll have the license key, then click Unregister License. The program will go back to Trial mode. You can then reuse your license key to register NUM on any computer.


NETAGO Downtime – 2021-05-11 – 93%


Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Michael Herman

=======================================

2021-05-11 7:03:07 AM Log Start

Failure Start Length
2021-05-11 7:03:38 AM 0:00:46
2021-05-11 7:06:43 AM 0:00:05
2021-05-11 7:07:43 AM 0:02:57
2021-05-11 7:10:49 AM 0:36:32
2021-05-11 7:47:29 AM 0:05:17
2021-05-11 7:53:40 AM 0:07:14
2021-05-11 8:01:02 AM 0:01:42
2021-05-11 8:02:53 AM 0:03:00
2021-05-11 8:06:01 AM 0:00:50
2021-05-11 8:07:18 AM 0:02:56
2021-05-11 8:10:23 AM 0:06:16
2021-05-11 8:17:07 AM 0:06:48
2021-05-11 8:24:49 AM 0:02:43
2021-05-11 8:27:40 AM 0:05:36
2021-05-11 8:33:50 AM 0:00:24
2021-05-11 8:35:28 AM 0:10:01
2021-05-11 8:46:23 AM 0:21:58
2021-05-11 9:09:15 AM 0:13:49
2021-05-11 9:23:39 AM 0:02:06
2021-05-11 9:28:10 AM 0:06:14
2021-05-11 9:35:57 AM 0:12:14
2021-05-11 9:49:11 AM 0:02:55
2021-05-11 9:53:13 AM 0:10:16
2021-05-11 10:03:43 AM 0:07:44
2021-05-11 10:13:39 AM 0:00:24
2021-05-11 10:16:27 AM 0:03:55
2021-05-11 10:20:56 AM 0:07:14
2021-05-11 10:28:24 AM 0:00:20
2021-05-11 10:28:53 AM 0:00:11
2021-05-11 10:29:13 AM 0:00:11
2021-05-11 10:30:11 AM 0:00:41
2021-05-11 10:31:20 AM 0:01:01
2021-05-11 10:32:48 AM 0:06:12
2021-05-11 10:40:07 AM 0:01:14
2021-05-11 10:41:29 AM 0:04:03
2021-05-11 10:46:00 AM 0:08:14
2021-05-11 10:55:34 AM 0:03:37
2021-05-11 11:00:56 AM 0:01:54
2021-05-11 11:03:05 AM 0:06:05
2021-05-11 11:09:38 AM 0:14:24
2021-05-11 11:25:15 AM 0:00:15
2021-05-11 11:25:38 AM 0:04:01
2021-05-11 11:30:08 AM 0:00:25
2021-05-11 11:31:28 AM 0:03:26
2021-05-11 11:35:08 AM 0:03:37
2021-05-11 11:39:33 AM 0:01:45
2021-05-11 11:41:32 AM 0:05:13
2021-05-11 11:47:20 AM 0:16:20
2021-05-11 12:05:19 PM 0:18:31
2021-05-11 12:24:11 PM 0:12:56
2021-05-11 12:37:15 PM 0:08:06
2021-05-11 12:45:30 PM 0:01:18
2021-05-11 12:48:01 PM 0:01:58
2021-05-11 12:50:08 PM 0:05:59
2021-05-11 12:56:15 PM 0:29:48
2021-05-11 1:26:18 PM 0:06:12
2021-05-11 1:32:45 PM 0:16:43
2021-05-11 1:49:43 PM 0:00:11
2021-05-11 1:50:02 PM 0:32:03
2021-05-11 2:22:13 PM 0:17:54
2021-05-11 2:40:41 PM 0:00:06
2021-05-11 2:40:55 PM 0:16:30
2021-05-11 2:59:04 PM 0:00:54
2021-05-11 3:01:44 PM 0:01:18
2021-05-11 3:03:10 PM 0:18:05
2021-05-11 3:21:36 PM 0:01:21
2021-05-11 3:23:51 PM 0:10:10
2021-05-11 3:34:10 PM 0:03:28
2021-05-11 3:39:16 PM 0:06:33
2021-05-11 3:46:24 PM 0:16:42
2021-05-11 4:03:14 PM 0:19:54
2021-05-11 4:25:39 PM 0:12:58
2021-05-11 4:40:09 PM 0:04:53
2021-05-11 4:45:10 PM 0:01:45
2021-05-11 4:47:03 PM 0:06:46
2021-05-11 4:53:58 PM 0:01:08
2021-05-11 4:55:14 PM 0:38:56
2021-05-11 5:34:25 PM 0:13:51
2021-05-11 5:48:30 PM 0:35:29
2021-05-11 6:26:18 PM 0:01:45
2021-05-11 6:28:17 PM 0:09:13
2021-05-11 6:37:45 PM 0:01:16
2021-05-11 6:39:10 PM 0:01:36
2021-05-11 6:42:25 PM 0:34:53
2021-05-11 7:17:27 PM 0:08:36
2021-05-11 7:26:37 PM 0:15:57
2021-05-11 7:42:43 PM 0:04:15
2021-05-11 7:47:06 PM 0:35:10
2021-05-11 8:22:37 PM 0:09:37
2021-05-11 8:32:23 PM 0:13:20
2021-05-11 8:45:51 PM 0:24:59

2021-05-11 9:10:58 PM Log End
Monitor Duration 14:07:50
Failure Summary:
Count 91
Total Downtime 13:08:53
% Downtime 93.05
Minimum Length 0:00:05
Maximum Length 0:38:56
Average Length 0:08:40
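
For reference, the summary figures above can be recomputed from the failure lines themselves: each line ends in an H:MM:SS failure length, and % downtime is total downtime divided by the monitor duration. A minimal Python sketch (not part of Net Uptime Monitor; the two sample lines are copied from the log above, and feeding in all 91 lines reproduces the logged 93.05%):

from datetime import timedelta

def parse_length(text):
    # "0:38:56" -> timedelta
    hours, minutes, seconds = (int(part) for part in text.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

def summarise(failure_lines, monitor_duration):
    # The failure length is the last whitespace-separated field on each line.
    lengths = [parse_length(line.split()[-1]) for line in failure_lines]
    total = sum(lengths, timedelta())
    return {
        "count": len(lengths),
        "total_downtime": total,
        "percent_downtime": round(100 * total / parse_length(monitor_duration), 2),
        "minimum": min(lengths),
        "maximum": max(lengths),
        "average": total / len(lengths),
    }

# Example with two of the lines above and the logged monitor duration:
print(summarise(
    ["2021-05-11 7:03:38 AM 0:00:46", "2021-05-11 4:55:14 PM 0:38:56"],
    "14:07:50",
))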


John Philpin : Lifestream

Norwegian DPA Issues 2.5M EUR Preliminary Fine for U.S. Comp


Norwegian DPA Issues 2.5M EUR Preliminary Fine for U.S. Company Utilizing Web-Tracking IDs

We’ve had prior threads in here about Disqus privacy and its shady tracking.

Looks like they’ve been caught out.

Tuesday, 11. May 2021

Phil Windley's Technometria

Building an SSI Ecosystem: Digital Staff Passports at the NHS


Summary: How does a functioning credential ecosystem get started? This post goes deep on Manny Nijjar’s work to create a program for using digital staff passports in the sprawling UK NHS bureaucracy.

Dr Manny Nijjar is an infectious disease doctor with Whipps Cross Hospital in the UK. He’s also an innovator who quickly saw how verifiable credentials could be applied to health care. I first met Manny at the launch of Sovrin Foundation in London in September 2016. He’s been working to bring this vision to life with his company Truu, ever since.

SSI For Healthcare: Lessons from the NHS Frontline

In this video, Manny discusses why he became interested in digital credentials. He also speaks to the influence medical ethics has had on his journey. In 2015, he was training to become an infectious disease specialist. Manny was the most senior clinician on site in the evenings, in charge of about 500 beds.

Manny kept getting called by, and about, a temporary agency doctor every night. Manny and other medical staff had questions about this doctor’s skills, qualifications, and the decisions he was making. But there were shortages and the hospital needed to fill the gap. Manny was so discouraged by seeing an unqualified physician slip through the cracks that he was about to quit his career, but instead he determined to do something about it.

Serendipitously, Manny came across self-sovereign identity (SSI) at the same time and, as I said, spoke at the launch of Sovrin Foundation. Over the next several years, Manny and his partners worked to create an SSI solution that the National Health Service in the UK could use to instantly verify the identity and skills of temporary and permanent clinical staff. There are three primary problems that this solves:

Patient Safety - Verifying the identity and skills of temporary and permanent clinical staff.
Burden on Clinical Staff - Admin time for repeated identity and pre-employment checks.
Organizational Risk and Operational Inefficiencies - Failure of manual checks. Time and cost to onboard healthcare staff.

Manny’s first thought had been to use a traditional, administrative scheme using usernames and passwords. But he saw the problems with that. He realized a digital credential was a better answer. And his journey into self-sovereign identity commenced.

Manny's paper credentials (click to enlarge)

Over the past five years, Manny and his team at Truu have worked with clinicians, various parts of the NHS, employers, HR departments, and locum agencies to understand their needs and build a solution that fits.

In 2019, Truu conducted a pilot with the NHS where the General Medical Council (GMC) issued “license to practice” credentials to SSI wallets controlled by medical staff. Medical staff could present that credential to Blackpool Teaching Hospitals. The hospital, in turn, issued a “sign in” credential to the staff member who could then use it to log into clinical systems at the hospital.

Digital Credentials for People and Organizations (click to enlarge)

The Covid-19 pandemic increased the pressure on the NHS, making the need to easily move staff between facilities acute. Truu worked with the NHS to use this critical moment to shift to digital credentials and to do it in the right way. Truu’s early work, including the pilot, positioned the idea so that it could be quickly adopted when it was needed most. Digital credentialing in healthcare simplifies onboarding for providers, enables the secure expansion of telehealth services, and enhances information exchange - providing a path to interoperability for healthcare data.

The National Health Service in the UK has a program to issue staff passports to medical personnel, confirming their qualifications and ability to work. NHS staff passports are based on verifiable credentials. Eighty-four NHS organizations are participating to date.

Locations of Participating Organizations in the NHS Staff Passport Program in April 2021 (click to enlarge)

The work that Manny, his team at Truu, and partners like Evernym have done has already had a big impact. The UK Department of Health and Social Care recognized the importance of the program, promising to expand the use of staff passports in their Busting Bureaucracy report. They said:

NHSE/I, NHSX and HEE are working to provide multiple staff groups with access to digital staff passports in line with People Plan commitments to improve workforce agility and to support staff training and development.

Junior doctors, who frequently rotate to different healthcare providers, are being prioritized and the ambition is that they will have access to staff passports in 2021/22. The passports will hold digital credentials representing their skills, competencies and occupational health checks. Other target groups include specialists such as maternity and stroke care staff who often need to be rapidly deployed to a neighboring hospital or care home. The use of digital staff passports will save agency fees and release time for care.

Medical staff passports are catching on in the UK where they are solving real problems that ultimately impact patient care, staff fatigue, and patient access and privacy. The journey hasn’t been short, but the NHS Staff Passport program is illustrative of a successful credential ecosystem.

Related Videos

In this 11 minute video, I explain how trust frameworks function in an ecosystem like the one that the NHS has created.

Phil Windley on Trust Frameworks

In this hour-long meetup, Drummond Reed talks with CU Ledger (now Bonifii) about their work to establish a trust framework for credit union credentials. I’ll be writing more about the credit union industry’s MemberPass credential in a future newsletter.

Trust Frameworks and SSI: An Interview with CULedger on the Credit Union MyCUID Trust Framework

A version of this article was previously published in the Technometria Newsletter, Issue #9, May 4, 2021.

Images are from the SSI For Healthcare: Lessons from the NHS Frontline video referenced above.

Tags: ssi identity use+cases verifiable+credentials healthcare


Decentralized System in a Box


Summary: I’ve been a beekeeper for many years. I love the honey, but I love watching the bees even more. They are a fascinating example of a natural, decentralized system.

I installed a package of bees in a hive over the weekend. You buy bees in packages that contain 15-20 thousand bees and a queen. The queen is in a cage so she is easy to find. Queens give off a pheromone that attracts the other bees in the hive. The queen is the secret to creating legitimacy for the hive (see Legitimacy and Decentralized Systems for more on legitimacy). If the queen is in the new hive, chances are the other bees will see it as their legitimate home and stick around.

Queen in a cage (click to enlarge)

I placed the queen cage in the hive using a rubber band to fix the cage on one of the frames that the bees make honeycomb on. I replaced the cork in the cage with a candy stopper. The bees eat through the candy over the course of a few days and free the queen. Hopefully, by that time, the hive is established and the bees stick around.

After placing the queen cage in the hive, you just dump the bees out on top of the frames. I love this part because thousands of bees are flying everywhere trying to make sense of what just happened. But over the course of an hour or two, the hive coalesces on the queen and most of the bees are inside, getting adjusted to their new home.

Bees on top of the hive frames (click to enlarge)

About an hour after the bees get their new home, they're out on the porch, fanning and taking orientation flights. (click to enlarge)

Besides providing a basis for hive legitimacy, the queen is also the sole reproductive individual, responsible for laying every egg that will be raised in the hive. This is a big job. During the summer, she will lay about 2000 eggs per day and the hive will swell to multiple tens of thousands of bees. But beyond this, the queen’s role is limited. She doesn’t direct the actions of the members of the hive. No one does.

Thermoregulation

So, how does the hive function without central direction? Thermoregulation provides an example. Despite the fact that bees themselves are not homeothermic, the hive is. The bees manage to keep the hive at 93-94°F (34°C) regardless of the outside air temperature.

How do the bees do that? The straightforward answer is that some bees go to the entrance of the hive and fan air to increase circulation when the internal temperature gets too high. When it gets too low, bees cluster in the center and generate heat by shivering.

The more interesting question is “how do the bees know to do that?” All the bees have similar genetic programming (algorithmic governance). But the tasks that they’re inclined to do depend on their age. The youngest workers clean cells, then move onto nursing functions, mortuary activities, guarding the hive, and finally, in the last weeks of their lives, to foraging for water, nectar, and pollen.

Bees have a genetic threshold for carrying out these tasks. The threshold changes as they age. A young bee has a very high threshold for foraging that decreases over her life. Further, these thresholds vary by patriline (even though every bee in the hive has the same mother, there are many fathers), providing diversity.

So as the temperature in the hive climbs, a few bees go down to the hive entrance and fan. As it gets hotter, even more bees will take up the task, depending on their internal threshold. Their genetic programming, combined with the diversity in their thresholds, promotes an even response to temperature swings that could damage the hive. You can read more about hive thermoregulation in an earlier blog post I wrote on the topic.
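
To make the threshold mechanism concrete, here is a toy Python sketch (illustrative only, not drawn from the hive research itself; the threshold range and colony size are arbitrary assumptions): every bee gets its own fanning threshold, and as the temperature climbs past more of those thresholds, more bees fan, with no central coordinator involved.

import random

def fanning_response(temperatures, n_bees=1000, seed=42):
    rng = random.Random(seed)
    # Thresholds vary by individual (diversity by age and patriline).
    thresholds = [rng.uniform(93.0, 98.0) for _ in range(n_bees)]
    # For each hive temperature, count how many bees have crossed their threshold.
    return {
        temp: sum(1 for t in thresholds if temp >= t)
        for temp in temperatures
    }

# As the hive warms past the ~93-94F set point, progressively more bees join in.
for temp, fanners in fanning_response([92, 94, 96, 98]).items():
    print(f"{temp}F: {fanners} bees fanning")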

Swarming and Protecting Against Byzantine Failure

An even more interesting phenomenon is how bees decide to swarm. Because the hive is a super organism, the queen’s efforts to reproduce don’t result in a new hive unless there’s a swarm. Swarming is how new hives are created.

Bees swarm in response to stresses like insufficient food supply, too little space, and so on. But no one really knows how a hive decides it’s time to swarm. In preparation for a swarm, the hive starts to raise new queens. Whether an egg grows into a worker, drone, or queen is determined by how the larva is fed by nurse bees. At some point the bees collectively determine to swarm and the queen produces a pheromone that broadcasts that decision.

The swarm consists of the current queen (and her powerful pheromones), some of the worker bees, and a portion of the honey stores. The swarm leaves the hive and the remaining bees raise the new queen and carry on. The swarm flies a short distance and settles down on some convenient structure to decide where to make their permanent home. Again the swarm centers on the queen. This is where the fun starts.

Thomas Seeley of Cornell has been studying swarms for his entire career. In the following video he describes how bees use collective decision making to choose their new home.

Cornell professor, biologist and beekeeper Thomas Seeley (click to view)

There are several interesting features in this process. First, Seeley has determined that bees don’t just make a good decision, but the best possible decision. I think that’s amazing. Several hundred bees leave the swarm to search for a new home and participate in a debate to choose one of the available sites and settle on the best choice.

This is a process that is potentially subject to byzantine failure. Not that the bees are malicious; in fact they’re programmed to accurately represent their findings. But they can report faulty information based on their judgment of the suitability of a candidate site. The use of reputation signals for sites and voting by multiple inspectors allows the bees to avoid bad decisions even in the face of false signals.

Swarm lodged in a fruit tree in my garden (click to enlarge)

The process is further protected from error because bees are programmed to only advertise sites they’ve actually visited. Again, they don’t have the ability to be malicious. Each bee advertising a potential site has done the work of flying to the site and inspecting it. As bees signal their excitement for that site in a waggle dance, even more bees will fly out to it, perform an inspection, and return to advertise their findings. I don’t know if I’d characterize this as proof of work, but it does ensure that votes are based on real information. Once a quorum of bees in the swarm reach consensus about a particular site, the swarm departs and takes up residence in their new home.
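
The same idea can be sketched as a toy simulation (again illustrative only, not Seeley's actual model; site names, qualities and the quorum size are invented): scouts advertise only sites they have "visited", recruitment is weighted by the advertised quality of each site, and the swarm commits once a quorum forms at a single site.

import random

def choose_site(site_quality, n_scouts=300, quorum=240, seed=7):
    rng = random.Random(seed)
    sites = list(site_quality)
    # Each scout starts by inspecting a random site.
    commitments = [rng.choice(sites) for _ in range(n_scouts)]
    while True:
        counts = {site: commitments.count(site) for site in sites}
        best = max(counts, key=counts.get)
        if counts[best] >= quorum:
            return best, counts
        # Scouts re-sample sites weighted by quality times how strongly each
        # site is currently being advertised (a crude waggle-dance feedback loop).
        weights = [site_quality[s] * counts[s] for s in sites]
        commitments = rng.choices(sites, weights=weights, k=n_scouts)

# The positive feedback tends to concentrate the scouts on the best site.
print(choose_site({"attic": 6, "hollow tree": 9, "wall cavity": 4}))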

Honeybee Democracy by Thomas D. Seeley

Honeybees make decisions collectively--and democratically. Every year, faced with the life-or-death problem of choosing and traveling to a new home, honeybees stake everything on a process that includes collective fact-finding, vigorous debate, and consensus building. In fact, as world-renowned animal behaviorist Thomas Seeley reveals, these incredible insects have much to teach us when it comes to collective wisdom and effective decision making.

You may not be thrilled if a swarm determines the best new home is in your attic, but you can be thrilled with the knowledge that ten thousand decentralized bees with sophisticated algorithmic programming achieved consensus and ranked it #1.

The hive is a super organism with its intelligence spread out among its tens of thousands of members. Life and death decisions are made on a daily basis in a completely decentralized fashion. Besides thermoregulation of the hive and finding a new home, the bees in a hive autonomously make millions of other decentralized decisions every day that result in the hive not only surviving but thriving in hostile conditions. I find that remarkable.

Tags: decentralization identity legitimacy


Ben Werdmüller

Disrespect for the hustle


My favorite working environments have all been like liberal arts colleges: spaces where people were trying to do their best work, often quietly, with a great deal of introspection. Here, people asked questions about how they could do meaningful work that uplifted and empowered communities.

The worst - multiple startups - have been aggressively confrontational, where the emphasis was on hustling to get people in the door by any means necessary.

My friend Roxann Stafford introduced me to this great quote from the labor organizer General Baker:

You keep asking how do we get the people here? I say, what will we do when they get here?

While it’s true that the Field of Dreams user acquisition strategy doesn’t work - even if you build it, they won’t necessarily come, so you’d better figure out how to reach out to the right people - it can only be a fragment of the product strategy. If you let hustle culture take over the entire business, you run the risk of spending all your time on how to get people there, and comparatively little on what you’ll do when they arrive. At best, you’ll end up with a superficial product; at worst, a disingenuous one. You might find yourself accidentally creating a culture where it’s okay to say just about anything to get people in the door.

The thing is, when you’re running out of money, or when you don’t have any to begin with, getting more is an imperative. As much as money is a pain in the ass, it’s necessary to keep the lights on, and to grow.

Newsrooms used to have a way to deal with this: a firewall between editorial and advertising departments. Because the value of a news publication is in the information it provides, regardless of financial influence, the need to make money has been kept siloed away. When, latterly, some newsrooms began to remove this firewall and allow financial considerations to affect the content of their coverage, the quality of their reporting (and public trust thereof) noticeably declined.

The same is true in software. When hustle culture becomes the product, the incentive to provide real, deep value to your community of users is undermined. You’ll deliver worse products. That isn’t to say that sales and marketing are not valuable: they’re absolutely vital. But a startup (or a project, or a traditional business) can’t let sales and marketing drive the ship. It’s the product team’s job to build something that deeply serves a need, including by identifying the first community of people to understand, co-design with, and serve.

Marketing, in the traditional sense, is the act of understanding that market and positioning a product to reach it (although it’s often reductively conflated with advertising). The sales folks - the hustlers - close the deals. These things are important parts of a complete, delicious breakfast, but they can’t be the whole breakfast.

Nothing absolves you from building a meaningful product, obsessing over every detail, and taking care in its craft and design. It’s hard to do that if your whole focus is on leads. Why do you exist? Who are you helping? How? These questions can’t just be a story you tell - they have to be your deeply-held reason for existing.

You keep asking how do we get the people here? I say, what will we do when they get here?

That’s the question that matters.

 

Photo by Garrhet Sampson on Unsplash


MyDigitalFootprint

Dashboards - we love them, but why do they love us?



Subject: Agenda item for our away day on strategy and scenarios

To: CEO and senior exec team

We should congratulate ourselves on the progress made; however, as your CDO, I am now going to make the case that we measure too much, have too much data, and that as a team we should reflect on the next thing that data can support us in!

We have bought into “Data is the new oil,” and whilst we know the analogy breaks down below the veneer, the message is beautifully simple and has empowered the change to a data and digital business. The global pandemic has accelerated our adoption and transformation, and we are in a better place than March 2020. However, sticking with oil, we know that the extraction process has downsides, including carbon release, mess, and the difficulty of locating economic wells. Amongst data’s most significant downsides are legal liabilities, noise and the wrong data.

I can easily hide data’s downsides through dashboards. Our dashboards are based on trickle-down KPIs and objectives from our strategic plan. We hand out SMART objectives, but such objectives fundamentally assume that we are and continue to do the right thing. We gather data and, after analysis, present it as trending graphs or red, amber, green dashboards, aiming for everything to be going in the right direction or green. Green only means we have no negative variance between our plan and the actual. Organisationally we are incentivised to achieve green at any unknown cost or consequence. Trending analysis can always be generated through the selection of data to fit the story.

Right now, we are using data to measure, analyse and inform. We have better control of variance across our entire operations and ecosystem than at any previous point in our history. We increasingly have to depend on dashboards to manage the complicated and complex, remain informed, and determine where our energies should be focussed. Without a doubt, this should remain and continue to be improved, but we are already at a point of diminishing returns with the data we collect, due to the increase in noise over signal, and should aim to cull rather than add.

As an agenda item: should we introduce the colour BLUE into our reporting, trends, and dashboards to reduce reporting and data? The traditional traffic lights remain the same; blue is not replacing any of them. It can become the colour that allows us to know we are doing the right thing, and not just doing, in the most efficient and effective way, the wrong thing that happens to be in the plan (which can show as green or red).

As a team, we have to feed our sense of enquiry, and our existing data gathering, analysis, and reporting do not do justice to the complexity we are faced with. More data does not solve complexity. Data has allowed us to evolve beyond finance data as the most critical decision-making source and become more sensitive. Whilst we have far more data, it is still narrow, and we should consider how we prepare for the next evolution, which will not be more of the same data. Following on from customer data and the start in ESG, the next data set we are being mandated to report on is Human Capital.

Human capital reporting opens up our ability to sense what our employees and ecosystem are sensing, seeing and implying, helping us to determine, using technologies such as sentiment analysis, if we are doing the right thing. Where are issues occurring that are not on the dashboard? What, as yet, unidentified tensions and conflicts are created by our current trickle-down objective/incentive system? However, as you can imagine, big brother, privacy, and trust are foundational issues we need to discuss up front before this next evolutionary step hits us. This next phase of our data journey means we will find it harder to hide in the selection of data for trends and dashboards or to just seek the right trend or green. This is more data, but different data, and it will fill a gap in our knowledge, meaning we will be better informed about complex decisions.

I would like to present for 15 minutes on this topic and host a 45-minute debate with your approval.

Your CDO



John Philpin : Lifestream

”Henry James had a mind so fine that no idea could violate


”Henry James had a mind so fine that no idea could violate it.”

T.S. Eliot


The politicians who tried to overturn an election — and the


The politicians who tried to overturn an election — and the local news team that won’t let anyone forget it.

”A simple rule for network TV producers. Don’t book election denialists if they haven’t publicly retracted.”

Simple is good.


JetBlue’s Founder Is Preparing to Launch a New Airline in a


JetBlue’s Founder Is Preparing to Launch a New Airline in a Global Pandemic

”Breeze CEO David Neeleman thinks masks are for morons and this is actually a great time to start an airline.”

Morons sometimes start airlines.

Monday, 10. May 2021

Simon Willison

Django SQL Dashboard


I've released the first non-alpha version of Django SQL Dashboard, which provides an interface for running arbitrary read-only SQL queries directly against a PostgreSQL database, protected by the Django authentication scheme. It can also be used to create saved dashboards that can be published or shared internally.

I started building this tool back in March as part of my work to port VaccinateCA away from Airtable to a custom Django backend. One of the strengths of Airtable is that it allows ad-hoc data exploration and reporting, and I wanted to provide an alternative to that for the new Django backend.

I also wanted to try out some new ideas for Datasette, which doesn't (yet) work with PostgreSQL.

First, a demo

I recorded this three minute video demo introducing the software, using my blog's database as an example.

In the video I run the following SQL queries to explore the many-to-many table that maps tags to my blog entries:

select * from blog_entry_tags;

The table starts out looking like this - not particularly interesting:

Then I run this query to join it against the blog_tag table and get the details of each tag:

select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id

This is a bit more useful. I then click on the "count" link at the top of that "tag" column. This constructs a SQL query for me that uses a count(*) and group by to return a count of each value in that column:

select "tag", count(*) as n from ( select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id ) as results group by "tag" order by n desc

Then I demonstrate some of the default widget visualizations that come with Django SQL Dashboard. If I rewrite the query to return columns called bar_label and bar_quantity the tool will render the results as a bar chart:

select "tag" as bar_label, count(*) as bar_quantity from ( select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id ) as results group by "tag" order by bar_quantity desc

Next, I demonstrate a similar trick that instead produces a word cloud by aliasing the columns to wordcloud_word and wordcloud_count:

select "tag" as wordcloud_word, count(*) as wordcloud_count from ( select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id ) as results group by "tag" order by wordcloud_count desc

Finally, I show how that query can be turned into a saved dashboard and made available to the public. Here's the saved dashboard I created in the video:

https://simonwillison.net/dashboard/tag-cloud/

This illustrates a key idea underlying both Django SQL dashboard and Datasette: a complete application can be defined as a SQL query!

Much of the work we do as web application developers can be boiled down to constructing a SQL query and hooking it up to output to a web page. If you can safely execute SQL queries from page query strings this means you can build custom applications that exist entirely as bookmarkable URLs.

My draw-a-shape-on-a-map application for searching mini parks in California from a few months ago is another example of this pattern in action.

Custom widgets

Building new custom widgets for this tool is extremely easy - hence the word cloud widget which I actually built specially for this demo. All you need to provide is a single Django template file.

If your widget is going to respond to returned columns wordcloud_word and wordcloud_count the name of that template is those columns, sorted alphabetically and joined with hyphens:

wordcloud_count-wordcloud_word.html

Place that in a django_sql_dashboard/widgets template directory and the new widget will be ready to use. Here's the full implementation of the word cloud widget.
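
As a tiny worked example of that naming rule (illustrative only, not code from the package itself), deriving the template name from the returned column names looks like this:

def widget_template_name(columns):
    # Columns sorted alphabetically, joined with hyphens, plus ".html"
    return "-".join(sorted(columns)) + ".html"

print(widget_template_name(["wordcloud_word", "wordcloud_count"]))
# -> wordcloud_count-wordcloud_word.html
print(widget_template_name(["bar_label", "bar_quantity"]))
# -> bar_label-bar_quantity.html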

Named parameter support

This is a feature I lifted directly from Datasette. You can construct SQL queries that look like this:

select * from blog_entry where id = %(id)s

This uses psycopg2 syntax for named parameters. The value will be correctly quoted and escaped, so this is a useful tool for avoiding SQL injection attacks.
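
Outside the dashboard, the same psycopg2 mechanism looks like this: a minimal sketch with a placeholder connection string, showing that the value is passed separately from the SQL and escaped by the driver rather than interpolated into the query string.

import psycopg2

connection = psycopg2.connect("dbname=blog")  # placeholder connection string
with connection.cursor() as cursor:
    cursor.execute(
        "select id, title from blog_entry where id = %(id)s",
        {"id": 7991},  # user-supplied value, bound and escaped by psycopg2
    )
    print(cursor.fetchone())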

Django SQL Dashboard spots these parameters and turns them into form fields. Here's what that looks like in the interface:

These forms submit using GET, so the result can be bookmarked. Here's a saved dashboard you can use to retrieve the details of any of my blog entries by their numeric ID:

https://simonwillison.net/dashboard/blog-entry-by-id/?id=7991

You can include multiple SQL parameters on a single dashboard, and any form parameters will be made available to all of those queries.

This means you can build dashboards that run multiple queries against the same arguments. Imagine for example you want to build a report about a specific user's activity across multiple tables - you can accept their user ID as a parameter, then display the output of multiple queries (including custom visualizations) that each refer to that parameter.

Export through copy and paste

I love copy and paste as a mechanism for exporting data from a system. Django SQL Dashboard embraces this in a couple of ways:

Results from SQL queries can be copied out as TSV from an expandable textarea below the table - up to 1,000 rows. I like this format because you can paste it directly into Google Sheets or Excel to get the data correctly split into cells.

Any time JSON is returned as a value from PostgreSQL, a "copy to clipboard" icon is shown next to the JSON. I use this a lot: both for JSON stored in PostgreSQL as well as the output from JSON aggregation functions.

Export all query results as CSV/TSV

This comes up a lot at Vaccinate CA: we do a lot of data analysis where we need to work with other tools or send data to partners, and having a way to export the full set of results for a query (rather than truncating at the first thousand to avoid crashing the user's browser) was a frequent need. Django SQL Dashboard provides this option using a combination of Django's streaming HTTP response mechanism and PostgreSQL server-side cursors to efficiently stream large amounts of data without running out of resources.
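
A stripped-down sketch of that combination (placeholder connection string, query and filename; not the library's actual implementation) might look like this:

import csv
import io

import psycopg2
from django.http import StreamingHttpResponse

def stream_query_as_csv(request):
    # Placeholder connection; a real app would reuse Django's database settings.
    connection = psycopg2.connect("dbname=blog")
    # A named cursor is a PostgreSQL server-side cursor: rows are fetched in
    # batches instead of being loaded into memory all at once.
    cursor = connection.cursor(name="dashboard_export")
    cursor.itersize = 1000
    cursor.execute("select * from blog_entry_tags")

    def rows():
        buffer = io.StringIO()
        writer = csv.writer(buffer)
        wrote_header = False
        for row in cursor:
            if not wrote_header:
                # description is populated once the first batch has been fetched
                writer.writerow([column[0] for column in cursor.description])
                wrote_header = True
            writer.writerow(row)
            yield buffer.getvalue()
            buffer.seek(0)
            buffer.truncate(0)

    response = StreamingHttpResponse(rows(), content_type="text/csv")
    response["Content-Disposition"] = 'attachment; filename="export.csv"'
    return response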

A complex example: searching code examples across my blog

I decided to see how far I could push PostgreSQL.

I often include code in my blog entries - examples that are wrapped in a <pre> tag. Within that tag I sometimes apply syntax highlighting (a bunch of <span> elements).

It turns out I've included code snippets in 134 different blog entries:

select count(*) from blog_entry where body ~ '<pre>.*<pre>'

Can I use regular expressions in PostgreSQL to extract just the code examples, clean them up (removing those spans, reversing HTML entity encoding) and then provide simple search across the text of those examples, all in one query?

It turns out I can!

Here's a saved dashboard you can use to execute searches against just the contents of those <pre> tags across every entry on my blog:

https://simonwillison.net/dashboard/code-examples/?search=select

with results_stripped as (
  select
    id,
    title,
    replace(replace(replace(replace(replace(regexp_replace(
      (regexp_matches(body, '<pre>(.*?)</pre>', 'g'))[1],
      E'<[^>]+>', '', 'gi'
    ), '&quot' || chr(59), '"'),
    '&gt' || chr(59), '>'),
    '&lt' || chr(59), '<'),
    '&#039' || chr(59), chr(39)),
    '&amp' || chr(59), '&'
    ) as code
  from blog_entry
  where body ~ '<pre>.*<pre>'
)
select
  id,
  title,
  code,
  'https://simonwillison.net/e/' || id as link
from results_stripped
where code like '%%' || %(search)s || '%%'
limit 10

There's a lot going on here. The key component is this bit:

(regexp_matches(body, '<pre>(.*?)</pre>', 'g'))[1]

The regexp_matches() function, with the 'g' flag, returns every match for the given regular expression. As part of a larger select query this means that if the expression matches three times you'll get back three rows in the output (in this case with duplicate id and title columns) - which is what I want here.

It's wrapped in a terrifying nest of extra functions. These serve two purposes: they strip out any nested HTML tags, and they un-escape the &quot;, &lt;, &gt;, &amp; and &#039; HTML entities. I did this as a nested block of replace() functions - there's probably a neater solution here.

The chr(59) bits are a hack: Django SQL Dashboard disallows the ; character to ensure people can't execute multiple SQL queries - which could be used to work around some of the per-transaction protective settings applied by the tool.

But I need to search-and-replace &quot; - so I use this pattern to include the semicolon:

replace(text, '&quot' || chr(59), '"')

Where || is the PostgreSQL string concatenation operator.

The search itself is constructed like this:

where code like '%%' || %(search)s || '%%'

This constructs a like query against '%your-search-term%' - the double percentage sign escaping is needed because % has a special meaning here (it's part of the %(search)s named parameter).

One last trick: the final output of the query is produced by this:

select id, title, code, 'https://simonwillison.net/e/' || id as link from results_stripped

results_stripped is a CTE defined earlier - I usually try to wrap up complex weird stuff like those nested replace() calls in a CTE so I can write a simple final query.

The 'https://simonwillison.net/e/' || id as link bit here concatenates together a URL that links to my entry based on its ID. My blog uses /yyyy/Mon/slug/ URLs but generating these from a SQL query against the created column was a little fussy, so I added /e/ID redirecting URLs to make generating links in dashboard queries easier.

Future plans

Django SQL Dashboard has already proved itself invaluable for my current project. I imagine I'll be using it for every Django project I build going forward - being able to query the database like this, create ad-hoc visualizations and then link to them is a huge productivity boost.

The bigger question is how it overlaps with Datasette.

Datasette has been SQLite-only since I started the project three and a half years ago - because I know that building a database abstraction layer is a huge additional commitment and, for Datasette's initial purpose of helping publish read-only data, it didn't feel necessary.

I have a growing suspicion that getting Datasette to work against PostgreSQL (and other database backends) in addition to SQLite is less work than I had originally thought.

Datasette is also built on top of ASGI. Django 3.0 introduced ASGI support, so it's now possible to host ASGI applications like Datasette as part of a unified Django application.

So it's possible that the future of Django SQL Dashboard will be for Datasette to eventually make it obsolete.

That doesn't stop it from being extremely useful today. If you try it out I'd love to hear from you! I'm also keen to see people start to expand it for their own projects, especially via the custom widgets mechanism.

Let me know if you try it out!

TIL this week: Scroll page to form if there are errors

Releases this week: django-sql-dashboard: 0.12 - (22 releases total) - 2021-05-08
Django app for building dashboards using raw SQL queries

Damien Bod

Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR


This article shows how to use verifiable credentials stored in a digital wallet to verify a digital identity and use it in an application. For this to work, a trust needs to exist between the verifiable credential issuer and the application which requires the verifiable credentials to verify. A blockchain decentralized database is used and MATTR is used as an access layer to this ledger and blockchain. The applications are implemented in ASP.NET Core.

The verifier application Bo Insurance is used to implement the verification process and to create a presentation template. The application sends an HTTP POST request to create a presentation request using the DID Id from the OIDC credential Issuer, created in the previous article. This DID is created from the National Driving license application which issues verifiable credentials, and so a trust needs to exist between the two applications. Once the credentials have been issued to a holder of the verifiable credentials and stored, for example, in a digital wallet, the issuer is no longer involved in the process. Verifying the credentials only requires the holder, the verifier and the decentralized database which holds the digital identities and documents. The verifier application gets the DID from the ledger and signs the verify request. The request can then be presented as a QR Code. The holder can scan this using a MATTR digital wallet and grant consent to share the credentials with the application. The digital wallet calls the callback API defined in the request presentation body and sends the data to the API. The verifier application hosting the API would need to verify the data and can update the application UI using SignalR to continue the business process with the verified credentials.

Code https://github.com/swiss-ssi-group/MattrGlobalAspNetCore

Blogs in the series

Getting started with Self Sovereign Identity SSI
Create an OIDC credential Issuer with MATTR and ASP.NET Core
Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR

Create the presentation template for the Verifiable Credential

A presentation template is required to verify the issued verifiable credentials stored on a digital wallet.

The digital identity (DID) Id of the OIDC credential issuer is all that is required to create a presentation request template. In the application which issues credentials, i.e. the NationalDrivingLicense, a Razor page was created to view the DID of the OIDC credential issuer.

The DID can be used to create the presentation template. The MATTR documentation is really good here:

https://learn.mattr.global/tutorials/verify/presentation-request-template

A Razor page was created to start this task from the UI. This would normally require authentication as this is an administrator task from the application requesting the verified credentials. The code behind the Razor page takes the DID request parameter, calls the MattrPresentationTemplateService to create the presentation template and persists the result in a database.

public class CreatePresentationTemplateModel : PageModel
{
    private readonly MattrPresentationTemplateService _mattrVerifyService;

    public bool CreatingPresentationTemplate { get; set; } = true;

    public string TemplateId { get; set; }

    [BindProperty]
    public PresentationTemplate PresentationTemplate { get; set; }

    public CreatePresentationTemplateModel(MattrPresentationTemplateService mattrVerifyService)
    {
        _mattrVerifyService = mattrVerifyService;
    }

    public void OnGet()
    {
        PresentationTemplate = new PresentationTemplate();
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        TemplateId = await _mattrVerifyService.CreatePresentationTemplateId(PresentationTemplate.DidId);
        CreatingPresentationTemplate = false;

        return Page();
    }
}

public class PresentationTemplate
{
    [Required]
    public string DidId { get; set; }
}

The Razor page html template creates a form to post the request to the server rendered page and displays the templateId after, if the creation was successful.

@page @model BoInsurance.Pages.CreatePresentationTemplateModel <div class="container-fluid"> <div class="row"> <div class="col-sm"> <form method="post"> <div> <div class="form-group"> <label class="control-label">DID ID</label> <input asp-for="PresentationTemplate.DidId" class="form-control" /> <span asp-validation-for="PresentationTemplate.DidId" class="text-danger"></span> </div> <div class="form-group"> @if (Model.CreatingPresentationTemplate) { <input class="form-control" type="submit" readonly="@Model.CreatingPresentationTemplate" value="Create Presentation Template" /> } </div> <div class="form-group"> @if (!Model.CreatingPresentationTemplate) { <div class="alert alert-success"> <strong>Mattr Presentation Template created</strong> </div> } </div> </div> </form> <hr /> <p>When the templateId is created, you can use the template ID to verify</p> </div> <div class="col-sm"> <div> <img src="~/ndl_car_01.png" width="200" alt="Driver License"> <div> <b>Driver Licence templateId from presentation template</b> <hr /> <dl class="row"> <dt class="col-sm-4">templateId</dt> <dd class="col-sm-8"> @Model.TemplateId </dd> </dl> </div> </div> </div> </div> </div>

The MattrPresentationTemplateService is used to create the MATTR presentation template. This class uses the MATTR API and sends an HTTP POST request with the DID Id of the OIDC credential issuer to create a presentation template. The service saves the returned payload to a database and returns the template ID as the result. The template ID is required to verify the verifiable credentials.

The MattrTokenApiService is used to request an API token for the MATTR API using the credential of your MATTR account. This service has a simple token cache and only requests new access tokens when no token exists or the token has expired.

The BoInsuranceDbService service is used to access the SQL database using Entity Framework Core. This provides simple methods to persist or select the data as required.

private readonly IHttpClientFactory _clientFactory; private readonly MattrTokenApiService _mattrTokenApiService; private readonly BoInsuranceDbService _boInsuranceDbService; private readonly MattrConfiguration _mattrConfiguration; public MattrPresentationTemplateService(IHttpClientFactory clientFactory, IOptions<MattrConfiguration> mattrConfiguration, MattrTokenApiService mattrTokenApiService, BoInsuranceDbService boInsuranceDbService) { _clientFactory = clientFactory; _mattrTokenApiService = mattrTokenApiService; _boInsuranceDbService = boInsuranceDbService; _mattrConfiguration = mattrConfiguration.Value; } public async Task<string> CreatePresentationTemplateId(string didId) { // create a new one var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(didId); // save to db var drivingLicensePresentationTemplate = new DrivingLicensePresentationTemplate { DidId = didId, TemplateId = v1PresentationTemplateResponse.Id, MattrPresentationTemplateReponse = JsonConvert .SerializeObject(v1PresentationTemplateResponse) }; await _boInsuranceDbService .CreateDriverLicensePresentationTemplate(drivingLicensePresentationTemplate); return v1PresentationTemplateResponse.Id; } private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate(string didId) { HttpClient client = _clientFactory.CreateClient(); var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken"); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json"); var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(client, didId); return v1PresentationTemplateResponse; }

The CreateMattrPresentationTemplate method sends the HTTP POST request as described in the MATTR API documentation. Creating the payload for the HTTP POST request using the MATTR Open API definitions is a bit complicated. This could be improved with a better Open API definition. In our use case, we just want to create the default template for the OIDC credential issuer and so just require the DID Id. Most of the other properties are fixed values; see the MATTR API docs for more information.

private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate( HttpClient client, string didId) { // create presentation, post to presentations templates api // https://learn.mattr.global/tutorials/verify/presentation-request-template var createPresentationsTemplatesUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/presentations/templates"; var additionalProperties = new Dictionary<string, object>(); additionalProperties.Add("type", "QueryByExample"); additionalProperties.Add("credentialQuery", new List<CredentialQuery> { new CredentialQuery { Reason = "Please provide your driving license", Required = true, Example = new Example { Context = new List<object>{ "https://schema.org" }, Type = "VerifiableCredential", TrustedIssuer = new List<TrustedIssuer2> { new TrustedIssuer2 { Required = true, Issuer = didId // DID use to create the oidc } } } } }); var payload = new MattrOpenApiClient.V1_CreatePresentationTemplate { Domain = _mattrConfiguration.TenantSubdomain, Name = "certificate-presentation", Query = new List<Query> { new Query { AdditionalProperties = additionalProperties } } }; var payloadJson = JsonConvert.SerializeObject(payload); var uri = new Uri(createPresentationsTemplatesUrl); using (var content = new StringContentWithoutCharset(payloadJson, "application/json")) { var presentationTemplateResponse = await client.PostAsync(uri, content); if (presentationTemplateResponse.StatusCode == System.Net.HttpStatusCode.Created) { var v1PresentationTemplateResponse = JsonConvert .DeserializeObject<MattrOpenApiClient.V1_PresentationTemplateResponse>( await presentationTemplateResponse.Content.ReadAsStringAsync()); return v1PresentationTemplateResponse; } var error = await presentationTemplateResponse.Content.ReadAsStringAsync(); } throw new Exception("whoops something went wrong"); }

The application can be started and the presentation template can be created. The ID is returned to the UI for the next step.

Verify the verifiable credentials

Now that a template exists to request the verifiable data from the holder of the data, which is normally stored in a digital wallet, the verifier application can create and start a verification process. A POST request is sent to the MATTR APIs which creates a presentation request using a DID ID and the required template. The application can request the DID from the OIDC credential issuer. The request is signed using the correct key from the DID and the request is published in the UI as a QR Code. A digital wallet is used to scan the code and the user of the wallet can grant consent to share the personal data. The wallet sends an HTTP POST request to the callback API. This API handles the request, validates the data and updates the UI using SignalR to move to the next step of the business process using the verified data.

Step 1 Invoke a presentation request

The InvokePresentationRequest method implements the presentation request. This method requires the DID Id of the OIDC credential issuer which will be used to get the data from the holder of the data. The template ID is also required from the template created above. A challenge is also used to track the verification. The challenge is a random value and is used when the digital wallet calls the API with the verified data. The callback URL is where the data is returned to. This could be unique for every request or anything you want. The payload is created as the MATTR API docs define. The POST request is sent to the MATTR API and a V1_CreatePresentationRequestResponse is returned if all is configured correctly.

private async Task<V1_CreatePresentationRequestResponse> InvokePresentationRequest(
    HttpClient client, string didId, string templateId,
    string challenge, string callbackUrl)
{
    var createDidUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/presentations/requests";

    var payload = new MattrOpenApiClient.V1_CreatePresentationRequestRequest
    {
        Did = didId,
        TemplateId = templateId,
        Challenge = challenge,
        CallbackUrl = new Uri(callbackUrl),
        ExpiresTime = MATTR_EPOCH_EXPIRES_TIME_VERIFIY // Epoch time
    };
    var payloadJson = JsonConvert.SerializeObject(payload);

    var uri = new Uri(createDidUrl);
    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var response = await client.PostAsync(uri, content);
        if (response.StatusCode == System.Net.HttpStatusCode.Created)
        {
            var v1CreatePresentationRequestResponse = JsonConvert
                .DeserializeObject<V1_CreatePresentationRequestResponse>(
                    await response.Content.ReadAsStringAsync());

            return v1CreatePresentationRequestResponse;
        }

        var error = await response.Content.ReadAsStringAsync();
    }

    return null;
}

Step 2 Get the OIDC Issuer DID

The RequestDID method uses the MATTR API to get the DID data from the blockchain for the OIDC credential issuer. Only the DID Id is required.

private async Task<V1_GetDidResponse> RequestDID(string didId, HttpClient client)
{
    var requestUrl = $"https://{_mattrConfiguration.TenantSubdomain}/core/v1/dids/{didId}";
    var uri = new Uri(requestUrl);

    var didResponse = await client.GetAsync(uri);
    if (didResponse.StatusCode == System.Net.HttpStatusCode.OK)
    {
        var v1CreateDidResponse = JsonConvert.DeserializeObject<V1_GetDidResponse>(
            await didResponse.Content.ReadAsStringAsync());

        return v1CreateDidResponse;
    }

    var error = await didResponse.Content.ReadAsStringAsync();
    return null;
}

Step 3 Sign the request using correct key and display QR Code

To verify data using a digital wallet, the payload must be signed using the correct key. The SignAndEncodePresentationRequestBody method uses the DID payload and the request from the presentation request to create the payload to sign. Creating the payload is a bit messy due to the OpenAPI definitions created for the MATTR API. An HTTP POST request with the payload returns the signed JWT wrapped in a strange data format, so we parse this as a string and manually extract the JWT payload.

private async Task<string> SignAndEncodePresentationRequestBody(
    HttpClient client, V1_GetDidResponse did,
    V1_CreatePresentationRequestResponse v1CreatePresentationRequestResponse)
{
    var createDidUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/messaging/sign";

    object didUrlArray;
    did.DidDocument.AdditionalProperties.TryGetValue("authentication", out didUrlArray);
    var didUrl = didUrlArray.ToString().Split("\"")[1];

    var payload = new MattrOpenApiClient.SignMessageRequest
    {
        DidUrl = didUrl,
        Payload = v1CreatePresentationRequestResponse.Request
    };
    var payloadJson = JsonConvert.SerializeObject(payload);

    var uri = new Uri(createDidUrl);
    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var response = await client.PostAsync(uri, content);
        if (response.StatusCode == System.Net.HttpStatusCode.OK)
        {
            var result = await response.Content.ReadAsStringAsync();
            return result;
        }

        var error = await response.Content.ReadAsStringAsync();
    }

    return null;
}

The CreateVerifyCallback method uses the presentation request, the get DID and the sign HTTP POST requests to create a URL which can be displayed in a UI. The challenge is created using the RNGCryptoServiceProvider class which creates a random string. The access token to access the API is returned from the client credentials OAuth requests or from the in-memory cache. The DrivingLicensePresentationVerify class is persisted to a database and the verify URL is returned so that this can be displayed as a QR Code in the UI.

/// <summary> /// https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e /// </summary> /// <param name="callbackBaseUrl"></param> /// <returns></returns> public async Task<(string QrCodeUrl, string ChallengeId)> CreateVerifyCallback(string callbackBaseUrl) { callbackBaseUrl = callbackBaseUrl.Trim(); if (!callbackBaseUrl.EndsWith('/')) { callbackBaseUrl = $"{callbackBaseUrl}/"; } var callbackUrlFull = $"{callbackBaseUrl}{MATTR_CALLBACK_VERIFY_PATH}"; var challenge = GetEncodedRandomString(); HttpClient client = _clientFactory.CreateClient(); var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken"); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json"); var template = await _boInsuranceDbService.GetLastDriverLicensePrsentationTemplate(); // Invoke the Presentation Request var invokePresentationResponse = await InvokePresentationRequest( client, template.DidId, template.TemplateId, challenge, callbackUrlFull); // Request DID V1_GetDidResponse did = await RequestDID(template.DidId, client); // Sign and Encode the Presentation Request body var signAndEncodePresentationRequestBodyResponse = await SignAndEncodePresentationRequestBody( client, did, invokePresentationResponse); // fix strange DTO var jws = signAndEncodePresentationRequestBodyResponse.Replace("\"", ""); // save to db // TODO add this back once working var drivingLicensePresentationVerify = new DrivingLicensePresentationVerify { DidId = template.DidId, TemplateId = template.TemplateId, CallbackUrl = callbackUrlFull, Challenge = challenge, InvokePresentationResponse = JsonConvert.SerializeObject(invokePresentationResponse), Did = JsonConvert.SerializeObject(did), SignAndEncodePresentationRequestBody = jws }; await _boInsuranceDbService.CreateDrivingLicensePresentationVerify(drivingLicensePresentationVerify); var qrCodeUrl = $"didcomm://https://{_mattrConfiguration.TenantSubdomain}/?request={jws}"; return (qrCodeUrl, challenge); } private string GetEncodedRandomString() { var base64 = Convert.ToBase64String(GenerateRandomBytes(30)); return HtmlEncoder.Default.Encode(base64); } private byte[] GenerateRandomBytes(int length) { using var randonNumberGen = new RNGCryptoServiceProvider(); var byteArray = new byte[length]; randonNumberGen.GetBytes(byteArray); return byteArray; }

The CreateVerifierDisplayQrCodeModel is the code behind for the Razor page which requests a verification and also displays the verify QR Code for the digital wallet to scan. The CallbackUrl can be set from the UI so that this is easier for testing. This callback can be any webhook or API you want. To test the application in local development, I used ngrok. The return URL has to match the proxy which tunnels to your PC once you start it. If the API has no public address when debugging, you will not be able to test locally.

public class CreateVerifierDisplayQrCodeModel : PageModel { private readonly MattrCredentialVerifyCallbackService _mattrCredentialVerifyCallbackService; public bool CreatingVerifier { get; set; } = true; public string QrCodeUrl { get; set; } [BindProperty] public string ChallengeId { get; set; } [BindProperty] public CreateVerifierDisplayQrCodeCallbackUrl CallbackUrlDto { get; set; } public CreateVerifierDisplayQrCodeModel(MattrCredentialVerifyCallbackService mattrCredentialVerifyCallbackService) { _mattrCredentialVerifyCallbackService = mattrCredentialVerifyCallbackService; } public void OnGet() { CallbackUrlDto = new CreateVerifierDisplayQrCodeCallbackUrl(); CallbackUrlDto.CallbackUrl = $"https://{HttpContext.Request.Host.Value}"; } public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } var result = await _mattrCredentialVerifyCallbackService .CreateVerifyCallback(CallbackUrlDto.CallbackUrl); CreatingVerifier = false; QrCodeUrl = result.QrCodeUrl; ChallengeId = result.ChallengeId; return Page(); } } public class CreateVerifierDisplayQrCodeCallbackUrl { [Required] public string CallbackUrl { get; set; } }

The HTML or template part of the Razor page displays the QR Code after a successful POST request. You can set any URL for the callback in the form request. This is really just for testing.

@page
@model BoInsurance.Pages.CreateVerifierDisplayQrCodeModel

<div class="container-fluid">
    <div class="row">
        <div class="col-sm">
            <form method="post">
                <div>
                    <div class="form-group">
                        <label class="control-label">Callback base URL (ngrok in debug...)</label>
                        <input asp-for="CallbackUrlDto.CallbackUrl" class="form-control" />
                        <span asp-validation-for="CallbackUrlDto.CallbackUrl" class="text-danger"></span>
                    </div>
                    <div class="form-group">
                        @if (Model.CreatingVerifier)
                        {
                            <input class="form-control" type="submit" readonly="@Model.CreatingVerifier" value="Create Verification" />
                        }
                    </div>
                    <div class="form-group">
                        @if (!Model.CreatingVerifier)
                        {
                            <div class="alert alert-success">
                                <strong>Ready to verify</strong>
                            </div>
                        }
                    </div>
                </div>
            </form>
            <hr />
            <p>When the verification is created, you can scan the QR Code to verify</p>
        </div>
        <div class="col-sm">
            <div>
                <img src="~/ndl_car_01.png" width="200" alt="Driver License">
            </div>
        </div>
    </div>
    <div class="row">
        <div class="col-sm">
            <div class="qr" id="qrCode"></div>
            <input asp-for="ChallengeId" hidden/>
        </div>
    </div>
</div>

@section scripts {
    <script src="~/js/qrcode.min.js"></script>
    <script type="text/javascript">
        new QRCode(document.getElementById("qrCode"),
            {
                text: "@Html.Raw(Model.QrCodeUrl)",
                width: 400,
                height: 400,
                correctLevel: QRCode.CorrectLevel.M
            });

        $(document).ready(() => { });

        var connection = new signalR.HubConnectionBuilder().withUrl("/mattrVerifiedSuccessHub").build();

        connection.on("MattrCallbackSuccess", function (challengeId) {
            console.log("received verification:" + challengeId);
            window.location.href = "/VerifiedUser?challengeid=" + challengeId;
        });

        connection.start().then(function () {
            //console.log(connection.connectionId);
            const challengeId = $("#ChallengeId").val();
            if (challengeId) {
                console.log(challengeId);
                // join message
                connection.invoke("AddChallenge", challengeId, connection.connectionId).catch(function (err) {
                    return console.error(err.toString());
                });
            }
        }).catch(function (err) {
            return console.error(err.toString());
        });
    </script>
}

Step 4 Implement the Callback and update the UI using SignalR

After a successful verification in the digital wallet, the wallet sends the verified credentials to the API defined in the presentation request. The API handling this needs to update the correct client UI and continue the business process using the verified data. We use SignalR for this with a single client-to-client connection. Each SignalR connection is associated with a challenge ID, the same ID we used to create the presentation request. Using this, only the correct client is notified instead of broadcasting to all clients. The DrivingLicenseCallback takes the body, which is specific to the credentials you issued; this always depends on what you request. The data is saved to a database and the client is informed to continue. We send a message directly to the correct client using the connectionId of the SignalR session created for this challenge.

[ApiController]
[Route("api/[controller]")]
public class VerificationController : Controller
{
    private readonly BoInsuranceDbService _boInsuranceDbService;
    private readonly IHubContext<MattrVerifiedSuccessHub> _hubContext;

    public VerificationController(BoInsuranceDbService boInsuranceDbService,
        IHubContext<MattrVerifiedSuccessHub> hubContext)
    {
        _hubContext = hubContext;
        _boInsuranceDbService = boInsuranceDbService;
    }

    /// <summary>
    /// {
    /// "presentationType": "QueryByExample",
    /// "challengeId": "GW8FGpP6jhFrl37yQZIM6w",
    /// "claims": {
    /// "id": "did:key:z6MkfxQU7dy8eKxyHpG267FV23agZQu9zmokd8BprepfHALi",
    /// "name": "Chris",
    /// "firstName": "Shin",
    /// "licenseType": "Certificate Name",
    /// "dateOfBirth": "some data",
    /// "licenseIssuedAt": "dda"
    /// },
    /// "verified": true,
    /// "holder": "did:key:z6MkgmEkNM32vyFeMXcQA7AfQDznu47qHCZpy2AYH2Dtdu1d"
    /// }
    /// </summary>
    /// <param name="body"></param>
    /// <returns></returns>
    [HttpPost]
    [Route("[action]")]
    public async Task<IActionResult> DrivingLicenseCallback([FromBody] VerifiedDriverLicense body)
    {
        string connectionId;
        var found = MattrVerifiedSuccessHub.Challenges
            .TryGetValue(body.ChallengeId, out connectionId);

        // test Signalr
        //await _hubContext.Clients.Client(connectionId).SendAsync("MattrCallbackSuccess", $"{body.ChallengeId}");
        //return Ok();

        var exists = await _boInsuranceDbService.ChallengeExists(body.ChallengeId);

        if (exists)
        {
            await _boInsuranceDbService.PersistVerification(body);

            if (found)
            {
                //$"/VerifiedUser?challengeid={body.ChallengeId}"
                await _hubContext.Clients
                    .Client(connectionId)
                    .SendAsync("MattrCallbackSuccess", $"{body.ChallengeId}");
            }

            return Ok();
        }

        return BadRequest("unknown verify request");
    }
}

The SignalR server is configured in the Startup class of the ASP.NET Core application. The path for the hub is defined in the MapHub method.

public void ConfigureServices(IServiceCollection services)
{
    // ...
    services.AddRazorPages();
    services.AddSignalR();
    services.AddControllers();
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // ...
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapHub<MattrVerifiedSuccessHub>("/mattrVerifiedSuccessHub");
        endpoints.MapControllers();
    });
}

The Hub implementation requires only one method. The AddChallenge method takes the challenge ID and adds it to an in-memory cache. The controller implemented for the callbacks uses this ConcurrentDictionary to find the correct connectionId which is mapped to the challenge from the verification.

public class MattrVerifiedSuccessHub : Hub
{
    /// <summary>
    /// This should be replaced with a cache which expires or something
    /// </summary>
    public static readonly ConcurrentDictionary<string, string> Challenges
        = new ConcurrentDictionary<string, string>();

    public void AddChallenge(string challengeId, string connnectionId)
    {
        Challenges.TryAdd(challengeId, connnectionId);
    }
}

The JavaScript SignalR client in the browser connects to the SignalR server and registers the connectionId with the challenge ID used for the verification of the verifiable credentials from the holder of the digital wallet. If a client gets a message that a verification has completed successfully and the callback has been called, it redirects to the verified page. The client listens on MattrCallbackSuccess for messages. These messages are sent directly from the callback controller.

<script type="text/javascript">
    var connection = new signalR.HubConnectionBuilder()
        .withUrl("/mattrVerifiedSuccessHub").build();

    connection.on("MattrCallbackSuccess", function (challengeId) {
        console.log("received verification:" + challengeId);
        window.location.href = "/VerifiedUser?challengeid=" + challengeId;
    });

    connection.start().then(function () {
        //console.log(connection.connectionId);
        const challengeId = $("#ChallengeId").val();
        if (challengeId) {
            console.log(challengeId);
            // join message
            connection.invoke("AddChallenge", challengeId, connection.connectionId).catch(function (err) {
                return console.error(err.toString());
            });
        }
    }).catch(function (err) {
        return console.error(err.toString());
    });
</script>

The VerifiedUserModel Razor page displays the data and the business process can continue using the verified data.

public class VerifiedUserModel : PageModel
{
    private readonly BoInsuranceDbService _boInsuranceDbService;

    public VerifiedUserModel(BoInsuranceDbService boInsuranceDbService)
    {
        _boInsuranceDbService = boInsuranceDbService;
    }

    public string ChallengeId { get; set; }
    public DriverLicenseClaimsDto VerifiedDriverLicenseClaims { get; private set; }

    public async Task OnGetAsync(string challengeId)
    {
        // user query param to get challenge id and display data
        if (challengeId != null)
        {
            var verifiedDriverLicenseUser = await _boInsuranceDbService.GetVerifiedUser(challengeId);
            VerifiedDriverLicenseClaims = new DriverLicenseClaimsDto
            {
                DateOfBirth = verifiedDriverLicenseUser.DateOfBirth,
                Name = verifiedDriverLicenseUser.Name,
                LicenseType = verifiedDriverLicenseUser.LicenseType,
                FirstName = verifiedDriverLicenseUser.FirstName,
                LicenseIssuedAt = verifiedDriverLicenseUser.LicenseIssuedAt
            };
        }
    }
}

public class DriverLicenseClaimsDto
{
    public string Name { get; set; }
    public string FirstName { get; set; }
    public string LicenseType { get; set; }
    public string DateOfBirth { get; set; }
    public string LicenseIssuedAt { get; set; }
}

Running the verifier

To test the BoInsurance application locally (the verifier application), ngrok is used so that we have a public address for the callback. I install ngrok using npm. Without a license, you can only run your application over HTTP.

npm install -g ngrok

Run ngrok from the command line using the URL of the application. I start the ASP.NET Core application at localhost on port 5000.

ngrok http localhost:5000

You should be able to copy the ngrok URL and use this in the browser to test the verification.

Once running, a verification can be created and you can scan the QR Code with your digital wallet. Once you grant access to your data, the data is sent to the callback API and the UI will be redirected to the success page.

Notes

MATTR APIs work really well and support some of the flows for digital identities. I plan to try out the zero knowledge proof flow next. It is only possible to create verifiable credentials from data in your identity provider using the id_token. To issue credentials, you have to implement your own identity provider and cannot use business data from your application. If you have full control, as with Openiddict, IdentityServer4 or Auth0, this is no problem, just more complicated to implement. If you do not control the data in your identity provider, you would need to create a second identity provider to issue credentials. This then becomes part of your business logic and not just an identity provider. This will always be a problem when using Azure AD or IDPs from large or medium-sized companies. The quality of the verifiable credentials also depends on how well the OIDC credential issuers are implemented, as these are still central databases for these credentials and are still open to all the problems we have today. Decentralized identities have the potential to solve many problems but still have many unsolved problems.

Links

https://mattr.global/

https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e

https://mattr.global/get-started/

https://learn.mattr.global/

https://keybase.io/

https://learn.mattr.global/tutorials/dids/did-key

https://gunnarpeipman.com/httpclient-remove-charset/

https://auth0.com/

Friday, 07. May 2021

Doc Searls Weblog

First iPhone mention?

I wrote this fake story on January 24, 2005, in an email to Peter Hirshberg after we jokingly came up with it during a phone call. Far as I know, it was the first mention of the word “iPhone.” Apple introduces one-button iPhone Shuffle To nobody’s surprise, Apple’s long-awaited entry into the telephony market is […]

I wrote this fake story on January 24, 2005, in an email to Peter Hirshberg after we jokingly came up with it during a phone call. Far as I know, it was the first mention of the word “iPhone.”

Apple introduces one-button iPhone Shuffle

To nobody’s surprise, Apple’s long-awaited entry into the telephony market is no less radical and minimalistic than the one-button mouse and the gum-stick-sized music player. In fact, the company’s new cell phone — developed in deeply secret partnership with Motorola — extends the concept behind the company’s latest iPod, as well as its brand identity.

Like the iPod Shuffle, the new iPhone Shuffle has no display. It’s an all-white rectangle with a little green light to show that a call is in progress. While the iPhone Shuffle resembles the iPod Shuffle, its user interface is even more spare. In place of the round directional “wheel” of the iPods, the iPhone Shuffle sports a single square button. When pressed, the iPhone Shuffle dials a random number from its phone book.

“Our research showed that people don’t care who they call as much as they care about being on the phone,” said Apple CEO Steve Jobs. “We also found that most cell phone users hate routine, and prefer to be surprised. That’s just as true for people answering calls as it is for people making them. It’s much more liberating, and far more social, to call people at random than it is to call them deliberately.”

Said (pick an analyst), “We expect the iPhone Shuffle will do as much to change the culture of telephony as the iPod has done to change the culture of music listening.”

Safety was also a concern behind the one-button design. “We all know that thousands of people die on highways every year when they take their eyes off the road to dial or answer a cell phone,” Jobs said. “With the iPhone Shuffle, all they have to do is press one button, simple as that.”

For people who would rather dial contacts in order than at random, the iPhone Shuffle (like the iPod Shuffle) has a switch that allows users to call their phone book in the same order as listings are loaded from the Address Book application.
To accommodate the new product, Apple also released Version 4.0.1 of Address Book, which now features “phonelists” modeled after the familiar “playlists” in iTunes. These allow the iPhone Shuffle’s phone book to be populated by the same ‘iFill’ system that loads playlists from iTunes into iPod Shuffles.

A number of online sites reported that Apple was negotiating with one of the major cell carriers to allow free calls between members who maintain .Mac accounts and keep their data in Apple’s Address Book. A few of those sites also suggested that future products in the Shuffle line will combine random phone calling and music playing, allowing users to play random music for random phone contacts.

The iPhone Shuffle will be sold at Apple retail stores.

Thursday, 06. May 2021

Phil Windley's Technometria

The Politics of Vaccination Passports

Summary: The societal ramifications of Covid-19 passports are not easy to parse. Ultimately, I believe they are inevitable, so the questions for us are when, where, and how they should be used. On December 2, 1942, Enrico Fermi and his team at the University of Chicago initiated the first human-made, self-sustaining nuclear chain reactions in history beneath the viewing stands of Stagg

Summary: The societal ramifications of Covid-19 passports are not easy to parse. Ultimately, I believe they are inevitable, so the questions for us are when, where, and how they should be used.

On December 2, 1942, Enrico Fermi and his team at the University of Chicago initiated the first human-made, self-sustaining nuclear chain reactions in history beneath the viewing stands of Stagg Field. Once humans knew how nuclear chain reactions work and how to initiate them, an atomic bomb was inevitable. Someone would build one.

What was not inevitable was when, where, and how nuclear weapons would be used. Global geopolitical events of the last half of the 20th century and many of the international questions of our day deal with the when, where, and how of that particular technology.

A similar, and perhaps just as impactful, discussion is happening now around technologies like artificial intelligence, surveillance, and digital identity. I'd like to focus on just one small facet of the digital identity debate: vaccination passports.

In Vaccination Passports, Devon Loffreto has strong words about the effort to create vaccination passports, writing:

The vaccination passport represents the introduction of the CCP social credit system to America, transmuting people into sub-human data points lasting lifetimes. From Vaccination Passports
Referenced 2021-04-12T11:13:58-0600

Devon's larger point is that once we get used to having to present a vaccination passport to travel, for example, it could quickly spread. Presenting an ID could become the default with bars, restaurants, churches, stores, and every other public place saying "papers, please!" before allowing entry.

This is a stark contrast to how people have traditionally gone about their lives. Asking for ID is, by social convention and practicality, limited mostly to places where it's required by law or regulation. We expect to get carded when we buy cigarettes, but not milk. A vaccination passport could change all that and that's Devon's point.

Devon specifically calls out the Good Health Pass collaborative as "supporting the administration of people as cattle, as fearful beings 'trusting' their leaders with their compliance."

For their part, participants of the Good Health Pass collaborative argue that they are working to create a "safe path to restore international travel and restart the global economy." Their principles declare that they are building health-pass systems that are privacy protecting, user-controlled, interoperable, and widely accepted.

I'm sympathetic to Devon's argument. Once such a passport is in place for travel, there's nothing stopping it from being used everywhere, moving society from free and open to more regulated and closed. Nothing that is, unless we put something in place.

Like the direct line from Fermi's atomic pile to an atomic bomb, the path from nearly ubiquitous smartphone use to some kind of digital vaccination passport is likely inevitable. The question for us isn't whether or not it will exist, but where, how, and when passports will be used.

For example, I'd prefer a vaccination passport that is built according to principles of the Good Health Pass collaborative rather than, say, one built by Facebook, Google, Apple, or Amazon. Social convention, and regulation where necessary, can limit where such a passport is used. It's an imperfect system, but social systems are. More important, decentralized governance processes are necessarily political.

As I said, I'm sympathetic to Devon's arguments. The sheer ease of presenting digital credentials removes some of the practicality barrier that paper IDs naturally have. Consequently, digital IDs are likely to be used more often than paper. I don't want to live in a society where I'm carded at every turn—whether for proof of vaccination or anything else. But I'm also persuaded that organizations like the Good Health Pass collaborative aren't the bad guys. They're just folks who see the inevitability of a vaccination credential and are determined to at least see that it's done right, in ways that respect individual choice and personal privacy as much as possible.

The societal questions remain regardless.

Photo Credit: COVID-19 Vaccination record card from Jernej Furman (CC BY 2.0)

Tags: verifiable+credentials identity covid

Wednesday, 05. May 2021

Nader Helmy

IIW32: BBS+ and beyond

The Internet Identity Workshop continues to be a central nucleus for thoughtful discussion and development of all things related to digital identity. The most recent workshop, which was held in mid-April, was no exception. Despite the lack of in-person interaction due to the ongoing global pandemic, this IIW was as lively as ever, bringing together a diverse set of stakeholders from across the glo

The Internet Identity Workshop continues to be a central nucleus for thoughtful discussion and development of all things related to digital identity. The most recent workshop, which was held in mid-April, was no exception. Despite the lack of in-person interaction due to the ongoing global pandemic, this IIW was as lively as ever, bringing together a diverse set of stakeholders from across the globe to share experiences, swap perspectives, and engage in healthy debates.

One common theme this year was the continued development and adoption of BBS+ signatures, a type of multi-message cryptographic digital signature that enables selective disclosure of verifiable credentials. We first introduced this technology at IIW30 in April 2020, and have been inspired and delighted by the community’s embrace and contribution to this effort across the board. In the year since, progress has been made in a variety of areas, from standards-level support to independent implementations and advanced feature support.

We thought we’d take a moment to round up some of the significant developments surrounding BBS+ signatures and highlight a few of the top items to pay attention to going forward.

Over the past few months, the linked data proofs reference implementation of BBS+ published a new release that introduces a variety of improvements in efficiency and security, including formal alignment to the W3C CCG Security Vocab v3 definitions. In addition, support for JSON-LD BBS+ signatures was added to the VC HTTP API, making it possible to test this functionality in an interoperable way with other vendors participating in an open environment.

An important element in enabling BBS+ signatures is using what’s known as a pairing-friendly curve; for our purposes we use BLS12–381. We have seen some promising signs of adoption for this key pair, with multiple Decentralized Identifier (DID) methods — both did:indy from Hyperledger and did:ion from DIF — indicating they intend to add or already have support for these keys, allowing BBS+ signatures to be issued across a variety of decentralized networks and ecosystems. This development is possible because BBS+ signatures are a ledger-independent approach to selective disclosure; effectively no custom logic or bespoke infrastructure is needed for these digital signatures to be created, used and understood.

In addition, the Hyperledger Aries project has been hard at work developing interoperable and ledger-agnostic capabilities in open source. The method used to track interop targets within the cohort and ultimately measure conformance against Aries standards is what’s known as an Aries Interop Profile (AIP). A major upcoming update to AIP will add support for additional DID methods, key types and credential formats, as well as introducing Aries support for JSON-LD BBS+ signatures as part of AIP 2.0. This will allow Aries-driven credential issuance and presentation protocols to work natively with BBS+ credentials, making that functionality broadly available for those in the Aries community and beyond.

There have also been a number of exciting developments when it comes to independent implementations of BBS+ signatures. Animo Solutions has recently implemented JSON-LD BBS+ signatures support into the popular open-source codebase Hyperledger Aries Cloud Agent Python (ACA-Py). In another independent effort, Trinsic has contributed an implementation of JSON-LD BBS+ credentials which they have demonstrated to be working in tandem with DIDComm v2, a secure messaging protocol based on DIDs. Implementations such as these help to demonstrate that open standards are transparent, can be understood and verified independently, and can be implemented with separate languages and tech stacks. They also set the groundwork for demonstrating real testing-driven interoperability via mechanisms such as the VC HTTP API and AIP 2.0. We are continuously looking to improve the documentation of these specs and standards so that their implications and nuances can be more broadly understood by builders and developers looking to engage with the technology.

On the cryptographic side of things, progress is also being made in hardening the existing BBS+ specification as well as expanding BBS+ to support more advanced privacy-preserving features. A significant development in this area is the work of cryptographer Michael Lodder who has been actively conducting research on an enhanced credential revocation mechanism using cryptographic accumulators with BBS+. This approach presents a promising alternative to existing solutions that allow authoritative issuers to update the status of issued credentials without compromising the privacy of the credential holder or subject who may be presenting the credential. We see this as another application of BBS+ signatures in the context of verifiable credentials that carries a lot of potential in pushing this technology to an even more robust state.

There was also initial discussion and tacit agreement to create a new cryptography-focused working group at Decentralized Identity Foundation. As the new WG drafts its charter, the first work item of this group will be the BBS+ Signatures spec which defines the cryptographic scheme known as BBS+ agnostic of its application in areas such as linked data signatures or verifiable credentials. In the future, this WG will likely evolve to include other crypto-related work items from the community.

This is just the tip of the iceberg when it comes to the momentum and development building around this technology in the community. We couldn’t be more excited about the future of BBS+ signatures, especially as we gear up to tackle the next set of hard problems in this area including privacy-preserving subject authentication and revocation using cryptographic accumulators. If you’re interested we encourage you to get involved, either by contributing to the Linked Data Proofs specification, checking out our reference implementations, or participating in the new WG at DIF, to name but a few of the many ways to engage with this work. We look forward to doing this retrospective at many IIWs to come, documenting the ever-growing community that continues to champion this technology in dynamic and interesting ways.

IIW32: BBS+ and beyond was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Simon Willison

A museum bot

A museum bot Shawn Graham built a Twitter bot, using R, which tweets out random items from the collection at the Canadian Science and Technology Museum - using a Datasette instance that he's running based on a CSV export of their collections data. Via @DEJPett

A museum bot

Shawn Graham built a Twitter bot, using R, which tweets out random items from the collection at the Canadian Science and Technology Museum - using a Datasette instance that he's running based on a CSV export of their collections data.

Via @DEJPett


Doc Searls Weblog

My podcasts of choice

As a follow-up to what I wrote earlier today, here are my own favorite podcasts, in the order they currently appear in my phone’s podcast apps: Radio Open Source (from itself) Bill Simmons (on The Ringer) Fresh Air (from WHYY via NPR) JJ Reddick & Tommy Alter (from ThreeFourTwo) The Mismatch (on The Ringer) The New […]

As a follow-up to what I wrote earlier today, here are my own favorite podcasts, in the order they currently appear in my phone’s podcast apps:

Radio Open Source (from itself)
Bill Simmons (on The Ringer)
Fresh Air (from WHYY via NPR)
JJ Reddick & Tommy Alter (from ThreeFourTwo)
The Mismatch (on The Ringer)
The New Yorker Radio Hour (WNYC via NPR)
Econtalk (from itself)
On the Media (WNYC)
How I Built This with Guy Raz (from NPR)
The Daily (from New York Times)
Reimagining the Internet (from Ethan Zuckerman and UMass Amherst)
Planet Money (from NPR)
Up First (from NPR)
Here’s the thing (WNYC via NPR)
FLOSS Weekly (TWiT)
Reality2.0 (from itself)

Note that I can’t help listening to the last two, because I host one and co-host the other.

There are others I’ll listen to on occasion as well, usually after hearing bits of them on live radio. These include Radiolab, This American Life, 99% Invisible, Snap Judgement, Freakonomics Radio, Hidden Brain, Invisibilia, The Moth, Studio 360. Plus limited run podcasts, such as Serial, S-Town, Rabbit Hole and Floodlines.

Finally, there are others I intend to listen to at some point, such as Footnoting History, Philosophize This, The Infinite Monkey Cage, Stuff You Should Know, The Memory Palace, and Blind Spot.

And those are just off the top of my head. I’m sure there are others I’m forgetting.

Anyway, most of the time I’d rather listen to those than live radio—even though I am a devoted listener to a raft of public stations (especially KCLU, KPCC, KCRW, KQED, WNYC, WBUR and WGBH) and too many channels to mention on SiriusXM, starting with Howard Stern and the NBA channel.

 

Tuesday, 04. May 2021

Simon Willison

cinder: Instagram's performance oriented fork of CPython

cinder: Instagram's performance oriented fork of CPython Instagram forked CPython to add some performance-oriented features they wanted, including a method-at-a-time JIT compiler and a mechanism for eagerly evaluating coroutines (avoiding the overhead of creating a coroutine if an awaited function returns a value without itself needing to await). They're open sourcing the code to help start conv

cinder: Instagram's performance oriented fork of CPython

Instagram forked CPython to add some performance-oriented features they wanted, including a method-at-a-time JIT compiler and a mechanism for eagerly evaluating coroutines (avoiding the overhead of creating a coroutine if an awaited function returns a value without itself needing to await). They're open sourcing the code to help start conversations about implementing some of these features in CPython itself. I particularly enjoyed the warning that accompanies the repo: this is not intended to be a supported release, and if you decide to run it in production you are on your own!

Via Hacker News


Justin Richer

Signing HTTP Messages

There’s a new draft in the HTTP working group that deals with signing HTTP messages of all types. Why is it here, and what does that give us? HTTP is irrefutably a fundamental building block of most of today’s software systems. Yet security and identity need to be layered alongside HTTP. The most common of these is simply running the HTTP protocol over an encrypted socket using TLS, resultin

There’s a new draft in the HTTP working group that deals with signing HTTP messages of all types. Why is it here, and what does that give us?

HTTP is irrefutably a fundamental building block of most of today’s software systems. Yet security and identity need to be layered alongside HTTP. The most common approach is simply running the HTTP protocol over an encrypted socket using TLS, resulting in HTTPS. While this is a powerful and important security component, TLS works only by protecting the stream of bits in transit. It does not allow for message-level and application-level security operations. But what if we could sign the messages themselves?

While it is possible to wrap the body of a request in a cryptographic envelope like JOSE or XML DSig, such approaches force developers to ignore most of the power and flexibility of HTTP, reducing it to a dumb transport layer. In order to sign a message but keep using HTTP as it stands, with all the verbs and headers and content types that it gives us, we will need a scheme that allows us to add a detached signature to the HTTP message. The cryptographic elements to the message can then be generated and validated separately from the request itself, providing a layered approach.

There have been numerous attempts at creating detached signature methods for HTTP over the years, one of the most famous being the Cavage draft which itself started as a community-facing version of Amazon’s SIGv4 method used within AWS. There were several other efforts, all of them incompatible with each other in one way or another. To address this, the HTTP Working Group in the IETF stepped up and took on the effort of creating an RFC-track standard for HTTP message signatures that could be used across the variety of use cases.

As of the writing of this post, the specification is at version 04. While it’s not finished yet, it’s recently become a bit more stable and so it’s worth looking at it in greater depth.

Normalizing HTTP

As it turns out, the hardest part of signing HTTP messages isn’t the signing, it’s the HTTP. HTTP is a messy set of specifications, with pieces that have been built up by many authors over many years in ways that aren’t always that consistent. A recent move towards consistency has been the adoption of Structured Field Values for HTTP. In short, structured fields allow HTTP headers to house simple, non-recursive data structures with unambiguous parsing and deterministic serialization. These aspects made it perfect for use within the HTTP message signatures specification.

Previous efforts at HTTP message signing concentrated on creating a signature around HTTP headers, and the current draft is no exception in allowing that. On top of that, the current draft also allows for the definition of specialty fields that contain other pieces of constructed information not found in the headers themselves. These covered components are identified and combined with each other into a signature input string. To this string is added a field that includes all of the input parameters to this signature. For example, let’s say we want to sign parts of this HTTP request:

POST /foo?param=value&pet=dog HTTP/1.1
Host: example.com
Date: Tue, 20 Apr 2021 02:07:55 GMT
Content-Type: application/json
Digest: SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
Content-Length: 18

{"hello": "world"}

We choose the components we want to sign, including the target of the request and a subset of the available headers, and create the following signature input string:

"@request-target": post /foo?param=value&pet=dog
"host": example.com
"date": Tue, 20 Apr 2021 02:07:55 GMT
"content-type": application/json
"digest": SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
"content-length": 18
"@signature-params": ("@request-target" "host" "date" "content-type" "digest" "content-length");created=1618884475;keyid="test-key-rsa-pss"

With a given HTTP message and a set of input parameters determining which parts of the message are covered with a signature, any party can re-generate this string with a reasonable level of success. Unsigned headers can be added to the message by intermediaries without invalidating the signature, and it’s even possible for an intermediary to add its own signature to the message on the way through — but we’ll get more into that advanced use case in a future post. The result of this is that the signer and verifier will re-create this signature input string independently of each other.
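To make that concrete, here is a rough C# sketch (not taken from the draft itself) of how an application might re-create the signature input string from its covered components; the componentValues dictionary and the way its values are canonicalized are assumptions standing in for whatever header and specialty-field resolution the application actually performs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

public static class SignatureInputBuilder
{
    // Builds the signature input string: one line per covered component in the
    // form "name": value, followed by the "@signature-params" line that lists
    // the covered components and the signature parameters (created, keyid).
    public static string Build(
        IReadOnlyDictionary<string, string> componentValues, // canonicalized values, resolved elsewhere
        IReadOnlyList<string> coveredComponents,
        long created,
        string keyId)
    {
        var sb = new StringBuilder();
        foreach (var name in coveredComponents)
        {
            sb.Append('"').Append(name).Append("\": ")
              .Append(componentValues[name]).Append('\n');
        }

        // The @signature-params line is always itself covered by the signature.
        var componentList = string.Join(" ", coveredComponents.Select(c => $"\"{c}\""));
        sb.Append("\"@signature-params\": (")
          .Append(componentList)
          .Append($");created={created};keyid=\"{keyId}\"");

        return sb.ToString();
    }
}

Both sides run the same routine over the message and compare results, which is why the deterministic serialization provided by structured fields matters so much here.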

Now that we have a normalized string to sign, how do we actually sign it?

Signing and Verifying Content

Once we have the string, it’s a relatively straightforward matter of applying a key and signature function to the string. Any signature method that takes in a bunch of bytes and spits out a different set of bytes is technically feasible here.

How do the signer and verifier know which algorithm to use for a given message? It turns out that different deployments have drastically different needs in this regard. As a consequence, the specification leaves this aspect application-specific, with several common methods called out:

The signer and verifier can both be configured to expect only a specific algorithm, or have that algorithm identified by some aspect external to the protocol.
The signer and verifier can identify the key used to do the signing and figure out the signature algorithm based on that. If an application’s using JSON Web Keys, the alg field of the key provides an easy way to identify a signing mechanism.
If the signer and verifier need to signal the algorithm dynamically at runtime, there is an alg field in the signature parameter set itself that points to a new registry.
And if two or more of these methods are applicable to a given message, the answers all have to match, otherwise something fishy is going on and the signature is invalidated.

Given the above signature input string and an RSA-PSS signing method, we end up with the following Base64-encoded bytes as the signature output:

NtIKWuXjr4SBEXj97gbick4O95ff378I0CZOa2VnIeEXZ1itzAdqTpSvG91XYrq5CfxCmk8zz1Zg7ZGYD+ngJyVn805r73rh2eFCPO+ZXDs45Is/Ex8srzGC9sfVZfqeEfApRFFe5yXDmANVUwzFWCEnGM6+SJVmWl1/jyEn45qA6Hw+ZDHbrbp6qvD4N0S92jlPyVVEh/SmCwnkeNiBgnbt+E0K5wCFNHPbo4X1Tj406W+bTtnKzaoKxBWKW8aIQ7rg92zqE1oqBRjqtRi5/Q6P5ZYYGGINKzNyV3UjZtxeZNnNJ+MAnWS0mofFqcZHVgSU/1wUzP7MhzOKLca1Yg==
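Purely as a sketch of that signing step (the draft is deliberately agnostic about crypto APIs), producing such a value with RSA-PSS in .NET might look like the following; the choice of SHA-512 as the hash is an assumption, and loading the private key is left out:

using System;
using System.Security.Cryptography;
using System.Text;

public static class HttpMessageSigner
{
    // Signs the signature input string with RSASSA-PSS and returns the
    // Base64-encoded signature bytes for use in the Signature header.
    public static string SignRsaPss(RSA privateKey, string signatureInput)
    {
        byte[] data = Encoding.UTF8.GetBytes(signatureInput);
        byte[] signature = privateKey.SignData(
            data,
            HashAlgorithmName.SHA512,
            RSASignaturePadding.Pss);
        return Convert.ToBase64String(signature);
    }
}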

This gives us a signed object, and now we need to put that into our HTTP message.

Sending Signatures in Messages

The HTTP message signature specification defines two new headers to carry the signature, Signature and Signature-Input. Both of these use the Dictionary construct from the HTTP Structured Field Values standard to carry a named signature.

But first, why two headers? This construct allows us to easily separate the metadata about the signature — how it was made — from the signature value itself. This separation makes parsing simpler and also allows the HTTP message signatures specification to support multiple independent signatures on a given message.

The Signature-Input header contains all the parameters that went into the creation of the signature, including the list of covered content, identifiers for the key and algorithm, and items like timestamps or other application-specific flags. In fact, this is the same value used as the last line of the signature input string, and so its values are always covered by the signature. The Signature header contains the value of the signature itself as a byte array, encoded in Base64. The signer chooses a name for the signature object and adds both items to the headers. The name has no semantic impact, it just needs to be unique within a given request.

Let’s say this signature is named sig1. The signer adds both headers to the request above, resulting in the following signed request.

POST /foo?param=value&pet=dog HTTP/1.1
Host: example.com
Date: Tue, 20 Apr 2021 02:07:55 GMT
Content-Type: application/json
Digest: SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
Content-Length: 18
Signature-Input: sig1=("host" "date" "content-type");created=1618884475;keyid="test-key-rsa-pss"
Signature: sig1=:NtIKWuXjr4SBEXj97gbick4O95ff378I0CZOa2VnIeEXZ1itzAdqTpSvG91XYrq5CfxCmk8zz1Zg7ZGYD+ngJyVn805r73rh2eFCPO+ZXDs45Is/Ex8srzGC9sfVZfqeEfApRFFe5yXDmANVUwzFWCEnGM6+SJVmWl1/jyEn45qA6Hw+ZDHbrbp6qvD4N0S92jlPyVVEh/SmCwnkeNiBgnbt+E0K5wCFNHPbo4X1Tj406W+bTtnKzaoKxBWKW8aIQ7rg92zqE1oqBRjqtRi5/Q6P5ZYYGGINKzNyV3UjZtxeZNnNJ+MAnWS0mofFqcZHVgSU/1wUzP7MhzOKLca1Yg==:

{"hello": "world"}

Note that none of the other headers or aspects of the message are modified by the signature process.

The verifier parses both headers, re-creates the signature input string from the request, and verifies the signature value using the identified key and algorithm. But how does the verifier know that this signature is sufficient for this request, and how does the signer know what to sign in the first place?
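The mechanical part of that check is simple enough; as a rough sketch (again, not from the specification itself), and assuming the input string has already been re-created and the signer's RSA public key resolved, verification might look like this:

using System;
using System.Security.Cryptography;
using System.Text;

public static class HttpMessageVerifier
{
    // Verifies a Base64-encoded RSA-PSS signature over the re-created signature
    // input string. Parsing the Signature and Signature-Input headers and
    // resolving the key are assumed to happen elsewhere.
    public static bool VerifyRsaPss(RSA publicKey, string signatureInput, string signatureBase64)
    {
        byte[] data = Encoding.UTF8.GetBytes(signatureInput);
        byte[] signature = Convert.FromBase64String(signatureBase64);
        return publicKey.VerifyData(
            data,
            signature,
            HashAlgorithmName.SHA512,
            RSASignaturePadding.Pss);
    }
}

The harder questions, which components must be covered and which keys and algorithms are acceptable, are policy decisions, and that is what the next section is about.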

Applying the Message Signature Specification

As discussed above, the signer and verifier need to have a way of figuring out which algorithm and keys are appropriate for a given signed message. In many deployments, this information can be gleaned through context and configuration. For example, a key derivation algorithm based on the tenant identifier in the URL can be used to dereference the key needed for a given call. Or an application identifier passed in the body could point to a record giving both the expected algorithm and allowable key material.

In addition to defining a predictable way to determine this, an application of the HTTP message signatures specification also needs to define which parts of the message need to be signed. For example, an API might have very different behaviors based on a Content-Type header but not really care about the Content-Encoding. A security protocol like OAuth or GNAP would require signing the Authorization header that contains the access token as well as the @request-target specialty field.

The HTTP protocol is also designed to allow interception and proxy of requests and responses, with intermediaries fully allowed to alter the message in certain ways. Applications that need to account for such intermediaries can be picky about which headers and components are signed, allowing the signature to survive expected message modifications but protecting against unanticipated changes in transit.

This fundamentally means that no signature method will ever be perfect for all messages — but that’s ok. The HTTP message signature draft instead leans on flexibility, allowing applications to define how best to apply the signature methods to achieve the security needed.

Building the Standard

The HTTP message signatures specification is still a long way from being done. It’s taken in a number of different inputs and many years of collective community experience, and that initially resulted in some major churn in the specification’s syntax and structure. As of version 04 though, the major surgery seems to be behind us. While there will inevitably be some changes to the parameters, names, and possibly even structures, the core of this is pretty solid. It’s time to start implementing it and testing it out with applications of all stripes, and I invite all of you to join me in doing just that.


Doc Searls Weblog

A half-century of NPR

NPR, which turned 50 yesterday, used to mean National Public Radio. It still does, at least legally; but they quit calling it that in 2010. The reason given was “…most of our audience — more than 27 million listeners to NPR member stations and millions more who experience our content on NPR.org and through mobile or […]

NPR, which turned 50 yesterday, used to mean National Public Radio. It still does, at least legally; but they quit calling it that in 2010. The reason given was “…most of our audience — more than 27 million listeners to NPR member stations and millions more who experience our content on NPR.org and through mobile or tablet devices — identify us as NPR.” Translation: We’re not just radio any more.

And they aren’t. Television, newspapers and magazines also aren’t what they were. All of those are now experienced mostly on glowing rectangles connected to the Internet.

Put another way, the Internet is assimilating all of them. On the Internet, radio is also fracturing into new and largely (though not entirely) different spawn. The main two are streaming (for music, live news and events) and podcasting (for talk and news).

This sidelines the radio sources called stations. Think about it: how much these days do you ask yourself “What’s on?” And how much do you listen to an actual radio, or watch TV through an antenna? Do you even have a radio that’s not in a car or stored away in the garage?

If you value and manage your time, chances are you are using apps to store and forward your listening and viewing to later times, when you can easily speed up the program or skip over ads and other “content” you don’t want to “consume.” (I put those in quotes because only the supply side talks that way about what they produce and what you do with it.)

This does not match the legacy structure of radio stations. Especially technically.

See, the purpose of stations is to stand in one place radiating sound (called “programs”) on signals, in real time (called “live”), around the clock, for a limited geography: a city or a region. Key thing: they have to fill that time.

For this stations can get along without studios (like companies in our current plague have found ways to get along without offices). But they still need to maintain transmitters with antennas.

For AM, which was born in the 1920s, the waves are so long that whole towers, or collections of them, radiate the signals. In almost all cases these facilities take up acres of real estate—sometimes dozens of acres. For FM and TV, media born in the 1940s, the waves are short, but need to radiate from high places: atop towers, tall buildings or mountains.

Maintaining these facilities isn’t cheap. In the case of AM stations, it is now common for the land under towers to be worth far more than the stations themselves, which is why so many AM stations are now going off the air or moving off to share other stations’ facilities, usually at the cost of lost coverage.

This is why I am sure that most or all of these facilities will be as gone as horse-drawn carriages and steam engines, sometime in the next few years or decades. Also why I am documenting transmitters that still stand, photographically. You can see a collection of my transmitter and antenna photos here and here. (The image above is what radiates KPCC/89.3 from Mt. Wilson, which overlooks Los Angeles.)

It’s a safe bet, for a few more years at least, that stations will still be around, transmitting to people mostly on the Net. But at some point (probably many points) the transmitters will be gone, simply because they cost too much, don’t do enough—and in one significant way, do too much. Namely, fill the clock, 24/7, with “content.”

To help get our heads around this, consider this: the word station derives from the Latin station- and statio from stare, which means to stand. In a place.

In the terrestrial world, we needed stationary places for carriages, trains and busses to stop. On radio, we used to need what we called a “dial,” where radio stations could be found on stationary positions called channels or frequencies. Now those are numbers that appear in a read-out.

But even those were being obsolesced decades ago in Europe. There a car radio might say the name of a station, which might be received on any number of frequencies, transmitted by many facilities, spread across a region or a country. What makes this possible is a standard called RDS, which uses a function called alternative frequency (AF) to make a radio play a given station on whatever channel sounds best to the radio. This would be ideal for the CBC in Canada and for regional public stations such as WAMC, KPCC, KUER and KCRW, which all have many transmitters scattered around.

Alas, when this standard was being worked out in the ’80s and early ’90s, the North American folks couldn’t imagine one station on many frequencies and in many locations, so they deployed a lesser derivative standard called RBDS, which lacked the AF function.

But this is now, and on its 50th anniversary public radio—and NPR stations especially—are doing well.

In radio ratings for New York, Los Angeles, San Francisco, Washington, San Diego, and dozens of other markets, the top news station is an NPR one. Here in Santa Barbara, about a quarter of all listening goes to non-commercial stations, led by KCLU, the most local of the NPR affiliates with transmitters here. (Best I can tell, Santa Barbara, which I wrote about here in 2019, is still the top market for public radio in the country. Number two is still Vermont.)

But I gotta wonder how long the station-based status quo will remain stationary in our listening habits. To the degree that I’m a one-person bellwether, the prospects aren’t good. Nearly all my listening these days is to podcasts or to streams on the Net. Some of those are from stations, but most are straight from producers, only one of which is NPR. And I listen to few of them live.

Still, it’s a good bet that NPR will do well for decades to come. Its main challenge will be to survive the end of station-based live broadcasting. Because that eventuality is starting to become visible.


Ben Werdmüller

Press Freedom Day

Yesterday was World Press Freedom Day. I’d planned to publish this post then, but my mother was in the in ER. (She's out now; the rollercoaster continues.) A functional, free press is a vital part of the foundation of democracy, right alongside free and open elections. It's impossible to have an educated voting populace without it - and you can't have a democracy without educated voters. It's i

Yesterday was World Press Freedom Day. I’d planned to publish this post then, but my mother was in the ER. (She's out now; the rollercoaster continues.)

A functional, free press is a vital part of the foundation of democracy, right alongside free and open elections. It's impossible to have an educated voting populace without it - and you can't have a democracy without educated voters. It's incredibly important to have people out there dedicated to uncovering the truth and speaking truth to power.

According to Reporters Without Borders, the US ranks 44th in the world for press freedom. During Trump's last year in office, nearly 400 journalists were assaulted on the job, and over 130 were detained. Only 40% of Americans trust the media; among conservatives, the figure is considerably worse.

To control a populace, authoritarians first seek to undermine the press. The Nazi-era term for this was Lügenpresse, which literally translates to lying press. The Trump-era term was lifted almost verbatim: fake news. It continues to do harm.

As well as in the broader societal sphere, this discourse extends to industry: in tech, we’ve had our own anti-press movements that seek to undermine free and fair reporting. It’s always abhorrent.

Which isn’t to say that the press shouldn’t be criticized: oversight of journalism is also journalism, and conversations about the nature of reporting are important. No institution can be unassailable, and no journalist can be above reproach. I particularly welcome conversations about diversity and equity in newsrooms and how the demographics of reporting staff affect the stories they produce. Journalists are imperfect, because everyone is imperfect; regardless, they should have unfettered access to information and receive protection under the law. The work they do makes freedom and democracy possible.

Similarly, whistleblowers. We depend on people who are willing to call out wrongdoing. Daniel Ellsberg revealing the Pentagon Papers allowed Americans to understand the full scope of the Vietnam War for the first time. Edward Snowden allowed Americans to understand that they were the subjects of illegal mass surveillance. Chelsea Manning allowed Americans to understand war crimes that were committed in their names. Each of them faced severe repercussions; each of them allowed American voters to better understand the actions of their government.

In the midst of the “fake news” culture war, there’s been a lot of talk about how to battle misinformation. One of those tactics has sometimes been to promote certain, trusted publications. The intention is noble: there’s no doubt in my mind that the New York Times is more trustworthy than InfoWars, for example. But the unintended effect is to shut out new publications that haven’t managed to build a reputation yet - and in particular, new publications that might be run by people of color, who are underrepresented in establishment media. It also has the effect of potentially discrediting whistleblower accounts that can’t find purchase in mainstream publications, creating an “approved news” that can unintentionally obscure important facts.

Instead, I’m more excited - albeit with some reservations - by software projects that add context on a story by showing how other outlets have reported it. I’m committed to an open web that allows anyone to publish, even if that means tolerating the InfoWars and Epoch Times dumpster fires alongside more legitimate sources. Context and critical reasoning are key.

The press isn’t glamorous; it’s not always convenient or comfortable. But it’s absolutely crucial for a functional democracy and a free society. Because power is at stake, there will always be people who want to undermine or control journalism - and our job as democratic citizens is to refuse to allow them.

I’m grateful for the press. I’m grateful for democracy. Let’s be vigilant.


Simon Willison

Plot & Vega-Lite

Plot & Vega-Lite Useful documentation comparing the brand new Observable Plot to Vega-Lite, complete with examples of how to achieve the same thing in both libraries.

Plot & Vega-Lite

Useful documentation comparing the brand new Observable Plot to Vega-Lite, complete with examples of how to achieve the same thing in both libraries.


Observable Plot

Observable Plot This is huge: a brand new high-level JavaScript visualization library from Mike Bostock, the author of D3 - partially inspired by Vega-Lite which I've used enthusiastically in the past. First impressions are that this is a big step forward for quickly building high-quality visualizations. It's released under the ISC license which is "functionally equivalent to the BSD 2-Clause an

Observable Plot

This is huge: a brand new high-level JavaScript visualization library from Mike Bostock, the author of D3 - partially inspired by Vega-Lite which I've used enthusiastically in the past. First impressions are that this is a big step forward for quickly building high-quality visualizations. It's released under the ISC license which is "functionally equivalent to the BSD 2-Clause and MIT licenses".

Via @mbostock


MyDigitalFootprint

In leadership, why is recognising paradox critically important?

Source: Wendy Smith  https://www.learninginnovationslab.org/guest-faculty/ The importance of creating or seeing a paradox is that you can understand that the data and facts being presented to you can lead to the recommendation or conclusion being offered, but equally that the same data and facts can equally lead to a different conclusion.   Our problem is that we are not very

Source: Wendy Smith  https://www.learninginnovationslab.org/guest-faculty/


The importance of creating or seeing a paradox is that you can understand that the data and facts being presented to you can lead to the recommendation or conclusion being offered, but that the same data and facts can equally lead to a different conclusion.

Our problem is that we are not very good at finding flaws in our own arguments, if for no other reason than they support our incentives and beliefs. We tend to take it personally when someone attacks our logic, beliefs or method, even if they are searching for the paradox. Equally, the person you are about to question reacts just like you do.  

Searching for the paradox allows you to see the jumps, assumptions and framing in the logic being presented, which lays bare how our thinking and decisions are being manipulated.  Often it turns out, others are blinded to see one conclusion, and as a leader and executive, your role is to explore and question the flow.  

Logical flow decisions often create paradoxes because of an invalid argument, but they are nevertheless valuable in creating a narrative. We see this as a statement or proposition which, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems logically unacceptable or self-contradictory. Finding a paradox in non-logical flow decisions reveals errors in definitions that were assumed to be rigorous. Equally, a paradox can be seen in a seemingly absurd or contradictory statement or proposition which, when investigated, proves to be well-founded or true. What is evident is the need for critical thinking, questions and sensitivity.

Just because an idea pops into your head during a presentation doesn’t mean it’s true or reasonable. Idea bias is a new skinny belief you have just created, leading you to poor decision making as you have been framed. Often in a meeting, the framing is such that the presenter has set up a story or analogy which you believe in and fail to create new questions about (idea bias); as a way to make the logic jumps needed to justify a story.  If you cannot see the paradox, you are in their model, which means you are unlikely to make an unbiased decision.  If you can see the paradox you have mastered critical thinking and can use tools to ensure you make decisions that lead to outcomes that you want. 

 If you cannot see the paradox, you are in a model.


Decision-making framing itself creates paradoxes for us

Prevention paradox: For one person to benefit, many people have to change their behaviour — even though they receive no benefit or even suffer, from the change.  An assumption about the adoption of a product.

Decision-making paradox: Picking “the best decision-making method” is a decision problem in itself. Can the tool pick the best tool?  What has your process already framed as a decision method?

Abilene paradox: Making a decision based on what you think others want to do and not on what they actually want to do.  Everybody decides to do something that nobody really wants to do, but only what they thought everybody else wanted to do.  Do we have the agency to make an individual choice in the setting we have?

Inventor’s paradox: It is easier to solve a more general problem that covers the specifics of the sought-after solution.  Have we actually solved the problem?

Willpower paradox: Those who kept their minds open were more goal-directed and more motivated than those who declared their objective to themselves.

Buridan’s ass: Making a rational choice between two outcomes of equal value creates the longest delay in decision making (thanks, Yael).  Better known as Fredkin’s paradox: The more similar two choices are, the more time a decision-making agent spends on deciding.

Navigation paradox: Increased navigational precision may result in increased collision risk.

The paradox of tolerance: Should one tolerate intolerance if intolerance would destroy the possibility of tolerance?

Rule-following paradox: Even though rules are intended to determine actions, “no course of action could be determined by a rule because any course of action can be made out, to accord with the rule.”


A growing list of paradoxes that can help develop critical thinking can be found here.  I am exploring Paradox as I expand on my thinking at www.peakparadox.com 



Voidstar: blog

Dead Lies Dreaming (Laundry Files, 10) by Charles Stross

[from: Librarything]

Harrow the Ninth (The Locked Tomb Trilogy, 2) by Tamsyn Muir

[from: Librarything]

False Value (Rivers of London) by Ben Aaronovitch

[from: Librarything]

Attack Surface by Cory Doctorow

[from: Librarything]

Robot Artists & Black Swans: The Italian Fantascienza Stories by Bruce Sterling

[from: Librarything]

What Abigail Did That Summer by Ben Aaronovitch

[from: Librarything]

Simon Willison

Practical SQL for Data Analysis

Practical SQL for Data Analysis

This is a really great SQL tutorial: it starts with the basics, but quickly moves on to a whole array of advanced PostgreSQL techniques - CTEs, window functions, efficient sampling, rollups, pivot tables and even linear regressions executed directly in the database using regr_slope(), regr_intercept() and regr_r2(). I picked up a whole bunch of tips for things I didn't know you could do with PostgreSQL here.
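
To give a flavour of those regression aggregates, here's a tiny sketch (not from the tutorial itself - the monthly_stats table and its revenue/ad_spend columns are invented for illustration):

import psycopg2

# Hypothetical example: fit revenue as a linear function of ad_spend,
# entirely inside PostgreSQL. Table, column names and DSN are made up.
conn = psycopg2.connect("dbname=analytics")
with conn.cursor() as cursor:
    cursor.execute(
        """
        select
            regr_slope(revenue, ad_spend)     as slope,
            regr_intercept(revenue, ad_spend) as intercept,
            regr_r2(revenue, ad_spend)        as r_squared
        from monthly_stats
        """
    )
    print(cursor.fetchone())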

Via @be_haki

Monday, 03. May 2021

Simon Willison

Adding GeoDjango to an existing Django project

Work on VIAL for Vaccinate The States continues.

I talked about matching last week. I've been building more features to support figuring out if a newly detected location is already listed or not, with one of the most significant being the ability to search for locations within a radius of a specific point.

I've experimented with a PostgreSQL/Django version of the classic cos/sin/radians query for this, but if you're going to run it over a larger dataset it's worth using a proper spatial index for it - and GeoDjango has provided tools for this since Django 1.0 in 2008!
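
For context, this is roughly the shape of that index-free approach - a sketch only, assuming the location table and latitude/longitude columns shown later in this post; it is not the exact query I used:

from django.db import connection

def locations_within_radius(latitude, longitude, radius_m):
    # Spherical law of cosines evaluated for every row - no spatial index,
    # so this gets slow as the table grows. 6371000 is the Earth radius in metres.
    sql = """
        select id, name
        from location
        where 6371000 * acos(
            least(1.0, greatest(-1.0,
                cos(radians(%s)) * cos(radians(latitude)) *
                cos(radians(longitude) - radians(%s)) +
                sin(radians(%s)) * sin(radians(latitude))
            ))
        ) < %s
    """
    with connection.cursor() as cursor:
        cursor.execute(sql, [latitude, longitude, latitude, radius_m])
        return cursor.fetchall()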

I have to admit that outside of a few prototypes I've never used GeoDjango extensively myself - partly I've not had the right project for it, and in the past I've also been put off by the difficulty involved in installing all of the components.

That's a lot easier in 2021 than it was in 2008. But VIAL is a project in-flight, so here are some notes on what it took to get GeoDjango added to an existing Django project.

Alex Vandiver has been working with me on VIAL and helped figure out quite a few of these steps.

Activating PostGIS

The first step was to install the PostGIS PostgreSQL extension. This can be achieved using a Django migration:

from django.contrib.postgres.operations import CreateExtension
from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [
        ("my_app", "0108_previous-migration"),
    ]
    operations = [
        CreateExtension("postgis"),
    ]

Most good PostgreSQL hosting already makes this extension available - in our case we are using Google Cloud SQL which supports various extensions, including PostGIS. I use Postgres.app for my personal development environment which bundles PostGIS too.

So far, so painless!

System packages needed by GeoDjango

GeoDjango needs the GEOS, GDAL and PROJ system libraries. Alex added these to our Dockerfile (used for our production deployments) like so:

RUN apt-get update && apt-get install -y \
    binutils \
    gdal-bin \
    libproj-dev \
    && rm -rf /var/lib/apt/lists/*

Adding a point field to a Django model

I already had a Location model, which looked something like this:

class Location(models.Model):
    name = models.CharField()
    # ...
    latitude = models.DecimalField(
        max_digits=9, decimal_places=5
    )
    longitude = models.DecimalField(
        max_digits=9, decimal_places=5
    )

I made three changes to this class. First, I changed the base class to this:

from django.contrib.gis.db import models as gis_models


class Location(gis_models.Model):
    # ...

I added a point column:

point = gis_models.PointField( blank=True, null=True, spatial_index=True )

And I set up a custom save() method to populate that point field with a point representing the latitude and longitude every time the object was saved:

from django.contrib.gis.geos import Point

# ...

def save(self, *args, **kwargs):
    # Point is derived from latitude/longitude
    if self.longitude and self.latitude:
        self.point = Point(
            float(self.longitude),
            float(self.latitude),
            srid=4326,
        )
    else:
        self.point = None
    super().save(*args, **kwargs)

srid=4326 ensures the point is stored using WGS84 - the most common coordinate system for latitude and longitude values across our planet.

Running ./manage.py makemigrations identified the new point column and created the corresponding migration for me.

Backfilling the point column with a migration

The .save() method would populate point for changes going forward, but I had 40,000 records that already existed which I needed to backfill. I used this migration to do that:

from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [
        ("core", "0110_location_point"),
    ]
    operations = [
        migrations.RunSQL(
            sql="""
                update location
                set point = ST_SetSRID(
                    ST_MakePoint(longitude, latitude), 4326
                );""",
            reverse_sql=migrations.RunSQL.noop,
        )
    ]

Latitude/longitude/radius queries

With the new point column created and populated, here's the code I wrote to support simple latitude/longitude/radius queries:

from django.contrib.gis.geos import Point
from django.contrib.gis.measure import Distance
from django.http import JsonResponse  # needed for the error response below


def search_locations(request):
    qs = Location.objects.filter(soft_deleted=False)
    latitude = request.GET.get("latitude")
    longitude = request.GET.get("longitude")
    radius = request.GET.get("radius")
    if latitude and longitude and radius:
        # Validate latitude/longitude/radius
        for value in (latitude, longitude, radius):
            try:
                float(value)
            except ValueError:
                return JsonResponse(
                    {"error": "latitude/longitude/radius should be numbers"},
                    status=400,
                )
        qs = qs.filter(
            point__distance_lt=(
                Point(float(longitude), float(latitude)),
                Distance(m=float(radius)),
            )
        )
    # ... return JSON for locations

In writing up these notes I realize that this isn't actually the best way to do this, because it fails to take advantage of the spatial index on that column! I've filed myself an issue to switch to the spatial-index-friendly dwithin instead.
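
For reference, here's roughly what a dwithin version could look like - a sketch, not the code that shipped, and note the caveat in the comment about geometry vs geography columns:

from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D

def locations_near(longitude, latitude, radius):
    point = Point(float(longitude), float(latitude), srid=4326)
    # dwithin maps to ST_DWithin, which can take advantage of the spatial index.
    # Caveat: on a plain geometry column in SRID 4326 the distance must be given
    # in degrees; this sketch assumes the column is defined with geography=True,
    # which lets you pass a Distance in metres directly.
    return Location.objects.filter(
        soft_deleted=False,
        point__dwithin=(point, D(m=float(radius))),
    )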

Getting CI to work

The hardest part of all of this turned out to be getting our CI suites to pass.

We run CI in two places at the moment: GitHub Actions and Google Cloud Build (as part of our continuous deployment setup).

The first error I hit was this one:

psycopg2.errors.UndefinedFile: could not open extension control file "/usr/share/postgresql/13/extension/postgis.control": No such file or directory

It turns out that's what happens when your PostgreSQL server doesn't have the PostGIS extension available.

Our GitHub Actions configuration started like this:

name: Run tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: vaccinate
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
        ports:
          - 5432:5432
    steps:

The postgres:13 image doesn't have PostGIS. Swapping that out for postgis/postgis:13-3.1 fixed that (using this image).

Our Cloud Build configuration included this:

# Start up a postgres for tests
- id: "start postgres"
  name: "gcr.io/cloud-builders/docker"
  args:
    - "run"
    - "-d"
    - "--network=cloudbuild"
    - "-e"
    - "POSTGRES_HOST_AUTH_METHOD=trust"
    - "--name"
    - "vaccinate-db"
    - "postgres"

- id: "test image"
  name: "gcr.io/cloud-builders/docker"
  args:
    - "run"
    - "-t"
    - "--network=cloudbuild"
    - "-e"
    - "DATABASE_URL=postgres://postgres@vaccinate-db:5432/vaccinate"
    - "${_IMAGE_NAME}:latest"
    - "pytest"
    - "-v"

I tried swapping out that last postgres argument for postgis/postgis:13-3.1, like I had with the GitHub Actions one... and it failed with this error instead:

django.db.utils.OperationalError: could not connect to server: Connection refused Is the server running on host "vaccinate-db" (192.168.10.3) and accepting TCP/IP connections on port 5432?

This one stumped me. Eventually Alex figured out the problem: the extra extension meant that PostgreSQL took slightly longer to start - something our GitHub Actions configuration already handled with the pg_isready health check. He added this step to our Cloud Build configuration:

- id: "wait for postgres" name: "jwilder/dockerize" args: ["dockerize", "-timeout=60s", "-wait=tcp://vaccinate-db:5432"]

It uses jwilder/dockerize to wait until the database container starts accepting connections on port 5432.

Next steps

Now that we have GeoDjango I'm excited to start exploring new capabilities for our software. One thing in particular that interests me is teaching VIAL to backfill the county for a location based on its latitude and longitude - the US Census provides a shapefile of county polygons which I use with Datasette and SpatiaLite in my simonw/us-counties-datasette project, so I'm confident it would work well using PostGIS instead.
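
As a rough sketch of how that could work - the County model, its geom field and the county foreign key on Location are all hypothetical here, nothing like this exists in VIAL yet:

from django.contrib.gis.db import models as gis_models


class County(gis_models.Model):
    # Boundaries would be imported from the Census counties shapefile
    name = gis_models.CharField(max_length=100)
    geom = gis_models.MultiPolygonField(srid=4326, spatial_index=True)


def backfill_county(location):
    # Point-in-polygon lookup: find the county polygon containing this point
    county = County.objects.filter(geom__contains=location.point).first()
    if county is not None:
        location.county = county  # assumes Location gains a county foreign key
        location.save()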

Releases this week

django-sql-dashboard: 0.11a0 - (22 total releases) - 2021-04-26
Django app for building dashboards using raw SQL queries

TIL this week

migrations.RunSQL.noop for reversible SQL migrations
Running Datasette on Replit

Damien Bod

Create an OIDC credential Issuer with MATTR and ASP.NET Core

This article shows how to create and issue verifiable credentials using MATTR and ASP.NET Core. The ASP.NET Core application allows an admin user to create an OIDC credential issuer using the MATTR service. The credentials are displayed in an ASP.NET Core Razor Page web UI as a QR code for the users of the application. The user can use a digital wallet from MATTR to scan the QR code, authenticate against an Auth0 identity provider configured for this flow and use the claims from the id token to add the verified credential to the digital wallet. In a follow-up post, a second application will then use the verified credentials to allow access to a second business process.

Code: https://github.com/swiss-ssi-group/MattrGlobalAspNetCore

Blogs in the series

Getting started with Self Sovereign Identity SSI
Create an OIDC credential Issuer with MATTR and ASP.NET Core
Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR

Setup

The solution involves the MATTR API, which handles all the blockchain identity logic. An ASP.NET Core application is used to create the digital identity and the OIDC credential issuer using the MATTR APIs, and also to present this as a QR code which can be scanned. An identity provider is required to add the credential properties to the id token. The properties in a verified credential are issued using the claims values from the id token, so a specific identity provider is required for every credential issuer using this technique. Part of the business logic of this solution is adding business claims to the identity provider. A MATTR digital wallet is required to scan the QR code, authenticate against the OIDC provider, which in our case is Auth0, and then store the verified credentials in the wallet for later use.

MATTR Setup

You need to register with MATTR and create a new account. MATTR will issue you access to your sandbox domain and you will get access data from them plus a link to support.

Once setup, use the OIDC Bridge tutorial to implement the flow used in this demo. The docs are really good but you need to follow the docs exactly.

https://learn.mattr.global/tutorials/issue/oidc-bridge/issue-oidc

Auth0 Setup

A standard trusted web application which supports the code flow is required so that the MATTR digital wallet can authenticate using the identity provider and use the id token values from the claims which are required in the credential. It is important to create a new application which is only used for this because the client secret is required when creating the OIDC credential issuer and is shared with the MATTR platform. It would probably be better to use certificates instead of a shared secret which is persisted in different databases. We also use a second Auth0 application configuration to sign into the web application but this is not required to issue credentials.

In Auth0, rules are used to extend the id token claims. You need to add your claims as required by the MATTR API and your business logic for the credentials you wish to issue.

function (user, context, callback) {
  const namespace = 'https://--your-tenant--.vii.mattr.global/';
  context.idToken[namespace + 'license_issued_at'] = user.user_metadata.license_issued_at;
  context.idToken[namespace + 'license_type'] = user.user_metadata.license_type;
  context.idToken[namespace + 'name'] = user.user_metadata.name;
  context.idToken[namespace + 'first_name'] = user.user_metadata.first_name;
  context.idToken[namespace + 'date_of_birth'] = user.user_metadata.date_of_birth;
  callback(null, user, context);
}

For every user (holder) who should be able to create verifiable credentials, you must add the credential data to the user profile. This is part of the business process with this flow. If you were to implement this for a real application with lots of users, it would probably be better to integrate the identity provider into the solution issuing the credentials and add a UI for editing the user profile data which is used in the credentials. This would be really easy using ASP.NET Core Identity and for example OpenIddict or IdentityServer4. It is important that the user cannot edit this data. This logic is part of the credential issuer logic and not part of the user profile.

After creating a new MATTR OIDC credential issuer, the callback URL needs to be added to the Open ID connect code flow client used for the digital wallet sign in.

Add the URL to the Allowed Callback URLs in the settings of your Auth0 application configuration for the digital wallet.

Implementing the OpenID Connect credentials Issuer application

The ASP.NET Core application is used to create new OIDC credential issuers and also to display the QR code for each of them so that the verifiable credential can be loaded into the digital wallet. The application requires secrets. The data is stored to a database, so that any credential can be added to a wallet at a later date and also so that you can find the credentials you created. The MattrConfiguration holds the data and the secrets you received from MATTR for your account access to the API. The Auth0 configuration is the data required to sign in to the application. The Auth0Wallet configuration is the data required to create the OIDC credential issuer so that the digital wallet can authenticate the identity using the Auth0 application. This data is stored in the user secrets during development.

{
  // use user secrets
  "ConnectionStrings": {
    "DefaultConnection": "--your-connection-string--"
  },
  "MattrConfiguration": {
    "Audience": "https://vii.mattr.global",
    "ClientId": "--your-client-id--",
    "ClientSecret": "--your-client-secret--",
    "TenantId": "--your-tenant--",
    "TenantSubdomain": "--your-tenant-sub-domain--",
    "Url": "http://mattr-prod.au.auth0.com/oauth/token"
  },
  "Auth0": {
    "Domain": "--your-auth0-domain",
    "ClientId": "--your--auth0-client-id--",
    "ClientSecret": "--your-auth0-client-secret--"
  },
  "Auth0Wallet": {
    "Domain": "--your-auth0-wallet-domain",
    "ClientId": "--your--auth0-wallet-client-id--",
    "ClientSecret": "--your-auth0-wallet-client-secret--"
  }
}

Accessing the MATTR APIs

The MattrConfiguration DTO is used to fetch the MATTR account data for the API access and to use in the application.

public class MattrConfiguration
{
    public string Audience { get; set; }
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
    public string TenantId { get; set; }
    public string TenantSubdomain { get; set; }
    public string Url { get; set; }
}

The MattrTokenApiService is used to acquire an access token for the MATTR API. The token is stored in a cache and only fetched again if the old one has expired or is not available.

public class MattrTokenApiService
{
    private readonly ILogger<MattrTokenApiService> _logger;
    private readonly MattrConfiguration _mattrConfiguration;
    private static readonly Object _lock = new Object();
    private IDistributedCache _cache;
    private const int cacheExpirationInDays = 1;

    private class AccessTokenResult
    {
        public string AcessToken { get; set; } = string.Empty;
        public DateTime ExpiresIn { get; set; }
    }

    private class AccessTokenItem
    {
        public string access_token { get; set; } = string.Empty;
        public int expires_in { get; set; }
        public string token_type { get; set; }
        public string scope { get; set; }
    }

    private class MattrCrendentials
    {
        public string audience { get; set; }
        public string client_id { get; set; }
        public string client_secret { get; set; }
        public string grant_type { get; set; } = "client_credentials";
    }

    public MattrTokenApiService(
        IOptions<MattrConfiguration> mattrConfiguration,
        IHttpClientFactory httpClientFactory,
        ILoggerFactory loggerFactory,
        IDistributedCache cache)
    {
        _mattrConfiguration = mattrConfiguration.Value;
        _logger = loggerFactory.CreateLogger<MattrTokenApiService>();
        _cache = cache;
    }

    public async Task<string> GetApiToken(HttpClient client, string api_name)
    {
        var accessToken = GetFromCache(api_name);
        if (accessToken != null)
        {
            if (accessToken.ExpiresIn > DateTime.UtcNow)
            {
                return accessToken.AcessToken;
            }
            else
            {
                // remove => NOT Needed for this cache type
            }
        }

        _logger.LogDebug($"GetApiToken new from oauth server for {api_name}");

        // add
        var newAccessToken = await GetApiTokenClient(client);
        AddToCache(api_name, newAccessToken);

        return newAccessToken.AcessToken;
    }

    private async Task<AccessTokenResult> GetApiTokenClient(HttpClient client)
    {
        try
        {
            var payload = new MattrCrendentials
            {
                client_id = _mattrConfiguration.ClientId,
                client_secret = _mattrConfiguration.ClientSecret,
                audience = _mattrConfiguration.Audience
            };

            var authUrl = "https://auth.mattr.global/oauth/token";
            var tokenResponse = await client.PostAsJsonAsync(authUrl, payload);

            if (tokenResponse.StatusCode == System.Net.HttpStatusCode.OK)
            {
                var result = await tokenResponse.Content.ReadFromJsonAsync<AccessTokenItem>();
                DateTime expirationTime = DateTimeOffset.FromUnixTimeSeconds(result.expires_in).DateTime;
                return new AccessTokenResult
                {
                    AcessToken = result.access_token,
                    ExpiresIn = expirationTime
                };
            }

            _logger.LogError($"tokenResponse.IsError Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}");
            throw new ApplicationException($"Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}");
        }
        catch (Exception e)
        {
            _logger.LogError($"Exception {e}");
            throw new ApplicationException($"Exception {e}");
        }
    }

    private void AddToCache(string key, AccessTokenResult accessTokenItem)
    {
        var options = new DistributedCacheEntryOptions().SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

        lock (_lock)
        {
            _cache.SetString(key, JsonConvert.SerializeObject(accessTokenItem), options);
        }
    }

    private AccessTokenResult GetFromCache(string key)
    {
        var item = _cache.GetString(key);
        if (item != null)
        {
            return JsonConvert.DeserializeObject<AccessTokenResult>(item);
        }

        return null;
    }
}

Generating the API DTOs using Nswag

The MattrOpenApiClientSevice file was generated using Nswag and the Open API file provided by MATTR here. We only generated the DTOs with this; the API is then accessed using an HttpClient instance. The Open API file used in this solution is included in the git repo.

Creating the OIDC credential issuer

The MattrCredentialsService is used to create an OIDC credentials issuer using the MATTR APIs. This is implemented using the CreateCredentialsAndCallback method. The created callback is returned so that it can be displayed in the UI and copied to the specific Auth0 application configuration.

private readonly IConfiguration _configuration;
private readonly DriverLicenseCredentialsService _driverLicenseService;
private readonly IHttpClientFactory _clientFactory;
private readonly MattrTokenApiService _mattrTokenApiService;
private readonly MattrConfiguration _mattrConfiguration;

public MattrCredentialsService(IConfiguration configuration,
    DriverLicenseCredentialsService driverLicenseService,
    IHttpClientFactory clientFactory,
    IOptions<MattrConfiguration> mattrConfiguration,
    MattrTokenApiService mattrTokenApiService)
{
    _configuration = configuration;
    _driverLicenseService = driverLicenseService;
    _clientFactory = clientFactory;
    _mattrTokenApiService = mattrTokenApiService;
    _mattrConfiguration = mattrConfiguration.Value;
}

public async Task<string> CreateCredentialsAndCallback(string name)
{
    // create a new one
    var driverLicenseCredentials = await CreateMattrDidAndCredentialIssuer();
    driverLicenseCredentials.Name = name;
    await _driverLicenseService.CreateDriverLicense(driverLicenseCredentials);

    var callback = $"https://{_mattrConfiguration.TenantSubdomain}/ext/oidc/v1/issuers/{driverLicenseCredentials.OidcIssuerId}/federated/callback";
    return callback;
}

The CreateMattrDidAndCredentialIssuer method implements the different steps described in the MATTR API documentation for this. An access token for the MATTR API is created or retrieved from the cache, a DID is created, and the id from the DID POST response is used to create the OIDC credential issuer. The DriverLicenseCredentials object is returned and persisted to a database, and the callback is created using this object.

private async Task<DriverLicenseCredentials> CreateMattrDidAndCredentialIssuer()
{
    HttpClient client = _clientFactory.CreateClient();
    var accessToken = await _mattrTokenApiService
        .GetApiToken(client, "mattrAccessToken");

    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    client.DefaultRequestHeaders
        .TryAddWithoutValidation("Content-Type", "application/json");

    var did = await CreateMattrDid(client);
    var oidcIssuer = await CreateMattrCredentialIssuer(client, did);

    return new DriverLicenseCredentials
    {
        Name = "not_named",
        Did = JsonConvert.SerializeObject(did),
        OidcIssuer = JsonConvert.SerializeObject(oidcIssuer),
        OidcIssuerId = oidcIssuer.Id
    };
}

The CreateMattrDid method creates a new DID as specified by the API. The MattrOptions class is used to create the request object. This is serialized using the StringContentWithoutCharset class due to a bug in the MATTR API validation. I created this class using the blog from Gunnar Peipman.

private async Task<V1_CreateDidResponse> CreateMattrDid(HttpClient client)
{
    // create did, post to dids
    // https://learn.mattr.global/api-ref/#operation/createDid
    // https://learn.mattr.global/tutorials/dids/use-did/
    var createDidUrl = $"https://{_mattrConfiguration.TenantSubdomain}/core/v1/dids";

    var payload = new MattrOpenApiClient.V1_CreateDidDocument
    {
        Method = MattrOpenApiClient.V1_CreateDidDocumentMethod.Key,
        Options = new MattrOptions()
    };

    var payloadJson = JsonConvert.SerializeObject(payload);
    var uri = new Uri(createDidUrl);

    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var createDidResponse = await client.PostAsync(uri, content);
        if (createDidResponse.StatusCode == System.Net.HttpStatusCode.Created)
        {
            var v1CreateDidResponse = JsonConvert.DeserializeObject<V1_CreateDidResponse>(
                await createDidResponse.Content.ReadAsStringAsync());
            return v1CreateDidResponse;
        }

        var error = await createDidResponse.Content.ReadAsStringAsync();
    }

    return null;
}

The MattrOptions DTO is used to create a default DID using the key type “ed25519”. See the MATTR API docs for further details.

public class MattrOptions
{
    /// <summary>
    /// The supported key types for the DIDs are ed25519 and bls12381g2.
    /// If the keyType is omitted, the default key type that will be used is ed25519.
    ///
    /// If the keyType in options is set to bls12381g2 a DID will be created with
    /// a BLS key type which supports BBS+ signatures for issuing ZKP-enabled credentials.
    /// </summary>
    public string keyType { get; set; } = "ed25519";
}

The CreateMattrCredentialIssuer method builds and sends the POST request that creates the OIDC credential issuer. The request properties need to be set up for your credential properties and must match claims from the id token of the Auth0 user profile. This is where the OIDC client for the digital wallet is set up and also where the credential claims are specified. If this is set up incorrectly, loading the data into your wallet will fail. The HTTP request and response DTOs are implemented using the Nswag-generated classes.

private async Task<V1_CreateOidcIssuerResponse> CreateMattrCredentialIssuer(HttpClient client, V1_CreateDidResponse did)
{
    // create vc, post to credentials api
    // https://learn.mattr.global/tutorials/issue/oidc-bridge/setup-issuer
    var createCredentialsUrl = $"https://{_mattrConfiguration.TenantSubdomain}/ext/oidc/v1/issuers";

    var payload = new MattrOpenApiClient.V1_CreateOidcIssuerRequest
    {
        Credential = new Credential
        {
            IssuerDid = did.Did,
            Name = "NationalDrivingLicense",
            Context = new List<Uri>
            {
                new Uri("https://schema.org") // Only this is supported
            },
            Type = new List<string> { "nationaldrivinglicense" }
        },
        ClaimMappings = new List<ClaimMappings>
        {
            new ClaimMappings{ JsonLdTerm="name", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/name"},
            new ClaimMappings{ JsonLdTerm="firstName", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/first_name"},
            new ClaimMappings{ JsonLdTerm="licenseType", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/license_type"},
            new ClaimMappings{ JsonLdTerm="dateOfBirth", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/date_of_birth"},
            new ClaimMappings{ JsonLdTerm="licenseIssuedAt", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/license_issued_at"}
        },
        FederatedProvider = new FederatedProvider
        {
            ClientId = _configuration["Auth0Wallet:ClientId"],
            ClientSecret = _configuration["Auth0Wallet:ClientSecret"],
            Url = new Uri($"https://{_configuration["Auth0Wallet:Domain"]}"),
            Scope = new List<string> { "openid", "profile", "email" }
        }
    };

    var payloadJson = JsonConvert.SerializeObject(payload);
    var uri = new Uri(createCredentialsUrl);

    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var createOidcIssuerResponse = await client.PostAsync(uri, content);
        if (createOidcIssuerResponse.StatusCode == System.Net.HttpStatusCode.Created)
        {
            var v1CreateOidcIssuerResponse = JsonConvert.DeserializeObject<V1_CreateOidcIssuerResponse>(
                await createOidcIssuerResponse.Content.ReadAsStringAsync());
            return v1CreateOidcIssuerResponse;
        }

        var error = await createOidcIssuerResponse.Content.ReadAsStringAsync();
    }

    throw new Exception("whoops something went wrong");
}

Now the service is completely ready to generate credentials. This can be used in any Blazor UI, Razor Page or MVC view in ASP.NET Core. The services are added to the DI container in the startup class. The callback URL is displayed in the UI if the application successfully creates a new OIDC credential issuer.

public class AdminModel : PageModel
{
    private readonly MattrCredentialsService _mattrCredentialsService;

    public bool CreatingDriverLicense { get; set; } = true;
    public string Callback { get; set; }

    [BindProperty]
    public IssuerCredential IssuerCredential { get; set; }

    public AdminModel(MattrCredentialsService mattrCredentialsService)
    {
        _mattrCredentialsService = mattrCredentialsService;
    }

    public void OnGet()
    {
        IssuerCredential = new IssuerCredential();
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        Callback = await _mattrCredentialsService
            .CreateCredentialsAndCallback(IssuerCredential.CredentialName);

        CreatingDriverLicense = false;
        return Page();
    }
}

public class IssuerCredential
{
    [Required]
    public string CredentialName { get; set; }
}

Adding credentials to your wallet

After the callback URL has been added to the Auth0 callback URLs, the credentials can be used to add verifiable credentials to your wallet. This is fairly simple. The Razor Page uses the data from the database and generates a URL following the MATTR specification with the id of the created OIDC credential issuer. The claims from the id token or the profile data are just used to display the data for the user signed into the web application. This is not the same data which is used by the digital wallet. If the same person logs into the digital wallet, then the data is the same. The wallet authenticates the identity separately.

public class DriverLicenseCredentialsModel : PageModel
{
    private readonly DriverLicenseCredentialsService _driverLicenseCredentialsService;
    private readonly MattrConfiguration _mattrConfiguration;

    public string DriverLicenseMessage { get; set; } = "Loading credentials";
    public bool HasDriverLicense { get; set; } = false;
    public DriverLicense DriverLicense { get; set; }
    public string CredentialOfferUrl { get; set; }

    public DriverLicenseCredentialsModel(DriverLicenseCredentialsService driverLicenseCredentialsService,
        IOptions<MattrConfiguration> mattrConfiguration)
    {
        _driverLicenseCredentialsService = driverLicenseCredentialsService;
        _mattrConfiguration = mattrConfiguration.Value;
    }

    public async Task OnGetAsync()
    {
        //"license_issued_at": "2021-03-02",
        //"license_type": "B1",
        //"name": "Bob",
        //"first_name": "Lammy",
        //"date_of_birth": "1953-07-21"
        var identityHasDriverLicenseClaims = true;
        var nameClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/name");
        var firstNameClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/first_name");
        var licenseTypeClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/license_type");
        var dateOfBirthClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/date_of_birth");
        var licenseIssuedAtClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/license_issued_at");

        if (nameClaim == null
            || firstNameClaim == null
            || licenseTypeClaim == null
            || dateOfBirthClaim == null
            || licenseIssuedAtClaim == null)
        {
            identityHasDriverLicenseClaims = false;
        }

        if (identityHasDriverLicenseClaims)
        {
            DriverLicense = new DriverLicense
            {
                Name = nameClaim.Value,
                FirstName = firstNameClaim.Value,
                LicenseType = licenseTypeClaim.Value,
                DateOfBirth = dateOfBirthClaim.Value,
                IssuedAt = licenseIssuedAtClaim.Value,
                UserName = User.Identity.Name
            };

            // get per name
            //var offerUrl = await _driverLicenseCredentialsService.GetDriverLicenseCredentialIssuerUrl("ndlseven");

            // get the last one
            var offerUrl = await _driverLicenseCredentialsService.GetLastDriverLicenseCredentialIssuerUrl();

            DriverLicenseMessage = "Add your driver license credentials to your wallet";
            CredentialOfferUrl = offerUrl;
            HasDriverLicense = true;
        }
        else
        {
            DriverLicenseMessage = "You have no valid driver license";
        }
    }
}

The data is displayed using Bootstrap. If you use a MATTR wallet to scan the QR code shown underneath, you will be redirected to authenticate against the specified Auth0 application. If you have the claims, you can add the verifiable credentials to your digital wallet.

Notes

The MATTR API has some problems, and stricter validation would help a lot. But MATTR support is awesome, the team are really helpful, and you will end up with a working solution. It would also be great if the Open API file could be used without changes to generate a client and the DTOs. It would make sense if you could issue credential data from data held in the credential issuer application and not from the id token of the user profile. I understand that in some use cases you would like to protect against any wallet taking credentials for other identities, but as a credential issuer I cannot always add my business data to user profiles in the IDP. The security of this solution depends entirely on the user profile data. If a non-authorized person can change this data (in this case, this could be the same user), then incorrect verifiable credentials can be created.

Next step is to create an application to verify and use the verifiable credentials created here.

Links

https://mattr.global/

https://mattr.global/get-started/

https://learn.mattr.global/

https://keybase.io/

https://learn.mattr.global/tutorials/dids/did-key

https://gunnarpeipman.com/httpclient-remove-charset/

https://auth0.com/

Sunday, 02. May 2021

Simon Willison

Hosting SQLite databases on Github Pages

Hosting SQLite databases on Github Pages

I've seen the trick of running SQLite compiled to WASM in the browser before, but this comes with an incredibly clever bonus trick: it uses SQLite's page structure to fetch subsets of the database file via HTTP range requests, which means you can run indexed SQL queries against a 600MB database file while only fetching a few MBs of data over the wire. Absolutely brilliant. Tucked away at the end of the post is another neat trick: making the browser DOM available to SQLite as a virtual table, so you can query and update the DOM of the current page using SQL!
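
Not the library from the post, but the underlying trick is easy to demonstrate: an HTTP Range request that fetches just the first few KB of a remote SQLite file (the URL below is a placeholder):

import requests

resp = requests.get(
    "https://example.github.io/data/my-database.sqlite3",  # placeholder URL
    headers={"Range": "bytes=0-4095"},  # ask for the first 4 KB page only
)
print(resp.status_code)   # 206 Partial Content when range requests are supported
print(resp.content[:16])  # b"SQLite format 3\x00" if it really is a SQLite file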

Via Hacker News


One year of TILs

Just over a year ago I started tracking TILs, inspired by Josh Branchaud's collection. I've since published 148 TILs across 43 different topics. It's a great format!

TIL stands for Today I Learned. The thing I like most about TILs is that they drop the barrier to publishing something online to almost nothing.

If I'm writing a blog entry, I feel like it needs to say something new. This pressure for originality leads to vast numbers of incomplete, draft posts and a sporadic publishing schedule that trends towards not publishing anything at all.

(Establishing a weeknotes habit has helped enormously here too.)

The bar for a TIL is literally "did I just learn something?" - they effectively act as a public notebook.

They also reflect my values as a software engineer. The thing I love most about this career is that the opportunities to learn new things never reduce - there will always be new sub-disciplines to explore, and I aspire to learn something new every single working day.

My hope is that by publishing a constant stream of TILs I can reinforce the idea that even if you've been working in this industry for twenty years there will always be new things to learn, and learning any new trick - even the most basic thing - should be celebrated.


Doc Searls Weblog

On the persistence of KPIG

On Quora, William Moser asked, Would the KPIG radio format of Americana—Folk, Blugrass, Delta to modern Blues, Blues-rock, trad. & modern C&W, country & Southern Rock, jam-bands, singer/songwriters, some jazz, big-band & jazz-singers sell across markets in America?

I answered,

I’ve liked KPIG since its prior incarnation as KFAT.

It’s a great fit in the Santa Cruz-Salinas-Monterey market, anchored in Santa Cruz, which is a college/beach/hippie/artist kind of town.

Ratings have always been good, putting it in the top few. See here.

It has also done okay in San Luis Obispo, for similar reasons.

For what it’s worth, those are markets #91 and #171. Similar in a coastal California kind of way.

The station is also a throwback, with its commitment to being the institution it is, with real personalities who actually live there, aren’t leaving, and having a sense of humor about all of it. Also love. And listener participation. None of that is a formula.

Watch this and you’ll get what I mean. It’s what all of radio ought to be, in its own local ways, and way too little of it is.

William replied,

I have played the brilliant ‘Ripple’ since it was released; the editing is spot-on.

With respect to DJ personalities, there are at least two that, in my fantasy of owning that station, would be gone before the ink was dry. (You might even have an idea of whom). Paradoxically, that’s probably part of what makes KPIG work.

It’s really the cross-country market appeal of the ‘Americana’ music format that is my question.

My response:

I think it’s a local thing. KPIG (and KFAT and KHIP before it) is a deeply rooted local institution. It’s not a formula, and without standing one up in some other region like it, and funding it long enough to see if it catches on, it’s hard to say.

All of radio is in decline now, as talk listening moves to podcasts and music listening moves to streaming. The idea that a city or a region needs things called “stations,” all with limited geographical coverage, and with live talent performing, and an obligation to stay on the “air” 24/7, designed to work through things called “radios,” which are no longer sold in stores and persist as secondary functions on car dashboards, is an anachronism at a time when damn near everything (including chat, telephony, video, photography, gaming, fitness tracking and you-name-it) is moving onto phones—which are the most persistently personal things people carry everywhere.

There are truly great alternative stations, however, that thrive in their markets: KEXP in Seattle and WWOZ in New Orleans are two great examples. That KPIG manages to persist as a commercial station is especially remarkable in a time when people would rather hit a 30-second skip-forward button on their phone app than listen to an ad.

So I guess my answer is no. But if you want Americana, there are lots of stations that play or approximate it on the Internet. And all of them can be received on your phone or your computer.

That was a bit tough to write, because yesterday I was poised to enjoy Ron Phillips‘ long-standing Saturday morning show on WWOZ when I learned he had died suddenly of a heart attack. Ron has been a great friend since the 1970s, when he was a mainstay at WQDR in Raleigh, and I was a partner in its ad agency (while still being a funny guy at cross-market non-rival WDBS), and I had planned to give him a call after his show, to see how he was doing. (He’d had carpal tunnel surgery recently.)

Though WWOZ is alive and thriving, and there persist many radio stations that are vital institutions in their towns and regions, radio on the whole has been in decline. See here:

Via VisualCapitalist.com

The slopes there are long, and a case can be made, on that low angle alone, that radio will persist forever, along with magazines and TV. But it’s clear that our media usage is moving, overall, to the Internet, where mobile devices are especially good at doing what radio, TV and magazines also do—and in some ways doing it better.

But back to William Moser’s question.

There are already many ways to stream Americana (aka American roots music) on the Internet, whether from stations, streaming services like Spotify, Pandora, or a channel or two on SiriusXM. But those are largely personality-free.

What KPIG has (once described to me as “mutant cowboy rock & roll”) isn’t a format. It’s an institution, like a favorite old tavern, music club, outdoor festival, or coffee shop—or all of those rolled into one. Can one replicate that with an Internet station, or a channel on some global service?

I think not, because those services are all global. You need to start with local roots. WWOZ has that, because it started as a radio station in New Orleans, a place that is itself deeply rooted. After years of living all over the place, Ron moved to New Orleans to be with those roots, and near its greatest radio voice.

Radio is geographical. All the stations I mentioned above, living and dead, are not the biggest ones in their towns. KPIG’s signal is almost notoriously minimal. So is KEXP’s in Seattle. They are local in the most basic sense.

I suppose that condition will persist for another decade or two. But it’s hard to say. Mobile devices are also evolving quickly, getting old within just a few years.

I’m not sure we’ll miss it much, the succession of generations being what it is. But we are losing something. And you can still hear it on KPIG.


Simon Willison

Query Engines: Push vs. Pull

Query Engines: Push vs. Pull

Justin Jaffray (who has worked on Materialize) explains the difference between push and pull query execution engines using some really clear examples built around JavaScript generators.
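
The linked post uses JavaScript generators; as a toy illustration, here's the same pull-model idea sketched in Python (not code from the article):

def scan(rows):
    # Leaf operator: yields raw rows on demand
    for row in rows:
        yield row

def select(child, predicate):
    # Pulls rows from its child and passes on the ones that match
    for row in child:
        if predicate(row):
            yield row

def project(child, columns):
    # Pulls rows and keeps only the requested columns
    for row in child:
        yield {c: row[c] for c in columns}

rows = [{"id": 1, "city": "SF"}, {"id": 2, "city": "NYC"}]
query = project(select(scan(rows), lambda r: r["city"] == "SF"), ["id"])
print(list(query))  # the consumer at the top "pulls": [{'id': 1}]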

Via Hacker News

Saturday, 01. May 2021

Ben Werdmüller

Reading, watching, playing, using: April, 2021

This is my monthly roundup of the media I consumed and found interesting. Here's my list for April, 2021.

Books

Captain America Vol. 1: Winter In America, by Ta-Nehisi Coates. To be honest, I was expecting more. Ta-Nehisi Coates is such a brilliant writer, but this volume felt minimalist to the point of being abstracted away from the drama. It does set up the story for a little more, but not enough more. Still, it felt good to read a comic book again - it’s been quite a while.

Suite for Barbara Loden, by Nathalie Léger, translated by Natasha Lehrer and Cécile Menon. I read it in one sitting, mesmerized by the writing and the articulation of a recognizable kind of sadness. This is the kind of book I would write if I was brave enough: almost certainly not as skillfully, but with an intention to gather the dark corners of solitude and weaving it into poetry. The translation is superb; I wish I could read it in its original French.

Shuggie Bain, by Douglas Stuart. Immersive and real. I could smell Glasgow in every page. The desperation of these well-rounded characters trying to survive through post-industrial poverty, and the moments of human beauty despite it all, ring true. The writing is excellent; the heart at the center of it all beats strong.

Streaming

Nomadland. Naturalistic to the point that fiction and reality are blurred. Frances McDormand gives an impressive performance as always, but what really stands out are the real-life characters drawn into the story. Their lives are written across their faces; tragic but defiant.

The Father. Anchored by kaleidoscopic writing and nuanced performances, we see one man’s dementia play out from all sides. The set is a character in itself, reflecting slips of memory and a rapidly unraveling relationship with time. Watching it from the context of my own parents’ - albeit very different - failing health was tough. One of those films where quiet recognition leaves you cathartically weeping alone in the dark.

The Mitchells vs. The Machines. I guffawed. A lot. Packed full of in-jokes, this has everything you’d expect from the people who made Into the Spider-Verse and The LEGO Movie. A+, five stars.

Notable Articles Business

The Mysterious Case of the F*cking Good Pizza. “Suddenly, I was seized by a need to get to the bottom of a matter that felt like a glitch in the fabric of my humdrum pandemic existence: Where did these clickbait restaurant brands come from, even if they didn’t seem to technically exist? And why did delivery marketplaces across the U.S., and countries around the world, suddenly seem to be flooded with them?”

The Wrong Kind of Splash. Om on Unsplash: “I was a fan up until last evening when I got an email announcing that the company was being acquired by none other than Getty Images. Hearing this was like a red hot spike through the eyes. A startup whose raison d’être was to upend draconian and amoral companies like Getty Images was going to now be part of Getty. Even after I have had time to process it, the news isn’t sitting well with me.”

Let Your Employees Ask Questions. “But you also have to recognize that as a founder, you’re empowered to fuck things up. If you spend three months chasing a market that turns outs to be a dead end, nobody is going to fire you. You own the place. If someone does that at a large company, they’re maybe getting fired. And your employees will bring that reticence to your startup. So, early on, plan on providing feedback and answering a lot of questions about how you want things to get done.”

Investing in Firefly Health. This announcement caught my attention for this: “Health insurance is undergoing a rapid cycle of unbundling and repackaging. Vertically-integrated “payviders” (groups that both pay for services, like an insurer would, and administer those services, like a provider would) are emerging as a new standard, and provider networks are being recontoured as virtual-first care models take root.” I have some thoughts on what the ultimate “payvider” would be - but I wonder if these sorts of services will help get America more comfortable with the idea of a real healthcare system.

How Index Funds May Hurt the Economy. "In recent decades, the whole economy has gone on autopilot. Index-fund investment is hyperconcentrated. So is online retail. So are pharmaceuticals. So is broadband. Name an industry, and it is likely dominated by a handful of giant players. That has led to all sorts of deleterious downstream effects: suppressing workers’ wages, raising consumer prices, stifling innovation, stoking inequality, and suffocating business creation. The problem is not just the indexers. It is the public markets they reflect, where more chaos, more speculation, more risk, more innovation, and more competition are desperately needed."

If You Love Us, Pay Us: A letter from Sean Combs to Corporate America. "Corporations like General Motors have exploited our culture, undermined our power, and excluded Black entrepreneurs from participating in the value created by Black consumers. In 2019, brands spent $239 billion on advertising. Less than 1% of that was invested in Black-owned media companies. Out of the roughly $3 billion General Motors spent on advertising, we estimate only $10 million was invested in Black-owned media. Only $10 million out of $3 billion! Like the rest of Corporate America, General Motors is telling us to sit down, shut up and be happy with what we get."

Amazon Workers Defeat Union Effort in Alabama. "The company’s decisive victory deals a crushing blow to organized labor, which had hoped the time was ripe to start making inroads." Pretty disappointing.

Why Can’t American Workers Just Relax?. “Alarmed by the toll of increasingly nonexistent boundaries between work and home during the pandemic, a growing number of nations want to help their citizens unplug when they’re done with work. In the last few months, several governments, including Canada, the E.U., Ireland, and even Japan—which invented the word karoshi, for death by overwork—announced they’re considering “right to disconnect” laws. Similar laws are already on the books in Argentina, Belgium, Chile, France, Ireland, Italy, the Philippines, and Spain.” Some great links to movements for better working conditions here.

Personal Reflection: Empathy In The Workplace. "The best empathetic leaders are frequently grounded in authentic emotional connectivity with those on their team and beyond. Empathy in this context conveys sincere optimism about how “we can make it through life’s challenges together” and gives others the sense of “team” at a time when they feel most vulnerable and alone. Positive corporate culture creates this emotional support in the organization that goes well beyond tackling corporate objectives."

Six fun remote team building activities. Range is leading the way on organizational culture. This is so great. I bought a SnackMagic box for my team.

Changes at Basecamp. This is a shockingly regressive move from Basecamp, a company that literally wrote the book on building team culture. While "paternalistic benefits" like gym memberships are arguable, not being able to discuss societal context or give feedback to your peers in a structured way paves the way for a monoculture that excludes entire demographics of people. Basecamp's workers should unionize. This is the exact opposite of what an inclusive, empathetic company should be doing.

An Open Letter to Jason and David. "Anyways, it appears your reaction to the pleas and asks to recognize that Basecamp already represents a diversity of experiences and that we want the company’s software and policies to do the same has once again been lacking and disproportionate. But what’s particularly disappointing is the direction of your reaction. The oppressive direction. The silencing direction."

Culture

1984: The Hitchhiker's Guide to the Galaxy. A wonderful look back on one of the best games ever made, co-authored by Douglas Adams himself.

Non-Fungible Taylor Swift. “To put it another way, while we used to pay for plastic discs and thought we were paying for songs (or newspapers/writing or cable/TV stars), empowering distribution over creators, today we pay with both money and attention according to the direction of creators, giving them power over everyone. If the creator decides that their NFTs are important, they will have value; if they decide their show is worthless, it will not.”

Media

Why We’re Freaking Out About Substack. “Danny Lavery had just agreed to a two-year, $430,000 contract with the newsletter platform Substack when I met him for coffee last week in Brooklyn, and he was deciding what to do with the money.” Some notable details here about Substack’s behind the scenes deals.

NPR will roll out paid subscriptions to its podcasts. Worth saying that PRX's founder Jake Shapiro now works at Apple on podcasts. This is a good partnership, and I trust Jake to maintain an open ecosystem.

SiriusXM Is Buying ‘99% Invisible,’ and Street Cred in Podcasting. "Under the new arrangement, “99% Invisible” will remain available at no cost on all platforms supported by ads. But the parties may explore exclusive partnerships for some products down the line. In addition to a large catalog of free podcasts that are available on all platforms, Stitcher sells a premium service offering special features from podcasts it has a relationship with — including ad-free listening, early access and bonus content — for $4.99 per month."

Politics

Justice Dept. Inquiry Into Matt Gaetz Said to Be Focused on Cash Paid to Women. “A Justice Department investigation into Representative Matt Gaetz and an indicted Florida politician is focusing on their involvement with multiple women who were recruited online for sex and received cash payments, according to people close to the investigation and text messages and payment receipts reviewed by The New York Times.”

Yellen calls for a global minimum corporate tax rate. I think I'm in favor of this? But it seems difficult to implement in practice.

What Georgia’s Voting Law Really Does. “The New York Times analyzed the state’s new 98-page voting law and identified 16 key provisions that will limit ballot access, potentially confuse voters and give more power to Republican lawmakers.”

Big Tech Is Pushing States to Pass Privacy Laws, and Yes, You Should Be Suspicious. “The Markup reviewed existing and proposed legislation, committee testimony, and lobbying records in more than 20 states and identified 14 states with privacy bills built upon the same industry-backed framework as Virginia’s, or with weaker models. The bills are backed by a who’s who of Big Tech–funded interest groups and are being shepherded through statehouses by waves of company lobbyists.”

Science

COVID was bad for the climate. “To keep global warming under 2°C, we’d need sustained emissions reductions in this range every year for the next 20-30 years. The pandemic has been hugely disruptive, but it’s still temporary, and all signs point to a strong recovery. The drop in emissions was largely caused by lockdown, not persistent structural changes that will persist for decades to come.”

Finding From Particle Research Could Break Known Laws of Physics. “Evidence is mounting that a tiny subatomic particle called a muon is disobeying the laws of physics as we thought we knew them, scientists announced on Wednesday.” So exciting!

A Surprising Number Of Sea Monster Sightings Can Be Explained By Whale Erections. Today I learned.

American Honey Is Radioactive From Decades of Nuclear Bomb Testing. "The world’s nuclear powers have detonated more than 500 nukes in the atmosphere. These explosions were tests, shows of force to rival nations, and proof that countries such as Russia, France, and the U.S. had mastered the science of the bomb. The world’s honey has suffered for it. According to a new study published in Nature Communications, honey in the United States is full of fallout lingering from those atmospheric nuclear tests."

Flu Has Disappeared Worldwide during the COVID Pandemic. ““There’s just no flu circulating,” says Greg Poland, who has studied the disease at the Mayo Clinic for decades. The U.S. saw about 600 deaths from influenza during the 2020-2021 flu season. In comparison, the Centers for Disease Control and Prevention estimated there were roughly 22,000 deaths in the prior season and 34,000 two seasons ago.”

Society

Estimates and Projections of COVID-19 and Parental Death in the US. "The number of children experiencing a parent dying of COVID-19 is staggering, with an estimated 37,300 to 43,000 already affected. For comparison, the attacks on September 11, 2001, left 3000 children without a parent."

Clearview AI Offered Thousands Of Cops Free Trials. “A controversial facial recognition tool designed for policing has been quietly deployed across the country with little to no public oversight. According to reporting and data reviewed by BuzzFeed News, more than 7,000 individuals from nearly 2,000 public agencies nationwide have used Clearview AI to search through millions of Americans’ faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members.”

What an analysis of 377 Americans arrested or charged in the Capitol insurrection tells us. "Nor were these insurrectionists typically from deep-red counties. Some 52 percent are from blue counties that Biden comfortably won. But by far the most interesting characteristic common to the insurrectionists’ backgrounds has to do with changes in their local demographics: Counties with the most significant declines in the non-Hispanic White population are the most likely to produce insurrectionists who now face charges."

Reflexive McLuhanism. "To paraphrase Churchill: First we shape X, then X shapes us. If a defining characteristic of humanity is making and using tools, then a defining characteristic of society is being shaped by those same tools."

‘My full name is Tanyaradzwa’: the stars reclaiming their names. "Names are important and they have meaning, said the cultural historian and campaigner Patrick Vernon, whether that is familial significance or the time or day someone was born, for example. “The fact that people still feel they have to change or anglicise their names, and water down their heritage to fit in or succeed within the dominant culture, says we’ve still got a long way to go.”"

My Son, the Organ Donor. "My son’s vital organs saved four lives. His skin and other tissue donations will go on to help countless others. His strong heart now vigorously thumps inside the chest of a teenage boy." Please consider signing up to be a donor.

How to Name Your Black Son in a Racist Country. "And then warn him. Inform your son that he will likely be the only Tyrone in the cohort of 100 Americans and that there will be white people in his cohort who think gentrification is a good thing and who do not read. Let him know that those white people are not worth his time and that he should make a group chat with the six other Black folks in his cohort because he will regret not doing so later."

Get Ready for Blob Girl Summer. "So many people have died this year, millions, and I have survived to take into my body a miraculous shot that is the very flower of medical science, a code written in my genome to lock out the great threat. And I, imbibing this, have the temerity to not even be sexy. If Vaxxed Girl Summer is meant to be a kind of pan-cultural Rumspringa I ought to be someone that transcends schlubhood under its thrilling aegis. And yet."

Technology

NFT Canon. “The a16z NFT Canon is a curated list of readings and resources on all things NFTs, organized from the big picture, what NFTs (non-fungible tokens) are and why they matter... to how to mint, collect, and do more with them -- including how they play into various applications such as art, music, gaming, social tokens, more.”

Asian Americans in tech say they face ‘a unique flavor of oppression’. “Diversity training was "half-assed, whitewashed," she said. No one said the words "white supremacy" or "institutionalized racism."”

Social Attention: a modest prototype in shared presence. “My take is that the web could feel warmer and more lively than it is. Visiting a webpage could feel a little more like visiting a park and watching the world go by. Visiting my homepage could feel just a tiny bit like stopping by my home.” Nice proof of concept.

Google wins copyright clash with Oracle over computer code. “In siding with Google, Breyer wrote that, assuming for the sake of argument that the lines of code can be copyrighted, Google’s copying is nonetheless fair use. The fair-use doctrine permits unauthorized use of copyrighted material in some circumstances, including when the copying “transforms” the original material to create something new.” An important win for Google at the Supreme Court.

Target CIO Mike McNamara makes a cloud declaration of independence. It makes sense that Target would want to move away from AWS, and their approach avoids lock-in to any cloud provider. All of this is made possible by free and open source software tools.

At Dynamicland, The Building Is The Computer. "Instead of simulating things like paper and pencils inside a computer, Realtalk grants computational value to everyday objects in the world. The building is the computer. Space is a first-class entity — a building block of computation. Digital projectors, cameras, and computers are inconspicuously attached to the ceiling rafters, creating space on tables and walls for projects and collaboration. Most of the software is printed on paper and runs on paper. But the deeper idea is that when the system recognizes any physical object, it becomes a computational object." Magical.

Signal adopts MobileCoin, a crypto project linked to its own creator Moxie Marlinspike. "Security expert Bruce Schneier thinks it’s an incredibly bad idea that “muddies the morality of the product, and invites all sorts of government investigative and regulatory meddling: by the IRS, the SEC, FinCEN, and probably the FBI.” He thinks the two apps—crypto and secure communications—should remain separate. In his mind, this is going to ruin Signal for everyone."

After Working at Google, I’ll Never Let Myself Love a Job Again. "After I quit, I promised myself to never love a job again. Not in the way I loved Google. Not with the devotion businesses wish to inspire when they provide for employees’ most basic needs like food and health care and belonging. No publicly traded company is a family. I fell for the fantasy that it could be."

Revealed: the Facebook loophole that lets world leaders deceive and harass their citizens. “The investigation shows how Facebook has allowed major abuses of its platform in poor, small and non-western countries in order to prioritize addressing abuses that attract media attention or affect the US and other wealthy countries. The company acted quickly to address political manipulation affecting countries such as the US, Taiwan, South Korea and Poland, while moving slowly or not at all on cases in Afghanistan, Iraq, Mongolia, Mexico, and much of Latin America.”

DoJ used court order to thwart hundreds of Microsoft Exchange web shells. “In an unprecedented move, the Department of Justice used a court order to dismantle ‘hundreds’ of web shells installed using Exchange Server vulnerabilities patched by Microsoft six weeks ago.” A court order that allowed the FBI to go in and pre-emptively patch compromised systems. Fascinating.

Australian firm Azimuth unlocked the San Bernardino shooter’s iPhone for the FBI. “Azimuth specialized in finding significant vulnerabilities. Dowd, a former IBM X-Force researcher whom one peer called “the Mozart of exploit design,” had found one in open-source code from Mozilla that Apple used to permit accessories to be plugged into an iPhone’s lightning port, according to the person.”

Exploiting vulnerabilities in Cellebrite UFED and Physical Analyzer from an app's perspective. "Cellebrite makes software to automate physically extracting and indexing data from mobile devices. They exist within the grey – where enterprise branding joins together with the larcenous to be called “digital intelligence.” Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere. A few months ago, they announced that they added Signal support to their software." This is a genuinely incredible blog post.

Why not faster computation via evolution and diffracted light. "What is the ultimate limit of computational operations per gram of the cosmos, and why don’t we have compilers that are targeting that as a substrate? I would like to know that multiple." Inspiring and mind-bending in that way that many genuinely new ideas are: connecting multiple existing ideas to create something fresh. A really great blog post.

University duo thought it would be cool to sneak bad code into Linux as an experiment. Of course, it absolutely backfired. "Computer scientists at the University of Minnesota theorized they could sneak vulnerabilities into open-source software – but when they tried subverting the Linux kernel, it backfired spectacularly."

Read Facebook's Internal Report About Its Role In The Capitol Insurrection. "From the earliest Groups, we saw high levels of Hate, VNI, and delegitimization, combined with meteoric growth rates — almost all of the fastest growing FB Groups were Stop the Steal during their peak growth. Because we were looking at each entity individually, rather than as a cohesive movement, we were only able to take down individual Groups and Pages once they exceeded a violation threshold. We were not able to act on simple objects like posts and comments because they individually tended not to violate, even if they were surrounded by hate, violence, and misinformation. After the Capitol Insurrection and a wave of Storm the Capitol events across the country, we realized that the individual delegitimizing Groups, Pages, and slogans did constitute a cohesive movement."

Thursday, 29. April 2021

Mike Jones: self-issued

OpenID Connect Working Group Presentation at the Third Virtual OpenID Workshop

I gave the following presentation on the OpenID Connect Working Group at the Third Virtual OpenID Workshop on Thursday, April 29, 2021:

OpenID Connect Working Group (PowerPoint) (PDF)

Bill Wendel's Real Estate Cafe

Homebuyers, let’s use 20th anniversary to call for a Bidding War Bill of Rights!

Kudos to investigative journalists in Canada for putting BLIND bidding wars into the spotlight. Invite readers to visit yesterday’s article and watch the video. Equally…

The post Homebuyers, let's use 20th anniversary to call for a Bidding War Bill of Rights! first appeared on Real Estate Cafe.


MyDigitalFootprint

the journey and the destination

I know the journey is more important than the destination, but destinations provide an essential point as they mark somewhere to head towards.  All journeys start with a single step, and for me, this journey started a little over three years ago. I have spent this past period considering the question, “How do we make Better Decisions.” This question was refined to become “How do we make B

I know the journey is more important than the destination, but destinations provide an essential point as they mark somewhere to head towards.  All journeys start with a single step, and for me, this journey started a little over three years ago. I have spent this past period considering the question, “How do we make Better Decisions.” This question was refined to become “How do we make Better Decisions with data.” This expanded into “How do we make Better Decisions with data and be better ancestors?”  My journey can finally see a destination. 

However, I am now facing a more significant challenge.

Having reached the destination zone, I want to leave a mark, and it's straightforward to imagine planting a flag. The hope is that when the flag is planted some of the team back at home can see that you've reached the final place.  In most circumstances, the destination is not in the Line-of-Sight. Therefore you pick up the flag and wave it, hoping that somebody with binoculars can see you waving your destination flag. If someone sees you, they relay it to the others that the advance party or pioneers (risk-takers and general nutters) have reached the destination. In innovation and invention land, you're tempted to stay at the destination, hoping that the others will follow you to the same place. Waiting there means you eventually run out of resources, this being cash or budget. Having run out of resources, you're faced with the reality you have to head back to where everyone else is as nobody is going to follow you to the destination; it is too risky.  On arriving back at home, there's a bit of a party and a celebration. You relay your stories about the journey and how wonderful the destination is.   After a few hours of partying, everybody heads back to their homes, leaving you wondering how to persuade others to go to the destination as well.

There are a series of videos on YouTube of a single dancer who starts to dance to music, and over periods of 5-minutes to several hours, the dancing crowd grows until everybody is dancing. Early adopters and supporters join the pioneer, and eventually, they are joined by the followers who make the crowd. (2021 #covid19 note, what is a crowd and what is a party?)

The next day after your return, everybody gets up and goes about their jobs and business as usual.  What is now needed is to convert the excitement of the destination into a language and story that the first supporters and early adopters can relate to and join in. What is tremendously difficult is constructing a straightforward linear narrative and step actions, so that the crowd can also join in and want to reach the new destination. (crossing the chasm)  For businesses right now, this is the digital transformation, data, circular economy, ESG, climate change, AI, ethics and sustainability.  The pioneers have been to the destination, planted the flags, imagined the better fruits, but we have to drop the complexity of the issues and find a better story for us all to get there.

The journey I have been on for the last few years is understanding how we make better decisions.  This means I have had to unpack numerous topics and dependencies deep in the messiness of models and complexity. Unsurprisingly, only a few others are motivated to understand the complexity and philosophy of better decisions; it is my burden. We are converting what we have found at the destination (how to make better decisions with data and be better ancestors) into a language that fellow supporters and early adopters can engage with and discuss.

BUT

I am now desperate for expert support to convert the ideas into a straightforward narrative and actions that the crowd in the town can take on board to start their journey, knowing that where they are going has already been discovered. My struggle is that I find it hard to dumb down the complex into a tweet, as a tweet cannot possibly embrace the complexity. I continually fight myself because I can see the simple is not truly representative. The PhD virologist of 30 years is given 1 minute to explain virus mutation on the news.  The simple becomes fodder for the vocal and easily upset in social media land, as it is easily misunderstood and taken out of context. I spend more time trying to justify (to myself) why I am using simple concepts to explain complex ideas.

Tribe, I need some help.  Is there anybody out there (a nod to Pink Floyd) who is willing to spend the time in a small community helping to refine the words, language, stories, ideas and concepts into a straightforward linear narrative that portrays actions and activities, so that we can all take steps on a journey to making Better Decisions? This is not a job; it is a vocation. Journeys are so much more fun when we do them together, and the next part of the journey is to do it all together. I'm looking for early adopters and supporters who can share their destinations so that together we can create a better narrative about decision making, governance and oversight.

Are you willing and able to help form and write a narrative to enable others to come on our journey?



Day 0 - as the CDO, you are now the new corporate punch bag.

In commercial land, the axis of power has tended to rest with the CEO/ CFO relationship.   There is always a myriad of other political triangles that lobby and wrestle for power and sway decisions.  Given that decisions are increasingly reliant on evidence which is data, the CDO gets dragged into everyone's battles, which are not always in the best interest of the business, customer,

In commercial land, the axis of power has tended to rest with the CEO/CFO relationship.   There is always a myriad of other political triangles that lobby and wrestle for power and sway decisions.  Given that decisions are increasingly reliant on evidence, which is data, the CDO gets dragged into everyone's battles, which are not always in the best interest of the business, customer, ecosystem or society - such are incentive schemes.

What everyone else in the senior team does not want to recognise is that the data they use as evidence and proof can be equally supportive of, or detrimental to, everyone else's cause.  Whilst everyone else on the leadership team gets to pick and bias what they foreground and promote, the CDO has to keep their mind open and judge all data with the same level of critical thinking.  This tends to mean the CDO becomes the punch bag when data either supports or undermines a decision that is, in reality, a political lobby for power which the data may not fully support. However, we are all fallible, and data is not the evidence we want it to be. 

we are all fallible, and data is not the evidence we want it to be. 

Even the most highly skilled data scientists, incentivised to come to the most accurate results, can produce a broad range of conclusions when given the same data and hypothesis.  This means that senior leadership teams don’t know whether the data they have and the conclusions they have reached are correct. 

A recent and significant published paper gave 73 teams the same data and research question. The answers varied widely, and very little variation was easily explained. What the paper goes on to say is that competencies and potential confirmation biases do not explain the broad variation in outcomes. 



The paper concludes that if any given study had been conducted by a different (set of) researcher(s), perhaps even the same researchers at a different time, its results may well have varied for reasons that cannot be reduced to easily observable analytical choices or biases.  They conclude that steps in the research process remain undisclosed in the standard presentation and consumption of scientific results and have exposed a “hidden universe” of idiosyncratic research and researcher variability. An important takeaway here is that the reality of researcher decisions cannot easily be simulated.

Therefore, there are at least two forces at play in data leading to a recommendation or decision.  One, the bias and foregrounding of data outcomes by a member of the leadership team is for a reason. Two, the same data, same tools and same team can generate more than one recommendation.  How to determine which is at play is a modern-day skill the CDO must have. 

As the CDO, you have to gain the trust of all your senior team and work with them to determine what is incentive biased, desired outcome-driven, seeking support or driven from the data set where there is naturally more than one possible conclusion.  In doing this, you have to be capable of assessing the alignment or divergence from others in the team who have come to different conclusions.   This becomes more complex if the data set is public, purchased, or gathered from the ecosystem with partners, as others outside of the organisation can create different conclusions. You have to be prepared to justify your reason, rationale and processes.   The skill you need to demonstrate is one of consistency in finding a route to align any data conclusions to the company's purpose and agreed strategic goals and not to the forces of lobby. Leave those calls to the CEO knowing the CEO has your back or get out, as being the new corporate punch bag is not fun.   


Note to the CEO

Each leadership team member is playing their own game and is looking to the CDO to find and support data for their cause, lobby, decision or budget.  This means that the CDO becomes the corporate punch bag and police, taking over from HR. The CDO has to navigate the path of varying conclusions and desired outcomes from the same data set, which are in conflict as they meet individuals' agendas.   As the CEO, you have to be aware of this and of the game your CFO will play.  The power axis of the CEO/CFO relationship itself comes under stress, as the CDO can give you more insight into decisions “presented because of incentives and self-interest” than anyone else, but HR will still want to own it.  If you alienate the CDO, you will lose that linkage, which is exactly what others want.  However, first check that the CDO has the trust of the team and that your CDO has the capability and capacity to manage this modern-day leadership challenge.  If not, it might be time to upgrade your CDO with new skills or find a new version. 






If your strategic plan is based on data, have you considered the consequences?

source: accenture https://www.accenture.com/_acnmedia/PDF-108/Accenture-closing-data-value-gap-fixed.pdf Several generations ago, the incentives in your organisation mean that those who collected and analysed old data created bias. Such bias occurred as people in the system favoured specific incentives, rewards and recommendations.  The decisions made created certain processes a

source: accenture https://www.accenture.com/_acnmedia/PDF-108/Accenture-closing-data-value-gap-fixed.pdf


Several generations ago, the incentives in your organisation meant that those who collected and analysed old data created bias.

Such bias occurred as people in the system favoured specific incentives, rewards and recommendations. 

The decisions made created certain processes and rules to hide the maintenance of those incentives and biases.

The biases worked to favour certain (the same) groups and outcomes, which have, over time, become part of the culture, reinforcing the processes and rules.

How do you know, today, what bias there is in your strategic plan? What framing and blindness are created because of the ghosts in your system?   

If you cannot see, touch and feel equality and balance in gender, race and neuro-diversity, it is likely that the bias is still there.  Whilst it might feel good to hit a target, that does not mean the systems, rules and processes are free of those same biases.   It took generations to build them in; it takes far more effort than a target to bring about better decisions. 

How do you know your data set has the views of everyone who is critical to your business today and in the future? How do you know the tools you use provide equal weight to everyone to make our business thrive?  How do you know if the recommendation was written before the analysis? How do your incentives create a new bias?

Is the consequence of your beautiful strategic data-led plan that you get precisely what the biased data wants?

In any framework where data leads to decisions, strategy or automation, first understand how you might be reinforcing something you are trying to eliminate.


Wednesday, 28. April 2021

Mike Jones: self-issued

Passing the Torch at the OpenID Foundation

Today marks an important milestone in the life of the OpenID Foundation and the worldwide digital identity community. Following Don Thibeau’s decade of exemplary service to the OpenID Foundation as its Executive Director, today we welcomed Gail Hodges as our new Executive Director. Don was instrumental in the creation of OpenID Connect, the Open Identity […]

Today marks an important milestone in the life of the OpenID Foundation and the worldwide digital identity community. Following Don Thibeau’s decade of exemplary service to the OpenID Foundation as its Executive Director, today we welcomed Gail Hodges as our new Executive Director.

Don was instrumental in the creation of OpenID Connect, the Open Identity Exchange, the OpenID Certification program, the Financial-grade API (FAPI), and its ongoing worldwide adoption. He’s created and nurtured numerous liaison relationships with organizations and initiatives advancing digital identity and user empowerment worldwide. And thankfully, Don intends to stay active in digital identity and the OpenID Foundation, including supporting Gail in her new role.

Gail’s Twitter motto is “Reinventing identity as a public good”, which I believe will be indicative of the directions in which she’ll help lead the OpenID Foundation. She has extensive leadership experience in both digital identity and international finance, as described in her LinkedIn profile. The board is thrilled to have her on board and looks forward to what we’ll accomplish together in this next exciting chapter of the OpenID Foundation!

I encourage all of you to come meet Gail at the OpenID Foundation Workshop tomorrow, where she’ll introduce herself to the OpenID community.


Phil Windley's Technometria

Legitimacy and Decentralized Systems

Summary: Why are some decentralized systems accepted and widely used while others wither? Why do some “hard forks” succeed while others fail? It all comes down to legitimacy. As an undergraduate engineering major, I recall being surprised by the so-called three body problem. In Newtonian mechanics, there are nice closed-form solutions to problems involving the motion of two intera

Summary: Why are some decentralized systems accepted and widely used while others wither? Why do some “hard forks” succeed while others fail? It all comes down to legitimacy.

As an undergraduate engineering major, I recall being surprised by the so-called three body problem. In Newtonian mechanics, there are nice closed-form solutions to problems involving the motion of two interacting bodies, given their initial position and velocity. This isn’t true of systems with three or more points. How can adding just one more point to the system make it unsolvable?

N-body systems are chaotic for most initial conditions and their solution involves numerical methods—simulation—rather than nice, undergraduate-level math. In other words, it’s messy. Humans like simple solutions.

Like the n-body problem, decentralized systems are chaotic and messy. Humans aren’t good at reasoning about emergent behavior from the coordinated, yet autonomous, behavior of interacting agents. We build bureaucracies and enact laws to try to make chaotic systems legible. The internet was our first, large-scale technical system where decentralization and governance clashed. I remember people in the 90’s asking “Who’s in charge of the internet?”

In The Most Important Scarce Resource is Legitimacy, Vitalik Buterin, the creator of Ethereum, discusses why legitimacy is crucial for the success of any decentralized endeavor. He says:

[T]he Bitcoin and Ethereum ecosystems are capable of summoning up billions of dollars of capital, but have strange and hard-to-understand restrictions on where that capital can go. From The Most Important Scarce Resource is Legitimacy
Referenced 2021-04-26T14:46:43-0600

These “strange and hard to understand restrictions” are rooted in legitimacy. Decentralized systems must be considered legitimate in order to thrive. That legitimacy is tied to how well the systems and people enabling them, like programmers and miners, are seen to be following “the rules” both written and unwritten. Legitimacy isn’t a technical issue, but a social one.

Wikipedia defines legitimacy as

the right and acceptance of an authority, usually a governing law or a regime.

While this is most often applied to governments, I think we can rightly pose legitimacy questions for technical systems, especially those that have large impacts on people and society.

With respect to legitimacy, Philip Bobbitt says:

The defining characteristic … of a constitutional order is its basis for legitimacy. The constitutional order of the industrial nation state, within which we currently live, promised: give us power and we will improve the material well-being of the nation.

In other words, legitimacy comes from the constitutional order: the structure of the governance and its explicit and implicit promises. People grant legitimacy to constitutional orders that meet their expectations by surrendering part of their sovereignty to them. In the quote from Vitalik above, the "strange and hard to understand restrictions" are promises that members of the Bitcoin or Ethereum ecosystems believe those constitutional orders have made. And if they're broken, the legitimacy of those systems is threatened.

Talking about “legitimacy” and “constitutional orders” for decentralized systems like Bitcoin, Ethereum, or your favorite NFT might feel strange, but I believe these are critical tools for understanding why some thrive and others wither. Or why some hard forks succeed and others don't.

In Bobbitt’s theory of constitutional orders, transitions from one constitutional order to a new one always require war. While people seeking legitimacy for one decentralized system or another might not use tanks or missiles, a hard fork is essentially just that—a war fought to cause the transition from one constitutional order to another because of a question of legitimacy. For example, Vitalik describes how the Steem community did a hard fork to create Hive, leaving Steem’s founder (and his tokens) behind because the constitutional order he represented lost its legitimacy when people believed it could no longer keep its promises.

So when you hear someone talking about a decentralized system and starting sentences with phrases like “Somebody should…” or “Why do we let them…” or “Who’s in charge of…”, beware. Unlike most of the easy to understand systems we’re familiar with, decentralized systems are heterarchical, not hierarchical. Thus the means of their control is political, not authoritarian. These systems are not allowed to exist—they're called "permissionless" for a reason. They simply are, by virtue of their legitimacy in the eyes of people who use and support them.

This doesn’t mean decentralized systems are unassailable, but changing them is slower and less sure than most people would like. When you “know” the right way to do something, you want a boss who can dictate the change. Changing decentralized systems is a political process that sometimes requires war. As Clausewitz said, “War is the continuation of politics by other means.”

There are no closed-form solutions to the n-body problems represented by decentralized systems. They are messy and chaotic. I’m not sure people will ever get more comfortable with decentralization or understand it well enough to reason about it carefully. But one thing is for sure: decentralized systems don’t care. They simply are.

A version of this article was previously published in Technometria Newsletter, Issue #6, April 13, 2021.

Photo Credit: Major General Andrew Jackson and his Soldiers claim a victory in the Battle of New Orleans during the War of 1812. from Georgia National Guard (CC BY 2.0)

Tags: legitimacy decentralization

Tuesday, 27. April 2021

Ben Werdmüller

The DEI rollback

Yesterday, Jason Fried, Basecamp’s CEO, shared an internal memo he’d written about changes at the company. In it, he details how political discussions are no longer acceptable at work, and how benefits he considers to be “paternalistic” - like gym memberships and farmer’s market shares - are being removed. It’s weird to me that this is coming from a company that literally wrote the book on cult

Yesterday, Jason Fried, Basecamp’s CEO, shared an internal memo he’d written about changes at the company. In it, he details how political discussions are no longer acceptable at work, and how benefits he considers to be “paternalistic” - like gym memberships and farmer’s market shares - are being removed.

It’s weird to me that this is coming from a company that literally wrote the book on culture. I’ve always thought of Basecamp (and its predecessor, 37 Signals) as being the yardstick for how to run a great company. This blog post completely blew that out of the water.

Conor Muirhead, a designer at Basecamp, later noted that political discussion was limited to two opt-in spaces: a space called “Civil Solace”, and a recently-formed DEI council. He notes that it was rare for these discussions to spill out of those spaces, although they did when, for example, “folks shared thoughts on how mocking people’s non-anglo names is a stepping stone towards racism”.

As Annalee Flower Horne rightly pointed out, “here's a thing about banning political discussions from a space because they're divisive: that does not resolve the division. It just says if you feel marginalized or unsafe here, keep it to yourself, we don't want to hear it.” Indeed, the predominantly white, male discourse - the one that is still dominant - is not usually considered to be “political”, while equity for marginalized people usually is. The effect is to further marginalize people of color in particular.

Regarding “paternalistic benefits”, Fred Wilson points out that “If you care about the mental and physical well-being of your team, I believe it makes sense to support them by investing in that. Companies can do that tax efficiently and employees cannot. Paying employees more so that they can then make these investments personally sounds rational but I don’t believe it will be as effective as company-funded programs that employees can opt into or not.” Because these benefits enjoy a special tax status, removing them disproportionately affects lower paid workers.

Something must have happened behind the scenes at Basecamp to force this change. The smart money’s on management becoming uncomfortable with changes requested, and power gathered, by the DEI council. But if you don’t want to make those changes, why have the council to begin with, except as a superficial gesture?

Basecamp was emboldened by Coinbase, which previously enacted a similar policy. It’s a regressive trend that more tech companies, led by men who are already predisposed to this narrower worldview, are likely to follow. This is particularly true in the post-Trump era, when the stakes (from a privileged perspective) seem lower.

For many of them, it’s an intentional roll back of the clock. Code2040 CEO Emeritus Karla Monterroso shared that “I will never forget a Latinx VP at a big tech company telling me that one of their VC’s (big name) told him at a board meeting that they had become an inhospitable place for white men and they needed to fix that.”

The solution, for now, is to call it out, and for those of us with privilege to pledge never to work for (or start) an organization with these policies. Diversity and inclusion is more important than ever. And leaders who care about the culture of their companies should once again take note of the Basecamp team: this time as a lesson in what not to do.

Monday, 26. April 2021

Simon Willison

Quoting Drew DeVault, SourceHut

Over the past several months, everyone in the industry who provides any kind of free CPU resources has been dealing with a massive outbreak of abuse for cryptocurrency mining. The industry has been setting up informal working groups to pool knowledge of mitigations, communicate when our platforms are being leveraged against one another, and cumulatively wasting thousands of hours of engineering t

Over the past several months, everyone in the industry who provides any kind of free CPU resources has been dealing with a massive outbreak of abuse for cryptocurrency mining. The industry has been setting up informal working groups to pool knowledge of mitigations, communicate when our platforms are being leveraged against one another, and cumulatively wasting thousands of hours of engineering time implementing measures to deal with this abuse, and responding as attackers find new ways to circumvent them.

Drew DeVault, SourceHut


Hyperonomy Digital Identity Lab

The Verifiable Economy Architecture Reference Model (VE-ARM): Fully Decentralized Object (FDO) Model

Michael HermanHyperonomy Digital Identity LabTrusted Digital Web ProjectParallelspace Corporation NOTE: This article supersedes an older version of this article: The Verifiable Economy: Architecture Reference Model (VE-ARM) 0.1: Original Concepts [OLD] 1. Introduction 1.1 Goals The goals of this article are three-fold: … Continue reading →

Michael Herman
Hyperonomy Digital Identity Lab
Trusted Digital Web Project
Parallelspace Corporation

NOTE: This article supersedes an older version of this article:

The Verifiable Economy: Architecture Reference Model (VE-ARM) 0.1: Original Concepts [OLD]

1. Introduction

1.1 Goals

The goals of this article are three-fold:

1. Introduce the concept of a Verifiable Capability Authorization (VCA) and how VCAs can be used to implement controls over which specific methods a particular party is allowed to execute against a particular instance of a Fully Decentralized Object (FDO). VCAs are both delegatable and attenuatable.
2. Illustrate how #graphitization techniques can be used for modeling and visualizing:
   - Trusted Decentralized Identifiers (DIDs)
   - DID Documents
   - Trusted Digital Agents (and their Service Endpoints (SEPs))
   - Verifiable Credentials (VCs)
   - Verifiable Capability Authorizations (VCAs)
   - and, most importantly, their myriad of interrelationships.
3. Use the above 2 goals to further detail and describe how to use the VE-ARM model for implementing trusted, reliable, efficient, frictionless, standards-based, global-scale software systems based on Fully Decentralized Objects (FDOs).

1.2 Purpose

This article takes the following “All-in” graph view of The Verifiable Economy Architecture Reference Model (VE-ARM) and partitions it into a series of subgraphs that depict the key elements of the overall architecture reference model for FDOs. Each subgraph is documented with a narrative that is mapped to the numbered blue targets used to identify each element in each subgraph.

Figure 1. Subgraph 0. The Verifiable Economy Architecture Reference Model (VE-ARM)

The above graphitization is the result of several iterations validating The Verifiable Economy Architecture Reference Model (VE-ARM) against the following live scenario:

Erin acquiring a personal DID and DID Document to enable Erin to acquire a Province of Sovronia Driver’s License (SDL) (represented as an FDO) and hold the SDL in Erin’s digital wallet.

TDW Glossary: Self-Sovereign Identity (SSI) User Scenarios: Erin Buys a Car in Sovronia (3 User Scenarios)

A Fully Decentralized Object (FDO) is comprised of the following minimal elements:

- DID (and corresponding DID Document)
- Master Verifiable Capability Authorization (MVCA) for the object’s DID and DID Document
- Zero or more Verifiable Capability Authorizations (VCAs) linked to the above MVCA for the object (recursively)
- A Property Set for the FDO:
  - Property Set DID (and corresponding DID Document)
  - Property Set MVCA that is issued when the Property Set’s DID and DID Document is issued
  - Property Set Verifiable Credential (VC) that is issued to hold the object’s properties and their values
  - Zero or more Verifiable Capability Authorizations (VCAs) linked to the FDO’s Property Set MVCA (recursively)
- A Trusted Digital Agent registered with a Service Endpoint (SEP) in the object’s DID Document that implements the VCA-controlled methods for accessing and interacting with the object and/or its property set. Control over which methods are invokable by a party is governed by the respective MVCAs and a Delegated Directed Graph of VCAs (if there are any).

The goal and purpose of the VE-ARM is to describe a Fully-Decentralized Object (FDO) model that unites the following concepts into a single integrated, operational model:

- Verifiable Identifiers, Decentralized Identifiers (DIDs), and DID Documents;
- Verifiable Claims, Relationships, and Verifiable Credentials (VCs);
- Master Verifiable Capability Authorizations (MVCAs) (Master Proclamations), Verifiable Capability Authorizations (VCAs) (Proclamations), and Verifiable Capability Authorization Method Invocations (MIs); and
- Trusted Digital Agents (TDAs).

1.3 Background

The scenario used to model the VE-ARM is an example of a citizen (Erin) of a fictional Canadian province called Sovronia holding a valid physical Sovronia Driver’s License (Erin RW SDL) as well as a digital, verifiable Sovronia Driver’s License (Erin SDL).

Figure 2. Erin’s “Real World” Sovronia Driver’s License (Erin RW SDL)

1.4 Graphitization of the Verifiable Economy Architecture Reference Model (VE-ARM)

The underlying model was built automatically using a series of Neo4j Cypher queries running against a collection of actual DID Document, Verifiable Credential, and Verifiable Capability Authorization JSON files. The visualization was laid out using the Neo4j Browser. The resulting layout was manually optimized to produce the final version of the graphitization used in this article. The numbered targets used to identify each element in each subgraph were added using Microsoft PowerPoint.
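The Cypher queries themselves are not included in the article; purely as a hedged illustration of that kind of ingestion step, here is a minimal Python sketch that merges DID Document JSON files into a Neo4j graph. Connection details, node labels, relationship names, and the on-disk layout are all assumptions, not the article's actual queries.

```python
# Hypothetical sketch only: merge DID Document JSON files into Neo4j as
# DIDDOC and SEP nodes. Labels, relationship names, and the folder layout
# are assumptions.
import json
from pathlib import Path

from neo4j import GraphDatabase  # pip install neo4j

MERGE_DIDDOC = """
MERGE (d:DIDDOC {id: $id})
SET d.controller = $controller
WITH d
UNWIND $services AS svc
MERGE (s:SEP {endpoint: svc.serviceEndpoint})
MERGE (d)-[:HAS_SEP]->(s)
"""

def load_did_documents(uri: str, user: str, password: str, folder: str) -> None:
    driver = GraphDatabase.driver(uri, auth=(user, password))
    with driver.session() as session:
        for path in Path(folder).glob("*.json"):
            doc = json.loads(path.read_text())
            session.run(
                MERGE_DIDDOC,
                id=doc["id"],
                controller=doc.get("controller"),
                services=doc.get("service", []),
            )
    driver.close()

# Usage (hypothetical credentials and folder):
# load_did_documents("bolt://localhost:7687", "neo4j", "password", "./diddocs")
```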

2. Organization of this Article

Following a list of Key Definitions, the remainder of this article is organized as a series of increasingly more detailed explanations of the VE-ARM model. The overall model is partitioned into a collection of (overlapping) subgraphs.

Each subgraph is described by a narrative that explains the purpose of each element in the particular subgraph. Each narrative is organized as a list of numbered bullets that further describe the corresponding numbered blue targets used to identify each element in each subgraph.

A narrative is a story. It recounts a series of events that have taken place. … These essays are telling a story in order to drive a point home. Narration, however, is the act of telling a story.

Examples of Narration: 3 Main Types in Literature
2.1 Table of Subgraphs

- Subgraph F1 – Erin’s DID Document (DD) Neighborhood
- Subgraph F2 – Erin’s DD Master Verifiable Capability Authorization (MVCA) Neighborhood
- Subgraph F3 – Province of Sovronia DID Document (DD) Neighborhood
- Subgraph F4 – Province of Sovronia DD Master Verifiable Capability Authorization (MVCA) Neighborhood
- Subgraph F5 – DID Documents (DDs) and Master Verifiable Capability Authorizations (MVCAs) Neighborhood
- Subgraph F6 – Erin’s Sovronia Drivers License (SDL) Property Set Verifiable Credential (VC) Neighborhood
- Subgraph F7 – Erin’s SDL Property Set Delegated Directed Graph of Verifiable Capability Authorizations Neighborhood
- Subgraph F8 – Erin “Real World” Neighborhood
- Subgraph F9 – SOVRONA Trusted Decentralized Identity Provider (TDIDP) Neighborhood
- Subgraph F10 – The Verifiable Economy “All-In” Graph View

Figure 4. Subgraph 0. Table of Subgraphs

3. Key Definitions

Several of the following definitions (those related to the concept of verifiable capability authorizations) are inspired by the following RWoT5 article:

Linked Data Capabilities by Christopher Lemmer Webber and Mark S. Miller

Additional context can be found in Authorization Capabilities for Linked Data v0.3.

3.1 VE-ARM Verifiable Capability Authorization (VCA) Model

The VE-ARM Verifiable Capability Authorization (VCA) model is used to grant specific parties the authority to invoke specific methods against an instance of a Fully Decentralized Object (FDO). The VE-ARM VCA model is based, in part, on the Object-Capability Model. The VE-ARM VCA model supports Delegation and Attenuation.

3.2 Object Capability Model

The object-capability model is a computer security model. A capability describes a transferable right to perform one (or more) operations on a given object. It can be obtained by the following combination:

– An unforgeable reference (in the sense of object references or protected pointers) that can be sent in messages.
– A message that specifies the operation to be performed.

Object-Capability Model (https://en.wikipedia.org/wiki/Object-capability_model)
3.3 VCA Model Principles: Delegation and Attenuation

With delegation, a capability holder can transfer his capability to another entity, whereas with attenuation he can confine a capability before delegating it.

Capability-based access control for multi-tenant systems using OAuth 2.0 and Verifiable Credentials
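As a rough, hypothetical illustration of delegation with attenuation (not the VE-ARM's actual caveat format), a delegated grant can be checked against its parent like this in Python, treating each grant as a set of method names with "*" meaning "all methods":

```python
# Hypothetical sketch of the attenuation principle: a delegated grant must be
# equal to or narrower than its parent's grant. Caveats are simplified here to
# a set of method names, with "*" meaning "all methods".

def is_valid_attenuation(parent_methods: set[str], child_methods: set[str]) -> bool:
    """Return True if the child grant is an attenuation of the parent grant."""
    if "*" in parent_methods:      # parent grants every method
        return True
    if "*" in child_methods:       # a child cannot widen a restricted parent
        return False
    return child_methods <= parent_methods

# Example: a grant of all methods can be attenuated down to just "Present",
# but a "Present"-only grant cannot be widened to include "Update".
assert is_valid_attenuation({"*"}, {"Present"})
assert not is_valid_attenuation({"Present"}, {"Present", "Update"})
```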
3.4 Fully Decentralized Object (FDO)

In The Verifiable Economy, a Fully Decentralized Object (FDO) is comprised of the following minimal elements:

- DID (and corresponding DID Document)
- Master Verifiable Capability Authorization (MVCA) for the object’s DID and DID Document
- Zero or more Verifiable Capability Authorizations (VCAs) linked to the above MVCA for the object (recursively)
- A Property Set for the FDO:
  - Property Set DID (and corresponding DID Document)
  - Property Set MVCA that is issued when the Property Set’s DID and DID Document is issued
  - Property Set Verifiable Credential (VC) that is issued to hold the object’s properties and their values
  - Zero or more Verifiable Capability Authorizations (VCAs) linked to the FDO’s Property Set MVCA (recursively)
- A Trusted Digital Agent registered with a Service Endpoint (SEP) in the object’s DID Document that implements the VCA-controlled methods for accessing and interacting with the object and/or its property set. Control over which methods are invokable by a party is governed by the respective MVCAs and a Delegated Directed Graph of VCAs (if there are any).

3.5 Fully Decentralized Object (FDO) Model

A complete decentralized object system based on the concept of FDOs.
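To make the shape of an FDO easier to picture, here is a hedged Python sketch of the minimal elements listed above bundled into one container; the field names and types are illustrative assumptions, not the VE-ARM's actual data model.

```python
# Hypothetical container for the minimal FDO elements listed above.
# Field names and types are illustrative only, not the VE-ARM wire format.
from dataclasses import dataclass, field

@dataclass
class PropertySet:
    did_document: dict                               # Property Set DID Document
    mvca: dict                                       # Property Set MVCA, minted with the DID
    credential: dict                                 # Property Set VC holding the properties/values
    vcas: list[dict] = field(default_factory=list)   # delegated VCAs (recursive)

@dataclass
class FullyDecentralizedObject:
    did_document: dict                               # the object's DID Document
    mvca: dict                                       # Master VCA for the object's DID Document
    property_set: PropertySet | None = None
    vcas: list[dict] = field(default_factory=list)   # delegated VCAs (recursive)
    agent_endpoint: str = ""                         # SEP of the Trusted Digital Agent
```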

3.6 Verifiable Capability Authorization (VCA)

A Verifiable Capability Authorization (VCA) is a JSON-LD structure that grants (or restricts) a specific party (the controller of a key (grantedKey)) the ability to invoke specific methods against a specific instance of a Fully Decentralized Object (FDO). A VCA typically has a type of Proclamation (unless it is a Method Invocation VCA).

A VCA has the following properties:

- id – trusted, verifiable decentralized identifier for the VCA
- type – “Proclamation”
- parent – trusted, verifiable decentralized identifier for a parent VCA whose control supersedes this current VCA
- subject – trusted, verifiable decentralized identifier of the specific instance of the FDO
- grantedKey – trusted, verifiable key of the party to whom the specified capabilities are being granted specifically with respect to the specific instance of the FDO
- caveat – the collection of specific capabilities the party represented by grantedKey is granted (or restricted) from invoking against a specific instance of the FDO identified by the subject identifier
- signature – trusted, verifiable proof that this VCA is legitimate

NOTE: The current VCA’s capabilities must be equal to or an attenuation of the parent VCA’s capabilities. This part of the VCA model is recursive.

NOTE: An FDO can be an object or a service represented as an object.
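Separate from the article's own Snippet 1 below, here is a hedged sketch of a Proclamation-type VCA carrying the properties just listed, written as a plain Python dict; every identifier, key reference, caveat value, and proof field is hypothetical.

```python
# Hypothetical Proclamation-type VCA using the properties listed above.
# Every DID, key reference, caveat value, and proof field is made up for
# illustration; this is not the article's Snippet 1.
example_vca = {
    "id": "did:svrn:vca:11111111-2222-3333-4444-555555555555",
    "type": "Proclamation",
    "parent": "did:svrn:mvca:aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # the parent MVCA
    "subject": "did:svrn:propset:99999999-8888-7777-6666-555555555555",
    "grantedKey": "did:svrn:person:00000000-0000-0000-0000-000000000000#key-1",
    "caveat": [{"type": "RestrictToMethod", "methods": ["Present"]}],
    "signature": {"type": "Ed25519Signature2020", "proofValue": "z-placeholder"},
}
```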

The following is an example of a VCA associated with Erin and Erin’s Sovronia Driver’s License Property Set.

Snippet 1. Verifiable Credential Authorization (VCA) Example

3.7 Master Verifiable Capability Authorization (MVCA)

A Master Verifiable Capability Authorization (MVCA) is a Proclamation-type VCA that is created for every FDO at the time that the DID and DID Document for the FDO are issued by a Trusted Decentralized Identity Provider (TDIDP) (e.g. SOVRONA).

That is, a new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA typically grants authorization for any and all methods to the controller of the DID. (This is the essence of the self-sovereign identity principle.)

An MVCA has the following properties:

- id – trusted, verifiable decentralized identifier for the MVCA
- type – “Proclamation” (or “Invocation”)
- subject – trusted, verifiable decentralized identifier of the specific instance of the FDO. An FDO can be an object or a service represented as an object.
- grantedKey – trusted, verifiable key of the party to whom the specified capabilities are being granted specifically with respect to the specific instance of the FDO.
- caveat – the collection of specific capabilities the party represented by grantedKey is granted (or restricted) from invoking against a specific instance of the FDO identified by the subject identifier. Typically, this is set to RestrictToMethod( * ), granting the controller of the grantedKey the ability to execute any and all methods against the subject. (This is where and how the essence of the self-sovereign identity principle is realized.)
- signature – trusted, verifiable proof that this MVCA is legitimate.

NOTE: An MVCA has no parent property because an MVCA always represents the top-level root VCA in a Delegated Directed Graph of Verifiable Capability Authorizations (see below).

The following is an example of an MVCA for Erin’s Sovronia Drivers License Property Set. This MVCA is the parent of the above VCA.

Snippet 2. Master Verifiable Credential Authorization (MVCA) Example

3.8 VCA Method Invocation (MI)

A VCA Method Invocation (MI) is a JSON-LD structure that attempts to invoke a specific method against a specific instance of a Fully Decentralized Object (FDO) on behalf of a specific invoking party. An MI is of type Invocation (not Proclamation).

An MI has the following properties:

- id – trusted, verifiable decentralized identifier for the MI
- type – “Invocation”
- proclamation – trusted, verifiable decentralized identifier for the VCA to be used for this MI against the specific instance of an FDO by a specific party (Proclamation VCA)
- method – specific name of the method to be invoked against the specific instance of an FDO by a specific party
- usingKey – trusted, verifiable key of the party to be used to attempt the invocation of the above method against a specific instance of the FDO
- signature – trusted, verifiable proof that this MI is legitimate

NOTE: An MI doesn’t have a subject property. The target object is specified by the subject property of the proclamation VCA.

A very important point you make is, “NOTE: An MI doesn’t have a subject property. The target object is specified by the subject property of the proclamation VCA.”  That point is so important, not separating designation from authorization, that I’d like to see it in bold.

Alan Karp alanhkarp@gmail.com, May 17, 2021 CCG Mailing List

The following is an example of an MI that attempts to invoke the Present method on behalf of Erin against Erin’s Sovronia Drivers License Property Set. The referenced VCA is the VCA example from above.

Snippet 3. Verifiable Credential Authorization Method Invocation (MI) Example

3.9 Delegated Directed Graph of Verifiable Capability Authorizations

A Delegated Directed Graph of Verifiable Capability Authorizations is a directed list of VCAs that starts with an MVCA as its top-level, root VCA. Each VCA in the graph points to the previous VCA in the graph via its parent property. An MI, in turn, refers to a single VCA in the graph via the MI’s proclamation property. The capabilities in effect are those that are specifically listed in the target VCA’s caveat property. While there is no inheritance of capabilities in this model, the capabilities specified by each VCA must be equal to or less than (a subset of) the capabilities of the parent VCA (see the definition of the Principles of Delegation and Attenuation).

The above examples of an MVCA, a VCA, and an MI, taken together, form an example of a Delegated Directed Graph of Verifiable Capability Authorizations.

Figure 3. Delegated Directed Graph of Verifiable Capability Authorizations Example
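To make the chain mechanics concrete, here is a hedged Python sketch of how an agent might evaluate a Method Invocation against a Delegated Directed Graph of VCAs: look up the proclamation VCA, check the invoked method against its caveats, then walk parent links up to the MVCA enforcing attenuation at each step. The data shapes (caveats as lists of method names, "*" for all methods) are simplifying assumptions, and signature verification is deliberately left out.

```python
# Hypothetical sketch: evaluate a Method Invocation (MI) against its VCA chain.
# Caveats are simplified to lists of method names ("*" = all methods) and
# signature verification is intentionally omitted.

def granted_methods(vca: dict) -> set[str]:
    """Collect the method names a VCA's caveats grant."""
    methods: set[str] = set()
    for caveat in vca.get("caveat", []):
        methods |= set(caveat.get("methods", []))
    return methods

def authorize_invocation(mi: dict, vca_index: dict[str, dict]) -> bool:
    """Return True if the MI's method is permitted by its proclamation VCA and
    every VCA in the chain is a valid attenuation of its parent (up to the MVCA)."""
    vca = vca_index[mi["proclamation"]]

    # Designation is separate from authorization: the target object comes from
    # the proclamation VCA's subject, not from the MI itself.
    if mi["usingKey"] != vca["grantedKey"]:
        return False

    allowed = granted_methods(vca)
    if "*" not in allowed and mi["method"] not in allowed:
        return False

    # Walk parent links up to the MVCA (which has no parent), checking attenuation.
    child = vca
    while "parent" in child:
        parent = vca_index[child["parent"]]
        parent_allowed = granted_methods(parent)
        if "*" not in parent_allowed and not granted_methods(child) <= parent_allowed:
            return False
        child = parent
    return True

# Usage (hypothetical): authorize_invocation(erin_present_mi, {v["id"]: v for v in all_vcas})
```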

3.9.1 Narrative

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

18. Erin SDL VCA. Erin SDL VCA is the Verifiable Capability Authorization (VCA) created for Erin’s SDL Prop Set DD. The VCA was issued by the Province of Sovronia authorizing Erin to be able to present the properties (and their values) of Erin’s SDL to a third party using the Present method associated with Erin’s SDL Prop Set and supported (implemented) by Erin’s AGENT. The parent of Erin’s SDL VCA is the Erin SDL MVCA.

19. Erin SDL VCA MI. Erin SDL VCA MI is an example of a VCA Method Invocation (VCA MI) that uses the Erin SDL VCA, which authorizes the potential execution of the Present method by Erin against Erin’s SDL Prop Set.

3.10 Resource Servers and Authentication Servers

A resource server that hosts a protected resource owned by a resource owner, a client wishing to access that resource, and an authorization server responsible for generating access tokens. Access tokens are granted to clients authorized by the resource owner: client authorization is proven using an authorization grant. In our system we are using the ‘client credentials’ grant. As it can be seen from Fig. 1, when this type of grant is used, a resource owner configures the authentication server with the credentials of the authorized clients; a client authenticates to the authorization server and receives an access token, then it uses the access token to access the protected resource.

Capability-based access control for multi-tenant systems using OAuth 2.0 and Verifiable Credentials

Although these terms are not currently used in the VE-ARM, the resource server role is assigned to the FDO AGENT specified in the subject’s DID document. The authorization server role is assigned to the actor who is responsible for creating Verifiable Capability Authorizations (VCAs). In the current example, SOVRONA hosts the authorization server on behalf of either the Province of Sovronia or Erin.
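For readers unfamiliar with the client credentials grant referenced above, here is a hedged Python sketch of that flow using the requests library; the token and resource URLs are hypothetical placeholders, not endpoints defined by the VE-ARM.

```python
# Hypothetical sketch of the OAuth 2.0 client credentials grant described above.
# The token and resource URLs, client credentials, and response shapes are
# placeholders, not endpoints defined by the VE-ARM.
import requests  # pip install requests

TOKEN_URL = "https://authz.example.com/oauth/token"        # authorization server (assumed)
RESOURCE_URL = "https://agent.example.com/fdo/properties"  # FDO agent resource (assumed)

def fetch_access_token(client_id: str, client_secret: str) -> str:
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def read_protected_resource(token: str) -> dict:
    resp = requests.get(RESOURCE_URL, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()
```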

4. VE-ARM Principles

The following principles are used to guide The Verifiable Economy Architecture Reference Model (VE-ARM):

- DD MVCA Principle. Every DID (and DID Document) has a corresponding Master Verifiable Capability Authorization (MVCA). Whenever a DID and corresponding DID Document are issued, a corresponding MVCA is automatically created. See F2 in Figure 1. Snippet 4 is an example of a DID Document Master Verifiable Capability Authorization (DD MVCA).
- Property Set VC Principle. All of the properties (and their values), a Property Set, for a particular decentralized object are stored in a Verifiable Credential (VC) that has an id value equal to the DID id of the decentralized object. See F6 in Figure 6. Snippet 5 is a partial example of a Property Set Verifiable Credential (PS VC).

Snippet 4. DID Document Master Verifiable Capability Authorization (MVCA) Example
Snippet 5. Partial Property Set Verifiable Credential (VC) Example
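A hedged sketch of the DD MVCA Principle in code: issuing a DID and DID Document also mints the corresponding MVCA granting all methods to the controller. All names, the DID method syntax, and field shapes here are illustrative assumptions.

```python
# Hypothetical sketch of the DD MVCA Principle: issuing a DID Document also
# mints an MVCA granting all methods to the DID controller. Names, the DID
# method syntax, and field shapes are illustrative assumptions only.
import uuid

def issue_did_with_mvca(controller_key: str, agent_endpoint: str) -> tuple[dict, dict]:
    did = f"did:svrn:person:{uuid.uuid4()}"
    did_document = {
        "id": did,
        "controller": controller_key,
        "service": [{"id": f"{did}#fdom1", "serviceEndpoint": agent_endpoint}],
    }
    mvca = {
        "id": f"did:svrn:mvca:{uuid.uuid4()}",
        "type": "Proclamation",        # an MVCA has no parent property
        "subject": did,
        "grantedKey": controller_key,
        "caveat": [{"type": "RestrictToMethod", "methods": ["*"]}],
        "signature": None,             # signing is out of scope for this sketch
    }
    return did_document, mvca
```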

NOTE: Additional architecture and design principles need to be added to this section.

5. Erin’s DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

Erin Amanda Lee Anderson is a Person, a Citizen of Sovronia, and a Sovronia Driver’s License holder. The following is a graphitization of Erin’s DID and DID Document and the corresponding Master Verifiable Capability Authorization (MVCA).

Figure 5. Subgraphs F1 and F2: Erin’s DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

5.1 Erin’s DID Document Narrative (F1)

1. Erin. Erin is a RW_PERSON (“Real World” Person) and a citizen of the Province of Sovronia. Erin also holds a (valid) Sovronia Driver’s License (SDL) and controls a “Real World” Wallet (RW_WALLET) as well as a Digital Wallet (PDR).

2. Erin D Wallet. Erin D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by Erin, a Person.

3. Erin DD. Erin DD is the primary DIDDOC (DID Document) for Erin, a Person. It is issued by SOVRONA who records it on the SOVRONA VDR and it is also held in the Erin D Wallet.

4. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1 is the identifier for the primary AGENT for Erin, a Person.

5. http://services.sovronia.ca/agent. http://services.sovronia.ca/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by the Province of Sovronia, an Organization. This includes all of the DID(s) and DID Document(s) associated with Erin.

6. SOVRONA VDR. SOVRONA VDR is the primary VDR (Verifiable Data Registry) controlled by SOVRONA, an Organization. The SOVRONA VDR is used to host the SVRN DID Method.

5.2 Erin’s DD Master Capability Authorization Narrative (F2)

7. Erin DD MVCA. Erin DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s DID Document at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to Erin.)

6. Province of Sovronia DID Document (DD) and DD Master Verifiable Capability Authorization (MVCA) Neighborhood

Province of Sovronia is an Organization and a “Real World” Nation State (sovronia.ca). The following is a graphitization of the Province of Sovronia’s DID and DID Document and its corresponding Master Verifiable Capability Authorization (MVCA).

Figure 6. Subgraphs F3 and F4: Province of Sovronia DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhood

6.1 Province of Sovronia DID Document (DD) Narrative (F3)

6. SOVRONA VDR. SOVRONA VDR is the primary VDR (Verifiable Data Registry) controlled by SOVRONA, an Organization. The SOVRONA VDR is used to host the SVRN DID Method.

8. PoS RW Nation State. The Province of Sovronia is a (fictitious) Province (RW_NATIONSTATE (“Real World” Nation State)) in Canada and the legal government jurisdiction for the citizens of the province. The Province of Sovronia is an Organization. The Province of Sovronia issues “Real World” Sovronia Driver’s Licenses (SDLs) but relies on SOVRONA to issue digital, verifiable SDLs.

9. PoS D Wallet. PoS D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by the Province of Sovronia, an Organization.

10. PoS DD. PoS DD is the primary DIDDOC (DID Document) for the Province of Sovronia, an Organization. It is issued by SOVRONA who records it on the SOVRONA VDR and it is held in the PoS D Wallet.

11. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1 is the identifier for the primary AGENT for the Province of Sovronia, an Organization.

12. http://services.sovrona.com/agent. http://services.sovrona.com/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by SOVRONA, an Organization.

6.2 Province of Sovronia DD Master Capability Authorization Neighborhood (F4)

13. PoS DD MVCA. PoS DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for the Province of Sovronia’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for the Province of Sovronia. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to the Province of Sovronia.)

7. DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. This subgraph highlights that with every new DID and DID Document, a corresponding MVCA is issued at the same time. The graphitization includes all of the DIDs in the Subgraph 0 scenario (plus their corresponding MVCAs).

Figure 7. DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

7.1 DID Documents (DDs) and Master Verifiable Capability Authorizations (MVCAs) Narratives (F5)

3. Erin DD. Erin DD is the primary DIDDOC (DID Document) for Erin, a Person. It is issued by SOVRONA who records it on the SOVRONA VDR and it is also held in the Erin D Wallet.

7. Erin DD MVCA. Erin DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s DID Document at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to Erin.)

10. PoS DD. PoS DD is the primary DIDDOC (DID Document) for the Province of Sovronia, an Organization. It is issued by SOVRONA who records it on the SOVRONA VDR and it is held in the PoS D Wallet.

13. PoS DD MVCA. PoS DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for the Province of Sovronia’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for the Province of Sovronia. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to the Province of Sovronia.)

14. Erin SDL DD. Erin SDL DD is the primary DIDDOC (DID Document) for Erin’s digital, verifiable SDL.

15. Erin SDL MVCA. Erin SDL MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

8. Erin’s Sovronia Drivers License Property Set DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhood

Subgraph F6 illustrates how a Property Set for an FDO is realized by a Verifiable Credential (VC). The following is a graphitization of Erin’s Sovronia Driver’s License Property Set.

NOTE: All the properties of an FDO (an FDO Property Set) are represented by one or more Verifiable Credentials associated with the FDO’s DID. A Property Set is associated with an FDO by creating a Verifiable Credential that holds the properties (and their values) that is linked to the FDO’s DID.

Figure 8. Subgraph F6: Erin’s Sovronia Drivers License Property Set DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhood

8.1 Erin’s Sovronia Drivers License Property Set Verifiable Credential (VC) Narrative (F6)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

20. Erin SDL Prop Set VC. Erin SDL Prop Set VC is the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set VC, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.
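
As a purely hypothetical illustration of how such a Property Set VC lines up with the Property Set VC Principle (the DID, issuer, and property values below are invented; this is not the article’s actual Snippet 5), a minimal sketch in Python:

# Hypothetical sketch only: the DID, issuer and property values are invented.
fdo_did = "did:svrn:vc:example-prop-set-0001"

property_set_vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "id": fdo_did,  # the VC id equals the DID id of the decentralized object
    "type": ["VerifiableCredential"],
    "issuer": "did:svrn:org:example-province-of-sovronia",
    "credentialSubject": {
        "id": fdo_did,
        # the Property Set: the properties (and their values) of Erin's SDL
        "licenseClass": "5",
        "expiryDate": "2025-08-31",
    },
}

assert property_set_vc["id"] == fdo_did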

9. Erin’s Sovronia Drivers License Property Set Delegated Directed Graph of Verifiable Capability Authorizations Neighborhood

This subgraph illustrates what a Delegated Directed Graph of Verifiable Capability Authorizations looks like. The graphitization of the Delegated Directed Graph of VCAs applies to Erin’s Sovronia Drivers License Property Set.

The Delegated Directed Graph of VCAs, in this scenario, consists of:

Erin’s Sovronia Drivers License Property Set MVCA
One VCA linked back to the MVCA
One VCA Method Invocation (MI) linked back to the VCA

Figure 9. Subgraph F7: Erin’s Sovronia Drivers License Property Set Delegated Directed Graph of Verifiable Capability Authorizations Neighborhood

9.1 Erin’s SDL Property Set Delegated Directed Graph of Verifiable Capability Authorizations Narrative (F7)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

18. Erin SDL VCA. Erin SDL VCA is the Verifiable Capability Authorization (VCA) created for Erin’s SDL Prop Set DD. The VCA was issued by the Province of Sovronia authorizing Erin to be able to present the properties (and their values) of Erin’s SDL to a third party using the Present method associated with Erin’s SDL Prop Set and supported (implemented) by Erin’s AGENT. The parent of Erin’s SDL VCA is the Erin SDL MVCA.

19. Erin SDL VCA MI. Erin SDL VCA MI is an example of a VCA Method Invocation (VCA MI) that uses the Erin SDL VCA, which authorizes the potential execution of the Present method by Erin against Erin’s SDL Prop Set.
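
A highly simplified, hypothetical sketch of this delegated directed graph in Python (the class, field, and function names are illustrative only and are not the VE-ARM’s actual data model):

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CapabilityAuthorization:
    holder: str                     # who may invoke methods under this authorization
    methods: List[str]              # which methods are authorized ("*" means any and all methods)
    parent: Optional["CapabilityAuthorization"] = None  # link back toward the root MVCA


def may_invoke(cap: CapabilityAuthorization, holder: str, method: str) -> bool:
    """A method invocation is authorized only if every link back to the MVCA permits the method."""
    node = cap
    while node is not None:
        if method not in node.methods and "*" not in node.methods:
            return False
        node = node.parent
    return cap.holder == holder


# MVCA: grants any and all methods to the effective issuer (the Province of Sovronia).
prop_set_mvca = CapabilityAuthorization(holder="Province of Sovronia", methods=["*"])

# VCA: the Province of Sovronia delegates the (attenuated) Present method to Erin.
erin_sdl_vca = CapabilityAuthorization(holder="Erin", methods=["Present"], parent=prop_set_mvca)

# VCA Method Invocation: Erin invokes Present against her SDL Prop Set.
print(may_invoke(erin_sdl_vca, "Erin", "Present"))  # True
print(may_invoke(erin_sdl_vca, "Erin", "Update"))   # False - the delegation was attenuated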

10. SOVRONA Trusted Decentralized Identity Provider (TDIDP) DID Document (DD), DD Master Verifiable Capability Authorization (MVCA) and Erin “Real World” Neighborhoods

Subgraph F8 is a visualization of:

Erin’s “Real World” objects
Erin’s “Real World” Wallet (Erin RW (Leather) Wallet)
Erin’s “Real World” Sovronia Drivers License (Erin RW SDL)
SOVRONA’s DID and DID Document (and corresponding MVCA)

Figure 10. SOVRONA TDIDP DID Document (DD), DD Master Verifiable Capability Authorization (MVCA) and Erin “Real World” Neighborhoods

10.1 Erin’s “Real World” Narrative (F9)

1. Erin. Erin is a RW_PERSON (“Real World” Person) and a citizen of the Province of Sovronia. Erin also holds a (valid) Sovronia Driver’s License (SDL) and controls a “Real World” Wallet (RW_WALLET) as well as a Digital Wallet (PDR).

8. PoS RW Nation State. The Province of Sovronia is a (fictitious) Province (RW_NATIONSTATE (“Real World” Nation State)) in Canada and the legal government jurisdiction for the citizens of the province. The Province of Sovronia is an Organization. The Province of Sovronia issues “Real World” Sovronia Driver’s Licenses (SDLs) but relies on SOVRONA to issue digital, verifiable SDLs.

22. Erin RW Wallet. Erin RW Wallet is a RW_WALLET (“Real World” (Leather) Wallet) and it is used to hold Erin’s “Real World” Sovronia Driver’s License (Erin RW SDL). Erin RW Wallet is owned and controlled by Erin.

23. Erin RW SDL. Erin RW SDL is Erin’s RW_SDL (“Real World” Sovronia Driver’s License) and it is held by Erin in Erin’s RW Wallet.

10.2 SOVRONA TDIDP Narrative (F10)

12. http://services.sovrona.com/agent. http://services.sovrona.com/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by SOVRONA, an Organization.

24. SOVRONA Organization. SOVRONA is an Organization and the primary “Real World” TDIDP (RW_DIDPROVIDER) for the citizens and government of Sovronia, a fictitious province in Canada. SOVRONA controls a Digital Wallet (PDR (Personal Data Registry)), SOVRONA D Wallet, as well as the SOVRONA Verifiable Data Registry (VDR).

25. SOVRONA D Wallet. SOVRONA D Wallet is a Digital Wallet (PDR (Private Data Registry)) that is controlled by SOVRONA, an Organization.

26. SOVRONA DD. SOVRONA DD is the primary DIDDOC (DID Document) for SOVRONA, an Organization.

27. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1 is the identifier for the primary AGENT for SOVRONA, an Organization.

28. SOVRONA DD MVCA. SOVRONA DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for SOVRONA’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for SOVRONA’s DD. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is SOVRONA, the Organization.)

11. VE-ARM “All-In” Graph View

The following is a depiction of the “All-In” view of The Verifiable Economy Architecture Reference Model (VE-ARM) graph. This graph view represents the union of all of the previous subgraphs.

Figure 11. Subgraph F10. The Verifiable Economy “All-In” Graph View

11.1 Narrative

1. Erin. Erin is a RW_PERSON (“Real World” Person) and a citizen of the Province of Sovronia. Erin also holds a (valid) Sovronia Driver’s License (SDL) and controls a “Real World” Wallet (RW_WALLET) as well as a Digital Wallet (PDR).

2. Erin D Wallet. Erin D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by Erin, a Person.

3. Erin DD. Erin DD is the primary DIDDOC (DID Document) for Erin, a Person. It is issued by SOVRONA who records it on the SOVRONA VDR and it is also held in the Erin D Wallet.

4. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1 is the identifier for the primary AGENT for Erin, a Person.

5. http://services.sovronia.ca/agent. http://services.sovronia.ca/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by the Province of Sovronia, an Organization. This includes all of the DID(s) and DID Document(s) associated with Erin.

6. SOVRONA VDR. SOVRONA VDR is the primary VDR (Verifiable Data Registry) controlled by SOVRONA, an Organization. The SOVRONA VDR is used to host the SVRN DID Method.

7. Erin DD MVCA. Erin DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s DID Document at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to Erin.)

8. PoS RW Nation State. The Province of Sovronia is a (fictitious) Province (RW_NATIONSTATE (“Real World” Nation State)) in Canada and the legal government jurisdiction for the citizens of the province. The Province of Sovronia is an Organization. The Province of Sovronia issues “Real World” Sovronia Driver’s Licenses (SDLs) but relies on SOVRONA to issue digital, verifiable SDLs.

9. PoS D Wallet. PoS D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by the Province of Sovronia, an Organization.

10. PoS DD. PoS DD is the primary DIDDOC (DID Document) for the Province of Sovronia, an Organization. It is issued by SOVRONA who records it on the SOVRONA VDR and it is held in the PoS D Wallet.

11. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1 is the identifier for the primary AGENT for the Province of Sovronia, an Organization.

12. http://services.sovrona.com/agent. http://services.sovrona.com/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by SOVRONA, an Organization.

13. PoS DD MVCA. PoS DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for the Province of Sovronia’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for the Province of Sovronia. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to the Province of Sovronia.)

14. Erin SDL DD. Erin SDL DD is the primary DIDDOC (DID Document) for Erin’s digital, verifiable SDL.

15. Erin SDL MVCA. Erin SDL MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

18. Erin SDL VCA. Erin SDL VCA is the Verifiable Capability Authorization (VCA) created for Erin’s SDL Prop Set DD. The VCA was issued by the Province of Sovronia authorizing Erin to be able to present the properties (and their values) of Erin’s SDL to a third party using the Present method associated with Erin’s SDL Prop Set and supported (implemented) by Erin’s AGENT. The parent of Erin’s SDL VCA is the Erin SDL MVCA.

19. Erin SDL VCA MI. Erin SDL VCA MI is an example of a VCA Method Invocation (VCA MI) that uses the Erin SDL VCA, which authorizes the potential execution of the Present method by Erin against Erin’s SDL Prop Set.

20. Erin SDL Prop Set VC. Erin SDL Prop Set VC is the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set VC, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

21. DID:SVRN:VC:0B114A04-2559-4C68-AE43-B7004646BD76#fdom1. DID:SVRN:VC:0B114A04-2559-4C68-AE43-B7004646BD76#fdom1 is the identifier for the primary AGENT for Erin SDL Property Set DD.

22. Erin RW Wallet. Erin RW Wallet is a RW_WALLET (“Real World” (Leather) Wallet) and it is used to hold Erin’s “Real World” Sovronia Driver’s License (Erin RW SDL). Erin RW Wallet is owned and controlled by Erin.

23. Erin RW SDL. Erin RW SDL is Erin’s RW_SDL (“Real World” Sovronia Driver’s License) and it is held by Erin in Erin’s RW Wallet.

24. SOVRONA Organization. SOVRONA is an Organization and the primary “Real World” TDIDP (RW_DIDPROVIDER) for the citizens and government of Sovronia, a fictitious province in Canada. SOVRONA controls a Digital Wallet (PDR (Personal Data Registry)), SOVRONA D Wallet, as well as the SOVRONA Verifiable Data Registry (VDR).

25. SOVRONA D Wallet. SOVRONA D Wallet is a Digital Wallet (PDR (Private Data Registry)) that is controlled by SOVRONA, an Organization.

26. SOVRONA DD. SOVRONA DD is the primary DIDDOC (DID Document) for SOVRONA, an Organization.

27. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1 is the identifier for the primary AGENT for SOVRONA, an Organization.

28. SOVRONA DD MVCA. SOVRONA DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for SOVRONA’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for SOVRONA’s DD. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is SOVRONA, the Organization.)

29. DID:SVRN:LICENSE:999902-638#fdom1. DID:SVRN:LICENSE:999902-638#fdom1 is the identifier for the primary AGENT for Erin SDL DD.

12. Conclusions

The goals of this article are three-fold:

1. Introduce the concept of a Verifiable Capability Authorization (VCA) and show how VCAs can be used to implement controls over which specific methods a particular party is allowed to execute against a particular instance of a Fully Decentralized Object (FDO). VCAs are both delegatable and attenuatable.

2. Illustrate how #graphitization techniques can be used for visualizing:
- Trusted Decentralized Identifiers (DIDs)
- DID Documents
- Trusted Digital Agents (and their Service Endpoints (SEPs))
- Verifiable Credentials (VCs)
- Verifiable Capability Authorizations (VCAs)
- and, most importantly, their myriad interrelationships.

3. Use the above two goals to further detail and describe how to use the VE-ARM model for implementing trusted, reliable, efficient, frictionless, standards-based, global-scale software systems based on Fully Decentralized Objects (FDOs).

This article described The Verifiable Economy Architecture Reference Model (VE-ARM) using a #graphitization approach for modeling and visualization. The resulting overall graph was partitioned into a series of subgraphs that depict the key elements of the architecture reference model. Each subgraph was documented with a narrative that is mapped to the numbered blue targets used to identify each element in each subgraph.


Ben Werdmüller

The user's journey

I’ve been lucky to get some productive, actionable criticism on my short stories, both from writing classes I’ve been a part of and journals I’ve submitted to.

The most common criticism goes something like this: “your line to line writing is solid, but you let the idea become the story”. In other words, rather than letting the story stand on its own feet, I fall into the trap of treating it like a kind of argument with a point I want to drive home.

I’m pretty sure I’ve developed this habit from 23 years of opinionated blogging: I write regular posts that try and argue for a particular worldview, or a way of doing things. Even if you’re a newcomer, you’ve probably noticed that I talk quite a bit about decentralization, data ownership, and the dangers of centralized data silos as a means to build concentrated wealth. I care about those things, and I’d love for more people to join me.

It’s served me pretty well as a way to write on my website, but it doesn’t really work for stories. The underlying idea can certainly inform how the story is written - and it should - but the narrative needs to be driven by its characters. Stories are about telling “true lies” that shine a light on some aspect of being human. In genre fiction that will often be accompanied by an exploration of an overt idea, but if, for example, a science fiction story is just about the science and not about how real human characters live and breathe in a world where that science is true, the story will suck.

It’s a trap and I’m learning to get over it.

Here’s the thing: I’ve realized that I fall into the exact same trap in my technology work, too. I’m often so wrapped up in an idea I care about - scroll up for a list of some of them - that I let it subsume the most important thing about any technology project. Just as a story needs to be driven by human characters (or proxies for human characters; I’m not arguing against Redwall here), technology products need to be driven by the people who use them. It’s not about your story as a creator; it’s about their story as a user.

It’s an ego thing, in a way. In both cases, I become so excited by the idea that I let myself become the character: the person expressing the idea, either in prose or code. The trick, the real art of it, is to inform the story with your ideas, but to center the character. Their journey is the all-important thing, and if an idea doesn’t fit with that journey, it doesn’t belong there.

Like I said: it’s a trap and I’m learning to get over it. And I strongly suspect I’m not alone.

You serve the reader by telling a human story; you serve the user by serving their story. It’s not about educating them, or forcing them around to your point of view. Whether you’re shining a light on the human condition or making a tool to make a part of it easier, it’s about service. Our goal should be to disappear and let the work speak for itself.


Damien Bod

Securing an ASP.NET Core app and web API using windows authentication

This post shows how an ASP.NET Core Web API and an ASP.NET Core Razor page application can be implemented to use windows authentication. The Razor page application uses Javascript to display an autocomplete control which gets the data indirectly from the service API which is protected using windows authentication. The Razor Page application uses the API to get the auto-complete suggestions data. Both applications are protected using windows authentication.

Code: https://github.com/damienbod/PoCWindowsAuth

Setup the API

The ASP.NET Core demo API is setup to use windows authentication. The launch settings windowsAuthentication property is set to true and the anonymousAuthentication property to false. The application host file settings on your development PC would also need to be configured to allow windows authentication, which is disabled by default. See the stack overflow link at the bottom for more information.

{ "iisSettings": { "windowsAuthentication": true, "anonymousAuthentication": false, "iisExpress": { "applicationUrl": "https://localhost:44364", "sslPort": 44364 } },

The Startup ConfigureServices method is configured to require authentication using the IISDefaults.AuthenticationScheme scheme. This would need to be changed if you were using a different hosting model.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(IISDefaults.AuthenticationScheme);

    services.AddControllers().AddJsonOptions(option =>
        option.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase);
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

The API is protected using the authorize attribute. This example returns the user name from the windows authentication.

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class MyDataController : ControllerBase
{
    private readonly ILogger<MyDataController> _logger;

    public MyDataController(ILogger<MyDataController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string> { User.Identity.Name };
    }
}

Implement the ASP.NET Core Razor pages

The application calling the API also requires windows authentication and requests the data from the API project. The HttpClient instance requesting the data from the API project must send the default credentials with each API call. A HttpClientHandler is used to implement this. The HttpClientHandler is added to a named AddHttpClient service which can be used anywhere in the application.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(IISDefaults.AuthenticationScheme);

    services.AddHttpClient();

    HttpClientHandler handler = new HttpClientHandler()
    {
        UseDefaultCredentials = true
    };

    services.AddHttpClient("windowsAuthClient", c => { })
        .ConfigurePrimaryHttpMessageHandler(() => handler);

    services.AddScoped<MyDataClientService>();

    services.AddRazorPages().AddJsonOptions(option =>
        option.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase);
}

A client service is implemented to call the API from the second project. This client uses the IHttpClientFactory to create instances of the HttpClient. The CreateClient method is used to create an instance using the named client which was configured in the Startup class. This instance will send credentials to the API.

public MyDataClientService(
    IConfiguration configurations,
    IHttpClientFactory clientFactory)
{
    _configurations = configurations;
    _clientFactory = clientFactory;
    _jsonSerializerOptions = new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true,
    };
}

public async Task<List<string>> GetMyData()
{
    try
    {
        var client = _clientFactory.CreateClient("windowsAuthClient");
        client.BaseAddress = new Uri(_configurations["MyApiUrl"]);

        var response = await client.GetAsync("api/MyData");
        if (response.IsSuccessStatusCode)
        {
            var data = await JsonSerializer.DeserializeAsync<List<string>>(
                await response.Content.ReadAsStreamAsync());
            return data;
        }

        var error = await response.Content.ReadAsStringAsync();
        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}, Message: {error}");
    }
    catch (Exception e)
    {
        throw new ApplicationException($"Exception {e}");
    }
}

Javascript UI

Using Javascript to call an API protected with windows authentication can become a bit tricky due to CORS. I prefer to avoid this and instead proxy the calls from my trusted backend to the API. The OnGetAutoCompleteSuggest method is used to call the API. This also makes it easy to map DTOs from my API to my view DTOs as required.

public class IndexModel : PageModel
{
    private readonly ILogger<IndexModel> _logger;
    private readonly MyDataClientService _myDataClientService;

    public List<string> DataFromApi;
    public string SearchText { get; set; }
    public List<PersonCity> PersonCities;

    public IndexModel(MyDataClientService myDataClientService,
        ILogger<IndexModel> logger)
    {
        _myDataClientService = myDataClientService;
        _logger = logger;
    }

    public async Task OnGetAsync()
    {
        DataFromApi = await _myDataClientService.GetMyData();
    }

    public async Task<ActionResult> OnGetAutoCompleteSuggest(string term)
    {
        PersonCities = await _myDataClientService.Suggest(term);
        SearchText = term;
        return new JsonResult(PersonCities);
    }
}

The Razor Page underneath uses an autocomplete implemented in Javascript to suggest data requested from the API. Any Javascript framework can be used in this way.

@page "{handler?}" @model IndexModel @{ ViewData["Title"] = "Home page"; } <div class="text-center"> <p>Data from API:</p> @foreach (string item in Model.DataFromApi) { <p>@item</p><br /> } </div> <hr /> <fieldset class="form"> <legend>Search for a person in the search engine</legend> <table width="500"> <tr> <th></th> </tr> <tr> <td> <input class="form-control" id="autocomplete" type="text" style="width:500px" /> </td> </tr> </table> </fieldset> <br /> <div class="card" id="results"> <h5 class="card-header"> <span id="docName"></span> <span id="docFamilyName"></span> </h5> <div class="card-body"> <p class="card-text"><span id="docInfo"></span></p> <p class="card-text"><span id="docCityCountry"></span></p> <p class="card-text"><span id="docWeb"></span></p> </div> </div> @section scripts { <script type="text/javascript"> var items; $(document).ready(function () { $("#results").hide(); $("input#autocomplete").autocomplete({ source: function (request, response) { $.ajax({ url: "Index/AutoCompleteSuggest", dataType: "json", data: { term: request.term, }, success: function (data) { var itemArray = new Array(); for (i = 0; i < data.length; i++) { itemArray[i] = { label: data[i].name + " " + data[i].familyName, value: data[i].name + " " + data[i].familyName, data: data[i] } } console.log(itemArray); response(itemArray); }, error: function (data, type) { console.log(type); } }); }, select: function (event, ui) { $("#results").show(); $("#docNameId").text(ui.item.data.id); $("#docName").text(ui.item.data.name); $("#docFamilyName").text(ui.item.data.familyName); $("#docInfo").text(ui.item.data.info); $("#docCityCountry").text(ui.item.data.cityCountry); $("#docWeb").text(ui.item.data.web); console.log(ui.item); } }); }); </script> }

If everything is set up correctly, the ASP.NET Core application displays the API data which is protected using windows authentication.

CSRF

When using windows authentication, you need to protect against CSRF (cross-site request forgery) like any application using cookies. It is also recommended NOT to use windows authentication for applications exposed on the public internet. Modern security architectures like OpenID Connect should be used whenever possible. Windows authentication works well on intranets, or when making changes to existing applications which already use it inside secure networks.

Links:

https://stackoverflow.com/questions/36946304/using-windows-authentication-in-asp-net

https://docs.microsoft.com/en-us/aspnet/web-api/overview/security/integrated-windows-authentication


Simon Willison

Weeknotes: Vaccinate The States, and how I learned that returning dozens of MB of JSON works just fine these days

On Friday VaccinateCA grew in scope, a lot: we launched a new website called Vaccinate The States. Patrick McKenzie wrote more about the project here - the short version is that we're building the most comprehensive possible dataset of vaccine availability in the USA, using a combination of data collation, online research and continuing to make a huge number of phone calls.

VIAL, the Django application I've been working on since late February, had to go through some extensive upgrades to help support this effort!

VIAL has a number of responsibilities. It acts as our central point of truth for the vaccination locations that we are tracking, powers the app used by our callers to serve up locations to call and record the results, and as-of this week it's also a central point for our efforts to combine data from multiple other providers and scrapers.

The data ingestion work is happening in a public repository, CAVaccineInventory/vaccine-feed-ingest. I have yet to write a single line of code there (and I thoroughly enjoy working on that kind of code) because I've been heads down working on VIAL itself to ensure it can support the ingestion efforts.

Matching and concordances

If you're combining data about vaccination locations from a range of different sources, one of the biggest challenges is de-duplicating the data: it's important the same location doesn't show up multiple times (potentially with slightly differing details) due to appearing in multiple sources.

Our first step towards handling this involved the addition of "concordance identifiers" to VIAL.

I first encountered the term "concordance" being used for this in the Who's On First project, which is building a gazetteer of every city/state/country/county/etc on earth.

A concordance is an identifier in another system. Our location ID for RITE AID PHARMACY 05976 in Santa Clara is receu5biMhfN8wH7P - which is e3dfcda1-093f-479a-8bbb-14b80000184c in VaccineFinder and 7537904 in Vaccine Spotter and ChIJZaiURRPKj4ARz5nAXcWosUs in Google Places.

We're storing them in a Django table called ConcordanceIdentifier: each record has an authority (e.g. vaccinespotter_org) and an identifier (7537904) and a many-to-many relationship to our Location model.

Why many-to-many? Surely we only want a single location for any one of these identifiers?

Exactly! That's why it's many-to-many: because if we import the same location twice, then assign concordance identifiers to it, we can instantly spot that it's a duplicate and needs to be merged.
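
A minimal sketch of what such a model might look like (the field names and options here are illustrative assumptions, not necessarily the actual VIAL schema):

from django.db import models


class Location(models.Model):
    name = models.CharField(max_length=255)


class ConcordanceIdentifier(models.Model):
    authority = models.CharField(max_length=64)    # e.g. "vaccinespotter_org"
    identifier = models.CharField(max_length=128)  # e.g. "7537904"
    locations = models.ManyToManyField(Location, related_name="concordances")

    class Meta:
        unique_together = ("authority", "identifier")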

Raw data from scrapers

ConcordanceIdentifier also has a many-to-many relationship with a new table, called SourceLocation. This table is essentially a PostgreSQL JSON column with a few other columns (including latitude and longitude) into which our scrapers and ingesters can dump raw data. This means we can use PostgreSQL queries to perform all kinds of analysis on the unprocessed data before it gets cleaned up, de-duplicated and loaded into our point-of-truth Location table.
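
Again as an illustrative sketch (the field names are assumptions, and models.JSONField needs Django 3.1+):

from django.db import models


class SourceLocation(models.Model):
    source_name = models.CharField(max_length=64)   # which scraper/ingester produced the record
    latitude = models.FloatField(null=True, blank=True)
    longitude = models.FloatField(null=True, blank=True)
    data = models.JSONField()                        # the unprocessed scraped record
    concordances = models.ManyToManyField(
        "ConcordanceIdentifier", related_name="source_locations"
    )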

How to dedupe and match locations?

Initially I thought we would do the deduping and matching inside of VIAL itself, using the raw data that had been ingested into the SourceLocation table.

Since we were on a tight internal deadline it proved more practical for people to start experimenting with matching code outside of VIAL. But that meant they needed the raw data - 40,000+ location records (and growing rapidly).

A few weeks ago I built a CSV export feature for the VIAL admin screens, using Django's StreamingHttpResponse class combined with keyset pagination for bulk export without sucking the entire table into web server memory - details in this TIL.
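
Stripped down to its essentials, the pattern looks roughly like this (a simplified sketch re-using the hypothetical Location model above, not the actual VIAL admin code):

import csv

from django.http import StreamingHttpResponse

from myapp.models import Location  # hypothetical import path


class Echo:
    """Pseudo-buffer: csv.writer just needs an object whose write() returns the value."""

    def write(self, value):
        return value


def export_locations_csv(request):
    def rows():
        writer = csv.writer(Echo())
        yield writer.writerow(["id", "name"])
        last_pk = 0
        while True:
            # Keyset pagination: fetch the next batch strictly after the last seen primary key.
            batch = list(
                Location.objects.filter(pk__gt=last_pk)
                .order_by("pk")
                .values_list("pk", "name")[:1000]
            )
            if not batch:
                break
            for pk, name in batch:
                yield writer.writerow([pk, name])
            last_pk = batch[-1][0]

    return StreamingHttpResponse(rows(), content_type="text/csv")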

Our data ingestion team wanted a GeoJSON export - specifically newline-delimited GeoJSON - which they could then load into GeoPandas to help run matching operations.

So I built a simple "search API" which defaults to returning 20 results at a time, but also has an option to "give me everything" - using the same technique I used for the CSV export: keyset pagination combined with a StreamingHttpResponse.

And it worked! It turns out that if you're running on modern infrastructure (Cloud Run and Cloud SQL in our case) in 2021 getting Django to return 50+MB of JSON in a streaming response works just fine.
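
Generating the newline-delimited GeoJSON itself is just one Feature per line, something like this sketch (the attribute names on Location are assumptions); each line is then yielded into the same kind of StreamingHttpResponse generator as the CSV export:

import json


def location_to_geojson_line(location):
    """Serialize one location as a single newline-terminated GeoJSON Feature."""
    feature = {
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "coordinates": [location.longitude, location.latitude],
        },
        "properties": {"id": location.pk, "name": location.name},
    }
    return json.dumps(feature) + "\n"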

Some of these exports are taking 20+ seconds, but for a small audience of trusted clients that's completely fine.

While working on this I realized that my idea of what size of data is appropriate for a dynamic web application to return more or less formed back in 2005. I still think it's rude to serve multiple MBs of JavaScript up to an inexpensive mobile phone on an expensive connection, but for server-to-server or server-to-automation-script situations serving up 50+ MB of JSON in one go turns out to be a perfectly cromulent way of doing things.

Export full results from django-sql-dashboard

django-sql-dashboard is my Datasette-inspired library for adding read-only arbitrary SQL queries to any Django+PostgreSQL application.

I built the first version last month to help compensate for switching VaccinateCA away from Airtable - one of the many benefits of Airtable is that it allows all kinds of arbitrary reporting, and Datasette has shown me that bookmarkable SQL queries can provide a huge amount of that value with very little written code, especially within organizations where SQL is already widely understood.

While it allows people to run any SQL they like (against a read-only PostgreSQL connection with a time limit) it restricts viewing to the first 1,000 records to be returned - because building robust, performant pagination against arbitrary SQL queries is a hard problem to solve.

Today I released django-sql-dashboard 0.10a0 with the ability to export all results for a query as a downloadable CSV or TSV file, using the same StreamingHttpResponse technique as my Django admin CSV export and all-results-at-once search endpoint.

I expect it to be pretty useful! It means I can run any SQL query I like against a Django project and get back the full results - often dozens of MBs - in a form I can import into other tools (including Datasette).

TIL this week

Usable horizontal scrollbars in the Django admin for mouse users
Filter by comma-separated values in the Django admin
Constructing GeoJSON in PostgreSQL
Django Admin action for exporting selected rows as CSV

Releases this week

django-sql-dashboard: 0.10a1 - (21 total releases) - 2021-04-25
Django app for building dashboards using raw SQL queries

Saturday, 24. April 2021

DustyCloud Brainstorms

Beyond the shouting match: what is a blockchain, really?

If there's one thing that's true about the word "blockchain", it's that these days people have strong opinions about it. Open your social media feed and you'll see people either heaping praises on blockchains, calling them the saviors of humanity, or condemning them as destroying and burning down the planet and making the rich richer and the poor poorer and generally all the other kinds of fights that people like to have about capitalism (also a quasi-vague word occupying some hotly contested mental real estate).

There are good reasons to hold opinions about various aspects of what are called "blockchains", and I too have some pretty strong opinions I'll be getting into in a followup article. The followup article will be about "cryptocurrencies", which many people also seem to think of as synonymous with "blockchains", but this isn't particularly true either, but we'll deal with that one then.

In the meanwhile, some of the fighting on the internet is kind of confusing, but even more importantly, kind of confused. Some of it might be what I call "sportsballing": for whatever reason, for or against blockchains has become part of your local sportsball team, and we've all got to be team players or we're gonna let the local team down already, right? And the thing about sportsballing is that it's kind of arbitrary and it kind of isn't, because you might pick a sportsball team because you did all your research or you might have picked it because that just happens to be the team in your area or the team your friends like, but god almighty once you've picked your sportsball team let's actually not talk against it because that might be giving in to the other side. But sportsballing kind of isn't arbitrary either because it tends to be initially connected to real communities of real human beings and there's usually a deeper cultural web than appears at surface level, so when you're poking at it, it appears surface-level shallow but there are some real intricacies beneath the surface. (But anyway, go sportsball team.)

But I digress. There are important issues to discuss, yet people aren't really discussing them, partly because people mean different things. "Blockchain" is a strange term that encompasses a wide idea space, and what people consider or assume essential to it vary just as widely, and thus when two people are arguing they might not even be arguing about the same thing. So let's get to unpacking.

"Blockchain" as handwaving towards decentralized networks in general

Years ago I was at a conference about decentralized networked technology, and I was having a conversation with someone I had just met. This person was telling me how excited they were about blockchains... finally we have decentralized network designs, and so this seems really useful for society!

I paused for a moment and said yes, blockchains can be useful for some things, though they tend to have significant costs or at least tradeoffs. It's good that we also have other decentralized network technology; for example, the ActivityPub standard I was involved in had no blockchains but did rely on the much older "classic actor model."

"Oh," the other person said, "I didn't know there were other kinds of decentralized network designs. I thought that 'blockchain' just meant 'decentralized network technology'."

It was as if a light had turned on and illuminated the room for me. Oh! This explained so many conversations I had been having over the years. Of course... for many people, blockchains like Bitcoin were the first ever exposure they had (aside from email, which maybe they never gave much thought to as being decentralized) of something that involved a decentralized protocol. So for many people, "blockchain" and "decentralized technology" are synonyms, if not in technical design, but in terms of understanding of a space.

Mark S. Miller, who was standing next to me, smiled and gave a very interesting followup: "There is only one case in which you need a blockchain, and that is in a decentralized system which needs to converge on a single order of events, such as a public ledger dealing with the double spending problem."

Two revelations at once. It was a good conversation... it was a good start. But I think there's more.

Blockchains are the "cloud" of merkle trees

As time has gone on, the discourse over blockchains has gotten more dramatic. This is partly because what a "blockchain" is hasn't been well defined.

All terminology exists on an ever-present battle between fuzziness and crispness, with some terms being much clearer than others. The term "boolean" has a fairly crisp definition in computer science, but if I ask you to show me your "stove", the device you show me today may be incomprehensible to someone's definition a few centuries ago, particularly in that today it might not involve fire. Trying to define it in terms of its functionality can also cause confusion: if I asked you to show me a stove, and you showed me a computer processor or a car engine, I might be fairly confused, even though technically people enjoy showing off that they can cook eggs on both of these devices when they get hot enough. (See also: Identity is a Katamari, language is a Katamari explosion.)

Still, some terms are fuzzier than others, and as far as terms go, "blockchain" is quite fuzzy. Hence my joke: "Blockchains are the 'cloud' of merkle trees."

This ~joke tends to get a lot of laughs out of a particular kind of audience, and confused looks from others, so let me explain. The one thing everyone seems to agree on is that it's a "chain of blocks", but all that really seems to mean is that it's a merkle tree... really, just an immutable datastructure where one node points at the parent node which points at the parent node all the way up. The joke then is not that this merkle tree runs on a cloud, but that "cloud computing" means approximately nothing: it's marketing speak for some vague handwavey set of "other peoples' computers are doing computation somewhere, possibly on your behalf sometimes." Therefore, "cloud of merkle trees" refers to the vagueness of the situation. (As everyone knows, jokes are funnier when fully explained, so I'll turn on my "STUDIO LAUGHTER" sign here.)

So, a blockchain is a chain of blocks, ie a merkle tree, and I mean, technically speaking, that means that Git is a blockchain (especially if the commits are signed), but when you see someone arguing on the internet about whether or not blockchains are "good" or "bad", they probably weren't thinking about git, which aside from having a high barrier of entry in its interface and some concerns about the hashing algorithm used, isn't really something likely to drag you into an internet flamewar.
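
For the curious, the bare datastructure being gestured at is tiny; a toy version (purely illustrative, not any particular blockchain's actual format) fits in a few lines of Python:

import hashlib
import json


def make_block(data, parent_hash):
    """Each block records the hash of its parent, forming the 'chain of blocks'."""
    block = {"data": data, "parent": parent_hash}
    block_hash = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block, block_hash


genesis, genesis_hash = make_block("genesis", None)
block_1, hash_1 = make_block("hello", genesis_hash)
block_2, hash_2 = make_block("world", hash_1)
# Tampering with any earlier block changes its hash, which breaks every later link.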

"Blockchain" is to "Bitcoin" what "Roguelike" is to "Rogue"

These days it's common to see people either heaping praises on blockchains or criticizing them, and those people tend to be shouting past one another. I'll save unpacking that for another post. In the meanwhile though, it's worth noting that people might not be talking about the same things.

What isn't in doubt is whether or not Bitcoin is a blockchain... trying to understand and then explore the problem space around Bitcoin is what created the term "blockchain". It's a bit like the video game genre of roguelikes, which started with the game Rogue, particularly explored and expanded upon in NetHack, and then suddenly exploding into the indie game scene as a "genre" of its own. Except the genre has become fuzzier and fuzzier as people have explored the surrounding space. What is essential? Is a grid based layout essential? Is a non-euclidean grid acceptable? Do you have to provide an ascii or ansi art interface so people can play in their terminals? Dare we allow unicode characters? What if we throw out terminals altogether and just play on a grid of 2d pixelart? What about 3d art? What about permadeath? What about the fantasy theme? What about random level generation? What are the key features of a roguelike?

Well now we're at the point where I pick up a game like Blazing Beaks and it calls itself a "roguelite", which I guess is embracing the point that terminology has gotten extremely fuzzy... this game feels more like Robotron than Rogue.

So... if "blockchain" is to Bitcoin what "roguelike" is to Rogue, then what's essential to a blockchain? Does the blockchain have to be applied to a financial instrument, or can it be used to store updateable information about eg identity? Is global consensus required? Or what about a "trusted quorum" of nodes, such as in Hyperledger? Is "mining" some kind of asset a key part of the system? Is proof of work acceptable, or is proof of stake okay? What about proof of space, proof of space-time, proof of pudding?

On top of all this, some of the terms around blockchains have been absorbed as if into them. For instance, I think to many people, "smart contract" means something like "code which runs on a blockchain" thanks to Ethereum's major adoption of the term, but the E programming language described "smart contracts" as the "likely killer app of distributed capabilities" all the way back in 1999, and was borrowing the term from Nick Szabo, but really the same folks working on E had described many of those same ideas in the Agoric Papers back in 1988. Bitcoin wasn't even a thing at all until at least 2008, so depending on how you look at it, "smart contracts" precede "blockchains" by one or two decades. So "blockchain" has somehow even rolled up terms outside of its space as if within it. (By the way, I don't think anyone has given a good and crisp definition for "smart contract" either despite some of these people trying to give me one, so let me give you one that I think is better and embraces its fuzziness: "Smart contracts allow you to do the kinds of things you might do with legal contracts, but relying on networked computation instead of a traditional state-based legal system." It's too bad more people also don't know about the huge role that Mark Miller's "split contracts" idea plays into this space because that's what makes the idea finally make sense... but that's a conversation for another time.) (EDIT: Well, after I wrote this, Kate Sills lent me her definition, which I think is the best one: "Smart contracts are credible commitments using technology, and outside a state-provided legal system." I like it!)

So anyway, the point of this whole section is to say that kind of like roguelike, people are thinking of different things as essential to blockchains. Everyone roughly agrees on the jumping-off point of ideas but since not everyone agrees from there, it's good to check in when we're having the conversation. Wait, you do/don't like this game because it's a roguelike? Maybe we should check in on what features you mean. Likewise for blockchains. Because if you're blaming blockchains for burning down the planet, more than likely you're not condemning signed git repositories (or at least, if you're condemning them, you're probably doing so about it from an aspect that isn't the fundamental datastructure... probably).

This is an "easier said than done" kind of thing though, because of course, I'm kind of getting into some "in the weeds" level of details here... but it's the "in the weeds" where all the substance of the disagreements really are. The person you are talking with might not actually even know or consider the same aspects to be essential that you consider essential though, so taking some time to ask which things we mean can help us lead to a more productive conversation sooner.

"Blockchain" as an identity signal

First, a digression. One thing that's kind of curious about the term "virtue signal" is that in general it tends to be used as a kind of virtue signal. It's kind of like the word hipster in the previous decade, which weirdly seemed to be obsessively and pejoratively used by people who resembled hipsters more than anyone else. Hence I used to make a joke called "hipster recursion", which is that since hipsters seem more obsessed with pejorative labeling of hipsterism than anyone else, there's no way to call someone a "hipster" without yourself taking on hipster-like traits, and so inevitably even this conversation is N-levels deep into hipster recursion for some numerical value of N.

"Virtue signaling" appears similar, but even more ironically so (which is a pretty amazing feat given how much of hipsterdom seems to surround a kind of inauthentic irony). When I hear someone say "virtue signaling" with a kind of sneer, part of that seems to be acknowledging that other people are sending signals merely to impress others that they are some kind of the same group but it seems as if it's being raised as in a you-know-and-I-know-that-by-me-acknowledging-this-I'm-above-virtue-signaling kind of way. Except that by any possible definition of virtue signaling, the above appears to be a kind of virtue signaling, so now we're into virtue signaling recursion.

Well, one way to claw our way out of the rabbithole of all this is to drop the pejorative aspect of it and just acknowledge that signaling is something that everyone does. Hence me saying "identity signaling" here. You can't really escape identity signaling, or even sportsballing, but you can acknowledge that it's a thing that we all do, and there's a reason for it: people only have so much time to find out information about each other, so they're searching for clues that they might align and that, if they introduce you to their peer group, that you might align with them as well, without access to a god-like view of the universe where they know exactly what you think and exactly what kinds of things you've done and exactly what way you'll behave in the future or whether or not you share the same values. (After all, what else is virtue ethics but an ethical framework that takes this in its most condensed form as its foundation?) But it's true that at its worst, this seems to result in shallow, quick, judgmental behavior, usually based on stereotypes of the other side... which can be unfortunate or unfair to whomever is being talked about. But also on the flip side, people also do identity signal to each other because they want to create a sense of community and bonding. That's what a lot of culture is. It's worth acknowledging then that this occurs, recognizing its use and limitations, without pretending that we are above it.

So wow, that's quite a major digression, so now let's get back to "identity signaling". There is definitely a lot of identity signaling that tends to happen around the word "blockchain", for or against. Around the critiques of the worst of this, I tend to agree: I find much of the machismo and hyper-white-male privilege that surrounds some of the "blockchain" space uncomfortable and cringey.

But I also have some close friends who are not male and/or are people of color, and they tend to suffer the worst of it from inside these communities, yet still seem to find things of value in them, and then feel squeezed externally when the field is reduced to these kinds of (anti?-)patterns. There's something sad about that: on the one hand I see friends complaining about blockchain from the outside on behalf of people who, on the inside, are both struggling internally and then kind of crushed by being lumped into the same identified problems externally. This is hardly a unique problem, but I think it's worth highlighting for a moment.

But anyway, I've taken a bunch of time on this, more than I care to, maybe because (irony again?) I feel that too much of public conversation is also hyperfocusing on this aspect... whether there's a subculture around blockchain, whether or not that subculture is good or bad, etc. There's a lot worth unpacking discourse-wise, but some of the criticisms of blockchains as a technology (to the extent it even coherently is one) seem to get lumped up into all of this. It's good to provide thoughtful cultural critique, particularly one which encourages healthy social change. And we can't escape identity signaling. But as someone who's trying to figure out what properties of networked systems we do and don't want, I feel like I'm trying to navigate the machine and, for whatever reason, my foot keeps getting caught in the gears here. Well, maybe that itself is pointing to some architectural mistakes, but socially architectural ones. Still, it's useful to be able to draw boundaries around it so that we know where this part of the conversation begins and ends.

"Blockchain" as "decentralized centralization" (or "decentralized convergence")

One of the weird things about people having the idea of "blockchains" as being synonymous with "decentralization" is that it's kind of both very true and very untrue, depending on what abstraction layer you're looking at.

For a moment, I'm going to frame this in harsh terms: blockchains are decentralized centralization.

What? How dare I! You'll notice that this section is in harsh contrast to the "blockchain as handwaving towards decentralized networks in general" section... well, I am acknowledging the decentralized aspect of it, but the weird thing about a blockchain is that it's a decentralized set of nodes converging on (creating a centrality of!) a single abstract machine.

Contrast with classic actor model systems like CapTP in Spritely Goblins, or, as less good examples (because they aren't quite as behavior-oriented as they are correspondence-oriented, usually), ActivityPub or SMTP (ie, email). All of these systems involve decentralized computation and collaboration stemming from sending messages to actors (aka "distributed objects"). With CapTP this is especially clear and extreme: computations happen in parallel across many collaborating machines (and even better, many collaborating objects on many collaborating machines), and the behavior of other machines and their objects is often even opaque to you. (CapTP survives this in a beautiful way, being able to do well on anonymous, peer-to-peer, "mutually suspicious" networks. But maybe read my rambling thoughts about CapTP elsewhere.)

While to some degree there are some very clever tricks in the world of cryptography where you may be able to get back some of that opacity, this tends to be very expensive, adding yet another expensive component to the already inescapable overhead of a blockchain. A multi-party blockchain with some kind of consensus will always, by definition, be slower than a single machine operating alone.

If you are irritated by this framing: good. It's probably good to be irritated by it at least once, if you can recognize the portion of truth in it. But maybe that needs some unpacking to get there. It might be better to say "blockchains are decentralized convergence", but I have some other phrasing that might be helpful.

"Blockchain" as "a single machine that many people run"

There's value in having a single abstract machine that many people run. The most famous source of value is in the "double spending problem". How do we make sure that when someone has money, they don't spend that money twice?

Traditional accounting solves this with a linear, sequential ledger, and it turns out that the right solution boils down to the same thing in computers. Emphasis on sequential: in order to make sure money balances out right, we really do have to be able to order things.

Here's the thing though: the double spending problem was in a sense solved in terms of single computers a long time ago in the object capability security community. Capability-based Financial Instruments was written about a decade before blockchains even existed and showed off how to make a "mint" (kind of like a fiat-currency bank) that can be implemented in about 25 lines of code in the right architecture (I've ported it to Goblins, for instance) and yet has both distributed accounts and is robust against corruption on errors.

However, this seems to be running on a "single-computer based machine", and again operates like a fiat currency. Anyone can create their own fiat currency like this, and they are cheap, cheap, cheap (and fast!) to make. But it does rely on sequentiality to some degree to operate correctly (avoiding a class of attacks called "re-entrancy attacks").
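To make the shape of that design a bit more concrete, here is a toy Python sketch of the classic ocap mint/purse pattern (my own rendering of the general idea, not the actual E or Goblins code): only the Mint capability can create money, and a deposit only succeeds when the source purse really holds the funds, which is what rules out double spending on a single sequential machine.

```python
class Purse:
    """Holds a balance; deposit() is the only way money moves."""
    def __init__(self, mint, balance=0):
        self._mint = mint
        self.balance = balance

    def deposit(self, amount, source):
        # Both purses must come from the same mint, and the source
        # must actually have the funds (no double spending).
        if source._mint is not self._mint:
            raise ValueError("purses belong to different mints")
        if not 0 <= amount <= source.balance:
            raise ValueError("insufficient funds")
        source.balance -= amount
        self.balance += amount


class Mint:
    """Whoever holds the Mint capability can create money from thin air."""
    def make_purse(self, initial_balance=0):
        return Purse(self, initial_balance)


# Usage: the mint holder issues currency; everyone else can only transfer it.
mint = Mint()
alice = mint.make_purse(100)   # issuance requires the mint capability
bob = mint.make_purse(0)
bob.deposit(30, alice)         # a transfer requires only the two purses
assert (alice.balance, bob.balance) == (70, 30)
```

Nothing in that sketch needs a blockchain; sequential execution on a single machine is doing all the double-spending-prevention work.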

But this "single-computer based machine" might bother you for a couple reasons:

We might be afraid the server might crash and service will be interrupted, or worse yet, we will no longer be able to access our accounts.

Or, even if we could trade these on an open market, and maybe diversify our portfolio, maybe we don't want to have to trust a single operator or even some appointed team of operators... maybe we have a lot of money in one of these systems and we want to be sure that it won't suddenly vanish due to corruption.

Well, if our code operates deterministically, then what if from the same initial conditions (or saved snapshot of the system) we replay all input messages to the machine? Functional programmers know: we'll end up with the same result.

So okay, we might want to be sure this doesn't accidentally get corrupted, maybe for backup reasons. So maybe we submit the input messages to two computers, and then if one crashes, we just continue on with the second one until the other comes up, and then we can restore the first one from the progress the second machine made while the first one was down.

Oh hey, this is already technically a blockchain. Except our trust model is that we implicitly trust both machines.
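As a small illustration of that replay idea, here is a hypothetical Python sketch with a toy ledger standing in for the abstract machine: because the transition function is deterministic, any replica that replays the same ordered log of input messages ends up in exactly the same state, which is all the two-machine failover setup relies on.

```python
def apply(balances, message):
    # Deterministic transition: same state + same message => same next state.
    kind, account, amount = message
    new_balances = dict(balances)
    delta = amount if kind == "credit" else -amount
    new_balances[account] = new_balances.get(account, 0) + delta
    return new_balances

def replay(initial_state, log):
    state = initial_state
    for message in log:
        state = apply(state, message)
    return state

log = [("credit", "alice", 100), ("debit", "alice", 30), ("credit", "bob", 30)]

# Two independent replicas replaying the same log converge on the same state.
replica_a = replay({}, log)
replica_b = replay({}, log)
assert replica_a == replica_b == {"alice": 70, "bob": 30}
```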

Hm. Maybe we're now worried that we might face top-down government pressure to coerce some behavior on one of our nodes, or maybe we're worried that someone at a local datacenter is going to flip some bits to make themselves rich. So we actually want to spread this abstract machine out over three countries. So okay, we do that, and now we set a rule for agreeing on what the series of input messages is... if two of three nodes agree, that's good enough. Oh hey look, we've just invented the "small-quorum-style" blockchain/ledger!

(And yes, you can wire up Goblins to do just this; a hint as to how is seen in the Terminal Phase time travel demo. Actually, let's come back to that later.)
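As a toy gloss on the "two of three nodes agree" rule (my own sketch, which glosses over the genuinely hard parts of consensus such as conflicting proposals and network partitions): a message only becomes part of the agreed input sequence once a majority of the nodes has acknowledged it in that position.

```python
def accepted(acks, total_nodes=3):
    # A strict majority of the nodes must acknowledge the message.
    return 2 * len(acks) > total_nodes

shared_log = []
message = ("debit", "alice", 30)
if accepted({"node-us", "node-eu"}):   # 2 of 3 agree; the third is down or disagrees
    shared_log.append(message)
```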

Well, okay. This is probably good enough for a private financial asset, but what about if we want to make something more... global? Where nobody is in charge!

Well, we could do that too. Here's what we do.

First, we need to prevent a "swarming attack" (okay, this is generally called a "sybil attack" in the literature, but for a multitude of reasons I won't get into, I don't like that term). If a global set of peers are running this single abstract machine, we need to make sure there aren't invocations filling up the system with garbage, since we all basically have to keep that information around. Well... this is exactly where those proof-of-foo systems first come in; in fact Proof of Work's origin is in something called Hashcash, which was designed to add "friction" to disincentivize spam for email-like systems. If we don't do something friction-oriented in this category, our ledger is going to be filled with garbage too easily and too fast. We also need to agree on what the order of messages is, so we can use this mechanism in conjunction with a consensus algorithm.
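To give a feel for that kind of friction, here is a tiny hashcash-flavored proof-of-work check in Python (a sketch of the general idea, not Hashcash's or Bitcoin's actual formats): producing a qualifying nonce costs real CPU time, while verifying one is nearly free.

```python
import hashlib
from itertools import count

def proves_work(message: bytes, nonce: int, difficulty: int = 16) -> bool:
    # Cheap to verify: the hash must start with `difficulty` zero bits.
    digest = hashlib.sha256(message + str(nonce).encode()).digest()
    return (int.from_bytes(digest, "big") >> (256 - difficulty)) == 0

def find_nonce(message: bytes, difficulty: int = 16) -> int:
    # Expensive to produce: brute-force search for a qualifying nonce.
    for nonce in count():
        if proves_work(message, nonce, difficulty):
            return nonce

msg = b"please append this transaction"
nonce = find_nonce(msg)            # costs real work: the "friction"
assert proves_work(msg, nonce)     # anyone can check it almost instantly
```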

When are new units of currency issued? Well, in our original mint example, the person who set up the mint was the one given the authority to make new money out of thin air (and they can hand out attenuated versions of that authority to others as they see fit). But what if instead of handing this capability out to individuals we handed it out to anyone who can meet an abstract requirement? For instance, in zcap-ld an invoker can be any kind of entity which is specified with linked data proofs, meaning those entities can be something other than a single key... for instance, what if we delegated to an abstract invoker that was specified as being "whoever can solve the state of the machine's current proof-of-work puzzle"? Oh my gosh! We just took our 25-line mint and extended it for mining-style blockchains. And the fundamental design still applies!
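Continuing the toy mint sketch from above, one hypothetical way to picture that "abstract invoker" is a mint whose issue method succeeds for whoever presents a solution to the current proof-of-work puzzle (again, my own illustration of the idea, not zcap-ld's actual mechanics).

```python
import hashlib
from itertools import count

class PowGatedMint:
    """Toy mint whose issuance capability is held by an abstract invoker:
    whoever can solve the current proof-of-work puzzle."""
    def __init__(self, difficulty=16):
        self.difficulty = difficulty
        self.puzzle = b"genesis"
        self.balances = {}

    def solves_puzzle(self, nonce):
        digest = hashlib.sha256(self.puzzle + str(nonce).encode()).digest()
        return (int.from_bytes(digest, "big") >> (256 - self.difficulty)) == 0

    def issue(self, account, amount, nonce):
        # Anyone may call this, but it only succeeds with a valid solution;
        # issuing also advances the puzzle so the same work can't be reused.
        if not self.solves_puzzle(nonce):
            raise PermissionError("no valid proof-of-work presented")
        self.balances[account] = self.balances.get(account, 0) + amount
        self.puzzle = hashlib.sha256(self.puzzle + str(nonce).encode()).digest()

mint = PowGatedMint()
nonce = next(n for n in count() if mint.solves_puzzle(n))  # "mining"
mint.issue("whoever-solved-it", 50, nonce)
```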

With these two adjustments, we've created a "public blockchain" akin to bitcoin. And we don't need to use proof-of-work for either technically... we could swap in different mechanisms of friction / qualification.

If the set of inputs are stored as a merkle tree, then all of the system types we just looked at are technically blockchains:

A second machine as failover in a trusted environment

Three semi-trusted machines with small-scale private consensus

A public blockchain without global trust, with swarming-attack resistance and an interesting abstract capability accessible to anyone who can meet the abstract requirement (in this case, to issue some new currency).

The difference when choosing any of the above is really a question of: "what are your trust/failover requirements?"

Blockchains as time travel plus convergent inputs

If this doesn't sound believable to you, that you could create something like a "public blockchain" on top of something like Goblins so easily, consider how we might extend time travel in Terminal Phase to add multiplayer. As a reminder, here's an image:

Now, a secret thing about Terminal Phase is that the gameplay is deterministic (the random starfield in the background is not, but the gameplay is) and runs on a fixed frame-rate. This means that given the same set of keyboard inputs, the game will always play the same, every time.

Okay, well let's say we wanted some way to hand someone a replay of our last game. Chess games can be fully replayed with a very condensed notation: merely by handing someone a short list of codes, they can precisely replay the same game, every time, deterministically.

Well okay, as a first attempt at thinking this through, what if for some game of Terminal Phase I played we wrote down each keystroke I entered on my keyboard, on every tick of the game? Terminal Phase runs at 30 ticks per second. So okay, if you replay these, each one at 30 ticks per second, then yeah, you'd end up with the same gameplay every time.

It would be simple enough for me to encode these as a linked list (cons, cons, cons!) and hand them to you. You could descend all the way to the root of the list and start playing them back up (ie, play the list in reverse order) and you'd get the same result as I did. I could even stream new events to you by giving you new items to tack onto the front of the list, and you could "watch" a game I was playing live.
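Here is a hypothetical Python sketch of that cons-list of inputs: each tick's keystrokes get consed onto the front, and a spectator replays from the root (in reverse order) through the same deterministic step function, here reduced to a toy one-dimensional "game".

```python
EMPTY = None

def cons(head, tail):
    # A cons cell: (this tick's inputs, rest of the list), newest first.
    return (head, tail)

def replay(log, step, state):
    # Walk down to the root first, then apply inputs oldest-to-newest.
    ticks = []
    while log is not EMPTY:
        head, log = log
        ticks.append(head)
    for inputs in reversed(ticks):
        state = step(state, inputs)
    return state

def step(position, keys):
    # Toy deterministic "game": the ship just accumulates left/right moves.
    return position + keys.count("right") - keys.count("left")

# Newest tick at the front: the oldest input here is ["left"].
log = cons(["right"], cons(["right", "fire"], cons(["left"], EMPTY)))
assert replay(log, step, 0) == 1
```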

So now imagine that you and I want to play Terminal Phase together now, over the network. Let's imagine there are two ships, and for simplicity, we're playing cooperatively. (The same ideas can be extended to competitive play, but for narrating how real-time games work it's easier to start with a cooperative assumption.)

We could start out by wiring things up on the network so that I am allowed to press certain keys for player 1 and you are allowed to press certain keys for player 2. (Now it's worth noting that a better way to do this doesn't involve keys on the keyboard but capability references, and really that's how we'd do things if we were to bring this multiplayer idea live, but I'm trying to provide a metaphor that's easy to think about without introducing complicated-sounding terms like "c-lists" and "vat turns" that we ocap people seem to like.) So, as a first attempt, maybe if we were playing on a local area network or something, we could synchronize at every game tick: I share my input with you and you share yours, and then and only then do both of our systems actually feed them into that game tick. We'll have achieved a kind of "convergence" as to the current game state on every tick. (EDIT: I wrote "a kind of consensus" instead of "a kind of convergence" originally, and that was an error, because it misleads on what consensus algorithms tend to do.)

Except this wouldn't work very well if you and I were living far away from each other and playing over the internet... the lag time of doing this for every game tick might slow the system to a crawl... our computers wouldn't get each other's inputs as fast as the game was moving along, and would have to pause until we received each other's moves.

So okay, here's what we'll do. Remember the time-travel GUI above? As you can see, we're effectively restoring from an old snapshot. Oh! So okay. We could save a snapshot of the game every second, and then send each other our inputs as fast as we can, knowing they'll lag. So, without having seen your inputs yet, I could move my ship up and to the right and fire (and send word that I did that to you). My game would be in a "dirty state"... I haven't actually seen what you've done yet. Now suddenly I get the last set of moves you made over the network... in the last five frames, you moved down and to the left and fired. Now we've got each other's inputs... what our systems can do is secretly time travel behind the scenes to the last snapshot, then fast forward, replaying both of our inputs on each tick up until the latest state where we've both seen each other's moves (we wouldn't show the fast-forward process, we'd just show the result with the fast forward having been applied). This can happen fast enough that I might see your ship jump forward a little, and maybe your bullet kills the enemy instead of mine and the scores shift so that you actually got some points that for a moment I thought I had, but this can all happen in realtime and we don't need to slow down the game at all to do it.
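Here is a toy Python sketch of that rewind-and-fast-forward trick (my own illustration of the general rollback idea, not how Terminal Phase or Goblins actually implement it): simulate ahead optimistically with only the local inputs, and when the remote inputs arrive late, roll back to the snapshot and replay each tick with both players' inputs.

```python
import copy

def step(state, p1_keys, p2_keys):
    # Deterministic per-tick update for a toy cooperative game.
    state["p1"] += p1_keys.count("right") - p1_keys.count("left")
    state["p2"] += p2_keys.count("right") - p2_keys.count("left")
    return state

snapshot = {"p1": 0, "p2": 0}                           # last agreed state
my_inputs = {1: ["right"], 2: ["right"], 3: ["fire"]}   # ticks 1..3, known locally

# Optimistic ("dirty") state: simulate ahead using only my own inputs.
dirty = copy.deepcopy(snapshot)
for tick in (1, 2, 3):
    dirty = step(dirty, my_inputs[tick], [])

# Your inputs for those ticks arrive late over the network...
your_inputs = {1: ["left"], 2: ["left"], 3: ["fire"]}

# ...so rewind to the snapshot and fast-forward with both sets of inputs.
state = copy.deepcopy(snapshot)
for tick in (1, 2, 3):
    state = step(state, my_inputs[tick], your_inputs[tick])

assert state == {"p1": 2, "p2": -2}   # the converged state both machines now show
```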

Again, all the above can be done, but with actual wiring of capabilities instead of the keystroke metaphor... and actually, the same set of ideas can be done with any kind of system, not just a game.

And oh hey, technically, technically, technically if we both hashed each of our previous messages in the linked list and signed each one, then this would qualify as a merkle tree and then this would also qualify as a blockchain... but wait, this doesn't have anything to do with cryptocurrencies! So is it really a blockchain?
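And for the "technically, technically" point, here is a hypothetical sketch of what hashing and signing each entry of that shared input log could look like (with HMAC standing in for real public-key signatures so the example stays self-contained): each entry commits to the hash of its predecessor, which is the "chain" part.

```python
import hashlib, hmac, json

def append_block(chain, inputs, signing_key):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "inputs": inputs}, sort_keys=True)
    chain.append({
        "prev": prev_hash,
        "inputs": inputs,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
        # Stand-in for a real signature over the block contents:
        "sig": hmac.new(signing_key, body.encode(), "sha256").hexdigest(),
    })
    return chain

chain = []
append_block(chain, {"player": 1, "keys": ["right"]}, b"player1-key")
append_block(chain, {"player": 2, "keys": ["left", "fire"]}, b"player2-key")

# Each block commits to its predecessor's hash, so tampering with an early
# entry invalidates every later hash: the defining property of the chain.
assert chain[1]["prev"] == chain[0]["hash"]
```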

"Blockchain" as synonym for "cryptocurrency" but this is wrong and don't do this one

By now you've probably gotten the sense that I really was annoyed with the first section of "blockchain" as a synonym for "decentralization" (especially because blockchains are decentralized centralization/convergence) and that is completely true. But even more annoying to me is the synonym of "blockchain" with "cryptocurrency".

"Cryptocurrency" means "cryptographically based currency" and it is NOT synonymous with blockchains. Digicash precedes blockchains by a dramatic amount, but it is a cryptocurrency. The "simple mint" type system also precedes blockchains and while it can be run on a blockchain, it can also run on a solo computer/machine.

But as we saw, we could perceive multiplayer Terminal Phase as technically, technically a blockchain, even though it has nothing to do with currencies whatsoever.

So again, a blockchain is just a single, abstract, sequential machine, run by multiple parties. That's it. It's more general than cryptocurrencies, and it's not exclusive to implementing them either. One is a kind of programming-plus-cryptography use case (cryptocurrencies); the other is a kind of abstracted machine (blockchains).

So please. They are frequently combined, but don't treat them as the same thing.

Blockchains as single abstract machines on a wider network

One of my favorite talks is Mark Miller's Programming Secure Smart Contracts talk. Admittedly, I like it partly because it well illustrates some of the low-level problems I've been working on, and that might not be as useful to everyone else. But it has this lovely diagram in it:

This is better understood by watching the video, but the abstraction layers described here are basically as follows:

"Machines" are the lowest layer of abstraction on the network, but there a variety of kinds of machines. Public blockchains are one, quorum blockchains are another, solo computer machines yet another (and the simplest case, too). What's interesting then is that we can see public chains and quorums abstractly demonstrated as machines in and of themselves... even though they are run by many parties.

Vats are the next layer of abstraction: these are basically the "communicating event loops"... actors/objects live inside them, and more or less these things run sequentially.

Replace "JS ocaps" with "language ocaps" and you can see actors/objects in both Javascript and Spritely living here.

Finally, at the top are "erights" and "smart contracts", which feed into each other... "erights" are "exclusive electronic rights", and "smart contracts" are generally patterns of cooperation involving achieving mutual goals despite suspicion, generally involving the trading of these erights things (but not necessarily).

Okay, well cool! This finally explains the worldview I see blockchains on. And we can see a few curious things:

The "public chain" and "quorum" kinds of machines still boil down to a single, sequential abstract machine.

Object connections exist between the machines... ocap security. No matter whether it's run by a single computer or multiple.

Public blockchains, quorum blockchains, solo-computer machines all talk to each other, and communicate between object references on each other.

Blockchains are not magical things. They are abstracted machines on the network. Some of them have special rules that let whoever can prove they qualify for them access some well-known capabilities, but really they're just abstracted machines.

And here's an observation: you aren't ever going to move all computation to a single blockchain. Agoric's CEO, Dean Tribble, explained beautifully why on a recent podcast:

One of the problems with Ethereum is it is as tightly coupled as possible. The entire world is a single sequence of actions that runs on a computer with about the power of a cell phone. Now, that's obviously hugely valuable to be able to do commerce in a high-integrity fashion, even if you can only share a cell phone's worth of compute power with the entire rest of the world. But that's clearly gonna hit a brick wall. And we've done lots of large-scale distributed systems whether payments or cyberspace or coordination, and the fundamental model that covers all of those is islands of sequential programming in a sea of asynchronous communication. That is what the internet is about, that's what the interchain is about, that's what physics requires you to do if you want a system to scale.

Put this way, it should be obvious: are we going to replace the entire internet with something that has the power of a cell phone? To ask the question is to know the answer: of course not. Even when we do admit blockchain'y systems into our system, we're going to have to have many of them communicating with each other.

Blockchains are just machines that many people/agents run. That's it.

Some of these are encoded with some nice default programming to do some useful things, but all of them can be done in non-blockchain systems because communicating islands of sequential processes is the generalization. You might still want a blockchain, ie you might want multiple parties running one of those machines as a shared abstract machine, but how you configure that blockchain from there might depend on your trust and integrity requirements.

What do I think of blockchains?

I've covered a wide variety of perspectives of "what is a blockchain" in this article.

On the worse end of things are the parts involving hand-wavey confusion about decentralization, mistaken ideas of them being tied to cryptocurrencies, marketing hype, cultural assumptions, and some real, but not intrinsic, cultural problems.

In the middle, I am particularly keen on highlighting the similarity between the term "blockchain" and the term "roguelike", how both of them might boil down to some key ideas or not, but more importantly they're both a rough family of ideas that diverge from one highly influential source (Bitcoin and Rogue respectively). This is also the source of much of the "shouting past each other", because many people are referring to different components that they view as essential or inessential. Many of these pieces may be useful or harmful in isolation, in small amounts, in large amounts, but much of the arguing (and posturing) involves highlighting different things.

On the better end of things is a revelation, that blockchains are just another way of abstracting a computer so that multiple parties can run it. The particular decisions and use cases layered on top of this fundamental design are highly variant.

Having made the waters clear again, we could muddy them. A friend once tried to convince me that all computers are technically blockchains, that blockchains are the generalization of computing, and the case of a solo computer is merely one where a blockchain is run only by one party and no transaction history or old state is kept around. Maybe, but I don't think this is very useful. You can go in either direction, and I think the time travel and Terminal Phase section maybe makes that clear to me, but I'm not so sure how it lands with others I suppose. But a term tends to be useful in terms of what it introduces, and calling everything a blockchain seems to make the term even less useful than it already is. While a blockchain could be one or more parties running a sequential machine as the generalization, I suggest we stick to two or more.

Blockchains are not magic pixie dust; putting something on a blockchain does not make it work better or more decentralized... indeed, what a blockchain really does is converge (or re-centralize) a machine out of a decentralized set of computers. And it always does so with some cost, some set of overhead... but what those costs and overhead are varies depending on the configuration decisions. Those decisions should always stem from some careful thinking about what the trust and integrity needs are... one of the more frustrating things about blockchains being a technology of great hype and low understanding is that such care is much less common than it should be.

Having a blockchain, as a convergent machine, can be useful. But how that abstracted convergent machine is arranged can diverge dramatically; if we aren't talking about the same choices, we might shout past each other. Still, it may be an unfair ask to request that those without a deep technical background go into technical specifics, and I recognize that; in a sense there is something to be gained from speaking about broad, fuzzy sets and the patterns they seem to carry. A gut-sense assertion from a set of loosely observed behaviors can be a useful starting point. But to get at the root of what those gut senses actually map to, we will have to be specific, and we should encourage that specificity where we can (without being rude about it) and help others see those components as well.

But ultimately, as convergent machines, blockchains will not operate alone. I think the system that will hook them all together should be CapTP. But no matter the underlying protocol abstraction, blockchains are just abstract machines on the network.

Having finally disentangled what blockchains are, I think soon I would like to move onto what cryptocurrencies are. Knowing that they are not necessarily tied to blockchains opens us up to considering an ecosystem, even an interoperable and exchangeable one, of varying cryptographically based financial instruments, and the different roles and uses they might play. But that is another post of its own, for whenever I can get to it, I suppose.

ADDENDUM: After writing this post, I had several conversations with several blockchain-oriented people. Each of them roughly seemed to agree that Bitcoin was roughly the prototypical "blockchain", but each of them also seemed to highlight different things they thought were "essential" to what a "blockchain" is: some kinds of consensus algorithms being better than others, what kinds of social arrangements are enabled, whether transferable assets are encoded on the chain, etc. To start with, I feel like this does confirm some of the premise of this post: that Bitcoin is the starting point, but like Rogue and "roguelikes", "blockchains" are an exploration space stemming from a particular influential technical piece.

However my friend Kate Sills (who also gave me a much better definition for "smart contracts", added above) highlighted something that I hadn't talked about much in my article so far, which I do agree deserves expansion. Kate said: "I do think there is something huge missing from your piece. Bitcoin is amazing because it aligns incentives among actors who otherwise have no goals in common."

I agree that there's something important here, and this definition of "blockchain" maybe does explain why while from a computer science perspective, perhaps signed git trees do resemble blockchains, they don't seem to fit within the realm of what most people are thinking about... while git might be a tool used by several people with aligned incentives, it is not generally itself the layer of incentive-alignment.

Friday, 23. April 2021

Ben Werdmüller

The return of the decentralized web

I’ve been having a lot of really inspiring conversations about decentralization lately. Decentralization doesn’t require the blockchain - and pre-dates it - but the rise of blockchain technologies have allowed more people to become comfortable with the idea and why it’s valuable.

Decentralized platforms have been part of virtually my entire career. I left my first job out of university to start Elgg, a platform that allowed anyone to make an online space for their communities on their own terms. It started in education and developed an ecosystem there, before expanding to far wider use cases. Across it all, the guiding principle was that one size didn’t fit all: every community should be able to dictate not just its own features, but its own community dynamics. We were heavily involved in interoperability and federation conversations, and my biggest regret is that we didn’t push our nascent Open Data Definition forward into an ActivityStreams-like data format. To this day, though, people are using Elgg to support disparate communities across the web. Although they use Elgg’s software, the Elgg Foundation doesn’t strip-mine those communities: all value (financial and otherwise) stays with them.

Known was built on a similar principle, albeit for a world of ubiquitous connectivity where web-capable devices sit in everyone’s pocket. I use it every day (for example, to power this article), as many others do.

Much later, I was the first employee at Julien Genestoux’s Unlock, which is a decentralized protocol for access control built on top of the Ethereum blockchain. Here, a piece of content is “locked” with an NFT, and you can sell or share access via keys. If a user connects to content (which could be anything from a written piece to a real-life physical event) with a key for the lock, they gain access. Because it’s an open protocol, one size once again doesn’t fit all: anyone can use the underlying lock/key mechanism to build something new. Because it’s decentralized, the owner of the content keeps all the value.
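As a rough mental model only (a toy Python sketch of the general lock-and-key idea, not Unlock's actual contracts or API): a lock records which holders currently have valid keys, and access to the locked content is simply a check against the lock.

```python
import time

class Lock:
    """Toy model of an access-control lock: holders buy or are granted keys."""
    def __init__(self, price, duration_seconds):
        self.price = price
        self.duration = duration_seconds
        self._keys = {}   # holder -> key expiry timestamp

    def grant_key(self, holder):
        # e.g. called after a purchase at self.price, or as a free share.
        self._keys[holder] = time.time() + self.duration

    def has_valid_key(self, holder):
        return self._keys.get(holder, 0) > time.time()

essay_lock = Lock(price=5, duration_seconds=30 * 24 * 3600)
essay_lock.grant_key("reader-123")                 # hypothetical holder id
assert essay_lock.has_valid_key("reader-123")      # key holder gets the content
assert not essay_lock.has_valid_key("reader-456")  # everyone else does not
```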

Contrast that principle with Facebook, which has been the flag-bearer for the strip-mining of communities across the web for well over a decade now. Its business model means that it’s super-easy to create a community space, which it then monetizes for all it’s worth: you even have to pay to effectively reach the people you connected with to begin with. We’ve all become familiar with the societal harms of its targeted model, but even beyond that, centralization has inherent harms. When every online interaction and discussion is templated to the same team’s design decisions (and both the incentives and assumptions behind those decisions), those interactions are inevitably shaped by those templates. It leads to what Amber Case calls the templated self. Each of those conversations consequently occurs in a form that serves Facebook (or Twitter, etc) rather than the community itself.

It’s easy to discount blockchain; I did, for many years. (It was actually DADA, one of our investments at Matter, who showed me the way.) And there’s certainly a lot that can be said about the environmental impact and more. We should talk about them now: it’s important to apply pressure to change to proof of stake and other models beyond. The climate crisis can’t be brushed aside. But we shouldn’t throw out the baby with the bathwater: blockchain platforms have created value in decentralization, and provided a meaningful alternative to invasive, centralized silos for the first time in a generation. Those things are impermanent; we won’t be talking about harmful, slow proof of work algorithms in a few years, in the same way we don’t talk about HTML 1 today.

What does it look like to build an ethical, decentralized platform for community and discourse that is also self-sustaining, using these ideas? How can we distribute equity among participants of the community rather than sucking it up into a centralized megacorporation or institutional investors? That question has been giving me energy. And there are more and more people thinking along similar lines.

Animated GIF NFTs and crypto speculation aren’t very interesting at best (at least to me), and at worst are a reflection of a kind of reductive greed that has seriously negative societal effects. But looking beyond the gold rush, the conversations I’m having remind me of the conversations I used to have about the original web. The idea of decentralization is empowering. The idea of a community supporting itself organically is empowering. The idea of communities led by peer-to-peer self-governance is empowering. The idea of movement leaders being organically supported in their work is empowering. And we’re now in a position where if we pull those threads a little more, it’s not obvious that these ideas will fail. That’s an exciting place to be.

Thursday, 22. April 2021

FACILELOGIN

My Personal Brand is My Professional Success Story!

This blog is the script of a talk I did internally at WSO2 in 2015 to inspire the team to build their personal brand. Found this today, buried in my Google Docs, and thought of sharing publicly (unedited), in case if someone finds it helpful!

Good Morning folks, thanks for joining in — it’s my great pleasure to do this session on ‘My Personal Brand is My Professional Success Story’.

First of all I must thank Asanka, Zai, Charitha, Usama and the entire marketing team for giving me the opportunity to present on this topic. At first sight, I thought it would be tough to present on a topic that I have not consciously focused on — or purposely set out to achieve myself. Then again, thinking further on the title, I realized that, whether we like it or not, each one of us has a personal brand.

Your personal brand is the image of you that you cultivate in others’ minds. In other words, it is how others think about you. This raises a question in all ‘radical’ minds — why do we have to care about what others think about us? We do our bit in the way we want, so should we care about personal branding at all? It is extremely important to find answers to this question, because if we are not convinced of something, we will never do it.

In my view there are no lone individuals — there is a bigger, greater team behind each individual. This bigger, greater team includes your parents, siblings, spouse, kids, relations, friends, colleagues and many more. Like it or not, you are more or less a reflection of this team behind you. As we grow up as human beings, the team behind us — the team which influences us — widens. It includes not just well-wishers, but also haters, competitors and many more. Still, you as an individual are a reflection of this team. Sometimes — even in most cases — the haters can motivate you more than the well-wishers. This team also includes people you have never talked to, people you have never seen, people who never existed, like some characters in books. This is the team behind you — and at the same time, like it or not, you become a member of the team behind another individual or set of individuals. In other words, you are influenced by a team, and then again you influence another set of individuals.

Let me take a quick example. Everyone knows Mahatma Gandhi. In his own words, Gandhi once said — “Three moderns have left a deep impress on my life and captivated me. Raychandbhai by his living contact; Tolstoy by his book, “The Kingdom of God is within you”; and Ruskin by his “Unto This Last”. That was what influenced him — today there are countless individuals who are influenced by Gandhi.

Arguably, CNBC, in 2014, named Steve Jobs the most influential person of the last 25 years. Thousands of people are influenced by Steve Jobs, and at the same time there are many other people who influenced Jobs — Edwin H. Land, who co-founded Polaroid and made a number of significant advancements in the field of photography, is one of them Jobs used to talk about.

In short, whether you like it or not, more or less, you are influenced by others and then again you influence others. Now it is a question of how much of an impact you want to make on the rest of the world before you die, to make this world a better place than it looks today.

If you want to make a good, positive impact on others, you care about how they think about you. If you cannot build a positive image of yourself in their minds, you will find it extremely hard to make a positive impact in their lives. The positive image of you is the reflection of your character. If you have a bad character, it is extremely hard to build a good image out of it, though not quite impossible. But if your character is good, the positive image is the bonus you get for it. Personal branding requires a little more than having a good image — you need to learn to express yourself — not to market yourself, but to express yourself. Everyone in history who has made a positive impact on the world has expressed themselves. The way Gandhi chose to express himself is not the same one Steve Jobs picked.

The rest of the talk from here onwards, is about, how to build a good image and then how to express your character to the rest, to build a positive personal brand.

In my view, every one of us should have a vision for life. A vision for life is what drives you into the future and motivates you to wake up every morning. If you don’t have one — start thinking about it from today. Think about what motivates you to do what you do every day. Having a good vision is the core of building a great image.

The vision has to be inspirational — a great vision statement inspires and moves us. It is a motivational force that compels action. You recognize a great vision statement when you find it difficult not to be inspired.

The vision has to be challenging — the best vision statements challenge us to become better. In this way, a vision statement requires us to stretch ourselves in pursuit of the vision we seek to achieve. The vision is not an ‘easy target’; it is something that, if achieved, would represent a sense of pride and fulfillment.

The vision has to be achievable — the vision must not be so far-fetched that it is outside of our reach. It must be conceivably possible, though not probable without additional effort.

When we start working for a company, we get used to spending most of our time working there. If your vision for life does not match the vision of the company you work for — there could be many conflicts and you won’t be productive. If your vision for life is to make this world a better place, you cannot work for a company which produces cigarettes or weapons.

The second most important thing in building a good image is your integrity. Oprah Winfrey, a well-respected TV talk show host, actress, producer and philanthropist, says, “Real integrity is doing the right thing, knowing that nobody’s going to know whether you did it or not.” I don’t think there is a better way of explaining ‘integrity’ than this. It captures everything it needs to.

I have visited and talked with many WSO2 customers over the last eight years. We never talk to a customer with the intention of selling a product. The first thing we do is listen to them and learn from them; then we all work towards the best solution to the problem they have. Finally we see how WSO2 could fit into the solution. If it is not a perfect fit — we never lie — we identify the gaps and find a way to move forward by filling those gaps. Most of the time we win the customer at the second stage, when we build the solution for them, and in many cases they agree to go ahead with us even when we are not the perfect match for their problem. That is mainly because of the level of integrity we demonstrate as a company.

No one is perfect — which also implies that everyone makes mistakes. A person with a high level of integrity would never hide mistakes, but rather would accept them, apologize for them and fix them. They would never lie — never say something to one person and something else to another. Mark Twain once said, “If you tell the truth, you don’t have to remember anything.”

In short, your vision for life will drive you into the future, while integrity is the cornerstone of your personal brand.

The third most important thing you should do in building a positive image is to stand up against negativity. Do not let negativity kill your productivity, enthusiasm, passion and spirit. People who spread negativity are people who feel extremely insecure in their current state. They only have complaints — no suggestions. Their feedback is negative, not constructive. They see only the bad — not a single bit of good. Identifying this type of person is not that hard — first you need to shield yourself from negativity, then you need to protect your team. Even just by keeping silent when you hear something negative, you indirectly contribute to spreading it — fix it at that very point. If you are closely following the US presidential election campaign, you might have noticed that Donald Trump, the Republican front runner at the moment, is being heavily criticized for staying silent and not correcting a questioner at one of his campaign events who said that Muslims are a problem and that Barack Obama is a Muslim. Even though Trump is still the frontrunner, his popularity has gone down considerably after that exchange.

The fourth most important thing you should do in building a positive image is, when you do something, to do it to a level where it can make an impact. If you believe something is right, go for it and make it happen. At the end of the day you may fail — but look back and see whether you have contributed your best — if so, you will never be frustrated — no regrets.

Expressing an idea is important — but representing an idea is much more important. When you represent something, you own it. If you want to do something to make an impact, you must own it. You should not be someone who talks the talk but does not walk the walk.

Tolerating criticism and accepting constructive feedback is another key aspect of building a positive image. There is no better source than criticism to validate the direction we are heading in and to learn. Bill Gates once said, ‘Your most unhappy customers are your greatest source of learning’.

We have discussed so far the need to build a positive image and how to do it. Next we will focus on how to build a personal brand by expressing yourself. As we discussed before, personal branding requires a little more than having a good image — you need to learn to express yourself — not to market yourself. If you already have a positive image, being a little expressive will build you a positive personal brand. If you already have a negative image, being a little expressive will build you a negative personal brand. The image you build is a reflection of your character. That includes your roles as a father, son, brother, husband, colleague, friend, mentor, developer, architect and many more. You can build an image as a good father and a bad son — or as a good son and a bad brother — or as a good friend and a bad developer — or any combination. But, more or less, ultimately your true image is how you do overall. You can be the best developer in the company, but then again, if you do not understand the value of respecting each other’s religions and cultural values — or, in a single word, if you are a racist — your top skills as a developer are worthless.

You need to pick how you want to impact the world — or how you want the world to see you. That’s your personal brand — and you build it on top of your character, or the image. Your overall character is the shelter for your personal brand. If you do not build it right — if you find holes in it — you cannot protect your brand, even from a light shower. That’s why building the right character comes well before building a personal brand.

In my view, the area where you can make the most positive impact on the world is the area that you are most passionate about. If you are extremely worried and angry about child labour — you can be a thought leader in protesting against child labour. If you are extremely worried and angry about human rights violations — you can be a thought leader in protecting human rights. If you are extremely passionate about integration technologies, you can be a thought leader in the integration space. If you are extremely passionate about machine learning, you can be a thought leader in the machine learning space. If you are passionate about APIs — you can be a thought leader in the API space. If you are passionate about Big Data, you can be a thought leader in the Big Data space. If you are passionate about Identity and Access Management, you can be a thought leader in the Identity and Access Management space. Opportunities are limitless — but remember our ground rule — if you do something, do it to a level where it can make a positive impact. You do not need to worry about being a thought leader; when you make a good positive impact, you will become a thought leader automatically.

Once you decide the area where you want to make an impact — the rest depends on how good you are as a communicator. Communication is critically important because that’s the only way you can reach your audience. Content marketing is the best way to build a brand and reputation online; when people look for information, they tend to go back to sources that were helpful to them. If you can become a trusted source of information through your content, over time you’ll become known as the expert in your specific field. It’s best to start your own blog and update it on a regular basis — at least weekly. If you do not update regularly, you lose your audience. At the start it would be tough — but once you make it a practice, it will start to happen effortlessly. Another key principle I would like to highlight here is the difference between good and great. Most of the time the difference between good and great lies heavily in how well you do the tiny, little things. You may spend hours writing a blog post — finding the content, validating the content and getting it all in order. But we are a bit lazy about putting in another five to ten minutes of effort to format the post, publish it on DZone and other blog aggregators, share it on social media sites — and do a little more. This additional ten minutes of effort could easily take your blog post from being a good one to a great one — and would also attract a larger audience.

Regularly participating in mailing lists related to the subject of your interest is another way of passing your message to the rest of the world. These mailing lists may be within WSO2 or outside. Look for standards bodies like W3C, IETF, OASIS — and any other communities that share your interests — and eagerly participate in related discussions. Beyond the mailing lists, look for interesting groups on Facebook, LinkedIn, Stack Overflow and wherever else possible, and make your mark.

Webinars at WSO2 are another key medium for passing your message to the audience of your interest. If you want to be a thought leader in your product space, then your responsibility does not end at the moment you release the product. You need to come up with a plan for evangelization — and webinars could be extremely helpful.

At WSO2 you get a massive number of opportunities to build your brand. Your personal brand is important to you as well as to the company you serve. A few years back, we had a VP of Marketing called Katie Poplin — and I personally was motivated by some of the concepts she put forward. One thing she believed was that, in the open source community, the brand value of individuals is much higher and more trustworthy than that of companies. People tend to think everything a company shares is part of its marketing propaganda — which may not reflect the real value. But what individuals share is their first-hand experience. We also had monthly awards for best blogger, best evangelist and best article. If I remember correctly I won both the best blogger and best evangelist awards in a couple of months and it was fun :-).

Then again don’t just get constrained by the the opportunities you get from WSO2. Always look for what is happening outside. Try to get your articles published in external portals. Also — look for writing books. Writing a book is not hard as it looks to be. First you need to come up with a proposal, with an attractive topic, in your domain of expertise, and then submit it to few publishers. Most of the publishers accept book proposals and if you go to their web sites, you will find everything you need to know, on writing books — go for it!.

Conferences and meetups are another way to establish yourself as a prominent speaker in the corresponding domain. Then again, getting a speaking opportunity will depend on how well you have done your homework.

These are only a few techniques to build your brand in the domain you are interested in, on top of your personal image or character. Building your personal brand is a focused exercise, not just a piece of cake. It’s a journey, not a destination. Once you have built it, maintaining it and protecting it is even harder. As we discussed before, your image or character is the shelter or the shield of your personal brand. If you build your character consciously, that’ll help you protect your brand.

Finally, to wrap up: in this session we discussed the importance of brand building, how to build your character and image, and how to build a personal brand under the shelter of your character. Thank you very much.

Wednesday, 21. April 2021

Hyperonomy Digital Identity Lab

The Verifiable Economy: Architecture Reference Model (VE-ARM) 0.1: Original Concepts [OLD]

Michael Herman
Hyperonomy Digital Identity Lab
Trusted Digital Web Project
Parallelspace Corporation

NOTE: This article has been superseded by a newer article:

The Verifiable Economy: Architecture Reference Model (VE-ARM) 0.2: Step-by-Step.

Introduction

This visualization represents the first complete iteration of The Verifiable Economy Architecture Reference Model (VE-ARM). It is the first complete example of a Fully-Decentralized Object Model (FDOM) that unites the following into a single integrated model:

Verifiable Identifiers, Decentralized Identifiers (DIDs), and DID Documents;

Verifiable Claims, Relationships, and Verifiable Credentials (VCs); and

Verifiable Capability Proclamations, Verifiable Capability Invocations, and Verifiable Capability Authorizations (VCAs).

Background

The scenario used to model the VE-ARM is an example of a citizen (Erin) of a fictional Canadian province called Sovronia holding a valid physical Sovronia Driver’s License (Erin RW SDL) as well as a digital, verifiable Sovronia Driver’s License (Erin SDL).

Figure 1. Erin’s Real-World Sovronia Driver’s License (Erin RW SDL)

Creation of the Verifiable Economy Architecture Reference Model (VE-ARM)

The underlying model was built automatically using a series of Neo4j Cypher queries running against a collection of actual DID Document, Verifiable Credential, and Verifiable Capability Authorization JSON files. The visualization was laid out using the Neo4j Browser. The resulting layout was manually optimized to produce the final version of the visualization that appears below. The numbered markers were added using Microsoft PowerPoint.

The Legends and Narration sections that follow further describe the VE-ARM in more detail. A whitepaper will be available shortly. The whitepaper will contain copies of the underlying DID Document, Verifiable Credential, and Verifiable Capability Authorization JSON files.

Click the image to enlarge it.

Figure 2. The Verifiable Economy Architecture Reference Model (VE-ARM)

Legend

Figure 3. Legend

Narrative

The numbered bullets in the following narrative refer to the corresponding numbered markers in Figure 2.

SOVRONA, a DID Provider (sovrona.com)

1. SOVRONA Organization. SOVRONA is an Organization and the primary Real-World DID Provider (RW_DIDPROVIDER) for the citizens and government of Sovronia, a fictitious province in Canada. SOVRONA controls a Digital Wallet (PDR (Personal Data Registry)), SOVRONA D Wallet, as well as the SOVRONA Verifiable Data Registry (VDR).

2. SOVRONA D Wallet. SOVRONA D Wallet is a Digital Wallet (PDR (Private Data Registry)) that is controlled by SOVRONA, an Organization.

3. SOVRONA DD. SOVRONA DD is the primary DIDDOC (DID Document) for SOVRONA, an Organization.

4. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1 is the identifier for the primary AGENT for SOVRONA, an Organization.

5. http://services.sovrona.com/agent. http://services.sovrona.com/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by SOVRONA, an Organization.

6. SOVRONA VDR. SOVRONA VDR is the primary VDR (Verifiable Data Registry) controlled by SOVRONA, an Organization. The SOVRONA VDR is used to host the SVRN DID Method.
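To make the narration easier to follow, here is a hypothetical sketch (as a Python dict, since the underlying JSON files will only appear in the forthcoming whitepaper) of roughly how items 3 to 5 could fit together in a minimal DID Document. The field names follow the general W3C DID Core shape and are my assumptions, not the actual SOVRONA documents.

```python
# Hypothetical, minimal shape of SOVRONA DD (item 3); the field names are
# assumptions in the spirit of W3C DID Core, not the real SOVRONA JSON.
sovrona_did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51",
    "service": [
        {
            # Item 4: the identifier of SOVRONA's primary AGENT.
            "id": "DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1",
            "type": "Agent",
            # Item 5: the service endpoint where that agent is reached.
            "serviceEndpoint": "http://services.sovrona.com/agent",
        }
    ],
}
```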

Province of Sovronia, an Organization and Nation State (sovronia.ca)

7. PoS Nation State. The Province of Sovronia is a (fictitious) Province (RW_NATIONSTATE (Real-World Nation State)) in Canada and the legal government jurisdiction for the citizens of the province. The Province of Sovronia is an Organization. The Province of Sovronia issues Real-World Sovronia Driver’s Licenses (SDLs) but relies on SOVRONA to issue digital, verifiable SDLs.

8. PoS D Wallet. PoS D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by the Province of Sovronia, an Organization.

9. PoS DD. PoS DD is the primary DIDDOC (DID Document) for the Province of Sovronia, an Organization.

10. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1 is the identifier for the primary AGENT for the Province of Sovronia, an Organization.

Erin Amanda Lee Anderson, a Person and Citizen of Sovronia (and Sovronia Driver’s License Holder)

11. Erin. Erin is a RW_PERSON (Real-World Person) and a citizen of the Province of Sovronia. Erin also holds a (valid) Sovronia Driver’s License (SDL) and controls a Real-World Wallet (RW_WALLET) as well as a Digital Wallet (PDR).

12. Erin D Wallet. Erin D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by Erin, a Person.

13. Erin DD. Erin DD is the primary DIDDOC (DID Document) for Erin, a Person.

14. Erin RW Wallet. Erin RW Wallet is a RW_WALLET (Real-World (Leather) Wallet) and it is used to hold Erin’s Real-World Sovronia Driver’s License (Erin RW SDL). Erin RW Wallet is owned and controlled by Erin.

20. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1 is the identifier for the primary AGENT for Erin, a Person.

Erin’s Sovronia Driver’s License Verifiable Identifiers, Decentralized Identifiers (DIDs), and DID Documents; Verifiable Claims, Relationships, and Verifiable Credentials (VCs); and Verifiable Capability Proclamations, Verifiable Capability Invocations, and Verifiable Capability Authorizations (VCAs).

15. Erin RW SDL. Erin RW SDL is Erin’s RW_SDL (Real-World Sovronia Driver’s License) and it is held by Erin in Erin’s RW Wallet.

16. Erin SDL DD. Erin SDL DD is the primary DIDDOC (DID Document) for Erin’s digital, verifiable SDL.

17. Erin SDL Prop VC DD. Erin SDL Prop VC DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop VC, a Verifiable Credential associated with the DID in Erin SDL Prop VC DD.

18. Erin SDL Prop VC. Erin SDL Prop VC is the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). It is associated with the DID in Erin SDL Prop VC DD.

19. LicenseBackgroundImage. LicenseBackgroundImage is an IPFSIMAGE (IPFS Image Resource) used to store the Background License Image to be used in Erin’s digital and verifiable SDL. The URL of this resource is one of the property values represented in the Erin SDL Prop VC.

19. PhotoImage. PhotoImage is an IPFSIMAGE (IPFS Image Resource) used to store Erin’s official photo. The URL of this resource is one of the property values represented in the Erin SDL Prop VC.

19. ProvinceStateLogoImage. ProvinceStateLogoImage is an IPFSIMAGE (IPFS Image Resource) used to store the official Provincial (or State) Logo Image to be used in Erin’s digital and verifiable SDL. The URL of this resource is one of the property values represented in the Erin SDL Prop VC.

19. SignatureImage. SignatureImage is an IPFSIMAGE (IPFS Image Resource) used to store the image of Erin’s official signature. The URL of this resource is one of the property values represented in the Erin SDL Prop VC.

21. DID:SVRN:LICENSE:999902-638#fdom1. DID:SVRN:LICENSE:999902-638#fdom1 is the identifier for the primary AGENT for Erin SDL DD, the DID Document for the “root” of Erin’s digital, verifiable Sovronia Driver’s License.

22. DID:SVRN:VC:0B114A04-2559-4C68-AE43-B7004646BD76#fdom1. DID:SVRN:VC:0B114A04-2559-4C68-AE43-B7004646BD76#fdom1 is the identifier for the primary AGENT for Erin SDL Prop VC DD, the DID Document for the Verified Credential used to represent the properties (and values) of Erin’s digital, verifiable Sovronia Driver’s License.

23. http://services.sovronia.ca/agent. http://services.sovronia.ca/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by the Province of Sovronia, an Organization. This includes all of the DID(s) and DID Document(s) associated with Erin and Erin’s SDL.

Erin’s Sovronia Driver’s License Verifiable Capability Authorizations (VCAs)

26. Erin SDL MVCA. Erin SDL MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a DID Provider. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

25. Erin SDL VCA. Erin SDL VCA is the Verifiable Capability Authorization (VCA) created for Erin’s SDL Prop VC DD. The VCA was issued by the Province of Sovronia authorizing Erin to be able to present the properties (and their values) of Erin’s SDL to a third party using the Present method associated with Erin’s SDL Prop VC and supported (implemented) by Erin’s AGENT. The parent of Erin’s SDL VCA is the Erin SDL MVCA. (This is not illustrated correctly in the current version of Figure 2.)

24. Erin SDL VCA MI. Erin SDL VCA MI is an example of a VCA Method Invocation (VCA MI) that uses the Erin SDL VCA, which authorizes the potential execution of the Present method by Erin against Erin’s SDL Prop VC. (This is not illustrated correctly in the current version of Figure 2.)

NOTE: The domains sovrona.com and sovronia.ca are owned by the author.


Here's Tom with the Weather

Vaccination Achievement Unlocked

I am grateful and feel fortunate to have received my 2nd Moderna shot today. I hope the vaccinations become more widely available around the world.

I am grateful and feel fortunate to have received my 2nd Moderna shot today. I hope the vaccinations become more widely available around the world.


Mike Jones: self-issued

OpenID Connect Presentation at IIW XXXII

I gave the following invited “101” session presentation at the 32nd Internet Identity Workshop (IIW) on Tuesday, April 20, 2021: Introduction to OpenID Connect (PowerPoint) (PDF) The session was well attended. There was a good discussion about uses of Self-Issued OpenID Providers.

I gave the following invited “101” session presentation at the 32nd Internet Identity Workshop (IIW) on Tuesday, April 20, 2021:

Introduction to OpenID Connect (PowerPoint) (PDF)

The session was well attended. There was a good discussion about uses of Self-Issued OpenID Providers.


OAuth 2.0 JWT Secured Authorization Request (JAR) sent back to the RFC Editor

As described in my last post about OAuth JAR, after it was first sent to the RFC Editor, the IESG requested an additional round of IETF feedback. I’m happy to report that, having addressed this feedback, the spec has now been sent back to the RFC Editor. As a reminder, this specification takes the JWT […]

As described in my last post about OAuth JAR, after it was first sent to the RFC Editor, the IESG requested an additional round of IETF feedback. I’m happy to report that, having addressed this feedback, the spec has now been sent back to the RFC Editor.

As a reminder, this specification takes the JWT Request Object from Section 6 of OpenID Connect Core (Passing Request Parameters as JWTs) and makes this functionality available for pure OAuth 2.0 applications – and does so without introducing breaking changes. This is one of a series of specifications bringing functionality originally developed for OpenID Connect to the OAuth 2.0 ecosystem. Other such specifications included OAuth 2.0 Dynamic Client Registration Protocol [RFC 7591] and OAuth 2.0 Authorization Server Metadata [RFC 8414].

The specification is available at:

https://tools.ietf.org/html/draft-ietf-oauth-jwsreq-33

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-oauth-jwsreq-33.html

Ben Werdmüller

Guilty. Rightly so. But I want to see ...

Guilty. Rightly so. But I want to see justice for all victims of the police.

Guilty. Rightly so.

But I want to see justice for all victims of the police.

Tuesday, 20. April 2021

Ben Werdmüller

Trying out live discussion

I'm experimenting with adding live discussion to every post. Comments are powered by Cactus, which in turn is powered by the decentralized Matrix project: they're not monetized or tracked, and you can choose to take part using the Matrix client of your choice instead of on my website. Comments are pseudonymous by default, but you can create a Matrix profile (or log in if you already have one) t

I'm experimenting with adding live discussion to every post.

Comments are powered by Cactus, which in turn is powered by the decentralized Matrix project: they're not monetized or tracked, and you can choose to take part using the Matrix client of your choice instead of on my website. Comments are pseudonymous by default, but you can create a Matrix profile (or log in if you already have one) to attach your identity.

I love the idea of posts on my site as a starting point for wider discussion. It'll allow me to pose questions more effectively, and for all of you to meet each other. The internet is about community, not one-way broadcasts; I'm excited to see how this goes.

Monday, 19. April 2021

Bill Wendel's Real Estate Cafe

What will happen to housing prices when artificially low inventory hits a tipping point?

Real Estate Cafe has used the hashtag #Covid_ImpactRE to tweet about market distortions during the totally artificial housing market of the past year. Yesterday’s tweet… The post What will happen to housing prices when artificially low inventory hits a tipping point? first appeared on Real Estate Cafe.

Real Estate Cafe has used the hashtag #Covid_ImpactRE to tweet about market distortions during the totally artificial housing market of the past year. Yesterday’s tweet…

The post What will happen to housing prices when artificially low inventory hits a tipping point? first appeared on Real Estate Cafe.


Damien Bod

Securing multiple Auth0 APIs in ASP.NET Core using OAuth Bearer tokens

This article shows a strategy for securing multiple APIs which have different authorization requirements, even though the tokens are issued by the same authority. Auth0 is used as the identity provider. A user API and a service API are implemented in the ASP.NET Core API project. The access token for the user API data is created […]

This article shows a strategy for securing multiple APIs which have different authorization requirements, even though the tokens are issued by the same authority. Auth0 is used as the identity provider. A user API and a service API are implemented in the ASP.NET Core API project. The access token for the user API data is created using an OpenID Connect code flow with PKCE authentication, and the service API access token is created using the client credentials flow in the trusted backend of the Blazor application. It is important that each access token only works for its intended API.

Code: https://github.com/damienbod/SeparatingApisPerSecurityLevel

Blogs in this series

Securing multiple Auth0 APIs in ASP.NET Core using OAuth Bearer tokens
Securing OAuth Bearer tokens from multiple Identity Providers in an ASP.NET Core API

Setup

The projects are setup to use a Blazor WASM application hosted in ASP.NET Core secured using the Open ID Connect code flow with PKCE and the BFF pattern. Cookies are used to persist the session. This application uses two separate APIs, a user data API and a service API. The access token from the OIDC authentication is used to access the user data API and a client credentials flow is used to get an access token for the service API. Auth0 is setup using a regular web application and an API configuration. A scope was added to the API which is requested in the client application and validated in the API project.

Implementing the APIs in ASP.NET Core

OAuth2 JwtBearer auth is used to secure the APIs. As we use the same Authority and the same Audience, a single scheme can be used for both applications. We use the default JwtBearerDefaults.AuthenticationScheme.

services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
    options.Authority = "https://dev-damienbod.eu.auth0.com/";
    options.Audience = "https://auth0-api1";
});

The AddAuthorization method is used to set up the policies so that each API can verify that the correct token was used to request the data. Two policies are added, one for the user access token and one for the service access token. The access token created using the client credentials flow with Auth0 can be authorized using the azp claim and the Auth0 gty claim. The API client ID is validated using the token claims. The user access token is validated using an IAuthorizationHandler implementation. A default policy is added in the AddControllers method to require an authenticated user, meaning a valid access token.

services.AddSingleton<IAuthorizationHandler, UserApiScopeHandler>();

services.AddAuthorization(policies =>
{
    policies.AddPolicy("p-user-api-auth0", p =>
    {
        p.Requirements.Add(new UserApiScopeHandlerRequirement());
        // Validate id of application for which the token was created
        p.RequireClaim("azp", "AScjLo16UadTQRIt2Zm1xLHVaEaE1feA");
    });
    policies.AddPolicy("p-service-api-auth0", p =>
    {
        // Validate id of application for which the token was created
        p.RequireClaim("azp", "naWWz6gdxtbQ68Hd2oAehABmmGM9m1zJ");
        p.RequireClaim("gty", "client-credentials");
    });
});

services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

Swagger is added with an OAuth UI so that we can add access tokens manually to test the APIs.

services.AddSwaggerGen(c => { // add JWT Authentication var securityScheme = new OpenApiSecurityScheme { Name = "JWT Authentication", Description = "Enter JWT Bearer token **_only_**", In = ParameterLocation.Header, Type = SecuritySchemeType.Http, Scheme = "bearer", // must be lower case BearerFormat = "JWT", Reference = new OpenApiReference { Id = JwtBearerDefaults.AuthenticationScheme, Type = ReferenceType.SecurityScheme } }; c.AddSecurityDefinition(securityScheme.Reference.Id, securityScheme); c.AddSecurityRequirement(new OpenApiSecurityRequirement { {securityScheme, new string[] { }} }); c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API", Version = "v1", Description = "My API", Contact = new OpenApiContact { Name = "damienbod", Email = string.Empty, Url = new Uri("https://damienbod.com/"), }, }); });

The Configure method adds the middleware that implements the API application. It is important to use the UseAuthentication middleware; you should have no reason to implement this yourself. If you find yourself writing custom authentication middleware for whatever reason, your security architecture is probably flawed.

public void Configure(IApplicationBuilder app)
{
    app.UseSwagger();
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "User API");
        c.RoutePrefix = string.Empty;
    });

    // only needed for browser clients
    // app.UseCors("AllowAllOrigins");

    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

The UserApiScopeHandler class implements the abstract AuthorizationHandler class. Logic can be implemented here to fulfil the UserApiScopeHandlerRequirement requirement. This requirement is what we use to authorize a request for the user data API. This handler just validates if the required scope exists in the scope claim.

public class UserApiScopeHandler : AuthorizationHandler<UserApiScopeHandlerRequirement> { protected override Task HandleRequirementAsync( AuthorizationHandlerContext context, UserApiScopeHandlerRequirement requirement) { if (context == null) throw new ArgumentNullException(nameof(context)); if (requirement == null) throw new ArgumentNullException(nameof(requirement)); var scopeClaim = context .User .Claims .FirstOrDefault(t => t.Type == "scope"); if (scopeClaim != null) { var scopes = scopeClaim .Value .Split(" ", StringSplitOptions.RemoveEmptyEntries); if (scopes.Any(t => t == "auth0-user-api-one")) { context.Succeed(requirement); } } return Task.CompletedTask; } } public class UserApiScopeHandlerRequirement : IAuthorizationRequirement{ }

The policies can be applied anywhere within the application, and the authorization logic is not tightly coupled to the business logic. By separating the authorization implementation from the business implementation, both become easier to understand and maintain. This has worked well for me, and I find applications set up like this easy to test and maintain over long periods of time. The p-user-api-auth0 policy is applied to the user API.

[Authorize(Policy = "p-user-api-auth0")]
[ApiController]
[Route("api/[controller]")]
public class UserOneController : ControllerBase

The p-service-api-auth0 policy is applied to the service API.

[Authorize(Policy = "p-service-api-auth0")]
[ApiController]
[Route("api/[controller]")]
public class ServiceTwoController : ControllerBase

When the application is started, the Swagger UI is displayed and an access token can be pasted into it. Both APIs are displayed in the Swagger UI, and each API requires a different access token.

Calling the clients from ASP.NET Core

A Blazor WASM application hosted in ASP.NET Core is used to access the APIs. The application is secured in the trusted, server-rendered part of the application, and the OIDC data is persisted to a secure cookie. The OnRedirectToIdentityProvider method is used to set the audience of the API so that the access token is requested with the required scope. The scopes are added to the OIDC options.

services.AddAuthentication(options => { options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme; }) .AddCookie(options => { options.Cookie.Name = "__Host-BlazorServer"; options.Cookie.SameSite = SameSiteMode.Lax; }) .AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options => { options.Authority = $"https://{Configuration["Auth0:Domain"]}"; options.ClientId = Configuration["Auth0:ClientId"]; options.ClientSecret = Configuration["Auth0:ClientSecret"]; options.ResponseType = OpenIdConnectResponseType.Code; options.Scope.Clear(); options.Scope.Add("openid"); options.Scope.Add("profile"); options.Scope.Add("email"); options.Scope.Add("auth0-user-api-one"); options.CallbackPath = new PathString("/signin-oidc"); options.ClaimsIssuer = "Auth0"; options.SaveTokens = true; options.UsePkce = true; options.GetClaimsFromUserInfoEndpoint = true; options.TokenValidationParameters.NameClaimType = "name"; options.Events = new OpenIdConnectEvents { // handle the logout redirection OnRedirectToIdentityProviderForSignOut = (context) => { var logoutUri = $"https://{Configuration["Auth0:Domain"]}/v2/logout?client_id={Configuration["Auth0:ClientId"]}"; var postLogoutUri = context.Properties.RedirectUri; if (!string.IsNullOrEmpty(postLogoutUri)) { if (postLogoutUri.StartsWith("/")) { // transform to absolute var request = context.Request; postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase + postLogoutUri; } logoutUri += $"&returnTo={ Uri.EscapeDataString(postLogoutUri)}"; } context.Response.Redirect(logoutUri); context.HandleResponse(); return Task.CompletedTask; }, OnRedirectToIdentityProvider = context => { // The context's ProtocolMessage can be used to pass along additional query parameters // to Auth0's /authorize endpoint. // // Set the audience query parameter to the API identifier to ensure the returned Access Tokens can be used // to call protected endpoints on the corresponding API. context.ProtocolMessage.SetParameter("audience", "https://auth0-api1"); return Task.FromResult(0); } }; });

Calling the User API

A user API client service is used to request the data from the ASP.NET Core API. The access token is passed as a parameter and the IHttpClientFactory is used to create the HttpClient.

/// <summary> /// setup to oidc client in the startup correctly /// https://auth0.com/docs/quickstart/webapp/aspnet-core#enterprise-saml-and-others- /// </summary> public class MyApiUserOneClient { private readonly IConfiguration _configurations; private readonly IHttpClientFactory _clientFactory; public MyApiUserOneClient( IConfiguration configurations, IHttpClientFactory clientFactory) { _configurations = configurations; _clientFactory = clientFactory; } public async Task<List<string>> GetUserOneApiData(string accessToken) { try { var client = _clientFactory.CreateClient(); client.BaseAddress = new Uri(_configurations["MyApiUrl"]); client.SetBearerToken(accessToken); var response = await client.GetAsync("api/UserOne"); if (response.IsSuccessStatusCode) { var data = await JsonSerializer.DeserializeAsync<List<string>>( await response.Content.ReadAsStreamAsync()); return data; } throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}"); } catch (Exception e) { throw new ApplicationException($"Exception {e}"); } } }

The user access token is saved to the HttpContext after a successful sign-in and the GetTokenAsync method with the “access_token” parameter is used to retrieve the user access token.

private readonly MyApiUserOneClient _myApiUserOneClient;

public CallUserApiController(MyApiUserOneClient myApiUserOneClient)
{
    _myApiUserOneClient = myApiUserOneClient;
}

[HttpGet]
public async Task<IActionResult> GetAsync()
{
    // call user API
    string accessToken = await HttpContext.GetTokenAsync("access_token");
    var userData = await _myApiUserOneClient.GetUserOneApiData(accessToken);
    return Ok(userData);
}

Calling the Service API

Calling the service API requires an access token acquired using the OAuth client credentials flow. This flow can only be used in a trusted backend, since a secret is required to request the access token. No user is involved; this is a machine-to-machine request. The access token is persisted to a distributed cache.

public class Auth0CCTokenApiService { private readonly ILogger<Auth0CCTokenApiService> _logger; private readonly Auth0ApiConfiguration _auth0ApiConfiguration; private static readonly Object _lock = new Object(); private IDistributedCache _cache; private const int cacheExpirationInDays = 1; private class AccessTokenResult { public string AcessToken { get; set; } = string.Empty; public DateTime ExpiresIn { get; set; } } private class AccessTokenItem { public string access_token { get; set; } = string.Empty; public int expires_in { get; set; } public string token_type { get; set; } public string scope { get; set; } } public Auth0CCTokenApiService( IOptions<Auth0ApiConfiguration> auth0ApiConfiguration, IHttpClientFactory httpClientFactory, ILoggerFactory loggerFactory, IDistributedCache cache) { _auth0ApiConfiguration = auth0ApiConfiguration.Value; _logger = loggerFactory.CreateLogger<Auth0CCTokenApiService>(); _cache = cache; } public async Task<string> GetApiToken(HttpClient client, string api_name) { var accessToken = GetFromCache(api_name); if (accessToken != null) { if (accessToken.ExpiresIn > DateTime.UtcNow) { return accessToken.AcessToken; } else { // remove => NOT Needed for this cache type } } _logger.LogDebug($"GetApiToken new from oauth server for {api_name}"); // add var newAccessToken = await GetApiTokenClient(client); AddToCache(api_name, newAccessToken); return newAccessToken.AcessToken; } private async Task<AccessTokenResult> GetApiTokenClient(HttpClient client) { try { var payload = new Auth0ClientCrendentials { client_id = _auth0ApiConfiguration.ClientId, client_secret = _auth0ApiConfiguration.ClientSecret, audience = _auth0ApiConfiguration.Audience }; var authUrl = _auth0ApiConfiguration.Url; var tokenResponse = await client.PostAsJsonAsync(authUrl, payload); if (tokenResponse.StatusCode == System.Net.HttpStatusCode.OK) { var result = await tokenResponse.Content.ReadFromJsonAsync<AccessTokenItem>(); DateTime expirationTime = DateTimeOffset.FromUnixTimeSeconds(result.expires_in).DateTime; return new AccessTokenResult { AcessToken = result.access_token, ExpiresIn = expirationTime }; } _logger.LogError($"tokenResponse.IsError Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}"); throw new ApplicationException($"Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}"); } catch (Exception e) { _logger.LogError($"Exception {e}"); throw new ApplicationException($"Exception {e}"); } } private void AddToCache(string key, AccessTokenResult accessTokenItem) { var options = new DistributedCacheEntryOptions().SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays)); lock (_lock) { _cache.SetString(key, System.Text.Json.JsonSerializer.Serialize(accessTokenItem), options); } } private AccessTokenResult GetFromCache(string key) { var item = _cache.GetString(key); if (item != null) { return System.Text.Json.JsonSerializer.Deserialize<AccessTokenResult>(item); } return null; } }

The MyApiServiceTwoClient service uses the client credentials token client to get the access token and request data from the service API.

public class MyApiServiceTwoClient { private readonly IConfiguration _configurations; private readonly IHttpClientFactory _clientFactory; private readonly Auth0CCTokenApiService _auth0TokenApiService; public MyApiServiceTwoClient( IConfiguration configurations, IHttpClientFactory clientFactory, Auth0CCTokenApiService auth0TokenApiService) { _configurations = configurations; _clientFactory = clientFactory; _auth0TokenApiService = auth0TokenApiService; } public async Task<List<string>> GetServiceTwoApiData() { try { var client = _clientFactory.CreateClient(); client.BaseAddress = new Uri(_configurations["MyApiUrl"]); var access_token = await _auth0TokenApiService.GetApiToken(client, "ServiceTwoApi"); client.SetBearerToken(access_token); var response = await client.GetAsync("api/ServiceTwo"); if (response.IsSuccessStatusCode) { var data = await JsonSerializer.DeserializeAsync<List<string>>( await response.Content.ReadAsStreamAsync()); return data; } throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}"); } catch (Exception e) { throw new ApplicationException($"Exception {e}"); } } }

The services are added to the default IoC container in ASP.NET Core so that constructor injection can be used.

services.AddHttpClient();
services.AddOptions();
services.Configure<Auth0ApiConfiguration>(Configuration.GetSection("Auth0ApiConfiguration"));
services.AddScoped<Auth0CCTokenApiService>();
services.AddScoped<MyApiServiceTwoClient>();
services.AddScoped<MyApiUserOneClient>();

The service can be used anywhere in the code as required.

private readonly MyApiServiceTwoClient _myApiClientService;

public CallServiceApiController(MyApiServiceTwoClient myApiClientService)
{
    _myApiClientService = myApiClientService;
}

[HttpGet]
public async Task<IActionResult> GetAsync()
{
    // call service API
    var serviceData = await _myApiClientService.GetServiceTwoApiData();
    return Ok(serviceData);
}

You can test the APIs in the Swagger UI. I added a breakpoint to my application, copied the access token, and pasted it into the Swagger UI.

If you send an HTTP request using the wrong token for the intended API, the request is rejected and a 401 or 403 is returned. Without the extra authorization logic implemented with the policies, this request would not have failed.

Notes

It is really important to validate that only access tokens created for a specific API will work with that API. There are different ways of implementing this. For service APIs, which are probably solution-internal, you could also use network security to separate them into different security zones. It is equally important to test the negative case: when the same identity provider creates access tokens for different APIs, or for different applications with different security requirements, a token issued for one must not be accepted by another. For high security requirements, you could use sender-constrained tokens.

Links

https://auth0.com/docs/quickstart/webapp/aspnet-core

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction

Open ID Connect

Securing Blazor Web assembly using Cookies and Auth0

Sunday, 18. April 2021

Simon Willison

Weeknotes: The Aftermath

Some tweets that effectively illustrate my week: "... and nothing broke!" - several days later I can confirm that a few things did indeed break, but thankfully nothing catastrophically so! — Simon Willison (@simonw) April 16, 2021 My resume: Migrated legacy Rails user model to external API in 30 days with no impact to customers. Reality: 90% migrated fine, 10% edge cases where I played wh

Some tweets that effectively illustrate my week:

"... and nothing broke!" - several days later I can confirm that a few things did indeed break, but thankfully nothing catastrophically so!

— Simon Willison (@simonw) April 16, 2021

My resume: Migrated legacy Rails user model to external API in 30 days with no impact to customers.

Reality: 90% migrated fine, 10% edge cases where I played whack-a-mole for an entire week and deploying multiple times daily. 😂 https://t.co/L6Akkvr5pB

— Damon Cortesi (@dacort) April 16, 2021

Last week we went live with VIAL, the replacement backend I've been building for VaccinateCA. This meant we went from having no users to having a whole lot of users, and all of the edge-cases and missing details quickly started to emerge.

So this week I've been almost exclusively working my way through those. Not much to report otherwise!

TIL this week

Using json_extract_path in PostgreSQL
Listing files uploaded to Cloud Build
Enabling the fuzzystrmatch extension in PostgreSQL with a Django migration

Releases this week

django-sql-dashboard: 0.8a2 - (17 total releases) - 2021-04-14
Django app for building dashboards using raw SQL queries

country-coder

country-coder Given a latitude and longitude, how can you tell what country that point sits within? One way is to do a point-in-polygon lookup against a set of country polygons, but this can be tricky: some countries such as New Zealand have extremely complex outlines, even though for this use-case you don't need the exact shape of the coastline. country-coder solves this with a custom designed

country-coder

Given a latitude and longitude, how can you tell what country that point sits within? One way is to do a point-in-polygon lookup against a set of country polygons, but this can be tricky: some countries such as New Zealand have extremely complex outlines, even though for this use-case you don't need the exact shape of the coastline. country-coder solves this with a custom designed 595KB GeoJSON file with detailed land borders but loosely defined ocean borders. It also comes with a wrapper JavaScript library that provides an API for resolving points, plus useful properties on each country with details like telephone calling codes and emoji flags.

Via @bhousel
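For comparison, here is a rough Python sketch of the plain point-in-polygon approach described above, using shapely; the GeoJSON file name and the name property are assumptions for illustration, not part of country-coder's API.

# Point-in-polygon lookup over a GeoJSON FeatureCollection of country borders.
import json
from shapely.geometry import shape, Point

def country_for_point(lat, lng, geojson_path="countries.geojson"):
    point = Point(lng, lat)   # GeoJSON coordinate order is (longitude, latitude)
    with open(geojson_path) as f:
        collection = json.load(f)
    for feature in collection["features"]:
        if shape(feature["geometry"]).contains(point):
            return feature["properties"].get("name")
    return None

print(country_for_point(-41.3, 174.8))   # should land in New Zealand, given suitable data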

Wednesday, 14. April 2021

Bill Wendel's Real Estate Cafe

#LetUsDream: Do we dwell together to make money or is this a community?

What is the meaning of this city?Do you huddle together because youlove each other?What will you answer?“We all dwell together to make moneyfrom each other”?… The post #LetUsDream: Do we dwell together to make money or is this a community? first appeared on Real Estate Cafe.

What is the meaning of this city?Do you huddle together because youlove each other?What will you answer?“We all dwell together to make moneyfrom each other”?…

The post #LetUsDream: Do we dwell together to make money or is this a community? first appeared on Real Estate Cafe.


Mike Jones: self-issued

Second Version of W3C Web Authentication (WebAuthn) Now a Standard

The World Wide Web Consortium (W3C) has published this Recommendation for the Web Authentication (WebAuthn) Level 2 specification, meaning that it now a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them for user verification enhancements, manageability, enterprise features, and an Apple attestation format. The compani

The World Wide Web Consortium (W3C) has published this Recommendation for the Web Authentication (WebAuthn) Level 2 specification, meaning that it is now a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them user verification enhancements, manageability, enterprise features, and an Apple attestation format. The companion second FIDO2 Client to Authenticator Protocol (CTAP) specification is also approaching completion as a standard.

See the W3C announcement of this achievement. Also, see Tim Cappalli’s summary of the changes in the second versions of WebAuthn and FIDO2.


Simon Willison

Why you shouldn't use ENV variables for secret data

Why you shouldn't use ENV variables for secret data I do this all the time, but this article provides a good set of reasons that secrets in environment variables are a bad pattern - even when you know there's no multi-user access to the host you are deploying to. The biggest problem is that they often get captured by error handling scripts, which may not have the right code in place to redact th

Why you shouldn't use ENV variables for secret data

I do this all the time, but this article provides a good set of reasons that secrets in environment variables are a bad pattern - even when you know there's no multi-user access to the host you are deploying to. The biggest problem is that they often get captured by error handling scripts, which may not have the right code in place to redact them. This article suggests using Docker secrets instead, but I'd love to see a comprehensive write-up of other recommended patterns for this that go beyond applications running in Docker.

Via The environ-config tutorial
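As a rough illustration of the file-based alternative the article points at, here is a small Python sketch that prefers a Docker-style secret file over an environment variable; the secret name and fallback behaviour are assumptions for the example.

import os
from pathlib import Path

def read_secret(name, secrets_dir="/run/secrets"):
    # Docker secrets are mounted as files, which keeps them out of `env` dumps
    # and out of anything that logs the process environment.
    secret_file = Path(secrets_dir) / name
    if secret_file.exists():
        return secret_file.read_text().strip()
    # Fallback for local development only.
    return os.environ.get(name.upper())

db_password = read_secret("db_password")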


Karyl Fowler

Takeaways from the Suez Canal Crisis

An Appeal for Supply Chain Agility — Powered by Verifiable Credentials Ever Given — Wikimedia Commons The Suez Canal debacle had a massive impact on global supply chains — estimated at >$9B in financial hits each day the Ever Given was stuck, totaling at nearly $54B in losses in stalled cargo shipments alone. And it’s no secret that the canal, which sees >12% of global trade move through it
An Appeal for Supply Chain Agility — Powered by Verifiable Credentials

Ever Given — Wikimedia Commons

The Suez Canal debacle had a massive impact on global supply chains — estimated at >$9B in financial hits each day the Ever Given was stuck, totaling at nearly $54B in losses in stalled cargo shipments alone. And it’s no secret that the canal, which sees >12% of global trade move through it annually, dealt an especially brutal blow to the oil and gas industry while blocked (given it represents the primary shipping channel for nearly 10% of gas and 8% of natural gas).

While the Ever Given itself was a container ship, likely loaded with finished goods versus raw materials or commodities, the situation has already — and will continue to — have a massive negative impact on totally unrelated industries…for months to come. Here’s an example of the resulting impact on steel and aluminum prices; this had related impact again to oil and gas (steel pipes flow oil) as well as infrastructure and…finished goods (like cars). And the costs continue to climb as the drama unfolds with port authorities and insurers battling over what’s owed to who.

Transmute is a software company — a verifiable credentials as a service company to be exact — and we’ve been focused specifically on the credentials involved in moving steel assets around the globe alongside our customers at DHS SVIP and CBP for the last couple of years now. Now, there’s no “silver bullet” for mitigating the fiscal impact of the Ever Given on global trade, and ships that arrived the day it got stuck or shortly after certainly faced a tough decision — sail around the Cape of Africa for up to ~$800K [fuel costs alone] plus ~26 days added to the trip, or wait it out at up to $30K per day in demurrage expenses [without knowing it’d only be stuck for 6 days, or ~$180,000].

So what if you’re a shipping manager and you can make this decision faster? Or, make the call before your ship arrives at the canal? [Some did make this decision, by the way]. What if your goods are stuck on the Ever Given — do you wait it out? Switching suppliers is costly, and you’ve likely got existing contracts in place for much of the cargo. Even if you could fulfill existing contracts and demand on time with a new supplier, what do you do with the delayed cargo expense? What if you’re unsure whether you can sell the duplicate and delayed goods when they reach their originally intended destination?

Well, verifiable credentials — a special kind of digital document that’s cryptographically provable, timestamped and anchored to an immutable ledger at the very moment in time it’s created — can give companies the kind of data needed to make these sorts of decisions. With use over time for trade data, verifiable credentials build a natural reputation for all the things the trade documents are about: suppliers, products, contracts, ports, regulations, tariffs, time between supply chain handoff points, etc.

This type of structured data is of such high integrity that supply chain operators can rely on it and feel empowered to make decisions based on it.

What I’m hoping comes from this global trade disaster is a change in the way supply chain operators make critical decisions. Supply chains of the future will be powered by verifiable credentials, which seamlessly bridge all the data silos that exist today — whether software-created silos or even the paper-based manual, offline silos.

Today, it’s possible to move from a static, critical chain style of management where we often find ourselves in a reactive position to supply chains that look more like an octopus. High integrity data about suppliers and products enables proactive, dynamic decision making in anticipation of and in real time response to shifts in the market — ultimately capturing more revenue opportunities and mitigating risk at the same time.

Takeaways from the Suez Canal Crisis was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aaron Parecki

How to Sign Users In with IndieAuth

This post will show you step by step how you can let people log in to your website with their own IndieAuth website so you don't need to worry about user accounts or passwords.

This post will show you step by step how you can let people log in to your website with their own IndieAuth website so you don't need to worry about user accounts or passwords.

What is IndieAuth?

IndieAuth is an extension of OAuth 2.0 that enables an individual website like someone's WordPress, Gitea or OwnCast instance to become its own identity provider. This means you can use your own website to sign in to other websites that support IndieAuth.

You can learn more about the differences between IndieAuth and OAuth by reading OAuth for the Open Web.

What You'll Need

You'll need a few tools and libraries to sign users in with IndieAuth.

An HTTP client.
A URL parsing library.
A hashing library that supports SHA256.
A library to find <link> tags in HTML.
The ability to show an HTML form to the user.

IndieAuth Flow Summary

Here is a summary of the steps to let people sign in to your website with IndieAuth. We'll dive deeper into each step later in this post.

Present a sign-in form asking the user to enter their server address.
Fetch the URL to discover their IndieAuth server.
Redirect them to their IndieAuth server with the details of your sign-in request in the query string.
Wait for the user to be redirected back to your website with an authorization code in the query string.
Exchange the authorization code for the user's profile information by making an HTTP request to their IndieAuth server.

Step by Step

Let's dive into the details of each step of the flow. While this is meant to be an approachable guide to IndieAuth, eventually you'll want to make sure to read the spec to make sure you're handling all the edge cases you might encounter properly.

Show the Sign-In Form

First you'll need to ask the user to enter their server address. You should show a form with a single HTML field, <input type="url">. You need to know at least the server name of the user's website.

To improve the user experience, you should add some JavaScript to automatically add the https:// scheme if the user doesn't type it in.

The form should submit to a route on your website that will start the flow. Here's a complete example of an IndieAuth sign-in form.

<form action="/indieauth/start" method="post"> <input type="url" name="url" placeholder="example.com"> <br> <input type="submit" value="Sign In"> </form>

When the user submits this form, you'll start with the URL they enter and you're ready to begin the IndieAuth flow.

Discover the IndieAuth Server

There are potentially two URLs you'll need to find at the URL the user entered in order to complete the flow: the authorization endpoint and token endpoint.

The authorization endpoint is where you'll redirect the user to so they can sign in and approve the request. Eventually they'll be redirected back to your app with an authorization code in the query string. You can take that authorization code and exchange it for their profile information. If your app wanted to read or write additional data from their website, such as when creating posts using Micropub, it could exchange that code at the second endpoint (the token endpoint) to get an access token.

To find these endpoints, you'll fetch the URL the user entered (after validating and normalizing it first) and look for <link> tags on the web page. Specifically, you'll be looking for <link rel="authorization_endpoint" href="..."> and <link rel="token_endpoint" href="..."> to find the endpoints you need for the flow. You'll want to use an HTML parser or a link rel parser library to find these URLs.
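A minimal sketch of this discovery step in Python, using requests and BeautifulSoup; it skips the URL normalization and HTTP Link header checks that a full client should also handle.

# Fetch the user's URL and pull the IndieAuth endpoints out of <link> tags.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def discover_endpoints(user_url):
    response = requests.get(user_url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    endpoints = {}
    for link in soup.find_all("link", href=True):
        rels = link.get("rel") or []
        for rel in ("authorization_endpoint", "token_endpoint"):
            if rel in rels and rel not in endpoints:
                # hrefs may be relative, so resolve them against the fetched URL
                endpoints[rel] = urljoin(response.url, link["href"])
    return endpoints

print(discover_endpoints("https://indieauth.rocks/"))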

Start the Flow by Redirecting the User

Now you're ready to send the user to their IndieAuth server to have them log in and approve your request.

You'll need to take the authorization endpoint you discovered in the previous request and add a bunch of parameters to the query string, then redirect the user to that URL. Here is the list of parameters to add to the query string:

response_type=code - This tells the server you are doing an IndieAuth authorization code flow.
client_id= - Set this value to the home page of your website the user is signing in to.
redirect_uri= - This is the URL where you want the user to be returned to after they log in and approve the request. It should have the same domain name as the client_id value.
state= - Before starting this step, you should generate a random value for the state parameter and store it in a session and include it in the request. This is for CSRF protection for your app.
code_challenge= - This is the base64-urlencoded SHA256 hash of a random string you will generate. We'll cover this in more detail below.
code_challenge_method=S256 - This tells the server which hashing method you used, which will be SHA256 or S256 for short.
me= - (optional) You can provide the URL the user entered in your sign-in form as a parameter here which can be a hint to some IndieAuth servers that support multiple users per server.
scope=profile - (optional) If you want to request the user's profile information such as their name, photo, or email, include the scope parameter in the request. The value of the scope parameter can be either profile or profile email. (Make sure to URL-encode the value when including it in a URL, so it will end up as profile+email or profile%20email.)

Calculating the Code Challenge

The Code Challenge is a hash of a secret (called the Code Verifier) that you generate before redirecting the user. This lets the server know that the thing that will later make the request for the user's profile information is the same thing that started the flow. You can see the full details of how to create this parameter in the spec, but the summary is:

Create a random string (called the Code Verifier) between 43-128 characters long
Calculate the SHA256 hash of the string
Base64-URL encode the hash to create the Code Challenge

The part that people most often make a mistake with is the Base64-URL encoding. Make sure you are encoding the raw hash value, not a hex representation of the hash like some hashing libraries will return.
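A minimal Python sketch of that calculation; note that the raw SHA256 digest is encoded, not its hex representation, and the trailing = padding is stripped.

import base64
import hashlib
import secrets

code_verifier = secrets.token_urlsafe(64)   # 43-128 characters of URL-safe randomness
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()   # raw bytes, not hexdigest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")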

Once you're ready with all these values, add them all to the query string of the authorization endpoint you previously discovered. For example if the user's authorization endpoint is https://indieauth.rocks/authorize because their website is https://indieauth.rocks, then you'd add these parameters to the query string to create a URL like:

https://indieauth.rocks/authorize?response_type=code &client_id=https://example-app.com &redirect_uri=https://example-app.com/redirect &state=a46a0b27e67c0cb53 &code_challenge=eBKnGb9SEoqsi0RGBv00dsvFDzJNQOyomi6LE87RVSc &code_challenge_method=S256 &me=https://indieauth.rocks &scope=profile

Note: The user's authorization endpoint might not be on the same domain as the URL they entered. That's okay! That just means they have delegated their IndieAuth handling to an external service.

Now you can redirect the user to this URL so that they can approve this request at their own IndieAuth server.
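As a rough sketch, here is how that URL can be assembled with Python's standard library; the client_id, redirect_uri and endpoint values are placeholders for your own application, and code_challenge comes from the previous step.

import secrets
from urllib.parse import urlencode

state = secrets.token_urlsafe(16)   # store this in the session for the CSRF check later
params = {
    "response_type": "code",
    "client_id": "https://example-app.com/",
    "redirect_uri": "https://example-app.com/redirect",
    "state": state,
    "code_challenge": code_challenge,   # from the previous step
    "code_challenge_method": "S256",
    "me": "https://indieauth.rocks/",
    "scope": "profile",
}
authorization_url = "https://indieauth.rocks/authorize?" + urlencode(params)
# Redirect the user's browser to authorization_url using your web framework of choice.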

Handle the Redirect Back

You won't see the user again until after they've logged in to their website and approved the request. Eventually the IndieAuth server will redirect the user back to the redirect_uri you provided in the authorization request. The authorization server will add two query parameters to the redirect: code and state. For example:

https://example-app.com/redirect?code=af79b83817b317afc9aa &state=a46a0b27e67c0cb53

First you need to double check that the state value in the redirect matches the state value that you included in the initial request. This is a CSRF protection mechanism. Assuming they match, you're ready to exchange the authorization code for the user's profile information.

Exchange the Authorization Code for the User's Profile Info

Now you'll need to make a POST request to exchange the authorization code for the user's profile information. Since this code was returned in a redirect, the IndieAuth server needs an extra confirmation that it was sent back to the right thing, which is what the Code Verifier and Code Challenge are for. You'll make a POST request to the authorization endpoint with the following parameters:

grant_type=authorization_code
code= - The authorization code as received in the redirect.
client_id= - The same client_id as was used in the original request.
redirect_uri= - The same redirect_uri as was used in the original request.
code_verifier= - The original random string you generated when calculating the Code Challenge.

This is described in additional detail in the spec.
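A sketch of that request in Python with requests, once you have checked the returned state against the value stored in your session; authorization_endpoint, code and code_verifier come from the earlier steps.

import requests

response = requests.post(
    authorization_endpoint,
    data={
        "grant_type": "authorization_code",
        "code": code,                      # from the redirect query string
        "client_id": "https://example-app.com/",
        "redirect_uri": "https://example-app.com/redirect",
        "code_verifier": code_verifier,    # the original random string
    },
    headers={"Accept": "application/json"},
)
response.raise_for_status()
profile_response = response.json()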

Assuming everything checks out, the IndieAuth server will respond with the full URL of the user, as well as their stated profile information if requested. The response will look like the below:

{
  "me": "https://indieauth.rocks/",
  "profile": {
    "name": "IndieAuth Rocks",
    "url": "https://indieauth.rocks/",
    "photo": "https://indieauth.rocks/profile.jpg"
  }
}

Wait! We're not done yet! Just because you get information in this response doesn't necessarily mean you can trust it yet! There are two important points here:

The information under the profile object must ALWAYS be treated as user-supplied data, not treated as canonical or authoritative in any way. This means for example not de-duping users based on the profile.url field or profile.email field.

If the me URL is not an exact match of the URL the user initially entered, you need to re-discover the authorization endpoint of the me URL returned in this response and make sure it matches exactly the authorization server you found in the initial discovery step.

You can perform the same discovery step as in the beginning, but this time using the me URL returned in the authorization code response. If that authorization endpoint matches the same authorization endpoint that you used when you started the flow, everything is fine and you can treat this response as valid.

This last validation step is critical, since without it, anyone could set up an authorization endpoint claiming to be anyone else's server. More details are available in the spec.
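A short sketch of that check, reusing the discover_endpoints helper and the variables from the earlier sketches; entered_url stands for whatever the user originally typed into the sign-in form.

me = profile_response["me"]
if me != entered_url:
    # Re-discover the authorization endpoint for the returned "me" URL and
    # make sure it matches the endpoint the flow started with.
    rediscovered = discover_endpoints(me).get("authorization_endpoint")
    if rediscovered != authorization_endpoint:
        raise ValueError("authorization endpoint mismatch - rejecting this sign-in")
# From here on, `me` is safe to use as the canonical identifier for the user.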

Now you're done!

The me URL is the value you should use as the canonical and stable identifier for this user. You can use the information in the profile object to augment this user account with information like the user's name or profile information. If the user logs in again later, look up the user from their me URL and update their name/photo/email with the most recent values in the profile object to keep their profile up to date.

Testing Your IndieAuth Client

To test your IndieAuth client, you'll need to find a handful of IndieAuth providers in the wild you can use to sign in to it. Here are some to get you started:

Micro.blog - All micro.blog accounts are IndieAuth identities as well. You can use a free account for testing.
WordPress - With the IndieAuth plugin installed, a WordPress site can be its own IndieAuth server as well.
Drupal - The IndieWeb module for Drupal will let a Drupal instance be its own IndieAuth server.
Selfauth - Selfauth is a single PHP file that acts as an IndieAuth server.

Eventually I will get around to finishing the test suite at indieauth.rocks so that you have a testing tool readily available, but in the mean time the options above should be enough to get you started.

Getting Help

If you get stuck or need help, feel free to drop by the IndieWeb chat to ask questions! Myself and many others are there all the time and happy to help troubleshoot new IndieAuth implementations!

Tuesday, 13. April 2021

MyDigitalFootprint

What superpowers does a CDO need?

Below are essential characteristics any CDO’s needs, ideal for a job description. After the list, I want to expand on one new superpower all CDO’s need, oddly where less data is more powerful. Image Source: https://openpolicy.blog.gov.uk/2020/01/17/lab-long-read-human-centred-policy-blending-big-data-and-thick-data-in-national-policy/ Day 0 a CDO must: BE a champion of fac

Below are essential characteristics any CDO needs, ideal for a job description. After the list, I want to expand on one new superpower all CDOs need, where, oddly, less data is more powerful.

Image Source: https://openpolicy.blog.gov.uk/2020/01/17/lab-long-read-human-centred-policy-blending-big-data-and-thick-data-in-national-policy/

Day 0 a CDO must:

BE a champion of fact-based, data-driven decision making. However, complex decisions based on experience, gut instinct, leadership and opinions still play a role, but most decisions can now be underpinned with a firmer foundation.
BE curious about how the business operates and makes money and its drivers of cost, revenue, and customer satisfaction through the lens of data and analytical models.
BE an ambassador of change. Data uncovers assumptions that unpack previous political decisions and moves power. Data does not create change but will create conflict — how this is managed is a critical CDO skill.
BE a great storyteller.
KNOW who is the smartest data scientist in the company, where the most sophisticated models are, and understand and appreciate what those data teams do and how they do it. Managing and getting the best from these teams is a skill everyone needs.
FIGURE out and articulate the value your team can deliver to the business in the next week, month, and quarter. As the CDO, what is the value you bring to your peers and shareholder in the next 5 years?
IMPROVE decision making using data for day to day, how to reduce risk and how to inform the company on achieving and adapting the company’s strategy.
BUILD relationships to source data both within your business and the wider ecosystem. This is both to determine the quality of the data and be able to better use data and or roll out solutions that improve quality and decision-making.
KNOW what technical questions to ask and being able to live with the complexity involved in the delivery.

Decision making is a complex affair, and as CDO’s we are there to support. Decisions are perceived to be easier when there is lots of data, and the signal is big, loud and really clear. Big data has a place, but we must not forget small signals from ethnographic data sources. Leadership often does not know what to do with critical and challenging small data, especially when it challenges easy assumptions that big data justifies.

A CDO superpower is to shine a light on all data, without bias

Our superpower is to shine a light on all data, without bias, and to help strategic thinkers, who often put a higher value on quantitative data. They frequently don’t know how to handle data that isn’t easily measurable or doesn’t show up in existing paid-for reports. Ethnographic work has a serious perception problem in a data-driven decision world. A key role of the CDO is to uncover all data and its value, not to bias towards the bigger data set — that is just lazy. I love this image from @triciawang, where the idea of a critical small data set is represented as “thick data.” Do follow her work at https://www.triciawang.com/, or that of Genevieve Bell, Kate Crawford and danah boyd (@zephoria).

Source: Nokia’s experience of ignoring small data

Note to the CEO

Digital transformation has built a dependence on data, and the bigger the data set, the more weight it is assumed to have. Often, there is a dangerous assumption made that the risk in a decision is reduced because of the data set's size. It may be true for operational issues and automated decision making but not necessarily for strategy.

As the CEO, you need to determine the half-life of the data used to justify or solidify a decision. Half-life in science is when more than 50 per cent of a substance has undergone a radical change; in business terms, it is when half the value of the data is lost, or the error has doubled. The bigger the data set, the quicker (shorter) the half-life will be. Indeed, some data’s half-life is less than the time it took to collect and store it: it is big, but it really has no value. For small data sets, such as ethnographic data, the half-life can be longer than a 3-to-5-year strategic planning cycle. Since some data might be small and could be a signal to your future, supporting a CDO who puts equal weight on all data is critical to success.
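As a toy illustration of the half-life idea (assuming, purely for the sake of the example, that a data set's decision value decays exponentially):

# Remaining decision value of a data set after `months_elapsed`,
# given a half-life in months. The numbers are illustrative only.
def remaining_value(initial_value, half_life_months, months_elapsed):
    return initial_value * 0.5 ** (months_elapsed / half_life_months)

print(remaining_value(100, 3, 12))    # big, fast-decaying data set: ~6.3 left after a year
print(remaining_value(100, 36, 12))   # small ethnographic data set: ~79.4 left after a year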

Monday, 12. April 2021

Simon Willison

Porting VaccinateCA to Django

As I mentioned back in February, I've been working with the VaccinateCA project to try to bring the pandemic to an end a little earlier by helping gather as accurate a model as possible of where the Covid vaccine is available in California and how people can get it. The key activity at VaccinateCA is calling places to check on their availability and eligibility criteria. Up until last night this

As I mentioned back in February, I've been working with the VaccinateCA project to try to bring the pandemic to an end a little earlier by helping gather as accurate a model as possible of where the Covid vaccine is available in California and how people can get it.

The key activity at VaccinateCA is calling places to check on their availability and eligibility criteria. Up until last night this was powered by a heavily customized Airtable instance, accompanied by a custom JavaScript app for the callers that communicated with the Airtable API via some Netlify functions.

Today, the flow is powered by a new custom Django backend, running on top of PostgreSQL.

The thing you should never do

Here's one that took me fifteen years to learn: "let's build a new thing and replace this" is hideously dangerous: 90% of the time you won't fully replace the old thing, and now you have two problems!

- Simon Willison (@simonw) June 29, 2019

Replacing an existing system with a from-scratch rewrite is risky. Replacing a system that is built on something as flexible as Airtable that is evolving on a daily basis is positively terrifying!

Airtable served us extremely well, but unfortunately there are hard limits to the number of rows Airtable can handle and we've already bounced up against them and had to archive some of our data. To keep scaling the organization we needed to migrate away.

We needed to build a matching relational database with a comprehensive, permission-controlled interface for editing it, plus APIs to drive our website and application. And we needed to do it using the most boring technology possible, so we could focus on solving problems directly rather than researching anything new.

It will never cease to surprise me that Django has attained boring technology status! VaccineCA sits firmly in Django's sweet-spot. So we used that to build our replacement.

The new Django-based system is called VIAL, for "Vaccine Information Archive and Library" - a neat Jesse Vincent bacronym.

We switched things over to VIAL last night, but we still have activity in Airtable as well. I expect we'll keep using Airtable for the lifetime of the organization - there are plenty of ad-hoc data projects for which it's a perfect fit.

The most important thing here is to have a trusted single point of truth for any piece of information. I'm not quite ready to declare victory on that point just yet, but hopefully once things settle down over the next few days.

Data synchronization patterns

The first challenge, before even writing any code, was how to get stuff out of Airtable. I built a tool for this a while ago called airtable-export, and it turned out the VaccinateCA team were using it already before I joined!

airtable-export was already running several times an hour, backing up the data in JSON format to a GitHub repository (a form of Git scraping). This gave us a detailed history of changes to the Airtable data, which occasionally proved extremely useful for answering questions about when a specific record was changed or deleted.

Having the data in a GitHub repository was also useful because it gave us somewhere to pull data from that wasn't governed by Airtable's rate limits.

I iterated through a number of different approaches for writing importers for the data.

Each Airtable table ended up as a single JSON file in our GitHub repository, containing an array of objects - those files got pretty big, topping out at about 80MB.

I started out with Django management commands, which could be passed a file or a URL. A neat thing about using GitHub for this is that you can use the "raw data" link to obtain a URL with a short-lived token, which grants access to that file. So I could create a short-term URL and paste it directly to my import tool.

I don't have a good pattern for running Django management commands on Google Cloud Run, so I started moving to API-based import scripts instead.

The pattern that ended up working best was to provide a /api/importRecords API endpoint which accepts a JSON array of items.

The API expects the input to have a unique primary key in each record - airtable_id in our case. It then uses Django's update_or_create() ORM method to create new records if they were missing, and update existing records otherwise.
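As a rough sketch of that pattern (the Location model and its fields here are invented, not VIAL's real schema), the core of the endpoint boils down to a loop like this:

# Hypothetical sketch of the upsert logic behind /api/importRecords
from django.db import transaction

from myapp.models import Location  # illustrative model, not VIAL's schema


def import_records(records):
    with transaction.atomic():
        for record in records:
            Location.objects.update_or_create(
                airtable_id=record["airtable_id"],
                defaults={
                    "name": record.get("name", ""),
                    # keep the original payload around too (see below)
                    "import_json": record,
                },
            )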

One remaining challenge: posting 80MB of JSON to an API in one go would likely run into resource limits. I needed a way to break that input up into smaller batches.

I ended up building a new tool for this called json-post. It has an extremely specific use-case: it's for when you want to POST a big JSON array to an API endpoint but you want to first break it up into batches!

Here's how to break up the JSON in Reports.json into 50 item arrays and send them to that API as separate POSTs:

json-post Reports.json \
  "https://example.com/api/importReports" \
  --batch-size 50

Here's a more complex example: we need to pass an Authorization: Bearer XXXtokenXXX API key header, process the array in reverse, record our progress (the JSON responses from the API, as newline-delimited JSON) to a log file, set a longer HTTP read timeout and filter for just specific items:

% json-post Reports.json \
  "https://example.com/api/importReports" \
  -h Authorization 'Bearer XXXtokenXXX' \
  --batch-size 50 \
  --reverse \
  --log /tmp/progress.txt \
  --http-read-timeout 20 \
  --filter 'item.get("is_soft_deleted")'

The --filter option proved particularly useful. As we kicked the tires on VIAL we would spot new bugs - things like the import script failing to correctly record the is_soft_deleted field we were using in Airtable. Being able to filter that input file with a command-line flag meant we could easily re-run the import just for a subset of reports that were affected by a particular bug.

--filter takes a Python expression that gets compiled into a function and passed item as the current item in the list. I borrowed the pattern from my sqlite-transform tool.
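The underlying trick is simple enough to sketch in a few lines; this illustrates the pattern rather than json-post's exact implementation:

# Compile a --filter expression into a callable that receives each item
def make_filter(expression):
    code = compile(expression, "<filter>", "eval")
    return lambda item: eval(code, {}, {"item": item})


keep = make_filter('item.get("is_soft_deleted")')
items = [{"id": 1, "is_soft_deleted": True}, {"id": 2}]
print([item for item in items if keep(item)])  # only the first item survives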

The value of API logs

VaccinateCA's JavaScript caller application used to send data to Airtable via a Netlify function, which allowed an additional authentication layer, built using Auth0, to be added.

Back in February, the team had the bright idea to log the API traffic to that function to a separate base in Airtable - including full request and response bodies.

This proved invaluable for debugging. It also meant that when I started building VIAL's alternative implementation of the "submit a call report" API I could replay historic API traffic that had been recorded in that table, giving me a powerful way to exercise the new API with real-world traffic.

This meant that when we turned on VIAL we could switch our existing JavaScript SPA over to talking to it using a fully tested clone of the existing Airtable-backed API.
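The replay script itself doesn't need to be anything clever. Here's a rough sketch of the idea, with a made-up log entry format and endpoint rather than the real thing:

import httpx  # illustrative choice of HTTP client (httpx is in our test stack)


def replay(log_entries, target_url, api_key):
    # log_entries: previously recorded requests, each with a JSON body
    for entry in log_entries:
        response = httpx.post(
            target_url,
            json=entry["request_body"],
            headers={"Authorization": "Bearer {}".format(api_key)},
            timeout=20,
        )
        print(entry["id"], response.status_code)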

VIAL implements this logging pattern again, this time using Django and PostgreSQL.

Given that the writable APIs receive on the order of a few thousand requests a day, keeping these logs in a database table works great. The table has grown to 90MB so far. I'm hoping that the pandemic will be over before we have to worry about logging capacity!

We're using PostgreSQL jsonb columns to store the incoming and returned JSON, via Django's JSONField. This means we can do in-depth API analysis using PostgreSQL's JSON SQL functions! Being able to examine returned JSON error messages or aggregate across incoming request bodies helped enormously when debugging problems with the API import scripts.
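To give a flavour of that kind of analysis, here's an illustrative query (the api_log table and response_json column names are invented, not the real schema) that aggregates error messages straight out of the logged JSON:

from django.db import connection

# ->> extracts a JSON key as text
SQL = """
    select response_json ->> 'error' as error, count(*) as occurrences
    from api_log
    where response_json ->> 'error' is not null
    group by 1
    order by 2 desc
"""

with connection.cursor() as cursor:
    cursor.execute(SQL)
    for error, occurrences in cursor.fetchall():
        print(occurrences, error)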

Storing the original JSON

Today, almost all of the data stored in VIAL originated in Airtable. One trick that has really helped build the system is that each of the tables that might contain imported data has both an airtable_id nullable column and an import_json JSON field.

Any time we import a record from Airtable, we record both the ID and the full, original Airtable JSON that we used for the import.

This is another powerful tool for debugging: we can view the original Airtable JSON directly in the Django admin interface for a record, and confirm that it matches the ORM fields that we set from that.
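The pattern itself is tiny. A sketch of what it looks like on a model, with everything except the two fields described above invented for illustration:

from django.db import models


class Location(models.Model):
    # ...the real model has many more fields...
    name = models.CharField(max_length=255)
    # Both fields are nullable: not every record originated in Airtable
    airtable_id = models.CharField(max_length=32, null=True, blank=True, unique=True)
    import_json = models.JSONField(null=True, blank=True)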

I came up with a simple pattern for Pretty-printing all read-only JSON in the Django admin that helps with this too.

Staying as flexible as possible

The thing that worried me most about replacing Airtable with Django was Airtable's incredible flexibility. In the organization's short life it has already solved so many problems by adding new columns in Airtable, or building new views.

Is it possible to switch to custom software without losing that huge cultural advantage?

This is the same reason it's so hard for custom software to compete with spreadsheets.

We've only just made the switch, so we won't know for a while how well we've done at handling this. I have a few mechanisms in place that I'm hoping will help.

The first is django-sql-dashboard. I wrote about this project in previous weeknotes here and here - the goal is to bring some of the ideas from Datasette into the Django/PostgreSQL world, by providing a read-only mechanism for constructing SQL queries, bookmarking and saving the results and outputting simple SQL-driven visualizations.

We have a lot of SQL knowledge at VaccinateCA, so my hope is that people who know SQL will be able to solve their own problems, and people who don't know SQL yet will have no trouble finding someone who can help them.

In the boring technology model of things, django-sql-dashboard counts as the main innovation token I'm spending for this project. I'm optimistic that it will pay off.

I'm also leaning heavily on Django's migration system, with the aim of making database migrations common and boring, rather than their usual default of being rare and exciting. We're up to 77 migrations already, in a codebase that is just over two months old!

I think a culture that evolves the database schema quickly and with as little drama as possible is crucial to maintaining the agility that this kind of organization needs.

Aside from the Django Admin providing the editing interface, everything that comes into and goes out of VIAL happens through APIs. These are fully documented: I want people to be able to build against the APIs independently, especially for things like data import.

After seeing significant success with PostgreSQL JSON already, I'm considering using it to add even more API-driven flexibility to VIAL in the future. Allowing our client developers to start collecting a new piece of data from our volunteers in an existing JSON field, then migrating that into a separate column once it has proven its value, is very tempting indeed.
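As a sketch of how that could work, with invented model and key names:

from myapp.models import CallReport  # hypothetical model

# Phase 1: clients start writing {"web_banked": true} into an existing
# JSONField called extra_data - no schema migration needed yet
experimental = CallReport.objects.filter(extra_data__web_banked=True)
print(experimental.count())

# Phase 2: once the key has proven its value, add a real boolean column in a
# schema migration and backfill it from extra_data in a data migration.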

Open source tools we are using

An incomplete list of open source packages we are using for VIAL so far:

pydantic - as a validation layer for some of the API endpoints
social-auth-app-django - to integrate with Auth0
django-cors-headers
python-jose - for JWTs, which were already in use by our Airtable caller app
django-reversion and django-reversion-compare - to provide a diffable, revertable history of some of our core models
django-admin-tools - which adds a handy customizable menu to the admin, good for linking to additional custom tools
django-migration-linter - to help avoid accidentally shipping migrations that could cause downtime during a deploy
pytest-django, time-machine and pytest-httpx - for our unit tests
sentry-sdk, honeycomb-beeline and prometheus-client - for error logging and observability

Want to help out?

VaccinateCA is hiring! It's an interesting gig, because the ultimate goal is to end the pandemic and put this non-profit permanently out of business. So if you want to help end things faster, get in touch.

VaccinateCA is hiring a handful of engineers to help scale our data ingestion and display by more than an order of magnitude.

If you'd like to register interest: https://t.co/BSvi40sW1M

Generalists welcome. Three subprojects; Python backend/pedestrian front-end JS.

- Patrick McKenzie (@patio11) April 7, 2021
TIL this week

Language-specific indentation settings in VS Code
Efficient bulk deletions in Django
Using unnest() to use a comma-separated string as the input to an IN query

Releases this week

json-post: 0.2 - (3 total releases) - 2021-04-11
Tool for posting JSON to an API, broken into pages

airtable-export: 0.7.1 - (10 total releases) - 2021-04-09
Export Airtable data to YAML, JSON or SQLite files on disk

django-sql-dashboard: 0.6a0 - (13 total releases) - 2021-04-09
Django app for building dashboards using raw SQL queries

Damien Bod

Securing Blazor Web assembly using Cookies and Auth0

The article shows how an ASP.NET Core Blazor web assembly UI hosted in an ASP.NET Core application can be secured using cookies. Auth0 is used as the identity provider. The trusted application is protected using the Open ID Connect code flow with a secret and using PKCE. The API calls are protected using the secure […]

The article shows how an ASP.NET Core Blazor web assembly UI hosted in an ASP.NET Core application can be secured using cookies. Auth0 is used as the identity provider. The trusted application is protected using the Open ID Connect code flow with a secret and using PKCE. The API calls are protected using the secure cookie and anti-forgery tokens to protect against CSRF. This architecture is also known as the Backends for Frontends (BFF) Pattern.

Code: https://github.com/damienbod/SeparatingApisPerSecurityLevel

Blogs in this series

Securing Blazor Web assembly using Cookies and Azure AD Securing Blazor Web assembly using Cookies and Auth0

The application was built as described in the previous blog in this series. Please refer to that blog for implementation details about the WASM application, user session and anti-forgery tokens. Setting up the Auth0 authentication and the differences are described in this blog.

An Auth0 account is required and a Regular Web Application was set up for this. This is not an SPA application and must always be deployed with a backend which can keep a secret. The WASM client can only use the APIs on the same domain and uses cookies. All application authentication is implemented in the trusted backend and the secure data is encrypted in the cookie.

The Microsoft.AspNetCore.Authentication.OpenIdConnect Nuget package is used to add the authentication to the ASP.NET Core application. User secrets are used for configuration, which holds the sensitive Auth0 data.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
    <WebProject_DirectoryAccessLevelKey>1</WebProject_DirectoryAccessLevelKey>
    <UserSecretsId>de0b7f31-65d4-46d6-8382-30c94073cf4a</UserSecretsId>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\Client\BlazorAuth0Bff.Client.csproj" />
    <ProjectReference Include="..\Shared\BlazorAuth0Bff.Shared.csproj" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.Server" Version="5.0.5" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="5.0.5" NoWarn="NU1605" />
    <PackageReference Include="Microsoft.AspNetCore.Authentication.OpenIdConnect" Version="5.0.5" NoWarn="NU1605" />
    <PackageReference Include="IdentityModel" Version="5.1.0" />
    <PackageReference Include="IdentityModel.AspNetCore" Version="3.0.0" />
  </ItemGroup>

</Project>

The ConfigureServices method in the Startup class of the ASP.NET Core Blazor server application is used to add the authentication. The Open ID Connect code flow with PKCE and a client secret is used for the default challenge and a cookie is used to persist the tokens if authenticated. The Blazor client WASM uses the cookie to access the API.

Open ID Connect is configured to match the Auth0 settings for the client. A client secret is required and used to authenticate the application. The PKCE option is set explicitly in the client configuration. The required scopes are set so that the profile and email are returned; these are standard OIDC scopes. The user info endpoint is used to return the profile data, which keeps the id_token small. The tokens are saved and, if authentication succeeds, the data is persisted to an identity cookie. The logout is configured as documented in Auth0's example.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(options =>
{
    options.Cookie.Name = "__Host-BlazorServer";
    options.Cookie.SameSite = SameSiteMode.Lax;
})
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = $"https://{Configuration["Auth0:Domain"]}";
    options.ClientId = Configuration["Auth0:ClientId"];
    options.ClientSecret = Configuration["Auth0:ClientSecret"];
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.Scope.Clear();
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("email");
    options.CallbackPath = new PathString("/signin-oidc");
    options.ClaimsIssuer = "Auth0";
    options.SaveTokens = true;
    options.UsePkce = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters.NameClaimType = "name";
    options.Events = new OpenIdConnectEvents
    {
        // handle the logout redirection
        OnRedirectToIdentityProviderForSignOut = (context) =>
        {
            var logoutUri = $"https://{Configuration["Auth0:Domain"]}/v2/logout?client_id={Configuration["Auth0:ClientId"]}";

            var postLogoutUri = context.Properties.RedirectUri;
            if (!string.IsNullOrEmpty(postLogoutUri))
            {
                if (postLogoutUri.StartsWith("/"))
                {
                    // transform to absolute
                    var request = context.Request;
                    postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase + postLogoutUri;
                }
                logoutUri += $"&returnTo={ Uri.EscapeDataString(postLogoutUri)}";
            }

            context.Response.Redirect(logoutUri);
            context.HandleResponse();

            return Task.CompletedTask;
        }
    };
});

The Configure method is implemented to require authentication. The UseAuthentication extension method is required. Our endpoints are added as in the previous blog.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // IdentityModelEventSource.ShowPII = true;
    JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

    app.UseHttpsRedirection();
    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();

    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers();
        endpoints.MapFallbackToPage("/_Host");
    });
}

The Auth0 configuration can be found in the sample application, or you can configure this directly in the Auth0 UI and copy the values. Three properties are required. I added these to the user secrets during development. If I deployed this to Azure, I would add these to an Azure Key Vault and use managed identities to access the secrets.

"Auth0": { "Domain": "your-domain-in-auth0", "ClientId": "--in-secrets--", "ClientSecret": "--in-secrets--" }

Now everything runs and you can use ASP.NET Core Blazor BFF with Auth0. We don't need any access tokens in the browser. This was really simple to configure and only standard ASP.NET Core Nuget packages are used. Security best practices are supported by Auth0 and it is really easy to set up. In production I would force MFA and FIDO2 if possible.

Links

Securing Blazor Web assembly using Cookies and Azure AD

https://auth0.com/

https://docs.microsoft.com/en-us/aspnet/core/blazor/components/prerendering-and-integration?view=aspnetcore-5.0&pivots=webassembly#configuration

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

https://docs.microsoft.com/en-us/aspnet/core/blazor/security

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/additional-scenarios

Sunday, 11. April 2021

Hyperonomy Digital Identity Lab

Trusted Digital Web Glossary (TDW Glossary): All-In View (latest version)


TDW Glossary: All-In View (latest version)

Simon Willison

Quoting Jacques Chester

In general, relying only on natural keys is a nightmare. Double nightmare if it's PII. Natural keys only work if you are flawlessly omniscient about the domain. And you aren't. — Jacques Chester

In general, relying only on natural keys is a nightmare. Double nightmare if it's PII. Natural keys only work if you are flawlessly omniscient about the domain. And you aren't.

Jacques Chester


Virtual Democracy

On Science Preprints: academic publishing takes a quantum leap into the present

Academic journals are becoming the vacuum tubes of the Academy 2.0 enterprise; they are already described and defined more by their limitations than by their advantages. In their early decades, they served us well, until they didn’t. After the transition to an academy-internal publication economy, powered by ePrint services hosted across the planet, journals will not be missed. That individual acad
Academic journals are becoming the vacuum tubes of the Academy 2.0 enterprise; they are already described and defined more by their limitations than by their advantages. In their early decades, they served us well, until they didn’t. After the transition to an academy-internal publication economy, powered by ePrint services hosted across the planet, journals will not be missed. That individual academic libraries should need to continue to pony up for thousands of journal subscriptions for decades to come is now an idea only in the Xeroxed business models of for-profit publishers. Everyone else is looking for a way out; and the internet awaits.

Saturday, 10. April 2021

Bill Wendel's Real Estate Cafe

Housing recovery or iCovery? 10 iFactors driving unsustainable price spikes

Anyone reading the Boston Globe’s Spring House Hunt articles this week online or in print this weekend?  To put them into context, sharing my comment… The post Housing recovery or iCovery? 10 iFactors driving unsustainable price spikes first appeared on Real Estate Cafe.

Anyone reading the Boston Globe’s Spring House Hunt articles this week online or in print this weekend?  To put them into context, sharing my comment…

The post Housing recovery or iCovery? 10 iFactors driving unsustainable price spikes first appeared on Real Estate Cafe.

Tuesday, 06. April 2021

The Dingle Group

SSI in IoT, The SOFIE Project

Decentralized Identifiers and Verifiable Credentials are starting to make their way into the world of IoT. There are many ongoing research projects funded by EU and private sector organizations as well as an increasing number of DLT based IoT projects that are including DIDs and VCs as a core component of their solutions. For the 22nd Vienna Digital Identity Meetup* we hosted three of the lead

Decentralized Identifiers and Verifiable Credentials are starting to make their way into the world of IoT. There are many ongoing research projects funded by EU and private sector organizations as well as an increasing number of DLT based IoT projects that are including DIDs and VCs as a core component of their solutions.

For the 22nd Vienna Digital Identity Meetup* we hosted three of the lead researchers from the EU H2020 funded SOFIE Project. The SOFIE Project wrapped up at the end of last year; a key part of this research focused on the use of SSI concepts in three IoT sectors (energy, supply chain, and mixed reality gaming), targeting the integration of SSI without requiring changes to the existing IoT systems.

Our three presenters were from two different European research universities: Aalto University (Dr. Dmitrij Lagutin and Dr. Yki Kortesniemi) and Athens University of Economics and Business (Dr. Nikos Fotiou).

The presentation covered four areas of interest for SSI in the IoT sector:

DIDs and VCs on constrained devices

Access control using the W3C Web of Things (WoT) Thing Description

did:self method

Ephemeral DIDs and Ring signatures

Each of these research areas is integrated into real-world use cases and connected to the sectors that were part of the SOFIE project's mandate.

(Note: there were some ‘technical issues’ at the start of the event and the introduction part of the presentation has been truncated, but the good news is that all of our presenters' content is there.)

To listen to a recording of the event please check out the link: https://vimeo.com/530442817

Time markers:

0:00:00 - SOFIE Project Introduction, (Dr. Dmitrij Lagutin)

0:02:33 - DIDs and VCs on constrained devices

0:14:00 - Access Control for WoT using VCs (Dr. Nikos Fotiou)

0:33:23 - did:self method

0:46:00 - Ephemeral DIDs and Ring Signatures (Dr. Yki Kortesniemi)

1:07:29 - Wrap-up & Upcoming Events


Resources

The SOFIE Project Slide deck: download

And as a reminder, we continue to have online only events.

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*The Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technologists on the new opportunities that arise with a high assurance digital identity created by the reduction risk and strengthened provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.

Monday, 05. April 2021

Simon Willison

Behind GitHub’s new authentication token formats

Behind GitHub’s new authentication token formats This is a really smart design. GitHub's new tokens use a type prefix of "ghp_" or "gho_" or a few others depending on the type of token, to help support mechanisms that scan for accidental token publication. A further twist is that the last six characters of the tokens are a checksum, which means token scanners can reliably distinguish a real toke

Behind GitHub’s new authentication token formats

This is a really smart design. GitHub's new tokens use a type prefix of "ghp_" or "gho_" or a few others depending on the type of token, to help support mechanisms that scan for accidental token publication. A further twist is that the last six characters of the tokens are a checksum, which means token scanners can reliably distinguish a real token from a coincidental string without needing to check back with the GitHub database. "One other neat thing about _ is it will reliably select the whole token when you double click on it" - what a useful detail!
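As a rough illustration of why prefixes help scanners, a naive detector needs little more than a regular expression; this sketch approximates the prefixes and length and skips the checksum step entirely, so treat it as illustrative only:

import re

# Prefix set and minimum length are approximations; no checksum validation here
TOKEN_RE = re.compile(r"\b(?:ghp|gho|ghu|ghs|ghr)_[A-Za-z0-9]{30,}\b")


def find_candidate_tokens(text):
    return TOKEN_RE.findall(text)


print(find_candidate_tokens("token = ghp_" + "a" * 36))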

Via Hacker News


Damien Bod

Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic

This article shows how verifiable credentials can be created in ASP.NET Core for decentralized identities using the Trinsic platform which is a Self-sovereign identity implementation with APIs to integrate. The verifiable credentials can be downloaded to your digital wallet if you have access and can be used in separate application which understands the Trinsic APIs. […]

This article shows how verifiable credentials can be created in ASP.NET Core for decentralized identities using the Trinsic platform which is a Self-sovereign identity implementation with APIs to integrate. The verifiable credentials can be downloaded to your digital wallet if you have access and can be used in separate application which understands the Trinsic APIs.

Code: https://github.com/swiss-ssi-group/TrinsicAspNetCore

Blogs in this series

Getting started with Self Sovereign Identity SSI Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic Verifying Verifiable Credentials in ASP.NET Core for Decentralized Identities using Trinsic

Setup

We want to implement the flow shown in the following figure. The National Driving License application is responsible for issuing driver licenses and administering licenses for users who have authenticated correctly. The user can see his or her driver license and a verifiable credential displayed as a QR code, which can be used to add the credential to a digital wallet. When the application generates the credential, it adds the credential DID to the blockchain ledger with the cryptographic proof of the issuer and the document. When you scan the QR code, the DID gets validated and is added to the wallet along with the requested claims. The digital wallet must be able to find the DID and the schema on the correct network, which means searching the correct blockchain ledger. A good wallet should take care of this for you. The schema is required so that the data in the DID document can be understood.

Trinsic Setup

Trinsic is used to connect to the blockchain and create the DIDs and credentials in this example. Trinsic provides good getting started docs.

In Trinsic, you need to create an organisation for the Issuer application.

Click on the details of the organisation to get the API key. This is required for the application. This API key cannot be replaced or updated, so if you make a mistake and lose it, or commit it to code, you would have to create a new organisation. It is also important to note the network: this is where you can find the DIDs for the credentials produced by this issuer.

To issue credentials, you need to create a template or schema with the claims which will be issued in the credential. The issuer application provides the values for these claims.

Implementing the ASP.NET Core Issuer

The verifiable credentials issuer is implemented in an ASP.NET Core application using Razor pages and Identity. This application needs to authenticate the users before issuing a verifiable credential for them. FIDO2 with the correct authentication flow would be a good choice, as this would protect against phishing. You could use credentials as well, if the users of the application had a trusted ID, but you would still have to protect against phishing. The quality of the credentials issued depends on the security of the issuing application. If the application has weak user authentication, then the credentials cannot be trusted. For banks, government IDs or driving licenses, a high level of security is required. Open ID Connect FAPI with FIDO2 would make a good solution to authenticate the user. A user with a trusted government-issued credential together with FIDO2 would also work.

The ASP.NET Core application initializes the services and adds the Trinsic client using the API key from the organisation which issues the credentials. The Trinsic.ServiceClients Nuget package is used for the Trinsic integration. ASP.NET Core Identity is used to add and remove users and to add driving licenses for the users in the administration part of the application. MFA should be set up, but as this is a demo, I have not forced this.

public void ConfigureServices(IServiceCollection services) { services.AddScoped<TrinsicCredentialsService>(); services.AddScoped<DriverLicenseService>(); services.AddTrinsicClient(options => { // For CredentialsClient and WalletClient // API key of National Driving License (Organisation which does the verification) options.AccessToken = Configuration["Trinsic:ApiKey"]; // For ProviderClient // options.ProviderKey = providerKey; }); services.AddDbContext<ApplicationDbContext>(options => options.UseSqlServer( Configuration.GetConnectionString("DefaultConnection"))); services.AddDatabaseDeveloperPageExceptionFilter(); services.AddIdentity<IdentityUser, IdentityRole>( options => options.SignIn.RequireConfirmedAccount = false) .AddEntityFrameworkStores<ApplicationDbContext>() .AddDefaultTokenProviders(); services.AddSingleton<IEmailSender, EmailSender>(); services.AddScoped<IUserClaimsPrincipalFactory<IdentityUser>, AdditionalUserClaimsPrincipalFactory>(); services.AddAuthorization(options => { options.AddPolicy("TwoFactorEnabled", x => x.RequireClaim("amr", "mfa") ); }); services.AddRazorPages(); }

User secrets are used to add the secrets required for the application in development. The secrets can be added to the Json secrets file and not to the code source. If deploying this to Azure, the secrets would be read from Azure Key vault. The application requires the Trinsic API Key and the credential template definition ID created in Trinsic studio.

{ "ConnectionStrings": { "DefaultConnection": "--db-connection-string--" }, "Trinsic": { "ApiKey": "--your-api-key-organisation--", "CredentialTemplateDefinitionId": "--Template-credential-definition-id--" } }

The driving license service is responsible for creating driver license for each user. This is just an example of logic and is not related to SSI.

using Microsoft.EntityFrameworkCore; using NationalDrivingLicense.Data; using System.Threading.Tasks; namespace NationalDrivingLicense { public class DriverLicenseService { private readonly ApplicationDbContext _applicationDbContext; public DriverLicenseService(ApplicationDbContext applicationDbContext) { _applicationDbContext = applicationDbContext; } public async Taskbool> HasIdentityDriverLicense(string username) { if (!string.IsNullOrEmpty(username)) { var driverLicense = await _applicationDbContext.DriverLicenses.FirstOrDefaultAsync( dl => dl.UserName == username && dl.Valid == true ); if (driverLicense != null) { return true; } } return false; } public async Task<DriverLicense> GetDriverLicense(string username) { var driverLicense = await _applicationDbContext.DriverLicenses.FirstOrDefaultAsync( dl => dl.UserName == username && dl.Valid == true ); return driverLicense; } public async Task UpdateDriverLicense(DriverLicense driverLicense) { _applicationDbContext.DriverLicenses.Update(driverLicense); await _applicationDbContext.SaveChangesAsync(); } } }

The Trinsic credentials service is responsible for creating the verifiable credentials. It uses the users drivers license and creates a new credential using the Trinsic client API using the CreateCredentialAsync method. The claims must match the template created in the studio. A Trinsic specific URL is returned. This can be used to create a QR Code which can be scanned from a Trinsic digital wallet.

public class TrinsicCredentialsService { private readonly ICredentialsServiceClient _credentialServiceClient; private readonly IConfiguration _configuration; private readonly DriverLicenseService _driverLicenseService; public TrinsicCredentialsService(ICredentialsServiceClient credentialServiceClient, IConfiguration configuration, DriverLicenseService driverLicenseService) { _credentialServiceClient = credentialServiceClient; _configuration = configuration; _driverLicenseService = driverLicenseService; } public async Task<string> GetDriverLicenseCredential(string username) { if (!await _driverLicenseService.HasIdentityDriverLicense(username)) { throw new ArgumentException("user has no valid driver license"); } var driverLicense = await _driverLicenseService.GetDriverLicense(username); if (!string.IsNullOrEmpty(driverLicense.DriverLicenseCredentials)) { return driverLicense.DriverLicenseCredentials; } string connectionId = null; // Can be null | <connection identifier> bool automaticIssuance = false; IDictionary<string, string> credentialValues = new Dictionary<String, String>() { {"Issued At", driverLicense.IssuedAt.ToString()}, {"Name", driverLicense.Name}, {"First Name", driverLicense.FirstName}, {"Date of Birth", driverLicense.DateOfBirth.Date.ToString()}, {"License Type", driverLicense.LicenseType} }; CredentialContract credential = await _credentialServiceClient .CreateCredentialAsync(new CredentialOfferParameters { DefinitionId = _configuration["Trinsic:CredentialTemplateDefinitionId"], ConnectionId = connectionId, AutomaticIssuance = automaticIssuance, CredentialValues = credentialValues }); driverLicense.DriverLicenseCredentials = credential.OfferUrl; await _driverLicenseService.UpdateDriverLicense(driverLicense); return credential.OfferUrl; } }

The DriverLicenseCredentials Razor page uses the Trinsic service and returns the credentials URL if the user has a valid drivers license.

using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc.RazorPages; using NationalDrivingLicense.Data; namespace NationalDrivingLicense.Pages { public class DriverLicenseCredentialsModel : PageModel { private readonly TrinsicCredentialsService _trinsicCredentialsService; private readonly DriverLicenseService _driverLicenseService; public string DriverLicenseMessage { get; set; } = "Loading credentials"; public bool HasDriverLicense { get; set; } = false; public DriverLicense DriverLicense { get; set; } public string CredentialOfferUrl { get; set; } public DriverLicenseCredentialsModel(TrinsicCredentialsService trinsicCredentialsService, DriverLicenseService driverLicenseService) { _trinsicCredentialsService = trinsicCredentialsService; _driverLicenseService = driverLicenseService; } public async Task OnGetAsync() { DriverLicense = await _driverLicenseService.GetDriverLicense(HttpContext.User.Identity.Name); if (DriverLicense != null) { var offerUrl = await _trinsicCredentialsService .GetDriverLicenseCredential(HttpContext.User.Identity.Name); DriverLicenseMessage = "Add your driver license credentials to your wallet"; CredentialOfferUrl = offerUrl; HasDriverLicense = true; } else { DriverLicenseMessage = "You have no valid driver license"; } } } }

The Razor page template displays the QR code and information about the driver license issued to the logged in user.

@page @model NationalDrivingLicense.Pages.DriverLicenseCredentialsModel @{ } <h3>@Model.DriverLicenseMessage</h3> <br /> <br /> @if (Model.HasDriverLicense) { <div class="container-fluid"> <div class="row"> <div class="col-sm"> <div class="qr" id="qrCode"></div> </div> <div class="col-sm"> <div> <img src="~/ndl_car_01.png" width="200" alt="Driver License"> <div> <b>Driver Licence: @Html.DisplayFor(model => model.DriverLicense.UserName)</b> <hr /> <dl class="row"> <dt class="col-sm-4">Issued</dt> <dd class="col-sm-8"> @Model.DriverLicense.IssuedAt.ToString("MM/dd/yyyy") </dd> <dt class="col-sm-4"> @Html.DisplayNameFor(model => model.DriverLicense.Name) </dt> <dd class="col-sm-8"> @Html.DisplayFor(model => model.DriverLicense.Name) </dd> <dt class="col-sm-4">First Name</dt> <dd class="col-sm-8"> @Html.DisplayFor(model => model.DriverLicense.FirstName) </dd> <dt class="col-sm-4">License Type</dt> <dd class="col-sm-8"> @Html.DisplayFor(model => model.DriverLicense.LicenseType) </dd> <dt class="col-sm-4">Date of Birth</dt> <dd class="col-sm-8"> @Model.DriverLicense.DateOfBirth.ToString("MM/dd/yyyy") </dd> <dt class="col-sm-4">Issued by</dt> <dd class="col-sm-8"> @Html.DisplayFor(model => model.DriverLicense.Issuedby) </dd> <dt class="col-sm-4"> @Html.DisplayNameFor(model => model.DriverLicense.Valid) </dt> <dd class="col-sm-8"> @Html.DisplayFor(model => model.DriverLicense.Valid) </dd> </dl> </div> </div> </div> </div> </div> } @section scripts { <script src="~/js/qrcode.min.js"></script> <script type="text/javascript"> new QRCode(document.getElementById("qrCode"), { text: "@Html.Raw(Model.CredentialOfferUrl)", width: 300, height: 300 }); $(document).ready(() => { document.getElementById('begin_token_check').click(); }); </script> }

When the application is started, you can register and create a new license in the license administration.

Add licences as required. The credentials will not be created here, only when you try to get a driver license as a user.

The QR code of the license is displayed which can be scanned and added to your Trinsic digital wallet.

Notes

This works fairly well but has a number of problems. The digital wallets are vendor specific, and the QR code and credential links depend on the product used to create them. The wallet implementations and the URL created for the credentials are all vendor specific and rely on the goodwill of the different vendors' implementations. This requires an RFC-style specification if SSI is to become easy to use and mainstream. Without this, users would require n wallets for all the different applications and would also have problems using credentials between different systems.

Another problem is the organisation API keys used to represent the issuer or verifier applications. If these API keys get leaked, which they will, the keys are hard to replace.

Using the wallet, the user also needs to know which network to use to load the credentials, or to log in to your product. A typical user will not know where to find the required DID.

If signing in using the wallet credentials, the application does not protect against phishing. This is not good enough for high security authentication. FIDO2 and WebAuthn should be used if handling such sensitive data as this is designed for.

Self sovereign identity is at a very early stage but holds lots of potential. A lot will depend on how easy it is to use, how easy it is to implement, and how easily credentials can be shared between systems. The quality of the credential will depend on the quality of the application issuing it.

In a follow up blog to this one, Matteo will use the verifiable credentials added to the digital wallet and verify them in a second application.

Links

https://studio.trinsic.id/

https://www.youtube.com/watch?v=mvF5gfMG9ps

https://github.com/trinsic-id/verifier-reference-app

https://docs.trinsic.id/docs/tutorial

Self sovereign identity

https://techcommunity.microsoft.com/t5/identity-standards-blog/ion-we-have-liftoff/ba-p/1441555


Simon Willison

Render single selected county on a map

Render single selected county on a map Another experiment at the intersection of Datasette and Observable notebooks. This one imports a full Datasette table (3,200 US counties) using streaming CSV and loads that into Observable's new Search and Table filter widgets. Once you select a single county a second Datasette SQL query (this time retuning JSON) fetches a GeoJSON representation of that cou

Render single selected county on a map

Another experiment at the intersection of Datasette and Observable notebooks. This one imports a full Datasette table (3,200 US counties) using streaming CSV and loads that into Observable's new Search and Table filter widgets. Once you select a single county a second Datasette SQL query (this time returning JSON) fetches a GeoJSON representation of that county which is then rendered as SVG using D3.

Via @simonw

Sunday, 04. April 2021

Simon Willison

Spatialite Speed Test

Spatialite Speed Test Part of an excellent series of posts about SpatiaLite from 2012 - here John C. Zastrow reports on running polygon intersection queries against a 1.9GB database file in 40 seconds without an index and 0.186 seconds using the SpatialIndex virtual table mechanism.

Spatialite Speed Test

Part of an excellent series of posts about SpatiaLite from 2012 - here John C. Zastrow reports on running polygon intersection queries against a 1.9GB database file in 40 seconds without an index and 0.186 seconds using the SpatialIndex virtual table mechanism.
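The SpatialIndex pattern he describes is worth knowing: you filter against the virtual table, handing it a search_frame geometry. Here's a rough sketch in Python with invented database and table names, assuming a SQLite build that can load the mod_spatialite extension:

import sqlite3

conn = sqlite3.connect("census.sqlite")  # hypothetical database
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")

SQL = """
    select t.name
    from towns as t, counties as c
    where c.name = :county
      and within(t.geometry, c.geometry)
      and t.rowid in (
        select rowid from SpatialIndex
        where f_table_name = 'towns' and search_frame = c.geometry
      )
"""
for (name,) in conn.execute(SQL, {"county": "Suffolk"}):
    print(name)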


Animated choropleth of vaccinations by US county

Last week I mentioned that I've recently started scraping and storing the CDC's per-county vaccination numbers in my cdc-vaccination-history GitHub repository. This week I used an Observable notebook and d3's TopoJSON support to render those numbers on an animated choropleth map. The full code is available at https://observablehq.com/@simonw/us-county-vaccinations-choropleth-map From scrape

Last week I mentioned that I've recently started scraping and storing the CDC's per-county vaccination numbers in my cdc-vaccination-history GitHub repository. This week I used an Observable notebook and d3's TopoJSON support to render those numbers on an animated choropleth map.

The full code is available at https://observablehq.com/@simonw/us-county-vaccinations-choropleth-map

From scraper to Datasette

My scraper for this data is a single line in a GitHub Actions workflow:

curl https://covid.cdc.gov/covid-data-tracker/COVIDData/getAjaxData?id=vaccination_county_condensed_data \
  | jq . > counties.json

I pipe the data through jq to pretty-print it, just to get nicer diffs.

My build_database.py script then iterates over the accumulated git history of that counties.json file and uses sqlite-utils to build a SQLite table:

for i, (when, hash, content) in enumerate(
    iterate_file_versions(".", ("counties.json",))
):
    try:
        counties = json.loads(content)["vaccination_county_condensed_data"]
    except ValueError:
        # Bad JSON
        continue
    for county in counties:
        id = county["FIPS"] + "-" + county["Date"]
        db["daily_reports_counties"].insert(
            dict(county, id=id), pk="id", alter=True, replace=True
        )

The resulting table can be seen at cdc/daily_reports_counties.

From Datasette to Observable

Observable notebooks are my absolute favourite tool for prototyping new visualizations. There are examples of pretty much anything you could possibly want to create, and the Observable ecosystem actively encourages forking and sharing new patterns.

Loading data from Datasette into Observable is easy, using Datasette's various HTTP APIs. For this visualization I needed to pull two separate things from Datasette.

Firstly, for any given date I need the full per-county vaccination data. Here's the full table filtered for April 2nd for example.

Since that's 3,221 rows Datasette's JSON export would need to be paginated... but Datasette's CSV export can stream all 3,000+ rows in a single request. So I'm using that, fetched using the d3.csv() function:

county_data = await d3.csv(
  `https://cdc-vaccination-history.datasette.io/cdc/daily_reports_counties.csv?_stream=on&Date=${county_date}&_size=max`
);

In order to animate the different dates, I need a list of available dates. I can get those with a SQL query:

select distinct Date from daily_reports_counties order by Date

Datasette's JSON API has a ?_shape=arrayfirst option which will return a single JSON array of the first values in each row, which means I can do this:

https://cdc-vaccination-history.datasette.io/cdc.json?sql=select%20distinct%20Date%20from%20daily_reports_counties%20order%20by%20Date&_shape=arrayfirst

And get back just the dates as an array:

[ "2021-03-26", "2021-03-27", "2021-03-28", "2021-03-29", "2021-03-30", "2021-03-31", "2021-04-01", "2021-04-02", "2021-04-03" ]

Mike Bostock has a handy Scrubber implementation which can provide a slider with the ability to play and stop iterating through values. In the notebook that can be used like so:

viewof county_date = Scrubber(county_dates, {
  delay: 500,
  autoplay: false
})

county_dates = (await fetch(
  "https://cdc-vaccination-history.datasette.io/cdc.json?sql=select%20distinct%20Date%20from%20daily_reports_counties%20order%20by%20Date&_shape=arrayfirst"
)).json()

import { Scrubber } from "@mbostock/scrubber"

Drawing the map

The map itself is rendered using TopoJSON, an extension to GeoJSON that efficiently encodes topology.

Consider the map of 3,200 counties in the USA: since counties border each other, most of those border polygons end up duplicating each other to a certain extent.

TopoJSON only stores each shared boundary once, but still knows how they relate to each other which means the data can be used to draw shapes filled with colours.

I'm using the https://d3js.org/us-10m.v1.json TopoJSON file built and published with d3. Here's my JavaScript for rendering that into an SVG map:

{
  const svg = d3
    .create("svg")
    .attr("viewBox", [0, 0, width, 700])
    .style("width", "100%")
    .style("height", "auto");

  svg
    .append("g")
    .selectAll("path")
    .data(
      topojson.feature(topojson_data, topojson_data.objects.counties).features
    )
    .enter()
    .append("path")
    .attr("fill", function(d) {
      if (!county_data[d.id]) {
        return 'white';
      }
      let v = county_data[d.id].Series_Complete_65PlusPop_Pct;
      return d3.interpolate("white", "green")(v / 100);
    })
    .attr("d", path)
    .append("title") // Tooltip
    .text(function(d) {
      if (!county_data[d.id]) {
        return '';
      }
      return `${
        county_data[d.id].Series_Complete_65PlusPop_Pct
      }% of the 65+ population in ${county_data[d.id].County}, ${county_data[d.id].StateAbbr.trim()} have had the complete vaccination`;
    });

  return svg.node();
}

Next step: a plugin

Now that I have a working map, my next goal is to package this up as a Datasette plugin. I'm hoping to create a generic choropleth plugin which bundles TopoJSON for some common maps - probably world countries, US states and US counties to start off with - but also allows custom maps to be supported as easily as possible.

Datasette 0.56

Also this week, I shipped Datasette 0.56. It's a relatively small release - mostly documentation improvements and bug fixes, but I've also bundled SpatiaLite 5 with the official Datasette Docker image.

TIL this week

Useful Markdown extensions in Python

Releases this week

airtable-export: 0.6 - (8 total releases) - 2021-04-02
Export Airtable data to YAML, JSON or SQLite files on disk

datasette: 0.56 - (85 total releases) - 2021-03-29
An open source multi-tool for exploring and publishing data

Identity Woman

Quoted In: Everything You Need to Know About “Vaccine Passports”

Earlier this week I spoke to Molly who wrote this article about so called “vaccine passports” we don’t call them that though (Only government’s issue passports). Digital Vaccination Certificates would be more accurate. Early on when the Covid-19 Credentials Initiative was founded I joined to help. In December the initiative joined LFPH and I become […] The post Quoted In: Everything You Need to

Earlier this week I spoke to Molly, who wrote this article about so-called “vaccine passports”, though we don't call them that (only governments issue passports). Digital Vaccination Certificates would be more accurate. Early on, when the Covid-19 Credentials Initiative was founded, I joined to help. In December the initiative joined LFPH and I became […]

The post Quoted In: Everything You Need to Know About “Vaccine Passports” appeared first on Identity Woman.


Article: CoinTelegraph, Women Changing Face of Enterprise Blockchain

This article is about what it says it is and quotes me. CoinTelegraph, Women Changing Face of Enterprise Blockchain The post Article: CoinTelegraph, Women Changing Face of Enterprise Blockchain appeared first on Identity Woman.

This article is about what it says it is and quotes me. CoinTelegraph, Women Changing Face of Enterprise Blockchain

The post Article: CoinTelegraph, Women Changing Face of Enterprise Blockchain appeared first on Identity Woman.

Thursday, 01. April 2021

Simon Willison

Quoting Aaron Straup Cope

If you measure things by foot traffic we [the SFO Museum] are one of the busiest museums in the world. If that is the case we are also one of the busiest museums in the world that no one knows about. Nothing in modern life really prepares you for the idea that a museum should be part of an airport. San Francisco, as I've mentioned, is funny that way. — Aaron Straup Cope

If you measure things by foot traffic we [the SFO Museum] are one of the busiest museums in the world. If that is the case we are also one of the busiest museums in the world that no one knows about. Nothing in modern life really prepares you for the idea that a museum should be part of an airport. San Francisco, as I've mentioned, is funny that way.

Aaron Straup Cope


Bill Wendel's Real Estate Cafe

Use LAUGHtivism or HACKtivism to protect homebuyers from overpaying in BLIND bidding wars?

It’s April Fool’s Day again, a favorite opportunity to poke fun at #GamesREAgentsPlay and irrational exuberance in real estate.  Each year we ask, how can… The post Use LAUGHtivism or HACKtivism to protect homebuyers from overpaying in BLIND bidding wars? first appeared on Real Estate Cafe.

It’s April Fool’s Day again, a favorite opportunity to poke fun at #GamesREAgentsPlay and irrational exuberance in real estate.  Each year we ask, how can…

The post Use LAUGHtivism or HACKtivism to protect homebuyers from overpaying in BLIND bidding wars? first appeared on Real Estate Cafe.

Wednesday, 31. March 2021

Mike Jones: self-issued

Second Version of FIDO2 Client to Authenticator Protocol (CTAP) advanced to Public Review Draft

The FIDO Alliance has published this Public Review Draft for the FIDO2 Client to Authenticator Protocol (CTAP) specification, bringing the second version of FIDO2 one step closer to becoming a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them for user verification enhancements, manageability, enterprise features, and

The FIDO Alliance has published this Public Review Draft for the FIDO2 Client to Authenticator Protocol (CTAP) specification, bringing the second version of FIDO2 one step closer to becoming a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them user verification enhancements, manageability, enterprise features, and an Apple attestation format.

This parallels the similar progress of the closely related second version of the W3C Web Authentication (WebAuthn) specification, which recently achieved Proposed Recommendation (PR) status.


DustyCloud Brainstorms

The hurt of this moment, hopes for the future

Of the deeper thoughts I might give to this moment, I have given them elsewhere. For this blogpost, I just want to speak of feelings... feelings of hurt and hope. I am reaching out, collecting the feelings of those I see around me, writing them in my mind's journal. Though …

Of the deeper thoughts I might give to this moment, I have given them elsewhere. For this blogpost, I just want to speak of feelings... feelings of hurt and hope.

I am reaching out, collecting the feelings of those I see around me, writing them in my mind's journal. Though I hold clear positions in this moment, there are few roots of feeling and emotion about the moment I feel I haven't steeped in myself at some time. Sometimes I tell this to friends, and they think maybe I am drifting from a mutual position, and this is painful for them. Perhaps they fear this could constitute or signal some kind of betrayal. I don't know what to say: I've been here too long to feel just one thing, even if I can commit to one position.

So I open my journal of feelings, and here I share some of the pages collecting the pain I see around me:

The irony of a movement wanting to be so logical and above feelings being drowned in them.

The feelings of those who found a comfortable and welcoming home in a world of loneliness, and the split between despondence and outrage for that unraveling.

The feelings of those who wanted to join that home too, but did not feel welcome.

The pent up feelings of those unheard for so long, uncorked and flowing.

The weight and shadow of a central person who seems to feel things so strongly but cannot, and does not care to learn to, understand the feelings of those around them.

I flip a few pages ahead. The pages are blank, and I interpret this as new chapters for us to write, together.

I hope we might re-discover the heart of our movement.

I hope we can find a place past the pain of the present, healing to build the future.

I hope we can build a new home, strong enough to serve us and keep us safe, but without the walls, moat, and throne of a fortress.

I hope we can be a movement that lives up to our claims: of justice, of freedom, of human rights, to bring these to everyone, especially those we haven't reached.


Simon Willison

Quoting Corey Quinn

This teaches us that—when it’s a big enough deal—Amazon will lie to us. And coming from the company that runs the production infrastructure for our companies, stores our data, and has been granted an outsized position of trust based upon having earned it over 15 years, this is a nightmare. — Corey Quinn

This teaches us that—when it’s a big enough deal—Amazon will lie to us. And coming from the company that runs the production infrastructure for our companies, stores our data, and has been granted an outsized position of trust based upon having earned it over 15 years, this is a nightmare.

Corey Quinn

Tuesday, 30. March 2021

Hyperonomy Digital Identity Lab

Why is a Glossary like a Network of Balls connected by Elastics?

Why is it good to think of a Glossary as a Network of Balls connected by Elastics? From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>Sent: March 24, 2021 8:47 AMTo: Leonard Rosenthol <lrosenth@adobe.com>; David Waite <dwaite@pingidentity.com>; Jim St.Clair <jim.stclair@lumedic.io>Cc: Drummond Reed … Continue reading →

Why is it good to think of a Glossary as a Network of Balls connected by Elastics?

From: Michael Herman (Trusted Digital Web) <mwherman@parallelspace.net>
Sent: March 24, 2021 8:47 AM
To: Leonard Rosenthol <lrosenth@adobe.com>; David Waite <dwaite@pingidentity.com>; Jim St.Clair <jim.stclair@lumedic.io>
Cc: Drummond Reed <drummond.reed@evernym.com>; sankarshan <sankarshan@dhiway.com>; W3C Credentials CG (Public List) <public-credentials@w3.org>; Hardman, Daniel <Daniel.Hardman@sicpa.com>
Subject: TDW Glossary [was: The “self-sovereign” problem (was: The SSI protocols challenge)]

RE: First and foremost, without a definition/clarification of “Verifiable”, both of your statements are ambiguous. 

Leonard, I don’t disagree with your feedback. What I have been rolling out to the community is selected neighborhoods of closely related terms from the TDW Glossary project.

[A Glossary is] like a bunch of balls explicitly connected by elastics: add a new ball to the model and all of the existing balls in the neighborhood need to adjust themselves. The more balls you have in the network, the more stable the network becomes. So it is with visual glossaries of terms, definitions, and relationships.

Michael Herman, March 2021

Verifiable and Verifiable Data Registry are in the model but currently, they don’t have specific verified definitions.

The TDW Glossary is a huge, visual, highly interrelated, multi-disciplinary, multi-standard, 6-domain, semantic network model (https://hyperonomy.com/2021/03/10/tdw-glossary-management-platform-gmp-initial-results/) that includes:

Common English Language concepts – various dictionaries and reference web sites
Sovrin Glossary concepts – https://sovrin.org/library/glossary/ and https://mwherman2000.github.io/sovrin-arm/
Enterprise Architecture concepts – https://pubs.opengroup.org/architecture/archimate3-doc/
HyperLedger Indy Architecture Reference Model concepts – https://mwherman2000.github.io/indy-arm/
HyperLedger Aries Architecture Reference Model concepts – https://mwherman2000.github.io/indy-arm/
Did-core concepts – https://w3c.github.io/did-core/
VC concepts – https://w3c.github.io/vc-data-model/
Others?

All new and updated terms, their definitions, metadata, and relationships, are automatically being published here: https://github.com/mwherman2000/tdw-glossary-1 (e.g. https://github.com/mwherman2000/tdw-glossary-1/blob/e4b96a0a21dd352f67b6bd93fdac66a1599ed35f/model/motivation/Principle_id-72c83ae5b01346b7892e6d2a076e787f.xml)

Other references:

https://hyperonomy.com/?s=tdw+glossary
https://hyperonomy.com/2016/04/06/definitions-controlled-vocabulary-dictionary-glossary-taxonomy-folksonomy-ontology-semantic-network/

Here’s a snapshot of what the TDW Glossary “all-in” view looks like today (aka the “network of balls connected by elastics”). The TDW Glossary has (or will very soon have) more than 500 terms and definitions plus associated metadata and relationships.

Thank you for the feedback, Leonard. Keep it coming.

Cheers,
Michael

Figure 1. TDW Glossary: “All In” View

MyDigitalFootprint

Why framing “data” as an asset or liability is dangerous

If there is one thing that can change finance’s power and dominance as a decision-making tool, it is the rest of the data. According to Google (2020), 3% of company data is finance data when considered part of an entire company’s data lake. McKinsey reports that 90% of company decisions are based on finance data alone, the same 3% of data.   If you are in accounting, audit or financ

If there is one thing that can change finance’s power and dominance as a decision-making tool, it is the rest of the data. According to Google (2020), 3% of company data is finance data when considered part of an entire company’s data lake. McKinsey reports that 90% of company decisions are based on finance data alone, the same 3% of data.  

If you are in accounting, audit or finance shoes, how would you play the game to retain control when something more powerful comes on the scene?  You ensure that data is within your domain, you bring out the big guns and declare that data is just another asset or liability, and its rightful position is on the balance sheet.  We get to value it as part of the business. If we reflect on it, finance has been shoring up its position for a while.  HR, tech, processes, methods, branding, IP, legal, and culture have become subservient and controlled by finance. In the finance control game, we are all just an asset or liability and set a budget!  In the context of control and power and how to make better decisions, as a CDO, your friends and partners are human resources, tech, legal, operations, marketing, sales and strategy; your threat and enemy is finance.  

A critical inquiry you will have on day 0 is what weight we, as an organisation, put on the aspects of good decision making. How do we order, and with what authority, data, finance, team, processes/methods, justifications, regulation, culture/brand/reputation, compliance/oversight/governance, reporting and stewardship? What is more important to us as a team: the trustworthiness or truthfulness of a decision? The quality of a decision? The explainability of a decision? The ability to communicate a decision, or diversity in a decision? If data is controlled by finance and seen as an asset or liability, how does it affect your decision-making capability?

As the CDO, if you determine that the axis of control remains with the CEO/CFO, it may be time to align your skills with a CEO who gets data.

Note to the CEO

It is your choice, but your new CDO is your new CFO in terms of power for decision making, which means there will be a swing in the power game. Your existing CEO/CFO axis is under threat, and we know that underhanded and political games will emerge in all changes of power. You will lead the choice about how this plays out. Given that all CEOs need to find their next role every 5 to 7 years, and that those roles will only ever require more interaction with the CDO and data, defending the CFO’s power plays will not bring favour to your next role. The CFO remains critically essential, but the existing two-way axis (CEO/CFO) has to become a three-way game that enables the CDO to shine until a new power balance is reached.


Monday, 29. March 2021

Damien Bod

Getting started with Self Sovereign Identity SSI

The blog is my getting started with Self Sovereign identity. I plan to explore developing solutions using Self Sovereign Identities, the different services and evaluate some of the user cases in the next couple of blogs. Some of the definitions are explained, but mainly it is a list of resources, links for getting started. I’m […]

This blog is my getting started with Self Sovereign Identity. I plan to explore developing solutions using Self Sovereign Identities and the different services, and to evaluate some of the use cases in the next couple of blogs. Some of the definitions are explained, but mainly it is a list of resources and links for getting started. I’m developing this blog series together with Matteo, and we will create several repos and blogs together.

Blogs in this series

Getting started with Self Sovereign Identity SSI
Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic
Verifying Verifiable Credentials in ASP.NET Core for Decentralized Identities using Trinsic

What is Self Sovereign Identity SSI?

Self-sovereign identity is an emerging solution, built on blockchain technology, for digital identities which gives the management of identities to the users and not to organisations. It makes it possible to solve consent and data privacy for your data, and to authenticate your identity data across organisations or revoke it. It does not solve the process of authenticating users in applications. You can authenticate into your application using credentials from any trusted issuer, but this is vulnerable to phishing attacks. FIDO2 would be a better solution for this, together with an OIDC flow for the application type. Or, if you could use your credentials together with a registered FIDO2 key for the application, this would work. The user data is stored in a digital wallet, which is usually kept on your mobile phone. Recovery of this wallet does not seem so clear, but a lot of work is going on here which should result in good solutions. The credential DIDs are stored on a blockchain, and to verify the credentials you need to search the same blockchain network.

What are the players?

Digital Identity, Decentralized identifiers (DIDs)

A digital identity can be expressed as a universal identifier which can be owned and can be publicly shared. A digital identity provides a way of representing a subject (user, organisation, thing), a way of exchanging credentials with other identities, and a way to verify the identity without storing data on a shared server. This can all be done across organisational boundaries. A digital identity can be found using a decentralized identifier (DID), and working group standards are in the process of specifying this. The DIDs are saved to a blockchain network and can be resolved from it.

https://w3c.github.io/did-core/

The DIDs representing identities are published to a blockchain network.
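
To make the identifier format concrete, here is a minimal C# sketch of the three-part structure of a DID; the “example” method and the identifier value are illustrative placeholders rather than a real registered DID method.

using System;

// A DID has the shape "did:<method>:<method-specific-id>".
// The method name tells a resolver which network or driver to use;
// resolving the DID returns a DID document containing public keys and service endpoints.
var did = "did:example:123456789abcdefghi";
var parts = did.Split(':', 3);

Console.WriteLine($"scheme: {parts[0]}");             // always "did"
Console.WriteLine($"method: {parts[1]}");             // e.g. which ledger/network to resolve against
Console.WriteLine($"method-specific id: {parts[2]}"); // the identifier within that method's network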

Digital wallet

A digital wallet is a database which stores all the verified credentials which you have added to your data. This wallet is usually stored on your mobile phone and needs encryption. You want to prevent all third-party access to this wallet. Some type of recovery process is required if you use a digital wallet. A user can add or revoke credentials in the wallet. When you own a wallet, you publish a public key to a blockchain network. A DID is returned representing the digital identity for this wallet, and a public DID is saved to the network which can be used to authenticate anything interacting with the wallet. Digital wallets seem to be vendor-locked at the moment, which will be problematic for mainstream adoption.

Credentials, Verifiable credentials

https://www.w3.org/TR/vc-data-model/

A verifiable credential is an immutable set of claims created by an issuer which can be verified. A verifiable credential has claims, metadata and a proof to validate the credential. A credential can be saved to a digital wallet, so no data is persisted anywhere apart from with the issuer and in the digital wallet.

This credential can then be used anywhere.

The credential is created by the issuer for the holder of the credential. This credential is presented to the verifier by the holder from a digital wallet and the verifier can validate the credential using the issuer DID which can be resolved from the blockchain network.
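
To make this structure more concrete, the following minimal C# sketch shows the general shape of a credential as described above (claims, metadata and proof); the DIDs, claim values and proof value are placeholders, and this is not a complete or signable credential.

using System;
using System.Collections.Generic;
using System.Text.Json;

// Illustrative only: the three parts of a verifiable credential.
var credential = new Dictionary<string, object>
{
    ["@context"] = new[] { "https://www.w3.org/2018/credentials/v1" },
    ["type"] = new[] { "VerifiableCredential" },

    // Metadata: who issued the credential and when.
    ["issuer"] = "did:example:issuer123",
    ["issuanceDate"] = "2021-03-29T00:00:00Z",

    // Claims: statements the issuer makes about the holder (the credential subject).
    ["credentialSubject"] = new Dictionary<string, object>
    {
        ["id"] = "did:example:holder456",
        ["degree"] = "Bachelor of Science"
    },

    // Proof: a signature the verifier can check by resolving the issuer DID.
    ["proof"] = new Dictionary<string, object>
    {
        ["type"] = "Ed25519Signature2018",
        ["verificationMethod"] = "did:example:issuer123#key-1",
        ["jws"] = "placeholder-signature-value"
    }
};

Console.WriteLine(JsonSerializer.Serialize(credential, new JsonSerializerOptions { WriteIndented = true }));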

Networks

The networks are different distributed blockchains with verifiable data registries using DIDs. You need to know how to resolve each DID (including the issuer DID) to verify or use a credential, and so you need to know where to find the network on which the DID is persisted. The networks are really just persisted distributed databases. Sovrin or other blockchains can be used as a network. The blockchain holds public key DIDs and DID documents, i.e. credentials and schemas.

Energy consumption

This is something I would like to evaluate: if this technology were to become widespread, how much energy would it cost? I have no answers to this at the moment.

Youtube videos, channels

An introduction to decentralized identities | Azure Friday

SSI Meetup

An introduction to Self-Sovereign Identity

Intro to SSI for Developers: Architecting Software Using Verifiable Credentials

SSI Ambassador

Decentralized identity explained

Evernym channel

Books, Blogs, articles, info

Self-Sovereign Identity: The Ultimate Beginners Guide!

Decentralized Identity Foundation

SELF-SOVEREIGN IDENTITY PDF by Marcos Allende Lopez

https://en.wikipedia.org/wiki/Self-sovereign_identity

https://decentralized-id.com/

https://github.com/animo/awesome-self-sovereign-identity

Organisations

https://identity.foundation/

https://github.com/decentralized-identity

sovrin

People

Drummond Reed @drummondreed
Rieks Joosten
Oskar van Deventer
Alex Preukschat @AlexPreukschat
Danny Strockis @dStrockis
Tomislav Markovski @tmarkovski
Riley Hughes @rileyphughes
Michael Boyd @michael_boyd_
Marcos Allende Lopez @MarcosAllendeL
Adrian Doerk @doerkadrian
Mathieu Glaude @mathieu_glaude
Markus Sabadello @peacekeeper
Ankur Patel @_AnkurPatel
Daniel Ƀrrr @csuwildcat
Matthijs Hoekstra @mahoekst
Kaliya-Identity Woman @IdentityWoman

Products

https://docs.trinsic.id/docs

https://docs.microsoft.com/en-us/azure/active-directory/verifiable-credentials/

Companies

https://tykn.tech/

https://trinsic.id/

Microsoft Azure AD

evernym

northernblock.io

Specs

https://w3c.github.io/did-core/

https://w3c.github.io/vc-data-model/

https://www.w3.org/TR/vc-data-model/

Links

https://github.com/swiss-ssi-group

https://www.hyperledger.org/use/aries

sovrin

https://github.com/evernym

what-is-self-sovereign-identity

https://techcommunity.microsoft.com/t5/identity-standards-blog/ion-we-have-liftoff/ba-p/1441555

Sunday, 28. March 2021

Jon Udell

Acknowledgement of uncertainty

In 2018 I built a tool to help researchers evaluate a proposed set of credibility signals intended to enable automated systems to rate the credibility of news stories. Here are examples of such signals: – Authors cite expert sources (positive) – Title is clickbaity (negative) And my favorite: – Authors acknowledge uncertainty (positive) Will the … Continue reading Acknowledgement of uncertainty

In 2018 I built a tool to help researchers evaluate a proposed set of credibility signals intended to enable automated systems to rate the credibility of news stories.

Here are examples of such signals:

– Authors cite expert sources (positive)

– Title is clickbaity (negative)

And my favorite:

– Authors acknowledge uncertainty (positive)

Will the news ecosystem ever be able to label stories automatically based on automatic detection of such signals, and if so, should it? These are open questions. The best way to improve news literacy may be the SIFT method advocated by Mike Caulfield, which shifts attention away from intrinsic properties of individual news stories and advises readers to:

– Stop

– Investigate the source

– Find better coverage

– Trace claims, quotes, and media to original context

“The goal of SIFT,” writes Charlie Warzel in Don’t Go Down the Rabbit Hole, “isn’t to be the arbiter of truth but to instill a reflex that asks if something is worth one’s time and attention and to turn away if not.”

SIFT favors extrinsic signals over the intrinsic ones that were the focus of the W3C Credible Web Community Group. But intrinsic signals may yet play an important role, if not as part of a large-scale automated labeling effort then at least as another kind of news literacy reflex.

This morning, in How public health officials can convince those reluctant to get the COVID-19 vaccine, I read the following:

What made these Trump supporters shift their views on vaccines? Science — offered straight-up and with a dash of humility.

The unlikely change agent was Dr. Tom Frieden, who headed the Centers for Disease Control and Prevention during the Obama administration. Frieden appealed to facts, not his credentials. He noted that the theory behind the vaccine was backed by 20 years of research, that tens of thousands of people had participated in well-controlled clinical trials, and that the overwhelming share of doctors have opted for the shots.

He leavened those facts with an acknowledgment of uncertainty. He conceded that the vaccine’s potential long-term risks were unknown. He pointed out that the virus’s long-term effects were also uncertain.

“He’s just honest with us and telling us nothing is 100% here, people,” one participant noted.

Here’s evidence that acknowledgement of uncertainty really is a powerful signal of credibility. Maybe machines will be able to detect it and label it; maybe those labels will matter to people. Meanwhile, it’s something people can detect and do care about. Teaching students to value sources that acknowledge uncertainty, and discount ones that don’t, ought to be part of any strategy to improve news literacy.

Saturday, 27. March 2021

Jon Udell

The paradox of abundance

Several years ago I bought two 5-packs of reading glasses. There was a 1.75-diopter set for books, magazines, newspapers, and my laptop (when it’s in my lap), plus a 1.25-diopter set for the screens I look at when working in my Captain Kirk chair. They were cheap, and the idea was that they’d be an … Continue reading The paradox of abundance

Several years ago I bought two 5-packs of reading glasses. There was a 1.75-diopter set for books, magazines, newspapers, and my laptop (when it’s in my lap), plus a 1.25-diopter set for the screens I look at when working in my Captain Kirk chair. They were cheap, and the idea was that they’d be an abundant resource. I could leave spectacles lying around in various places, there would always be a pair handy, no worries about losing them.

So of course I did lose them like crazy. At one point I bought another 5-pack but still, somehow, I’m down to a single 1.75 and a single 1.25. And I just realized it’s been that way for quite a while. Now that the resource is scarce, I value it more highly and take care to preserve it.

I’m sorely tempted to restock. It’s so easy! A couple of clicks and two more 5-packs will be here tomorrow. And they’re cheap, so what’s not to like?

For now, I’m resisting the temptation because I don’t like the effect such radical abundance has had on me. It’s ridiculous to lose 13 pairs of glasses in a couple of years. I can’t imagine how I’d explain that to my pre-Amazon self.

For now, I’m going to try to assign greater value to the glasses I do have, and treat them accordingly. And when I finally do lose them, I hope I’ll resist the one-click solution. I thought it was brilliant at the time, and part of me still does. But it just doesn’t feel good.

Friday, 26. March 2021

Identity Woman

IPR - what is it? why does it matter?

I am writing this essay to support those of you who are confused about why some of the technologists keep going on and on about Intellectual Property Rights (IPR). First of all, what the heck is it? Why does it matter? How does it work? Why should we get it figured out “now” rather than […] The post IPR - what is it? why does it matter? appeared first on Identity Woman.

I am writing this essay to support those of you who are confused about why some of the technologists keep going on and on about Intellectual Property Rights (IPR). First of all, what the heck is it? Why does it matter? How does it work? Why should we get it figured out “now” rather than […]

The post IPR - what is it? why does it matter? appeared first on Identity Woman.


Tim Bouma's Blog

Verifiable Credentials: Mapping to a Generic Policy Terminology

Note: This post is the sole opinion and perspective of the author. Over the past several months I have been diligently attempting to map the dynamically evolving world of trust frameworks and verifiable credentials into a straightforward and hopefully timeless terminology that can be used for policymaking. The storyboard diagram above is what I’ve come up with so far. Counterparty — f

Note: This post is the sole opinion and perspective of the author.

Over the past several months I have been diligently attempting to map the dynamically evolving world of trust frameworks and verifiable credentials into a straightforward and hopefully timeless terminology that can be used for policymaking. The storyboard diagram above is what I’ve come up with so far.

Counterparty — for every consequential relationship or transaction there are at a minimum two parties involved. Regardless of whether the interaction is collaborative, competitive, zero- or positive-sum, they can be considered as counterparties to one another.
Claim — the something that is the matter of concern between the counterparties; it can be financial, tangible, intangible; something in the present, or a promise of something in the future.
Offer — a counterparty offers something that usually relates to a Claim.
Commit — a counterparty can commit to its Offer.
Present — a counterparty can present an Offer (or a Claim).
Accept — on the other side, the other counterparty accepts an Offer.
Issue — an Offer, once formed, can be issued in whatever form — usually a document or credential that is signed by the counterparty.
Hold — an Offer can be held. How it is held depends on its embodiment (e.g. digital, paper, verbal, etc.)
Verify — an Offer, or more specifically its embodiment, can be verified for its origin and integrity.

All of the above is made possible by:

Business Trust — how the counterparties decide to trust one another. This is the non-technical aspect of agreements, rules, treaties, legislation, etc.

And underpinned by:

Technical Trust: how the counterparties prove to one another that their trust has not been compromised. This is the technical aspect that includes cryptographic protocols, data formats, etc.

Why is this useful? When writing policy, you need a succinct model which is clear enough for subsequent interpretation. To do this, you need conceptual buckets to drop things into. Yes, this model is likely to change, but it’s my best and latest crack at synthesizing the complex world of digital credentials into an abstraction that might be useful to help us align existing solutions while adopting exciting new capabilities.
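
As a purely illustrative aside, the terminology above can be read as a small set of conceptual buckets. The C# sketch below is my own reading of the list, not part of any trust framework or specification; the type and member names are made up for illustration.

// Hypothetical types sketching the storyboard terminology; names are illustrative only.
public record Counterparty(string Name);

public record Claim(string Description);             // the matter of concern between the counterparties

public record Offer(Counterparty From, Claim About); // something offered that usually relates to a Claim

public enum LifecycleAction                          // what counterparties do with an Offer
{
    Commit,   // a counterparty commits to its Offer
    Present,  // a counterparty presents an Offer (or a Claim)
    Accept,   // the other counterparty accepts an Offer
    Issue,    // the Offer is embodied, e.g. as a signed document or credential
    Hold,     // the embodiment is held (digital, paper, verbal, ...)
    Verify    // the embodiment is checked for origin and integrity
}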

As always, I am open for comment and constructive feedback. You know where to find me.


MyDigitalFootprint

#lockdown one year in, and I now question. What is a Better Normal?

I have written my fair amount over lockdown, but a core tenant of my hope was to leave the old normal behind, not wanting a new normal but a better normal.  The old normal (pre-#covid19) was as exhausting as I felt like a dog whose sole objective was to chase its own tail.   I perceived that a new normal (post-lockdown) would be straight back to doing the same.  My hope was fo


I have written my fair amount over lockdown, but a core tenet of my hope was to leave the old normal behind, not wanting a new normal but a better normal. The old normal (pre-#covid19) was exhausting: I felt like a dog whose sole objective was to chase its own tail. I perceived that a new normal (post-lockdown) would be straight back to doing the same. My hope was for a “better normal” where I got to change/pick a new objective.

Suppose I unpack the old and new normal with a time lens on both ideas. What am I really (really, honestly) doing differently hour by hour, day by day, week by week, month by month and year by year, today compared to the old normal? My new brighter, shiny, hope-filled, better normal looks remarkably like the old when viewed through the lens of time. Meetings, calls, reading, writing, communicating and thinking. Less travel and walking has been replaced with more allocation to the other activities, but I have lost the time I used to throw away: the time to reflect, time to dwell, time to ponder, time to prepare.

Time and its allocation indicate that the old and the new normals look the same. Where has “My Better Normal” gone?



If the work to be done is the same, then time is not an appropriate measure for observing change. How about looking at my processes and methods? My methods of work have changed, but not for the better. My old normal heuristics and rules were better, as I created more time to walk and travel and, therefore, time to reflect and prepare. I try to allocate more time to different approaches and methods, but “screen-time” appears to have only one determinant - attention (distraction and diversion included).

So it appears to me that if I want a better normal, I have to change the work to be done (a nod to Clayton Christensen). There has been one change in the work to be done which has been detrimental from my perspective; I have been forced, like everyone, to exchange time with family and friends for either time alone or jobs.

So as the anniversary passes and I reflect on a second lockdown birthday, have I spent enough time changing the work to be done? Probably not, but I now plan to.


Thursday, 25. March 2021

Nader Helmy

Why we’re launching MATTR VII

It’s no secret we need a better web. The original vision of an open and decentralized network that’s universally accessible continues to be a north star for those working to design the future of digital infrastructure for everyday people. Despite the progress that has been made in democratising access to massive amounts of information, the dire state of cybersecurity and privacy on the internet to

It’s no secret we need a better web. The original vision of an open and decentralized network that’s universally accessible continues to be a north star for those working to design the future of digital infrastructure for everyday people. Despite the progress that has been made in democratising access to massive amounts of information, the dire state of cybersecurity and privacy on the internet today presents significant barriers to access for too many of our most vulnerable populations. We started MATTR because we believe that standards, transparency, and openness are not only better for users; they make for stronger systems and more resilient networks. We recognize that a decentralized web of digital trust, based on transparency, consent, and verifiable data, can help us address critical challenges on a global scale. It represents a significant opportunity to give people real agency and control over their digital lives.

Our story

At its inception, we chose “MATTR” as a moniker because we strongly believed that the movement towards more decentralized systems will fundamentally change the nature of data and privacy on the internet. Matter, in its varying states, forms the building blocks of the universe, symbolically representing the capacity for change and transformation that allows us all to grow and adapt. In another sense, people matter, and the impact of decisions we make as builders of technology extend beyond ourselves. It’s a responsibility we take seriously, as Tim Berners-Lee puts it, “to preserve new frontiers for the common good.” We proudly bear the name MATTR and the potential it represents as we’ve built out our little universe of products.

In September 2020, we introduced our decentralized identity platform. Our goal was to deliver standards-based digital trust to developers in a scalable manner. We designed our platform with a modular security architecture to enable our tools to work across many different contexts. By investing deeply in open standards and open source communities as well as developing insights through collaboration and research, we realized that developers want to use something that’s convenient without compromising on flexibility, choice, or security. That’s why we launched our platform with standards-based cryptography and configurable building blocks to suit a broad array of use cases and user experiences in a way that can evolve as technology matures.

At the same time, we’ve continued to work in open source and open standards communities with greater commitment than ever to make sure we’re helping to build a digital ecosystem that can support global scale. We launched MATTR Learn and MATTR Resources as hubs for those interested in these new technologies, developing educational content to explore concepts around decentralized identity, offering guided developer tutorials and videos, and providing documentation and API references. We also unveiled a new website, introduced a novel approach to selective disclosure of verifiable credentials, built and defined a new secure messaging standard, developed a prototype for paper-based credentials to cater for low-tech environments, and made a bridge to extend OpenID Connect with verifiable credentials. We’ve consistently released tools and added features to make our products more secure, extensible, and easy to use. In parallel, we also joined the U.S. Department of Homeland Security’s SVIP program in October to help advance the goals of decentralized identity and demonstrate provable interoperability with other vendors in a transparent and globally-visible manner. Zooming out a bit, our journey at MATTR is part of a much larger picture of passionate people working in collaborative networks across the world to make this happen.

The bigger picture

It has been an incredible year for decentralized and self-sovereign identity as a whole. In light of the global-scale disruption of COVID-19, the demand for more secure digital systems became even more critical to our everyday lives. Start-ups, corporations, governments, and standards organizations alike have been heavily investing in building technology and infrastructure to support an increasingly digital world. We’re seeing this innovation happen across the globe, from the work being done by the DHS Silicon Valley Innovation Program to the Pan-Canadian Trust Framework and New Zealand Digital Identity Trust Framework. Many global leaders are stepping up to support and invest in more privacy-preserving digital security, and for good reason. Recent legislation like GDPR and CCPA have made the role of big tech companies and user data rights increasingly important, providing a clear mandate for a wave of change that promises to strengthen the internet for the public good. This provides an incredible catalyst for all the work happening in areas such as cryptography, decentralized computing and digital governance. Just in the last year, we’ve seen the following advancements:

Secure Data Storage WG created at DIF and W3C to realize an interoperable technology for encrypted and confidential data storage
Decentralized Identifiers v1.0 specification reached “Candidate Recommendation” stage at the W3C, establishing stability in anticipation of standardization later this year
Sidetree protocol v1.0 released at DIF, providing a layer-2 blockchain solution for scalable Decentralized Identifiers built on top of ledgers such as Bitcoin and Ethereum
DIDComm Messaging v2.0 specification launched at DIF, a new protocol for secure messaging based on Decentralized Identifiers and built on JOSE encryption standards
Self-Issued OpenID (SIOP) became an official working group item at the OpenID Foundation, advancing the conversation around the role of identity providers on the web
Google’s WebID project started developing features to allow the browser to mediate interactions between end-users and identity providers in a privacy-preserving way

For more information on how all of these technologies are interconnected, read our latest paper, The State of Identity on the Web.

In addition, as part of our involvement with the DHS SVIP program, in March of this year we participated in the DHS SVIP Interoperability Plugfest. This event saw 8 different companies, representing both human-centric identity credentials as well as asset-centric supply chain traceability credentials, come together to showcase standards-compliance and genuine cross-vendor and cross-platform interoperability via testing and live demonstrations. The full presentation, including demos and videos from the public showcase day, can be found here.

These are just a handful of the significant accomplishments achieved over the last year. It’s been incredibly inspiring to see so many people working towards a common set of goals for the betterment of the web. As we’ve built our products and developed alongside the broader market, we’ve learned quite a bit about how to solve some of the core business and technical challenges associated with this new digital infrastructure. We’ve also gained a lot of insight from working directly with governments and companies across the globe to demonstrate interoperability and build bridges across different technology ecosystems.

Launching MATTR VII

We’ve been hard at work making our decentralized identity platform better than ever, and we’re proud to announce that as of today, we’re ready to support solutions that solve real-world problems for your users, in production — and it’s open to everybody.

That’s why we’re rebranding our platform to MATTR VII. Inspired by the seven states of matter, our platform gives builders and developers all the tools they need at their fingertips to create a whole new universe of decentralized products and applications. We provide all the raw technical building blocks to allow you to create exactly what you have in mind. MATTR VII is composable and configurable to fit your needs, whether you’re a well-established business with legacy systems or a start-up looking to build the next best thing in digital privacy. Best of all, MATTR VII is use-case-agnostic, meaning we’ve baked minimal dependencies into our products so you can use them the way that makes the most sense for you.

Starting today, we’re opening our platform for general availability. Specifically, that means if you’re ready to build a solution to solve a real-world problem for your users, we’re ready to support you. Simply contact us to get the ball rolling and to have your production environment set up. Of course, if you’re not quite ready for that, you can still test drive the platform by signing up for a free trial of MATTR VII to get started right away. It’s an exciting time in the MATTR universe, and we’re just getting started.

We’re continuing to build out features to operationalize and support production use cases. To this end, in the near future we will be enhancing the sign-up and onboarding experience as well as providing tools to monitor your usage of the platform. Please reach out to give us feedback on how we can improve our products to support the solutions you’re building.

We’re excited to be in this new phase of our journey with MATTR VII. It will no doubt be another big year for decentralized identity, bringing us closer to the ultimate goal of bringing cryptography and digital trust to every person on the web.

Follow us on GitHub, Medium or Twitter for further updates.

Why we’re launching MATTR VII was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 24. March 2021

Bill Wendel's Real Estate Cafe

Sweetest Deals of 2020: What’s really happening in luxury real estate during pandemic?

What started as an impassioned thread in a leading agent-to-agent Facebook group yesterday about Generation Priced Out morphed into a debate about whether the housing… The post Sweetest Deals of 2020: What's really happening in luxury real estate during pandemic? first appeared on Real Estate Cafe.

What started as an impassioned thread in a leading agent-to-agent Facebook group yesterday about Generation Priced Out morphed into a debate about whether the housing…

The post Sweetest Deals of 2020: What's really happening in luxury real estate during pandemic? first appeared on Real Estate Cafe.


Margo Johnson

Interoperability is Not a Choice

This post describes Transmute’s approach to interoperable software and includes video and technical results from a recent interoperability demonstration with US DHS SVIP cohort companies. Photo by ian dooley on Unsplash The future of software is all about choice. Customers want to chose technology that best solves their business problems, knowing that they will not be locked in with a v

This post describes Transmute’s approach to interoperable software and includes video and technical results from a recent interoperability demonstration with US DHS SVIP cohort companies.

Photo by ian dooley on Unsplash

The future of software is all about choice.

Customers want to choose technology that best solves their business problems, knowing that they will not be locked in with a vendor if that solution is no longer the best fit.

Businesses are also demanding choice about when and how they consume important data — a reaction to the data silos and expensive systems integrations of the past.

Interoperability moves from theory to reality when companies have meaningful ability to choose. It is predicated on open standards foundations that enable easy movement of data and vendors.

Interoperability with DHS SVIP Companies

Our team was proud to participate in the US Department of Homeland Security Silicon Valley Innovation Program Interoperability Plug-fest this month. DHS SVIP has been leading the charge on interoperability for years now, putting their funding and networks on the table to lead the charge.

This was Transmute’s second time participating as an awarded company of the SVIP program, and we were joined by 7 other companies from around the globe, addressing topics from supply chain traceability to digital assets for humans.

While each company is focused on slightly different industries — and therefore nuanced solutions for those customers — we are all committed (and contractually obligated by the US Government) to implement open standards infrastructure in a way that ensures verifiable information can be issued, consumed, and verified across systems using different technical “stacks”.

Technical foundations for interoperability include the W3C Verifiable Credential Data Model, JSON Linked Data, the Verifiable Credentials HTTP API, and the Credential Handler API. Companies also worked from shared vocabularies based on use case, such as the Traceability Vocabulary that aggregates global supply chain ontologies for use in linked data credentials.
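
As a rough sketch of what this looks like in practice, the snippet below asks an issuer service to issue a credential over HTTP. The base address, the “credentials/issue” path and the payload shape are assumptions made for illustration and may differ from the actual Verifiable Credentials HTTP API and from Transmute’s products.

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Hypothetical issuer service; the endpoint path and payload are assumptions for illustration.
var client = new HttpClient { BaseAddress = new Uri("https://issuer.example.com/") };

var requestBody = JsonSerializer.Serialize(new
{
    credential = new
    {
        type = new[] { "VerifiableCredential" },
        issuer = "did:example:issuer123",
        credentialSubject = new { id = "did:example:subject456" }
    }
});

// The issuer signs and returns the credential; because both sides follow the same
// data model, a different vendor's verifier can later check the proof.
var response = await client.PostAsync(
    "credentials/issue",
    new StringContent(requestBody, Encoding.UTF8, "application/json"));

Console.WriteLine(await response.Content.ReadAsStringAsync());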

The following two videos show examples of interoperability in action using both Transmute and other cohort company systems. Note that the use cases have been simplified to allow for ease of demonstration to diverse audiences.

Transmute and other companies also publicly shared the results of our interoperability testing — Transmute’s results are here.

Interoperability in steel supply chain

Transmute is working directly with US Customs and Border Protection to trace the origins of steel materials using verifiable credentials. This video shows an example of multiple steel supply chain actors exchanging verifiable trade information culminating in a seamless review process from CBP.

Interoperability across industries

We also worked with other cohort companies to demonstrate how important credentials like a vaccination certificate can be used to help supply chain professionals get back to work safely. This demo includes the use of selective disclosure technology as well as off-line verification of a paper credential.

Charting the Course

Interoperability across systems moves the internet towards a more open-network approach for trustworthy exchange of information. Choice is increasingly becoming the network feature that governments and enterprises will not do without. It is pre-competitive table stakes for doing business. The path is clear for those of us developing technology in this space: interoperate, or get out. Fortunately, the competitive “pie” is big enough for all of us.

By creating interoperable systems that can seamlessly exchange trusted information we are creating a global network of information that grows in value as more players enter it.

Transmute is proud to build with talented teams from around the globe, including our cohort friends: Mattr, Mavennet, Mesur, Digital Bazaar, Secure Key, Danube Tech, and Spherity.

Thank you also to the DHS SVIP team for funding this interoperability work, and to our partners at US CBP for your support moving from technology to tactical solutions.

To learn more about Transmute’s platform and solutions contact us today.

Interoperability is Not a Choice was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 23. March 2021

Bill Wendel's Real Estate Cafe

iBuyers compounding confusion, lack of affordable inventory in rigged housing market

“The ShapeShifters of real estate,” that’s what this consumer advocate has called iBuyers for years because they abandon the role of real estate agent during… The post iBuyers compounding confusion, lack of affordable inventory in rigged housing market first appeared on Real Estate Cafe.

“The ShapeShifters of real estate,” that’s what this consumer advocate has called iBuyers for years because they abandon the role of real estate agent during…

The post iBuyers compounding confusion, lack of affordable inventory in rigged housing market first appeared on Real Estate Cafe.


Damien Bod

Setting dynamic Metadata for Blazor Web assembly

This post shows how HTML header meta data can be dynamically updated or changed for a Blazor Web assembly application routes hosted in ASP.NET Core. This can be usually for changing how URL link previews are displayed when sharing links. Code: https://github.com/damienbod/BlazorMetaData Updating the HTTP Header data to match the URL route used in the […]

This post shows how HTML header meta data can be dynamically updated or changed for Blazor Web assembly application routes hosted in ASP.NET Core. This is usually used for changing how URL link previews are displayed when sharing links.

Code: https://github.com/damienbod/BlazorMetaData

Updating the HTTP header data to match the URL route used in the Blazor WASM application can be supported by using a Razor Page host file instead of a static html file. The Razor Page _Host file can use a code-behind class with a model. The model can then be used to display the different values as required. This is a hosted WASM application using ASP.NET Core as the server.

@page "/" @model BlazorMeta.Server.Pages._HostModel @namespace BlazorMeta.Pages @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers @{ Layout = null; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> <meta property="og:type" content="website" /> <meta property="og:title" content="Blazor BFF AAD Cookie 2021 @Model.SiteName" /> <meta property="og:url" content="https://damienbod.com" /> <meta property="og:image" content="https://avatars.githubusercontent.com/u/3442158?s=400&v=4"> <meta property="og:image:height" content="384" /> <meta property="og:image:width" content="384" /> <meta property="og:site_name" content="@Model.SiteName" /> <meta property="og:description" content="@Model.PageDescription" /> <meta name="twitter:site" content="damien_bod" /> <meta name="twitter:card" content="summary" /> <meta name="twitter:description" content="@Model.PageDescription" /> <meta name="twitter:title" content="Blazor BFF AAD Cookie 2021 @Model.SiteName" /> <title>Blazor AAD Cookie</title> <base href="~/" /> <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" /> <link href="css/app.css" rel="stylesheet" /> <link href="BlazorMeta.Client.styles.css" rel="stylesheet" /> <link rel="apple-touch-icon" sizes="512x512" href="icon-512.png" /> </head> <body> <div id="app"> <!-- Spinner --> <div class="spinner d-flex align-items-center justify-content-center" style="position:absolute; width: 100%; height: 100%; background: #d3d3d39c; left: 0; top: 0; border-radius: 10px;"> <div class="spinner-border text-success" role="status"> <span class="sr-only">Loading...</span> </div> </div> </div> @*<component type="typeof(App)" render-mode="WebAssembly" />*@ <div id="blazor-error-ui"> <environment include="Staging,Production"> An error has occurred. This application may no longer respond until reloaded. </environment> <environment include="Development"> An unhandled exception has occurred. See browser dev tools for details. </environment> <a href="" class="reload">Reload</a> <a class="dismiss">🗙</a> </div> <script src="_framework/blazor.webassembly.js"></script> </body> </html>

The code-behind _Host class adds the public properties of the model which are used in the template _cshtml file. The OnGet method sets the values of the properties using the path property of the HTTP request.

using Microsoft.AspNetCore.Mvc.RazorPages;

namespace BlazorMeta.Server.Pages
{
    public class _HostModel : PageModel
    {
        public string SiteName { get; set; } = "damienbod";
        public string PageDescription { get; set; } = "damienbod init description";

        public void OnGet()
        {
            (SiteName, PageDescription) = GetMetaData();
        }

        private (string, string) GetMetaData()
        {
            var metadata = Request.Path.Value switch
            {
                "/counter" => ("damienbod/counter", "This is the meta data for the counter"),
                "/fetchdata" => ("damienbod/fetchdata", "This is the meta data for the fetchdata"),
                _ => ("damienbod", "general description")
            };

            return metadata;
        }
    }
}

The MapFallbackToPage must be set to use the _Host Razor Page layout or fallback file.

app.UseEndpoints(endpoints =>
{
    endpoints.MapRazorPages();
    endpoints.MapControllers();
    endpoints.MapFallbackToPage("/_Host");
});

When the application is deployed to a public server, a URL with the Blazor route can be copied and pasted into the software services or tools, which then display the preview data.

Each service uses different meta data headers and you would need to add the headers with the dynamic content as required. Underneath are some examples of what can be displayed.

LinkedIn URL preview

Slack URL preview

Twitter URL preview

Microsoft teams URL preview

Links:

https://www.w3schools.com/tags/tag_meta.asp

https://cards-dev.twitter.com/validator

https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/markup

https://swimburger.net/blog/dotnet/pre-render-blazor-webassembly-at-build-time-to-optimize-for-search-engines


Just a Theory

Assume Positive Intensifies

How “Assume positive intent” downplays impact, gaslights employees, and absolves leaders of responsibility.

Let’s talk about that well-worn bit of wisdom: “assume positive intent.” On the surface it’s excellent advice: practice empathy by mindfully assuming that people may create issues despite their best intentions. You’ve heard the parables, from Stephen Covey’s paradigm shift on the subway to David Foster Wallace’s latent condemnation of gas-guzzling traffic and soul-sucking supermarkets. Pepsi CEO Indra Nooyi has popularized the notion to ubiquity in corporate America.

In practice, the assumption of positive intent enables some pretty serious anti-patterns.

First, focusing on intent downplays impact. Good intentions don’t change the outcomes of one’s actions: we still must deal with whatever broke. At best, good intentions enable openness to feedback and growth, but do not erase those mistakes.

Which leads us to a more fundamental dilemma. In a piece for Medium last year, Ruth Terry, quoting the Kirwan Institute’s Lena Tenney, summarizes it aptly:

By downplaying actual impact, assuming positive intent can deprioritize the experience of already marginalized people.

“All of this focus on intention essentially remarginalizes a person of color who’s speaking up about racism by telling them that their experience doesn’t matter because the person didn’t mean it that way,” says Tenney, who helped create interactive implicit bias learning tools for the Kirwan Institute.

This remarginalization of the vulnerable seriously undermines the convictions behind “assume positive intent,” not to mention the culture at large. But the impact transcends racial contexts: it appears wherever people present uncomfortable issues to people in a dominant position.

Take the workplace. A brave employee publicly calls out a problematic behavior or practice, often highlighting implicit bias or, at the very least, patterns that contradict the professed values of the organization. Management nods and says, “I’m glad you brought that up, but it’s important for us all to assume positive intent in our interactions with our co-workers.” Then they explain the context for the actions, or, more likely, list potential mitigating details — without the diligence of investigation or even consequences. Assume positive intent, guess at or manufacture explanations, but little more.

This response minimizes the report’s impact to management while simultaneously de-emphasizing the experience of the worker who voiced it. Such brave folks, speaking just a little truth to power, may start to doubt themselves or what they’ve seen. The manager has successfully gaslighted the worker.

Leaders: please don’t do this. The phrase is not “Assume positive intent for me, but not for thee.” Extend the assumption only to the people reporting uncomfortable issues. There’s a damn good chance they came to you only by the assumption of positive intent: if your coworkers thought you had ill-intent, they would not speak at all.

If you feel inclined to defend behavior or patterns based on presumption of good intent, avoid that reflex, too. Good intent may be key to transgressors accepting difficult feedback, but hold them accountable and don’t let assumptions stand on their own. Impact matters, and so must consequences.

Most importantly, never use the assumption of good intent to downplay or dismiss the crucial but uncomfortable or inconvenient feedback brave souls bring to you.

Assume positive intent in yourself, never assert it in others, and know that, regardless of intent, problems still must be addressed without making excuses or devaluing or dismissing the people who have suffered them.

More about… Culture Gaslighting Management Leadership Ruth Terry Lena Tenney

Monday, 22. March 2021

Bill Wendel's Real Estate Cafe

WSJ’s article on oversupply of real estate agents exposes RECartel

Applaud the Wall Street Journal’s headline about “New Realtors Pile Into Hot Housing Market” and am not surprised to find there are now “more real-estate… The post WSJ's article on oversupply of real estate agents exposes RECartel first appeared on Real Estate Cafe.

Applaud the Wall Street Journal’s headline about “New Realtors Pile Into Hot Housing Market” and am not surprised to find there are now “more real-estate…

The post WSJ's article on oversupply of real estate agents exposes RECartel first appeared on Real Estate Cafe.

Saturday, 20. March 2021

Jon Udell

Original memories

Were it not for the Wayback Machine, a lot of my post-1995 writing would now be gone. Since the advent of online-only publications, getting published has been a lousy way to stay published. When pubs change hands, or die, the works of their writers tend to evaporate. I’m not a great self-archivist, despite having better-than-average … Continue reading Original memories

Were it not for the Wayback Machine, a lot of my post-1995 writing would now be gone. Since the advent of online-only publications, getting published has been a lousy way to stay published. When pubs change hands, or die, the works of their writers tend to evaporate.

I’m not a great self-archivist, despite having better-than-average skills for the job. Many but not all of my professional archives are preserved — for now! — on my website. Occasionally, when I reach for a long-forgotten and newly-relevant item, only to find it 404, I’ll dig around and try to resurrect it. The forensic effort can be a big challenge; an even bigger one is avoiding self-blame.

The same thing happens with personal archives. When our family lived in New Delhi in the early 1960s, my dad captured thousands of images. Those color slides, curated in carousels and projected onto our living room wall in the years following, solidified the memories of what my five-year-old self had directly experienced. When we moved my parents to the facility where they spent their last years, one big box of those slides went missing. I try, not always successfully, to avoid blaming myself for that loss.

When our kids were little we didn’t own a videocassette recorder, which was how you captured home movies in that era. Instead we’d rent a VCR from Blockbuster every 6 months or so and spend the weekend filming. It turned out to be a great strategy. We’d set it on a table or on the floor, turn it on, and just let it run. The kids would forget it was there, and we recorded hours of precious daily life in episodic installments.

Five years ago our son-in-law volunteered the services of a friend of his to digitize those tapes, and brought us the MP4s on a thumb drive. I put copies in various “safe” places. Then we moved a couple of times, and when I reached for the digitized videos, they were gone. As were the original cassettes. This time around, there was no avoiding the self-blame. I beat myself up about it, and was so mortified that I hesitated to ask our daughter and son-in-law if they have safe copies. (Spoiler alert: they do.) Instead I’d periodically dig around in various hard drives, clouds, and boxes, looking for files or thumb drives that had to be there somewhere.

During this period of self-flagellation, I thought constantly about something I heard Roger Angell say about Carlton Fisk. Roger Angell was one of the greatest baseball writers, and Carlton Fisk one of the greatest players. One day I happened to walk into a bookstore in Harvard Square when Angell was giving a talk. In the Q and A, somebody asked: “What’s the most surprising thing you’ve ever heard a player say?”

The player was Carlton Fisk, and the surprise was his answer to the question: “How many times have you seen the video clip of your most famous moment?”

That moment is one of the most-watched sports clips ever: Fisk’s walk-off home run in game 6 of the 1975 World Series. He belts the ball deep to left field, it veers toward foul territory, he dances and waves it fair.

So, how often did Fisk watch that clip? Never.

Why not? He didn’t want to overwrite the original memory.

Of course we are always revising our memories. Photographic evidence arguably prevents us from doing so. Is that good or bad? I honestly don’t know. Maybe both.

For a while, when I thought those home videos were gone for good, I tried to convince myself that it was OK. The original memories live in my mind, I hold them in my heart, nothing can take them away, no recording can improve them.

Although that sort of worked, I was massively relieved when I finally fessed up to my negligence and learned that there are safe copies. For now, I haven’t requested them and don’t need to see them. It’s enough to know that they exist.

Friday, 19. March 2021

Mike Jones: self-issued

OAuth 2.0 JWT Secured Authorization Request (JAR) updates addressing remaining review comments

After the OAuth 2.0 JWT Secured Authorization Request (JAR) specification was sent to the RFC Editor, the IESG requested an additional round of IETF feedback. We’ve published an updated draft addressing the remaining review comments, specifically, SecDir comments from Watson Ladd. The only normative change made since the 28 was to change the MIME Type […]

After the OAuth 2.0 JWT Secured Authorization Request (JAR) specification was sent to the RFC Editor, the IESG requested an additional round of IETF feedback. We’ve published an updated draft addressing the remaining review comments, specifically, SecDir comments from Watson Ladd. The only normative change made since draft 28 was to change the MIME Type from “oauth.authz.req+jwt” to “oauth-authz-req+jwt”, per advice from the designated experts.

As a reminder, this specification takes the JWT Request Object from Section 6 of OpenID Connect Core (Passing Request Parameters as JWTs) and makes this functionality available for pure OAuth 2.0 applications – and does so without introducing breaking changes. This is one of a series of specifications bringing functionality originally developed for OpenID Connect to the OAuth 2.0 ecosystem. Other such specifications included OAuth 2.0 Dynamic Client Registration Protocol [RFC 7591] and OAuth 2.0 Authorization Server Metadata [RFC 8414].
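
As a hedged illustration of what a JAR request carries, the sketch below builds the set of claims that would otherwise appear as query parameters; the endpoint, client_id and claim values are placeholders rather than values taken from the specification, and the JWT signing step is omitted.

using System;
using System.Collections.Generic;
using System.Text.Json;

// Authorization request parameters moved into a JWT payload (the "request object").
var requestObjectClaims = new Dictionary<string, object>
{
    ["iss"] = "s6BhdRkqt3",                     // the client_id of the requesting client
    ["aud"] = "https://server.example.com",     // the authorization server
    ["response_type"] = "code",
    ["client_id"] = "s6BhdRkqt3",
    ["redirect_uri"] = "https://client.example.org/cb",
    ["scope"] = "openid",
    ["state"] = "af0ifjsldkj"
};

Console.WriteLine(JsonSerializer.Serialize(requestObjectClaims, new JsonSerializerOptions { WriteIndented = true }));

// Once this payload is signed as a JWT (media type "oauth-authz-req+jwt"), it is passed
// by value in the "request" parameter or by reference via "request_uri", for example:
// GET /authorize?client_id=s6BhdRkqt3&request=<signed JWT>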

The specification is available at:

https://tools.ietf.org/html/draft-ietf-oauth-jwsreq-31

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-oauth-jwsreq-31.html

Thursday, 18. March 2021

MyDigitalFootprint

Is there a requirement for a “Data Attestation” in a Board paper?

This article is about how to ensure Directors gain assurance about “data” that is supporting the recommendations in a Board paper. I have read, written and presented my fair share of Board and Investment Committee papers over the past 25 years. As Directors, we are collectively accountable and responsible for the decisions we take. I can now observe a skills gap regarding “data”, with many b
This article is about how to ensure Directors gain assurance about “data” that is supporting the recommendations in a Board paper.


I have read, written and presented my fair share of Board and Investment Committee papers over the past 25 years. As Directors, we are collectively accountable and responsible for the decisions we take. I can now observe a skills gap regarding “data”, with many board members assuming and trusting the data that forms the basis of what they are asked to approve. There are good processes, methods and procedures for ensuring that any Board papers presented are factual. However, decision making using big data and its associated analysis tools, including the ML and AI which drive automation, is new and requires different expertise at a higher level of detail. Challenging data is different from finding it hard to question in detail any C-suite member on their specific expertise and, more generally, the general counsel, CFO and CTO. The CDO/CIO axis bridges the value line, being both a cost and a revenue. With “data” as the business driver, it remains superficially easier to question costs without understanding the consequences for our future decision-making ability, and even harder to unpack unethical revenue.

A classic “board paper” will likely have the following headings: Introduction, Background, Rationale, Structure/Operations, Illustrative Financials & Scenarios, Competition, Risks and Legal. Case by case, there are always minor adjustments. Finally, some form of recommendation will invite the board to note key facts and approve the action. I believe it is time for the Chair or CEO, with the support of their senior data lead (#CDO), to ask that each board paper has a new section heading called “Data Attestation.” A Data Attestation section will be a declaration that there is traceable evidence and proof of the data, with the presenter acting as a witness in certifying it. Some teams will favour this as an addition to the main flow, some as a new part of legal, others as an appendix, and some will claim it is already inherent in the process. How and where matters little compared to its intent.

Such a section could provide a solution until such time as we can gain sufficient skills at the Board and test data correctly. Yes, there is a high duty of care that is already intrinsic in anyone who presents a board paper (already inherent). However, the data expertise and skills at most senior levels are also well below what we need, because all the politics, bias and complexity is in the weeds, which is both easy not to know and easy to hide. Board members have to continue to question performance metrics (KPI and BSC) to determine the motivation for any decision, but having to trust “data” sets a different standard to those we have with audit, finance, legal and compliance. If nothing else, a “data attestation statement” will set a hurdle for those presenting to prioritise the bias, ethics and consequences of the data used in their proposal.

Having to trust data sets a different standard to those we have with audit, finance, legal and compliance.

Arguments for and against

Key assumptions

Data is critically important to our future and is foundational for decision making going forward.

Data is more complex today and continues to increase in complexity.

The C-suite and leadership team are experts in their disciplines and have deep expertise in their critical areas, but there is a data skills gap.

There is a recognition at the board that data bias and a lack of auditability, provenance, and data lineage can lead to flawed/bad decision making.

Based on these working assumptions, I do not believe that adding a “Data Attestation” section is a long-term fix. To comply with Section 172 of the Companies Act and meet our fiduciary duties, it is an absolute requirement that we upskill. But data is not like marketing, technology, operations, finance or HR - data is new, and the vast majority of boards and senior leadership teams have little experience in big data, data analytics or coding. There is a recognition that education and skills development is the better solution, but in the gap between today and those skills arriving, should we not do something? Critically, I would support introducing a data attestation section with a set date when it falls away.

It is essential to consider this, as insurance companies who offer D&O policies are looking at new clauses related to the capability of Directors who make decisions based on data and their ability to know the data was “fit for purpose” for the decision. Insurance companies need to protect their claims business and might feel that the upskilling will take too long.

Why might this work? Do you get on a plane and ask to pilot it?  Do you go to the hospital with the correct google answer or ask a qualified Doctor?  We need to form our own view that someone has checked whether the pilot and doctor are qualified.  Today, we outsource Audit to a committee because of this same issue; it is complex. But Data is not finance, and data is not an Audit committee issue. Data is a different skill set. 


Each Board has to make its own choice. The easiest is to justify to oneself that our existing processes are good enough and that we are following “best practice”: compliance thinking.   Given the 76 recommendations in the Sir Donald Brydon Review of Audit, assuming that our existing processes are good enough is difficult to justify. If we want to make better decisions with data, we need to make sure we can. 

Recommendation

A strong recommendation would be to put in place an “Attestation Clause”, a drop-dead date, a 2-year mandatory data training program aimed at the senior leadership team and Directors/Board members, and a succession plan that prioritises data skills for new senior and board (inc NXD) roles.

Proposal

A “data attestation” section intends that the board receives a *signed* declaration from the proposer(s) and an independent data expert that the proposer has:

proven attestation of the data used in the board paper, 

proven rights to use the data

shown what difference/delta third-party data makes to the recommendation/outcome

ensured, to best efforts, that there is no bias or selection in the data or analysis

clearly specified any decision making that is or becomes automated 

if relevant, created the hypothesis before the analysis 

run scenarios using different data and tools

not misled the board using data

highlighted any conflicts of interest between their BSC/KPI and the approval sought

The independent auditor should not be the company’s financial auditor or data lake provider; this should be an independent forensic data expert. Audit suggests sampling; this is not about sampling. It is not about creating more hurdles or handing power to an external body; this is about third-party verification and validation. As a company, you build a list of experts and cycle through them regularly. The auditor does not need to see the board paper, the outcome of the analysis or the recommendations; they are there to check the attestation and efficacy from end to end.  Critical will be proof of their expertise and an insurance certificate.    

Whilst this is not the final wording you will use, it is the intent that is important; this does not negate or novate data risks from the risk section.

Example of a Data Attestation section

We certify by our signatures that we, the proposer and the auditor, can prove to the OurCompany (PLC) Board that we have provable attestation of, and rights to, all the data used in this paper’s presentation.   We have presented in this paper the sensitivity of the selected data, model and tools, and have provided evidence that a different selection of data and analysis tools would equally favour the recommendation.  We have tested and can verify that our data, analysis, insights and knowledge are traceable and justifiable.  We declare that there are no Conflicts of Interest, and that no automation of decision making will result from this approval. 




Wednesday, 17. March 2021

Damien Bod

The authentication pyramid

This article looks at the authentication pyramid for signing into different applications. I only compare flows which have user interaction and only compare the 2FA, MFA differences. A lot of incorrect and aggressive marketing from large companies are blurring out the differences so that they can sell their products and so on. When you as […]

This article looks at the authentication pyramid for signing into different applications. I only compare flows which have user interaction, and only compare the 2FA and MFA differences. A lot of incorrect and aggressive marketing from large companies is blurring the differences so that they can sell their products and so on.

When you as a user need to use an application, you need to log in. The process of logging in or signing in requires authentication of the user and the application. The sign-in or login is an authentication process.

To explain the different user and application (identity) authentication possibilities, I created an authentication pyramid diagram from worst to best. FIDO2 is the best way of authenticating, as it is the only one which protects against phishing.

Passwords which rotate

Passwords which rotate without a second factor are the worst way of authenticating your users. Forced password rotation means people must update passwords regularly, which discourages the use of password managers and encourages users to pick something simple which they can remember. In companies which use this policy, a lot of users have a simple password with a two-digit number at the end. You can guess the number by calculating the length of time the user has been at the company and the length of time a password stays active.
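As a toy illustration of that guessing arithmetic (the tenure and rotation interval below are made-up numbers, not anything from a real policy):

// Toy illustration only: estimates the numeric suffix a user is likely to have
// reached under forced rotation, assuming they bump the number by one each time.
function guessPasswordSuffix(monthsAtCompany: number, rotationIntervalMonths: number): number {
  return Math.floor(monthsAtCompany / rotationIntervalMonths);
}

// Three years at the company with a roughly 3-month rotation policy suggests
// a suffix of about 12, e.g. "Summer12".
console.log(guessPasswordSuffix(36, 3)); // 12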

Passwords

Passwords without a second factor which don’t rotate are better than passwords which rotate, as people tend to use better passwords. It is easier to educate the organisation’s users to use a password manager, and then any complicated password can be used without the annoyance of constant rotation. If it’s a pain to use in your daily routine, then people will try to avoid using it. I encourage users to use Bitwarden, but most password managers work well. Ease of use in the browser is important.

SMS MFA

SMS as a second factor is way better than passwords alone. But SMS as a second factor has too many security problems and is too easy to bypass. NIST no longer recommends using SMS as a second factor.

https://pages.nist.gov/800-63-3/sp800-63b.html

Authenticators

Authenticators are a good solution for second-factor authentication and do improve the quality and the security of the authentication process compared to passwords and SMS second factors. Authenticators have many problems, but their major fault is that they do NOT protect against phishing. When using authenticators, your users are still vulnerable to phishing attacks.

If a user accesses a phishing website, the push notification will still get sent to the mobile device, or the OTP can still be validated. A lot of companies are now moving to 2FA using authenticators. One problem with this is that the push notifications from authenticators seem to get sent randomly. Authenticators are NOT enough if you have high security requirements.

Most people will recognise the popup underneath. This popup opens on one of my domains on an almost daily basis. I see users conditioned now to just click the checkbox, enter the code and continue working without even checking which application requested this. It has become routine to just fill this in and click it away so that you can continue working. Sometimes the popup opens simply because the last code has timed out. If someone had acquired your password and was logging in, a few users would just enter the code for the attacker due to this conditioning. Also, if you were accessing your application through a phishing website, you would not notice any difference here and would continue to validate with the code.

Another problem with authenticators is that they require a mobile phone to install the application. To complete the login, you require a phone. Most of us only have one mobile phone, so if you lose your phone, you are locked out of your accounts. Account recovery then comes into play, which is usually a password or SMS; your security is reduced to SMS or password if you use this type of recovery. If recovery requires your IT admin to reset the account, you must wait for IT from your organisation to reset it, but this way the security stays at the authenticator level.

It is important that the account recovery does not use a reduced authentication process.

FIDO2 (Fast IDentity Online)

FIDO2 is the best way to authenticate identities where user interaction is required. FIDO2 protects against phishing. This is the standout feature; none of the other authentication processes protect against it. If you lose your FIDO2 key, then you can use a second FIDO2 key. You as an organisation no longer need to worry about phishing. This could change in the future, but at present FIDO2 protects against phishing.
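To make the phishing-resistance point concrete, here is a minimal browser-side sketch of the WebAuthn ceremony behind a FIDO2 login. The /webauthn/* endpoints and response shapes are hypothetical placeholders (the server side would supply the challenge and verify the assertion); the important part is that the browser binds the request to the current origin via the RP ID, so a look-alike phishing domain cannot obtain a usable signature for the real site.

// Minimal WebAuthn (FIDO2) sign-in sketch. Endpoints and payloads are assumptions.
async function signInWithSecurityKey(): Promise<void> {
  // 1. Fetch a one-time challenge and the allowed credential IDs from the server
  //    (assumed to be returned base64-encoded).
  const { challenge, allowCredentials } = await fetch("/webauthn/assertion-options").then(r => r.json());

  // 2. Ask the authenticator to sign the challenge. The browser scopes this to the
  //    current origin's RP ID, which is what defeats phishing: a look-alike domain
  //    cannot request an assertion for the real relying party.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), c => c.charCodeAt(0)),
      rpId: window.location.hostname,
      allowCredentials: allowCredentials.map((c: { id: string }) => ({
        type: "public-key" as const,
        id: Uint8Array.from(atob(c.id), ch => ch.charCodeAt(0)),
      })),
      userVerification: "preferred",
    },
  })) as PublicKeyCredential;

  // 3. Send the signed assertion back for server-side verification (simplified here).
  await fetch("/webauthn/assertion-result", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: assertion.id, type: assertion.type }),
  });
}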

If you are an organisation with 100 employees, you could allow only FIDO2 and block all other authentication methods. Each employee would require 2 FIDO2 keys, which would cost roughly $100. That is about $10k for 100 users to fully protect an organisation and rid yourself of phishing. You could save all the costs of the phishing exercises which so many companies force us to do now.

Here’s how you could configure this in Azure AD under Security > Authentication methods:

Notes:

Passwordless is great when using FIDO2. There are different types of FIDO2 flows for different use cases, with best practices and recommendations for which flow to use, and when and where to use it.

Passwordless is not only FIDO2, so be careful implementing a Passwordless flow. Make sure it’s a FIDO2 solution.

If you are using FIDO2, you will require NFC on the mobile phones to use the organisation’s applications, unless you allow the FIDO2 hardware built into the device.

Identity authentication is only one part of the authentication. If you use a weak OAuth2/OIDC (OpenID Connect) flow, or something created by yourself, you can still end up with weak authentication even when using FIDO2. It is important to follow standards. For example, using FIDO2 authentication with OIDC FAPI would give you a really good authentication solution.
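As a small illustration of sticking to the standard flows, here is a sketch of kicking off an OpenID Connect authorization code flow with PKCE from a browser app. The issuer endpoint, client ID and redirect URI are placeholders, and a full FAPI profile would add further requirements (such as signed requests and stronger client authentication) on top of this.

// Start an OIDC authorization code flow with PKCE (RFC 7636). Values are placeholders.
async function beginLogin(): Promise<void> {
  const authorizeEndpoint = "https://idp.example.com/connect/authorize"; // hypothetical issuer
  const codeVerifier = base64UrlEncode(crypto.getRandomValues(new Uint8Array(32)));
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(codeVerifier));
  const codeChallenge = base64UrlEncode(new Uint8Array(digest));

  // Keep the verifier for the token exchange after the redirect back.
  sessionStorage.setItem("pkce_code_verifier", codeVerifier);

  const params = new URLSearchParams({
    response_type: "code",
    client_id: "spa-client",                              // placeholder
    redirect_uri: "https://app.example.com/callback",     // placeholder
    scope: "openid profile",
    state: base64UrlEncode(crypto.getRandomValues(new Uint8Array(16))),
    code_challenge: codeChallenge,
    code_challenge_method: "S256",
  });

  window.location.assign(`${authorizeEndpoint}?${params.toString()}`);
}

function base64UrlEncode(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}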

Try to avoid non-transparent solutions. Big companies will always push their own solutions, and if these are not built using standards, there is no way of knowing how good the solution is. Security for your users is not the focus of the companies selling security solutions; selling their SOLUTION is the focus. It is up to you to know what you are buying. Security requires a good solution for application security and network security. A good solution will give you the chance to use best practice in both.

I would consider OIDC FAPI state of the art for high security in applications. This is what I would now consider when evaluating solutions for banks, insurance or government E-IDs. Use this together with FIDO2 and you have an easy-to-use, best-practice security solution.

Links

Home

https://github.com/OWASP/ASVS

NIST

https://pages.nist.gov/800-63-3/sp800-63b.html

https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-passwordless

https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-security-key

https://portal.azure.com/#blade/Microsoft_AAD_IAM/AuthenticationMethodsMenuBlade/AdminAuthMethods

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/mfa

FIDO2: WebAuthn & CTAP

https://fidoalliance.org/specs/fido-v2.0-id-20180227/fido-client-to-authenticator-protocol-v2.0-id-20180227.html

https://github.com/herrjemand/awesome-webauthn

https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-passwordless#fido2-security-keys

https://bitwarden.com/

Tuesday, 16. March 2021

Doc Searls Weblog

How anywhere is everywhere

On Quora, somebody asked, Which is your choice, radio, television, or the Internet?. I replied with the following. If you say to your smart speaker “Play KSKO,” it will play that small-town Alaska station, which has the wattage of a light bulb, anywhere in the world. In this sense the Internet has eaten the station. […]

On Quora, somebody asked, Which is your choice, radio, television, or the Internet?. I replied with the following.

If you say to your smart speaker “Play KSKO,” it will play that small-town Alaska station, which has the wattage of a light bulb, anywhere in the world. In this sense the Internet has eaten the station. But many people in rural Alaska served by KSKO and its tiny repeaters don’t have Internet access, so the station is either their only choice, or one of a few. So we use the gear we have to get the content we can.

TV viewing is also drifting from cable to à la carte subscription services (Netflix, et. al.) delivered over the Internet, in much the same way that it drifted earlier from over-the-air to cable. And yet over-the-air is still with us. It’s also significant that most of us get our Internet over connections originally meant only for cable TV, or over cellular connections originally meant only for telephony.

Marshall and Eric McLuhan, in Laws of Media, say every new medium or technology does four things: enhance, retrieve, obsolesce and reverse. (These are also called the Tetrad of Media Effects.) And there are many answers in each category. For example, the Internet—

enhances content delivery; retrieves radio, TV and telephone technologies; obsolesces over-the-air listening and viewing; reverses into tribalism;

—among many other effects within each of those.

The McLuhans also note that few things get completely obsolesced. For example, there are still steam engines in the world. Some people still make stone tools.

It should also help to note that the Internet is not a technology. At its base it’s a protocol—TCP/IP—that can be used by a boundless variety of technologies. A protocol is a set of manners among things that compute and communicate. What made the Internet ubiquitous and all-consuming was the adoption of TCP/IP by things that compute and communicate everywhere in the world.

This development—the worldwide adoption of TCP/IP—is beyond profound. It’s a change as radical as we might have if all the world suddenly spoke one common language. Even more radically, it creates a second digital world that coexists with our physical one.

In this digital world, we are at a functional distance apart of zero. We also have no gravity. We are simply present with each other. This means the only preposition that accurately applies to our experience of the Internet is with. Because we are not really on or through or over anything. Those prepositions refer to the physical world. The digital world is some(non)thing else.

This is why referring to the Internet as a medium isn’t quite right. It is a one-of-one, an example only of itself. Like the Universe. That you can broadcast through the Internet is just one of the countless activities it supports. (Even though the it is not an it in the material sense.)

I think we are only at the beginning of coming to grips with what it all means, besides a lot.

Monday, 15. March 2021

@_Nat Zone

I will be appearing on Fin/Sum 2021 Day 2: How should financial services and technology look in the post-COVID era?

From 9:10 a.m. on March 17, 2021, Fin/… The post “I will be appearing on Fin/Sum 2021 Day 2: How should financial services and technology look in the post-COVID era?” first appeared on @_Nat Zone.

From 9:10 a.m. on March 17, 2021, I will be appearing as a moderator at Fin/Sum 2021.

Fin/Sum 2021 is co-hosted by Nikkei Inc. and the Financial Services Agency (FSA), but the Day 2 main-hall programme is put together primarily by the FSA. It opens with a greeting from State Minister Akazawa and closes with one from Deputy Prime Minister Aso. My session comes immediately after the greeting by Ryosei Akazawa, State Minister of Cabinet Office (Financial Services), and is titled “How should financial services and technology look in the post-COVID era?”. In a sense it is the session that sets the direction for the whole day.

The panellists are:

Samson Mow, CSO of Blockstream and CEO of Pixelmatic
Brad Carr, Managing Director, Digital Finance, Institute of International Finance
Koji Yokota, President of Minna Bank and Director/Executive Officer of Fukuoka Financial Group
Motonobu Matsuo, Secretary-General, Securities and Exchange Surveillance Commission, FSA

It is a formidable line-up.

The structure of the panel will be: first I will speak briefly about what “trust” is; President Yokota, Mr Carr and Mr Mow will then follow; Secretary-General Matsuo will give the regulator’s perspective; and we will finish with about 15 minutes of free discussion and a closing.

This year the event is being run as a hybrid of online and on-site attendance. If you have the time, I hope you will watch. Tickets can be purchased here. On-site attendance is pricey at 100,000 yen, but for remote attendance there is a free tier (no archive access) and a 5,000 yen tier (with archive access).

The programme for the day is below.

Programme (Day 2) [2021-03-17]

9:00-9:05

Greeting

Ryosei Akazawa, State Minister of Cabinet Office (Financial Services)

9:10-10:00

Session 1: How should financial services and technology look in the post-COVID era?
Samson Mow, CSO of Blockstream and CEO of Pixelmatic
Brad Carr, Managing Director, Digital Finance, Institute of International Finance
Koji Yokota, President of Minna Bank and Director/Executive Officer of Fukuoka Financial Group
Motonobu Matsuo, Secretary-General, Securities and Exchange Surveillance Commission, FSA

Moderator
Nat Sakimura, Chairman, OpenID Foundation

10:20-11:10

Session 2: Building blocks for establishing digital “trust”
Moti Yung, Security & Privacy Research Scientist, Google
Kristina Yasuda, Identity Standards Architect, Microsoft Corporation
Torsten Lodderstedt, CTO, yes.com
Satoru Tezuka, Professor, Faculty of Environment and Information Studies, Keio University

Moderator

Shin’ichiro Matsuo, Research Professor, Department of Computer Science, Georgetown University; Head of the Blockchain Research Group, CIS Laboratories, NTT Research

11:30-12:20

Session 3: The changing nature of trust in digital assets
Kayvon Pirestani, Head of APAC Institutional Coverage & COO, Coinbase Singapore
Josh Deems, Head of Business Development, Fidelity Digital Assets
Jean-Marie Mognetti, CEO of CoinShares International and CEO of Komainu Holdings

Moderator
Michael Casey, CCO, CoinDesk

12:50-13:40

Session 4: The FSA’s international joint research project on blockchain – possibilities and challenges for the use of digital identity
Kazue Sako, Professor, Department of Computer Science and Communications Engineering, Waseda University; Vice Chair, MyData Japan
間下 公照, Deputy General Manager, Innovation Department, JCB
Andre Boysen, Chief Identity Officer, SecureKey Technologies Inc.
Shota Watanabe, Senior Consultant, Corporate Innovation Consulting Department, Nomura Research Institute

Moderator

Ryosuke Ushida, Senior Fellow, Georgetown University; Deputy Director, FinTech Office, FSA

14:00-14:50

Session 5: Rethinking the role of finance in the API economy
Tatsuto Fujii, Executive Officer, Enterprise Business, and Head of Financial Innovation, Microsoft Japan; Co-Founder, FINOVATORS
Hiroki Maruyama, President and CEO, Infcurion
Naohiro Fujie, Co-chair, eKYC and Identity Assurance WG, OpenID Foundation; Board member, OpenID Foundation Japan
Takuya Matsuo, Director and Head of Marketing, JAL Payment Port

Moderator
Mitsunobu Okubo, Advisor, FSA; Assistant to the Government CIO, Cabinet Secretariat

15:10-15:55

Special roundtable 1: What are user-driven financial services, really?
Takashi Okita, President and CEO, Nudge; Chair, Fintech Association of Japan
Yuko Kawai, Senior Researcher, CEO Office, Japan Digital Design
Shuichi Kato, Executive Officer and President of the 8th Company, Itochu Corporation

Moderator
岡田 大, Director, Strategy Development Division, FSA

16:15-17:05

Session 6: BGIN – looking back on the first year and the outlook ahead
Shigeya Suzuki, Project Professor, Graduate School of Media and Governance, Keio University
Roman Danziger Pavlov, CEO, Safestead
Julien Bringer, CEO, Kallistech
Manoj Kumar Sinha, Deputy General Manager, Reserve Bank of India

Moderator
Mai Santamaria, Head of the Financial Advisory Division, Department of Finance, Ireland

17:25-18:10

Special roundtable 2: Promoting fintech innovation for a new era of financial services
Yuki Kishi, Director, Fintech and Brand & Retail, Plug and Play Japan; Board member, Fintech Association of Japan
Richard Knox, Head of the Financial Services Group (International), HM Treasury
Pat Patel, Principal Executive Officer, Monetary Authority of Singapore

Moderator
Akira Nozaki, Organisational Strategy Supervisory Officer and Head of the FinTech Office, FSA

18:15-18:20

Greeting

Taro Aso, Deputy Prime Minister, Minister of Finance, and Minister of State for Financial Services

The post “I will be appearing on Fin/Sum 2021 Day 2: How should financial services and technology look in the post-COVID era?” first appeared on @_Nat Zone.

Nader Helmy

The State of Identity on the Web

The evolution of identity on the web is happening at a rapid pace, with many different projects and efforts converging around similar ideas with their own interpretations and constraints. It can be difficult to parse through all of these developments while the dust hasn’t completely settled, but looking at these issues holistically, we can see a much bigger pattern emerging. In fact, many of the m

The evolution of identity on the web is happening at a rapid pace, with many different projects and efforts converging around similar ideas with their own interpretations and constraints. It can be difficult to parse through all of these developments while the dust hasn’t completely settled, but looking at these issues holistically, we can see a much bigger pattern emerging. In fact, many of the modern innovations related to identity on the web are actually quite connected and build upon each other in a myriad of complementary ways.

The rise of OpenID Connect

The core of modern identity is undoubtedly OpenID Connect (OIDC), the de-facto standard for user authentication and identity protocol on the internet. It’s a protocol that enables developers building apps and services to verify the identity of their users and obtain basic profile information about them in order to create an authenticated user experience. Because OIDC is an identity layer built on top of the OAuth 2.0 framework, it can also be used as an authorization solution. Its development was significant for many reasons, in part because it came with the realization that identity on the web is fundamental to many different kinds of interactions, and these interactions need simple and powerful security features that are ubiquitous and accessible. Secure digital identity is a problem that doesn’t make sense to solve over and over again in different ways with each new application, but instead needs a standard and efficient mechanism that’s easy to use and works for the majority of people.
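As a rough illustration of the relying party’s side of this, the sketch below validates an ID token and reads basic profile claims from it. The jose library, the issuer URL, the JWKS path and the client ID are assumptions of this example, not anything prescribed by the protocols discussed here.

// Sketch: what an OIDC relying party does with the ID token it receives.
import { createRemoteJWKSet, jwtVerify } from "jose";

const ISSUER = "https://accounts.example-idp.com";                 // placeholder IdP
const JWKS = createRemoteJWKSet(new URL(`${ISSUER}/.well-known/jwks.json`)); // placeholder JWKS location

export async function authenticateUser(idToken: string, expectedNonce: string) {
  // Verify the token signature against the IdP's published keys and check
  // the standard claims: issuer, audience, expiry.
  const { payload } = await jwtVerify(idToken, JWKS, {
    issuer: ISSUER,
    audience: "my-relying-party-client-id",                        // placeholder
  });

  // The nonce binds the token to the login request that started the flow.
  if (payload.nonce !== expectedNonce) {
    throw new Error("ID token nonce mismatch");
  }

  // Basic profile claims the relying party can now use to build the session.
  return { subject: payload.sub, name: payload.name, email: payload.email };
}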

OpenID Connect introduced a convenient and accessible protocol for identity that required less setup and complexity for developers building different kinds of applications and programs. In many ways, protocols like OIDC and OAuth 2.0 piggy-backed on the revolution that was underfoot in the mid-2000s as developers fled en masse from web-based systems heavily reliant on technologies like XML (and consequently identity systems built upon these technologies like SAML), for simpler systems based on JSON. OpenID built on the success of OAuth and offered a solution that improved upon existing identity and web security technologies which were vulnerable to attacks like screen scraping. This shift towards a solution built upon modern web technologies with an emphasis on being easy-to-use created ripe conditions for adoption of these web standards.

OIDC’s success has categorically sped up both the web and native application development cycle when it comes to requiring the integration of identity, and as a result, users have now grown accustomed to having sign-in options aplenty with all their favorite products and services. It’s not intuitively clear to your average user why they need so many different logins and it’s up to the user to manage which identities they use with which services, but the system works and provides a relatively reliable way to integrate identity on the web.

Success and its unintended consequences

While OIDC succeeded in simplicity and adoption, what has emerged over time are a number of limitations and challenges that have come as a result of taking these systems to a global scale.

When it comes to the market for consumer identity, there are generally three main actors present:

Identity Providers
Relying Parties
End-Users

The forces in the market that cause their intersection to exist are complex, but can be loosely broken down into the interaction between each pair of actors.

In order for an End-User to be able to “login” to a website today, the “sweet spot” must exist where each of these sets of requirements is met.

The negotiation between these three parties usually plays out on the relying party’s login page. It’s this precious real-estate that drives these very distinct market forces.

Anti-competitive market forces

In typical deployments of OIDC, in order for a user to be able to “login” to a relying party or service they’re trying to access online, the relying party must be in direct contact with the Identity Provider (IdP). This is what’s come to be known as the IdP tracking problem. It’s the IdP that’s responsible for performing end-user authentication and issuing end-user identities to relying parties, not the end-users themselves. Over time, these natural forces in OIDC have created an environment that tends to favour the emergence and continued growth of a small number of very large IdPs. These IdPs wield a great deal of power, as they have become a critical dependency and intermediary for many kinds of digital interactions that require identity.

This environment prevents competition and diversity amongst IdPs in exchange for a convenience-driven technology framework where user data is controlled and managed in a few central locations. The market conditions have made it incredibly difficult for new IdPs to break into the market. For example, when Apple unveiled their “Sign in with Apple” service, they used their position as a proprietary service provider to mandate their inclusion as a third party sign in option for any app or service that was supporting federated login on Apple devices. This effectively guaranteed adoption of their OpenID-based solution, allowing them to easily capture a portion of the precious real-estate that is the login screen of thousands of modern web apps today. This method of capturing the market is indicative of a larger challenge wherein the environment of OIDC has made it difficult for newer and smaller players in the IdP ecosystem to participate with existing vendors on an equal playing field.

Identity as a secondary concern has primary consequences

Another key issue in the current landscape is that for nearly all modern IdPs, being an identity provider is often a secondary function to their primary line of business. Though they have come to wear many different hats, many of the key IdPs’ primary business function is offering some service to end-users (e.g. Facebook, Twitter, Google, etc.). Their role as IdPs is something that emerged over time, and with it has surfaced a whole new set of responsibilities whose impact we are only just beginning to grapple with.

Due to this unequal relationship, developers and businesses who want to integrate identity in their applications are forced to choose those IdPs which contain user data for their target demographics, instead of offering options for IdP selection based on real metrics around responsible and privacy-preserving identity practices for end-users.

This cycle perpetuates the dominance of a few major IdPs and likewise forces users to keep choosing from the same set of options or risk losing access to all of their online accounts. In addition, many of these IdPs have leveraged their role as central intermediaries to increase surveillance and user behavior tracking, not just across their proprietary services, but across a user’s entire web experience. The net result of this architecture on modern systems is that IdPs have become a locus for centralized data storage and processing.

The privacy implications associated with the reliance on a central intermediary who can delete, control, or expose user data at any time have proven to be no small matter. New regulations such as GDPR and CCPA have brought user privacy to the forefront and have spurred lots of public discourse and pressure for companies to manage their data processing policies against more robust standards. The regulatory and business environment that is forming around GDPR and CCPA is pushing the market to consider better solutions that may involve decentralizing the mode of operation or separating the responsibilities of an IdP.

Identity Provider lock-in

Lastly, in today’s landscape there is an inseparable coupling between an End-User and the IdP they use. This effectively means that, in order to transfer from say “Sign In With Google” to “Sign In With Twitter,” a user often has to start over and build their identity from scratch. This is due to the fact that users are effectively borrowing or renting their identities from their IdPs, and hence have little to no control in exercising that identity how they see fit. This model creates a pattern that unnecessarily ties a user to the application and data ecosystem of their IdP and means they must keep an active account with the provider to keep using their identity. If a user can’t access their account with an IdP, say by losing access to their Twitter profile, they can no longer log in to any of the services where they’re using Twitter as their IdP.

One of the problems with the term Identity Provider itself is that it sets up the assumption that the end-user is being provided with an identity, rather than the identity being theirs or under their control. If end-users have no real choice in selecting their IdP, then they are ultimately subject to the whims of a few very large and powerful companies. This model is not only antithetical to anti-trust policies and legislation, it also prevents data portability between platforms. It’s made it abundantly clear that the paradigm shift on end-user privacy practices needs to start by giving users a baseline level of choice when it comes to their identity.

A nod to an alternative model

Fundamentally, when it comes to identity on the web, users should have choice; choice about which services they employ to facilitate the usage of their digital identities along with being empowered to change these service providers if they so choose.

The irony of OpenID Connect is that the original authors did actually consider these problems, and evidence of this can be found in the original OIDC specification: in chapter 7, entitled “Self Issued OpenID Provider” (SIOP).

Earning its name primarily from the powerful idea that users could somehow be their own identity provider, SIOP was an ambitious attempt at solving a number of different problems at once. It raises some major questions about the future of the protocol, but it stops short of offering an end-to-end solution to these complex problems.

As it stands in the core specification, the SIOP chapter of OIDC was really trying to solve 3 significant, but distinct problems, which are:

Enabling portable/transferable identities between providers
Dealing with different deployment models for OpenID providers
Solving the Nascar Problem

SIOP has recently been of strong interest to those in the decentralized or self-sovereign identity community because it’s been identified as a potential pathway to transitioning existing deployed digital infrastructure towards a more decentralized and user-centric model. As discussion is ongoing at the OpenID Foundation to evolve and reboot the work around SIOP, there are a number of interesting questions raised by this chapter that are worth exploring to their full extent. For starters, SIOP questions some of the fundamental assumptions around the behaviour and deployment of an IdP.

OpenID and OAuth typically use a redirect mechanism to relay a request from a relying party to an IdP. OAuth supports redirecting back to a native app for the end-user, but it assumes that the provider itself always takes the form of an HTTP server, and furthermore it assumes the request is primarily handled server-side. SIOP challenged this assumption by questioning whether the identity provider has to be entirely server-side, or if the provider could instead take the form of a Single-Page Application (SPA), Progressive Web Application (PWA), or even a native application. In creating a precedent for improving upon the IdP model, SIOP was asking fundamental questions such as: who gets to pick the provider? What role does the end-user play in this selection process? Does the provider always need to be an authorization server or is there a more decentralized model available that is resilient from certain modes of compromise?
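A minimal sketch of what a SIOP-style request can look like, following the openid:// custom scheme described in chapter 7 of the core specification; the redirect URI and parameter values here are illustrative only, and more recent SIOP work at the OpenID Foundation refines these details.

// Build a request aimed at a Self-Issued OpenID Provider (wallet, PWA or native app)
// registered for the openid:// scheme, instead of redirecting to a server-side IdP.
function buildSiopRequest(): string {
  const params = new URLSearchParams({
    response_type: "id_token",
    scope: "openid",
    // For self-issued flows the client_id carries the relying party's redirect URI.
    client_id: "https://rp.example.com/siop/callback", // placeholder
    nonce: crypto.randomUUID(),
  });
  return `openid://?${params.toString()}`;
}

// The self-issued provider answers with an ID token whose issuer is the reserved
// value "https://self-issued.me" and whose subject is derived from a key the
// end-user controls, rather than an identifier rented from an IdP.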

Although some of these questions remain unanswered or are early in development, the precedent set by SIOP has spurred a number of related developments in and around web identity. Work is ongoing at the OpenID Foundation to flesh out the implications of SIOP in the emerging landscape.

Tech giants capitalize on the conversation

Although OIDC is primarily a web-based identity protocol, it was purposefully designed to be independent of any particular browser feature or API. This separation of concerns has proved incredibly useful in enabling adoption of OIDC outside of web-only environments, but it has greatly limited the ability for browser vendors to facilitate and mediate web-based login events. A number of large technology and browser vendors have picked up on this discrepancy, and are starting to take ownership of the role they play in web-based user interactions.

Notably, a number of new initiatives have been introduced in the last few years to address this gap in user privacy on the web. An example of this can be found in the W3C Technical Architecture Group (TAG), a group tasked with documenting and building consensus around the architecture of the World Wide Web. Ahead of the 2019 W3C TPAC in Japan, Apple proposed an initiative called IsLoggedIn, effectively a way for websites to tell the browser whether the user was logged in or not in a trustworthy way. What they realized is that the behavior of modern web architecture results in users being “logged in by default” to websites they visit, even if they only visit a website once. Essentially as soon as the browser loads a webpage, that page can store data about the user indefinitely on the device, with no clear mechanism for indicating when a user has logged out or wishes to stop sharing their data. They introduced an API that would allow browsers to set the status of user log-ins to limit long term storage of user data. It was a vision that required broad consensus among today’s major web browsers to be successful. Ultimately, the browsers have taken their own approach in trying to mitigate the issue.

In 2019, Google created their Privacy Sandbox initiative to advance user privacy on the web using open and transparent standards. As one of the largest web browsers on the planet, Google Chrome seized the opportunity provided by an increased public focus on user privacy to work on limiting cross-site user tracking and pervasive incentives that encourage surveillance. Fuelled by the Privacy Sandbox initiative, they created a project called WebID to explore how the browser can mediate between different parties in a digital identity transaction. WebID is an early attempt to get in the middle of the interaction that happens between a relying party and an IdP, allowing the browser to facilitate the transaction in a way that provides stronger privacy guarantees for the end-user.

As an overarching effort, it’s in many ways a response to the environment created by CCPA and GDPR where technology vendors like Google are attempting to enforce privacy expectations for end-users while surfing the web. Its goal is to keep protocols like OIDC largely intact while using the browser as a mediator to provide a stronger set of guarantees when it comes to user identities. This may ultimately give end-users more privacy on the web, but it doesn’t exactly solve the problem of users being locked into their IdPs. With the persistent problem of data portability and limited user choices, simply allowing the browser to mediate the interaction is an important piece of the puzzle but does not go far enough on its own.

Going beyond the current state of OpenID Connect

Though it is a critical component of modern web identity, OIDC is not by any means the only solution or protocol to attempt to solve these kinds of problems.

A set of emerging standards from the W3C Credentials Community Group aim to look at identity on the web in a very different way, and, in fact, are designed to consider use cases outside of just consumer identity. One such standard is Decentralized Identifiers (DIDs) which defines a new type of identifier and accompanying data model featuring several novel properties not present in most mainstream identifier schemes in use today. Using DIDs in tandem with technologies like Verifiable Credentials (VCs) creates an infrastructure for a more distributed and decentralized layer for identity on the web, enabling a greater level of user control. VCs were created as the newest in a long line of cryptographically secured data representation formats. Their goal was to provide a standard that improves on its predecessors by accommodating formal data semantics through technologies like JSON-LD and addressing the role of data subjects in managing and controlling data about themselves.

These standards have emerged in large part to address the limitations of federated identity systems such as the one provided by OIDC. In the case of DIDs, the focus has been on creating a more resilient kind of user-controllable identifier. These kinds of identifiers don’t have to be borrowed or rented from an IdP as is the case today, but can instead be directly controlled by the entities they represent via cryptography in a consistent and standard way. When combining these two technologies, VCs and DIDs, we enable verifiable information that has a cryptographic binding to the end-user and can be transferred cross-context while retaining its security and semantics.
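For illustration, here is roughly what a Verifiable Credential bound to a DID looks like under the W3C data model; the identifiers, claims and proof values below are made up.

// Illustrative shape of a W3C Verifiable Credential whose subject is a DID.
const verifiableCredential = {
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1",
  ],
  type: ["VerifiableCredential", "UniversityDegreeCredential"],
  issuer: "did:example:university123",          // made-up issuer DID
  issuanceDate: "2021-03-01T00:00:00Z",
  credentialSubject: {
    // The subject controls this identifier via its keys and can keep it
    // even if the issuer or the subject's own keys later change.
    id: "did:example:holder456",                // made-up subject DID
    degree: { type: "BachelorDegree", name: "Bachelor of Science" },
  },
  proof: {
    // The cryptographic proof binds the claims to the issuer's keys; the exact
    // contents depend on the signature suite used (e.g. Ed25519, BBS+).
    type: "Ed25519Signature2018",
    created: "2021-03-01T00:00:00Z",
    verificationMethod: "did:example:university123#key-1",
    proofPurpose: "assertionMethod",
    jws: "…",
  },
};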

As is the case with many emerging technologies, in order to be successful in an existing and complicated market, these new standards should have a cohesive relationship to the present. To that end, there has been a significant push to bridge these emerging technologies with the existing world of OIDC in a way that doesn’t break existing implementations and encourages interoperability.

One prominent example of this is a new extension to OIDC known as OpenID Connect Credential Provider. Current OIDC flows result in the user receiving an identity token which is coupled to the IdP that created it, and can be used to prove the user’s identity within a specific domain. OIDC Credential Provider allows you to extend OIDC to allow IdPs to issue reusable VCs about the end-user instead of simple identity tokens with limited functionality. It allows end-users to request credentials from an OpenID Provider and manage their own credentials in a digital wallet under their control. By allowing data authorities to be the provider of reusable digital credentials instead of simple identity assertions, this extension effectively turns traditional Identity Providers into Credential Providers.

The credentials provided under this system are cryptographically bound to a public key controlled by the end-user. In addition to public key binding, the credential can instead be bound to a DID, adding a layer of indirection between a user’s identity and the keys they use to control it. In binding to a DID, the subject of the credential is able to maintain ownership of the credential on a longer life cycle due to their ability to manage and rotate keys while maintaining a consistent identifier. This eases the burden on data authorities to re-issue credentials when the subject’s keys change and allows relying parties to verify that the credential is always being validated against the current public key of the end-user. The innovations upon OIDC mark a shift from a model where relying parties request claims from an IdP, to one where they can request claims from specific issuers or according to certain trust frameworks and evaluation metrics appropriate to their use case. This kind of policy-level data management creates a much more predictable and secure way for businesses and people to get the data they need.

OIDC Credential Provider, a new spec at the OpenID Foundation, is challenging the notion that the identity that a user receives has to be an identity entirely bound to its domain. It offers traditional IdPs a way to issue credentials that are portable and can cross domains because the identity/identifier is no longer coupled to the provider as is the case with an identity token. This work serves to further bridge the gap between existing digital identity infrastructure and emerging technologies which are more decentralized and user-centric. It sets the foundation for a deeper shift in how data is managed online, whether it comes in the form of identity claims, authorizations, or other kinds of verifiable data.

Broadening the landscape of digital identity

OIDC, which is primarily used for identity, is built upon OAuth 2.0, whose primary use is authorization and access. If OIDC is about who the End-User is, then OAuth 2.0 is about what you’re allowed to do on behalf of and at the consent of the End-User. OAuth 2.0 was built prior to OIDC, in many ways because authorization allowed people to accomplish quite a bit without the capabilities of a formalized and standardized identity protocol. Eventually, it became obvious that identity is an integral and relatively well-defined cornerstone of web access that needed a simple solution. OIDC emerged as it increasingly became a requirement to know who the end-user (or resource owner) is and for the client to be able to request access to basic claims about them. Together, OIDC and OAuth2.0 create a protocol that combines authentication and authorization. While this allows them to work natively with one another, it’s not always helpful from an infrastructure standpoint to collapse these different functions together.

Efforts like WebID are currently trending towards the reseparation of these concepts that have become married in the current world of OpenID, by developing browser APIs that are specifically geared towards identity. However, without a solution to authorization, it could be argued that many of the goals of the project will remain unsatisfied whenever the relying party requires both authentication and authorization in a given interaction.

As it turns out, these problems are all closely related to each other and require a broad and coordinated approach. As we step into an increasingly digital era where the expectation continues to evolve around what’s possible to do online, the role of identity becomes increasingly complex. Take, for example, sectors such as the financial industry dealing with increased requirements around electronic Know-Your-Customer (KYC) policies. In parallel with the innovation around web identity and the adoption of emerging technologies such as VCs, there has been a growing realization that the evolution of digital identity enables many opportunities that extend far beyond the domain of identity. This is where the power of verifiable data on the web really begins, and with it an expanded scope and structure for how to build digital infrastructure that can support a whole new class of applications.

A new proposed browser API called Credential Handler API (CHAPI) offers a promising solution to browser-mediated interactions that complements the identity-centric technologies of OIDC and WebID. It currently takes the form of a polyfill to allow these capabilities to be used in the browser today. Similar to how SIOP proposes that the user be able to pick their provider for identity-related credentials, CHAPI allows you to pick your provider, but not just for identity: for any kind of credential. In that sense, OIDC and CHAPI are solving slightly different problems:

OIDC is primarily about requesting authentication of an End-User and receiving some limited identity claims about them, and in certain circumstances also accessing protected resources on their behalf.
CHAPI is about requesting credentials that may or may not describe the End-User. Additionally, credentials might not even be related to their identity directly and may instead be used for other related functions like granting authorization, access, etc.

While OIDC offers a simple protocol based upon URL redirects, CHAPI pushes for a world of deeper integration with the browser that affords several usability benefits. Unlike traditional implementations of OIDC, CHAPI does not start with the assumption that an identity is fixed to the provider. Instead, the end-user gets to register their preferred providers in the browser and then select from this list when an interaction with their provider is required. Since CHAPI allows for exchanging credentials that may or may not be related to the end-user, it allows for a much broader set of interactions than what’s provided by today’s identity protocols. In theory, these can work together rather than as alternative options. You could, for instance, treat CHAPI browser APIs as a client to contact the end-user’s OpenID Provider and then use CHAPI to exchange and present additional credentials that may be under the end-user’s control.
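As a sketch of that browser-mediated flow, the snippet below uses the CHAPI polyfill to ask the user’s chosen credential handler for a verifiable presentation. It assumes the credential-handler-polyfill npm package, and the query shape is illustrative rather than normative.

// A relying party asking the user's registered credential handler (wallet) for a presentation.
import * as polyfill from "credential-handler-polyfill";

async function requestPresentationViaChapi(): Promise<Credential | null> {
  // Load the polyfill so that navigator.credentials understands "web" credentials.
  await polyfill.loadOnce();

  // The browser mediates the exchange: the user picks which of their registered
  // handlers answers, rather than the relying party hard-coding a provider.
  // CHAPI extends CredentialRequestOptions beyond the built-in DOM typings,
  // hence the cast to any.
  return (navigator.credentials as any).get({
    web: {
      VerifiablePresentation: {
        query: [
          {
            type: "QueryByExample",
            credentialQuery: { example: { type: ["UniversityDegreeCredential"] } },
          },
        ],
      },
    },
  });
}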

CHAPI is very oriented towards the “credential” abstraction, which is essentially a fixed set of claims protected in a cryptographic envelope and often intended to be long lived. A useful insight from the development of OIDC is that it may be helpful to separate, at least logically, the presentation of identity-related information from the presentation of other kinds of information. To extend this idea, authenticating or presenting a credential is different from authenticating that you’re the subject of a credential. You may choose to do these things in succession, but they are not inherently related.

The reason this is important has to do with privacy, data hygiene, and best security practices. In order to allow users to both exercise their identity on the web and manage all of their credentials in one place, we should be creating systems that default to requesting specific information about an end-user as needed, not necessarily requesting credentials when what’s needed is an authentic identity and vice versa.

Adopting this kind of policy would allow configurations where the identifier for the credential subject would not be assumed to be the identifier used to identify the subject with the relying party. Using this capability in combination with approaches to selective disclosure like VCs with JSON-LD BBS+ signatures will ensure not only a coherent system that can separate identity and access, but also one that respects user privacy and provides a bridge between existing identity management infrastructure and emerging technologies.

An emergent user experience

Using these technologies in tandem also helps to bridge the divide between native and web applications when it comes to managing identity across different modalities. Although the two often get conflated, a digital wallet for holding user credentials is not necessarily an application. It’s a service to help users manage their credentials, both identity-related and otherwise, and should be accessible wherever an end-user needs to access it. In truth, native apps and web apps are each good at different things and come with their own unique set of trade-offs and implementation challenges. Looking at this emerging paradigm where identity is managed in a coherent way across different types of digital infrastructure, “web wallets” and “native wallets” are not necessarily mutually exclusive — emerging technologies can leverage redirects to allow the use of both.

The revolution around digital identity offers a new paradigm that places users in a position of greater control around their digital interactions, giving them the tools to exercise agency over their identity and their data online. Modern legislation focused on privacy, portability, security and accessible user experience is also creating an impetus for the consolidation of legacy practices. The opportunity is to leverage this directional shift to create a network effect across the digital ecosystem, making it easier for relying parties to build secure web experiences and unlocking entirely new value opportunities for claims providers and data authorities.

Users shouldn’t have to manage the complexity left behind by today’s outdated identity systems, and they shouldn’t be collateral damage when it comes to designing convenient apps and services. Without careful coordination, much of the newer innovation could lead to even more fragmentation in the digital landscape. However, as we can see here, many of these technology efforts and standards are solving similar or complementary problems.

Ultimately, a successful reinvention of identity on the web should make privacy and security easy; easy for end-users to understand, easy for relying parties to support, and easy for providers to implement. That means building bridges across technologies to support not only today’s internet users, but enabling access to an entirely new set of stakeholders across the globe who will finally have a seat at the table, such as those without access to the internet or readily available web infrastructure. As these technologies develop, we should continue to push for consolidation and simplicity to strike the elusive balance between security and convenience across the ecosystem for everyday users.

Where to from here?

Solving the challenges necessary to realize the future state of identity on the web will take a collective effort of vendor collaboration, standards contributions, practical implementations and education. In order to create adoption of this technology at scale, we should consider the following as concrete next steps we can all take to bring this vision to life:

Continue to drive development of bridging technologies that integrate well with existing identity solutions and provide a path to decentralized and portable identity

E.g. formalization of OIDC Credential Provider to extend existing IdPs

Empower users to exercise autonomy and sovereignty in selecting their service provider, as well as the ability to change providers and manage services over time

E.g. selection mechanisms introduced by SIOP and WebID

Adopt a holistic approach to building solutions that recognizes the role of browser-mediated interactions in preserving user privacy

E.g. newer browser developments such as CHAPI and WebID

Build solutions that make as few assumptions as necessary in order to support different types of deployment environments that show up in real-world use cases

E.g. evolution of SIOP as well as supporting web and native wallets

Ensure that the development of decentralized digital identity supports the variety and diversity of data that may be managed by users in the future, whether that data be identity-related or otherwise

Taking these steps will help to ensure that the identity technologies we build to support the digital infrastructure of tomorrow will avoid perpetuating the inequalities and accessibility barriers we face today. By doing our part to collaborate and contribute to solutions that work for everybody, building bridges rather than building siloes, we can create a paradigm shift that has longevity and resilience far into the future. We hope you join us.

The State of Identity on the Web was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 14. March 2021

Jon Udell

The Modesto Pile

Reading a collection of John McPhee stories, I found several that were new to me. The Duty of Care, published in the New Yorker in 1993, is about tires, and how we do or don’t properly recycle them. One form of reuse we’ve mostly abandoned is retreads. McPhee writes: A retread is in no way … Continue reading The Modesto Pile

Reading a collection of John McPhee stories, I found several that were new to me. The Duty of Care, published in the New Yorker in 1993, is about tires, and how we do or don’t properly recycle them. One form of reuse we’ve mostly abandoned is retreads. McPhee writes:

A retread is in no way inferior to a new tire, but new tires are affordable, and the retreaded passenger tire has descended to the status of a clip-on tie.

My dad wore clip-on ties. He also used retreaded tires, and I can remember visiting a shop on several occasions to have the procedure done.

Recently I asked a friend: “Whatever happened to retreaded tires?” We weren’t sure, but figured they’d gone away for good reasons: safety, reliability. But maybe not. TireRecappers and TreadWright don’t buy those arguments. Maybe retreads were always a viable option for our passenger fleet, as they still are for our truck fleet. And maybe, with better tech, they’re better than they used to be.

In Duty of Care, McPhee tells the story of the Modesto pile. It was, at the time, the world’s largest pile of scrap tires containing, by his estimate, 34 million tires.

You don’t have to stare long at that pile before the thought occurs to you that those tires were once driven upon by the Friends of the Earth. They are Environmental Defense Fund tires, Rainforest Action Network tires, Wilderness Society tires. They are California Natural Resources Federation tires, Save San Francisco Bay Association tires, Citizens for a Better Environment tires. They are Greenpeace tires, Sierra Club tires, Earth Island Institute tires. They are Earth First! tires!

(I love a good John McPhee list.)

The world’s largest pile of tires left a surprisingly small online footprint, but you can find the LA Times’ Massive Pile of Tires Fuels Controversial Energy Plan which describes the power plant — 41 million dollars, 14 megawatts, “the first of its kind in the United States and the largest in the world” — that McPhee visited when researching his story. I found it on Google Maps by following McPhee’s directions.

If you were to abandon your car three miles from the San Joaquin County line and make your way on foot southwest one mile…

You can see the power plant. There’s no evidence of tires, or trucks moving them, so maybe the plant, having consumed the pile, is retired. Fortunately the pile never caught fire; that would’ve made a hell of a mess.

According to Wikipedia, we’ve reduced our inventory of stockpiled tires by an order of magnitude from a peak of a billion around the time McPhee wrote that article. We burn most of them for energy, and turn some into ground rubber for such uses as paving and flooring. So that’s progress. But I can’t help but wonder about the tire equivalent of Amory Lovins’ negawatt: “A watt of energy that you have not used through energy conservation or the use of energy-efficient products.”

Could retreaded passenger tires be an important source of negawatts? Do we reject the idea just because they’re as unfashionable as clip-on ties? I’m no expert on the subject, obviously, but I suspect these things might be true.

Friday, 12. March 2021

@_Nat Zone

A recommended hidden gem: Damase’s Quartet for Flute, Oboe, Clarinet and Piano

Do you know the 20th-century French composer Damase’s “… The post “A recommended hidden gem: Damase’s Quartet for Flute, Oboe, Clarinet and Piano” first appeared on @_Nat Zone.

Do you know the Quartet for Flute, Oboe, Clarinet and Piano by Damase, a 20th-century French composer? It is a stylish piece that could only be French. The link I have shared is to the second movement; with Damase, and not only in this work, the second movements tend to be the most fun.

Damase was a pupil of Cortot, and was also the first to record the complete Nocturnes and Barcarolles of Fauré.

Because he composed in a neoclassical style he was not much appreciated in the 20th century, but his reputation has been rising recently and recordings are becoming easier to find. Thoroughly French. If you like Poulenc, I think you will like Damase too.

https://music.youtube.com/watch?v=mKkA9mFBlRw&feature=share

The post “A recommended hidden gem: Damase’s Quartet for Flute, Oboe, Clarinet and Piano” first appeared on @_Nat Zone.

Doc Searls Weblog

Trend of the Day: NFT

NFTs—Non-Fungible Tokens—are hot shit. Wikipedia explains (at that link), A non-fungible token (NFT) is a special type of cryptographic token that represents something unique. Unlike cryptocurrencies such bitcoin and many network or utility tokens,[a], NFTs are not mutually interchangeable and are thus not fungible in nature[1][2] Non-fungible tokens are used

NFTs—Non-Fungible Tokens—are hot shit. Wikipedia explains (at that link),

A non-fungible token (NFT) is a special type of cryptographic token that represents something unique. Unlike cryptocurrencies such as bitcoin and many network or utility tokens,[a] NFTs are not mutually interchangeable and are thus not fungible in nature[1][2]

Non-fungible tokens are used to create verifiable artificial scarcity in the digital domain, as well as digital ownership, and the possibility of asset interoperability across multiple platforms.[3] Although an artist can sell one or more NFTs representing a work, the artist can still retain the copyright to the work represented by the NFT.[4] NFTs are used in several specific applications that require unique digital items like crypto art, digital collectibles, and online gaming.

Art was an early use case for NFTs, and blockchain technology in general, because of the purported ability of NFTs to provide proof of authenticity and ownership of digital art, a medium that was designed for ease of mass reproduction, and unauthorized distribution through the Internet.[5]

NFTs can also be used to represent in-game assets which are controlled by the user instead of the game developer.[6] NFTs allow assets to be traded on third-party marketplaces without permission from the game developer.

An NPR story the other day begins,

The artist Grimes recently sold a bunch of NFTs for nearly $6 million. An NFT of LeBron James making a historic dunk for the Lakers garnered more than $200,000. The band Kings of Leon is releasing its new album in the form of an NFT.

At the auction house Christie’s, bids on an NFT by the artist Beeple are already reaching into the millions.

And on Friday, Twitter CEO Jack Dorsey listed his first-ever tweet as an NFT.

Safe to say, what started as an Internet hobby among a certain subset of tech and finance nerds has catapulted to the mainstream.

I remember well exactly when I decided not to buy bitcoin. It was on July 26, 2009, after I finished driving back home to Arlington, Mass, after dropping off my kid at summer camp in Vermont. I had heard a story about it on the radio that convinced me that now was the time to put $100 into something new that would surely become Something Big.

But trying to figure out how to do it took too much trouble, and my office in the attic was too hot, so I didn’t. Also, at the time, the price was $0. Easy to rationalize not buying a non-something that’s worth nothing.

So let’s say I made the move when it hit $1, which I think was in 2011. That would have been $100 for 100 bitcoin, which at this minute are worth $56,101.85 apiece. A hundred of those are now $5,610,185. And what if I had paid the 1¢ or less a bitcoin would have been in July, 2009? You move the decimal point while I shake my head.

So now we have NFTs. What do you think I should do? Or anybody? Serious question.


The Dingle Group

eIDAS and Self-Sovereign Identity

On March 9, The Vienna Digital Identity Meetup* hosted presentations from Xavier Vila, Product Manager for Validated ID and Dr. Ignacio Alamillo, Director of Astrea on eIDAS and Self Sovereign Identity. The presentations covered the technical, legal and business dimensions of bridging between eIDAS and SSI concepts and increasing the value and usability of digital identity in the European mode

On March 9, The Vienna Digital Identity Meetup* hosted presentations from Xavier Vila, Product Manager for Validated ID and Dr. Ignacio Alamillo, Director of Astrea on eIDAS and Self Sovereign Identity. The presentations covered the technical, legal and business dimensions of bridging between eIDAS and SSI concepts and increasing the value and usability of digital identity in the European model.

A key component of the Digital Europe architecture is the existence of a trusted and secure digital identity infrastructure. This journey was started in 2014 with the implementation of the eIDAS Regulation. As has been discussed in previous Vienna Digital Identity Meetups, a high assurance digital identity is the key piece connecting the physical and digital worlds, and this piece was not included in the initial creation of our digital world.

Why then is eIDAS v1 not seen as a success? There are many reasons, from parts of the regulation that constrained its use to the public sphere only, to the lack of total coverage across the EU. Perhaps the key missing piece was that the cultural climate was not yet ripe and the state of digital identity was simply not ready: too many technical problems were yet to be solved. Given all that, the limited uptake of eIDAS v1 should not be surprising. All the same, eIDAS v1 laid very important groundwork and generated the lessons that should allow eIDAS v2 to reach the hoped-for levels of success and adoption.

Validated ID has been developing an eIDAS-ESSIF bridge that brings eIDAS trust seals to Verifiable Credentials. As a Qualified Trust Service Provider under eIDAS, Validated ID has been offering trust services to customers across Europe since 2012. Xavier presented this current work and provided a detailed explanation of how the eIDAS-SSI bridge applies qualified electronic seals to Verifiable Credentials.

Dr. Alamillo further clarified the importance of the eIDAS-SSI bridge in his presentation on SSI in the eIDAS Regulation. When a Qualified Electronic Seal is applied to a Verifiable Credential, the combined document becomes a legal document with cross-border legal value within the EU. Currently, in an identity context, eIDAS is the Europe-wide identity metasystem, providing the framework for the 27 member states of the EU to operate as a very large federated identity network. However, the point was raised that in the current revision of the eIDAS regulation, the handling of electronic identification may be changed from a ‘connecting of services’ to a new trust service.
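
As a rough mental model only (and not the bridge's actual data format), you can think of the bridge as attaching a qualified electronic seal alongside the credential's ordinary proof, so a verifier can check both the issuer's signature and the eIDAS trust anchor. A hypothetical sketch, with invented field names:

```python
# Hypothetical, simplified illustration of the idea of "sealing" a Verifiable
# Credential. The structure and field names below are NOT the actual
# eIDAS-SSI bridge format; they are just a sketch of the concept.

verifiable_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer-123",
    "credentialSubject": {"id": "did:example:subject-456", "degree": "MSc"},
    "proof": [
        {
            # The issuer's ordinary signature over the credential.
            "type": "Ed25519Signature2018",
            "verificationMethod": "did:example:issuer-123#key-1",
            "jws": "...",
        },
        {
            # An additional qualified electronic seal from a Qualified Trust
            # Service Provider, anchoring the credential in the eIDAS trust
            # framework. "QualifiedElectronicSeal" is an illustrative name.
            "type": "QualifiedElectronicSeal",
            "sealCreator": "did:example:qualified-tsp",
            "sealValue": "...",
        },
    ],
}

def check_proof(proof: dict) -> bool:
    # Placeholder: a real verifier would validate the signature or seal
    # cryptographically against the issuer's keys and the EU trust lists.
    return "type" in proof

def verify(credential: dict) -> bool:
    # A verifier accepts the credential only if every attached proof, the
    # issuer's signature and the qualified seal, checks out.
    return all(check_proof(p) for p in credential["proof"])
```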


To listen to a recording of the event please check out the link: https://vimeo.com/522501200

Time markers:

0:00:00 - Introduction


0:04:00 - Xavier Vila, Validated ID


0:34:00 - Questions


0:42:00 - Dr. Ignacio Alamillo, Astrea


1:18:00 - Questions


1:28:00 - Wrap-up & Upcoming Events



Resources

SSI - eIDAS Bridge GitHub Repo - https://github.com/validatedid/ssi-eidas-bridge/

Xavier Vila’s Presentation Deck: Vale-SSI-EIDAS Bridge.pdf
Nacho Alamillo’s Presentation Deck: Alamillo-SSI-EIDAS.pdf

And as a reminder, we continue to have online only events.

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*The Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technology stakeholders on the new opportunities that arise with a high assurance digital identity, created by reducing risk and strengthening provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.


Jon Udell

A wider view

For 20 bucks or less, nowadays, you can buy an extra-wide convex mirror that clips onto your car’s existing rear-view mirror. We just tried one for the first time, and I’m pretty sure it’s a keeper. These gadgets claim to eliminate blind spots, and this one absolutely does. Driving down 101, I counted three seconds as a car passed through my driver’s-side blind spot. That’s a long time when you’re going 70 miles per hour; during that whole time I could see that passing car in the extended mirror.

Precious few gadgets spark joy for me. This one had me at hello. Not having to turn your head, avoiding the risk of not turning your head — these are huge benefits, quite possibly life-savers. For 20 bucks!

It got even better. As darkness fell, we wondered how it would handle approaching headlights. It’s not adjustable like the stock mirror, but that turns out not to be a problem. The mirror dims those headlights so they’re easy to look at. The same lights in the side mirrors are blinding by comparison.

I’ve been driving more than 40 years. This expanded view could have been made available at any point along the way. There’s nothing electronic or digital. It’s just a better idea that combines existing ingredients in a new way. That pretty much sums up my own approach to product development.

Finally, there’s the metaphor. Seeing around corners is a superpower I’ve always wanted. I used to love taking photos with the fisheye lens on my dad’s 35mm Exacta, now I love making panoramic views with my phone. I hate being blindsided, on the road and in life, by things I can’t see coming. I hate narrow-mindedness, and always reach for a wider view.

I’ll never overcome all my blind spots but it’s nice to chip away at them. After today, there will be several fewer to contend with.

File:A wider view at sunset – geograph.org.uk – 593022.jpg – Wikimedia Commons

Thursday, 11. March 2021

Doc Searls Weblog

Why is the “un-carrier” falling into the hellhole of tracking-based advertising?

For a few years now, T-Mobile has been branding itself the “un-carrier,” saying it’s “synonymous with 100% customer commitment.” Credit where due: we switched from AT&T a few years ago because T-Mobile, alone among U.S. carriers at the time, gave customers a nice cheap unlimited data plan for traveling outside the country.

But now comes this story in the Wall Street Journal:

T-Mobile to Step Up Ad Targeting of Cellphone Customers
Wireless carrier tells subscribers it could share their masked browsing, app data and online activity with advertisers unless they opt out

Talk about jumping on a bandwagon sinking in quicksand. Lawmakers in Europe (GDPR), California (CCPA) and elsewhere have been doing their best to make this kind of thing illegal, or at least difficult. Worse, it should now be clear that it not only sucks at its purpose, but customers hate it. A lot.

I just counted, and all 94 responses in the “conversation” under that piece are disapproving of this move by T-Mobile. I just copied them over and compressed out some extraneous stuff. Here ya go:

“Terrible decision by T-Mobile. Nobody ever says “I want more targeted advertising,” unless they are in the ad business.  Time to shop for a new carrier – it’s not like their service was stellar.”

“A disappointing development for a carrier which made its name by shaking up the big carriers with their overpriced plans.”

“Just an unbelievable break in trust!”

“Here’s an idea for you, Verizon. Automatically opt people into accepting a break on their phone bill in exchange for the money you make selling their data.”

“You want to make money on selling customer’s private information? Fine – but in turn, don’t charge your customers for generating that profitable information.”

“Data revenue sharing is coming. If you use my data, you will have to share the revenue with me.”

“Another reason to never switch to T-Mobile.”

“Kudos to WSJ for providing links on how to opt-out!”

“Just another disappointment from T-Mobile.  I guess I shouldn’t be surprised.”

“We were supposed to be controlled by the government.”

“How crazy is it that we are having data shared for service we  PAY for? You might expect it on services that we don’t, as a kind of ‘exchange.'”

“WSJ just earned their subscription fee. Wouldn’t have known about this, or taken action without this story. Toggled it off on my phone, and then sent everyone I know on T Mobile the details on how to protect themselves.”

“Just finished an Online Chat with their customer service dept….’Rest assured, your data is safe with T-Mobile’…no, no it isn’t.  They may drop me as a customer since I sent links to the CCPA, the recent VA privacy law and a link to this article.  And just  to make sure the agent could read it – I sent the highlights too.  the response – ‘Your data is safe….’  Clueless, absolutely clueless.”

“As soon as I heard this, I went in and turned off tracking.  Also, when I get advertising that is clearly targeted (sometimes pretty easy to tell) I make a mental note to never buy or use the product or service advertised if I can avoid it.  Do others think the same?”

“Come on Congress, pass a law requiring any business or non-profit that wants to share your data with others to require it’s customers to ‘opt-in’. We should(n’t) have to ‘opt-out’ to prevent them from doing so, it should be the other way around. Only exception is them sharing data with the government and that there should be laws that limit what can be shared with the government and under what circumstances.”

“There must be massive amounts of money to be made in tracking what people do for targeted ads.  I had someone working for a national company tell me I would be shocked at what is known about me and what I do online.  My 85 year old dad refuses a smartphone and pays cash for everything he does short of things like utilities.  He still sends in a check each month to them, refuses any online transactions.  He is their least favorite kind of person but, he at least has some degree of privacy left.”

“Would you find interest-based ads on your phone helpful or intrusive?
Neither–they’re destructive. They limit the breadth of ideas concerning things I might be interested in seeing or buying. I generally proactively look when I want or need something, and so advertising has little impact on me. However, an occasional random ad shows up that broadens my interest–that goes away with the noise of targeted ads overlain and drowning it out. If T-Mobile were truly interested, it would make its program an opt-in program and tout it so those who might be interested could make the choice.”

“Humans evolved from stone age to modern civilization. These tech companies will strip all our clothes.”

“They just can’t help themselves. They know it’s wrong, they know people will hate and distrust them for it, but the lure of doing evil is too strong for such weak-minded business executives to resist the siren call of screwing over their customers for a buck. Which circle of hell will they be joining Zuckerberg in?”

“Big brother lurks behind every corner.”

“What privacy policy update was this?  Don’t they always preface their privacy updates with the statement: YOUR PRIVACY IS IMPORTANT TO US(?) When did T-Mobile tell its customers our privacy is no longer important to them?  And that in fact we are now going to sell all we know about you to the highest bidder. Seems they need at least to get informed consent to reverse this policy and to demonstrate that they gave notice that was actually received and reviewed and  understood by customers….otherwise, isn’t this wiretapping by a third party…a crime?  Also isn’t using electronic means to monitor someone in an environment where they have the reasonable expectation of privacy a tort. Why don’t they just have a dual rate structure?   The more expensive traditional privacy plan and a cheaper exploitation plan? Then at least they can demonstrate they have given you consideration for the surrender of your right to privacy.”

“A very useful article! I was able to log in and remove my default to receive such advertisements “relevant” to me.  That said all the regulatory bodies in the US are often headed by industry personnel who are their to protect companies, not consumers. US is the best place for any company to operate freely with regulatory burden. T-mobile follows the European standards in EU, but in the US there are no such restraints.”

“It’s far beyond time for the Congress to pass a sweeping privacy bill that outlaws collection and sale of personal information on citizens without their consent.”

“Appreciate the heads-up  and the guidance on how to opt out. Took 30 seconds!”

“Friends, you may not be aware that almost all of the apps on your iPhone track your location, which the apps sell to other companies, and someday the government. If you want to stop the apps from tracking your locations, this is what to do. In Settings, choose Privacy.   Then choose Location Services.  There you will see a list of your apps that track your location.  All of the time. I have switched nearly all of my apps to ‘Never’ track.  A few apps, mostly relating to travel, I have set to “While using.”  For instance, I have set Google Maps to ‘While using.’ That is how to take control of your information.”

“Thank you for this important info! I use T-Mobile and like them, but hadn’t heard of this latest privacy outrage. I’ve opted out.”

“T-Mobile is following Facebook’s playbook. Apple profits by selling devices and Operating Sysyems. Facebook & T-Mobile profit by selling, ………………… YOU!”

“With this move, at first by one then all carriers, I will really start to limit my small screen time.”

“As a 18 year customer of T-Mobile, I would have preferred an email from T-Mobile  about this, rather than having read this by chance today.”

“It should be Opt-In, not Opt-out. Forcing an opt out is a bit slimy in my books. Also, you know they’ll just end up dropping that option eventually and you’ll be stuck as opted in. Even if you opted in, your phone plan should be free or heavily subsidized since they are making dough off your usage.”

“No one automatically agrees to tracking of one’s life, via the GPS on their cell phone. Time to switch carriers.”

“It’s outrageous that customers who pay exorbitant fees for the devices are also exploited with advertising campaigns. I use ad blockers and a VPN and set cookies to clear when the browser is closed. When Apple releases the software to block the ad identification number of my device from being shared with the scum, I’ll be the first to use that, too.”

“It was a pain to opt out of this on T-Mobile. NOT COOL.”

“I just made the decision to “opt out” of choosing TMobile as my new phone service provider.  So very much appreciated.”

“Well, T-Mobile, you just lost a potential subscriber.  And why not reverse this and make it opt-in instead of opt-out?  I know, because too many people are lazy and will never opt-out, selling their souls to advertisers. And for those of you who decide to opt-out, congratulations.  You’re part of the vast minority who actually pay attention to these issues.”

“I have been seriously considering making the switch from Verizon to T-Mobile. The cavalier attitude that T-Mobile has for customers data privacy has caused me to put this on hold. You have to be tone deaf as a company to think that this is a good idea in the market place today.”

“Been with T-Mo for over 20 years because they’re so much better for international travel than the others. I don’t plan on changing to another carrier but I’ll opt out of this, thanks.”

“So now we know why T-Mobile is so much cheaper.”

“I have never heard anyone say that they want more ads. How about I pay too much for your services already and I don’t want ANY ads. We need a European style GDP(R) with real teeth in the USA and we need it now!”

“So these dummies are going to waste their money on ads when their service Suckky Ducky!   Sorry, but it’s a wasteland of T-Mobile, “No Service” Bars on your phone with these guys.  It’s the worst service, period. Spend your money on your service, the customers will follow.  Why is that so hard for these dummies to understand?”

“If they do this I will go elsewhere.”

“When will these companies learn that their ads are an annoyance.  I do not want or appreciate their ads.  I hate the words ‘We use our data to customize the ads you receive.'”

“Imagine if those companies had put that much effort and money into actually improving their service. Nah, that’s ridiculous.”

“Thank you info on how to opt out. I just did so. It’s up to me to decide what advertising is relevant for me, not some giant corporation that thinks they own me.”

“who is the customer out there like, Yeah I want them to advertise to me! I love it!’? Hard to believe anyone would ask for this.”

“I believe using a VPN would pretty much halt all of this nonsense, especially if the carrier doesn’t want to cooperate.”

“I’m a TMobile customer, and to be honest, I really don’t care about advertising–as long as they don’t give marketers my phone number.  Now that would be a deal breaker.”

“What about iPhone users on T-Mobile?  Apple’s move to remove third party cookies is creating this incentive for carriers to fill the void. It’s time for a national privacy bill.”

“We need digital privacy laws !!!   Sad that Europe and other countries are far ahead of us here.”

“Pure arrogance on the part of the carrier. What are they thinking at a time when people are increasingly concerned about privacy? I’m glad that I’m not currently a T-Mobile customer and this seals the deal for me for the future.”

“AT&T won’t actually let you opt out fully. Requests to block third party analytics trigger pop up messages that state ‘Our system doesn’t seem to be cooperating. Sorry for any inconvenience. Please try again later’.”

“One of the more salient articles I’ve read anywhere recently. Google I understand, we get free email and other stuff, and it’s a business. But I already pay a couple hundred a month to my phone provider. And now they think it’s a good idea to barrage me and my family? What about underage kids getting ads – that must be legal only because the right politicians got paid off.”

“Oh yeah, I bet customers have been begging for more “targeted advertising”.  It would be nice if a change in privacy policy also allowed you to void your 12 month agreement with these guys.”

“Thank you for showing us how to opt out. If these companies want to sell my data, then they should pay me part of the proceeds. Otherwise, I opt out.”

Think T-Mobile is listening?

If not, they’re just a typical carrier with 0% customer commitment.


The eventual normal

One year ago exactly (at this minute), my wife and I were somewhere over Nebraska, headed from Newark to Santa Barbara by way of Denver, on the last flight we’ve ever taken. Prior to that we had put about four million miles on United alone, flying almost constantly somewhere, mostly on business. The map above traces what my pocket GPS recorded on various trips (and far from all of them) by land, sea and air since 2007. This life began for me in 1990 and for my wife long before that. Post-Covid, none of this will ever be the same. For anybody.

We also haven’t seen most of our kids or grandkids in more than a year. Same goes for countless friends, business associates and fellow (no longer) travelers on other routes of life.

The old normal is over. We don’t know what the new normal will be, exactly; but it’s clear that business travel as we knew it is gone for years to come, if not forever.

I also sense a generational hand-off. Young people always take over from their elders at some point, but this handoff is from the physical to the digital. Young people are digital natives. Older folk are at best familiar with the digital world: adept in many cases, but not born into it. Being born into the digital world is very different. And still very new.

Though my wife and I have been stuck in Southern California for a year now, we have been living mostly in the digital world, working hard on that handoff, trying to deposit all we can of our long experience and hard-won wisdom on the conveyor belt of work we share across generations.

There will be a new normal, eventually. It will be a normal like the one we had in the 20th Century, which started with WWI and ended with Covid. This was a normal where the cultural center was held by newspapers and broadcasting, and every adult knew how to drive.

Now we’re in the 21st Century, and it’s something of a whiteboard. We still have the old media and speak the same languages, but Covid pushed a reset button, and a lot of the old norms are open to question, if not out the window completely.

Why should the digital young accept the analog-born status quos of business, politics, religion, education, transportation or anything? The easy answer is because the flywheels of those things are still spinning. The hard answers start with questions about how we can do all that stuff better. For sure all the answers will be, to a huge degree, digital.

Perspective: the world has been digital for only a few years now, and will likely remain so for many decades or centuries. Far more has not been done than has, and lots of stuff will have to be improvised until we (increasingly the young folk) figure out the best approaches. It won’t be easy. None of the technical areas my wife and I are involved with personally (and I’ve been writing about) —privacy, identity, fintech, facial recognition, advertising, journalism—have easy answers to their problems, much less final ones.

But we like working on them, and sensing some progress, which doesn’t suck.

Bill Wendel's Real Estate Cafe

Pandemic exposed Hidden Infections – Real Estate Reckoning Overdue

“When I started in this business, there was a broad consensus around making the American dream accessible to middle- and lower-income people. After this year…

The post Pandemic exposed Hidden Infections - Real Estate Reckoning Overdue first appeared on Real Estate Cafe.


Doc Searls Weblog

Enough with the giant URLs

A few minutes ago I wanted to find something I’d written about privacy. So I started with a simple search on Google:

The result was this:

Which is a very very very very very very very very very very very very very long way of saying this:

 https://google.com/search?&q=doc+searls+…

That’s 609 characters vs. 47, or about 13 times longer. (Hence the word “very” repeated 13 times, above.)

Why are search URLs so long these days? They didn’t use to be.

I assume that the 562 extra characters in that long URL tell Google more about me and what I’m doing than they used to want to know. In old long-URL search results, there was human-readable stuff about the computer and the browser being used. This mess surely contains the same, plus lots of personal data about me and what I’m doing online beyond searching for this one thing. But I don’t know. And that’s surely part of the idea here.

This much, however, is easy for a human to read:

Giant URLs like this are cyphers, on purpose. You’re not supposed to know what they actually say. Only Google should know. There is a lot about your searches that is Google’s business and not yours. Google has lost interest (if it ever had any) in making search result URLs easy to copy and use somewhere else, such as in a post like this.

Bing is better in this regard. Here’s the same search result there:

That’s 101 characters, or less than 1/6th of Google’s.

The de-crufted URL is also shorter:

 https://bing.com/search?q=doc+searls+pri…

Just 44 characters.
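
For what it’s worth, stripping a search URL down to its basic form takes only a few lines; here is a small sketch (the example URL and its extra parameters are invented, since the screenshots above aren’t reproduced here):

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def decruft_search_url(url: str, keep: tuple = ("q",)) -> str:
    """Drop every query parameter except the actual search terms."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in keep]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

# A made-up example with tracking-style cruft attached:
print(decruft_search_url(
    "https://www.google.com/search?q=doc+searls+privacy&sxsrf=ALe&ei=abc&oq=doc"
))
# https://www.google.com/search?q=doc+searls+privacy
```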

So here is a suggestion for both companies: make search results available with one click in their basic forms. That will make sharing those URLs a lot easier to do, and create good will as well. And, Google, if a cruft-less URL is harder for you to track, so what? Maybe you shouldn’t be doing some of this tracking in the first place.

Sometimes it’s better to make things easy for people than harder. This is one of those times. Or billions of them.

Tuesday, 09. March 2021

Jon Udell

The New Freshman Comp

The column republished below, originally at http://www.oreillynet.com/pub/a/network/2005/04/22/primetime.html, was the vector that connected me to my dear friend Gardner Campbell. I’m resurrecting it here partly just to bring it back online, but mainly to celebrate the ways in which Gardner — a film scholar among many other things — is, right now, bringing his film expertise to the practice of online teaching.

In this post he reflects:

Most of the learning spaces I’ve been in provide very poorly, if at all, for the supposed magic of being co-located. A state-mandated prison-spec windowless classroom has less character than a well-lighted Zoom conference. A lectern with a touch-pad control for a projector-and-screen combo is much less flexible and, I’d argue, conveys much less human connection and warmth than I can when I share a screen on Zoom during a synchronous class, or see my students there, not in front of a white sheet of reflective material, but in the medium with me, lighting up the chat, sharing links, sharing the simple camaraderie of a hearty “good morning” as class begins.

And in this one he shares the course trailer (!) for Fiction into film: a study of adaptations of “Little Women”.

My 2005 column was a riff on a New York Times article, Is Cinema Studies the new MBA? It was perhaps a stretch, in 2005, to argue for cinema studies as an integral part of the new freshman comp. The argument makes a lot more sense now.

The New Freshman Comp

For many years I have alternately worn two professional hats: writer and programmer. Lately I find myself wearing a third hat: filmmaker. When I began making the films that I now call screencasts, my readers and I both sensed that this medium was different enough to justify the new name that we collaboratively gave it. Here’s how I define the difference. Film is a genre of storytelling that addresses the whole spectrum of human experience. Screencasting is a subgenre of film that can tell stories about the limited — but rapidly growing — slice of our lives that is mediated by software.

Telling stories about software in this audiovisual way is something I believe technical people will increasingly want to do. To explain why, let’s first discuss a more ancient storytelling mode: writing.

The typical reader of this column is probably, like me, a writer of both prose and code. Odds are you identify yourself as a coder more than as a writer. But you may also recently have begun blogging, in which case you’ve seen your writing muscles grow stronger with exercise.

Effective writing and effective coding are more closely related than you might think. Once upon a time I spent a year as a graduate student and teaching assistant at an Ivy League university. My program of study was science writing, but that was a tiny subspecialty within a larger MFA (master of fine arts) program dedicated to creative writing. That’s what they asked me to teach, and the notion terrified me. I had no idea what I’d say to a roomful of aspiring poets and novelists. As it turned out, though, many of these kids were in fact aspiring doctors, scientists, and engineers who needed humanities credits. So I decided to teach basic expository writing. The university’s view was that these kids had done enough of that in high school. Mine was that they hadn’t, not by a long shot.

I began by challenging their reverence for published work. Passages from books and newspapers became object lessons in editing, a task few of my students had ever been asked to perform in a serious way. They were surprised by the notion that you could improve material that had been professionally written and edited, then sold in bookstores or on newsstands. Who were they to mess with the work of the pros?

I, in turn, was surprised to find this reverent attitude even among the budding software engineers. They took it for granted that programs were imperfect texts, always subject to improvement. But they didn’t see prose in the same way. They didn’t equate refactoring a program with editing a piece of writing, as I did then and still do.

When I taught this class more than twenty years ago the term “refactoring” wasn’t commonly applied to software. Yet that’s precisely how I think about the iterative refinement of prose and of code. In both realms, we adjust vocabulary to achieve consistency of tone, and we transform structure to achieve economy of expression.

I encouraged my students to regard writing and editing as activities governed by engineering principles not unlike the ones that govern coding and refactoring. Yes, writing is a creative act. So is coding. But in both cases the creative impulse is expressed in orderly, calculated, even mechanical ways. This seemed to be a useful analogy. For technically-inclined students earning required humanities credits, it made the subject seem more relevant and at the same time more approachable.

In the pre-Internet era, none of us foresaw the explosive growth of the Internet as a textual medium. If you’d asked me then why a programmer ought to be able to write effectively, I’d have pointed mainly to specs and manuals. I didn’t see that software development was already becoming a global collaboration, that email and newsgroups were its lifeblood, and that the ability to articulate and persuade in the medium of text could be as crucial as the ability to design and build in the medium of code.

Nowadays, of course, software developers have embraced new tools of articulation and persuasion: blogs, wikis. I’m often amazed not only by the amount of writing that goes on in these forms, but also by its quality. Writing muscles do strengthen with exercise, and the game of collaborative software development gives them a great workout.

Not everyone drinks equally from this fountain of prose, though. Developers tend to write a great deal for other developers, but much less for those who use their software. Laziness is a factor; hubris even more so. We like to imagine that our software speaks for itself. And in some ways that’s true. Documentation is often only a crutch. If you have to explain how to use your software, you’ve failed.

It may, however, be obvious how to use a piece of software, and yet not at all obvious why to use it. I’ll give you two examples: Wikipedia and del.icio.us. Anyone who approaches either of these applications will immediately grasp their basic modes of use. That’s the easy part. The hard part is understanding what they’re about, and why they matter.

A social application works within an environment that it simultaneously helps to create. If you understand that environment, the application makes sense. Otherwise it can seem weird and pointless.

Paul Kedrosky, an investor, academic, and columnist, alluded to this problem on his blog last month:

Funny conversation I had with someone yesterday: We agreed that the thing that generally made us both persevere and keep trying any new service online, even if we didn’t get it the first umpteen times, was having Jon Udell post that said service was useful. After all, if Jon liked it then it had to be that we just hadn’t tried hard enough. [Infectious Greed]

I immodestly quote Paul’s remarks in order to revise and extend them. I agree that the rate-limiting factor for software adoption is increasingly not purchase, or installation, or training, but simply “getting it.” And while I may have a good track record for “getting it,” plenty of other people do too — the creators of new applications, obviously, as well as the early adopters. What’s unusual about me is the degree to which I am trained, inclined, and paid to communicate in ways that help others to “get it.”

We haven’t always seen the role of the writer and the role of the developer as deeply connected but, as the context for understanding software shifts from computers and networks to people and groups, I think we’ll find that they are. When an important application’s purpose is unclear on the first umpteen approaches, and when “getting it” requires hard work, you can’t fix the problem with a user-interface overhaul or a better manual. There needs to be an ongoing conversation about what the code does and, just as importantly, why. Professional communicators (like me) can help move things along, but everyone needs to participate, and everyone needs to be able to communicate effectively.

If you’re a developer struggling to evangelize an idea, I’d start by reiterating that your coding instincts can also help you become a better writer. Until recently, that’s where I’d have ended this essay too. But recent events have shown me that writing alone, powerful though it can be, won’t necessarily suffice.

I’ve written often — and, I like to think, cogently — about wikis and tagging. But my screencasts about Wikipedia and del.icio.us have had a profoundly greater impact than anything I’ve written on these topics. People “get it” when they watch these movies in ways that they otherwise don’t.

It’s undoubtedly true that an audiovisual narrative enters many 21st-century minds more easily, and makes a more lasting impression on those minds, than does a written narrative. But it’s also true that the interactive experience of software is fundamentally cinematic in nature. Because an application plays out as a sequence of frames on a timeline, a narrated screencast may be the best possible way to represent it and analyze it.

If you buy either or both of these explanations, what then? Would I really suggest that techies will become fluid storytellers not only in the medium of the written essay, but also in the medium of the narrated screencast? Actually, yes, I would, and I’m starting to find people who want to take on the challenge.

A few months ago I heard from Michael Tiller, who describes himself as a “mechanical engineer trapped in a computer scientist’s body.” Michael has had a long and passionate interest in Modelica, an open, object-oriented language for modeling mechanical, electrical, electronic, hydraulic, thermal, and control systems. He wanted to work with me to develop a screencast on this topic. But it’s far from my domains of expertise and, in the end, all he really needed was my encouragement. This week, Michael launched a website called Dynopsis.com that’s chartered to explore the intersection of engineering and information technologies. Featured prominently on the site is this 20-minute screencast in which he illustrates the use of Modelica in the context of the Dymola IDE.

This screencast was made with Windows Media Encoder 9, and without the help of any editing. After a couple of takes, Michael came up with a great overview of the Modelica language, the Dymola tool, and the modeling and simulation techniques that they embody. Since he is also author of a book on this subject, I asked Michael to reflect on these different narrative modes, and here’s how he responded on his blog:

If I were interested in teaching someone just the textual aspects of the Modelica language, this is exactly the approach I would take.

But when trying to teach or explain a medium that is visual, other tools can be much more effective. Screencasts are one technology that could really make an impact on the way some subjects are taught and I can see how these ideas could be extended much further. [Dynopsis: Learning by example: screencasts]

We’re just scratching the surface of this medium. Its educational power is immediately obvious, and over time its persuasive power will come into focus too. The New York Times recently asked: “Is cinema studies the new MBA?” I’ll go further and suggest that these methods ought to be part of the new freshman comp. Writing and editing will remain the foundation skills they always were, but we’ll increasingly combine them with speech and video. The tools and techniques are new to many of us. But the underlying principles — consistency of tone, clarity of structure, economy of expression, iterative refinement — will be familiar to programmers and writers alike.

Sunday, 07. March 2021

Jon Udell

The 3D splendor of the Sonoma County landscape

We’ve been here 6 years, and the magical Sonoma County landscape just keeps growing on me. I’ve written about the spectacular coastline, a national treasure just twenty miles to our west that’s pristine thanks to the efforts of a handful of environmental activists, notably Bill Kortum. Even closer, just ten miles to our east, lies the also spectacular Mayacamas range where today I again hiked Bald Mountain in Sugarloaf Ridge State Park.

If you ever visit this region and fancy a hike with great views, this is the one. (Ping me and I’ll go with you at the drop of a hat.) It’s not too challenging. The nearby Goodspeed Trail, leading up to Gunsight Notch on Hood Mountain, is more demanding, and in the end, less rewarding. Don’t get me wrong, the view from Gunsight — looking west over the Sonoma Valley and the Santa Rosa plain — is delightful. But the view from the top of Bald Mountain is something else again. On a clear day (which is most of them) you can spin a slow 360 and take in the Napa Valley towns of St. Helena and Calistoga to the west, Cobb and St. Helena mountains to the north, the Sonoma Valley and Santa Rosa plain to the east, then turn south to see Petaluma, Mt. Tamalpais, the top of the Golden Gate bridge, San Francisco, the San Pablo Bay, the tops of the tallest buildings in Oakland, and Mt. Diablo 51 miles away. Finally, to complete the loop, turn east again to look at the Sierra Nevada range 130 miles away.

The rugged topography you can see in that video occurs fractally everywhere around here. It’s taken a while to sink in, but I think I can finally explain what’s so fascinating about this landscape. It’s the sightlines. From almost anywhere, you’re looking at a 3D puzzle. I’m a relative newcomer, but I hike with friends who’ve lived their whole lives here, and from any given place, they are as likely as I am to struggle to identify some remote landmark. Everything looks different from everywhere. You’re always seeing multiple overlapping planes receding into the distance, like dioramas. And they change dramatically as you move around even slightly. Even just ten paces in any direction, or a slight change in elevation, can alter the sightlines completely and reveal or hide a distant landmark.

We’ve lived in flat places, we’ve lived in hilly places, but never until now in such a profoundly three-dimensional landscape. It is a blessing I will never take for granted.


Bill Wendel's Real Estate Cafe

CASA Share: Dreaming of the Saints Next Door!

Have you seen the Pope’s new book, Let Us Dream: The Path to a Better Future? It encourages people to dream bold new futures coming…

The post CASA Share: Dreaming of the Saints Next Door! first appeared on Real Estate Cafe.

Saturday, 06. March 2021

Wip Abramson

Thoughts and Ideas on the Memory of Things

An idea has been maturing in my thoughts for a while now. Or rather I have been thinking about a series of ideas, programming projects I would love to actualise, which I recently realised share a common thread: memory and how we access it. This is probably not surprising for software developers; pretty much any application we can imagine requires some form of data store. Rarely, though, in my experience, do we talk about this data in terms of memory.

My more recent work researching identity, privacy and trust in digital interactions has evolved and broadened my perspective on the importance of memory. Our identity, whatever we conceive that to be, must be understood to exist in close relation to memory. Or, as the mathematician in me wants to say, as a function of memory, I ~ F(m) 1. And as Herbert Simon details in The Sciences of the Artificial, memory is a key property of any intelligent system 2. The ability to take information from the past and apply it when navigating the present moment is a powerful skill in the Homo sapiens toolbox, both for individuals and for the species.

These patterns of thought on memory percolating in my mind have been further influenced by my engagement with blockchain/distributed ledger technology, both as a concept and through its practical use in application development. I especially find the ideas coming out of the Ethereum community transformative for the possibilities of digital application design. Persistent Compute Objects (PICOs) 3 are a more recent idea and open source project I have been following that appears to support novel interaction patterns.

It has been an interesting journey to this point. What follows is a sketch of the evolution of my thoughts on memory, told through the lens of three ideas and the questions these ideas sparked in me. Then I intend to reflect on where my thoughts are now, because it is only now, reflecting on these ideas, that I see the common thread - human memory.

The ideas that influenced my thinking and help illustrate my thoughts are: Viewing Time, The Community Mind and Nifty Books. Each originates at a different point in my life, but all were ideas for programming side projects I could use to cut my teeth on a new technology, language or library. Each idea has a special place in my own memory, and it is from these memories that the thoughts I am sharing emerged.

Viewing Time

My first idea, back before I knew how to program. The inspirational carrot I used to motivate myself to learn. Not that I ever wrote more than a few lines of code on this. It originates from a time in Maastricht visiting a friend: we were sitting looking at a beautiful view and I thought:

What if we could create a timelapse of a view from a specific location? A View of Time.

I think I tried to get my friend to go back there every day to take a photo to do just this. Not the ideal solution; my thinking has evolved since then. Here are some of the questions I thought about:

What if we could crowd source the collection of the photos for a view, enabling anyone with a camera to contribute?

How would view’s of time be discovered, found, contributed to and viewed?

How would you prevent “bad” views from being added? Inappropriate, non-valued, etc.?

Who decides what a “bad” view is? Authorisation problem

Where would views be stored?

Who stores them? Who has control over them? Who manages that control? Who would host and pay for this application and why?

How might Viewing Time change our relationship with our environment and help us reflect on the change happening all around us?

Chasing Ice, a documentary I watched on another visit to the Netherlands, emphasised how powerful this could be. Here is an example. Watching a Chinese cityscape evolve over these last 20 years would have provided another staggering view of time and change. Here are some examples of this using satellite imagery.

How might Viewing Time help communities record and interact with shared memories?

I was recently involved with planting a Community Orchard and became aware how valuable creating a view as a shared artifact could be. It might help communities celebrate and appreciate the positive changes they bring about.

How might date, time or season be used to present different time-lapses of the same view?

How would we prevent overtourism?

I never wanted to turn views into used and abused tourist spaces, an acknowledged tension. I explicitly want to avoid insta-tourism type effects. How? This is a funny ad from the Kiwis recently that keeps me mindful of this.

What are the incentives? How do we ensure this is an artifact all can enjoy while minimising the unintended consequences associated with the change in context-relative informational norms?

How might this be used to create incentives for positive, respectful tourism?

I realised, in part, that this is about how views are discovered. This got me closer to the importance of location.

What if you could only discover a view if you were in its location?

How might this help to prevent bad content? How would you even prove you were in a certain location? Who/what would you prove it to?

What are the context and associated informational norms of a View?

Who defines and evolves these norms? What are the rules? How might location be used as a strong authenticator?4
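
To make that last question concrete, here is a minimal sketch of a View that only accepts contributions from people physically at its location, using a simple great-circle distance check; the names, coordinates and 50-metre radius are my own placeholder assumptions, not a design:

```python
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points, in metres (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

@dataclass
class View:
    """A crowd-sourced time-lapse anchored to a physical location."""
    lat: float
    lon: float
    radius_m: float = 50.0                        # how close you must be to contribute
    photos: list = field(default_factory=list)    # (timestamp, photo_ref) tuples

    def nearby(self, lat: float, lon: float) -> bool:
        return distance_m(self.lat, self.lon, lat, lon) <= self.radius_m

    def contribute(self, lat: float, lon: float, timestamp: str, photo_ref: str) -> bool:
        # Location as a (weak) authenticator: only accept photos taken on the spot.
        if not self.nearby(lat, lon):
            return False
        self.photos.append((timestamp, photo_ref))
        return True

# Arbitrary illustrative coordinates for a view in Maastricht.
maastricht_view = View(lat=50.8514, lon=5.6910)
maastricht_view.contribute(50.8515, 5.6911, "2021-03-06T17:42:00Z", "ipfs://photo-1")
```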

These are all pointers, sketching the outlines of thought that I have been evolving over the 5 or so years since the idea first appeared in my mind.

It is an idea that I never made any meaningful, tangible progress on. Except for a few positive conversations with friends it has remained wishful thinking. It is, in my humble opinion, a beauty of an idea, something that I would love to see happen. I am optimistic: the technology and mental model for application design is shifting in ways that open up an entirely new design space for these kinds of ideas. Something other than the for-profit venture capitalist endeavours that have led to the colonisation and privatisation of much of our virtual spaces. A for-profit venture could never be the best realisation of this idea.

The Community Mind

This is my baby. My first side project. It is how I learnt React and felt the power of GraphQL. The first website and server I ever deployed - a challenging experience, but one I learnt from. It was also my first encounter with authentication and account management; what a nightmare that was.

After completing a year in industry this was the next programming project I worked on. I committed a lot of my time to this work. Including spending a month straight on it while dipping my toes into the digital nomad lifestyle in Chiang Mai, where I was exposed to and became fascinated with blockchain, Bitcoin and all that crazy stuff. An influential period of my life really.

Anyway, the idea revolves around creating a place for us to organise our questions. A space for questions to be shared and thought about, but not a place to collect answers. In my mind it was explicitly not for answers. Rather it would provide triggers to thoughts within individuals as they pondered these questions from their own unique perspective, shaped by their lived experience. I am a strong believer that we all have the ability to imagine creative ideas and possibilities; the hard part of course is actioning those ideas. As in many ways this text demonstrates.

The initial inspiration for this idea came from the book A More Beautiful Question. I wanted to try to develop something that would make it easier to ask and discover beautiful questions. A place where these questions could be crowd sourced, recorded and connected. I wanted to provide an interface for individuals to explore a web of interconnected questions being thought by others. I wanted people to be able to contribute their own questions and thought pathways to this network. Each individual interacting with and contributing to the community mind.

The questions that surfaced when thinking through the design requirements of such a project were something along these lines:

How will I manage questioners?

I was still thinking in a user-account paradigm in those days. Who gets to ask questions? Who gets to see the questions asked?

How will questioners search and discover the questions they are interested in?

What if questions could be linked to other questions? Who can link questions to other questions? Which links do people see and how do they decide? What if these links grew in strength the more they were traversed and endorsed like neural pathways in our own minds?

How will we prevent duplicate questions?

Am I trying to create a single global repository of questions?

Or would it be better to allow each individual to manage their web of questions independently?

How might you enable the best of both?

How will we prevent bad data? Questions that don’t align with the ethos of the mind?

Whose mind? Who decides? How might questions be optimally curated? What is optimal and who is curating? Where would this information be stored?

What is the business model for such an application?

What are the incentives?

When initially developing this project I had in mind a database for storing questions and their links, which people would interact with: searching and filtering to discover the questions they were interested in, and contributing their own questions and connections to this storage. All of it managed by some centralised application, providing a single view and interface for people to interact with.

Today, I have a model of individuals being able to maintain their own mind, curating the questions and connections that they find useful, and then a mechanism to network and aggregate the minds of others into a larger web of questions for all to explore. Imagine a mind like a GitHub repository: the entity that creates it would be able to manage the rules governing how questions and links are contributed. I even considered private minds as a potential business model, although my desire was and remains today to develop an open source tool for recording, curating and discovering beautiful questions.

I see these questions as a loose scaffold around thought, hinting at the problem space without prescribing the solution. A common entry point to creativity that any individual from any background at any moment in time would be able to interact with, using their own unique perspective to draw new insights and inspire different solutions.
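
A minimal sketch of that model, purely illustrative and not the original implementation: each mind holds questions and weighted links, links strengthen as they are traversed, and minds can be merged into a larger community web:

```python
from collections import defaultdict

class Mind:
    """A personal web of questions, with links that strengthen as they are traversed."""

    def __init__(self):
        self.questions = set()
        self.links = defaultdict(float)   # (q_from, q_to) -> strength

    def ask(self, question: str) -> None:
        self.questions.add(question)

    def link(self, q_from: str, q_to: str) -> None:
        # Create a pathway between two questions.
        self.ask(q_from)
        self.ask(q_to)
        self.links[(q_from, q_to)] += 0.0

    def traverse(self, q_from: str, q_to: str) -> None:
        # Like a neural pathway, a link grows stronger each time it is followed.
        self.links[(q_from, q_to)] += 1.0

    def merge(self, other: "Mind") -> "Mind":
        """Aggregate two minds into a larger community web of questions."""
        combined = Mind()
        combined.questions = self.questions | other.questions
        for links in (self.links, other.links):
            for edge, strength in links.items():
                combined.links[edge] += strength
        return combined

mine, yours = Mind(), Mind()
mine.link("What is memory?", "How do artifacts remember?")
yours.link("What is memory?", "Who curates collective memory?")
community_mind = mine.merge(yours)
```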

I have a lot of fond memories developing this idea, including a weekend in Porto visiting a friend where I discovered the joy and value of committing thoughts to paper. Creating a physical artifact to interact with. Something that in my view can never fully be replicated in a digital medium, but what if you could have both?

This book takes me back

Towards the end of my active development of this idea I attempted to integrate a token, Simple Token, as part of a challenge they held. My idea at the time was to use this as some form of incentive mechanism for the application, although the actual execution was a bit clumsy looking back. You can view my submission here. Then there is this old GitHub issue from a month-long hackathon called Blockternship.

While development is dormant, I am still very committed to making this a thing. One day!

Nifty Books

This is another lovely idea, in my book at least. It originates from my desire to learn how to write smart contracts using Solidity, the Ethereum programming language. The idea stemmed from thinking through how we might digitally represent the books we own, creating a distributed library and opening access to a wealth of books. Moving them off our shelves and into people’s hands, helping the wisdom held within them diffuse into more people’s minds and become recorded in their memories. Books can provide an intoxicating fountain of knowledge or a refreshing escape from reality. I appreciate both aspects equally and would love more people to experience their joy.

For a bit of history of this idea you can see my proposal for the ETH Berlin hackathon around this. I proposed creating an application that allowed anyone to mint an ERC721 Non-Fungible Token to represent their physical book. Unfortunately, I ended up forming a different team and haven’t made much progress on realising this idea. I remain a sketchy Solidity developer at best. That said, progress has been made. The concept is more mature in my mind, and the Ethereum development landscape has come a long way since 2018. As my recent experience at the virtual ETH Denver highlighted, while I didn’t manage to submit anything or even write much code, I did get a sense of how far things have come. The scaffold-eth repo seems like a great place to start, if I ever do manage to carve out time to create this.

I am convinced, and regularly reminded, that this idea could unlock so much hidden value. Books deserve to be read more than once; indeed there is something beautiful about a book having been read by many different people. Throughout the course of my studies in Edinburgh I have developed a fairly extensive personal collection of some truly fascinating books. I would love to have a means to share them with others in the area also interested in this material. And yes, I am sure there exist ways for me to do this if I really tried, but I believe giving a book a memory has more implications than simply making it easily shareable.
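
Here is the kind of record I have in mind, sketched in plain Python rather than as the actual Solidity contract; the field names and rules are assumptions, just enough to show a book carrying its own memory:

```python
from dataclasses import dataclass, field

@dataclass
class BookToken:
    """A toy digital twin of a physical book, with a memory of its journey."""
    token_id: int
    title: str
    qr_code: str                                  # printed inside the cover, links paper to token
    owner: str
    holder: str = ""                              # whoever physically has the book right now
    memory: list = field(default_factory=list)    # events the book "remembers"

    def __post_init__(self):
        self.holder = self.holder or self.owner

    def lend(self, borrower: str) -> None:
        if self.holder != self.owner:
            raise ValueError("book is already out on loan")
        self.holder = borrower
        self.memory.append(("lent", borrower))

    def return_book(self) -> None:
        self.memory.append(("returned", self.holder))
        self.holder = self.owner

book = BookToken(1, "The Sciences of the Artificial", "qr://nifty/1", owner="wip")
book.lend("a neighbour in Edinburgh")
book.return_book()
```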

Here are a few questions that this idea has raised over the years as I wondered about how it might be developed:

How might we digitally represent a physical book?

How would you link the physical book with its digital representation? Would a QR code work here?

How might we represent ownership?

What affordances should owners of books have? What affordances should borrowers have?

How would books within the virtual library be discovered, requested and returned?

How might you pay for postage of books between participants?

What if the model was to create primarily a local virtual library, but with exchanges between localities when requested?

Libraries have existed like this for ages.

How might the digital representation of a book be used to embed it with a memory?

What information would the book want to store in its memory? How might we represent the list of borrowers without compromising their privacy?

What if the book had an interface to the Community Mind allowing readers to ask and share questions that the material provoked in them?

How might being in possession of a book, either as a lender or borrower, provide a mechanism for access control into other applications? E.g. the Community Mind. What can we learn from the way individuals interact with eBooks today? How might this approach help us appreciate this medium more deeply?

How might we deter bad actors abusing the virtual library?

What is the incentive model? Who decides? What is to stop malicious actors creating virtual books unattached to physical copies? What prevents people from stealing books they have borrowed? Who stores the information around virtual books?

How are the search and discovery capabilities for these books mediated?

Who is mediating this? What can we learn from the way books are shared and exchanged by travellers?

What if the things we bought came with a configurable and extendable digital memory?

How might this both simplify and expand the field of interaction enabled and perceived by those in proximity to the device?
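On the borrower-privacy question above, one hedged possibility is a commit-and-reveal pattern: the book's memory stores only a hash each borrower chooses to leave behind, and a past borrower can later prove they held the book by revealing the preimage. The sketch below assumes the hypothetical BookToken contract sketched earlier; BookMemory, leaveMemory, entryCount and verify are illustrative names only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of a privacy-preserving "book memory".
contract BookMemory {
    // tokenId => ordered list of borrower commitments,
    // e.g. keccak256(abi.encodePacked(name, secret salt))
    mapping(uint256 => bytes32[]) private _entries;

    event MemoryLeft(uint256 indexed tokenId, bytes32 commitment);

    // A borrower leaves an opaque commitment rather than an identity.
    function leaveMemory(uint256 tokenId, bytes32 commitment) external {
        _entries[tokenId].push(commitment);
        emit MemoryLeft(tokenId, commitment);
    }

    function entryCount(uint256 tokenId) external view returns (uint256) {
        return _entries[tokenId].length;
    }

    // A past borrower proves they held the book by revealing the preimage.
    function verify(uint256 tokenId, uint256 index, string calldata name, bytes32 salt)
        external
        view
        returns (bool)
    {
        return _entries[tokenId][index] == keccak256(abi.encodePacked(name, salt));
    }
}
```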

As usual, a whole load of questions. Always there are questions. I present them here to provoke your own thought and inquiry around these ideas.

The Common Thread

Now, these ideas are not directly or intentionally linked. They are connected through me, and through my desire to produce ideas for software applications I would be motivated to create. Most developers you meet will have a few of these kicking around if you ask them. Over time all ideas evolve, dots connect and new insight emerges. The three ideas I presented trace my own evolution, as much as anything else, from a computer science undergraduate to, hopefully, a final-year PhD student.

It is only recently, from the new perspectives my research has provided, that I can reflect on all these ideas and clearly see a common thread in my thinking. In keeping with the rest of the article, I will summarise it in a series of questions:

How might we use technology to experiment with the ways in which we can attach memory to the artifacts we place meaning in?

How might we design this memory to be open and extendable supporting permission-less innovation at the edges?

How might an artifact use this memory to intelligently interact with its environment and the people interacting with it?

How might such artifacts be designed to respect the privacy of those they interact with while ensuring they are held accountable within a defined context?

What if we started to centralise information on artifacts, using our physical interaction with these artifacts to provide an alternative to the search engine for the discovery of information?

How might centralising information on the artifact, be that a location, object or thought, transform the way we design digital systems and our ability to actively maintain collective memory at all scales of society? How might context-relative informational norms for managing this memory be defined, communicated, upheld and evolved? How might such artifacts change the way we identify and authenticate digitally?

How might this change the nature of the virtual spaces we interact in?

How might this provide natural limits to the participants within these spaces? How might this create virtual neighbours and encourage trustworthy behaviour and positive relationships?

What are the descriptive and normative properties for structuring memory?

What are the properties of collective memory that we experience in the majority of virtual environments we exist in today and how do they differ from the ways we have managed memory in the physical reality across time and context?

It is an exploration of the structure of memory: how we might augment this structure with intention using digital technology, and the implications of this for meaning-making within both individuals and groups. To date this exploration has predominantly involved a few powerful entities constructing, owning and manipulating the digital structures of memory to meet their own agendas [5].

It is also interesting to me that while working on this idea I have created artifacts, both physical and digital; they hold meaning to me but are no longer only accessible to me. By committing thoughts to paper, code or words, they become remembered, at least partially, in a different kind of memory. Indeed, this text itself is one of these artifacts.

I could go on, but this is long already and I wrote it for fun more than anything: a break from the rigidity of academic writing, which can be suffocating at times. A more detailed, formal analysis of these ideas is for another time.

Thanks for making it this far. These thoughts are pretty fresh; I would love to know what they triggered in you.

This is something my thoughts wander across on occasion. I am always drawn to the ideas in quantum field theory, where particles exist in a field of potential and it is only upon measurement that the field collapses. It feels like there is something interesting in having a similar mental model for identity.

The Sciences of the Artificial, Third Edition. Herbert A Simon. 1969. This is a pretty dense book, but worth a read. Chapters on the Psychology of Thinking and Learning and Remembering are relevant to this post.

Persistent Compute Objects, a fascinating but little-known project originating, I believe, from Phil Windley some time ago. It is currently being actively developed at Brigham Young University. Some good links: https://picolabs.atlassian.net/wiki/spaces/docs/pages/1189992/Persistent+Compute+Objects, https://www.windley.com/tags/picos.shtml

One idea that stuck with me from this is location-based authentication. This is one of the reasons I was so interested in the FOAM project, although it seems a long way from reaching its potential at the moment. Alternative, censorship-resistant location services feel like something that would unlock a lot of value.

I am reminded of a recent podcast episode I listened to involving Kate Raworth, the creator of Doughnut Economics, and the Center for Humane Technology, responsible for The Social Dilemma. Here is a noteworthy and relevant clip (sorry, it's on Facebook), although I highly recommend listening to the entire episode.

Friday, 05. March 2021

MyDigitalFootprint

Updating our board papers for Data Attestation

I have written and read my fair share of board and investment papers over the past 25 years. This post is not to add to the abundance of excellent work on how to write a better board/investment paper or what the best structure is - it would annoy you and waste my time.

A classic “board paper” will likely have the following headings: Introduction, Background, Rationale, Structure/Operations, Illustrative Financials & Scenarios, Competition, Risks and Legal. Case by case, there are always minor adjustments. Finally, there will be some form of Recommendation inviting the board to note key facts and approve the request. I believe it is time for the Chair or CEO, with the support of their senior data lead (#CDO), to ask that each board paper has a new section heading called “Data Attestation.” Some will favour this as an addition to the main flow, some as a new part of legal, others as an appendix; how and where matters little compared to its intent.

The intention of this new heading and section is that the board receives a *signed* declaration from the proposer(s) and an independent data expert that the proposer has:

proven attestation of the data used in the board paper, 

proven rights to use the data

shown what difference/delta third-party data makes to the recommendation/outcome

ensured, to the best of their efforts, that there is no bias or selection in the data or analysis

clearly specified any decision making that is or becomes automated

if relevant, created the hypothesis before the analysis

run scenarios using different data and tools

not misled the board using data

highlighted any conflicts of interest between their BSC/KPIs and the approval sought


As regards the independent auditor, this should not be the company's financial auditor or data lake provider; it should be an independent forensic data expert. Audit suggests sampling, and this is not about sampling. Nor is it about creating more hurdles or handing power to an external body; it is about third-party verification and validation. As a company, you build a list of experts and cycle through them on a regular basis. The auditor does not need to see the board paper, the outcome from the analysis or the recommendations - they are there to check the attestation and efficacy from end to end. Critical will be proof of their expertise and a large insurance certificate.


Whilst this is not the final wording you will use, it is the intent that is important; note that this does not remove data risks from the risk section.

Data Attestation

We certify by our signatures that we, the proposer and auditor, can prove to the OurCompany PLC Board that we have provable attestation of, and rights to, all the data used in the presentation of this paper. We have presented in this paper the sensitivity of the selected data, model and tools, and have provided evidence that different data and analysis tool selections equally favour the recommendation. We have tested and can verify that our data, analysis, insights and knowledge are traceable and justifiable. We declare that there are no conflicts of interest and that no automation of decision making will result from this approval.

Why do this?

Whilst directors are collectively accountable and responsible for the decisions they take, right now there is a gap in data skills, and many board members don't know how to test the data that forms the basis of what they are being asked to approve. This is all new, and a level of detail that requires deep expertise. This provides an additional line of defence until such time as we gain sufficient skills at the board and can test data properly. Yes, there is a high duty of care already intrinsic in anyone who presents a board paper; however, the data expertise and skills at the majority of senior levels are also well below what we need. If nothing else, it will get those presenting to think carefully about data, bias and the ethics of their proposal.





Thursday, 04. March 2021

FACILELOGIN

Why developer-first IAM and why Okta’s Auth0 acquisition matters?


In my previous blog, The Next TCP/IP Moment in Identity, I discussed why enterprises will demand developer-first IAM. As every company becomes a software company and starts to build its competitive advantage on the software it builds, developer-first IAM will free developers from the inherent complexities of identity integrations.

Yesterday's announcement of Okta's intention to acquire Auth0 for $6.5B, probably 40 times Auth0's current revenue, is a true validation of the push towards developer-first IAM. However, this is not Okta's first effort towards developer-first IAM. In 2017, Okta acquired Stormpath, a company that built tools to help developers integrate login with their apps. Stormpath soon got absorbed into the Okta platform, yet Okta's selling strategy didn't change; it was always top-down.

In contrast to Okta, Auth0 follows a bottom-up sales strategy. One of the analysts I spoke to a couple of years back told me that the Auth0 name comes up in an inquiry call only when a developer joins in. The acquisition of Auth0 will give Okta access to a broader market. So, it is important for Okta to let Auth0 run as an independent business, as also mentioned in the acquisition announcement.

Auth0 is not just about the product; it is also about the developer tooling, content and developer community around it. Okta will surely benefit from this ecosystem around Auth0. Also, Azure has for years been Okta's primary competitor, and the Auth0 acquisition will make Okta stronger against Azure in the long run.

In late 2020, in one of its earnings calls, Okta announced that it sees the total market for its workplace identity management software as $30 billion, plus an additional market for customer identity software at $25 billion. In the customer identity space, when we talk to enterprises, they bring their own unique requirements. In many cases they look for a product that can be used to build an agile, event-driven customer identity (CIAM) platform that can flex to meet frequently changing business requirements. Developer-first IAM is more critical in building a CIAM solution than in workforce IAM. In the latest Forrester report on CIAM, Auth0 is way ahead of Okta in terms of the current product offering. Okta will probably use Auth0 to increase its presence in the CIAM domain. Just as Microsoft has Azure AD to focus on workforce IAM and Azure AD B2C to focus on CIAM, Auth0 could be Okta's offering for CIAM.

Forrester CIAM Wave 2020

When Auth0 was founded in 2013, it picked a less-saturated (it would even be right to say fresh), future-driven market segment in IAM — developer-first. Developer-first is all about the experience, and Auth0 did extremely well there. They cared about building the right level of developer experience rather than the feature set. Even today, Auth0 stands out against others not because of its feature set, but because of the experience it builds for developers.

The developer-first experience is not only about the product. How you build product features in a developer-first manner probably contributes 50% of the total effort. The rest is about developer tooling, SDKs and content. Then again, the content is not just about the product: the larger portion of the content needs to be about how to integrate the product with a larger ecosystem — and also to teach developers the basic constructs, concepts and best practices in IAM. That helps to win developer trust!

How the Auth0 website looked in July 2014

Auth0's vision of developer-first IAM has evolved over time. The way the Auth0 website itself has evolved in terms of messaging and presentation reflects how much more they want to be on the enterprise side today than in the past. Auth0 claims 9000+ enterprise customers, which probably generate $150M in annual revenue, so the average sales value (ASV) would be around $16,500. That probably means the majority of Auth0 customers are in the free/developer/developer pro tiers. So, it's understandable why they want to bring in an enterprise look and messaging, probably moving forward to focus more on a top-down sales strategy. The prominence of the Contact Sales button over the Signup button on the Auth0 website today sums up this direction to some extent. Okta's acquisition of Auth0 could probably strengthen this move.

How the Auth0 website looks today (March 2021)

Like Microsoft's acquisition of GitHub for $7.5B in 2018, Okta's acquisition of Auth0 for $6.5B is a win for developers! Congratulations to both Auth0 and Okta; I am very much looking forward to seeing Auth0's journey together with Okta.

Why developer-first IAM and why Okta’s Auth0 acquisition matters? was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


MyDigitalFootprint

Trust is not a thing or a destination, but an outcome from a transformation