Last Update 10:49 AM May 19, 2022 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Thursday, 19. May 2022

John Philpin : Lifestream


My goodness - what a thread … and 15 comments, 6 shares later .. not one person has highlighted that the link takes you to The Babylon Bee - even their own strap line reads ‘Fake News You Can Trust’.

Search ‘The Babylon Bee’ and you get

… and this is on LINKEDIN !!



YUP



Dana

“5 companies control the bulk of the cloud data centers that now drive the global economy. These are Apple, Microsoft, Amazon, Google, and the artists formerly known as Facebook. No, I’m not going to create some stupid acronym.

Then allow me … ‘A’ ‘MAGA’ collective?



The man who could rig the 2024 election.

Judd Legum on the case.

Shapiro should be a slam dunk IMHO, but I think he is going to have his work cut out for him.



“Web3 is my favorite new blog in years. Everything about it is just perfect.”

💬 John Gruber

I agree with @gruber



I wonder if Phaux News will change much when 91 year old Rupert Murdoch dies. The entire family only controls 39% of the holding corporation - so I assume the other 61% are happy enough with what is happening over there.



Who owns Einstein? The battle for the world’s most famous face

Seriously?

Did you know that Einstein’s image is a $12.5 million a year business?

Did you know that the Hebrew University of Jerusalem owns the rights?

Filed in my ‘new news’ bucket.



Filed in the bucket of ‘you just can’t make this stuff up’.

Eden Project installs plastic grass to stop children getting muddy

Wednesday, 18. May 2022

John Philpin : Lifestream


🎶 Mindless Self Indulgence

19th in the series of #MBMay

Photo; Cover of ‘Pink’ by Mindless Self Indulgence

Caveat: NONE of the photos in this series are mine, but when I know who to credit, I do.


Simon Willison

Comby

Describes itself as "Structural search and replace for any language". Lets you execute search and replace patterns that look a little bit like simplified regular expressions, but with some deep OCaml-powered magic that makes them aware of comment, string and nested parenthesis rules for different languages. This means you can use it to construct scripts that automate common refactoring or code upgrade tasks.

Via Hacker News
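
To make the template idea concrete, here is a minimal sketch in Python (the target file, the templates, and the positional command-line form are my assumptions; check Comby's own documentation for the exact flags your version accepts). The :[name] "holes" match balanced syntax rather than raw characters, which is what makes rewrites comment- and nesting-aware.

import subprocess

# Hypothetical example: rewrite println(...) calls to log.Info(...) in main.go.
# The :[args] hole captures whatever balanced argument list appears.
match_template = "println(:[args])"
rewrite_template = "log.Info(:[args])"

# Assumed CLI shape: comby MATCH REWRITE TARGET (add -i to rewrite in place).
subprocess.run(["comby", match_template, rewrite_template, "main.go"], check=True)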


John Philpin : Lifestream


I wonder what Bento would be doing today had it not disappeared?

Tuesday, 17. May 2022

John Philpin : Lifestream

On the 12th of May I realized that I had settled into a personal theme within the #MBMay challenge BUT 4 of my first 6 entries didn’t ‘fit’, so I have added a comment to each of those entries providing an alternative entry that does!

1 Switch
2 Photo
4 Thorny
6 Silhouette


🎶 Daft Punk wasn’t in my mind when I offered ‘random’ as a possible word for #MBMay - and yet here we are.

18th in the series of #MBMay

Photo; Cover of ‘Random Access Memories’

Caveat: NONE of the photos in this series are mine, but when I know who to credit, I do.



Propaganda for Good

“The Putin threat to this nation is still alive. It can still win. That’s because Trump was never Putin’s most dangerous puppet.

Rupert Murdoch is.”

💬 Dana Blankenhorn



“We were promised bicycles for the mind, but we got aircraft carriers instead”

💬 Jonathan Edwards

Bike - a ‘tool for thought’ from Hog Bay Software.

I assume the name emerges from the quote, but for thirty bucks not sure what it adds to the existing note/outliner world.


Simon Willison

simonw/datasette-screenshots

I started a new GitHub repository to automate taking screenshots of Datasette for marketing purposes, using my shot-scraper browser automation tool.

Via @simonw


Ben Werdmüller

Taking a Break from Social Media Makes you Happier and Less Anxious

“At the end of this week, the researchers found “significant between-group differences” in well-being, depression, and anxiety, with the intervention group faring much better on all three metrics. These results held even after control for baseline scores, as well as age and gender.”

[Link]


Doc Searls Weblog

A thermal theory of basketball

Chemistry is a good metaphor for how teams work—especially when times get tough, such as in the playoffs happening in the NBA right now.

Think about it. Every element has a melting point: a temperature above which solid turns liquid. Basketball teams do too, only that temperature changes from game to game, opponent to opponent, and situation to situation. Every team is a collection of its own human compounds of many elements: physical skills and talents, conditioning, experience, communication skills, emotional and mental states, beliefs, and much else.

Sometimes one team comes in pre-melted, with no chance of winning. Bad teams start with a low melting point, arriving in liquid form and spilling all over the floor under heat and pressure from better teams.

Sometimes both teams might as well be throwing water balloons at the hoop.

Sometimes both teams are great, neither melts, and you get an overtime outcome that’s whatever the score said when the time finally ran out. Still, one loser and one winner. After all, every game has a loser, and half the league loses every round. Whole conferences and leagues average .500. That’s their melting point: half solid, half liquid.

Yesterday we saw two meltdowns, neither of which was expected and one of which was a complete surprise.

First, the Milwaukee Bucks melted under the defensive and scoring pressures of the Boston Celtics. There was nothing shameful about it, though. The Celtics just ran away with the game. It happens. Still, you could see the moment the melting started. It was near the end of the first half. The Celtics’ offense sucked, yet they were still close. Then they made a drive to lead going into halftime. After that, it became increasingly and obviously futile to expect the Bucks to rally, especially when it was clear that Giannis Antetokounmpo, the best player in the world, was clearly less solid than usual. The team melted around him while the Celtics rained down threes.

To be fair, the Celtics also melted three times in the series, most dramatically at the end of game five, on their home floor. But Marcus Smart, who was humiliated by a block and a steal in the closing seconds of a game the Celtics had led almost all the way, didn’t melt. In the next two games, he was more solid than ever. So was the team. And they won—this round, at least. Against the Miami Heat? We’ll see.

Right after that game, the Phoenix Suns, by far the best team in the league through the regular season, didn’t so much play the Dallas Mavericks as submit to them. Utterly.

In chemical terms, the Suns showed up in liquid form and turned straight into gas. As Arizona Sports put it, “We just witnessed one of the greatest collapses in the history of the NBA.” No shit. Epic. Nobody on the team will ever live this one down. It’s on their permanent record. Straight A’s through the season, then a big red F.

Talk about losses: a mountain of bets on the Suns also turned to vapor yesterday.

So, what happened? I say chemistry.

Maybe it was nothing more than Luka Dončić catching fire and vaporizing the whole Suns team. Whatever, it was awful to watch, especially for Suns fans. Hell, they melted too. Booing your team when it needs your support couldn’t have helped, understandable though it was.

Applying the basketball-as-chemistry theory, I expect the Celtics to go all the way. They may melt a bit in a game or few, but they’re more hardened than the Heat, which comes from having defeated two teams—the Atlanta Hawks and the Philadelphia 76ers—with relatively low melting points. And I think both the Mavs and the Warriors have lower melting points than either the Celtics or the Heat.

But we’ll see.

Through the final two rounds, look at each game as a chemistry experiment. See how well the theory works.

 

 


John Philpin : Lifestream


Read Chris Hedges - you should anyway, but this one is about the execution of a journalist.

Just like 10 dead in a mall in America, lots of voices and lots of prayers and yet almost certainly mall shootings and assassinations will continue unabated because words are not enough.



I think ‘On Hold' Music is the only kind of 🎶 music I hate.

I guess that is another contributor to my dislike of Spotify.

17th in the series of #MBMay

Photo; Spotify

Caveat: NONE of the photos in this series are mine, but when I know who to credit, I do.

Monday, 16. May 2022

Simon Willison

Supercharging GitHub Actions with Job Summaries

GitHub Actions workflows can now generate a rendered Markdown summary of, well, anything that you can think to generate as part of the workflow execution. I particularly like the way this is designed: they provide a filename in a $GITHUB_STEP_SUMMARY environment variable which you can then append data to from each of your steps.

Via Hacker News
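
As a concrete illustration of that mechanism, here is a minimal sketch of a step script (the script and the table contents are hypothetical): the runner puts the summary file's path in the GITHUB_STEP_SUMMARY environment variable, and anything appended to that file is rendered as Markdown on the run's summary page.

import os

# Path to the per-step summary file provided by the GitHub Actions runner.
summary_path = os.environ["GITHUB_STEP_SUMMARY"]

# Append Markdown; GitHub renders it on the workflow run's summary page.
with open(summary_path, "a", encoding="utf-8") as summary:
    summary.write("### Test results\n\n")
    summary.write("| suite | passed |\n|---|---|\n| unit | 42 |\n")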


Ben Werdmüller

How inequities make the baby formula shortage worse for many families

“In the meantime, parents have begun stockpiling if they can – and rationing when they can’t. Much of the burden is falling on households that need financial assistance: The White House noted that people on the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) account for about half of all infant formula purchases. Parents who work lower-income jobs often need to rely on formula more because their jobs do not allow for them to establish breastfeeding easily – assuming a parent can produce enough milk to begin with.”

[Link]


John Philpin : Lifestream


Apple Podcasts …

“will also automatically delete older episodes, preventing the app from taking up too much storage space on iOS devices.”

How long has there been a podcasts app on the phone again?


Mike Jones: self-issued

JWK Thumbprint URI Draft Addressing IETF Last Call Comments

Kristina Yasuda and I have published a new JWK Thumbprint URI draft that addresses the IETF Last Call comments received. Changes made were:

Clarified the requirement to use registered hash algorithm identifiers.
Acknowledged IETF Last Call reviewers.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-02.html
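
For background, a short sketch of how such a URI is put together (the example key is hypothetical and the URN prefix is my assumption; the draft linked above is authoritative for the URI syntax and the registered hash algorithm identifiers). The thumbprint itself is the RFC 7638 computation: required JWK members only, lexicographically ordered, serialized without whitespace, hashed, and base64url-encoded without padding.

import base64
import hashlib
import json

# Hypothetical EC P-256 public key.
jwk = {
    "kty": "EC",
    "crv": "P-256",
    "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
    "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0",
}

# RFC 7638: keep only the required members for the key type, sort the keys,
# serialize with no whitespace, then hash and base64url-encode without padding.
required = {k: jwk[k] for k in ("crv", "kty", "x", "y")}
canonical = json.dumps(required, sort_keys=True, separators=(",", ":")).encode("utf-8")
thumbprint = base64.urlsafe_b64encode(hashlib.sha256(canonical).digest()).rstrip(b"=").decode()

# Prefix with the URN and a registered hash algorithm identifier (assumed form).
print(f"urn:ietf:params:oauth:jwk-thumbprint:sha-256:{thumbprint}")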

Phil Windley's Technometria

Decentralizing Agendas and Decisions

Summary: Allowing groups to self-organize, set their own agendas, and decide without central guidance or planning requires being vulnerable and trusting. But the results are worth the risk.

Last month was the 34th Internet Identity Workshop (IIW). After doing the last four virtually, it was spectacular to be back together with everyone at the Computer History Museum. You could almost feel the excitement in the air as people met with old friends and made new ones. Rich the barista was back, along with Burrito Wednesday. I loved watching people in small groups having intense conversations over meals, drinks, and snacks.

Also back was IIW's trademark open space organization. Open space conferences are workshops that don't have pre-built agendas. Open space is like an unconference with a formal facilitator trained in using open space technology. IIW is self-organizing, with participants setting the agenda every morning before we start. IIW has used open space for part or all of the workshop since the second workshop in 2006. Early on, Kaliya Young, one of my co-founders (along with Doc Searls), convinced me to try open space as a way of letting participants shape the agenda and direction. For an event this large (300-400 participants), you need professional facilitation. Heidi Saul has been doing that for us for years. The results speak for themselves. IIW has nurtured many of the ideas, protocols, and trends that make up modern identity systems and thinking.

Photos: Welcome to IIW 34; mDL Discussion at IIW 34; Agenda Wall at IIW 34 (Day 1).

Last month was the first in-person CTO Breakfast since early 2020. CTO Breakfast is a monthly gathering of technologists in the Provo-Salt Lake City area that I've convened for almost 20 years. Like IIW, CTO Breakfast has no pre-planned agenda. The discussion is freewheeling and active. We have just two rules: (1) no politics and (2) one conversation at a time. Topics from the last meeting included LoRaWAN, Helium network, IoT, hiring entry-level software developers, Carrier-Grade NATs, and commercial real estate. The conversation goes where it goes, but is always interesting and worthwhile.

When we built the University API at BYU, we used decentralized decision making to make key architecture, governance, and implementation decisions. Rather than a few architects deciding everything, we had many meetings, with dozens of people in each, over the course of a year hammering out the design.

What all of these have in common is decentralized decision making by a group of people that results in learning, consensus, and, if all goes well, action. The conversation at IIW, CTO Breakfast, and BYU isn't the result of a few smart people deciding what the group needed to hear and then arranging meetings to push it at them. Instead, the group decides. Empowering the group to make decisions about the very nature and direction of the conversation requires trust, and trust always implies vulnerability. But I've become convinced that it's really the best way to achieve real consensus and make progress in heterogeneous groups. Thanks Kaliya!

Tags: decentralization iiw cto+breakfast byu university+api


Ben Werdmüller

Cats learn the names of their friend cats in their daily lives

“This study provides evidence that cats link a companion's name and corresponding face without explicit training.”

[Link]


Damien Bod

Using multiple Azure B2C user flows from ASP.NET Core

This article shows how to use multiple Azure B2C user flows from a single ASP.NET Core application. Microsoft.Identity.Web is used to implement the authentication in the client. This is not so easy to implement with multiple schemes as the user flow policy is used in most client URLs and the Microsoft.Identity.Web package overrides a lot of the default settings. I solved this by implementing an account controller to handle the Azure B2C signup user flow initial request and set the Azure B2C policy. It can be useful to split the user flows in the client application if the user of the application needs better guidance.

Code https://github.com/damienbod/azureb2c-fed-azuread

The Azure B2C user flows can be implemented as simple user flows. I used a sign-up flow and a combined sign-in and sign-up flow.

The AddMicrosoftIdentityWebAppAuthentication is used to implement a standard Azure B2C client. There is no need to implement a second scheme and override the default settings of the Microsoft.Identity.Web client because we use a controller to select the flow.

string[]? initialScopes = Configuration.GetValue<string>(
    "UserApiOne:ScopeForAccessToken")?.Split(' ');

services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C")
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddInMemoryTokenCaches();

services.Configure<MicrosoftIdentityOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
    {
        options.Events.OnTokenValidated = async context =>
        {
            if (ApplicationServices != null && context.Principal != null)
            {
                using var scope = ApplicationServices.CreateScope();
                context.Principal = await scope.ServiceProvider
                    .GetRequiredService<MsGraphClaimsTransformation>()
                    .TransformAsync(context.Principal);
            }
        };
    });

The AzureAdB2C app settings section configures the sign-in and sign-up flow. The SignUpSignInPolicyId setting is used to configure the default user flow policy.

"AzureAdB2C": {
    "Instance": "https://b2cdamienbod.b2clogin.com",
    "ClientId": "8cbb1bd3-c190-42d7-b44e-42b20499a8a1",
    "Domain": "b2cdamienbod.onmicrosoft.com",
    "SignUpSignInPolicyId": "B2C_1_signup_signin",
    "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724",
    "CallbackPath": "/signin-oidc",
    "SignedOutCallbackPath ": "/signout-callback-oidc",
    // "ClientSecret": "--use-secrets--"
},

The AccountSignUpController is used to set the policy of the flow we would like to use. The SignUpPolicy method just challenges the Azure B2C OpenID Connect server with the correct policy.

using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace Microsoft.Identity.Web.UI.Areas.MicrosoftIdentity.Controllers;

[AllowAnonymous]
[Route("MicrosoftIdentity/[controller]/[action]")]
public class AccountSignUpController : Controller
{
    [HttpGet("{scheme?}")]
    public IActionResult SignUpPolicy(
        [FromRoute] string scheme,
        [FromQuery] string redirectUri)
    {
        scheme ??= OpenIdConnectDefaults.AuthenticationScheme;

        string redirect;
        if (!string.IsNullOrEmpty(redirectUri) && Url.IsLocalUrl(redirectUri))
        {
            redirect = redirectUri;
        }
        else
        {
            redirect = Url.Content("~/")!;
        }

        var properties = new AuthenticationProperties { RedirectUri = redirect };
        properties.Items[Constants.Policy] = "B2C_1_signup";

        return Challenge(properties, scheme);
    }
}

The Razor page opens a link to the new controller and challenges the OIDC server with the correct policy.

<li class="nav-item">
    <a class="nav-link text-dark" href="/MicrosoftIdentity/AccountSignUp/SignUpPolicy">Sign Up</a>
</li>

With this approach, an ASP.NET Core application can be extended with multiple user flows and the UI can be improved for the end user as required.

Links

https://docs.microsoft.com/en-us/azure/active-directory-b2c/overview

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-local

https://docs.microsoft.com/en-us/azure/active-directory/

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/azure-ad-b2c

https://github.com/azure-ad-b2c/azureadb2ccommunity.io

https://github.com/azure-ad-b2c/samples

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/graph-api

https://docs.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS#client-credentials-provider

https://docs.microsoft.com/en-us/graph/api/user-post-users?view=graph-rest-1.0&tabs=csharp

https://docs.microsoft.com/en-us/graph/api/invitation-post?view=graph-rest-1.0&tabs=csharp

Simon Willison

Weeknotes: Camping, a road trip and two new museums

Natalie and I took a week-long road trip and camping holiday. The plan was to camp on Santa Rosa Island in the California Channel Islands, but the boat to the island was cancelled due to bad weather. We treated ourselves to a Central Californian road trip instead.

The Madonna Inn

If you're driving down from San Francisco to Santa Barbara and you don't stay a night at the Madonna Inn in San Luis Obispo you're missing out.

This legendary hotel/motel built 110 guest rooms in the 1960s, each of them with a different theme. We ended up staying two nights thanks to our boat cancellation - one in the Kona Rock room (Hawaii themed, mostly carved out of solid rock, the shower has a waterfall) and one in Safari. Epic.

Camping

Camping in California generally requires booking a site, often months in advance. Our travel companions knew what they were doing and managed to grab us last minute spots for one night at Islay Creek near Los Osos and two nights in the beautiful Los Padres National Forest.

The Victorian Mansion

I have a habit of dropping labels on Google Maps with tips that people have given me about different places. Labels have quite a strict length limit, which means my tips are often devoid of context - including when and from whom the tip came.

This means I'm constantly stumbling across little tips from my past self, with no recollection of where the tip came from. This is delightful.

As we were planning the last leg of our trip, I spotted a label north of Santa Barbara which just said "6 rooms puts Madonna Inn to shame".

I have no recollection of saving this tip. I had attached it to the Victorian Mansion Bed & Breakfast in Los Alamos, California - an old Victorian house with six uniquely themed rooms.

We stayed in the 1950s suite. It was full of neon and the bed was a 1956 Cadillac convertible which the house had been reconstructed around when the building was moved to its present location. We watched Sideways, a movie set in the area, on the projector that simulated a drive-in movie theater on a screen in front of the car.

And some museums

On the way down to San Luis Obispo we stumbled across the Paso Robles Pioneer Museum. This was the best kind of local history museum - entirely run by volunteers, and with an eclectic accumulation of donated exhibits covering all kinds of details of the history of the surrounding area. I particularly enjoyed the Swift Jewell Barbed Wire Collection - the fourth largest collection of barbed wire on public display in the world!

(This raised the obvious question: what are the top three? From this category on Atlas Obscura it looks like there are two in Kansas and one in Texas.)

Then on the way back up we checked Roadside America and found its listing for Mendenhall's Museum of Gasoline Pumps & Petroliana. This was the absolute best kind of niche museum: an obsessive collection, in someone's home, available to view by appointment only.

We got lucky: one of the museum's operators spotted us lurking around the perimeter looking optimistic and let us have a look around despite not having pre-booked.

The museum features neon, dozens of gas pumps, more than 400 porcelain gas pump globes, thousands of gas station signs plus classic and historic racing cars too. My write-up and photos are available on Niche Museums.

Museums this week: Paso Robles Pioneer Museum; Mendenhall's Museum of Gasoline Pumps & Petroliana
TIL this week: Efficiently copying a file

Heroku: Core Impact

Ex-Heroku engineer Brandur Leach pulls together some of the background information circulating concerning the now more than a month long Heroku security incident and provides some ex-insider commentary on what went right and what went wrong with a platform that left a huge, if somewhat underappreciated impact on the technology industry at large.

Via Hacker News


John Philpin : Lifestream


Awesome updates to the Reader beta from Readwise.



Currently getting a 90% fail on Bookshelves at the moment.

I enter the title. The choice of books come up - including the one I want.

I click through to the book.

There are all the details - BUT the cover image is missing.

Is anyone else (not) seeing this?

// @manton



📚Great Meeting Of The Readers' Republic today - wonderful book recommendations that I am documenting. Notes to follow.

Sunday, 15. May 2022

John Philpin : Lifestream


‘Time Flies’

🎶 a track on ‘The Innocent'

and

📚 the title of the bio of Porcupine Tree.

16th in the series of #MBMay

Photo; Book Cover. Illustration by Lasse Holle?

Caveat: NONE of the photos in this series are mine, but when I know who to credit, I do.


Werdmüller on Medium

A quiet morning in America

A disconnect that gets under your skin

Continue reading on Human Parts »


Ben Werdmüller

A quiet morning in America

I pour myself another cup of coffee: two scoops into the Aeropress, a gentle pour of boiling water, a quick stir. I leave the plastic stirrer in the tube like a tombstone while the water percolates through the grounds.

Quiet mornings are hard to come by.

I had a conversation with someone recently whose entire family had contracted Covid. I found out like this: sorry if my voice goes, he said. I have Covid. I was helping him out with his work by answering some questions, but I quickly told him that he needed to rest. Give yourself the space to recover, I told him. I guess my Dad told me the wrong thing, he said.

I’ve been living in California for eleven years, and I’ve been an American citizen since I was born. There are still moments that make me wonder about the place I moved to. Some of the things that leave me wondering whether I’ll ever feel really at home here are relatively small - someone working through sickness instead of taking care of themselves, for example. And some are big.

While I was watching Ukraine win Eurovision, an 18-year-old opened fire at a supermarket in Buffalo, New York, murdering ten people. He live-streamed his attack on Twitch after publishing a 180-page manifesto in which he described himself as a white supremacist and an anti-Semite. He discussed replacement theory, and chose the location of his attack by researching the area with the highest percentage of Black people within driving distance. It’s a hate crime, fueled by hate speech. It was also the country’s one hundred and ninety-eighth mass shooting in 2022, on the one hundred and thirty-third day of the year.

I put on some toast and consider whether I’ll go for a walk or read my book. I just started The Glass Hotel by Emily St. John Mandel as part of a book group, and I’m also rereading Radical Candor. Outside, trees slowly sway against an unbroken blue sky.

Last week, CO2 levels exceeded 420ppm for the first time in recorded human history. I’m still thinking about a conversation where someone complained to me about having to take the bus. I routinely speak to people who believe public transport is outdated compared to road or air travel. The Cato institute says wanting to move people onto high-speed rail is “like wanting to be the world leader in electric typewriters, rotary telephones, or steam locomotives—all technologies that once seemed revolutionary but are functionally obsolete today.” It’s estimated that two-thirds of the world’s population will live under water scarcity by 2025.

I shop for wall sconces for the new house in Philly: something modern that will create enough light in the living room to offset the darkness of the walls. The walls themselves will have to be repainted white at some point, of course. But for now, there has to be something to brighten up the room.

We have too many 9-to-5-ers, someone told me about their startup a few months ago. You’ve got to hustle. I want to see people working evenings and weekends. Strangely, he was having trouble with getting people to stay motivated and complete their work.

The banality of the unkindness gets under your skin after a while. The first year, commuters stepping over homeless people seemed jarring and horrifying. By year five, it was inevitably part of life: there but not there. Someone once told me I was wrong to buy a Street Sheet from a vendor because it was begging. I make a point of carrying money to give to people who ask for it, but sometimes I forget to top it up.

A culture that is busy maintaining the base level of its hierarchy of needs has little time to spend worrying about other people. The through line between the mass shootings and the psychotic work culture and the disregard for climate and the disdain for the impoverished is a lack of regard for community. In America, we’re not all in this together, we’re all in this as individuals. Everyone is out for themselves. It’s not even about social safety nets or other legislation: those things are symptoms of a deeper distrust that seeps between people. It’s a society that has not been set up to be happy together: it is designed to leave you wanting to be rich alone.

I order some middle eastern food on DoorDash. The driver, on average, will make $15.74 per hour, which is far below the poverty line in the San Francisco Bay Area. DoorDash will take between 10-25% of my order from the independently-run restaurant, whose profit margin is often less than that. The order will likely come in plastic.

I take a sip from my coffee and wait.

 

Update: while I was writing this, there was another mass shooting at the Laguna Woods retirement community in Southern California. It does not and will not stop.


Simon Willison

How Materialize and other databases optimize SQL subqueries

Jamie Brandon offers a survey of the state-of-the-art in optimizing correlated subqueries, across a number of different database engines.


Why Rust’s postfix await syntax is good

C J Silverio explains postfix await in Rust - where you can write a line like this, with the ? causing any errors to be caught and turned into an error return from your function:

let count = fetch_all_animals().await?.filter_for_hedgehogs().len();

Via @ceejbot


John Philpin : Lifestream


🎶 ‘Obscured By Clouds’ was released 50 years ago.

15th in the series of #MBMay

Photo; Album Cover by Storm Thorgerson and Aubrey Powell of Hipgnosis.

Caveat: NONE of the photos in this series are mine, but when I know who to credit, I do.

Saturday, 14. May 2022

John Philpin : Lifestream


Visualizing Everyone that has Ever Lived

Now THAT is a cool way of looking at the data.



“I thought it might have been known in the office that after 34 years and 20 books I knew certain things about writing and didn’t want a copy-editor’s help with punctuation or the thing called repetition …

💬 V. S. Naipaul

Every writer has his own voice.



What I Learned From Unfollowing Everyone On Twitter, Then Rebuilding

In case you were wondering ‘how’ … buried in this article from Charlie Warzel is a link to a piece with scripts that allow you to bulk unfollow people on Twitter.

Friday, 13. May 2022

Simon Willison

sqlite-utils: a nice way to import data into SQLite for analysis

Julia Evans on my sqlite-utils Python library and CLI tool.

Via Chris Adams
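
For readers who have not tried it, a minimal sketch of the import-then-query workflow the post describes, using the Python library rather than the CLI (the file name and rows here are made up; it assumes the sqlite-utils package is installed).

import sqlite_utils

# Create (or open) a SQLite database file and load some rows into a table.
db = sqlite_utils.Database("analysis.db")
db["plants"].insert_all(
    [
        {"id": 1, "name": "Mimosa pudica", "sensitive": 1},
        {"id": 2, "name": "Quercus robur", "sensitive": 0},
    ],
    pk="id",
    replace=True,
)

# Query the freshly imported table with plain SQL.
for row in db.query("select name from plants where sensitive = 1"):
    print(row["name"])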


Ben Werdmüller


Contrarian opinion: market corrections that take the worst of the greed and self-centeredness out of tech are a really good thing.


Jon Udell

Appreciating “Just Have a Think”

Just Have a Think, a YouTube channel created by Dave Borlace, is one of my best sources for news about, and analysis of, the world energy transition. Here are some hopeful developments I’ve enjoyed learning about.

Solar Wind and Wave. Can this ocean hybrid platform nail all three?

New energy storage tech breathing life and jobs back into disused coal power plants

Agrivoltaics. An economic lifeline for American farmers?

Solar PV film roll. Revolutionary new production technology

All of Dave’s presentations are carefully researched and presented. A detail that has long fascinated me: how the show displays source material. Dave often cites IPCC reports and other sources that are, in raw form, PDF files. He spices up these citations with some impressive animated renderings. Here’s one from the most recent episode.

The progressive rendering of the chart in this example is an even fancier effect than I’ve seen before, and it prompted me to track down the original source. In that clip Dave cites IRENA, the International Renewable Energy Agency, so I visited their site, looked for the cited report, and found it on page 8 of World Energy Transitions Outlook 2022. That link might or might not take you there directly; if not, you can scroll to page 8, where you’ll find the chart that’s been animated in the video.

The graphical finesse of Just Have a Think is only icing on the cake. The show reports a constant stream of innovations that collectively give me hope we might accomplish the transition and avoid worst-case scenarios. But still, I wonder. That’s just a pie chart in a PDF file. How did it become the progressive rendering that appears in the video?

In any case, and much more importantly: Dave, thanks for the great work you’re doing!

Thursday, 12. May 2022

Ben Werdmüller


I've taken to listening to BBC Radio 2 in my car via TuneIn, and I'm noticing how alien (and sometimes funny) the idioms and place names are for me now. All that used to be home, and now it's a universe away.


Beyond Magenta: Transgender Teens Speak Out, by Susan Kuklin

I wanted to like this, but I can't recommend it. Granted, it's almost a decade old, and the discourse has evolved since then. But the author leaves gender essentialism and some stories that verge on abuse unaddressed. It's great that these teenagers' stories are told verbatim, but it's not great to miss out on the nuanced commentary that they demand. I love the idea and I hope someone executes it better than this.

[Link]



Copying a startup's playbook isn't going to make you the next Google.
A magical multi-part story for kids isn't going to be the next Harry Potter.
A multi-character, multi-platform strategy won't be the next MCU.

Do your own thing. Build what's right for you and your creation.


Researchers Pinpoint Reason Infants Die From SIDS

“Previously, parents were told SIDS could be prevented if they took proper precautions: laying babies on their backs, not letting them overheat and keeping all toys and blankets out of the crib were a few of the most important preventative steps. So, when SIDS still occurred, parents were left with immense guilt, wondering if they could have prevented their baby’s death.”

[Link]


Doc Searls Weblog

Lens vs. Camera

I did a lot of shooting recently with a rented Sony FE 70-200mm F2.8 GM OSS II lens, mounted on my 2013-vintage Sony a7r camera. One result was the hummingbird above, which you’ll find among the collections here and here. Also, here’s a toddler…

…and a grandma (right after she starred as the oldest alumnus at a high school reunion where I took hundreds of other shots):

This lens is new, sharp, versatile, earns good reviews (e.g. here) and is so loved already that it’s hard to get, despite the price: more than $3k after taxes. And, though it’s very compact and light (2.3 lbs) for what it is and does, the thing is big:

So I ordered one, which Amazon won’t charge me for before it ships, on May 23, for delivery on the 24th.

But I’m having second, third, and fourth thoughts, which I just decided to share here.

First, I’m not a fine art photographer. I’m an amateur who mostly shoots people and subjects that interest me, such as what I can see out airplane windows, or choose to document for my own odd purposes—such as archiving photos of broadcast towers and antennas, most of which will fall out of use over the next two decades, after being obsolesced by the Internet, wi-fi and 5G.

All the photos I publish are Creative Commons licensed to encourage use by others, which is why more than 1600 of them have found their way into Wikimedia Commons. Some multiple of those accompany entries in Wikipedia. This one, for example, is in 9 different Wikipedia entries in various languages:

Here is the original, shot with a tiny Canon pocket camera I pulled from the pocket of my ski jacket.

In other words, maybe I’ll be better off with a versatile all-in-one camera that will do much of what this giant zoom does, plus much more.

After much online research, I’ve kind of settled on considering the Sony Cyber-shot DSC-RX10 IV. It has a smaller sensor than I’d like, but it is exceptionally versatile and gets great reviews. While my Sony a7r with its outstanding 24-105mm f/4 FE G OSS lens is versatile as well, and light for a full-frame DSLR, I really need a long lens for a lot of the stuff I shoot. And I suspect this “bridge” camera will do the job.

So here is the choice:

Let the order stand, and pay $3k for a fully fabulous 70-200 zoom that I’m sure to love but will be too big to haul around in many of the settings where I’ll be shooting.

Cancel that order, and instead pay half that for the DSC-RX10 IV—and get it in time for my trip to Hawaii next week.

[Later…] I decided to let the order stand. Two reasons. First, I’ve shot a couple thousand photos so far with the 70-200 zoom, and find it a near-flawless instrument that I enjoy playing. One reason I do is that it’s as close to uncompromising as a lens can be—especially a zoom, which by design involves many compromises. Second, I’ve never played with the DSC-RX10 IV, and that’s kind of a prerequisite. I also know that one of its compromises I won’t be able to overcome is the size of its sensor. I know megapixels are a bit of a head trip, but they do matter, and 36.4 Mpx vs 20.1 “effective” Mpx is non-trivial.

Additionally, I may choose in the long run to also get an a7iv camera, so my two lenses will have two bodies. We’ll see.

 

 

Wednesday, 11. May 2022

Ben Werdmüller

Inflation’s biting. Roe’s fraying. Dems are still trying to connect with voters.

“When Porter gave an emotional speech about how inflation has been hitting her family for months during a private House Democratic Caucus meeting last week, she said it seemed like the first time the personal toll of high consumer prices had sunk in for some lawmakers in the room.”

[Link]


Heather Vescent

Six insights about the Future of Biometrics

Photo by v2osk on Unsplash

Biometrics are seen as a magic bullet to uniquely identify humans — but it is still new technology. Companies can experience growing pains and backlash due to incomplete thinking prior to implementation. Attackers do the hard work of finding every crack and vulnerability. Activists point out civil liberty and social biases. This shows how our current solutions are not always secure or equitable. In the end, each criminal, activist, and product misstep inspires innovation and new solutions.

1. The benefit of biometrics is they are unique and can be trusted to be unique. It’s not impossible, but it is very hard for someone to spoof a biometric. Using a biometric raises the bar a bit, and makes that system less attractive to target — up to a point.

2. Any data is only as secure as the system in which it is stored. Sometimes these systems can be easily penetrated due to poor identity and access management protocols. This has nothing to do with the security of biometrics — that has to do with the security of stored data.

3. Apple FaceID is unbelievably convenient! Once I set up FaceID to unlock my phone, I can configure it to unlock other apps — like banking apps. Rather than typing in or selecting my password from a password manager — I just look at my phone! This makes it easy for me to access my sensitive data. From a user experience perspective, this is wonderful, but I have to trust Apple’s locked down tech stack.

4. The first versions of new technologies will still have issues. All new technology is antifragile, and thus will have more bugs. As the technology is used, the bugs are discovered (thanks hackers!) and fixed, and the system becomes more secure over time. Attackers will move on to more vulnerable targets.

5. Solve for every corner case and you’ll have a rigid yet secure system that probably doesn’t consider the human interface very well. Leave out a corner case and you might be leaving an open door for attack. Solving for the “right” situation is a balance. Which means, either extreme can be harmful to different audiences.

6. Learn from others, share and collaborate on what you have learned. Everyone has to work together to move the industry forward.

Curious to learn more insights about the Future of Digital Identity? I’ll be joining three speakers on the Future of Digital Identity Verification panel.

Tuesday, 10. May 2022

@_Nat Zone

Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If There Is One) Web3 World

Keynote Panel at the European Identity and Cloud Conference, Friday, May 13, 2022 09:20—09:40 Location: BCC Berlin, C01

A belated announcement: this Friday (May 13) I will be appearing on a keynote panel at the European Identity & Cloud Conference in Berlin. The English title is “Decentralized, Global, Human-Owned. The Role of IDM in an Ideal (If there is One) Web3 World”.

Details are at https://www.kuppingercole.com/sessions/5092/1. The session description from that page follows.

The Internet was built without an identity layer, leaving authentication, authorization, privacy, and access to websites and applications. Usernames and passwords remain the dominant paradigm, and, more importantly, users have no control over their personally identifying information. The risks of data misuse, hacking, and manipulation have become critical challenges, and in the era of Web3 and its core function of transferring value, a new approach is needed. Will decentralized, DLT-based identity ultimately be the solution that enables DeFi, NFTs, and DAOs? Join the panel at this keynote to debate the topic.

(Source) https://www.kuppingercole.com/sessions/5092/1

Panelists

André Durand, CEO, Ping Identity
Martin Kuppinger, CEO, KuppingerCole
Nat Sakimura, Chairman, OpenID Foundation
Drs. Jacoba C. Sieders, Advisory board member, EU SSIF-lab


Doc Searls Weblog

Laws of Identity

When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of Kim’s laws at KuppingerCole‘s EIC conference in Berlin. We’ll only have a few minutes each, however, so I’d like to visit the subject in a bit more depth here.

To understand why these laws are so important and effective, it will help to know where Kim was coming from in the first place. It wasn’t just his work as the top architect for identity at Microsoft (to which he arrived when his company was acquired). Specifically, Kim was coming from two places. One was the physical world where we live and breathe, and identity is inherently personal. The other was the digital world where what we call identity is how we are known to databases. Kim believed the former should guide the latter, and that nothing like that had happened yet, but that we could and should work for it.

Kim’s The Laws of Identity paper alone is close to seven thousand words, and his IdentityBlog adds many thousands more. But his laws by themselves are short and sweet. Here they are, with additional commentary by me, in italics.

1. User Control and Consent

Technical identity systems must only reveal information identifying a user with the user’s consent.

Note that consent goes in the opposite direction from all the consent “agreements” websites and services want us to click on. This matches the way identity works in the natural world, where each of us not only chooses how we wish to be known, but usually with an understanding about how that information might be used.

2. Minimum Disclosure for a Constrained Use

The solution which discloses the least amount of identifying information and best limits its use is the most stable long term solution.

There is a reason we don’t walk down the street wearing name badges: because the world doesn’t need to know any more about us than we wish to disclose. Even when we pay with a credit card, the other party really doesn’t need (or want) to know the name on the card. It’s just not something they need to know.

3. Justifiable Parties

Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.

If this law applied way back when Kim wrote it, we wouldn’t have the massive privacy losses that have become the norm, with unwanted tracking pretty much everywhere online—and increasingly offline as well. 

4. Directed Identity

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

All brands, meaning all names of public entities, are “omni-directional.” They are also what Kim calls “beacons” that have the opposite of something to hide about who they are. Individuals, however, are private first, and public only to the degrees they wish to be in different circumstances. Each of the first three laws is “unidirectional.”

5. Pluralism of Operators and Technologies

A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.

This law expresses learnings from Microsoft’s failed experiment with Passport and a project called “Hailstorm.” The idea with both was for Microsoft to become the primary or sole online identity provider for everyone. Kim’s work at Microsoft was all about making the company one among many working in the same broad industry.

6. Human Integration

The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.

As Kim put it in his 2019 (and final) talk at EIC, we need to turn the Web “right side up,” meaning putting the individual at the top rather than the bottom, with each of us in charge of our lives online, in distributed homes of our own. That’s what will integrate all the systems we deal with. (Joe Andrieu first explained this in 2007, here.)

7. Consistent Experience Across Contexts

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

So identity isn’t just about corporate systems getting along with each other. It’s about giving each of us scale across all the entities we deal with. Because it’s our experience that will make identity work right, finally, online. 

I expect to add more as the conference goes on; but I want to get this much out there to start with.

By the way, the photo above is from the first and only meeting of the Identity Gang, at Esther Dyson’s PC Forum in 2005. The next meeting of the Gang was the first Internet Identity Workshop, aka IIW, later that year. We’ve had 34 more since then, all with hundreds of participants, all with great influence on the development of code, standards, and businesses in digital identity and adjacent fields. And all guided by Kim’s Laws.

 

Monday, 09. May 2022

Damien Bod

Use a gateway service for a software UI with micro services architecture?

In this post, I would like to look at some of the advantages and disadvantages of using an implemented gateway service to process all UI API requests, optimize the business and remove some of the complexity from the user interface application. Setup with UI using APIs directly Modern public facing applications APIs used by UI […]

In this post, I would like to look at some of the advantages and disadvantages of using a self-implemented gateway service to process all UI API requests, optimize the API calls for the UI's business requirements, and remove some of the complexity from the user interface application.

Setup with UI using APIs directly

In modern public-facing applications, the APIs used by UI apps are mostly protected using user-delegated access tokens, and those APIs sit in the public zone because the SPA (or whatever client you use) needs to request data directly. The API data is merged in the client. Each API can be secured with a separate scope, so the client application needs to manage multiple access tokens. Each API also needs to allow CORS for an SPA client (you could use something like Azure Gateway as a workaround for this); a server-rendered application accessing the APIs would not require this. With this approach, multiple APIs are exposed in the public zone.

Characteristics of this solution

UI responsible for joining API calls for different views

Multiple UIs per module, API, micro service possible

Setup with gateway implementation (BFF or API implementation)

All applications still need to implement HTTPS, and the gateway should not terminate the TLS connection. All applications require HTTPS-only access; HTTP should not be used, even in development. That way the full set of security headers can be applied in every service.

Many micro service implementations do not optimize their APIs for the UI, which is a problem if the UI consumes those APIs directly. By implementing a gateway service, the micro service APIs can be optimized for the UI in the gateway. This makes it possible to remove many unrequired or unoptimized API calls and reduces the amount of logic the UI needs in order to implement the required business functionality.

The application gateway should not be used to apply missing security headers to new APIs or APIs created as part of your system. The security headers should be implemented where the responses are created. If using a legacy application where the security headers are missing and cannot be implemented at the source, then using the gateway in this way is good.

The application gateway can also be implemented as a reverse proxy that forwards requests on to further applications and APIs. This is good because those downstream APIs do not need to be exposed to the internet, which reduces both the attack surface and the number of APIs in the public zone. Less is more.
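
To make the aggregation idea concrete, here is a minimal sketch of a gateway endpoint that fans two private API calls out in parallel and returns one UI-shaped response. The post's own stack is ASP.NET Core; this sketch uses Python (FastAPI and httpx) purely for illustration, and the internal URLs, endpoint path and response shape are assumptions, not part of the original post.

import asyncio

import httpx
from fastapi import FastAPI

app = FastAPI()

# Assumed internal service URLs -- placeholders for illustration only.
ORDERS_API = "https://orders.internal/api/orders"
CUSTOMERS_API = "https://customers.internal/api/customers"

@app.get("/ui/dashboard/{customer_id}")
async def dashboard(customer_id: str):
    # One public endpoint fans out to two private APIs and returns a
    # response shaped for the view, so the UI makes a single call.
    async with httpx.AsyncClient() as client:
        orders, customer = await asyncio.gather(
            client.get(ORDERS_API, params={"customerId": customer_id}),
            client.get(f"{CUSTOMERS_API}/{customer_id}"),
        )
    return {"customer": customer.json(), "recentOrders": orders.json()[:10]}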

Characteristics of this solution

API calls can be optimized for the UI

Security can be improved by reducing the public attack surface

User authorization could be moved to the gateway service

UI and gateway could be deployed together with improved security (BFF security arch and single security client)

Gateway single point of failure

Comparing the security of the gateway and no-gateway solutions

Improved application security with Gateway

By using a gateway, the attack surface in the public zone can be reduced. The gateway API can use a user delegated access token (or a cookie if using a server rendered application or BFF architecture). All other applications can use application tokens (OAuth client credentials flow), certificate authentication or Azure managed identities to protect the APIs. It is important that the APIs are secured and that the public user access token does not work with the private APIs.

Easier to implement systems tests for services

It is hard to implement system tests for APIs protected by user access tokens, because these tokens can only be created using a UI test tool: the correct flow requires user interaction to authenticate. Creating a back door with some type of application access is not a good idea, as such back doors usually end up deployed in some form or other. With a gateway, the gateway itself is still hard to test, but system tests for all the private APIs can be created with little effort, as in the sketch below.
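
A hedged sketch of such a system test: the client credentials flow needs no user interaction, so it can run unattended in CI. The token endpoint, client registration, scope name and API URL are all assumptions; Python and httpx are used only for illustration.

import httpx

# Assumed endpoints and client registration -- placeholders for illustration.
TOKEN_URL = "https://identity.example.com/connect/token"
PRIVATE_API = "https://orders.internal/api/orders"

def get_app_token() -> str:
    # Client credentials flow: no user interaction, so it can run in CI.
    response = httpx.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "system-tests",
        "client_secret": "...",  # injected from the CI secret store
        "scope": "orders.read",
    })
    response.raise_for_status()
    return response.json()["access_token"]

def test_orders_endpoint_returns_ok():
    token = get_app_token()
    response = httpx.get(PRIVATE_API, headers={"Authorization": f"Bearer {token}"})
    assert response.status_code == 200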

CORS, same domain, same site

Only the public facing gateway API needs to allow CORS if it is used from an SPA. All the private APIs can disable CORS completely, and all of them can use the full set of security headers. If cookies are used, same-site and same-domain policies can be enforced on the single gateway/UI application.

User authorization

User authorization can be simplified: most of the implementation can be rolled out in the gateway and reused in the user interface. The private APIs need less complicated authorization, because only the trusted application requests must be validated, not user authorization. The system does not have to be implemented this way; different security setups are possible, such as the on-behalf-of (OBO) flow for the private APIs.

More choice for authorization in the private APIs.

Because the private APIs use trusted app-to-app security, managed identities or certificate authentication can be used as well as OAuth client credentials. This allows for more choice, but the APIs must still implement security in the application and not rely on network security alone. I usually use the OAuth client credentials flow with a scope claim or role claim validation; the access tokens used for access must be validated. I never mix user access tokens and application tokens in a single API.
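
As a sketch of the validation side only, the following checks a client credentials access token for a required scope or role claim after the signature, expiry and audience checks. Python with PyJWT is used purely for illustration; the claim names are assumptions, and JWKS retrieval and issuer validation are omitted.

import jwt  # PyJWT -- issuer metadata and JWKS retrieval are omitted from this sketch

REQUIRED_SCOPE = "gateway-api"   # assumed scope name
REQUIRED_ROLE = "api-access"     # assumed role name

def validate_app_token(token: str, signing_key, audience: str) -> dict:
    # Signature, expiry and audience are checked by PyJWT; we then require
    # either the expected scope claim or the expected role claim.
    claims = jwt.decode(token, signing_key, algorithms=["RS256"], audience=audience)
    scopes = claims.get("scope", "").split()
    roles = claims.get("roles", [])
    if REQUIRED_SCOPE not in scopes and REQUIRED_ROLE not in roles:
        raise PermissionError("access token is missing the required scope or role claim")
    return claims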

Notes

KISS is probably the most important way of thinking when producing software. Most solutions can be implemented as a single monolith application, and the end client gets what is required with the least amount of effort. Micro services are normally not required for most solutions. Once using micro services, it is important to implement them correctly and with a solid security architecture, which requires effort and planning. Do not be afraid to merge services if you see that they are tightly coupled and depend on each other. Define the public zone and the private zone and where each micro service belongs. Implement a hosting environment which allows for a good application security implementation as well as a good network infrastructure. HTTP should not be used anywhere.

Friday, 06. May 2022

Simon Willison

Weeknotes: Datasette Lite, nogil Python, HYTRADBOI

My big project this week was Datasette Lite, a new way to run Datasette directly in a browser, powered by WebAssembly and Pyodide. I also continued my research into running SQL queries in parallel, described last week. Plus I spoke at HYTRADBOI. Datasette Lite This started out as a research project, inspired by the excitement around Python in the browser from PyCon US last week (which I didn't

My big project this week was Datasette Lite, a new way to run Datasette directly in a browser, powered by WebAssembly and Pyodide. I also continued my research into running SQL queries in parallel, described last week. Plus I spoke at HYTRADBOI.

Datasette Lite

This started out as a research project, inspired by the excitement around Python in the browser from PyCon US last week (which I didn't attend, but observed with some jealousy on Twitter).

I've been wanting to explore this possibility for a while. JupyterLite had convinced me that it would be feasible to run Datasette using Pyodide, especially after I found out that the sqlite3 module from the Python standard library works there already.

I have a private "notes" GitHub repository which I use to keep notes in GitHub issues. I started a thread there researching the possibility of running an ASGI application in Pyodide, thinking that might be a good starting point to getting Datasette to work.

The proof of concept moved remarkably quickly, especially once I realized that Service Workers weren't going to work but Web Workers might.

Once I had committed to Datasette Lite as a full project I started a new repository for it and transferred across my initial prototype issue thread. You can read that full thread for a blow-by-blow account of how my research pulled together in datasette-lite issue #1.

The rest of the project is documented in detail in my blog post.

Since launching it the biggest change I've made was a change of URL: since it's clearly going to be a core component of the Datasette project going forward I promoted it from simonw.github.io/datasette-lite/ to its new permanent home at lite.datasette.io. It's still hosted by GitHub Pages - here's my TIL about setting up the new domain.

It may have started as a proof of concept tech demo, but the response to it so far has convinced me that I should really take it seriously. Being able to host Datasette without needing to run any server-side code at all is an incredibly compelling experience.

It doesn't matter how hard I work on getting the Datasette deployment experience as easy as possible, static file hosting will always be an order of magnitude more accessible. And even at this early stage Datasette Lite is already proving to be a genuinely useful way to run the software.

As part of this research I also shipped sqlite-utils 3.26.1 with a minor dependency fix that means it works in Pyodide now. You can try that out by running the following in the Pyodide REPL:

>>> import micropip
>>> await micropip.install("sqlite-utils")
>>> import sqlite_utils
>>> db = sqlite_utils.Database(memory=True)
>>> list(db.query("select 3 * 5"))
[{'3 * 5': 15}]

Parallel SQL queries work... if you can get rid of the GIL

Last week I described my effort to implement Parallel SQL queries for Datasette.

The idea there was that many Datasette pages execute multiple SQL queries - a count(*) and a select ... limit 101 for example - that could be run in parallel instead of serial, for a potential improvement in page load times.

My hope was that I could get away with this despite Python's infamous Global Interpreter Lock because the sqlite3 C module releases the GIL when it executes a query.
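
For a sense of what that looks like, here is a minimal sketch (not Datasette's actual implementation) that runs the count query and the page query in separate threads via asyncio; the database file and table name are assumed.

import asyncio
import sqlite3
import time

DB_PATH = "fixtures.db"  # any SQLite file will do; the table name below is assumed

def run_query(sql):
    # A connection per thread; the sqlite3 C module releases the GIL
    # while the query itself executes.
    conn = sqlite3.connect(DB_PATH)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

async def main():
    start = time.perf_counter()
    count, rows = await asyncio.gather(
        asyncio.to_thread(run_query, "select count(*) from searchable"),
        asyncio.to_thread(run_query, "select * from searchable limit 101"),
    )
    print(count, len(rows), f"{time.perf_counter() - start:.3f}s")

asyncio.run(main())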

My initial results weren't showing an increase in performance, even while the queries were shown to be overlapping each other. I opened a research thread and spent some time this week investigating.

My conclusion, sadly, was that the GIL was indeed to blame. sqlite3 releases the GIL to execute the query, but there's still a lot of work that happens in Python land itself - most importantly the code that assembles the objects that represent the rows returned by the query, which is still subject to the GIL.

Then this comment on a thread about the GIL on Lobsters reminded me of the nogil fork of Python by Sam Gross, who has been working on this problem for several years now.

Since that fork has a Docker image trying it out was easy... and to my amazement it worked! Running my parallel queries implementation against nogil Python reduced a page load time from 77ms to 47ms.

Sam's work is against Python 3.9, but he's discussing options for bringing his improvements into Python itself with the core maintainers. I'm hopeful that this might happen in the next few years. It's an incredible piece of work.

An amusing coincidence: one restriction of WASM and Pyodide is that they can't start new threads - so as part of getting Datasette to work on that platform I had to add a new setting that disables the ability to run SQL queries in threads entirely!

datasette-copy-to-memory

One question I found myself asking while investigating parallel SQL queries (before I determined that the GIL was to blame) was whether parallel SQLite queries against the same database file were suffering from some form of file locking or contention.

To rule that out, I built a new plugin: datasette-copy-to-memory - which reads a SQLite database from disk and copies it into an in-memory database when Datasette first starts up.
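
The core of that idea can be sketched with the sqlite3 backup API in a few lines; this shows only the underlying technique, not the plugin's actual code.

import sqlite3

# Copy an on-disk database into an in-memory one using SQLite's backup API.
disk = sqlite3.connect("content.db")   # any SQLite file
memory = sqlite3.connect(":memory:")
disk.backup(memory)                    # pages are copied; the file on disk is untouched
disk.close()

print(memory.execute("select count(*) from sqlite_master").fetchone())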

This didn't make an observable difference in performance, but I've not tested it extensively - especially not against larger databases using servers with increased amounts of available RAM.

If you're inspired to give this plugin a go I'd love to hear about your results.

asgi-gzip and datasette-gzip

I mentioned datasette-gzip last week: a plugin that acts as a wrapper around the excellent GZipMiddleware from Starlette.

The performance improvements from this - especially for larger HTML tables, which it turns out compress extremely well - were significant. Enough so that I plan to bring gzip support into Datasette core very shortly.
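
For reference, this is roughly how the Starlette middleware gets applied to an ASGI app; a minimal sketch with a made-up route rather than Datasette's own wiring. Responses above the minimum_size threshold are compressed when the client advertises gzip support.

from starlette.applications import Starlette
from starlette.middleware.gzip import GZipMiddleware
from starlette.responses import HTMLResponse
from starlette.routing import Route

async def table_page(request):
    # Large, repetitive HTML compresses extremely well.
    return HTMLResponse("<table>" + "<tr><td>row</td></tr>" * 1000 + "</table>")

app = Starlette(routes=[Route("/", table_page)])
# Responses above minimum_size bytes are gzipped when the client
# sends an Accept-Encoding: gzip header.
app.add_middleware(GZipMiddleware, minimum_size=500)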

Since I don't want to add the whole of Starlette as a dependency just to get gzip support, I extracted that code out into a new Python package called asgi-gzip.

The obvious risk with doing this is that it might fall behind the excellent Starlette implementation. So I came up with a pattern based on Git scraping that would automatically open a new GitHub issue should the borrowed Starlette code change in the future.

I wrote about that pattern in Automatically opening issues when tracked file content changes.

Speaking at HYTRADBOI

I spoke at the HYTRADBOI conference last week: Have You Tried Rubbing A Database On It.

HYTRADBOI was organized by Jamie Brandon. It was a neat event, with a smart format: 34 pre-recorded 10 minute long talks, arranged into a schedule to encourage people to watch and discuss them at specific times during the day of the event.

It's worth reading Jamie's postmortem of the event for some insightful thinking on online event organization.

My talk was Datasette: a big bag of tricks for solving interesting problems using SQLite. It ended up working out as a lightning-fast 10 minute tutorial on using the sqlite-utils CLI to clean up some data (in this case Manatee Carcass Recovery Locations in Florida since 1974) and then using Datasette to explore and publish it.

I've posted some basic notes to accompany the talk. My plan is to use this as the basis for an official tutorial on sqlite-utils for the tutorials section of the Datasette website.

Releases this week

datasette: 0.62a0 - (111 releases total) - 2022-05-02
An open source multi-tool for exploring and publishing data

sqlite-utils: 3.26.1 - (100 releases total) - 2022-05-02
Python CLI utility and library for manipulating SQLite databases

click-default-group-wheel: 1.2.2 - 2022-05-02
Extends click.Group to invoke a command without explicit subcommand name (this version publishes a wheel)

s3-credentials: 0.11 - (11 releases total) - 2022-05-01
A tool for creating credentials for accessing S3 buckets

datasette-copy-to-memory: 0.2 - (5 releases total) - 2022-04-30
Copy database files into an in-memory database on startup

datasette-gzip: 0.2 - (2 releases total) - 2022-04-28
Add gzip compression to Datasette

asgi-gzip: 0.1 - 2022-04-28
gzip middleware for ASGI applications, extracted from Starlette

TIL this week

Intercepting fetch in a service worker

Setting up a custom subdomain for a GitHub Pages site

Thursday, 05. May 2022

Hans Zandbelt

A WebAuthn Apache module?

It is a question that people (users, customers) ask me from time to time: will you develop an Apache module that implements WebAuthn or FIDO2. Well, the answer is: “no”, and the rationale for that can be found below. At … Continue reading →

It is a question that people (users, customers) ask me from time to time: will you develop an Apache module that implements WebAuthn or FIDO2? Well, the answer is: “no”, and the rationale for that can be found below.

At first glance it seems very useful to have an Apache server that authenticates users using a state-of-the-art authentication protocol that is implemented in modern browsers and platforms. Even more so, that Apache server could function as a reverse proxy in front of any type of resources you want to protect. This will allow for those resources to be agnostic to the type of authentication and its implementation, a pattern that I’ve been promoting for the last decade or so.

But in reality the functionality that you are looking for already exists…

The point is that deploying WebAuthn means that you’ll not just be authenticating users, you’ll also have to take care of signing up new users and managing credentials for those users. To that end, you’ll need to facilitate an onboarding process and manage a user database. That type of functionality is best implemented in a server-type piece of software (let’s call it “WebAuthn Provider”) written in a high-level programming language, rather than embedding it in a C-based Apache module. So in reality it means that any sensible WebAuthn/FIDO2 Apache module would rely on an externally running “Provider” software component to offload the heavy-lifting of onboarding and managing users and credentials. Moreover, just imagine the security sensitivity of such a software component.

Well, all of the functionality described above is exactly something that your average existing Single Sign On Identity Provider software was designed to do from the very start! And even more so, those Identity Providers typically already support WebAuthn and FIDO2 for (“local”) user authentication and OpenID Connect for relaying the authentication information to (“external”) Relying Parties.

And yes, one of those Relying Parties could be mod_auth_openidc, the Apache module that enables users to authenticate to an Apache webserver using OpenID Connect.

So there you go: rather than implementing WebAuthn or FIDO2 (and user/credential management…) in a single Apache module, or write a dedicated WebAuthn/FIDO2 Provider alongside of it and communicate with that using a proprietary protocol, the more sensible choice is to use the already existing OpenID Connect protocol. The Apache OpenID Connect module (mod_auth_openidc) will send users off to the OpenID Connect Provider for authentication. The Provider can use WebAuthn or FIDO2, as a single factor, or as a 2nd factor combined with traditional methods such as passwords or stronger methods such as PKI, to authenticate users and relay the information about the authenticated user back to the Apache server.

To summarise: using WebAuthn or FIDO2 to authenticate users to an Apache server/reverse-proxy is possible today by using mod_auth_openidc’s OpenID Connect implementation. This module can send users off for authentication towards a WebAuthn/FIDO2 enabled Provider, such as Keycloak, Okta, Ping, ForgeRock etc. This setup allows for a very flexible approach that leverages existing standards and implementations to their maximum potential: OpenID Connect for (federated) Single Sign On, WebAuthn and FIDO2 for (centralized) user authentication.

Wednesday, 04. May 2022

Simon Willison

SIARD: Software Independent Archiving of Relational Databases

SIARD: Software Independent Archiving of Relational Databases I hadn't heard of this before but it looks really interesting: the Federal Archives of Switzerland developed a standard for archiving any relational database as a zip file full of XML which "is used in over 50 countries around the globe". Via @MAndrewWaugh

SIARD: Software Independent Archiving of Relational Databases

I hadn't heard of this before but it looks really interesting: the Federal Archives of Switzerland developed a standard for archiving any relational database as a zip file full of XML which "is used in over 50 countries around the globe".

Via @MAndrewWaugh


Datasette Lite: a server-side Python web application running in a browser

Datasette Lite is a new way to run Datasette: entirely in a browser, taking advantage of the incredible Pyodide project which provides Python compiled to WebAssembly plus a whole suite of useful extras. You can try it out here: https://lite.datasette.io/ The initial example loads two databases - the classic fixtures.db used by the Datasette test suite, and the content.db database that pow

Datasette Lite is a new way to run Datasette: entirely in a browser, taking advantage of the incredible Pyodide project which provides Python compiled to WebAssembly plus a whole suite of useful extras.

You can try it out here:

https://lite.datasette.io/

The initial example loads two databases - the classic fixtures.db used by the Datasette test suite, and the content.db database that powers the official datasette.io website (described in some detail in my post about Baked Data).

You can instead use the "Load database by URL to a SQLite DB" button to paste in a URL to your own database. That file will need to be served with CORS headers that allow it to be fetched by the website (see README).

Try this URL, for example:

https://congress-legislators.datasettes.com/legislators.db

You can follow this link to open that database in Datasette Lite.
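
If you want to point it at a database file you host yourself, the server just has to add that CORS header. A minimal sketch using only the standard library, assuming a hypothetical mydata.db sitting next to the script:

# Run this next to your .db file, then load
# http://localhost:8000/mydata.db in Datasette Lite's URL box.
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # The one header Datasette Lite needs in order to fetch the file
        # from a different origin.
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

if __name__ == "__main__":
    ThreadingHTTPServer(("", 8000), CORSRequestHandler).serve_forever()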

Datasette Lite supports almost all of Datasette's regular functionality: you can view tables, apply facets, run your own custom SQL queries and export the results as CSV or JSON.

It's basically the full Datasette experience, except it's running entirely in your browser with no server (other than the static file hosting provided here by GitHub Pages) required.

I’m pretty stunned that this is possible now.

I had to make some small changes to Datasette to get this to work, detailed below, but really nothing extravagant - the demo is running the exact same Python code as the regular server-side Datasette application, just inside a web worker process in a browser rather than on a server.

The implementation is pretty small - around 300 lines of JavaScript. You can see the code in the simonw/datasette-lite repository - in two files, index.html and webworker.js

Why build this?

I built this because I want as many people as possible to be able to use my software.

I've invested a ton of effort in reducing the friction to getting started with Datasette. I've documented the install process, I've packaged it for Homebrew, I've written guides to running it on Glitch, I've built tools to help deploy it to Heroku, Cloud Run, Vercel and Fly.io. I even taught myself Electron and built a macOS Datasette Desktop application, so people could install it without having to think about their Python environment.

Datasette Lite is my latest attempt at this. Anyone with a browser that can run WebAssembly can now run Datasette in it - if they can afford the 10MB load (which in many places with metered internet access is way too much).

I also built this because I'm fascinated by WebAssembly and I've been looking for an opportunity to really try it out.

And, I find this project deeply amusing. Running a Python server-side web application in a browser still feels like an absurd thing to do. I love that it works.

I'm deeply inspired by JupyterLite. Datasette Lite's name is a tribute to that project.

How it works: Python in a Web Worker

Datasette Lite does most of its work in a Web Worker - a separate process that can run expensive CPU operations (like an entire Python interpreter) without blocking the main browser's UI thread.

The worker starts running when you load the page. It loads a WebAssembly compiled Python interpreter from a CDN, then installs Datasette and its dependencies into that interpreter using micropip.

It also downloads the specified SQLite database files using the browser's HTTP fetching mechanism and writes them to a virtual in-memory filesystem managed by Pyodide.

Once everything is installed, it imports datasette and creates a Datasette() object called ds. This object stays resident in the web worker.

To render pages, the index.html page sends a message to the web worker specifying which Datasette path has been requested - / for the homepage, /fixtures for the database index page, /fixtures/facetable for a table page and so on.

The web worker then simulates an HTTP GET against that path within Datasette using the following code:

response = await ds.client.get(path, follow_redirects=True)

This takes advantage of a really useful internal Datasette API: datasette.client is an HTTPX client object that can be used to execute HTTP requests against Datasette internally, without doing a round-trip across the network.

I initially added datasette.client with the goal of making any JSON APIs that Datasette provides available for internal calls by plugins as well, and to make it easier to write automated tests. It turns out to have other interesting applications too!
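
For example, a test can exercise any Datasette URL without starting a server. A minimal sketch, assuming pytest with the pytest-asyncio plugin installed:

import pytest
from datasette.app import Datasette

@pytest.mark.asyncio
async def test_versions_endpoint():
    ds = Datasette(memory=True)
    # No network round-trip: the request is dispatched internally.
    response = await ds.client.get("/-/versions.json")
    assert response.status_code == 200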

The web worker sends a message back to index.html with the status code, content type and content retrieved from Datasette. JavaScript in index.html then injects that HTML into the page using .innerHTML.

To get internal links working, Datasette Lite uses a trick I originally learned from jQuery: it applies a capturing event listener to the area of the page displaying the content, such that any link clicks or form submissions will be intercepted by a JavaScript function. That JavaScript can then turn them into new messages to the web worker rather than navigating to another page.

Some annotated code

Here are annotated versions of the most important pieces of code. In index.html this code manages the worker and updates the page when it receives messages from it:

// Load the worker script
const datasetteWorker = new Worker("webworker.js");

// Extract the ?url= from the current page's URL
const initialUrl = new URLSearchParams(location.search).get('url');

// Message that to the worker: {type: 'startup', initialUrl: url}
datasetteWorker.postMessage({type: 'startup', initialUrl});

// This function does most of the work - it responds to messages sent
// back from the worker to the index page:
datasetteWorker.onmessage = (event) => {
  // {type: log, line: ...} messages are appended to a log textarea:
  var ta = document.getElementById('loading-logs');
  if (event.data.type == 'log') {
    loadingLogs.push(event.data.line);
    ta.value = loadingLogs.join("\n");
    ta.scrollTop = ta.scrollHeight;
    return;
  }
  let html = '';
  // If it's an {error: ...} message show it in a <pre> in a <div>
  if (event.data.error) {
    html = `<div style="padding: 0.5em"><h3>Error</h3><pre>${escapeHtml(event.data.error)}</pre></div>`;
  // If contentType is text/html, show it as straight HTML
  } else if (/^text\/html/.exec(event.data.contentType)) {
    html = event.data.text;
  // For contentType of application/json parse and pretty-print it
  } else if (/^application\/json/.exec(event.data.contentType)) {
    html = `<pre style="padding: 0.5em">${escapeHtml(JSON.stringify(JSON.parse(event.data.text), null, 4))}</pre>`;
  // Anything else (likely CSV data) escape it and show in a <pre>
  } else {
    html = `<pre style="padding: 0.5em">${escapeHtml(event.data.text)}</pre>`;
  }
  // Add the result to <div id="output"> using innerHTML
  document.getElementById("output").innerHTML = html;
  // Update the document.title if a <title> element is present
  let title = document.getElementById("output").querySelector("title");
  if (title) {
    document.title = title.innerText;
  }
  // Scroll to the top of the page after each new page is loaded
  window.scrollTo({top: 0, left: 0});
  // If we're showing the initial loading indicator, hide it
  document.getElementById('loading-indicator').style.display = 'none';
};

The webworker.js script is where the real magic happens:

// Load Pyodide from the CDN
importScripts("https://cdn.jsdelivr.net/pyodide/dev/full/pyodide.js");

// Deliver log messages back to the index.html page
function log(line) {
  self.postMessage({type: 'log', line: line});
}

// This function initializes Pyodide and installs Datasette
async function startDatasette(initialUrl) {
  // Mechanism for downloading and saving specified DB files
  let toLoad = [];
  if (initialUrl) {
    let name = initialUrl.split('.db')[0].split('/').slice(-1)[0];
    toLoad.push([name, initialUrl]);
  } else {
    // If no ?url= provided, loads these two demo databases instead:
    toLoad.push(["fixtures.db", "https://latest.datasette.io/fixtures.db"]);
    toLoad.push(["content.db", "https://datasette.io/content.db"]);
  }
  // This does a LOT of work - it pulls down the WASM blob and starts it running
  self.pyodide = await loadPyodide({
    indexURL: "https://cdn.jsdelivr.net/pyodide/dev/full/"
  });
  // We need these packages for the next bit of code to work
  await pyodide.loadPackage('micropip', log);
  await pyodide.loadPackage('ssl', log);
  await pyodide.loadPackage('setuptools', log); // For pkg_resources
  try {
    // Now we switch to Python code
    await self.pyodide.runPythonAsync(`
# Here's where we download and save those .db files - they are saved
# to a virtual in-memory filesystem provided by Pyodide
# pyfetch is a wrapper around the JS fetch() function - calls using
# it are handled by the browser's regular HTTP fetching mechanism
from pyodide.http import pyfetch
names = []
for name, url in ${JSON.stringify(toLoad)}:
    response = await pyfetch(url)
    with open(name, "wb") as fp:
        fp.write(await response.bytes())
    names.append(name)
import micropip
# Workaround for Requested 'h11<0.13,>=0.11', but h11==0.13.0 is already installed
await micropip.install("h11==0.12.0")
# Install Datasette itself!
await micropip.install("datasette==0.62a0")
# Now we can create a Datasette() object that can respond to fake requests
from datasette.app import Datasette
ds = Datasette(names, settings={
    "num_sql_threads": 0,
}, metadata = {
    # This metadata is displayed in Datasette's footer
    "about": "Datasette Lite",
    "about_url": "https://github.com/simonw/datasette-lite"
})
    `);
    datasetteLiteReady();
  } catch (error) {
    self.postMessage({error: error.message});
  }
}

// Outside promise pattern
// https://github.com/simonw/datasette-lite/issues/25#issuecomment-1116948381
let datasetteLiteReady;
let readyPromise = new Promise(function(resolve) {
  datasetteLiteReady = resolve;
});

// This function handles messages sent from index.html to webworker.js
self.onmessage = async (event) => {
  // The first message should be that startup message, carrying the URL
  if (event.data.type == 'startup') {
    await startDatasette(event.data.initialUrl);
    return;
  }
  // This promise trick ensures that we don't run the next block until we
  // are certain that startDatasette() has finished and the ds.client
  // Python object is ready to use
  await readyPromise;
  // Run the request in Python to get a status code, content type and text
  try {
    let [status, contentType, text] = await self.pyodide.runPythonAsync(`
import json
# ds.client.get(path) simulates running a request through Datasette
response = await ds.client.get(
    # Using json here is a quick way to generate a quoted string
    ${JSON.stringify(event.data.path)},
    # If Datasette redirects to another page we want to follow that
    follow_redirects=True
)
[response.status_code, response.headers.get("content-type"), response.text]
    `);
    // Message the results back to index.html
    self.postMessage({status, contentType, text});
  } catch (error) {
    // If an error occurred, send that back as a {error: ...} message
    self.postMessage({error: error.message});
  }
};

One last bit of code: here's the JavaScript in index.html which intercepts clicks on links and turns them into messages to the worker:

let output = document.getElementById('output');

// This captures any click on any element within <div id="output">
output.addEventListener('click', (ev => {
  // .closest("a") traverses up the DOM to find if this is an a
  // or an element nested in an a. We ignore other clicks.
  var link = ev.srcElement.closest("a");
  if (link && link.href) {
    // It was a click on a <a href="..."> link! Cancel the event:
    ev.stopPropagation();
    ev.preventDefault();
    // I want #fragment links to still work, using scrollIntoView()
    if (isFragmentLink(link.href)) {
      // Jump them to that element, but don't update the URL bar
      // since we use # in the URL to mean something else
      let fragment = new URL(link.href).hash.replace("#", "");
      if (fragment) {
        let el = document.getElementById(fragment);
        el.scrollIntoView();
      }
      return;
    }
    let href = link.getAttribute("href");
    // Links to external sites should open in a new window
    if (isExternal(href)) {
      window.open(href);
      return;
    }
    // It's an internal link navigation - send it to the worker
    loadPath(href);
  }
}), true);

function loadPath(path) {
  // We don't want anything after #, and we only want the /path
  path = path.split("#")[0].replace("http://localhost", "");
  // Update the URL with the new # location
  history.pushState({path: path}, path, "#" + path);
  // Plausible analytics, see:
  // https://github.com/simonw/datasette-lite/issues/22
  useAnalytics && plausible('pageview', {u: location.href.replace('?url=', '').replace('#', '/')});
  // Send a {path: "/path"} message to the worker
  datasetteWorker.postMessage({path});
}

Getting Datasette to work in Pyodide

Pyodide is the secret sauce that makes this all possible. That project provides several key components:

A custom WebAssembly build of the core Python interpreter, bundling the standard library (including a compiled WASM version of SQLite)

micropip - a package that can install additional Python dependencies by downloading them from PyPI

A comprehensive JavaScript to Python bridge, including mechanisms for translating Python objects to JavaScript and vice-versa

A JavaScript API for launching and then managing a Python interpreter process

I found the documentation on Using Pyodide in a web worker particularly helpful.

I had to make a few changes to Datasette to get it working with Pyodide. My tracking issue for that has the full details, but the short version is:

Ensure each of Datasette's dependencies had a wheel package on PyPI (as opposed to just a .tar.gz) - micropip only works with wheels. I ended up removing python-baseconv as a dependency and replacing click-default-group with my own click-default-group-wheel forked package (repo here). I got sqlite-utils working in Pyodide with this change too, see the 3.26.1 release notes.

Work around an error caused by importing uvicorn. Since Datasette Lite doesn't actually run its own web server that dependency wasn't necessary, so I changed my code to catch the ImportError in the right place.

The biggest change: WebAssembly can't run threads, which means Python can't run threads, which means any attempts to start a thread in Python cause an error. Datasette only uses threads in one place: to execute SQL queries in a thread pool where they won't block the event loop. I added a new --setting num_sql_threads 0 feature for disabling threading entirely, see issue 1735.

Having made those changes I shipped them in a Datasette 0.62a0 release. It's this release that Datasette Lite installs from PyPI.

Fragment hashes for navigation

You may have noticed that as you navigate through Datasette Lite the URL bar updates with URLs that look like the following:

https://lite.datasette.io/#/content/pypi_packages?_facet=author

I'm using the # here to separate out the path within the virtual Datasette instance from the URL to the Datasette Lite application itself.

Maintaining the state in the URL like this means that the Back and Forward browser buttons work, and also means that users can bookmark pages within the application and share links to them.

I usually like to avoid # URLs - the HTML history API makes it possible to use "real" URLs these days, even for JavaScript applications. But in the case of Datasette Lite those URLs wouldn't actually work - if someone attempted to refresh the page or navigate to a link GitHub Pages wouldn't know what file to serve.

I could run this on my own domain with a catch-all page handler that serves the Datasette Lite HTML and JavaScript no matter what path is requested, but I wanted to keep this as pure and simple as possible.

This also means I can reserve Datasette Lite's own query string for things like specifying the database to load, and potentially other options in the future.

Web Workers or Service Workers?

My initial idea for this project was to build it with Service Workers.

Service Workers are some deep, deep browser magic: they let you install a process that can intercept browser traffic to a specific domain (or path within that domain) and run custom code to return a result. Effectively they let you run your own server-side code in the browser itself.

They're mainly designed for building offline applications, but my hope was that I could use them to offer a full simulation of a server-side application instead.

Here's my TIL on Intercepting fetch in a service worker that came out of my initial research.

I managed to get a server-side JavaScript "hello world" demo working, but when I tried to add Pyodide I ran into some unavoidable road blocks. It turns out Service Workers are very restricted in which APIs they provide - in particular, they don't allow XMLHttpRequest calls. Pyodide apparently depends on XMLHttpRequest, so it was unable to run in a Service Worker at all. I filed an issue about it with the Pyodide project.

Initially I thought this would block the whole project, but eventually I figured out a way to achieve the same goals using Web Workers instead.

Is this an SPA or an MPA?

SPAs are Single Page Applications. MPAs are Multi Page Applications. Datasette Lite is a weird hybrid of the two.

This amuses me greatly.

Datasette itself is very deliberately architected as a multi page application.

I think SPAs, as developed over the last decade, have mostly been a mistake. In my experience they take longer to build, have more bugs and provide worse performance than a server-side, multi-page alternative implementation.

Obviously if you are building Figma or VS Code then SPAs are the right way to go. But most web applications are not Figma, and don't need to be!

(I used to think Gmail was a shining example of an SPA, but it's so sludgy and slow loading these days that I now see it as more of an argument against the paradigm.)

Datasette Lite is an SPA wrapper around an MPA. It literally simulates the existing MPA by running it in a web worker.

It's very heavy - it loads 11MB of assets before it can show you anything. But it also inherits many of the benefits of the underlying MPA: it has obvious distinctions between pages, a deeply interlinked interface, working back and forward buttons, it's bookmarkable and it's easy to maintain and add new features.

I'm not sure what my conclusion here is. I'm skeptical of SPAs, and now I've built a particularly weird one. Is this even a good idea? I'm looking forward to finding that out for myself.

Coming soon: JavaScript!

Another amusing detail about Datasette Lite is that the one part of Datasette that doesn't work yet is Datasette's existing JavaScript features!

Datasette currently makes very sparing use of JavaScript in the UI: it's used to add some drop-down interactive menus (including the handy "cog" menu on column headings) and for a CodeMirror-enhanced SQL editing interface.

JavaScript is used much more extensively by several popular Datasette plugins, including datasette-cluster-map and datasette-vega.

Unfortunately none of this works in Datasette Lite at the moment - because I don't yet have a good way to turn <script src="..."> links into things that can load content from the Web Worker.

This is one of the reasons I was initially hopeful about Service Workers.

Thankfully, since Datasette is built on the principles of progressive enhancement this doesn't matter: the application remains usable even if none of the JavaScript enhancements are applied.

I have an open issue for this. I welcome suggestions as to how I can get all of Datasette's existing JavaScript working in the new environment with as little effort as possible.

Bonus: Testing it with shot-scraper

In building Datasette Lite, I've committed to making Pyodide a supported runtime environment for Datasette. How can I ensure that future changes I make to Datasette - accidentally introducing a new dependency that doesn't work there for example - don't break in Pyodide without me noticing?

This felt like a great opportunity to exercise my shot-scraper CLI tool, in particular its ability to run some JavaScript against a page and pass or fail a CI job depending on if that JavaScript throws an error.

Pyodide needs you to run it from a real web server, not just an HTML file saved to disk - so I put together a very scrappy shell script which builds a Datasette wheel package, starts a localhost file server (using python3 -m http.server), then uses shot-scraper javascript to execute a test against it that installs Datasette from the wheel using micropip and confirms that it can execute a simple SQL query via the JSON API.

Here's the script in full, with extra comments:

#!/bin/bash
set -e
# I always forget to do this in my bash scripts - without it, any
# commands that fail in the script won't result in the script itself
# returning a non-zero exit code. I need it for running tests in CI.

# Build the wheel - this generates a file with a name similar to
# dist/datasette-0.62a0-py3-none-any.whl
python3 -m build

# Find the name of that wheel file, strip off the dist/
wheel=$(basename $(ls dist/*.whl))
# $wheel is now datasette-0.62a0-py3-none-any.whl

# Create a blank index page that loads Pyodide
echo '
<script src="https://cdn.jsdelivr.net/pyodide/v0.20.0/full/pyodide.js"></script>
' > dist/index.html

# Run a localhost web server for that dist/ folder, in the background
# so we can do more stuff in this script
cd dist
python3 -m http.server 8529 &
cd ..

# Now we use shot-scraper to run a block of JavaScript against our
# temporary web server. This will execute in the context of that
# index.html page we created earlier, which has loaded Pyodide
shot-scraper javascript http://localhost:8529/ "
async () => {
  // Load Pyodide and all of its necessary assets
  let pyodide = await loadPyodide();
  // We also need these packages for Datasette to work
  await pyodide.loadPackage(['micropip', 'ssl', 'setuptools']);
  // We need to escape the backticks because of Bash escaping rules
  let output = await pyodide.runPythonAsync(\`
import micropip
# This is needed to avoid a dependency conflict error
await micropip.install('h11==0.12.0')
# Here we install the Datasette wheel package we created earlier
await micropip.install('http://localhost:8529/$wheel')
# These imports avoid Pyodide errors importing datasette itself
import ssl
import setuptools
from datasette.app import Datasette
# num_sql_threads=0 is essential or Datasette will crash, since
# Pyodide and WebAssembly cannot start threads
ds = Datasette(memory=True, settings={'num_sql_threads': 0})
# Simulate a hit to execute 'select 55 as itworks' and return the text
(await ds.client.get(
    '/_memory.json?sql=select+55+as+itworks&_shape=array'
)).text
\`);
  // The last expression in the runPythonAsync block is returned, here
  // that's the text returned by the simulated HTTP response to the JSON API
  if (JSON.parse(output)[0].itworks != 55) {
    // This throws if the JSON API did not return the expected result
    // shot-scraper turns that into a non-zero exit code for the script
    // which will cause the CI task to fail
    throw 'Got ' + output + ', expected itworks: 55';
  }
  // This gets displayed on the console, with a 0 exit code for a pass
  return 'Test passed!';
}
"

# Shut down the server we started earlier, by searching for and killing
# a process that's running on the port we selected
pkill -f 'http.server 8529'

Mike Jones: self-issued

OAuth DPoP Specification Addressing WGLC Comments

Brian Campbell has published an updated OAuth DPoP draft addressing the Working Group Last Call (WGLC) comments received. All changes were editorial in nature. The most substantive change was further clarifying that either iat or nonce can be used alone in validating the timeliness of the proof, somewhat deemphasizing jti tracking. As Brian reminded us […]

Brian Campbell has published an updated OAuth DPoP draft addressing the Working Group Last Call (WGLC) comments received. All changes were editorial in nature. The most substantive change was further clarifying that either iat or nonce can be used alone in validating the timeliness of the proof, somewhat deemphasizing jti tracking.
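
As a rough sketch of that timeliness rule (not code from the draft or from any particular implementation), a server-side check might look like this, with the skew window being an assumed value:

import time

ACCEPTABLE_SKEW = 300  # seconds -- an assumed window, not a value from the spec

def proof_is_timely(claims, expected_nonce=None):
    # Either a server-issued nonce or the iat claim alone can establish
    # that the DPoP proof is fresh; jti tracking is optional on top of this.
    if expected_nonce is not None:
        return claims.get("nonce") == expected_nonce
    iat = claims.get("iat")
    return iat is not None and abs(time.time() - iat) <= ACCEPTABLE_SKEW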

As Brian reminded us during the OAuth Security Workshop today, the name DPoP was inspired by a Deutsche POP poster he saw on the S-Bahn during the March 2019 OAuth Security Workshop in Stuttgart:

He considered it an auspicious sign seeing another Deutsche PoP sign in the Vienna U-Bahn during IETF 113 the same day WGLC was requested!

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-08.html

Wednesday, 04. May 2022

Identity Woman

The Future of You Podcast with Tracey Follows

Kaliya Young on the Future of You Podcast with the host Tracey Follows and a fellow guest Lucy Yang, to dissect digital wallets, verifiable credentials, digital identity and self-sovereignty. The post The Future of You Podcast with Tracey Follows appeared first on Identity Woman.

Kaliya Young on the Future of You Podcast with the host Tracey Follows and a fellow guest Lucy Yang, to dissect digital wallets, verifiable credentials, digital identity and self-sovereignty.

The post The Future of You Podcast with Tracey Follows appeared first on Identity Woman.

Tuesday, 03. May 2022

Simon Willison

Simple declarative schema migration for SQLite

Simple declarative schema migration for SQLite This is an interesting, clearly explained approach to the database migration problem. Create a new in-memory database and apply the current schema, then run some code to compare that with the previous schema - which tables are new, and which tables have had columns added. Then apply those changes. I'd normally be cautious of running something like

Simple declarative schema migration for SQLite

This is an interesting, clearly explained approach to the database migration problem. Create a new in-memory database and apply the current schema, then run some code to compare that with the previous schema - which tables are new, and which tables have had columns added. Then apply those changes.

I'd normally be cautious of running something like this because I can think of ways it could go wrong - but SQLite backups are so quick and cheap (just copy the file) that I could see this being a relatively risk-free way to apply migrations.
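
The linked article has its own implementation; the core diffing idea can be sketched like this, with the table and column names being made-up examples:

import sqlite3

# The desired schema, expressed declaratively; names here are made up.
TARGET_SCHEMA = """
create table documents (id integer primary key, title text, body text, tags text);
"""

def diff_schema(db_path):
    # Apply the target schema to a scratch in-memory database, then compare
    # it with the live database to find new tables and newly added columns.
    live = sqlite3.connect(db_path)
    scratch = sqlite3.connect(":memory:")
    scratch.executescript(TARGET_SCHEMA)

    def tables(conn):
        return {row[0] for row in conn.execute(
            "select name from sqlite_master where type = 'table'")}

    def columns(conn, table):
        return {row[1] for row in conn.execute(f"pragma table_info({table})")}

    new_tables = tables(scratch) - tables(live)
    new_columns = {
        table: columns(scratch, table) - columns(live, table)
        for table in tables(scratch) & tables(live)
    }
    return new_tables, {t: cols for t, cols in new_columns.items() if cols}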

Via Hacker News


Web Scraping via Javascript Runtime Heap Snapshots

Web Scraping via Javascript Runtime Heap Snapshots This is an absolutely brilliant scraping trick. Adrian Cooney figured out a way to use Puppeteer and the Chrome DevTools protocol to take a heap snapshot of all of the JavaScript running on a web page, then recursively crawl through the heap looking for any JavaScript objects that have a specified selection of properties. This allows him to scra

Web Scraping via Javascript Runtime Heap Snapshots

This is an absolutely brilliant scraping trick. Adrian Cooney figured out a way to use Puppeteer and the Chrome DevTools protocol to take a heap snapshot of all of the JavaScript running on a web page, then recursively crawl through the heap looking for any JavaScript objects that have a specified selection of properties. This allows him to scrape data from arbitrarily complex client-side web applications. He built a JavaScript library and command line tool that implements the pattern.

Via Dathan Pattishall

Monday, 02. May 2022

Phil Windley's Technometria

Is an Apple Watch Enough?

Summary: If you're like me, your smartphone has worked its tentacles into dozens, even hundreds, of areas in your life. I conducted an experiment to see what worked and what didn't when I ditched the phone and used an Apple Watch as my primary device for two days. Last week, I conducted an experiment. My phone battery needed to be replaced and the Authorized Apple Service Center wa

Summary: If you're like me, your smartphone has worked its tentacles into dozens, even hundreds, of areas in your life. I conducted an experiment to see what worked and what didn't when I ditched the phone and used an Apple Watch as my primary device for two days.

Last week, I conducted an experiment. My phone battery needed to be replaced and the Authorized Apple Service Center was required to keep it while they ordered the new battery from Apple (yeah, I think that's a stupid policy too). I was without my phone for 2 days and decided it was an excellent time to see if I could get by using my Apple Watch as my primary device. Here's how it went.

First things first: for this to be any kind of success you need a cellular plan for your watch and a pair of AirPods or other bluetooth earbuds. Here's what I found.

The first thing I noticed is that the bathroom, standing in the checkout line, and other places are boring without the distraction of my phone to read news, play Wordle, or whatever.

Siri is your friend. I used Siri a lot more than normal due to the small screen.

I'd already set up Apple Pay, and while I don't often use it from my watch under normal circumstances, it worked great here.

Answering the phone means keeping your AirPods in or fumbling for them every time there's a call. I found I rejected a lot of calls to avoid the hassle. (But never yours, Lynne!) Still, I was able to take and make calls just fine without a phone.

Voicemail access is a problem. You have to call the number and retrieve them just like it's 1990 or something. This messed with my usual strategy of not answering calls from numbers I don't recognize and letting them go to voicemail, then reading the transcript to see if I want to call them back.

Normal texts don't work that I could tell, but Apple Messages do. I used voice transcription almost exclusively for sending messages, but read them on the watch.

Most crypto wallets are unusable without the phone.

For the most part, I just used the Web for banking as a substitute for mobile apps and that worked fine. The one exception was USAA, and the problem with USAA was 2FA. Watch apps for 2FA are "companion apps", meaning they're worthless without the phone. For TOTP 2FA, I'd mirrored to my iPad, so that worked fine. I had to use the pre-set tokens for Duo that I'd gotten when I set it up. USAA uses Verisign's VIP, which can't be mirrored. What's more, USAA's recovery relies on SMS, and I didn't have my phone, so that didn't work. I was on the phone with USAA for an hour trying to figure this out. Eventually USAA decided it was hopeless and told me to conduct banking by voice. Ugh.

Listening to music on the watch worked fine. I read books on my Kindle, so that wasn't a problem.

There are a number of things I fell back to my iPad for. I've already mentioned 2FA; another is maps, which don't work on the watch. I also didn't realize how many pictures I take in a day, sometimes just for utility; I used the iPad when I had to.

Almost none of my IoT services or devices did much with the watch beyond issuing a notification. None of the Apple HomeKit stuff worked that I could see. For example, I often use a HomeKit integration with my garage door opener; that no longer worked without a phone.

Battery life on the watch is more than adequate in normal situations, but hour-long phone calls and listening to music challenge battery life when it's your primary device.

I didn't realize how many things are tied just to my phone number.

Using just my Apple Watch with some help from my iPad was mostly doable, but there are still rough spots. The Watch is a capable tool for many tasks, but it's not complete. I can certainly see leaving my phone at home more often now since most things work great—especially when you know you can get back to your phone when you need to. Not having my phone with me feels less scary now.

Photo Credit: IPhone 13 Pro and Apple Watch from Simon Waldherr (CC BY-SA 4.0)

Tags: apple watch iphone


Simon Willison

sqlite-utils 3.26.1

sqlite-utils 3.26.1 I released sqlite-utils 3.26.1 with one tiny but exciting feature: I fixed its one dependency that wasn't published as a pure Python wheel, which means it can now be used with Pyodide - Python compiled to WebAssembly running in your browser! Via @simonw

sqlite-utils 3.26.1

I released sqlite-utils 3.26.1 with one tiny but exciting feature: I fixed its one dependency that wasn't published as a pure Python wheel, which means it can now be used with Pyodide - Python compiled to WebAssembly running in your browser!

Via @simonw


Damien Bod

Implement an OpenIddict identity provider using ASP.NET Core Identity with Keycloak federation

This post shows how to setup a Keycloak external authentication in an OpenIddict identity provider using ASP.NET Core identity. Code: https://github.com/damienbod/AspNetCoreOpeniddict Setup The solution context implements OpenID Connect clients which use an OpenIddict identity provider and ASP.NET Core Identity to manage the accounts. All clients authenticate using the OpenIddict server. Keycloak i

This post shows how to set up Keycloak as an external authentication provider in an OpenIddict identity provider using ASP.NET Core Identity.

Code: https://github.com/damienbod/AspNetCoreOpeniddict

Setup

The solution context implements OpenID Connect clients which use an OpenIddict identity provider and ASP.NET Core Identity to manage the accounts. All clients authenticate using the OpenIddict server. Keycloak is used as an external authentication provider on the OpenIddict identity provider. Users can be created in either identity provider. If users are only created on the Keycloak server, direct sign-in with username and password can be completely disabled in the OpenIddict server. This setup allows for great flexibility, and MFA can be enforced wherever the requirements demand it. Companies using the product can use their own identity provider.

Integrating an OpenID Connect conformant client is really simple in ASP.NET Core and no extra NuGet packages are required. You only need extra packages when the IdP is not conformant or does something vendor-specific. The AddOpenIdConnect method is used to implement the Keycloak server. I set this up using this github repo and the Keycloak docs. The SignInScheme needs to be set to the correct value. ASP.NET Core Identity is used to map the external identities and the “Identity.External” is the default scheme used for this. If you need to disable local identity users, the ASP.NET Core Identity logic can be scaffolded into the project and adapted.

services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddOpenIdConnect("KeyCloak", "KeyCloak", options =>
    {
        options.SignInScheme = "Identity.External";
        //Keycloak server
        options.Authority = Configuration.GetSection("Keycloak")["ServerRealm"];
        //Keycloak client ID
        options.ClientId = Configuration.GetSection("Keycloak")["ClientId"];
        //Keycloak client secret in user secrets for dev
        options.ClientSecret = Configuration.GetSection("Keycloak")["ClientSecret"];
        //Keycloak .wellknown config origin to fetch config
        options.MetadataAddress = Configuration.GetSection("Keycloak")["Metadata"];
        //Require keycloak to use SSL
        options.GetClaimsFromUserInfoEndpoint = true;
        options.Scope.Add("openid");
        options.Scope.Add("profile");
        options.SaveTokens = true;
        options.ResponseType = OpenIdConnectResponseType.Code;
        options.RequireHttpsMetadata = false; //dev
        options.TokenValidationParameters = new TokenValidationParameters
        {
            NameClaimType = "name",
            RoleClaimType = ClaimTypes.Role,
            ValidateIssuer = true
        };
    });

The Keycloak configuration is added to the app settings. We do not need much: a standard OpenID Connect confidential code-flow client with PKCE is set up to authenticate using Keycloak. This can be adapted or changed in almost any way depending on the server requirements, but you should stick to the standards when implementing this. Using PKCE is required now on most deployments when using the OIDC code flow; any identity provider solution which does not support this should be avoided.

"Keycloak": { "ServerRealm": "http://localhost:8080/realms/myrealm", "Metadata": "http://localhost:8080/realms/myrealm/.well-known/openid-configuration", "ClientId": "oidc-code-pkce", // "ClientSecret": "--in user secrets or keyvault--" },

The Keycloak server is set up to use the standard settings. You could improve the security of this using token exchange or further supported specifications from Keycloak and ASP.NET Core.

Notes:

By using OpenIddict and ASP.NET Core to federate to further identity providers, it is really easy to support best-practice application security and also integrate any third-party identity system without surprises. Having full control of your identity provider is a good thing, and by using federation you do not need to manage the user accounts; that can be fully implemented on the client system. If the application requires strong MFA like FIDO2, this can also easily be implemented in ASP.NET Core. Using some of the cloud solution IdPs prevents you from implementing strong application security. These cloud systems do provide excellent user accounting, and it would be nice to combine that with a top identity provider.

If implementing a product which needs to support multiple different identity providers from different clients, then you should implement the identity provider as part of your solution context and federate to the client systems.

Links

https://documentation.openiddict.com/

https://github.com/openiddict/openiddict-core

https://docs.microsoft.com/en-us/java/openjdk/download

https://github.com/tuxiem/AspNetCore-keycloak

https://wjw465150.gitbooks.io/keycloak-documentation/content/server_installation/topics/network/https.html

https://www.keycloak.org/documentation.html

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/identity



Saturday, 30. April 2022

Simon Willison

PyScript demos

PyScript demos PyScript was announced at PyCon this morning. It's a new open source project that provides Web Components built on top of Pyodide, allowing you to use Python directly within your HTML pages in a way that is executed using a WebAssembly copy of Python running in your browser. These demos really help illustrate what it can do - it's a fascinating new piece of the Python web ecosyste

PyScript demos

PyScript was announced at PyCon this morning. It's a new open source project that provides Web Components built on top of Pyodide, allowing you to use Python directly within your HTML pages in a way that is executed using a WebAssembly copy of Python running in your browser. These demos really help illustrate what it can do - it's a fascinating new piece of the Python web ecosystem.

Via @simonw

Friday, 29. April 2022

Simon Willison

Testing Datasette parallel SQL queries in the nogil/python fork

Testing Datasette parallel SQL queries in the nogil/python fork As part of my ongoing research into whether Datasette can be sped up by running SQL queries in parallel I've been growing increasingly suspicious that the GIL is holding me back. I know the sqlite3 module releases the GIL and was hoping that would give me parallel queries, but it looks like there's still a ton of work going on in Py

Testing Datasette parallel SQL queries in the nogil/python fork

As part of my ongoing research into whether Datasette can be sped up by running SQL queries in parallel I've been growing increasingly suspicious that the GIL is holding me back. I know the sqlite3 module releases the GIL and was hoping that would give me parallel queries, but it looks like there's still a ton of work going on in Python GIL land creating Python objects representing the results of the query.

Sam Gross has been working on a nogil fork of Python and I decided to give it a go. It's published as a Docker image and it turns out trying it out really did just take a few commands... and it produced the desired results: my parallel code started beating my serial code, where previously the two had produced effectively the same performance numbers.

I'm pretty stunned by this. I had no idea how far along the nogil fork was. It's amazing to see it in action.
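For context, the kind of comparison involved can be sketched roughly like this (a stand-in benchmark, not Simon's actual test harness; fixtures.db and the query are placeholders): run the same queries serially and then across a thread pool and compare the timings. Under the regular GIL the two numbers tend to be close.

# Rough sketch of comparing serial vs threaded SQLite queries.
# Not the actual Datasette benchmark; the database and query are stand-ins.
import sqlite3
import time
from concurrent.futures import ThreadPoolExecutor

QUERY = "select count(*) from sqlite_master"

def run_query(_):
    # Each thread opens its own connection; the sqlite3 C code releases the GIL
    conn = sqlite3.connect("fixtures.db")
    try:
        return conn.execute(QUERY).fetchall()
    finally:
        conn.close()

start = time.perf_counter()
for i in range(20):
    run_query(i)
print("serial:  ", time.perf_counter() - start)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(run_query, range(20)))
print("threaded:", time.perf_counter() - start)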

Thursday, 28. April 2022

Simon Willison

Automatically opening issues when tracked file content changes

I figured out a GitHub Actions pattern to keep track of a file published somewhere on the internet and automatically open a new repository issue any time the contents of that file changes. Extracting GZipMiddleware from Starlette Here's why I needed to solve this problem. I want to add gzip support to my Datasette open source project. Datasette builds on the Python ASGI standard, and Starlet

I figured out a GitHub Actions pattern to keep track of a file published somewhere on the internet and automatically open a new repository issue any time the contents of that file changes.

Extracting GZipMiddleware from Starlette

Here's why I needed to solve this problem.

I want to add gzip support to my Datasette open source project. Datasette builds on the Python ASGI standard, and Starlette provides an extremely well tested, robust GZipMiddleware class that adds gzip support to any ASGI application. As with everything else in Starlette, it's really good code.

The problem is, I don't want to add the whole of Starlette as a dependency. I'm trying to keep Datasette's core as small as possible, so I'm very careful about new dependencies. Starlette itself is actually very light (and only has a tiny number of dependencies of its own) but I still don't want the whole thing just for that one class.

So I decided to extract the GZipMiddleware class into a separate Python package, under the same BSD license as Starlette itself.

The result is my new asgi-gzip package, now available on PyPI.

What if Starlette fixes a bug?

The problem with extracting code like this is that Starlette is a very effectively maintained package. What if they make improvements or fix bugs in the GZipMiddleware class? How can I make sure to apply those same fixes to my extracted copy?

As I thought about this challenge, I realized I had most of the solution already.

Git scraping is the name I've given to the trick of running a periodic scraper that writes to a git repository in order to track changes to data over time.

It may seem redundant to do this against a file that already lives in version control elsewhere - but in addition to tracking changes, Git scraping can offer a cheap and easy way to add automation that triggers when a change is detected.

I need an actionable alert any time the Starlette code changes so I can review the change and apply a fix to my own library, if necessary.

Since I already run all of my projects out of GitHub issues, automatically opening an issue against the asgi-gzip repository would be ideal.

My track.yml workflow does exactly that: it implements the Git scraping pattern against the gzip.py module in Starlette, and files an issue any time it detects changes to that file.

Starlette haven't made any changes to that file since I started tracking it, so I created a test repo to try this out.

Here's one of the example issues. I decided to include the visual diff in the issue description and have a link to it from the underlying commit as well.

How it works

The implementation is contained entirely in this track.yml workflow. I designed this to be contained as a single file to make it easy to copy and paste it to adapt it for other projects.

It uses actions/github-script, which makes it easy to do things like file new issues using JavaScript.

Here's a heavily annotated copy:

name: Track the Starlette version of this

# Run on repo pushes, and if a user clicks the "run this action" button,
# and on a schedule at 5:21am UTC every day
on:
  push:
  workflow_dispatch:
  schedule:
    - cron: '21 5 * * *'

# Without this block I got this error when the action ran:
# HttpError: Resource not accessible by integration
permissions:
  # Allow the action to create issues
  issues: write
  # Allow the action to commit back to the repository
  contents: write

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - uses: actions/github-script@v6
      # Using env: here to demonstrate how an action like this can
      # be adjusted to take dynamic inputs
      env:
        URL: https://raw.githubusercontent.com/encode/starlette/master/starlette/middleware/gzip.py
        FILE_NAME: tracking/gzip.py
      with:
        script: |
          const { URL, FILE_NAME } = process.env;
          // promisify pattern for getting an await version of child_process.exec
          const util = require("util");
          // Used exec_ here because 'exec' variable name is already used:
          const exec_ = util.promisify(require("child_process").exec);
          // Use curl to download the file
          await exec_(`curl -o ${FILE_NAME} ${URL}`);
          // Use 'git diff' to detect if the file has changed since last time
          const { stdout } = await exec_(`git diff ${FILE_NAME}`);
          if (stdout) {
            // There was a diff to that file
            const title = `${FILE_NAME} was updated`;
            const body = `${URL} changed:` +
              "\n\n```diff\n" + stdout + "\n```\n\n" +
              "Close this issue once those changes have been integrated here";
            const issue = await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: title,
              body: body,
            });
            const issueNumber = issue.data.number;
            // Now commit and reference that issue number, so the commit shows up
            // listed at the bottom of the issue page
            const commitMessage = `${FILE_NAME} updated, refs #${issueNumber}`;
            // https://til.simonwillison.net/github-actions/commit-if-file-changed
            await exec_(`git config user.name "Automated"`);
            await exec_(`git config user.email "actions@users.noreply.github.com"`);
            await exec_(`git add -A`);
            await exec_(`git commit -m "${commitMessage}" || exit 0`);
            await exec_(`git pull --rebase`);
            await exec_(`git push`);
          }

In the asgi-gzip repository I keep the fetched gzip.py file in a tracking/ directory. This directory isn't included in the Python package that gets uploaded to PyPI - it's there only so that my code can track changes to it over time.

More interesting applications

I built this to solve my "tell me when Starlette update their gzip.py file" problem, but clearly this pattern has much more interesting uses.

You could point this at any web page to get a new GitHub issue opened when that page content changes. Subscribe to notifications for that repository and you get a robust, shared mechanism for alerts - plus an issue system where you can post additional comments and close the issue once someone has reviewed the change.

There's a lot of potential here for solving all kinds of interesting problems. And it doesn't cost anything either: GitHub Actions (somehow) remains completely free for public repositories!
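Outside of GitHub Actions, the same idea can be sketched as a small standalone Python script (an illustration, not the workflow above): fetch the URL, compare it to the last saved copy, and open an issue through the GitHub REST API. The repository name, snapshot path and GITHUB_TOKEN handling here are placeholders.

# Hedged sketch: watch a URL and open a GitHub issue when its content changes.
import os
import pathlib
import requests

URL = "https://raw.githubusercontent.com/encode/starlette/master/starlette/middleware/gzip.py"
SNAPSHOT = pathlib.Path("tracking/gzip.py")   # local copy saved by the previous run
REPO = "owner/repo"                           # placeholder: repository that gets the issue

new = requests.get(URL).text
old = SNAPSHOT.read_text() if SNAPSHOT.exists() else ""

if new != old:
    SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
    SNAPSHOT.write_text(new)
    # Open an issue via the GitHub REST API (POST /repos/{owner}/{repo}/issues)
    response = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        json={
            "title": f"{SNAPSHOT} was updated",
            "body": f"{URL} changed - review and apply any relevant fixes, then close this issue.",
        },
    )
    response.raise_for_status()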

Wednesday, 27. April 2022

Simon Willison

Weeknotes: Parallel SQL queries for Datasette, plus some middleware tricks

A promising new performance optimization for Datasette, plus new datasette-gzip and datasette-total-page-time plugins. Parallel SQL queries in Datasette From the start of the project, Datasette has been built on top of Python's asyncio capabilities - mainly to benefit things like streaming enormous CSV files. This week I started experimenting with a new way to take advantage of them, by expl

A promising new performance optimization for Datasette, plus new datasette-gzip and datasette-total-page-time plugins.

Parallel SQL queries in Datasette

From the start of the project, Datasette has been built on top of Python's asyncio capabilities - mainly to benefit things like streaming enormous CSV files.

This week I started experimenting with a new way to take advantage of them, by exploring the potential to run multiple SQL queries in parallel.

Consider this Datasette table page:

That page has to execute quite a few SQL queries:

A select count(*) ... to populate the 3,283 rows heading at the top
Queries against each column to decide what the "suggested facets" should be (details here)
For each of the selected facets (in this case repos and committer) a select name, count(*) from ... group by name order by count(*) desc query
The actual select * from ... limit 101 query used to display the actual table

It ends up executing more than 30 queries! Which may seem like a lot, but Many Small Queries Are Efficient In SQLite.

One thing that's interesting about the above list of queries though is that they don't actually have any dependencies on each other. There's no reason not to run all of them in parallel - later queries don't depend on the results from earlier queries.

I've been exploring a fancy way of executing parallel code using pytest-style dependency injection in my asyncinject library. But I decided to do a quick prototype to see what this would look like using asyncio.gather().

It turns out that simpler approach worked surprisingly well!
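A minimal sketch of that prototype shape (not Datasette's actual code; the table and column names stand in for the queries listed above):

# Run independent SQLite queries in threads and gather the results with asyncio.
import asyncio
import sqlite3

def run_query(sql):
    # One connection per query so the threads don't share a connection
    conn = sqlite3.connect("fixtures.db")
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

async def table_page():
    # The queries don't depend on each other, so fire them all off at once
    count, facets, rows = await asyncio.gather(
        asyncio.to_thread(run_query, "select count(*) from commits"),
        asyncio.to_thread(
            run_query,
            "select repo, count(*) from commits group by repo order by count(*) desc",
        ),
        asyncio.to_thread(run_query, "select * from commits limit 101"),
    )
    return count, facets, rows

asyncio.run(table_page())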

You can follow my research in this issue, but the short version is that as of a few days ago the Datasette main branch runs many of the above queries in parallel.

This trace (using the datasette-pretty-traces plugin) illustrates my initial results:

As you can see, the grey lines for many of those SQL queries are now overlapping.

You can add the undocumented ?_noparallel=1 query string parameter to disable parallel execution to compare the difference:

One thing that gives me pause: for this particular Datasette deployment (on the cheapest available Cloud Run instance) the overall performance difference between the two is very small.

I need to dig into this deeper: on my laptop I feel like I'm seeing slightly better results, but definitely not conclusively. It may be that multiple cores are not being used effectively here.

Datasette runs SQL queries in a pool of threads. You might expect Python's infamous GIL (Global Interpreter Lock) to prevent these from executing across multiple cores - but I checked, and the GIL is released in Python's C code the moment control transfers to SQLite. And since SQLite can happily run multiple threads, my hunch is that this means parallel queries should be able to take advantage of multiple cores. Theoretically at least!

I haven't yet figured out how to prove this though, and I'm not currently convinced that parallel queries are providing any overall benefit at all. If you have any ideas I'd love to hear them - I have a research issue open, comments welcome!

Update 28th April 2022: Research continues, but it looks like there's little performance benefit from this. Current leading theory is that this is because of the GIL - while the SQLite C code releases the GIL, much of the activity involved in things like assembling Row objects returned by a query still uses Python - so parallel queries still end up mostly blocked on a single core. Follow the issue for more details. I started a discussion on the SQLite Forum which has some interesting clues in it as well.

Further update: It's definitely the GIL. I know because I tried running it against Sam Gross's nogil Python fork and the parallel version soundly beat the non-parallel version! Details in this comment.

datasette-gzip

I've been putting off investigating gzip support for Datasette for a long time, because it's easy to add as a separate layer. If you run Datasette behind Cloudflare or an Apache or Nginx proxy configuring gzip can happen there, with very little effort and fantastic performance.

Then I noticed that my Global Power Plants demo returned an HTML table page that weighed in at 420KB... but gzipped was just 16.61KB. Turns out HTML tables have a ton of repeated markup and compress REALLY well!

More importantly: Google Cloud Run doesn't gzip for you. So all of my Datasette instances that were running on Cloud Run without also using Cloudflare were really suffering.

So this morning I released datasette-gzip, a plugin that gzips content if the browser sends an Accept-Encoding: gzip header.

The plugin is an incredibly thin wrapper around the thoroughly proven-in-production GZipMiddleware. So thin that this is the full implementation:

from datasette import hookimpl
from starlette.middleware.gzip import GZipMiddleware


@hookimpl(trylast=True)
def asgi_wrapper(datasette):
    return GZipMiddleware

This kind of thing is exactly why I ported Datasette to ASGI back in 2019 - and why I continue to think that the burgeoning ASGI ecosystem is the most under-rated piece of today's Python web development environment.

The plugin's tests are a lot more interesting.

That @hookimpl(trylast=True) line is there to ensure that this plugin runs last, after every other plugin has executed.

This is necessary because there are existing ASGI plugins for Datasette (such as the new datasette-total-page-time) which modify the generated response.

If the gzip plugin runs before they do, they'll get back a blob of gzipped data rather than the HTML that they were expecting. This is likely to break them.

I wanted to prove to myself that trylast=True would prevent these errors - so I ended up writing a test that demonstrated that the plugin registered with trylast=True was compatible with a transforming content plugin (in the test it just converts everything to uppercase) whereas tryfirst=True would instead result in an error.

Thankfully I have an older TIL on Registering temporary pluggy plugins inside tests that I could lean on to help figure out how to do this.

The plugin is now running on my latest-with-plugins demo instance. Since that instance loads dozens of different plugins it ends up serving a bunch of extra JavaScript and CSS, all of which benefits from gzip:

datasette-total-page-time

To help understand the performance improvements introduced by parallel SQL queries I decided I wanted the Datasette footer to be able to show how long it took for the entire page to load.

This is a tricky thing to do: how do you measure the total time for a page and then include it on that page if the page itself hasn't finished loading when you render that template?

I came up with a pretty devious middleware trick to solve this, released as the datasette-total-page-time plugin.

The trick is to start a timer when the page load begins, and then end that timer at the very last possible moment as the page is being served back to the user.

Then, inject the following HTML directly after the closing </html> tag (which works fine, even though it's technically invalid):

<script>
let footer = document.querySelector("footer");
if (footer) {
  let ms = 37.224;
  let s = ` &middot; Page took ${ms.toFixed(3)}ms`;
  footer.innerHTML += s;
}
</script>

This adds the timing information to the page's <footer> element, if one exists.
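The general shape of that middleware can be sketched as a plain ASGI wrapper (an illustration, not the datasette-total-page-time source; Content-Length handling is simplified by dropping the header):

# Minimal ASGI middleware sketch of the "append timing after the page" trick.
import time

class TotalPageTimeMiddleware:
    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            return await self.app(scope, receive, send)
        start = time.perf_counter()

        async def wrapped_send(message):
            if message["type"] == "http.response.start":
                # Drop Content-Length, since extra bytes get appended below
                message = dict(message, headers=[
                    (k, v) for k, v in message["headers"]
                    if k.lower() != b"content-length"
                ])
            if message["type"] == "http.response.body" and not message.get("more_body"):
                # Last possible moment: measure elapsed time and append the script
                ms = (time.perf_counter() - start) * 1000
                script = (
                    "<script>let footer = document.querySelector('footer');"
                    f"if (footer) {{ footer.innerHTML += ' &middot; Page took {ms:.3f}ms'; }}"
                    "</script>"
                ).encode("utf-8")
                message = dict(message, body=message.get("body", b"") + script)
            await send(message)

        await self.app(scope, receive, wrapped_send)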

You can see this running on this latest-with-plugins page.

Releases this week

datasette-gzip: 0.1 - 2022-04-27
Add gzip compression to Datasette

datasette-total-page-time: 0.1 - 2022-04-26
Add a note to the Datasette footer measuring the total page load time

asyncinject: 0.5 - (7 releases total) - 2022-04-22
Run async workflows using pytest-fixtures-style dependency injection

django-sql-dashboard: 1.1 - (35 releases total) - 2022-04-20
Django app for building dashboards using raw SQL queries

shot-scraper: 0.13 - (14 releases total) - 2022-04-18
Tools for taking automated screenshots of websites

TIL this week

Format code examples in documentation with blacken-docs
Seeing files opened by a process using opensnoop
Atuin for zsh shell history in SQLite

Mike Jones: self-issued

OpenID Presentations at April 2022 OpenID Workshop and IIW

I gave the following presentations at the Monday, April 25, 2022 OpenID Workshop at Google: OpenID Connect Working Group (PowerPoint) (PDF) OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF) I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, April 26, 2022: Introduction to OpenID Connect (PowerPoint) […]

I gave the following presentations at the Monday, April 25, 2022 OpenID Workshop at Google:

OpenID Connect Working Group (PowerPoint) (PDF)
OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, April 26, 2022:

Introduction to OpenID Connect (PowerPoint) (PDF)

Tuesday, 26. April 2022

Simon Willison

HTML event handler attributes: down the rabbit hole

HTML event handler attributes: down the rabbit hole onclick="myfunction(event)" is an idiom for passing the click event to a function - but how does it work? It turns out the answer is buried deep in the HTML spec - the browser wraps that string of code in a function(event) { ... that string ... } function and makes the event available to its local scope that way. Via @phil_eaton

HTML event handler attributes: down the rabbit hole

onclick="myfunction(event)" is an idiom for passing the click event to a function - but how does it work? It turns out the answer is buried deep in the HTML spec - the browser wraps that string of code in a function(event) { ... that string ... } function and makes the event available to its local scope that way.

Via @phil_eaton


Mac OS 8 emulated in WebAssembly

Mac OS 8 emulated in WebAssembly Absolutely incredible project by Mihai Parparita. This is a full, working copy of Mac OS 8 (from 1997) running in your browser via WebAssembly - and it's fully loaded with games and applications too. I played with Photoshop 3.0 and Civilization and there's so much more on there to explore too - I finally get to try out HyperCard! Via Infinite Mac: An Insta

Mac OS 8 emulated in WebAssembly

Absolutely incredible project by Mihai Parparita. This is a full, working copy of Mac OS 8 (from 1997) running in your browser via WebAssembly - and it's fully loaded with games and applications too. I played with Photoshop 3.0 and Civilization and there's so much more on there to explore too - I finally get to try out HyperCard!

Via Infinite Mac: An Instant-Booting Quadra in Your Browser


Learn Go with tests

Learn Go with tests I really like this approach to learning a new language: start by learning to write tests (which gets you through hello world, environment setup and test running right from the beginning) and use them to explore the language. I also really like how modern Go development no longer depends on the GOPATH, which I always found really confusing.

Learn Go with tests

I really like this approach to learning a new language: start by learning to write tests (which gets you through hello world, environment setup and test running right from the beginning) and use them to explore the language. I also really like how modern Go development no longer depends on the GOPATH, which I always found really confusing.


jq language description

jq language description I love jq but I've always found it difficult to remember how to use it, and the manual hasn't helped me as much as I would hope. It turns out the jq wiki on GitHub offers an alternative, more detailed description of the language which fits the way my brain works a lot better. Via psacawa on Hacker News

jq language description

I love jq but I've always found it difficult to remember how to use it, and the manual hasn't helped me as much as I would hope. It turns out the jq wiki on GitHub offers an alternative, more detailed description of the language which fits the way my brain works a lot better.

Via psacawa on Hacker News


A tiny CI system

A tiny CI system Christian Ştefănescu shares a recipe for building a tiny self-hosted CI system using Git and Redis. A post-receive hook runs when a commit is pushed to the repo and uses redis-cli to push jobs to a list. Then a separate bash script runs a loop with a blocking "redis-cli blpop jobs" operation which waits for new jobs and then executes the CI job as a shell script. Via @stc

A tiny CI system

Christian Ştefănescu shares a recipe for building a tiny self-hosted CI system using Git and Redis. A post-receive hook runs when a commit is pushed to the repo and uses redis-cli to push jobs to a list. Then a separate bash script runs a loop with a blocking "redis-cli blpop jobs" operation which waits for new jobs and then executes the CI job as a shell script.

Via @stchris_
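The worker half of that recipe also translates almost directly to Python if you prefer it over bash - a rough sketch assuming the redis package and a ci-job.sh script, not the author's original code:

# Rough Python equivalent of the worker loop described above:
# block on a Redis list and run each job as a shell script.
import subprocess
import redis

r = redis.Redis()

while True:
    # BLPOP blocks until a job lands on the "jobs" list
    _queue, job = r.blpop("jobs")
    repo = job.decode("utf-8")
    # Run the CI job as a shell script, mirroring the post's approach
    subprocess.run(["./ci-job.sh", repo], check=False)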


Phil Windley's Technometria

We Need a Self-Sovereign Model for IoT

Summary: The Internet of Things is more like the CompuServe of Things. We need a new, self-sovereign model to protect us from proprietary solutions and unlock IoT's real potential. Last week Insteon, a large provider of smart home devices, abruptly closed its doors. While their web site is still up and advertises them as "the most reliable and simplest way to turn your home into a sm

Summary: The Internet of Things is more like the CompuServe of Things. We need a new, self-sovereign model to protect us from proprietary solutions and unlock IoT's real potential.

Last week Insteon, a large provider of smart home devices, abruptly closed its doors. While their web site is still up and advertises them as "the most reliable and simplest way to turn your home into a smart home," the company seems to have shut down their cloud service without warning or providing a way for customers to continue using their products, which depend on Insteon's private cloud. High-ranking Insteon execs even removed their affiliation with Insteon from their LinkedIn profiles. Eek!

Fortunately, someone reverse-engineered the Insteon protocol a while back and there are some open-source solutions for people who are able to run their own servers or know someone who can do it for them. Home Assistant is one. OpenHAB is another.

Insteon isn't alone. Apparently iHome terminated its service on April 2, 2022. Other smarthome companies or services who have gone out of business include Revolv, Insignia, Wink, and Staples Connect.

The problem with Insteon, and every other IoT and smart home company I'm aware of, is that their model looks like this:

Private cloud IoT model; grey box represents domain of control

In this model, you:

Buy the device
Download their app
Create an account on the manufacturer's private cloud
Register your device
Control the device from the app

All the data and the device are inside the manufacturer's private cloud. They administer it all and control what you can do. Even though you paid for the device, you don't own it because it's worthless without the service the manufacturer provides. If they take your account away (or everyone's account, in the case of Insteon), you're out of luck. Want to use your motion detector to turn on the lights? Good luck unless they're from the same company1. I call this the CompuServe of Things.

The alternative is what I call the self-sovereign IoT (SSIoT) model:

Self-sovereign IoT model; grey box represents domain of control

Like the private-cloud model, in the SSIoT model, you also:

Buy the device
Download an app
Establish a relationship with a compatible service provider
Register the device
Control the device using the app

The fact that the flows for these two models are the same is a feature. The difference lies elsewhere: in SSIoT, your device, the data about you, and the service are all under your control. You might have a relationship with the device manufacturer, but you and your devices are not under their administrative control. This might feel unworkable, but I've proven it's not. Ten years ago we built a connected-car platform called Fuse that used the SSIoT model. All the data was under the control of the person or persons who owned the fleet and could be moved to an alternate platform without loss of data or function. People used the Fuse service that we provided, but they didn't have to. If Fuse had gotten popular, other service providers could have provided the same or similar service based on the open-model and Fuse owners would have had a choice of service providers. Substitutability is an indispensable property for the internet of things.

All companies die. Some last a long time, but even then they frequently kill off products. Having to buy all your gear from a single vendor and use their private cloud puts your IoT project at risk of being stranded, like Insteon customers have been. Hopefully, the open-source solutions will provide the basis for some relief to them. But the ultimate answer is interoperability and self-sovereignty as the default. That's the only way we ditch the CompuServe of Things for a real internet of things.

Notes

1. Apple HomeKit and Google Home try to solve this problem, but you're still dependent on the manufacturer to provide the basic service. And making the administrative domain bigger is nice, but doesn't result in self-sovereignty.

Tags: picos iot interoperability cloud fuse ssi

Sunday, 24. April 2022

Simon Willison

WebAIM guide to using iOS VoiceOver to evaluate web accessibility

WebAIM guide to using iOS VoiceOver to evaluate web accessibility I asked for pointers on learning to use VoiceOver on my iPhone for accessibility testing today and Matt Hobbs pointed me to this tutorial from the WebAIM group at Utah State University. Via @TheRealNooshu

WebAIM guide to using iOS VoiceOver to evaluate web accessibility

I asked for pointers on learning to use VoiceOver on my iPhone for accessibility testing today and Matt Hobbs pointed me to this tutorial from the WebAIM group at Utah State University.

Via @TheRealNooshu


Useful tricks with pip install URL and GitHub

The pip install command can accept a URL to a zip file or tarball. GitHub provides URLs that can create a zip file of any branch, tag or commit in any repository. Combining these is a really useful trick for maintaining Python packages. pip install URL The most common way of using pip is with package names from PyPi: pip install datasette But the pip install command has a bunch of other a

The pip install command can accept a URL to a zip file or tarball. GitHub provides URLs that can create a zip file of any branch, tag or commit in any repository. Combining these is a really useful trick for maintaining Python packages.

pip install URL

The most common way of using pip is with package names from PyPi:

pip install datasette

But the pip install command has a bunch of other abilities - it can install files, pull from various version control systems and most importantly it can install packages from a URL.

I sometimes use this to distribute ad-hoc packages that I don’t want to upload to PyPI. Here’s a quick and simple Datasette plugin I built a while ago that I install using this option:

pip install 'https://static.simonwillison.net/static/2021/datasette_expose_some_environment_variables-0.1-py3-none-any.whl'

(Source code here)

You can also list URLs like this directly in your requirements.txt file, one per line.
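For example, a requirements.txt can mix PyPI package names and URLs (the wheel URL here is the one from the example above):

# regular PyPI requirements
datasette
# requirements installed from URLs, one per line
https://static.simonwillison.net/static/2021/datasette_expose_some_environment_variables-0.1-py3-none-any.whl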

datasette install

Datasette has a datasette install command which wraps pip install. It exists purely so that people can install Datasette plugins easily without first having to figure out the location of Datasette's Python virtual environment.

This works with URLs too, so you can install that plugin like so:

datasette install https://static.simonwillison.net/static/2021/datasette_expose_some_environment_variables-0.1-py3-none-any.whl

The datasette publish commands have an --install option for installing plugins, which works with URLs too:

datasette publish cloudrun mydatabase.db \
  --service=plugins-demo \
  --install datasette-vega \
  --install https://static.simonwillison.net/static/2021/datasette_expose_some_environment_variables-0.1-py3-none-any.whl \
  --install datasette-graphql

Installing branches, tags and commits

Any reference in a GitHub repository can be downloaded as a zip file or tarball - that means branches, tags and commits are all available.

If your repository contains a Python package with a setup.py file, those URLs will be compatible with pip install.

This means you can use URLs to install tags, branches and even exact commits!

Some examples:

pip install https://github.com/simonw/datasette/archive/refs/heads/main.zip - installs the latest main branch from the simonw/datasette repository.
pip install https://github.com/simonw/datasette/archive/refs/tags/0.61.1.zip - installs version 0.61.1 of Datasette, via this tag.
pip install https://github.com/simonw/datasette/archive/refs/heads/0.60.x.zip - installs the latest head from my 0.60.x branch.
pip install https://github.com/simonw/datasette/archive/e64d14e4.zip - installs the package from the snapshot at commit e64d14e413a955a10df88e106a8b5f1572ec8613 - note that you can use just the first few characters in the URL rather than the full commit hash.

That last option, installing for a specific commit hash, is particularly useful in requirements.txt files since unlike branches or tags you can be certain that the content will not change in the future.

As you can see, the URLs are all predictable - GitHub has really good URL design. But if you don't want to remember or look them up you can instead find them using the Code -> Download ZIP menu item for any view onto the repository:

Installing from a fork

I sometimes use this trick when I find a bug in an open source Python library and need to apply my fix before it has been accepted by upstream.

I create a fork on GitHub, apply my fix and send a pull request to the project.

Then in my requirements.txt file I drop in a URL to the fix in my own repository - with a comment reminding me to switch back to the official package as soon as they've applied the bug fix.

Installing pull requests

This is a new trick I discovered this morning: there's a hard-to-find URL that lets you do the same thing for code in pull requests.

Consider PR #1717 against Datasette, by Tim Sherratt, adding a --timeout option the datasette publish cloudrun command.

I can install that in a fresh environment on my machine using:

pip install https://api.github.com/repos/simonw/datasette/zipball/pull/1717/head

This isn't as useful as checking out the code directly, since it's harder to review the code in a text editor - but it's useful knowing it's possible.

Installing gists

GitHub Gists also get URLs to zip files. This means it's possible to create and host a full Python package just using a Gist, by packaging together a setup.py file and one or more Python modules.

Here's an example Gist containing my datasette-expose-some-environment-variables plugin.

You can right click and copy link on the "Download ZIP" button to get this URL:

https://gist.github.com/simonw/b6dbb230d755c33490087581821d7082/archive/872818f6b928d9393737eee541c3c76d6aa4b1ba.zip

Then pass that to pip install or datasette install to install it.

That Gist has two files - a setup.py file containing the following:

from setuptools import setup

VERSION = "0.1"

setup(
    name="datasette-expose-some-environment-variables",
    description="Expose environment variables in Datasette at /-/env",
    author="Simon Willison",
    license="Apache License, Version 2.0",
    version=VERSION,
    py_modules=["datasette_expose_some_environment_variables"],
    entry_points={
        "datasette": [
            "expose_some_environment_variables = datasette_expose_some_environment_variables"
        ]
    },
    install_requires=["datasette"],
)

And a datasette_expose_some_environment_variables.py file containing the actual plugin:

from datasette import hookimpl
from datasette.utils.asgi import Response
import os

REDACT = {"GPG_KEY"}


async def env(request):
    output = []
    for key, value in os.environ.items():
        if key not in REDACT:
            output.append("{}={}".format(key, value))
    return Response.text("\n".join(output))


@hookimpl
def register_routes():
    return [
        (r"^/-/env$", env)
    ]

Thursday, 21. April 2022

Simon Willison

Web Components as Progressive Enhancement

Web Components as Progressive Enhancement I think this is a key aspect of Web Components I had been missing: since they default to rendering their contents, you can use them as a wrapper around regular HTML elements that can then be progressively enhanced once the JavaScript has loaded. Via Alex Russell

Web Components as Progressive Enhancement

I think this is a key aspect of Web Components I had been missing: since they default to rendering their contents, you can use them as a wrapper around regular HTML elements that can then be progressively enhanced once the JavaScript has loaded.

Via Alex Russell

Wednesday, 20. April 2022

Damien Bod

Implement Azure AD Continuous Access Evaluation in an ASP.NET Core Razor Page app using a Web API

This article shows how Azure AD continuous access evaluation (CAE) can be used in an ASP.NET Core UI application to force MFA when using an administrator API from a separate ASP.NET Core application. Both applications are secured using Microsoft.Identity.Web. An ASP.NET Core Razor Page application is used to implement the UI application. The API is […]

This article shows how Azure AD continuous access evaluation (CAE) can be used in an ASP.NET Core UI application to force MFA when using an administrator API from a separate ASP.NET Core application. Both applications are secured using Microsoft.Identity.Web. An ASP.NET Core Razor Page application is used to implement the UI application. The API is implemented with Swagger/OpenAPI and ASP.NET Core. An Azure AD conditional access authentication context is used to implement the MFA requirement, and an Azure AD CAE policy is set up which requires MFA and uses the context.

Code https://github.com/damienbod/AspNetCoreAzureADCAE

Continuous access evaluation (CAE) requires an Azure AD P2 license to use the authentication context. If your applications are deployed to any other tenant type, this will not work.

Requirements

Azure AD tenant with P2 license
Microsoft Graph

Create a Conditional access Authentication Context

A continuous access evaluation (CAE) authentication context was created using Microsoft Graph and can be viewed in the portal. In this demo, like the Microsoft sample application, three authentication contexts are created using Microsoft Graph. The Policy.Read.ConditionalAccess and Policy.ReadWrite.ConditionalAccess permissions are required to change the CAE authentication contexts.

This is only needed to create the CAE authentication contexts. Once created, this can be used in the target applications.

public async Task CreateAuthContextViaGraph(string acrKey, string acrValue)
{
    await _graphAuthContextAdmin.CreateAuthContextClassReferenceAsync(
        acrKey, acrValue,
        $"A new Authentication Context Class Reference created at {DateTime.UtcNow}",
        true);
}

public async Task<AuthenticationContextClassReference?> CreateAuthContextClassReferenceAsync(
    string id, string displayName, string description, bool IsAvailable)
{
    try
    {
        var acr = await _graphServiceClient
            .Identity
            .ConditionalAccess
            .AuthenticationContextClassReferences
            .Request()
            .AddAsync(new AuthenticationContextClassReference
            {
                Id = id,
                DisplayName = displayName,
                Description = description,
                IsAvailable = IsAvailable,
                ODataType = null
            });

        return acr;
    }
    catch (ServiceException e)
    {
        _logger.LogWarning(
            "We could not add a new ACR: {exception}", e.Error.Message);
        return null;
    }
}

The created conditional access authentication context can be viewed in the portal in the Security blade of the Azure AD tenant.

If you open the context, you can see the id used. This is used in the applications to check the MFA requirement.

Create a CAE policy to use the context

Now that an authentication context exists, a CAE policy can be created to use it. I created a policy to require MFA.

Implement the API and use the CAE context

The API application needs to validate that the access token contains the acrs claim with the c1 value. If CAE is activated and the claim is included in the token, then any policies which use this CAE authentication context have been fulfilled and no events have been received informing the client that this access token is invalid. A lot of things need to be implemented correctly for this to work. If configured correctly, an MFA step-up authentication is required to use the API. If the claim is missing from the access token, the API returns an unauthorized response as specified in the OpenID Connect signals and events specification. This is handled by the calling UI application.

/// <summary>
/// Claims challenges, claims requests, and client capabilities
///
/// https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge
///
/// Applications that use enhanced security features like Continuous Access Evaluation (CAE)
/// and Conditional Access authentication context must be prepared to handle claims challenges.
/// </summary>
public class CAECliamsChallengeService
{
    private readonly IConfiguration _configuration;

    public CAECliamsChallengeService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    /// <summary>
    /// Retrieves the acrsValue from database for the request method.
    /// Checks if the access token has acrs claim with acrsValue.
    /// If does not exists then adds WWW-Authenticate and throws UnauthorizedAccessException exception.
    /// </summary>
    public void CheckForRequiredAuthContext(string authContextId, HttpContext context)
    {
        if (!string.IsNullOrEmpty(authContextId))
        {
            string authenticationContextClassReferencesClaim = "acrs";

            if (context == null || context.User == null || context.User.Claims == null
                || !context.User.Claims.Any())
            {
                throw new ArgumentNullException(nameof(context),
                    "No Usercontext is available to pick claims from");
            }

            var acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim)
                .FirstOrDefault(x => x.Value == authContextId);

            if (acrsClaim?.Value != authContextId)
            {
                if (IsClientCapableofClaimsChallenge(context))
                {
                    string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;
                    var base64str = Convert.ToBase64String(Encoding.UTF8.GetBytes("{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"" + authContextId + "\"}}}"));

                    context.Response.Headers.Append("WWW-Authenticate", $"Bearer realm=\"\", authorization_uri=\"https://login.microsoftonline.com/common/oauth2/authorize\", client_id=\"" + clientId + "\", error=\"insufficient_claims\", claims=\"" + base64str + "\", cc_type=\"authcontext\"");
                    context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;

                    string message = string.Format(CultureInfo.InvariantCulture,
                        "The presented access tokens had insufficient claims. Please request for claims requested in the WWW-Authentication header and try again.");
                    context.Response.WriteAsync(message);
                    context.Response.CompleteAsync();
                    throw new UnauthorizedAccessException(message);
                }
                else
                {
                    throw new UnauthorizedAccessException("The caller does not meet the authentication bar to carry our this operation. The service cannot allow this operation");
                }
            }
        }
    }

    /// <summary>
    /// Evaluates for the presence of the client capabilities claim (xms_cc) and accordingly returns a response if present.
    /// </summary>
    public bool IsClientCapableofClaimsChallenge(HttpContext context)
    {
        string clientCapabilitiesClaim = "xms_cc";

        if (context == null || context.User == null || context.User.Claims == null
            || !context.User.Claims.Any())
        {
            throw new ArgumentNullException(nameof(context),
                "No Usercontext is available to pick claims from");
        }

        var ccClaim = context.User.FindAll(clientCapabilitiesClaim)
            .FirstOrDefault(x => x.Type == "xms_cc");

        if (ccClaim != null && ccClaim.Value == "cp1")
        {
            return true;
        }

        return false;
    }
}

The API uses the CAE scoped service to validate the CAE authentication context and either the data is returned or an unauthorized exception is returned. The Authorize attribute is also used to validate the JWT bearer token and validate that the authentication policy is supported. You could probably implement middleware to check the CAE authentication context as well.

[Authorize(Policy = "ValidateAccessTokenPolicy",
    AuthenticationSchemes = JwtBearerDefaults.AuthenticationScheme)]
[ApiController]
[Route("[controller]")]
public class ApiForUserDataController : ControllerBase
{
    private readonly CAECliamsChallengeService _caeCliamsChallengeService;

    public ApiForUserDataController(CAECliamsChallengeService caeCliamsChallengeService)
    {
        _caeCliamsChallengeService = caeCliamsChallengeService;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        // returns unauthorized exception with WWW-Authenticate header if CAE claim missing in access token
        // handled in the caller client exception with challenge returned if not ok
        _caeCliamsChallengeService.CheckForRequiredAuthContext(AuthContextId.C1, HttpContext);

        return new List<string> { "admin API CAE protected data 1", "admin API CAE protected data 2" };
    }
}

The program file adds the services and secures the API using Microsoft.Identity.Web. A policy is created to be used on the controllers.

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddScoped<CAECliamsChallengeService>();
builder.Services.AddDistributedMemoryCache();

builder.Services.AddMicrosoftIdentityWebApiAuthentication(builder.Configuration)
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddMicrosoftGraph(builder.Configuration.GetSection("GraphBeta"))
    .AddDistributedTokenCaches();

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
JwtSecurityTokenHandler.DefaultMapInboundClaims = false;
//IdentityModelEventSource.ShowPII = true;

builder.Services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
    {
        // Validate id of application for which the token was created
        // In this case the UI application
        validateAccessTokenPolicy.RequireClaim("azp", builder.Configuration["AzpValidClientId"]);

        // only allow tokens which used "Private key JWT Client authentication"
        // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
        // Indicates how the client was authenticated. For a public client, the value is "0".
        // If client ID and client secret are used, the value is "1".
        // If a client certificate was used for authentication, the value is "2".
        validateAccessTokenPolicy.RequireClaim("azpacr", "1");
    });
});

The program file is used to set up the ASP.NET Core API project like any Azure AD Microsoft.Identity.Web client.

"AzureAd": { "Instance": "https://login.microsoftonline.com/", "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]", "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Azure portal. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]", "ClientId": "[Enter the Client Id (Application ID obtained from the Azure portal), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]", "ClientSecret": "[Copy the client secret added to the app from the Azure portal]", "ClientCertificates": [ ], // the following is required to handle Continuous Access Evaluation challenges "ClientCapabilities": [ "cp1" ], "CallbackPath": "/signin-oidc" }, "AzpValidClientId": "7c839e15-096b-4abb-a869-df9e6b34027c", "GraphBeta": { "BaseUrl": "https://graph.microsoft.com/beta", "Scopes": "Policy.Read.ConditionalAccess Policy.ReadWrite.ConditionalAccess" },

Now that the unauthorized exception is returned to the calling UI interactive client, this needs to be handled.

Implement the ASP.NET Core Razor Page with step up MFA check

The UI project is an ASP.NET Core web app. The admin API scope is requested in order to access the admin API.

builder.Services.AddDistributedMemoryCache();

builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration, "AzureAd",
        subscribeToOpenIdConnectMiddlewareDiagnosticsEvents: true)
    .EnableTokenAcquisitionToCallDownstreamApi(new[] {
        builder.Configuration.GetSection("AdminApi")["Scope"] })
    .AddMicrosoftGraph(builder.Configuration.GetSection("GraphBeta"))
    .AddDistributedTokenCaches();

The app uses a scoped service to request data from the administrator API. Using the ITokenAcquisition interface, an access token is requested for the API. If an unauthorized response is returned, then a WebApiMsalUiRequiredException exception is thrown with the response headers.

public async Task<IEnumerable<string>?> GetApiDataAsync()
{
    var client = _clientFactory.CreateClient();

    var scopes = new List<string> { _adminApiScope };
    var accessToken = await _tokenAcquisition
        .GetAccessTokenForUserAsync(scopes);

    client.BaseAddress = new Uri(_adminApiBaseUrl);
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    client.DefaultRequestHeaders.Accept.Add(
        new MediaTypeWithQualityHeaderValue("application/json"));

    var response = await client.GetAsync("ApiForUserData");
    if (response.IsSuccessStatusCode)
    {
        var stream = await response.Content.ReadAsStreamAsync();
        var payload = await JsonSerializer
            .DeserializeAsync<List<string>>(stream);
        return payload;
    }

    // This exception can be used to handle a claims challenge
    throw new WebApiMsalUiRequiredException(
        $"Unexpected status code in the HttpResponseMessage: {response.StatusCode}.",
        response);
}

The ASP.NET Core Razor page is used to handle the WebApiMsalUiRequiredException exception. If this is thrown, a new claims challenge is created with the request for the authentication context and returned to the UI. The user is then redirected to authenticate again, and that authentication must fulfil the CAE policy which uses this authentication context.

public async Task<IActionResult> OnGet()
{
    try
    {
        Data = await _userApiClientService.GetApiDataAsync();
        return Page();
    }
    catch (WebApiMsalUiRequiredException hex)
    {
        // Challenges the user if exception is thrown from Web API.
        try
        {
            var claimChallenge = WwwAuthenticateParameters
                .GetClaimChallengeFromResponseHeaders(hex.Headers);
            _consentHandler.ChallengeUser(
                new string[] { "user.read" }, claimChallenge);
            return Page();
        }
        catch (Exception ex)
        {
            _consentHandler.HandleException(ex);
        }
        _logger.LogInformation("{hexMessage}", hex.Message);
    }
    return Page();
}

MFA is configured in a policy using the CAE conditional access authentication context.

Notes

The application will only work with Azure AD, and only if the continuous access evaluation policies are implemented correctly by the Azure tenant admin. You cannot force this in the application; you can only use it. The CAE authentication context only works with Azure AD and P2 licenses. If you deploy the application anywhere else, this will not work.

Links

https://github.com/Azure-Samples/ms-identity-ca-auth-context

https://github.com/Azure-Samples/ms-identity-dotnetcore-ca-auth-context-app

https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae

https://docs.microsoft.com/en-us/azure/active-directory/develop/developer-guide-conditional-access-authentication-context

https://docs.microsoft.com/en-us/azure/active-directory/develop/claims-challenge

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-conditional-access-dev-guide

https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-does-conditional-access-block-legacy/ba-p/3265345

Shared Signals and Events – A Secure Webhooks Framework

Tuesday, 19. April 2022

Simon Willison

Glue code to quickly copy data from one Postgres table to another

Glue code to quickly copy data from one Postgres table to another The Python script that Retool used to migrate 4TB of data between two PostgreSQL databases. I find the structure of this script really interesting - it uses Python to spin up a queue full of ID ranges to be transferred and then starts some threads, but then each thread shells out to a command that runs "psql COPY (SELECT ...) TO S

Glue code to quickly copy data from one Postgres table to another

The Python script that Retool used to migrate 4TB of data between two PostgreSQL databases. I find the structure of this script really interesting - it uses Python to spin up a queue full of ID ranges to be transferred and then starts some threads, but then each thread shells out to a command that runs "psql COPY (SELECT ...) TO STDOUT" and pipes the result to "psql COPY xxx FROM STDIN". Clearly this works really well ("saturate the database's hardware capacity" according to a comment on HN), and neatly sidesteps any issues with Python's GIL.

Via How Retool upgraded our 4 TB main application PostgreSQL database
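The overall structure described there can be sketched roughly like this (a simplified illustration, not Retool's script; connection strings, table name and ID ranges are placeholders):

# Queue of ID ranges + worker threads, each shelling out to
# "psql \copy ... TO STDOUT" piped into "psql \copy ... FROM STDIN".
import queue
import subprocess
import threading

SOURCE = "postgresql://user@source-host/app"   # placeholder connection strings
DEST = "postgresql://user@dest-host/app"
TABLE = "events"                               # placeholder table

jobs = queue.Queue()
for start in range(0, 10_000_000, 100_000):
    jobs.put((start, start + 100_000))

def worker():
    while True:
        try:
            lo, hi = jobs.get_nowait()
        except queue.Empty:
            return
        copy_out = f"\\copy (SELECT * FROM {TABLE} WHERE id >= {lo} AND id < {hi}) TO STDOUT"
        copy_in = f"\\copy {TABLE} FROM STDIN"
        # Pipe the COPY output of one psql process straight into another
        producer = subprocess.Popen(["psql", SOURCE, "-c", copy_out], stdout=subprocess.PIPE)
        subprocess.run(["psql", DEST, "-c", copy_in], stdin=producer.stdout, check=True)
        producer.stdout.close()
        producer.wait()

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()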


Netlify Edge Functions: A new serverless runtime powered by Deno

Netlify Edge Functions: A new serverless runtime powered by Deno You can now run Deno scripts directly in Netlify's edge CDN - bundled as part of their default pricing plan. Interesting that they decided to host it on Deno's Deno Deploy infrastructure. The hello world example is pleasingly succinct: export default () => new Response("Hello world")

Netlify Edge Functions: A new serverless runtime powered by Deno

You can now run Deno scripts directly in Netlify's edge CDN - bundled as part of their default pricing plan. Interesting that they decided to host it on Deno's Deno Deploy infrastructure. The hello world example is pleasingly succinct:

export default () => new Response("Hello world")


Phil Windley's Technometria

John Oliver on Surveillance Capitalism

Summary: John Oliver's Last Week Tonight took on data brokers and surveillance capitalism in a recent show and he did his usual great job of explaining a serious topic in an entertaining way. Definitely worth the watch. Surveillance capitalism is a serious subject that can be hard to explain, let alone make interesting. I believe that it threatens our digital future. The question is "

Summary: John Oliver's Last Week Tonight took on data brokers and surveillance capitalism in a recent show and he did his usual great job of explaining a serious topic in an entertaining way. Definitely worth the watch.

Surveillance capitalism is a serious subject that can be hard to explain, let alone make interesting. I believe that it threatens our digital future. The question is "what to do about it?"

John Oliver's Last Week Tonight recently took on the task of explaining surveillance capitalism, how it works, and why it's a threat. I recommend watching it. Oliver does a great job of explaining something important, complex, and, frankly, a little boring in a way that is both funny and educational.

But he didn't just explain it. He took some steps to do something about it.

In researching this story, we realized that there is any number of perfectly legal bits of f—kery that we could engage in. We could, for example, use data brokers to go phishing for members of congress, by creating a demographic group consisting of men, age 45 and up, in a 5-mile radius of the U.S. Capitol, who had previously visited sites regarding or searched for terms including divorce, massage, hair loss and mid-life crisis.

The result is a collection of real data from their experiment that Oliver threatens to reveal if Congress doesn't act. The ads they ran were Marriage shouldn't be a prison, Can you vote twice?, and Ted Cruz erotic fan fiction. I'm not sure it will actually light a fire under so moribund an institution as Congress, but it's worth a shot!

Tags: privacy surveillance+capitalism humor

Monday, 18. April 2022

Simon Willison

How to push tagged Docker releases to Google Artifact Registry with a GitHub Action

How to push tagged Docker releases to Google Artifact Registry with a GitHub Action

Ben Welsh's writeup includes detailed step-by-step instructions for getting the mysterious "Workload Identity Federation" mechanism to work with GitHub Actions and Google Cloud. I've been dragging my heels on figuring this out for quite a while, so it's great to see the steps described at this level of detail.


Building a Covid sewage Twitter bot (and other weeknotes)

I built a new Twitter bot today: @covidsewage. It tweets a daily screenshot of the latest Covid sewage monitoring data published by Santa Clara county.

I'm increasingly distrustful of Covid numbers as fewer people are tested in ways that feed into the official statistics. But the sewage numbers don't lie! As the Santa Clara county page explains:

SARS-CoV-2 (the virus that causes COVID-19) is shed in feces by infected individuals and can be measured in wastewater. More cases of COVID-19 in the community are associated with increased levels of SARS-CoV-2 in wastewater, meaning that data from wastewater analysis can be used as an indicator of the level of transmission of COVID-19 in the community.

That page also embeds some beautiful charts of the latest numbers, powered by an embedded Observable notebook built by Zan Armstrong.

Once a day, my bot tweets a screenshot of those latest charts that looks like this:

How the bot works

The bot runs once a day using this scheduled GitHub Actions workflow.

Here's the bit of the workflow that generates the screenshot:

- name: Generate screenshot with shot-scraper
  run: |-
    shot-scraper https://covid19.sccgov.org/dashboard-wastewater \
      -s iframe --wait 3000 -b firefox --retina -o /tmp/covid.png

This uses my shot-scraper screenshot tool, described here previously. It takes a retina screenshot just of the embedded iframe, and uses Firefox because for some reason the default Chromium screenshot failed to load the embed.

This bit sends the tweet:

- name: Tweet the new image
  env:
    TWITTER_CONSUMER_KEY: ${{ secrets.TWITTER_CONSUMER_KEY }}
    TWITTER_CONSUMER_SECRET: ${{ secrets.TWITTER_CONSUMER_SECRET }}
    TWITTER_ACCESS_TOKEN_KEY: ${{ secrets.TWITTER_ACCESS_TOKEN_KEY }}
    TWITTER_ACCESS_TOKEN_SECRET: ${{ secrets.TWITTER_ACCESS_TOKEN_SECRET }}
  run: |-
    tweet-images "Latest Covid sewage charts for the SF Bay Area" \
      /tmp/covid.png --alt "Screenshot of the charts" > latest-tweet.md

tweet-images is a tiny new tool I built for this project. It uses the python-twitter library to send a tweet with one or more images attached to it.
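As a rough sketch of what a tool like that does (not its actual source), sending a tweet with an attached image via python-twitter looks roughly like this, assuming the same four credentials are exposed as environment variables:

import os
import twitter

# Build an authenticated client from the four credentials
api = twitter.Api(
    consumer_key=os.environ["TWITTER_CONSUMER_KEY"],
    consumer_secret=os.environ["TWITTER_CONSUMER_SECRET"],
    access_token_key=os.environ["TWITTER_ACCESS_TOKEN_KEY"],
    access_token_secret=os.environ["TWITTER_ACCESS_TOKEN_SECRET"],
)

# PostUpdate accepts a local file path (or a list of paths) for the media attachment
status = api.PostUpdate(
    "Latest Covid sewage charts for the SF Bay Area",
    media="/tmp/covid.png",
)
print(status.id)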

The hardest part of the project was getting the credentials for sending tweets with the bot! I had to go through Twitter's manual verification flow, presumably because I checked the "bot" option when I applied for the new developer account. I also had to figure out how to extract all four credentials (with write permissions) from the Twitter developer portal.

I wrote up full notes on this in a TIL: How to get credentials for a new Twitter bot.

Datasette for geospatial analysis

I stumbled across datanews/amtrak-geojson, a GitHub repository containing GeoJSON files (from 2015) showing all of the Amtrak stations and sections of track in the USA.

I decided to try exploring it using my geojson-to-sqlite tool, which revealed a bug triggered by records with a geometry but no properties. I fixed that in version 1.0.1, and later shipped version 1.1 with improvements by Chris Amico.

In exploring the Amtrak data I found myself needing to learn how to use the SpatiaLite GUnion function to aggregate multiple geometries together. This resulted in a detailed TIL on using GUnion to combine geometries in SpatiaLite, which further evolved as I used it as a chance to learn how to use Chris's datasette-geojson-map and sqlite-colorbrewer plugins.
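If you're curious what that aggregation looks like, here is a small hypothetical sketch using Python's sqlite3 module with the SpatiaLite extension loaded; the database file, table, and column names are made up for illustration.

import sqlite3

conn = sqlite3.connect("amtrak.db")
conn.enable_load_extension(True)
conn.load_extension("mod_spatialite")  # assumes SpatiaLite is installed on the system

# GUnion is a SpatiaLite aggregate: combine every track segment for a route
# into a single geometry, then serialize it as GeoJSON.
sql = """
    select route, AsGeoJSON(GUnion(geometry)) as combined
    from amtrak_track
    group by route
"""
for route, combined in conn.execute(sql):
    print(route, combined[:60], "...")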

This was so much fun that I was inspired to add a new "uses" page to the official Datasette website: Datasette for geospatial analysis now gathers together links to plugins, tools and tutorials for handling geospatial data.

sqlite-utils 3.26

I'll quote the release notes for sqlite-utils 3.26 in full:

- New errors=r.IGNORE/r.SET_NULL parameter for the r.parsedatetime() and r.parsedate() convert recipes. (#416)
- Fixed a bug where --multi could not be used in combination with --dry-run for the convert command. (#415)
- New documentation: Using a convert() function to execute initialization. (#420)
- More robust detection for whether or not deterministic=True is supported. (#425)

shot-scraper 0.12

In addition to support for WebKit contributed by Ryan Murphy, shot-scraper 0.12 adds options for taking a screenshot that encompasses all of the elements on a page that match a CSS selector.

It also adds a new --js-selector option, suggested by Tony Hirst. This covers the case where you want to take a screenshot of an element on the page that cannot be easily specified using a CSS selector. For example, this expression takes a screenshot of the first paragraph on a page that includes the text "shot-scraper":

shot-scraper https://simonwillison.net/2022/Apr/8/weeknotes/ \
  --js-selector 'el.tagName == "P" && el.innerText.includes("shot-scraper")' \
  --padding 15 --retina

And an airship museum!

I finally got to add another listing to my www.niche-museums.com website about small or niche museums I have visited.

The Moffett Field Historical Society museum in Mountain View is situated in the shadow of Hangar One, an airship hangar built in 1933 to house the mighty USS Macon.

It's the absolute best kind of local history museum. Our docent was a retired pilot who had landed planes on aircraft carriers using the kind of equipment now on display in the museum. They had dioramas and models. They even had a model railway. It was superb.

Releases this week

tweet-images: 0.1.1 - (2 releases total) - 2022-04-17
  Send tweets with images from the command line
asyncinject: 0.3 - (5 releases total) - 2022-04-16
  Run async workflows using pytest-fixtures-style dependency injection
geojson-to-sqlite: 1.1.1 - (11 releases total) - 2022-04-13
  CLI tool for converting GeoJSON files to SQLite (with SpatiaLite)
sqlite-utils: 3.26 - (99 releases total) - 2022-04-13
  Python CLI utility and library for manipulating SQLite databases
summarize-template: 0.1 - 2022-04-13
  Show a summary of a Django or Jinja template
shot-scraper: 0.12 - (13 releases total) - 2022-04-11
  Tools for taking automated screenshots of websites

TIL this week

GUnion to combine geometries in SpatiaLite
Trick Apple Photos into letting you access your video files
How to get credentials for a new Twitter bot

Saturday, 16. April 2022

Jon Udell

Capture the rain

It’s raining again today, and we’re grateful. This will help put a damper on what was shaping up to be a terrifying early start of fire season. But the tiny amounts won’t make a dent in the drought. The recent showers bring us to 24 inches of rain for the season, about 2/3 of normal. But 10 of those 24 inches came in one big burst on Oct 24.

Here are a bunch of those raindrops sailing down the Santa Rosa creek to the mouth of the Russian River at Jenner.

With Sam Learner’s amazing River Runner we can follow a drop that fell in the Mayacamas range as it makes its way to the ocean.

Until 2014 I’d only ever lived east of the Mississippi River, in Pennsylvania, Michigan, Maryland, Massachusetts, and New Hampshire. During those decades there may never have been a month with zero precipitation.

I still haven’t adjusted to a region where it can be dry for many months. In 2017, the year of the devastating Tubbs Fire, there was no rain from April through October.

California relies heavily on the dwindling Sierra snowpack for storage and timed release of water. Clearly we need a complementary method of storage and release, and this passage in Kim Stanley Robinson’s Ministry for the Future imagines it beautifully.

Typically the Sierra snowpack held about fifteen million acre-feet of water every spring, releasing it to reservoirs in a slow melt through the long dry summers. The dammed reservoirs in the foothills could hold about forty million acre-feet when full. Then the groundwater basin underneath the central valley could hold around a thousand million acre-feet; and that immense capacity might prove their salvation. In droughts they could pump up groundwater and put it to use; then during flood years they needed to replenish that underground reservoir, by capturing water on the land and not allow it all to spew out the Golden Gate.

Now the necessity to replumb the great valley for recharge had forced them to return a hefty percentage of the land to the kind of place it had been before Europeans arrived. The industrial agriculture of yesteryear had turned the valley into a giant factory floor, bereft of anything but products grown for sale; unsustainable ugly, devastated, inhuman, and this in a place that had been called the “Serengeti of North America,” alive with millions of animals, including megafauna like tule elk and grizzly bear and mountain lion and wolves. All those animals had been exterminated along with their habitat, in the first settlers’ frenzied quest to use the valley purely for food production, a kind of secondary gold rush. Now the necessity of dealing with droughts and floods meant that big areas of the valley were restored, and the animals brought back, in a system of wilderness parks or habitat corridors, all running up into the foothills that ringed the central valley on all sides.

The book, which Wikipedia charmingly classifies as cli-fi, grabbed me from page one and never let go. It’s an extraordinary blend of terror and hope. But this passage affected me in the most powerful way. As Marc Reisner’s Cadillac Desert explains, and as I’ve seen for myself, we’ve already engineered the hell out of California’s water systems, with less than stellar results.

Can we redo it and get it right this time? I don’t doubt our technical and industrial capacity. Let’s hope it doesn’t take an event like the one the book opens with — a heat wave in India that kills 20 million people in a week — to summon the will.


Werdmüller on Medium

Elon, Twitter, and the future of social media

There’s no world where nationalists get what they want.

Continue reading on Medium »

Wednesday, 13. April 2022

Habitat Chronicles

Game Governance Domains: a NFT Support Nightmare

“I was working on an online trading-card game in the early days that had player-to-player card trades enabled through our servers. The vast majority of our customer support emails dealt with requests to reverse a trade because of some kind of trade scams. When I saw Hearthstone’s dust system, I realized it was genius; they probably cut their support costs by around 90% with that move alone.”

Ian Schreiber
A Game’s Governance Domain

There have always been key governance requirements for object trading economies in online games, even before user-generated-content enters the picture.  I call this the game’s object governance domain.

Typically, an online game object governance domain has the following features (amongst others omitted for brevity):

- There is usually at least one fungible token currency
- There is often a mechanism for player-to-player direct exchange
- There is often one or more automatic markets to exchange between tokens and objects
  - May be player-to-player transactions
  - May be operator-to-player transactions (aka vending and recycling machinery)
  - Managed by the game operator
- There is a mechanism for reporting problems/disputes
- There is a mechanism for adjudicating conflicts
- There are mechanisms for resolving disputes, including:
  - Reversing transactions
  - Destroying objects
  - Minting and distributing objects
  - Minting and distributing tokens
  - Account, Character, and Legal sanctions
  - Rarely: Changes to TOS and Community Guidelines


In short, the economy is entirely in the ultimate control of the game operator. In effect, anything can be “undone” and injured parties can be “made whole” through an entire range of solutions.

Scary Future: Crypto? Where’s Undo?

Introducing blockchain tokens (BTC, for example) means that certain transactions become “irreversible”, since all transactions on the chain are 1) Atomic and 2) Expensive. In contrast, many thousands of credit-card transactions are reversed every minute of every day (accidental double charges, stolen cards, etc.) Having a market to sell an in-game object for BTC will require extending the governance domain to cover very specific rules about what happens when the purchaser has a conflict with a transaction. Are you really going to tell customers “All BTC transactions are final. No refunds. Even if your kid spent the money without permission. Even if someone stole your wallet”?

Nightmare Future: Game UGC & NFTs? Ack!

At least with your own game governance domain, you had complete control over IP presented in your game and some control, or at least influence, over the game's economy. But it gets pretty intense to think about objects/resources created by non-employees being purchased/traded on markets outside of your game governance domain.

When your game allows content that was not created within that game’s governance domain, all bets are off when it comes to trying to service customer support calls. And there will be several orders of magnitude more complaints. Look at Twitter, Facebook, and Youtube and all of the mechanisms they need to support IP-related complaints, abuse complaints, and robot-spam content. Huge teams of folks spending millions of dollars in support of Machine Learning are not able to stem the tide. Those companies’ revenue depends primarily on UGC, so that’s what they have to deal with.

NFTs are no help. They don’t come with any governance support whatsoever. They are an unreliable resource pointer. There is no way to make any testable claims about any single attribute of the resource. When they point to media resources (video, jpg, etc.) there is no way to verify that the resource reference is valid or legal in any governance domain. Might as well be whatever someone randomly uploaded to a photo service – oh wait, it is.

NFTs have been stolen, confused, hijacked, phished, rug-pulled, wash-traded, etc. NFT Images (like all internet images) have been copied, flipped, stolen, misappropriated, and explicitly transformed. There is no undo, and there is no governance domain. OpenSea, because they run a market, gets constant complaints when there is a problem, but they can’t reverse anything. So they madly try to “prevent bad listings” and “punish bad accounts” – all closing the barn door after the horse has left. Oh, and now they are blocking IDs/IPs from sanctioned countries.

So, even if a game tries to accept NFT resources into their game – they end up in the same situation as OpenSea – inheriting all the problems of irreversibility, IP abuse, plus new kinds of harassment with no real way to resolve complaints.

Until blockchain tokens have RL-bank-style undo, and decentralized trading systems provide mechanisms for a reasonable standard of governance, online games should probably just stick with what they know: “If we made it, we’ll deal with any governance problems ourselves.”


Phil Windley's Technometria

Easier IoT Deployments with LoraWan and Helium

Summary: Connectivity requirements add lots of friction to large-scale IoT deployments. LoRaWAN, and the Helium network, just might be a good, universal solution.

I've been interested in the internet of things (IoT) for years, even building and selling a connected car product called Fuse at one point. One of the hard parts of IoT is connectivity, getting the sensors on some network so they can send data back to wherever it's aggregated, analyzed, or used to take action. Picos are a good solution for the endpoint—where the data ends up—but the sensor still has to get connected to the internet.

Wifi, Bluetooth, and cellular are the traditional answers. Each has their limitations in IoT.

- Wifi has limited range and, outside the home environment, usually needs a separate device-only network because of different authentication requirements. If you're doing a handful of devices it's fine, but it doesn't easily scale to thousands. Wifi is also power hungry, making it a poor choice for battery-powered applications.
- Bluetooth's range is even more limited, requiring the installation of Bluetooth gateways. Bluetooth is also not very secure. Bluetooth is relatively good with power. I've had a temperature sensor on Bluetooth that ran over a year on a 2025 battery. But still, battery replacement can end up being a real maintenance headache.
- Cellular is relatively ubiquitous, but it can be expensive and hard to manage. Batteries work for cell phones because people charge them every night. That's not reasonable for many IoT applications, so cellular-based sensors usually need to be powered.

Of course, there are other choices using specialized IoT protocols like ZWave, Zigbee, and Insteon, for example. These all require specialized hubs that must be bought, managed, and maintained. To avoid single points of failure, multiple hubs are needed. For a large industrial deployment this might be worth the cost and effort. Bottom line: Every large IoT project spends a lot of time and money designing and managing the connectivity infrastructure. This friction reduces the appeal of large-scale IoT deployments.

Enter LoRaWAN, a long-range (10km), low-power wireless protocol for IoT. Scott Lemon told me about LoRaWAN recently and I've been playing with it a bit. Specifically, I've been playing with Helium, a decentralized LoRaWAN network.

Helium is a LoRaWAN network built from hotspots run by almost anyone. In one of the most interesting uses of crypto I've seen, Helium pays people helium tokens for operating hotspots. They call the model "proof of coverage". You get paid two ways: (1) providing coverage for a given geographical area and (2) moving packets from the radio to the internet. This model has provided amazing coverage with over 700,000 hotspots deployed to date. And Helium expended very little capital to do it, compared with building out the infrastructure on their own.

I started with one of these Dragino LHT65 temperature sensors. The fact that I hadn't deployed my own hotspot was immaterial because there's plenty of coverage around me.

LHT65 Temperature Sensor (click to enlarge)

Unlike a Wifi network, you don't put the network credentials in the device, you put the device's credentials (keys) in the network. Once I'd done that, the sensor started connecting to hotspots near my house and transmitting data. Today I've been driving around with it in my truck and it's roaming onto other hotspots as needed, still reporting temperatures.

Temperature Sensor Coverage on Helium (click to enlarge)

Transmitting data on the Helium network costs money. You pay for data use with data credits (DC). You buy DC with the Helium token (HNT). Each DC costs a fixed rate of $0.00001 per 24 bytes of data. That's about $0.42/Mb, which isn't dirt cheap when compared to your mobile data rate, but you're only paying for the data you use. For 100 sensors, transmitting 3 packets per hour for a year would cost $2.92. If each of those sensors needed a SIM card and cellular account, the comparable price would be orders of magnitude higher. So, the model fits IoT sensor deployments well. And the LHT65 has an expected battery life of 10 years (at 3 packets per hour) which is also great for large-scale sensor deployments.
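As a quick sanity check on that per-megabyte figure, the arithmetic (using decimal megabytes) is:

# One data credit covers a 24-byte payload and costs $0.00001
dc_price = 0.00001
bytes_per_dc = 24
cost_per_mb = (1_000_000 / bytes_per_dc) * dc_price
print(f"${cost_per_mb:.2f} per MB")  # -> $0.42 per MB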

Being able to deploy sensors without having to also worry about building and managing the connection infrastructure is a big deal. I could put 100 sensors up around a campus, a city, a farm, or just about anywhere and begin collecting the data from them without worrying about the infrastructure, the cost, or maintenance. My short term goal is to start using these with Picos and build out some rulesets and the UI for using and managing LoRaWAN sensors. I also have one of these SenseCAP M1 LoRaWAN gateways that I'm going to deploy in Idaho later (there are already several hotspots near my home in Utah). I'll let you know how all this goes.

Photo Credit: Helium discharge tube from Heinrich Pniok (CC BY-NC-ND 3.0). Image was cropped vertically.

Tags: iot helium lorawan picos

Monday, 11. April 2022

Justin Richer

The GNAPathon

At the recent IETF 113 meeting in Vienna, Austria, we put the GNAP protocol to the test by submitting it as a Hackathon project. Over the course of the weekend, we built out GNAP components and pointed them at each other to see what stuck. Here’s what we learned.

Our Goals

GNAP is a big protocol, and there was no reasonable way for us to build out literally every piece and option of it in our limited timeframe. While GNAP’s transaction negotiation patterns make the protocol fail gracefully when two sides don’t have matching features, we wanted to aim for success. As a consequence, we decided to focus on a few key interoperability points:

- HTTP Message Signatures for key proofing, with Content Digest for protecting the body of POST messages.
- Redirect-based interaction, to get there and back.
- Dynamic keys, not relying on pre-registration at the AS.
- Single access tokens.

While some of the components built out did support additional features, these were the ones we chose as a baseline to make everything work as best as it could. We laid out our goals to get these components to talk to each other in increasingly complete layers.

Our goal for the hackathon wasn't just to create code; we wanted to replicate a developer's experience when approaching GNAP for the first time. Wherever possible, we tried to use libraries to cover existing functionality, including HTTP Signatures, cryptographic primitives, and HTTP Structured Fields. We also used the existing XYZ Java implementation of GNAP to test things out.

New Clients

With all of this in hand, we set about building some clients from scratch. Since we had a functioning AS to build against, focusing on the clients allowed us to address different platforms and languages than we otherwise had. We settled on three very different kinds of client software:

- A single page application, written in JavaScript with no backend components.
- A command line application, written in PHP.
- A web application, written in PHP.

By the end of the weekend, we were able to get all three of these working, and the demonstration results are available as part of the hackathon readout. This might not seem like much, but the core functionality of all three clients was written completely from scratch, including the HTTP Signatures implementation.

Getting Over the Hump

Importantly, we also tried to work in such a way that the different components could be abstracted out after the fact. While we could have written very GNAP-specific code to handle the key handling and signing, we opted to instead create generic functions that could sign and present any HTTP message. This decision had two effects.

First, once we had the signature method working, the rest of the GNAP implementation went very, very quickly. GNAP is designed in such a way as to leverage HTTP, JSON, and security layers like HTTP Message Signatures as much as it can. What this meant for us during implementation is that getting the actual GNAP exchange to happen was a simple set of HTTP calls and JSON objects. All the layers did their job appropriately, keeping abstractions from leaking between them.

Second, this will give us a chance to extract the HTTP Message Signature code into truly generic libraries across different languages. HTTP Message Signatures is used in places other than GNAP, and so a GNAP implementor is going to want to use a dedicated library for this core function instead of having to write their own like we did.

We had a similar reaction to elements like structured field libraries, which helped with serialization and message-building, and cryptographic functions. As HTTP Message Signatures in particular gets built out more across different ecosystems, we’ll see more and more support for fundamental tooling.

Bug Fixes

Another important part of the hackathon was the discovery and patching of bugs in the existing XYZ authorization server and Java Servlet web-based client code. At the beginning of the weekend, these pieces of software worked with each other. However, it became quickly apparent that there were a number of issues and assumptions in the implementation. Finding things like this is one of the best things that can come out of a hackathon — by putting different code from different developers against each other, you can figure out where code is weak, and sometimes, where the specification itself is unclear.

Constructing the Layers

Probably the most valuable outcome of the hackathon, besides the working code itself, is a concrete appreciation of how clear the spec is from the eyes of someone trying to build to it. We came out of the weekend with a number of improvements that need to be made to GNAP and HTTP Message Signatures, but also ideas on what additional developer support there should be in the community at large. These things will be produced and incorporated over time, and hopefully make the GNAP ecosystem brighter and stronger as a result.

In the end, a specification isn’t real unless you have running code to prove it. Even more if people can use that code in their own systems to get real work done. GNAP, like most standards, is just a layer in the internet stack. It builds on technologies and technologies will be built on it.

Our first hackathon experience has shown this to be a pretty solid layer. Come, build with us!


Doc Searls Weblog

What’s up with Dad?

My father was always Pop. He was born in 1908. His father, also Pop, was born in 1863. That guy’s father was born in 1809, and I don’t know what his kids called him. I’m guessing, from the chart above, it was Pa. My New Jersey cousins called their father Pop. Uncles and their male contemporaries of the same generation in North Carolina, however, were Dad or Daddy.

To my kids, I’m Pop or Papa. Family thing, again.

Anyway, I’m wondering what’s up, or why’s up, with Dad?

 


reb00ted

Web2's pervasive blind spot: governance

What is the common theme in these commonly stated problems with the internet today?

- Too much tracking you from one site to another.
- Wrong approach to moderation (too heavy-handed, too light, inconsistent, contextually inappropriate, etc.).
- Too much fake news.
- Too many advertisements.
- Products that make you addicted, or are otherwise bad for your mental health.

In my view, the common theme underlying these problems is: “The wrong decisions were made." That’s it. Not technology, not product, not price, not marketing, not standards, not legal, nor whatever else. Just that the wrong decisions were made.

Maybe it was:

The wrong people made the decisions. Example: should it really be Mark Zuckerberg who decides which of my friends' posts I see?

The wrong goals were picked by the decisionmakers and they are optimizing for those. Example: I don’t want to be “engaged” more and I don’t care about another penny per share for your earnings release.

A lack of understanding or interest in the complexity of a situation, and inability for the people with the understanding to make the decision instead. Example: are a bunch of six-figure Silicon Valley guys really the ones who should decide what does and does not inflame religious tensions in a low-income country half-way around the world with a societal structure that’s fully alien to liberal Northern California?

What do we call the thing that deals with who gets to decide, who has to agree, who can keep them from doing bad things and the like? Yep, it’s “governance”.

Back in the 1980s and 90s, all we cared about was code. So when the commercial powers started abusing their power, in the minds of some users, those users pushed back with projects such as GNU and open-source.

But we’ve long moved on from there. In one of the defining characteristics of Web2 over Web1, data has become more important than the code.

Starting about 15 years ago, it was suddenly the data scientists and machine learning people who started getting the big bucks, not the coders any more. Today the fight is not about who has the code any more; it is about who has the data.

Pretty much the entire technology industry understands that now. What it doesn’t understand yet is that the consumer internet crisis we are in is best understood as a need to add another layer to the sandwich: not just the right code plus the right data, but also the right governance: have the right people decide for the right reasons, and the mechanisms to get rid of the decisionmakers if the affected community decides they made the wrong decisions or had the wrong reasons.

Have you noticed that pretty much all senior technologists that dismiss Web3 — usually in highly emotional terms – completely ignore that pretty much all the genuinely interesting innovations in the Web3 world are governance innovations? (never mind blockchain, it’s just a means to an end for those innovators).

If we had governance as part of the consumer technology sandwich, then:

Which of my friends’ posts I see should be a decision that I make with my friends, and nobody else gets a say.

Whether a product optimizes for this or that should be a decision that is made by its users, not some remote investors or power-hungry executives.

A community of people half-way around the world should determine, on its own for its own purposes, what is good for its members.

(If we had a functioning competitive marketplace, Adam Smith-style, then we would probably get this, because products that do what the customers want win over products that don’t. But we have monopolies instead that cement the decisionmaking in the wrong places for the wrong reasons. A governance problem, in other words.)

If you want to get ahead of the curve, pay attention to this. All the genuinely new stuff in technology that I’ve seen for a few years has genuinely new ideas about governance. It’s a complete game changer.

Conversely, if you build technology with the same rudimentary, often dictatorial and almost always dysfunctional governance we have had for technology in the Web1 and Web2 world, you are fundamentally building a solution for the past, not for the future.

To be clear, better governance for technology is in the pre-kindergarten stage. It’s like the Apple 1 of the personal computer – assembly required – or the Archie stage of the internet. But we would have been wrong to dismiss those as mere fads then, and it would be wrong to dismiss the crucial importance of governance now.

That, for me, is the essence of how the thing after Web2 – and we might as well call it Web3 – is different. And it is totally exciting! Because “better governance” is just another way to say: the users get to have a say!!

Thursday, 07. April 2022

Identity Woman

Media Mention: MIT Technology Review

I was quoted in the article in MIT Technology Review on April 6, 2022, “Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users.” Worldcoin, a startup built on a promise of a fairly-distributed, cryptocurrency-based universal basic income, is building a biometric database by collecting data from the financially […]

The post Media Mention: MIT Technology Review appeared first on Identity Woman.

Monday, 04. April 2022

Damien Bod

Implementing OAuth2 Client credentials flow APP to APP security using Azure AD non interactive

This article shows how to implement the OAuth client credentials flow using the Microsoft.Identity.Client Nuget package and Azure AD to create an Azure App registration. The client application requires a secret, which can be an Azure App registration client secret or a certificate, to request an access token. Only tokens created for this client can be used to access the API.

Code: Azure Client credentials flows

Blogs in this series

Implementing OAuth2 APP to APP security using Azure AD from a Web APP
APP to APP security using Azure AD from a daemon app

Azure App registration setup

The Azure App registration is set up as in this blog:

Implementing OAuth2 APP to APP security using Azure AD from a Web APP

An Azure App registration was then created to request new access tokens. The access_as_application claim is validated in the API.

API

The service API is implemented to validate the access tokens. The azp claim is used to validate that the token was requested using the known client ID and the secret. It does not validate who sent the access token, just that this client ID and secret were used to request the access token. The client credentials flow should only be used by trusted clients. The azpacr claim is used to validate how the token was requested. This is a confidential client. Microsoft.Identity.Web is used to implement the API security.

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
IdentityModelEventSource.ShowPII = true;
JwtSecurityTokenHandler.DefaultMapInboundClaims = false;

services.AddSingleton<IAuthorizationHandler, HasServiceApiRoleHandler>();
services.AddMicrosoftIdentityWebApiAuthentication(Configuration);
services.AddControllers();

services.AddAuthorization(options =>
{
    options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
    {
        validateAccessTokenPolicy.Requirements.Add(new HasServiceApiRoleRequirement());

        // Validate id of application for which the token was created
        // In this case the CC client application
        validateAccessTokenPolicy.RequireClaim("azp", "b178f3a5-7588-492a-924f-72d7887b7e48");

        // only allow tokens which used "Private key JWT Client authentication"
        // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
        // Indicates how the client was authenticated. For a public client, the value is "0".
        // If client ID and client secret are used, the value is "1".
        // If a client certificate was used for authentication, the value is "2".
        validateAccessTokenPolicy.RequireClaim("azpacr", "1");
    });
});

Microsoft.Identity.Client OAuth Client credentials client

A console application is used to implement the client credentials trusted application. A console application cannot be trusted unless it is deployed to a trusted host. It would be better to use a certificate to secure this, and even better if this was not a console application but instead some server-deployed Azure service which uses Key Vault to persist its secrets and managed identities to access the secret. The ConfidentialClientApplicationBuilder is used to create a new CC flow. The access token is used to access the API.

using System.Net.Http.Headers;
using Microsoft.Extensions.Configuration;
using Microsoft.Identity.Client;

var builder = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddUserSecrets("78cf2604-554c-4a6e-8846-3505f2c0697d")
    .AddJsonFile("appsettings.json");

var configuration = builder.Build();

// 1. Create the client credentials client
var app = ConfidentialClientApplicationBuilder.Create(configuration["AzureADServiceApi:ClientId"])
    .WithClientSecret(configuration["AzureADServiceApi:ClientSecret"])
    .WithAuthority(configuration["AzureADServiceApi:Authority"])
    .Build();

var scopes = new[] { configuration["AzureADServiceApi:Scope"] };

// 2. Get access token
var authResult = await app.AcquireTokenForClient(scopes)
    .ExecuteAsync();

if (authResult == null)
{
    Console.WriteLine("no auth result... ");
}
else
{
    Console.WriteLine(authResult.AccessToken);

    // 3. Use access token to access the API
    var client = new HttpClient
    {
        BaseAddress = new Uri(configuration["AzureADServiceApi:ApiBaseAddress"])
    };

    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
    client.DefaultRequestHeaders.Accept
        .Add(new MediaTypeWithQualityHeaderValue("application/json"));

    var response = await client.GetAsync("ApiForServiceData");
    if (response.IsSuccessStatusCode)
    {
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

The console application uses app settings to load the trusted client. The Scope is defined to use the application ID from the Azure App registration and the .default scope. Then any API definitions will be added to the access token. It is important to use V2 access tokens, which can be defined in the manifest.

{ "AzureADServiceApi": { "ClientId": "b178f3a5-7588-492a-924f-72d7887b7e48", // "ClientSecret": "--in-user-secrets--", // Authority Guid = tenanant ID "Authority": "https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1", "ApiBaseAddress": "https://localhost:44324", "Scope": "api://b178f3a5-7588-492a-924f-72d7887b7e48/.default" } }

The access token returned contains the azp, roles and azpacr claims. These claims, as well as the standard claim values, are used to authorize each request. Authorize attributes are used with a matching scheme and an authorization policy which uses the claims to validate. The Azure definitions are used in the policies and the policies are used in the application. You should not use the Azure definitions directly in the application. Avoid using Roles or RequiredScope directly in controllers or specific application parts. Map these in the IClaimsTransformation or the OnTokenValidated method.

"iss": "https://login.microsoftonline.com/7ff95b15-dc21-4ba6-bc92-824856578fc1/v2.0", "iat": 1648363449, "nbf": 1648363449, "exp": 1648367349, "aio": "E2ZgYLh1abHAkpeHvuz/fjX9QMNZDgA=", "azp": "b178f3a5-7588-492a-924f-72d7887b7e48", "azpacr": "1", "oid": "3952ce95-8b14-47b4-b3e6-2a5521d35ed1", "rh": "0.AR8AFVv5fyHcpku8koJIVlePwaXzeLGIdSpJkk9y14h7fkgfAAA.", "roles": [ "access_as_application", "service-api" ], "sub": "3952ce95-8b14-47b4-b3e6-2a5521d35ed1", "tid": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "uti": "WDee3wGpJkeJGUMN5CDOAA", "ver": "2.0" }

Implementing the application permissions in this way makes it possible to secure any daemon application, or application flow with no user, for any server deployment. The client can be hosted in an ASP.NET Core application which authenticates using Azure B2C or a service with no user interaction. The client and the API work alone. It is important that the client can be trusted to secure the secret used to request the access tokens.

Links:

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop

https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles?tabs=aspnetcore


Randall Degges

Real Estate vs Stocks

As I’ve mentioned before, I’m a bit of a personal finance nerd. I’ve been carefully tracking my spending and investing for many years now. In particular, I find the investing side of personal finance fascinating.

For the last eight years, my wife and I have split our investments roughly 50⁄50 between broadly diversified index funds and real estate (rental properties).

Earlier this week, I was discussing real estate investing with some friends, and we had a great conversation about why you might even consider investing in real estate in the first place. As I explained my strategy to them, I thought it might make for an interesting blog post (especially if you’re new to the world of investing).

Please note that I’m not an expert, just an enthusiastic hobbyist. Like all things I work on, I like to do a lot of research, experimentation, etc., but don’t take this as financial advice.

Why Invest in Stocks

Before discussing whether real estate or stocks is the better investment, let’s talk about how stocks work. If you don’t understand how to invest in stocks (and what rewards you can expect from them), the comparison between real estate and stocks will be meaningless.

What is a Stock?

Stocks are the simplest form of investment you can make. If you buy one share of Tesla stock for $100, you’re purchasing one tiny sliver of the entire company and are now a part-owner!

Each stock you hold can either earn or lose money, depending on how the company performs. For example, if Tesla doesn’t sell as many vehicles as the prior year, it’s likely that the company will not make as much money and will therefore be worth less than it was a year ago, so the value of the stock might drop. In this case, the one share of Tesla stock you purchased for $100 might only be worth $90 (a 10% drop in value!).

But, stocks can also make you money. If Tesla sells more vehicles than anyone expected, the company might be worth more, and now your one share of Tesla stock might be worth $110 (a 10% gain!). This gain is referred to as appreciation because the value of your stock has appreciated.

In addition to appreciation, you can also make money through dividends. While some companies choose to take any profits they make and reinvest them into the business to make more products, conduct research, etc., some companies take their profits and split them up amongst their shareholders. We call this distribution a dividend. When a dividend is paid, you’ll receive a set amount of money per share as a shareholder. For example, if Tesla issues a 10 cent dividend per share, you’ll receive $0.10 of spending money as the proud owner of one share of Tesla stock!

But here’s the thing, investing in stocks is RISKY. It’s risky because companies make mistakes, and even the most highly respected and valuable companies today can explode overnight and become worthless (Enron, anyone?). Because of this, generally speaking, it’s not advisable to ever buy individual stocks.

Instead, the best way to invest in stocks is by purchasing index funds.

What is an Index Fund?

Index funds are stocks you buy that are essentially collections of other stocks. If you invest in Vanguard’s popular VTSAX index fund, for example, you’re buying a small amount of all publicly traded companies in the US.

This approach is much less risky than buying individual stocks because VTSAX is well-diversified. If any of the thousands of companies in the US goes out of business, it doesn’t matter to you because you only own a very tiny amount of it.

The way index funds work is simple: if the value of the index as a whole does well (the US economy in our example), the value of your index fund rises. If the value of the index as a whole does poorly, the value of your index fund drops. Simple!

How Well Do Index Funds Perform?

Let’s say you invest your money into VTSAX and now own a small part of all US companies. How much money can you expect to make?

While there’s no way to predict the future, what we can do is look at the past. By looking at the average return of the stock market since 1926 (when the first index was created), you can see that the average return of the largest US companies has been ~10% annually (before inflation).

If you were to invest in VTSAX over a long period of time, it’s historically likely that you’ll earn an average of 10% per year. And understanding that the US market averages 10% per year is exciting because if you invest a little bit of money each month into index funds, you’ll become quite wealthy.

If you plug some numbers into a compound interest calculator, you’ll see what I mean.

For example, if you invest $1,000 per month into index funds for 30 years, you’ll end up with $2,171,321.10. If you start working at 22, then by the time you’re 52, you’ll have over two million dollars: not bad!
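If you want to reproduce that figure, a few lines of Python will do it. This particular total assumes 10% annual compounding with $12,000 contributed at the start of each year; other calculators make slightly different compounding assumptions and will land on slightly different numbers.

# $1,000/month treated as $12,000 contributed at the start of each year,
# compounded annually at 10% for 30 years.
balance = 0.0
for year in range(30):
    balance += 12_000
    balance *= 1.10
print(f"${balance:,.2f}")  # -> $2,171,321.10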

How Much Money Do I Need to Retire if I Invest in Index Funds?

Now that you know how index funds work and how much they historically earn, you might be wondering: how much money do I need to invest in index funds before I can retire?

As it turns out, there’s a simple answer to this question, but before I give you the answer, let’s talk about how this works.

Imagine you have one million dollars invested in index funds that earn an average of 10% yearly. You could theoretically sell 10% of your index funds each year and never run out of money in this scenario. Or at least, this makes sense at first glance.

Unfortunately, while it’s true that the market has returned a historical average of 10% yearly, this is an average, and actual yearly returns vary significantly by year. For example, you might be up 30% one year and down 40% the next.

This unpredictability year-over-year makes it difficult to safely withdraw money each year without running out of money due to sequence of return risk.

Essentially, while it’s likely that you’ll earn 10% per year on average if you invest in a US index fund, you will likely run out of money if you sell 10% of your portfolio per year due to fluctuating returns each year.
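Here is a toy illustration of that effect, not the actual historical research: the same two yearly returns applied in a different order, with a fixed $40k withdrawn at the start of each year, leave you with noticeably different balances.

def run(returns, start=1_000_000, withdrawal=40_000):
    balance = start
    for r in returns:
        balance = (balance - withdrawal) * (1 + r)  # withdraw, then apply the year's return
    return balance

print(f"{run([0.30, -0.40]):,.0f}")  # good year first -> 724,800
print(f"{run([-0.40, 0.30]):,.0f}")  # bad year first  -> 696,800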

Luckily, a lot of research has been done on this topic, and the general consensus is that if you only withdraw 4% of your investments per year, you’ll have enough money to last you a long time (a 30-year retirement). This is known as the 4% rule and is the gold standard for retirement planning.

Using the 4% rule as a baseline, you can quickly determine how much money you need to invest to retire with your desired spending.

For example, let’s say you want to retire and live off $100k per year. In this case, $100k is 4% of $2.5m, so you’ll need at least $2.5m invested to retire safely.

PRO TIP: You can easily calculate how much you need invested to retire if you simply take your desired yearly spend and multiply it by 25. For example, $40k * 25 = $1m, $100k * 25 = $2.5m, etc.
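In code, the rule of thumb from the tip above is just multiplication by 25 (equivalently, dividing by the 4% withdrawal rate):

for annual_spend in (40_000, 100_000):
    required = annual_spend * 25  # same as annual_spend / 0.04
    print(f"${annual_spend:,} per year -> ${required:,} invested")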

By only withdrawing 4% of your total portfolio per year, it’s historically likely that you’ll never run out of money over 30 years. Need a longer retirement? You may want to aim for a 3.5% withdrawal rate (or lower).

Should I Invest in Index Funds?

I’m a big fan of index fund investing, which is why my wife and I put 50% of our money into index funds.

- Index funds are simple to purchase and sell (you can do it in seconds using an investment broker like Vanguard)
- Index funds have an excellent historical track record (10% average yearly returns is fantastic!)
- Index funds are often tax-advantaged (they are easy to purchase through a company 401k plan, IRA, or other tax-sheltered accounts)

Why Invest in Real Estate?

Now that we’ve discussed index funds, how they work, what returns you can expect if you invest in index funds, and how much money you need to invest to retire using index funds, we can finally talk about real estate.

What Qualifies as a Real Estate Investment?

Like stocks and other types of securities, there are multiple ways to invest in real estate. I’m going to cover the most basic form of real estate investing here, but know that there are many other ways to invest in real estate that I won’t cover today due to how complex it can become.

At a basic level, investing in real estate means you’re purchasing a property: a house, condo, apartment building, piece of land, commercial building, etc.

How Do Real Estate Investors Make Money?

There are many ways to make money through investing in real estate. Again, I’m only going to cover the most straightforward ways here due to the topic’s complexities.

Let’s say you own an investment property. The typical ways you might make money from this investment are:

- Renting the property out for a profit
- Owning the property as its value rises over time. For example, if you purchased a house ten years ago for $100k that is worth $200k today, you’ve essentially “earned” $100k in profit, even if you haven’t yet sold the property. This is called appreciation.

Simple, right?

What’s One Major Difference Between Index Funds and Real Estate?

One of the most significant differences between real estate investing and index fund investing is leverage.

When you invest in an index fund like VTSAX, you’re buying a little bit of the index using your own money directly. This means if you purchase $100k of index funds and earn 10% on your money, you’ll have $110k of investments.

On the other hand, real estate is often purchased using leverage (aka: bank loans). It’s common to buy an investment property and only put 20-25% of your own money into the investment while seeking a mortgage from a bank to cover the remaining 75-80%.

The benefit of using leverage is that you can stretch your money further. For example, let’s say you have $100k to invest. You could put this $100k into VTSAX or purchase one property worth $500k (20% down on a $500k property means you only need $100k as a down payment).

Imagine these two scenarios:

- Scenario 1: You invest $100k in VTSAX and earn precisely 10% per year
- Scenario 2: You put a $100k down payment on a $500k property that you rent out for a profit of $500 per month after expenses (we call this cash flow), and this property appreciates at a rate of 6% per year. Also, assume that you can secure a 30-year fixed-rate loan for the remaining $400k at a 4.5% interest rate.

After ten years, in Scenario 1, you’ll have $259,374.25. Not bad! That’s a total profit of $159,374.25.

But what will you have after ten years in Scenario 2?

In Scenario 2, you’ll have:

- A property whose value has increased from $500k to $895,423.85 (an increase of $395,423.85)
- Cash flow of $60k
- A total remaining mortgage balance of $320,357.74 (a decrease of $79,642.26)

If you add these benefits up, in Scenario 2, you’ve essentially ballooned your original $100k investment into a total gain of $535,066.11. That’s three times the gain you would have gotten had you simply invested your $100k into VTSAX!
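If you want to check the Scenario 2 numbers, here is a small sketch of the arithmetic: 6% annual appreciation, $500/month cash flow, and a standard amortization schedule for the $400k loan at 4.5%. The results land on the figures above to within rounding.

price, down, rate, years = 500_000, 100_000, 0.045, 30
loan = price - down
monthly_rate = rate / 12
n_payments = years * 12

# Standard fixed-rate mortgage payment formula
payment = loan * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)

balance = loan
for _ in range(10 * 12):  # pay the mortgage for ten years
    balance += balance * monthly_rate - payment

value_after_10y = price * 1.06 ** 10  # ~ $895,424
cash_flow = 500 * 12 * 10             # $60,000
principal_paid = loan - balance       # ~ $79,642
total_gain = (value_after_10y - price) + cash_flow + principal_paid
print(f"value: ${value_after_10y:,.2f}  balance: ${balance:,.2f}  total gain: ${total_gain:,.2f}")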

There are a lot of variables at play here, but you get the general idea. While investing in index funds is profitable and straightforward, if you’re willing to learn the business and put in the work, you can often make higher returns through real estate investing over the long haul.

How Difficult is Real Estate Investing?

Real estate investing is complicated. It requires a lot of knowledge, effort, and ongoing work to run a successful real estate investing operation. Among other things, you need to know:

- How much a potential investment property will rent for
- How much a potential investment property will appreciate
- What sort of mortgage rates you can secure
- What your expenses will be each month
- How much property taxes will cost
- How much insurance will cost
- Etc.

All of the items above are variables that can dramatically impact whether or not a particular property is a good or bad investment. And this doesn’t even begin to account for the other things you need to do on an ongoing basis: manage the property, manage your accounts/taxes, follow all relevant laws, etc.

In short: investing in real estate is not simple and requires a lot of knowledge to do successfully. But, if you’re interested in running a real estate business, it can be a fun and profitable venture.

How We Invest in Real Estate

As I mentioned earlier, my wife and I split our investable assets 50⁄50 between index funds and real estate. The reason we do this is twofold:

- It’s easy (and safe) for us to invest money in index funds
- It’s hard for us to invest in real estate (it took a lot of time and research to get started), but we generally earn greater returns on our real estate investments than we do on our index investments

Our real-estate investing criteria are pretty simple.

- We only purchase residential real estate that we rent out to long-term tenants. We do this because it’s relatively low-risk, low-maintenance, and straightforward.
- We only purchase rental properties that generate a cash-on-cash return of 8% or greater. For example, if we buy a $200k property with a $40k downpayment, we need to earn $3,200 per year in profit ($3,200 is 8% of $40k) for the deal to make sense.
- We don’t factor appreciation into our investment calculations as we plan to hold these rental properties long-term and never sell them. The rising value of the rental properties we acquire isn’t as beneficial to us as is the cash flow. Over time, the properties pay themselves off, and once they’re free and clear, we’ll have a much larger monthly profit.

Why did we choose an 8% cash-on-cash return as our target metric for rental property purchases? In short, it’s because that 8% is roughly twice the safe withdrawal rate of our index funds.

I figured early on that if I was going to invest a ton of time and energy into learning about real estate investing, hunting down opportunities, etc., I’d have to make it worthwhile by at least doubling the safe withdrawal rate of our index funds. Otherwise, I could simply invest our money into VTSAX and never think about taking on extra work or risk.

Today, my wife and I own a small portfolio of single-family homes that we rent out to long-term tenants, each earning roughly 8% cash-on-cash return yearly.

Should I Invest in Stocks or Real Estate?

As you’ve seen by now, there isn’t a clear answer here. To sum it up:

- If you’re looking for the most straightforward path to retirement, invest your money in well-diversified index funds like VTSAX. Index funds will allow you to retire with a 4% safe withdrawal rate and slowly build your wealth over time.
- If you’re interested in real estate and are willing to put in the time and effort to learn about it, you can potentially make greater returns, but it’s a lot of work.
- Or, if you’re like me, why not both? This way, you get the best of both worlds: a bit of simple, reliable index investments and a bit of riskier, more complex, and more rewarding real estate investments.

Does Music Help You Focus?

I’ve always been the sort of person who works with music in the background. Ever since I was a little kid writing code in my bedroom, I’d routinely listen to my favorite music while programming.

Over the last 12 years, as my responsibilities have shifted from purely writing code to writing articles, recording videos, and participating in meetings, my habits have changed. Out of necessity, I’m unable to work with music most of the time, but when I have an hour or so of uninterrupted time, I still prefer to put music on and use it to help me crank through whatever it is I’m focusing on.

However, I’ve been doing some experimentation over the last few months. My goal was to determine how much music helped me focus. I didn’t have a precise scientific way of measuring this except to track whether or not I felt my Pomodoro sessions were productive.

To keep score, I kept a simple Apple Notes file that contained a running tally of whether or not I felt my recently finished Pomodoro session was productive or not. And while this isn’t the most scientific way to measure, I figured it was good enough for my purposes.

Over the last three months, I logged 120 completed Pomodoro sessions. Of those, roughly 50% (58 sessions) were completed while listening to music, and the other 50% (62 sessions) were completed without music.

To my surprise, when tallying up the results, it appears that listening to music is a distraction for me, causing me to feel like my sessions weren’t very productive. Out of the 58 Pomodoro sessions I completed while listening to music, I noted that ~20% were productive (12 sessions) vs. ~60% (37 sessions) without music.
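The percentages are nothing more than ratios over the raw tallies; a few illustrative lines (my own, not part of the original log) make that explicit.

// Raw tallies from the three-month log; the percentages are simple ratios.
using System;

int withMusicTotal = 58, withMusicProductive = 12;
int withoutMusicTotal = 62, withoutMusicProductive = 37;

Console.WriteLine($"With music:    {100.0 * withMusicProductive / withMusicTotal:F0}% productive");    // ~21%
Console.WriteLine($"Without music: {100.0 * withoutMusicProductive / withoutMusicTotal:F0}% productive"); // ~60%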

60% vs. 20% is a significant difference, which is especially surprising since I genuinely enjoy working with music. When I started this experiment, I expected that music would make me more, not less productive.

So what’s the takeaway here? For me, it’s that despite how much I enjoy listening to music while working, it’s distracting.

Am I going to give up listening to music while trying to focus? Not necessarily. As I mentioned previously, I still love working with music. But, I’ll undoubtedly turn the music off if I’m trying to get something important done and need my time to be as productive as possible.

In the future, I’m also planning to run this experiment separately to compare the impact of instrumental vs. non-instrumental music on my productivity. I typically listen to music with lyrics (hip-hop, pop, etc.), which makes me wonder if the lyrics are distracting or just the music itself.

I’m also curious as to whether or not lyrics in a language I don’t understand would cause a similar level of distraction or not (for example, maybe I could listen to Spanish music without impacting my productivity since I don’t understand the language).

Regardless of my results, please experiment for yourself! If you’re trying to maximize productivity, you might be surprised what things are impacting your focus levels.

Saturday, 02. April 2022

Doc Searls Weblog

The Age of Optionality—and its costs

Throughout the entire history of what we call media, we have consumed its contents on producers’ schedules. When we wanted to know what was in newspapers and magazines, we waited until the latest issues showed up on newsstands, at our doors, and in our mailboxes. When we wanted to hear what was on the radio or to watch what was on TV, we waited until it played on our stations’ schedules. “What’s on TV tonight?” is perhaps the all-time most-uttered question about a medium. Wanting the answers is what made TV Guide required reading in most American households.

But no more. Because we have entered the Age of Optionality. We read, listen to, and watch the media we choose, whenever we please. Podcasts, streams, and “over the top” (OTT) on-demand subscription services are replacing old-fashioned broadcasting. Online publishing is now more synchronous with readers’ preferences than with producers’ schedules.

The graph above illustrates what happened and when, though I’m sure the flat line at the right end is some kind of error on Google’s part. Still, the message is clear: what’s on and what’s in have become anachronisms.

The centers of our cultures have been held for centuries by our media. Those centers held in large part because they came on a rhythm, a beat, to which we all danced and on which we all depended. But now those centers are threatened or gone, as media have proliferated and morphed into forms that feed our attention through the flat rectangles we carry in our pockets and purses, or mount like large art pieces on walls or tabletops at home. All of these rectangles maximize optionality to degrees barely imaginable in prior ages and their media environments: vocal, scribal, printed, broadcast.

We are now digital beings. With new media overlords.

The Digital Markets Act in Europe calls these overlords “gatekeepers.” The gates they keep are at entrances to vast private walled gardens enclosing whole cultures and economies. Bruce Schneier calls these gardens feudal systems in which we are all serfs.

To each of these duchies, territories, fiefs, and countries, we are like cattle from which personal data is extracted and processed as commodities. Purposes differ: Amazon, Apple, Facebook, Google, Twitter, and our phone and cable companies each use our personal data in different ways. Some of those ways do benefit us. But our agency over how personal data is extracted and used is neither large nor independent of these gatekeepers. Nor do we have much if any control over what countless customers of gatekeepers do with personal data they are given or sold.

The cornucopia of options we have over the media goods we consume in these gardens somatizes us while also masking the extreme degree to which these private gatekeepers have enclosed the Internet’s public commons, and how algorithmic optimization of engagement at all costs has made us into enemy tribes. Ignorance of this change and its costs is the darkness in which democracy dies.

Shoshana Zuboff calls this development The Coup We Are Not Talking About. The subhead of that essay makes the choice clear: We can have democracy, or we can have a surveillance society, but we cannot have both. Her book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, gave us a name for what we’re up against. A bestseller, it is now published in twenty-six languages. But our collective oblivity is also massive.

We plan to relieve some of that oblivity by having Shoshana lead the final salon in our Beyond the Web series at Indiana University’s Ostrom Workshop. To prepare for that, Joyce and I spoke with Shoshana for more than an hour and a half last night, and are excited about her optimism toward restoring the public commons and invigorating democracy in our still-new digital age. This should be an extremely leveraged way to spend an hour or more on April 11, starting at 2PM Eastern time. And it’s free.

Use this link to add the salon to your calendar and join in when it starts.

Or, if you’re in Bloomington, come to the Workshop and attend in person. We’re at 513 North Park Avenue.

 

 

Friday, 01. April 2022

reb00ted

What can we do with a DAO that cannot be done with other organizational forms?

Decentralized Autonomous Organizations (DAOs) are something new enabled by crypto and blockchain technologies. We are only at the beginning of understanding what they can do and what not.

So I asked my social network: “What can we do with a DAO that cannot be done with other organizational forms?"

Here is a selected set of responses, mostly from this Twitter thread and this Facebook thread. They are both public, so I’m attributing:

Kurt Laitner: “They enable dynamic equity and dynamic governance”

Vittorio Bertocci: “Be robbed without any form of recourse, appeal or protection? 😛 I kid, I kid 🙂”

Dan Lyke: “…they create a gameable system that has less recourse to the law than a traditional system … [but] the immutable public ledger of all transactions may provide a better audit trail”

David Mason: “Lock yourself into a bad place without human sensibility to bail you out.”

Adam Lake: “We already have cooperatives, what is the value add?”

Phill Hallam-Baker: “Rob people who don’t understand that the person who creates them controls them absolutely.”

Jean Russell: “Act like you have a bank account as a group regardless of the jurisdictions of the members.”

David Berlind: “For now (things are changing), a DAO can fill a gap in international business law.”

Follow the links above, there are more details in the discussions.

I conclude: there is no consensus whatsoever :-) That may be because there is such a large range of setups under that term today.

Wednesday, 30. March 2022

Doc Searls Weblog

Exitings

The photo above dates from early 1978, when Hodskins Simone & Searls, a new ad agency, was born in Durham, North Carolina. Specifically, at 602 West Chapel Hill Street. Click on that link and you’ll see the outside of our building. Perhaps you can imagine the scene above behind the left front window, because that’s where we stood, in bright diffused southern light. Left to right are David Hodskins, Ray Simone, and me.

That scene, and the rest of my life, were bent toward all their possibilities by a phone call I made to Ray one day in 1976, when I was working as an occasionally employed journalist, advertising guy, comedy writer, radio voice, and laborer: anything that paid, plus plenty that didn’t. I didn’t yet know Ray personally, but I loved the comics he drew, and I wanted his art for an ad I had written for a local audio shop. So I called him at the “multiple media studio” where he was employed at the time. Before we got down to business, however, he also got into an off-phone conversation with another person in his office. After Ray told the other person he was on the phone with Doctor Dave (the comic radio persona by which I was known around those parts back then), the other person told Ray to book lunch with me at a restaurant downtown.

I got there first, so I was sitting down when Ray walked in with a guy who looked like an idealized version of me. Not just better looking, but radiating charisma and confidence. This was the other person who worked with Ray, and who told Ray to propose the lunch. That’s how I met David Hodskins, who used the lunch to recruit me as a copywriter for the multiple media studio. I said yes, and after a few months of that, David decided the three of us should start Hodskins Simone & Searls. Four years and as many locations later, we occupied a whole building in Raleigh, had dozens of people working for us, and were the top ad agency in the state specializing in tech and broadcasting.

A couple years after that we seemed to be hitting a ceiling as the alpha tech agency in a region still decades away from becoming the “other Silicon Valley” it wanted to be. So, after one of our clients said “Y’know, guys, there’s more action on one street in Sunnyvale than there is in all of North Carolina,” David flew out to scout Silicon Valley itself. That resulted in a tiny satellite office in Palo Alto, where David prospected for business while running the Raleigh headquarters by phone and fax. After a year of doing that, David returned, convened a dinner with all the agency managers, and said we’d have to close Palo Alto if he didn’t get some help out there. This was in August 1985.

To my surprise, I heard myself volunteering duty out there, even though a year earlier when David asked me to join him there I had said no. I’m not even sure why I volunteered this time. I loved North Carolina, had many friends there, and was well established as a figure in the community, mostly thanks to my Doctor Dave stuff. I said I just needed to make sure my kids, then 15 and 12, wanted to go. (I was essentially a single dad at the time.) After they said yes, we flew out and spent a week checking out what was for me an extremely exotic place. But the kids fell instantly in love with it. So I rented a house near downtown Palo Alto, registered the kids in Palo Alto junior and high schools, left them there with David, flew back to North Carolina, gave away everything that wouldn’t fit in a small U-Haul trailer, and towed my life west in my new 145-horse ’85 Camry sedan with a stick shift. With my Mom along for company, we crossed the country in just four days.

The business situation wasn’t ideal. Silicon Valley was in a slump at that time. “For Lease” banners hung over the windows of new buildings all over the place. Commodore, Atari, and other temporary giants in the new PC industry were going down. Apple, despite the novelty of its new Macintosh computer, was in trouble. And ad agencies—more than 200 of them—were fighting for every possible account, new and old. Worse, except for David, me, and one assistant, our whole staff was three time zones east of there, and the Internet that we know today was decades away in the future. But we bluffed our way into the running for two of the biggest accounts in review.

As we kept advancing in playoffs for those two accounts, the North Carolina office was treading water and funds were running thin. In our final pitches, we were also up against the same incumbent agency: one that, at that time, was by far the biggest and best in the valley, and did enviably good work. So we were not the way to bet. The evening before our last pitch, David told Ray and me that we needed to win both accounts or retreat back to North Carolina. I told him that I was staying, regardless, because I belonged there, and so did my kids, one of whom was suddenly an academic achiever and the other a surfer who totally looked the part. We had gone native. David reached across the table to shake my hand. That was his way of saying both “Thanks” and “I respect that.”

Then we won both accounts, got a mountain of publicity for having come out of nowhere and kicked ass, and our Palo Alto office quickly outgrew our Raleigh headquarters. Within a year we had closed Raleigh and were on our way to becoming one of the top tech agencies in Silicon Valley. None of this was easy, and all of it required maximal tenacity, coordination, and smarts, all of which were embodied in, and exemplified by, David Hodskins. He was wickedly smart, tough, creative, and entrepreneurial: perfect for leading a small and rapidly growing company. While hard-driving and often overbearing (sometimes driving Ray and me nuts) he was also great fun to work and hang out with, and one of the best friends I’ve ever had.

One of our bondings was around basketball. David was a severely loyal Duke alumnus and grandfathered with two season tickets every year to games at Duke’s famous Cameron Indoor Stadium. I became a Duke fan as his date for dozens of games there. When we moved to Palo Alto, he and I got our basketball fix through season tickets to the Golden State Warriors. (In the late ’80s, this was still affordable for normal people.) At one point, we even came close to winning the Warriors’ advertising business.

In the early 90s, I forked my own marketing consulting business out of HS&S, while remaining a partner with the firm until it was acquired by Publicis in 1998. By then I had also shifted back into journalism as an editor for Linux Journal, while also starting to blog. (Which I’m still doing right here.) David, Ray, and I remained good friends, however, while all three of us got married (Ray), remarried (David and I), and had California kids. In fact, I had met my wife with Ray’s help in 1990.

Ray died of lung cancer in 2011, at just 63. I remember him in this post here, and every day of my life.

On November 13 of last year, my wife and I attended the first game of the season for the Indiana University Men’s basketball team, which David and I had rooted against countless times when they played Duke and other North Carolina teams. While there, I took a photo of the scene with my phone and sent it in an email to David, saying “Guess where I am?” He wrote back, “Looks suspiciously like Assembly Hall in Bloomington, Indiana, where liberals go to die. WTF are you doing there?”

I explained that Joyce and I were now visiting scholars at IU. He wrote back,

Mr. visiting scholar,

Recuperating from a one-week visit by (a friend) and his missus, before heading to Maui for T’giving week.

The unwelcome news is that I’m battling health issues on several fronts: GERD, Sleep Apnea, Chronic Fatigue, and severe abdominal pain. Getting my stomach scoped when I’m back from Maui, and hoping it isn’t stomach cancer.

Actual retirement is in sight… at the end of 2022. (Wife) hangs it up in February, 2024, so we’ll kick our travel plans into higher gear, assuming I’m still alive.

Already sick of hearing that coach K has “5 national titles, blah, blah, blah” but excited to see Paulo Banchero this year, and to see Jon Scheyer take the reins next year. Check out the drone work in this promotional video: https://youtu.be/Dp1dEadccGQ

Thanks for checking in, and glad to hear you’re keeping your brain(s) active. Please don’t become a Hoosier fan.

d

David’s ailment turned out to be ALS. After a rapid decline too awful to describe, he died last week, on March 22nd. Two days earlier I sent him a video telling him that, among other things, he was the brother I never had and a massive influence on many of the lives that spun through his orbits. Unable to speak, eat or breathe on his own, he was at least able to smile at some of what I told him, and mouth “Wow” at the end.

And now there is just one left: the oldest and least athletic of us three. (Ray was a natural at every sport he picked up and won medals in fencing. David played varsity basketball in high school. Best I ever got at that game was not being chosen last for my college dorm’s second floor south intramural team.)

I have much more to think, say, and write about David, especially since he was a source of wisdom on many subjects. But it’s hard because his being gone is so out of character.

But not completely, I suppose. Hemingway:

The world breaks everyone and afterward many are strong at the broken places. But those that will not break it kills. It kills the very good and the very gentle and the very brave impartially. If you are none of these you can be sure it will kill you too but there will be no special hurry.

My joke about aging is that I know I’m in the exit line, but I let others cut in. I just wish this time it hadn’t been David.

But the line does keep moving, while the world holds the door.

Tuesday, 29. March 2022

Phil Windley's Technometria

The Ukrainian War, PKI, and Censorship

Summary: PKI has created a global trust framework for the web. But the war in Ukraine has shone a light on its weaknesses. Hierarchies are not good architectures for building robust, trustworthy, and stable digital systems.

Each semester I have students in my distributed systems class read Rainbow's End, a science fiction book by Vernor Vinge set in the near future. I think it helps them imagine a world with vastly distributed computing infrastructure that is not as decentralized as it could be and think about the problems that can cause. One of the plot points involves using certificate authorities (CA) for censorship.

To review briefly, certificate authorities are key players in public key infrastructure (PKI) and are an example of a core internet service that is distributed and hierarchical. Whether your browser trusts the certificate my web server returns depends on whether it trusts the certificate used to sign it, and so on up the certificate chain to the root certificate. Root certificates are held in browsers or operating systems. If the root certificate isn't known to the system, then it's not trusted. Each certificate might be controlled by a different organization (i.e. they hold the private key used to sign it), but they all depend on confidence in the root. Take out the root and the entire chain collapses.

Certificate validation path for windley.com
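To make the chain of trust concrete, here is a rough sketch (not from the original post) that connects to a host over TLS, grabs its certificate, and walks the validation chain with .NET's X509Chain. The host name is only an example, and the callback deliberately accepts everything because the point is inspection, not validation.

// Illustrative only: print the certificate chain a TLS server presents.
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;

class ShowChain
{
    static void Main()
    {
        const string host = "windley.com"; // example host
        using var tcp = new TcpClient(host, 443);
        using var ssl = new SslStream(tcp.GetStream(), false,
            (snd, crt, chn, err) => true); // accept anything; we only want to look
        ssl.AuthenticateAsClient(host);

        var serverCert = new X509Certificate2(ssl.RemoteCertificate!);
        using var chainBuilder = new X509Chain();
        chainBuilder.Build(serverCert); // leaf -> intermediate(s) -> root

        foreach (var element in chainBuilder.ChainElements)
            Console.WriteLine(element.Certificate.Subject);

        // If the root at the end of this list is not in the local trust store
        // (or is revoked), every certificate below it fails along with it.
    }
}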

The war in Ukraine has made hypothetical worries about the robustness of the PKI all too real. Because of the sanctions imposed on Russia, web sites inside Russia can't pay foreign CAs to renew their certificates. Modern browsers don't just shrug this off, but issue warnings and sometimes even block access to sites with expired certificates. So, the sanctions threaten to cripple the Russian web.

In response, Russia has established its own root certificate authority (see also this from KeyFactor). This is not merely a homegrown CA, located in Russia, but a state-operated CA, subject to the whims and will of the Russian government (specifically the Ministry of Digital Development).

This is interesting from several perspectives. First, from a censorship perspective, it means that Russia can effectively turn off web sites by revoking their certificates, allowing the state to censor web sites for any reason they see fit. Hierarchical networks are especially vulnerable to censorship. And while we might view state-controlled CAs as a specific problem, any CA could be a point of censorship. Recall that while SWIFT is a private company, it is located in Belgium and subject to Belgian and European law. Once Belgium decided to sanction Russia, SWIFT had to go along. Similarly, a government could pass a law mandating the revocation of any certificate for a Russian company and CAs subject to their legal jurisdiction would go along.

From the perspective of users, it's also a problem. Only two browsers support the root certificate of the new Russian CA: the Russian-based Yandex and open-source Atom. I don't think it's likely that Chrome, Safari, Firefox, Brave, Edge, and others will be adding the new Russian root CA anytime soon. And while you can add certificates manually, most people will find that difficult.

Lastly, it's a problem for the Russian economy. The new Russian CA is a massive single point of failure, even if the Russian government doesn't use it to censor. Anonymous, state actors, and other groups can target the new CA and bring large swaths of the Russian internet down. So, state-controlled and -mandated CAs are a danger to the economy they serve. Russia's actions in response to the exigency of the war are understandable, but I suspect it won't go back even after the war ends. Dependence on a single state-run CA is a problem for Russia and its citizens.

State-controlled CAs further balkanize the internet. They put web sites at risk of censorship. They make life difficult for users. They create centralized services that threaten economic stability and vibrancy. In general, hierarchies are not good architectures for building robust, trustworthy, and stable digital systems. PKI has allowed us to create a global trust framework for the web. But the war in Ukraine has shone a light on its weaknesses. We should heed this warning to engineer more decentralized infrastructures that give us confidence in our digital communications.

Photo Credits:

- Coat of Arms of Russia from Motorolla (Pixabay)
- HTTPS Icon from Sean MacEntee (CC BY 2.0)

Tags: pki identity censorship web decentralization web3

Monday, 28. March 2022

Damien Bod

Implementing OAuth2 APP to APP security using Azure AD from a Web APP

This article shows how to implement an API service and client in separate ASP.NET Core applications which are secured using Azure application permissions implemented in an Azure App registration. The OAuth client credentials flow is used to get an access token to access the API. Microsoft.Identity.Web is used to implement the client credentials (CC) flow. Microsoft.Identity.Client can also be used to implement this flow, or any OAuth client implementation.

Code: BlazorWithApis

The OAuth client credentials flow can be used to access services where no user is involved and the client is trusted. This flow is used in many shapes and forms in Azure. The client application requires some type of secret to get an access token for the secured API. In Azure, there are different ways of implementing this, which go by different names and differ in their implementations. The main difference between most of them is how the secret is acquired and where it is stored. A certificate can be used as the secret, or an Azure App registration secret; either can be stored in Azure Key Vault. Managed identities provide another way of implementing secured app-to-app access between Azure services and can be used to acquire access tokens using the Azure SDK. Certificate authentication can also be used to secure this flow. This can be a little confusing, but as a solution architect, you need to know when and where each approach should be used, and when not.

Delegated user access tokens or application client credential tokens

As a general rule, always use delegated user access tokens rather than application access tokens if possible, because the permissions can then be reduced per user. To acquire a delegated user access token, an identity must log in somewhere using a UI; a user interaction flow is required for this. The delegated user access token can be requested using a scope for the identity. In Azure AD, the On-Behalf-Of (OBO) flow can also be used to acquire further delegated user access tokens for downstream APIs. This is not possible in Azure AD B2C.
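As a rough illustration of the difference (assuming Microsoft.Identity.Web is already wired up), the same ITokenAcquisition service exposes both styles; the scope values below are placeholders, not values from the sample.

// Hedged sketch: the same ITokenAcquisition can produce either token type.
using System.Threading.Tasks;
using Microsoft.Identity.Web;

public class TokenExamples
{
    private readonly ITokenAcquisition _tokenAcquisition;

    public TokenExamples(ITokenAcquisition tokenAcquisition)
        => _tokenAcquisition = tokenAcquisition;

    // Delegated: issued for the signed-in user AND this client; permissions can be
    // reduced per user, but an interactive sign-in is required somewhere upstream.
    public Task<string> GetDelegatedTokenAsync() =>
        _tokenAcquisition.GetAccessTokenForUserAsync(
            new[] { "api://your-api-client-id/access_as_user" }); // placeholder scope

    // Application: issued for the client alone via client credentials; whatever the
    // app role allows applies to every call, and no user is involved.
    public Task<string> GetApplicationTokenAsync() =>
        _tokenAcquisition.GetAccessTokenForAppAsync(
            "api://your-api-client-id/.default"); // placeholder App ID URI
}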

Scopes or Roles Permissions

In Azure, scope permissions are used for delegated user access tokens, not for application permissions. App Roles can be used for application and/or delegated access. Roles can only be defined in Azure AD App registrations, not in Azure AD B2C App registrations, so to define application App Roles you need to use an Azure AD App registration. This is very Azure specific and has nothing to do with the security standards. You still request a scope when using either delegated or application flows, but scope permissions are not used with the application OAuth client credentials flow. More information about this can be found in the Microsoft docs:

Protected web API: Verify scopes and app roles

By using application security permissions, you give the client application permissions for whatever is allowed in the service. No user is involved. This cannot be reduced for different users, only for different client applications.

Azure App Registration setup

The hardest part of implementing an API protected using application permissions is knowing how and where to set up the Azure App registration. The App registration needs to be created as an Azure AD App registration and not in an Azure AD B2C tenant, even if you use Azure AD B2C. The Azure App registration needs an application ID URI, so make sure this is created.

An Azure App Role can be created and can be validated in the access token.

The app role is defined as an application type. I named the role access_as_application.

The role can be added as a permission and admin consent can be given. This will be included in tokens issued for this Azure app registration.

API setup

The API is set up using Microsoft.Identity.Web. The AddMicrosoftIdentityWebApiAuthentication method adds the OAuth validation using the configuration from the app settings. I created an authorization policy to implement the authorization, which is applied to the controller or as a global filter. I think this is the best way, as it is the standard approach in ASP.NET Core. You should avoid using the Azure claims directly in the business logic of the application. Microsoft.Identity.Web also provides some Azure-specific helper methods which check consent or validate the scope, etc. It is important that only access tokens intended for this API work and that all other access tokens are rejected.

services.AddSingleton<IAuthorizationHandler, HasServiceApiRoleHandler>();

services.AddMicrosoftIdentityWebApiAuthentication(Configuration);

services.AddControllers();

services.AddAuthorization(options =>
{
    options.AddPolicy("ValidateAccessTokenPolicy", validateAccessTokenPolicy =>
    {
        validateAccessTokenPolicy.Requirements.Add(new HasServiceApiRoleRequirement());

        // Validate id of application for which the token was created
        // In this case the UI application
        validateAccessTokenPolicy.RequireClaim("azp", "2b50a014-f353-4c10-aace-024f19a55569");

        // only allow tokens which used "Private key JWT Client authentication"
        // https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens
        // Indicates how the client was authenticated. For a public client, the value is "0".
        // If client ID and client secret are used, the value is "1".
        // If a client certificate was used for authentication, the value is "2".
        validateAccessTokenPolicy.RequireClaim("azpacr", "1");
    });
});
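The HasServiceApiRoleRequirement and HasServiceApiRoleHandler types referenced above are not shown in the snippet. A minimal sketch of what such a requirement and handler might look like, together with the policy applied to a controller, could be the following; the implementation details and the controller name are assumptions that mirror the policy name and the "ApiForServiceData" route used by the client later in the post, not the sample's exact code.

using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class HasServiceApiRoleRequirement : IAuthorizationRequirement { }

public class HasServiceApiRoleHandler : AuthorizationHandler<HasServiceApiRoleRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context, HasServiceApiRoleRequirement requirement)
    {
        // App roles arrive in the roles claim of an application access token.
        // Depending on claim mapping this shows up as "roles" or ClaimTypes.Role.
        var hasRole = context.User.Claims.Any(c =>
            (c.Type == "roles" || c.Type == ClaimTypes.Role)
            && c.Value == "access_as_application");

        if (hasRole)
        {
            context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }
}

// The policy is then applied to the API controller.
[ApiController]
[Route("[controller]")]
[Authorize(Policy = "ValidateAccessTokenPolicy")]
public class ApiForServiceDataController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get() => new[] { "service data 1", "service data 2" };
}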

Using Microsoft.Identity.Web

One way of implementing a client is to use Microsoft.Identity.Web. The client and user of the application sign in using the OpenID Connect code flow and a secret (with some Azure specifics), and once authenticated, the application can request an application token using the ITokenAcquisition interface and the GetAccessTokenForAppAsync method. The scope definition uses the /.default value with the application ID URI from the Azure App registration. This uses the client credentials flow. If the correct parameters are used, an access token is returned and the token can be used to access the API.

public class ServiceApiClientService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;

    public ServiceApiClientService(
        ITokenAcquisition tokenAcquisition,
        IHttpClientFactory clientFactory)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
    }

    public async Task<IEnumerable<string>?> GetApiDataAsync()
    {
        var client = _clientFactory.CreateClient();

        // CC flow access_as_application" (App Role in Azure AD app registration)
        var scope = "api://b178f3a5-7588-492a-924f-72d7887b7e48/.default";
        var accessToken = await _tokenAcquisition.GetAccessTokenForAppAsync(scope);

        client.BaseAddress = new Uri("https://localhost:44324");
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetAsync("ApiForServiceData");
        if (response.IsSuccessStatusCode)
        {
            var stream = await response.Content.ReadAsStreamAsync();
            var payload = await JsonSerializer.DeserializeAsync<List<string>>(stream);
            return payload;
        }

        throw new ApplicationException("oh no...");
    }
}
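For the ITokenAcquisition injection above to work, the client application has to register Microsoft.Identity.Web with token acquisition enabled. A typical registration looks roughly like the sketch below; the exact options used in the linked sample may differ.

// Hedged sketch of the client-side registration needed for ITokenAcquisition.
public void ConfigureServices(IServiceCollection services)
{
    // OpenID Connect sign-in plus token acquisition, with an in-memory token cache.
    services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
        .EnableTokenAcquisitionToCallDownstreamApi()
        .AddInMemoryTokenCaches();

    services.AddHttpClient();
    services.AddScoped<ServiceApiClientService>();
}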

Notes

- When using client applications and the client credentials flow, it is important to only share the secret or certificate with a trusted client. The client should handle this in a safe way so that it does not get stolen. If I own the client, I deploy the client to Azure (if possible) and use a Key Vault to persist the certificates or secrets. A managed identity is used to access the Key Vault. This way, no secret is stored in an insecure way.
- Try to use delegated access tokens rather than application tokens. The delegated access token is issued to the user and the application, and the authorization can be reduced. This can only be done when using a user interaction flow to authenticate.
- Azure App registrations use scopes for delegated access tokens, and roles can be used for application permissions. With app-to-app flows, other ways also exist for securing this access.

Links

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/azure/active-directory/develop/

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/4-Call-OwnApi-Pop

https://docs.microsoft.com/en-us/azure/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles?tabs=aspnetcore

Friday, 25. March 2022

Doc Searls Weblog

An arsonoma

While walking past this scene on my way to the subway in New York last week, I saw that a woman was emptying out what hadn’t burned from this former car. Being a curious extrovert, I paused to ask her about it. The conversation, best I recall:

“This your car?”

“Yeah.”

“I’m sorry. What happened?”

“Somebody around here sets fire to bags of garbage*. One spread to the car.”

“Any suspects?”

“There are surveillance cameras on the building.” She gestured upward toward two of them.

“Did they see anything?”

“They never do.”

So there you have it. In medicine they call this kind of thing a fascinoma. Perhaps in civic life we should call this an arsonoma. Or, in law enforcement, a felonoma.

*In New York City, we now put out garbage and recycling in curbside bags.

Thursday, 24. March 2022

Damien Bod

Onboarding new users in an ASP.NET Core application using Azure B2C

This article shows how to onboard new users into your ASP.NET Core application using Azure B2C as the identity provider and the account management. The software has application specific persisted user data and this user data needs to be connected to the identity data from the corresponding user in Azure B2C.

Code https://github.com/damienbod/azureb2c-fed-azuread

Use case where the user is created first in the application

The users are created in the ASP.NET Core application by an administrator. Once the user is created, an email is sent to the new user. The user clicks the link in the email and signs up in Azure AD B2C. After the account has been created in Azure AD B2C, the user is redirected back to the application and the Azure B2C account gets connected to the persisted user in the database, automatically persisting the Azure AD object identifier (oid) claim data. The account is activated in the application, and the authorization definitions are created automatically and applied as defined by the administrator in the first step.

It is not possible to onboard a new user in Azure B2C like this using Microsoft Graph without managing passwords, and you should never mail passwords to users. For the flow to work, the signin/signup user flow from Azure B2C can be used. The link sent to the user requires authentication and contains a one-time registration code. When the user clicks the link, authentication is required, so the user is automatically redirected to Azure B2C. After a successful signup, the user is redirected back to the application and the registration code is used to connect the accounts. At no stage does the application administrator need to manage the users' passwords. Because default Azure B2C flows are used, it should be easy to apply the latest best practices supported by Azure B2C for user authentication once they are rolled out. FIDO2 is not supported by Azure AD B2C, which is unfortunate, as it is the strongest form of MFA.

The create-user step can be implemented using an ASP.NET Core Razor page. This Razor page requires authentication and an admin policy. The form sends a POST request with the user data and, if valid, the data is saved to the database using Entity Framework Core. A random registration code is generated and saved in the user entity. An email is created with the ConnectAccount URL containing the registration code, and this URL is emailed to the end user using Microsoft Graph. An Office license is required for the email sender to work, but you can use any email service. The user now exists and should have mail in their inbox (or spam folder). If you are implementing this in a production application, you will need to implement a resend-email function, and I would update the registration code with each email.

public async Task<IActionResult> OnPostAsync()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }

    if (!_userService.IsEmailValid(UserModel.Email))
    {
        ModelState.AddModelError("Email", "Email is invalid");
        return Page();
    }

    var user = await _userService.CreateUser(new UserEntity
    {
        Email = UserModel.Email,
        FirstName = UserModel.FirstName,
        Surname = UserModel.Surname,
        BirthDate = UserModel.BirthDate,
        DisplayName = UserModel.DisplayName,
        PreferredLanguage = UserModel.PreferredLanguage
    });

    await _userService.SendEmailInvite(user, Request.Host, false);

    OnboardingRegistrationCode = user.OnboardingRegistrationCode;

    return OnGet();
}

The SendEmailInvite method sends an email using the Microsoft Graph client.

public async Task SendEmailInvite(UserEntity user, HostString host, bool updateCode)
{
    if (updateCode)
    {
        user.OnboardingRegistrationCode = GetRandomString();
        await _userContext.SaveChangesAsync();
    }

    var accountUrl = $"https://{host}/ConnectAccount?code={user.OnboardingRegistrationCode}";
    var header = $"{user.FirstName} {user.Surname} you are invited to signup";
    var introText = "You have been invite to join the MyApp services. You can register and sign up here";
    var endText = "Best regards, your MyApp support";
    var body = $"Dear {user.FirstName} {user.Surname} \n\n{introText} \n\n{accountUrl} \n\n{endText}";

    var message = _emailService.CreateStandardEmail(user.Email, header, body);
    await _msGraphEmailService.SendEmailAsync(message);
}
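The CreateStandardEmail helper is referenced above but not shown. A plausible sketch (an assumption, not necessarily the sample's code) simply wraps the recipient, subject and body into a plain-text Microsoft Graph Message:

// Hedged sketch of an email helper that builds a plain-text Graph Message.
using System.Collections.Generic;
using Microsoft.Graph;

public class EmailService
{
    public Message CreateStandardEmail(string recipient, string header, string body)
    {
        return new Message
        {
            Subject = header,
            Body = new ItemBody
            {
                ContentType = BodyType.Text,
                Content = body
            },
            ToRecipients = new List<Recipient>
            {
                new Recipient
                {
                    EmailAddress = new EmailAddress { Address = recipient }
                }
            }
        };
    }
}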

The user service creates a user in the SQL database and generates the random code.

public async Task<UserEntity> CreateUser(UserEntity userModel)
{
    userModel.OnboardingRegistrationCode = GetRandomString();
    await _userContext.AddAsync(userModel);
    await _userContext.SaveChangesAsync();
    return userModel;
}
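GetRandomString is also referenced but not shown; one way to generate such a code (my assumption, not the sample's exact code) is with the cryptographic random number generator and a URL-safe encoding.

// Hedged sketch of the GetRandomString helper referenced above.
// Requires System and System.Security.Cryptography.
private static string GetRandomString()
{
    var bytes = new byte[24];
    using var rng = RandomNumberGenerator.Create();
    rng.GetBytes(bytes);

    // Base64url-style so the code can travel in a query string without escaping.
    return Convert.ToBase64String(bytes)
        .Replace("+", "-")
        .Replace("/", "_")
        .TrimEnd('=');
}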

The create user ASP.NET Core Razor page just displays a simple form for adding the properties required by your business.

The MsGraphEmailService class implements the Microsoft Graph email service. This client needs to authorize using an Azure tenant which has an Office license for the sender account. The application permissions also need to be enabled for the Azure App registration used. This works fine as long as you do not send loads of emails; the number of mails you can send is limited, and you do not want to send many emails anyway. The Azure SDK ClientSecretCredential is used, which sets up the client credentials flow and uses an application scope from the Azure App registration. For more details, see this post.

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;
using System.Threading.Tasks;

namespace OnboardingAzureB2CCustomInvite.Services;

public class MsGraphEmailService
{
    private readonly GraphServiceClient _graphServiceClient;
    private readonly IConfiguration _configuration;

    public MsGraphEmailService(IConfiguration configuration)
    {
        _configuration = configuration;

        string[]? scopes = configuration.GetValue<string>("AzureAdEmailService:Scopes")?.Split(' ');
        var tenantId = configuration.GetValue<string>("AzureAdEmailService:TenantId");

        // Values from app registration
        var clientId = configuration.GetValue<string>("AzureAdEmailService:ClientId");
        var clientSecret = configuration.GetValue<string>("AzureAdEmailService:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
    }

    private async Task<string> GetUserIdAsync()
    {
        var meetingOrganizer = _configuration["AzureAdEmailService:EmailSender"];
        var filter = $"startswith(userPrincipalName,'{meetingOrganizer}')";

        var users = await _graphServiceClient.Users
            .Request()
            .Filter(filter)
            .GetAsync();

        return users.CurrentPage[0].Id;
    }

    public async Task SendEmailAsync(Message message)
    {
        var saveToSentItems = true;
        var userId = await GetUserIdAsync();

        await _graphServiceClient.Users[userId]
            .SendMail(message, saveToSentItems)
            .Request()
            .PostAsync();
    }
}

The email is sent as defined in the create-user step. The email is really simple and contains the link to the ConnectAccount Razor page with the registration code. This is not a link directly to the Azure B2C signup, as we want to automatically connect the user data from the application to the Azure B2C identity and assign the defined authorization. It is very easy to make a template for this email and translate it into the user's language or add company information, etc.

When the end user opens the email and clicks the link, the ConnectAccount page is opened with the registration code. This URL requires authentication, so the user is automatically redirected to the Azure B2C signin/signup flow. We need to use this user flow because, in some cases, the user already exists in Azure B2C but not in the application, and that user would simply sign in.

The ConnectAccount Razor page handles the GET requests and connects the authenticated identity with the user in the database if the registration code matches. Normally a GET should never change the state. This Razor page must be protected. If the registration code matches, the OID from the Azure B2C account is persisted and the registration code reset. The user account is set to active. If authorization is pre-defined for this user, then the authorization definitions can be created for the user as well. Once connected, the user can be redirected to the profile Razor page.

[Authorize]
public class ConnectAccountModel : PageModel
{
    private readonly UserService _userService;

    public ConnectAccountModel(UserService userService)
    {
        _userService = userService;
    }

    [BindProperty]
    public string? OnboardingRegistrationCode { get; set; } = string.Empty;

    public async Task<IActionResult> OnGet([FromQuery] string code)
    {
        if (string.IsNullOrEmpty(code))
        {
            return Page();
        }

        var email = User.Claims.FirstOrDefault(c => c.Type == "emails")?.Value;
        var oidClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
        var oid = User.Claims.FirstOrDefault(t => t.Type == oidClaimType)?.Value;

        if (oid == null)
            return Page();

        int id = await _userService.ConnectUserIfExistsAsync(
            code, oid, true, email);

        if (id > 0)
        {
            return Redirect("/profile");
        }

        return Page();
    }
}
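ConnectUserIfExistsAsync does the actual linking. A hedged sketch of what such a service method might do, matching the description above rather than the sample's exact code, is shown below; the Users DbSet and the Id property are assumptions about the entity model.

// Hedged sketch of the account-linking step: find the user by registration code,
// store the Azure B2C object id, activate the account and invalidate the code.
// Requires Microsoft.EntityFrameworkCore for FirstOrDefaultAsync.
public async Task<int> ConnectUserIfExistsAsync(
    string code, string oid, bool setActive, string? email)
{
    var user = await _userContext.Users
        .FirstOrDefaultAsync(u => u.OnboardingRegistrationCode == code);

    if (user == null)
        return -1; // unknown or already-used code

    user.AzureOid = oid;
    user.IsActive = setActive;
    user.OnboardingRegistrationCode = string.Empty; // one-time use
    if (email != null)
        user.Email = email;

    await _userContext.SaveChangesAsync();
    return user.Id;
}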

The profile page allows the user to update data which is controlled by the user. I do this in the application and do not use the Azure B2C profile, because this data is application specific. This is sensitive data and needs to be handled correctly. I display the OID, IsActive and the email here, but this data must not be updated in the profile page; it can only be managed by an application administrator. I also create a new user profile if the user does not exist in the application but already has an Azure B2C account. Such an account can sign in but will see nothing, as the identity has no authorization to view anything; the authorization is set by an application administrator. By allowing this, existing Azure B2C accounts can be onboarded as well.

[Authorize]
public class ProfileModel : PageModel
{
    private readonly UserService _userService;

    public ProfileModel(UserService userService)
    {
        _userService = userService;
    }

    [BindProperty]
    public Profile Profile { get; set; } = new Profile();

    [BindProperty]
    public string? AzureOid { get; set; } = string.Empty;

    [BindProperty]
    public bool IsActive { get; set; }

    [BindProperty]
    public string? Email { get; set; }

    public async Task<IActionResult> OnGetAsync()
    {
        var oidClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
        var oid = User.Claims.FirstOrDefault(t => t.Type == oidClaimType)?.Value;
        var email = User.Claims.FirstOrDefault(c => c.Type == "emails")?.Value;

        UserEntity? userEntity = null;
        if (oid != null)
        {
            userEntity = await _userService.FindUserWithOid(oid);
            if (userEntity != null)
            {
                Profile.Surname = userEntity.Surname;
                Profile.FirstName = userEntity.FirstName;
                Profile.DisplayName = userEntity.DisplayName;
                Profile.BirthDate = userEntity.BirthDate;
                Profile.PreferredLanguage = userEntity.PreferredLanguage;
                IsActive = userEntity.IsActive;
                AzureOid = userEntity.AzureOid;
                Email = userEntity.Email;
            }
            else
            {
                IsActive = false;
                AzureOid = oid;
            }
        }

        return Page();
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        var oidClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
        var oid = User.Claims.FirstOrDefault(t => t.Type == oidClaimType)?.Value;
        var email = User.Claims.FirstOrDefault(c => c.Type == "emails")?.Value;

        UserEntity? userEntity = null;
        if (oid != null)
            userEntity = await _userService.FindUserWithOid(oid);

        if (userEntity == null)
        {
            userEntity = new UserEntity();
            if (oid != null)
                userEntity.AzureOid = oid;
            if (email != null)
                userEntity.Email = email;
        }

        userEntity.FirstName = Profile.FirstName;
        userEntity.Surname = Profile.Surname;
        userEntity.BirthDate = Profile.BirthDate;
        userEntity.DisplayName = Profile.DisplayName;
        userEntity.PreferredLanguage = Profile.PreferredLanguage;

        await _userService.UpdateCreateProfile(userEntity);

        return await OnGetAsync();
    }
}

The Profile Razor page allows the user to update the profile data after authenticating using Azure B2C.

Notes

Onboarding and user registration in any software system need to be robust and flexible. Because a separate identity provider is used, the application-specific user data needs to be mapped to the identity profile data. The onboarding process needs to be flexible and user friendly. You also need to support account reset and account update processes, for example when a user changes email. Try to use a system which makes it easy to support, or to force, strong MFA such as FIDO2 for your users. At present, Azure B2C does not support strong MFA. Using SMS is not recommended, but even this is far better than a password alone.

Links

Create Azure B2C users with Microsoft Graph and ASP.NET Core
https://damienbod.com/2021/10/18/creating-microsoft-teams-meetings-in-asp-net-core-using-microsoft-graph-application-permissions-part-2/

https://docs.microsoft.com/en-us/azure/active-directory-b2c/overview

https://docs.microsoft.com/en-us/azure/active-directory-b2c/identity-provider-azure-ad-single-tenant

https://github.com/AzureAD/microsoft-identity-web

Wednesday, 23. March 2022

MyDigitalFootprint

Will decision making improve if we understand the bias in the decision making unit?

As a human I know we all have biases, and we all have different biases. We expose certain biases based on context, time, and people. We know that bias forms because of experience, and we are sure that social context reinforces perceived inconstancy.  Bias is like a mirror and can show our good and bad sides.

As a director, you have to have experience before taking on the role, even as a founder director. This thought-piece asks if we know where our business biases start from and what direction of travel they create. Business bias is the bias you have right now that affects your choice, judgment and decision making. Business bias is something that our data cannot tell us. Data can tell me if your incentive removes choice or aligns with an outcome.

At the most superficial level, we know that the expectations of board members drive decisions.  The decisions we take link to incentives, rewards and motivations and our shared values. 

If we unpack this simple model, we can follow the blue arrows in the diagram below, which say that your expectations build shared values that focus/highlight the rewards and motivations (as a group) we want. These, in turn, drive new expectations.

However, equally, we could follow the orange arrows and observe that expectations search for and align with the rewards and motivations we are given; this exposes our shared values, which create new expectations for us.



Whilst individual bias is complex, board or group bias adds an element of continuous dynamic change. We have observed and been taught this based on the “forming, storming, norming, performing” model of group development first proposed by Bruce Tuckman in 1965, who said that these phases are all necessary and inevitable for a team to grow, face up to challenges, tackle problems, find solutions, plan work, and deliver results.


The observation here is that whilst we might all follow the Tuckman ideal of “time” in terms of the process of getting to performing, for which there is plenty of supporting data, his model ignores the process of self-discovery we pass through during each phase. It assumes that we align during the storming (conflicts and tensions) phase but ignores that we fundamentally have different approaches. Do you follow the blue or the orange route, and from where did you start?

This is never more evident than when you get a “board with mixed experience”; in this case, the diversity of experience is a founder, a family business and a promoted leader. The reason is that if you add their starting positions to the map, we tend to find they start from different biased positions and may be travelling in different directions. Thank you to Claudia Heimer for stimulating this thought. The storming phase may align the majority around the team but will not change the underlying ideals and biases in the individuals, which means we don’t expose the paradoxes in decision making.


What does this all mean? As CDOs, we are tasked with finding data to support decisions. Often leadership will not follow the data, and we are left with questions. Equally, some leaders blindly follow the data without questioning it. Maybe it is time to collect smaller data at the board level to uncover how we work and expose the bias in our decision making.





Monday, 21. March 2022

Phil Windley's Technometria

Are Transactional Relationships Enough?

Summary: Our online relationships are almost all transactional. A purely transactional digital life can't feel as rich and satisfying as one based on interactional relationships. As more of our relationships are intermediated by technology, finding ways to support interactional relationships will allow us to live authentic digital lives.

We don't build identity systems to manage identities. Rather we build identity systems to manage relationships.

Given this, we might rightly ask what kind of relationships are supported by the identity systems that we use. Put another way, what kind of online relationships do you have? If you're like me, the vast majority are transactional. Transactional relationships focus on commercial interactions, usually buying and selling, but not always that explicit. Transactional relationships look like business deals. They are based on reciprocity.

My relationships with Amazon and Netflix are transactional. That's appropriate and what I expect. But what about my relationships on Twitter? You might argue that they are between friends, colleagues, or even family members. But I also classify them as transactional.

My relationships on Twitter exist within Twitter's administrative control and they facilitate the relationship in order to monetize it. Even though you’re not directly participating in the monetization and may even be unaware of it, it nevertheless colors the kind, frequency, and intimacy of the interactions you have. Twitter is building their platform and making product decisions to facilitate and even promote the kinds of interactions that provide the most profit to them. Your attention and activity are the product in the transaction. What I can do in the relationship is wholly dependent on what Twitter allows.

Given this classification, the bulk of our online relationships are transactional, or at least commercial. Very few are what we might call interactional relationships. Except for email. Email is one of the bright exceptions to this landscape of transactional, administrated online relationships. If Alice and Bob exchange emails, they both have administrative, transactional relationships with their respective email providers, but the interaction does not necessarily take place within the administrative realm of a single email provider. Let's explore what makes email different.

The most obvious difference between email and many other systems that support online relationships is that email is based on a protocol. As a result:

The user picks and controls the email server—With an email client, you have a choice of multiple email providers. You can even run your own email server if you like.
Data is stored on a server "in the cloud"—The mail client needn't store any user data beyond account information. While many email clients store email data locally for performance reasons, the real data is in the cloud.
Mail client behavior is the same regardless of what server it connects to—As long as the mail client is talking to a mail server that speaks the right protocol, it can provide the same functionality.
The client is fungible—I can pick my mail client on the basis of the features it provides without changing where I receive email.
I can use multiple clients at the same time—I can use one email client at home and a different email client at work and still see a single, consistent view of my email. I can even access my mail from a Web client if I don't have my computer handy.
I can send you email without knowing anything but your email address—None of the details about how you receive and process email are relevant to me. I simply send email to your address.
Mail servers can talk to each other across ownership boundaries—I can use Gmail, you can use Yahoo! mail and the mail still gets delivered.
I can change email providers easily or run my own server—I receive email at windley.org even though I use Gmail. I used to run my own server. If Gmail went away, I could start running my own server again. And no one else needs to know.

In short, email was designed with the architecture of the internet in mind. Email is decentralized and protocological. Email is open—not necessarily open-source—but open in that anyone can build clients and servers that speak its core protocols: IMAP and SMTP. As a result, email maximizes freedom of choice and minimizes the chance of disruption.
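To make the protocol point concrete, here is a minimal sketch, not taken from the original post, of sending a message over SMTP with .NET's System.Net.Mail; the server name, port, addresses, and credentials are placeholders, and any provider that speaks the protocol would work the same way.

using System.Net;
using System.Net.Mail;

class SmtpSketch
{
    static void Main()
    {
        // Hypothetical provider details; substitute your own SMTP server and account.
        using var client = new SmtpClient("smtp.example.com", 587)
        {
            EnableSsl = true,
            Credentials = new NetworkCredential("alice@example.com", "app-password")
        };

        // Addressing needs nothing but an email address; how Bob's provider
        // receives and stores the message is irrelevant to Alice.
        var message = new MailMessage(
            "alice@example.com",
            "bob@other-provider.net",
            "Hello",
            "Sent over SMTP, independent of either party's email provider.");

        client.Send(message);
    }
}

Because the client, the server, and the recipient's provider are all interchangeable behind the protocol, none of those substitutions change the code above.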

The features and benefits that email provides are the same ones we want for every online relationship. These properties allow us to use email to create interactional relationships. The important insight is that systems that support interactional relationships can easily support transactional ones as well, where necessary. But the converse is not true. Systems for building transactional relationships don't readily support interactional ones.

I believe that email's support for richer relationships is a primary reason it has continued to be used despite the rise of social media and work platforms like Slack and Teams. I'm not saying email is the right platform for supporting modern, online interactional relationships—it's not. Email has obvious weaknesses, most prominently that it doesn't support mutual authentication of the parties to a relationship and therefore suffers from problems like SPAM and phishing attacks. Less often noted, but equally disqualifying, is that email doesn't easily lend itself to layering other protocols on top of it—creative uses of MIME notwithstanding.

I've been discussing appropriate support for authenticity and privacy in the last few posts, a key requirement for interactional relationships on the internet. A modern answer to interactional relationships has to support all kinds of relationships from pseudonymous, ephemeral ones to fully authenticated ones. DIDComm is a good candidate. DIDComm has the beneficial properties of email while also supporting mutual authentication for relationship integrity and layered protocols for flexible relationship utility. These properties provide an essential foundation for building rich online relationships that feel more life-like and authentic, provide better safety, and allow people to live effective online lives.

Photo Credit: A Happy Couple with Young Children from Arina Krasnikova (Pexels License)

Tags: identity relationships metaverse


Heather Vescent

Beyond the Metaverse Hype

Seven Reflections Photo by Harry Quan on Unsplash

On March 11, 2022, I was a panelist on The Metaverse: The Emperor’s New Clothes panel at the Vancouver International Privacy & Security Summit’s panel. Nik Badminton set the scene and led a discussion with myself, James Hursthouse and Kharis O’Connell. Here are seven reflections.

Games are a playful way to explore who we are, to process and interact with people in a way we can’t do IRL. Games are a way to try on other identities, to create or adjust our mental map of the world.

Companies won’t protect me. I’m concerned we are not fully aware of the data that can be tracked with VR hardware. From a quantified self perspective, I would love to know more information about myself to be a better human; but I don’t trust companies. Companies will weaponize any scrap of data to manipulate you and me into buying something (advertising), and even believing something that isn’t true (disinformation).

Privacy for all. We need to shift thinking around privacy and security. It’s not something we each should individually have to fight for — for one of us to have privacy, all of us must have privacy. I wrote some longer thoughts in this article.

Capitalism needs Commons. Capitalism can’t exist without a commons to exploit. And commons will dry up if they are not replenished or created anew. So we need to support the continuity and creation of commons. Governments traditionally are in the role of protecting commons. But people can come together to create common technological languages, like technology standards to enable interoperable technology “rails” that pave the way for an open marketplace.

We need new business models. The point of a business model is profit first. This bias has created the current set of problems. In order to solve the world’s problems, we must wean ourselves off profit as the primary objective. I’m not saying that making money isn’t important, it is. But profit at all costs is what has got us into the current set of world problems.

Appreciate the past. I’m worried too much knowledge about how we’ve done things in the past is being lost. But not everything needs to go into the future. Identify what has worked and keep doing it. Identify what hasn’t worked and iterate to improve on it. This is how you help build on the past and contribute to the future.

Things will fail. There is a lot of energy (and money) in the Metaverse, and I don’t see it going away. That said, there will be failures. If the experimentation fails, is that so bad? In order to understand what is possible, we have to venture a bit into the realm of what’s impossible.

Watch the whole video for the thought-provoking conversation.

Thank you to Nik, Kharis, James and everyone at the Vancouver International Privacy & Security Summit!

Wednesday, 16. March 2022

Damien Bod

Transforming identity claims in ASP.NET Core and Cache


The article shows how to add extra identity claims to an ASP.NET Core application which authenticates using the Microsoft.Identity.Web client library and Azure AD B2C or Azure AD as the identity provider (IDP). This could easily be switched to OpenID Connect and use any IDP which supports OpenID Connect. The extra claims are added after an Azure Microsoft Graph HTTP request and it is important that this is only called once for a user session.

Code https://github.com/damienbod/azureb2c-fed-azuread

Normally I use the IClaimsTransformation interface to add extra claims to an ASP.NET Core session. This interface gets called multiple times and has no caching solution. If you use this interface to add extra claims to your application, you must implement a cache solution for the extra claims and prevent extra API calls or database requests on every request. Instead of implementing a cache and using the IClaimsTransformation interface, you could alternatively use the OnTokenValidated event with the OpenIdConnectDefaults.AuthenticationScheme scheme. This gets called after a successful authentication against your identity provider. If Microsoft.Identity.Web is used as the OIDC client, which is specific to Azure AD and Azure B2C, you must add the configuration to the MicrosoftIdentityOptions, otherwise downstream APIs will not work. If you use OpenID Connect directly with a different IDP, then use the OpenIdConnectOptions configuration. This can be added to the services of the ASP.NET Core application.

services.Configure<MicrosoftIdentityOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events.OnTokenValidated = async context =>
    {
        if (ApplicationServices != null && context.Principal != null)
        {
            using var scope = ApplicationServices.CreateScope();
            context.Principal = await scope.ServiceProvider
                .GetRequiredService<MsGraphClaimsTransformation>()
                .TransformAsync(context.Principal);
        }
    };
});

Note

If using default OpenID Connect and not the Microsoft.Identity.Web client to authenticate, use the OpenIdConnectOptions and not the MicrosoftIdentityOptions.

Here’s an example of an OIDC setup.

builder.Services.Configure<OpenIdConnectOptions>(
    OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Events.OnTokenValidated = async context =>
    {
        if (ApplicationServices != null && context.Principal != null)
        {
            using var scope = ApplicationServices.CreateScope();
            context.Principal = await scope.ServiceProvider
                .GetRequiredService<MyClaimsTransformation>()
                .TransformAsync(context.Principal);
        }
    };
});

The IServiceProvider ApplicationServices is used to resolve the scoped MsGraphClaimsTransformation service, which adds the extra claims using Microsoft Graph. This needs to be added to the configuration in the startup or the program file.

protected IServiceProvider ApplicationServices { get; set; } = null;

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ApplicationServices = app.ApplicationServices;

The Microsoft Graph services are added to the IoC.

services.AddScoped<MsGraphService>();
services.AddScoped<MsGraphClaimsTransformation>();

The MsGraphClaimsTransformation uses the Microsoft Graph client to get the groups of a user, creates a new ClaimsIdentity, adds the extra group claims to it, and adds the ClaimsIdentity to the ClaimsPrincipal.

using AzureB2CUI.Services;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;

namespace AzureB2CUI;

public class MsGraphClaimsTransformation
{
    private readonly MsGraphService _msGraphService;

    public MsGraphClaimsTransformation(MsGraphService msGraphService)
    {
        _msGraphService = msGraphService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal.Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphService.GetGraphApiUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                claimsIdentity.AddClaim(new Claim(groupClaimType, groupId));
            }
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }
}

The MsGraphService service implements the different HTTP requests to Microsoft Graph. Azure AD B2C is used in this example, and so an application client is used to access the Azure AD with the ClientSecretCredential. The implementation is set up to use secrets from Azure Key Vault directly in any deployments, or from user secrets for development.

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;
using System.Threading.Tasks;

namespace AzureB2CUI.Services;

public class MsGraphService
{
    private readonly GraphServiceClient _graphServiceClient;

    public MsGraphService(IConfiguration configuration)
    {
        string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
        var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

        // Values from app registration
        var clientId = configuration.GetValue<string>("GraphApi:ClientId");
        var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
    }

    public async Task<User> GetGraphApiUser(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .Request()
            .GetAsync();
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage> GetGraphApiUserAppRoles(string userId)
    {
        return await _graphServiceClient.Users[userId]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage> GetGraphApiUserMemberGroups(string userId)
    {
        var securityEnabledOnly = true;

        return await _graphServiceClient.Users[userId]
            .GetMemberGroups(securityEnabledOnly)
            .Request()
            .PostAsync();
    }
}

When the application is run, the two ClaimsIdentity instances exist with every request and are available for use in the ASP.NET Core application.

Notes

This works really well but you should not add too many claims to the identity in this way. If you have many identity descriptions or a lot of user data, then you should use the IClaimsTransformation interface with a good cache solution.
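As a rough sketch of what that could look like, and not code from the article or its repository, the same Graph lookup can be wrapped in an IClaimsTransformation that memoizes the result per user with IMemoryCache; the cache key format and the 30-minute lifetime below are arbitrary choices.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Caching.Memory;

public class CachedClaimsTransformation : IClaimsTransformation
{
    private readonly MsGraphService _msGraphService;
    private readonly IMemoryCache _cache;

    public CachedClaimsTransformation(MsGraphService msGraphService, IMemoryCache cache)
    {
        _msGraphService = msGraphService;
        _cache = cache;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        const string groupClaimType = "group";
        if (principal.HasClaim(c => c.Type == groupClaimType))
        {
            return principal; // already transformed for this request
        }

        var oid = principal.FindFirst(
            "http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;
        if (oid == null)
        {
            return principal;
        }

        // Cache the Graph lookup per user so the API is not called on every request.
        var groupIds = await _cache.GetOrCreateAsync("groups_" + oid, async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30);
            var ids = await _msGraphService.GetGraphApiUserMemberGroups(oid);
            return ids.ToList();
        }) ?? new List<string>();

        var identity = new ClaimsIdentity();
        foreach (var groupId in groupIds)
        {
            identity.AddClaim(new Claim(groupClaimType, groupId));
        }

        principal.AddIdentity(identity);
        return principal;
    }
}

If you go this route, register it with services.AddMemoryCache() and services.AddTransient<IClaimsTransformation, CachedClaimsTransformation>().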

Links

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/claims

User claims in ASP.NET Core using OpenID Connect Authentication
Implementing authorization in Blazor ASP.NET Core applications using Azure AD security groups
Implementing User Management with ASP.NET Core Identity and custom claims

https://andrewlock.net/exploring-dotnet-6-part-10-new-dependency-injection-features-in-dotnet-6/

Monday, 14. March 2022

Phil Windley's Technometria

Provisional Authenticity and Functional Privacy


Summary: Provisional authenticity and confidentiality can help us manage the trade offs between privacy and authenticity to support online accountability along with functional privacy.

Last week, I discussed the trade offs between privacy, authenticity, and confidentiality, concluding that the real trade off is usually between privacy and authenticity. That might seem like it pits privacy against accountability and leaves us with a Hobson's choice where good privacy is impossible if we want to prevent fraud. Fortunately, the trade off is informed by a number of factors, making the outcome not nearly as bleak as it might appear at first.

Authenticity is often driven by a need for accountability1. Understanding accountability helps navigate the spectrum of choices between privacy and authenticity. As I mentioned last week, Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations require that banks be able to identify the parties to transactions. That's why, when you open a bank account, they ask for numerous identity documents. The purpose is to enable law enforcement to determine the actors behind transactions deemed illegal (hopefully with a warrant). Technically, this is a bias toward authenticity at the cost of privacy. But there are nuances. The bank collects this data but doesn't need to use it unless there's a question of fraud or money laundering2.

The point is that while in a technical sense, the non-repudiability of bank transactions makes them less private, there aren't a lot of people who are concerned about the privacy of their banking transactions. The authenticity associated with those transactions is provisional or latent3. Transactions are only revealed to outside parties when legally required and most people don't worry about that. From that perspective, transactions with provisional authenticity are private enough. We might call this functional privacy.

I've used movie tickets several times as an example of an ephemeral transaction that doesn't need authenticity to function and thus is private. But consider another example where an ephemeral, non-authenticated transaction is not good enough. A while back our family went to the ice-skating rink. We bought a ticket to get in, just like at the movies. But each of us also signed a liability waiver. That waiver, which the skating rink required to reduce their risk, meant that the transaction was much less private. Unlike the bank, where I feel confident my KYC data is not being shared, I don't know what the skating rink is doing with the data.

This is a situation where minimal disclosure doesn't help me. I've given away the data needed to hold me accountable in the case of an accident. No promise was made to me about what the rink might do with it. The only way to hold me accountable and protect my privacy is for the authenticity of the transaction to be made provisional through agreement. If the skating rink were to make strong promises that the data would only be used in the event that I had an accident and threatened to sue, then even though I'm identified to the rink, my privacy is protected except in clearly defined circumstances.

Online, we can make this provisional authenticity even more trustworthy using cryptographic commitments and key escrow. The idea is that any data about me that's needed to enforce the waiver would be hidden from the rink, unchangeable by me, and only revealed if I threaten to sue. This adds a technical element and allows me to exchange my need to trust the rink with trusting the escrow agent. Trusting the escrow agent might be more manageable than trusting every business I interact with. Escrow services could be regulated as fiduciaries to increase trust.
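To illustrate the commitment half of that idea with a minimal sketch (my illustration, not a complete escrow design and not from the original post): the rink or an escrow agent holds only a salted hash of my identifying data, I can't change the data later without breaking the hash, and nothing is revealed unless I hand over the value and the nonce.

using System;
using System.Security.Cryptography;
using System.Text;

public static class Commitment
{
    // Commit to a value without revealing it: publish only the hash.
    public static (byte[] Hash, byte[] Nonce) Commit(string value)
    {
        var nonce = RandomNumberGenerator.GetBytes(32); // .NET 6+
        return (ComputeHash(value, nonce), nonce);
    }

    // Later, revealing the value and nonce lets anyone verify the commitment.
    public static bool Verify(string value, byte[] nonce, byte[] hash)
    {
        return CryptographicOperations.FixedTimeEquals(ComputeHash(value, nonce), hash);
    }

    private static byte[] ComputeHash(string value, byte[] nonce)
    {
        var valueBytes = Encoding.UTF8.GetBytes(value);
        var buffer = new byte[valueBytes.Length + nonce.Length];
        valueBytes.CopyTo(buffer, 0);
        nonce.CopyTo(buffer, valueBytes.Length);
        using var sha = SHA256.Create();
        return sha.ComputeHash(buffer);
    }
}

Key escrow would add a second layer: the value itself sits encrypted with the escrow agent, and the agent's policy, not the rink's, governs when it can be opened.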

Provisional authenticity works when the data is only needed in low-probability events. Often, however, data is actively used to provide utility in the relationship. In these cases, confidentiality agreements, essentially NDAs, are the answer to providing functional privacy while also providing the authenticity needed for accountability and utility. These agreements can't be the traditional contracts of adhesion where, rather than promising to protect confidentiality, companies force people to consent to surveillance. Agreements should be written to ensure that data is always shared with the same promise of confidentiality that existed in the root agreement.

Provisional authenticity and data NDAs provide good tools for protecting functional privacy without giving up accountability and relationship utility. Functional privacy and accountability are both necessary for creating digital systems that respect and protect people.

Notes

1. Beyond accountability, a number of businesses make their living surveilling people online or are remunerated for aiding in the collection of data that informs that surveillance. I've written about surveillance economy and ideas for dealing with it previously.
2. Note that I said need. I'm aware that banks likely use it for more than this, often without disclosing how it's being used.
3. I'm grateful to Sam Smith for discussions that helped me clarify my thinking about this.

Photo Credit: Ancient strongbox with knights (Frederiksborg Museum) from Thomas Quine (CC BY 2.0)

Tags: privacy identity confidentiality authenticity

Friday, 11. March 2022

@_Nat Zone

Memories of the Great East Japan Earthquake: Tweets from March 11, 2011


Today marks 11 years since the Great East Japan Earthquake. So that the memory does not fade, I am pasting my tweets from that day below, exactly in chronological order. Reading them brings that memory back in real time. Both the Japanese and the English tweets are included.

日本語のツイート 英語のツイート 日本語のツイート @kthrtty STAATSKAPELLE BERLIN もなかなか。 posted at 00:35:28 無常社会の例。現代のジャンバルジャン RT @masanork: 副業で続けてたんだったらともかく、辞めさせる必要ないと思うんだけど / 科学の先生は「大ポルノ女優」! 生徒がビデオ見つけて大騒動に(夕刊フジ) htn.to/t8n8AJ posted at 09:50:16 えーと。 RT @47news: ウイルス作成に罰則 関連法案を閣議決定 bit.ly/dFndIC posted at 09:51:53 地震だ! posted at 14:48:22 緊急地震速報。30秒以内に大きな揺れが来ます。 posted at 14:48:56 ビルがバキバキ言っている。出口は確保されている。 posted at 14:51:13 エレベーターは非常停止中。 posted at 14:52:59 余震なう。縦揺れ。 posted at 15:17:58 Tsunami 10m H. posted at 15:26:33 九段会館ホール天井崩落600人巻き込み。けが人多数。 posted at 15:49:02 横浜駅前ボーリング場天井崩落。10人が下敷き。神奈川県庁外壁剥がれ落ち。 posted at 15:50:25 RT @motoyaKITO: これやばいぞ RT @OsakaUp: どなたか、助けてあげて下さい!東京都台東区花川戸1-11-7 ギークハウス浅草 301号RT @itkz 地震が起きた時、社内サーバールームにいたのだが、ラックが倒壊した。 … posted at 16:25:52 汐留シティーセンター、津波対策のために地下2階出入口封鎖。出入りには1F、地下1Fを利用のこと。 posted at 16:50:24 Earthquake in Japan. Richter Scale 8.4. posted at 17:01:27 「こちらは汐留シティーセンター防災センターです。本日は地震のため、17時半にて営業を終了しました。」え?! posted at 17:32:27 「訂正します。店舗の営業を終了しました。」そりゃそうだよねw RT @_nat: 「こちらは汐留シティーセンター防災センターです。本日は地震のため、17時半にて営業を終了しました。」え?! posted at 17:42:42 another shake coming in a minute. posted at 17:44:03 Fukushima Nuclear Power Plant’s cooling system not working. Emergency state announced. 1740JST #earthquakes posted at 17:50:53 本当に?津波は川も上って来るはずだけと大丈夫?安全な場所で待機が基本のはずだけど。 RT @CUEICHI: こういうときは、動けるとおもった瞬間に、迷わず移動しないと、後になればなるほど身動きとれなくなります。 posted at 18:07:47 政府の17:40の指示は、待機。RT @CUEICHI: こういうときは、動けるとおもった瞬間に、迷わず移動しないと、後になればなるほど身動きとれなくなります。 posted at 18:09:21 Finally could get in touch with my daughter. posted at 18:32:37 RT @hitoshi: 【帰宅困難の方】毛布まではさすがに用意できませんし、ゆっくり寝るようなスペースは取れないかもしれませんが、店長が泊まることになっていますので、避難してきた方は明日の朝まで滞在可能です。豚組しゃぶ庵 港区六本木7-5-11-2F posted at 18:54:42 RT @UstreamTech_JP: このたびの地震災害報道に関して、NHK様より、放送をUSTREAM上で再配信をすることについて許諾いただきました。 posted at 18:57:00 RT @oohamazaki: 【東京23区内にいる帰宅難民へ】避難場所を公開しているところを可能なかぎりGoogle Maps でまとめました。リアルタイム更新します!bit.ly/tokyohinan posted at 19:51:50 食料班が帰還 posted at 19:53:49 @night_in_tunisi ありがとうございます! posted at 19:54:45 なんと。 RT @hiroyoshi: 霞ヶ関の各庁舎には講堂がある。なぜ帰宅難民に開放しない? posted at 20:29:22 こりゃぁ、都と国とで、大分対応が分かれるなぁ。 posted at 20:50:31 RT @fu4: RT @kazu_fujisawa: 4時ぐらいの携帯メールが今ごろたくさん届いた。TwitterとGmailだけ安定稼働したな。クラウドの信頼性は専用回線より劣るというのは、嘘だという事が判明した。 posted at 20:59:35 チリの友人からその旨連絡ありました。 RT @Y_Kaneko: チリは既に警戒していると聞きました。 RT @marinepolaris: ハワイや米国西海岸、南米チリペルーの在留邦人に津波の情報をお願いします。到達確実なので。 posted at 21:50:11 @iglazer Thanks. Yes, they are fine. posted at 22:10:17 チリ政府も支援体制を整えたそうです。 @trinitynyc posted at 22:12:35 NHK 被災人の知恵 www.nhk.or.jp/hisaito2/chie/… posted at 22:16:27 市原、五井のプラント火災、陸上からは近づけず。塩釜市石油コンビナートで大規模な火災。爆発も。 posted at 22:19:21 東京都、都立高校、すべて開放へ。受け入れ準備中。 posted at 22:20:55 RT @inosenaoki: 都営新宿線は21時45分に全線再開。他の都営地下鉄はすでに再開。ただし本数はまだ少ない。 posted at 22:22:36 RT @fujita_nzm: 【お台場最新情報】台場駅すぐの「ホテル日航東京」さんでは温かいコーンスープと、冷たいウーロン茶の無料サービスが始まり、喝采を浴びています。みなさん落ち着きを取り戻し、疲れて寝る方も増えてきました。 posted at 22:24:07 For earthquake info in English/Chinese etc., tune to 963 for NHK Radio. posted at 22:25:20 RT @genwat: 福島原発は報道されているとおりです。 電源車がつけばよし、つかなければ予想は難しいです。一気にメルトダウンというものではありません。デマにまぎらわされず、推移を見守りましょう。BWR=沸騰水型軽水炉なので、汚染黒鉛を吹いたりするタイプではありません posted at 22:26:25 English Earthquake Information site for the evacuation center etc. Plz RT. ht.ly/4cqaj posted at 22:30:20 [22:39:52] =nat: 宮城県警察本部:仙台市若林区荒浜で200人~300人の遺体が見つかった。 22:40 posted at 22:41:42 RT @tokyomx: 鉄道情報。本日の運転を終日見合わせを決めたのは次のとおり。JR東日本、ゆりかもめ、東武伊勢崎線、東武東上線、京王電鉄、京成電鉄。(現在情報です) posted at 22:44:05 @mayumine よかった! posted at 22:50:58 I’m at 都営浅草線 新橋駅 (新橋2-21-1, 港区) 4sq.com/hvEZ7Z posted at 23:39:48 @ash7 汐留はだめ。浅草線はOk posted at 23:44:31 英語のツイート Big Earthquake in Japan right now. 
posted at 14:54:37 Earthquake Intensity in Japan. ow.ly/i/921g posted at 14:59:14 All the trains in Tokyo are stopped. posted at 15:08:32 Still Shaking. posted at 15:08:48 It is one of the biggest shake that Japan had. @shita posted at 15:13:41 Tsunami 10m H. posted at 15:26:33 90min past the shake and it is still shaking in Tokyo. posted at 16:25:50 Earthquake in Japan. Richter Scale 8.4. posted at 17:01:28 another shake coming in a minute. posted at 17:44:03 Well, it is 8.8. RT @judico: OMG, 7.9 in Japan. Be safe @_nat_en! #earthquakes posted at 17:48:43 Fukushima Nuclear Power Plant’s cooling system not working. Emergency state announced. 1740JST #earthquakes posted at 17:50:54 Now it is corrected to be 8.8. RT @domcat: Earthquake in Japan. Richter Scale 8.4. (via @_nat_en) posted at 17:54:26 Finally could get in touch with my daughter. posted at 18:32:38 @rachelmarbus Thanks! posted at 18:36:40 Fukushima Nuclear Power Plant – If we can re-install power for the cooling system within 8 hours, it will be ok. #earthquakes posted at 18:39:30 @helena_arellano We still have 7 hours to install power for the cooling system. posted at 19:32:26 Tsunami is approaching Hawaii now. posted at 22:21:37 English Earthquake Information site for the evacuation center etc. Plz RT. ht.ly/4cqam posted at 22:30:20 According to the Miyagi Policy, 200-300 bodies found in the Arahama beach. posted at 22:43:00 The post 東日本大震災の記憶ー2011年3月11日のツイート first appeared on @_Nat Zone.

Damien Bod

Create Azure B2C users with Microsoft Graph and ASP.NET Core


This article shows how to create different types of Azure B2C users using Microsoft Graph and ASP.NET Core. The users are created using application permissions in an Azure App registration.

Code https://github.com/damienbod/azureb2c-fed-azuread

The Microsoft.Identity.Web Nuget package is used to authenticate the administrator user that can create new Azure B2C users. An ASP.NET Core Razor page application is used to implement the Azure B2C user management and also to hold the sensitive data.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<MsGraphService>();
    services.AddTransient<IClaimsTransformation, MsGraphClaimsTransformation>();
    services.AddHttpClient();
    services.AddOptions();

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration, "AzureAdB2C")
        .EnableTokenAcquisitionToCallDownstreamApi()
        .AddInMemoryTokenCaches();

The AzureAdB2C app settings configure the B2C client. An Azure B2C user flow is implemented for authentication. In this example, a sign-up or sign-in flow is implemented, although if you create the users yourself, maybe only a sign-in flow is required. The GraphApi configuration is used for the Microsoft Graph application client, which uses the client credentials flow. A user secret was created to access the Azure App registration. This secret is stored in the user secrets for development and in Azure Key Vault for any deployments. You could use certificates as well, but this offers no extra security unless they are used directly from a client host.

"AzureAdB2C": { "Instance": "https://b2cdamienbod.b2clogin.com", "ClientId": "8cbb1bd3-c190-42d7-b44e-42b20499a8a1", "Domain": "b2cdamienbod.onmicrosoft.com", "SignUpSignInPolicyId": "B2C_1_signup_signin", "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath ": "/signout-callback-oidc" }, "GraphApi": { "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724", "ClientId": "1d171c13-236d-4c2b-ac10-0325be2cbc74", "Scopes": ".default" //"ClientSecret": "--in-user-settings--" }, "AadIssuerDomain": "damienbodhotmail.onmicrosoft.com",

The application User.ReadWrite.All permission is used to create the users. See the permissions in the Microsoft Graph docs.

The MsGraphService service implements the Microsoft Graph client to create Azure tenant users. Application permissions are used because we use Azure B2C. If authenticating using Azure AD, you could use delegated permissions. The ClientSecretCredential is used to get the Graph access token and client with the required permissions.

public MsGraphService(IConfiguration configuration)
{
    string[] scopes = configuration.GetValue<string>("GraphApi:Scopes")?.Split(' ');
    var tenantId = configuration.GetValue<string>("GraphApi:TenantId");

    // Values from app registration
    var clientId = configuration.GetValue<string>("GraphApi:ClientId");
    var clientSecret = configuration.GetValue<string>("GraphApi:ClientSecret");

    _aadIssuerDomain = configuration.GetValue<string>("AadIssuerDomain");
    _aadB2CIssuerDomain = configuration.GetValue<string>("AzureAdB2C:Domain");

    var options = new TokenCredentialOptions
    {
        AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
    };

    // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
    var clientSecretCredential = new ClientSecretCredential(
        tenantId, clientId, clientSecret, options);

    _graphServiceClient = new GraphServiceClient(clientSecretCredential, scopes);
}

The CreateAzureB2CSameDomainUserAsync method creates a same-domain Azure B2C user and also creates an initial password which needs to be updated after the first sign-in. The user's UserPrincipalName email must match the Azure B2C domain, and the user can only sign in with this password. MFA should be set up. This works really well, but it is not a good idea to handle your users' passwords if this can be avoided, and you need to share the password with the user in a secure way.

public async Task<(string Upn, string Password, string Id)> CreateAzureB2CSameDomainUserAsync(UserModelB2CTenant userModel)
{
    if (!userModel.UserPrincipalName.ToLower().EndsWith(_aadB2CIssuerDomain.ToLower()))
    {
        throw new ArgumentException("incorrect Email domain");
    }

    var password = GetEncodedRandomString();
    var user = new User
    {
        AccountEnabled = true,
        UserPrincipalName = userModel.UserPrincipalName,
        DisplayName = userModel.DisplayName,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        PreferredLanguage = userModel.PreferredLanguage,
        MailNickname = userModel.DisplayName,
        PasswordProfile = new PasswordProfile
        {
            ForceChangePasswordNextSignIn = true,
            Password = password
        }
    };

    await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return (user.UserPrincipalName, user.PasswordProfile.Password, user.Id);
}
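The GetEncodedRandomString helper isn't shown in this excerpt; a hypothetical implementation, which may well differ from the one in the linked repository, is a URL-safe random string like the following.

// Hypothetical implementation of the helper used above; the linked repository
// may implement it differently. Requires System and System.Security.Cryptography.
private string GetEncodedRandomString()
{
    var bytes = RandomNumberGenerator.GetBytes(32);

    // Base64url-style encoding. Check that the result satisfies your tenant's
    // password complexity policy before using it as an initial password.
    return Convert.ToBase64String(bytes)
        .Replace("+", "-")
        .Replace("/", "_")
        .TrimEnd('=');
}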

The CreateFederatedUserWithPasswordAsync method creates an Azure B2C user with any email address. This uses the federated SignInType, but uses a password, and the user signs in directly to the Azure B2C tenant. This password is not forced to be updated after the first sign-in. Again, this is a bad idea because you need to share the password with the user somehow, and you as an admin should not know the user's password. I would avoid creating users in this way and use a custom invitation flow if you need this type of Azure B2C user.

public async Task<(string Upn, string Password, string Id)> CreateFederatedUserWithPasswordAsync(UserModelB2CIdentity userModel)
{
    // new user create, email does not matter unless you require to send mails
    var password = GetEncodedRandomString();
    var user = new User
    {
        DisplayName = userModel.DisplayName,
        PreferredLanguage = userModel.PreferredLanguage,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        OtherMails = new List<string> { userModel.Email },
        Identities = new List<ObjectIdentity>()
        {
            new ObjectIdentity
            {
                SignInType = "federated",
                Issuer = _aadB2CIssuerDomain,
                IssuerAssignedId = userModel.Email
            },
        },
        PasswordProfile = new PasswordProfile
        {
            Password = password,
            ForceChangePasswordNextSignIn = false
        },
        PasswordPolicies = "DisablePasswordExpiration"
    };

    var createdUser = await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return (createdUser.UserPrincipalName, user.PasswordProfile.Password, createdUser.Id);
}

The CreateFederatedNoPasswordAsync method creates an Azure B2C federated user, with no password, for an account that already exists in a specific Azure AD domain. The user can only sign in using a federated sign-in to this tenant. No passwords are shared. This is a really good way to onboard existing AAD users to an Azure B2C tenant. One disadvantage is that the email is not verified, unlike implementing this with an invitation flow directly in the Azure AD tenant.

public async Task<string> CreateFederatedNoPasswordAsync(UserModelB2CIdentity userModel)
{
    // User must already exist in AAD
    var user = new User
    {
        DisplayName = userModel.DisplayName,
        PreferredLanguage = userModel.PreferredLanguage,
        Surname = userModel.Surname,
        GivenName = userModel.GivenName,
        OtherMails = new List<string> { userModel.Email },
        Identities = new List<ObjectIdentity>()
        {
            new ObjectIdentity
            {
                SignInType = "federated",
                Issuer = _aadIssuerDomain,
                IssuerAssignedId = userModel.Email
            },
        }
    };

    var createdUser = await _graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return createdUser.UserPrincipalName;
}

When the application is started, you can sign in as an IT admin and create new users as required. The Birthday can only be added if you have an SPO license. If the user exists in the AAD tenant, the user can sign in using the federated identity provider. This could be improved by adding a search of the users in the target tenant and only allowing existing users, as sketched below.
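A minimal sketch of such a pre-check, my illustration rather than code from the post, could filter the Graph users collection by email before creating the federated identity; it assumes a GraphServiceClient that is scoped to the target AAD tenant.

// Hypothetical pre-check: only create the federated B2C identity when a user
// with this email already exists in the target AAD tenant.
public async Task<bool> UserExistsInTargetTenantAsync(string email)
{
    var users = await _graphServiceClient.Users
        .Request()
        .Filter($"mail eq '{email}' or userPrincipalName eq '{email}'")
        .GetAsync();

    return users.Count > 0;
}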

Notes:

It is really easy to create users using Microsoft Graph, but this is not always the best or most secure way of onboarding new users in an Azure B2C tenant. If local data is required, this can be really useful. Sharing passwords between an IT admin and a new user should be avoided if possible. The Microsoft Graph invite APIs do not work for Azure AD B2C, only Azure AD.

Links

https://docs.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core

https://docs.microsoft.com/en-us/graph/api/user-post-users?view=graph-rest-1.0&tabs=csharp

Wednesday, 09. March 2022

Here's Tom with the Weather

C. Wright Mills and the Battalion


On Monday, there were a few people in my Twitter feed sharing Texas A&M’s Battalion article about The Rudder Association. While Texas A&M has improved so much over the years, this stealthy group called the Rudder Association is now embarrassing the school. I was glad to read the article and reassured that the kids are alright. I couldn’t help but be reminded of the letters written to the Battalion in 1935 by a freshman named C. Wright Mills.

College students are supposed to become leaders of thought and action in later life. It is expected they will profit from a college education by developing an open and alert mind to be able to cope boldly with everyday problems in economics and politics. They cannot do this unless they learn to think independently for themselves and to stand fast for their convictions. Is the student at A and M encouraged to do this? Is he permitted to do it? The answer is sadly in the negative.

Little did he know that current students would be dealing with this shit 85 years later with a group of former students with nothing better to do than infiltrate student-run organizations from freshman orientation to the newspaper. But shocking no one, they were too incompetent to maintain the privacy of the school regents who met with them.

According to meeting minutes from Dec. 1, 2020, the Rudder Association secured the attendance of four members of the A&M System Board of Regents. The meeting minutes obtained by The Battalion were censored by TRA to remove the names of the regents in the meeting as well as other “highly sensitive information.”

“DO NOT USE THEIR NAMES BEYOND THE RUDDER BOARD. They do not wish to be outed,” the minutes read on the regents in attendance.

Further examination by The Battalion revealed, however, that the censored text could be copied and pasted into a text document to be viewed in its entirety due to TRA using a digital black highlighter to censor.

Well done, Battalion.

(photo is from C. Wright Mills: Letters and autobiographical writings)

Tuesday, 08. March 2022

Phil Windley's Technometria

Privacy, Authenticity, and Confidentiality


Summary: Authenticity and privacy are usually traded off against each other. The tradeoff is a tricky one that can lead to the over collection of data.

At a recent Utah SSI Meetup, Sam Smith discussed the tradeoff between privacy, authenticity and confidentiality. Authenticity allows parties to a conversation to know to whom they are talking. Confidentiality ensures that the content of the conversation is protected from others. These three create a tradespace because you can't achieve all three at the same time. Since confidentiality is easily achieved through encryption, we're almost always trading off privacy and authenticity. The following diagram illustrates these tradeoffs.

Privacy, authenticity, and confidentiality must be traded against each other. (click to enlarge)

Authenticity is difficult to achieve in concert with privacy because it affects the metadata of a conversation. Often it requires that others besides the parties to a conversation can potentially learn who is participating—that is, it requires non-repudiation. Specifically, if Alice and Bob are communicating, not only does Alice need to know she's talking to Bob, she also needs the ability to prove to others that she and Bob were communicating.

As an example, modern banking laws include a provision known as Know Your Customer (KYC). KYC requires that banks be able to identify the parties to transactions. That's why, when you open a bank account, they ask for numerous identity documents. The purpose is to enable law enforcement to determine the actors behind transactions deemed illegal (hopefully with a warrant). So, banking transactions are strong on authenticity, but weak on privacy1.

Authenticity is another way of classifying digital relationships. Many of the relationships we have are (or could be) ephemeral, relying more on what you have than who you are. For example, a movie ticket doesn't identify who you are but does identify what you are: one of N people allowed to occupy a seat in a specific theater at a specific time. You establish an ephemeral relationship with the ticket taker, she determines that your ticket is valid, and you're admitted to the theater. This relationship, unless the ticket taker knows you, is strong on privacy, weak on authenticity, and doesn't need much confidentiality either.

A credit card transaction is another interesting case that shows the complicated nature of privacy and authenticity in many relationships. To the merchant, a credit card says something about you, what you are (i.e., someone with sufficient credit to make a purchase) rather than who you are—strong on privacy, relatively weak on authenticity. To be sure, the merchant does have a permanent identifier for you (the card number) but is unlikely to be asked to use it outside the transaction.

But, because of KYC, you are well known to your bank, and the rules of the credit card network ensure that you can be identified by transaction for things like chargebacks and requests from law enforcement. So, this relationship has strong authenticity but weaker privacy guarantees.

The tradeoff between privacy and authenticity is informed by the Law of Justifiable Parties (see Kim Cameron's Laws of Identity) that says disclosures should be made only to entities who have a need to know. Justifiable Parties doesn't say everything should be maximally private. But it does say that we need to carefully consider our justification for increasing authenticity at the expense of privacy. Too often, digital systems opt for knowing who when they could get by with what. In the language we're developing here, they create authenticated, permanent relationships, at the expense of privacy, when they could use pseudonymous, ephemeral relationships and preserve privacy.

Trust is often given as the reason for trading privacy for authenticity. This is the result of a mistaken understanding of trust. What many interactions really need is confidence in the data being exchanged. As an example, consider how we ensure a patron at a bar is old enough to drink. We could create a registry and have everyone who wants to drink register with the bar, providing several identifying documents, including birthdate. Every time you order, you'd authenticate, proving you're the person who established the account and allowing the bar to prove who ordered what when. This system relies on who you are.

But this isn't how we do it. Instead, your relationship with the bar is ephemeral. To prove you're old enough to drink you show the bartender or server an ID card that includes your birthday. They don't record any of the information on the ID card, but rather use the birthday to establish what you are: a person old enough to drink. This system favors privacy over authenticity.

The bar use case doesn't require trust, the willingness to rely on someone else to perform actions on the trustor's behalf, in the ID card holder2. But it does require confidence in the data. The bar needs to be confident that the person drinking is over the legal age. Both systems provide that confidence, but one protects privacy, and the other does not. Systems that need trust generally need more authenticity and thus have less privacy.

In general, authenticity is needed in a digital relationship when there is a need for legal recourse and accountability. Different applications will judge the risk inherent in a relationship differently and hence have different tradeoffs between privacy and authenticity. I say this with some reticence since I know in many organizations, the risk managers are incredibly influential with business leaders and will push for accountability where it might not really be needed, just to be safe. I hope identity professionals can provide cover for privacy and the arguments for why confidence is often all that is needed.

Notes

1. Remember, this doesn't mean that banking transactions are public, but that others besides the participants can know who participated in the conversation.
2. The bar does need to trust the issuer of the ID card. That is a different discussion.

Photo Credit: Who's Who in New Zealand 228 from Schwede66 (CC BY-SA 4.0)

Tags: ssi identity privacy authenticity confidentiality

Monday, 07. March 2022

Phil Windley's Technometria

What is Privacy?


Summary: With privacy, we're almost never dealing with absolutes. Absolute digital privacy can be achieved by simply never using the Internet. But that also means being absolutely cut off from online interaction. Consequently, privacy is a spectrum, and we must choose where we should be on that spectrum, taking all factors into consideration.

Ask ten people what privacy is and you'll likely get twelve different answers. The reason for the disparity is that your feelings about privacy depend on context and your experience. Privacy is not a purely technical issue, but a human one. Long before computers existed, people cared about and debated privacy. Future U.S. Supreme Court Justice Louis Brandeis defined it as "the right to be let alone" in 1890.

Before the Web became a ubiquitous phenomenon, people primarily thought of privacy in terms of government intrusion. But the march of technological progress means that private companies probably have much more data about you than your government. Ad networks, and the valuation of platforms based on how well they display ads to their users, have led to the widespread surveillance of people, usually without them knowing the extent or consequences.

The International Association of Privacy Professionals (IAPP) defines four classes of privacy:

Bodily Privacy—The protection of a person's physical being and any invasion thereof. This includes practices like genetic testing, drug testing, or body cavity searches.

Communications Privacy—The protection of the means of correspondence, including postal mail, telephone conversations, electronic mail, and other forms of communication.

Information Privacy—The claim of individuals, groups, or organizations to determine for themselves when, how, and to what extent information about them is communicated to others.

Territorial Privacy—Placing limitations on the ability of others to intrude into an individual's environment. Environment can be more than just the home, including workplaces, vehicles, and public spaces. Intrusions of territorial privacy can include video surveillance or ID checks.

While Bodily and Territorial Privacy can be issues online, Communications and Information Privacy are the ones we worry about the most and the ones most likely to have a digital identity component. To begin a discussion of online privacy, we first need to be specific about what we mean when we talk about online conversations.

Each online interaction consists of packets of data flowing between parties. For our purposes, consider that a conversation. Even a simple Internet Control Message Protocol (ICMP) echo request packet is a conversation as we're defining it—the message needn't be meaningful to humans.

Conversations have content and they have metadata—the information about the conversation. In an ICMP echo, there's only metadata—the IP and ICMP headers1. The headers include information like the source and destination IP addresses, the TTL (time-to-live), type of message, checksums, and so on. In a more complex protocol, say SMTP for email, there would also be content—the message—in addition to the metadata.
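For instance, here is what such a minimal conversation looks like from .NET (an illustration, not part of the original post); everything of interest in it is metadata.

using System;
using System.Net.NetworkInformation;

class PingSketch
{
    static void Main()
    {
        // An ICMP echo exchange: no human-meaningful content, only metadata
        // such as addresses, status, and round-trip time.
        using var ping = new Ping();
        PingReply reply = ping.Send("example.com");
        Console.WriteLine($"{reply.Address} {reply.Status} {reply.RoundtripTime}ms");
    }
}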

Communication Privacy is concerned with metadata. Confidentiality is concerned with content2. Put another way, for a conversation to be private, only the parties to the conversation should know who the other participants are. More generally, privacy concerns the control of any metadata about an online conversation so that only parties to the conversation know the metadata.

Defined in this way, online privacy may appear impossible. After all, the Internet works by passing packets from router to router, all of which can see the source IP address and must know the destination IP address. Consequently, at the packet level, there's no online privacy.

But consider the use of TLS (Transport Layer Security) to create an encrypted web channel between the browser and the server. At the packet level, the routers will know (and the operators of the routers can know) the IP addresses of the encrypted packets going back and forth. If a third party can correlate those IP addresses with the actual participants, then the conversation isn't absolutely private.

But other metadata—the headers—is private. Beyond the host name and information needed to set up the TLS connection, all the rest of the headers are encrypted. This includes cookies and the URL path. So, someone eavesdropping on the conversation will know the server name, but not the specific place on the site the browser connected to. For example, suppose Alice visits Utah Valley University's Title IX office (where sexual misconduct, discrimination, harassment, and retaliation are reported) by pointing her browser at uvu.edu/titleix. With TLS an eavesdropper could know Alice connected to Utah Valley University, but not know that she connected to the web site for the Title IX office because the path is encrypted.
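To tie this to code, an ordinary HTTPS request behaves exactly this way; the example below is only an illustration, with the observable and encrypted pieces noted in comments.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class TlsMetadataSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Observable by an on-path eavesdropper: the DNS lookup and the server
        // name in the TLS handshake (uvu.edu), the IP addresses, timing, and sizes.
        // Encrypted inside the TLS channel: the path (/titleix), query string,
        // cookies, and all other request and response headers and bodies.
        var response = await client.GetAsync("https://www.uvu.edu/titleix/");
        Console.WriteLine((int)response.StatusCode);
    }
}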

Extending this example, we can easily see the difference between privacy and confidentiality. If the Title IX office were located at a subdomain of uvu.edu, say titleix.uvu.edu, then an eavesdropper would be able to tell that Alice had connected to the Title IX web site, even if the conversation were protected by a TLS connection. The content that was sent to Alice and that she sent back would be confidential, but the important metadata showing that Alice connected to the Title IX office would not be private.

This example introduces another important term to this discussion: authenticity. If Alice goes to uvu.edu instead of titleix.uvu.edu then an eavesdropper cannot easily establish the authenticity of who Alice is speaking to at UVU—there are too many possibilities. Depending on how easily correlated Alice's IP number is with Alice, an eavesdropper can't reliably authenticate Alice either. So, while Alice's conversation with the Title IX office through uvu.edu is not absolutely private, it is probably private enough because we can't easily authenticate the parties to the conversation from the metadata alone.

Information Privacy, on the other hand, is distinguished from Communications Privacy online because it usually concerns content, rather than metadata. When Alice connects with the Title IX office, to extend the example from the previous paragraph, she might transmit data to the office, possibly by filling out Web forms, or even just by authenticating, allowing the Web site to identify Alice and correlate other information with her. All of this is done inside the confidential channel provided by the TLS connection. But Alice will still be concerned about the privacy of the information she's communicated.

Information privacy quickly gets out of the technical realm and into policy. How will Alice's information be handled? Who will see it? Will it be shared? With whom and under what conditions? These are all policy questions that impact the privacy of information that Alice willingly shared. Information privacy is generally about who controls disclosure.

Communications Privacy often involves the involuntary collection of metadata—surveillance. Information Privacy usually involves policies and practices for handling data that has been voluntarily provided. Of course, there are places where these two overlap. Data created from metadata becomes personally identifying information (PII), subject to privacy concerns that might be addressed by policy. Still, the distinction between Communications and Information Privacy is useful.

The intersection of Communications and Information privacy is sometimes called Transactional3 or Social Privacy. Transactional privacy is worth exploring as a separate category because it is always evaluated in a specific context. Thus, it speaks to people's real concerns and their willingness to trade off privacy for a perceived benefit in a specific transaction. Transactional privacy concerns can be more transient.

The modern Web is replete with transactions that involve both metadata and content data. The risks of this data being used in ways that erode individual privacy are great. And because the mechanisms are obscure—even to Web professionals—people can't make good privacy decisions about the transactions they engage in. Transactional privacy is consequently an important lens for evaluating the privacy rights of people and the ways technology, policy, regulation, and the law can protect them.

With privacy, we're almost never dealing with absolutes. Absolute digital privacy can be achieved by simply never using the Internet. But that also means being absolutely cut off from online interaction. Consequently, privacy is a spectrum, and we must choose where we should be on that spectrum, taking all factors into consideration. Since confidentiality is easily achieved through encryption, we're almost always trading off privacy and authentication. More on that next week.

Notes

1. ICMP packets can have data in the packet, but it's optional and almost never set.
2. This distinction between privacy and confidentiality isn't often made in casual conversation, where people often say they want privacy when they really mean confidentiality.
3. I have seen the term "transactional privacy" used to describe the idea of people being sellers of their own data outright. That is not the sense in which I'm using it. I'm speaking more generally of the interactions that take place online.

Photo Credit: Hide and seek (2) from Ceescamel (CC BY-SA 4.0)

Tags: privacy confidentiality identity


reb00ted

Finding and sharing truth and lies: which of these 4 types are you?


Consider this diagram:

                         Sharing: lies    Sharing: truth
Trying to find: truth    Crook            Scientist
Trying to find: lies     Demagogue        Debunker

If I try to find the truth, but lie about what I’m telling others, I’m a crook.

If I try to find lies that “work” and tell them to others, I’m a demagogue.

If I try to find lies to expose them and share the truth, I’m a debunker of lies.

And if I try to find the truth, and share it, that makes me a scientist.

If so, we can now describe each one of those categories in more detail, and understand the specific behaviors they necessarily need to engage in.

For example, the scientist will welcome and produce copious objective evidence. The demagogue, likely, will provide far less evidence, and if so, point to other people and their statements as their evidence. Those other people are likely either other demagogues or just crooks.

If we could annotate people on-line with these four categories, we could even run a PageRank-style algorithm on it to figure out which is which. Why aren’t we doing this? Might this interfere with attention as the primary driver of revenue for “free” on-line services?
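As a sketch of what a PageRank-style pass over such an annotated graph could look like (simplified: accounts that cite nobody simply leak their rank here), each account's score flows to the accounts it points to as evidence.

using System;
using System.Collections.Generic;
using System.Linq;

class PageRankSketch
{
    // links[i] holds the accounts that account i cites as evidence.
    static double[] Rank(List<int>[] links, double damping = 0.85, int iterations = 50)
    {
        int n = links.Length;
        var rank = Enumerable.Repeat(1.0 / n, n).ToArray();

        for (int iter = 0; iter < iterations; iter++)
        {
            var next = Enumerable.Repeat((1 - damping) / n, n).ToArray();
            for (int i = 0; i < n; i++)
            {
                if (links[i].Count == 0) continue;
                double share = damping * rank[i] / links[i].Count;
                foreach (var target in links[i])
                {
                    next[target] += share;
                }
            }
            rank = next;
        }
        return rank;
    }

    static void Main()
    {
        // Toy graph: accounts 0 and 1 both cite account 2; account 2 cites nobody.
        var links = new[] { new List<int> { 2 }, new List<int> { 2 }, new List<int>() };
        Console.WriteLine(string.Join(", ", Rank(links).Select(r => r.ToString("F3"))));
    }
}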

P.S. Sorry for the click bait headline. It just lent itself so very well…

Sunday, 06. March 2022

Doc Searls Weblog

The frog of war


“Compared to war, all other forms of human endeavor shrink to insignificance. God help me, I do love it so.” — George S. Patton (in the above shot played by George C. Scott in his greatest role.)

Is the world going to croak?

Put in geological terms, will the Phanerozoic eon, which began with the Cambrian explosion a half billion years ago, end at the close of the Anthropocene epoch, when the human species, which has permanently put its mark on the Earth, commits suicide with nuclear weapons? This became a lot more plausible as soon as Putin rattled his nuclear saber.

Well, life will survive, even if humans do not. And that will happen whether or not the globe warms as much as the IPCC assures us it will. If temperatures in the climate of our current interglacial interval peak with both poles free of ice, the Mississippi river will meet the Atlantic at what used to be St. Louis. Yet life will abound, as life does, at least until the Sun gets so large and hot that photosynthesis stops and the phanerozoic finally ends. That time is about a half-billion years away. That might seem like a long time, but given the age of the Earth itself—about 4.5 billion years—life here is much closer to the end than the beginning.

Now let’s go back to human time.

I’ve been on the planet for almost 75 years, which in the grand scheme is a short ride. But it’s enough to have experienced history being bent some number of times. So far I count six.

First was on November 22, 1963, when John F. Kennedy was assassinated. This was when The Fifties actually ended and The Sixties began. (My great aunt Eva Quakenbush, née Searls or Searles—it was spelled both ways—told us what it was like when Lincoln was shot and she was 12 years old. “It changed everything,” she said. So did the JFK assassination.)

The second was the one-two punch of the Martin Luther King and Bobby Kennedy assassinations, on April 4 and June 6, 1968. The former was a massive setback for both the civil rights movement and nonviolence. And neither has fully recovered. The latter assured the election of Richard Nixon and another six years of the Vietnam war.

The third was the Internet, which began to take off in the mid-1990s. I date the steep start of the hockey-stick curve to April 30, 1995, when the last backbone within the Internet that had forbidden commercial traffic (NSFnet) shut down, uncorking a tide of e-commerce that is still rising.

The fourth was 9/11, in 2001. That suckered the U.S. into wars in Afghanistan and Iraq, and repositioned the country from the world’s leading peacekeeper to the world’s leading war-maker—at least until Russia stepped up.

The fifth was the Covid pandemic, which hit the world in early 2020 and is still with us, causing all sorts of changes, from crashes in supply chains to inflation to complete new ways for people to work, travel, vote, and think.

Sixth is the 2022 Russian invasion of Ukraine, which began on February 24, 2022, just eleven days ago as I write this.

Big a thing as this last bend is—and it’s huge—there are too many ways to make sense of it all:

The global struggle between democracy and autocracy
The real End of History
At last, EU gets it together
Putin the warlord
The man is nuts
Zelensky the hero
Russia about to collapse, maybe
WWIII
Ukraine will win
Hard to beat propaganda
Putin’s turning Russia into
It’ll get worse before it ends badly while we all do more than nothing but not enough
Whatever it is, social media is reporting it all
World War Wired
Russia does not have an out
The dawn of uncivilization

I didn’t list the threat of thermonuclear annihilation among the six big changes in history I’ve experienced because I was raised with it. Several times a year we would “duck and cover” under our desks when the school would set off air raid sirens. Less frequent than fire drills, these were far more scary, because we all knew we were toast, being just five miles by air from Manhattan, which was surely in the programmed crosshairs of one or more Soviet nukes.

Back then I put so little faith in adult wisdom, and its collective expression in government choices, that I had a bucket list of places I’d like to see before nuclear blasts or fallout doomed us all. My top two destinations were the Grand Canyon and California: exotic places for a kid whose farthest family venturings from New Jersey were to see relatives in North Carolina and North Dakota. (Of no importance but of possible interest is that I’ve now been a citizen of California for 37 years, married to an Angelino for 32 of those, and it still seems exotic to me. Mountains next to cities and beaches? A tradition of wildfires and earthquakes? Whoa.)

What’s around the corner we turned two Thursdays ago? Hard to tell, in spite of all that’s being said by Wise Ones in the links above. One thing I do know for sure: People have changed, because more and more of them are digital now, connected to anybody and anything at any distance, and able to talk, produce “content” and do business—and to look and think past national and territorial boundaries. We make our tools and then our tools make us, McLuhan taught. Also, all media work us over completely. We have been remade into digital beings by our wires, waves, and phones. This raises optionalities in too many ways to list.

I’m an optimist by nature, and since the ’90s have been correctly labeled a cyber-utopian. (Is there anything more utopian than The Cluetrain Manifesto?) To me, the tiny light at the end of Ukraine’s tunnel is a provisional belief that bad states—especially ones led by lying bastards who think nothing of wasting thousands or millions of innocent lives just to build an empire—can’t win World War Wired. Unless, that is, the worst of those bastards launches the first nuke and we all go “gribbit.”

Our challenge as a species, after we stop Russia’s land grab from becoming a true world war, is to understand fully how we can live and work in the Wired World as digital as well as physical beings.


Mike Jones: self-issued

Two new COSE- and JOSE-related Internet Drafts with Tobias Looker

This week, Tobias Looker and I submitted two individual Internet Drafts for consideration by the COSE working group. The first is “Barreto-Lynn-Scott Elliptic Curve Key Representations for JOSE and COSE“, the abstract of which is: This specification defines how to represent cryptographic keys for the pairing-friendly elliptic curves known as Barreto-Lynn-Scott (BLS), for use with […]

This week, Tobias Looker and I submitted two individual Internet Drafts for consideration by the COSE working group.

The first is “Barreto-Lynn-Scott Elliptic Curve Key Representations for JOSE and COSE“, the abstract of which is:


This specification defines how to represent cryptographic keys for the pairing-friendly elliptic curves known as Barreto-Lynn-Scott (BLS), for use with the key representation formats of JSON Web Key (JWK) and COSE (COSE_Key).

These curves are used in Zero-Knowledge Proof (ZKP) representations for JOSE and COSE, where the ZKPs use the CFRG drafts “Pairing-Friendly Curves” and “BLS Signatures“.

The second is “CBOR Web Token (CWT) Claims in COSE Headers“, the abstract of which is:


This document describes how to include CBOR Web Token (CWT) claims in the header parameters of any COSE structure. This functionality helps to facilitate applications that wish to make use of CBOR Web Token (CWT) claims in encrypted COSE structures and/or COSE structures featuring detached signatures, while having some of those claims be available before decryption and/or without inspecting the detached payload.

JWTs define a mechanism for replicating claims as header parameter values, but CWTs have been missing the equivalent capability to date. The use case is the same as that which motivated Section 5.3 of JWT “Replicating Claims as Header Parameters” – encrypted CWTs for which you’d like to have unencrypted instances of particular claims to determine how to process the CWT prior to decrypting it.
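As a minimal sketch of the replication idea only (this is my illustration, not code from the draft: it uses plain dictionaries and JWT-style string names for readability, whereas real CWTs and COSE structures use integer labels and CBOR via a proper COSE library):

using System;
using System.Collections.Generic;

class CwtHeaderReplicationSketch
{
    static void Main()
    {
        // Claims that will live inside the (eventually encrypted) CWT payload.
        var payloadClaims = new Dictionary<string, object>
        {
            ["iss"] = "https://issuer.example",  // hypothetical issuer
            ["sub"] = "subject-1234",
            ["exp"] = 1700000000,
        };

        // Header parameters that stay readable without decrypting the payload.
        var protectedHeader = new Dictionary<string, object>
        {
            ["alg"] = "placeholder-alg",
        };

        // Replicate just the claims a recipient needs for routing/processing decisions.
        foreach (var name in new[] { "iss", "exp" })
            protectedHeader[name] = payloadClaims[name];

        // A recipient can now inspect protectedHeader["iss"] to choose a key or policy
        // before (or without) decrypting the payload.
        Console.WriteLine(string.Join(", ", protectedHeader.Keys));
    }
}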

We plan to discuss both with the COSE working group at IETF 113 in Vienna.


Kyle Den Hartog

Convergent Wisdom

Convergent Wisdom is utilizing the knowledge gained from studying multiple solutions that approach a similar outcome in different ways in order to choose the appropriate solution for the problem at hand.

I was recently watching an MIT OpenCourseWare video on YouTube titled “Introduction to ‘The Society of Mind’”, which is a series of lectures (or, as the author refers to them, “seminars”) by Marvin Minsky. While watching the first episode of this course, the professor puts forth an interesting theory about what grants humans the capability to handle a variety of problems while machines remain limited in their capacity to generically compute solutions. In this theory he alludes to the concept that humans’ “resourcefulness” is what grants us this capability, which, to paraphrase, is the ability to leverage a variety of different paths to identify a variety of solutions to the same problem, all of which can then be applied in different situations to develop a solution to the generic problem at hand. While describing this theory he made an offhand comment about the choice of the word “resourcefulness”, wondering whether there was a shorter word for the concept.

This got me thinking about the linguistic precision needed to describe the concept, and I came across a very satisfying suggestion on Stack Exchange that does just that. It offered the word “equifinality”, which is incredibly precise, but also a bit of a pompous choice for a general audience, albeit great for the audience he was addressing. The second suggestion sent me down a tangent of thought that I find very enticing, though. “Convergent” is a word that’s commonly used to describe this in common tongue today and, more importantly, can be paired with wisdom to describe a new concept. I’m choosing to define the concept of “convergent wisdom” as utilizing the knowledge gained from studying multiple solutions that approach the same outcome in different ways in order to choose the appropriate solution for the problem at hand.

What’s interesting about the concept of convergent wisdom is that it suitably describes the feedback loop that humans exploit in order to gain the capability of generalizable problem solving. For example, in chemical synthesis the ability to understand the pathway for creating an exotic compound is nearly as important as the compound itself, because it can affect the feasibility of mass-producing the compound. Similarly, in manufacturing there are numerous instances of major discoveries (battery technology is the one that comes to mind first) which then fall short when it comes time to manufacture the product. In both of these cases the ability to understand the chosen path is nearly as important as the solution itself.

So why does this matter, and why define the concept? It seems incredibly important to the ability to build generically intelligent machines. Today, much of the artificial intelligence field focuses primarily on the outcome while treating the process as a hidden and unimportant afterthought, at least until the algorithm starts to produce ethically dubious outcomes as well.

Through the study of not only the inputs and outputs, but also the pathway by which the outcome is achieved, I believe the same feedback loop may be formed to produce generalizable computing in machines. Unfortunately, I’m no expert in this space and have tons of reading to do on the topic. So now that I’ve been able to describe and define the topic, can anyone point me to the area of study or academic literature which focuses on this aspect of AI?

Saturday, 05. March 2022

Just a Theory

How Goodreads Deleted My Account

Someone stole my Goodreads account; the company failed to recover it, then deleted it. It was all too preventable.

On 12:31pm on February 2, I got an email from Goodreads:

Hi David,

This is a notice to let you know that the password for your account has been changed.

If you did not recently reset or change your password, it is possible that your account has been compromised. If you have any questions about this, please reach out to us using our Contact Us form. Alternatively, visit Goodreads Help.

Since I had not changed my password, I immediately hit the “Goodreads Help” link (not the one in the email, mind you) and reported the issue. At 2:40pm I wrote:

I got an email saying my password had been changed. I did not change my password. I went to the site and tried go log in, but the login failed. I tried to reset my password, but got an email saying my email is not in the system.

So someone has compromised the account. Please help me recover it.

I also tried to log in, but failed. I tried the app on my phone, and had been logged out there, too.

The following day at 11:53am, Goodreads replied asking me for a link to my account. I had no idea what the link to my account was, and since I assumed that all my information had been changed by the attackers, I didn’t think to search for it.

Three minutes later, at 11:56, I replied:

No, I always just used the domain and logged in, or the iOS app. I’ve attached the last update email I got around 12:30 EST yesterday, in case that helps. I’ve also attached the email telling me my password had been changed around 2:30 yesterday. That was when I became aware of the fact that the account was taken over.

A day and a half later, at 5:46pm on the 4th, Goodreads support replied to say that they needed the URL in order to find it and investigate, and asked if I remembered the name on the account. This seemed odd to me, since until at least February 2nd it was associated with my name and email address.

I replied 3 minutes later at 5:49:

The name is mine. The username maybe? I’m usually “theory”, “itheory”, or “justatheory”, though if I set up a username for Goodreads it was ages ago and never really came up. Where could I find an account link?

Over the weekend I can log into Amazon and Facebook and see if I see any old integration messages.

The following day was Saturday the fifth. I logged into Facebook to see what I could find. I had deleted the link to Goodreads in 2018 (when I also ceased to use Facebook), but there was still a record of it, so I sent the link ID Facebook had. I also pointed out that my email address had been associated with the account for many years until it was changed on Feb 2. Couldn’t they find it in the history for the account?

I still didn’t know the link to my account, but forwarded the marketing redirect links that had been in the password change email, as well as an earlier email with a status on my reading activity.

After I sent the email, I realized I could ask some friends who I knew followed me on Goodreads to see if they could dig up the link. Within a few minutes my pal Travis had sent it to me, https://www.goodreads.com/user/show/7346356-david-wheeler. I was surprised, when I opened it, to see all my information there as I’d left it, no changes. I still could not log in, however. I immediately sent the link to Goodreads support (at 12:41pm).

That was the fifth. I did not hear back again until February 9th, when I was asked if I could provide some information about the account so they could confirm it was me. The message asked for:

Any connected apps or devices
Pending friend requests to your account
Any accounts linked to your Goodreads account (Goodreads accounts can be linked to Amazon, Apple, Google, and/or Facebook accounts)
The name of any private/secret groups of which you are a part
Any other account-specific information you can recall

Since I of course had no access to the account, I replied 30 minutes later with what information I could recall from memory: my devices, Amazon Kindle connection (Kindle would sometimes update my reading progress, though not always), membership in some groups that may or may not have been public, and the last couple books I’d updated.

Presumably, most of that information was public, and the devices may have been changed by the hackers. I heard nothing back. I sent followup inquiries on February 12th and 16th but got no replies.

On February 23rd I complained on Twitter. Four minutes later @goodreads replied and I started to hope there might be some progress again. They asked me to get in touch with Support again, which I did at 10:59am, sending all the previous information and context I could.

Then, at 12:38am, this bombshell arrived in my inbox from Goodreads support:

Thanks for your your patience while we looked into this. I have found that your account was deleted due to suspected suspicious activity. Unfortunately, once an account has been deleted, all of the account data is permanently removed from our database to comply with the data regulations which means that we are unable to retrieve your account or the related data. I know that’s not the news you wanted and I am sincerely sorry for the inconvenience. Please let me know if there’s anything else I can assist you with.

I was stunned. I mean of course there was suspicious activity, the account was taken over 19 days previously! As of the 5th when I found the link it still existed, and I had been in touch a number of times previously. Goodreads knew that the account had been reported stolen and still deleted it?

And no chance of recovery due to compliance rules? I don’t live in the EU, and even if I was subject to the GDPR or CCPA, there is no provision to delete my data unless I request it.

WTAF.

So to summarize:

Someone took control of my account on February 2
I reported it within hours
On February 5 my account was still on Goodreads
We exchanged a number of messages
By February 23 the account was deleted with no chance of recovery due to suspicious activity

Because of course there was suspicious activity. I told them there was an issue!

How did this happen? What was the security configuration for my account?

I created an entry for Goodreads in 1Password on January 5, 2012. The account may have been older than that, but for at least 10 years I’ve had it, and used it semi-regularly.
The password was 16 random ASCII characters generated by 1Password on October 27, 2018 (a rough sketch of that kind of generation follows this list). I create unique random passwords for all of my accounts, so it would not be found in a breached database (and I have updated all breached accounts 1Password has identified).
The account had no additional factors of authentication or fallbacks to something like SMS, because Goodreads does not offer them. There was only my email address and password.
On February 2nd someone changed my password. I had clicked no links in emails, so phishing is unlikely. Was Goodreads support social-engineered to let someone else change the password? How did this happen?
I exchanged multiple messages with Goodreads support between February 2 and 23rd, to no avail.
By February 23rd, my account was gone with all my reviews and reading lists.
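For what it’s worth, here is a rough sketch of the kind of generation a password manager does for an entry like that (an assumption for illustration, not 1Password’s actual algorithm, and the character set is arbitrary): 16 characters drawn from printable ASCII with a cryptographically secure RNG.

using System;
using System.Security.Cryptography;
using System.Text;

class PasswordSketch
{
    static void Main()
    {
        // Arbitrary printable-ASCII alphabet; a real manager lets you tune this.
        const string alphabet =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";

        var password = new StringBuilder();
        for (int i = 0; i < 16; i++)
        {
            // RandomNumberGenerator.GetInt32 uses a CSPRNG and avoids modulo bias.
            password.Append(alphabet[RandomNumberGenerator.GetInt32(alphabet.Length)]);
        }

        Console.WriteLine(password.ToString());
    }
}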

Unlike Nelson, whose account was also recently deleted without chance of recovery, I had not been making any backups of my data. It never occurred to me, perhaps because I never put a ton of effort into my Goodreads account; I mostly just tracked reading and wrote a few brief reviews. I’ll miss my reading list the most. I’ll have to start a new one on my own machines.

Through all this, Goodreads support was polite but not particularly responsive. Days and then weeks went by without response. The company deleted the account for suspicious activity and claims there is no path to recovery for the original owner. Clearly the company doesn’t give its support people the tools they need to adequately handle cases such as this.

I can think of a number of ways in which these situations can be better handled and even avoided. In fact, given my current job designing identity systems I’m going to put a lot of thought into it.

But sadly I’ll be trusting third parties less with my data in the future. Redundancy and backups are key, but so is adequate account protection. Letterboxd, for example, has no multifactor authentication features, making it vulnerable should someone decide it’s worthwhile to steal accounts to spam reviews or try to artificially pump up the scores for certain titles. I just made a backup.

You should, too, and back up your Goodreads account regularly. Meanwhile, I’m on the lookout for a new social reading site that supports multifactor authentication. But even with that, in the future I’ll post reviews here on Just a Theory and just reference them, at best, from social sites.

Update April 3, 2022: This past week, I finally got some positive news from Goodreads, two months after this saga began:

The Goodreads team would like to apologize for your recent poor experience with your account. We sincerely value your contribution to the Goodreads community and understand how important your data is to you. We have investigated this issue and attached is a complete file of your reviews, ratings, and shelvings.

And that’s it, along with some instructions for creating a new account and loading the data. Still no account recovery, so my old URL is dead and there is no information about my Goodreads friends. Still, I’m happy to at least have my lists and reviews recovered. I imported them into a new Goodreads account, then exported them again and imported them into my new StoryGraph profile.

More about… Security Goodreads Account Takeover Fail

Thursday, 03. March 2022

Doc Searls Weblog

Three thoughts about NFTs

There’s a thread in a list I’m on titled “NFTs are a Scam.” I know too little about NFTs to do more than dump here three thoughts I shared on the list in response to a post that suggested that owning digital seemed to be a mania of some kind. Here goes… First, from Walt Whitman, […]

There’s a thread in a list I’m on titled “NFTs are a Scam.” I know too little about NFTs to do more than dump here three thoughts I shared on the list in response to a post that suggested that owning digital seemed to be a mania of some kind. Here goes…

First, from Walt Whitman, who said he “could turn and live for awhile with the animals,” because,

They do not sweat and whine about their condition.
They do not lie awake in the dark and weep for their sins.
Not one is dissatisfied.
Not one is demented with the mania of owning things.

Second, the Internet is NEA, meaning,

No one owns it
Everyone can use it
Anyone can improve it

Kind of like the Universe that way.

What makes the Internet an inter-net is an agreement: that every network within it will pass packets from any one endpoint to any other, regardless of origin or destination. That agreement is a protocol: TCP/IP. Agreeing to use that protocol is like molecules agreeing to use gravity or the periodic table. Everything everyone does while operating or using the Internet is gravy atop TCP/IP. The Web is also NEA. So is email. Those are held together by simple protocols too.

Third is that the sure sign of a good idea is that it’s easy to do bad things with it. Look at email, which is 99.x% spam. Yet I’m writing one here and you’re reading it. NFTs are kind of like QR codes in the early days after the patent’s release to the world by Denso Wave early in this millennium. I remember some really smart people calling QR codes “robot barf.” Still, good things happened.

So, if bad things are being done with NFTs, that might be a good sign.

The image above is of a window into the barn that for several decades served the Crissman family in Graham, North Carolina. It was toward the back of their 17 acres of beautiful land there. I have many perfect memories of time spent on that land with my aunt, uncle, five cousins and countless visitors. The property is an apartment complex now, I’m told.


Mike Jones: self-issued

Minor Updates to OAuth DPoP Prior to IETF 113 in Vienna

The editors have applied some minor updates to the OAuth DPoP specification in preparation for discussion at IETF 113 in Vienna. Updates made were: Renamed the always_uses_dpop client registration metadata parameter to dpop_bound_access_tokens. Clarified the relationships between server-provided nonce values, authorization servers, resource servers, and clients. Improved other descriptive wording.

The editors have applied some minor updates to the OAuth DPoP specification in preparation for discussion at IETF 113 in Vienna. Updates made were:

Renamed the always_uses_dpop client registration metadata parameter to dpop_bound_access_tokens. Clarified the relationships between server-provided nonce values, authorization servers, resource servers, and clients. Improved other descriptive wording.

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-06.html

Wednesday, 02. March 2022

Here's Tom with the Weather

Good Paper on Brid.gy

I read Bridging the Open Web and APIs: Alternative Social Media Alongside the Corporate Web because it was a good opportunity to fill some holes in my knowledge about the Indieweb and Facebook. Brid.gy enables people to syndicate their posts from their own site to large proprietary social media sites. Although I don’t use it myself, I’m often impressed when I see all the Twitter “likes” and

I read Bridging the Open Web and APIs: Alternative Social Media Alongside the Corporate Web because it was a good opportunity to fill some holes in my knowledge about the Indieweb and Facebook.

Brid.gy enables people to syndicate their posts from their own site to large proprietary social media sites.

Although I don’t use it myself, I’m often impressed when I see all the Twitter “likes” and responses that are backfed by brid.gy to the canonical post on a personal website.

The paper details the challenging history of providing the same for Facebook (in which even Cambridge Analytica plays a part) and helped me appreciate why I never see similar responses from Facebook on personal websites these days.

It ends on a positive note…

while Facebook’s API shutdown led to an overnight decrease in Bridgy accounts (Barrett, 2020), other platforms with which Bridgy supports POSSE remain functional and new platforms have been added, including Meetup, Reddit, and Mastodon.

Monday, 28. February 2022

Randall Degges

Journaling: The Best Habit I Picked Up in 2021

2021 was a challenging year in many ways. Other than the global pandemic, many things changed in my life (some good, some bad), and it was a somewhat stressful year. In March of 2021, I almost died due to a gastrointestinal bleed (a freak accident caused by a routine procedure). Luckily, I survived the incident due to my amazing wife calling 911 at the right time and the fantastic paramedic

2021 was a challenging year in many ways. Other than the global pandemic, many things changed in my life (some good, some bad), and it was a somewhat stressful year.

In March of 2021, I almost died due to a gastrointestinal bleed (a freak accident caused by a routine procedure). Luckily, I survived the incident due to my amazing wife calling 911 at the right time and the fantastic paramedics and doctors at my local hospital, but it was a terrifying ordeal.

While I was in recovery, I spent a lot of time thinking about what I wanted to do when feeling better. How I wanted to spend the limited time I have left. There are lots of things I want to spend my time doing: working on meaningful projects, having fun experiences with family and friends, going on camping and hiking trips, writing, etc.

The process of thinking through everything I wanted to do was, in and of itself, incredibly cathartic. The more time I spent reflecting on my thoughts and life, the better I felt. There’s something magical about taking dedicated time out of your day to write about your thoughts and consider the big questions seriously.

Without thinking much about it, I found myself journaling every day.

It’s been just about a year since I first started journaling, and since then, I’ve written almost every day with few exceptions. In this time, journaling has made a tremendous impact on my life, mood, and relationships. Journaling has quickly become the most impactful of all the habits I’ve developed over the years.

Benefits of Journaling

There are numerous reasons to journal, but these are the primary benefits I’ve personally noticed after a year of journaling.

Journaling helps clear your mind.

I have a noisy inner monologue, and throughout the day, I’m constantly being interrupted by ideas, questions, and concerns. When I take a few minutes each day to write these thoughts down and think through them, it puts my brain at ease and allows me to relax and get them off my mind.

Journaling helps put things in perspective.

I’ve often found myself upset or frustrated about something, only to realize later in the day, while writing about it, how insignificant the problem is. The practice of writing things down brings a certain level of rationality to your thoughts that isn’t always immediately apparent.

I often discover that even the “big” problems in my life have obvious solutions I would never have noticed had I not journaled about them.

Journaling preserves memories.

My memory is terrible. If you asked me what I did last month, I’d have absolutely no idea.

Before starting a journal, the only way I could reflect on memories was to look through photos. The only problem with this is that often, while I can remember bits and pieces of what was going on at the time, I can’t remember everything.

As I’m writing my daily journal entry, I’ll include any relevant photos and jot down some context around them – I’ve found that looking back through these entries, with both pictures and stories, allows me to recall everything.

And… As vain as it is, I hope that someday I’ll be able to pass these journals along to family members so that, if they’re interested, they can get an idea of what sort of person I was, what I did, and the types of things I thought about.

Journaling helps keep your goals on track.

It’s really easy to set a personal goal and forget about it – I’ve done it hundreds of times. But, by writing every day, I’ve found myself sticking to my goals more than ever.

I think this boils down to focus. It would be hard for me to journal every day without writing about my goals and how I’m doing, and that little bit of extra focus and attention goes a long way towards helping me keep myself honest.

It’s fun!

When I started journaling last year, I didn’t intend to do it every day. It just sort of happened.

Each day I found myself wanting to write down some thought or idea, and the more I did it, the more I enjoyed it. Over time, I noticed that I found myself missing it on the few occasions I didn’t journal.

Now, a year in, I look forward to writing a small journal entry every day. It’s part of my wind-down routine at night, and I love it.

Keeping a Digital and Physical Journal

Initially, when I started keeping a journal, I had a few simple goals:

I wanted to be able to quickly write (and ideally include photos) in my journal
I wanted it to be easy to write on any device (phone, laptop, iPad, etc.)
I wanted some way to physically print my journal each year so that I could have a physical book to look back at any time I want – as well as to preserve the memories as digital stuff tends to disappear eventually

With these requirements in mind, I did a lot of research, looking for a suitable solution. I looked at various journaling services and simple alternatives (physical journals, Google Docs, Apple Notes, etc.).

In the end, I decided to start using the Day One Mac app (works on all Apple devices). I cannot recommend it highly enough if you’re an Apple user.

NOTE: I have no affiliation whatsoever with the Day One app. But it’s incredible.

The Day One app looks beautiful, syncs your journals privately using iCloud, lets you embed photos (and metadata) into entries in a stylish and simple way, makes it incredibly easy to have multiple journals (by topic), track down any entries you’ve previously created, and a whole lot more.

For me, the ultimate feature is the ability to easily create a beautiful looking physical journal whenever I want. Here’s a picture of my journal from 2021.

It’s a bound book with high-quality photos, layouts, etc. It looks astounding. You can customize the book’s cover, include select entries, and make a ton of other customizations I won’t expand on here.

So, my recommendation is that if you’re going to start a journal and want to print it out eventually, use the Day One app – it’s been absolutely 10⁄10 incredible.

Wednesday, 23. February 2022

MyDigitalFootprint

Ethics, maturity and incentives: plotting on Peak Paradox.

Ethics, maturity and incentives may not appear obvious or natural bedfellows.  However, if someone else’s incentives drive you, you are likely on a journey from immaturity to Peak Paradox.  A road from Peak Paradox towards a purpose looks like maturity as your own incentives drive you. Of note, ethics change depending on the direction of travel.   ---- In psychology, maturit
Ethics, maturity and incentives may not appear obvious or natural bedfellows.  However, if someone else’s incentives drive you, you are likely on a journey from immaturity to Peak Paradox.  A road from Peak Paradox towards a purpose looks like maturity as your own incentives drive you. Of note, ethics change depending on the direction of travel.  

----

In psychology, maturity can be operationally defined as the level of psychological functioning one can attain, after which the level of psychological functioning no longer increases with age.  Maturity is the state, fact, or period of being mature.

Whilst “immature” means not fully developed, or having an emotional or intellectual development appropriate to someone younger, I want to use immaturity simply as the state in which one is not yet fully mature.

Incentives are a thing that motivates or encourages someone to do something.

Peak Paradox is where you try to optimise for everything but cannot achieve anything as you do not know what drives you and are constantly conflicted. 

Ethics is a branch of philosophy that "involves systematising, defending, and recommending concepts of right and wrong behaviour".  Ethical and moral principles govern a person's behaviour.


The Peak Paradox framework is below.

 


When we start our life journey, we travel from being immature to mature.  Depending on your context, e.g. location, economics, social, political and legal, you will naturally be associated with one of the four Peak Purposes. It might not be extreme, but you will be framed towards one of them (bias).  This is the context you are in before determining your own purpose, mission or vision.  Being born in a place with little food and water, there is a natural affinity to survival.  Being born in a community that values everyone and everything, you will naturally align to a big society.  Born to the family of a senior leader in a global industry, you will be framed to a particular model.  Being a child of Murdoch, Musk, Zuckerberg, Trump, Putin, Rothschild, Gates,  etc. - requires assimilation to a set of beliefs. 

Children of celebrities break from their parents’ thinking, as have we, and as do our children. Political and religious chats with teenagers are always enlightening. As children, we travel from the contextual purpose we are born into and typically head towards Peak Paradox - on this journey, we are immature. (Note: it is possible to go the other way and become more extreme in purpose than your parents.) Later in life, and with the benefits of maturity, we observe this journey from the simplicity of binary choices (black-and-white ethics) towards a more nuanced mess at Peak Paradox. At Peak Paradox, we struggle to make sense of the different optimisations, drivers, purposes, incentives and rewards that others have. This creates anxiety, tension and conflict within us. During this journey from a given purpose to Peak Paradox, the incentives given to you are designed to maintain the original purpose, to keep you following that ideal. Incentives frame us and keep us in a model which is sometimes hard to break.

It is only when we live with complexity and are able to appreciate others' purposes, optimisations, and drivers that we will also gain clarity on our own passion, purpose or mission. By living with Peak Paradox, we change from being driven by others' incentives to uncovering our own affinity; this is where we start to align to what we naturally believe in and find what fits our skin. 

I have written before that we have to move from Peak Paradox towards a purpose if we want to have clarity of purpose and achieve something.

Enter Ethics

Suppose maturity is the transition from our actions being determined by others and following their ethical or moral code to determining what type of world or society we want to be part of. In that case, we need to think about the two journeys.  

I have framed the route from birth to Peak Paradox as a route from immaturity to living with complexity. On the route in, we live by others' moral and ethical codes and are driven by their incentives. As we leave Peak Paradox, we head to a place where we find less tension, conflict and anxiety, one that has a natural affinity to what we believe; there we create our own moral and ethical compass/code and become driven by our own motivations.

We should take a fresh perspective on ethics and first determine which direction someone is heading before we make a judgement.  This is increasingly important in #cyberethics and #digitalethics as we only see the point and have no ability to create a bearing or direction.

The purpose of Peak Paradox

As we head towards being mature, we move in and out of living at Peak Paradox from different “Purposes”; I am sure this is an iterative process. The purpose of Peak Paradox is to be in a place where you are comfortable with complexity, but it is not a place to stay. It is like a holiday home: good to go there now and again, but it does not represent life. The question we have is how we know when we are at Peak Paradox, probably because our north star has become a black hole! The key message here is that some never escape the Peak Paradox black hole, finding they live in turmoil, driven by others' incentives (that are designed to keep them there) and never finding their own passion, incentive or motivation. The complexity death vortex is where you endlessly search for a never-reachable explanation of everything, as it is all too interconnected and interdependent to unravel. Leaders come out from Peak Paradox knowing why they have a purpose and a direction.

The Journey

Imagine you are born into a celebrity household; over time you see the world is more complicated, and you work for a company believing that money, bonuses and incentives matter. Over time you come to understand the tensions such a narrow view brings. You search for something better, committing to living a simpler life, changing your ethics and moral code. This still creates tension, and you search for peace and harmony, where you find less tension and more alignment; when you arrive there, you have a unique code because you live with complexity and understand different optimisations. It does not scale, and few can grasp your message.


Where are you on the journey?  




Monday, 21. February 2022

Mike Jones: self-issued

Four Months of Refinements to OAuth DPoP

A new draft of the OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP) specification has been published that addresses four months’ worth of great review comments from the working group. Refinements made were: Added Authorization Code binding via the dpop_jkt parameter. Described the authorization code reuse attack and how dpop_jkt mitigates it. Enhanced […]

A new draft of the OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP) specification has been published that addresses four months’ worth of great review comments from the working group. Refinements made were:

Added Authorization Code binding via the dpop_jkt parameter.
Described the authorization code reuse attack and how dpop_jkt mitigates it.
Enhanced description of DPoP proof expiration checking.
Described nonce storage requirements and how nonce mismatches and missing nonces are self-correcting.
Specified the use of the use_dpop_nonce error for missing and mismatched nonce values.
Specified that authorization servers use 400 (Bad Request) errors to supply nonces and resource servers use 401 (Unauthorized) errors to do so (a rough client-side sketch follows this list).
Added a bit more about ath and pre-generated proofs to the security considerations.
Mentioned confirming the DPoP binding of the access token in the list in (#checking).
Added the always_uses_dpop client registration metadata parameter.
Described the relationship between DPoP and Pushed Authorization Requests (PAR).
Updated references for drafts that are now RFCs.
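To make the nonce-related items above concrete, here is a rough client-side sketch of the retry flow as I understand it (my illustration, not text or code from the draft; CreateDpopProof is a hypothetical helper that builds the DPoP proof JWT and embeds the server-supplied nonce when given one):

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class DpopNonceRetrySketch
{
    public static async Task<HttpResponseMessage> PostWithDpopAsync(
        HttpClient http,
        Uri tokenEndpoint,
        Func<string, string> createDpopProof,   // hypothetical: builds the DPoP proof JWT
        Func<HttpContent> bodyFactory)          // fresh request body per attempt
    {
        async Task<HttpResponseMessage> SendAsync(string nonce)
        {
            var request = new HttpRequestMessage(HttpMethod.Post, tokenEndpoint)
            {
                Content = bodyFactory()
            };
            request.Headers.Add("DPoP", createDpopProof(nonce));
            return await http.SendAsync(request);
        }

        // First attempt without a server-provided nonce.
        var response = await SendAsync(null);

        // An authorization server asks for a nonce with a 400 (Bad Request) carrying the
        // use_dpop_nonce error and a DPoP-Nonce header; a resource server would use 401.
        if (response.StatusCode == HttpStatusCode.BadRequest &&
            response.Headers.TryGetValues("DPoP-Nonce", out var nonces))
        {
            response = await SendAsync(nonces.First());
        }

        return response;
    }
}

A fuller client would also check that the error body actually says use_dpop_nonce before retrying, and would remember the latest nonce for subsequent requests.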

I believe this brings us much closer to a final version.

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-05.html

Monday, 21. February 2022

Damien Bod

Implementing authorization in Blazor ASP.NET Core applications using Azure AD security groups

This article shows how to implement authorization in an ASP.NET Core Blazor application using Azure AD security groups as the data source for the authorization definitions. Policies and claims are used in the application which decouples the descriptions from the Azure AD security groups and the application specific authorization requirements. With this setup, it is […]

This article shows how to implement authorization in an ASP.NET Core Blazor application using Azure AD security groups as the data source for the authorization definitions. Policies and claims are used in the application which decouples the descriptions from the Azure AD security groups and the application specific authorization requirements. With this setup, it is easy to support any complex authorization requirement and IT admins can manage the accounts independently in Azure. This solution will work for Azure AD B2C or can easily be adapted to use data from your database instead of Azure AD security groups if required.

Code: https://github.com/damienbod/AzureADAuthRazorUiServiceApiCertificate/tree/main/BlazorBff

Setup the AAD security groups

Before we start using the Azure AD security groups, the groups need to be created. I use Powershell to create the security groups. This is really simple using the Powershell AZ module with AD. For this demo, just two groups are created, one for users and one for admins. The script can be run from your Powershell console. You are required to authenticate before running the script and the groups are added if you have the rights. In DevOps, you could use a managed identity and the client credentials flow.

# https://theitbros.com/install-azure-powershell/
#
# https://docs.microsoft.com/en-us/powershell/module/az.accounts/connect-azaccount?view=azps-7.1.0
#
# Connect-AzAccount -Tenant "--tenantId--"
# AZ LOGIN --tenant "--tenantId--"

$tenantId = "--tenantId--"
$gpAdmins = "demo-admins"
$gpUsers = "demo-users"

function testParams {
    if (!$tenantId) {
        Write-Host "tenantId is null"
        exit 1
    }
}

testParams

function CreateGroup([string]$name) {
    Write-Host " - Create new group"
    $group = az ad group create --display-name $name --mail-nickname $name
    $gpObjectId = ($group | ConvertFrom-Json).objectId
    Write-Host " $gpObjectId $name"
}

Write-Host "Creating groups"

##################################
### Create groups
##################################

CreateGroup $gpAdmins
CreateGroup $gpUsers

#az ad group list --display-name $groupName

return

Once created, the new security groups should be visible in the Azure portal. You need to add group members or user members to the groups.

That’s all the configuration required to setup the security groups. Now the groups can be used in the applications.

Define the authorization policies

We do not use the security groups directly in the applications, because these can change a lot, or the application may be deployed to different host environments. The security groups are really just descriptions about the identity. How you use them is application specific and depends on the solution's business requirements, which tend to change a lot. In the applications, shared authorization policies are defined once and used in both the Blazor WASM and the Blazor Server parts. The definitions have nothing to do with the security groups; the groups get mapped to application claims. A Policies class was created for all the policies in the shared Blazor project because this is defined once, but used in both the server project and the client project. The code was built based on the excellent blog from Chris Sainty. The claims used for the authorization check have nothing to do with the Azure security groups; this logic is application specific, and sometimes different applications inside the same solution need to apply different authorization logic for how the security groups are used.

using Microsoft.AspNetCore.Authorization;

namespace BlazorAzureADWithApis.Shared.Authorization
{
    public static class Policies
    {
        public const string DemoAdminsIdentifier = "demo-admins";
        public const string DemoAdminsValue = "1";

        public const string DemoUsersIdentifier = "demo-users";
        public const string DemoUsersValue = "1";

        public static AuthorizationPolicy DemoAdminsPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoAdminsIdentifier, DemoAdminsValue)
                .Build();
        }

        public static AuthorizationPolicy DemoUsersPolicy()
        {
            return new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .RequireClaim(DemoUsersIdentifier, DemoUsersValue)
                .Build();
        }
    }
}

Add the authorization to the WASM and the server project

The policy definitions can now be added to the Blazor Server project and the Blazor WASM project. The AddAuthorization extension method is used to add the authorization to the Blazor server. The policy names can be anything you want.

services.AddAuthorization(options =>
{
    // By default, all incoming requests will be authorized according to the default policy
    options.FallbackPolicy = options.DefaultPolicy;
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

The AddAuthorizationCore method is used to add the authorization policies to the Blazor WASM client project.

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.Services.AddOptions();
builder.Services.AddAuthorizationCore(options =>
{
    options.AddPolicy("DemoAdmins", Policies.DemoAdminsPolicy());
    options.AddPolicy("DemoUsers", Policies.DemoUsersPolicy());
});

Now the application policies and claims are defined. The next job is to connect the Azure security definitions to the application authorization claims used by the authorization policies.

Link the security groups from Azure to the app authorization

This can be done using the IClaimsTransformation interface which gets called after a successful authentication. An application Microsoft Graph client is used to request the Azure AD security groups. The IDs of the Azure security groups are mapped to the application claims. Any logic can be added here which is application specific. If a hierarchical authorization system is required, this could be mapped here.

public class GraphApiClaimsTransformation : IClaimsTransformation
{
    private readonly MsGraphApplicationService _msGraphApplicationService;

    public GraphApiClaimsTransformation(MsGraphApplicationService msGraphApplicationService)
    {
        _msGraphApplicationService = msGraphApplicationService;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        ClaimsIdentity claimsIdentity = new();
        var groupClaimType = "group";
        if (!principal.HasClaim(claim => claim.Type == groupClaimType))
        {
            var objectidentifierClaimType = "http://schemas.microsoft.com/identity/claims/objectidentifier";
            var objectIdentifier = principal
                .Claims.FirstOrDefault(t => t.Type == objectidentifierClaimType);

            var groupIds = await _msGraphApplicationService
                .GetGraphUserMemberGroups(objectIdentifier.Value);

            foreach (var groupId in groupIds.ToList())
            {
                var claim = GetGroupClaim(groupId);
                if (claim != null)
                    claimsIdentity.AddClaim(claim);
            }
        }

        principal.AddIdentity(claimsIdentity);
        return principal;
    }

    private Claim GetGroupClaim(string groupId)
    {
        Dictionary<string, Claim> mappings = new Dictionary<string, Claim>()
        {
            { "1d9fba7e-b98a-45ec-b576-7ee77366cf10", new Claim(Policies.DemoUsersIdentifier, Policies.DemoUsersValue)},
            { "be30f1dd-39c9-457b-ab22-55f5b67fb566", new Claim(Policies.DemoAdminsIdentifier, Policies.DemoAdminsValue)},
        };

        if (mappings.ContainsKey(groupId))
        {
            return mappings[groupId];
        }

        return null;
    }
}
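For the transformation to run, it also has to be registered with dependency injection somewhere in the server startup; the post doesn’t show that step, so the exact lines below are an assumption about the sample project rather than a quote from it:

// Assumed registration (not shown in the post); IClaimsTransformation implementations
// are resolved from DI by ASP.NET Core after each successful authentication.
services.AddScoped<MsGraphApplicationService>();
services.AddScoped<IClaimsTransformation, GraphApiClaimsTransformation>();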

The MsGraphApplicationService class is used to implement the Microsoft Graph requests. This uses application permissions with a ClientSecretCredential. I use secrets which are read from an Azure Key vault. You need to implement rotation for this or make it last forever and update the secrets in the DevOps builds every time you deploy. My secrets are only defined in Azure and used from the Azure Key Vault. You could use certificates but this adds no extra security unless you need to use the secret/certificate outside of Azure or in app settings somewhere. The GetMemberGroups method is used to get the groups for the authenticated user using the object identifier.

public class MsGraphApplicationService
{
    private readonly IConfiguration _configuration;

    public MsGraphApplicationService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public async Task<IUserAppRoleAssignmentsCollectionPage> GetGraphUserAppRoles(string objectIdentifier)
    {
        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .AppRoleAssignments
            .Request()
            .GetAsync();
    }

    public async Task<IDirectoryObjectGetMemberGroupsCollectionPage> GetGraphUserMemberGroups(string objectIdentifier)
    {
        var securityEnabledOnly = true;
        var graphServiceClient = GetGraphClient();

        return await graphServiceClient.Users[objectIdentifier]
            .GetMemberGroups(securityEnabledOnly)
            .Request().PostAsync();
    }

    private GraphServiceClient GetGraphClient()
    {
        string[] scopes = new[] { "https://graph.microsoft.com/.default" };
        var tenantId = _configuration["AzureAd:TenantId"];

        // Values from app registration
        var clientId = _configuration.GetValue<string>("AzureAd:ClientId");
        var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret");

        var options = new TokenCredentialOptions
        {
            AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
        };

        // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
        var clientSecretCredential = new ClientSecretCredential(
            tenantId, clientId, clientSecret, options);

        return new GraphServiceClient(clientSecretCredential, scopes);
    }
}

The security groups are mapped to the application claims and policies. The policies can be applied in the application.

Use the Policies in the Server

The Blazor Server application implements secure APIs for the Blazor WASM client. The Authorize attribute is used with the policy definition. Now the user must be authorized using our definition to get data from this API. We also use cookies, because the Blazor application is secured using the BFF architecture, which has improved security compared to using tokens in the untrusted SPA.

[ValidateAntiForgeryToken]
[Authorize(Policy = "DemoAdmins", AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DemoAdminController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string>
        {
            "admin data",
            "secret admin record",
            "loads of admin data"
        };
    }
}

Use the policies in the WASM

The Blazor WASM application can also use the authorization policies. This is not really authorization but only usability because you cannot implement authorization in an untrusted application which you have no control of once it’s running. We would like to hide the components and menus which cannot be used, if you are not authorized. I use an AuthorizeView with a policy definition for this.

<div class="@NavMenuCssClass" @onclick="ToggleNavMenu">
    <ul class="nav flex-column">
        <AuthorizeView Policy="DemoAdmins">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demoadmin">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoAdmin
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>
        <AuthorizeView Policy="DemoUsers">
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="demouser">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> DemoUser
                    </NavLink>
                </li>
            </Authorized>
        </AuthorizeView>
        <AuthorizeView>
            <Authorized>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="graphprofile">
                        <span class="oi oi-list-rich" aria-hidden="true"></span> Graph Profile
                    </NavLink>
                </li>
                <li class="nav-item px-3">
                    <NavLink class="nav-link" href="" Match="NavLinkMatch.All">
                        <span class="oi oi-home" aria-hidden="true"></span> Home
                    </NavLink>
                </li>
            </Authorized>
            <NotAuthorized>
                <li class="nav-item px-3">
                    <p style="color:white">Please sign in</p>
                </li>
            </NotAuthorized>
        </AuthorizeView>
    </ul>
</div>

The Blazor UI pages should also use an Authorize attribute. This prevents an unhandled exception. You could add logic which forces a login with the required permissions, or just display an error page. This depends on the UI strategy.

@page "/demoadmin"
@using Microsoft.AspNetCore.Authorization
@inject IHttpClientFactory HttpClientFactory
@inject IJSRuntime JSRuntime
@attribute [Authorize(Policy = "DemoAdmins")]

<h1>Demo Admin</h1>

When the application is started, you will only see what you are allowed to see and, more importantly, only be able to get the data you are authorized for.

If you open a page where you have no access rights:

Notes:

This solution is very flexible and can work with any source of identity definitions, not just Azure security groups. I could very easily switch to a database. One problem with this is that with a lot of authorization definitions, the size of the cookie might get too big, and you would need to switch from using claims in the policy definitions to using a cache database or something similar. This would also be easy to adapt, because the claims are only mapped in the policies and the IClaimsTransformation implementation. Only the policies are used in the application logic.

Links

https://chrissainty.com/securing-your-blazor-apps-configuring-policy-based-authorization-with-blazor/

https://docs.microsoft.com/en-us/aspnet/core/blazor/security

Sunday, 20. February 2022

Here's Tom with the Weather

Saturday, 19. February 2022

Doc Searls Weblog

Clearing things up

Back in 2009 I shot the picture above from a plane flight on approach to SFO. On Flickr (at that link) the photo has had 16,524 views and has been faved 420 times as of now. Here’s the caption: These are salt evaporation ponds on the shores of San Francisco Bay, filled with slowly evaporating […]

Back in 2009 I shot the picture above from a plane flight on approach to SFO. On Flickr (at that link) the photo has had 16,524 views and has been faved 420 times as of now. Here’s the caption:

These are salt evaporation ponds on the shores of San Francisco Bay, filled with slowly evaporating salt water impounded within levees in former tidelands. There are many of these ponds surrounding the South Bay.

A series of microscopic life forms of different kinds and colors predominates in turn as the water evaporates. First comes green algae. Next brine shrimp predominate, turning the pond orange. Next, Dunaliella salina, a micro-alga containing high amounts of beta-carotene (itself with high commercial value), predominates, turning the water red. Other organisms can also change the hue of each pond. The full range of colors includes red, green, orange and yellow, brown and blue. Finally, when the water has evaporated, the white of salt alone remains. This is harvested with machines, and the process repeats.

Given the popularity of that photo and others I’ve shot like it (see here and here), I’ve wanted to make a large print of it to mount and hang somewhere. But there’s a problem: the photo was shot with a 2005-vintage Canon 30D, an 8.2 megapixel SLR with an APS-C (less than full frame) sensor, and an aftermarket zoom lens. It’s also a JPEG shot, which means it shows compression artifacts when you look closely or enlarge it a lot. To illustrate the problem, here’s a close-up of one section of the photo:

See how grainy and full of artifacts that is? Also not especially sharp. So that was an enlargement deal breaker.

Until today, that is, when my friend Marian Crostic, a fine art photographer who often prints large pieces, told me about Topaz Labs Gigapixel AI. I’ve tried image enhancing software before with mixed results, but on Marian’s word and an $80 price, I decided to give this one a whack. Here’s the result:

Color me impressed enough to think it’s worth sharing.

 

 

Friday, 18. February 2022

Identity Woman

Event Series: Making the Augmented Social Network Vision a Reality

This series began in November with Logging Off Facebook: What Comes Next? The 2nd event will be March 4th online Both events are going to be Open Space Technology for three sessions. We will co-create the agenda the opening hour. The 3rd Event will be April 1 online. Building on the previous one we will […] The post Event Series: Making the Augmented Social Network Vision a Reality appeared firs

This series began in November with Logging Off Facebook: What Comes Next? The 2nd event will be March 4th online. Both events are going to be Open Space Technology for three sessions. We will co-create the agenda in the opening hour. The 3rd Event will be April 1 online. Building on the previous one we will […]

The post Event Series: Making the Augmented Social Network Vision a Reality appeared first on Identity Woman.

Wednesday, 16. February 2022

Mike Jones: self-issued

JWK Thumbprint URI Draft Addressing Working Group Last Call Comments

Kristina Yasuda and I have published an updated JWK Thumbprint URI draft that addresses the OAuth Working Group Last Call (WGLC) comments received. Changes made were: Added security considerations about multiple public keys corresponding to the same private key. Added hash algorithm identifier after the JWK thumbprint URI prefix to make it explicit in a […]

Kristina Yasuda and I have published an updated JWK Thumbprint URI draft that addresses the OAuth Working Group Last Call (WGLC) comments received. Changes made were:

- Added security considerations about multiple public keys corresponding to the same private key.
- Added hash algorithm identifier after the JWK thumbprint URI prefix to make it explicit in a URI which hash algorithm is used (illustrated below).
- Added reference to a registry for hash algorithm identifiers.
- Added SHA-256 as a mandatory-to-implement hash algorithm to promote interoperability.
- Acknowledged WGLC reviewers.
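For illustration (an example constructed here from the RFC 7638 sample thumbprint, not text taken from the draft), a JWK Thumbprint URI with an explicit SHA-256 hash algorithm identifier takes roughly this shape; consult the draft itself for the normative prefix and syntax:

urn:ietf:params:oauth:jwk-thumbprint:sha-256:NzbLsXh8uDCcd-6MNwXF4W_7noWXFZAfHkxZsRGC9Xs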

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-01.html

Tuesday, 15. February 2022

MyDigitalFootprint

How do you recognise when your north star has become a black hole?

This post is about being lost — without realising it. source: https://earthsky.org/space/x9-47-tucanae-closest-star-to-black-hole/ I have my NorthStar, and I am heading for it, but somehow the gravitational pull of a black hole we did not know existed got me without realising it! I am writing about becoming lost on a journey as I emerge from working from home, travel restrictions, lockdowns and
This post is about being lost — without realising it.

source: https://earthsky.org/space/x9-47-tucanae-closest-star-to-black-hole/

I have my NorthStar, and I am heading for it, but somehow the gravitational pull of a black hole I did not know existed got me without my realising it! I am writing about becoming lost on a journey as I emerge from working from home, travel restrictions, lockdowns and masks, only to find that nothing has changed, and yet everything has changed.

The hope of a shake or wake-up call from something so dramatic as a global pandemic is immediately lost as we re-focus on how to pay for the next meal, drink, ticket, bill, rent, mortgage, school fee or luxury item. Have we become so wedded to an economic model that we cannot see we will not get to our imagined NorthStar?

I feel right now that I have gone into a cul-de-sac and cannot find the exit. The road I was following had a shortcut, but my journey planner assumed I was walking and could hop over the gate onto the public path, not the reality that I was in my car.

I wrote about “The New Fatigue — what is this all about?” back in Feb 2021. I could not pinpoint how I was productive, maintained fitness, and ate well, but something was missing — human contact and social and chemistry-based interactions. I posted a view about the 7 B’s and how we were responding to a global pandemic; we lost #belonging. I wrote more on this under a post about Isolation — the 8th deadly sin.

Where am I going with this? We want a radical change masked as a “New Normal, something better”, but we are already finding nothing has actually changed on the journey we have been on, and I am now questioning whether the bright north star I had has lost its sparkle.

I have used heuristics and rules to help me for the longest time; anyone on the neuro-diverse spectrum has to have them because without them surviving becomes exhausting. However, these shortcuts (when created and learnt) also mean I stopped questioning why. Now that the very fabric that set up my heuristics has changed, those rules don’t necessarily work or apply. We love a shortcut because it gets us out of trouble, we love the quick route because it works, we love an easy known trusted route because we don’t have to think. We use them all the time in business to prioritise. “What is the ROI on this?” In truth, we either don’t have the resources or cannot be bothered to spend the time to look in detail, so we use the blunt tool (ROI) to make a decision.

My tools don’t work (as well or at all)

I found my NorthStar with my tools. I was navigating to the north star with my tools. My tools did not tell me I was heading past a black hole that could suck me in. I am not sensing I am lost as my tools are not telling me; all the things we did pre-pandemic don’t work as well on the other side — but nothing other than feeling lost is telling me this. We have not gone back to everything working and still have not created enough solid ground to build new rules, so we are now lost, looking for a new NorthStar with tools that do not work.

Our shortcuts sucked us in and took away the concept that we need to dwell, be together, work it out, and take time. Our tools and shortcuts reduced our time frames and tricked us into thinking they would work forever. The great quote from Anthony Zhou below assumes you know where you are going. That is not true.

How do I recognise that my north star has become a black hole because my shortcuts and rules no longer work, creating fatigue I cannot describe, and I feel lost? There is a concept of anchor points in philosophy, and it is a cognitive bias. When you lose your anchor in a harbour, you drift (ignoring sea anchors for those who sail). The same can be said when you lose your own personal anchor points that have provided the grounding for your decision making. Routines and experience are not anchor points. But the pandemic looks to have cut the ties we had to anchor points, so we feel all somewhat lost and drifting. The harder we try to re-apply the old rules, the more frustrated we become that nothing works. Perhaps it is time to make some new art such that we can discover the new rules and find some new anchor points. Then, maybe I will feel less lost?


Tuesday, 08. February 2022

MyDigitalFootprint

Hostile environments going in the right direction; might be the best place to work?

Whilst our universe is full of hostile places, and they are engaging in their own right, I want to unpack the thinking and use naturally occurring hostile environments as an analogy to help unpack complex decision making in hostile to non-hostile work environments. ---- I enjoyed reading Anti-Fragile in 2013; it is the book about things that gain from disorder by Nassim Nicholas Taleb. "Some th

Whilst our universe is full of hostile places, and they are engaging in their own right, I want to unpack the thinking and use naturally occurring hostile environments as an analogy to help unpack complex decision making in hostile to non-hostile work environments.

----

I enjoyed reading Anti-Fragile in 2013; it is the book about things that gain from disorder by Nassim Nicholas Taleb. "Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better." When writing this, I have the same problem looking for a direct opposite of a Hostile Environment, as in an extreme ecosystem (ecology),  and whilst I have opted for a non-hostile environment, anti-hostile would be better.

In nature, a hostile or extreme environment is a habitat characterised by harsh environmental conditions beyond the optimal range for the development of humans: for example, pH 2 or 11, −20°C or 113°C, saturating salt concentrations, high radiation, or 200 bars of pressure. These hostile places are super interesting because life is possible in them, and it is from such places that life emerged. I will extend the thinking and use this as an analogy to help unpack complex decision making by comparing hostile to non-hostile, extreme to amicable.

 

Time apparently moves more slowly in a hostile environment than in a non-hostile one. Time, in this case, is about the period between changes. Change in a hostile environment is challenging and has to be done in tiny steps over a long period of time. Rapid change increases the risk of death and non-survival. To survive in a hostile environment, the living thing needs stability and resilience. The processes (chemistry) and methods (biology) must be finely adjusted to become incredibly efficient and effective - efficacy matters. Change is incremental and very slow. To survive, sharing is a better option; having facts (data) matters, you survive together, and there are few paradoxes. Of note is that hostile environments will become less hostile over time; the earth itself, which moved from an acidic creation to our current diversity, is worthy of note.

In a non-hostile environment, time can move fast between iteration/ adaptation.  The risk of a change leading to death is lower as the environment is tolerant of change.   The jumps can be more significant with fewer threats.  Because the environment has a wide tolerance, it is less sensitive to risk (5 sigma deviation is acceptable); therefore, you can have large scale automation, programmable algorithms and less finely tuned processes, methods and rules.  Innovation and change will come thick and fast as the environment is amiable and amicable.  The time between this innovation and adaptation is fast.   The environment creates more volatility, uncertainty, complexity and ambiguity.   Politics and power focus on control over survival.  The world is full of paradoxes.  Non-hostile environments will become more hostile. 

----

Yes, there are problems with all analogies, and this one breaks down; however, there is a principle here worth thinking about. In hostile environments, there are fewer paradoxes. I would argue that this is because survival is one driving purpose. Survival is one of the four opposing purposes in the peak-paradox model. In non-hostile environments, you can optimise for more than one thing; indeed you can have many purposes all competing, and this leads to many paradoxes. In work environments where senior leadership is unable to comprehend paradoxes, I also observe hostile environments (different to natural ones but just as toxic). Where I find teams that embrace VUCA, innovation and change, and can see the paradoxes in their data, facts and knowledge, I observe congenial, amenable, non-hostile and anti-hostile environments.

The key here is not this observation but direction. Knowing which camp you are in is essential, but so too is knowing the direction of travel. How to measure Peak-Hostility is going to be another post. Non-hostile places tend to become more hostile because of politics and power dynamics; dealing with paradoxes is hard work. Because they demand working together and clarity of purpose, hostile environments can become less hostile.

If we plot this thinking on the Peak Paradox framework, I believe it will be difficult to escape the dynamics of Peak Human Purpose (survival) until scarcity is resolved. At Peak Individual Purpose, the few will control, but this creates a hostile environment for the majority. At Peak Work, we observe fierce competition between the two camps, where hostile can win by focussing on costs, but non-hostile wins through innovation. At Peak Society Purpose, there is something unique, as non-hostile could lead to anti-hostile.

As to decision-making, what becomes critical is whether your decision-making processes match your (hostile/non-hostile) environment and direction, as they demand very different approaches. Hostile, in many ways, is more straightforward as there is a much more defined purpose to which decisions can be aligned. Non-hostile environments introduce paradoxes, optimisation and complexity into the processes, with many interested stakeholders. If there is a mismatch in methods, this can be destructive. Much more to think about.

 

 

 


Monday, 07. February 2022

Werdmüller on Medium

The web is a miracle

Not everything has to be a business. Continue reading on Medium »

Not everything has to be a business.

Continue reading on Medium »

Thursday, 03. February 2022

Altmode

Chasing Power Anomalies

Recently, we and a number of our neighbors have been noticing our lights flickering in the evening and early morning. While we have considered it to be mostly an annoyance, this has bothered some of our neighbors enough that they have opened cases with the utility and began raising the issue on our street mailing […]

Recently, we and a number of our neighbors have been noticing our lights flickering in the evening and early morning. While we have considered it to be mostly an annoyance, this has bothered some of our neighbors enough that they have opened cases with the utility and begun raising the issue on our street mailing list.

Pacific Gas and Electric (PG&E) responded to these customers with a visit, and in some cases replaced the service entrance cable to the home. In at least one case PG&E also said they might need to replace the pole transformer, which would take a few months to complete. I have heard no reports that these efforts have made any difference.

This isn’t our first recent challenge with voltage regulation in our neighborhood. Our most recent issue was a longer-term voltage regulation problem that occurred on hot days, apparently due to load from air conditioners and the fact that our neighborhood is fed by older 4-kilovolt service from the substation. This is different, and raised several questions:

- How local are the anomalies? Are neighbors on different parts of the street seeing the same anomalies, or are they localized to particular pole transformers or individual homes?
- What is the duration and nature of the anomalies? Are they only happening in the evening and early morning, or do we just notice them at these times?

To try to answer these questions, I found a test rig that I built several years ago when we were noticing some dimming of our lights, apparently due to neighbors’ air conditioners starting on summer evenings. The test rig consists of a pair of filament transformers: 110 volt to 6 volt transformers that were used in equipment with electronic tubes, which typically used 6 volts to heat the tube’s filament. The transformers are connected in cascade to reduce the line voltage to a suitable level for the line-in audio input on a computer. An open-source audio editing program, Audacity, is used to record the line voltage. I often joke that this is a very boring recording: mostly just a continuous 60 hertz tone.
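For a sense of scale (assuming nominal transformer ratios and a 120 V line; actual levels depend on the specific transformers and loading), two cascaded 110:6 transformers attenuate by roughly (6/110) squared:

# Rough attenuation estimate for the cascaded filament transformers (assumed values).
line_v = 120.0                 # assumed nominal US line voltage, RMS
ratio = 6.0 / 110.0            # nominal step-down ratio of each transformer
out_v = line_v * ratio ** 2    # two stages in cascade
print(f"≈ {out_v:.2f} V RMS presented to the line-in jack")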

At the same time, I started recording the times our lights flickered (or my uninterruptable power supply clicked, another symptom). I asked my neighbors to record when they see their lights flicker and report that back to me.

I created a collection of 24-hour recordings of the power line, and went looking for the reported power anomalies. It was a bit of a tedious process, because not everyone’s clocks are exactly synchronized. But I was successful in identifying several power anomalies that were observed by neighbors on opposite ends of the street (about three blocks). Here’s a typical example:

Typical power anomaly

As you can see, the problem is very short in duration, about 60 milliseconds or so.

I was getting a lot of flicker reports, and as I mentioned, searching for these anomalies was tedious. So I began looking at the analysis capabilities of Audacity. I noticed a Silence Finder plug-in and attempted to search for the anomalies using that tool. But Silence Finder is designed to find the kind of silence that one might find between tracks on an LP: very quiet for a second or so. Not surprisingly, Silence Finder didn’t find anything for me.

I noticed that Silence Finder is written in a specialized Lisp-like signal processing language known as Nyquist. So I had a look at the source code, which is included with Audacity, and was able to understand quite a bit of what was going on. For efficiency reasons, Silence Finder down-samples the input data so it doesn’t have to deal with as much data. In order to search for shorter anomalies, I needed to change that, as well as the user interface limits on minimum silence duration. Also, the amplitude of the silence was expressed in dB, which makes sense for audio but I needed more sensitivity to subtle changes in amplitude. So I changed the silence amplitude from dB to a linear voltage value.
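The plug-in itself lives in Audacity’s Nyquist environment, but the underlying idea is easy to sketch elsewhere. Here is a rough Python equivalent of the approach, not the author’s code: it scans the amplitude envelope of a recording for short dips below a linear threshold. The file name, window size and thresholds are all made-up values, and a 16-bit mono WAV is assumed.

# Rough sketch (not the author's Nyquist plug-in) of the "find short, shallow dips" idea.
# Assumes a 16-bit mono WAV of the recorded power line; names and thresholds are assumptions.
import wave
import numpy as np

WINDOW = 0.01      # envelope window in seconds
THRESHOLD = 0.05   # linear amplitude treated as a dip (not dB)
MIN_LEN = 0.02     # shortest dip to report, seconds
MAX_LEN = 0.5      # longest dip to report, seconds

with wave.open("powerline.wav", "rb") as w:          # hypothetical recording
    rate = w.getframerate()
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16) / 32768.0

# RMS envelope over short windows
win = int(rate * WINDOW)
trimmed = audio[: len(audio) // win * win]
rms = np.sqrt((trimmed.reshape(-1, win) ** 2).mean(axis=1))

# Locate runs of consecutive windows below the threshold
below = np.concatenate(([False], rms < THRESHOLD, [False]))
starts = np.where(~below[:-1] & below[1:])[0]
ends = np.where(below[:-1] & ~below[1:])[0]

for s, e in zip(starts, ends):
    duration = (e - s) * WINDOW
    if MIN_LEN <= duration <= MAX_LEN:
        print(f"possible glitch at {s * WINDOW:.2f}s lasting {duration * 1000:.0f} ms")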

The result was quite helpful. The modified plug-in, which I called “Glitch Finder”, was able to quite reliably find voltage anomalies. For example:

Power recording 1/29/2022-1/30/2022

The label track generated by Glitch Finder points out the location of the anomalies (at 17:05:12, 23:00:12, and 7:17:56 the next morning), although they’re not visible at this scale. Zoom in a few times and they become quite obvious:

Power anomaly at 1/30/2022 7:17:56

Thus far I have reached these tentative conclusions:

- The power problems are primarily common to the neighborhood, and unlikely to be caused by a local load transient such as plugging an electric car in.
- They seem to be concentrated mainly in the evening (4-11 pm) and morning (6-10 am). These seem to be times when power load is changing, due to heating, cooking, lighting, and home solar power systems going off and on at sunset and sunrise.
- The longer term voltage goes up or down a bit at the time of a power anomaly. This requires further investigation, but may be due to switching activity by the utility.

Further work

As usual, a study like this often raises new questions about as quickly as it answers questions. Here are a few that I’m still curious about.

- What is the actual effect on lights that causes people to notice these anomalies so easily? I currently have an oscilloscope connected to a photoelectric cell, set to trigger when the lights flash. It will be interesting to see how that compares with the magnitude of the anomaly.
- Do LED lights manifest this more than incandescent bulbs? It seems unlikely that such a short variation would affect the filament temperature of an incandescent bulb significantly.
- Do the anomalies correlate with any longer-term voltage changes? My test rig measures long-term voltage in an uncalibrated way, but the processing I’m currently doing doesn’t make it easy to look at longer-term voltage changes as well.

Wednesday, 02. February 2022

Moxy Tongue

Bureaucratic Supremacy

The fight against "Bureaucratic Supremacy" affects us all. Time for unity beyond the dysfunctional cult politics driving people apart from their independence. Words are thinking tools; used wrongly, contrived inappropriately, disseminated poorly, words can cause great harm to people and society. This is being demonstrated for all people to witness in 2020++ History is long; let the dead pas
The fight against "Bureaucratic Supremacy" affects us all. Time for unity beyond the dysfunctional cult politics driving people apart from their independence.
Words are thinking tools; used wrongly, contrived inappropriately, disseminated poorly, words can cause great harm to people and society. This is being demonstrated for all people to witness in 2020++
History is long; let the dead past bury its dead. You are here, now, and the structure of your participation in this life, the administration of your Rights in civil society, matters a great deal for the results both are capable of rendering.
"We the people" is an example of how words crafted by intent, can be manipulated over time to render outcomes out-of-step with their original intent. Once upon a time.. "people" was an easy word to define. An experiment in self-governance, unique in all the world and history, arrived because people made it so. Fast forward to 2022, and people no longer function as "people" under the law; in lieu of basic observable fact, a bureaucratic interpretation and abstraction of intent has been allowed to take root among people.
Oft confused in 2020++ with phrases like "white supremacy", or "Institutional racism", methods of administrative bureaucracy have taken a supreme role in defining and operationalizing Rights defined for "people". This "bureaucratic supremacy" has allowed abstraction of words like "people" to render a Government operated by bureaucrats, not under the authority of the people "of, by, for" whom it was originally conceived and instantiated, but instead under methods of processing bureaucratic intent. From the point-of-view of the oppressed historically, the bureaucracy has skin in the game, and domination is absolute. But, from the point-of-view of the unexperienced future derived "of, by, for" people, skin has nothing to do with it. History's leverage is one of administrative origin.
Pandemics will come and go in time; experiences of the administrative machinery that guarantees the integrity, security and continued self-governance of society by people should never be overlooked. Especially in the context of now "computational" Constitutional Rights for people, (not birth certificates - vaccination passport holders - or ID verification methods & artifacts poorly designed) where operational structure determines the integrity of operational run time results. Literature might say "Freedom of Speech" is a Right, but if the administrative system does not compute said Rights, then they cease to exist. 
"Bureaucratic Supremacy" has a predictable moat surrounding its practices; credentialed labor. Employees with labor certifications are only as useful as the validity of the credential under inspection and in practice. Administering the permission to be hired, work, contribute value and extend a meaningful voice into a civil system is easily sequestered if/when that credential is meaningless under inspection, and is only used as means of identifying bureaucratic compliance.
Bureaucratic supremacy is the direct result of bureaucratic compliance; people, functioning as "people" willing to cede their inalienable rights in exchange for a paycheck, yield a systemic approach to human management that often counters the initial intent and integrity of a system's existence. Often heard when something happens that lacks systemic integrity, "I was only doing my job" represents an output of bureaucratic fraud, whereby people claim plausible deniability of responsibility and accountability based on the structure of their working efforts. Corporate law is founded on the premise of "liability control", whereby a resulting bureaucracy allows real human choices to function as bureaucratic outcomes lacking any real direct human definition. People are no longer operating as "people" when abstracted by the law in such ways, and the world over, systems of bureaucracy with historic significance control and confuse the interpretation of results that such a system of labor induces.
Rooted at the origin of a self-governed civil society is an original act of human Sovereignty. In America, this act is writ large by John Hancock for a King's benefit, as well as every administrative bureaucracy the world will ever come to experience. People declare independence from bureaucracies by personal Sovereign authority. This is the root of Sovereign authority, and can never be provisioned by a bureaucracy. Bureaucratic supremacy is a perversion of this intent, and labor credentials make it so.
Where do "we" go from here?
People, Individuals all, is the only living reality of the human species that will ever actually exist. People, living among one another, never cease to function as Individuals, and any systemic process that uses a literary abstraction, or computational abstraction to induce "we" into a bureaucratic form is an aggressive act of fraud against Humanity, and the Sovereignty defined "of, by, for" such people.
In America, people are not the dog of their Government; self-governance is man's best friend. The order of operations in establishing such a "more perfect Union" is critical for its sustained existence. Be wary of listening to lifetime bureaucrats; they will speak with words that are no longer tools for human advancement, but instead, are designed to reinforce the "bureaucratic supremacy" of the authority derived by their labor credentials. Inspect those credentials directly to ensure they are legitimate, for false labor credentials are endemic.
Structure yields results; fraud by bureaucracy and Rights for people are juxtapositional and never exist in the same place at the same time. 
Think About It.. More Individuals in Civil Society Needed: https://youtu.be/KHbzSif78qQ

Tuesday, 01. February 2022

Here's Tom with the Weather

Although it makes a good point, the "False balance" article seems to accept the widely held assumption that Rogan is just "letting people voice their views" without interrupting them, but he did so recently with guest Josh Szeps to wrongly argue against covid myocarditis evidence.

Although it makes a good point, the "False balance" article seems to accept the widely held assumption that Rogan is just "letting people voice their views" without interrupting them, but he did so recently with guest Josh Szeps to wrongly argue against covid myocarditis evidence.

Monday, 31. January 2022

Identity Woman

Reality 2.0 Podcast: ID.me Vs. The Alternatives

I chatted with Katherine Druckman and Doc Searls of Reality 2.0 about the dangers of ID.me, a national identity system created by the IRS and contracted out to one private company, and the need for the alternatives, decentralized systems with open standards.  The post Reality 2.0 Podcast: ID.me Vs. The Alternatives appeared first on Identity Woman.

I chatted with Katherine Druckman and Doc Searls of Reality 2.0 about the dangers of ID.me, a national identity system created by the IRS and contracted out to one private company, and the need for the alternatives, decentralized systems with open standards. 

The post Reality 2.0 Podcast: ID.me Vs. The Alternatives appeared first on Identity Woman.

Sunday, 30. January 2022

Jon Udell

Life in the neighborhood

I’ve worked from home since 1998. All along I’ve hoped many more people would enjoy the privilege and share in the benefits. Now that it’s finally happening, and seems likely to continue in some form, let’s take a moment to reflect on an underappreciated benefit: neighborhood revitalization. I was a child of the 1960s, and … Continue reading Life in the neighborhood

I’ve worked from home since 1998. All along I’ve hoped many more people would enjoy the privilege and share in the benefits. Now that it’s finally happening, and seems likely to continue in some form, let’s take a moment to reflect on an underappreciated benefit: neighborhood revitalization.

I was a child of the 1960s, and spent my grade school years in a newly-built suburb of Philadelphia. Commuter culture was well established by then, so the dads in the neighborhood were gone during the day. So were some of the moms, mine included, but many were at home and were able to keep an eye on us kids as we played in back yards after school. And our yards were special. A group of parents had decided not to fence them, thus creating what was effectively a private park. The games we played varied from season to season but always involved a group of kids roaming along that grassy stretch. Nobody was watching us most of the time. Since the kitchens all looked out on the back yards, though, there was benign surveillance. Somebody’s mom might be looking out at any given moment, and if things got out of hand, somebody’s mom would hear that.

For most kids, a generation later, that freedom was gone. Not for ours, though! They were in grade school when BYTE Magazine ended and I began my remote career. Our house became an after-school gathering place for our kids and their friends. With me in my front office, and Luann in her studio in the back, those kids enjoyed a rare combination of freedom and safety. We were mostly working, but at any given moment we could engage with them in ways that most parents never could.

I realized that commuter culture had, for several generations, sucked the daytime life out of neighborhoods. What we initially called telecommuting wasn’t just a way to save time, reduce stress, and burn less fossil fuel. It held the promise of restoring that daytime life.

All this came back to me powerfully at the height of the pandemic lockdown. Walking around the neighborhood on a weekday afternoon I’d see families hanging out, kids playing, parents working on landscaping projects and tinkering in garages, neighbors talking to one another. This was even better than my experience in the 2000s because more people shared it.

Let’s hold that thought. Even if many return to offices on some days of the week, I believe and hope that we’ve normalized working from home on other days. By inhabiting our neighborhoods more fully on weekdays, we can perhaps begin to repair a social fabric frayed by generations of commuter culture.

Meanwhile here is a question to ponder. Why do we say that we are working from, and not at, home?


Randall Degges

How to Calculate the Energy Consumption of a Mac

I’m a bit of a sustainability nerd. I love the idea of living a life where your carbon footprint is neutral (or negative) and you leave the world a better place than it was before you got here. While it’s clear that there’s only so much impact an individual can have on carbon emissions, I like the idea of working to minimize my personal carbon footprint. This is a big part of the reason why

I’m a bit of a sustainability nerd. I love the idea of living a life where your carbon footprint is neutral (or negative) and you leave the world a better place than it was before you got here.

While it’s clear that there’s only so much impact an individual can have on carbon emissions, I like the idea of working to minimize my personal carbon footprint. This is a big part of the reason why I live in a home with solar power, drive an electric vehicle, and try to avoid single-use plastics as much as possible.

During a recent impact-focused hackathon at work (come work with me!), I found myself working on an interesting sustainability project. Our team’s idea was simple: because almost all Snyk employees work remotely using a Mac laptop, could we measure the energy consumption of every employee’s Mac laptop to better understand how much energy it takes to power employee devices, as well as the amount of carbon work devices produce?

Because we know (on average) how much carbon it takes to produce a single kilowatt-hour (kWh) of electricity in the US (0.85 pounds of CO2 emissions per kWh), if we could figure out how many kWh of electricity were being used by employee devices, we’d be able to do some simple math and figure out two things:

- How much energy is required to power employee devices
- How much carbon is being put into the atmosphere by employee devices

Using this data, we could then donate money to a carbon offsetting service to “neutralize” the impact of our employee’s work devices.
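As a rough sketch of that arithmetic (the 0.85 lb/kWh factor is the US-average figure quoted above; the kWh total is a made-up number):

# Back-of-the-envelope: measured energy -> CO2 emissions.
LB_CO2_PER_KWH = 0.85          # US-average figure quoted above
KG_PER_LB = 0.453592

measured_kwh = 12.5            # hypothetical total logged across employee laptops
co2_lb = measured_kwh * LB_CO2_PER_KWH
print(f"{measured_kwh} kWh ≈ {co2_lb:.2f} lb ({co2_lb * KG_PER_LB:.2f} kg) of CO2")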

PROBLEM: Now, would this be a perfectly accurate way of measuring the true carbon impact of employees? Absolutely not – there are obviously many things we cannot easily measure (such as the amount of energy of attached devices, work travel, food consumption, etc.), but the idea of being able to quantify the carbon impact of work laptops was still interesting enough that we decided to pursue it regardless.

Potential Energy Tracking Solutions

The first idea we had was to use smart energy monitoring plugs that employees could plug their work devices into while charging. These plugs could then store a tally of how much energy work devices consume, and we could aggregate that somewhere to get a total amount of energy usage.

I happen to have several of the Eve Energy smart plugs around my house (which I highly recommend if you use Apple’s HomeKit) that I’ve been using to track my personal energy usage for a while now.

While these devices are incredible (they work well, come with a beautiful app, etc.), unfortunately, they don’t have any sort of publicly accessible API you can use to extract energy consumption data.

We also looked into various other types of smart home energy monitoring plugs, including the Kasa Smart Plug Mini, which does happen to have an API.

Unfortunately, however, because Snyk is a global company with employees all over the world, hardware solutions were looking less and less appealing: to do what we wanted, we’d need to:

- Ship country-specific devices to each new and existing employee
- Include setup instructions for employees (how to configure the plugs, how to hook them up to a home network, etc.)
- Instruct employees to always plug their work devices into these smart plugs, which many people may forget to do

Is It Possible to Track Mac Energy Consumption Using Software?

When someone on the team proposed using software to track energy consumption, I thought it’d be a simple task. I assumed there were various existing tools we could easily leverage to grab energy consumption data. But boy, oh boy, I was wrong!

As it turns out, it’s quite complicated to figure out how many watt-hours of electricity your Mac laptop is using. To the best of my knowledge, there are no off-the-shelf applications that do this.

Through my research, however, I stumbled across a couple potential solutions.

Using Battery Metrics to Calculate Energy Consumption

The first idea I had was to figure out the size of the laptop’s battery (in milliamp-hours (mAh)), as well as how many complete discharge cycles the battery has been through (how many times the battery has been fully charged and discharged).

This information would theoretically allow us to determine how much energy a Mac laptop has ever consumed by multiplying the size of the battery in mAh by the number of battery cycles. We could then simply convert the number of mAh -> kWh using a simple formula.
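A sketch of that conversion with made-up numbers (it also assumes a nominal battery voltage, since mAh alone is a charge figure rather than an energy figure):

# Lifetime energy drawn from the battery ≈ capacity × nominal voltage × cycle count.
# All values are illustrative; real figures would come from ioreg.
capacity_mah = 5100        # hypothetical design capacity (mAh)
nominal_volts = 11.4       # hypothetical nominal pack voltage (V)
cycle_count = 250          # hypothetical completed charge/discharge cycles

wh_per_cycle = capacity_mah / 1000 * nominal_volts   # mAh -> Ah, then × V = Wh
total_kwh = wh_per_cycle * cycle_count / 1000        # Wh -> kWh
print(f"≈ {total_kwh:.1f} kWh drawn from the battery over its lifetime")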

After a lot of Google-fu and command-line scripting, I was able to get this information using the ioreg command-line tool, but in the process, I realized that there was a critical problem with this approach.

The problem is that while the variables I mentioned above will allow you to calculate the energy consumption of your laptop over time, when your laptop is fully charged and plugged into a wall outlet it isn’t drawing down energy from the battery – it’s using the electricity directly from your wall.

This means that the measuring approach above will only work if you never use your laptop while it is plugged into wall chargers – you’d essentially need to keep your laptop shut down while charging and only have it turned on while on battery power. Obviously, this is not very realistic.

Using Wall Adapter Information to Calculate Energy Consumption

After the disappointing battery research, I decided to take a different approach. What if there was a way to extract how much energy your laptop was pulling from a wall adapter?

If we were able to figure out how many watts of electricity, for example, your laptop was currently drawing from a wall adapter, we could track this information over time to determine the amount of watt-hours of electricity being consumed. We could then easily convert this number to kWh or any other desired measure.
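Concretely, with one sample per minute each reading contributes watts × (1/60) watt-hours; a sketch of that accumulation (the sample values are made up):

# Each one-minute wattage sample contributes (watts * 1/60) watt-hours.
samples_w = [8.4, 8.8, 9.2, 8.5, 7.5]        # hypothetical per-minute wattage readings
wh = sum(w / 60 for w in samples_w)
print(f"{wh:.3f} Wh over {len(samples_w)} minutes")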

And… After a lot of sifting through ioreg output and some help from my little brother (an engineer who helps build smart home electric panels), I was able to successfully extract the amount of watts being pulled from a plugged-in wall adapter! Woo!

The Final Solution: How to Calculate the Energy Consumption of Your Mac Using Software

After many hours of research and playing around, what I ended up building was a small shell script that parses through ioreg command-line output and extracts the amount of watts being pulled from a plugged-in wall adapter.

This shell script runs on a cron job once a minute, logging energy consumption information to a file. This file can then be analyzed to compute the amount of energy consumed by a Mac device over a given time period.
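For reference, a once-a-minute schedule is a single crontab entry along these lines (the script path is a placeholder, not the project’s actual install location):

* * * * * /Users/you/bin/mac-energy-logger.sh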

I’ve packaged this solution up into a small GitHub project you can check out here.

The command I’m using to grab the wattage information is the following:

/usr/sbin/ioreg -rw0 -c AppleSmartBattery | grep BatteryData | grep -o '"AdapterPower"=[0-9]*' | cut -c 16- | xargs -I % lldb --batch -o "print/f %" | grep -o '$0 = [0-9.]*' | cut -c 6-

Here it is broken down with a brief description of what these commands are doing:

/usr/sbin/ioreg -rw0 -c AppleSmartBattery |   # retrieve power data
  grep BatteryData |                          # filter it down to battery stats
  grep -o '"AdapterPower"=[0-9]*' |           # extract adapter power info
  cut -c 16- |                                # extract power info number
  xargs -I % lldb --batch -o "print/f %" |    # convert power info into an IEEE 754 float
  grep -o '$0 = [0-9.]*' |                    # extract only the numbers
  cut -c 6-                                   # remove the formatting

The output of this command is a number which is the amount of watts currently being consumed by your laptop (I verified this by confirming it with hardware energy monitors). In order to turn this value into a usable energy consumption metric, you have to sample it over time. After thinking this through, here was the logging format I came up with to make tracking energy consumption simple:

timestamp=YYYY-MM-DDTHH:MM:SSZ wattage=<num> wattHours=<num> uuid=<string>

This format allows you to see:

- The timestamp of the log
- The amount of watts being drawn from the wall at the time of measurement (wattage)
- The number of watt hours consumed at the time of measurement (wattHours), assuming this measurement is taken once a minute, and
- The unique Mac UUID for this device. This is logged to help with deduplication and other statistics in my case.

Here’s an example of what some real-world log entries look like:

timestamp=2022-01-30T23:41:00Z wattage=8.37764739 wattHours=.13962745650000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:42:01Z wattage=8.7869072 wattHours=.14644845333333333333 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:43:00Z wattage=9.16559505 wattHours=.15275991750000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:44:00Z wattage=8.49206352 wattHours=.14153439200000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999
timestamp=2022-01-30T23:45:00Z wattage=7.45262718 wattHours=.12421045300000000000 uuid=EDD819A5-1409-5797-9BE4-22EAAC75D999

To sum up the amount of energy consumption over time, you can then parse this log file and sum up the wattHours column over a given time period. Also, please note that the script I wrote will NOT log energy consumption data to the file if there is no energy being consumed (aka, your laptop is not plugged into a wall adapter).
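A minimal sketch of that summing step (the log path is a placeholder; it assumes the key=value format shown above):

# Sum the wattHours column for one day of the energy log.
total_wh = 0.0
with open("energy.log") as log:                      # placeholder path
    for line in log:
        fields = dict(part.split("=", 1) for part in line.split())
        if fields.get("timestamp", "").startswith("2022-01-30"):
            total_wh += float(fields["wattHours"])
print(f"2022-01-30: {total_wh:.2f} Wh ({total_wh / 1000:.4f} kWh)")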

PROBLEMS: While this is the final solution we ended up going with, it still has one fatal flaw: this approach only works if the script is run once a minute. This means that if your laptop is shut down or sleeping and this code is not running, there will be no way to log energy consumption data.

What I Learned About Tracking Energy Consumption on Macs

While building our short sustainability-focused hackathon project, I learned a lot about tracking energy consumption on Macs.

- Your laptop doesn’t always use its battery as a power source, so tracking battery metrics is not an ideal solution
- It’s possible to track energy consumption by measuring the draw from wall adapters, although this approach isn’t perfect as it requires your computer to be on and running code on a regular interval
- While using hardware energy trackers isn’t convenient in our case, this is certainly the simplest (and probably the best) option for personal energy tracking

If you’d like to see the software-based energy tracking solution I built, please check it out on GitHub.

I’m currently in the process of following up with Snyk’s IT department to see if this is something we could one day roll out automatically to employee devices. I still think it would be incredibly interesting to see a central dashboard of how much energy Snyk employees are using to “power” their work, and what that amount of carbon looks like.

PS: The creation of this blog post took precisely 19.972951810666647 watt-hours of electricity and generated .016977009039067 pounds of CO2.

Saturday, 29. January 2022

Mike Jones: self-issued

Working Group Adoption of the JWK Thumbprint URI Specification

The IETF OAuth working group has adopted the JWK Thumbprint URI specification. The abstract of the specification is: This specification registers a kind of URI that represents a JSON Web Key (JWK) Thumbprint value. JWK Thumbprints are defined in RFC 7638. This enables JWK Thumbprints to be used, for instance, as key identifiers in contexts […]

The IETF OAuth working group has adopted the JWK Thumbprint URI specification. The abstract of the specification is:

This specification registers a kind of URI that represents a JSON Web Key (JWK) Thumbprint value. JWK Thumbprints are defined in RFC 7638. This enables JWK Thumbprints to be used, for instance, as key identifiers in contexts requiring URIs.

The need for this arose during specification work in the OpenID Connect working group. In particular, JWK Thumbprint URIs are used as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs in the Self-Issued OpenID Provider v2 specification.

Given that the specification does only one simple thing in a straightforward manner, we believe that it is ready for working group last call.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-jwk-thumbprint-uri-00.html

Aaron Parecki

Stream a USB webcam to HDMI on a Raspberry Pi

This post exists to collect my notes on displaying a USB webcam on the Raspberry Pi HDMI outputs. This is not the same as streaming the webcam (easy), and this is not for use with the Raspberry Pi camera module. This is specifically for USB UVC webcams.

This post exists to collect my notes on displaying a USB webcam on the Raspberry Pi HDMI outputs. This is not the same as streaming the webcam (easy), and this is not for use with the Raspberry Pi camera module. This is specifically for USB UVC webcams.

Note: Do not actually do this, it's terrible.

Install Raspberry Pi OS Lite, you don't want the full desktop environment.

Once you boot the Pi, install VLC and the X windows environment:

sudo apt install vlc xinit

Configure your Pi to boot to the command line already logged in, using the tool raspi-config.

Create the file ~/.bash_profile with the following contents which will start X on boot:

if [ -z "$DISPLAY" ] && [ "$(tty)" = "/dev/tty1" ]
then
startx
fi

Create the file ~/.xinitrc to launch VLC streaming the webcam when X launches:

#!/bin/bash
cvlc v4l2:// :v4l2-dev=/dev/video0

Now you can reboot the Pi with a webcam plugged in and you'll get a full screen view of the camera.

If your webcam isn't recognized when it first boots up, you'll need to quit VLC and start it again. You can quit by pressing ctrl-Q, then type startx to restart it after you plug the camera back in. If that doesn't work, you might have to SSH in and kill the process that way.

There are many problems with this approach:

- It seems VLC is not hardware accelerated so there is pretty bad tearing of the image
- Sometimes the webcam isn't recognized when the Pi boots up and I have to unplug it and plug it back in when it boots and restart the script
- The image tearing and stuttering is completely unusable for pretty much anything

Do you know of a better solution? Let me know!

So far I haven't found anything that actually works, and I've searched all the forums and tried all the solutions with guvcview and omxplayer with no luck so far.

For some other better solutions, check out my blog post and video How to Convert USB Webcams to HDMI.


Werdmüller on Medium

Surfing the stress curve

Using the Yerkes-Dodson Law to craft a better, calmer life. Continue reading on Medium »

Using the Yerkes-Dodson Law to craft a better, calmer life.

Continue reading on Medium »


Hans Zandbelt

OpenID Connect for Oracle HTTP Server

Over the past years ZmartZone enabled a number of customers to migrate their Single Sign On (SSO) implementation from proprietary Oracle HTTP Server components to standards-based OpenID Connect SSO. Some observations about that: Oracle Webgate and mod_osso are SSO plugins … Continue reading →

Over the past years ZmartZone enabled a number of customers to migrate their Single Sign On (SSO) implementation from proprietary Oracle HTTP Server components to standards-based OpenID Connect SSO. Some observations about that:

- Oracle Webgate and mod_osso are SSO plugins (aka agents) for the Oracle HTTP Server (OHS) that implement a proprietary (Oracle) SSO/authentication protocol that provides authentication (only) against Oracle Access Manager
- the said components are closed-source implementations owned by Oracle
- these components leverage a single domain-wide SSO cookie, which has known security drawbacks, especially in today's distributed and delegated (cloud and hybrid) application landscape, see here
- ZmartZone supports builds of mod_auth_openidc that can be used as plugins into Oracle HTTP Server (11 and 12), thus implementing standards-based OpenID Connect for OHS with an open source component
- those builds are a drop-in replacement into OHS that can even be used to set the same headers as mod_osso/Webgate does/did
- mod_auth_openidc can be used to authenticate to Oracle Access Manager but also to (both commercial and free) alternative Identity Providers such as PingFederate, Okta, Keycloak etc.
- when required, Oracle HTTP Server can be replaced with stock Apache HTTPd
- the Oracle HTTP Server builds of mod_auth_openidc come as part of a light-weight commercial support agreement on top of the open source community support channel

In summary: modern OpenID Connect-based SSO for Oracle HTTP Server can be implemented with open source mod_auth_openidc following a fast, easy and lightweight migration plan.
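To give a flavour of what the Apache/OHS side of such a migration involves, a minimal mod_auth_openidc configuration follows this general pattern; all values below are placeholders, and real deployments will add directives, for example to reproduce the exact headers mod_osso/Webgate used to set:

OIDCProviderMetadataURL https://idp.example.com/.well-known/openid-configuration
OIDCClientID            ohs-app
OIDCClientSecret        <client-secret>
OIDCRedirectURI         https://app.example.com/secure/redirect_uri
OIDCCryptoPassphrase    <random-passphrase>

<Location /secure>
   AuthType openid-connect
   Require valid-user
</Location>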

See also:
https://hanszandbelt.wordpress.com/2021/10/28/mod_auth_openidc-vs-legacy-web-access-management
https://hanszandbelt.wordpress.com/2019/10/23/replacing-legacy-enterprise-sso-systems-with-modern-standards/

Friday, 28. January 2022

Identity Woman

Exploring Social Technologies for Democracy with Kaliya Young, Heidi Nobuntu Saul, Tom Atlee

We see democracy as ideally a process of co-creating the conditions of our shared lives, solving our collective problems, and learning about life from and with each other. Most of the social technologies for democracy we work with are grounded in conversation – discussion, dialogue, deliberation, choice-creating, negotiation, collective visioning, and various forms of council, […] The post Explo

We see democracy as ideally a process of co-creating the conditions of our shared lives, solving our collective problems, and learning about life from and with each other. Most of the social technologies for democracy we work with are grounded in conversation – discussion, dialogue, deliberation, choice-creating, negotiation, collective visioning, and various forms of council, […]

The post Exploring Social Technologies for Democracy with Kaliya Young, Heidi Nobuntu Saul, Tom Atlee appeared first on Identity Woman.

Monday, 24. January 2022

Jon Udell

Remembering Diana

The other day Luann and I were thinking of a long-ago friend and realized we’d forgotten the name of that friend’s daughter. Decades ago she was a spunky blonde blue-eyed little girl; we could still see her in our minds’ eyes, but her name was gone. “Don’t worry,” I said confidently, “it’ll come back to … Continue reading Remembering Diana

The other day Luann and I were thinking of a long-ago friend and realized we’d forgotten the name of that friend’s daughter. Decades ago she was a spunky blonde blue-eyed little girl; we could still see her in our minds’ eyes, but her name was gone.

“Don’t worry,” I said confidently, “it’ll come back to one of us.”

Sure enough, a few days later, on a bike ride, the name popped into my head. I’m sure you’ve had the same experience. This time around it prompted me to think about how that happens.

To me it feels like starting up a background search process that runs for however long it takes, then notifies me when the answer is ready. I know the brain isn’t a computer, and I know this kind of model is suspect, so I wonder what’s really going on.

– Why was I so sure the name would surface?

– Does a retrieval effort kick off neurochemical change that elaborates over time?

– Before computers, what model did people use to explain this phenomenon?

So far I’ve only got one answer. That spunky little girl was Diana.


Hyperonomy Digital Identity Lab

Trusted Digital Web (TDW2022): Characteristic Information Scopes

Figure 1. Trusted Digital Web (TDW2022): Characteristic Information Scopes (based on the Social Evolution Model)

Sunday, 23. January 2022

Moxy Tongue

Rough Seas Ahead People

The past is dead.  You are here now. The future will be administered. Data is not literature, it is structure. Data is fabric. Data is blood. Automated data will compete with humans in markets, governments, and all specialty fields of endeavor that hold promise for automated systems to function whereas.  Whereas human; automated human process. Automate human data extraction. Au
The past is dead. 
You are here now.
The future will be administered. Data is not literature, it is structure. Data is fabric. Data is blood. Automated data will compete with humans in markets, governments, and all specialty fields of endeavor that hold promise for automated systems to function whereas. 
Whereas human; automated human process. Automate human data extraction. Automate human data use.
I am purposefully vague -> automate everything that can be automated .. this is here, now.
What is a Constitution protecting both "Human Rights" and "Civil Rights"? 
From the view of legal precedent and human intent actualized, it is a document, a work of literary construct, and its words are utilized to determine meaning in legal concerns where the various Rights of people are concerned. Imperfect words of literature, implemented in their time and place. And of those words, a Governing system of defense for the benefit "of, by, for" the people Instituting such Governance.
This is the simple model, unique in the world, unique in history as far as is known to storytellers the world over. A literary document arriving here and now as words being introduced to their data manifestations. Data loves words. Data loves numbers. Data loves people the most. Why?
Data is "literally" defined as "data" in relation to the existence of Humanity. That which has no meaning to Humanity is not considered "data" being utilized as such. Last time I checked, Humanity did not know everything, yet. Therefore much "data" has barely been considered as existing, let alone being understood in operational conditions called "real life", or "basic existence" by people. 
This is our administrative problem; words are not being operationalized accurately as data. The relationship between "words" and "data" as operational processes driving the relationship between "people" and "Government Administration" has not been accurately structured. In other words, words are not being interpreted as data accurately enough, if at all.
A governed system derived "of, by, for" the people creating and defending such governed process, has a basic starting point. It seems obvious, but many are eager to acquiesce to something else upon instantiation of a service relationship, when easy or convenient enough, so perhaps "obvious" is just a word. "Of, By, For" people means that "Rights" are for people, not birth certificates. 
Consider how you administer your own life. Think back to last time you went to the DMV. Think back to last time you filed taxes and something went wrong that you needed to fix. Think back to when you registered your child for kindergarten. Think back to the last time you created an online bank account. 
While you are considering these experiences, consider the simultaneous meaning created by the words "of, by, for" and whether any of those experiences existed outside of your Sovereign Rights as a person.
Humanity does not come into existence inside a database. The American Government does not come into authority "of, by, for" database entries. 
Instead, people at the edges of society, in the homes of our towns derive the meaning "of, by, for" their lawful participation. Rights are for people, not birth certificates. People prove birth certificates, birth certificates do not prove people. If an administrative process follows the wrong "administrative precedent" and logic structure, then "words" cease meaning what they were intended to mean.
This words-to-data sleight of hand is apparently easy to run on people. The internet, an investment itself of Government created via DARPA and made public via NSF, showcases daily the mis-construed meaning of "words" as "data". People are being surveilled, tracked and provisioned access to services based on having their personal "ID:DATA" leveraged. In some cases, such as the new ID.me services being used at Government databases, facial scans are being correlated to match people as "people" operating as "data". The methods used defy "words" once easily accessible, and have been replaced by TOSDR higher up the administrative supply chain as contracts of adhesion.
Your root human rights, the basic meaning of words with Constitutional authority to declare war upon the enemies of a specific people in time, have been usurped, and without much notice, most all people have acquiesced to the "out-of-order" administrative data flows capturing their participation. Freedom can not exist on such an administrative plantation, whereby people are captured as data for use by 2nd and 3rd parties without any root control provided to the people giving such data existence and integrity.
People-backwards-authority will destroy this world. America can not be provisioned from a database. People possess root authority in America. America is the leader of the world, and immigrants come to America because "people possess root authority" in America. "Of, By, For" People in America, this is the greatest invention of America. Owning your own authority, owning root authority as a person expressing the Sovereign structure of your Rights as a person IS the greatest super power on planet Earth.
The American consumer marketplace is born in love with the creative spirit of Freedom. The American Dream lures people from the world over to its shores. A chance to be free, to own your own life and express your freedom in a market of ideas, where Rights are seen, protected, and leveraged for the benefit of all people. A place where work is honored, and where ladders may be climbed by personal effort and dedication in pursuit of myriad dreams. A land honored by the people who sustain its promise, who guard its shores, and share understanding of how American best practices can influence and improve the entire world.
It all begins with you.
If I could teach you how to do it for yourself I would. I try. My words here are for you to use as you wish. I donate them with many of my efforts sustained over many years. This moment (2020-2022) has been prepared for by many for many, many years. A populace ignorant of how data would alter the meaning of words in the wrong hands was very predictable. Knowing what words as data meant in 1992 was less common. In fact, getting people to open ears, or an email, was a very developmental process. Much hand-holding, much repetition. I have personally shared words the world over, and mentored tens of thousands over the past 25 years. To what end?
I have made no play to benefit from the ignorance of people. I have sought to propel conversation, understanding, skill, and professional practices. By all accounts, I have failed at scale. The world is being overrun by ignorance, and this ignorance is being looted, and much worse, it is being leveraged against the best interest of people, Individuals all.
"We the people" is a literary turn-of-hand in data terms; People, Individuals All. The only reality of the human species that matters is the one that honors what people actually are. Together, each of us as Individual, living among one another... is the only reality that will ever exist. "We" is a royal construct if used to instantiate an Institutional outcome not under the control of actual people as functioning Individuals, and instead abstracts this reality via language, form, contract or use of computer science to enable services to be rendered upon people rather than "of, by, for" people.
This backwards interpretation of words as data is the enemy of Humanity. Simple as that.
You must own root authority; Americans, People. 

Read Next: Bureaucratic Supremacy




Werdmüller on Medium

The deep, dark wrongness

The internet, community, and finally being yourself Continue reading on Medium »

The internet, community, and finally being yourself

Continue reading on Medium »

Tuesday, 18. January 2022

Kerri Lemole

W3C Verifiable Credentials Education Task Force 2022 Planning

At the W3C VC-EDU Task Force we’ve been planning meeting agendas and topics for 2022. We’ve been hard at work writing use cases, helping education standards organizations understand and align with VCs, and we’ve been heading towards a model recommendation doc for the community. In 2022 we plan on building upon this and are ramping up for an exciting year of pilots. To get things in order, we

At the W3C VC-EDU Task Force we’ve been planning meeting agendas and topics for 2022. We’ve been hard at work writing use cases, helping education standards organizations understand and align with VCs, and we’ve been heading towards a model recommendation doc for the community. In 2022 we plan on building upon this and are ramping up for an exciting year of pilots.

To get things in order, we compiled a list of topics and descriptions in this sheet and have set up a ranking system. This ranking system is open until January 19 at 11:59pm ET and anyone is invited to weigh in. The co-chairs will evaluate the results and we’ll discuss them at the January 24th VC-EDU Call (call connection info).

It’s a lengthy and thought-provoking list and I hope we have the opportunity to dig deep into each of these topics and maybe more. I reconsidered my choices quite a few times before I landed on these top 5:

1. Verifiable Presentations (VPs) vs (nested) Verifiable Credentials (VCs) in the education context — How to express complex nested credentials (think full transcript). The description references full transcript, but this topic is also related to presentation of multiple single achievements by the learner. I ranked this first because presentations are a core concept of VCs and very different from how the education ecosystem is accustomed to sharing its credentials. VPs introduce an exchange of credentials in response to a verifiable request versus sharing a badge online or emailing a PDF. Also, there’s been quite a bit of discussion surrounding more complex credentials such as published transcripts that we can get into here.
2. Integration with Existing Systems — Digitizing existing systems vs. creating new ones; existing LMSes; bridging; regulatory requirements (ex: licensing, PDFs needing to be visually inspected). To gain some traction with VCs, we need to understand how systems work now and what can be improved upon using VCs, but also: how do we make VCs work with what is needed now?
3. Bridging Tech — This ties into integrating with existing systems above. We are accustomed to the tech we have now and it will be with us for some time. For instance, email will still be used for usernames and identity references even when Decentralized Identifiers start gaining traction. They will coexist, and it can be argued that compromises will need to be made (some will argue against this).
4. Protocols — Much of the work in VC-EDU so far has been about the data model. But what about the protocols — what do we /do/ with the VCs once we settle on the format? (How to issue, verify, exchange, etc.) This made my top five because, as the description notes, we’re pretty close to a data model but we need to understand more about the protocols that deliver, receive, and negotiate credential exchanges. Part of what we do in VC-EDU is learn more about what is being discussed and developed in the broader ecosystem, and understanding protocols will help the community with implementation.
5. Context file for VC-EDU — Create a simple context file to describe an achievement claim. There are education standards organizations like IMS Global (Open Badges & CLR) that are working towards aligning with VC-EDU, but having an open, community-created description of an achievement claim, even if it reuses elements from other vocabularies, will provide a simple and persistent reference. A context file in VC-EDU could also provide terms for uses in VCs that haven’t yet been explored in education standards organizations and could be models for future functionality considerations.
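
To make the "context file" idea a little more concrete, here is a rough sketch of the kind of minimal achievement credential such a file might describe. This is not from the task force: the community context URL, the type name, and the field layout are all hypothetical placeholders; only the W3C VC v1 context URL is real.

```python
# Hypothetical sketch of a minimal achievement credential of the kind a
# VC-EDU context file might describe. All placeholder URLs and term names
# are illustrative, not a published vocabulary.
minimal_achievement_vc = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",      # VC data model v1 context (real)
        "https://example.org/vc-edu/achievement/v1",   # placeholder community context
    ],
    "type": ["VerifiableCredential", "AchievementCredential"],
    "issuer": "did:example:issuer-university",
    "issuanceDate": "2022-01-18T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner",
        "achievement": {
            "name": "Introduction to Verifiable Credentials",
            "description": "Completed the course with a passing grade.",
        },
    },
    # A real credential would also carry a proof (Data Integrity or JWT).
}
```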

Monday, 17. January 2022

Here's Tom with the Weather

TX Pediatric Covid Hospitalizations

Using data from healthdata.gov, this is a graph of the “total_pediatric_patients_hospitalized_confirmed_covid” column over time for Texas. A similar graph for the U.S was shown on Twitter by Rob Swanda.

Using data from healthdata.gov, this is a graph of the “total_pediatric_patients_hospitalized_confirmed_covid” column over time for Texas. A similar graph for the U.S was shown on Twitter by Rob Swanda.
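
If you want to reproduce a chart like this yourself, a rough pandas/matplotlib sketch is below. It assumes you have downloaded the state-level timeseries CSV from healthdata.gov to a local file; the file name and the `date`/`state` column names are assumptions to check against the actual export, while the pediatric column name is the one quoted above.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed local copy of the healthdata.gov state timeseries export; adjust the path.
df = pd.read_csv("covid_reported_patient_impact_by_state_timeseries.csv",
                 parse_dates=["date"])

col = "total_pediatric_patients_hospitalized_confirmed_covid"
tx = df[df["state"] == "TX"].sort_values("date")   # assumed two-letter state codes

plt.plot(tx["date"], tx[col])
plt.title("Texas: " + col)
plt.xlabel("Date")
plt.ylabel("Patients")
plt.tight_layout()
plt.show()
```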


Markus Sabadello on Medium

Transatlantic SSI Interop

Today, there are more and more initiatives working on decentralized identity infrastructures, or Self-Sovereign Identity (SSI). However, there is a big paradox underlying all those initiatives: Even though they often use the same technical specifications , e.g. W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs), they are in practice usually not compatible. There are just to

Today, there are more and more initiatives working on decentralized identity infrastructures, or Self-Sovereign Identity (SSI). However, there is a big paradox underlying all those initiatives: Even though they often use the same technical specifications, e.g. W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs), they are in practice usually not compatible. There are just too many details where technological choices can diverge. Yes, we all use DIDs and VCs. But do we use Data Integrity Proofs (formerly called Linked Data Proofs) or JWT Proofs? JSON-LD contexts or JSON schemas, or both? Do we use DIDComm (which version?), or CHAPI, or one of the emerging new variants of OpenID Connect? Which one of the many revocation mechanisms? Which DID methods? How do we format our deep links and encode our QR codes?

We all want to build the missing identity layer for the Internet, where everything is interoperable just like on the web. But we all do it in slightly different ways. So how can we solve this paradox? Do we create yet another interoperability working group?

No! We try out simple steps and make them work. We conduct concrete experiments that bridge gaps and cross borders. In this case, we planned and executed an experiment that demonstrates interoperability between prominent decentralized identity initiatives in the EU and the US, funded by the NGIatlantic.eu program. Two companies collaborated on this project: Danube Tech (EU) and Digital Bazaar (US).

EU-US collaboration on decentralized identity

In the EU, the European Blockchain Service Infrastructure (EBSI) is building an ambitious network that could become the basis for a digital wallet for all EU citizens. In the US, the Department of Homeland Security‘s Silicon Valley Innovation Program (SVIP) is working with companies around the world on personal digital credentials as well as trade use cases. Both projects have developed easy-to-understand narratives (student “Eva” in EBSI, immigrant “Louis” in SVIP). Both narratives are described further in the W3C’s DID Use Cases document (here and here). So we thought, let’s conduct an experiment that combines narratives and technological elements from both the EU and US sides!

SVIP (left side) and EBSI (right side)

We built and demonstrated two combined stories:

1. Eva studied in the EU and would then like to apply for a US visa. In this story, there is an EU-based Issuer of a VC, and a US-based Verifier.
2. Louis is an immigrant in the US and would like to apply for PhD studies at an EU university. In this story, there is a US-based Issuer of a VC, and an EU-based Verifier.

For a walkthrough video, see: https://youtu.be/1t9m-U-3lMk

For a more detailed report, see: https://github.com/danubetech/transatlantic-ssi-interop/

In the broader decentralized identity community, both the EU- and US-based initiatives currently have strong influence. EBSI’s main strength is its ability to bring together dozens of universities and other organizations to build a vibrant community of VC Issuers and Verifiers. SVIP’s great value has been its continuous work on concrete test suites and interoperability events (“plugfests”) that involve multiple heterogeneous vendor solutions.

In this project, we used open-source libraries supported by ESSIF-Lab, as well as the Universal Resolver project from the Decentralized Identity Foundation (DIF). We also used various components supplied by Digital Bazaar, such as the Veres Wallet.
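
As a small illustration of the tooling mentioned here, the sketch below calls a Universal Resolver instance over HTTP to resolve a DID. The public instance URL, the `/1.0/identifiers/` path, the example DID, and the `didDocument` field in the response are assumptions based on the DIF project's usual setup; check the deployment you are actually using.

```python
import json
import urllib.request

# Assumed resolver endpoint; a self-hosted Universal Resolver would use its own base URL.
RESOLVER_BASE = "https://dev.uniresolver.io/1.0/identifiers/"

def resolve_did(did: str) -> dict:
    """Fetch a DID resolution result from a Universal Resolver instance."""
    with urllib.request.urlopen(RESOLVER_BASE + did) as response:
        return json.load(response)

if __name__ == "__main__":
    # Replace with a DID supported by your resolver's installed drivers.
    result = resolve_did("did:sov:WRfXPg8dantKVubE3HX8pw")
    print(json.dumps(result.get("didDocument", result), indent=2))
```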

We hope that our “Transatlantic SSI Interop” experiment can serve as an inspiration and blueprint for further work on interoperability not only between different DID methods and VC types, but also between different vendors, ecosystems, and even continents.

Wallet containing an EU Diploma and US Permanent Resident Card

Aaron Parecki

How to Green Screen on the YoloBox Pro

This step-by-step guide will show you how to use the chroma key feature on the YoloBox Pro to green screen yourself onto picture backgrounds and videos, or even add external graphics from a computer.

This step-by-step guide will show you how to use the chroma key feature on the YoloBox Pro to green screen yourself onto picture backgrounds and videos, or even add external graphics from a computer.

There are a few different ways to use the green screening feature in the YoloBox. You can use it to add a flat virtual background to your video, or you could use it to put yourself over a moving background or other video sources like an overhead or document camera. You could even key yourself over your computer screen showing your slides from a presentation.

You can also switch things around and instead of removing the background from your main camera, instead you can generate graphics on a computer screen with a green background and add those on top of your video.

Setting up your green screen

Before jumping in to the YoloBox, you'll want to make sure your green screen is set up properly. A quick summary of what you'll need to do is:

1. Light your green screen evenly
2. Light your subject
3. Don't wear anything green

Watch Kevin The Basic Filmmaker's excellent green screen tutorial for a complete guide to these steps!

Green screening on top of an image

We'll first look at how to green screen a camera on top of a static image. You can load images into the YoloBox by putting them on the SD card. I recommend creating your background image at exactly the right size first, 1920x1080.

On the YoloBox, click the little person icon in the top right corner of the camera that you want to remove the green background from.

That will open up the Chroma Key Settings interface.

Turn on the "Keying Switch", and you should see a pretty good key if your green screen is lit well. If you have a blue screen instead of green, you can change that setting here. The "Similarity" and "Smoothness" sliders will affect how the YoloBox does the key. Adjust them until things look right and you don't have too much of your background showing and it isn't eating into your main subject.
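
The YoloBox doesn't document what its Similarity and Smoothness sliders do internally, but a generic chroma key gives a useful mental model: Similarity acts like a distance threshold from the key color, and Smoothness like the width of the soft edge between keyed-out and kept pixels. The sketch below is that generic model only, not the YoloBox's actual algorithm.

```python
import numpy as np

def chroma_key(frame_rgb, key_rgb=(0, 255, 0), similarity=0.4, smoothness=0.1):
    """Generic chroma key: pixels close to the key color become transparent.
    'similarity' is the distance threshold; 'smoothness' softens the edge."""
    frame = frame_rgb.astype(np.float32) / 255.0
    key = np.array(key_rgb, dtype=np.float32) / 255.0
    dist = np.linalg.norm(frame - key, axis=-1)              # per-pixel distance to key color
    alpha = np.clip((dist - similarity) / max(smoothness, 1e-6), 0.0, 1.0)
    return alpha                                             # 0 = keyed out, 1 = fully opaque

def composite(fg_rgb, bg_rgb, alpha):
    """Blend the keyed foreground over a background image of the same size."""
    a = alpha[..., None]
    return (fg_rgb * a + bg_rgb * (1 - a)).astype(np.uint8)
```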

Tap on the "Background Image" to choose which image from your SD card to use as the background. Only still graphics are supported.

Click "Done" and this will save your settings into that camera's source.

Now when you tap on that camera on the YoloBox, it will always include the background image in place of the green screen.

Green screening on top of other video sources

Green screening yourself on top of other video sources is similar but a slightly different process.

First, set up your HDMI source as described above, but instead of choosing a background image, leave it transparent.

Then click the "Add Video Source" button to create a new picture-in-picture layout.

Choose "PiP Video" from the options that appear. For the "Main Screen", choose the video angle that you want to use as the full screen background that you'll key yourself on top of. This could be a top down camera or could be your computer screen with slides for a presentation. It will then ask you to choose a "Sub Screen", and that is where you'll choose your camera angle that you've already set up for chroma keying.

This is where you can choose how big you want your picture to be, and you can drag it around with your finger to change the position.

Once you save this, your new PiP layout will appear as another camera angle you can switch to.

Cropping the green screened video

You may notice that if your green background doesn't cover the entire frame, you'll have black borders on the sides of your chroma keyed image. The YoloBox doesn't exactly have a cropping feature to fix this, but you can use the "Aspect Ratio" setting to crop the background.

You can edit your PiP video settings and choose "1:1" in the Aspect Ratio option to crop your video to a square, removing the black borders from the edges.

Adding computer graphics using the chroma key

Lastly, let's look at how to bring in graphics from an external computer source and key them out on the YoloBox.

When you plug in your computer's HDMI to the YoloBox, your computer will see it as an external monitor. Make sure your computer screen isn't mirrored so you can still use your main computer screen separately.

You can generate graphics in any program as long as you can have it use a green background. You can create animated graphics in Keynote for example, but for this tutorial we'll use the app H2R Graphics.

In H2R Graphics, you'll first want to make sure you set the background color to a bright green like #00FF00. Then you can open up the main output window and drag it over to your second screen (the YoloBox).

Choose the little person icon in the top right corner of the HDMI input of your computer screen to bring up the keying settings for it.

The defaults should look fine, but you can also make any adjustments here if you need. Click "Done" to save the settings.

Now you can create a new PiP layout with your main video as the background and your computer screen keyed out as the foreground or "Sub Screen".

For the Main Screen, choose the video angle you want to use as the background.

For the Sub Screen, choose your computer screen which should now have a transparent background.

Now the layout with your H2R Graphics output window is created as the PiP angle you can choose.

My YoloBox stand

If you haven't already seen it, be sure to check out my YoloBox stand I created! It tilts the YoloBox forward so it's easier to use on a desk, and you can also attach things to the cold shoe mounts on the back.

We have a version for both the YoloBox Pro and the original YoloBox, and it comes in red and black!

You can see the full video version of this blog post on my YouTube channel!

Thursday, 13. January 2022

Mike Jones: self-issued

Described more of the motivations for the JWK Thumbprint URI specification

As requested by the chairs during today’s OAuth Virtual Office Hours call, Kristina Yasuda and I have updated the JWK Thumbprint URI specification to enhance the description of the motivations for the specification. In particular, it now describes using JWK Thumbprint URIs as key identifiers that can be syntactically distinguished from other kinds of identifiers […]

As requested by the chairs during today’s OAuth Virtual Office Hours call, Kristina Yasuda and I have updated the JWK Thumbprint URI specification to enhance the description of the motivations for the specification. In particular, it now describes using JWK Thumbprint URIs as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs. It is used this way in the Self-Issued OpenID Provider v2 specification, for instance. No normative changes were made.

As discussed on the call, we are requesting that the chairs use this new draft as the basis for a call for working group adoption.

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-oauth-jwk-thumbprint-uri-01.html
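
For readers who want a feel for what a JWK Thumbprint URI is, here is a rough Python sketch: the thumbprint computation follows RFC 7638 (canonical JSON of the required members, SHA-256, base64url without padding), while the URN prefix and hash-algorithm label in the URI reflect my reading of the draft and should be checked against its current text. The example key values are placeholders, not a real key.

```python
import base64
import hashlib
import json

def jwk_thumbprint(jwk: dict, required: tuple) -> str:
    """RFC 7638: canonical JSON of the required members, SHA-256, base64url (no padding)."""
    canonical = json.dumps({k: jwk[k] for k in sorted(required)},
                           separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Example EC P-256 public key; the coordinate values are illustrative placeholders.
ec_jwk = {"kty": "EC", "crv": "P-256", "x": "placeholder-x", "y": "placeholder-y"}
thumbprint = jwk_thumbprint(ec_jwk, required=("crv", "kty", "x", "y"))

# Assumed URI shape from the draft: URN prefix + hash algorithm + thumbprint value.
jwk_thumbprint_uri = f"urn:ietf:params:oauth:jwk-thumbprint:sha-256:{thumbprint}"
print(jwk_thumbprint_uri)
```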

Wednesday, 12. January 2022

ian glazer's tuesdaynight

Memories of Kim Cameron

Reification. I learned that word from Kim. In the immediate next breath he said from the stage that he was told not everyone knew what reify meant and that he would use a more approachable word: “thingify.” And therein I learned another lesson from Kim about how to present to an audience. My memories of … Continue reading Memories of Kim Cameron

Reification. I learned that word from Kim. In the immediate next breath he said from the stage that he was told not everyone knew what reify meant and that he would use a more approachable word: “thingify.” And therein I learned another lesson from Kim about how to present to an audience.

My memories of Kim come in three phases: Kim as Legend, Kim as Colleague, and Kim as Human, and with each phase came new things to learn.

My first memories of Kim were of Kim as Legend. I think the very first was from IIW 1 (or maybe 2 – the one in Berkeley) at which he presented InfoCard. He owned the stage; he owned the subject matter. He continued to own the stage and the subject matter for years…sometimes the subject matter was more concrete, like InfoCard, and sometimes it was more abstract, like the metaverse. But regardless, it was enthralling.

At some point something changed… Kim was no longer an unapproachable Legend. He was someone with whom I could talk, disagree, and more directly question. In this phase of Kim as Colleague, I was lucky enough to have the opportunity to ask him private follow-up questions to his presentation. Leaving aside my “OMG he’s talking to me” feelings, I was blown away by his willingness to go into depth of his thought process with someone who didn’t work with him. He was more than willing to be challenged and to discuss the thorny problems in our world.

Somewhere in the midst of the Kim as Colleague phase something changed yet again and it is in this third phase, Kim as Human, where I have my most precious memories of him. Through meeting some of his family, being welcomed into his home, and sharing meals, I got to know Kim as the warm, curious, eager-to-laugh person that he was. There was seemingly always a glint in his eye indicating his willingness to cause a little trouble. 

The last in-person memory I have of him was just before the pandemic lockdowns in 2020. I happened to be lucky enough to be invited to an OpenID Foundation event at which Kim was speaking. He talked about his vision for the future and identity’s role therein. At the end of his presentation, I and others helped him down the steep stairs off of the stage. I held onto one of his hands as we helped him down. His hand was warm.


Identity Woman

Why we need DIDComm

This is the text of an email I got today from a company that i had a contract with last year. It is really really really annoying the whole process of sending secure communications and documents. Once I finished reading it – I was reminded quite strongly why we need DIDComm as a protocol to […] The post Why we need DIDComm appeared first on Identity Woman.

This is the text of an email I got today from a company that i had a contract with last year. It is really really really annoying the whole process of sending secure communications and documents. Once I finished reading it – I was reminded quite strongly why we need DIDComm as a protocol to […]

The post Why we need DIDComm appeared first on Identity Woman.

Tuesday, 11. January 2022

Vittorio Bertocci - CloudIdentity

Remembering Kim Cameron

Kim might no longer update his blog, nudge identity products toward his vision or give inspiring, generous talks to audiences large and small, but his influence looms large in the identity industry – an industry Kim changed forever. A lot has been written about Kim’s legacy to the industry already, by people who...

Kim might no longer update his blog, nudge identity products toward his vision or give inspiring, generous talks to audiences large and small, but his influence looms large in the identity industry – an industry Kim changed forever. A lot has been written about Kim’s legacy to the industry already, by people who write far better than yours truly, hence I won’t attempt that here.

I owe a huge debt of gratitude to Kim: I don’t know where I’d be or what I’d be doing if it wouldn’t have been for his ideas and direct sponsorship. That’s something I have firsthand experience on, so I can honor his memory by writing about that.

Back in 2005, still in Italy, I was one of the few Microsoft employees with hands-on, customer deployment experience in WS-STAR, the suite of protocols behind the SOA revolution. That earned me a job offer in Redmond, to evangelize the .NET stack (WCF, workflow, CardSpace) to Fortune 500 companies. That CardSpace thing was puzzling. There was nothing like it, it was ultra hard to develop for, and few people appeared to understand what it was for. One day I had face time with Kim. He introduced me to his Laws of Identity, and that changed everything. Suddenly the technology I was working on had a higher purpose, something directly connected to the rights and wellbeing of everyone- and a mission, making user centric identity viable and adopted. I gave myself to the mission with abandon, and Kim helped in every step of the way:

- He invested time in developing me professionally, sharing his master negotiator and genuinely compassionate view of people to counter my abrasive personality back then.
- He looped me in on important conversations, inside and outside the company: conversations way above my pay grade or actual experience at that point. He introduced me to all sorts of key people, and helped me understand what was going on. Perhaps the most salient example is the initiative he led to bring together the different identity products Microsoft had in the late 2000s (culminating in a joint presentation we delivered at PDC2008). The company back then was a very different place, and his steely determination coupled with incredible consensus building skills forever changed my perception of what's possible and how to influence complex, sometimes adversarial organizations.
- He really taught me to believe in myself and in a mission. It's thanks to his encouragement that I approached Joan Murray (then acquisition editor at Addison Wesley) on the expo floor of some event, pitching to her a book that the world absolutely needed about CardSpace and user centric identity, and, once accepted, finding the energy to learn everything (putting together a ToC, recruiting coauthors, writing in English…) as an evenings-and-weekends project. Kim generously wrote the foreword for us, and relentlessly promoted the book.
- His sponsorship continued even after the CardSpace project, promoting my other books and activities (like those U-Prove videos now lost in time).

Those are just the ones top of mind. I am sure that if I dug into his blog or mine, I'd find countless more. It's been a huge privilege to work so closely with Kim, and especially to benefit from his mentorship and friendship. I never, ever took that privilege for granted. Although Kim always seemed to operate under the assumption that everyone had something of value to contribute, and talking with him made you feel heard, he wasn't shy in calling out trolls or people who in his view would stifle community efforts.

When the user centric identity effort substantially failed to gain traction in actual products, with the identity industry incorporating some important innovations (hello, claims) but generally rejecting many of the key tenets I held so dear, something broke inside me. I became disillusioned with pure principled views, and moved toward a stricter jobs-to-be-done, use-case-driven stance.

That, Kim's temporary retirement from Microsoft, and eventually my move to Auth0 made my interactions with Kim less frequent. It was always nice to run into him at conferences; we kept backchanneling whenever industry news called for coordinated responses; and he reached out to me once to discuss SSI, but we never had a chance to do so. As cliché as it might be, I now deeply regret not having reached out more myself.
The last time I heard from him was during a reunion of the CardSpace team. It was a joyous occasion, seeing so many people that for a time all worked to realize his vision, and touched in various degrees by his influence. His health didn't allow him to attend in person, but he called in and we passed the phone around, exchanging pleasantries without knowing we were saying our goodbyes. I remember his "hello Vittorio" as I picked up the phone from Mike: his cordial, even sweet tone as he put his usual care in pronouncing my name just right, right there to show the kindness this giant used with us all.


Aaron Parecki

How to convert USB webcams to HDMI

There are a handful of interesting USB webcams out there, which naturally work great with a computer. But what if you want to combine video from a USB webcam with your HDMI cameras in a video switcher like the ATEM Mini?

There are a handful of interesting USB webcams out there, which naturally work great with a computer. But what if you want to combine video from a USB webcam with your HDMI cameras in a video switcher like the ATEM Mini?

Most video switchers don't have a way to plug in USB webcams. That's because webcams are expected to plug in to a computer, and most video switchers aren't really computers. Thankfully, over the past few years UVC has become a standard for webcams, so there's no need to worry about installing manufacturer-specific drivers anymore. For the most part, you can take any USB webcam and plug it into a computer and it will Just Work™.

I'm going to show you three different ways you can convert a USB UVC webcam to HDMI so you can use them with hardware video switchers like the ATEM Mini.

You can see a video version of this blog post on my YouTube channel!

Method 1: QuickTime Player

The simplest option is to use QuickTime on a Mac computer. For this, you'll need a Mac of course, as well as an HDMI output from the computer.

First, plug in the HDMI from your computer into your video switcher. Your computer will see it as a second monitor. In your display settings, make sure your computer is not mirroring the display. You want the computer to see the ATEM Mini or other video switcher as a secondary external display.

If you're doing this with the ATEM Mini, it's helpful to have a monitor plugged in to the ATEM Mini's HDMI output port, and then you can show your computer screen full screen on the ATEM's output by selecting that input's button in the "output" selector on the right side of the controls. This is important since you'll want to be able to navigate around the second screen a bit in the next steps.

Next, open QuickTime Player. Plug in your USB webcam into your computer. In the QuickTime "File" menu, choose "New Movie Recording". A window should appear with your default webcam. Click the little arrow next to the record button and you should see all your connected cameras as an option. Choose the USB camera you want to use and you should see it in the main video window.

Now drag that QuickTime window onto your second monitor that is actually the ATEM Mini. Click the green button in the top left corner to make the window full screen. Now what you see on the ATEM Mini should be just the full screen video. Make sure you move your cursor back to your main monitor so that it doesn't show up on the screen.

You're all set! You can switch the ATEM back to the multiview and you should see your webcam feed as one of the video inputs you can switch to.

Method 2: OBS

OBS is a powerful tool for doing all sorts of interesting things with video on your computer. You can use it to run a livestream, switching between multiple cameras and adding graphics on top. What we're going to use it for now is a simple way to get your USB cameras to show up on a second monitor attached to your computer.

Another benefit of OBS is that it is cross platform, so this method will work on Mac, Windows or Linux!

The basic idea is to create a scene in OBS that is just a full screen video of the webcam you want to use. Then you'll tell OBS to output that video on your second monitor, but the second monitor will actually be your computer's HDMI output plugged in to the ATEM Mini.

First, create a new scene, call it whatever you want, I'll call mine "Webcam". Inside that scene, add a new source of type "Video Capture Device". I'll call mine "Webcam Source".

When you create the source, it will ask you which video capture device you want to use, so choose your desired webcam at this step.

At this point you should see the webcam feed in the OBS main window. If it's not full screen, that's probably because the webcam is not full 1920x1080 resolution. You can drag the handles on the video to resize the picture to take up the full 1920x1080 screen. 

Next, right click anywhere in the main video window and choose "Fullscreen Projector (Preview)". Or if you use OBS in "Studio Mode", right click on the right pane and choose "Fullscreen Projector (Program)". Choose your secondary monitor that's plugged in to the ATEM, and OBS should take over that monitor and show just the video feed.

Method 3: Hardware Encoder

If you don't want to tie up a computer with this task, or don't have the space for a computer, another option is to use a dedicated hardware encoder to convert the USB webcam to HDMI.

There aren't a lot of options on the market for this right now, likely because it's not a super common thing to need to do. Currently, any device that can convert a UVC webcam to HDMI is basically a tiny computer. One example is the YoloBox which can accept some USB webcams as a video source alongside HDMI cameras. You could use the YoloBox to convert the USB camera to HDMI using the HDMI output of the YoloBox. 

Another option is this TBS2603au encoder/decoder.

I originally was sent this device by TBS because I was interested in using it as an RTMP server. I wasn't able to figure that out, and have since switched to using the Magewell Pro Convert as an RTMP server which has been working great. But as I was poking around in the menus I realized that the TBS2603au has a USB port which can accept webcams!

So here are the step by step instructions for setting up the TBS2603au to output a USB webcam over its HDMI port.

The TBS2603au is controlled from its web interface. I'm going to assume you already know how to connect this to your network and configure the IP address and get to the device's web page. The default username and password are "admin" and "admin". Once you log in, you'll see a dashboard like this.

First, click on the "Encode" icon in the top bar. At the bottom, turn off the HDMI toggle and turn on the one next to USB.

Next click on the "Extend" tab in the top menu and choose "Video Mix".

Scroll down to the "Output Config" section and change "Mix Enable" to "Off", and choose "USBCam" from the "Video Source" option.

At this point you should see your webcam's picture out the device's HDMI port! And if that's plugged in to the ATEM Mini, your webcam will appear in your multiview!

I've tried this with a few different webcams and they all work great! 

The OBSBot Tiny is an auto-tracking PTZ camera that follows your face. The nice thing is that the camera itself is doing the face tracking, so no drivers are required!

The Elgato FaceCam is a high quality webcam for your PC, and it also works with this device. Although at that point you should probably just get a DSLR/mirrorless camera to use with the ATEM Mini.

This even works with the Insta360 One X2 in webcam mode. You won't get a full 360 picture, since in webcam mode the Insta360 One X2 uses only one of its two cameras. It does do some auto-tracking though.

The Mevo Start cameras are another interesting option, since you can crop in to specific parts of the video using a phone as a remote control.

There are a couple of problems with this method to be aware of. I wasn't able to find a way to output audio from the USB webcam, which means you will need to get your audio into the ATEM from another camera or external microphone. Another problem was with certain cameras (mainly the OBSBot Tiny), I left the device running overnight and in the morning it had crashed. I suspect it's because the OBSBot requires more power than other cameras due to its PTZ motor.

The TBS encoder isn't cheap, so it's not something you'd buy to use a generic webcam with your ATEM. But for use with specialized USB webcams like document cameras or PTZ cameras it could be a good option to use those cameras with streaming encoders like the ATEM Mini!

Let me know what USB webcams you'd like to use with your ATEM Mini or other hardware streaming encoder!

Friday, 07. January 2022

Identity Praxis, Inc.

Identity management is key to increasing security, reducing fraud and developing a seamless customer experience

I enjoyed participating in the Mobile Ecosystem Forum (MEF) Enterprise webinar on December 9, 2021. MEF explores its recent Personal Data and Identity Management Enterprise Survey – supported by Boku – in a webinar on 9th December 2021. MEF Programme Director, Andrew Parkin-White, is joined by Michael Becker, CEO of Identity Praxis and MEF Advisor and […] The post Identity management is key

I enjoyed participating in the Mobile Ecosystem Forum (MEF) Enterprise webinar on December 9, 2021.

MEF explores its recent Personal Data and Identity Management Enterprise Survey – supported by Boku – in a webinar on 9th December 2021. MEF Programme Director, Andrew Parkin-White, is joined by Michael Becker, CEO of Identity Praxis and MEF Advisor and Phil Todd, Director of Stereoscope, who co-authored the report.

Andrew Parkin-White wrote a nice blog piece that summarised our discussion. Three learnings came from our dialog:

1. Identity management is an iterative process with three core elements – initial identification, authentication (re-identifying the individual) and verification (ensuring the individual is who they claim to be)
2. Enterprises employ a vast array of technologies to execute these processes which are growing in scope and complexity
3. Understanding why identity management is necessary to enterprises and how this creates opportunities for vendors

You can watch the entire session on YouTube (60 min).

The post Identity management is key to increasing security, reducing fraud and developing a seamless customer experience appeared first on Identity Praxis, Inc..


Here's Tom with the Weather

The First Shots

A month ago, I learned about Katalin Karikó as I was reading Brendan Borrell’s The First Shots. She developed the modified mRNA (from which Moderna gets its name) that made possible the mRNA vaccines. The book describes how the University of Pennsylvania squandered her interest in the patent for her work by selling the rights to a company called Epicentre. Eventually, Moderna licensed the p

A month ago, I learned about Katalin Karikó as I was reading Brendan Borrell’s The First Shots. She developed the modified mRNA (from which Moderna gets its name) that made possible the mRNA vaccines. The book describes how the University of Pennsylvania squandered her interest in the patent for her work by selling the rights to a company called Epicentre. Eventually, Moderna licensed the patent from Epicentre to complement the work of Derrick Rossi.

In an interview, she also credits Paul Krieg and Douglas Melton for their contributions.

As a recipient of 3 doses of the Moderna vaccine, I’m thankful to these researchers and was glad to read this book.

Thursday, 06. January 2022

Here's Tom with the Weather

Wednesday, 05. January 2022

Just a Theory

Every Day Is Jan 6 Now

The New York _Times_ gets real about the January 6 coup attempt.

The New York Times Editorial Board in an unusually direct piece last week:

It is regular citizens [who threaten election officials] and other public servants, who ask, “When can we use the guns?” and who vow to murder politicians who dare to vote their conscience. It is Republican lawmakers scrambling to make it harder for people to vote and easier to subvert their will if they do. It is Donald Trump who continues to stoke the flames of conflict with his rampant lies and limitless resentments and whose twisted version of reality still dominates one of the nation’s two major political parties.

In short, the Republic faces an existential threat from a movement that is openly contemptuous of democracy and has shown that it is willing to use violence to achieve its ends. No self-governing society can survive such a threat by denying that it exists. Rather, survival depends on looking back and forward at the same time.

See also this Vox piece. Great to see these outlets sound the alarm about the dangers to American democracy. The threats are very real, and clear-eyed discussions should very much be dominating the public sphere.

More of this, please.

More about… New York Times January 6 Coup Democracy Vox

Moxy Tongue

Human Authority

Own Root, Dependencies:  

Own Root, Dependencies:

Tuesday, 04. January 2022

@_Nat Zone

Privacy Standards in 2022

Once again this year I wrote for gihyo's New Year special feature… The post Privacy Standards in 2022 first appeared on @_Nat Zone.

Once again this year I wrote for gihyo's New Year special feature. I hope you enjoy reading it.

Table of Contents

- A year in which data ethics came into focus
- The money to be made from "the ultimate privacy I came up with myself"
- Defining "tracking": ISO/IEC 27551
- A user-centric privacy preference management framework: ISO/IEC DIS 27556
- A de-identification framework for privacy enhancement: ISO/IEC DIS 27559*
- Grant Management for OAuth 2.0
- Toward the first year of data ethics

The article is available here → https://gihyo.jp/lifestyle/column/newyear/2022/privacy-standards?page=1

The post Privacy Standards in 2022 first appeared on @_Nat Zone.

Sunday, 02. January 2022

Jon Udell

The (appropriately) quantified self

A year after we moved to northern California I acquired a pair of shiny new titanium hip joints. There would be no more running for me. But I’m a lucky guy who gets to bike and hike more than ever amidst spectacular scenery that no-one could fully explore in a lifetime. Although the osteoarthritis … Continue reading The (appropriately) quantified self

A year after we moved to northern California I acquired a pair of shiny new titanium hip joints. There would be no more running for me. But I’m a lucky guy who gets to bike and hike more than ever amidst spectacular scenery that no-one could fully explore in a lifetime.

Although the osteoarthritis was more advanced on the right side, we opted for bilateral replacement because the left side wasn’t far behind. Things hadn’t felt symmetrical in the years leading up to the surgery, and that didn’t change. There’s always a sense that something’s different about the right side.

We’re pretty sure it’s not the hardware. X-rays show that the implants remain firmly seated, and there’s no measurable asymmetry. Something about the software has changed, but there’s been no way to pin down what’s different about the muscles, tendons, and ligaments on that side, whether there’s a correction to be made, and if so, how.

Last month, poking around on my iPhone, I noticed that I’d never opened the Health app. That’s because I’ve always been ambivalent about the quantified self movement. In college, when I left competitive gymnastics and took up running, I avoided tracking time and distance. Even then, before the advent of fancy tech, I knew I was capable of obsessive data-gathering and analysis, and didn’t want to go there. It was enough to just run, enjoy the scenery, and feel the afterglow.

When I launched the Health app, I was surprised to see that it had been counting my steps since I became an iPhone user 18 months ago. Really? I don’t recall opting into that feature.

Still, it was (of course!) fascinating to see the data and trends. And one metric in particular grabbed my attention: Walking Asymmetry.

Walking asymmetry is the percent of time that your steps with one foot are faster or slower than the other foot.

An even or symmetrical walk is often an important physical therapy goal when recovering from injury.
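
Apple doesn't publish how the Health app derives this number, but as a back-of-the-envelope model of the definition above, one could compute it from per-foot step durations as in the sketch below; the pairing of steps and the tolerance for calling a pair "asymmetric" are my assumptions, and the numbers are made up.

```python
def walking_asymmetry(left_steps, right_steps, tolerance=0.02):
    """Percent of step pairs in which one foot's step time differs from the other's
    by more than `tolerance` seconds. Inputs are per-step durations in seconds."""
    pairs = list(zip(left_steps, right_steps))
    if not pairs:
        return 0.0
    asymmetric = sum(1 for l, r in pairs if abs(l - r) > tolerance)
    return 100.0 * asymmetric / len(pairs)

# Illustrative numbers only.
left = [0.52, 0.51, 0.53, 0.52]
right = [0.55, 0.51, 0.56, 0.52]
print(f"{walking_asymmetry(left, right):.0f}% asymmetric")   # 50% for these made-up values
```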

Here’s my chart for the past year.

I first saw this in mid-December when the trend was at its peak. What caused it? Well, it’s been rainy here (thankfully!), so I’ve been riding less, maybe that was a factor?

Since then I haven’t biked more, though, and I’ve walked the usual mile or two most days, with longer hikes on weekends. Yet the data suggest that I’ve reversed the trend.

What’s going on here?

Maybe this form of biofeedback worked. Once aware of the asymmetry I subconsciously corrected it. But that doesn’t explain the November/December trend.

Maybe the metric is bogus. A phone in your pocket doesn’t seem like a great way to measure walking asymmetry. I’ve also noticed that my step count and distances vary, on days when I’m riding, in ways that are hard to explain.

I’d like to try some real gait analysis using wearable tech. I suspect that data recorded from a couple of bike rides, mountain hikes, and neighborhood walks could help me understand the forces at play, and that realtime feedback could help me balance those forces.

I wouldn’t want to wear it all the time, though. It’d be a diagnostic and therapeutic tool, not a lifestyle.


Mike Jones: self-issued

Computing Archaeology Expedition: The First Smiley :-)

In September 1982, artificial intelligence professor Scott Fahlman made a post on the Carnegie Mellon Computer Science Department “general” bboard inventing the original smiley :-). I remember thinking at the time when I read it “what a good idea!”. But in 2002 when I told friends about it, I couldn’t find Scott’s post online anywhere. […]

In September 1982, artificial intelligence professor Scott Fahlman made a post on the Carnegie Mellon Computer Science Department “general” bboard inventing the original smiley :-). I remember thinking at the time when I read it “what a good idea!”. But in 2002 when I told friends about it, I couldn’t find Scott’s post online anywhere.

So in 2002, I led a computing archaeology expedition to restore his post. As described in my original post describing this accomplishment, after a significant effort to locate it, on September 10, 2002 the original post made by Scott Fahlman on CMU CS general bboard was retrieved by Jeff Baird from an October 1982 backup tape of the spice vax (cmu-750x). Here is Scott’s original post:

19-Sep-82 11:44 Scott E Fahlman :-)
From: Scott E Fahlman <Fahlman at Cmu-20c>

I propose that the following character sequence for joke markers:

:-)

Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use

:-(

I’m reposting this here now both to recommemorate the accomplishment nearly twenty years later, and because my page at Microsoft Research where it was originally posted is no longer available.

Wednesday, 29. December 2021

Just a Theory

Review: Project Hail Mary

A brief review of the new book by Andy Weir.

Project Hail Mary by Andy Weir
2021 Ballantine Books

Project Hail Mary follows the success of Andy Weir’s first novel, The Martian, and delivers the same kind of enjoyment. If a harrowing story of a solitary man in extreme environments using science and his wits to overcome one obstacle after another sounds appealing, then this is the book for you. No super powers, no villains, no other people, really — just a competent scientist overcoming the odds through experimentation, constant iteration, and sheer creativity. Personally I can’t get enough of it. Shoot it right into my veins.

Andy Weir seems to know his strengths and weaknesses, given these two books. If you want to read stories of a diverse array of people interacting and growing through compelling character arcs, well, look elsewhere. Project Hail Mary doesn’t feature characters, really, but archetypes. No one really grows in this story: Ryland Grace, our protagonist and narrator, displays a consistent personality from start to finish. The book attempts to show him overcoming a character flaw, but it comes so late and at such variance to how he behaves and speaks to us that it frankly makes no sense.

But never mind, I can read other books for character growth and interaction. I’m here for the compelling plot, super interesting ideas and challenges (a whole new species that lives on the sun and migrates to Venus to breed? Lay it on me). It tickles my engineering and scientist inclinations, and we could use more of that sort of plotting in media.

So hoover it up. Project Hail Mary is a super fun adventure with compelling ideas, creative, competent people overcoming extreme circumstances without magic or hand-waving, and an unexpected friendship between two like-minded nerds in space.

I bet it’ll make a good movie, too.

More about… Books Andy Weir

Werdmüller on Medium

Hopes for 2022

Instead of a review of the year, let’s look ahead. Continue reading on Medium »

Instead of a review of the year, let’s look ahead.

Continue reading on Medium »

Thursday, 23. December 2021

Kyle Den Hartog

Financing Open Source Software Development with DAO Governance Tokens

Is it possible to fix the tragedy of the commons problem with a DAO Governance Token?

One of the biggest problems in open source software development today is that the majority of open source software is written by developers as side projects on their nights and weekends. Out of the mix of developers who do produce software on their nights and weekends, only a small sliver receive any funding for their work. Of the small portion of developers who do get sponsored, an even smaller percentage are actually able to make enough money to fully cover their expenses in life. So clearly we haven’t developed a sustainable solution to finance open source software development. So what are the main ways that open source software development gets funded? The two primary methods I see are organizational sponsorship and altruistic funding. Let’s break these down a bit more to gain a better understanding of them.

The most common and well understood way that open source projects are funded today is via for-profit corporations sponsoring development by allowing their full-time staff to work on these large projects. Some great examples of this are projects like Kubernetes, the Linux kernel, the React framework, HashiCorp Vault, and the Rust programming language. In all of these examples, the projects are either directly managed by a team of developers at a large organization (think the React framework being maintained by Facebook), managed by a startup that open-sources its core product with additional sticky features (think HashiCorp Vault), managed by a foundation with a combination of many different developers from many different organizations (think Kubernetes and the Linux kernel these days, and now Rust), or, finally, hybrid projects which have transitioned from one category to another over time (think the Rust language being started at Mozilla and then transferred to a foundation). With all of these models one thing is clear: developers have a day job that pays them, and they’re essentially employed to produce open source software. The reasons why many companies fund developers to produce open source software are so scattered that I’m sure I couldn’t name them all. However, one thing in my experience is clear: most companies have some form of strategic decision at play that leads them down the path of making their source code open. That strategy may be as simple as wanting to let others solve a problem they’ve had to solve, wanting to leverage open source as a sales channel, or simply looking for free software development contributions from developers who like the project. Whatever reason a company has to justify its contributions, it’s pretty clear that this is a major avenue for contribution to the OSS community.

The second most common method, which has been around for a while but has only recently become a more legitimate model of funding, is altruistic funding. What I mean by this is that people, organizations, or other such entities will “sponsor” a developer who’s released an open source project that they believe should continue to be worked on. In the past this was most commonly done via PayPal or Buy Me a Coffee, with Patreon and GitHub Sponsors getting involved more recently as well. This model of funding is becoming a more common way to fund a small project which is used substantially by much larger projects or companies who want some certainty that the project will continue to be maintained in the future. It’s shown some promise for becoming a sustainable source of funding for developers who are looking for a way to monetize their projects without the massive overhead that comes with starting a company. However, while this method does leave the original maintainer in control of their project to continue to bring their vision to reality, it often does not provide a sustainable and large enough income for most maintainers to pursue this avenue full time.

So what’s up with this DAO governance token idea then?

To put it simply, the concept of leveraging a DAO token to manage an open source project is still just that: an idea. So why do I consider it worth exploring? Today, in the DeFi space we see many different projects that are being built completely open source, often with very complex tokenomics schemes just to sustainably fund the development of the protocol. With each new project the developers need to find a new way to integrate a token into the system in order to fund the time and effort they’d like to put into growing the project. However, what if we could reshape the purpose of tokens to make them about what they are actually for, namely funding the development of the project, rather than trying to create a new gift card scheme for each new project?

The way I imagine this would work is via a DAO governance token which effectively represents a share of the project. Each token that’s already been minted would allow for voting on proposals to accept or reject new changes to the project, in the same way that DAOs allow for decentralized governance of treasuries today. However, these proposals all come in the form of a pull request to modify the code, allowing developers to directly receive value for the proposed changes they’re making. Where things get interesting is that along with the new pull request comes a proposal set forth by the contributor, who would assign the value they believe it is worth, represented as newly minted tokens which would be issued if the pull request is approved. This effectively dilutes the value of the current tokens in exchange for work done to improve the project, leading to an interesting trade in value. Current majority stakeholders give up a small portion of their holdings in exchange for receiving new contributions if and only if they believe the dilution is acceptable.
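
To make the mechanic concrete, here is a toy model of that flow: a pull-request proposal mints new tokens to the contributor only if holders of a majority of the existing supply vote for it, diluting everyone else. The majority threshold and the rest of the rules are illustrative assumptions, not an existing DAO framework.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectDAO:
    """Toy model: token-weighted approval of pull requests that mint new tokens."""
    balances: dict = field(default_factory=dict)   # holder -> governance tokens

    @property
    def total_supply(self) -> int:
        return sum(self.balances.values())

    def propose_pull_request(self, contributor: str, requested_tokens: int, votes: dict) -> bool:
        """votes maps holder -> True/False. Assumed rule: approval needs >50% of existing supply."""
        weight_for = sum(self.balances[h] for h, v in votes.items() if v and h in self.balances)
        if weight_for * 2 > self.total_supply:
            # Minting dilutes existing holders in exchange for the merged contribution.
            self.balances[contributor] = self.balances.get(contributor, 0) + requested_tokens
            return True
        return False

dao = ProjectDAO(balances={"alice": 60, "bob": 40})
merged = dao.propose_pull_request("carol", requested_tokens=10,
                                  votes={"alice": True, "bob": False})
print(merged, dao.balances)   # True, carol now holds 10 of 110 tokens
```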

So how does this make developers money?

As a project grows and is utilized by more and more developers, it will create an economic incentive for people and companies who wish to steer the project to buy up the currently available tokens or contribute to the project in order to collect these tokens. These tokens would be tradeable for real-world value, either for money to buy food or for additional utility in upstream or downstream projects. The tokens are only as valuable as the number of people who are utilizing the project and believe they need the ability to affect its direction or make sure it remains maintained. Meaning, for projects like Kubernetes, where numerous companies have core infrastructure built on top of the project, those companies want to make sure their features are getting added and supported, just like they do today in the Cloud Native Computing Foundation, which sees many people from many different organizations and backgrounds contributing to the project now.

Where this becomes interesting is in the economic decision making that happens as a market is formed around maintainership of projects. Along with many of the good things that will be introduced, like being able to have more full-time freelance software developers available, I’m sure interesting economic issues will be introduced as well. It’s my belief, though, that this controversy will be worked out in different ways through the different principles that projects choose. One of the most obvious examples in large projects today is when SushiSwap forked Uniswap and started taking SushiSwap in a different direction. The legitimacy of a fork will help form interesting economic behaviors around whether the value of the fork goes up, as SushiSwap has shown by adding new and interesting contributions to its fork, or whether it goes down, as happens with many of the random clone projects that often lead to scams.

I believe that if the mechanics of the maintainership role are established correctly, then it may even be possible to create some interesting dynamics that reduce forking by leveraging the market. As an example, if a DAO fork were required to mint the same number of tokens in the new project as in the original project and assign them to the same maintainers, then the original maintainers could leverage their newly minted tokens in the new project to outright reject all proposals in the fork and slow down its momentum. I tend to think this may be bad for innovation, but it’s an interesting example of how markets for maintainership decisions could be used to build sustainability in open source development, given that maintainership of a project has legitimate value which, if broken up and governed properly, could reshape how software development is funded.

Tuesday, 21. December 2021

Tim Bouma's Blog

Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4

The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is now available on GitHub Summary of Changes to Version 1.4: Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is a continued refinement as result of application and iteration of the framework. While there are no major conceptual changes from Version 1.3, there are numerous refinements of d

The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is now available on GitHub

Summary of Changes to Version 1.4:

- Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is a continued refinement as a result of application and iteration of the framework. While there are no major conceptual changes from Version 1.3, there are numerous refinements of definitions and descriptions and continued improvement of editorial and style consistency.
- Numerous improvements have been made due to feedback incorporated from the application of the PSP PCTF to trusted digital identity assessment and acceptance processes.
- Other changes have resulted from review and providing input into the National Standard of Canada, CAN/CIOSC 103–1, Digital trust and identity — Part 1: Fundamentals.
- The PSP PCTF Assessment Workbook has been updated to reflect the latest changes.

Mike Jones: self-issued

Identity, Unlocked Podcast: OpenID Connect with Mike Jones

I had a fabulous time talking with my friend Vittorio Bertocci while recording the podcast Identity, Unlocked: OpenID Connect with Mike Jones. We covered a lot of ground in 43:29 – protocol design ground, developer ground, legal ground, and just pure history. As always, people were a big part of the story. Two of my […]

I had a fabulous time talking with my friend Vittorio Bertocci while recording the podcast Identity, Unlocked: OpenID Connect with Mike Jones. We covered a lot of ground in 43:29 – protocol design ground, developer ground, legal ground, and just pure history.

As always, people were a big part of the story. Two of my favorite parts are talking about how Kim Cameron brought me into the digital identity world to build the Internet’s missing identity layer (2:00-2:37) and describing how we applied the “Nov Matake Test” when thinking about keeping OpenID Connect simple (35:16-35:50).

Kim, I dedicate this podcast episode to you!

Sunday, 19. December 2021

Mike Jones: self-issued

Stories of Kim Cameron


Since Kim’s passing, I’ve been reflecting on his impact on my life and remembering some of the things that made him special. Here’s a few stories I’d like to tell in his honor.

Kim was more important to my career and life than most people know. Conversations with him in early 2005 led me to leave Microsoft Research and join his quest to “Build the Internet’s missing identity layer” – a passion that still motivates me to this day.

Within days of me joining the identity quest, Kim asked me to go with him to the first gathering of the Identity Gang at PC Forum in Scottsdale, Arizona. Many of the people that I met there remain important in my professional and personal life! The first Internet Identity Workshop soon followed.

Kim taught me a lot about building positive working relationships with others. Early on, he told me to always try to find something nice to say to others. Showing his devious sense of humor, he said “Even if you are sure that their efforts are doomed to fail because of fatal assumptions on their part, you can at least say to them ‘You’re working on solving a really important problem!’ :-)” He modelled by example that consensus is much easier to achieve when you make allies rather than enemies. And besides, it’s a lot more fun for everyone that way!

Kim was always generous with his time and hospitality and lots of fun to be around. I remember he and Adele inviting visitors from Deutsche Telekom to their home overlooking the water in Bellevue. He organized a night at the opera for identity friends in Munich. He took my wife Becky, Tony Nadalin, and me out to dinner at his favorite restaurant in Paris, La Coupole. He and Adele were the instigators behind many a fun evening. He had a love of life beyond compare!

At one point in my career, I was hoping to switch to a manager more supportive of my passion for standards work, and asked Kim if I could work for him. I’ll always remember his response: “Having you work for me would be great, because I wouldn’t have to manage you. But the problem is that then they’d make me have others work for me too. Managing people would be the death of me!”

This blog exists because Kim encouraged me to blog.

I once asked Kim why there were so many Canadians working in digital identity. He replied: “Every day as a Canadian, you think ‘What is it that makes me uniquely Canadian, as opposed to being American? Whereas Americans never give it a thought. Canadians are always thinking about identity.'”

Kim was a visionary and a person of uncommon common sense. His Information Card paradigm was ahead of its time. For instance, the “selecting cards within a wallet” metaphor that Windows CardSpace introduced is now widespread – appearing in platform and Web account selectors, as well as emerging “self-sovereign identity” wallets, containing digital identities that you control. The demos people are giving now sure look a lot like InfoCard demos from back in the day!

Kim was a big believer in privacy and giving people control over their own data (see the Laws of Identity). He championed the effort for Microsoft to acquire and use the U-Prove selective disclosure technology, and to make it freely available for others to use.

Kim was hands-on. To get practical experience with OpenID Connect, he wrote a complete OpenID Provider in 2018 and even got it certified! You can see the certification entry at https://openid.net/certification/ for the “IEF Experimental Claimer V0.9” that he wrote.

Kim was highly valued by Microsoft’s leaders (and many others!). He briefly retired from Microsoft most of a decade ago, only to have the then-Executive Vice President of the Server and Tools division, Satya Nadella, immediately seek him out and ask him what it would take to convince him to return. Kim made his asks, the company agreed to them, and he was back within about a week. One of his asks resulted in the AAD business-to-customer (B2C) identity service in production use today. He also used to have regular one-on-ones with Bill Gates.

Kim wasn’t my mentor in any official capacity, but he was indeed my mentor in fact. I believe he saw potential in me and chose to take me under his wing and help me develop in oh so many ways. I’ll always be grateful for that, and most of all, for his friendship.

In September 2021 at the European Identity and Cloud (EIC) conference in Munich, Jackson Shaw and I remarked to each other that neither of us had heard from Kim in a while. I reached out to him, and he responded that his health was failing, without elaborating. Kim and I talked for a while on the phone after that. He encouraged me that the work we are doing now is really important, and to press forward quickly.

On October 25, 2021, Vittorio Bertocci organized an informal CardSpace team reunion in Redmond. Kim wished he could come but his health wasn’t up to travelling. Determined to include him in a meaningful way, I called him on my phone during the reunion and Kim spent about a half hour talking to most of the ~20 attendees in turn. They shared stories and laughed! As Vittorio said to me when we learned of his passing, we didn’t know then that we were saying goodbye.

P.S. Here’s a few of my favorite photos from the first event that Kim included me in:

All images are courtesy of Doc Searls. Each photo links to the original.

Thursday, 16. December 2021

Markus Sabadello on Medium

Report from EBSI4Austria


In 2018, all European member states, together with Norway and Liechtenstein, signed a declaration stating the joint ambition to take advantage of blockchain technology. These 29 countries founded the European Blockchain Partnership (EBP), and within this partnership, they decided to build the so-called European Blockchain Services Infrastructure (EBSI).

EBSI was created with two aims. On the one hand, it provides blockchain capabilities that the partners of the EBP can use to implement and realize blockchain projects and use cases within their countries. On the other hand, it aims to achieve certain use cases on a European level. To support the latter idea, so-called use case groups were defined, which are the working groups related to a specific use case. These use case groups consist of representatives of the EBP member countries, domain experts, as well as the European Commission.

Initially, four use case groups were founded, namely the European Self-Sovereign Identity Framework (ESSIF), the diploma use case, document traceability, and secure document transfer. ESSIF focuses on digital identities where the user is in control over her identity data. The diploma use case focuses on educational diplomas of students and related processes such as issuing, verification, and revocation, including cross-border scenarios. Document traceability considers the anchoring of document-related identifiers, such as hashes, on the blockchain, while secure document transfer focuses on the secure sharing of tax-related information.
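As a rough illustration of the document traceability idea, the sketch below computes a content hash that could be anchored on a ledger. The record structure and field names are assumptions for illustration only; the actual EBSI anchoring API is not shown here.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal sketch (hypothetical record structure): compute a content hash of a
# document locally and prepare an "anchor" record that could be written to a
# ledger. How such a record is actually submitted to EBSI is out of scope here.

def anchor_record(document_bytes: bytes, document_id: str) -> dict:
    digest = hashlib.sha256(document_bytes).hexdigest()
    return {
        "documentId": document_id,
        "sha256": digest,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = anchor_record(b"example tax document contents", "doc-001")
print(json.dumps(record, indent=2))

# Verification later: recompute the hash of the received document and compare
# it with the anchored digest to show the document was not altered.
```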

EBSI defined so-called use case groups that should be achieved using the provided capabilities to showcase their functionality and bring in expertise in the specific fields. Each use case group consists of representatives of the member states, domain experts, and the European Commission.

About EBSI4Austria

EBSI4Austria is a CEF-funded project with two main objectives. First, EBSI4Austria aims to set up, operate and maintain Austria's EBSI node. Second, we pilot the diploma use case on the Austrian level, supported by two universities and data providers as well as verifiers.

EBSI created a so-called early adopter program to speed up the use case integration of the participating countries. Reflecting our project's motivation, EBSI4Austria joined this ambitious program already in the first wave.

Partners

EBSI4Austria consists of three partners: two universities, Graz University of Technology (TU Graz) and the Vienna University of Economics and Business (WU Vienna), together with Danube Tech, a Vienna-based company that provides leading expertise in Self-Sovereign Identity (SSI) as well as distributed systems and is involved in related standardization bodies. The universities are responsible for issuing students' diplomas and also verifying them. Austria's EBSI node is set up and operated at the eGovernment Innovation Center (EGIZ), which is part of Graz University of Technology.

User Story

Figure 1 illustrates the user story that is covered in our project. A student studying at the Graz University of Technology is finishing her bachelor’s program. TU Graz issues her diploma credential stating her bachelor’s degree, which the student stores in her wallet. Next, she wants to apply for a master’s program at the Vienna University of Economics and Business; thus, she presents her bachelor’s diploma credential. After successfully finishing her master’s program at WU Vienna, the university issues her master’s diploma credential to the student. The student is very ambitious; therefore, she applies for a Ph.D. position at the Berlin Institute of Technology by presenting her diplomas. All involved parties utilize the EBSI blockchain network to verify if the issuing universities are trusted issuers.

Figure 1: User Story of the Diploma Use Case

Technology

In order to implement our EBSI4Austria project, we used similar technologies as many other Self-Sovereign Identity (SSI) initiatives, i.e., based on building blocks such as Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs).

We created two DIDs on the EBSI blockchain for the two universities, as follows:

Test DID for TU Graz: did:ebsi:zuoS6VfnmNLduF2dynhsjBU
Test DID for WU Vienna: did:ebsi:z23EQVGi5so9sBwytv6nMXMo

In addition, we registered them in EBSI’s Trusted Issuer Registry (TIR).

We also designed Verifiable Credentials to model digital versions of university diplomas. We implemented them using different credential and proof formats to accommodate changing requirements and guidelines in the EBSI specifications throughout the year. See here for some examples in different formats:

Example Diploma by TU Graz:

JSON-LD+LD-Proofs
JSON-LD+JWT (also see JWT payload only)
JSON+JWT (also see JWT payload only)

Example Diploma by WU Wien:

Paper version (in German)
Paper version (in English)
JSON-LD+LD-Proofs
JSON-LD+JWT (also see JWT payload only)
JSON+JWT (also see JWT payload only)

We also designed our own (experimental) JSON-LD context in order to be able to work with Linked Data Proofs (see essif-schemas-vc-2020-v1.jsonld). In our opinion, it would be preferable if JSON-LD contexts were provided by EBSI to all member states instead of having to do this separately for each EBSI pilot project.
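For illustration, here is a minimal sketch of what such a diploma credential might look like before signing, written as a Python dictionary rather than the project's actual Java tooling. The issuer DID is the TU Graz test DID listed above; the subject DID, context URL path, dates, and degree fields are placeholders, and the proof block (Linked Data Proof or external JWT) is omitted because it would be added by the issuing component.

```python
import json

# Hypothetical, unsigned diploma credential following the W3C VC data model;
# the issuer DID is the test DID listed above, other values are placeholders.
diploma_vc = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        # Project-specific context, as discussed above (illustrative path):
        "https://example.org/essif-schemas-vc-2020-v1.jsonld",
    ],
    "type": ["VerifiableCredential", "DiplomaCredential"],
    "issuer": "did:ebsi:zuoS6VfnmNLduF2dynhsjBU",          # TU Graz (test DID)
    "issuanceDate": "2021-09-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:student123",                    # student's DID (placeholder)
        "degree": {"type": "BachelorDegree", "name": "Bachelor of Science"},
    },
    # A "proof" block (Linked Data Proof or an external JWT) would be added
    # by the issuing component before the credential is handed to the wallet.
}

print(json.dumps(diploma_vc, indent=2))
```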

We use the following technologies in our project:

Universal Resolver → For resolving DIDs.
Universal Registrar → For creating DIDs.
Universal Issuer → For issuing VCs.
Universal Verifier → For verifying VCs.
SSI Java Libraries:
ld-signatures-java — For Linked Data Signatures.
verifiable-credentials-java — For Verifiable Credentials.
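To make the Universal Resolver's role concrete, here is a small Python sketch that resolves one of the test DIDs over HTTP. The base URL points at a public development instance and should be treated as an assumption; a production setup would use its own deployment, and the project itself relies on the Java libraries listed above rather than this snippet.

```python
import requests

# Assumed resolver endpoint: a deployment of the Universal Resolver exposes
# DID resolution over HTTP (adjust the base URL to your own instance).
RESOLVER_BASE = "https://dev.uniresolver.io/1.0/identifiers/"

def resolve_did(did: str) -> dict:
    """Fetch the DID document (and resolution metadata) for a DID."""
    response = requests.get(RESOLVER_BASE + did, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Test DID for TU Graz, as listed above.
    result = resolve_did("did:ebsi:zuoS6VfnmNLduF2dynhsjBU")
    print(result.get("didDocument", result))
```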

We set up the following demonstration websites:

https://tugraz.ebsi4austria.danubetech.com/ — Issuer demo website
https://wuwien.ebsi4austria.danubetech.com/ — Verifier demo website

See this Github repository for additional technical details about EBSI4Austria.

Multi-University Pilot

Within EBSI's early adopter program, EBSI4Austria also joined the multi-university pilot (MU pilot), which focuses on issuing and verifying student diplomas between universities, in this case even in a cross-border scenario. This multi-university pilot should demonstrate the possibilities even across countries.

While working on the MU pilot, we participated in several EBSI Early Adopter program meetings to identify issuers, verifiers, and types of credentials. We were in contact with members of Spanish EBSI pilot projects (especially from the SSI company Gataca), to compare our approaches to EBSI DIDs and Verifiable Credentials. We had several technical discussions and email exchanges regarding details of those credentials, e.g. about the JSON-LD contexts and exact proof formats we were planning to use. During these exchanges, we were able to exchange initial examples of verifiable credentials and verify them.

Within one of the “clusters” of the EBSI MU pilot, we also collaborated closely with the “UniCert” aka “EBSI4Germany” project led by the Technical University of Berlin, a member of the EBSI early adopter program and the German IDunion consortium. This collaboration proved to be particularly interesting for the following reasons:

1. Since TU Berlin participates both in EBSI and IDunion, they have unique insights into the similarities and differences between these different SSI networks.

2. TU Berlin was also able to share some experiences regarding the use of existing standards such as Europass and ELMO/EMREX, which can help with semantic interoperability of Verifiable Credentials use in EBSI.

Figure 2: Multi-university pilot scenario.

Note: This blog post was co-authored by Andreas Abraham (eGovernment Innovation Center) and Markus Sabadello (Danube Tech). The EBSI4Austria project was funded under agreement No INEA/CEF/ICT/A2020/2271545.

Wednesday, 15. December 2021

Here's Tom with the Weather

Last day with Pandemic Beard

Tuesday, 14. December 2021

@_Nat Zone

Interview published in the Nikkei (Nihon Keizai Shimbun): "Big Tech and International Standards Development"


A five-column interview article by reporter Omameuda was published on page 16 of the morning edition of the Nihon Keizai Shimbun (Nikkei) on December 14, 2021.

Big Tech and International Standards Development

Natsuhiko (Nat) Sakimura, Chairman of the OpenID Foundation (US). "The Age of Technologists", December 14, 2021, 2:00 [paid members only]

https://www.nikkei.com/article/DGKKZO78400100T11C21A2TEB000/

I wasn't expecting to have my photo taken, so the published photo shows me looking like a haggard researcher with unkempt hair. The article also covers, among other things, why I jumped into the world of standardization.

I never knew why Sakimura-san jumped into standardization @_nat / "Big Tech and International Standards Development" https://t.co/au3TDCZwPT

— Masanori Kusunoki / 楠 正憲 (@masanork) December 13, 2021

In connection with this article, I am planning to hold a Q&A session using Twitter Spaces.

When would be a good time to hold the Q&A Space about the article?

— Natsuhiko Sakimura, "Digital Identity" (on sale July 16) (@_nat) December 15, 2021

According to the poll, a weekday evening looks most likely. I will announce it on Twitter (and may post it here as well), so please follow @_nat and stay tuned.

The post "Interview published in the Nikkei: 'Big Tech and International Standards Development'" first appeared on @_Nat Zone.

Monday, 13. December 2021

Mike Jones: self-issued

OpenID Presentations at December 2021 OpenID Virtual Workshop


I gave the following presentations at the Thursday, December 9, 2021 OpenID Virtual Workshop:

OpenID Connect Working Group (PowerPoint) (PDF)
OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

Friday, 10. December 2021

MyDigitalFootprint

Why is being data Savvy not the right goal?

It is suggested that all which glitters is gold when it comes to data: the more data, the better. I have challenged this thinking that more data is better on numerous occasions, and essentially they all come to the same point. Data volume does not lead to better decisions.  

A “simplistic” graph is doing the rounds (again) and is copied below. The two axes link the quality of a decision and a person's capability with data. It implies that boards, executives and senior leadership need to be “data-savvy” if they are to make better decisions. Data Savvy is a position between being “data-naive or data-devoid” and “drunk on data.” The former has no data or skills; the latter has too much data or cannot use the tools. Data Savvy means you are skilled with the correct data and the right tools.

This thinking is driven by those trying to sell data training by simplifying a concept to such a point that it becomes meaningless but is easy to sell/buy and looks great as a visual. When you don’t have enough time to reflect on the graph and the message, it looks logical, inspired and correct; it is none of these things. The basis of the idea is that a board or senior leadership team who are data-savvy will make better decisions, based on the framing that if you are naive or drunk on data, you will make poor decisions.

The first issue I have is that if the data does not have attestation, your capability (data-savviness) will make no difference to the quality of the decision.  One could argue that you will test the data if data-savvy, but this is also untrue as most boards cannot test the data, relying on the organisations' processes and procedures to ensure “quality” data. This is a wild assumption. 


It is worth searching for what “data-savvy” means and reading a few articles. You will find that many put becoming data-savvy as a step in the journey to being data-driven. This brings me to a second point: data-driven means you will always be late. Waiting for enough data to reduce the risk to match your risk framework means that you will be late in the decision-making process. Data-driven does not make you fast, agile, ahead, innovative or adaptive. Data-driven makes you late, slow, behind and a follower.

Is the reality of wanting to be data-savvy or a desire to be data-driven that you look to use data to reduce risk and therefore become more risk-averse, which means you miss the signals that would make you genuinely innovative?

The question we as CDOs (data or digital) should reflect on is: “how do we reconcile that we want to be first, innovative, creative or early, but our processes, methods, and tools depend on data, which means we will always be late?” The more innovative we want to be, the less data we will have and the more risk we need to take, which does not align with the leadership, culture or rewards/incentives that we have or operate to.


Identity Praxis, Inc.

The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts


Article published by the Mobile Ecosystem Forum, 12/10/2021.


“We know now that technology and business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we know is that we’re all going to be increasingly vulnerable, as more of our services, more of our citizen identity, move online.” – Surash Patel, VP EMEA, TeleSign Corporation 2021 (click here to listen).1

I recently sat down with  Surash Patel, VP EMEA for TeleSign and Board Member of the Mobile Ecosystem Forum (MEF) to discuss the personal data & identity (PD&I) market, for a PD&I market assessment report I’m working on for the MEF (the report will be out in January 2022). Surash’s above quote stuck out to me because I think he is right. It also reminds me of another quote, one from WPP:

“By 2030 society will no longer tolerate a business model that relies on mass transactions of increasingly sensitive personal data: a quite different system will be in place.” – WPP2

I took away three key insights from my interview with Surash, although there are more:

1. Enterprises must immediately start learning how to master [mobile] identity verification; mobile identity verification can help reduce losses to fraud and self-inflicted losses of revenue.
2. Enterprises that effectively use mobile identity verification can create value and generate trust and engagement at every stage of the customer journey.
3. There is much we—people, private organizations, and public institutions—need to know and do to equip for the now and prepare for the future.

The following summarizes my conversation with Surash. To watch the complete interview with Surash Patel of TeleSign (39:11 min), click here.

Risk Mitigation, Value Creation, and the Customer Journey

When introducing himself and his background, Surash opened with a wonderfully self-reflective quote:

“I completely missed a trick on my career and where it was going, in that I thought about the value exchange between the consumer and the brand. From a marketing perspective, I never really considered it from the digital identity perspective before–seeing the numbers on digital fraud right now I think that case is becoming more and more clear to me.” – Surash Patel, VP EMEA, Telesign Corporation 2021 (click here).

By reading between the lines of his statement, I gather that, as a marketer, he previously saw identity as a tool for audience targeting and promotion. But, once he went into the infrastructure side of the business, he realized identity plays an even bigger role throughout the industry. This is because identity has a critical role at every touchpoint along the customer journey–not just for marketing, but for fraud prevention, revenue protection, and trust.

Risk Mitigation and managing losses

Drawing from industry reports, Surash notes that businesses are losing upwards of $56 billion a year to fraud.4 Because of this, “knowing your customer,” i.e., knowing that there is a legitimate human on the other side of a digital transaction, is not just a nice-to-have, but a business imperative. Surash points out that it’s not just fraud that brands must contend with when it comes to losses. They must also contend with self-inflicted wounds.

Surash referenced a report from Checkout.com which found that brands in the UK, US, France, and Germany lost $20.3 billion in 2019 due to false declines at checkout, i.e. identity verification system failures. $12.7 billion of these losses went to competitors, while $7.6 billion just evaporated.5 My takeaway from this is that brands need to see identity verification as a strategic imperative, not just an IT function.

But, reducing fraud and managing revenue breakage is not all Surash brought up. He also noted that, based on findings in the Checkout.com report, consumers would pay an average of $4 to be sure their transactions are secure. So, not only can brands reduce fraud, but they can also retain sales by more effectively identifying their customers (listen to his comments here).

The Potential For Harm is Real and Must Be Managed

Let’s briefly return to Surash’s quote above:

“We know now that technology and, you know, business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we know is that we’re all going to be increasingly vulnerable as more of our services, more of a citizen identity, move online.” – Surash Patel, VP EMEA, Telesign Corporation 2021 (click here to listen).6

I agree with him; people, not just businesses, are at risk of being even more vulnerable than they are now, but that does not mean the risks we face today are trivial. Harm from the misuse of personal data is all around us. We primarily measure this in financial terms. For example, in 2020, U.S. consumers reported losses of $86M to fraud originating from text messaging scams.7

On harm

There is more privacy harm out there than financial loss, as noted by Ignacio N. Cofone,8 and it is not a trivial discussion to be swept under the rug. In fact, it is one of the fundamental drivers behind emerging people-centric regulations, industry best practices, and the reshaping of law.

This topic is too big to cover in this article, but I can provide a good resource for you on privacy harm. One of my go-to resources when considering this issue is Daniel Solove, who recently, along with Danielle Keats Citron, updated the Typology of Privacy Harms. This is a must-read if you are a student of the topic of being of service to the connected individual.

The Customer Journey and Balance of Power Shift

To address these privacy harms, Surash specifically calls for the government to get involved. However, Surash thinks brands and individuals alike can do more as well. Surash makes it clear that individuals need to be more aware of and accountable for their own actions and interactions. He also thinks, however, that brands need to learn to engage people in an even value exchange (hear Surash’s comment). Furthermore, he recognizes that people are taking more control of their data, and as this continues, we may eventually see the evolution of consumer “curation services” (hear his remark), what some may call “infomediaries.” Again, I’m drawn to the WPP quote above. Brands need to prepare for fundamental shifts in people’s attitudes and expectations. The implications of these shifts will be profound, as they will force a change in competition, business models, product offerings, and business practices.

There is Much We All Need to Know and Do

So, after taking all this in, what’s next?

What I learned from my time with Surash is that an effective identity management implementation can collectively save brands billions, while building trust and improving their ability to serve their customer throughout the customer’s journey.

Surash emphasized that people must know that they are at risk and be aware of all that is going on in the industry. As a result of this knowledge, they can take steps to advocate for and protect themselves. He notes that individuals “absolutely need to know the value of their data” and “how they can challenge” the brands’ use of their data. Surash suggested that individuals need to start shifting the balance of power by approaching brands and asking, “Do you really need that data to serve me? If not, don’t ask me for it.” Surash does recognize, however, that it is going to be hard for individuals to go up against the “large” brands. As noted below, we believe in both the companies’ and the government’s abilities to do more.

For brands, Surash wants them to:

1. Take cyberattacks seriously and prepare, as the attacks are expected to get worse.
2. Get the fraud and marketing teams working together, not at loggerheads.
3. Not just onboard individuals after the first transaction, but continually evaluate and authenticate customers as they move along the journey. He suggests that brands must learn to evaluate the local context of each engagement, regularly verify and authenticate their customers, and show the people they serve some respect by making an effort to check whether individuals’ circumstances (preferences, address, phone number, etc.) have changed over time. Surash implies that these actions will not only reduce the risk of fraud and cybercrime but also improve the relationship brands have with those they serve.
4. Ensure there is always an even value exchange, and if the brand wants more in return during a transaction, e.g. more data to support a future upsell, then consider paying the individual for it.

As for public institutions, e.g. governments, Surash suggests that “there isn’t enough being done to protect the consumers.” Governments should work with industry to refine value propositions, institute consistent standards, and advocate for consumers.

Clearly, this is all just the tip of the iceberg. There is definitely more to come.

Watch the complete interview with Surash Patel of TeleSign (39:11 min, click here).

REFERENCES

Becker, Michael. “The Chain of Trust & Mobile Number Identity Scoring: An Interview with Virginie Debris of GSM.” Accessed October 28, 2021. https://www.youtube.com/watch?v=ftJ_4800W2Y.

Becker, Michael, and Surash Patel. “The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts.” Accessed October 30, 2021. https://www.youtube.com/watch?v=V5WlrHSohpM.

Buzzard, John, and Tracy Kitten. “2021 Identity Fraud Study: Shifting Angles.” Livonia, MI: Javelin, March 2021. https://www.javelinstrategy.com/content/2021-identity-fraud-report-shifting-angles-identity-fraud.

Citron, Danielle Keats, and Daniel Solove. “Privacy Harms.” Boston University Law Review 102, no. 2022 (February 2021). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3782222.

“Data 2030: What Does the Future of Data Look Like? | WPP.” London: WPP, November 2020. https://www.wpp.com/wpp-iq/2020/11/data-2030—what-does-the-future-of-data-look-like.

Scrase, Julie, Kasey Ly, Henry Worthington, and Ben Skeleton. “Black Boxes and Paradoxes. The Real Cost of Disconnected Payments.” Checkout.com, July 2021. https://www.checkout.com/connected-payments/black-boxes-and-paradoxes.

Skiba, Katherine. “Consumers Lost $86m to Fraud Originating in Scam Texts.” AARP, June 2021. https://www.aarp.org/money/scams-fraud/info-2021/texts-smartphone.html.

Becker and Patel, “The Identity Imperative.”
“Data 2030.”
Becker, “The Chain of Trust & Mobile Number Identity Scoring.”
Buzzard and Kitten, “2021 Identity Fraud Study.”
Scrase et al., “Black Boxes and Paradoxes. The Real Cost of Disconnected Payments.”
Becker and Patel, “The Identity Imperative.”
Skiba, “Consumers Lost $86m to Fraud Originating in Scam Texts.”

The post The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts appeared first on Identity Praxis, Inc..

Thursday, 09. December 2021

Identity Woman

Techsequences Podcast: Self-Sovereign Identity


I chatted with Alexa Raad and Leslie Daigle of Techsequences about self-sovereign identity: what identity is and how we’ve lost control of our own identity in today’s world. Click on the link below to listen. https://www.techsequences.org/podcasts/?powerpress_pinw=252-podcast “Who are you?”. Answering that may seem at once easy and yet incredibly complex.  In the real world, we […]

The post Techsequences Podcast: Self-Sovereign Identity appeared first on Identity Woman.

Sunday, 05. December 2021

Altmode

Sussex Day 11: Padding(ton) Home


Sunday, November 14, 2021

We got an early start, said good-bye to Celeste (who got to stay in the room a little longer), and headed for Paddington Station about 7 am to catch the Heathrow Express. We bought our tickets, got out on the platform, and were greeted with a message board saying that there were delays on the line and that some trains had been canceled. This made us a little nervous, but the Network Rail application on my phone reassured us that there would, in fact, be a train soon. Although we had a bit more than the usual wait for Heathrow Express, the train proceeded normally and was not excessively crowded.

After the usual long walk, we reached the ticket counter and checked in. They were thorough in checking our vaccination and COVID testing status, although not to the point of actually checking the QR codes associated with each. After checking bags, there was another long walk to the vicinity of the gate. United’s lounge in London is still closed, but in the meantime they have an arrangement with Singapore Airlines for the use of their lounge where we were able to get breakfast.

At the gate, Kenna was diverted for extra security screening because the “SSSS” designation was printed on her boarding pass. Following that inconvenience, our flight departed on time, which we appreciated given that we had only a 2-hour layover in Chicago (including customs and immigration). However, our arrival gate was occupied by another plane, resulting in about a 30-minute delay, which made us understandably nervous.

Greenland from the air

Having seen US Customs signs back in San Francisco promoting the Mobile Passport immigration application for our phones, we entered our passport information and customs declaration. But after racing to the immigration hall, we were told, “We don’t use that any more. Get in line.” More nervousness about the time. After getting through Customs (which left us outside security), we took the tram to Terminal 1 for our flight to San Francisco.

Here we noticed that Kenna didn’t have the TSA Precheck designation on her boarding card, probably as a result of the SSSS designation earlier. It may not have mattered; there were signs saying precheck was closed and the people checking boarding passes didn’t seem to know. So we both went through the “slow line”, and unfortunately Kenna set something off and had to go through some extra screening. Apparently they thought there was something about one of her shoes, which they ran through the X-ray machine again; more delay. It was interesting that there were a number of women having their shoes rechecked at the same time.

We raced to our gate, nearly the furthest from the security checkpoint, and made it in enough time, but with not much to spare. The ride to San Francisco was unremarkable, and we collected our bags and caught our ride home, according to plan.

Epilogue

Arriving home we were severely jet lagged as expected, but tried to stay up as late as we could manage. After a few hours of sleep, I awoke about 2 am. I could hear some water dripping, which I attributed to a downspout being clogged with leaves following some recent rainfall. So I got up to investigate, and instead discovered that there was a substantial amount of water dripping from the ceiling into our guest room. It turns out that a hot water pipe in the attic had developed a pinhole leak and over time had soaked one wall. So we now have a new project.

This article is the final installment in a series about our recent travels to southern England. To see the introductory article in the series, click here.

Saturday, 04. December 2021

Altmode

Sussex Day 10: London


Saturday, November 13, 2021

London isn’t in Sussex, that’s just the theme of the trip.

Celeste expressed a lot of interest in visiting the Imperial War Museum, which none of us had visited, so we decided to make that our first destination. After a quick Pret a Manger breakfast, we took the Tube to the south side of London. The first thing you notice is the battleship guns at the front. My interest was also piqued by a short segment of Berlin Wall near the front entrance.

The museum has a large collection on several floors, with areas emphasizing World War I, World War II, the Cold War, the Holocaust, etc. One could easily spend several days to see all of the exhibits. Toward the end of our visit, we went in to the World War II gallery (having already seen quite a number of exhibits dealing with WW II), and it went on…and on. The gallery was very large and went into great detail, including many stories about participants in the war, German as well as Allied. We hadn’t expected the gallery to be nearly as large as it was, and might have allocated more time if we had.

Early in the afternoon we tired of the museum and decided to look for lunch. We thought we might like German food, so guided by our phones, we walked north and came to Mercato Metropolitano, a large semi-outdoor food court with sustainable food from many world cuisines. Each of us selected something we like, but we never found the German restaurant we thought was there.

Continuing north, we got to Borough Market, a large trading market established in 1756. Perhaps because it was a Saturday, it was very crowded. Normally this might not have been as notable, but since the COVID epidemic we have avoided crowds and become unaccustomed to them. We walked through quickly and then continued on to the Thames, where we went along the south shore to the Millennium Bridge. We walked out on the bridge, took some pictures, and continued west to the Westminster Bridge. All along the way there were people — lots of people.

After crossing the Westminster bridge, we took a short Tube ride to the West End. Again, everything was crowded. We tried a couple of places for dinner, but nothing was available without an advance booking. The Five Guys burger restaurant was jammed, and there was even a long queue at McDonalds (!). We couldn’t figure out the attraction there.

We finally settled on Itsu, the same Asian-themed fast food chain that we had tried in Brighton. We were able to find a table and had an enjoyable light meal.

The big event of the day was this evening: we had tickets to Back to the Future: The Musical, playing at the Adelphi Theatre on The Strand. This is a new show that just opened in July 2021 and has not yet made it to the United States. The theatre was, as expected, nearly full. But we had been told that COVID vaccination, negative tests, and the wearing of masks would be required. In fact, we were never asked about vaccinations or tests, and the majority of the audience did not wear masks. We felt somewhat less safe as a result.

Still, the show was very enjoyable. As Celeste pointed out, this is a “tech show” with the strong point being special effects. Most of the performances, particularly Doc Brown, were excellent as well, although Celeste noted that some of the actors had trouble with American accents.

We took the Tube back to our hotel and are retiring quickly. Tomorrow will be an early day for Jim and Kenna’s flight back home.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Friday, 03. December 2021

Altmode

Sussex Day 9: Brighton to London


Friday, November 12, 2021

Since it is now 2 days before our return to the United States, today was the day for our pre-trip COVID test. We were a little nervous about that because, of course, it determines whether we return as planned. Expecting a similar experience as for our Day 2 test, we were a bit surprised that this time we would have to do a proctored test where the proctor would watch us take the test via video chat. The next surprise was that you seem to need both a smartphone to run their app and some other device for the chat session. So we got out our iPads, and (third surprise) there was apparently a bug in their application causing it not to work on an iPad. So we got out my Mac laptop and (fourth surprise) couldn’t use my usual browser, Firefox, but could fortunately use Safari. Each test took about half an hour, including a 15-minute wait for the test to develop. Following the wait, a second video chat was set up where they read the test with you and issued your certificate. Very fortunately, both of our tests were negative.

We checked out of the apartment/hotel just before checkout time and stored our bags. Then the question was what to do until Celeste finished classes so we could all take the train to London. The answer was the Sea Life Brighton, apparently the oldest aquarium in the world. While not an extensive collection, many of the exhibits were in a classic style with ornate frames supporting the glass windows. There was a very enjoyable tunnel where you can sit while fish (and turtles!) swim overhead. The aquarium covered a number of regions of the world, with more of an emphasis on fresh-water fish than many others we have seen.

After browsing a bookstore for a while, we collected our bags and headed for the train station. Trains run to Victoria Station in London every half hour, and fortunately that connected well with the train Celeste took from Falmer to meet us.

After the train trip and Tube ride to Paddington Station, we walked the short distance to our hotel, a newly renovated boutique hotel called Inhabit. We chose it largely because it had nice triple rooms, including an actual bed (not sofa bed) for Celeste. No London trip would be complete without a hotel where it’s necessary to lug your bags up a flight of stairs, but fortunately this one only required a single flight. Our room was modern and comfortable.

I had booked a table at the Victoria, a pub in the Paddington area, and we were seated in a pleasant and not noisy dining room upstairs. Dinner was excellent. Upon returning to the hotel, Celeste immediately collapsed for the night on her cozy bed.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Thursday, 02. December 2021

Altmode

Sussex Day 8: Hove and Skating


Thursday, November 11, 2021

While Celeste was in classes, Kenna and I set out on foot for Hove, Brighton’s “twin” city to the west. We had a rather pleasant walk through a shopping district, but there wasn’t much remarkable to see. In Hove, we turned south and followed the main road along the Channel back to the west. We stopped to look at one of the characteristic crescent-shaped residential developments, and continued toward Brighton. We considered going on the i360 observation tower, but it wasn’t particularly clear and the expense didn’t seem worth it.

Celeste and a friend of hers (another exchange student from Colorado) joined us in the afternoon to go ice skating at the Royal Pavilion Ice Rink. While I am used to hockey skates, it was a bit of an adjustment to the others who are used to the toe picks on figure skates. We all got the hang of it; the ice was beautifully maintained (although with some puddles) and the rink was not particularly crowded for our 3 pm session.

After skating we sat in the attached cafe to chat until it was time for dinner, which we had at an Italian restaurant, Bella Italia, in the Lanes.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Sussex Day 7: Pavilion and Museum


Wednesday, November 10, 2021

Celeste has a busy class schedule the early part of the day, so Kenna and I set out on our own, first for a hearty breakfast at Billie’s Cafe and then to the Royal Pavilion, one of the sightseeing highlights of Brighton. Originally a country estate, it was remodeled by King George IV into an ornate building, with the exterior having an Indian theme and the interior extensively decorated and furnished in Chinese style.

Brighton’s Royal Pavilion has had a varied history, having been of less interest to Queen Victoria (George IV’s successor to the throne), who moved most of the furnishings to London and sold the building to the City of Brighton. Over the years it has been refurnished in the original style and with many of the original furnishings, some of which have been loaned by Queen Elizabeth. The Pavilion was in the process of being decorated for Christmas, which reminded us of a visit we made two years ago to Filoli in California.

After the Pavilion, we went across the garden to the Brighton Museum, which had a wide range of exhibits ranging from ancient history of the British Isles and ancient Egypt to LGBT styles of the late 20th century and modern furniture.

Having finished her classes, Celeste joined us for lunch at Itsu, one of a chain of Asian-inspired fast food restaurants. We then returned with Celeste to the museum to see a bit more and allow her time to do some research she had planned.

We then made our way behind the Pavilion, where a seasonal ice rink is set up for recreational ice skating. With its location next to the Pavilion it is a particularly scenic place to skate. We are looking forward to doing that tomorrow.

Celeste returned to campus, and Kenna and I, having had a substantial lunch, opted for a light dinner at Ten Green Bottles, a local wine bar.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Wednesday, 01. December 2021

Identity Woman

Joining Secure Justice Advisory Board


I am pleased to share that I have joined the Secure Justice Advisory board. I have known Brian Hofer since he was one of the leaders within Oakland Privacy that successfully resisted the Domain Awareness Center for Oakland. I wrote a guest blog post about a philosophy of activism and theory of change called Engaging […]

The post Joining Secure Justice Advisory Board appeared first on Identity Woman.


MyDigitalFootprint

"Hard & Fast" Vs "Late & Slow"

The title might sound like a movie but this article is about unpacking decision making.

We need leaders to be confident in their decisions so we can hold them accountable. We desire leaders to lead, wanting them to be early. They achieve this by listening to the signals and reacting before it is obvious to the casual observer. However, those in leadership whom we hold accountable do not want to make the “wrong” decisions. A wrong decision can mean liability, loss of reputation, or being perceived as too risky. A long senior leadership career requires navigating a careful path between not taking too much risk by going too “early”, which leads to failure, and not being so late that anyone could have made the decision earlier, which looks incompetent. Easy leadership does not look like leadership, as it finds a path of being neither early nor late (the majority).



Leadership trends over the past 100 years include ideas such as improving margin, diversification, reduction, speed to market, finance-led decisions, data-led, customer first, agile, just-in-time, customer centricity, digital first, personalisation, automated decisions, innovation, transformation, ethics, diversity, privacy by design, shareholder primacy, stakeholder management, re-engineering, and outsourcing, to name a few. Over the same period of time, our ideas of leadership styles have also evolved.



There is an inference or hypothesis that we can test: that our approach to risk means we have the leaders we now deserve. Whether our risk creates the leadership we have, or leadership manages risk to what we want, is a cause-and-effect problem that results from the complex market we operate in.

The Ladder of Inference below is a concept developed by the late Harvard Professor Chris Argyris to help explain why anyone reading this and looking at the same set of evidence can draw very different conclusions. The point, however, is that we want leadership with the courage for decisions that are “hard and fast”, but what we get is “late and slow”. Data-led, waiting for the data, following the model: all confirm that the decisions we are taking are late and slow. We know there is a gap; it is just hard to know why. Hard and fast occurs when there is a lack of data or evidence, and rests on judgment and not confirmation, the very things we value but penalise at the same time.





Right now we see this in how governments have reacted to COVID. Again, we can conclude with hindsight that no one country’s leadership got it right, and the majority appear to continue to get it wrong, in the view that the voters will not vote for them if they take the hard choices. Follow the science, follow the data: making sure we are late and slow.

Climate change and COP26. There will never be enough data and waiting for more data confirms our need to manage to a risk model that does not account for the environment with the same weight as finance.

Peak Paradox

The Peak Paradox framework forces us to address the question “what are we optimising for?” Previous articles have highlighted the issues about decision making at Peak Paradox; however, at each point we should also consider the leadership style: “Hard & Fast” versus “Late & Slow”.





The Peak Paradox model gives us a position in space. At each point where we are, thinking about hard & fast vs late & slow introduces a concept of time and direction into the model.

Tuesday, 30. November 2021

Altmode

Sussex Day 6: Downtime


Tuesday, November 9, 2021

Somewhat at the midpoint of our trip, it was time to take care of a few things like laundry. It’s also time for the thrice-annual Internet Engineering Task Force meeting, which was supposed to be in Madrid, but is being held online (again) due to the pandemic. I co-chaired a session from noon to 2 pm local time today, so I needed to be at the hotel for that. Meanwhile Kenna and Celeste did some exploring around the little shops in the Brighton Lanes.

Our downtime day also gave us an opportunity to do some laundry. One of the attractive features of our “aparthotel” is a compact combination washer/dryer. Our room also came with a couple of detergent pods, which were unfortunately and unexpectedly heavily scented. We will be using our own detergent in the future. The dryer was slow, but it did the job.

IETF virtual venue

I am again thankful for the good internet service here; the meeting went without a hitch (my co-chair is in Melbourne, Australia). Kenna and Celeste brought lunch from Pret a Manger to eat between meeting sessions I needed to attend. Following the second session we went off for dinner at a pizza place we had discovered, Franco Manca. The pizza and surroundings were outstanding; we would definitely return (and Celeste probably will). We then saw Celeste off to her bus back to campus and we returned to our hotel.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Matt Flynn: InfoSec | IAM

Introducing OCI IAM Identity Domains


A little over a year ago, I switched roles at Oracle and joined the Oracle Cloud Infrastructure (OCI) Product Management team working on Identity and Access Management (IAM) services. It's been an incredibly interesting (and challenging) year leading up to our release of OCI IAM identity domains.

We merged an enterprise-class Identity-as-a-Service (IDaaS) solution with our OCI-native IAM service to create a cloud platform IAM service unlike any other. We encountered numerous challenges along the way that would have been much easier if we allowed for customer interruption. But we had a key goal to not cause any interruptions or changes in functionality to our thousands of existing IDaaS customers. It's been immeasurably impressive to watch the development organization attack and conquer those challenges.

Now, with a few clicks from the OCI admin console, customers can create self-contained IDaaS instances to accommodate a variety of IAM use-cases. And this is just the beginning. The new, upgraded OCI IAM service serves as the foundation for what's to come. And I've never been more optimistic about Oracle's future in the IAM space.

Here's a short excerpt from our blog post Introducing OCI IAM Identity Domains:

"Over the past five years, Oracle Identity Cloud Service (IDCS) has grown to support thousands of customers and currently manages hundreds of millions of identities. Current IDCS customers enjoy a broad set of Identity and Access Management (IAM) features for authentication (federated, social, delegated, adaptive, multi-factor authentication (MFA)), access management, manual or automated identity lifecycle and entitlement management, and single sign-on (SSO) (federated, gateways, proxies, password vaulting).

In addition to serving IAM use cases for workforce and consumer access scenarios, IDCS has frequently been leveraged to enhance IAM capabilities for Oracle Cloud Infrastructure (OCI) workloads. The OCI Identity and Access Management (OCI IAM) service, a native OCI service that provides the access control plane for Oracle Cloud resources (networking, compute, storage, analytics, etc.), has provided the IAM framework for OCI via authentication, access policies, and integrations with OCI security approaches such as compartments and tagging. OCI customers have adopted IDCS for its broader authentication options, identity lifecycle management capabilities, and to provide a seamless sign-on experience for end users that extends beyond the Oracle Cloud.

To better address Oracle customers’ IAM requirements and to simplify access management across Oracle Cloud, multi-cloud, Oracle enterprise applications, and third-party applications, Oracle has merged IDCS and OCI IAM into a single, unified cloud service that brings all of IDCS’ advanced identity and access management features natively into the OCI IAM service. To align with Oracle Cloud branding, the unified IAM service will leverage the OCI brand and will be offered as OCI IAM. Each instance of the OCI IAM service will be managed as identity domains in the OCI console."

Learn more about OCI IAM identity domains

Monday, 29. November 2021

Altmode

Sussex Day 5: Lewes

Monday, November 8, 2021 We started our day fairly early, getting a quick Starbucks breakfast before getting on the bus to University of Sussex to meet Celeste at 9:30 am. Celeste has an hour-long radio show, “Oops That Had Banjos”, on the campus radio station, University Radio Falmer. She invited us to co-host the show. […]

Monday, November 8, 2021

We started our day fairly early, getting a quick Starbucks breakfast before getting on the bus to University of Sussex to meet Celeste at 9:30 am. Celeste has an hour-long radio show, “Oops That Had Banjos”, on the campus radio station, University Radio Falmer. She invited us to co-host the show. The studio was exactly as I had imagined, and it was a lot of fun doing the show with her. We each contributed a couple of songs to the playlist, and got to introduce them briefly.

After the show, Celeste had classes so we continued on to Lewes. We hadn’t been able to see much on our short visit Sunday evening. We started out at Lewes Castle & Museum, again getting an idea of the history of the place and then visiting portions of the castle itself. It was a clear day, and the view from the top was excellent. As with many of these sites, the castle went through many changes through the centuries as political conditions changed.

Lewes Barbican Gate and view from the Castle

After climbing around the castle, we were ready for lunch. We checked out a few restaurants in town before settling on the Riverside Cafe, in an attractive area on the River Ouse. After lunch, we walked among a number of small shops before entering a Waterstones bookstore. How we miss spending time in quality bookstores! I expect we’ll be seeking them out more once we return.

We then took the train back to Brighton, since I had a meeting to attend for work. The meeting went well; the internet connection at the hotel is solid and makes it seem like it hardly matters where in the world I am when attending these meetings.

Celeste came down to Brighton to have dinner with us. We decided to go with Latin American food at a local chain called Las Iguanas. The food was quite good although somewhat standard, at least to those of us from California and Colorado.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


reb00ted

Facebook's metaverse pivot is a Hail Mary pass

Update Feb 04, 2022: It looks like I was entirely correct with this November post. Facebook is out of new users, over 10 billion in metaverse investments have nothing to show for it yet, and the markets have caught up, dropping the stock by $200 billion in a day. To make things worse, Zuckerberg supposedly said “focus on video” in the all-hands on the same day. He chickened out; he should have doubled down…

Update Feb 04, 2022: It looks like I was entirely correct with this November post. Facebook is out of new users, over 10 billion in metaverse investments have nothing to show for it yet, and the markets have caught up, dropping the stock by $200 billion in a day. To make things worse, Zuckerberg supposedly said “focus on video” in the all-hands on the same day. He chickened out; he should have doubled down on the metaverse story if he truly believes it. But he did not.

The more I think about Facebook’s (now Meta’s) pivot to the metaverse, the less it appears that they are doing this voluntarily. I think they have no other choice: their existing business is running out of steam. Consider:

At about 3.5 billion monthly active users of at least one of their products (Facebook, Instagram, WhatsApp etc), they are running out of more humans to sign up.

People say they use Facebook to stay in touch with family and friends. But there is now one ad in my feed for each three or four posts that I actually want to see. Add more ads than this, and users will turn their backs: Facebook doesn’t help them with what they want help with any more, it’s all ads.

While their ARPU is much higher in the US than in Europe, where in turn it is much higher than the rest of the world – hinting that international growth should be possible – their distribution of ARPU is not all that different from the whole ad market’s distribution of ad revenues in different regions. Convincing, say, Africa to spend much more on ads does not sound like a growth story.

And between the regulators in the EU and elsewhere, moves to effectively ban further Instagram-like acquisitions, lawsuits left and right, and Apple’s privacy moves, their room to manoeuvre is getting tighter, not wider.

Their current price/sales ratio of just under 10 is hard to justify for long under these constraints. They must also be telling themselves that relying on an entirely ad-based business model is no longer a good long-term strategy, given the backlash against surveillance capitalism.

So what do you do?

I think you change the fundamentals of your business at the same time you change the conversation, leveraging the technology you own. And you end up with:

Oculus as the replacement for the mobile phone;

Headset and app store sales, for Oculus, as an entirely new business model that’s been proven (by the iPhone) to be highly profitable and is less under attack by regulators and the public; it also supports potentially much higher ARPU than just ads;

Renaming the company to something completely harmless and bland sounding; that will also let you drop the Facebook brand should it become too toxic down the road.

The risks are immense, starting with: how many hours a day do you hold your mobile phone in your hand, in comparison to how many hours a day you are willing to wear a bucket on your head, ahem, a headset? Even fundamental interaction questions, architecture questions and use case questions for the metaverse are still completely up in the air.

Credit to Mark Zuckerberg for pulling off a move as substantial as this for an almost trillion dollar company. I can’t think of any company which has ever done anything similar at this scale. When Intel pivoted from memory to CPUs, back in the 1980’s and at a much smaller scale, at least it was clear that there was going to be significant, growing demand for CPUs. This is not clear at all about headsets beyond niches such as gaming. So they are really jumping into the unknown with both feet.

But I don’t think any more they had a choice.

Sunday, 28. November 2021

Altmode

Sussex Day 4: Hastings

Sunday, November 7, 2021 Having gone west to Chichester yesterday, today we went east to Hastings, notable for the Norman conquest of 1066 (although the actual Battle of Hastings was some distance inland). We arranged to meet Celeste on the train as we passed through Falmer, where her campus is located, for the hour-or-so trip […]

Sunday, November 7, 2021

Having gone west to Chichester yesterday, today we went east to Hastings, notable for the Norman conquest of 1066 (although the actual Battle of Hastings was some distance inland). We arranged to meet Celeste on the train as we passed through Falmer, where her campus is located, for the hour-or-so trip along the coast. Unfortunately, it seems like it’s an hour train ride to most sights outside Brighton.

Hastings is an attractive and somewhat touristy town, along the Channel and in a narrow valley surrounded by substantial hills. We walked through the town, stopping for a fish and chips lunch along the way, and admiring the small shops in the Old Town. We took a funicular up one of the two hills and had an excellent view of the surrounding terrain. Unfortunately, the ruins of the castle at Hastings were closed for the season.

Funicular and view from the top

After returning via funicular, we continued through the town to the Hastings Museum, a well curated (and free!) small museum that was thorough in its coverage of the history of the area, from the Iron Age to the present. It also included an extensive collection from a local family that sailed around the world in the 1800s.

Taking the train back, we had a change of trains in Lewes, which Celeste had visited and enjoyed previously. We stopped at the Lewes Arms pub, but unfortunately (since it was Sunday evening) the kitchen had closed so we couldn’t get food. So Celeste returned to campus and got dinner there, while Kenna and I got take-out chicken sandwiches to eat in our hotel.

Our weekly family Zoom conference is on Sunday evening, England time, so we ate our sandwiches while chatting with other family members back home. It’s so much easier to stay in close touch with family while traveling than it was just a few years ago.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Just a Theory

Accelerate Perl Github Workflows with Caching

A quick tip for speeding up Perl builds in GitHub workflows by caching dependencies.

I’ve spent quite a few hours over evenings and weekends recently building out a comprehensive suite of GitHub Actions for Sqitch. They cover a dozen versions of Perl, nearly 70 database versions amongst nine database engines, plus a coverage test and a release workflow. A pull request can expect over 100 actions to run. Each build requires over 100 direct dependencies, plus all their dependencies. Installing them for every build would make any given run untenable.

Happily, GitHub Actions include a caching feature, and thanks to a recent improvement to shogo82148/actions-setup-perl, it’s quite easy to use in a version-independent way. Here’s an example:

name: Test
on: [push, pull_request]
jobs:
  OS:
    strategy:
      matrix:
        os: [ ubuntu, macos, windows ]
        perl: [ 'latest', '5.34', '5.32', '5.30', '5.28' ]
    name: Perl ${{ matrix.perl }} on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}-latest
    steps:
      - name: Checkout Source
        uses: actions/checkout@v2
      - name: Setup Perl
        id: perl
        uses: shogo82148/actions-setup-perl@v1
        with: { perl-version: "${{ matrix.perl }}" }
      - name: Cache CPAN Modules
        uses: actions/cache@v2
        with:
          path: local
          key: perl-${{ steps.perl.outputs.perl-hash }}
      - name: Install Dependencies
        run: cpm install --verbose --show-build-log-on-failure --no-test --cpanfile cpanfile
      - name: Run Tests
        env: { PERL5LIB: "${{ github.workspace }}/local/lib/perl5" }
        run: prove -lrj4

This workflow tests every permutation of OS and Perl version specified in jobs.OS.strategy.matrix, resulting in 15 jobs. The runs-on value determines the OS, while the steps section defines steps for each permutation. Let’s take each step in turn:

1. “Checkout Source” checks the project out of GitHub. Pretty much required for any project.
2. “Setup Perl” sets up the version of Perl using the value from the matrix. Note the id key set to perl, used in the next step.
3. “Cache CPAN Modules” uses the cache action to cache the directory named local with the key perl-${{ steps.perl.outputs.perl-hash }}. The key lets us keep different versions of the local directory based on a unique key. Here we’ve used the perl-hash output from the perl step defined above. The actions-setup-perl action outputs this value, which contains a hash of the output of perl -V, so we’re tying the cache to a very specific version and build of Perl. This is important since compiled modules are not compatible across major versions of Perl.
4. “Install Dependencies” uses cpm to quickly install Perl dependencies. By default, it puts them into the local subdirectory of the current directory — just where we configured the cache. On the first run for a given OS and Perl version, it will install all the dependencies. But on subsequent runs it will find the dependencies already present, thanks to the cache, and quickly exit, reporting “All requirements are satisfied.” In this Sqitch job, it takes less than a second.
5. “Run Tests” runs the tests that require the dependencies. It requires the PERL5LIB environment variable to point to the location of our cached dependencies.

That’s the whole deal. The first run will be the slowest, depending on the number of dependencies, but subsequent runs will be much faster, up to the seven-day caching period. For a complex project like Sqitch, which uses the same OS and Perl version for most of its actions, this results in a tremendous build time savings. CI configurations we’ve used in the past often took an hour or more to run. Today, most builds take only a few minutes to test, with longer times determined not by dependency installation but by container and database latency.

More about… Perl, GitHub, GitHub Actions, GitHub Workflows, Caching

Saturday, 27. November 2021

Altmode

Sussex Day 3: Chichester and Fishbourne

Saturday, November 6, 2021 After a pleasant breakfast at a cafe in The Lanes, we met up with Celeste at the Brighton train station and rode to Chichester, about an hour to the west. Chichester is a pleasant (and yes, touristy) town with a notable cathedral. Arriving somewhat late, we walked through the town and […]

Saturday, November 6, 2021

After a pleasant breakfast at a cafe in The Lanes, we met up with Celeste at the Brighton train station and rode to Chichester, about an hour to the west. Chichester is a pleasant (and yes, touristy) town with a notable cathedral. Arriving somewhat late, we walked through the town and then found lunch at a small restaurant on a side road as many of the major restaurants in town were quite crowded (it is a Saturday, after all).



One of the main attractions in the area is the Fishbourne Roman Palace, one village to the west. We set out on foot, through a bit of rain, for a walk of a couple of miles. But when we arrived it was well worth the trip. This is an actual Roman palace, constructed in about 79AD, that had been uncovered starting in the 1960s, along with many coins, implements, and other artifacts. The mosaic floors were large and particularly impressive. As a teenager, I got to visit the ruins in Pompeii; these were of a similar nature. This palace and surrounding settlements were key to the Roman development of infrastructure in England.

Returning from Fishbourne to Chichester, we made a short visit to Chichester Cathedral. Unfortunately, the sun had set and it was difficult to see most of the stained glass. At the time of our visit, there was a large model of the Moon, traveling to several locations in Europe, that was hanging from the ceiling in the middle of the church. It was a striking thing to see, especially as we first entered.

After our train trip back from Chichester, we parted with Celeste who returned to campus. Since it was a Saturday night, restaurants were crowded, but we were able to get dinner at a large chain pub, Wetherspoons. The pub was noisy and table service was minimal. We ordered via their website and they only cleared the previous patrons’ dirty dishes when they delivered our food. The food was acceptable, but nothing to blog about.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Wednesday, 24. November 2021

Mike Jones: self-issued

JWK Thumbprint URI Specification

The JSON Web Key (JWK) Thumbprint specification [RFC 7638] defines a method for computing a hash value over a JSON Web Key (JWK) [RFC 7517] and encoding that hash in a URL-safe manner. Kristina Yasuda and I have just created the JWK Thumbprint URI specification, which defines how to represent JWK Thumbprints as URIs. This […]

The JSON Web Key (JWK) Thumbprint specification [RFC 7638] defines a method for computing a hash value over a JSON Web Key (JWK) [RFC 7517] and encoding that hash in a URL-safe manner. Kristina Yasuda and I have just created the JWK Thumbprint URI specification, which defines how to represent JWK Thumbprints as URIs. This enables JWK Thumbprints to be communicated in contexts requiring URIs, including in specific JSON Web Token (JWT) [RFC 7519] claims.
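To make the mechanics concrete, here is a small illustrative sketch (mine, not taken from either specification) of the RFC 7638 thumbprint computation with the result wrapped in a URI; the urn:ietf:params:oauth:jwk-thumbprint prefix and the sha-256 segment are assumptions about the URI form the draft registers, and the example key is arbitrary.

# Illustrative sketch: RFC 7638 JWK Thumbprint, expressed as a URI.
# The URN prefix below is an assumption; consult the draft for the exact form.
import base64, hashlib, json

def jwk_thumbprint(jwk: dict) -> str:
    """SHA-256 thumbprint of a JWK per RFC 7638 (base64url, no padding)."""
    required = {
        "RSA": ("e", "kty", "n"),
        "EC": ("crv", "kty", "x", "y"),
        "oct": ("k", "kty"),
        "OKP": ("crv", "kty", "x"),
    }[jwk["kty"]]
    # Canonical form: required members only, lexicographic order, no whitespace.
    canonical = json.dumps({k: jwk[k] for k in required},
                           separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def jwk_thumbprint_uri(jwk: dict) -> str:
    return "urn:ietf:params:oauth:jwk-thumbprint:sha-256:" + jwk_thumbprint(jwk)

example_ec_jwk = {"kty": "EC", "crv": "P-256",
                  "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
                  "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0"}
print(jwk_thumbprint_uri(example_ec_jwk))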

Use cases for this specification were developed in the OpenID Connect Working Group of the OpenID Foundation. Specifically, its use is planned in future versions of the Self-Issued OpenID Provider v2 specification.

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-oauth-jwk-thumbprint-uri-00.html

Identity Woman

Quoted in Consumer Reports article on COVID Certificates

How to Prove You’re Vaccinated for COVID-19 You may need to prove your vaccination status for travel or work, or to attend an event. Paper credentials usually work, but a new crop of digital verification apps is adding confusion. Kaliya Young, an expert on digital identity verification working on the COVID Credentials Initiative, is also […] The post Quoted in Consumer Reports article on COVID Certificates…

How to Prove You’re Vaccinated for COVID-19 You may need to prove your vaccination status for travel or work, or to attend an event. Paper credentials usually work, but a new crop of digital verification apps is adding confusion. Kaliya Young, an expert on digital identity verification working on the COVID Credentials Initiative, is also […]

The post Quoted in Consumer Reports article on COVID Certificates appeared first on Identity Woman.

Tuesday, 23. November 2021

Identity Woman

Is it all change for identity?

Opening Plenary EEMA’s Information Security Solutions Europe Keynote Panel Last week while I was at Phocuswright I also had the pleasure of being on the Keynote Panel at EEMA‘s Information Security Solutions Europe [ISSE] virtual event. We had a great conversation talking about the emerging landscape around eIDAS and the recent announcement that the EU […] The post Is it all change for identity?

Opening Plenary EEMA’s Information Security Solutions Europe Keynote Panel Last week while I was at Phocuswright I also had the pleasure of being on the Keynote Panel at EEMA‘s Information Security Solutions Europe [ISSE] virtual event. We had a great conversation talking about the emerging landscape around eIDAS and the recent announcement that the EU […]

The post Is it all change for identity? appeared first on Identity Woman.

Thursday, 18. November 2021

Ally Medina - Blockchain Advocacy

Initial Policy Offerings

A Reader’s Guide How should crypto be regulated? And by whom? These are the big questions the industry is grappling with in the wake of the infrastructure bill being signed with the haphazardly expanded definition of a broker dealer for tax reporting provisions. So now the industry is *atwitter* with ideas about where to go from here. Three large companies have all come out with policy suggestions…

A Reader’s Guide

How should crypto be regulated? And by whom? These are the big questions the industry is grappling with in the wake of the infrastructure bill being signed with the haphazardly expanded definition of a broker dealer for tax reporting provisions. So now the industry is *atwitter* with ideas about where to go from here.

Three large companies have all come out with policy suggestions: FTX, Coinbase, and A16z. While these proposals differ in approach, they all seek to address a few central policy questions. I’ll break down the proposals based on key subject areas:

Consumer Protection

A16z: Suggests a framework for DAOs to provide disclosures.

Coinbase: Sets a goal to “​​Enhance transparency through appropriate disclosure requirements. Protect against fraud and market manipulation”

FTX: Similarly suggests a framework for “disclosure and transparency standards”.

All three of these make ample mention of consumer protections that seem to begin and end at disclosures. Regulators might want something with a little more teeth. FTX provides a more robust outline for combating fraud, suggesting the use of on-chain analytics tools. This is a smart and concrete suggestion of how to improve existing regulation that relies on SARs (suspicious activity reports) filed AFTER suspicious activity.

Exactly how Decentralized?

A16z: Seeks to create a definition and entity status for DAOs, which would ostensibly require a different kind of regulation than more custodial services.

Coinbase: Platforms and services that do not custody or otherwise control the assets of a customer — including miners, stakers and developers — would need to be treated differently.

FTX- Doesn’t mention decentralization.

These are really varied approaches. I’m not criticizing FTX here; they are focusing on consumer protections and combating fraud, which are good things to highlight. However, the core regulatory issue is: can we differentiate between decentralized and centralized products, and does that create a fundamental conflict with existing law? A16z’s approach is novel: a new designation without a new agency.

The Devil You Know vs The Devil you Don’t

A16z- Suggests the Government Accountability Office (GAO) “assess the current state of regulatory jurisdiction over cryptocurrency, digital assets, and decentralized technology, and to compare the costs and benefits of harmonizing jurisdiction among agencies against vesting supervision and oversight with a federally chartered self-regulatory organization or one or more nonprofit corporations.”

Coinbase- Argues that this technology needs a new regulatory agency and that all digital assets should be under a single regulatory authority. Also suggests coordination with a Self Regulatory Organization.

FTX- Doesn’t step into that morass.

Coinbase has the most aggressive position here. I personally am not convinced of the need for a new regulatory agency. We haven’t tried it the old fashioned way yet, where existing agencies offer clarity about what would bring a digital asset into their jurisdiction and what would exclude it. Creating a new agency is a slow and expensive process. And then that agency would need to justify its existence by aggressively cracking down. It’s a bit like creating a hammer and then inevitably complaining that it sees everything as a nail.

How to achieve regulatory change in the US for crypto:

- Stop tweeting aggressively at the people who regulate you. Negative points if you are a billionaire complaining about taxes.
- Spend some time developing relationships with policymakers and working collaboratively with communities you want to support. Lotta talk of unbanked communities- any stats on how they are being served by this tech? (Seriously please share)
- Consider looking at who is already doing the work you want to accelerate and consider working with them/learning from and supporting existing efforts rather than whipping out your proposal and demanding attention. Examples: CoinCenter, Blockchain Association. At the state level: Blockchain Advocacy Coalition of course, Cascadia Blockchain Council, Texas Blockchain Council etc.

A16: https://int.nyt.com/data/documenttools/2021-09-27-andreessen-horowitz-senate-banking-proposals/ec055eb0ce534033/full.pdf#page=9

Coinbase: https://blog.coinbase.com/digital-asset-policy-proposal-safeguarding-americas-financial-leadership-ce569c27d86c

FTX: https://blog.ftx.com/policy/policy-goals-market-regulation/

Monday, 15. November 2021

reb00ted

Social Media Architectures and Their Consequences

This is an outcome of a session I ran at last week’s “Logging Off Facebook – What comes next?” unconference. We explored what technical architecture choices have which technical, or non-technical consequences for social media products. This table was created during the session. It is not complete, and personally I disagree with a few points, but it’s still worthwhile publishing IMHO. So here you are…

This is an outcome of a session I ran at last week’s “Logging Off Facebook – What comes next?” unconference. We explored what technical architecture choices have which technical, or non-technical consequences for social media products.

This table was created during the session. It is not complete, and personally I disagree with a few points, but it’s still worthwhile publishing IMHO.

So here you are:

Columns: Facebook-style ("centralized") | Mastodon-style ("federated") | IndieWeb-style ("distributed/P2P") | Blockchain-style

Moderation: Uniform, consistent moderation policy for all users | Locally different moderation policies, but consistent for all users on a node | Every user decides on their own | Posit: algorithmic smart contract that drives consensus
Censorship: Easy; global | One node at a time | Full censorship not viable | Full censorship not viable
Software upgrades: Fast, uncomplicated for all users | Inconsistent across the network | Inconsistent across the network | Consistent, but large synchronization / management costs
Money: Centralized; most accumulated by "Facebook" | Donations (BuyMeACoffee, LiberaPay); Patronage (Patreon) | Paid to/earned by network nodes; value fluctuates due to speculation
Authentication: Centralized | Decentralized (e.g. Solid, OpenID, SSI) | Decentralized (e.g. wallets)
Advertising: Decided by "Facebook" | Not usually | Determined by user
Governance: Centralized, unaccountable | Several components: protocol-level, code-level and instance-level | Several components: protocol-level, code-level and instance-level
Search & Discovery, Group formation, Regulation: left blank in the session
Ownership: Totalitarian | Individual

(Rows with fewer entries than columns were left incomplete in the session; the remaining cells were blank.)

Thursday, 04. November 2021

Tim Bouma's Blog

The Rise of MetaNations

Photo by Vladislav Klapin on Unsplash We are witnessing the rise of metanations (i.e., digitally native nations, not nation states that are trying to be digital). The first instance of which is Facebook Meta. The newer term emerging is the metaverse, which will eventually refer to the collection of emerging digitally native constructs, such as digital identity, digital currency and non-fungible tokens…
Photo by Vladislav Klapin on Unsplash

We are witnessing the rise of metanations (i.e., digitally native nations, not nation states that are trying to be digital). The first instance of this is Facebook Meta. The newer term emerging is the metaverse, which will eventually refer to the collection of emerging digitally native constructs, such as digital identity, digital currency and non-fungible tokens. We’re not there yet, but many are seeing the trajectory where metanations, like Facebook, will have metacitizens, who will have metarights to interact and transact in this new space. This is not science fiction; it is becoming a reality, and the fronts are opening up on identity, currency, rights and property that exist within these digital realms but also touch upon the real world.

So what’s the imperative for us as real people and governments? To make sure that these realms are as open and inclusive as possible. Personally for me, I don’t want to have a future where certain metacitizens can exert their metarights in an unfair way within the real world; the chosen few getting to the front of the line for everything.

But we can’t just regulate and outlaw — we need to counter in an open fashion. We need open identity, open currency, open payments, and open rights.

Where I am seeing the battle shape up most clearly is in the open payments space, specifically the Lightning Network. I am sure that as part of Facebook’s play, they will introduce their own currency, Diem, that can only be used within their own metaverse according to their own rules. Honestly, I don’t believe we can counter this as governments and regulators; we need to support open approaches such as the Lightning Network. A great backgrounder article by Nik Bhatia, author of Layered Money, is here.

Wednesday, 03. November 2021

Identity Praxis, Inc.

FTC’s Shot Across the Bow: Purpose and Use Restrictions Could Frame The Future of Personal Data Management

I just read a wonderful piece from Joseph Duball [1], who reported on the U.S. Federal Trade Commissioner Rebecca Kelly Slaughter’s keynote at the IAPP’s “Privacy. Security. Risk 2021” event. According to Duball, Slaughter suggests that we need to change “the way people view, prioritize, and conceptualize their data.” “Too many services are about leveraging consumer […] The post FTC’s Shot Across the Bow…

I just read a wonderful piece from Joseph Duball [1], who reported on the U.S. Federal Trade Commissioner Rebecca Kelly Slaughter’s keynote at the IAPP’s “Privacy. Security. Risk 2021” event. According to Duball, Slaughter suggests that we need to change “the way people view, prioritize, and conceptualize their data.”

“Too many services are about leveraging consumer data instead of straightforwardly providing value. For even the savviest users, the price of browsing the internet is being tracked across the web.” – Rebecca Kelly Slaughter, Commissioner, U.S. Federal Trade Commission, 2021 [2]

According to Slaughter, privacy issues within data-driven markets stem from surveillance capitalism, which is fueled by indiscriminate data collection practices. She suggests that the remedy to curtail these practices is to focus on “purpose and use” restrictions and limitations rather than solely relying on the notice & choice framework. In other words, in the future industry may no longer be able to justify data practices with explanations like “they opted in” and “we got consent”, or by falling back on notice & choice alone.

“Collection and use limitations can help protect people’s rights. It should not be necessary to trade one’s data away as a cost of full participation in society and the modern information economy.” – Rebecca Kelly Slaughter, Commissioner, U.S. Federal Trade Commission, 2021 [3]

FTC Has Concerns Other Than Privacy

So that there is no uncertainty or doubt, however, Duball [4] reports that, while consumer privacy is a chief concern for the commission, it is not the primary concern to the exclusion of other concerns. The commission is also worried about algorithmic bias and “dark patterns” practices. In other words, it is not just about the data; it is about the methods used to “trick” people into giving it up and how it is processed and applied to business decision-making.

Takeaway

My takeaway is that it is time for organizations to take a serious look at revamping their end-to-end processes and tech stacks. This is a C-suite leadership, all-hands-on-deck moment. It will take years for the larger organizations to turn their flotilla in the right direction, and for the industry at large to sort everything out. However, rest assured, the empowered person–the self-sovereign individual–is nigh and will sort it out for industry soon enough.

There is time, but not much: maybe three, five, or seven years before people are equipped with the knowledge and tools to take back control of their data. It is already happening; just look at open banking in the UK. These new tools, aka personal information management systems, will enable people to granularly exchange data on their terms and with their stated purpose of use, not the business’s. They will give them the power to process data in a way that protects them from bias, or at least helps them know when it is happening.

Why should businesses care about all this? Well, I predict (as do many, so I’m not too far out on a limb here) that in the not-too-distant future, the empowered, connected individual will walk with their wallet and only do business with those institutions that respect their sovereignty, both physically and digitally. So, to all out there: it is time, and it is time to prepare for the future.

REFERENCES
1. Duball, Joseph. “On the Horizon: FTC’s Slaughter Maps Data Regulation’s Potential Future.” The Privacy Advisor, November 2021. https://iapp.org/news/a/on-the-horizon-ftcs-slaughter-maps-data-regulations-potential-future/.
2. Ibid.
3. Ibid.
4. Ibid.

The post FTC’s Shot Across the Bow: Purpose and Use Restrictions Could Frame The Future of Personal Data Management appeared first on Identity Praxis, Inc..


Vishal Gupta

THREE core fundamental policy gaps that are clogging the courts of India

No civilisation can exist or prosper without the rule of law. The rule of law cannot exist without a proper justice system. In India, criminals have free rein, as the justice system is used as a tool to make victims succumb to unfair settlements or withdrawals. Unfortunately, the Indian justice system has gone kaput and remains clogged with over 4 crore cases pending in courts due to the following core reasons…

No civilisation can exist or prosper without the rule of law. The rule of law cannot exist without a proper justice system. In India, criminals have free rein, as the justice system is used as a tool to make victims succumb to unfair settlements or withdrawals. Unfortunately, the Indian justice system has gone kaput and remains clogged, with over 4 crore cases pending in courts, due to the following core reasons.

Policy gap 1 — Only in India is there zero deterrence to perjury

a. In India, perjury law is taken very lightly. 99% of cases are full of lies, deceit and mockery of justice. Affidavits in India do not serve much purpose. There have been various judgments and cries from all levels of judiciary but in vain.

b. Perjury makes any case 10x more complex and consumes 20x more time to adjudicate. It is akin to allowing a bullock cart in the middle of an express highway. It is nothing but an attack on the justice delivery system. Historically, perjury used to be punished with a death sentence and was considered an equally serious crime as murder.

c. This is against the best international practices and does not make economic sense for India.

Policy gap 2 — India has no equivalent of laws such as the U.S. Federal Rule 11, as amended in 1983 (and again in 1993), to curb the explosion of frivolous litigation.

The entire provision of rule 11 of U.S. federal rules can be summarized in the following manner:

It requires that a district court mandatorily sanction attorneys or parties who submit improper pleadings like pleadings with

- Improper purpose
- Containing frivolous arguments
- Facts or arguments that have no evidentiary support
- Omissions and errors, misleading and crafty language
- Baseless or unreasonable denials with non-application of mind
- Negligence, failure to appear or unreasonable adjournments

All developed countries like UK and Australia also have similar laws to combat frivolous litigation.

Policy gap 3 — Only in India are lawyers barred from contingent fee agreements under Bar Council rules

Indian lawyers are incentivized to make cases go longer, add complexity and never end. Consequently, they super-specialize and become innovators in creating alibis so that the case never ends. Lawyers in India do not investigate or legally vet the case before filing. There is no incentive to apply for tort claims or prosecute perjury, and therefore no deterrence is created. Lawyers are supposed to be gatekeepers who prevent frivolous litigation, but in India it is actually the opposite due to the perverse policy on contingent fees and torts.

Economic modelling suggests that — contingent fee arrangements reduce frivolous suits when compared to hourly fee arrangements. The reasoning is simple: When an attorney’s compensation is based solely on success, as opposed to hours billed, there is great incentive to accept and prosecute only meritorious cases.

At least one empirical analysis concludes that — “hourly fees encourage the filing of low-quality suits and increase the time to settlement (i.e., contingency fees increase legal quality and decrease the time to settlement).”

Negative impact of the 3 policy gaps

1. Conviction rates in India are abysmally low, below 10%, whereas internationally they range between 60% and 100%.[1]

a. Japan — 99.97%
b. China — 98%
c. Russia — 90%
d. UK — 80%
e. US — between 65% and 80%

2. Globally, 80%–99% of cases get settled before trial, but in India it is actually the opposite, because:

- There is absolutely no fear of law, and perjury is the norm.
- Lawyers have no interest in driving a settlement due to the per-appearance fee system.
- There is an artificial limit on fee shifting (awarding costs) and torts, and there is no motivation on the part of judges to create deterrence.

3. Contingent fees and torts cannot be enabled until such time as the fraternity of lawyers can be relied upon for ethical and moral conduct.

— — — — — — — — — — — — — — — — — — — — — —

Easy solution to complex problem

— — — — — — — — — — — — — — — — — — — — — —

Policy 1 — Restricting perjury and making it a non-bailable offence will resolve 50% of cases immediately
(i.e. 1.5 crore cases within 3 months)

India must truly embrace “Satyam ev jayate” and make perjury a non-bailable offence. All lawyers and litigants should be given 3 month’s notice to refile their pleadings or settle the cases. They would have to face the mandatory consequences of false evidence or averments later found to be untrue.

There needs to be a basic expectation reset in Indian court litigation — the filings are correct, and the lawyer is responsible for prima facie diligence and candor before the courts. The role of the judiciary is not to distinguish between truth and falsehood but to determine the sequence of events, fix accountability and award penalties. Presenting false evidence or frivolous arguments must be seen as a separate offence in its own right.

At a minimum, Section 195(1)(b)(i) of the CrPC must be removed, which states the following –
“No Court shall take cognizance- of any offence punishable under any of the following sections of the IPC (45 of 1860), namely, sections 193 to 196 (both inclusive), 199, 200, 205 to 211 (both inclusive) and 228, when such offence is alleged to have been committed in, or in relation to, any proceeding in any Court”

Because:

- This necessarily makes the judges party to the complaint, and therefore all judges are reluctant to prosecute perjury.
- This estoppel opens up the possibility of corruption in the judiciary and public offices.
- The intention of this provision has clearly backfired.
- This restriction is unique to India and against international norms.

Policy 2 — Sanctions on lawyers to streamline the perverse incentives that plague the Indian justice system

The courts in the USA are mandated to compulsorily sanction lawyers for various forms of professional misconduct in litigation (Federal Rule 11 of civil procedure), like:

Remedies and sanctions for lawyer’s misconduct can be categorized into three groups.

1. Sanctions and remedies for attorney misconduct which are available to public authorities. Such sanctions include professional discipline, criminal liability of lawyers who assist their clients in committing criminal acts, and judicially imposed sanctions such as for contempt of court. Professional discipline is generally the best known sanction for attorney misconduct.
2. Sanctions which are available to lawyers’ clients. For example, damages for attorney malpractice, forfeiture of an attorney’s fee, and judicial nullification of gifts or business transactions that breach a lawyer’s fiduciary duty to a client.
3. Remedies that may be available to third parties injured by a lawyer’s conduct on behalf of a client. These include injunctions against representing a client in violation of the lawyer’s duty to a third party, damages for breach of an obligation the attorney assumes to a non-client, and judicial nullification of settlements or jury verdicts obtained by attorney misconduct.

Policy 3 — Contingent Fee and Tort Law: making the judiciary 3x more efficient
(needs perjury law as discussed above to unlock)

In India, lawyers spend more time making cases complex and lengthy under the “dehari” (per-appearance) system, while their Western counterparts earn far more money by genuinely solving cases and creating real value for the country. The market economics arising from India’s judicial policy on regulating the legal profession and ethics is, however, geared towards permanently clogging the system. Three of the four stakeholders benefit from litigation that never ends.

There is a dire need for the system to be re-incentivized wherein the legal profession can generate 10x more value for the country in catching and penalizing law abusers. This in turn will also attract and create more talented lawyers because then they will be investing more time in investigating and preparing the cases to win.

Making perjury a non-bailable offence and creating rules for mandatorily sanctioning lawyer misconduct will unlock contingent fees and tort law in India.

Allowing contingent fees for lawyers has four principal policy justifications.

Firstly, such arrangements

enable the impecunious (having no money) to obtain representation. Such persons cannot afford the costs of litigation unless and until it is successful. Even members of the middle- and upper-socioeconomic classes may find it difficult to pay legal fees in advance of success and collection of judgment. This is particularly so today as litigation has become more complex, often involving suits against multiple parties or multinational entities, and concerning matters requiring expert scientific and economic evidence.

Secondly,

Contingent fee arrangements can help align the interests of lawyer and client, as both will have a direct financial stake in the outcome of the litigation.

Third

By predicating an attorney’s compensation on the success of a suit, the attorney is given incentive to function as gatekeeper, screening cases for both merit and sufficiency of proof, and lodging only those likely to succeed. This provides as an important and genuine signal for litigants to understand the merit of their case.

Fourth

And more generally, all persons of sound mind should be permitted to contract freely, and restrictions on contingent fee arrangements inhibit this freedom.

Three other reasons to justify unlocking of contingent fee:-

- Clients, particularly unsophisticated ones, may be unable to determine when an attorney has underperformed or acted irresponsibly; in these instances, an attorney’s reputation would be unaffected, and thus the risk of reputational harm would not adequately protect against malfeasance.
- Even when clients are aware of an attorney’s poor performance or irresponsibility, they may lack the means, media, or credibility to effectively harm the attorney’s reputation.
- The interests of attorney and client are more closely aligned, ceteris paribus, when fee arrangements are structured so as to minimize perverse incentives.

Why contingent fees reduce caseload:

Lesser cases better filing

Complainants get genuine advice about chances of winning. They do not file unless they get lawyer’s buy-in.

- Lawyers screen cases for merit and sufficiency of proof before filing.
- Lawyers don’t pick up bad cases, to manage their reputation.
- Comprehensive evidence gathering happens before a lawyer decides to file a case.

Lawyers simplify the case and only allege charges that can be sustained.

- They use simple and concise arguments for the judges.
- Lawyers spend more time working hard outside the courts, thus increasing case quality.

Faster case proceedings

- Fewer adjournments and hearings.
- They do better preparation and take fewer adjournments.
- Multiple steps get completed in single hearings.
- They create urgency for clients to show up at every hearing.
- Lawyers do not unnecessarily appeal and stay matters, because they do not get paid per hearing and want quick results.

Case withdrawals

- There are fewer takers if a lawyer drops a case when there are surprises from the client that will adversely impact the outcome.
- Lawyers persuade complainants to settle when appropriate.

Conclusions

With over 3 crore cases pending and dysfunctional justice delivery, there is a mass exodus of Ultra High Net Worth Individuals (UHNIs) from India. There is an absolutely urgent need for the above reforms before the situation turns into a complete banana republic.

It is an absolute embarrassment and national shame to allow Indians to be blatant liars even in courts. Business and survival in India today are a race to the bottom: because there is no bar on falsehood and corrupt values, it is near impossible to survive without being part of the same culture.

It is prayed that even stricter laws than global norms be enacted, such that Indians can be trusted globally to never lie. It’s not just about Indian courts; Indians could then be counted as among the most truthful people even in foreign lands and command high respect globally.

Restoring justice delivery

Economic benefits of reform in perjury, contingent fee and legal ethics

1. Higher quality of litigation work.

2. Attract better talent to the profession due to higher profitability.

3. Create more jobs in the economy.

4. Offer higher pool of qualified people for judiciary.

5. Drastic reduction in corrupt or criminal activity due to fear of law.

6. Unlocking of trillions of dollars of wealth/resources stuck in litigation.

7. Better investment climate due to more reliability in business transactions.

8. Higher reliability in products and services.

9. Access to justice for all.

10. Restoring faith in judiciary and honour in being Indian.

Further reading

1. Perjury: Important Case Laws Showing How Seriously It is Taken in India! (lawyersclubindia.com)

2. LAW OF PERJURY- (Second Edition) — Indian Bar Association

[1] Comparison of the conviction rates of a few countries of the world | A wide angle view of India (wordpress.com)

Tuesday, 02. November 2021

MyDigitalFootprint

Optimising for “performance” is directional #cop26

In the week where the world’s “leaders” meet to discuss and agree on the future of our climate at #COP26, I remain sceptical about agreements. At #COP22 there was an agreement to halve deforestation by 2020; we missed it, so we have moved the target out. Here is a review of all the past COP meetings and outcomes. It is hard to find any resources to compare previous agreements with achievements…

In the week where the world’s “leaders” meet to discuss and agree on the future of our climate at #COP26, I remain sceptical about agreements. At #COP22 there was an agreement to halve deforestation by 2020; we missed it, so we have moved the target out. Here is a review of all the past COP meetings and outcomes. It is hard to find any resources to compare previous agreements with achievements. Below is from the UN.



The reason I remain doubtful and sceptical is that the decision of 1.5 degrees is framed. We are optimising for a goal: in this case, we do not want to increase our global temperature beyond 1.5 degrees. Have you ever tried to heat water and stop the heating process so that a precise temperature target was reached? Critically, you only have one go. Try it: fill a pan with ice and set a target, say 38.4 degrees, use a thermometer, and switch off the heat when you think the final temperature will be your target. Did you manage to get within 1.5 degrees of your target?

The Peak Paradox framework forces us to think that we cannot optimise for one outcome, one goal, one target or for a one-dimensional framing. To do so would be to ignore other optimisations, visions, ideas or beliefs. When we optimise for one thing, something else will not have an optimal outcome.

In business, we are asked to articulate a single purpose, one mission, the single justification that sets out a reason to exist. The more we as a board or senior leadership team optimise for “performance”, the more we become directional. Performance itself, along with the thinking that drives the best allocation of resources, means we are framed to optimise for an outcome. In social sciences and economics, this is called path dependency. The unintended consequences of our previous decisions that drive efficiency and effectiveness might not directly impact us, but rather another part of an interdependent system, which, through several unconnected actions, will feed into our future decisions and outcomes. Complex systems thinking highlights such causes and effects of positive and negative feedback loops.

For example: dishwasher tablets make me more productive but make the water system less efficient by reducing the effectiveness of the ecosystem. However, I am framed by performance and my own efficiency; therefore, I have to optimise my time, and the dishwasher is a perfect solution. Indeed, we are told in marketing that the dishwasher saves water and energy compared to other washing-up techniques. The single, narrow view of optimisation is straightforward and easy to understand. The views we hold on everything from battery cars to mobile phones are framed as being about becoming more productive. Performance as a metric matters more than anything else. Why? Because the story says performance creates economic activity, which creates growth, which means fewer people are in poverty.

“Performance” is a one-dimensional optimisation where economic activity based on financial outcomes wins.  You and I are the agents of any increase in performance, and the losses in the equation of equilibrium are somewhere else in the system. We are framed and educated to doubt the long, complex link between the use of anti-bacterial wipes and someone else’s skin condition. Performance as a dimension for measurement creates an optimal outcome for one and a sub-optimal outcome for someone else. 

Performance as a measure creates an optimal outcome for one and a sub-optimal outcome for someone else. 

If it were just me, then the cause-and-effect relationship would be hard to see, but when more of humanity optimises for performance, it is the scale at which we all lose that suddenly comes into effect. Perhaps it is time to question the simple linear ideas of one purpose, one measure, one mission, and try to optimise for different things simultaneously; however, that means simple political messages, tabloid headlines, and social-media-driven advertising will fail. Are leaders ready to lead, or do they enjoy too much power to do the right thing?


Thursday, 28. October 2021

Hans Zandbelt

mod_auth_openidc vs. legacy Web Access Management

A sneak preview of an upcoming presentation about a comparison between mod_auth_openidc and legacy Web Access Management.

A sneak preview of an upcoming presentation about a comparison between mod_auth_openidc and legacy Web Access Management.

Wednesday, 27. October 2021

Identity Praxis, Inc.

A call for New PD&I Exchange Models, The Trust Chain, and A Connected Individual Identity Scoring Scheme: An Interview with Virginie Debris of GMS

Art- A call for New Industry Data Exchange Models, The Trust Chain, and A Connected Individual Transaction And Identity Scoring Scheme: An Interview with Virginie Debris of GMS I recently sat down with Virginie Debris, the Chief Product Officer for Global Messaging Service (GMS) and Board Member of the Mobile Ecosystem Forum, to talk about […] The post A call for New PD&I Exchange Models, The Trust Chain, and A Connected Individual Identity Scoring Scheme…

Art- A call for New Industry Data Exchange Models, The Trust Chain, and A Connected Individual Transaction And Identity Scoring Scheme: An Interview with Virginie Debris of GMS

I recently sat down with Virginie Debris, the Chief Product Officer for Global Messaging Service (GMS) and Board Member of the Mobile Ecosystem Forum, to talk about personal data and identity (PD&I). We had an enlightening discussion (see video of the interview: 46:16 min). The conversation took us down unexpected paths and brought several insights and recommendations to light.

In our interview, we discussed the role of personal data and identity and how enterprises use it to know and serve their customers and protect the enterprises’ interests. To my delight, we uncovered three ideas that could help us all better protect PD&I and improve the market’s efficiency.

- Idea One: Build out and refine “The Trust Chain”, or “chain of trust,” a PD&I industry value chain framework envisioned by Virginie.
- Idea Two: Refine PD&I industry practices, optimize all of the data that mobile operators are holding on to, and ensure that appropriate technical, legal, and ethical exchange mechanisms are in place to ensure responsible use of PD&I.
- Idea Three: Standardize a connected individual identity scoring scheme, i.e., a scheme for identity and transaction verification, often centralized around mobile data. This scheme is analogous to credit scoring for lending and fraud detection for credit card purchases. It would help enterprises simultaneously better serve their customers, protect PD&I, mitigate fraud, and improve their regulatory compliance efforts.

According to Virginie, a commercial imperative for an enterprise is knowing their customer–verifying the customer’s identity prior to and during engagements. Knowing the customer helps enterprises not only better serve the customer, but also manage costs, reduce waste, mitigate fraud, and stay on the right side of the law and regulations. Virginie remarked that her customers often say, “I want to know who is my end user. Who am I talking to? Am I speaking to the right person in front of me?” This is hard enough in the physical realm, and in the digital realm it is even more difficult. The ideas discussed in this interview can help enterprises answer these questions.

Consumer Identity and the Enterprise

The mobile phone has become a cornerstone for digital identity management and commerce. In fact, Cameron D’Ambrosi, Managing Director of Liminal, has gone as far as to suggest mobile has an irreplaceable role in the digital identity ecosystem.1 Mobile can help enterprises be certain whom they are dealing with, and with this certainty help them, with confidence, successfully connect, communicate, and engage people in nearly any transactions.

To successfully leverage mobile as a tool for customer identity management, which is an enabler of what is known as “know your customer” or KYC, enterprises work with organizations like GMS to integrate mobile identity verification into their commercial workflow. In our interview, Virginie notes that GMS is a global messaging aggregator, the “man in the middle.” It provides messaging and related services powered by personal data and identity to enterprises and mobile operators, including KYC services.

Benefits gained from knowing your customer

There is a wide range of use cases for why an enterprise may want to use services provided by players like GMS. They can:

- Improve customer experience: Knowing the customer and the context of a transaction can help improve the customer experience.
- Maintain data hygiene: Ensuring that data in a CRM or customer system of record is accurate can improve marketing, save money, reduce fraud, and more.
- Effectively manage data: Reducing duplicate records, tagging data, and more can reduce costs, create efficiency, and generate new business opportunities (side note: poor data management costs enterprises billions annually).2
- Ensure regulatory compliance: Industry and government best practices, legislation, and regulation are not just nice to have; they are a business requirement. Staying compliant can mitigate risk, build trust, and help organizations differentiate themselves in the market.
- Mitigate cybercrime: Cybercrime is costing industry trillions of dollars a year (Morgan (2020) predicts the tally could be as much as $10.5 trillion annually by 2025).3 These losses can be reduced with an effective strategy.

The connected individual identity scoring scheme

When a consumer signs up for or buys a product or service, an enterprise may prompt them to provide a mobile number and other personal data as part of the maintenance of their profile and to support the transaction. An enterprise working with GMS can, in real time, ping GMS’s network to verify that the consumer-provided mobile number is real, i.e., operational. Moreover, they can ask GMS to predict, with varying levels of accuracy, whether a mobile number and PD&I being used in a transaction are associated with a real person. They can also ask if the presumed person conducting the transaction can be trusted or if they might be a fraudster looking to cheat the business. This is a decision based on relevant personal information provided by the individual prior to or during the transaction, as well as data drawn from other sources.

This type of real-time identity and trust verification is made possible by a process Virginie refers to as “scoring.” I refer to it as “the connected individual identity scoring scheme.” Scoring is an intricate and complex choreography of data management and analysis, executed by GMS in milliseconds. This dance consists of pulling together and analyzing a myriad of personal data, deterministic and probabilistic identifiers, and mobile phone signals. The actors in this dance include GMS, the enterprise, the consumer, and GMS’s strategic network of mobile network operators and PD&I aggregator partners.

When asked by an enterprise to produce a score, GMS, in real-time, combines and analyzes enterprise-provided data (e.g., customer name, addresses, phone number, presumed location, etc.), mobile operator signal data (e.g., the actual location of a phone, SIM card, and number forwarding status), and PD&I aggregator supplied data. From this information, it produces a score. This score is used to determine the likelihood a transaction being initiated by “someone” is legitimate and can be trusted, or not. A perfect score of 1 would suggest that, with one hundred percent certainty, the person is who they say they are and can be trusted, and a score of zero would suggest they are most certainly a cybercriminal.

In our interview, Virginie notes, “nothing is perfect, we need to admit that,” suggesting that one should never expect a perfect score. The more certain a business wants to be, i.e., the higher the score it requires to confirm a transaction, the more it should expect increased transaction costs, time, and friction in the user experience. Keeping this in mind, businesses should develop a risk tolerance matrix, based on the context of a transaction, to determine whether they want to accept a given transaction or not. For example, for lower-risk or lower-cost transactions (e.g., an online pizza order), the business might tolerate lower assurance and accept a lower score. For higher-risk or higher-cost transactions (e.g., a bank wire transfer), it might require higher assurance and accept only higher scores.
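A hedged sketch of what such a risk tolerance matrix could look like in code; the contexts and thresholds below are invented for illustration and are not GMS’s, or anyone’s, actual policy:

```python
# Hypothetical risk tolerance matrix: minimum acceptable trust score per
# transaction context. The thresholds are invented for illustration only.
RISK_TOLERANCE = {
    "online_pizza_order": 0.3,   # low cost, low risk: tolerate lower assurance
    "profile_update": 0.6,
    "bank_wire_transfer": 0.9,   # high cost, high risk: demand high assurance
}

def accept_transaction(context: str, score: float) -> bool:
    """Accept only if the score meets the minimum threshold for this context."""
    # Unknown contexts fall back to the strictest threshold.
    return score >= RISK_TOLERANCE.get(context, max(RISK_TOLERANCE.values()))

print(accept_transaction("online_pizza_order", 0.45))  # True
print(accept_transaction("bank_wire_transfer", 0.45))  # False
```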

Example: Detecting fraud in a banking experience

Virginie used a bank transaction as an example. She explained that a bank could check if a customer’s mobile phone is near the expected location of a transaction. If it was not, this might suggest there is a possibility of fraud occurring, which would negatively impact the score.
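To make the idea concrete, here is a minimal, purely illustrative sketch of how signals like that location check might be folded into a score. The signal names, weights, and starting point are assumptions made for the example, not how GMS actually computes its score:

```python
# Illustrative only: hypothetical signals and weights, not GMS's actual model.
def trust_score(signals: dict) -> float:
    """Fold boolean verification signals into a 0..1 trust score."""
    weights = {
        "number_is_operational": 0.2,           # operator confirms the number is live
        "name_matches_operator_record": 0.2,
        "phone_near_transaction_location": 0.2,
        "recent_sim_swap": -0.35,               # strong fraud indicator
        "call_forwarding_enabled": -0.15,
    }
    score = 0.4  # neutral starting point
    for name, weight in weights.items():
        if signals.get(name):
            score += weight
    return max(0.0, min(1.0, score))

# Everything checks out, including the phone's location.
honest = {
    "number_is_operational": True,
    "name_matches_operator_record": True,
    "phone_near_transaction_location": True,
}
print(round(trust_score(honest), 2))      # 1.0

# The bank example: phone is far from the transaction, plus a recent SIM swap.
suspicious = dict(honest, phone_near_transaction_location=False, recent_sim_swap=True)
print(round(trust_score(suspicious), 2))  # 0.45
```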

Mobile scoring happens every day, but not always by this name–others refer to it as mobile signaling or mobile device intelligence. However, Virginie alluded to a challenge. There is no industry standard for scoring, which may lead to inconsistencies in execution and bias across the industry. She suggested that more industry collaboration is needed to prevent this.

The Trust Chain

During our conversation, Virginie proposed a novel idea that frames what we in the industry could do to optimize the value of PD&I and use it responsibly. Virginie said we need to build a chain of trust amongst the PD&I actors: “The Trust Chain”.

I have taken poetic license, based on our conversation, and have illustrated The Trust Chain in the figure below. The figure depicts connected individuals* at the center, resting on a bed of industry players linked to enterprises. A yellow band circles them all to illustrate the flow of personal data and identity throughout the chain.

Defining the connected individual and being phygital: It is so easy in business to get distracted by our labels. It is important to remember the terms we use to refer to the people we serve—prospect, consumer, patient, shopper, investor, user, etc.—are contrived and can distract. These terms are all referring to the same thing: a human, an individual, and more importantly, a contextual state or action at some point along the customer journey, i.e., sometimes I am a shopper considering a product, other times I am a consumer using the product. The shopper and the consumer are not always the same person. Understanding this is important to ensure effective engagement in the connected age. In the context of today’s world and this discussion, the individual is connected. They are connected with phones, tablets, smartwatches, cars, and more. These connections have made us “phygital” beings, merging the digital and physical self. Each and every one of these connections is producing data.

According to Virginie, the key to making the industry more effective and efficient is to tap into more and more of the connected individual data held and managed by mobile network operators. This is because, in her own words, “they know everything.” To tap into this data, Virginie said a number of technical, legal, and ethical complexities must be overcome. In addition, an improved model for data exchange amongst the primary actors of the industry—mobile network operators, enterprises, messaging aggregators (like GMS), and PD&I aggregators—needs to be established. In other words, “The Trust Chain” needs to be refined and built. The presumption behind all of this is that the current models of data exchange can be found wanting.

What we need to do next

In summary, the conclusions I draw from my interview with Virginie are that we should come together to tackle:

The technical, legal, and ethical complexities to enable more effective access to the treasure trove of data held by the mobile network operators

The standardization of a connected individual scoring scheme

The development and integrity of “The Trust Chain”

My takeaway from our discussion is simple: I agree with her ideas. These efforts and more are needed. The use of personal data and identity throughout the industry is accelerating at an exponential rate. To ensure all parties can safely engage, transact, and thrive, it is critical that industry leaders develop a sustainable and responsible marketplace.

I encourage you to watch the full interview here.

Becker, “Mobile’s Irreplaceable Role in the Digital Identity Ecosystem.”↩︎
“Dark Data – Are You at Risk?”↩︎
Morgan, “Cybercrime To Cost The World $10.5 Trillion Annually By 2025.”↩︎

REFERENCES

Becker, Michael. “Mobile’s Irreplaceable Role in the Digital Identity Ecosystem: Liminal’s Cameron D’Ambrosi Speaks to MEF – Blog.” MEF, October 2021. https://mobileecosystemforum.com/2021/10/07/mobiles-irreplaceable-role-in-the-digital-identity-ecosystem-liminals-cameron-dambrosi-speaks-to-mef/.

“Dark Data – Are You at Risk?” Veritas, July 2019. https://www.veritas.com/form/whitepaper/dark-data-risk.

Morgan, Steve. “Cybercrime To Cost The World $10.5 Trillion Annually By 2025.” Cybercrime Magazine, November 2020. https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/.

The post A call for New PD&I Exchange Models, The Trust Chain, and A Connected Individual Identity Scoring Scheme: An Interview with Virginie Debris of GMS appeared first on Identity Praxis, Inc..

Monday, 25. October 2021

Kyle Den Hartog

My Take on the Misframing of the Authentication Problem

First off, the user experience of authenticating on the web has to be joyful first and foremost. Secondly, I think it's important that we recognize that the security of any authentication system is probabilistic, not deterministic.

Prelude: First off, if you haven’t already read The Quest to Replace the Password, stop reading this and give that a read first. To paraphrase my computer security professor: if you haven’t read this paper before you design an authentication system, you’re probably just reinventing something already created or missing a piece of the puzzle. So go ahead and read that paper first before you continue on. In fact, I just re-read it before I started writing this because it does an excellent job of framing the problem.

Over the past few years, I’ve spent a fair amount of time thinking about what the next generation of authentication and authorization systems will look like from a variety of different perspectives. I started out looking at the problem from a user’s perspective, originally just looking to get rid of passwords. Then I looked at it as an attacker, working as an intern penetration tester, which gave me a unique view into the attacker mindset. Unfortunately, I didn’t find myself enjoying the red team side of security too much (combined with having a 2-year non-compete as an intern - that was a joke in hindsight), so by happenstance I found my way into the standards community where the next generation of authentication (AuthN) and authorization (AuthZ) systems is being built. Combine this with the view of a software engineer attempting to build the standards being written by some world-class experts with centuries of combined experience. Throughout this experience, I’ve gotten to view the state of the art while also keeping the naivety that comes with being a fairly new engineer relative to some of the experts I get to work with. During this time I’ve become a bit more opinionated about what makes a good authentication system, and this time around I’m going to jot down my current thoughts on what makes something useful.

However, one aspect I think that paper lacks is that it frames the problem of authentication on the web as if the next goal is to move to something other than passwords, and we just haven’t found that better thing yet. While I think this is probably true for some low-security systems, I think that fundamentally passwords, or more generally “Things I know”, are here to stay. The problem is that we haven’t done a good enough job of understanding what the user needs in order to use the system intuitively, or of making sure that the system sufficiently gets out of the user’s way. As the paper does point out, though, this is likely due to our specialization bias, which doesn’t allow us to take into consideration the holistic viewpoints necessary to approach the problem. What I’m proposing is, I think, a way we can jump this hurdle through the use of hard data. Read on and let me know if you think this can solve the issue or if I’m just full of my own implicit biases.

So what’re the important insights that I’ve been thinking about lately? First off, the user experience of authenticating on the web has to be joyful, first and foremost. If the user doesn’t have a better experience than they do with passwords (a very low bar to beat, considering all the passwords they have to remember), then the new system is simply unacceptable and won’t achieve the uptake needed to overtake passwords.

Secondly, I think it’s important that we recognize that the security of any authentication system is probabilistic, not deterministic. Reframing the problem so that each additional security check under the classifications of “what I know”, “what I am”, and “what I have” shifts a probability, rather than guaranteeing an outcome, allows us to better understand the problem we’re actually trying to solve with security. To put this idea in perspective, think about the problem this way: what’s the probability that a security system will be broken for any particular user during a particular period? For example, a user who chooses to reuse a password set to “#1password” for every website is a lot less likely to stay secure (my intuition - happy to be proven wrong) than a user who can memorize a password like “p2D16U$nClNjqLseKTtnjw” for every website. However, there’s a significant tradeoff in the user’s experience, which is why the case where a user reuses an easy-to-remember password is far more common when studying users than the latter case, even though we know it’s less secure.
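As a toy illustration of why this probabilistic framing matters, here is a minimal sketch; the per-site breach rate is an invented number, which is exactly the kind of figure this post argues we do not actually have:

```python
# Toy model: assume each site where the password is used has an independent,
# *invented* probability of leaking it within a year.
def p_password_exposed(per_site_breach_rate: float, sites: int) -> float:
    """Probability that at least one site leaks the password within the period."""
    return 1 - (1 - per_site_breach_rate) ** sites

print(round(p_password_exposed(0.02, 1), 2))   # unique password on one site: 0.02
print(round(p_password_exposed(0.02, 40), 2))  # "#1password" reused on 40 sites: ~0.55
```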

So what gives? This all sounds obvious to a reasonably thoughtful engineer, right? The difference is that we simply don’t know the probability of a security system failing, even in an ideal scenario, over a pre-determined period. To put this in perspective: can anyone point me to an academic research paper, or even some user research, that tells me the probability that a user’s password will be discovered by an attacker in the next year? What about the probability that the user shares their password with a trusted person because the system wasn’t deployed with a delegation system? Or how the probability of staying secure drops as the user reuses their password across many websites? Simply put, I think we’ve been asking the wrong question here, and until we have hard data on this we can’t make rigorous choices about the acceptable UX/security tradeoffs that are so hard to decide today.

This isn’t relevant just for passwords either; it extends to many other forms of authentication that fall under the other two authentication classes as well. For example, what’s the probability that a user’s account will be breached when relying on the OpenID Connect protocol rather than a password? Furthermore, what’s the likelihood that the user prefers the OpenID Connect flow to a password for each website, and is that preference worth the increase or decrease in probabilistic security under one or many attack vectors?

The best part of this framing is that it changes how we look at security on the web from the user’s perspective, but that’s not the only thing that has to be considered, as is rightly pointed out in the paper. There’s a very important third factor as well: deployability, which I like to reframe as “developer experience” or DevX.

By evaluating the constraints of a system in this way, we reframe the problem into a measurable outcome. That makes the tradeoffs far more tractable and comparable across the variety of constraints that need to be considered: the developer deploying or maintaining the system, the user who’s using it, and the resistance to the common threats the user expects the designers of the system to protect them from (don’t worry about unrealistic threat models for now - mature them over time).
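A sketch of what such a measurable comparison could look like once the data exists. Every number below is a placeholder rather than real data, and the weighting order simply mirrors the UX-over-DevX-over-security priority predicted further down:

```python
# Placeholder scores on a 0..1 scale. In practice these should come from measured
# user studies, developer studies, and attack data, not from intuition.
CANDIDATES = {
    "passwords":         {"ux": 0.4, "devx": 0.9, "security": 0.3},
    "platform_webauthn": {"ux": 0.8, "devx": 0.5, "security": 0.9},
}
WEIGHTS = {"ux": 0.5, "devx": 0.3, "security": 0.2}  # illustrative priority ordering

def overall(scores: dict) -> float:
    """Weighted sum across the UX, DevX, and security axes."""
    return sum(WEIGHTS[axis] * scores[axis] for axis in WEIGHTS)

for name, scores in CANDIDATES.items():
    print(name, round(overall(scores), 2))  # passwords 0.53, platform_webauthn 0.73
```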

Once we’ve got that data, let’s sit down and re-evaluate the most important principles for designing the system. I’ll make a few predictions to wrap this up as well.

First prediction: I think once we have this data we’ll see a few things that will be obvious in hindsight. A system that doesn’t prioritize UX over DevX over probabilistic security resilience will be dead in the water, since it goes against the “user should enjoy the authentication experience” principle. Additionally, DevX has to come before security because, without a good DevX, the system is less likely to be implemented at all, let alone properly.

Second prediction: I’d venture to guess that we’ll learn a few things about the way we frame security on the web, with the clear winner being that we should design for MFA systems by default. “What I have” factors need to be the basis of the majority of experiences for the user, with “what I know” factors used as an escalation step, and “what I am” factors needed only in the highest-assurance use cases or when more red flags have been raised (e.g., a new IP address, a new device), and enforced on-device rather than handled by a remote server.
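A rough, hypothetical sketch of that default escalation policy; the flag names and rules are illustrative, not a standard or anyone’s shipping product:

```python
# Hypothetical step-up policy: possession first, knowledge on elevated risk,
# on-device biometrics only for the highest-assurance cases.
def required_factors(risk_flags: set, high_assurance: bool) -> list:
    factors = ["what_i_have"]                   # e.g. a security key or platform authenticator
    if risk_flags & {"new_ip", "new_device", "impossible_travel"}:
        factors.append("what_i_know")           # escalate: PIN or password
    if high_assurance:
        factors.append("what_i_am_on_device")   # biometric match verified on the device
    return factors

print(required_factors(set(), high_assurance=False))           # ['what_i_have']
print(required_factors({"new_device"}, high_assurance=False))  # adds 'what_i_know'
print(required_factors({"new_ip"}, high_assurance=True))       # all three factors
```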

Final prediction: Recovery is going to be the hardest part of the system to figure out, with multi-device flows being only slightly easier to solve. I wouldn’t be surprised if the solution to recovery was actually not to recover at all, and instead to make it super easy to “burn and recreate” an account on the web (as John Jordan has advocated in the decentralized identity community), because that’s how hard recovery actually is to get right.

So that’s what I’ve got for now. I’m sure I’m missing something here and I’m sure I’m wrong in a few other cases. Share your comments on this down below or feel free to send me an email and tell me I’m wrong. I do appreciate the thoughtfulness that others put into pointing these things out so let me know what you think and let’s discuss it further. Thanks for reading!

Thursday, 21. October 2021

Mike Jones: self-issued

OpenID and FIDO Presentation at October 2021 FIDO Plenary

I described the relationship between OpenID and FIDO during the October 21, 2021 FIDO Alliance plenary meeting, including how OpenID Connect and FIDO are complementary. In particular, I explained that using WebAuthn/FIDO authenticators to sign into OpenID Providers brings phishing resistance to millions of OpenID Relying Parties without them having to do anything! The presentation […]

I described the relationship between OpenID and FIDO during the October 21, 2021 FIDO Alliance plenary meeting, including how OpenID Connect and FIDO are complementary. In particular, I explained that using WebAuthn/FIDO authenticators to sign into OpenID Providers brings phishing resistance to millions of OpenID Relying Parties without them having to do anything!

The presentation was:

OpenID and FIDO (PowerPoint) (PDF)

MyDigitalFootprint

When does democracy break?

We spend most of our waking hours being held to account between the guardrails of risk-informed and responsible decision making.  Undoubtedly, we often climb over the guardrails and make ill-informed, irresponsible and irrational decisions, but that is human agency. It is also true that we would not innovate, create or discover if we could not explore the other side of our safety guardrails.
We spend most of our waking hours being held to account between the guardrails of risk-informed and responsible decision making.  Undoubtedly, we often climb over the guardrails and make ill-informed, irresponsible and irrational decisions, but that is human agency. It is also true that we would not innovate, create or discover if we could not explore the other side of our safety guardrails. 

Today’s perception of responsible decision making is different to the one our grandparents held.  Our grandchildren will look back at our “risk-informed” decisions with the advantage of hindsight and question why our risk management frameworks were so short-term focused.  However, we need to recognise that our guardrails are established by current political, economic and societal framing.

“What are we optimising for?”

I often ask the question, “what are we optimising for?” The reason I ask this question is to draw out different viewpoints in a leadership team.  The viewpoints that drive individual optimisation are framed by each person’s experience, their ability to understand time-frames, and their incentives.

Peak Paradox is a non-confrontational framework for exploring our different perceptions of what we are optimising for.  We need different and diverse views to ensure that our guardrails don’t become so narrow that they look like rail tracks and we repeat the same mistakes because that is what the process determines.  Equally, we must agree together on boundaries of divergence, which means we can optimise as a team for something that we believe in and that has a shared purpose for us and our stakeholders.  Finding this dynamic area of alignment is made easier with the Peak Paradox framework.

However, our model of risk-informed, responsible decision making is based on the idea that the majority decides: essentially, democracy.  If the “majority” is a supermajority, we need 76%; for a simple majority, we need 51%; and for minority protections, less than 10%.  What we get depends on how someone has previously set up the checks, balances, and control system. And then there is the idea of monarchy, or the rich and powerful making the decisions for everyone else: the 0.001%.

However, our guardrails for democratic decision making break down when choices, decisions, and judgements have to be made that people do not like.  How do we enable better decisions when hard decisions mean you will lose the support of the majority?

We do not all agree about vaccines (even before covid), eating meat, climate change, our government, or authority.  Vaccines in the current global pandemic period are one such tension of divergence.  We see this every day in the news feeds; there is an equal and opposite view for every side.  Humanity at large lives at Peak Paradox, but we don’t value everyone’s views equally. Why do we struggle with the idea that individual liberty of choice can be taken away in the interests of everyone?

Climate change is another.  Should the government act in the longer term and protect the future, or act in the short term, preserving liberty and ensuring re-election?  Protestors with a cause may fight authority and be portrayed as mavericks and disruptors, but history remembers many of them as pioneers, martyrs and great leaders.  Those who protested against nuclear energy in the 1960s and 1970s may look back today and think that they should have been fighting fossil fuels.  Such pioneers are optimising for something outside of the normal guardrails. Whilst they appear to be living outside the accepted guardrails, they can see the guardrails that should be adopted.

Our guardrails work well when we can agree on the consequences of risk-informed and responsible decision making; however, in an age when information is untrusted and who is responsible is questioned, we find that we all have different guardrails.  Living at Peak Paradox means we have to accept that we will never agree on what a responsible decision is, and that our current iteration of democracy, which maintains power and control, is going to end in a revolution, if only we could agree on what to optimise for.



@_Nat Zone

[November 12 Seminar] The Latest Trends in Authentication, API Authorization, and Consent-Acquisition Processes in Financial and Payment Services

~ From Open Banking to GAIN ~ On the coming November… The post [November 12 Seminar] The Latest Trends in Authentication, API Authorization, and Consent-Acquisition Processes in Financial and Payment Services first appeared on @_Nat Zone.

~ From Open Banking to GAIN ~

On the coming November 12, I am planning to give a (paid) seminar titled “The Latest Trends and Future Vision of Authentication, API Authorization, and Consent-Acquisition Processes in Financial and Payment Services: From Open Banking to GAIN” – provided enough attendees register. It is positioned as a follow-up to the seminar I gave in 2019. It may be hard to tell from the agenda below, but this time I especially want to put the spotlight on “OpenID Connect for Identity Assurance”, which is important for realizing decentralized identity, and on GAIN (Global Assured Identity Network), a global framework announced in Germany in September under which financial institutions in particular take on the role of attribute providers (Identity Information Providers).

Also, this time the seminar will be a hybrid event with both remote and in-person attendance. I will be there on site, so I look forward to seeing you if you can make it. You can register via the link below.

https://seminar-info.jp/entry/seminars/view/1/5474

Agenda

1. The “authentication” problem and digital identity for financial services
(1) Authentication challenges and the problem of unauthorized account use and fraudulent transfers, with case examples
(2) What is digital identity, the central strategy of GAFA?
(3) Identity management frameworks
(4) Q&A

2. Overview of OpenID Connect, the standard for digital identity
(1) The evolution of login models
(2) Overview of OpenID Connect
(3) The three attribute-sharing models of OpenID Connect
(4) Q&A

3. Open Banking and OpenID extension specifications
(1) What is Open Banking? What is FAPI?
(2) The global spread of FAPI
(3) CIBA, Grant Management, OIDC4IDA
(4) Q&A

4. Toward realizing digital identity
(1) Benefits for financial institutions
(2) What to tackle first
(3) Points to note in development and implementation
(4) Q&A

5. The future of digital identity
(1) Respect for privacy and digital identity
(2) Selective attribute disclosure and OpenID Connect as a decentralized identity infrastructure
(3) The GAIN trust framework
(4) Q&A

6. Q&A
(1) On the seminar as a whole
(2) Q&A session on the book “Digital Identity”

The post [November 12 Seminar] The Latest Trends in Authentication, API Authorization, and Consent-Acquisition Processes in Financial and Payment Services first appeared on @_Nat Zone.

Wednesday, 20. October 2021

Werdmüller on Medium

Reconfiguring