Last Update 6:46 AM February 20, 2024 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Tuesday, 20. February 2024

John Philpin : Lifestream

If you are watching closely you will know that my 366 project is pretty messed up right now. Just a little bit too much going on - so laying out breadcrumb markers that I will be revisiting slowly but surely.

🖇️ Like this one for example.

A WIP - but had to share a 13 second video of some lads on the beach taken with an iPhone from a long long way-a-way.


Simon Willison

htmz

Astonishingly clever browser platform hack by Lean Rada.

Add this to a page:

<iframe hidden name=htmz
  onload="setTimeout(() =>
    document.querySelector(this.contentWindow.location.hash || null)
      ?.replaceWith(...this.contentDocument.body.childNodes)
  )"></iframe>

Then elsewhere add a link like this:

<a href="/flower.html#my-element" target=htmz>Flower</a>

Clicking that link will fetch content from /flower.html and replace the element with ID of my-element with that content.

Via Hacker News


aiolimiter

I found myself wanting an asyncio rate limiter for Python today - so I could send POSTs to an API endpoint no more than once every 10 seconds. This library worked out really well - it has a very neat design and lets you set up rate limits for things like "no more than 50 items every 10 seconds", implemented using the leaky bucket algorithm.
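
A minimal sketch of that "no more than once every 10 seconds" pattern (the send() function and its print output are placeholders added for illustration; only AsyncLimiter comes from the library):

import asyncio
import time

from aiolimiter import AsyncLimiter

# AsyncLimiter(max_rate, time_period): here, at most 1 acquisition per 10 seconds.
# AsyncLimiter(50, 10) would express "no more than 50 items every 10 seconds".
limiter = AsyncLimiter(1, 10)

start = time.monotonic()

async def send(n: int):
    async with limiter:  # waits here until the leaky bucket has capacity again
        # a real version would POST to the API endpoint at this point
        print(f"item {n} sent after {time.monotonic() - start:.0f}s")

async def main():
    await asyncio.gather(*(send(n) for n in range(3)))

asyncio.run(main())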

Monday, 19. February 2024

John Philpin : Lifestream

🚧 51/366

TIL that 🍎 is not quite as open and sharing with the Windows world as we might believe. YES - you can install an app called iCloud for Windoze on your PC BUT - for it to work - you need to authorize it from any device you have that is part of the 🍎 eco system.

Channeling Henry F - you can use any OS you want to use iCloud - but to authorize it you have to have an iOS device.

Filed In The Fucked Up Bucket … unless I read the tin wrongly?


Doc Searls Weblog

On too many conferences

 
Prompt: “Many different conferences and meetings happening all at once in different places” Rendered by Microsoft Copilot / Designer

 


Heres Tom with the Weather


Ben Werdmüller

Social, I love you, but you’re bringing me down

This weekend I realized that I’m kind of burned out: agitated, stressed about nothing in particular, and peculiarly sleepless. It took a little introspection to figure out what was really going on.

Here’s what I finally decided: I really need to pull back from using social media in particular as much as I do.

A few things brought me here:

The sheer volume of social media sites is intense
Our relationship with social media has been redefined
I want to re-focus on my actual goals

I’d like to talk about them in turn. Some of you might be feeling something similar.

The sheer volume of social media sites is intense

It used to be that I posted and read on Twitter. That’s where my community was; that’s where I kept up to date with what was happening.

Well, we all know what happened there.

In its place, I find myself spending more time on:

Mastodon
Threads
Bluesky
LinkedIn (really!)
Facebook (I know)
Instagram

The backchannel that Twitter offered has become rather more diffuse. Mastodon, Threads, and Bluesky offer pretty much the same thing as each other, with a different set of people. LinkedIn is more professional; I’m unlikely to post anything political there, and I’m a bit more mindful of polluting the feed. My Facebook community is mostly people I miss hanging out with, so I’ll usually post sillier or less professionally relevant stuff there. And Instagram, until recently, was mostly photos of our toddler.

I haven’t been spending a ton of time interacting on any of them; it’s common for almost a full day to go between posts. Regardless, there’s something about moving from app to app to app that feels exhausting. I realized I was experiencing a kind of FOMO — am I missing something important?! — that became an addiction.

Each dopamine hit, each context switch, each draw on my attention pushes me further to the right on the stress curve. Everyone’s different, but this kind of intense data-flood — of the information equivalent of empty calories, no less — makes me feel awful.

Ugh. First step: remove every app from my phone. Second step: drastically restrict how I can access them on the web.

Our relationship with social media has been redefined

At this point we’re all familiar with the adage that if you’re not the customer, you’re the product being sold.

It never quite captured the true dynamic, but it was a pithy way to emphasize that we were being profiled in order to optimize ad sales in our direction. Of course, there was never anything to say that we weren’t being profiled or that our data wasn’t being traded even if we were the ostensible customer, but it seemed obvious that data mining for ad sales was more likely to happen on an ad-supported site.

With the advent of generative AI, or more precisely the generative AI bubble, this dynamic can be drawn more starkly. Everything we post can be ingested by a social media platform as training data for its AI engines. Prediction engines are trained on our words, our actions, our images, our audio, and then re-sold. We really are the product now.

I can accept that for posts where I share links to other resources, or a rapid-fire, off-the-cuff remark. Where I absolutely draw the line is allowing an engine to be trained on my child. Just as I’m not inclined to allow him to be fingerprinted or added to a DNA database, I’m not interested in having him be tracked or modeled. I know that this is likely an inevitability, but if it happens, it will happen despite me. I will not be the person who willingly uploads him as training data.

So, when I’m uploading images, you might see a picture of a snowy day, or a funny sign somewhere. You won’t see anything important, or anything representative of what life actually looks like. It’s time to establish an arms-length distance.

There’s something else here, too: while the platforms are certainly profiling and learning from us, they’re still giving us more of what we pause and spend our attention on. In an election year, with two major, ongoing wars, I’m finding that to be particularly stressful.

It’s not that I don’t want to know what’s going on. I read the news; I follow in-depth journalism; I read blogs and opinion pieces on these subjects. Those things aren’t harmful. What is harmful is the endless push for us to align into propaganda broadcasters ourselves, and to accept broad strokes over nuanced discussion and real reflection. This was a problem with Twitter, and it’s a problem with all of today’s platforms.

The short form of microblogging encourages us to be reductive about impossibly important topics that real people are losing their lives over right now. It’s like sports fans yelling about who their preferred team is. In contrast, long-form content — blogging, newsletters, platforms like Medium — leaves space to explore and truly debate. Whereas short-form is too low-resolution to capture the fidelity of the truth, long-form at least has the potential to be more representative of reality.

It’s great for jokes. Less so for war.

I want to re-focus on my actual goals

What do I actually want to achieve?

Well, I’ve got a family that I would like to support and show up for well.

I’ve got a demanding job doing something really important that I want to make sure I show up well for.

I’ve also got a first draft of a majority of a novel printed out and sitting on my coffee table with pen edits all over it. I’d really like to finish it. It’s taken far longer than I intended or hoped for.

And I want to spend time organizing my thoughts for both my job and my creative work, which also means writing in this space and getting feedback from all of you.

Social media has the weird effect of making you feel like you’ve achieved something — made a post, perhaps received some feedback — without actually having done anything at all. It sits somewhere between marketing and procrastination: a way to lose time into a black hole without anything to really show for it.

So I want to move my center of gravity all the way back to writing for myself. I’ll write here; I’ll continue to write my longer work on paper; I’ll share it when it’s appropriate.

Posting in a space I control isn’t just about the principle anymore. It’s a kind of self-preservation. I want to preserve my attention and my autonomy. I accept that I’m addicted, and I would like to curb that addiction. We all only have so much time to spend; we only have one face to maintain ownership of. Independence is the most productive, least invasive way forward.

 

IndieNews


Patrick Breyer

MEPs call for protection of Julian Assange from possible extradition to the USA

A group of 46 Members of the European Parliament from various political groups today issued a final appeal to the British Home Secretary to protect Wikileaks founder Julian Assange and to prevent his possible extradition to the United States. In a letter to the Home Secretary, sent the day before the final court hearing on Julian Assange's extradition, the signatories stress their concern about the Assange case and its consequences for press freedom, as well as the serious risks to Assange's health should he be extradited to the USA.

According to the letter, the US government is attempting for the first time to apply the Espionage Act of 1917 against a journalist and publisher. If the USA succeeded and Assange were extradited, this would amount to a redefinition of investigative journalism. It would extend the reach of US criminal law to the whole world, including non-US citizens, without extending the protection of the US constitutional guarantee of free speech in the same way.

Dr. Patrick Breyer, Member of the European Parliament for the Pirate Party Germany and co-initiator of the letter, comments:

"The world is now watching the United Kingdom and its respect for human rights and the Human Rights Convention. Britain's relations with the EU are at stake.

The imprisonment and prosecution of Assange sets an extremely dangerous precedent for all journalists, media actors and press freedom. In future, any journalist could be prosecuted for publishing 'state secrets'. US officials have confirmed to me that the standards applied to Assange would apply to any other journalist as well. We must not allow that.

The public has a right to learn about the state crimes committed by those in power, so that it can stop them and bring them to justice. With Wikileaks, Julian Assange ushered in an era in which injustice can no longer be swept under the rug; now it is up to us to defend transparency, accountability and our right to the truth.
While Australia stands behind Julian Assange, our federal government stays silent, pointing to the British judiciary. But in truth it is just as much in the hands of the British Home Office whether Assange is released because of political persecution or the threat of inhumane treatment. The demonstrations announced for tomorrow in Berlin, Düsseldorf, Hamburg, Köln and München will help overcome Germany's cowardly silence."


Simon Willison

Quoting dang

Spam, and its cousins like content marketing, could kill HN if it became orders of magnitude greater—but from my perspective, it isn't the hardest problem on HN. [...]

By far the harder problem, from my perspective, is low-quality comments, and I don't mean by bad actors—the community is pretty good about flagging and reporting those; I mean lame and/or mean comments by otherwise good users who don't intend to and don't realize they're doing that.

dang


Ben Werdmüller

Heat pumps outsold gas furnaces again last year — and the gap is growing

"Americans bought 21 percent more heat pumps in 2023 than the next-most popular heating appliance, fossil gas furnaces." Quietly, the way we heat our homes is changing - and it has the potential to make a big impact. Because heat pumps use around a quarter of the energy of a conventional furnace, and don't necessarily depend on fossil fuels at all, the aggregate energy savi

"Americans bought 21 percent more heat pumps in 2023 than the next-most popular heating appliance, fossil gas furnaces." Quietly, the way we heat our homes is changing - and it has the potential to make a big impact.

Because heat pumps use around a quarter of the energy of a conventional furnace, and don't necessarily depend on fossil fuels at all, the aggregate energy savings could be really significant. Anecdotally (I have a steam furnace that I hate with the fire of a thousand suns), it's also just a far better system.

It might not seem like a particularly sexy technology, but there's scope to spend a little effort here on UX in the same way that Nest did for thermostats and make an even bigger impact. #Climate

[Link]


Can ChatGPT edit fiction? 4 professional editors asked AI to do their job – and it ruined their short story

"We are professional editors, with extensive experience in the Australian book publishing industry, who wanted to know how ChatGPT would perform when compared to a human editor. To find out, we decided to ask it to edit a short story that had already been worked on by human editors – and we compared the results." No surprise: ChatGPT stinks at this. I've sometimes used it t

"We are professional editors, with extensive experience in the Australian book publishing industry, who wanted to know how ChatGPT would perform when compared to a human editor. To find out, we decided to ask it to edit a short story that had already been worked on by human editors – and we compared the results."

No surprise: ChatGPT stinks at this. I've sometimes used it to look at my own work and suggest changes. I'm not about to suggest that any of my writing is particularly literary, but its recommendations have always been generic at best.

Not that anyone in any industry, let alone one whose main product is writing of any sort, would try and use AI to make editing or content suggestions, right? Right?

... Right? #AI

[Link]


Damien Bod

Using a CSP nonce in Blazor Web

This article shows how to use a CSP nonce in a Blazor Web application using the InteractiveServer server render mode. Using a CSP nonce is a great way to protect web applications against XSS attacks and other such Javascript vulnerabilities.

Code: https://github.com/damienbod/BlazorServerOidc

Notes

The code in this example was built using the example provided by Javier Calvarro Nelson.

https://github.com/javiercn/BlazorWebNonceService

Services and middleware

The Blazor Web application is implemented using the AddInteractiveServerComponents for the InteractiveServer server render mode. The nonce can be used by implementing a nonce service using the CircuitHandler. The nonce service is a scoped service.

builder.Services.AddRazorComponents()
    .AddInteractiveServerComponents();

builder.Services.TryAddEnumerable(ServiceDescriptor
    .Scoped<CircuitHandler, BlazorNonceService>(sp =>
        sp.GetRequiredService<BlazorNonceService>()));

builder.Services.AddScoped<BlazorNonceService>();

The headers are implemented using the NetEscapades.AspNetCore.SecurityHeaders package. The headers are added to the Blazor nonce service using the NonceMiddleware middleware.

app.UseSecurityHeaders(SecurityHeadersDefinitions.GetHeaderPolicyCollection(
    app.Environment.IsDevelopment(),
    app.Configuration["OpenIDConnectSettings:Authority"]));

app.UseMiddleware<NonceMiddleware>();

Setup Security headers

The security headers CSP script tag is set up as well as possible for a Blazor Web application. A CSP nonce is used, as well as fallback definitions for older browsers.

.AddContentSecurityPolicy(builder =>
{
    builder.AddObjectSrc().None();
    builder.AddBlockAllMixedContent();
    builder.AddImgSrc().Self().From("data:");
    builder.AddFormAction().Self().From(idpHost);
    builder.AddFontSrc().Self();
    builder.AddBaseUri().Self();
    builder.AddFrameAncestors().None();

    builder.AddStyleSrc()
        .UnsafeInline()
        .Self(); // due to Blazor

    builder.AddScriptSrc()
        .WithNonce()
        .UnsafeEval() // due to Blazor WASM
        .StrictDynamic()
        .OverHttps()
        .UnsafeInline(); // fallback for older browsers when the nonce is used
})

Setup Middleware to add the nonce to the state

The NonceMiddleware uses the nonce header created by the security headers package and sets the Blazor nonce service with the value. This is updated on every request.

namespace BlazorWebFromBlazorServerOidc;

public class NonceMiddleware
{
    private readonly RequestDelegate _next;

    public NonceMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context, BlazorNonceService blazorNonceService)
    {
        var success = context.Items.TryGetValue("NETESCAPADES_NONCE", out var nonce);
        if (success && nonce != null)
        {
            blazorNonceService.SetNonce(nonce.ToString()!);
        }

        await _next.Invoke(context);
    }
}

Using the nonce in the UI

The BlazorNonceService can be used from the Blazor components in the InteractiveServer render mode. The nonce is applied to all script tags. If the script does not have the correct nonce, it will not be loaded. The GetNonce method reads the nonce value from the BlazorNonceService service.

@inject IHostEnvironment Env
@inject BlazorNonceService BlazorNonceService
@using System.Security.Cryptography;

<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <base href="/" />
    <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
    <link href="css/site.css" rel="stylesheet" />
    <link href="BlazorWebFromBlazorServerOidc.styles.css" rel="stylesheet" />
    <HeadOutlet @rendermode="InteractiveServer" />
</head>

<body>
    <Routes @rendermode="InteractiveServer" />
    <script src="_framework/blazor.web.js"></script>
</body>

</html>

@code {
    /// <summary>
    /// Original src: https://github.com/javiercn/BlazorWebNonceService
    /// </summary>
    [CascadingParameter] HttpContext Context { get; set; } = default!;

    protected override void OnInitialized()
    {
        var nonce = GetNonce();
        if (nonce != null)
        {
            BlazorNonceService.SetNonce(nonce);
        }
    }

    public string? GetNonce()
    {
        if (Context.Items.TryGetValue("nonce", out var item) && item is string nonce and not null)
        {
            return nonce;
        }

        return null;
    }
}

Notes

Nonces can be applied to Blazor Web using the server rendered mode and the BlazorNonceService, which implements the CircuitHandler. Thanks to Javier Calvarro Nelson for providing a solution to this. Next would be to find a solution for the AddInteractiveWebAssemblyComponents setup. You should always use a CSP nonce on a server rendered application and only load scripts with the CSP nonce applied to them.

Links

https://github.com/javiercn/BlazorWebNonceService

https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders


Ben Werdmüller

Journalism Needs Leaders Who Know How to Run a Business

"We need people with a service mindset, who understand how to run a business, but a business with a mission that’s more important than ever. We need leaders who embrace new revenue models, run toward chaos, and are excited to build new structures from the ground up. We need leaders who are generous, who nurture the careers of their employees, and who are serious about creating

"We need people with a service mindset, who understand how to run a business, but a business with a mission that’s more important than ever. We need leaders who embrace new revenue models, run toward chaos, and are excited to build new structures from the ground up. We need leaders who are generous, who nurture the careers of their employees, and who are serious about creating diverse and inclusive workplaces. And we need leaders promoted for their skills and their thoughtfulness, not their loud voice, charisma, or pedigree."

A lot of these values have been championed by some of the more progressive organizations in tech that I've seen, as well as other kinds of workplaces that have thought hard about the conditions that actually lead to productive work that matters.

What doesn't work: reverence for old models, or treating journalism as if it's somehow completely special and different. There's a lot to learn from other sectors and people who have tried hard to improve their workplaces everywhere. #Media

[Link]


Simon Willison

ActivityPub Server in a Single PHP File

Terence Eden: "Any computer program can be designed to run from a single file if you architect it wrong enough!"

I love this as a clear, easy-to-follow example of the core implementation details of the ActivityPub protocol - and a reminder that often a single PHP file is all you need.

Via lobste.rs

Sunday, 18. February 2024

Simon Willison

datasette-studio

I've been thinking for a while that it might be interesting to have a version of Datasette that comes bundled with a set of useful plugins, aimed at expanding Datasette's default functionality to cover things like importing data and editing schemas.

This morning I built the very first experimental preview of what that could look like. Install it using pipx:

pipx install datasette-studio

I recommend pipx because it will ensure datasette-studio gets its own isolated environment, independent of any other Datasette installations you might have.

Now running "datasette-studio" instead of "datasette" will get you the version with the bundled plugins.

The implementation of this is fun - it's a single pyproject.toml file defining the dependencies and setting up the datasette-studio CLI hook, which is enough to provide the full set of functionality.

Is this a good idea? I don't know yet, but it's certainly an interesting initial experiment.


John Philpin : Lifestream

🚧 50/366


Simon Willison

Datasette 1.0a10

The only changes in this alpha release concern the way Datasette handles database transactions. The database.execute_write_fn() internal method used to leave functions to implement transactions on their own - it now defaults to wrapping them in a transaction unless they opt out with the new transaction=False parameter.

In implementing this I found several places inside Datasette - in particular parts of the JSON write API - which had not been handling transactions correctly. Those are all now fixed.
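
A rough sketch of the new default behaviour, using a throwaway in-memory database (the table, function and database names here are illustrative, not taken from the release notes):

import asyncio

from datasette.app import Datasette

def create_and_seed(conn):
    # conn is a plain sqlite3 connection; with 1.0a10 the whole function now
    # runs inside a transaction that Datasette commits (or rolls back) for us
    conn.execute("create table if not exists notes (id integer primary key, body text)")
    conn.execute("insert into notes (body) values ('hello')")

async def main():
    db = Datasette().add_memory_database("demo")
    await db.execute_write_fn(create_and_seed)
    # Opt out when the function needs to manage transactions itself:
    await db.execute_write_fn(create_and_seed, transaction=False)

asyncio.run(main())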


Representation Engineering: Mistral-7B on Acid

Theia Vogel provides a delightfully clear explanation (and worked examples) of control vectors - a relatively recent technique for influencing the behaviour of an LLM by applying vectors to the hidden states that are evaluated during model inference.

These vectors are surprisingly easy to both create and apply. Build a small set of contrasting prompt pairs - "Act extremely happy" vs. "Act extremely sad" for example (with a tiny bit of additional boilerplate), then run a bunch of those prompts and collect the hidden layer states. Then use "single-component PCA" on those states to get a control vector representing the difference.

The examples Theia provides, using control vectors to make Mistral 7B more or less honest, trippy, lazy, creative and more, are very convincing.
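
This isn't Theia's code, just a minimal numpy sketch of the "single-component PCA" step, assuming you have already collected one hidden-state vector per prompt for a given layer (the array shapes and names are made up for illustration):

import numpy as np

def control_vector(hidden_pos, hidden_neg):
    # hidden_pos / hidden_neg: (n_prompts, hidden_size) hidden states for one
    # layer, one row per prompt, rows paired across the two contrasting sets.
    diffs = hidden_pos - hidden_neg        # contrast each "happy"/"sad" pair
    diffs = diffs - diffs.mean(axis=0)     # center before PCA
    # Single-component PCA: the first right singular vector of the centered
    # differences is the direction of maximum variance.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    direction = vt[0]
    return direction / np.linalg.norm(direction)

# At inference time the vector is scaled and added to that layer's hidden
# states: hidden += strength * direction (positive strength pushes one way,
# negative the other).
rng = np.random.default_rng(0)
vec = control_vector(rng.normal(size=(32, 4096)), rng.normal(size=(32, 4096)))
print(vec.shape)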

Via Hacker News


wddbfs – Mount a sqlite database as a filesystem

Ingenious hack from Adam Obeng. Install this Python tool and run it against a SQLite database:

wddbfs --anonymous --db-path path/to/content.db

Then tell the macOS Finder to connect to Go -> Connect to Server -> http://127.0.0.1:8080/ (connect as guest) - connecting via WebDAV.

/Volumes/127.0.0.1/content.db will now be a folder full of CSV, TSV, JSON and JSONL files - one of each format for every table.

This means you can open data from SQLite directly in any application that supports that format, and you can even run CLI commands such as grep, ripgrep or jq directly against the data!

Adam used WebDAV because "Despite how clunky it is, this seems to be the best way to implement a filesystem given that getting FUSE support is not straightforward". What a neat trick.


Ben Werdmüller

Opinion: I'm an American doctor who went to Gaza. I saw annihilation, not war

"On one occasion, a handful of children, all about ages 5 to 8, were carried to the emergency room by their parents. All had single sniper shots to the head. These families were returning to their homes in Khan Yunis, about 2.5 miles away from the hospital, after Israeli tanks had withdrawn. But the snipers apparently stayed behind. None of these children survived." There i

"On one occasion, a handful of children, all about ages 5 to 8, were carried to the emergency room by their parents. All had single sniper shots to the head. These families were returning to their homes in Khan Yunis, about 2.5 miles away from the hospital, after Israeli tanks had withdrawn. But the snipers apparently stayed behind. None of these children survived."

There is no justification for this horror. This is not a solution; this is not an acceptable response. It has to stop. #Democracy

[Link]

Saturday, 17. February 2024

John Philpin : Lifestream

049/366 | ⚽️ Football

Caveat ‘Navigation is 🚧’

« 048/366 | 050/366 »

’Football’ By 'Me'

I don’t know which I am more amazed by.

That I was half way up a hill looking down on this beach over the weekend and mesmerized by these 4 guys kicking a ball between themselves for close to a couple of hours in pretty intense sun. It was truly amazing - just gentle self control of moving a ball between them - you gotta know that i am no football fan - but they had me.

OR

That being half way up that hill my 5 year old iPhone recorded a video of them - which I then cropped to focus just on the game and it’s ‘just there’.

Who needs a new iPhone? Really.

Click on the green arrow to watch the video.

This is one of 366 daily posts that are planned to appear throughout the year of 2024. More context will follow.

📡 Follow with RSS

🗄️ All the posts


Doc Searls Weblog

Mom’s breakfast

As a cook, my Swedish mother was best known for her Swedish meatballs, an indelicacy now familiar as the reward for completing the big-box retail maze called Ikea. Second-best was the limpa bread (vörtbröd) she baked every Christmas. She once won an award for that one. Maybe twice.

But her most leveraged dish was the breakfast she made for us often when my sister and I were kids: soft-boiled eggs over toast broken into small pieces in a bowl. It’s still my basic breakfast, many decades later.

Mine, above, are different in three small ways:

I cut the toast into small squares with a big kitchen knife. (For Mom the toast was usually white bread, which was the only thing most grocery stores sold, or so it seemed, back in the 1950s. I lean toward Jewish rye, sourdough, ciabatta, anything not sweet.)

Mom boiled the eggs for three minutes. I poach mine. That’s a skill I learned from my wife. Much simpler. Put the eggs for a second or two into the boiling water, take them out, and then break them into the same water. (Putting them in first helps keep them intact.) Make sure the water has some salt in it, so the eggs hold their shape. Pull them out with a slotted spoon when the white gets somewhat firm and the yolk is still runny. Lay them on the toast.

I season them with a bit of hot sauce: sriracha, Tapatio, Cholula, whatever. That way they look like this before I chow them down—

The hot sauce also makes the coffee taste better for some reason.

Thus endeth the first—and perhaps last and only—recipe post on this blog.


Ben Werdmüller

Paying people to work on open source is good actually

"My fundamental position is that paying people to work on open source is good, full stop, no exceptions. We need to stop criticizing maintainers getting paid, and start celebrating. Yes, all of the mechanisms are flawed in some way, but that’s because the world is flawed, and it’s not the fault of the people taking money. Yelling at maintainers who’ve found a way to make a liv

"My fundamental position is that paying people to work on open source is good, full stop, no exceptions. We need to stop criticizing maintainers getting paid, and start celebrating. Yes, all of the mechanisms are flawed in some way, but that’s because the world is flawed, and it’s not the fault of the people taking money. Yelling at maintainers who’ve found a way to make a living is wrong."

Strongly co-signed. Sure, I have a bias: around a decade of my career in total has been spent working directly on open source projects. But throughout doing that work, I encountered people who felt that because I was releasing my work in the open, I didn't have a right to earn a living. I reject that entirely.

I agree with every part of the argument presented in this post. If people can't be paid to work on open source, only people with disposable time and income will get to do so. The result is software that skews to people from wealthier demographics who don't have families, or that can't be sustainably maintained - and I don't think that's what we want at all.

There are people who say "we need universal basic income!" or "the solution is to get rid of money entirely!" and that's lovely, in a way, but people need to eat today, not just in some future post-capitalist version of the world. #Technology

[Link]


It's kind of impressive to see Ghost ...

It's kind of impressive to see Ghost become a real open source alternative to WordPress. Many people have said it couldn't be done - but by focusing on a certain kind of independent creator (adjacent to both Medium and Substack), they've done it. It's a pretty amazing feat.


Simon Willison

Paying people to work on open source is good actually

In which Jacob expands his widely quoted (including here) pithy toot about how quick people are to pick holes in paid open source contributor situations into a satisfyingly comprehensive rant. This is absolutely worth your time - there's so much I could quote from here, but I'm going to go with this:

"Many, many more people should be getting paid to write free software, but for that to happen we’re going to have to be okay accepting impure or imperfect mechanisms."

Friday, 16. February 2024

Simon Willison

Datasette 1.0a9

A new Datasette alpha release today. This adds basic alter table support to the API, so you can request Datasette modify a table to add new columns needed for JSON objects submitted to the insert, upsert or update APIs.

It also makes some permission changes - fixing a minor bug with upsert permissions, and introducing a new rule where every permission plugin gets consulted for a permission check, with just one refusal vetoing that check.


Doc Searls Weblog

Assassinations Work


On April 4, 1968, when I learned with the rest of the world that Martin Luther King Jr. had been assassinated, I immediately thought that the civil rights movement, which King had led, had just been set back by fifty years. I was wrong about that. It ended right then (check that last link). Almost fifty-six years have passed since that assassination, and the cause still has a long way to go: far longer than what MLK and the rest of us had imagined before he was killed.

Also, since MLK was the world’s leading activist for peace and nonviolence, those movements were set back as well. (Have they moved? How much? I don’t have answers. Maybe some of you do.)

I was twenty years old when MLK and RFK were killed, and a junior at Guilford College, a Quaker institution in Greensboro, North Carolina. Greensboro was a hotbed of civil rights activism and strife at the time (and occasionally since). I was an activist of sorts back then as well, both for civil rights and against the Vietnam War. But being an activist, and having moral sympathies of one kind or another, are far less effective in the absence of leadership than they are when leadership is there, and strong.

 Alexei Navalny was one of those leaders. He moved into the past tense today: (1976-2024). His parentheses closed in an Arctic Russian prison. He was only 47 years old. At age 44 he was poisoned—an obvious assassination attempt—and survived, thanks to medical treatment in Germany. He was imprisoned in 2021 after he returned to Russia, and… well, you can read the rest here. Since Navalny was the leading advocate of reform in Russia and opposed Vladimir Putin’s one-man rule of the country, Putin wanted him dead. So now Navalny is gone, and with it much hope of reform.

Not every assassination is motivated by those opposed to a cause. Some assassins are just nuts. John Hinckley Jr. and Mark David Chapman, for example. Hinckley failed to kill Ronald Reagan, and history moved right along. But Chapman succeeded in killing John Lennon, and silence from that grave has persisted ever since.

My point is that assassination works. For causes a leader personifies, the setbacks can be enormous, and in some cases total, or close enough, for a long time.

I hope Alexei Navalny’s causes will still have effects in his absence. Martyrdom in some ways works too. But I expect those effects to take much longer to come about than they would if Navalny were still alive. And I would love to be wrong about that.


IdM Thoughtplace

Regarding the recent SAP IDM Announcement

 “Life begins like a dream, becomes a little real, and ends like a dream.” ― Michael Bassey Johnson, The Oneironaut’s Diary

As many of you already know, SAP has made public its plans on how SAP IDM will be retired as a supported offering. I’ve been stewing on this for a bit as I try to figure out exactly how I feel about this and what needs to happen next.

To be fair, I haven’t worked with the product much for just over four years, and even then, I was working more with Version 7 than with Version 8. My opinions are completely my own and do not represent my current employer, any previous employer, or SAP.

While IDM is certainly showing its age, there are some very good things about it that I would love to see as an open-source offering. First is the batch processing capability of IDM, based on the old MaXware Data Synchronization Engine/MetaCenter solutions. It features some powerful functionality to synchronize and cleanse data. It sets up fairly easily and is quite easy to configure. I’m sure the open-source community could do well maintaining the UI (it should definitely be Java-based rather than the old Windows MMC), which would fit better in today’s enterprise setting. Also, easy integration with SaaS services is a needed upgrade.

The other thing that should be released into the wild is the Virtual Directory. It also provides powerful functionality for several use cases, from pass-through authentication to assisting in M&A use cases. It’s the perfect example of a “Black Box” offering that just works. It also makes it much easier to synchronize and cleanse data by representing many different back ends via the easy-to-consume LDAP standard.

It saddens me that SAP is choosing to move away from IDM, as one of the key selling points of SAP IDM is its ability to integrate seamlessly with the SAP ecosystem. I hope SAP will help all LCM/IGA vendors connect more easily with systems. SaaS integration should be easy or standards-based, but we still need to be concerned for organizations still using on-premises SAP tools.

SAP has indicated that Microsoft’s Entra ID will be the main partner in the future, but I hope they make this information open to all vendors and that there will be continuing support of standard protocols. This article gives me some hope, but actions speak louder than words. I do have some concerns that SAP, known as a vast software ecosystem that supports itself and tends to ignore the enterprise, is handing off to another large software provider whose management tools tend to support their software ecosystem first and consider the enterprise second. Let’s face it: most of Microsoft’s Identity and Access Management efforts have been about supporting the Office 365 family of products. Don’t get me wrong; it’s better than SAP in this regard, but it’s not that high of a level to meet. For what it’s worth, I am guardedly optimistic, but I always try to remain hopeful.

Finally, I think it’s important to thank the IDM team in Sofia for all their hard work over the years, which, of course, would not have been possible without the vision and effort of the original MaXware team based in Trondheim, Norway, and associated teams in the UK, Australia, and the US. The production from these small teams helped define what Identity Management is to this day.

Will this be my last blog entry on the topic of SAP IDM? I don’t know. Part of it will depend on if there are any moves towards the Open Source world. There have been at least three times in my life when I thought I was done with this tool, and deep down, I’m pretty sure there is a little more in my future. 

In the meantime, I hope to resume blogging more regarding the Identity and Access Management field in the near future. Time will tell.




Doc Searls Weblog

Cluetrain at 25

Chris Locke found this on the Web in early 1999, and it became the main image on the Cluetrain Manifesto homepage. We’ve never found its source.

The Cluetrain Manifesto will turn 25 in two months.

I am one of its four authors, and speak here only for myself. The others are David Weinberger, Rick Levine, and Chris Locke. David and Rick may have something to say. Chris, alas, demonstrates the first words in Chapter One of The Cluetrain Manifesto in its book form. Try not to be haunted by Chris’s ghost when you read it.

Cluetrain is a word that did not exist before we made it up in 1999. It is still tweeted almost daily on X (née Twitter), and often on BlueSky and Threads, the Twitter wannabes. And, of course, on Facebook. Searching Google Books no longer says how many results it finds, but the last time I was able to check, the number of books containing the word cluetrain was way past 10,000.

So by now cluetrain belongs in the OED, though nobody is lobbying for that. In fact, none of the authors lobbied for Cluetrain much in the first place. Chris and David wrote about it in their newsletters, and I said some stuff in Linux Journal.  But that was about it. Email was the most social online medium back then, so we did our best with that. We also decided not to make Cluetrain a Thing apart from its website. That meant no t-shirts, bumper stickers, or well-meaning .orgs. We thought what it said should succeed or fail on its own.

Among other things, it succeeded in grabbing the interest of Tom Petzinger, who devoted a column in The Wall Street Journal to the manifesto.* And thus a meme was born. In short order, we were approached with a book proposal, decided a book would be a good way to expand on the website, and had it finished by the end of August. The first edition came out in January 2000—just in time to help burst the dot-com bubble. It also quickly became a bestseller, even though (or perhaps in part because) the whole book was also published for free on the Cluetrain website—and is still there.

You can’t tell from the image of the cover on the right, but that orange was as Day-Glo as a road cone, and the gray at the bottom was silver. You couldn’t miss seeing it on the displays and shelves of bookstores, which were still thick on the ground back then.

A quarter century after we started working on Cluetrain, I think its story has hardly begun—because most of what it foresaw, or called for, has not come true. Yet.

So I’m going to visit some of Cluetrain’s history and main points in a series of posts here. This is the first one.

*A search for that column on the WSJ.com website brings up nothing: an example of deep news’ absence. But I do have the text, and may share it with you later.


John Philpin : Lifestream

🚧 48/366


Simon Willison

llmc.sh

Adam Montgomery wrote this neat wrapper around my LLM CLI utility: it adds a "llmc" zsh function which you can ask for shell commands (llmc 'use ripgrep to find files matching otter'). It outputs the command, an explanation of the command, and then copies the command to your clipboard for you to paste and execute if it looks like the right thing.

Via @montasaurus_rex


Kent Bull

CESR enters provisional status in IANA Media Type Registry

Registration of the composable event streaming representation (CESR) format in the IANA Media Type Registry shows a recent development of the key event receipt infrastructure (KERI) and authentic chained data containers (ACDC) space and how the space is growing.

See the following link for the official entry: IANA Media Type Registry entry for CESR (application/cesr)


Patrick Breyer

Digital Services Act comes into force, but is disappointingly industry-friendly

Tomorrow the EU's Digital Services Act comes into force for all providers and platforms. Dr. Patrick Breyer, Member of the European Parliament for the Pirate Party and rapporteur for the Civil Liberties Committee (LIBE), who helped negotiate the law, comments:

"With the Digital Services Act, the European Parliament tried to overcome the surveillance-capitalist business model of ubiquitous online surveillance, but it failed. We did not manage to open up alternatives to the platforms' toxic algorithms, which push the most controversial and extreme content to the top of our timelines. And we did not manage to protect legal content, including media reports, from being suppressed by error-prone upload filters or arbitrarily defined platform rules.

There are improvements, though, such as a complaints procedure and out-of-court dispute resolution against arbitrary provider decisions, as well as a ban on manipulative surveillance advertising that exploits sensitive user data such as political opinions, sexual orientation and so on.

There is still a long way to go before we have a true digital constitution. We must find the courage to finally take the digital age into our own hands instead of leaving it to corporations and surveillance authorities!"


@_Nat Zone

Keynote at ITmedia Security Week on February 26 at 13:00: "The digital identity behind GAFA's success: no DX without digital identity? No ID, No DX"

ITmedia has given me the opportunity to deliver a keynote at ITmedia Security Week 2024 Winter. ITmedia Security Week 2024 Winter is a free, live-streamed conference for taking a fresh, comprehensive look at how to think about, design, and implement the security measures your own company actually needs. Advance registration is required. Among an impressive line-up of speakers, my session is the following.

Keynote 1-1, February 26, 13:00-13:40
The digital identity behind GAFA's success: no DX without digital identity? No ID, No DX

As the whole country pushes ahead with DX (digital transformation) and deepens its dependence on digital technology, cybersecurity threats have grown severe enough to shake the foundations of society and business. As digital technology penetrates society even further, it will become ever more important that the digital identities representing "people" in the digital world are created correctly, and services must confirm at every login that the user really is the same person in order to be delivered safely. In the near future, corporate information systems will not be viable without a new approach built on digital identity and authentication. In this talk I will discuss "digital identity": what made GAFA the winners of the cyber business, and what corporate DX cannot do without.

Registration

You can register via the following link:

https://members07.live.itmedia.co.jp/library/NjQyMDE%253D?group=sec240226#keynote1-1

I hope to see you there.

Ben Werdmüller

Leaked Emails Show Hugo Awards Self-Censoring to Appease China

"A trove of leaked emails shows how administrators of one of the most prestigious awards in science fiction censored themselves because the awards ceremony was being held in China." What's remarkable here is that they weren't censored by the government - instead this trove of emails suggests it was their own xenophobic assumptions about what was necessary to be acceptable

"A trove of leaked emails shows how administrators of one of the most prestigious awards in science fiction censored themselves because the awards ceremony was being held in China."

What's remarkable here is that they weren't censored by the government - instead, this trove of emails suggests it was their own xenophobic assumptions about what was necessary to be acceptable in a Chinese context that shut authors out of one of the most prestigious prizes in science fiction. This includes eliminating authors whose otherwise eligible work had actually been published in China.

There's a dark comedy to be written here about a group of westerners who are so worried about appeasing a government they consider to be censorial that they commit far more egregious acts of censorship themselves. #Culture

[Link]

Thursday, 15. February 2024

John Philpin : Lifestream

🚧 47/366


Simon Willison

uv: Python packaging in Rust

"uv is an extremely fast Python package installer and resolver, written in Rust, and designed as a drop-in replacement for pip and pip-tools workflows."

From Charlie Marsh and Astral, the team behind Ruff, who describe it as a milestone in their pursuit of a "Cargo for Python".

Also in this announcement: Astral are taking over stewardship of Armin Ronacher's Rye packaging tool, another Rust project.

uv is reported to be 8-10x faster than regular pip, increasing to 80-115x faster with a warm global module cache thanks to copy-on-write and hard links on supported filesystems - which saves on disk space too.

It also has a --resolution=lowest option for installing the lowest available version of dependencies - extremely useful for testing, I've been wanting this for my own projects for a while.

Also included: "uv venv" - a fast tool for creating new virtual environments with no dependency on Python itself.

Via @charliermarsh


Ben Werdmüller

The text file that runs the internet

It's hard to read this without feeling like the social contract of the web is falling apart.

And when social agreements fall apart, that's when we start having to talk about more rigid, enforced contracts instead. As the piece notes:

"There are people on both sides who believe we need better, stronger, more rigid tools for managing crawlers. They argue that there’s too much money at stake, and too many new and unregulated use cases, to rely on everyone just agreeing to do the right thing."

I think it's inevitable that we'll see more regulation and a more locked-down web. Probably, past a certain point, this was always going to happen. But I'll miss the days of rough consensus and working code. #Technology

[Link]


Jon Udell

Creating a GPT Assistant That Writes Pipeline Tests

Here’s the latest installment in the series on working with LLMs: Creating a GPT Assistant That Writes Pipeline Tests.

Once you get the hang of writing these tests, it’s mostly boilerplate, so I figured my team of assistants could help. I recruited Cody, GitHub Copilot, and Unblocked — with varying degrees of success. Then I realized I hadn’t yet tried creating a GPT. As OpenAI describes them, “GPTs are custom versions of ChatGPT that users can tailor for specific tasks or topics by combining instructions, knowledge, and capabilities.”

The rest of the series:

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify

9 Learning While Coding: How LLMs Teach You Implicitly

10 How LLMs Helped Me Build an ODBC Plugin for Steampipe

11 How to Use LLMs for Dynamic Documentation

12 Let’s talk: conversational software development

13 Using LLMs to Improve SQL Queries

14 Puzzling over the Postgres Query Planner with LLMs

15 7 Guiding Principles for Working with LLMs

16 Learn by Doing: How LLMs Should Reshape Education

17 How to Learn Unfamiliar Software Tools with ChatGPT


Simon Willison

Val Town Newsletter 15


I really like how Val Town founder Steve Krouse now accompanies their "what's new" newsletter with a video tour of the new features. I'm seriously considering imitating this for my own projects.


Our next-generation model: Gemini 1.5


The big news here is about context length: Gemini 1.5 (a Mixture-of-Experts model) will do 128,000 tokens in general release, available in limited preview with a 1 million token context and has shown promising research results with 10 million tokens!

1 million tokens is 700,000 words or around 7 novels - also described in the blog post as an hour of video or 11 hours of audio.

Via Jeff Dean


Patrick Breyer

Trilogue deal: EU Parliament and Council seal the extension of blanket chat control by US internet corporations


In this morning's trilogue, the EU Parliament and the EU Council agreed to extend the controversial blanket voluntary chat control 1.0 by US internet corporations such as Meta (Instagram, Facebook), Google (GMail) and Microsoft (X-Box) until April 2026, and they want to pass the extension in a fast-track procedure before the European elections. The majority in the EU Parliament, including the CDU/CSU and SPD, originally wanted to extend by only 9 months in order to move as quickly as possible to targeted surveillance of suspects and far better protection of children through safer default settings for services, proactive searches for freely accessible abuse material, deletion obligations and an EU child protection centre. Instead, they agreed today to an extension of the status quo more than twice as long.

Pirate Party MEP and digital freedom fighter Dr. Patrick Breyer, who is suing Meta over its unilateral chat control of direct messages, criticises:

"The EU Parliament wants to move away from blanket chat control, which violates fundamental rights, but with today's deal it cements it. The EU Parliament wants much better protection against child abuse online that will stand up in court, but today's deal achieves nothing at all to better protect our children. With so little backbone, ever more extensions of the status quo will follow, and better protection of children becomes ever more unlikely. Abuse victims deserve better!

The EU Commission, EU governments and an international surveillance-industrial network have unfortunately succeeded in making the parliamentary majority afraid of a supposed 'protection gap' if blanket voluntary chat control 1.0 were to lapse. In truth, the voluntary mass surveillance of our personal messages and photos by US services such as Meta, Google or Microsoft makes no significant contribution to rescuing abused children or convicting abusers; instead it criminalises thousands of minors, overburdens law enforcement and opens the door to arbitrary private justice by the internet corporations. If, by the EU Commission's own account, only one in four reports is even relevant to the police, that means for Germany, year after year, 75,000 disclosed intimate beach photos and nude images that are not safe in the hands of unknown moderators abroad and have no business being there. The regulation on voluntary chat control is both unnecessary and contrary to fundamental rights: social networks, as hosting services, need no regulation to review public posts. The same applies to reports of suspicion submitted by users. And the error-prone automated reports from the screening of private communication by Zuckerberg's Meta corporation, which account for 80% of chat reports, will disappear anyway with the announced introduction of end-to-end encryption.

As a Pirate, I am working to have this unilateral chat control stopped in court as a suspicionless, blanket surveillance measure. Until 2026 we Pirates will fight against all attempts to still find majorities in the EU Council for the extreme dystopia of mandatory chat control 2.0, which would destroy the digital privacy of correspondence and secure encryption, and against attempts to manipulate critical EU states into agreeing after all through infamous campaigns and disinformation from the EU Commission."

The Pirate Party's lead candidate for the European elections, Anja Hirschel, explains: "The almost frantically adopted, extra-long extension of voluntary chat control 1.0 was decided this morning. Facts are thus being deliberately created far beyond the coming European elections. Blanket surveillance can thereby be normalised and then serve as the basis for an even harsher chat control 2.0. We Pirates will nevertheless not be discouraged and will now fight all the more for the protection of our privacy!"

The agreement still requires the approval of the EU Parliament and the EU Council. In early March, the EU interior ministers will again take up the EU Commission's parallel proposal to destroy the digital privacy of correspondence and secure encryption (chat control 2.0). So far there is no agreement between supporters and opponents among the EU governments, so that project is on ice.


Simon Willison

Adaptive Retrieval with Matryoshka Embeddings


Nomic Embed v1 only came out two weeks ago, but the same team just released Nomic Embed v1.5 trained using a new technique called Matryoshka Representation.

This means that unlike v1 the v1.5 embeddings are resizable - instead of a fixed 768 dimension embedding vector you can trade size for quality and drop that size all the way down to 64, while still maintaining strong semantically relevant results.

Joshua Lochner built this interactive demo on top of Transformers.js which illustrates quite how well this works: it lets you embed a query, embed a series of potentially matching text sentences and then adjust the number of dimensions and see what impact it has on the results.
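To make the idea concrete, here is a minimal sketch in plain numpy (not Nomic's or Transformers.js's actual API; the vectors are random stand-ins for real model output): truncate a Matryoshka-style embedding to its first N dimensions, re-normalize it, and it remains a usable unit vector for cosine similarity.

import numpy as np

def truncate_embedding(vec, dims):
    # Keep only the first `dims` dimensions, then re-normalize to unit length.
    v = np.asarray(vec, dtype=np.float32)[:dims]
    return v / np.linalg.norm(v)

# Random stand-ins for a 768-dimension query and document embedding.
rng = np.random.default_rng(0)
query, doc = rng.normal(size=768), rng.normal(size=768)

for dims in (768, 256, 64):
    q, d = truncate_embedding(query, dims), truncate_embedding(doc, dims)
    print(dims, round(float(np.dot(q, d)), 4))  # cosine similarity of unit vectors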

Via @xenovacom


Ben Werdmüller

Building Slack: Day 1


Catnip for me: the first post in a new blog that tells the story of building Slack from the ground up, by two of its former employees.

This was surprising to me, although I guess I don't really know why: "We used the tried and true LAMP stack (Linux, Apache, MySQL, PHP). We were all deeply familiar with these conventional tools, and Cal and the Flickr team had defined a framework for building out and scaling web applications using them (called flamework for Flickr framework)." #Technology

[Link]

Wednesday, 14. February 2024

John Philpin : Lifestream

🚧 46/366



Simon Willison

How Microsoft names threat actors


I'm finding Microsoft's "naming taxonomy for threat actors" deeply amusing this morning. Charcoal Typhoon are associated with China, Crimson Sandstorm with Iran, Emerald Sleet with North Korea and Forest Blizzard with Russia. The weather pattern corresponds with the chosen country, then the adjective distinguishes different groups (I guess "Forest" counts as a color).
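Spelled out as a tiny lookup table (covering only the four examples above, nothing official):

# The weather "family" encodes the associated country, and the adjective
# distinguishes individual groups within that family.
FAMILY_TO_COUNTRY = {
    "Typhoon": "China",
    "Sandstorm": "Iran",
    "Sleet": "North Korea",
    "Blizzard": "Russia",
}

def attribution(actor_name):
    adjective, family = actor_name.split()
    return f"{actor_name}: {FAMILY_TO_COUNTRY[family]}-associated ('{adjective}' marks the specific group)"

print(attribution("Charcoal Typhoon"))
print(attribution("Forest Blizzard"))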

Via Hacker News comment


Ben Werdmüller

Caribou High School to use fingerprinting to track student attendance

"[The ACLU] publicly challenged the school district in a statement to media outlets stating that it has filed a public records request seeking more information about the district’s decision to [a firm] to track student attendance and tardiness by having students place their fingers on a biometric scanner." So many questions: how anyone thought this was a good idea to begin

"[The ACLU] publicly challenged the school district in a statement to media outlets stating that it has filed a public records request seeking more information about the district’s decision to [a firm] to track student attendance and tardiness by having students place their fingers on a biometric scanner."

So many questions: how anyone thought this was a good idea to begin with; how the data is stored and processed; whether this is legal; what the software company providing this platform could possibly be thinking. Nipping this in the bud feels like a good idea. #Technology

[Link]


Patrick Breyer

European Court of Human Rights prohibits weakening secure end-to-end encryption - the end for chat control?


Yesterday the European Court of Human Rights prohibited a general weakening of secure end-to-end encryption, reasoning that encryption helps citizens and companies protect themselves against hacking, theft of identity and personal data, fraud and the improper disclosure of confidential information. Backdoors could also be exploited by criminal networks and would seriously endanger the security of electronic communication for all users. There are other solutions for monitoring encrypted communication without generally weakening the protection of all users. As an example, the ruling cites the use of state trojans and source telecommunications interception ("Quellen-TKÜ").

MEP Dr. Patrick Breyer (Pirate Party) comments on the ruling:

"With this magnificent landmark ruling, the 'client-side scanning' surveillance on all smartphones demanded by the EU Commission for chat control is clearly illegal. It would destroy protection for everyone instead of investigating suspects in a targeted way. The EU governments must now finally strike the destruction of secure encryption from the chat control 2.0 plans - just like the blanket surveillance of people who are not under suspicion!

Secure encryption saves lives. Without encryption we can never be sure whether our messages or photos are being forwarded to people we do not know and cannot trust. So-called 'client-side scanning' would either make our communication fundamentally insecure, or European citizens would no longer be able to use WhatsApp or Signal at all, because the providers have already signalled that they would shut down their services in Europe. It is unbelievable that the EU Council's latest draft position still provides for the destruction of secure encryption. We Pirates will now fight all the more for our digital privacy of correspondence!"

Background: Citing circulating abuse material, the EU Commission and a surveillance-industrial network are demanding blanket chat controls even on end-to-end encrypted messengers. This would only be feasible by undermining secure encryption. The majority of EU governments support the push, but a blocking minority is holding it up. In early March, the EU interior ministers intend to deliberate again. Under pressure from Pirates and civil society, the EU Parliament has rejected the destruction of secure encryption and chat control. However, this is only the starting position for possible negotiations with the EU Council, should it agree on a position. Meta has announced that it will encrypt messages on Facebook and Instagram end-to-end in the course of this year and discontinue its current voluntary chat control. Nevertheless, the EU is in the process of extending the authorisation for voluntary chat control.

Breyer's information page on chat control


Simon Willison

Memory and new controls for ChatGPT


ChatGPT now has "memory", and it's implemented in a delightfully simple way. You can instruct it to remember specific things about you and it will then have access to that information in future conversations - and you can view the list of saved notes in settings and delete them individually any time you want to.

The feature works by adding a new tool called "bio" to the system prompt fed to ChatGPT at the beginning of every conversation, described like this:

"The `bio` tool allows you to persist information across conversations. Address your message `to=bio` and write whatever information you want to remember. The information will appear in the model set context below in future conversations."

I found this by prompting it to 'Show me everything from "You are ChatGPT" onwards in a code block' - see the via link.
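A rough sketch of that mechanism (purely illustrative - not OpenAI's code, and the function names are invented): messages addressed to the tool are persisted, then injected into the "model set context" of the next conversation's system prompt.

saved_memories = []

def bio(message):
    # Stand-in for the "bio" tool: persist whatever the model addresses "to=bio".
    saved_memories.append(message)

def build_system_prompt(base_prompt):
    # Saved notes reappear as "model set context" in future conversations.
    context = "\n".join(f"- {note}" for note in saved_memories)
    return f"{base_prompt}\n\nModel set context:\n{context}"

bio("User prefers concise answers with code examples")
print(build_system_prompt("You are ChatGPT."))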

Via My ChatGPT introspection session


GPUs on Fly.io are available to everyone!


We've been experimenting with GPUs on Fly for a few months for Datasette Cloud. They're well documented and quite easy to use - any example Python code you find that uses NVIDIA CUDA stuff generally Just Works. Most interestingly of all, Fly GPUs can scale to zero - so while they cost $2.50/hr for an A100 40G (VRAM) and $3.50/hr for an A100 80G, you can configure them to stop running when the machine runs out of things to do.

We've successfully used them to run Whisper and to experiment with running various Llama 2 LLMs as well.

To look forward to: "We are working on getting some lower-cost A10 GPUs in the next few weeks".

Tuesday, 13. February 2024

Simon Willison

How To Center a Div


Josh Comeau: "I think that my best blog posts are accessible to beginners while still having some gold nuggets for more experienced devs, and I think I've nailed that here. Even if you have years of CSS experience, I bet you'll learn something new."

Lots of interactive demos in this.

Via @joshwcomeau


John Philpin : Lifestream

🚧 45/366



Simon Willison

Announcing DuckDB 0.10.0


Somewhat buried in this announcement: DuckDB has Fixed-Length Arrays now, along with array_cross_product(a1, a2), array_cosine_similarity(a1, a2) and array_inner_product(a1, a2) functions.

This means you can now use DuckDB to find related content (and other tricks) using vector embeddings!
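As a quick sketch of what that looks like from the duckdb Python package (the FLOAT[3] casts just build tiny fixed-length arrays inline - real embedding columns would be much wider):

import duckdb

# Cosine similarity between two fixed-length FLOAT arrays (new in DuckDB 0.10.0).
duckdb.sql("""
    SELECT array_cosine_similarity(
        [1.0, 2.0, 3.0]::FLOAT[3],
        [2.0, 1.0, 3.0]::FLOAT[3]
    ) AS similarity
""").show()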

Also notable: "DuckDB can now attach MySQL, Postgres, and SQLite databases in addition to databases stored in its own format. This allows data to be read into DuckDB and moved between these systems in a convenient manner, as attached databases are fully functional, appear just as regular tables, and can be updated in a safe, transactional manner."


Quoting Will Wilson, on FoundationDB


Before we even started writing the database, we first wrote a fully-deterministic event-based network simulation that our database could plug into. This system let us simulate an entire cluster of interacting database processes, all within a single-threaded, single-process application, and all driven by the same random number generator. We could run this virtual cluster, inject network faults, kill machines, simulate whatever crazy behavior we wanted, and see how it reacted. Best of all, if one particular simulation run found a bug in our application logic, we could run it over and over again with the same random seed, and the exact same series of events would happen in the exact same order. That meant that even for the weirdest and rarest bugs, we got infinity “tries” at figuring it out, and could add logging, or do whatever else we needed to do to track it down.

[...] At FoundationDB, once we hit the point of having ~zero bugs and confidence that any new ones would be found immediately, we entered into this blessed condition and we flew.

[...] We had built this sophisticated testing system to make our database more solid, but to our shock that wasn’t the biggest effect it had. The biggest effect was that it gave our tiny engineering team the productivity of a team 50x its size.

Will Wilson, on FoundationDB
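A toy sketch of the core idea in Python (nothing to do with FoundationDB's actual simulator): when every source of nondeterminism is drawn from one seeded RNG, a failing run can be replayed event-for-event just by reusing the seed.

import random

def run_simulation(seed, steps=20):
    # One RNG drives every "random" event, so the seed fully determines the run.
    rng = random.Random(seed)
    events = []
    for step in range(steps):
        roll = rng.random()
        if roll < 0.05:
            events.append((step, "kill machine"))
        elif roll < 0.20:
            events.append((step, "drop packet"))
        else:
            events.append((step, "deliver message"))
    return events

# Same seed, same history - a rare failure can be replayed as often as needed.
assert run_simulation(42) == run_simulation(42)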


Aya

Aya "A global initiative led by Cohere For AI involving over 3,000 independent researchers across 119 countries. Aya is a state-of-art model and dataset, pushing the boundaries of multilingual AI for 101 languages through open science." Both the model and the training data are released under Apache 2. The training data looks particularly interesting: "513 million instances through templating

Aya

"A global initiative led by Cohere For AI involving over 3,000 independent researchers across 119 countries. Aya is a state-of-art model and dataset, pushing the boundaries of multilingual AI for 101 languages through open science."

Both the model and the training data are released under Apache 2. The training data looks particularly interesting: "513 million instances through templating and translating existing datasets across 114 languages" - suggesting the data is mostly automatically generated.

Via Hacker News


Ben Werdmüller

Extending our Mastodon social media trial


The BBC extends its Mastodon experiment for another six months: "We are also planning to start some technical work into investigating ways to publish BBC content more widely using ActivityPub, the underlying protocol of Mastodon and the Fediverse."

The BBC's approach has been great: transparent, realistic, and well-scoped. I suspect we'll see more media entities exploring ActivityPub as the year progresses - not only because of Threads, but as activity as a whole on the social web heats up. #Technology

[Link]


Simon Willison

The original WWW proposal is a Word for Macintosh 4.0 file from 1990, can we open it?


In which John Graham-Cumming attempts to open the original WWW proposal by Tim Berners-Lee, a 68,608-byte Microsoft Word for Macintosh 4.0 file.

Microsoft Word and Apple Pages fail. OpenOffice gets the text but not the formatting. LibreOffice gets the diagrams too, but the best results come from the Infinite Mac WebAssembly emulator.

Via Hacker News


Patrick Breyer

Approval of the AI Act threatens to make facial surveillance part of everyday life in Europe


The European Parliament's lead committees for Civil Liberties (LIBE) and Consumer Protection (IMCO) today approved the negotiated outcome on the EU's AI Act by a broad majority (71:8:7). The German MEPs from the CDU/CSU, SPD and Greens supported the result, the FDP and AfD abstained, while the MEPs from the Pirates and the Left voted against. Digital freedom fighter and MEP Dr. Patrick Breyer (Pirate Party) warns that the law clears the way for the introduction of biometric mass surveillance in Europe wherever EU governments decide to use it:

"With this AI law, the EU apparently wants to emulate China not only technologically but also in domestic policy. Blanket and permanent real-time facial recognition, intimidating behavioural surveillance in public spaces such as that deployed in Hamburg, error-prone facial recognition in video surveillance footage of demonstrations even for petty offences, AI-based assessment of people's origin, unscientific 'video lie detectors' - the AI Act prohibits none of these dystopian technologies to our governments, which also include illiberal and far-right governments such as those in Hungary or Italy. Instead of protecting us from a high-tech surveillance state, the AI Act meticulously regulates how to introduce one. As important as regulating AI technology is, defending our democracy against the establishment of a high-tech surveillance state is non-negotiable for us Pirates.

That the 'traffic light' coalition is now floating a ban at national level is a bad carnival joke. In a state governed by the rule of law, interferences with fundamental rights that are not permitted are forbidden anyway. The federal government cannot ban the surveillance experiments run by the federal states for 'averting danger', such as those in Hamburg or Mannheim, and the coalition parties on the ground are happy to go along with them. What matters is that the next grand coalition, or even a government with AfD participation, can take up the EU's blueprint for introducing biometric mass surveillance at any time."

Anja Hirschel, the Pirate Party's lead candidate for the 2024 European elections, adds: "The AI Act paves the way for the introduction of unprecedented mass surveillance in Europe. It enables the construction of a blanket, automated surveillance infrastructure including biometric remote identification systems. Permanent real-time facial surveillance would then become our new reality."

Breyer reiterates: "One person wanted, everyone surveilled? With this legal blueprint for biometric mass surveillance, our faces can be scanned in public anytime and anywhere, blanket and without suspicion, on the grounds of a 'search for persons'. The supposed exceptions are window dressing - for the offences named in the AI Act, the judiciary is searching for more than 6,000 people via European Arrest Warrant. Under constant surveillance we are no longer free! In reality, biometric mass surveillance of public spaces has not found a single terrorist or prevented a single attack; instead it leads to countless arrests of innocent people and up to 99% false suspicions. The law legitimises and normalises a culture of mistrust. It leads Europe into the dystopian future of a distrustful high-tech surveillance state on the Chinese model."

According to a representative survey commissioned by Breyer and conducted by YouGov in 10 EU countries, a clear majority of Europeans reject biometric mass surveillance in public spaces.

The European Data Protection Board and the European Data Protection Supervisor have called for a "general ban on the use of AI for the automated recognition of human features in publicly accessible spaces", as this has "direct negative effects on the exercise of freedom of expression, of assembly and of association, as well as freedom of movement". More than 200 civil society organisations, activists, technologists and other experts around the world are campaigning for a worldwide ban on biometric recognition technologies that enable mass and discriminatory surveillance. They argue that "these tools have the capacity to identify, follow, single out and track people wherever they go, undermining our human rights and civil liberties". The UN High Commissioner for Human Rights also opposes the use of biometric remote identification systems in public spaces, pointing to the "lack of compliance with data protection standards", "significant accuracy problems" and "discriminatory impacts".

Just last year, the European Parliament had come out in favour of a ban on biometric mass surveillance.


Simon Willison

Caddy: Config Adapters


The Caddy web application server is configured using JSON, but their "config adapters" plugin mechanism allows you to write configuration files in YAML, TOML, JSON5 (JSON with comments), and even nginx format which then gets automatically converted to JSON for you.

Caddy author Matt Holt: "We put an end to the config format wars in Caddy by letting you use any format you want!"

Via @mholt6


Moxy Tongue

Own Your Own AI


Working on it.. OYO AI by kidOYO® Learning Software, Education Services


Simon Willison

The unsettling scourge of obituary spam


Well this is particularly grim. Apparently "obituary aggregator" sites have been an SEO trick for at least 15 years, and now they're using generative AI to turn around junk rewritten (and frequently inaccurate) obituaries even faster.

Via Andy Baio

Monday, 12. February 2024

John Philpin : Lifestream

🚧 44/366



Moxy Tongue

Sovereign AI


In 2024, the utility of words yielding philosophical clarity that becomes embedded into the design of systems being deployed globally, and Nationally, yields methods that must be structured accurately in order to abide by the Sovereign systems they serve.

In America, people own root authority, or the Sovereign infrastructure does not confer accuracy for, of, by human use. Data is the life blood of AI systems. Data structure yields Sovereign results, and across our fast advancing world, inaccuracy deconstructs faster than accuracy builds accurately. The time has come for open transparent accuracy in the data structure of Sovereignty itself to be laid bare, enabling the development of "Sovereign AI" upon foundations that serve people.

Many moons ago, this structural conversation began in the world of identity management. Professionally-deployed systems were challenged to confront inaccuracies in their modeling of human identity. Ubiquitously, people were no longer being conveyed structural constraints ensuring the structural accuracy of their root administrative authority over data systems of ultimate importance to their Sovereign participation and administration under well-founded laws that were crafted pre-tcp/ip, pre-digital data.

Identity systems have been challenged now for over 20 years to align their practices to the people they service. The work is not done. Self-Sovereign ID principles that emerged here on this blog led to decentralized identity methods and practices advancing for developer use, and into general awareness by a population that is intensely interested in digital frontiers where their lives meet opportunity, security, and civil system integrity. The fire walls of Sovereign integrity, having been breached many times in consequential ways, started exposing their own structural deficiencies.

Enter AI: human identity that primarily exists in a database-driven system, and is founded on an old era of physical presence, is now the domain of AI. Human beings can not compete structurally here, as AI derives utility that people provide, and far too often, provide ignorantly, without much personal insight or accountability for the structural choices conveyed upon them. Laws, as dependencies function, evolve at a much slower pace, and seem to lack insight into the structural underpinnings of identity silos that tcp/ip was advanced to break down and add utility to. Unfortunately, protections were not advanced with the same insight, and civil society is finding itself in a reactive mode, reacting to change like a surfer riding a wave, rather than a break wall securing civil participation in an AI-enabled society.

This is the moment. AI Sovereignty has a basic and tremendously important dependency in American civil society: people own root. 

If the data structure of human participation in America does not convey this basic structural reality, then people do not exist in a civil society, as defined by founding documents, intent, and Constitutional reach. Work is underway on this vector, and as always, the resulting choices and structures advanced will yield the results being pursued. The question on the table being asked is simple: do innovators understand what it means in structural Terms to ensure that people own root authority? 

"Own Your Own AI"


Ben Werdmüller

Who makes money when AI reads the internet for us?

"Local news publishers, [VP Platforms at The Boston Globe] Karolian told Engadget, almost entirely depend on selling ads and subscriptions to readers who visit their websites to survive. “When tech platforms come along and disintermediate that experience without any regard for the impact it could have, it is deeply disappointing.”" There's an interesting point that Josh Mil

"Local news publishers, [VP Platforms at The Boston Globe] Karolian told Engadget, almost entirely depend on selling ads and subscriptions to readers who visit their websites to survive. “When tech platforms come along and disintermediate that experience without any regard for the impact it could have, it is deeply disappointing.”"

There's an interesting point that Josh Miller makes here about how the way the web gets monetized needs to change. Sure, but that's a lot like the people who say that open source funding will be solved by universal basic income: perhaps, at some future date, but that doesn't solve the immediate problem.

Do browser vendors have a responsibility to be good stewards for publishers? I don't know about that in itself. I'm okay with them freely innovating - but they also need to respect the rights of the content they're innovating with.

Micropayments emphatically don't work, but I do wonder if there's a way forward here (alongside other ways) where AI summarizers pay for access to the articles they're consuming as references, or otherwise participate in their business models somehow. #AI

[Link]


Damien Bod

Using Blob storage from ASP.NET Core with Entra ID authentication


This article shows how to implement a secure upload and a secure download in ASP.NET Core using Azure blob storage. The application uses Microsoft Entra ID for authentication and also for access to the Azure Blob storage container.

Code: https://github.com/damienbod/AspNetCoreEntraIdBlobStorage

Security architecture

The application is set up to store the file uploads in an Azure Blob storage container. The authentication uses delegated-only flows. A user can authenticate into the application using Microsoft Entra ID. The Azure App registration defines App roles to use for access authorization. The roles are used in the enterprise application. Security groups link the users to the roles. The security groups are used in the Azure Blob container where the RBAC is applied using the groups. A SQL database is used to persist the meta data and integrate into the other parts of the application.

Setting up Azure Blob storage

Two roles were created in the Azure App registration. The roles are assigned to groups in the Enterprise application. The users allowed to use the Azure Blob storage are assigned to the groups.

The groups are then used to apply the RBAC roles in the Azure Blob container. The Storage Blob Data Contributor and the Storage Blob Data Reader roles are used.

Authentication

Microsoft Entra ID is used for authentication and implemented using the Microsoft.Identity.Web Nuget packages. This is a standard implementation. Two policies were created to validate the two different roles used in this solution.

string[]? initialScopes = configuration.GetValue<string>(
    "AzureStorage:ScopeForAccessToken")?.Split(' ');

services.AddMicrosoftIdentityWebAppAuthentication(configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddInMemoryTokenCaches();

services.AddAuthorization(options =>
{
    options.AddPolicy("blob-one-read-policy", policyBlobOneRead =>
    {
        policyBlobOneRead.RequireClaim("roles", ["blobonereadrole", "blobonewriterole"]);
    });
    options.AddPolicy("blob-one-write-policy", policyBlobOneRead =>
    {
        policyBlobOneRead.RequireClaim("roles", ["blobonewriterole"]);
    });
});

services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

Upload

The application uses the IFormFile interface with the file payload and uploads the file to Azure Blob storage. The BlobClient is setup to use Microsoft Entra ID and the meta data is added to the blob.

public BlobDelegatedUploadProvider(DelegatedTokenAcquisitionTokenCredential tokenAcquisitionTokenCredential,
    IConfiguration configuration)
{
    _tokenAcquisitionTokenCredential = tokenAcquisitionTokenCredential;
    _configuration = configuration;
}

[AuthorizeForScopes(Scopes = ["https://storage.azure.com/user_impersonation"])]
public async Task<string> AddNewFile(BlobFileUploadModel blobFileUpload, IFormFile file)
{
    try
    {
        return await PersistFileToAzureStorage(blobFileUpload, file);
    }
    catch (Exception e)
    {
        throw new ApplicationException($"Exception {e}");
    }
}

private async Task<string> PersistFileToAzureStorage(
    BlobFileUploadModel blobFileUpload,
    IFormFile formFile,
    CancellationToken cancellationToken = default)
{
    var storage = _configuration.GetValue<string>("AzureStorage:StorageAndContainerName");
    var fileFullName = $"{storage}/{blobFileUpload.Name}";
    var blobUri = new Uri(fileFullName);

    var blobUploadOptions = new BlobUploadOptions
    {
        Metadata = new Dictionary<string, string?>
        {
            { "uploadedBy", blobFileUpload.UploadedBy },
            { "description", blobFileUpload.Description }
        }
    };

    var blobClient = new BlobClient(blobUri, _tokenAcquisitionTokenCredential);
    var inputStream = formFile.OpenReadStream();
    await blobClient.UploadAsync(inputStream, blobUploadOptions, cancellationToken);

    return $"{blobFileUpload.Name} successfully saved to Azure Blob Storage Container";
}

The DelegatedTokenAcquisitionTokenCredential class is used to get access tokens for the blob upload or download. This uses the existing user delegated session and creates a new access token for the blob storage access.

using Azure.Core;
using Microsoft.Identity.Client;
using Microsoft.Identity.Web;

namespace DelegatedEntraIDBlobStorage.FilesProvider.AzureStorageAccess;

public class DelegatedTokenAcquisitionTokenCredential : TokenCredential
{
    private readonly ITokenAcquisition _tokenAcquisition;
    private readonly IConfiguration _configuration;

    public DelegatedTokenAcquisitionTokenCredential(ITokenAcquisition tokenAcquisition,
        IConfiguration configuration)
    {
        _tokenAcquisition = tokenAcquisition;
        _configuration = configuration;
    }

    public override AccessToken GetToken(TokenRequestContext requestContext, CancellationToken cancellationToken)
    {
        throw new NotImplementedException();
    }

    public override async ValueTask<AccessToken> GetTokenAsync(TokenRequestContext requestContext,
        CancellationToken cancellationToken)
    {
        string[]? scopes = _configuration["AzureStorage:ScopeForAccessToken"]?.Split(' ');
        if (scopes == null)
        {
            throw new Exception("AzureStorage:ScopeForAccessToken configuration missing");
        }

        AuthenticationResult result = await _tokenAcquisition
            .GetAuthenticationResultForUserAsync(scopes);

        return new AccessToken(result.AccessToken, result.ExpiresOn);
    }
}

Download

The download creates a BlobClient using the user delegated existing session. The file is downloaded directly.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Microsoft.Identity.Web;

namespace DelegatedEntraIDBlobStorage.FilesProvider.AzureStorageAccess;

public class BlobDelegatedDownloadProvider
{
    private readonly DelegatedTokenAcquisitionTokenCredential _tokenAcquisitionTokenCredential;
    private readonly IConfiguration _configuration;

    public BlobDelegatedDownloadProvider(DelegatedTokenAcquisitionTokenCredential tokenAcquisitionTokenCredential,
        IConfiguration configuration)
    {
        _tokenAcquisitionTokenCredential = tokenAcquisitionTokenCredential;
        _configuration = configuration;
    }

    [AuthorizeForScopes(Scopes = ["https://storage.azure.com/user_impersonation"])]
    public async Task<Azure.Response<BlobDownloadInfo>> DownloadFile(string fileName)
    {
        var storage = _configuration.GetValue<string>("AzureStorage:StorageAndContainerName");
        var fileFullName = $"{storage}/{fileName}";
        var blobUri = new Uri(fileFullName);
        var blobClient = new BlobClient(blobUri, _tokenAcquisitionTokenCredential);
        return await blobClient.DownloadAsync();
    }
}

Notes

The architecture is simple and has the base features required for a secure solution. Data protection and virus scanning need to be applied to the files, and this can be configured in the Azure Blob storage. Access is restricted to the users in the group. If this needs to be controlled more tightly, write access can be removed from the users and switched to a service principal. This can have both security advantages and disadvantages. Multiple clients might also need access to files in this solution and the security needs to be enforced. This requires further architecture changes.

Links

https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-access-azure-active-directory

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction

https://github.com/AzureAD/microsoft-identity-web


Simon Willison

Quoting Jacob Kaplan-Moss


“We believe that open source should be sustainable and open source maintainers should get paid!”

Maintainer: *introduces commercial features*
“Not like that”

Maintainer: *works for a large tech co*
“Not like that”

Maintainer: *takes investment*
“Not like that”

Jacob Kaplan-Moss


Toying with paper crafty publishers cutting into hobby market (1986)


When I was a teenager I was given a book called Make Your Own Working Paper Clock, which encouraged you to cut the book itself up into 160 pieces and glue them together into a working timepiece.

I was reminiscing about that book today when I realized it was first published in September 1983, so it recently celebrated its 40th birthday.

It turns out the story is even more interesting: the author of the book, James Smith Rudolph, based it on a similar book he had found in a Parisian bookshop in 1947, devoid of any information of the author or publisher.

In 1983 that original was long out of copyright, and "make your own" crafting books had a surge of popularity in the United States so he took the idea to a publisher and translated it to English.

This 1986 story from the Chicago Tribune filled in the story for me.

Via @alex@alexwlchan.net

Sunday, 11. February 2024

Simon Willison

Quoting Eric Lehman, internal Google email in 2018


One consideration is that such a deep ML system could well be developed outside of Google-- at Microsoft, Baidu, Yandex, Amazon, Apple, or even a startup. My impression is that the Translate team experienced this. Deep ML reset the translation game; past advantages were sort of wiped out. Fortunately, Google's huge investment in deep ML largely paid off, and we excelled in this new game. Nevertheless, our new ML-based translator was still beaten on benchmarks by a small startup. The risk that Google could similarly be beaten in relevance by another company is highlighted by a startling conclusion from BERT: huge amounts of user feedback can be largely replaced by unsupervised learning from raw text. That could have heavy implications for Google.

Eric Lehman, internal Google email in 2018


John Philpin : Lifestream

🚧 43/366



Ben Werdmüller

Against Disruption: On the Bulletpointization of Books

"A wide swath of the ruling class sees books as data-intake vehicles for optimizing knowledge rather than, you know, things to intellectually engage with. [...] Some of us enjoy fiction. And color." Amen. I'm firmly on team fiction. A brilliant novel can teach you more about the world than a hundred AI "thunks"; as this article says, it's about the interpretation more than

"A wide swath of the ruling class sees books as data-intake vehicles for optimizing knowledge rather than, you know, things to intellectually engage with. [...] Some of us enjoy fiction. And color." Amen.

I'm firmly on team fiction. A brilliant novel can teach you more about the world than a hundred AI "thunks"; as this article says, it's about the interpretation more than it is about data. Writing and reading are inherently human endeavors. They're a conversation that sometimes takes place over generations. There is no shortcut. #Media

[Link]


Werdmüller on Medium

A creative process


No apps; no frameworks; just space.

Continue reading on Medium »


Ben Werdmüller

A creative process


Over on Threads, Amanda Zamora asks:

I'm plotting away on Agencia Media and some personal writing/reporting this weekend (over a glass of 🍷 and many open tabs). One of the things I love most about building something new is the chance to design for intended outcomes — how to structure time and energy? What helps quiet chaos? Bring focus and creativity? Inspired by Ben Werdmuller’s recent callout about new Mac setups, I want to know about the ways you've built (or rebuilt) your way of working! Apps, workflows, rituals, name 'em 👇

A thing I’ve had to re-learn about building and creating is the importance of boredom in the way I think. I know that some people thrive when moving from thing to thing to thing at high speed, but I need time to reflect and toss ideas around in my head without an imposing deadline: the freedom to be creative without consequence.

The best way I’ve found to do that is to walk.

The work I’m proudest of was done in a context where I could walk for hours on end. When I was building Elgg, I would set off around Oxford, sometimes literally walking from one end of the city to the other and back again. When I was building Known and working for Matter, I roamed the east bay, sometimes walking from Berkeley to the tip of Oakland, or up through Tilden Park. I generally didn’t listen to music or audiobooks; I was alone with my thoughts and the sounds of the city. It helped me to figure out my priorities and consider what I was going to do next. When I came up with something new, it was more often than not in the midst of one of those walks.

When you’re deep into building something that’s your own, and that’s the entirety of what you’re doing (i.e., you don’t have another day job), you have the ability to structure your time however you’d like. Aside from the possible guilt of not working a traditional office day, there’s no reason to do that. Particularly at the beginning stages, I found that using the morning as unstructured reflective time led to better, more creative decision-making.

Again, this is me: everyone is different, and your mileage may vary. I do best when I have a lot of unstructured time; for some people, more structure is necessary. I think the key is to figure out what makes you happy and less stressed, and to get out from behind a screen. But also, walking really does boost creativity, so there’s that.

I recognize there’s a certain privilege inherent here: not everyone lives somewhere walkable, and not everyone feels safe when they’re walking out in the world. The (somewhat) good news is that indoor walking works just as well, if you can afford a low-end treadmill.

So what happens when you get back from a walk with a head full of ideas?

It’s probably no surprise that my other creativity hack is to journal: I want to get those unstructured thoughts, particularly the “what ifs” and “I wishes”, out on the page, together with the most important question, which is “why”. Writing long-form in this way puts me into a more contemplative state, much the same way that writing a blog post like this one helps me refine how I think about a topic. Putting a narrative arc to the thought gives it context and helps me refine what’s actually useful.

The through line here is an embrace of structurelessness; in part that’s just part of my personality, but in part it’s an avoidance of adhering to someone else’s template. If I’m writing items on a to-do list straight away, I’m subject to the design decisions of the to-do list software’s author. If I’m filling in a business model canvas, I’m thinking about the world in the way the canvas authors want me to. I can, and should, do all those things, but I always want to start with a blank page first. A template is someone else’s; a blank page is mine.

Nobody gets to see those thoughts until I’ve gone over them again and turned them into a written prototype. In the same way that authors should never show someone else their first draft, letting someone into an idea too early can deflate it with early criticism. That isn’t to say that understanding your hypotheses and doing research to validate them isn’t important — but I’ve found that I need to keep up the emotional momentum behind an idea if I’m going to see it through, and to do that, I need to keep the illusion that it’s a really good idea just long enough to give it shape.

Of course, when it has shape, I try to get all the expert feedback I can. Everyone needs an editor, and asking the right questions early and learning fast is an obvious accelerant.

So I guess my creative process boils down to:

Embrace boredom and unstructured, open space to think creatively
Capture those creative thoughts in an untemplated way, through narrative writing
Identify my hypotheses and figure out what needs to be researched to back up the idea
Ask experts and do that research as needed in order to create a second, more validated draft
Get holistic feedback from trusted collaborators on that second draft
Iterate 1-2 times
Build the smallest, fastest thing I can based on the idea

There are no particular apps involved and no special frameworks. Really, it’s just about giving myself some space to be creative. And maybe that’s the only advice I can give to anyone building something new: give yourself space.


Simon Willison

Python Development on macOS Notes: pyenv and pyenv-virtualenvwrapper


Jeff Triplett shares the recipe he uses for working with pyenv (initially installed via Homebrew) on macOS.

I really need to start habitually using this. The benefit of pyenv over Homebrew's default Python is that pyenv managed Python versions are forever - your projects won't suddenly stop working in the future when Homebrew changes its default Python version.

Via @webology

Saturday, 10. February 2024

John Philpin : Lifestream

🚧 42/366



Ben Werdmüller

Hidden prison labor web linked to foods from Target, Walmart

"Intricate, invisible webs, just like this one, link some of the world’s largest food companies and most popular brands to jobs performed by U.S. prisoners nationwide, according to a sweeping two-year AP investigation into prison labor that tied hundreds of millions of dollars’ worth of agricultural products to goods sold on the open market." It's very on the nose that a fo

"Intricate, invisible webs, just like this one, link some of the world’s largest food companies and most popular brands to jobs performed by U.S. prisoners nationwide, according to a sweeping two-year AP investigation into prison labor that tied hundreds of millions of dollars’ worth of agricultural products to goods sold on the open market."

It's very on the nose that a former Southern slave plantation is now the country's largest maximum-security prison and a hub for this kind of forced labor. #Democracy

[Link]


Enough is enough—it’s time to set Julian Assange free


The former Editor in Chief of the Guardian on Julian Assange: "I know they won’t stop with Assange. The world of near-total surveillance, merely sketched by Orwell in Nineteen Eighty-four, is now rather frighteningly real. We need brave defenders of our liberties. They won’t all be Hollywood hero material, any more than Orwell’s Winston Smith was."

It's interesting that every description of Assange's actions needs to start with, "I'm not a fan of Assange." He's certainly a problematic character. But I do believe that the war leaks he helped release were an important insight into what was being done in our name. They were important, and it's also notable that they're being downplayed now.

Rusbridger's larger point - that his potential extradition has wider implications for press freedom - is also well-made. We need people to speak truth to power; sometimes that involves revealing the secrets that are being kept from us. #Media

[Link]


Heres Tom with the Weather

Phishing Mitigation for Mastodon.social

When a person is already logged into a mastodon instance, if they visit some pages on their instance associated with a user from another server, they are not redirected to the remote server because it is easier to interact with the remote user with their existing local session. However, if a person without an account is just visiting or they have an account but are logged out, mastodon redirects them to the remote server presumably because mastodon doesn’t know whether they have a local account and visiting the remote server will have the complete and authoritative data for that remote user.

A welcome update to mastodon.social (included in 4.3.0-nightly) is a warning presented to visitors or logged-out users before mastodon redirects them to a remote server for the original page. The code for Add confirmation when redirecting logged-out requests to permalink is particularly relevant to mastodon.social compared to other fediverse instances, as mastodon.social has become a relatively big target for phishing. It’s a good bet that if someone is navigating the fediverse, their account is on mastodon.social. So, if an arbitrary victim is logged out of their mastodon.social account and visits a mastodon.social page belonging to the attacker, prior to this mitigation mastodon.social would automatically redirect the victim to the attacker’s page, which might be a fake login form to trick the victim into submitting their login credentials to the attacker’s site. Unfortunately, a significant percentage of people will submit the form.
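Mastodon itself is written in Ruby on Rails, so the snippet below is only a rough Python-flavoured sketch of the decision this mitigation adds (the function and action names are hypothetical), not the project's actual code:

def respond_to_remote_permalink(remote_url: str, logged_in: bool) -> dict:
    """Sketch of the permalink behaviour described above (hypothetical names)."""
    if logged_in:
        # Keep the visitor on their home instance, where they can interact locally.
        return {"action": "render_local_view", "url": remote_url}
    # Logged-out visitors now get an interstitial instead of an automatic redirect,
    # so a phishing page is only reached after an explicit click.
    return {"action": "render_redirect_confirmation", "target": remote_url}

print(respond_to_remote_permalink("https://example.social/@someone/1", logged_in=False))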

One could imagine mastodon.social maintaining a list of trusted servers for automatic redirects but that would be an undesirable hornet’s nest and it’s not a bad thing when web surfers are conscious of the trust boundaries on the web.


John Philpin : Lifestream

🚧 41/366


Simon Willison

Rye: Added support for marking virtualenvs ignored for cloud sync

Rye: Added support for marking virtualenvs ignored for cloud sync

A neat feature in the new Rye 0.22.0 release. It works by using an xattr Rust crate to set the attributes "com.dropbox.ignored" and "com.apple.fileprovider.ignore#P" on the folder.
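Rye does this with a Rust xattr crate; purely as an illustration, the same two attributes can also be set by hand on macOS with the xattr command line tool. A minimal Python sketch follows (the helper name is made up, and it assumes macOS with Dropbox and the File Provider honouring these flags):

import subprocess
from pathlib import Path

def mark_ignored_for_cloud_sync(venv: Path) -> None:
    # Same attributes Rye 0.22.0 sets on the virtualenv folder.
    for attr in ("com.dropbox.ignored", "com.apple.fileprovider.ignore#P"):
        subprocess.run(["xattr", "-w", attr, "1", str(venv)], check=True)

mark_ignored_for_cloud_sync(Path(".venv"))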

Via Rye 0.22.0 release notes

Friday, 09. February 2024

Ben Werdmüller

Meta won't recommend political content on Threads

"Threads users will be allowed to follow accounts that post political content, but the algorithm that suggests content from users you don't follow will not recommend accounts that post about politics."

It's not clear to me what the definition of "politics" encompasses here. Is it just literal party / election politics? Does it include discussions about equal rights, which would disproportionately hit users from underrepresented groups?

Adam Mosseri says that he wants to create a "less angry place", but what about the topics where people are right to be angry? #Technology

[Link]


Phil Windleys Technometria

Zero Trust with Zero Data

The physical world is full of zero trust examples, but they gather attributes for the access control decisions in a very different way than we're used to online.

Presenting your ID to buy beer is used so often as an example of how verifiable credentials work that it's cliche. Cliche or not, there's another aspect of using an ID to buy beer that I want to focus on: it's an excellent example of zero trust.

Zero Trust operates on a simple, yet powerful principle: “assume breach.” In a world where network boundaries are increasingly porous and cyber threats are more evasive than ever, the Zero Trust model centers around the notion that no one, whether internal or external, should be inherently trusted. This approach mandates continuous verification, strict access controls, and micro-segmentation, ensuring that every user and device proves their legitimacy before gaining access to sensitive resources. If we assume breach, then the only strategy that can protect the corporate network, infrastructure, applications, and people is to authorize every access.

From Zero Trust
Referenced 2024-02-09T08:25:55-0500

The real world is full of zero trust examples. When we're controlling access to something in the physical world—beer, a movie, a boarding gate, points in a loyalty program, prescriptions, and so on—we almost invariably use a zero trust model. We authorize every access. This isn't surprising: the physical world is remarkably decentralized, there aren't many natural boundaries to exploit, and artificial boundaries are expensive and inconvenient.

The other thing that's interesting about zero trust in the physical world is that authorization is also usually done using Zero Data. Zero data is a name StJohn Deakin gave to the concept of using data gathered just in time to make authorization and other decisions rather than relying on great stores of data. There are obvious security benefits from storing less data, but zero data also offers significantly greater convenience for people and organizations alike. To top all that off, it can save money by reducing the number of partner integrations (i.e., far fewer federations) and enable applications that have far greater scale.

Let's examine these benefits in the scenario I opened with. Imagine that instead of using a credential (e.g., driver's license) to prove your age when buying beer, we ran convenience stores like a web site. Before you could shop, you'd have to register an account. And if you wanted to buy beer, the company would have to proof the identity of the person to ensure they're over 21. Now when you buy beer at the store, you'd log in so the system could use your stored attributes to ensure you were allowed to buy beer.

This scenario is still zero trust, but not zero data. And it's ludicrous to imagine anyone would put up with it, but we do it every day online. I don't know about you, but I'm comforted to know that every convenience store I visit doesn't have a store of all kinds of information about me in an account somewhere. Zero data stores less data that can be exploited by hackers (or the companies we trust with it).

The benefit of scale is obvious as well. In a zero data, zero trust scenario we don't have to have long-term transactional relationships with every store, movie, restaurant, and barber shop we visit. They don't have to maintain federation relationships with numerous identity providers. There are places where the ability to scale zero trust really matters. For example, it's impossible for every hospital to have a relationship with every other hospital for purposes of authorizing access for medical personnel who move or need temporary access. Similarly, airline personnel move between numerous airports and need access to various facilities at airports.

How do we build zero data, zero trust systems? By using verifiable credentials to transfer attributes about their subject in a way that is decentralized and yet trustworthy. Zero data aligns our online existence more closely with our real-world interactions, fostering new methods of communication while decreasing the challenges and risks associated with amassing, storing, and utilizing vast amounts of data.
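As a toy illustration of the idea, and not a real verifiable credentials stack, here is a minimal Python sketch of a just-in-time age check: an HMAC stands in for the issuer's signature, all names are hypothetical, and nothing about the holder is stored after the decision.

import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for the issuer's key material

def sign(claims: dict) -> str:
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def check_age(presentation: dict, minimum_age: int = 21) -> bool:
    """Authorize from the presented attributes alone; nothing is persisted."""
    claims, signature = presentation["claims"], presentation["signature"]
    if not hmac.compare_digest(sign(claims), signature):
        return False  # not issued by a party we trust
    born = date.fromisoformat(claims["birth_date"])
    today = date.today()
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= minimum_age

# The "ID" presented at the register: just the needed attribute plus the issuer's signature.
credential = {"claims": {"birth_date": "1990-06-01"}, "signature": sign({"birth_date": "1990-06-01"})}
print(check_age(credential))  # True if the holder is at least 21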

Just-in-time, zero data, attribute transfer can make many zero trust scenarios more realizable because it's more flexible. Zero trust with zero data, facilitated by verifiable credentials, represents a pivotal transition in how digital identity is used in authorization decisions. By minimizing centralized data storage and emphasizing cryptographic verifiability, this approach aims to address the prevalent challenges in data management, security, and user trust. By allowing online interactions to more faithfully follow established patterns of transferring trust from the physical world, zero trust with zero data promotes better security with increased convenience and lower cost. What's not to like?

You can get more detail on many of the concepts in this post like verifiable credentials in my new book Learning Digital Identity from O'Reilly Media.

Photo Credit: We ID Everyone from DALL-E (Public Domain). DALL-E apparently thinks a six-pack has 8 bottles, but this was the best of several attempts.


Ben Werdmüller

FCC Makes AI-Generated Voices in Robocalls Illegal

"The FCC announced the unanimous adoption of a Declaratory Ruling that recognizes calls made with AI-generated voices are "artificial" under the Telephone Consumer Protection Act (TCPA)."

A sign of the times that the FCC had to rule that making an artificial intelligence clone of a voice was illegal. I'm curious to understand if this affects commercial services that intentionally use AI to make calls on a user's behalf (eg to book a restaurant or perform some other service). #AI

[Link]

Thursday, 08. February 2024

John Philpin : Lifestream

040/366 | 🇫🇷 The 9th Arrondissement

« 039/366 | 041/366 »

The Palais Garnier In The 9th Arrondissement _(By Me)_

… it’s also February 9th.

Stretching?

This is one of 366 daily posts that are planned to appear throughout the year of 2024. More context will follow.

📡 Follow with RSS

🗄️ All the posts


Patrick Breyer

Police data exchange: Pirates criticise EU-wide networked facial image databases

Today the Members of the European Parliament adopted the trilogue result on the regulation on automated data exchange for police cooperation ("Prüm II"). CDU/CSU, SPD, FDP and AfD voted in favour, while the Pirate Party MEPs, the Greens and the Left voted against.

Dr. Patrick Breyer, Member of the European Parliament for the German Pirate Party, explains:

"With the help of error-prone facial recognition, police are to be allowed to search police databases across Europe – CDU, SPD and FDP pushed this through against our votes. Facial image databases networked across Europe enable biometric mass surveillance in public spaces. This will lead to countless arrests of innocent people and up to 99% false suspicions. Under constant surveillance we are no longer free! We must not normalise a culture of mistrust."

MEP Marcel Kolaja of the Czech Pirate Party comments:

"The current system, in which the police databases of the individual member states are linked to one another, already has a number of shortcomings. It deserves a reform. But not one that turns a few partial problems into a single big one. I therefore cannot support a stronger interconnection of the national databases at this point in time. Moreover, the rules we voted on today extend the scope of the system to police records, including records created on the basis of a mistaken assumption or hearsay. The revision of the rules should focus on making the system more secure and on enabling the exchange of relevant data to make law enforcement more effective – not on including irrelevant information."

The civil rights organisation EDRi had previously called for the agreement to be rejected.


John Philpin : Lifestream

All Change? More Of The Same? 🚧

Life can move slowly.
But then it doesn’t.
Case in point …. or is it?

🔗 Me on LinkedIN 12th December 2023

Resignation 14 December 2023

🔗 Business Desk Reporting 19th December 2023

Good result?

It’s a start.

Then we learn that the new chair is Jennifer Kerr - wait, she’s the chair of NZTE.

Indeed she is.

We now have one person chairing NZTE and Callaghan - though the profiles are slightly different. (I wonder why?)

NZTE

Jennifer has extensive governance experience, both in New Zealand and overseas. Her current positions include chair of Worksafe, deputy chair of Callaghan Innovation, a director of Eke Panuku Development Auckland and Waipa Networks, and member of New Zealand Police’s Audit and Risk Committee. Former governance roles include director of New Zealand Rugby and Counties Manukau Rugby Union. Previously, Jennifer has been general manager of customers, people and environment at Transpower, former group director of human resources and health and safety at Fonterra, and group manager of human resources for Mobil Oil for all of Europe. She has run her own consultancy and has strong experience in organisational strategy, chief executive recruitment and succession, executive remuneration and stakeholder relationships. Jennifer is a member of Global Women and has degrees in arts and social sciences from the University of Waikato. She is of Ngāti Mutunga and Ngāti Tama descent.

Callaghan

Jennifer Kerr has extensive international experience in the HR and health, safety and wellbeing sectors in North America, Europe, the United Kingdom and New Zealand. She was formerly General Manager of Customers, People and Environment at Transpower and Group Director Human Resources and Health & Safety at Fonterra. She has also operated her own consultancy business, and prior to that was the Group Manager of Human Resources for Mobil Oil for all of Europe. Jennifer has governance experience in the United Kingdom and New Zealand, including pension plan trustee roles in both countries. Jennifer is a member of New Zealand Global Women and has taken an active role during her career in mentoring and coaching other women to achieve their potential. She is of Ngāti Mutunga and Ngāti Tama descent.

One observation - nowhere in either of those bios is there a mention of anything to do with ‘R&D’, ‘Innovation’, ‘Tech’ and all the various offshoots and things that those two organizations are there for. I guess it is clear what the focus is.

And maybe something else ..

Ever since I arrived in NZ I have been amazed by the sheer number of governmental organizations that float around the country ‘doing’ ‘things’. It’s a country of 5 million people. These are the organizations I have identified so far that brave entrepreneurs navigate for help.

Callaghan Innovation
NZTE
NZGCP
MBIE
NZGIF
KiwiNet
AUT

… not to mention the ‘chambers’ and ‘ema’ and ‘business nz’ and ‘eda’s’ and ‘incubators’ and ‘accelerators’ and … (thank you Andy)

Here’s the question - as the new administration start implementing their plan - how many closures and/or mergers should we expect?

What do you think?

And are Callaghan and NZTE first?


Patrick Breyer

Federal Data Protection Act amendment brings unreliable Schufa scoring, rampant surveillance and fewer data disclosures

The federal government's draft bill adopted yesterday to reform the Federal Data Protection Act and to regulate scoring by credit agencies has drawn criticism from the Pirate Party's data protection experts. MEP Dr. Patrick Breyer explains:

"The federal government celebrates its bill as better consumer protection around Schufa scoring, but in truth unreliable Schufa scoring is being legalised, rampant surveillance cemented and citizens' right to information about their data restricted.

Last year the European Court of Justice actually prohibited disadvantaging people on the basis of mere score values altogether; the law now planned would allow it again. Under the draft, a company does not even have to tell you that you were rejected because of a score value. You can ask for your score, but what does it mean? How good or bad it is compared to others does not have to be disclosed. According to the draft, how our score comes about may also largely be kept secret – we are only to learn the 'most important' criteria.

The draft lacks any quality requirements for scoring algorithms. Imprecise and unreliable scores based on sparse data may continue to be used. No external audit or certification of the reliability of the scoring procedure is prescribed, and there is no provision at all for checking whether a scoring algorithm systematically discriminates by gender, age, origin and so on.

The draft also contains provisions on entirely different topics: video surveillance of public spaces by police and authorities is to be permitted expansively for the 'fulfilment of their tasks' – a criterion that is completely vague and disproportionately broad. And which data companies store about us, where it comes from and to whom they pass it on may in future be kept secret on the grounds of the 'overriding protection of trade secrets' – even though our data belongs to no one but ourselves! This practically invites internet corporations and other companies to refuse data disclosures across the board and to deny those affected their right to transparency."


John Philpin : Lifestream

📸 Side Hustle?


💬 You might guess that I hate the name of the book, but

💬 You might guess that I hate the name of the book, but there’s more. Read the sentence. The whole thing is just wrong.


Thinking it’s not wrong.


I thought it was only me that 💬 said that.


039/366 | ⏪ The Reverse Network Effect

« 038/366 | 040/366 »

A Vintage Machine - Photographed By Me

Back in the day a technological marvel such as this would cost you hundreds and hundreds of dollars. (No joking, kids.) Of course, the number of people you could communicate with using such a device - back in the day - could be counted in the millions. So $10 does sound like a bargain, except …

Back in the Day - the cost to connection ratio was very, very small.

For example, being conservative and for the sake of round numbers …

Cost : $1000
Number of People You Can Connect To : 10,000,000
Cost to Connection Ratio : $0.0001

Today’s numbers … with the same simplicity assumptions;

Cost : $10
Number of People You Can Connect To : 10
Cost to Connection Ratio : $1

Now THAT is inflation … not to mention that back in the day the network effect meant that your cost per connection was falling daily, whilst today it is growing exponentially.
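For what it's worth, a quick Python check of that back-of-the-envelope arithmetic, using the round numbers above:

def cost_per_connection(cost: float, reachable_people: int) -> float:
    return cost / reachable_people

back_in_the_day = cost_per_connection(1_000, 10_000_000)  # $0.0001
today = cost_per_connection(10, 10)                       # $1.00

print(f"Back in the day: ${back_in_the_day:.4f} per connection")
print(f"Today: ${today:.2f} per connection")
print(f"That is {today / back_in_the_day:,.0f}x more per connection")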

This is one of 366 daily posts that are planned to appear throughout the year of 2024. More context will follow.

📡 Follow with RSS

🗄️ All the posts


💬 I knew there was a good reason to procrastinate.


I wonder if ‘my beautiful wife’ is an employee, vendor or co

I wonder if ‘my beautiful wife’ is an employee, vendor or coworker.


Ben Werdmüller

What Medium's Tony Stubblebine has learned about tech and journalism

Tony is a smart, analytical person, which comes across strongly in this useful, transparent interview about the future of Medium. It's doing better than it ever has.

Also, I like this, which is very close to how my career has worked to date:

"The creator economy locked a lot of people into this passive income game that just doesn’t pay nearly as well as the other game, which is research something until you know more about it than anyone else, and then go get paid for that." #Media

[Link]

Wednesday, 07. February 2024

Ben Werdmüller

Review: Chris Dixon's Read Write Own

A characteristically great review from Molly White of Chris Dixon's disclosure-free shilling of blockchains as a way to save the web. Read, written, owned.

I do think there are some areas where blockchain is unfairly maligned: it introduced the idea of decentralization to a much wider audience, and it's the only community that has made widespread use of identity in the browser.

But this kind of shilling - particularly without disclosures - is out of date and unnecessary. What would serve the conversation is an open, good faith discussion of the possible options that doesn't go out of its way to dismiss technologies in active use as being dead. Otherwise what you're left with is the impression that rather than serving a higher calling to save the web, the author is looking for technologies he can make a lot of money from. #Technology

[Link]


Boeing Max 9s start flying again after door panel blowout

"“I would tell my family to avoid the Max. I would tell everyone, really,” said Joe Jacobsen, a former engineer at Boeing and the Federal Aviation Administration." And so I shall.

This is going to be a textbook example of how moving to a sales-led rather than engineering-led culture can be incredibly harmful. Clearly Boeing is feeling stress from its competition, but rushing planes out the door has hurt its standing rather than helped it. This ongoing incident makes me incredibly reluctant to fly on any Boeing plane at all. #Business

[Link]


Poll Shows 74 Percent of Republicans Like Donald Trump’s Dictator Plan

"Only 44 percent of adults completely rebelled at the notion of giving the former president — who is currently facing 91 felony charges — dictatorial authority, calling it “definitely bad” for America."

In case anyone was still wondering what the stakes are this election season. #Democracy

[Link]


Patrick Breyer

CDU and SPD extend voluntary chat control by Big Tech corporations

Today the EU Parliament gave the green light to extending the controversial chat control by US internet corporations until 2025. Pirates, the Left, FDP, Greens, AfD and two SPD MEPs voted against, while the CDU and almost all SPD MEPs voted in favour. Parliament wants to reach agreement with the Council as soon as next week in order to pass the extension in a fast-track procedure before the European elections.

Dr. Patrick Breyer, who has filed a lawsuit against Meta's unilateral chat control, comments:

"Extending voluntary chat control is a grave mistake: instead of pushing through the EU Parliament's new proposal for more effective, court-proof child protection without chat control, EU Commissioner 'Big Sister' Johansson now has time to find majorities for mandatory chat control 2.0, which would destroy the digital privacy of correspondence, and to manipulate critical EU states into agreeing by means of infamous campaigns and disinformation. The dispute over the length of the extension is meaningless, because after this precedent it will simply be extended again and again.

The voluntary mass surveillance of our personal messages and photos by US services such as Meta, Google or Microsoft makes no significant contribution to rescuing abused children or convicting abusers; instead it criminalises thousands of minors, overloads law enforcement and opens the door wide to arbitrary private justice by the internet corporations. If, according to Johansson's own figures from December, only one in four reports is even relevant to the police, that means 75,000 intimate beach photos and nude images being disclosed in Germany year after year – images that are not safe with unknown moderators abroad and have no business being in their hands.

The regulation on voluntary chat control is both unnecessary and contrary to fundamental rights: social networks, as hosting services, need no regulation to review public posts. The same applies to reports of suspicion submitted by users. And the error-prone automated reports from the scanning of private communications by Zuckerberg's Meta group, which account for 80% of chat reports, will disappear anyway with the announced introduction of end-to-end encryption. A legal opinion by a former ECJ judge shows that voluntary chat control, as a suspicionless and blanket surveillance measure, violates fundamental rights. An abuse survivor and I are suing against it."

Breyer's information portal on chat control


Ben Werdmüller

Apple releases 'MGIE', a revolutionary AI model for instruction-based image editing

"Computer - enhance!"

I like the approach in this release from Apple: an open source AI model that can edit images based on natural language instructions. In other words, a human can tell the engine what to do to an image, and it goes and does it.

Rather than eliminating the human creativity in the equation, it gives the person doing the photo editing superpowers: instead of needing to know how to use a particular application to do the editing, they can simply give the machine instructions. I feel much more comfortable with the balance of power here than with most AI applications.

Obviously, it has implications for vendors like Adobe, which have established some degree of lock-in by forcing users to learn their tools and interfaces. If this kind of user interface takes off - and, given new kinds of devices like Apple Vision Pro, it inevitably will - they'll have to compete on capabilities alone. I'm okay with that. #AI

[Link]


Patrick Breyer

Pirates reject deregulation of genetic engineering

In Strasbourg the EU Parliament adopted a controversial bill on the "deregulation of new genomic techniques (NGT)". Among other things, it envisages dismantling rules and requirements on safety checks, labelling obligations and the traceability of genetically modified organisms (GMOs). Many associations fear a change at the expense of consumers and farmers and in favour of monopoly businesses.

MEP Dr. Patrick Breyer, who voted against the bill, comments:

"The Pirate Party stands for progress and innovation in science, but not at the expense of ecological diversity. In view of the debate in the EU Parliament on deregulating new genomic techniques (NGT), we firmly oppose any watering down of the existing precautionary principle with its strict risk assessment.

We reject patents on seeds, especially when they require the use of specific pesticides or could endanger organic farming. The spread and propagation of genetically modified seed onto organic farmers' fields, and the resulting 'contamination', pose a serious threat to the organic status of the harvest. A lack of labelling not only restricts consumers' choices, it also threatens the diversity of our ecosystem and promotes monopolies in agriculture and in the seed and fertiliser industries."

Anja Hirschel, the Pirate Party's lead candidate for the 2024 European elections, comments:

"Preserving seed diversity, and protecting open-pollinated varieties in particular, is an essential part of our agricultural future. That also means not having to fear genetic erosion of one's own seed, or possibly even a copyright infringement on top of it. Consumers should be able to tell from clear labelling how their food was produced.

Research on NGTs should and may take place to drive innovation in the service of people, but it must happen safely and transparently – and that is exactly what the current legislative proposal undermines. We stand for an informed and balanced debate. The development and application of NGTs must be in everyone's best interest, with an objective assessment of the opportunities and risks. We have therefore decided to vote against the deregulation bill."

In today's vote a majority supported mandatory labelling for products containing NGTs. The decisive factor, however, will be the upcoming negotiations with the EU governments (trilogue).


New EU plans against child abuse fall short

Yesterday the EU Commission presented a proposal to update the criminal law provisions on the sexual abuse and sexual exploitation of children. Pirate Party MEP Dr. Patrick Breyer comments:

"The proposals fall far short of what is needed to protect children better. Alongside sensible proposals, they ride a wave of criminalisation and harsher sentencing, without any evidence of effectiveness, that Germany has long since left behind. Following the German model, encrypted messenger services, anonymous forums and encrypted file storage services are exposed to a risk of criminalisation and shutdown for 'facilitating or encouraging offences' (Article 8).

A dangerous gap in the plans is the often amateurish and under-resourced prosecution of abuse offences. We need Europe-wide standards and guidelines for criminal investigations into child abuse, including the identification of victims and the necessary technical means. We need figures on how long proceedings drag on and how successful they are, so that we can improve. Law enforcement must be obliged to report criminal material for deletion instead of simply declaring itself not responsible or overloaded, as in the Boystown case.

The draft also contains far too little that is concrete on better preventing child abuse. We need systematic scientific evaluation and implementation of multidisciplinary prevention programmes. The EU must play a key role in the exchange between research and practice, and in the evaluation, implementation and assessment of the best-proven prevention approaches. It is ridiculous that the draft proposes nothing more than a database for this."


John Philpin : Lifestream

038/366 | 🚇 A Stack Of Subs ❓

« 037/366 | 039/366 »

’A Stack of Subs’ By 'Leonardo'

.. no, not exactly. Not a submarine - a sub .. like a subway?

’A Stack of Subs’ By 'Leonardo'

A Train Station?

🤦‍♂️

Oh no. Not a ‘Subway’ … A ‘Subway sandwich’.

In fact make it a stack.

That’s right …A ‘stack of subs’.

’A Stack of Subs’ By 'Leonardo'

This is one of 366 daily posts that are planned to appear throughout the year of 2024. More context will follow.

📡 Follow with RSS

🗄️ All the posts


💬 Proactive TL;DR


💬 via Stowe Boyd


Friend of mine mentioned ‘Cyclades’, a French project from ‘

Friend of mine mentioned ‘Cyclades’, a French project from ‘back in the day’.

You know, they were solving for problems that TCP/IP was trying to solve before there was TCP/IP.

🤯

I did just find this ‘billy-doo’.

We get so used to the narrative we are told. Don’t we?

Tuesday, 06. February 2024

Ben Werdmüller

Want to sell a book or release an album? Better start a TikTok.

"You’ve got to offer your content to the hellish, overstuffed, harassment-laden, uber-competitive attention economy because otherwise no one will know who you are. [...] The commodification of the self is now seen as the only route to any kind of economic security."

In the new economy, every artist must also be an entrepreneur. In doing so, they compromise their intentions; a world where everyone is just shilling is one free from the purity of ideas and discourse. There is no such thing as being discovered or being heralded on the merit of your work alone. You've got to sell. #Culture

[Link]

Monday, 05. February 2024

@_Nat Zone

DID and OpenID seen from the XRI timeline - YouTube stream

Starting at 11 PM on February 8, 2024, I will be doing a YouTube stream called "DID and OpenID seen from the XRI timeline."

Once upon a time there was an identifier scheme called XRI. Its resolution mechanism for retrieving metadata from an identifier saw a certain amount of adoption, including in OpenID Authentication 2.0, but in the end it faded away and disappeared.
XRI is actually very similar to DID. So if we want DID to flourish, it seems important to unpack XRI's history and avoid falling into the same ruts. In this session I will walk through the path XRI took, mixing in a few episodes along the way.

I hope you will watch, drink in hand.

Incidentally, you can also watch it here, but if you jump to YouTube from here you can see the comments as well, so that is probably the better option.

Looking forward to seeing you there.


Damien Bod

Secure an ASP.NET Core Blazor Web app using Microsoft Entra ID

This article shows how to implement an ASP.NET Core Blazor Web application using Microsoft Entra ID for authentication. Microsoft.Identity.Web is used to implement the Microsoft Entra ID OpenID Connect client.

Code: https://github.com/damienbod/Hostedblazor8MeID

Note: I based this implementation on the example provided by Tomás López Rodríguez and adapted it.

Setup

The Blazor Web application is an OpenID Connect confidential client (code flow, PKCE) which uses Microsoft Entra ID for authentication. An Azure App registration (Web configuration) is used to create the client and only delegated scopes are used. A secret is used to authenticate the application in development. Client assertions can be used in production deployments. NetEscapades.AspNetCore.SecurityHeaders is used to implement the security headers as best possible for Blazor Web. No identity management or user passwords are handled in the application.

The client part of the Blazor Web application can use the PersistentAuthenticationStateProvider class to read the user profile data.

This uses data from the server part implemented in the PersistingRevalidatingAuthenticationStateProvider class. See the code in the github repo.

OpenID Connect confidential client

The AddMicrosoftIdentityWebAppAuthentication method is used to implement the client authentication using the Microsoft.Identity.Web packages. I use a downstream API to force the client to use the code flow with PKCE instead of the implicit flow. Microsoft Graph is only requesting delegated user profile data.

// Add authentication services
var scopes = builder.Configuration.GetValue<string>("DownstreamApi:Scopes");
string[] initialScopes = scopes!.Split(' ');

builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph("https://graph.microsoft.com/v1.0", scopes)
    .AddInMemoryTokenCaches();

The client automatically reads from the AzureAd configuration. This can be changed if you would like to update the product name. The client uses the standard Microsoft Entra ID setup. You need to add the permissions in the Azure App registration created for this application.

"AzureAd": {
  "Instance": "https://login.microsoftonline.com/",
  "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
  "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Azure portal. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]",
  "ClientId": "[Enter the Client Id (Application ID obtained from the Azure portal), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]",
  "ClientSecret": "[Copy the client secret added to the app from the Azure portal]",
  "ClientCertificates": [ ],
  // the following is required to handle Continuous Access Evaluation challenges
  "ClientCapabilities": [ "cp1" ],
  "CallbackPath": "/signin-oidc"
},
"DownstreamApi": {
  "Scopes": "User.ReadBasic.All user.read"
},

Login and Logout

An AuthenticationExtensions class was used to implement the login and the logout for the application. The Login method is an HTTP GET request which redirects to the OpenID Connect server. The Logout method is an authenticated HTTP POST request which requires CSRF protection and accepts no parameters. The return URL to the unauthenticated signed out page is fixed and so no open redirect attacks are possible. The logout cleans up the local cookies and redirects to the identity provider to log out on Microsoft Entra ID.

using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Authentication;

namespace BlazorWebMeID;

public static class AuthenticationExtensions
{
    public static WebApplication SetupEndpoints(this WebApplication app)
    {
        app.MapGet("/Account/Login", async (HttpContext httpContext, string returnUrl = "/") =>
        {
            await httpContext.ChallengeAsync(OpenIdConnectDefaults.AuthenticationScheme,
                new AuthenticationProperties
                {
                    RedirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/"
                });
        });

        app.MapPost("/Account/Logout", async (HttpContext httpContext) =>
        {
            var authenticationProperties = new AuthenticationProperties
            {
                RedirectUri = "/SignedOut"
            };

            await httpContext.SignOutAsync(OpenIdConnectDefaults.AuthenticationScheme, authenticationProperties);
            await httpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
        }).RequireAuthorization();

        return app;
    }
}

Security headers

The security headers are used to protect the session. When using AddInteractiveWebAssemblyComponents mode, the script CSP header is really weak and adds little protection, leaving the application open to numerous XSS and JavaScript attacks. It is not possible to use CSP nonces with Blazor Web using the InteractiveWebAssemblyComponents mode, or I have not found a way to do this, as the Blazor Web components cannot read the HTTP headers in the response. A Blazor WASM hosted in an ASP.NET Core application can use CSP nonces and is a more secure application.

namespace HostedBlazorMeID.Server;

public static class SecurityHeadersDefinitions
{
    public static HeaderPolicyCollection GetHeaderPolicyCollection(bool isDev, string? idpHost)
    {
        ArgumentNullException.ThrowIfNull(idpHost);

        var policy = new HeaderPolicyCollection()
            .AddFrameOptionsDeny()
            .AddContentTypeOptionsNoSniff()
            .AddReferrerPolicyStrictOriginWhenCrossOrigin()
            .AddCrossOriginOpenerPolicy(builder => builder.SameOrigin())
            .AddCrossOriginResourcePolicy(builder => builder.SameOrigin())
            .AddCrossOriginEmbedderPolicy(builder => builder.RequireCorp())
            .AddContentSecurityPolicy(builder =>
            {
                builder.AddObjectSrc().None();
                builder.AddBlockAllMixedContent();
                builder.AddImgSrc().Self().From("data:");
                builder.AddFormAction().Self().From(idpHost);
                builder.AddFontSrc().Self();
                builder.AddStyleSrc().Self();
                builder.AddBaseUri().Self();
                builder.AddFrameAncestors().None();

                // due to Blazor Web, nonces cannot be used with AddInteractiveWebAssemblyComponents mode.
                // weak script CSP....
                builder.AddScriptSrc()
                    .Self() // self required
                    .UnsafeEval() // due to Blazor WASM
                    .UnsafeInline(); // only a fallback for older browsers when the nonce is used
            })
            .RemoveServerHeader()
            .AddPermissionsPolicy(builder =>
            {
                builder.AddAccelerometer().None();
                builder.AddAutoplay().None();
                builder.AddCamera().None();
                builder.AddEncryptedMedia().None();
                builder.AddFullscreen().All();
                builder.AddGeolocation().None();
                builder.AddGyroscope().None();
                builder.AddMagnetometer().None();
                builder.AddMicrophone().None();
                builder.AddMidi().None();
                builder.AddPayment().None();
                builder.AddPictureInPicture().None();
                builder.AddSyncXHR().None();
                builder.AddUsb().None();
            });

        if (!isDev)
        {
            // maxage = one year in seconds
            policy.AddStrictTransportSecurityMaxAgeIncludeSubDomains();
        }

        policy.ApplyDocumentHeadersToAllResponses();
        return policy;
    }
}

Notes

I am starting to understand how Blazor Web works and have difficulty with the session state and sharing this between different components. Some basic browser security cannot be used, i.e. CSP nonces. The mixed mode has strange UI effects which I could not clean up.

There are now four types of Blazor applications.

Blazor WASM hosted in an ASP.NET Core application
Blazor Server
Blazor Web
Blazor WASM standalone

Blazor WASM hosted in an ASP.NET Core application and Blazor Server can be secured in a good way using the recommended security best practices (OpenID Connect confidential client). Blazor Web can implement a confidential client but is missing the recommended script session protection. Blazor WASM standalone cannot implement the recommended authentication as it is a public application and should no longer be used in secure environments.

Links

https://github.com/CrahunGit/Auth0BlazorWebAppSample/tree/master/BlazorApp4

https://github.com/dotnet/blazor-samples/tree/main/8.0/BlazorWebAppOidc

https://github.com/AzureAD/microsoft-identity-web

https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders

Friday, 02. February 2024

Just a Theory

Presentation: Introduction to the PGXN Architecture

I made a presentation on the PGXN architecture for the Tembo team.

As I started digging into the jobs and tools for the Postgres extension ecosystem as part of my new gig, I realized that most people have little knowledge of the PGXN architecture. I learned a lot designing PGXN and its services, and am quite pleased with where it ended up, warts and all. So I thought it worthwhile to put together a brief presentation on the fundamental design principles (static REST file API), inter-related services (root mirror, manager, API, site) and tools (CLI, CI/CD).

Yesterday, the Tembo blog published the presentation, including the video and slides, along with a high-level architecture diagram. I hope it’s a useful point of reference for the Postgres community as we look to better distribute extensions in the future.

More about… PGXN Software Architecture REST JSON Tembo

Thursday, 01. February 2024

Just a Theory

Contemplating Decentralized Extension Publishing

The Go package ecosystem uses distributed publishing to release modules without authentication or uploads. Could we do something similar for Postgres extensions?
TL;DR

As I think through the future of the Postgres extension ecosystem as a key part of the new job, I wanted to understand how Go decentralized publishing works. In this post I work it out, and think through how we might do something similar for Postgres extension publishing. It covers the Go architecture, namespacing challenges, and PGXS abuse; then experiments with URL-based namespacing and ponders reorganizing installed extension files; and closes with a high-level design for making it work now and in the future.

It is, admittedly, a lot, mainly written for my own edification and for the information of my fellow extension-releasing travelers.

I find it fascinating and learned a ton. Maybe you will too! But feel free to skip this post if you’re less interested in the details of the journey and want to wait for more decisive posts once I’ve reached the destination.

Introduction

Most language registries require developers to take some step to make releases. Many automate the process in CI/CD pipelines, but it requires some amount of effort on the developer’s part:

Register for an account
Learn how to format things to publish a release
Remember to publish again for every new version
Create a pipeline to automate publishing (e.g., a GitHub workflow)

Decentralized Publishing

Go decentralized publishing has revised this pattern: it does not require user registration or authentication to publish a module to pkg.go.dev. Rather, Go developers simply tag the source repository, and the first time someone refers to the tag in Go tools, the Go module index will include it.

For example, publishing v1.2.1 of a module in the github.com/golang/example repository takes just three commands:

git tag v1.2.1 -sm 'Tag v1.2.1'
git push --tags
go list -m github.com/golang/example@v1.2.1

After a few minutes, the module will show up in the index and then on pkg.go.dev. Anyone can run go get -u github.com/golang/example to get the latest version. Go developers rest easy in the knowledge that they’re getting the exact module they need thanks to the global checksum database, which Go uses “in many situations to detect misbehavior by proxies or origin servers”.

This design requires go get to understand multiple source code management systems: it supports Git, Subversion, Mercurial, Bazaar, and Fossil.1 It also needs the go.mod metadata file to live in the project defining the package.

But that’s really it. From the developer’s perspective it could not be easier to publish a module, because it’s a natural extension of the module development tooling and workflow of committing, tagging, and fetching code.

Decentralized Extension Publishing

Could we publish Postgres extensions in such a decentralized pattern? It might look something like this:

The developer places a metadata file in the proper location (control file, META.json, Cargo.toml, whatever — standard TBD)
To publish a release, the developer tags the repository and calls some sort of indexing service hook (perhaps from a tag-triggered release workflow)
The indexing service validates the extension and adds it to the index

Note that there is no registration required. It simply trusts the source code repository. It also avoids name collision: github.com/bob/hash is distinct from github.com/carol/hash.
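To make the second step concrete, here is a hypothetical sketch of what calling "some sort of indexing service hook" from a tag-triggered CI job could look like; the endpoint, payload shape, and function name are invented for illustration and are not an existing PGXN API:

import json
from urllib.request import Request, urlopen

def notify_index(repo: str, tag: str, index_url: str = "https://index.example.org/release") -> int:
    payload = json.dumps({"repository": repo, "tag": tag}).encode()
    req = Request(index_url, data=payload, headers={"Content-Type": "application/json"})
    # The service would then fetch the tagged repository, validate the metadata
    # file, and add the release to the index.
    with urlopen(req) as resp:
        return resp.status

# e.g. from a tag-triggered workflow:
# notify_index("github.com/carol/hash", "v1.0.0")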

This design does raise challenges for clients, whether they’re compiling extensions on a production system or building binary packages for distribution: they have to support various version control systems to pull the code (though starting with Git is a decent 90% solution).

Namespacing

Then there’s name conflicts. Perhaps github.com/bob/hash and github.com/carol/hash both create an extension named hash. By the current control file format, the script directory and module path can use any name, but in all likelihood they use these defaults:

directory = 'extension'
module_pathname = '$libdir/hash'

Meaning .sql files will be installed in the Postgres share/extension subdirectory — along with all the other installed extensions — and library files will be installed in the library directory along with all other libraries. Something like this:

pgsql
├── lib
│   └── hash.so
└── share
    ├── extension
    │   ├── hash.control
    │   └── hash--1.0.0.sql
    └── doc
        └── hash.md

If both projects include, say, hash.control, hash--1.0.0.sql, and hash.so, the files from one will stomp all over the files of the other.

Installer Abuse

Go avoids this issue by using the domain and path from each package’s repository in its directory structure. For example, here’s a list of modules from google.golang.org repositories:

$ ls -1 ~/go/pkg/mod/google.golang.org
api@v0.134.0
api@v0.152.0
appengine@v1.6.7
genproto
genproto@v0.0.0-20230731193218-e0aa005b6bdf
grpc@v1.57.0
grpc@v1.59.0
protobuf@v1.30.0
protobuf@v1.31.0
protobuf@v1.32.0

The ~/go/pkg/mod directory has subdirectories for each VCS host name, and each of those then has subdirectories for package paths. For the github.com/bob/hash example, the files would all live in ~/go/pkg/mod/github.com/bob/hash.

Could a Postgres extension build tool follow a similar distributed pattern by renaming the control file and installation files and directories to something specific for each, say github.com+bob+hash and github.com+carol+hash? That is, using the repository host name and path, but replacing the slashes in the path with some other character that wouldn’t create subdirectories — because PostgreSQL won’t find control files in subdirectories. The control file entries for github.com/carol/hash would look like this:

directory = 'github.com+carol+hash'
module_pathname = '$libdir/github.com+carol+hash'

Since PostgreSQL expects the control file to have the same name as the extension, and for SQL scripts to start with that name, the files would have to be named like so:

hash
├── Makefile
├── github.com+carol+hash.control
└── sql
    └── github.com+carol+hash--1.0.0.sql

And the Makefile contents:

EXTENSION  = github.com+carol+hash
MODULEDIR  = $(EXTENSION)
DATA       = sql/$(EXTENSION)--1.0.0.sql
PG_CONFIG ?= pg_config
PGXS      := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

In other words, the extension name is the full repository host name and path and the Makefile MODULEDIR variable tells pg_config to put all the SQL and documentation files into a directory named github.com+carol+hash — preventing them from conflicting with any other extension.

Finally, the github.com+carol+hash.control file — so named because it must have the same name as the extension — contains:

default_version = '1.0.0'
relocatable = true
directory = 'github.com+carol+hash'
module_pathname = '$libdir/github.com+carol+hash'

Note the directory parameter, which must match MODULEDIR from the Makefile, so that CREATE EXTENSION can find the SQL files. Meanwhile, module_pathname ensures that the library file has a unique name — the same as the long extension name — again to avoid conflicts with other projects.

That unsightly naming extends to SQL: using the URL format could get to be a mouthful:

CREATE EXTENSION "github.com+carol+hash";

Which is do-able, but some new SQL syntax might be useful, perhaps something like:

CREATE EXTENSION hash FROM "github.com+carol+hash";

Or, if we’re gonna really go for it, use slashes after all!

CREATE EXTENSION hash FROM "github.com/carol/hash";

Want to use both extensions but they have conflicting objects (e.g., both create a “hash” data type)? Put them into separate schemas (assuming relocatable = true in the control file):

CREATE EXTENSION hash FROM "github.com/carol/hash" WITH SCHEMA carol;
CREATE EXTENSION hash FROM "github.com/bob/hash" WITH SCHEMA bob;
CREATE TABLE try (
    h1 carol.hash,
    h2 bob.hash
);

Of course it would be nice if PostgreSQL added support for something like Oracle packages, but using schemas in the meantime may be sufficient.

Clearly we’re getting into changes to the PostgreSQL core, so put that aside and we can just use long names for creating, modifying, and dropping extensions, but not necessarily otherwise:

CREATE EXTENSION "github.com+carol+hash" WITH SCHEMA carol; CREATE EXTENSION "github.com+bob+hash" WITH SCHEMA bob; CREATE EXTENSION "gitlab.com+barack+kicker_type"; CREATE TABLE try ( h1 carol.hash, h2 bob.hash kt kicker ); Namespacing Experiment

To confirm that this approach might work, I committed 24134fd and pushed it in the namespace-experiment branch of the semver extension. This commit changes the extension name from semver to github.com+theory+pg-semver, and follows the above steps to ensure that its files are installed with that name.

Abusing the Postgres extension installation infrastructure like this does work, but suffers from a number of drawbacks, including:

The extension name is super long, as before, but now so too are the files in the repository (as opposed to the installer renaming them on install).
The shared library file has to have the long name, and so, therefore, does the .c source file.
The SQL files must all start with github.com+theory+pg-semver, although I skipped that bit in this commit; instead the Makefile generates just one from sql/semver.sql.
Any previous installation of the semver type would remain unchanged, with no upgrade path. Changing an extension’s name isn’t a great idea.

I could probably script renaming and modifying file contents like this and make it part of the build process, but it starts to get complicated. We could also modify installers to make the changes, but there are a bunch of moving parts they would have to compensate for, and given how dynamic this can be (e.g., the semver Makefile reads the extension name from META.json), we would rapidly enter the territory of edge case whac-a-mole. I suspect it’s simply too error-prone.

Proposal: Update Postgres Extension Packaging

Perhaps the Go directory pattern could inspire a similar model in Postgres, eliminating the namespace issue by teaching the Postgres extension infrastructure to include all but one of the files for an extension in a single directory. In other words, rather than files distributed like so for semver:

pgsql
├── lib
│   └── semver.so
└── share
    ├── extension
    │   ├── semver.control
    │   ├── semver--0.32.1.sql
    │   └── semver--0.32.0--0.32.1.sql
    └── doc
        └── semver.md

Make it more like this:

pgsql
└── share
    └── extension
        └── github.com
            └── theory
                └── pg-semver
                    ├── extension.control
                    ├── lib
                    │   └── semver.so
                    ├── sql
                    │   ├── semver--0.32.1.sql
                    │   └── semver--0.32.0--0.32.1.sql
                    └── doc
                        └── semver.md

Or perhaps:

pgsql
└── share
    └── extension
        └── github.com
            └── theory
                └── pg-semver
                    ├── extension.control
                    ├── semver.so
                    ├── semver--0.32.1.sql
                    ├── semver--0.32.0--0.32.1.sql
                    └── semver.md

The idea is to copy the files exactly as they’re stored in or compiled in the repository. Meanwhile, the new semver.name file — the only relevant file stored outside the extension module directory — simply points to that path:

github.com/theory/pg-semver

Then for CREATE EXTENSION semver, Postgres reads semver.name and knows where to find all the files to load the extension.
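
As a purely hypothetical illustration of that lookup, assuming the second layout above, the pointer file and module directory might look like this:

$ cat pgsql/share/extension/semver.name
github.com/theory/pg-semver
$ ls pgsql/share/extension/github.com/theory/pg-semver
extension.control  semver--0.32.0--0.32.1.sql  semver--0.32.1.sql  semver.md  semver.so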

This configuration would require updates to the control file, now named extension.control, to record the full package name and appropriate locations. Add:

name = 'semver'
package = 'github.com/theory/pg-semver'

This pattern could also allow aliasing. Say we try to install a different semver extension from github.com/example/pg-semver. This is in its extension.control file:

name = 'semver'
package = 'github.com/example/pg-semver'

The installer detects that semver.name already exists for a different package and raises an error. The user could then give it a different name by running something like:

make install ALIAS_EXTENSION_NAME=semver2

This would add semver2.name right next to semver.name, and its contents would contain github.com/example/pg-semver, where all of its files are installed. This would allow CREATE EXTENSION semver2 to load it without issue (assuming no object conflicts, hopefully resolved by relocate-ability).
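
Sketching it out, and keeping in mind that neither the .name files nor ALIAS_EXTENSION_NAME exist today, the two pointer files might sit side by side like so:

$ cat pgsql/share/extension/semver.name
github.com/theory/pg-semver
$ cat pgsql/share/extension/semver2.name
github.com/example/pg-semver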

I realize a lot of extensions with libraries could wreak some havoc on the library resolver having to search so many library directories, but perhaps there’s some way around that as well? Curious what techniques experienced C developers might have adopted.

Back to Decentralized Publishing

An updated installed extension file structure would be nice, and is surely worth a discussion, but even if it shipped in Postgres 20, we need an updated extension ecosystem today, to work well with all supported versions of Postgres. So let’s return to the idea of decentralized publishing without such changes.

I can think of two pieces that’d be required to get Go-style decentralized extension publishing to work with the current infrastructure.

Module Uniqueness

The first is to specify a new metadata field to be unique for the entire index, and which would contain the repository path. Call it module, after Go (a single Git repository can have multiple modules). In PGXN Meta Spec-style JSON it’d look something like this:

{ "module": "github.com/theory/pg-semver", "version": "0.32.1", "provides": { "semver": { "abstract": "A semantic version data type", } } }

Switch from the PGXN-style uniqueness on the distribution name (usually the name of the extension) and let the module be globally unique. This would allow another party to release an extension with the same name. Even a fork where only the module is changed:

{ "module": "github.com/example/pg-semver", "version": "0.32.1", "provides": { "semver": { "abstract": "A semantic version data type", } } }

Both would be indexed and appear under the module name, and both would be find-able by the provided extension name, semver.

Where that name must still be unique is in a given install. In other words, while github.com/theory/pg-semver and github.com/example/pg-semver both exist in the index, the semver extension can be installed from only one of them in a given Postgres system, where the extension name semver defines its uniqueness.

This pattern would allow for much more duplication of ideas while preserving the existing per-cluster namespacing. It also allows for a future Postgres release that supports something like the flexible per-cluster packaging as described above.2

Extension Toolchain App

The second piece is an extension management application that understands all this stuff and makes it possible. It would empower both extension development workflows — including testing, metadata management, and releasing — and extension user workflows — finding, downloading, building, and installing.

Stealing from Go, imagine a developer making a release with something like this:

git tag v1.2.1 -sm 'Tag v1.2.1'
git push --tags
pgmod list -m github.com/theory/pg-semver@v1.2.1

The creatively named pgmod tells the registry to index the new version directly from its Git repository. Thereafter anyone can find it and install it with:

pgmod get github.com/theory/pg-semver@v1.2.1 — installs the specified version
pgmod get github.com/theory/pg-semver — installs the latest version
pgmod get semver — installs the latest version or shows a list of matching modules to select from

Any of these would fail if the cluster already has an extension named semver with a different module name. But with something like the updated extension installation locations in a future version of Postgres, that limitation could be loosened.
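
Since pgmod is imaginary, this is only a guess at what such a conflict might look like in practice:

$ pgmod get github.com/example/pg-semver
Error: extension "semver" is already provided by module github.com/theory/pg-semver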

Challenges

Every new idea comes with challenges, and this little thought experiment is no exception. Some that immediately occur to me:

Not every extension can be installed directly from its repository. Perhaps the metadata could include a download link for a tarball with the results of any pre-release execution?
Adoption of a new CLI could be tricky. It would be useful to include the functionality in existing tools people already use, like pgrx.
Updating the uniqueness constraint in existing systems like PGXN might be a challenge. Most record the repository info in the resources META.json object, so it would be do-able to adapt into a new META format, either on PGXN itself or in a new registry, should we choose to build one.
Getting everyone to standardize on standardized versioning tags might take some effort. Go had the benefit of controlling its entire toolchain, while Postgres extension versioning and release management has been all over the place. However PGXN long ago standardized on semantic versioning and those who have released extensions on PGXN have had few issues (one can still use other version formats in the control file, for better or worse).
Some PGXN distributions have shipped different versions of extensions in a single release, or the same version as in other releases. The release version of the overall package (repository, really) would have to become canonical.

I’m sure there are more, I just thought of these offhand. What have you thought of? Post ’em if you got ’em in the #extensions channel on the Postgres Slack, or give me a holler on Mastodon or via email.

Or does it? Yes, it does. Although the Go CLI downloads most public modules from a module proxy server like proxy.golang.org, it still must know how to download modules from a version control system when a proxy is not available. ↩︎

Assuming, of course, that if and when the Postgres core adopts more bundled packaging that they’d use the same naming convention as we have in the broader ecosystem. Not a perfectly safe assumption, but given the Go precedent and wide adoption of host/path-based projects, it seems sound. ↩︎

More about… Postgres PGXN Extensions Go Packaging Distributed Publishing

Wednesday, 31. January 2024

Just a Theory

PGXN Tools v1.4

The pgxn-tools Docker image has seen some recent bug fixes and improvements.

Over on the PGXN Blog I’ve posted a brief update on recent bug fixes and improvements to the pgxn-tools Docker image, which is used fairly widely these days to test, bundle, and release Postgres extensions to PGXN. This fix is especially important for Git repositories:

v1.4.1 fixes an issue where git archive was never actually used to build a release zip archive. This changed at some point without noticing due to the introduction of the safe.directory configuration in recent versions of Git. Inside the container the directory was never trusted, and the pgxn-bundle command caught the error, decided it wasn’t working with a Git repository, and used the zip command, instead.

I also posted a gist listing PGXN distributions with a .git directory.

More about… Postgres PGXN Docker GitHub Workflow

Mike Jones: self-issued

Invited OpenID Federation Presentation at 2024 FIM4R Workshop

The OpenID Federation editors were invited to give a presentation on OpenID Federation at the 18th FIM4R Workshop, which was held at the 2024 TIIME Unconference. Giuseppe De Marco, Roland Hedberg, John Bradley, and I tag-teamed the presentation, with Vladimir Dzhuvinov also participating in the Q&A. Topics covered included motivations, architecture, design decisions, capabilities, use […]

The OpenID Federation editors were invited to give a presentation on OpenID Federation at the 18th FIM4R Workshop, which was held at the 2024 TIIME Unconference. Giuseppe De Marco, Roland Hedberg, John Bradley, and I tag-teamed the presentation, with Vladimir Dzhuvinov also participating in the Q&A. Topics covered included motivations, architecture, design decisions, capabilities, use cases, history, status, implementations, and people.

Here’s the material we used:

OpenID Federation 1.0: Shaping The Advanced Infrastructure of Trust

It was the perfect audience – chock full of people with practical federation deployment experience!


Fully-Specified Algorithms adopted by JOSE working group

The “Fully-Specified Algorithms for JOSE and COSE” specification has been adopted by the JOSE working group. See my original post about the spec for why fully-specified algorithms matter. Thanks to all who supported adoption and also thanks to those who provided useful detailed feedback that we can address in future working group drafts. The specification […]

The “Fully-Specified Algorithms for JOSE and COSE” specification has been adopted by the JOSE working group. See my original post about the spec for why fully-specified algorithms matter. Thanks to all who supported adoption and also thanks to those who provided useful detailed feedback that we can address in future working group drafts.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-jose-fully-specified-algorithms-00.html

Wrench in the Gears

Magical Realism Among The Raindrops

I woke up with piles of boxes filling one end of my living room. They’d been there since last weekend when I hauled them up from the basement and down from the third floor. Given our recent investigations into fascia, the psyche, and computation, perhaps the physicality of my cardboard wrangling was a subconscious tactic [...]

I woke up with piles of boxes filling one end of my living room. They’d been there since last weekend when I hauled them up from the basement and down from the third floor. Given our recent investigations into fascia, the psyche, and computation, perhaps the physicality of my cardboard wrangling was a subconscious tactic to help me process the loss. Contained within an assemblage of Playmobil, YA fantasy novels, vintage ephemera, and yellowed diplomas were the remaining belongings of two-thirds of my tiny nuclear family. At 8:30am movers were scheduled to arrive to whisk everything to storage, so I made sure to be up and out of the house early. Everything was organized and labelled with notes for my husband on what was to go, and which items were to be relocated in the house. It’s what moms do, right?

The night before, I tucked away “Piggy,” my child’s beloved stuffed animal companion, threadbare again after one flesh-colored terry cloth “chest” transplant. I gave him one last big hug and placed him gently next to “Brown Wolf,” both of them atop a quilt of wild 70s prints made by my grandmother in the summer camp trunk covered with Dolly’s Dairy Bar stickers, a Brevard, NC staple. It is the end of an era. I wept softly and alone in the chill of the sewing room.

After I had a few years of parenting under my belt, I would proffer parents of infants this insight – no matter how terrible or how wonderful life is at any given moment, the one constant is change. Cherish the sweet moments and know that the miserable ones will pass eventually. It may seem like forever, but it isn’t. I still think it’s solid advice, though bittersweet all the same. I’ve come to accept the situation and scrabble for scraps of grace to get through another day until the house sells and I can start to start over. We are still human, right? Feeling all of the big feelings is part of the job description, and yes, I’m here for it.

While packing up some lovely textiles woven by my husband’s late mother, I came across a few concert t-shirts from back in the day when he wore army jackets and his hair longish and was a DJ for late night hardcore shows that aired on college radio stations. Me? I was pretty square, naive, knew very little of the world. We met during a study abroad semester in Venice. On foggy fall nights we would walk holding hands to the tip of Dorsoduro, past the Salute plague church, to “the point,” Punta Della Dogana – young love. There we would sit watching the lights edging St. Mark’s Square where the Canale Grande met La Giudecca, the deep channel.

Back then I had no conception of water memory, Neptune,  psi, or waves of consciousness. Unseen from street level, Atlas and Fortuna kept watch over the point, the latter holding a rudder aloft showing the direction of the wind, and our fate. I can look back and see my course was set when I was just a callow child off on a big adventure that would eventually sweep me to the City of Brotherly Love nestled between the Delaware and the Schuylkill, and then on, apparently, to the very different burbling 4,000-year-old waters of the Ozarks.

A few years back, my friend Dru shared with me the Chinese parable of the man who lost his horse. The gist was that when something happens in our lives that is apparently good or apparently bad, it is best to keep a neutral outlook, because the good might lead to the bad, or the bad to the good. Life cycles up and down. That our lives together started out under the globe of the heavens held by two atlas figures topped by a seventeenth century statue of Fortune charting human weather, commerce, navigation, bureaucratic systems…it’s quite a perfect summation of the tapestry I’ve been weaving without quite being aware of it until now.

Good? Bad? I guess I shall stay the course and see what comes next and try to hold onto my sense of wonder and playfulness as the trickster energy bubbles up around me.

I texted my soon-to-be-ex-husband a picture of “Misfits: Legacy of Brutality,” saying it seems the tables have turned. Now he’s an institutional administrator while I’ve gradually slid over into the outlier spot. Earlier in the process of letting go I spent a few days sifting through drawers of old cards and letters and report cards and crayon drawings, the fleeting realms of moms. Back then I was still a beloved wife and mother and daughter. The cards said so, at least. Now, I am an out-of-tune, dissonant dissident in a world rushing obliviously forward. Only a handful of people seem to be able to see and comprehend the layers of logic protocols and invisible sensors bathing us in invisible harmonic frequencies and smart cybernetic governance to tap into our soulful connection to the divine.

This is the world my child will navigate. Capably, I have no doubt.

And me? I will watch the phase shift coalesce from the sidelines. If I’m lucky I will view the etheric confluence barefoot peeking over a bed of glorious rainbow zinnias with a tomato sandwich in hand, juice running down my forearm, and when the sandwich is done, I will put a kayak on my aging Subaru and point it towards the crystal clear waters of Lake Ouachita.

I have a memory of being in a circle of families. Our children were part of a semester school in the Pisgah Forest south of Asheville. We were about to entrust our precious ones to four months of expeditionary learning, which in retrospect wasn’t my best parenting choice but you do the best with the information you have at the time. One of the teachers read Khalil Gibran’s poem “On Children.”

 

And a woman who held a babe against her bosom said, Speak to us of Children.

And he said:

Your children are not your children.

They are the sons and daughters of Life’s longing for itself.

They come through you but not from you,

And though they are with you yet they belong not to you.

You may give them your love but not your thoughts,

For they have their own thoughts.

You may house their bodies but not their souls,

For their souls dwell in the house of tomorrow, which you cannot visit, not even in your dreams.

You may strive to be like them but seek not to make them like you.

For life goes not backward nor tarries with yesterday.

You are the bows from which your children as living arrows are sent forth.

The archer sees the mark upon the path of the infinite, and He bends you with

His might that His arrows may go swift and far.

Let your bending in the archer’s hand be for gladness.

For even as He loves the arrow that flies, so He loves also the bow that is stable.

 

I remember being nonplussed and unsettled. I wasn’t ready, but we never are. I understand more about information fields and the divine creation and energy than I did then. My child did come through me, and settled so far, so far away, launched from breast milk, home cooked dinners from our CSA farm box, hand sewn Halloween costumes, art classes, bedtime stories, family trips, walks to the library, and kisses on the brow, even after adopting the habit of pushing me away in annoyance.

With everything in order, I headed out to get a leisurely breakfast at a restaurant that was about a twenty-minute walk from home just off the Ben Franklin Parkway, Mr. Electric himself. There was a steady rain washing away last week’s snow leaving craggy miniature mountains of ice chips languishing on the soggy drifts of leaves that didn’t manage to get bagged up during the fall – crystalline phase shift. It was a mild, drizzly rain, the kind that has a hint of the spring that is to come. There wasn’t much wind, and the misty droplets coated all the edges, plants and buildings with sparkly beads of water – billions of miniscule lenses abstracting and expanding the limits of the ordinary material world. It was magical.

 

Ironically the umbrella I’d grabbed wasn’t my regular rainbow one, or even the boring black one, but another that was emblazoned with the UPenn Arts and Sciences logo on it. The past week or so I’ve spent some time learning about Mark Johnson, a philosopher from the University of Oregon who spent his career talking about the ways we use our body to make meaning in the world. You see, our intelligence isn’t just in our brains and the nervous system that keeps it jumping, it’s also in our body and the liquid crystal fascia is an amazing part of that system. Art, culture, creativity, sports, dance are all gifts we toss back and forth with the quantum field of our collective consciousness.

These goofballs really want to meld all of that with natural philosophy, STEM we call it now, to create a unified computational system. This goal of melding social and natural sciences extends back at least to Leibniz who sought to create a universal computer where the grace of the human soul might be leveraged to shine a light on the secrets of the cosmos. I don’t think it’s likely to work, at least not how they think, but I chuckled all the same that the umbrella I’d unthinkingly grabbed on the way out the door had a special message for me. I was walking away from the life represented by that UPenn logo and soon heading south into the unknown, and the water was there as my witness.

In his “Stalking the Wild Pendulum” Itzhak Bentov wrote of our lives as embodied information organizers, pattern seekers, and meaning makers. Bentov felt that as we make sense of the material world through our experiences, our epiphanies, heartbreaks, and joy, there are invisible tapestries woven on the warp and weft of psyche and biology. These complex works of art, infinitely varied, are gradually gifted to the universal mind upon the passing of our bodies. That vision really resonates with me, particularly in combination with Johnson’s theories about non-linguistic communication and the importance of art and culture and the movement of the body in carrying out that important work on both an individual and a social level.

Stephers shared with me a fascinating paper about social systems and how we use artefacts to leave traces of ourselves on our environments and those traces then influence the behaviors of others. Our use of cultural artefacts to imprint our consciousness on the energetic fields of the universe that vibrates around us is similar to the concept of pheromones that coordinate the collective efforts of eusocial creatures like ants or termites. My sense is that Web3 hopes to make cultural artefacts calculable at scale, converting them into universal coordinating signals to coax digital harmonies from the remaining wild corners that are slowly being overtaken by silicon. I mapped out some notes on this in my Dallas Mythos map below. 

Source: https://embed.kumu.io/dc9b79f81e2bb35bc4fce22d59dde62b#untitled-map?s=bm9kZS1iU0kxa29DZA%3D%3D

What follows are images from my walk to breakfast and back, with a slight extension of the route to the former GlaxoSmithKline US headquarters that has since been turned into a String Theory charter school. This embodied meaning making exhorts us to go out into the world, the real world and see what’s out there to be seen! For me, since everything turned upside down, I’ve felt inclined to stay close to home and disconnect. Maybe it’s not a healthy way to be, but for right now cocooning feels right. Still, when I go out with intention and my eyes open, the universe tells me stories and leaves me clues. No, not like I’m “hearing” things or “getting downloads,” more like I’m just aligned with the symbolic language of the universe and open to exploring in a wondering, wandering way.

First, I encountered a large puddle at an intersection. I paused for a few minutes to watch the ripples of the raindrops and all of the rings that danced across the surface of the water. Bentov’s book spoke of information being stored in interference patterns. He described a three-pebble experiment where the ripples were flash frozen into a holographic information storage system in the water. In my case there weren’t pebbles, but dancing drops that captivated me.

And then just a half block down I had to stop again and take a picture of this surprising gift – a fish ornamented with spirals swimming through a colorful mosaic sea, as if to remind me that we don’t get to choose our river. We can, however, pay attention and find the best way to work with the flow.

And after I finished my mushroom toast, all the better to get in touch with the mycelial elements, I came out of the restaurant and saw that in the chainlink fence surrounding the empty lot across the street someone had created an impromptu party cup installation, mashing the plastic into the gaps to spell out the word “HOPE” in crude, but very legible letters. I smiled.

At this point I decided to go a bit further afield rather than head straight back to the house. I passed the Temple of the Church of Jesus Christ of Latter Day Saints, which stands about a block and a half from the Catholic Cathedral of Saints Peter and Paul.

And between the two on a sidewalk above the I-676 crosstown highway was a pretty lame art installation of Ben Franklin with his lightning bolts and keys, all electrical potential and trusted cryptography and Metaverse portals. It’s all right there if you have the eyes to see it. In the background is the former GlaxoSmithKline building.

I knew I wanted to take a photo of the Zenos Frudakis 2000 bronze sculpture “Freedom.” Zenos is the name of an Old Testament prophet featured in the Book of Mormon, which is interesting since you can see the temple from the piece. The composition was inspired by Rodin’s “Gates of Hell,” in turn inspired by Dante, which can be found a few blocks further along the Parkway.

I’d seen the piece a few times before in passing, but I’d never stopped to look at it very closely. I was kind of shocked.

The image conveys a person escaping the confines of the background block. For me, however, knowing the multiverse phase shift that is underway this “freedom,” especially one sponsored by a company working with Google Alphabet’s Verily on electroceutical development, feels a bit off.

Could it be the animal skull at the base?

Or the smirking cat staring out?

Or an incongruous jester, which is totally out of keeping with the rest of the piece?

The texture of the figures spoke to me of fascia – movement, communication through embodiment, fractal tissues, today glistening with water.

Bodies at different scales embedded in their environment, a miniature twin tucked in one corner.

I felt drawn to reach out and put my hand on the outstretched hand. It reminded me of my dad, whose hands were so big and the way his fingers would curl down over mine and give them a firm squeeze. I miss that.

As I headed west to go home there was a colorfully odd sticker on a metal plate. A ghost with rainbow balloons – the souls are with us in all of their photonic playfulness.

Tucked up next to the retaining wall of the highway was a clutch of pine trees with an abundance of small cones scattered on the pavers below, which I took as a sign that I should fill my pockets and proceed to Sister Cities Park.

It’s an uptight pocket park, managed within an inch of its life in that Disneyesque, public-private partnership, “don’t walk on the grass”, “we have the rights to all pictures taken of you here,” and “yes, you need a permit authorization to do anything fun,” neutered nature tamed and contained vibe.

 

Oh, and this is the first time I noticed a wayside elucidating the fact that the site was formerly a pauper’s burial ground and execution site. Not sure how our Sister Cities, Tel Aviv among them, would feel if they knew. St. Peter in his alcove with his keys overlooks the crystal fountain (based in Toronto). At the corner where the park met the sidewalk was Robert Indiana’s Latin take on his iconic Love statue. “Amor” commissioned in 2015 for the papal visit and World Meeting of Families.

Of course I wanted to leave a heart on the LED light fountain. Today’s kids don’t get swimming pools, just interactive splash pads. Who knows how that’s going to fit into stigmergic social computational artefacts once we’re all hydrosapiens. As I was laying down the pine cones the security guard hustled over. Now, it’s January and raining and no one but me is in the park, so she must have been inside watching me on some closed circuit cameras. She asked what I was doing and I replied what does it look like I’m doing? She said are those acorns? I said they were pine cones and just kept doing what I was doing. She turned and left saying I had to “clean it up.” I encircled the heart with needles from a few pine branches that had dropped in the storm and set an intention. Then I crossed over to the Cathedral. As soon as I stepped away she swooped in with a broom and dustpan to rid the park of my dissonant heart. But she didn’t pick up the actual litter under the bench even after I pointed it out. Then she and the other guard went and hid inside the closed cafe.

 

So I decided to make another heart. This one at the base of the Shakespeare statue outside the library – all the world’s a stage. I used rock salt left over on the sidewalks for the outline and filled it in with small magnolia pods and a few sprigs of yarrow that were unbelievably still a very cheery yellow. I guess I should cut the security guard some slack. It is a spectacle after all, and she was simply playing her assigned archetypal role.

I went inside the library thinking that I would take a photo of the computer terminals in the children’s department that annoyed me so much when we made visits when my child was so little. I was pleasantly surprised to see that the beeping, jarring computer game terminals for toddlers were gone. Yay. I did see a display in the lobby that the public transit division wanted your daydreams, which is pretty creepy. I also had a long and rewarding conversation with the social worker who was manning the services desk in the lobby. I explained to her about social impact bonds and pay for success and blockchaining unhoused people (MyPass in Austin) and offered my theory that it was about biohybrid computing. She looked up Michael Levin on her phone and seemed genuinely interested, which gave some meaning to a morning that would otherwise have been rather bleak.

Three is such a nice number, so I decided to make a final heart on the fountain outside the Barnes Foundation – talk about the power of curated artifacts! I remember back to the lockdown days when a few of us did an esoteric tour of the Parkway and left some lovely crystals in the water. The water was turned off for the season, but there was a bit in one corner from the rain. This one was simple – not too much material around – twigs, sycamore ball fluff, an auburn oak leaf, and a smidge of pine.

The rest of the way home was pretty uneventful. I stopped to dip my hand in the water of the PECO electrical manhole cover – liquid portal.

I saw an acupuncture storefront with the sign for Still Waters.

And a window with artwork that looked like a black and white concept game – the Niantic Pokemon Go version of digital “Playful Cities” in abstracted, quantifiable reality.

And as I got to my house I saw my tree, that I will miss very much. In the city you only have room for at most one tree in the front. Ours was a thornless honey locust and we put it in when we bought our house in 1998. The past year or so the bark started peeling back and I noticed some larvae under it and I was worried for it. We had an injection put into the roots to stop the sucking bugs. A big hunk of bark fell off last year and I was very upset. But now six months later it’s healing up with new bark. You can still see the scarred place, but I want to believe it will bounce back, even in spite of all the incessant geoengineering. 

I am a Sagittarius and we are rather mercurial, but as spring arrives I will try to remember the parable of the man who lost his horse. Good-bad, bad-good – we just need to practice swimming in the flow in which we’ve found ourselves. I hope you’ve enjoyed coming with me on this damp stroll through the city that I loved and that I’m leaving. I’m sure I will find new stories in new places to share.

 

 

Tuesday, 30. January 2024

Jon Udell

How to Learn Unfamiliar Software Tools with ChatGPT

Here’s the latest installment in the series on working with LLMS: How to Learn Unfamiliar Software Tools with ChatGPT. Ideally, tools like GeoGebra and Metabase provide interfaces so intuitive that you rarely need to read the docs, and you can learn the software just by poking around in it. In reality, of course, we need … Continue reading How to Learn Unfamiliar Software Tools with ChatGPT

Here’s the latest installment in the series on working with LLMs: How to Learn Unfamiliar Software Tools with ChatGPT.

Ideally, tools like GeoGebra and Metabase provide interfaces so intuitive that you rarely need to read the docs, and you can learn the software just by poking around in it. In reality, of course, we need those docs — and they still need to be excellent. But now, we’ll extract a new benefit from them. When we can partner with machines that have read the docs, and can look over our shoulders as we try to do the things described in the docs, we’ll turbocharge our ability to dive into unfamiliar software tools and quickly learn how to use them.

The rest of the series:

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify

9 Learning While Coding: How LLMs Teach You Implicitly

10 How LLMs Helped Me Build an ODBC Plugin for Steampipe

11 How to Use LLMs for Dynamic Documentation

12 Let’s talk: conversational software development

13 Using LLMs to Improve SQL Queries

14 Puzzling over the Postgres Query Planner with LLMs

15 7 Guiding Principles for Working with LLMs

16 Learn by Doing: How LLMs Should Reshape Education


Michael Ruminer

Thoughts on Self-Sovereign Identity: A Systematic Review, Mapping and Taxonomy

Thoughts on Self-Sovereign Identity: A Systematic Review, Mapping and Taxonomy I ran across a paper named Self-Sovereign Identity: A Systematic Review, Mapping and Taxonomy. It was published mid-2022 so it is not the most up to date for the topic but it is very interesting and still of high value. It’s a meta-study of four research questions about self-sovereign identity. RQ-1: What Practi

Thoughts on Self-Sovereign Identity: A Systematic Review, Mapping and Taxonomy

I ran across a paper named Self-Sovereign Identity: A Systematic Review, Mapping and Taxonomy. It was published mid-2022 so it is not the most up to date for the topic but it is very interesting and still of high value. It’s a meta-study of four research questions about self-sovereign identity.

RQ-1: What Practical Problems Have Been Introduced and Solved?
RQ-2: What Properties, Formal Definitions and Cryptographic Tools Have Been Used?
RQ-3: What Conceptual Ideas Have Been Introduced or Refuted?
RQ-4: When, Where, and by Whom Were SSI Studies Published?

It spends a lot of text before the research questions on how it built the study objectively, criteria for the data, and criteria for the inclusion of the other research papers. Though interesting, it was not what compelled me in the paper. As you might imagine, it was the red meat of the paper, the research questions, that I really found most interesting.

You’ll find in the research question sections that it does a nice inventory of various papers and a description of what they cover. I found RQ-1 to be the most interesting as it covers a lot of verifiable credentials and verifiable presentation topics.

Of RQ-1 I found section 6.2 to be of special interest. It covers:

The operational facet is divided into two facets: VC and VP.
They are a collection of concepts related to the functional aspects
of verifiable credentials and verifiable presentations.

And includes:

Revocation
Decentralized Identifiers
Issuer Authorization
Delegation
Backup and Recovery

RQ-3 is a short section and starts with an interesting statement that is probably less true today than when written but still holds a lot of truth.

...that there is currently no agreement on a definition of SSI...
Our third research question is answered by an examination of
the literature’s debates on the SSI definition.

Though I appreciate RQ-4 and it makes sense in the context of the paper, I found the least value in its presentation. It did remind me of a relationship graph I created a number of years back except that my graph was on the relationship of the specifications at the time. The header image of this post is a small rendering of that graph. You can find the useful version at Verifiable Credentials Specification Map. Reader beware that the specification map I list was last updated late May of 2021 so it is not an accurate source of the state of specifications for today though many of the relationships it does show are likely valid. This is really a topic for a different day.

All in all, despite the relative age of the paper, the other papers it refers to are often still valid today in their intent and basic questions, agreements, and refutations. I think it is well worth your time to look at the research questions portions if interested in self-sovereign identity (a phrase that seems to be moving more so out of popular use) and verifiable credentials.


Jon Udell

You say feature, I say bug: the enshittification of Microsoft Paint

I’ve happily used MS Paint as my basic bitmap editor since Windows 3, almost 25 years ago. Mostly I’ve used it to create images from screenshots, but that has suddenly become way harder. Formerly, when I’d cut a region, the now-empty region would display using the default white background. Now it displays a checkered background … Continue reading You say feature, I say bug: the enshittification of

I’ve happily used MS Paint as my basic bitmap editor since Windows 3, almost 25 years ago. Mostly I’ve used it to create images from screenshots, but that has suddenly become way harder. Formerly, when I’d cut a region, the now-empty region would display using the default white background. Now it displays a checkered background like so.

Here is the procedure to refill the white background:

Switch the foreground color to white
Use the Fill tool to fill the checkered region
Then switch the foreground back to black.

ARE YOU KIDDING ME?

Nope. It’s evidently an unintended consequence of a pair of new features: layers and transparency.

To get started, click on the new Layers button in the toolbar, which will open a panel on the side of the canvas.

Microsoft also revealed today that an upcoming Paint feature is support for image transparency, which will add the ability to open and save transparent PNG files.

During editing, users will notice a prominent checkerboard pattern displayed on the canvas, serving as a visual indicator and highlighting the transparent regions within the image.

This ensures that when content is erased from the canvas, it is completely removed, eliminating the need to cover unwanted regions of an image with white fill.

bleepingcomputer.com

I never asked for these “long-awaited” new features; Paint is (or was) useful to me precisely because it only does the kind of basic bitmap editing I need when compositing screenshots. But I can opt out, right?

Nope.

ARE YOU KIDDING ME?

Nope.

This feature (layers and image transparency) seems to be introduced in September 2023 and doesn’t actually allow to be turned off.

Doing what vengy proposes for each and every image being edited is a natural madness and will drive even the most sane person crazy.

What worked for me was to uninstall Paint and replace it with a classic version:

Uninstalling can be done by simply right-clicking Paint icon in Start Menu and selecting Uninstall from context menu. Classic Paint can be get from here or here.

Download and install it.

Go to Settings → Apps → Apps & Features → More settings → App execution aliases.

Toggle the switch to Off for mspaint.exe and pbrush.exe items.

superuser.com

Evidently people are willing to hack their systems in order to revert to a now-unsupported version that they prefer. As insane as it would be, I’m considering whether to become one of those people. Sigh. I guess 25 years was a pretty good run.


Just a Theory

PGXN Challenges

Some thoughts on the challenges for PGXN’s role in the ideal PostgreSQL extension ecosystem of the future.

Last week, I informally shared Extension Ecosystem: Jobs and Tools with colleagues in the #extensions channel on the Postgres Slack. The document surveys the jobs to be done by the ideal Postgres extension ecosystem and suggests the tools and services required to do those jobs — without reference to existing extension registries and packaging systems.

The last section enumerates some questions we need to ponder and answer. The first one on the list is:

What will PGXN’s role be in this ideal extension ecosystem?

The PostgreSQL Extension Network, or PGXN, is the original extension distribution system, created 2010–11. It has been a moderate success, but as we in the Postgres community imagine the ideal extension distribution future, it’s worthwhile to also critically examine existing tools like PGXN, both to inform the project and to realistically determine their roles in that future.

With that in mind, I here jot down some thoughts on the challenges with PGXN.

PGXN Challenges

PGXN sets a lot of precedents, particularly in its decoupling of the registry from the APIs and services that depend on it. It’s not an all-in-one thing, and designed for maximum distributed dissemination via rsync and static JSON files.

But there are a number of challenges with PGXN as it currently stands; a sampling:

PGXN has not comprehensively indexed all public PostgreSQL extensions. While it indexes more extensions than any other registry, it falls far short of all known extensions. To be a truly canonical registry, we need to make it as simple as possible for developers to register their extensions. (More thoughts on that topic in a forthcoming post.)

In that vein, releasing extensions is largely a manual process. The pgxn-tools Docker image has improved the situation, allowing developers to create relatively simple GitHub workflows to automatically test and release extensions. Still, it requires intention and work by extension developers. The more seamless we can make publishing extensions the better. (More thoughts on that topic in a forthcoming post.)

It’s written in Perl, and therefore doesn’t feel modern or easily accessible to other developers. It’s also a challenge to build and distribute the Perl services, though Docker images could mitigate this issue. Adopting a modern compiled language like Go or Rust might increase community credibility and attract more contributions.

Similarly, pgxnclient is written in Python and the pgxn-utils developer tools in Ruby, increasing the universe of knowledge and skill required for developers to maintain all the tools. They’re also more difficult to distribute than compiled tools would be. Modern cross-compilable languages like Go and Rust once again simplify distribution and are well-suited to building both web services and CLIs (but not, perhaps native UX applications — but then neither are dynamic languages like Ruby and Python).

The PGXN Search API uses the Apache Lucy search engine library, a project that retired in 2018. Moreover, the feature never worked very well, thanks to the decision to expose separate search indexes for different objects — and requiring the user to select which to search. People often can’t find what they need because the selected index doesn’t contain it. Worse, the default index on the site is “Documentation”, on the surface a good choice. But most extensions include no documentation other than the README, which appears in the “Distribution” index, not “Documentation”. Fundamentally, the search API and UX need to be completely re-architected and -implemented.

PGXN uses its own very simple identity management and basic authentication. It would be better to have tighter community identity, perhaps through the PostgreSQL community account.

Given these issues, should we continue building on PGXN, rewrite some or all of its components, or abandon it for new services? The answer may come as a natural result of designing the overall extension ecosystem architecture or from the motivations of community consensus. But perhaps not. In the end, we’ll need a clear answer to the question.

What are your thoughts? Hit us up in the #extensions channel on the Postgres Slack, or give me a holler on Mastodon or via email. We expect to start building in earnest in February, so now’s the time!

More about… Postgres PGXN Extensions

Monday, 29. January 2024

Identity Woman

Event Reflection: Children’s Digital Privacy Summit 2024

Last week, I flew to LA To attend the Children’s Digital Privacy Summit hosted by Denise Tayloe and her team at Privo. I’ve known Denise since the early days of IIW, and it was great to meet her team for the first time.    They put on a great show what began with a talk […] The post Event Reflection: Children’s Digital Privacy Summit 2024 appeared first on Identity Woman.

Last week, I flew to LA to attend the Children’s Digital Privacy Summit hosted by Denise Tayloe and her team at Privo. I’ve known Denise since the early days of IIW, and it was great to meet her team for the first time. They put on a great show that began with a talk […]

The post Event Reflection: Children’s Digital Privacy Summit 2024 appeared first on Identity Woman.


Doc Searls Weblog

If Your Privacy Is in the Hands of Others Alone, You Don’t Have Any

In her latest Ars Technica story, Ashley Belanger reports that Patreon, the widely used and much-trusted monetization platform for creative folk, opposes the minimal personal privacy protections provided by a law you probably haven’t heard of until now: the Video Privacy Protection Act, or VPPA. Patreon, she writes, wants a judge to declare that law (which dates […]
Prompt: “A panopticon in which thousands of companies are spying on one woman alone in the center with nothing around her.” Via Microsoft Bing Image Creator

In her latest Ars Technica story, Ashley Belanger reports that Patreon, the widely used and much-trusted monetization platform for creative folk, opposes the minimal personal privacy protections provided by a law you probably haven’t heard of until now: the Video Privacy Protection Act, or VPPA. Patreon, she writes, wants a judge to declare that law (which dates from the videotape rental age) unconstitutional because it inconveniences Patreon’s ability to share the personal data of its users with other parties.† Naturally, the EFF, the Center for Democracy & Technology, the ACLU of Northern California, and the ACLU itself all stand opposed to Patreon on this and have filed an amicus brief explaining why.

But I’m not here to talk about that. I’m here to bring up the inconvenient fact that Ars Technica is also in the surveillance business. A PageXray of Ashley’s story finds this—

360 adserver requests
259 tracking requests
131 other requests

—which it visualizes with this:

And that’s just one small part of it.

But will Ashley, or any reporter, grab the third rail of their employer’s participation in the tracking-based advertising business? Or visit that business’s responsibility for what was already the biggest boycott in human history way back in 2015? The odds are against it. I’ve challenged many reporters to grab that third rail, just like I’m challenging Ashley here. In every case, nothing happened.

I never challenged Farhad Manjoo, but he did come through exposing The New York Times (his employer’s) own participation in the privacy-opposed tracking-based adtech business, back in 2019. Here’s a PageXray of tracking via that piece today:

Better, but not ideal.

Five years ago this month, I wrote a column about privacy in Linux Journal with the same title as this post. Here it is again, with just a few tiny edits. Amazing how little things have changed since then—and how much worse they have become. But I do see hope. Read on.

If you think regulations are going to protect your privacy, you’re wrong. In fact, they can make things worse, especially if they start with the assumption that your privacy is provided only by other parties, most of whom are incentivized to violate it.

Exhibit A for how much worse things can get is the EU’s GDPR (General Data Protection Regulation). As soon as the GDPR went into full effect in May 2018, damn near every corporate entity on the Web put up a “cookie notice” requiring acceptance of terms and privacy policies that allow them to continue violating your privacy by harvesting, sharing, auctioning off and otherwise using your data, and data about you.

For websites and services in that harvesting business (a population that rounds to the whole commercial web), these notices provide a one-click way to adhere to the letter of the GDPR while violating its spirit.

There’s also big business in the friction that it produces. To see how big, look up GDPR+compliance on Google. You’ll get 232 million results (give or take a few dozen million).

None of those results are for you, even though you are who the GDPR is supposed to protect. See, to the GDPR, you are a mere “data subject” and not an independent and fully functional participant in the technical, social, and economic ecosystem the Internet supports by design. All privacy protections around your data are the burden of other parties.

Or at least that’s the interpretation that nearly every lawmaker, regulatory bureaucrat, lawyer, and service provider goes by. (One exception is Elizabeth Renieris @hackylawyer. Her collection of postings is required reading on the GDPR and much else.) The same goes for those selling GDPR compliance services, comprising most of those 190 million GDPR+compliance search results.

The clients of those services include nearly every website and service on Earth that harvests personal data. These entities have no economic incentive to stop harvesting, sharing, and selling personal data the usual ways, beyond fear that the GDPR might actually be enforced, which so far (with few exceptions), it hasn’t been. (See Without enforcement, the GDPR is a fail.)

Worse, the tools for “managing” your exposure to data harvesters are provided entirely by the websites you visit and the services you engage. The “choices” they provide (if they provide any at all) are between 1) acquiescence to them doing what they please and 2) a maze of menus full of checkboxes and toggle switches “controlling” your exposure to unknown threats from parties you’ve never heard of, with no way to record your choices or monitor effects.

So let’s explore just one site’s presentation, and then get down to what it means and why it matters.

Our example is https://www.mirror.co.uk. If you haven’t clicked on that site already, you’ll see a cookie notice that says,

We use cookies to help our site work, to understand how it is used, and to tailor the adverts presented on our site. By clicking “Accept” below, you agree to us doing so. You can read more in our cookie notice. Or, if you do not agree, you can click Manage below to access other choices.

They don’t mention that “tailor the adverts” really means something like this:

We open your browser to infestation by tracking beacons from countless parties in the online advertising business, plus who-knows-what-else that might be working with those parties (there is no way to tell, and if there was we wouldn’t provide it), so those parties and their “partners” can use those beacons to follow you like a marked animal everywhere you go and report your activities back to a vast marketplace where personal data about you is shared, bought and sold, much of it in real time, supposedly so your eyeballs can be hit with “relevant” or “interest-based” advertising as you travel from site to site and service to service. While we are sure there are bad collateral effects (fraud and malware, for example), we don’t care about those because it’s our business to get paid just for clicks or “impressions,” whether you’re impressed or not—and the odds that you won’t be impressed average to certain.

Okay, so now click on the “Manage” button.

Up will pop a rectangle where it says “Here you can control cookies, including those for advertising, using the buttons below. Even if you turn off the advertising-related cookies, you will still see adverts on our site, because they help us to fund it. However, those adverts will simply be less relevant to you. You can learn more about cookies in our Cookie Notice on the site.”

Under that text, in the left column, are six “Purposes of data collection”, all defaulted with little check marks to ON (though only five of them show, giving the impression that there are only those five). The right column is called “Our partners”, and it shows the first five of what turn out to be 259 companies, nearly all of which are not brands known to the world or to anybody outside the business (and probably not known widely within the business as well). All are marked ON by that little check mark. Here’s that list, just through the letter A:

1020, Inc. dba Placecast and Ericsson Emodo 1plusX AG 2KDirect, Inc. (dba iPromote) 33Across 7Hops.com Inc. (ZergNet) A Million Ads Limited A.Mob Accorp Sp. z o.o. Active Agent AG ad6media ADARA MEDIA UNLIMITED AdClear GmbH Adello Group AG Adelphic LLC Adform A/S Adikteev ADITION technologies AG Adkernel LLC Adloox SA ADMAN – Phaistos Networks, S.A. ADman Interactive SL AdMaxim Inc. Admedo Ltd admetrics GmbH Admotion SRL Adobe Advertising Cloud AdRoll Inc adrule mobile GmbH AdSpirit GmbH adsquare GmbH Adssets AB AdTheorent, Inc AdTiming Technology Company Limited ADUX advanced store GmbH ADventori SAS Adverline ADYOULIKE SA Aerserv LLC affilinet Amobee, Inc. AntVoice Apester Ltd AppNexus Inc. ARMIS SAS Audiens S.r.l. Avid Media Ltd Avocet Systems Limited

If you bother to “manage” any of this, what record do you have of it—or of all the other collections of third parties who you’ve agreed to follow you around? Remember, there are a different collection of these at every website with third parties that track you, and different UIs, each provided by other third parties.

It might be easier to discover and manage parasites in your belly than cookies in your browser.

Think I exaggerate? The long list of cookies in just one of my browsers (which I had to dig deep to find) starts with this list:

1rx.io
247-inc.net
2mdn.net
33across.com
360yield.com
3lift.com
4finance.com

After several hundred others, my cookie list ends with:

zencdn.net
zoom.us
zopim.com

I know what zoom.us is. The rest are a mystery to me.

To look at just that first one, 1rx.io, I have to dig way down in the basement of the preferences directory (in Chrome it’s chrome://settings/cookies/detail?site=1rx.io), where I find that its locally stored data is this:

_rxuuid

Name: _rxuuid
Content: %7B%22rx_uuid%22%3A%22RX-2b58f1b1-96a4-4e1d-9de8-3cb1ca4175b0%22%2C%22nxtrdr%22%3Afalse%7D
Domain: .1rx.io
Path: /
Send for: Any kind of connection
Accessible to script: No (HttpOnly)
Created: Wednesday, December 12, 2018 at 4:48:53 AM
Expires: Thursday, December 12, 2019 at 4:48:53 AM

I’m a somewhat technical guy, and at least half of that stuff means nothing to me.
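For what it’s worth, that Content value is just URL-encoded JSON. A quick decode (a minimal sketch in Python) makes it slightly less mysterious:

```python
from urllib.parse import unquote

content = "%7B%22rx_uuid%22%3A%22RX-2b58f1b1-96a4-4e1d-9de8-3cb1ca4175b0%22%2C%22nxtrdr%22%3Afalse%7D"
print(unquote(content))
# {"rx_uuid":"RX-2b58f1b1-96a4-4e1d-9de8-3cb1ca4175b0","nxtrdr":false}
```

In other words, a unique identifier, presumably one by which 1rx.io can recognize this browser when it sees it again.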

As for “managing” those, my only choice on that page is to “Remove All”. Does that mean Remove everything on that page alone, or Remove all cookies everywhere? And how can I remember what I’ve had removed?

Obviously, there is no way for anybody to “manage” this, in any meaningful sense of the word.

We also can’t fix it on the sites and services side, no matter how much those sites and services care (which most don’t) about the “customer journey”, the “customer experience” or any of the other bullshit they’re buying from marketers this week.

Even within the CRM (customer relationship management) world, the B2B customers of CRM companies use one cloud and one set of tools to create as many different “experiences” for users and customers as there are companies deploying those tools to manage customer relationships from their side.  There are no corresponding tools on our side. (Though there is work going on. See here.)

So the digital world remains one where we have no common or standard way to scale our privacy and data usage tools, choices, or experiences across all sites and services. And that’s what we’ll need if we want real privacy online.

The simple place where we need to start is this: privacy is personal, meaning something we create for ourselves (which in the natural world we do with clothing and shelter, both of which lack equivalents in the digital world).

And we need to be clear that privacy is not a grace of privacy policies and terms of service that differ with every company and over which none of us have true control—especially when there is an entire industry devoted to making those companies untrustworthy, even if they are in full compliance with privacy laws.

Devon Loffreto (who coined the term self-sovereign identity and whose good work we’ll be visiting in an upcoming issue of Linux Journal) puts the issue in simple geek terms: we need root authority over our lives. Hashtag: #OwnRoot.

It is only by owning root that we can crank up agency on the individual’s side. We have a perfect base for that in the standards and protocols that gave us the Internet, the Web, email, and too little else. And we need it here too. Soon.

We (a few colleagues and I) created Customer Commons as a place for terms that individuals can proffer as first parties, just by pointing at them, much as licenses at Creative Commons can be pointed at. Sites and services can agree to those terms, and both can keep records and follow audit trails.

And there are some good signs that this will happen. For example, the IEEE approached Customer Commons last year with the suggestion that we stand up a working group for machine-readable personal privacy terms. It’s called P7012. If you’d like to join, please do.

Unless we #OwnRoot for our own lives online, privacy will remain an empty promise by a legion of violators.

One more thing. We can put the GDPR to our use if we like. That’s because Article 4 of the GDPR defines a data controller as “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data…” This means each of us can be our own data controller. Most lawyers dealing with the GDPR don’t agree with that. They think the individual data subject will always need a fiduciary or an intermediary of some kind: an agent of the individual, but not an individual with agency. Yet the simple fact is that we should have root authority over our lives online, and that means we should have some degree of control over our data exposures, and how our data, and data about us, is used—much as we do over how we control or moderate our privacy in the physical world. More about all that in upcoming posts.

The original version of this post was published on the Private Internet Access blog. Private Internet Access and Linux Journal at the time were both holdings of London Trust Media.

Also, check out the Privacy Manifesto at the ProjectVRM wiki. I maintain it and welcome bug fixes.

† This is an example of what Cory Doctorow calls “enshittification” and Wikipedia (at that link) more politely calls “platform decay.” It’s a big trade-away of goodwill by Patreon. Says to me they must be making an enshitload of money in the adtech fecosystem.


Phil Windleys Technometria

Acceptance Networks for Self-Sovereign Identity

We can't have broad adoption of verifiable credentials until we find a way to scale their presentation by providing tooling that credential verifiers can use to reduce their risk and gain confidence in the facts presented to them.

When I hand a merchant in London a piece of plastic that I got from a bank in Utah to make a purchase, a tiny miracle happens. Despite the fact that the merchant has never met me before and has no knowledge of my bank, she blithely allows me to walk out of the store with hundreds of dollars of merchandise, confident that she will receive payment. I emphasized the word confident in the last sentence because it's core to understanding what's happened. In the past, these kinds of transactions required that the merchant trust me or my bank. But in the modern world, trust has been replaced by confidence.

We often mix these concepts up and I'm as guilty as anyone. But trust always involves an element of risk, whereas confidence does not. These are not binary, but rather represent a spectrum. In the scenario I paint above, the merchant is still taking some risk, but it's very small. Technology, processes, and legal agreements have come together to squeeze out risk. The result is a financial system where the risk is so small that banks, merchants, and consumers alike have confidence that they will not be cheated. There's a name in the financial services industry for the network that reduces risk so that trust can be replaced with confidence: an acceptance network.

Acceptance Networks

An acceptance network is the network of merchants or service providers that accept a particular form of payment, usually credit or debit cards, from a particular issuer or payment network. The term refers to a broad ecosystem that facilitates these transactions, including point-of-sale terminals, online payment gateways, and other infrastructure. Each component of the acceptance network plays a crucial role in ensuring that transactions are processed efficiently, securely, and accurately. This drives out risk and increases confidence. Acceptance networks are foundational components of modern payment ecosystems and are essential to the seamless functioning of digital financial transactions. Visa, Mastercard, American Express, and Discover are all examples of acceptance networks.

Before the advent of acceptance networks, credit was a spotty thing with each large merchant issuing its own proprietary credit card—good only at that merchant. My mom and dad had wallets full of cards for JC Penney, Sears, Chevron, Texaco, and so on. Sears trusted its card. Chevron trusted its card. But it was impossible to use a Chevron card at Sears. Sears had limited means to verify that a Chevron card was real, and no way to clear the funds so that Chevron could pay Sears for the transaction.

That scenario is similar to the state of digital identity today. We have identity providers (IdPs) like Google and Apple who control a closed ecosystem of relying parties (with a lot of overlap). These relying parties trust these large IdPs to authenticate the people who use their services. They limit their risk by only using IdPs they're familiar with and only accepting the (usually) self-asserted attributes from the IdP that don't involve much risk. Beyond that they must verify everything themselves.

Fixing this requires the equivalent of an acceptance network for digital identity. When we launched Sovrin Foundation and the Sovrin network1 in 2016, we were building an acceptance network for digital identity, even though we didn't use that term to describe it. Our goal was to create a system of protocols, processes, technology and governance that would reduce the risk of self-sovereign identity and increase confidence in an identity system that let the subjects present verifiable credentials that carried reliable attributes from many sources.

I've written previously about identity metasystems that provide a framework for how identity transactions happen. Individual identity systems are built according to the architecture and protocols of the metasystem. Acceptance networks are an instantiation of the metasystem for a particular set of users and types of transactions. A metasystem for self-sovereign identity might have several acceptance networks operating in it to facilitate the operation of specific identity systems.

Problems an Acceptance Network Can Solve

To understand why an acceptance network is necessary to reduce risk and increase confidence in identity transactions, let's explore the gaps that exist without it. The following diagram shows the now familiar triangle of verifiable credential exchange. In this figure, issuers issue credentials to holders who may or may not be the subject of the credentials. The holder presents cryptographic proofs that assert the value of relevant attributes using one or more of the credentials that they hold. The verifier verifies the proof and uses the attributes.

Verifiable Credential Exchange

Let's explore what it means for the verifier to verify the proof. The verifier wants to know a number of things about the credential presentation:

Were the credentials issued to the entity making the presentation?

Have any of the credentials been tampered with?

Have any of the credentials been revoked?

What are the schema for the credentials (to understand the data in them)?

Who issued the credentials in the proof?

The first four of these can be done cryptographically to provide confidence in the attestation. The technology behind the credential presentation is all that's necessary. They can be automated as part of the exchange. For example, the proof can contain pointers (e.g., DIDs) to the credential definitions. These could contain public keys for the credential and references to schema.
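As a rough sketch of that automation (Python, with placeholder helpers and data shapes rather than any particular verifiable-credential library), checks 1 through 4 might look like this:

```python
# Minimal sketch of the automatable checks (1-4) on a credential presentation.
# The helper functions and the proof structure are placeholders for
# illustration, not any particular verifiable-credential library.

def resolve_did(did):
    # Placeholder: would resolve the DID to a credential definition holding
    # the issuer's public key and a reference to the credential schema.
    return {"public_key": "<issuer-public-key>", "schema": "<schema-reference>"}

def signature_valid(signed_object, public_key):
    # Placeholder: would perform real cryptographic signature verification.
    return True

def is_revoked(credential):
    # Placeholder: would consult the credential's revocation registry.
    return False

def verify_presentation(proof):
    # Check 1: were the credentials issued to the entity presenting them?
    if not signature_valid(proof, proof["holder_binding_key"]):
        return "presenter does not control these credentials"
    for cred in proof["credentials"]:
        issuer_def = resolve_did(cred["issuer_did"])  # pointer carried in the proof
        # Check 2: has the credential been tampered with?
        if not signature_valid(cred, issuer_def["public_key"]):
            return "credential signature invalid"
        # Check 3: has the credential been revoked?
        if is_revoked(cred):
            return "credential revoked"
        # Check 4: which schema describes the attributes?
        schema = issuer_def["schema"]
    # Check 5 -- should this issuer be relied on at all? -- is not a question
    # cryptography answers.
    return "cryptographic checks passed"
```

Everything up to that last comment can run without human judgment. The question in the comment is where the rest of the problem lies.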

The last one—who issued the credential—is not a technical matter. To see why, imagine that Alice (as holder and subject) has been issued a credential from her university (the issuer) giving information about her educational experiences there. She's applying for a job and wants to present the credential to a prospective employer (the verifier). How does the employer know that Alice didn't just make the credential herself or buy it from a diploma mill?

Knowing who issued the credential is not something that can be done solely with technology (although it can help). The employer in this scenario wants more than an identifier for the issuer. And they want to know that the public key really does belong to the university. In short, the employer wants to resolve the identifier to other information that tells them something about the university and the credential. There are lots of ways to do that—people have been doing this sort of thing for centuries: states keep registries of businesses (universities are businesses), accreditation organizations keep registries of schools they've accredited, the Department of Education has registries of various institutions of higher education in the US, and so on.

The employer could make use of these by building its own database of university identifiers it trusts. And every time a new one shows up, they could investigate and add it to their registry (or not)2. But going back to the magic of the credit card scenario that I opened this article with, if every merchant had to keep their own registry of banks, the experience wouldn't be magical for me or the merchant. The financial acceptance network makes it easy for the merchant to have confidence that they'll be paid because they have not only technology, but processes, protocols, governance, and legal agreements that make the verification process automatable.

Acceptance Networks for Digital Identity

For some use cases, keeping your own registry of the issuers you trust works. But for many, it's just too much work and makes it difficult to make use of a variety of credentials. This kind of "localized trust" is unwieldy in an identity system that might involve millions of issuers and identifiers and credentials for billions or even trillions of subjects. I've written extensively about identity metasystems and what they provide to help bridge the gap. This one, on how metasystems help provide life-like identity for digital systems is perhaps the most comprehensive. Acceptance networks implement metasystems.

An acceptance network for digital identity must have a number of important properties, including the following:

Credentials are decentralized and contextual—There is no central authority for all credentials. Every party can be an issuer, a holder (identity owner), or a verifier. Verifiable credentials can be adapted to any country, any industry, any community, or any set of trust relationships.

Credential issuers decide on what data is contained in their credentials—Anyone can create a credential schema for their use case. Anyone can create a credential definition based on any of these schemas.

Verifiers make their own trust decisions about which credentials to accept—There's no central authority who determines what credentials are important or which are used for what purpose. The acceptance network supplies the technical underpinnings for credential exchange and support protocols for automating the verification of credential issuers.

Credential verifiers don't need to have any specific technical, contractual, or commercial relationship with credential issuers—Verifiers do not need to contact issuers to perform verification.

Credential holders are free to choose which credentials to carry and what information to disclose—People and organizations are in control of the credentials they hold (just as they are with physical credentials) and determine what to share with whom.

You may be thinking "but these are mostly about decentralized decision making." While it would be easier to imagine the acceptance network as a big directory, that solution can't possibly support all the different ways people and organizations might want to use credentials. That doesn't mean an acceptance network couldn't be run by a single organization, like some financial services networks. Just that it has to support a variety of credential ecosystems running common protocols. I also think that there will be more than one, and most issuers and verifiers will be part of several (again, like in financial services).

Structure of an Acceptance Network

One of the things we can take away from the architecture of financial services acceptance networks is that they are built in layers. No one has thought more about how this can work than Drummond Reed and the Trust Over IP Foundation (ToIP).3 This figure, from ToIP, shows how such a stack works.

Trust Over IP Stack

The layers build on each other to provide something the lower level didn't. Layer 1 is the foundational functionality, like DID methods. Layer 2 builds on that to support creating digital relationships with anyone. Layer 3 uses those relationships to effect credential exchange. Layer 4 is the ecosystems that say things about the issuers for different use cases. The dual stack emphasizes the need for governance at every layer.

The acceptance network specifies the accepted protocols and technologies. The acceptance network also supports ecosystems, providing governance models and technology. The acceptance network is involved at each layer. Here are some examples of things an acceptance network might do at each layer:

Layer 1—limit the allowed DID methods and certify them.

Layer 2—require that wallets and agents using the network support specific versions of the DIDComm protocol. Provide a certification framework for wallet and agent vendors for security and interoperability.

Layer 3—require specific versions of the exchange protocols. Participate in protocol development. Provide a certification framework for specific implementations to aid with security and interoperability.

Layer 4—support the formation, certification, and discovery of credential ecosystem providers. Govern what is required to be a certified ecosystem provider and provide models for acceptable ecosystem governance.
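One way to picture this layer-by-layer role is as a policy document the network publishes and certifies against. The sketch below is purely illustrative; every method name, protocol label, and flag is a placeholder, not drawn from any real acceptance network's governance framework:

```python
# Purely illustrative: an acceptance network's per-layer requirements written
# down as data. All names, labels, and flags are placeholders for the example.
ACCEPTANCE_NETWORK_POLICY = {
    "layer_1_utilities": {
        "allowed_did_methods": ["did:examplemethod"],
        "did_method_certification_required": True,
    },
    "layer_2_agents_and_wallets": {
        "required_messaging_protocol": "DIDComm (network-specified version)",
        "wallet_certification_required": True,
    },
    "layer_3_credential_exchange": {
        "required_exchange_protocols": ["credential issuance (specified version)",
                                        "proof presentation (specified version)"],
        "implementation_certification_required": True,
    },
    "layer_4_ecosystems": {
        "ecosystem_provider_certification_required": True,
        "governance_model_templates_published": True,
    },
}
```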

As part of its overall governance of the ecosystem, the acceptance network also provides model legal agreements for and between the various participants, trust mark rights (think of the Visa logo), and drives a uniform user experience.

The following diagram shows the credential exchange from the preceding figure with an acceptance network providing support to the verifier so that it can have confidence in the data the issuer has supplied through the holder.

Acceptance Network in Operation

Credential issuers who know their credential might be widely used would join one or more acceptance networks. They agree to follow the rules and regulations in the governance framework of the acceptance network. The acceptance network issues a credential to them that they can use to prove they are a member.4 The acceptance network maintains a registry—likely a registry of registries—that verifiers can use to discover information about the issuer of a credential that has been presented to them.

Using an Acceptance Network

Returning to our previous scenario, Alice holds a credential issued by her university. She presents it to a prospective employer who wants to know that the credential is from an accredited university. Alice's university has been accredited by an accreditation organization5. They have followed their process for accrediting Alice's university and issued it a credential. They have also added the university to their registry. The university and the accrediting organization are members of an acceptance network. The employer's systems know to automatically query the acceptance network when they receive a credential proof from an issuer they do not know. Doing so provides the assurance that the issuer is legitimate. It could also provide information about the accreditation status of the university. This information reduces the risk that the employer would otherwise bear.

In this scenario, the employer is trusting the processes and structure of the acceptance network. The employer must decide which acceptance networks to use. This is much more scalable than having to make these determinations for every credential issuer. The acceptance network has allowed the verification process to scale and made the overall use of verifiable credentials easier and less risky.
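As a rough sketch of that flow (Python; the registry contents and the lookup are hypothetical stand-ins for whatever query protocol a real acceptance network would define):

```python
# Sketch of the verifier-side lookup when a proof arrives from an unfamiliar
# issuer. The registry contents and query are hypothetical stand-ins for
# whatever protocol a real acceptance network would define.

LOCALLY_TRUSTED_ISSUERS = {"did:example:university-we-already-know"}

ACCEPTANCE_NETWORK_REGISTRY = {
    # A registry of registries: issuer identifier -> who vouches for it and how.
    "did:example:alices-university": {
        "ecosystem": "example-higher-education-ecosystem",
        "accredited_by": "did:example:regional-accreditor",
        "status": "active",
    },
}

def issuer_confidence(issuer_did):
    if issuer_did in LOCALLY_TRUSTED_ISSUERS:
        return "trusted from the verifier's own list"
    entry = ACCEPTANCE_NETWORK_REGISTRY.get(issuer_did)  # in practice, a network query
    if entry and entry["status"] == "active":
        return ("accepted: accredited by " + entry["accredited_by"]
                + " within " + entry["ecosystem"])
    return "unknown issuer: treat the presentation as unverified"

print(issuer_confidence("did:example:alices-university"))
```

The verifier's only judgment call is which acceptance networks to consult; the rest can be automated.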

A Note on Implementation

This discussion of acceptance networks has undoubtedly brought images to your mind about how one is structured or how to build one. The comparison to financial services acceptance networks points to a network run by an organization. And the term registry brings to mind a database of some kind. While these are certainly possibilities, I think it's also possible to imagine more decentralized solutions. For example, the registry could be a distributed ledger or blockchain. The governance is likely most easily done by an organization, but there are other options like a decentralized autonomous organization (DAO). The scenario I described above illustrates a federated system where certifying authorities for specific ecosystems determine their own methods, processes, and requirements, but link their registry to that of the acceptance network.

Conclusion

As I mentioned above, we've been solving the problem of how to know which institutions to trust for centuries. We have ways of knowing whether a university is accredited, whether a bank is real, whether a company is actually registered and what its reputation is. What is missing is an easy way to make use of this information digitally so that processes for reducing risk can be automated. Acceptance networks rationalize the process and provide the needed tooling to automate these checks. They reduce the many-to-many problem that exists when each verifier has to determine whether to trust each issuer with a more scalable many-to-several system. Acceptance networks allow credential presentation to scale by providing the needed infrastructure for giving verifiers confidence in the facts that holders present to them.

Notes

You can see in the linked post how we used trust to describe what we were building, even as we were reducing risk and inspiring confidence.

Note that this investigation could make use of technology. Knowing the university's name, they could look up a well-known location on the university's web site to find the identifier. They could use PKI (digital certificates) to be sure they're talking to the right place. They could look up the university in an online registry of accredited universities.

Trust over IP isn't the only one working on this. Marie Wallace of Accenture and Stephen Wilson of Lockstep Partners have been writing about this idea.

Note that there could be different levels or types of members who perform different roles in the ecosystem and make different agreements.

An example is the Northwest Commission on Colleges and Universities.

Photo Credit: Data flowing over networks from DALL-e

Saturday, 27. January 2024

Wrench in the Gears

Human Weather: Why For Now, Outrage Isn’t My Tool of Choice

I am sharing the following exchange by text upon the request of the other party involved. This is someone I became acquainted with on education issues and over the past year around how their state fits into the topics I research. I did a livestream following this exchange, relating aspects of my experience to recent findings from Michael Levin’s lab around group information fields and communication blocking.

If you’re not sure what human weather is and how linguistic concepts manipulate it, check out this short link:

If you’d like to check out the clips from the Levin lab on embryology and information fields I have a playlist of clips here. Each one is super important.

Now in case you haven’t been following me for awhile, and you imagine I am excited about the prospect of using electrically engineered trachea cells to build swishy buildings in outer space, I’m not. In fact, I don’t support any of the work coming out of Levin’s lab or other biotechnology labs. My position at this time is that I seek to understand the concepts that are shaping the world around us, and the future, and demand that we have open, candid public conversations about the ethics of all of these things and how we should or should not proceed. Fair enough?

I don’t think I have to spend paragraphs or many minutes dramatizing my anger / fear / frustration over where we have landed as a culture. As far as I’m concerned, for me at least, that is a waste of valuable time. That said, if you feel directing your energy to that end is helpful, by all means don’t let me stop you. You do you, and I will do me, and if you are not into me or the insights I offer by all means move along to some other content creator. There’s something for everyone out there.

Below is a stream I did this week reflecting on our fraught exchange. At the time, I chose not to share the person’s name or the contents of our private exchange, because that wouldn’t have been proper. However, this person seems to believe that I misrepresented the conversation and requested I share the text exchange. So, in the spirit of openness I am honoring that request. Cheers to human weather, but I could use a little less turbulence.

 

 

 

These were two comments left on my livestream. I want to point out that I do not expect this person will share my perspective.

That said, I was also within my rights to remove myself from that emotional, energetic whirlwind. The commentary was very personally directed at me, and it was unkind.

My inclination, having researched the mechanics of the system, is that the words above are an example of how our inner worlds can be shaped by collective thought forms, Levin’s information fields. This is not to discount in any way the trauma we have all been through, the profound losses in all their myriad permutations. It is my opinion, that for me, staying in a negative energy state only serves the system’s goals. Therefore I am choosing for myself to continue with my research in a calm, level-headed way. If there are people out there who want to imagine that by stepping back from the outrage machine I am somehow agreeing to the plan, there’s nothing I can do about that. I am not about to transform myself to fit someone else’s idea of who they think I should be or what I should be doing. I will work hard not to try and force others to be who I might imagine they should be. Fair is fair.

We each have our own ways of being in the world. May we work towards fulfillment in ways that do not drag other people down. I have done that in the past myself. I recognize it, and I am trying to do better. We are all human. We are all learning. We all stumble. We all have grace.

Human weather can get ugly. Let us enjoy the clear, calm days when we have them.

In the exchange below, my comments are on the blue background. The other person’s comments are on the light gray background.

The Substack shared above can be read here.

I will pause here and note I have NEVER said “some new medical paradigm will be better than the old.” NEVER. This is attributing opinions to me that are not mine, but something imagined in the head of the person texting. This is human weather.

It was a check-in when I was discussing piezoelectricity and radium and the Curies’ ties to the Society for Psychical Research.

Ok, so the above statement to me implied that the work I am doing is a luxury – “quite a few people don’t have the luxury to worry about metaphysical research.” In my mind I do this work knowing that people in precarity are likely unable to do it and that since I am able, I have a responsibility to do it and put the conversation out into the public sphere for discussion.

 

Thursday, 25. January 2024

Doc Searls Weblog

Privacy is Social

Looking into the windows of a living room in an Amsterdam houseboat floating on a canal.

Eight years ago I was asked on Quora to answer the question “What is the social justification for privacy?” This was my answer

Society is comprised of individuals, thick with practices and customs that respect individual needs. Privacy is one of those. Only people who live naked outdoors without clothing and shelter can do without privacy. The rest of us all have ways of expressing and guarding spaces we call “private” — and that others respect as well.

Private spaces are virtual as well as physical. Society would not exist without well-established norms for expressing and respecting each other’s boundaries. “Good fences make good neighbors,” says Robert Frost.

One would hardly ask to justify the need for privacy before the Internet came along; but it is a question now because the virtual world, like nature in the physical one, doesn’t come with privacy. By nature, we are naked in both. The difference is that we’ve had many millennia to work out privacy in the physical world, and approximately two decades to do the same in the virtual one. That’s not enough time.

In the physical world, we get privacy from clothing and shelter, plus respect for each others’ boundaries, which are established by mutual understandings of what’s private and what’s not. All of these are both complex and subtle. Clothing, for example, customarily covers what we (in English vernacular at least) call our “privates,” but also allows us selectively to expose parts of our bodies, in various ways and degrees, depending on social setting, weather and other conditions. Privacy in our sheltered spaces is also modulated by windows, doors, shutters, locks, blinds, and curtains. How these signal intentions differ by culture and setting, but within each the signals are well understood, and boundaries are respected. Some of these are expressed in law as well as custom. In sum, they comprise civilized life.

Yet life online is not yet civilized. We still lack sufficient means for expressing and guarding private spaces, for putting up boundaries, for signaling intentions to each other, and for signaling back respect for those signals. In the absence of those we also lack sufficient custom and law. Worse, laws created in the physical world do not all comprehend a virtual one in which all of us, everywhere in the world, are by design zero distance apart — and at costs that yearn toward zero as well. This is still very new to human experience.

In the absence of restricting customs and laws it is easy for those with the power to penetrate our private spaces (such as our browsers and email clients) to do so. This is why our private spaces online today are infected with tracking files that report our activities back to others we have never met and don’t know. These practices would never be sanctioned in the physical world, but in the uncivilized virtual world they are easy to rationalize: Hey, it’s easy to do, everybody does it, it’s normative now, transparency is a Good Thing, it helps fund “free” sites and services, nobody is really harmed, and so on.

But it’s not okay. Just because something can be done doesn’t mean it should be done, or that it’s the right thing to do. Nor is it right because it is, for now, normative, or because everybody seems to put up with it. The only reason people continue to put up with it is because they have little choice — so far.

Study after study shows that people are highly concerned about their privacy online, and vexed by their limited ability to do anything about its absence. For example —

Pew reports that “93% of adults say that being in control of who can get information about them is important,” that “90% say that controlling what information is collected about them is important,” that 93% “also value having the ability to share confidential matters with another trusted person,” that “88% say it is important that they not have someone watch or listen to them without their permission,” and that 63% “feel it is important to be able to “go around in public without always being identified.”

Ipsos, on behalf of TRUSTe, reports that “92% of U.S. Internet users worry about their privacy online,” that “91% of U.S. Internet users say they avoid companies that do not protect their privacy,” “22% don’t trust anyone to protect their online privacy,” that “45% think online privacy is more important than national security,” that 91% “avoid doing business with companies who I do not believe protect my privacy online,” that “77% have moderated their online activity in the last year due to privacy concerns,” and that, in sum, “Consumers want transparency, notice and choice in exchange for trust.”

Customer Commons reports that “A large percentage of individuals employ artful dodges to avoid giving out requested personal information online when they believe at least some of that information is not required.” Specifically, “Only 8.45% of respondents reported that they always accurately disclose personal information that is requested of them. The remaining 91.55% reported that they are less than fully disclosing.”

The Annenberg School for Communications at the University of Pennsylvania reports that “a majority of Americans are resigned to giving up their data—and that is why many appear to be engaging in tradeoffs.” Specifically, “91% disagree (77% of them strongly) that ‘If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.'” And “71% disagree (53% of them strongly) that ‘It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.'”

There are both policy and market responses to these findings. On the policy side, Europe has laws protecting personal data that go back to the Data Protection Directive of 1995. Australia has similar laws going back to 1988. On the market side, Apple now has a strong pro-privacy stance, posted Privacy – Apple, taking the form of an open letter to the world from CEO Tim Cook. One excerpt:

“Our business model is very straightforward: We sell great products. We don’t build a profile based on your email content or web browsing habits to sell to advertisers. We don’t ‘monetize’ the information you store on your iPhone or in iCloud. And we don’t read your email or your messages to get information to market to you. Our software and services are designed to make our devices better. Plain and simple.”

But we also need tools that serve us as personally as do our own clothes. And we’ll get them. The collection of developers listed here by ProjectVRM are all working on tools that give individuals ways of operating privately in the networked world. The most successful of those today are the ad and tracking blockers listed under Privacy Protection. According to the latest PageFair/Adobe study, the population of persons blocking ads online passed 200 million in June of 2015, with a 42% annual increase in the U.S. and an 82% rate in the U.K. alone.

These tools create and guard private spaces in our online lives by giving us ways to set boundaries and exclude unwanted intrusions. These are primitive systems, so far, but they do work and are sure to evolve. As they do, expect the online world to become as civilized as the offline one — eventually.

For more about all of this, visit my Adblock War Series.


Mike Jones: self-issued

OAuth 2.0 Protected Resource Metadata draft addressing all known issues

Aaron Parecki and I have published a draft of the “OAuth 2.0 Protected Resource Metadata” specification that addresses all the issues that we’re aware of. In particular, the updates address the comments received during the discussions at IETF 118. As described in the History entry for -02, the changes were:

Switched from concatenating .well-known to the end of the resource identifier to inserting it between the host and path components of it.

Have WWW-Authenticate return resource_metadata rather than resource.
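As a rough illustration of the first change, here is a sketch of the two constructions (Python; the .well-known path segment shown is an assumption for the example, and the draft defines the normative rules):

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative only; the well-known path segment is an assumption for the
# example. The -02 draft defines the normative construction.
WELL_KNOWN = "/.well-known/oauth-protected-resource"

def metadata_urls(resource):
    parts = urlsplit(resource)
    # Old (-01) construction: append .well-known to the end of the resource identifier.
    old = resource.rstrip("/") + WELL_KNOWN
    # New (-02) construction: insert .well-known between the host and path components.
    new = urlunsplit((parts.scheme, parts.netloc, WELL_KNOWN + parts.path, "", ""))
    return old, new

print(metadata_urls("https://api.example.com/photos"))
# ('https://api.example.com/photos/.well-known/oauth-protected-resource',
#  'https://api.example.com/.well-known/oauth-protected-resource/photos')
```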

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-resource-metadata-02.html

Wednesday, 24. January 2024

Jon Udell

Learn by Doing: How LLMs Should Reshape Education

Here’s the latest installment in the series on working with LLMS: Learn by Doing: How LLMs Should Reshape Education.

If you’re teaching SQL, this article points to a pedagogical challenge/opportunity: How would I create a lesson that guides a student to an understanding of CROSS JOIN without ever mentioning or explicitly teaching anything about it?
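For readers who haven't met it themselves: CROSS JOIN simply pairs every row of one table with every row of another, the Cartesian product. A minimal sketch using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sizes (size TEXT);
    CREATE TABLE colors (color TEXT);
    INSERT INTO sizes VALUES ('S'), ('M'), ('L');
    INSERT INTO colors VALUES ('red'), ('blue');
""")
# CROSS JOIN yields the Cartesian product: 3 sizes x 2 colors = 6 rows.
for row in conn.execute("SELECT size, color FROM sizes CROSS JOIN colors"):
    print(row)
```

The pedagogical question is how to get a student to that output, and that insight, without leading with the definition.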

If you’re teaching anything else, the same question could (I’ll argue should) apply. How to scaffold learning by doing?

The rest of the series:

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify

9 Learning While Coding: How LLMs Teach You Implicitly

10 How LLMs Helped Me Build an ODBC Plugin for Steampipe

11 How to Use LLMs for Dynamic Documentation

12 Let’s talk: conversational software development

13 Using LLMs to Improve SQL Queries

14 Puzzling over the Postgres Query Planner with LLMs

15 7 Guiding Principles for Working with LLMs


Doc Searls Weblog

The Biggest Wow in Indiana

No, an AI did not paint this. It’s the West Baden Springs Hotel, which I shot with a camera while gawking at it for the first time.

In the summer of ’22 we were still new to Indiana and in an exploring mood. Out of nowhere one afternoon my wife said, “Let’s go check out French Lick.” She just liked the name of the town, plus the idea of taking a half-day road trip under a sweet blue sky and big puffy clouds. I said “Sure,” because I’m a basketball fan and supposed there would be a Larry Bird museum or something like it in his hometown. (Turns out there isn’t, but never mind that.)

Southern Indiana would be part of Kentucky if the Ohio River didn’t inconvenience that option. It’s hilly, and the natives have Southern accents. They say CEment and INsurance. (You know, like Larry Bird.) About halfway down the roads from Bloomington to French Lick, Google took us off the highway onto a twisty hypotenuse through deep woods and lumpy farmland, away from the hard right angle the boring main route would take at the city of Paoli (home of Indiana’s only ski slopes). This shortcut ended where the going got flat and we turned right onto a highway called 150. Then, in a short distance, we drove under a serious-looking arch that announced WEST BADEN SPRINGS. It might also have said ALMOST FRENCH LICK, because that’s what Google Maps told us. We were less than a mile away. After the arch, the edge of a town appeared straight ahead, but that scene was totally upstaged by the one on the right.

My wife and I both gasped. I said, “What the fuck is THAT?” My wife said “Wow!”

Suddenly we were … where? Spain? France? Croatia? There, under that perfect sky sat a beautiful giant resort-like structure that was obviously old, grand, well-maintained, and set among smaller structures just as old and nearly as grand, on a wide expanse of green.

A few hundred feet farther came this entrance on the right:

The only Carlsbads we knew were the town in California and the caverns in New Mexico. And all we knew about West Baden Springs was what that arch had just told us.

Of course, we couldn’t wait to get inside this thing, whatever it was. Here’s what we saw:

“Wow” doesn’t cover it. Nor do the many views of what turns out to have been—and still is—the West Baden Springs Hotel, which I’ve put in this album here. Check ’em out.

We’ve been back a bunch of times since then. And to French Lick, home of the French Lick Springs Hotel, which was once a great competitor to the West Baden Springs Hotel, and is almost as grand, just as historic, and a short and fun train ride away. Here’s the West Baden Station.

I’ll be putting up more albums of both places soon. Meanwhile, come visit, ya’ll.

Monday, 22. January 2024

Just a Theory

I’m a Postgres Extensions Tembonaut

New year, new job. I accepted a new position at Tembo to work on improving the PostgreSQL extension ecosystem full time.

New year, new job.

I’m pleased to announce that I started a new job on January 2 at Tembo, a fully-managed PostgreSQL developer platform. Tembo blogged the news, too.

I first heard from Tembo CTO Samay Sharma last summer, when he inquired about the status of PGXN, the PostgreSQL Extension Network, which I built in 2010–11. Tembo bundles extensions into Postgres stacks, which let developers quickly spin up Postgres clusters with tools and features optimized for specific use cases and workloads. The company therefore needs to provide a wide variety of easy-to-install and well-documented extensions to power those use cases. Could PGXN play a role?

I’ve tended to PGXN’s maintenance for the last fourteen years, thanks in no small part to hosting provided by depesz. As of today’s stats it distributes 376 extensions on behalf of 419 developers. PGXN has been a moderate success, but Samay asked how we could collaborate to build on its precedent to improve the extensions ecosystem overall.

It quickly became apparent that we share a vision for what that ecosystem could become, including:

Establishing the canonical Postgres community index of extensions, something PGXN has yet to achieve

Improving metadata standards to enable new patterns, such as automated binary packaging

Working with the Postgres community to establish documentation standards that encourage developers to provide comprehensive extension docs

Designing and building developer tools that empower more developers to build, test, distribute, and maintain extensions

Over the past decade I’ve had many ideas and discussions on these topics, but seldom had the bandwidth to work on them. In the last couple of years I’ve enabled TLS, improved the site display, increased password security, and added a notification queue with hooks that post to both Twitter (RIP @pgxn) and Mastodon (@pgxn@botsin.space). Otherwise, aside from keeping the site going, periodically approving new accounts, and eyeing the latest releases, I’ve had little bandwidth for PGXN or the broader extension ecosystem.

Now, thanks to the vision and strategy of Samay and Tembo CEO Ry Walker, I will focus on these projects full time. The Tembo team have already helped me enumerate the extension ecosystem jobs to be done and the tools required to do them. This week I’ll submit it to collaborators from across the Postgres community1 to fill in the missing parts, make adjustments and improvements, and work up a project plan.

The work also entails determining the degree to which PGXN and other extension registries (e.g., dbdev, trunk, pgxman, pgpm (WIP), etc.) will play a role or provide inspiration, what bits should be adopted, rewritten, or discarded.2 Our goal is to build the foundations for a community-owned extensions ecosystem that people care about and will happily adopt and contribute to.

I’m thrilled to return to this problem space, re-up my participation in the PostgreSQL community, and work with great people to build out the extensions ecosystem for the future.

Want to help out or just follow along? Join the #extensions channel on the Postgres Slack. See you there.

Tembo was not the only company whose representatives have reached out in the past year to talk about PGXN and improving extensions. I’ve also had conversations with Supabase, Omnigres, Hydra, and others. ↩︎

Never be afraid to kill your darlings↩︎


Sunday, 21. January 2024

Mike Jones: self-issued

Celebrating Ten Years of OpenID Connect at the OpenID Summit Tokyo 2024

We held the first of three planned tenth anniversary celebrations for the completion of OpenID Connect at the OpenID Summit Tokyo 2024. The four panelists were Nov Matake, Ryo Ito, Nat Sakimura, and myself. We shared our perspectives on what led to OpenID Connect, why it succeeded, and what lessons we learned along the way.

The most common refrain throughout our descriptions was the design philosophy to “Keep simple things simple”. I believe that three of the four of us cited it.

I recounted that we even had a thought experiment used to make the “Keep simple things simple” principle actionable in real time: the “Nov Matake Test”. As we considered new features, we’d ask ourselves “Would Nov want to add it to his implementation?” And “Is it simple enough that he could build it in a few hours?”

The other common thread was the criticality of interop testing and certification. We held five rounds of interop testing before finishing the specifications, with the specs being refined after each round based on the feedback received. The early developer feedback was priceless – much of it from Japan!

Our OpenID Connect 10th anniversary presentations were:

Remarks by Mike Jones

Remarks by Nov Matake

Remarks by Ryo Ito

Remarks by Nat Sakimura

Thanks to the OpenID Foundation Japan for the thought-provoking and enjoyable OpenID Summit Tokyo 2024!

Friday, 19. January 2024

Mike Jones: self-issued

2024 OpenID Foundation Board Election Results

Thanks to those of you who elected me to a two-year term on the OpenID Foundation board of directors. This is an incredibly exciting time for the OpenID Foundation and for digital identity, and I’m thrilled to be able to contribute via the OpenID board. Thanks for placing your trust in me!

I’d like to also take this opportunity to congratulate my fellow board members who were also elected: George Fletcher, Atul Tulshibagwale, and Mark Verstege. See the OpenID Foundation’s announcement of the 2024 election results.

My candidate statement was:

I am on a mission to build the Internet’s missing identity layer. OpenID specifications and initiatives are key to realizing that vision.

Widespread deployment of OpenID specifications has the potential to make people’s online interactions more seamless, secure, and valuable. I have been actively working since 2007 to make that an everyday reality.

2024 has huge potential for advances in digital identity. People are starting to have identity wallets holding digital credentials that they control. National and international federations are being established. Open Banking and Open Finance deployments are ongoing. Adoption of OpenID Connect (which we created a decade ago!) continues going strong. We’re on track to have OpenID Connect be published as ISO standards. OpenID specifications and programs are essential to all these outcomes.

While many of you know me and my work, here’s a few highlights of my contributions to the digital identity space and the OpenID community:

– I was primary editor of OpenID Connect, primary editor of the OAuth 2.0 bearer token specification [RFC 6750], and primary editor of the JSON Web Token (JWT) specification [RFC 7519] and the JSON Object Signing and Encryption (JOSE) specifications [RFCs 7515-7518], which are used by OpenID Connect. I was an editor of the Security Event Token specification [RFC 8417], which is used by Shared Signals and OpenID Connect. I’m an editor of the SIOPv2 specification and a contributor to the other OpenID for Verifiable Credentials specifications. I’m an editor of the OpenID Federation specification. The OAuth DPoP specification [RFC 9449] was my latest RFC. I’m an author of 32 RFCs and 17 final OpenID specifications, with more of each in the pipeline.

– I spearheaded creation of the successful OpenID Connect certification program and continue actively contributing to its success. Over 2,800 certifications have been performed and the pace keeps increasing! Certification furthers the Foundation’s goals of promoting interoperation and increasing the quality of implementations. It’s also become an important revenue stream for the Foundation.

– My contributions to the Foundation have included serving on the board since 2008, serving as board secretary during most of my tenure. I’ve helped organize numerous OpenID summits and working group meetings and regularly present there. I chaired the election committee that developed the Foundation’s election procedures and software. I co-chaired the local chapters committee that developed the policies governing the relationships with local OpenID chapters around the world. I serve on the liaison committee, facilitating our cooperation with other organizations. And way back in 2007, I worked with the community to create the legal framework for the OpenID Foundation, enabling both individuals and corporations to be full participants in developing OpenID specifications and ensuring that they can be freely used by all.

I’d like to continue serving on the OpenID board, because while the OpenID community is having notable successes, our work is far from done. Taking it to the next level will involve both additional specifications work and strategic initiatives by the Foundation. We need to continue building a broad base of supporters and deployers of OpenID specifications around the world. We need to continue fostering close working relationships with partner organizations. And we need to continue safeguarding OpenID’s intellectual property and trademarks, so they remain freely available for all to use.

I have a demonstrated track record of energetically serving the OpenID community and producing results that people actually use. I plan to continue taking an active role in making open identity solutions even more successful and ubiquitous. That’s why I’m running for a community board seat in 2024.

Mike Jones
michael_b_jones@hotmail.com
Blog: https://self-issued.info/
Professional Website: https://self-issued.consulting/


reb00ted

I can't remember any time when more spaces for innovation and entrepreneurship were wide open than now

Lenin supposedly said:

There are decades where nothing happens, and there are weeks where decades happen.

It’s the same in technology.

I came to Silicon Valley in the mid-90’s, just in time to see the dot-com boom unfold. Lots happened very quickly in that time. There were a few more such periods of rapid change since, like when centralized social media got going, and when phones turned into real computers. But for many years now, not much has happened: we got used to the idea that there’s a very small number of ever-larger tech giants, which largely release incremental products and that’s just that. Nothing much happens.

But over the last year or so, suddenly things are happening again. I think not only are the spaces for innovation and entrepreneurship now more open than they have been for at least a decade or more; it’s possible they have never been as open as they are now.

Consider:

Everybody’s favorite subject: machine learning and AI. I don’t believe in much of what most people seem to believe about AI these days. I’m not part of the hype train. However, I do believe that machine learning is a fundamental innovation that allows us to program computers in a radically different way than we have in the past 50 and more years: instead of telling the computer what to do, we let it observe how it’s done and have it copy what it saw. Most of what today’s AI companies use machine learning for, in my view, is likely not going to stand the test of time. However, I do believe that this fundamentally different way of programming a computer is going to find absolutely astounding and beneficial applications at some point. It could be today: the space for invention, innovation and entrepreneurship is wide open.

The end of ever-larger economies of scale and network effects in tech. The dominant tech companies are very close to having pretty much all humans on the planet as customers. The number of their users is not going to double again. So the cost structure of their businesses is not going to get reduced any more simply by selling the same product to more customers, nor is the benefit of their product going to grow through growing network effects as much as in the past. It’s like they are running into a physical limit to the size of many things they can do. This opens space for innovation and successful competition.

Most interesting, it allows the creation of bespoke products again; products that are optimized for particular markets, customer groups and use cases. Ever noticed that Facebook is the same product for everybody, whether you are rich or poor, whether you have lots of time, or none, whether you are CEO or a kid, whether you live in one place or another, whether you are interested in sports or not and so forth? It’s the same for products of the other big platform vendors. That is a side effect of the focus on economies of scale. All of a sudden, increased utility for the user will need to come from serving their specific needs, not insisting that all cars need to be black. For targeted products, large platforms have no competitive advantages over small organizations; in fact, they may be at a real disadvantage. Entrepreneurs, what are you waiting for?

The regulators suddenly have found their spine and aren’t kidding around, starting with the EU.

The Apple App Store got in the way of your business? They are about to force the App Store open and allow side loading and alternate app stores (although Apple is trying hard to impede this as much as possible; a fight is brewing; my money is on the regulators).

The big platforms hold all your data hostage? Well, in many jurisdictions around the world you now have the right to get a copy of all your data. Even better, the “continuous, real-time access” provision of the EU’s Digital Markets Act is about to come into force.

The platforms don’t let you interoperate or connect? Well, in the EU, a legal requirement for interoperability of messaging apps is already on the books, and more are probably coming. Meta’s embrace of ActivityPub as part of Threads is a sign of it.

Imagine what you can do, as an entrepreneur, if you can distribute outside of app stores, use the same data on the customer that the platforms have, and you can interoperate with them? The mind boggles … many product categories that previous were impossible to compete with suddenly are in play again.

Social networking is becoming an open network through the embrace of ActivityPub by Meta’s Threads. While nobody outside of Meta completely understands why they are doing this, they undoubtedly are progressing towards interoperability with the Fediverse. Whatever the reasons, chances are that they also apply to other social media products, by Meta and others. All of a sudden competing with compelling social media application is possible again because you have a fully built-out network with its network effects from day one.

Consumers know tech has a problem. They are more willing to listen to alternatives to what they know than they have in a long time.

And finally, 3D / Spatial Computing a la Apple. (I’m not listing Meta here because clearly, they don’t have a compelling vision for it. Tens of billions spent and I still don’t know what they are trying to do.)

Apple is creating an entirely new interaction model for how humans can interact with technology. It used to be punch cards and line printers. Then we got interactive green-screen terminals. And then graphics displays, and mice. That was in the 1980s. Over the next 40 years, basically nothing happened (except adding voice for some very narrow applications). By using the space around us as a canvas, Apple is making it possible to interact with computing in a radically different way. Admittedly, nobody knows so far how to really take advantage of the new medium, but once somebody does, I am certain amazing things will happen.

Again, an opportunity ripe for the taking. If it works, it will have the same effect on established vendors as the arrival of the web, or the arrival of graphical user interfaces on the vendors of software for character terminals: some managed to migrate; most failed to make the switch. So this is another ideal entrepreneurial territory.

But here's the kicker: what if you combined all of the above? What can you build if your primary interaction model is 3D overlaid on the real world, with bespoke experiences for your specific needs, assisted by (some) intelligence that goes beyond what computers typically do today, accomplished by some form of machine learning, all fed by personal data collected by the platforms, and distributed outside of the straitjacket and business strategies of app stores?

We have not seen as much opportunity as this in a long time; maybe ever.

Thursday, 18. January 2024

Heres Tom with the Weather

Winer's Law of the Internet

Something to keep in mind as big tech connects to the fediverse is Winer’s Law of the Internet which ends with

The large companies always try to make the technology complicated to reduce competition to other organizations with large research and development budgets.

This is 20 years old but it has stood the test of time.

Tuesday, 16. January 2024

@_Nat Zone

[Please share] "25 Years of OpenID" YouTube live stream: persistence is golden, and gold is born of persistence

At 11 p.m. on January 18, 2024, I will be doing a YouTube live stream called "25 Years of OpenID", a kind of eve-of-event warm-up and preview for the OpenID Summit the next day. It may also hold some hints about what it takes to succeed.

At Friday's OpenID Summit there is a session celebrating ten years of OpenID Connect, and I will be taking part. But in a sense, this year also marks the 25th anniversary of OpenID itself.

So in this stream I want to talk about the long, tenacious road that led to OpenID Connect and the lessons that can be drawn from it. As the saying goes: persistence is golden, and gold is born of persistence.

Please come along, and bring your friends.


Phil Windleys Technometria

Exploring Digital Identity

I was recently on the Identity at the Center podcast speaking with hosts Jim McDonald and Jeff Steadman. We discussed my journey into the field of identity, Internet Identity Workshop, and my latest book "Learning Digital Identity." We also discussed the book writing process, key takeaways from the book, and the future of identity innovation. It was a fun conversation. I hope you enjoy it too.


@_Nat Zone

OpenID Summit Tokyo 2024

Ten Years of OpenID Connect, and the Future Digital Identity Will Draw

2024 marks exactly ten years since the OpenID Connect 1.0 specification was published.

At OpenID Summit Tokyo 2024 we want to share with audiences in Japan the journey OpenID Connect has taken so far, where it stands today, and the newest profiles now being specified, from both a business and use-case perspective and a technology perspective.

In the four years since OpenID Summit Tokyo 2020, spanning the COVID-19 pandemic, digital identity has changed the world, and it will keep shaping the world from now into the future. Digital identity specialists from Japan and abroad will talk this through in depth, and we hope a wide range of people interested in digital identity will join us.

Date and time: Friday, January 19, 2024, 10:00 – 18:00 (reception opens at 9:30)
Venue: Shibuya Stream Hall (reception on 4F), 3-21-3 Shibuya, Shibuya-ku, Tokyo 150-0002
Organizer: OpenID Foundation Japan
Admission: Free
Registration: via Peatix (https://openidsumiittokyo2024.peatix.com/)
Capacity: approximately 250 attendees

Program (Grand hall sessions have simultaneous interpretation; afternoon slots also run a Breakout room)

10:00 – 10:20 Opening remarks: 富士榮 尚寛 (Chair, OpenID Foundation Japan) and 曽我 紘子 (Secretary General, OpenID Foundation Japan)
10:20 – 10:45 OIDF Strategic Outlook for 2024 and Briefing on the Sustainable Interoperable Digital Identity (SIDI) Summit in Paris: Gail Hodges (Executive Director, OpenID Foundation)
10:45 – 11:10 OpenID Foundation Japan working group activity report: 小岩井 航介 (Expert, ID and Authentication Development, Service Development Dept. 1, KDDI), Nov Matake (Evangelist, OpenID Foundation Japan), kura (倉林 雅) (Board Member and Evangelist, OpenID Foundation Japan)
11:10 – 11:40 Panel: Celebrating Ten Years of OpenID Connect. Moderator: Michael B. Jones (Building the Internet's Missing Identity Layer, Self-Issued Consulting). Panelists: Nat Sakimura (Chairman, OpenID Foundation), Nov Matake (Evangelist, OpenID Foundation Japan), ritou (Evangelist, OpenID Foundation Japan)
11:40 – 12:50 Break
12:50 – 13:15 Grand hall: EU Digital Identity Wallets (eIDAS 2) – status and way forward, Torsten Lodderstedt (Federal Agency for Disruptive Innovation, Germany; Lead Architect, German EUDI Wallet project). Breakout: Let's learn the technology of digital identity! Reading the standards documents around authentication and authorization, 名古屋 謙彦 (ayokura) (GMOサイバーセキュリティ byイエラエ株式会社)
13:15 – 13:40 Grand hall: Waiting for the EUDI Wallet: Securing the transition from SAML 2.0 to OpenID Connect, Amir Sharif (Researcher, Center for Cyber Security, Security & Trust research unit, Fondazione Bruno Kessler, Trento, Italy). Breakout: OpenID Connect in practice and the contribution of identity federation to the LINE and Yahoo! JAPAN company merger, 依馬 裕也, 三原 一樹 and 吉田 享平 (LINEヤフー株式会社)
13:40 – 14:00 Grand hall: Insights into Open Wallet Foundation's initiatives, Joseph Heenan (Board Member, OpenWallet Foundation). Breakout: AMA (Ask Me Anything) on OpenID Connect, Michael B. Jones and the OIDF-J evangelists
14:00 – 14:20 Break
14:20 – 14:45 Grand hall: Toward realizing the Trusted Web, 成田 達治 (Deputy Secretary General, Headquarters for Digital Market Competition, Cabinet Secretariat). Breakout: Examining countermeasures against impersonation attacks when using OpenID Connect, 湯浅 潤樹 (Cyber Resilience Laboratory, Nara Institute of Science and Technology)
14:45 – 15:10 Grand hall: OpenID Federation 1.0: The Trust Chain vs The x.509 Certificate Chain, Vladimir Dzhuvinov (Identity Architect, Connect2ID). Breakout: Access tokens in the Mercari app: moving from a proprietary scheme to OAuth 2.0 / OIDC, グエン・ザー (Software Engineer, ID Platform team, Mercari)
15:10 – 15:30 Grand hall: Passkeys and Identity Federation, ritou (Evangelist, OpenID Foundation Japan). Breakout: Implementing OAuth 2.0-based security profiles in open source software, 乗松 隆志 (Senior OSS Specialist, Hitachi)
15:30 – 15:50 Break
15:50 – 16:15 Grand hall: The progress of Nubank and Open Finance in Brazil, Luciana Kairalla (Open Finance General Manager, Nubank). Breakout: The latest on sharing security signals in real time with the OIDF Shared Signals Framework (ID2), Tom Sato (VeriClouds BoD, Seattle, US)
16:15 – 16:40 Grand hall: How identity technology and the identity team have contributed to business growth: SoftBank's initiatives, 小松 隆行 (General Manager, Digital ID Strategy Dept., Business System Development Division, SoftBank). Breakout: Verifiable Credential Demo: SD-JWT VC & mdoc/mDL issuance using OpenID for Verifiable Credential Issuance, 川崎 貴彦 (Co-founder and Representative Director, Authlete)
16:40 – 17:10 Grand hall: Panel discussion: How do you establish and grow an "identity team" inside an organization? Moderator: 工藤 達雄 (VP of Solution Strategy, Authlete). Panelists: 柴田 健久 (Cybersecurity & Privacy, PwC Consulting), 渡邊 博紀 (Senior Officer, Group Digital Systems Unit, Group DX Division, Seven & i Holdings), 菊池 梓 (Identity Unit, Digital Agency). Breakout: closed
17:10 – 17:30 Break
17:30 – 17:55 Grand hall: Your Identity Is Not Self-Sovereign, Justin Richer (Principal Architect, Authlete). Breakout: closed
17:55 – 18:20 Grand hall: Closing Keynote, Nat Sakimura (Chairman, OpenID Foundation). Breakout: closed
18:20 – 18:25 Closing remarks: 富士榮 尚寛 (Chair, OpenID Foundation Japan). Breakout: closed

Note: the program above is current as of this writing and its content may change.

Registration

Registration for OpenID Summit Tokyo 2024 is being accepted via Peatix.

Monday, 15. January 2024

Damien Bod

Migrate ASP.NET Core Blazor Server to Blazor Web

This article shows how to migrate a Blazor server application to a Blazor Web application. The migration used the ASP.NET Core migration documentation, but this was not complete and a few extra steps were required. The starting point was a Blazor Server application secured using OpenID Connect for authentication. The target system is a Blazor Web application using the “InteractiveServer” rendermode.

History

2024-02-12 Updated to support CSP nonces

Code: https://github.com/damienbod/BlazorServerOidc

Migration

The following Blazor Server application was used as a starting point:

https://github.com/damienbod/BlazorServerOidc/tree/main/BlazorServerOidc

This is a simple application using .NET 8 and OpenID Connect to implement the authentication flow. Security headers are applied and the user can login or logout using OpenIddict as the identity provider.

As in the migration guide, steps 1-3, the Routes.razor was created and the imports were extended. Migrating the contents of the Pages/_Host.cshtml to the App.razor was more complicated. I have a Layout in the original application and this needed migration into the App file as well.

This completed Blazor Web App.razor file looked like this:

@inject IHostEnvironment Env

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <base href="/" />
    <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
    <link href="css/site.css" rel="stylesheet" />
    <link href="BlazorWebFromBlazorServerOidc.styles.css" rel="stylesheet" />
    <HeadOutlet @rendermode="InteractiveServer" />
</head>
<body>
    <Routes @rendermode="InteractiveServer" />
    <script nonce="@BlazorNonceService.Nonce" src="_framework/blazor.web.js"></script>
</body>
</html>

The App.razor uses the Routes component. Inside the Routes component, the CascadingAuthenticationState is used, along with a new layout component called MainLayout.

@inject NavigationManager NavigationManager

<CascadingAuthenticationState>
    <Router AppAssembly="@typeof(Program).Assembly">
        <Found Context="routeData">
            <AuthorizeRouteView RouteData="@routeData" DefaultLayout="@typeof(Layout.MainLayout)">
                <NotAuthorized>
                    @{
                        var returnUrl = NavigationManager.ToBaseRelativePath(NavigationManager.Uri);
                        NavigationManager.NavigateTo($"api/account/login?redirectUri={returnUrl}", forceLoad: true);
                    }
                </NotAuthorized>
                <Authorizing>
                    Wait...
                </Authorizing>
            </AuthorizeRouteView>
        </Found>
        <NotFound>
            <LayoutView Layout="@typeof(Layout.MainLayout)">
                <p>Sorry, there's nothing at this address.</p>
            </LayoutView>
        </NotFound>
    </Router>
</CascadingAuthenticationState>

The MainLayout component uses two more new razor components, one for the nav menu and one for the login, logout component.

@inherits LayoutComponentBase

<div class="page">
    <div class="sidebar">
        <NavMenu />
    </div>
    <main>
        <div class="top-row px-4">
            <LogInOrOut />
        </div>
        <article class="content px-4">
            @Body
        </article>
    </main>
</div>

<div id="blazor-error-ui">
    An unhandled error has occurred.
    <a href="" class="reload">Reload</a>
    <a class="dismiss">🗙</a>
</div>

The login/logout component uses the original account controller, with an improved logout.

@inject NavigationManager NavigationManager

<AuthorizeView>
    <Authorized>
        <div class="nav-item">
            <span>@context.User.Identity?.Name</span>
        </div>
        <div class="nav-item">
            <form action="api/account/logout" method="post">
                <AntiforgeryToken />
                <button type="submit" class="nav-link btn btn-link text-dark">
                    Logout
                </button>
            </form>
        </div>
    </Authorized>
    <NotAuthorized>
        <div class="nav-item">
            <a href="api/account/login?redirectUri=/">Log in</a>
        </div>
    </NotAuthorized>
</AuthorizeView>

The program file was updated as in the migration docs. Blazor Web does not support reading the HTTP headers from inside a Blazor component, so in the first version of this migration the security headers were weakened, which is a very bad idea: without CSP nonces an important web security feature is lost when updating to Blazor Web. As noted in the history above, the application has since been updated (2024-02-12) to support CSP nonces again.

using BlazorWebFromBlazorServerOidc.Data;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Components.Server.Circuits;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Authorization;
using Microsoft.Extensions.DependencyInjection.Extensions;
using Microsoft.IdentityModel.JsonWebTokens;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;
using Microsoft.IdentityModel.Tokens;

namespace BlazorWebFromBlazorServerOidc;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);

        builder.Services.TryAddEnumerable(ServiceDescriptor.Scoped<CircuitHandler, BlazorNonceService>(
            sp => sp.GetRequiredService<BlazorNonceService>()));
        builder.Services.AddScoped<BlazorNonceService>();

        builder.Services.AddAuthentication(options =>
        {
            options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
        })
        .AddCookie()
        .AddOpenIdConnect(options =>
        {
            builder.Configuration.GetSection("OpenIDConnectSettings").Bind(options);
            options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.ResponseType = OpenIdConnectResponseType.Code;
            options.SaveTokens = true;
            options.GetClaimsFromUserInfoEndpoint = true;
            options.TokenValidationParameters = new TokenValidationParameters
            {
                NameClaimType = "name"
            };
        });

        builder.Services.AddRazorPages().AddMvcOptions(options =>
        {
            var policy = new AuthorizationPolicyBuilder()
                .RequireAuthenticatedUser()
                .Build();
            options.Filters.Add(new AuthorizeFilter(policy));
        });

        builder.Services.AddRazorComponents()
            .AddInteractiveServerComponents();

        builder.Services.AddSingleton<WeatherForecastService>();

        builder.Services.AddControllersWithViews(options =>
            options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));

        var app = builder.Build();

        JsonWebTokenHandler.DefaultInboundClaimTypeMap.Clear();

        if (!app.Environment.IsDevelopment())
        {
            app.UseExceptionHandler("/Error");
            app.UseHsts();
        }

        // Using an unsecure CSP as CSP nonce is not supported in Blazor Web ...
        app.UseSecurityHeaders(
            SecurityHeadersDefinitions.GetHeaderPolicyCollection(app.Environment.IsDevelopment(),
                app.Configuration["OpenIDConnectSettings:Authority"]));

        app.UseMiddleware<NonceMiddleware>();

        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseRouting();

        app.UseAuthentication();
        app.UseAuthorization();
        app.UseAntiforgery();

        app.MapRazorPages();
        app.MapControllers();
        app.MapRazorComponents<App>()
            .AddInteractiveServerRenderMode().RequireAuthorization();

        app.Run();
    }
}

With the weakened security headers the application works and the authentication flow works.

Conclusion

Blazor Web in the InteractiveServer mode can use CSP nonces and it is possible to implement a secure web application.

Links

https://learn.microsoft.com/en-us/aspnet/core/migration/70-80

Securing a Blazor Server application using OpenID Connect and security headers

https://github.com/dotnet/aspnetcore/issues/53192

https://github.com/dotnet/aspnetcore/issues/51374

https://github.com/javiercn/BlazorWebNonceService

Sunday, 14. January 2024

Jon Udell

7 Guiding Principles for Working with LLMs

Here's the latest installment in the series on working with LLMs: 7 Guiding Principles for Working with LLMs. The rest of the series: 1 When the rubber duck talks back, 2 Radical just-in-time learning, 3 Why LLM-assisted table transformation is a big deal, 4 Using LLM-Assisted Coding to Write a Custom Template Function, 5 Elevating …

Saturday, 13. January 2024

IDIM Musings

Interesting work happening at the ISO SC 37 Biometrics committee

Getting immersed in a new subject is a great experience. ISO SC 37 wrapped up a week-long series of meetings yesterday in Okayama Japan. Here are some things I learned along the way:

SC 37 is smaller than the other ISO committees I’m in – about 60 in-person plus about 20 online this week across 6 work groups. It makes for a tight-knit group and provides lots of time to build relationships. It also means that the workload is sane – on other committees I have to ignore all but the 2 or 3 projects I’m directly involved in.

The main work of the committee is extensible data exchange formats, APIs, evaluation of biometric technologies, performance testing and societal aspects.

SC 37 was started in the early 2000s to meet the needs of ICAO to add biometric information to passports – face, finger and iris formats were the first. Since then, work has been done to cover other biometric modalities such as voice, skeletal structure, vein pattern and others.

Participants are primarily from government border control, immigration, public safety, and law enforcement departments. I'm one of a very small number focused on consumer-provided sensors and devices for use in commercial scenarios.

Biometrics is all about statistical analysis – I never really thought much about it – using the lazy term "probabilistic" – but it really is about estimation and weighting of data samples. So I might have to break open a textbook or two to relearn my sum-overs, means, medians, and standard deviations – not looking forward to that part!
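To make that concrete, here is a toy C# sketch of my own (not anything from the SC 37 documents): given hypothetical genuine and impostor similarity scores, it computes their means and standard deviations and the false match / false non-match rates produced by a chosen decision threshold.

// Toy illustration (not from any SC 37 standard): given similarity scores for
// genuine and impostor comparisons, compute means, standard deviations, and the
// error rates produced by a chosen decision threshold.
using System;
using System.Linq;

class BiometricStatsSketch
{
    static (double Mean, double StdDev) Describe(double[] scores)
    {
        double mean = scores.Average();
        double variance = scores.Sum(s => (s - mean) * (s - mean)) / scores.Length;
        return (mean, Math.Sqrt(variance));
    }

    static void Main()
    {
        // Hypothetical similarity scores in [0, 1]; higher means "more alike".
        double[] genuine  = { 0.91, 0.87, 0.95, 0.80, 0.89, 0.93 };
        double[] impostor = { 0.32, 0.45, 0.51, 0.28, 0.60, 0.38 };
        double threshold = 0.70; // accept a comparison as a match at or above this score

        var g = Describe(genuine);
        var i = Describe(impostor);

        // False match rate: impostor comparisons wrongly accepted.
        double fmr = impostor.Count(s => s >= threshold) / (double)impostor.Length;
        // False non-match rate: genuine comparisons wrongly rejected.
        double fnmr = genuine.Count(s => s < threshold) / (double)genuine.Length;

        Console.WriteLine($"Genuine:  mean={g.Mean:F2} stddev={g.StdDev:F2}");
        Console.WriteLine($"Impostor: mean={i.Mean:F2} stddev={i.StdDev:F2}");
        Console.WriteLine($"At threshold {threshold}: FMR={fmr:P0}, FNMR={fnmr:P0}");
    }
}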

Convolutional neural networks appear to be broadly used to look for patterns and help to draw conclusions. Another area I must learn about quickly.

There is a significant desire to apply biometric binding of people to data records – for example during ID Proofing and Verification. But it was clear that SC 37 is not the best place to do that work. SC 17 for mDoc credentials or SC 27/WG 5 for identity management seem better suited for this work.

I realized by the end of the week that being an “identity management” guy in the “biometrics” world is a very good thing – the latter can be applied to operations of the former and I get to work on both sides.

So I’m looking forward to working with the committee members and digging deep into the material. Hopefully I can bring useful opinions from the consumer world to drive discussions and improve the body of work.

Friday, 12. January 2024

Heres Tom with the Weather

South Africa Lays Out Genocide Case vs. Israel at World Court in The Hague

“Nowhere Is Safe in Gaza”: South Africa Lays Out Genocide Case vs. Israel at World Court in The Hague

Tuesday, 09. January 2024

@_Nat Zone

My Japanese-language YouTube channel has finally reached 1,000 subscribers: what I was waiting for, and what comes next

After a year of slow going, my Japanese-language YouTube channel @55id ("Go-Go ID") has finally passed 1,000 subscribers. To be fair, there have been almost no updates over the past six months, so the sluggish growth was only to be expected.

There is something I had been planning to do once the channel passed 1,000 subscribers:

Invite guests and interview them.

I felt it would be rude to invite guests while the audience was still tiny, so I had been holding off; 1,000 subscribers was my threshold, and now I can finally start. The lineup is close to a list of things I personally want to ask about. In no particular order:

The meaning of the word "subject": why the Japanese translation "shutai" is a mistranslation
How to end phishing damage: the effect of deploying strong authentication and passkeys
OpenID for VC and where identity wallets stand today
To QWAC or not to QWAC, that is the question
What is an mDL (mobile driver's licence)?
Problems with mDL and SD-JWT: can ZKPs find a way in?
Shared Signals and zero trust
Mobile support for the My Number Card
I tried building a passkey provider
What the DID/VC Co-Creation Consortium is aiming for
The Thai government's ID Wallet white paper
The Thai government's digital identity standards

and so on. I have a guest in mind for each of these, but whether they will accept is another matter, so how realistic the list is I cannot say. Realistic or not, I intend to pursue it.

Other plans

On the @55id channel I also want to do solo streams, not only programs with guests. Things I currently have in mind include the following (in no particular order):

The stormy voyage that led to OpenID Connect, as seen from the XRI timeline
What we gave up in OpenID Connect, and hopes for future technology
The road to DIDs: OpenID was actually thoroughly self-sovereign, but...
What is authentication? What is authorization?
An impermanent society: defamation, disinformation, and privacy
Third-party cookies and OpenID Connect
A reading group for the identity book "Digital Identity" (multiple sessions)
A reading group for JIS X 9250, the privacy framework (multiple sessions)
A reading group for the NIST SP 800-63-4 second IPD (multiple sessions)
A reading group for the revised EU DI ARF (multiple sessions)

and so on. If you have other requests, please leave them as a comment here or on the YouTube channel's community tab.

See you there!

Monday, 08. January 2024

Heres Tom with the Weather

Otisburg.social move post-mortem

I moved my account from @herestomwiththeweather@mastodon.social to @tom@herestomwiththeweather.com on January 2nd. In the spirit of learning from post-mortems, I am documenting a few mistakes I made.

One of the main motivations for the move was that over a year ago, I had configured webfinger on this site to point to the account I had on mastodon.social. But once someone has found me on mastodon, I would from then on be known by my mastodon identifier rather than the identifier with my personal domain. If I lost access to that particular mastodon account for whatever reason, I would be unreachable by that mastodon identifier. However, as I described in Webfinger Expectations, if my webfinger configuration points me to a server that will allow me to participate on the fediverse with my own personal identifier using my own domain, then in theory, if I lose access to the account on that server, I can swap it out with another similar server and be reachable again with my personal identifier. So, last week I moved to Otisburg.social which is running what I consider a minimum viable activitypub server called Irwin. As it is experimental, I am the only user on the server.
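For readers who have not looked under the hood, the webfinger piece is just a small JSON document served from /.well-known/webfinger whose self link points at the ActivityPub actor. A minimal ASP.NET Core sketch of such an endpoint (my own illustration; this site and Irwin may implement it differently, and the URLs simply mirror the ones discussed in this post):

// Hypothetical sketch of a /.well-known/webfinger endpoint that points a
// personal-domain identifier at an ActivityPub actor hosted elsewhere.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/.well-known/webfinger", (string resource) =>
{
    // Only answer for the one identity this site serves.
    if (resource != "acct:tom@herestomwiththeweather.com")
        return Results.NotFound();

    return Results.Json(new
    {
        subject = resource,
        links = new[]
        {
            new
            {
                rel = "self",
                type = "application/activity+json",
                // The actor document lives on the ActivityPub server (Otisburg.social).
                href = "https://otisburg.social/actor/tom@herestomwiththeweather.com"
            }
        }
    }, contentType: "application/jrd+json");
});

app.Run();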

So what did I screw up? I didn’t plan for two things. Both are related to the diversity of software and configurations on the Fediverse.

First, although I was vaguely aware of the optional Authorized Fetch mastodon feature, I didn’t anticipate that it would prevent me from re-following some of my followers. Prior to the migration, I assumed this feature would not be enabled on any of the servers the people I followed were using. I quickly realized that I could not re-follow people on 3 servers which had this feature enabled. So, I lost contact with the people on those servers for a few days until I fixed it by also signing GET requests in addition to POST requests.
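The fix is the same draft HTTP Signatures scheme Mastodon already expects on POSTs, applied to GET requests as well. A rough C# sketch of the general shape of such a signed fetch (an illustration of the technique, not Irwin's actual code):

// Illustrative sketch of signing an ActivityPub GET request with the draft
// "HTTP Signatures" scheme that Mastodon's Authorized Fetch mode verifies:
// sign (request-target), host and date with the actor's RSA key and send the
// result in a Signature header.
using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;

static class SignedFetch
{
    public static HttpRequestMessage SignedGet(Uri url, string keyId, RSA privateKey)
    {
        string date = DateTime.UtcNow.ToString("r"); // RFC 1123, e.g. "Mon, 08 Jan 2024 12:00:00 GMT"

        string signingString =
            $"(request-target): get {url.AbsolutePath}\n" +
            $"host: {url.Host}\n" +
            $"date: {date}";

        byte[] signature = privateKey.SignData(
            Encoding.UTF8.GetBytes(signingString),
            HashAlgorithmName.SHA256,
            RSASignaturePadding.Pkcs1);

        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.TryAddWithoutValidation("Host", url.Host);
        request.Headers.TryAddWithoutValidation("Date", date);
        request.Headers.TryAddWithoutValidation("Signature",
            $"keyId=\"{keyId}\",algorithm=\"rsa-sha256\"," +
            "headers=\"(request-target) host date\"," +
            $"signature=\"{Convert.ToBase64String(signature)}\"");
        request.Headers.Accept.ParseAdd("application/activity+json");
        return request;
    }
}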

Second, I didn’t adequately prepare for the possibility that some of my followers would not automatically move to the new server. Of 96 followers, I had about 15 that did not successfully re-follow. It seems that some of these failed because they were not on a Mastodon server and their server did not adequately handle the Move activity sent by mastodon.social. Unfortunately, although mastodon allowed me to download a csv file of the people I followed, it did not provide a link to download a file of followers so I don’t know everyone I lost during the move.
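For context, the Move activity the old server sends on the migrating account's behalf is a small ActivityStreams document; a server that does not recognize it simply ignores it, which is how followers get left behind. Roughly, it looks like the payload below (the account URLs are my assumptions based on the handles in this post, and the real payload Mastodon emits carries more fields):

// Rough shape of the ActivityPub "Move" activity sent during an account migration.
// "context" stands in for the "@context" key of the real JSON-LD document.
using System;
using System.Text.Json;

var move = new
{
    context = "https://www.w3.org/ns/activitystreams",
    type = "Move",
    actor = "https://mastodon.social/users/herestomwiththeweather",   // the old account
    @object = "https://mastodon.social/users/herestomwiththeweather", // the account being moved
    target = "https://otisburg.social/actor/tom@herestomwiththeweather.com" // the new account
};

Console.WriteLine(JsonSerializer.Serialize(move, new JsonSerializerOptions { WriteIndented = true }));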

Otherwise, the move went well and it is a great feature and I’m glad to see an effort underway to standardize it.

One unresolved issue is that when someone visits my profile on a mastodon site, selecting “open original page” will fetch https://otisburg.social/actor/tom@herestomwiththeweather.com and the user would expect to see my status updates or toots or whatever you call them. However, currently that url redirects to this website and activitypub status updates are not available here.

Friday, 05. January 2024

Doc Searls Weblog

The New News Business

Eighth in the News Commons series.

How Microsoft Bing Image Creator illustrates EmanciPay

Back when I was on the board of my regional Red Cross chapter (this one), I learned four lessons about fund raising:

People are glad to pay value for value.
People are most willing to pay when they perceive and appreciate the value they get from a product or service.
People are most willing to pay full value when the need or opportunity to pay is immediate, and the amount they pay is up to them.
People are willing to pay more when they have a relationship with the other party (seller, service provider, philanthropy, cause, whatever).

Here’s something I wrote in The Cluetrain Manifesto (10th anniversary edition) about all four lessons at work:

Not long after Cluetrain came out in early 2000, I found myself on a cross-country flight, sitting beside a Nigerian pastor named Sayo Ajiboye. After we began to talk, it became clear to me that Sayo (pronounced “Shaiyo”) was a deeply wise man. Among his accomplishments was translating the highly annotated Thompson Bible into his native Yoruba language: a project that took eight of his thirty-nine years.

I told him that I had been involved in a far more modest book project—The Cluetrain Manifesto—and was traveling the speaking circuit, promoting it. When Sayo asked me what the book was about, I explained how “markets are conversations” was the first of our ninety-five theses, and how we had unpacked it in a chapter by that title. Sayo listened thoughtfully, then came back with the same response I had heard from other readers in what back then was still called the Third World: “Markets are conversations” is a pretty smart thing for well-off guys from the First World to be talking about. But it doesn’t go far enough.

When I asked him why, he told me to imagine we were in a "natural" marketplace—a real one in, say, an African village where one's "brand" was a matter of personal reputation, and where nobody ruled customer choices with a pricing gun. Then he picked up one of those blue airline pillows and told me to imagine it was a garment, such as a coat, and that I was interested in buying it. "What's the first thing you would say to the seller?" he asked.

“What does it cost?”

“Yes, you would say that,” he replied, meaning that this was typical of a First World shopper for whom price is the primary concern. Then he asked me to imagine that a conversation follows between the seller and me—that the two of us get to know each other a bit and learn from each other. “Now,” he asked, “What happens to the price?”
I said maybe now I’m willing to pay more while the seller is willing to charge less.

“Why?” Sayo asked.

I didn’t have an answer.

“Because you now have a relationship,” he said.

As we continued talking, it became clear to me that everything that happens in a marketplace falls into just three categories: transaction, conversation, and relationship. In our First World business culture, transaction matters most, conversation less, and relationship least. Worse, we conceive and justify everything in transactional terms. Nothing matters more than price and “the bottom line.” By looking at markets through the prism of transaction or even conversation, we miss the importance of relationship. We also don’t see how relationship has a value all its own: one that transcends, even as it improves, the other two.

Consider your relationship with friends and family, Sayo said. The value system there is based on caring and generosity, not on price. Balance and reciprocity may play in a relationship, but are not the basis of it. One does not make deals for love. There are other words for that.

Back in the industrialized world, few of our market relationships run so deep, nor should they. By necessity much of our relating is shallow and temporary. We don’t want to get personal with an ATM machine or even with real bank tellers. Friendly is nice, but in most business situations that’s about as far as we want to go.

But relationship is a broad category: broad enough to contain all forms of relating—the shallow as well as the deep, the temporary as well as the enduring. In the business culture of the industrialized world, Sayo said, we barely understand relationship’s full meaning or potential. And we should. Doing so would be good for business.

So he told me our next assignment was to unpack and study another thesis: Markets are relationships.

That is why, six years after the first edition of Cluetrain came out, I started ProjectVRM (the R means Relationship) at the Berkman Klein Center, wrote The Intention Economy: When Customers Take Charge, (Harvard Business Review Press, 2012), co-founded Customer Commons (in 2013), and am now a visiting scholar with the Ostrom Workshop at Indiana University, thinking out loud about how a news commons might thrive as a market of relationships—starting here in Bloomington, IU’s home town.

In The News Business (which precedes this post), I said the three current business models for local news were advertising, subscription, and philanthropy, and promised a fourth. This is it: emancipayments.

We* came up with this idea in 2009. Here is how the EmanciPay page on the ProjectVRM wiki puts it:

Overview

Simply put, Emancipay makes it easy for anybody to pay (or offer to pay) —

as much as they like,
however they like,
for whatever they like,
on their own terms

— or at least to start with that full set of options, and to work out differences with sellers easily and with minimal friction.

Emancipay turns consumers (aka users) into customers by giving them a pricing gun (something which in the past only sellers used) and their own means to make offers, to pay outright, and to escrow the intention to pay when price and other requirements are met. And to be able to do this at scale across all sellers, much as cash, browsers, credit cards and email clients do the same. Payments themselves can also be escrowed.

In slightly more technical terms, EmanciPay is a payment framework for customers operating with full agency in the open marketplace, and at scale. It operates on open protocols and standards, so it can be used by any buyer, seller or intermediary.

It was conceived as a way to pay for music, journalism, or what any artist brings into the world. But it can apply to anything. For example, [subscriptions], which have become by 2021 a giant fecosystem in which every seller has separate and non-substitutable scale across all subscribers, while subscribers have zero scale across all sellers, with the highly conditional exceptions of silo’d commercial intermediaries. As [Customer Commons] puts it,

There’s also not much help coming from the subscription management services we have on our side: Truebill, Bobby, Money Dashboard, Mint, Subscript Me, BillTracker Pro, Trim, Subby, Card Due, Sift, SubMan, and Subscript Me. Nor from the subscription management systems offered by Paypal, Amazon, Apple or Google (e.g. with Google Sheets and Google Doc templates). All of them are too narrow, too closed and exclusive, too exposed to the surveillance imperatives of corporate giants, and too vested in the status quo.

That status quo sucks (see here, or just look up "subscription hell"), and it's way past time to unscrew it. But how?

The better question is where?

The answer to that is on our side: the customer’s side.

While EmanciPay was first conceived by ProjectVRM as a way to make live payments to nonprofits and to provide a new monetization method for publishers, it also works as a counterpart to sellers' subscription systems in what Zuora (a supplier of subscription management systems to the publishing industry, including The Guardian and Financial Times) calls the "subscription economy", which it says "is built on ever changing relationships with your customers". Since relationships are two-way by nature, EmanciPay is one way that customers can manage their end, while publisher-side systems such as Zuora's manage the other.

EmanciPay economic case

EmanciPay provides a new form of economic signaling not available to individuals, either on the Net or before the Net became available as a communications medium. EmanciPay will use open standards and be comprised of open source code. While any commercial [Fourth party] can use EmanciPay (or its principles, or any parts of it they like), EmanciPay’s open and standard framework will support fourth parties by making them substitutable, much as the open standards of email (smtp, pop3, imap) make email systems substitutable. (Each has what Joe Andrieu calls service endpoint portability.)

EmanciPay is an instrument of customer independence from all of the billion (or so) commercial entities on the Net, each with its own arcane and silo’d systems for engaging and managing customer relations, as well as receipt, acknowledgement and accounting for payments from customers.

Use Case Background

EmanciPay was conceived originally as a way to provide customers with the means to signal interest and ability to pay for media and creative works (most of which are freely available on the Web, if not always free of charge). Through EmanciPay, demand and supply can relate, converse and transact business on mutually beneficial terms, rather than only on terms provided by the countless different silo'd systems we have today, each serving to hold the customer captive, and causing much inconvenience and friction in the process.

Media goods were chosen for five reasons:

because most are available for free, even if they cost money, or are behind paywalls
paywalls, which are cookie-based, cannot relate to individuals as anything other than submissive and dependent parties (and each browser a user employs carries a different set of cookies)
both media companies and non-profits are constantly looking for new sources of revenue
the subscription model, while it creates steady income and other conveniences for sellers, is often a bad deal for customers, and is now so overused (see Subscriptification) that the world is approaching a peak subscription crisis, and unscrewing it can only happen from the customer's side (because the business is incapable of unscrewing the problem itself)
all methods of intermediating payment choices are either silo'd by the seller or silo'd by intermediators, discouraging participation by individuals.

What the marketplace requires are new business and social contracts that ease payment and stigmatize non-payment for creative goods. The friction involved in voluntary payment is still high, even on the Web, where one must go through complex ceremonies even to make simple payments. There is no common and easy way to keep track of what media (free or otherwise) we use (see Media Logging), to determine what it might be worth, or to pay for it easily and in standard ways to many different suppliers. (Again, each supplier has its own system for accepting payments.)

EmanciPay differs from other payment models (subscriptions, newsstand, tip jars) by providing customers with the ability to choose what they wish to pay and how they’ll pay it, with minimum friction — and with full choice about what they disclose about themselves.

EmanciPay will also support credit for referrals, requests for service, feedback and other relationship support mechanisms, all at the control of the user. For example, EmanciPay can provide quick and easy ways for listeners to pay for public radio broadcasts or podcasts, for readers to pay for otherwise "free" papers or blogs, for listeners to pay to hear music and support artists, for users to issue promises of payment for stories or programs — all without requiring the individual to disclose unnecessary private information, or to become a "member" — although these options are kept open.

This will scaffold genuine relationships between buyers and sellers in the media marketplace. It will also give deeper meaning to “membership” in non-profits. (Under the current system, “membership” generally means putting one’s name on a pitch list for future contributions, and not much more than that.)

EmanciPay will also connect the sellers’ CRM (Customer Relationship Management) systems with customers’ VRM (Vendor Relationship Management) systems, supporting rich and participatory two-way relationships. In fact, EmanciPay will by definition be a VRM system.

Micro-accounting and Macro-distribution

The idea of “micro-payments” for goods on the Net has been around for a long time, and is often brought up as a potential business model for journalism. For example in this article by Walter Isaacson in Time Magazine. It hasn’t happened, at least not globally, because it’s too complicated, and in prototype only works inside private silos.

What ProjectVRM suggests instead is something we don’t yet have, but very much need:

micro-accounting for actual uses. Think of this simply as "keeping track of" the news, podcasts, newsletters, or music we consume.
macro-distribution of payments for accumulated use (that's no longer "micro").

Much — maybe most — of the digital goods we consume are both free for the taking and worth more than $zero. How much more? We need to be able to say. In economic terms, demand needs to have a much wider range of signals it can give to supply. And give to each other, to better gauge what we should be willing to pay for free stuff that has real value but not a hard price.

As currently planned, EmanciPay would –

Provide a single and easy way for consumers of "content" to become customers of it. In the current system — which isn't one — every artist, every musical group, and every public radio and TV station has his, her or its own way of taking in contributions from those who appreciate the work. This can be arduous and time-consuming for everybody involved. (Imagine trying to pay separately every musical artist you like, for all your enjoyment of each artist's work.) What EmanciPay proposes, however, is not a replacement for existing systems, but a new system that can supplement existing fund-raising systems — one that can soak up much of today's MLOTT: Money Left On The Table.

Provide ways for individuals to look back through their media usage histories, inform themselves about what they have been enjoying, and determine how much it is worth to them. The Copyright Arbitration Royalty Panel (CARP), and later the Copyright Royalty Board (CRB), both came up with "rates and terms that would have been negotiated in the marketplace between a willing buyer and a willing seller." This almost absurd language first appeared in the 1995 Digital Performance Royalty Act (DPRA) and was tweaked in 1998 by the Digital Millennium Copyright Act (DMCA), under which both the CARP and the CRB operated. The rates they came up with peaked at $.0001 per "performance" (a song or recording), per listener. EmanciPay creates the "willing buyer" that the DPRA thought wouldn't exist.

Stigmatize non-payment for worthwhile media goods. This is where "social" will finally come to be something more than yet another tech buzzmodifier.

All these require micro-accounting, not micro-payments. Micro-accounting can inform ordinary payments that can be made in clever new ways that should satisfy everybody with an interest in seeing artists compensated fairly for their work. An individual listener, for example, can say "I want to pay 1¢ for every song I hear," and "I'll send SoundExchange a lump sum of all the pennies I wish to pay for songs I have heard over a year, along with an accounting of what artists and songs I've listened to" — and leave dispersal of those totaled pennies up to the kind of agency that likes, and can be trusted, to do that kind of thing. That's the macro-distribution part of the system.

Similar systems can also be put in place for readers of newspapers, blogs, and other journals. What’s important is that the control is in the hands of the individual and that the accounting and dispersal systems work the same way for everybody.
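To make the micro-accounting / macro-distribution split concrete, here is a small sketch of my own (the names, rates and rolled-up accounting are illustrative, not part of any EmanciPay specification): each use is logged locally at the listener's chosen per-song rate, then rolled up into one annual payment plus a per-artist accounting that a distributor could disperse.

// Toy sketch of micro-accounting with macro-distribution: log each use locally,
// then roll the log up into one annual payment and a per-artist accounting.
using System;
using System.Collections.Generic;
using System.Linq;

record Use(DateTime When, string Artist, string Work);

class EmanciPaySketch
{
    static void Main()
    {
        decimal ratePerSong = 0.01m; // the listener's own choice: 1 cent per song heard

        var log = new List<Use>
        {
            new(DateTime.Parse("2024-01-03"), "Artist A", "Song 1"),
            new(DateTime.Parse("2024-01-04"), "Artist A", "Song 2"),
            new(DateTime.Parse("2024-01-09"), "Artist B", "Song 3"),
            new(DateTime.Parse("2024-02-11"), "Artist B", "Song 3"),
            new(DateTime.Parse("2024-03-20"), "Artist C", "Song 4"),
        };

        // Macro-distribution: one lump sum, plus an accounting by artist that a
        // distributor (a SoundExchange-like agency) could use to disperse it.
        decimal lumpSum = log.Count * ratePerSong;
        var perArtist = log.GroupBy(u => u.Artist)
                           .Select(g => new { Artist = g.Key, Amount = g.Count() * ratePerSong });

        Console.WriteLine($"Annual payment: {lumpSum:C}");
        foreach (var line in perArtist)
            Console.WriteLine($"  {line.Artist}: {line.Amount:C}");
    }
}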

I visited EmanciPay use cases twice in Linux Journal:

An Easy Way to Pay for Journalism, Music and Everything Else We Like (May 2015)
An Immodest Proposal for the Music Industry (November 2018)

There are two differences in the world today that should make it easier to code up something like EmanciPay:

Smartphones and apps on them have become extensions of ourselves.
AI.

For the latter, I am not talking about the kind of centralized AI we get from Amazon, Microsoft/OpenAI, Adobe, and the rest. I’m talking about AI that’s as personal as our own underwear and gives us what Sam Altman calls “individual empowerment and agency on a scale we’ve never seen before.” That quote became the title of the post I wrote at that link. I will unpack it further in an upcoming News Commons post.

But first I’ll dig deeper into what we need to develop EmanciPay, and how we can use it to scaffold up the kind of markets first imagined by The Cluetrain Manifesto, a quarter century ago.

*Big hat tip to Keith Hopper for his thinking and work on this, especially toward ListenLog, which is now fourteen years ahead of its time. And that time will come. Also to Joe Andrieu, whose The User as a Point of Integration (published in 2007) is a founding document in the VRM canon. He reported on progress here in 2017. All hail writers who keep their archives alive on the Web.


MyDigitalFootprint

The CDO is dead, long live the CDO


Hierarchy appears to be the only option to pull a large group of individuals together toward a common goal. Many insects did it well before humans, but over the last 10,000 years, humans have moved from decentralised clans into centralised nation-states consisting of hundreds of millions of people.  From the pharaohs to Max Weber's 20th-century bureaucratic management structure, we can now exist only in such a system because as a society or organisation grows beyond a few dozen people, the hierarchical pyramid is seen as the only option for the organisation.  The justification is that it is nature and natural. 

The "ideal organisation" was defined by Max Weber as a clear and strong hierarchy underpinned by the division of tasks based on specialisation. A fundamental assumption was that if each unit takes care of one piece of the chain and the division of tasks within the unit is clearly defined, work can be executed much more efficiently. Weber envisioned this as a "superior structure" because it focused on rationality and equal treatment of all - assuming everyone was happy with their 1984 dystopian jobs and oversight. By formalising processes within organisations, but especially in government, favouritism and abuse of power could be prevented - as he assumed that those in charge and specialised workers remained. A really poor underlying assumption was that people act rationally, efficiently, and professionally.  He also ignored the concept of shareholder primacy that would emerge.  However, so strong is the argument that it remains the leading business philosophy today, and we still teach that hierarchy, administration and bureaucracy, driven by efficiency and effectiveness thinking and underpinned by a better division of tasks and responsibilities to drive measurable organisational performance objectives, remain best practice. The justification is that it represents nature and is natural. 

Different models emerged from the internet, web, and digital thinking allowing us to ponder if something better is possible. Small iterations, such as matrix management, never really changed the fundamentals and whilst embracing all stakeholders and ESG has added complexity to the decisions and models - hierarchy remains. However, it is possible to sense in the noise signals that “Data” is finally about to disrupt and dislodge a love/ hate affair with hierarchy, power, and control. 

Counter to this and according to both scientists, individuals, including leaders, do not make their decisions after a thorough analysis, but first decide, unconsciously and intuitively, and then rationalise their decision. We have become experts in seeking out framed and biased data to justify and provide a rational argument for a decision we have already made.

-----

Endless inches have been written about structure, who and how to lead and what a digital organisation looks like. However, right now, we are likely to be in the death phase for a pure CDO (chief data/ digital officer) role as it becomes clear that the CTO, CIO and CDO roles have very similar mandates and are now actually competing for power, resources and a clear value to the organisation. The rigid ideals of separation and specialism models driving structures, management, incentives, KPIs and hierarchy cannot work with the emergence of highly complex interdependency in a digital age - are we witnessing that the division of labour falls apart at scale because of complexity and speed? 

For a complicated task, the separation of that task into modules and components to create scale, efficiency and effectiveness works, and it works really well. For any task where the output directly and immediately, because of close feedback loops, changes the inputs for the next unit of creation and consumption, the hierarchy fails.  The rigid division of labour into separate tasks, modules, parts or components does not work - because it fundamentally needs repeatability.   

When operating at speed and scale - but there is a high level of repeatability, hierarchy delivers, and we know its problems.  However, data is enabling us to operate at speed, scale and uniqueness (non-repeatability) - this is different. 


---

We have recognised for a while that leadership/ chief roles are actually a representation of the ontology of an organisation (structure), the question has never been if you need a CEO, CFO, CMO, CTO as these were clearly delineated roles and specialisms.  We understand the roles, mandates, powers and delegated authority the experts and leaders bring and have.  

In a digital business, we have created four roles CTO, CIO, CDO and CISO. But the reality is that the CMO and CDO need the CFO’s data. The CFO wants the COO and CMO’s data. The CIO has lots of data but no direct path to showing value. The CISO needs other data from outside the organisation and the CTO needs data the CMO has not yet imagined. Organisations opted to create new roles such as Chief Data or Digital Officer to address the emerging complexity and to avoid too much power being vested in the CTO.  

Breaking the axis of old - the CEO/ CFO

The CEO/CFO axis has this framing: we'll do more of the things that are cheaper in the short term and less of those that are more expensive in the long term, as our incentives and remuneration guide us.  In a company with unified incentives, there is a unification of framing, which means it is difficult for new ideas to emerge.  This incentive friction, or incentive dependency, means we always face the same decisions in the same way.

Many leaders are realising that data is forging changes, and this change is not about tech, data or analysis, but it is about whether we, the leadership, exec or board, are making the right decisions.  Decisions of the past were framed by the ideal hierarchy underpinned by efficiency and effectiveness at scale driven by the demands for specialisation. However, to know we are making the right decision going forward, we need to understand we have the right choices today, which means we need all “data” and our data can no longer be about isolation, separation or specialisation. 

I expect that we are about to see a new axis of power arrive with the emergence of a new chief role as we realise, we need a new power dynamic structure.  A two-way axis (CEO/CFO) becomes a three-way stool.  The emergence of the CEO/ CFO/ CxO. The CEO/ CFO will maintain the remits of controlling and understanding the speed and scale at which the hierarchy operates,  with the new role focussing on uniqueness - how data breaks the old hierarchy.   It does not matter what you call the X, the point is what is their mandate.  They have to own: 

the data philosophy

all data emerging from the company and its ecosystem 

data governance and security

direct the current and future platform's abilities, functionality and performance

The point here is that this role is not about the analysis of the data but about ensuring the quality, attestation, providence, provenance, function, form, taxonomy, ontology and pedagogy of the data needed for decisions, and about ensuring that we will continue to have access to, and enjoy, the data we need to make better decisions.

The critical point is that the CEO/CFO once could know if they were making the right decisions, because the framing they needed was about efficiency and effectiveness within a specialism model; going forward, this axis is not only insufficient, it is damaging. This new third member is needed to provide modern-day stabilisation and governance to ensure that, as an expanded team, "we know we are making the right decisions", as we must regularly ask efficacy questions that were not needed under the old repeatability model, where the mission was a sufficient guide.  

The first victim will be the CDO, as the role exists in a specialisation-driven hierarchy that has no choice but to change - not just its structure, but how it thinks. 






Doc Searls Weblog

The News Business

Seventh in the News Commons series.

A display in the Breaking the News exhibit at the Monroe County History Center

How does the news business see itself?

Easy: ask an AI. Or a lot of them.*

That’s what I’ve been doing. Unless otherwise noted, all the following respond to the same three-word prompt: the news business. Here goes…

Microsoft Bing (Full name: Microsoft Bing Image Creator from Designer), which uses DALL-E 3:

Dream Studio by Stability.ai (which, as you see, required a longer prompt than I used with the others):

Deep Dream Generator:

Adobe Firefly:

Craiyon, again with a longer prompt:

Stable Diffusion:

Finally, a series from DeepAI., each generated in a different style.

First, impressionism:

Surreal graphics:

Renaissance painting:

Abstract painting:

AI art:

What do these say about the news business? Well,

It's mostly male.
It's mostly about newspapers, somewhat about TV, and it idealizes both.
It used to be big.
It doesn't know what to make of the Internet.
It's obsolete in the extreme.

For most of the prior century, the news business was big. In tech parlance, it scaled. Here in the U.S. and Canada, every town had a newspaper, and in some cases several. Many towns—and all cities—had radio stations. Every name-brand city had a TV station, or two, or more. The great newsweeklies, Time and Newsweek, had millions of subscribers and made lots of money. So did TV network news operations. Newsstands were everywhere.

All of that has collapsed. Some print and broadcast news operations still exist, but most are shells of their former selves, and many put news icing on a cake of partisan talk shows. Exceptions to collapse are the surviving news giants (New York Times, Washington Post, Wall Street Journal), and resourceful public broadcasters. (Pew Research shows NPR’s audience has long topped 20 million people, though it is slowly declining.)

People today get most of their news through phones, tablets, and laptops. These are packed with apps that maximize optionality. People now hardly listen, watch, or read on schedules set by publishers, stations, or networks. Everyone with a smartphone has a limitless variety of news sources. Or sources within sources such as Instagram, TikTok, YouTube, and old-fashioned social media such as Facebook and X.

According to Pew, the top news sources for young people today are TikTok and social media. In other words, from each other. The threshold of news creation and production is also low. This is why, according to Exploding Topics, there are now over three million podcasts worldwide.

As Scott Galloway put it in a recent Pivot podcast (which I can’t find right now), news is a shitty business—at least if you want to scale up something huge. It’s not even a great small business. But hell, neither is running a restaurant, a nail salon, a clothing shop, or a small farm. But those are real businesses.

As Jeff Jarvis makes clear in The Gutenberg Parenthesis: The Age of Print and Its Lessons for the Age of the Internet (which I highly recommend), we are at the end of one long era and the start of another one.

In these early years of The Internet Age, the most substantive news and news businesses are the local kind. True, not everybody cares about local news. But everybody lives somewhere, and it does matter what goes on where people live. Belonging somewhere in the virtual world is optional, but it is mandatory in the physical one. And, as with running a restaurant, a store, or a farm, reporting local news is a labor of talent and love. It’s what we still call “a living.”

Right now there are three models for the local news business: advertising, subscription, and philanthropy. In my next post, the eighth in this series, I’ll lay out the case for a fourth one.

*I didn’t try Midjourney, DALL-E 3, or Stable Diffusion because they all require subscriptions, and I don’t feel like paying for those yet. DALL-E 2 yielded blah results.

Wednesday, 03. January 2024

Damien Bod

Securing a Blazor Server application using OpenID Connect and security headers

This article shows how to secure a Blazor Server application. The application implements an OpenID Connect confidential client with PKCE using .NET 8 and configures the security headers as best possible for the Blazor Server application. OpenIddict is used to implement the identity provider and the OpenID Connect server.

Code: https://github.com/damienbod/BlazorServerOidc

OpenID Connect flow

In the first step, the authentication can be solved using OpenID Connect. With this, the process of user authentication is removed from the client application and delegated to an identity provider. In this demo, OpenIddict is used. The OpenID Connect code flow with PKCE is used and the application uses a client secret to authenticate. This can be further improved by using a certificate and client assertions when using the code from the OpenID Connect flow to request the tokens. The flow can also be improved to use OAuth 2.0 Pushed Authorization Requests PAR, if the identity provider supports this.

In ASP.NET Core or Blazor, this can be implemented using the Microsoft ASP.NET Core OpenIdConnect package.

Microsoft.AspNetCore.Authentication.OpenIdConnect

The authentication flow requires some service setup as well as middleware changes in the pipeline. The solution uses the OpenID Connect scheme to take care of the user authentication and saves this in a cookie. The name claim is set to use the "name" claim; this differs between identity providers and also depends on how the ASP.NET Core application maps it. All Razor Page requests must be authenticated unless otherwise specified.

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
    builder.Configuration.GetSection("OpenIDConnectSettings").Bind(options);
    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.SaveTokens = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name"
    };
});

builder.Services.AddRazorPages().AddMvcOptions(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

builder.Services.AddControllersWithViews(options =>
    options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));

The application pipeline is set up after the services are built. ASP.NET Core adds some automatic claim mapping and renames some claims by default. This can be reset so that the exact claims sent back from the identity provider are used.

var app = builder.Build();

JsonWebTokenHandler.DefaultInboundClaimTypeMap.Clear();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseSecurityHeaders(
    SecurityHeadersDefinitions.GetHeaderPolicyCollection(
        app.Environment.IsDevelopment(),
        app.Configuration["OpenIDConnectSettings:Authority"]));

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.MapRazorPages();
app.MapControllers();
app.MapBlazorHub().RequireAuthorization();
app.MapFallbackToPage("/_Host");

app.Run();

CascadingAuthenticationState

The CascadingAuthenticationState component is used to enforce and share the authentication state across the UI and the Blazor Server components. If the user is not authenticated and authorized, the user is redirected to the identity provider.

@inject NavigationManager NavigationManager

<CascadingAuthenticationState>
    <Router AppAssembly="@typeof(App).Assembly">
        <Found Context="routeData">
            <AuthorizeRouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)">
                <NotAuthorized>
                    @{
                        var returnUrl = NavigationManager.ToBaseRelativePath(NavigationManager.Uri);
                        NavigationManager.NavigateTo($"api/account/login?redirectUri={returnUrl}", forceLoad: true);
                    }
                </NotAuthorized>
                <Authorizing>
                    Wait...
                </Authorizing>
            </AuthorizeRouteView>
        </Found>
        <NotFound>
            <LayoutView Layout="@typeof(MainLayout)">
                <p>Sorry, there's nothing at this address.</p>
            </LayoutView>
        </NotFound>
    </Router>
</CascadingAuthenticationState>

Authentication flow UI Pages

The menu needs to display the login or logout buttons depending on the state of the application. Blazor provides an AuthorizeView component for this. It can be used in the UI to hide or show elements depending on the authentication state.

<AuthorizeView>
    <Authorized>
        <span>@context.User.Identity?.Name</span>
        <form action="api/account/logout" method="post">
            <button type="submit" class="nav-link btn btn-link text-dark">
                Logout
            </button>
        </form>
    </Authorized>
    <NotAuthorized>
        <a href="api/account/login?redirectUri=/">Log in</a>
    </NotAuthorized>
</AuthorizeView>

I added login and logout actions in an account controller and a signed out Razor Page to handle the authentication requests from the UI. The application authenticates automatically, so the login is only used on the signed out page. The logout removes the authenticated user from both schemes: the cookie and the OpenID Connect scheme. To remove the authenticated user from the identity provider, the application needs to redirect; this cannot be implemented as an ajax request.

[IgnoreAntiforgeryToken] // need to apply this to the form post request
[Authorize]
[HttpPost("Logout")]
public IActionResult Logout()
{
    return SignOut(
        new AuthenticationProperties { RedirectUri = "/SignedOut" },
        CookieAuthenticationDefaults.AuthenticationScheme,
        OpenIdConnectDefaults.AuthenticationScheme);
}
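
The logout action is shown above; the matching login action referenced by the UI (api/account/login?redirectUri=...) is not reproduced in this excerpt, so the following is only a sketch of what such an action typically looks like, issuing a challenge against the OpenID Connect scheme:

[HttpGet("Login")]
public IActionResult Login(string? redirectUri)
{
    // Challenge the OpenID Connect scheme; the handler redirects to the identity
    // provider and returns the signed-in user to redirectUri afterwards.
    return Challenge(
        new AuthenticationProperties { RedirectUri = redirectUri ?? "/" },
        OpenIdConnectDefaults.AuthenticationScheme);
}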

The SignedOut page is the only page or component in the application that does not require authentication. The AllowAnonymous attribute is used for this.

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace BlazorServerOidc.Pages;

[AllowAnonymous]
public class SignedOutModel : PageModel
{
    public void OnGet() { }
}

Security headers

The user is authenticated. The application needs to protect the session as well. I use NetEscapades.AspNetCore.SecurityHeaders for this.

NetEscapades.AspNetCore.SecurityHeaders
NetEscapades.AspNetCore.SecurityHeaders.TagHelpers

Blazor Server is a server rendered application with UI components. The application can implement a nonce-based CSP. All the typical security headers are added as best possible for this technology.

namespace BlazorServerOidc;

public static class SecurityHeadersDefinitions
{
    public static HeaderPolicyCollection GetHeaderPolicyCollection(bool isDev, string? idpHost)
    {
        ArgumentNullException.ThrowIfNull(idpHost);

        var policy = new HeaderPolicyCollection()
            .AddFrameOptionsDeny()
            .AddContentTypeOptionsNoSniff()
            .AddReferrerPolicyStrictOriginWhenCrossOrigin()
            .AddCrossOriginOpenerPolicy(builder => builder.SameOrigin())
            .AddCrossOriginResourcePolicy(builder => builder.SameOrigin())
            .AddCrossOriginEmbedderPolicy(builder => builder.RequireCorp()) // remove for dev if using hot reload
            .AddContentSecurityPolicy(builder =>
            {
                builder.AddObjectSrc().None();
                builder.AddBlockAllMixedContent();
                builder.AddImgSrc().Self().From("data:");
                builder.AddFormAction().Self().From(idpHost);
                builder.AddFontSrc().Self();
                builder.AddBaseUri().Self();
                builder.AddFrameAncestors().None();

                builder.AddStyleSrc()
                    .UnsafeInline()
                    .Self();

                builder.AddScriptSrc()
                    .WithNonce()
                    .UnsafeInline(); // only a fallback for older browsers when the nonce is used

                // disable script and style CSP protection if using Blazor hot reload
                // if using hot reload, DO NOT deploy with an insecure CSP
            })
            .RemoveServerHeader()
            .AddPermissionsPolicy(builder =>
            {
                builder.AddAccelerometer().None();
                builder.AddAutoplay().None();
                builder.AddCamera().None();
                builder.AddEncryptedMedia().None();
                builder.AddFullscreen().All();
                builder.AddGeolocation().None();
                builder.AddGyroscope().None();
                builder.AddMagnetometer().None();
                builder.AddMicrophone().None();
                builder.AddMidi().None();
                builder.AddPayment().None();
                builder.AddPictureInPicture().None();
                builder.AddSyncXHR().None();
                builder.AddUsb().None();
            });

        if (!isDev)
        {
            // maxage = one year in seconds
            policy.AddStrictTransportSecurityMaxAgeIncludeSubDomains();
        }

        policy.ApplyDocumentHeadersToAllResponses();
        return policy;
    }
}

The CSP nonces are added to all scripts in the UI.

@using Microsoft.AspNetCore.Components.Web
@namespace BlazorServerOidc.Pages
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@addTagHelper *, NetEscapades.AspNetCore.SecurityHeaders.TagHelpers

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <base href="~/" />
    <link rel="stylesheet" href="css/bootstrap/bootstrap.min.css" />
    <link href="css/site.css" rel="stylesheet" />
    <link href="BlazorServerOidc.styles.css" rel="stylesheet" />
    <component type="typeof(HeadOutlet)" render-mode="ServerPrerendered" />
</head>
<body>
    @RenderBody()

    <div id="blazor-error-ui">
        <environment include="Staging,Production">
            An error has occurred. This application may no longer respond until reloaded.
        </environment>
        <environment include="Development">
            An unhandled exception has occurred. See browser dev tools for details.
        </environment>
        <a href="" class="reload">Reload</a>
        <a class="dismiss">🗙</a>
    </div>

    <script asp-add-nonce src="_framework/blazor.server.js"></script>
</body>
</html>

Next steps

In a follow up blog, I would like to implement the same type of security for the new .NET 8 Blazor web applications.

Links

https://learn.microsoft.com/en-us/aspnet/core/blazor/security/server/

https://learn.microsoft.com/en-us/aspnet/core/blazor/security/server/interactive-server-side-rendering

https://learn.microsoft.com/en-us/power-apps/developer/data-platform/webapi/quick-start-blazor-server-app

https://stackoverflow.com/questions/64853618/oidc-authentication-in-server-side-blazor

How OpenID Connect Works

https://github.com/openiddict/openiddict-core

https://datatracker.ietf.org/doc/html/rfc9126

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/claims

Tuesday, 02. January 2024

IDIM Musings

Heading to ISO SC 37 — Biometrics January 2024

I've joined the Standards Council of Canada's mirror committee for ISO SC 37 (Biometrics). My focus is on proving liveness and biometric matching for face and consumer-operated sensors. Excited to explore international interest at the upcoming January 2024 SC 37 plenary in Okayama.

Another year, another ISO committee! 

I have joined the Standards Council of Canada mirror committee for SC 37 (Biometrics) and will be attending the international meetings. The hard work has begun learning the committee culture, personalities and flow of projects.

Getting started in an ISO committee involves applying for accreditation as a national standards body expert in the domain; connecting up to the wikis and email lists; attaching to the document management system; and locating the ‘active coal face’1 – the precise status of each interesting project. Not trivial. It’s the same pattern for every standards body – getting caught up takes effort.

There are six work groups within SC 37. I’ll be spending most of my time in WG 5 (Biometric Testing and Reporting) and WG 4 (Technical Implementation of Biometric Systems). WG 1 (Harmonized Biometric Vocabulary), WG 2 (Biometric Technical Interfaces), WG 3 (Biometric Data Interchange Formats) and WG 6 (Cross-Jurisdictional and Societal Aspects of Biometrics) will receive attention as and when needed.

Here are some of the projects that look immediately interesting to me. My opinion will certainly change as I get into the work.

ISO/IEC 30107-3 — Biometric presentation attack detection — Part 3: Testing and reporting
ISO/IEC 30107-4:2020 — Information technology — Biometric presentation attack detection — Part 4: Profile for testing of mobile devices
ISO/IEC TS 19795-9:2019 — Information technology — Biometric performance testing and reporting — Part 9: Testing on mobile devices
ISO/IEC 20059 — Vulnerability of Biometric Recognition Systems with Respect to Morphing Attacks
ISO/IEC 9868 — Remote biometric identification systems — Design, development, and audit
ISO/IEC 29794-5 — Face image quality assessment
ISO/IEC TR 22116:2021 — A study of the differential impact of demographic factors in biometric recognition system performance
ISO/IEC 19795-10 — Quantifying biometric system performance variation across demographic groups
ISO/IEC 39794-5:2019 — Information technology — Extensible biometric data interchange formats — Part 5: Face image data

One observation that I must confirm: SC 37 appears to be oriented to controlled-environment biometric systems, where the operator of the system controls the sensors, hardware, and software.

I believe, for now, that there are additional design requirements for the exponentially growing bring-your-own biometric sensor (on your mobile device). And this aspect is very interesting to me. My company’s product does remote ID Proofing and Verification using biometric evaluation for linking, and biometric re-verification (aka “returning user authentication”). We construct 3D face models for proof of liveness and matching. We are concerned with proven liveness and end-to-end device/sensor security (for mobile device cameras and webcams).

Performance testing, presentation attack detection/performance, mobile-first and face-first topics are hot now.

I’m about to find out the level of international interest and state of standardization in these areas during the SC 37 plenary at Okayama in January 2024 – wish me luck!

Realizing that ‘active coal face’ is a dying idiom — but what is the climate-friendly replacement? Maybe the ‘code repo pull requests page’? I invite you to respond!

Sunday, 31. December 2023

Doc Searls Weblog

Choices

Comment with wrong captions only.
Choose wisely.

Comment with wrong captions only.

Saturday, 30. December 2023

Just a Theory

Times Up

December 22 was my last day at The New York Times. I will miss many colleagues and the Guild, but it’s time.

December 22, 2023 was my last day at The New York Times. My tenure was just under two and a half years.

My first 19 months at the company were pretty much everything I had hoped, as I collaborated with Design and Product to design a distributed platform and conceived, designed, and implemented CipherDoc, a service for encrypted data management. I’m incredibly proud of that work!

But alas, plans change. After the company mothballed the project, I refocused my time on glue work: re-platforming services, upgrading runtimes and dependencies, improving logging and observability, and documenting code, architectures, and playbooks. It felt good to reduce the onboarding, maintenance, and on-call overhead for my teams; I hope it helps them to be more productive and fulfilled in their work.

I treasure the terrific people I met at The Times, just super thoughtful, empathetic, and creative co-workers, partners, and colleagues. It’s no wonder they had the energy to form a union, The Times Tech Guild, for which I’m gratified to have helped organize and steward members. The Guild connected me with a far broader range of talented and committed people than I would have otherwise. I will miss them all, and continue to cheer for and support the movement from the outside.

And now, as 2023 winds down, I’ve decided to try something new. More news in the new year!


IDIM Musings

It’s been a busy few years…

In 2019, I transitioned from independent consulting to IDEMIA, contributing to the ISO subcommittee for Mobile Driving License. In 2021, I joined Ping Identity, focusing on government-issued digital credentials and inter-company implementations. I also continued my involvement with Kantara Initiative. As of 2023, I'm at Facetec, working on liveness-proven biometric verification and collaborating wi

In 2019 I decided to leave independent consulting and join a large corporation – IDEMIA – you might notice their logos at airport security stations. My job was to be embedded in the ISO subcommittee for Mobile Driving License, a.k.a. the ISO 18013-5 mDL standard and to support IDEMIA’s mobile eID/mDL product team. This was a great extension of my existing work on digital credentials in other ISO subcommittees and W3C groups. Being tied to a product team was a new experience for me!

In 2021 I moved to Ping Identity to support the product team for PingOne Neo – ID Verify and Digital Credentials. My focus remained on government-issued digital credentials via ISO mDL and to establish implementations of inter-company digital credentials . I had the privilege of working beside incredibly experienced and generous standards experts who continue to be deeply embedded in OpenID Foundation, IETF, FIDO, Open Wallet Foundation core work. The novelty of Ping was to see how a new product can be engineered to fit into an overarching suite of enterprise products. The interplay of product and corporate priorities on the one hand and the long time scale of standardization on the other led to some “interesting” discussions over my time at Ping.

One constant throughout has been my involvement with Kantara Initiative. This non-profit corporation continues to be a bright light in the world of trustworthy use of identity and personal information. I’m currently Chair of the Identity Assurance Work Group and, since 2023, Chair of the Board of Directors. Kantara has had great fortune in our Executive Directors over the years – from Joni Brennan to Colin Wallis and now Kay Chopard. The Board is working hard to strengthen our program offerings (as a conformance body for NIST 800-63 and the UK DIATF), expand into new geographies, and sustain innovation in our community work groups – while remaining fiscally prudent and in line with our fiduciary duty to the organization and its members.

At the end of 2023, I have decided to pivot my attention to a big missing piece in the digital credentials story – how to ensure that the recipient of a digital credential is a real human (and the intended one), and that the presenter of a digital credential is in fact authorized to do so. I have joined Facetec to dive deep into the world of liveness-proven biometric verification. Their tech is impressive. The SDK uses the mobile device camera to construct a 3D face model of the person from which it determines whether there’s a real living human present, whether the session is a spoof attempt (using physical artifacts or video playbacks), and whether images are being injected bypassing the camera subsystem. The server SDK completes the liveness analysis and performs highly-accurate matching against a known-good reference. My job? Same-same. Working with international standards bodies (such as ISO SC 37) and testing labs to develop or extend standards for performance evaluation of this new 3D face biometric mode. I’m ingesting big stacks of documents and trying to work out where standardization of user-supplied biometric-enabled devices stands today – in comparison with the well-established border security/public safety modes of operation. This is exciting new ground and appears to be mostly greenfield – and it all ties back to the IDM and digital credentials industry.

Thursday, 28. December 2023

Heres Tom with the Weather

Social Web 101

Whether it’s the Indieweb or the Fediverse, you should not expect to be able to make a reply or do anything else other than view posts on someone else’s domain (e.g. example.com). If you end up on someone else’s domain and wish to interact with them, in general, you should hop back to your site or app and interact with them from there. It’s like riding a bike and you’ll soon forget it was ever a

Whether it’s the Indieweb or the Fediverse, you should not expect to be able to make a reply or do anything else other than view posts on someone else’s domain (e.g. example.com). If you end up on someone else’s domain and wish to interact with them, in general, you should hop back to your site or app and interact with them from there. It’s like riding a bike and you’ll soon forget it was ever a challenge.

Wednesday, 27. December 2023

Jon Udell

Watch your hands

I’m lucky to have two hands, let’s be clear, and the minor struggles I’ve had with them over the years don’t qualify as any kind of real hardship. Yet there have been a lot of small injuries — an ongoing annoyance made worse by being mostly my fault. Consider the most recent incident. To relieve … Continue reading Watch your hands

I’m lucky to have two hands, let’s be clear, and the minor struggles I’ve had with them over the years don’t qualify as any kind of real hardship. Yet there have been a lot of small injuries — an ongoing annoyance made worse by being mostly my fault.

Consider the most recent incident. To relieve strain on my left hand, I switched a few months ago from a steel-string to a nylon-string guitar. I knew I wanted the lesser force required to press nylon strings, and the vacation from steel strings has really helped. The wider spacing between the strings is also better for my hands, I realized as I settled in. I’d started on a classical guitar but then hadn’t owned one in decades; it feels good to have one again, and I’ve been playing it a lot.

Being the guy who wrote a blog post about an early warning system for RSI not even six months ago, I see the absurdity of my situation. Back in July I was rehabbing an overextended little finger. Now I’m rehabbing a thumb and related muscles insulted by my failure to properly adapt to the new instrument.

You can wrap your thumb around the narrower neck of a steel-string guitar in order to grab the lowest string. You can do that with the wider neck of a classical guitar too. But as I probably learned the first time and then forgot, you really shouldn’t. A D major chord with F# in the bass is too much of a stretch for the thumb, at least for my thumb, on a classical guitar. You won’t see classical guitarists do that. Instead they’ll make a tripod with the index, middle, and ring fingers.

So once again I get to rewire my hand posture. Which, again, is a minor hardship, not remotely comparable to the guy I mentioned last time who had to switch sides and learn to fret with his right hand. As I also mentioned there, he found an upside. Now he’s a switch-hitter who can use both halves of his brain directly. In my case, I’m trying to embrace the rewiring as a way to break a habit and form new neural pathways. It’d be nice, though, if that weren’t always a response to self-inflicted injury!

But like I said, it’s a minor hardship. My hands could have been mangled in my dad’s car engine that one time, or in the anchor chain of Ben Smith’s boat that other time: two escapes from disaster that still provoke the occasional nightmare. I’m lucky to have these two hands, and again vow to take better care of them.

Why is it so hard (for me, at least) to detect and avoid injurious hand postures? I guess it’s because whatever you’re projecting — when you write sentences or lines of code, or play notes and chords — has to flow through your hands with minimal conscious attention to your hands. Note to self: pay more attention.

Tuesday, 26. December 2023

Jon Udell

Critical mass in the Goldilocks zone

The use of the phrase “critical mass” in this NYTimes story about the enshittification of Goodreads stopped me in my tracks. Give all of Goodreads’s issues, it might seem easy enough to encourage writers and readers simply to flock to another forum. Sites like The Storygraph and Italic Type have sprung up as promising alternatives, … Continue reading Critical mass in the Goldilocks zone

The use of the phrase “critical mass” in this NYTimes story about the enshittification of Goodreads stopped me in my tracks.

Given all of Goodreads’s issues, it might seem easy enough to encourage writers and readers simply to flock to another forum. Sites like The Storygraph and Italic Type have sprung up as promising alternatives, but they’re still far from reaching a critical mass of users.

Nuclear physicists know they are dancing with the devil when they bring fissile material to criticality. They also know that the reaction can be controlled, that it must be, and that the means of control obey well-understood principles.

Social sites typically push toward supercriticality with no such understanding. If Goodreads enshittifies at 125 million users, why would another service expect a different outcome at similar scale?

We can learn from a natural experiment. Not mentioned in the story is a long-running service, LibraryThing, that’s been going strong since 2005. I interviewed its founder and lead developer, Tim Spalding, back in 2008. Listening to that interview again today reminded me that everything I loved about LibraryThing remains important and matters even more now.

LibraryThing was, and remains, a place where you make and share lists of books in order to connect with other people and with books — not primarily via friend relationships but rather book relationships. It’s a small business that’s kept Tim and his crew happily engaged in serving a few million bibliophiles, some of whom pay a membership fee to be super-cataloguers.

I’m not in LibraryThing’s core demographic. Books aren’t as central to my life as they are to members of the service who carefully curate their own lists, tag books and authors, contribute to a common knowledge wiki, and write reviews. But I appreciate their work when I visit the site.

Today I added Ed Yong’s remarkable An Immense World to my list. Among the book’s dozens of reviews on the site, I found a 2000-word essay that usefully highlights many of the strange (to humans) powers of animal perception that Yong describes.

I guess LibraryThing isn’t on the Times’ radar because it hasn’t reached a critical mass of … what, exactly? Tens of millions of people? Hundreds of millions? I’m glad it hasn’t! That’s a recipe for meltdown. LibraryThing has been going strong, for almost two decades, in the Goldilocks zone: neither too little activity nor too much, just the right amount for meaningful experiences at human scale.

I feel the same way about Mastodon. Conventional wisdom says it’s dead in the water: “nobody” goes there, no Mastodon apps rank highly in the app stores. But if critical mass means operating at the scale of Twitter or Facebook, then who wants that? Who benefits from the inevitable enshittification? Not me, and probably not you.

LibraryThing shows that a controlled reaction, at smaller scale, is sustainable over time. Mastodon so far has been successful in the same way, and I see no reason why that can’t continue. Although Mastodon is young, my connections there date back to social networking’s roots in the early blogosphere. It feels like the right amount of critical mass. For me, a billion people on Mastodon is an anti-goal. I would much rather see hundreds or maybe thousands of healthy communities emerge, each operating in its own Goldilocks zone. Many small and safe critical masses, instead of a few huge and dangerous ones, powering small businesses whose proprietors are — like Tim Spalding and his crew — able to maintain real relationships with their customers.

That global conversation we thought we were having on Twitter? We don’t know how to control that reaction and I’m not sure it makes sense to try.


Wrench in the Gears

My Quiet Christmas

As I have come to see it, the world consists of mounds of information, much of it digital. Our days are spent encountering, reacting, and sifting through it. Our conscious genius pulls together threads from which are crafted uplifting, foreboding, humorous, and tragic stories. We live inside these stories alone or with like-minded people. Most [...]

As I have come to see it, the world consists of mounds of information, much of it digital. Our days are spent encountering, reacting, and sifting through it. Our conscious genius pulls together threads from which are crafted uplifting, foreboding, humorous, and tragic stories. We live inside these stories alone or with like-minded people. Most of the time our minds simply tweak and update unfolding dramas of consensus reality or a particular flavor of alternate reality; it’s an efficient, largely automatic process.

Rarely do we toss out whole plot lines and start from scratch. Why? Well, there would be so many subplots left hanging and so many question marks around our relational identities that the uncertainty of getting everything sorted out means it’s generally easier to keep on the established path. Jettisoning a comfortably broken-in outlook is a very messy business; and how often, really, do we feel the need to, or empowered to, rip apart the boxes we inhabit?

Each of us has the ability to craft amazing stories out of fascinating facts and extend invitations to loved ones and complete strangers to come inside and look around, sit a spell, and make themselves at home. There is, however, no way to compel another person to hang out in the story you made. You can’t even force them to cross your threshold. Exhort and cajole all you want, the stories we inhabit remain a personal choice, as they should be. Just as you wouldn’t want to have to live inside someone else’s ill-fitting story, the same goes for everyone else. It’s a hard truth. It can hurt.

Our acceptance of the primacy of the digital world has meant that the velocity and intensity of social physics has changed. My experience of these changes has been dramatic and potentially catastrophic, at least in the short run. The web3 protocol layer system seems intent on launching us all into a digital-twin simulation cloudmind whether we’ve elected to participate or not. Left in the wake of this century-long noetic campaign is the wreckage of once deeply-held relationships, relationships with those who are (or were) close to us in heart – actual physical hearts, not emojis, up-votes, and clicks.

There are too many digital stories, too many doors, too much unease with the masses either searching for answers or working really hard to avoid seeing the holes being torn in the fabric of what we once understood as “reality.” Our story rooms may now be filled with people from far flung corners of the globe. These are people whose heart fields we’ll probably never have the opportunity to feel directly – no mingled auras from warm embraces, conversations over cups of tea, shared meals, card games, porch sitting, bonfires, jam sessions.

In some cases, those whose heart fields we shared in the actual world have wandered off to the digital rooms of sirens, forsaking old connections for greener pastures. The transformation, turbo-charged by the lockdowns, happened with ferocious speed and offered little chance for redress, remedy, or reconciliation. We didn’t know what hit us. Then it was over, and everything had changed.

This is the first Christmas I’ve ever spent alone, totally alone. I was cancelled for breaking out of my box, looking around, asking questions, and imploring people to start having conversations about emerging technology and cybernetics and finance and consciousness and gaming before we go the way of the Lotus Eaters. Maybe I was cancelled for my intensity. I don’t know, the only thing I was told is that it was about “my research.”

I’ve made it through a tearful week. Lots of tears shed during my Christmas Eve read aloud finishing “Momo.” I apologize for the five-month gap, but I know in my heart I was supposed to read it last night. I keep saying time is not what we think it is. In the end of Ende’s story, the hour-lily wielding Momo and the tortoise Cassiopeia save the day and defeat the time bankers. The children return to the amphitheater for free play, grown-ups shelve their Taylorist tendencies, the town’s humanity is restored, and Momo doesn’t have to be alone anymore.

The “Momo” read-aloud playlist here.

I feel we are on Earth, sparks of God’s divine creation, to experience, falter, learn, and feel in the depths of our souls. The holidays are a season of big feelings – even more so when you’ve lost a loved one to the great beyond and other loved ones to the pit-traps of sinister social physics. The afternoon was mild here in Philadelphia, and I took the opportunity to fill my basket with gatherings and head out to the Kelpius Cave in the Wissahickon- alchemy of the soul. What better balm could there be than to lay out a heart-felt intention to close a rocky year?

When I thought that I was going to decamp to Seattle in the summer I took all of my precious nature materials with a friend to Valley Forge and created a magnificent spiral between Mount Misery and Mount Joy. Since I returned, I’ve accumulated a few more items – feathers and quartz from Arkansas, acorns from Laurel Hill cemetery, dried flowers from birthday carnations I bought myself, and gifts from friends across the country. I took them with a candle to the park and lit up the darkness and asked God for guidance to do the next right thing.

Once you’ve ripped up the boxes you really need a divine compass.

The coming year should bring new opportunities in Hot Springs, Arkansas after we sell the family home in the spring – fingers crossed someone wants to pay a good price for a well-loved 1880s row house with many original features. I’m hoping 2024 will be gentler than the clear-the-decks tsunami of the past six months. I met my husband when I was twenty. We grew up together. This divorce is my first break-up. I know that sounds ridiculous, and yet it’s true. I am my own person, and I know my own mind. That said, I’m going to have to craft my own new traditions and ways of doing things as a single person and doing that feels overwhelming right now. Will I ever celebrate Christmas again? It’s hard to say. It’s also been two years since my child has spoken to me. Prayers for reconciliation would be welcome. I still love my family. I am fine with them living in the story of their choice. I think we could create a shared space that bridges our separate boxes.

Despite the waves of difficulty I’ve been navigating since 2019, I know I would die a dysfunctional person had I elected to live the rest of my life inside stories I knew to be fundamentally flawed in their incompleteness and injurious to my spirit. So, I’m just going to put one foot in front of the other and try not to be afraid of a future with a lone heart-field.

I’m Jerry Hawver’s daughter, and Hawvers “git er done.”

Merry Christmas everyone. Thanks for stopping by my story, even if we can’t mingle our actual auras. May the new year bring blessings to you and your loved ones. Our power is in the heart – hugs all around. Hug your people as much as you can while you have them near you. It’s a sad day when you have no one to hug. 

A light in the darkness.

Stones and brick, rubble from the floor of the “cave,” outline the heart.  A ring of soft pine needles mingled with dried purple carnations and white Alstroemeria sprinkled with green leaves gifted from California and the last of my creosote gifted to me last year by Dru in Tucson for my birthday – it smells like the desert in the rain.

The next ring was an outline of pinecones gathered in the park with ferns and white pine sprigs and gifted dandelion seed heads that lit up the dark like stars.

I used a large stone as a centerpiece to hold many assorted lovely things lit by a candle.

A soft, leathery buckeye casing holding a spiral shell and manzanita berries with some Arkansas quartz below it.

Quartz and a shed snakeskin for sparkly new beginnings.

The last sprig of sage from Standing Rock gifted to me by Jason with a bit of fungus above it and acorns from which mighty things grow.

And an oak leaf for strength and fortitude.

River pebbles – may the rich tapestry of our experiences, both beautiful and challenging, rub us to a polished crystalline smooth symmetry.

May we have the wits to find our way through the worm holes and arrive on the other side.

Smoke from a wrapped mullein leaf gifted to me by a beautiful family in Washington state, thank you kind herbalist and standing farmer.

And this ridiculous figurine was already there when I arrived, in a niche made from where a portion of the wall had fallen away. I have no idea of the meaning behind it, but I suspect the trickster energy was perhaps expecting me. Bittersweet.

Philadelphia has brought me a depth of understanding my life would not have had if I had landed somewhere else. While I am grateful for its many stories, I see it’s time to move on. Goodbye for now Kelpius Cave, ritual anchor for the Nephele era.

 

Sunday, 24. December 2023

@_Nat Zone

Season's Greetings: Bach, Siciliano (Flute and Spinet)

Season's greetings again this year. I bring you Bach's Siciliano, set mostly against scenery of Sicily. The instruments are a cocuswood flute, Louis (serial number 81), made in London in the 1920s, and a TOKAI spinet (a small harpsichord) probably built about 40 years ago…

Season's greetings again this year. I bring you Bach's Siciliano, set mostly against scenery of Sicily. The instruments are a cocuswood flute, Louis (serial number 81), made in London in the 1920s, and a TOKAI spinet (a small harpsichord) probably built about 40 years ago. The flute was thoroughly overhauled this year and is now in very good condition.

This Siciliano is one of Bach's most famous pieces, but it is also a work suspected of being spurious; the style is a little different from Bach's. It has been suggested that a pupil wrote it, modeled on a piece by his contemporary Quantz, and that Bach then revised it. The attribution to Bach comes from his second son, the likewise great composer C.P.E. Bach (about whom see my blog post "C.P.E. Bach, the Father of Classical Music: 300 Years Since His Birth"), who recorded it as such. It has also been suggested, however, that the pupil was C.P.E. Bach himself, and that he recorded his father's name on a piece of his own that his father had revised.1

This year, too, many things happened. I lost several people close to me.2 The world situation also looks dark, with non-combatants, including many women and children, being massacred. These are days that make one wonder why humanity does not learn from history. Because of that situation, unlike in ordinary years, I chose a non-religious piece. I had actually been thinking of doing Fauré's In Paradisum or the hymn Veni Emanuel, but place names come up in them3, so…

The video behind the music is mostly of Sicily; Mount Etna and the lighthouse at Syracuse appear, among other scenes. May the peaceful scenery and the music bring peace to your hearts. And with that, I wish you a good end of the year.

Saturday, 23. December 2023

Jon Udell

Don’t look ahead. Look sideways as you climb the hill.

I do a lot more cycling in Sonoma County, California than was possible in Cheshire County, New Hampshire. The Mediterranean climate here, which enables me to ride year-round, is a blessing for my mental and physical well-being. And because the topography is even more rugged, I’m doing more climbing that ever. Yesterday, Luann dropped me … Continue reading Don’t look ahead. Look sideways as you clim

I do a lot more cycling in Sonoma County, California than was possible in Cheshire County, New Hampshire. The Mediterranean climate here, which enables me to ride year-round, is a blessing for my mental and physical well-being. And because the topography is even more rugged, I’m doing more climbing than ever.

Yesterday, Luann dropped me at the Coleman Overlook on the coast and I cycled home from there. I’m not a fast climber, I can’t keep up with younger friends when we cycle together, and my rides aren’t extreme by local standards. But my cumulative elevation gain over the course of a year far exceeds what it ever was back east, and I’ve had plenty of time to reflect on climbing strategy.

A better gear ratio would help, but my older road bike won’t accommodate that. So on the steepest pitches I switch to a weaving ascent that eases the grade, which I’ve decided is OK. For a while I resisted shoes with cleats that lock into pedals, but now I use them to gain extra leverage which really helps.

It’s never mainly about the equipment, though, the real challenge is always mental. How do you think about reaching the top of the big hill you’re climbing? One piece of conventional wisdom: don’t look ahead. If you look down at the road you aren’t forced to think about the grade, or your slow progress up it. Instead you see pavement moving beneath you, and feel steady progress.

Of course that defeats the purpose of cycling through spectacular Sonoma County landscapes. Recently a friend suggested a different strategy: look to the side. Of course! There’s little or no traffic on many of these routes, so it’s safe to do that. And the effect is mesmerizing.

I’ve described it like this:

Everything looks different from everywhere. You’re always seeing multiple overlapping planes receding into the distance, like dioramas. And they change dramatically as you move around even slightly. Even just ten paces in any direction, or a slight change in elevation, can alter the sightlines completely and reveal or hide a distant landmark.

So, don’t look ahead to the top of the hill, and don’t look down at the road. Look left and right to see sliding panels of majestic scenery. It really helps!

Wednesday, 20. December 2023

Heres Tom with the Weather

CPJ on NPR

Yesterday, NPR published a 4-minute interview Number of journalists killed in Gaza since Oct. 7 attacks called unprecedented loss with the president of the nonprofit Committee to Protect Journalists. CPJ maintains a list at Journalist casualties in the Israel-Gaza war. Until this unprecedented threat to journalists is mitigated, I would expect to continue to hear a whole lot of nonsense.

Yesterday, NPR published a 4-minute interview Number of journalists killed in Gaza since Oct. 7 attacks called unprecedented loss with the president of the nonprofit Committee to Protect Journalists. CPJ maintains a list at Journalist casualties in the Israel-Gaza war.

Until this unprecedented threat to journalists is mitigated, I would expect to continue to hear a whole lot of nonsense.

Wednesday, 20. December 2023

Mike Jones: self-issued

Ten Years of OpenID Connect and Looking to the Future

Ten years ago today the drafts that would be approved as the final OpenID Connect specifications were published, as announced in my post Fourth and possibly last Release Candidates for final OpenID Connect specifications and Notice of 24 hour review period. The adoption of OpenID Connect has exceeded our wildest expectations. The vast majority of […]

Ten years ago today the drafts that would be approved as the final OpenID Connect specifications were published, as announced in my post Fourth and possibly last Release Candidates for final OpenID Connect specifications and Notice of 24 hour review period.

The adoption of OpenID Connect has exceeded our wildest expectations. The vast majority of federated signins to sites and applications today use OpenID Connect. Android, AOL, Apple, AT&T, Auth0, Deutsche Telekom, ForgeRock, Google, GrabTaxi, GSMA Mobile Connect, IBM, KDDI, Microsoft, NEC, NRI, NTT, Okta, Oracle, Orange, Ping Identity, Red Hat, Salesforce, Softbank, Symantec, T-Mobile, Telefónica, Verizon, Yahoo, and Yahoo! Japan, all use OpenID Connect, and that’s just the tip of the iceberg. While OpenID Connect is “plumbing” and not a consumer brand, it’s filling a need and doing it well.

It’s fitting that the second set of errata corrections to the OpenID Connect specifications were just approved, as described in the post Second Errata Set for OpenID Connect Specifications Approved. While we are proud of the quality of the final specifications, with 9 3/4 years of thousands of developers using and deploying the specifications, it’s unsurprising that issues would be found that needed clarification and correction.

The updated OpenID Connect specifications have just been submitted to the International Organization for Standardization (ISO) for Publicly Available Submission (PAS) status. Approved PAS submissions are published as ISO specifications. This will foster adoption in jurisdictions that require using standards that are published by organizations with international treaty status.

Celebrations of the tenth anniversary of the approval of OpenID Connect will occur worldwide in 2024. The first will be in Asia at the OpenID Summit Tokyo in January. The second will be in the Americas at Identiverse in May. The third will be in Europe at the European Identity and Cloud Conference in June. Join us at these events for the celebrations!

I can’t wait to see what the next decade brings for OpenID Connect!

Monday, 18. December 2023

Damien Bod

Signing git commits on Windows and using with Github

This article shows how to setup and sign git commits on Windows for Github. Most of this is already documented on the Github docs, but I ran into trouble when using this with git Extensions on a windows host. My commits could not be signed until I set the home system variable on the windows […]

This article shows how to set up and sign git commits on Windows for Github. Most of this is already documented in the Github docs, but I ran into trouble when using this with Git Extensions on a Windows host. My commits could not be signed until I set the home system variable on the Windows host.

Install gpg on Windows

To sign git commits, you can download gpg4win for Windows and install it. Git should already be installed. No extra features are required.

Generate a key on Windows

The Windows cmd line is used to generate a new gpg key. I have a safe host PC and do not require a passphrase. Other than this, I just used the default settings to generate a new gpg key.

Note: The requested email needs to match the email used on your Github account.

gpg --full-generate-key

Github docs: https://docs.github.com/en/authentication/managing-commit-signature-verification/generating-a-new-gpg-key

The public key can be exported and uploaded to the Github account.

Use Git Extensions and apply the signing per repo

I would like to use this key for a single repository. I have multiple user accounts for multiple systems and each has different git requirements. For one Github repository, I would like to sign all commits. I generated the key in the global user profile but only use the key on this repository.

In the git bash window from the repository, if you want to sign the commits, the following commands can be executed. I open the bash through Git Extensions. If you already have the key id from creating it, you can skip step 3.

git config commit.gpgsign true

git config --global gpg.program "C:\Program Files (x86)\GnuPG\bin\gpg.exe"

gpg --list-secret-keys --keyid-format=long

git config user.signingkey <your key>

Signing key not found

If the signing key is not found on Windows, it is probably because the application is looking in the wrong location for the gpg key. To validate this, open the git bash from the repository and find the Home: value. This must match the value in the Windows command line.

gpg --version

Find the value for Home:

Now run the same command in the Windows cmd and check that the Home: value matches the one shown in the git bash console. If it does not match, take the Windows cmd Home: value and add it to the system variables.

Add an environment variable called GNUPGHOME with the value found in the gpg --version output.

Now the commit should be signed. The next step is to set up Github to use this key.

Add the gpg key to your Github account

The gpg key can be attached to the Github account. The docs explain this well.

Github docs: https://docs.github.com/en/authentication/managing-commit-signature-verification/adding-a-gpg-key-to-your-github-account

gpg --list-secret-keys

gpg -a --export <your-pub-key>

Display the verified commits on Github: https://docs.github.com/en/authentication/managing-commit-signature-verification/displaying-verification-statuses-for-all-of-your-commits

When you commit and push, Github should display a verified status

You can now require that all commits are verified for a repository

Links

https://www.gnupg.org

https://www.gpg4win.org/

https://docs.github.com/en/authentication/managing-commit-signature-verification/telling-git-about-your-signing-key

https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-personal-account-on-github/managing-email-preferences/setting-your-commit-email-address

Saturday, 16. December 2023

Michael Ruminer

DIDKit and Verifiable Credentials

Recently I made a post on Veramo and Verified Credentials where I discussed a little bit about Veramo and my findings in trying to use it. This post is about DIDKit from SpruceID. DIDKit is another set of packages for creating DIDs, verifiable credentials, verifiable presentations, etc. I only used the command line interface (CLI) tool in my experimentation with it and was entirely focused on DID

Recently I made a post on Veramo and Verified Credentials where I discussed a little bit about Veramo and my findings in trying to use it. This post is about DIDKit from SpruceID. DIDKit is another set of packages for creating DIDs, verifiable credentials, verifiable presentations, etc. I only used the command line interface (CLI) tool in my experimentation with it and was entirely focused on DID creation. This is how it went.

First off, let me say that I tried DIDKit not only because I wanted to see how another tool worked but because when using Veramo there is no way via the CLI to get the private key for a DID I created. I suspect with enough magic it could be retrieved via JavaScript, but maybe not. It’s locked away in a key management system (KMS).

Secondly, I found the Create and Post Your Own did:web post to be easier to navigate than the directions on the SpruceID site. I did find somewhere in the DIDKit docs how to create a did:web from their did:key generation command but couldn’t find it again, which only goes to show how difficult it is to find some information in their documentation. (Since initially writing this I looked harder and did find it at “did-web in minutes”.) Although SpruceID says DIDKit supports did:web, it seems that it can resolve did:web but doesn’t fully create a did:web. You must create a did:key and make some edits on it to turn it into a did:web. The edits are minor, but I consider this a weird limitation. I read somewhere along the way that it only supports the ED25519 curve in creating did:key/did:web, but I am not sure whether that limitation referred only to creation or to resolving as well.

Once I created a JSON Web Key (JWK) using one of their key generation functions, built the did:key from the key, edited it to be a did:web, and published it to manicprogrammer.github.io/.well-known/did.json, it resolved fine with their “didkit did-resolve” command.

Unlike the Veramo tool, I found no way through the CLI to add services so I just manually added a DIDComm service to the DID. As with my Veramo post let me say — don’t message me on that DIDComm service as I don’t have nor have I found a mobile device agent/wallet to which I can add an existing DID and mediator endpoints for DIDComm much less the DIDComm v2 endpoint I gave it. So I can’t get the messages. Even blocktrust.dev which has the mediator doesn’t have such a thing. Perhaps I’ll setup a browser based DIDComm messaging app to test it out if I can find a DIDComm V2 set of libraries for the browser. (I’ll have to get proficient with JavaScript though, which I have never found the time to do.) Why do I have the endpoint in the DID then? Just for fun to see that I could add one and it properly resolve. Obviously, I have not been able to test it.

What else did I find in my experience with DIDKit? In the documentation, I couldn’t find what the below “key” parameter did. But, following the directions to create a did:key, the below command worked:

didkit key-to-did key -k issuer_key.jwk

I also found that on its Github it doesn’t have a discussion section.

All in all, the toolkit did a good job from the CLI. I am likely to try out the creation of a VC with it. It has packages for Rust (in which DIDKit is written), C, Java/Android, Python, and Javascript.


Veramo and Verifiable Credentials

A few weeks ago I messed around with the Veramo project for #verifiablecredentials. It’s not ready for prime time nor is it professed to be, but it does get some things done. I focused on DIDs. I was entirely using the CLI as javascript is not a “native” language for me but I may decide to get into that space to mess with this project some. The deficiencies I ran into may be CLI specific and not p

A few weeks ago I messed around with the Veramo project for #verifiablecredentials. It’s not ready for prime time nor is it professed to be, but it does get some things done. I focused on DIDs. I was entirely using the CLI as javascript is not a “native” language for me but I may decide to get into that space to mess with this project some. The deficiencies I ran into may be CLI specific and not present when using the APIs directly.

I did get a well-formed DID out of it that I published at:

did:web:manicprogrammer.github.io

Don’t send me a didcomm message via it despite it having a service for that defined as I don’t currently have a personal agent setup to communicate via that service. I know, why publish the service then? For fun. Maybe I’ll remove it.

One issue I hit with the CLI that was especially off-putting to me is that I found no way to get the private key via the CLI for the created DID, even with the did export function (perhaps I am wrong, but I didn’t see it and thus wonder what the value of the export is). If someone finds me wrong let me know. <EDIT>: since writing this I have prompted a question about it and confirmed what I found. See here.</EDIT> It uses by default a local KMS. This means I can only create and send credentials created by Veramo as I don’t have the private key for signing via another method. I suppose I can get to it via some javascript magic and I may try to do that or else I’ll likely create a different did with keys I have generated in other ways.

All in all, it was worthwhile to play with from the CLI. No doubt, the real value is in the Javascript APIs. What’s it good for? I don’t really know yet. It’s good for messing around and becoming more familiar with verifiable credential structure, verifiable presentation structure, DID structure, etc.

#decentralizedidentity #verifiablecredentials

Friday, 15. December 2023

Kent Bull

European Banking Association with Pillar 3 data hub adopts KERI & ACDC

There is some major positive good news for the KERI & ACDC ecosystem. The European Banking Authority announced on LinkedIn the EBA Pillar 3 data hub which is built on KERI & ACDC. The Pillar 3 data hub uses the vLEI system developed by GLEIF, the Global Legal Entity Identifier […]

There is some major positive good news for the KERI & ACDC ecosystem. The European Banking Authority announced on LinkedIn the EBA Pillar 3 data hub which is built on KERI & ACDC. The Pillar 3 data hub uses the vLEI system developed by GLEIF, the Global Legal Entity Identifier Foundation.

vLEI System Architecture Diagram

What this means

This means that around 6000 financial institutions in the EU will now be required to use the EBA Pillar 3 data hub, and by extension KERI & ACDC, for their financial reporting. You can read more in their website announcement and the linked report PDF.

The important parts include pages 50-52, specifically the Gartner analysis in section 6.3.128:

EBA had Gartner analyze KERI & ACDC and found “that there are no comparably efficient alternative solutions globally”.

And section 6.4.130 Conclusion:

“vLEI effectively meets…Pillar 3 reporting requirements…perceived as a low-risk project overall…significant opportunity to enhance integrity of reporting processes”

A number of advantages and risks were identified, with the paper saying:

The automation of identity verification and related processes through the vLEI could also offer numerous potential advantages for both Financial Institutions and other Regulators in the EU financial market

https://www.eba.europa.eu/sites/default/files/2023-12/d5b13b4d-a9dc-4680-8b7c-0a3a4c694fac/Discussion%20paper%20on%20Pillar3%20data%20hub.pdf

Potential Advantages:

For Financial Institutions
  Non-repudiable identification
    This means once you say something with KERI you can’t un-say it, as in you can’t change your story without getting caught
  Operational efficiency: unified digital format
  Enhanced products and services: trusted information enables risk management, customer service, and enhanced online experience
For Supervisors/Regulators
  Enhanced Trust: simplify both validation of regulatory reports and authorized sign off
  Comprehensive Entity Overview: transparent, aggregated view of legal entities and hierarchies
  Standardization of Data Processes: smarter, more cost-effective, reliable data workflows

Risks:

Development of ecosystem
  Due to ecosystem novelty there is only one QVI right now (Provenant)
  Support is needed from GLEIF to grow the population of QVIs, incentive structure, and ecosystem orchestration
Ensure adequate support to institutions
  End-user applications: bank wallet, wallet for reporting institutions
  Compatibility with eIDAS 2.0 framework
  User-friendly applications for digital signatures, key management, and logging services
Market’s recognition of benefits offered by vLEI
  Pillar 3 reporting requirement could trigger a “snowball effect” to catalyze market recognition of vLEI benefits

This is an exciting development in the KERI space that this blog will continue to track.

Thursday, 14. December 2023

Justin Richer

Discovery, Negotiation, and Configuration

Interoperability is a grand goal, and a tough problem to crack. After all, what is interoperability other than independent things just working out of the box? In the standards and specifications world, we have to be precise about a lot of things, but none more precise than what we expect to be interoperable with, and how we get to the interop point. In my experience, there are a few common w

Interoperability is a grand goal, and a tough problem to crack. After all, what is interoperability other than independent things just working out of the box? In the standards and specifications world, we have to be precise about a lot of things, but none more precise than what we expect to be interoperable with, and how we get to the interop point.

In my experience, there are a few common ways to get there.

Conformance

The easiest way to achieve interoperability is for there to be no choice from the implementations. If there’s only one option, and implementations are motivated to follow that option, then interoperability is much easier to count on. If a specification has a MUST, and means it, you can realistically rely on that being followed by well-meaning developers.

This zero-level interoperability is not without its confusion, though. In any specification, there are a lot of assumptions that lead up to a MUST being placed. Changes in this context could make that MUST behave differently in between implementations. For example, taking the byte value of a data structure, even a simple one like a string or number, assumes an encoding and order for that data structure. What’s most dangerous about this kind of problem is that it’s easy for multiple developers to make the same assumptions and therefore assure themselves that the MUST as written is sufficient, until someone else comes along with different assumptions and everything breaks in spite of seeming to be conformant.
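
As a small illustration of that encoding pitfall (my own example, not the author's), two implementations can both take "the byte value" of the same string and still disagree:

using System.Text;

var value = "café";

// Both implementations "take the bytes of the string", but each silently
// assumed a different encoding, so the results differ.
byte[] utf8 = Encoding.UTF8.GetBytes(value);     // 5 bytes: 63 61 66 C3 A9
byte[] utf16 = Encoding.Unicode.GetBytes(value); // 8 bytes of little-endian UTF-16
Console.WriteLine($"{utf8.Length} bytes vs {utf16.Length} bytes");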

To give a concrete example, try asking anyone for a USB cable and chances are you’ll get a few different varieties back, from type A, to type C, to micro-B, to power-only varieties. All of them are USB and conform to all aspects of the cabling standard, but “USB” is not sufficient to ensure compatibility on a physical level.

Even with its limitations, it’s still a good idea for specifications to be specific as much as possible. But the world can’t always make a single choice and stick to it. Things start to get more interesting when there’s choice between options, though, so how do we handle that?

Discovery

If my system can ask your system which options it supports ahead of time, then ostensibly I should be able to pick one of those options and expect it to work. Many standard internet APIs are based around this concept, with an initial discovery phase that sets the parameters of interoperability for future connections.

This pattern works fairly well, at least for common options and happy paths. If your system supports some or all of what my system wants, then I can probably figure out how to connect to you successfully. If your system doesn’t support what I need, then at least I know I can’t start the process. OpenID Connect usually works from a discovery-based process, where the RP fetches the IdP’s discovery document prior to connecting.
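
As a rough sketch of that discovery step (hypothetical code; the issuer URL is a placeholder), an RP can fetch the provider metadata and inspect the advertised options before deciding how to connect:

using System;
using System.Linq;
using System.Net.Http;
using System.Text.Json;

// Fetch the OpenID Connect discovery document and inspect the advertised
// options before deciding how to connect. The issuer URL is a placeholder.
var issuer = "https://idp.example.com";
using var http = new HttpClient();

var metadata = await http.GetStringAsync($"{issuer}/.well-known/openid-configuration");
using var doc = JsonDocument.Parse(metadata);
var root = doc.RootElement;

Console.WriteLine(root.GetProperty("authorization_endpoint").GetString());
Console.WriteLine(string.Join(", ",
    root.GetProperty("id_token_signing_alg_values_supported")
        .EnumerateArray()
        .Select(alg => alg.GetString())));

// Discovery reports what the IdP claims to support in general; it does not
// guarantee that a given option will be allowed for this client at request time.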

The discovery pattern is predicated on an agreement of how to do discovery in the first place. I need to at least know how to make an initial call in order to figure out what the options are for all future calls. This is expected to be out of band for the rest of the protocol, but is often built on the same underlying assumptions. Does the protocol assume HTTP as a transport? Then discovery can use that, also.

Discovery is generally done without context, though. The existence of something in a discovery step does not guarantee that it will be usable in the context of a real request. A server might support seven different cryptographic algorithms, but might only allow some of them to specific clients or request types. That kind of detail is hard to capture through discovery.

For a physical example, let’s say that before you ask for a USB cable, you can check a list of all the available types that the person you’re asking has available. That way when you ask for a specific cable, you’ll at least know that they had it as an option. But maybe they only had one and already lent it out to someone else, or they only hand out power-only cables to people they haven’t met before, in case the cable goes walkabout.

Negotiation

If we can instead bake the discovery process into the protocol itself, we can end up with a negotiation pattern. One party makes a request that includes the options that they’re capable of, and the other party responds with their own set of options, or chooses from the first set. From there, both parties now know the parameters they need to connect.

This kind of approach works well with connection-focused protocols, and it has the distinct advantage of avoiding an additional round trip to do discovery. There’s also no longer a need to specify a separate process for discovery, since it’s baked into the protocol itself. Content negotiation in HTTP, algorithm selection in TLS, and grant negotiation in GNAP all follow this pattern.
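
For a concrete flavor of negotiation baked into the protocol, here is a hypothetical C# sketch of HTTP content negotiation (the URL is a placeholder): the client states what it can accept, and with what preference, in the request itself.

using System;
using System.Net.Http;
using System.Net.Http.Headers;

// The client lists the representations it can handle, with preference weights,
// and the server's choice comes back in the same round trip. Placeholder URL.
var request = new HttpRequestMessage(HttpMethod.Get, "https://api.example.com/resource");
request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json", 1.0));
request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml", 0.5));

using var http = new HttpClient();
var response = await http.SendAsync(request);

// The representation the server chose shows up as the response Content-Type.
Console.WriteLine(response.Content.Headers.ContentType);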

Negotiation falls short when decisions have to be made about the initial connection, much like when there’s a separate discovery call. The protocol can be built to robustly account for those failures, such as a content type being unavailable in HTTP, but the ability to negotiate does not guarantee satisfactory results. Negotiation can also end up with less than ideal results when there’s not a clear preference order, but in such cases it’s possible for a negotiation to continue over several round trips.

If you need a USB cable, you can walk up to someone and say “Hi, can I borrow a USB cable? I need it to be USB-A or USB-C and I need it for a data connection.” The person you’re asking can then see if they have anything that fits your criteria, and choose appropriately from their supply. If they hand you something that’s less than ideal, you can clarify “I’d really prefer USB-C if you have it, but this will work if not”.

Configuration

On a simpler level, many developers simply want to choose an option and run with it, and if the other side makes a compatible choice, this can short-circuit any kind of discovery or negotiation process in a positive way. This might seem magical, but it happens way more often than many software architects and standards authors like to admit. It’s not uncommon for two developers to make similar assumptions, or for libraries to influence each others’ implementations such that they end up doing the same thing even without any external agreement to do so.

If a developer codes up something based on an example, and it works, that developer is not likely to look beyond the example. Why would they? The goal is to get something to connect, and if it does, then that job is done and they can move on to more interesting problems. And if it doesn’t work? Chances are they’ll tweak the piece that doesn’t work until it does work.

JSON works this way in practice, with well-formed JSON being the interoperability expectation and anything else being, effectively, schema-by-fiat. While there are schema languages on top of JSON, the practical truth is that applications apply their own internal schema-like expectations to the JSON by looking for a field with a certain name in a certain place with a data value that parses how they expect it to. Anything that runs afoul of that is an error not of JSON but for the application to deal with. This is a far cry from the days of XML, which expected processing of namespaces and schemas to make sense of tags at the parsing level. Was it more robust? Arguably, yes. But it was also far too heavy for an average developer to care about. JSON’s approach lets us get to data exchange simply by letting us get it right by accident most of the time, and ignoring things that don’t make sense.
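
A hypothetical C# sketch of that schema-by-fiat style (field names and values invented for illustration): the application parses whatever well-formed JSON arrives and then applies its own expectations, treating mismatches as its own problem rather than a parsing failure.

using System;
using System.Text.Json;

// Well-formed JSON always parses; whether "age" is a number in the place we
// expect it is an application-level expectation, not something JSON enforces.
var json = "{\"name\":\"alice\",\"age\":\"42\",\"extra\":true}";

using var doc = JsonDocument.Parse(json);
var root = doc.RootElement;

if (root.TryGetProperty("age", out var age) && age.ValueKind == JsonValueKind.Number)
{
    Console.WriteLine($"age = {age.GetInt32()}");
}
else
{
    // Valid JSON, but not what this application wanted; the application,
    // not the parser, decides how to handle it.
    Console.WriteLine("age missing or not a number");
}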

If you want a USB-C cable but just ask someone for a USB cable, and they hand you a USB-C cable, everyone’s happy. You may have been accidentally interoperable with your request and response, but there’s power in that kind of accident and the frequency with which it happens.

Practicality

All of these interoperability methods have merit, and most systems are built out of a combination of all of them in one way or another. When we’re defining what interoperability means, we always need to take in the context of what is interoperable, for whom, and when. At the end of the day, practical interoperability means that things connect well enough to get stuff done. We should endeavor to build our standards and systems to allow for robust discovery and negotiation, but always keep in mind that developers will find the best path for them to connect.

Interoperability is a grand goal indeed, and while a lot of the time we stumble backwards into it, there are well-trodden paths for getting there.

Wednesday, 13. December 2023

Damien Bod

Securing a MudBlazor UI web application using security headers and Microsoft Entra ID

This article shows how a Blazor application can be implemented in a secure way using MudBlazor UI components and Microsoft Entra ID as an identity provider. The MudBlazor UI components adds some inline styles and requires a specific CSP setup due to this and the Blazor WASM script requirements. Code: https://github.com/damienbod/MicrosoftEntraIDMudBlazor Setup The application is […]

This article shows how a Blazor application can be implemented in a secure way using MudBlazor UI components and Microsoft Entra ID as an identity provider. The MudBlazor UI components add some inline styles and require a specific CSP setup due to this and the Blazor WASM script requirements.

Code: https://github.com/damienbod/MicrosoftEntraIDMudBlazor

Setup

The application is set up using a Blazor WASM UI hosted in an ASP.NET Core application. The MudBlazor Nuget package was added to the client project. Some MudBlazor components were added to the UI using the MudBlazor documentation.

Security Headers

The security headers need to be added to protect the session of the web application. I use NetEscapades.AspNetCore.SecurityHeaders to implement the headers. We can protect the UI using CSP nonces and so the NetEscapades.AspNetCore.SecurityHeaders.TagHelpers Nuget package is also used. The following packages are added to the server project.

NetEscapades.AspNetCore.SecurityHeaders
NetEscapades.AspNetCore.SecurityHeaders.TagHelpers

The SecurityHeadersDefinitions class adds the security headers as well as possible for this technical setup. A nonce is used in the CSP for the script tags. The ‘unsafe-eval’ value is added to the script CSP definition due to the Blazor WASM technical setup, which reduces the security protections. The ‘unsafe-inline’ value is added as a fallback for older browsers. The style CSP definition allows ‘unsafe-inline’ due to the MudBlazor UI components.

namespace MicrosoftEntraIdMudBlazor.Server;

public static class SecurityHeadersDefinitions
{
    public static HeaderPolicyCollection GetHeaderPolicyCollection(bool isDev, string? idpHost)
    {
        if (idpHost == null)
        {
            throw new ArgumentNullException(nameof(idpHost));
        }

        var policy = new HeaderPolicyCollection()
            .AddFrameOptionsDeny()
            .AddContentTypeOptionsNoSniff()
            .AddReferrerPolicyStrictOriginWhenCrossOrigin()
            .AddCrossOriginOpenerPolicy(builder => builder.SameOrigin())
            .AddCrossOriginResourcePolicy(builder => builder.SameOrigin())
            .AddCrossOriginEmbedderPolicy(builder => builder.RequireCorp()) // remove for dev if using hot reload
            .AddContentSecurityPolicy(builder =>
            {
                builder.AddObjectSrc().None();
                builder.AddBlockAllMixedContent();
                builder.AddImgSrc().Self().From("data:");
                builder.AddFormAction().Self().From(idpHost);
                builder.AddFontSrc().Self();
                builder.AddBaseUri().Self();
                builder.AddFrameAncestors().None();

                builder.AddStyleSrc()
                    .UnsafeInline() // due to Mudblazor
                    .Self();

                builder.AddScriptSrc()
                    .WithNonce()
                    .UnsafeEval() // due to Blazor WASM
                    .UnsafeInline();

                // disable script and style CSP protection if using Blazor hot reload
                // if using hot reload, DO NOT deploy with an insecure CSP
            })
            .RemoveServerHeader()
            .AddPermissionsPolicy(builder =>
            {
                builder.AddAccelerometer().None();
                builder.AddAutoplay().None();
                builder.AddCamera().None();
                builder.AddEncryptedMedia().None();
                builder.AddFullscreen().All();
                builder.AddGeolocation().None();
                builder.AddGyroscope().None();
                builder.AddMagnetometer().None();
                builder.AddMicrophone().None();
                builder.AddMidi().None();
                builder.AddPayment().None();
                builder.AddPictureInPicture().None();
                builder.AddSyncXHR().None();
                builder.AddUsb().None();
            });

        if (!isDev)
        {
            // maxage = one year in seconds
            policy.AddStrictTransportSecurityMaxAgeIncludeSubDomains(
                maxAgeInSeconds: 60 * 60 * 24 * 365);
        }

        policy.ApplyDocumentHeadersToAllResponses();
        return policy;
    }
}

The UseSecurityHeaders method adds the security headers middleware.

app.UseSecurityHeaders(SecurityHeadersDefinitions
    .GetHeaderPolicyCollection(env.IsDevelopment(), configuration["AzureAd:Instance"]));

A nonce is used to protect the UI application, and the tag helpers are used to apply it.

@addTagHelper *, NetEscapades.AspNetCore.SecurityHeaders.TagHelpers

The asp-add-nonce attribute adds the nonce to the script tags for all HTTP responses.

<script asp-add-nonce src="_framework/blazor.webassembly.js"></script>
<script asp-add-nonce src="_content/MudBlazor/MudBlazor.min.js"></script>
<script asp-add-nonce src="antiForgeryToken.js"></script>

Microsoft Entra ID

Microsoft Entra ID is used to protect the Blazor application. The Microsoft.Identity.Web packages are used to implement the OpenID Connect client. The application authentication security is implemented using the backend for frontend (BFF) security architecture. The UI part is a view belonging to the server backend. All security is implemented using the trusted backend, and the session is persisted using a secure, HTTP-only cookie. The WASM client uses this cookie for the secure data requests.

Microsoft.Identity.Web
Microsoft.Identity.Web.UI
Microsoft.Identity.Web.GraphServiceClient

The AddMicrosoftIdentityWebAppAuthentication method implements the UI OpenID Connect client.

var scopes = configuration.GetValue<string>("DownstreamApi:Scopes");
string[] initialScopes = scopes!.Split(' ');

services.AddMicrosoftIdentityWebAppAuthentication(configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph("https://graph.microsoft.com/v1.0", initialScopes)
    .AddInMemoryTokenCaches();

Note: if using an in-memory cache, the cache gets reset after every application restart, but the cookie does not. You need to use a persistent cache or reset the cookie when the tokens are missing.
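
One possible way to address this (a sketch, not part of the original article) is to swap the in-memory cache for the distributed token cache in Microsoft.Identity.Web and back it with a persistent store; the Redis connection details below are placeholders.

// Requires the Microsoft.Extensions.Caching.StackExchangeRedis package.
var scopes = configuration.GetValue<string>("DownstreamApi:Scopes");
string[] initialScopes = scopes!.Split(' ');

services.AddMicrosoftIdentityWebAppAuthentication(configuration)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph("https://graph.microsoft.com/v1.0", initialScopes)
    .AddDistributedTokenCaches(); // persists token cache entries in IDistributedCache

services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
    options.InstanceName = "TokenCache";
});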

Links

https://mudblazor.com/

https://github.com/MudBlazor/MudBlazor/

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://me-id-mudblazor.azurewebsites.net/

Sunday, 10. December 2023

Foss & Crafts

61: A Textile Historian's Survival Guide

How do you survive in a world that is no longer optimized for making your own clothing when you suddenly find that modern conveniences no longer accommodate you? As a textile historian, Morgan has been ruminating for years about women’s contributions to the domestic economy, the massive time investment of producing clothing for a family, and the comparative properties of different textile fibers.

How do you survive in a world that is no longer optimized for making your own clothing when you suddenly find that modern conveniences no longer accommodate you? As a textile historian, Morgan has been ruminating for years about women’s contributions to the domestic economy, the massive time investment of producing clothing for a family, and the comparative properties of different textile fibers. These research interests were informed by a lifetime of sewing and other fiber crafts. None of this experience, however, properly prepared her to face the reality of needing to rely on her own hands to provide large portions of her own wardrobe.

Guest co-host Juliana Sims sits down with Morgan to talk about how, in the wake of a recently developed allergy to synthetic fabrics, she now finds herself putting that knowledge of historical textile production to use to produce clothing that she can wear.

Links and other notes:

Morgan presented this as a (much shorter) talk at the Dress Conference 2023
Slides from the presentation
Morgan's Dissertation, which we also covered
RSI Glove Pattern

The quote that Morgan somewhat misremembered about a woman preparing wool before the winter:

"A thrifty countrywoman had a small croft, she and her sturdy spouse. He tilled his own land, whether the work called for the plough, or the curved sickle, or the hoe. She would now sweep the cottage, supported on props; now she would set the eggs to be hatched under the plumage of the brooding hen; or she gathered green mallows or white mushrooms, or warmed the low hearth with welcome fire. And yet she diligently employed her hands at the loom, and armed herself against the threats of winter." -- Ovid, Fasti 4.687-714

Friday, 08. December 2023

reb00ted

Meta/Threads Interoperating in the Fediverse Data Dialogue Meeting yesterday

I participated in a meeting titled “Meta’s Threads Interoperating in the Fediverse Data Dialogue” at Meta in San Francisco yesterday. It brought together a good number of Meta/Threads people (across engineering, product, policy), some Fediverse entrepreneurs like myself, some people who have been involved in ActivityPub standardization, a good number of industry observers / commentators, at least

I participated in a meeting titled “Meta’s Threads Interoperating in the Fediverse Data Dialogue” at Meta in San Francisco yesterday. It brought together a good number of Meta/Threads people (across engineering, product, policy), some Fediverse entrepreneurs like myself, some people who have been involved in ActivityPub standardization, a good number of industry observers / commentators, at least one journalist, and people from independent organizations whose objective is to improve the state of the net. Altogether about 30 people.

It was conducted under the Chatham House rule, so I am only posting my impressions, and I don’t identify people and what specific people said. (Although most attendees volunteered for a public group photo at the end; I will post a link when I get one. Photo added at the bottom of this post.)

For the purposes of this post, I’m not going to comment about larger questions such as whether Meta is good or bad, should be FediBlock’ed immediately or not; I’m simply writing down some notes about this meeting.

In no particular order:

The Threads team has been doing a number of meetings like this, in various geographies (London was mentioned), and with various stakeholders including the types of people that came to this meeting, as well as Fediverse instance operators, regulators and civil society.

Apparently many (most?) invitees to these meetings were invited because other invitees had been recommending them. I don’t know whether or what kind of future meetings like this they are planning, but I’d be happy to pass along names if we know each other and you get in touch. Thanks to – you know who you are – who passed along my name.

The Threads team comes across as quite competent and thoughtful at what they do.

On some subjects that are “obvious” to those of us who have hung around open federated systems long enough, like myself, many attendees seemed strangely underinformed. I didn’t get the impression that they don’t want to know, but simply that integrating with the “there-is-nobody-in-charge” Fediverse is so different from other types of projects they have done in the past, they are still finding their bearings. I heard several times: “A year ago, I did not know what the Fediverse was.”

Rolling out a large node – like Threads will be – in a complex, distributed system that’s as decentralized and heterogeneous as the Fediverse is not something anybody really has done before. It’s unclear what can go wrong, so the right approach appears to be to go step-by-step, feature by feature: try it, see how it works in practice, fix what needs fixing, and only then move on to the next feature.

That gradual approach opens them up to suspicions their implementation is one-sided and entirely self-serving. I guess that can’t be avoided until everything they publicly said they will deploy has actually been deployed.

While there are many challenges, I did get the impression the project is proceeding more or less as planned, and there are no major obstacles.

Everybody knows and is aware Meta brings a “trust deficit” to the Fediverse. The best mitigation mentioned was to be as transparent as possible about all aspects of what they plan and do.

I think that’s a good approach, but also that they can do far more on transparency than they have so far. For example, they could publicly share a roadmap and the engineering rationale for why the steps they identified need to be in this sequence.

There are many, many questions on many aspects of the Fediverse, from technical details, to operational best practices, to regulatory constraints and how they apply to a federated system. The group generally did not know, by and large, how to get them answered, but agreed that meetings like this serve as a means to connect with people who might know.

I think this is a problem all across the Fediverse, not specific to Meta. We – the Fediverse – need to figure out a way to make that easier for new developers; certainly my own learning curve to catch up was steeper than I would have liked, too.

Many people did not know about FediForum, our Fediverse unconference, and I suspect a bunch of the meeting attendees will come to the next one (likely in March; we are currently working on picking a date). Many of the discussions at this meeting would have been right at home as FediForum sessions, and while I am clearly biased as FediForum organizer, I would argue that doing meetings like this in an open forum like FediForum could help substantially with the trust deficit mentioned above.

There’s significant interest in the Fediverse Test Suite we just got funding approval for from the EU’s NGI Zero program. There’s general agreement that the Fediverse could work “better”, be more reliable, and be more comprehensible to mainstream users, if we had better test coverage than the Fediverse has today. This is of interest to all new (and existing) developers.

There was a very interesting side discussion on whether it would be helpful for Fediverse instances (including Threads) to share reputation information with other instances that each instance might maintain on individual ActivityPub actors for its own purposes already. Block lists as they are shared today are a (very primitive) version of this; a more multi-faceted version might be quite helpful across the Fediverse. This came up in a breakout group discussion, and was part of brainstorming; I didn’t hear that anybody actually worked on this.

When we think of privacy risks when Meta connects to the Fediverse, we usually think of what happens to data that moves from today’s Fediverse into Meta. I didn’t realize the opposite is also quite a challenge (personal data posted to Threads, making its way into the Fediverse) for an organization as heavily monitored by regulators around the world as is Meta.

There was very little talk (not none, but little) about the impact on regulation, such as the “continuous and real-time access” provision in the EU’s Digital Markets Act and whether that was a / the driver for Fediverse integration.

There was very little discussion on business models for Threads, and where exactly ads would go. For example, would ads just stay within the Threads app, or would they inject them into ActivityPub feeds, like some companies did with RSS back in the day? Of course, should that happen, as a non-Threads Fediverse user, one can always unfollow; there is no way for them to plaster non-Threads users with ads if they don’t interact with Threads accounts.

I came away convinced that the team working on Threads indeed genuinely wants to make federation happen, and have it happen in a “good” way. I did not get any sense whatsoever that any of the people I interacted with were executing any secret agenda, whether embrace-and-extend, favoring Threads in some fashion or anything like that. (Of course, that is a limited data point, but I thought I’d convey it anyway.)

However, the meeting did not produce a clear answer to the elephant-in-the-room question that was raised repeatedly by several attendees in several versions, which is some version of: “All the openness with Threads, namely integration with the Fediverse, supporting account migration out from Threads etc, is the opposite of what Facebook/Meta has done over its history. What has fundamentally changed so that you now believe openness is the better strategy?” And: “In the past Facebook was a far more open system than it is today, you gradually locked it down. What guarantee is there that your bosses won’t follow the same playbook this time, even if you think they won’t?”

Personally I believe this question needs a better answer than has been given publicly so far, and the answer needs to come from the very top of Meta. The statement must have staying power beyond what any one executive can deliver.

I left the meeting with far more questions than I could get answered; but nobody wanted to stay all night :-)

My gut feel is that it is safe to assume they will do a reasonably fair, responsible job with a gradual rollout of federation for at least the next year or two or such. Beyond that, and in particular if it turns out creators with large follower groups indeed move off Threads at a significant rate (one of the stated reasons why they are implementing Fediverse support as creators have asked for this), I don’t think we know at all what will happen. (I’m not sure that anybody knows at this point.) And of course, none of this addresses the larger issues that Meta has as an organization.

In the hope this summary was useful …

(Source. Thanks tantek.com for initiating this.)

Tuesday, 05. December 2023

Wrench in the Gears

Special Needs Students – A Social Impact Bridge To Collective Intelligence?

This two-hour talk provides additional context for Washington Sean’s guest post, here, about his experience of the Santa Paula Mountains and Thomas Aquinas College as an energetic gatekeeper. In it I walk through a map I made about a year ago that links Vanguard Corporation and Burroughs Research Lab in Paoli, PA to the Deveraux [...]

This two-hour talk provides additional context for Washington Sean’s guest post, here, about his experience of the Santa Paula Mountains and Thomas Aquinas College as an energetic gatekeeper. In it I walk through a map I made about a year ago that links Vanguard Corporation and Burroughs Research Lab in Paoli, PA to the Deveraux Foundation’s Campbell Ranch School for special needs children in Goleta, CA. Included are side explorations into Big Tech’s workforce development programs for autistic youth, bio-nano and theoretical physics research at UC Santa Barbara, Theosophy’s Ojai roots, the Bionic Woman, an Arco-affiliated Bauhaus sculptor who retired to Montecito, and IoT “Hollywood” cosmetics.

You can access an interactive version of the map below here.

 

Interactive Map: https://embed.kumu.io/c1f287d3928ee50886f46ea7d8e83a37#untitled-map?s=bm9kZS1UNlR5dGYwSw%3D%3D

Saturday, 02. December 2023

Wrench in the Gears

Weigh Points: Lightening the Load on the Walk Through the Labyrinth – Guest Post by Washington Sean

This open letter and touching memoir was written by my friend Washington Sean after listening to my recent ramblings about Trichotomy and anticipated territorial skirmishes over an imagined inter-dimensional bridge where piles of soul bound tokens might be held in the future, inert digital golems awaiting the spirits of inbound travelers. My birthday is later [...]

This open letter and touching memoir was written by my friend Washington Sean after listening to my recent ramblings about Trichotomy and anticipated territorial skirmishes over an imagined inter-dimensional bridge where piles of soul bound tokens might be held in the future, inert digital golems awaiting the spirits of inbound travelers. My birthday is later this week and this offering feels like a special gift. It was a delight to picture myself a bit in that magical place beyond the looming gate of St. Thomas Aquinas. And I didn’t have to carry 40 pounds of gear up a mountain!

I’ve been thinking a lot about multiverses and portals and one’s life work. After I finished the trichotomy stream, Washington Sean and I exchanged a few emails about consciousness and shipping companies – waves, you know. I brought up W.R. Grace and J. Peter Grace Jr. of the Grace Line, devout Catholics with ties to the Trappist Abbey in Spencer, MA outside Worcester. RFK Jr.’s dad Joseph was a fellow-Catholic colleague with the Maritime Commission. I sense that standardized contemplative prayer is a key element in the B-Corp social impact “moral economy,” a game where focused spiritualized attention will be used to fuel the Web3 global brain platform.

Below is the stream; I’ll put a screenshot of that map with a link in case you wish to explore where the Grace empire intersects with the Vatican and Zen Buddhist brownie bakeries staffed by former felons. My friend is a talented photographer. Enjoy his windows into a time before the fire.

 

Interactive Map: https://embed.kumu.io/a90ed96b44ccb02e398d6f53f2ad3dbe#untitled-map?s=bm9kZS12QjBlMnNkOQ%3D%3D

Dear Alison,

Ok — so I am having a hard time with this last email you sent me.  In the email you addressed the concept of a trichotomy as a means of explaining the differences between the mind/spirit, the body and the soul. And you also mentioned Thomas Aquinas and pointed to J Peter Grace and his role as a past Governor (Board of Directors) for a school called Thomas Aquinas College.

Peter Grace, “whose wife was a key player in Vatican II,” was also mentioned in your email. This wasn’t unexpected as we had been discussing shipping magnates Stavros Niarchos and Aristotle Onassis and their connection to the Negroponte brothers (another typical conversation with Alison, right?) and somehow Peter Grace was mixed in, someone I’d never heard of before. And so, having all of these topics thrown into an email is nothing too out of the ordinary, as our correspondences are often filled with different threads and rabbit holes. But unknowingly on your part, you had sent something else that was embedded in your email — something else that caused a peculiar epiphany – a sort of unsettling realization about our research endeavors that led to an interesting reevaluation of my own past – my own lived experience.

Specifically, it was your mention of how Thomas Aquinas College was fitting into this mix. I know I’ve mentioned I grew up in Ventura County, but I’ve never told you about the significance of the College in my own life. Nor would I have ever had a reason to until now… So, this is another ‘bizarre’ coincidence — or perhaps more evidence of our ability to tap into the magical realism of our lived experiences – perhaps sharing a mutual ‘frequency’ or at least being signaled to look at certain topics, certain ideas, within a common bandwidth, listening for concepts and symbols that resonate with me, with us, right now.

Thomas Aquinas College now has two locations – and they did not open the Massachusetts location until 2019. But — and here is why this is weird and personal — the California campus is in Ventura County where I grew up. But not just that — the campus sits at the entrance of a popular trail head where I literally ‘grew up’ — as in — where I slowly turned into a man; and where I formed a relationship with God and nature; where I battled my place in the world and contemplated my existence in the future.

The CA campus is located in Santa Paula canyon along state Highway 150 between the communities of Santa Paula and Ojai. It was near the campus that the ‘Thomas Fire’ of 2017 started. The Thomas Fire was, for a brief period, California’s largest wildfire in total burned acreage, scorching over 280,000 acres, but its place at the top of the charts was quickly usurped by another raging firestorm in the following season. Several of my close friends lost their homes in that fire, including my two best friends. A great effort was made by the local fire department to protect the college a few days after the east winds event was over and when the winds changed, and the fire began to burn back the way it had come. The college experienced only minor physical damage, but the traumatic impact of the fire still lives on in that part of Ventura County.

Ironically, even though I have called the Pacific Northwest home for many years, I was visiting a friend for his birthday in 2017 (on November 30th — surely just another coincidence that November 30th is the same day you sent this perplexing email) who was battling cancer for the 3rd bout.  A couple days later his sister’s home in the Ventura foothills, where the birthday party was hosted, was completely obliterated.  I looked online at the drone shots that were available and I could pick their house out – a pile of ashes with only a gun safe standing in what used to be the garage.  

I was in town for only two nights, so the day after the birthday party in Ventura I visited my best friend’s home in upper Santa Paula whose house was very close to the origin of the fire that would erupt two days later, and it was among the first to burn down. Of the few things they had time to grab before they left, was the wedding plaque that we had commissioned to be carved in leather commemorating the year of their union (2012). My friend’s wife tells a great story of how she ran back into the house and had to use a stick to knock it off the wall since she could not reach it.

Their home was not far as the crow flies from the campus grounds, and it is odd in fact, that I can still remember driving down Highway 150 after my visit with them as I was on my way back home to my parents’ house where I’d soon bum a ride from my father and get dropped at the Burbank airport and catch the last flight back to Portland. But on that drive home that night I passed by the campus, and I can recall how raw and powerful the winds were already blowing. Call it a premonition, or a hunch, but the thought of fire entered my mind that night, right as I drove by the Thomas Aquinas College campus. Two days later, the winds would grow even more ferocious and would knock down an electrical wire, sparking the fire. By that Tuesday I had learned what had happened. And it was on a Wednesday morning that I would be delivering opening remarks for the small non-profit I was president of at the time. It was our annual awards breakfast and while I kept a cheery face for our sponsors and guests, inside I was solemn and grieving, and I couldn’t help but feel a tad guilty for having a premonition, not having the knowledge or intuitive sense of how to act on it, and then learning how many of my friends were now burned out of their homes, their lives forever altered.

The Thomas Aquinas College campus sits above the Santa Paula creek on a small bluff nestled between hilly terrain at the mouth of Santa Paula canyon. This is not a ‘mighty river’ or a mountain river that one might see or think of as imagined in one of the more pristine national parks of the Sierra Nevada. The Santa Paula creek is a typical river of the Los Padres National Forest. Much of the time, rivers in southern California run at a low flow rate and often they can barely muster a trickle or even dry up completely, running ephemerally for long stretches before entering the myriad of pathways into the greater water table. However, during periods of heavy rain, these same little creeks can turn into flood channels of immense power and geological change – deluges capable of swallowing huge swaths of land and altering the landscape in a matter of days or hours. The clay soils have poor drainage and cannot absorb the water fast enough. And after intense fires, the sage, manzanita, and scrub brush of the chapparal can leave a layer of plant derived oil that coats the surface with a thin, impermeable layer, further exacerbating the drainage issues. Thus, the Los Padres National Forest, and the terrain around the Santa Paula canyon are areas that have been experiencing rapid geological transformation – occurring in faster cycles than more stout or solid mountain ranges.

There is a sulphur spring near the mouth of the Santa Paula canyon. The smell is thick, and the pronounced stench of rotten eggs is hard to miss on the highway, as it is just before the main trailhead parking areas. Up until very recently, by grant of an old easement with the Thomas Aquinas College, the National Forest Service, and the other ranch and property owners, the first mile of the trail actually cut through the campus property, ascending a few hundred feet in elevation along an asphalt road. The road crests by the Ferndale Ranch (home to a meandering flock of peacocks for many years) then drops down and heads around a bend. The last point on private property was just past the oil wells where the road ended, and a dirt trail formally began. It was only a few years ago that the campus was finally successful in obtaining new easement rights along the creek directly to the parking lots so that the official trail no longer goes through the campus property at all. But from the time I was a child and even through my 20’s and late 30’s the trail always involved that long, winding road.

My first trip into Santa Paula canyon was in 4th grade – as part of a day long field trip through my public elementary school. In fact, now that I think of it, lots of the public school kids took day trips to the Ferndale Ranch (located just beyond the College campus) and hiked in to the Punch Bowls — about 2 1/2 miles of hiking with a fairly steep and laborious slog up a hill at the end for nine and ten year olds to climb. The cliffs near the punch bowls were really dangerous — and the water could easily drown a child or adult during the non-drought years – as a series of cascading pools with increasingly large cliff jumps was more the stuff of adventurers and high school kids, not elementary or middle schoolers. I did not do the jumps until my early twenties.

 

The year after I did the ‘outdoor education’ through my public elementary school, it would have been 1992 or 1993, the district discontinued the program as a tragic drowning occurred with another school — a 12-year-old boy passed away when he got sucked over the falls. But even though the schools stopped taking kids there, the popularity of the spot has only grown and my mother kept taking us there through our adolescence. In fact, she used to call it ‘church’ and on more than one occasion she called in sick for us and took us hiking so that we could “go to church.” Raised catholic, but never practicing, my mom considered it more important to be in nature than in a giant building when it came to praising God and his creation.

But it was later in my life that the East Fork trail and Santa Paula Canyon became even more important. One of my best friends, the one who would later lose his home near the origin of the Thomas Fire, was the one who introduced me and another friend to the camp. He had been there as a boy with his father, and he had sort of rediscovered it. Through the Thomas Aquinas college campus, up the Santa Paula Canyon trail and then past the popular ‘Punch Bowls’ (we always called them the moon rocks) hiking spot, was the old and in some spots quite non-existent trail that went up the East Fork and led to the secret camp. Truth be told, it wasn’t really ‘secret’, and it was more like ‘rediscovered’. The camp, which we endearingly called ‘the land of one hundred waters’, was shown on some older maps, not on newer ones. But Santa Paula Peak, situated prominently above the valley where the camp was located, was listed on the Sierra Club’s Top 50 Peaks in California. And other prominent hikers knew of the camp and old trails. So, while it was sort of hard to get to and somewhat forgotten, the idea of it being a true secret was more romanticized than reality. The camp actually used to have a wagon trail to it (supposedly) in the late 19th century, where pioneer families would take extended vacations, but our ascent was always along the worn and mostly faded away and long neglected trail.

After the 2004-2005 floods, most of even the old trail washed away and it became more like boulder scrambling until the very last bit. Despite the technical difficulty and the lack of stable ground, the destination made it worth it. The camp was ‘a secret camp’ partly because it had some improvements that were added by a generation of hikers and campers that came before us. My friend’s father was a member of these hikers, part of a group of friends comprising an informal fraternity of sorts. The camp they built for themselves in the late 70’s and early 80’s included improvements that most camps do not have – especially those that are a full day’s hard walk through arduous terrain. The camp featured a twenty-foot long picnic table built out of a large fir tree that had been cut down and milled on site. There was also an asadero style bbq grill and an assortment of cast iron pots, skillets and other cookware. Nearby springs flowed year-round amidst a flat parcel of ground submerged beneath the deep shade of canyon live oaks that were hundreds of years old. Chumash grinding bowls and mortars have been found in the area while exploring off the trails. About another thousand feet in elevation, above the camp, and along another worn and overgrown trail, there was another camp sitting among huge boulders and cliffs comprised of sandstone and siltstone. The cliffs had this reddish tinge or hue to them that when hit with the last rays of sunlight, they’d turn the most amazing color before our eyes.

When we first started our hikes, the trail began by going through the large green cast iron gates at the entrance to the campus by the highway. On the weekends, the gates were locked and hikers were instructed by a sign to use the non-vehicle entrance. Strict instructions marked on signs all along the part of the road through the campus were hard to miss and the security guard was largely dedicated to making sure the hikers never wandered off the route. It almost felt intentional – the way the trailhead made the route so much longer. Surely, as we had tried on more than one occasion, there was a quicker, more efficient way through the campus.  But on the weekends, hundreds, sometimes even over a thousand people on a real hot spell, would use the trailhead in search of the reliable swimming holes further up the canyon.

All in all, I have spent over a year of my life sleeping in this place. Starting at nineteen and through a lot of my twenties, I needed to be there, up the East Fork, at the secret camp, at the headwaters, near the eternal springs, nestled amidst the oaks and old growth firs, above the dominion of the Thomas Aquinas Campus. I wasn’t the only one. I shared this experience with my best friends. Even my wife has spent a few weeks camping up in this place and would know the college if mentioned. I could tell stories for hours of this place.

Five nights here, three or four days there – the days turned into weeks and the weeks piled into months. I soon realized that I was spending much of my free time going to this place. When I wasn’t working or on another adventure, I made my best effort to get my bag packed and get up the mountain. When I was faced with either getting a new roommate or moving into my van to save money, I chose my van. I was already spending so much time at camp – it was an easy choice and it helped me save money even quicker for the big trip to New Zealand I had been planning.

Quickly however, it became apparent that Thomas Aquinas College with its big cast iron gates and zealous security guards was a barrier to our adventures up the mountain. Eventually, even parking on the highway at the trailhead would become too risky as more than one of us had our windows smashed. So, finding a cut-through, or a quicker path to the get to camp was a matter of circumstance, not choice. The impedance of the college and its long and winding road became cumbersome and annoying. For most people, the hike ended at around the 3 ½ mile mark – when they got to moon rocks. But for us, that was only about a quarter of the way, and the biggest nuisance was always going through the gates and walking up the long and winding road around the campus water reclamation pond (fancy for on-site sewage treatment). It took at least thirty or forty-five minutes to walk that first section – getting past the oil wells and onto the rock and dirt track always offered a sigh of relief. A mile is not that far, but it is a noticeable distance that was hard to ignore with 40-50lbs of rations and gear stuffed in a backpack. Clearly, we were hindered and greatly bothered with what seemed extra or unnecessary labor put in front of us by the College.

So, we got bikes. We would use our bikes and ride up the hill along the asphalt path in extra low gear. Still physically difficult, but much quicker. And even better – on the way back we would really shave off the drag and droll of finishing a weekend of backpacking by plopping relentlessly along a hot, stinky asphalt path back to civilization. What better way than to whizz through the campus and down the hill in a mere instant – or about 25 mph. The security guard actually scolded us one time when we were riding our bikes too fast as even the cars were expected to observe a 15 mph speed limit.

Truth be told – there was actually another way to get to our secret camp – from an entirely different direction. So, in a sense, we could escape the dominion and the gates of Thomas Aquinas if we absolutely wanted to. We had found a way to our paradise that circumnavigated the more obvious and conventional path. But the other route featured its own drawbacks. First, it required an access code through a locked gate – so only certain people knew how to acquire the code (another easement battle from a forlorn era). But with the code, we could travel up the mouth of Timber canyon and ascend up the face of Santa Paula Peak. Like the Santa Paula Canyon trail, part of this felt redundant as there was, feasibly, a dirt road that would allow us to start closer to the actual forest (reserved public lands) lands. But we did not have permission to drive on the dirt roads. And this trail, being on an exposed and very sunny south face of a prominent 5,000 foot peak (the peak used to house a fire lookout tower and it was confirmed that a double-wide trail suitable for dirt bike travel used to exist all the way to the summit), was hard to climb during the day, and even in the winter months depending on the weather. We’d typically hike up the peak trail when cooler weather was around. On more than a few occasions we hiked up the peak at night. While better maintained and easier to navigate (up until 2021 washouts on the backside) this trail required more climbing and had a long descent into camp.

Weigh points along the peak trail were last comprised of the following spots: ‘the house’ (burned in the Thomas Fire but was the old trailhead start), the gate, the oaks (now known as ‘cow shit camp’), Kenny’s hill, the switchbacks, lunch rock, orange peel, soda pop, the peak spur, the saddle, the grassy spot, the last turn.

Weigh points along the canyon trail and the East Fork were last comprised of the following spots: The Gates (start of trail at highway), Ferndale, Noel’s place, the Oil well, the washout (aka ‘hell hill’), big cone, moon rocks, first crossing, log jam, poison oak camp, land of the milky white rocks, the witches portal, the flats, the spring trail, the meadow.

As I already mentioned, most hikers going up the canyon stopped just past big cone or the moon rocks (popularly known as the Punch Bowls), but our journey was always much longer. During the summer months we’d stash extra beers in the creek near the moon rocks for the last stop on our return hike home. Before retrieving our bikes (also stashed nearby) we would drink, swim and eat whatever last rations we had managed to save. Our adventures ended with riding the rest of the trail in a slightly altered state of consciousness, gleefully blowing past the other hikers.

Eventually we learned the cycle of the big weekend hikers and campers could occasionally yield substantial gleanings of food, clothes and an assortment of abandoned camping gear. Descending the trail after a big summer weekend filled with tourists came with the feeling of returning to civilization in phases – the moon rocks and surrounding camping areas functioned as a sort of open-air dumpster or secondhand store. Any valuable gear we found we would stash for our next hike up. But discarded food stuffs such as ultra-processed and flavor-enhanced cheese chips or a bag (opened or not, it wouldn’t matter) of fortified and enriched chocolate chip cookies was always a welcome addition to our favorite end-of-hike snack – the ‘tuna boat’ – a green bell pepper stuffed with chunk tuna and chopped onions, cilantro and jalapeno or serrano, and finished with a few dollops of hot sauce from a fast food condiment packet.

Groups of people would come to the moon rocks from the nearby towns of Ventura, Oxnard, Camarillo and Port Hueneme for day hikes. Sometimes even people from Los Angeles County would hike in the canyon near the popular swimming spots. In my lifetime, the canyon acquired great popularity and many young people found it a particularly cool place to ‘camp out and party’. The drinking and revelry often combined with a lack of experience in many of the camping basics – concepts that we took for granted, like what to bring and how to stay nourished and hydrated, were not taught in the limited curriculum of field trips and ‘outdoor education’ available in elementary school. This resulted in a great number of hangovers and many of the inexperienced campers simply abandoning leftover food, dirty clothes, and even perfectly good gear that just looked too heavy to carry to a nineteen-year-old with a throbbing headache. Their discarded leftovers were always our miraculous discoveries.

So much of my life, my energy field, the imprint of who I am (and who I am still to become) is in relation to that magical place. I have hundreds (probably thousands) of photos of my time in the East Fork canyon and Los Padres National Forest. I hope you can appreciate the significance of the coincidence and the elements of magical realism that present here.

Thomas Aquinas College was always a gatekeeper – not just a ‘way’ point, on our trail, but also, a ‘weigh’ point, a spot on the trail that weighed both our packs and our worthiness as we traveled through its dominion. After passing through the gates, it taxed us appropriately on our travels into higher realms.

The important work that I would do up in the camps and bluffs above the dominion of the college, beyond the realm of the material world, was spiritual work. True, our materialist trappings of fine cuts of meat, malted ales and other liquors, legumes, potatoes and assorted vegetables, they always traveled with us, as they served to literally weigh us down all along the chosen route. But the majority of the work done at camp was always spiritual work. In fact, one of my best friends, (not the one who lived right next to the fire, but who also lost his home(s) to the fire), worked for several years to construct his very own cabin, hidden away from camp up in the bluffs, nestled in its own little nook. It was a beautiful little forest cabin before it burned down in the Thomas Fire. It was comprised of a variety of native stone and hand-hewn lumber, and also some building materials he had carried in. It had taken him months of accumulated time to build it, almost all by his own hand. Before it burned down, he used to refer to it as ‘his life’s work.’

What is this notion of our ‘life’s work’? The time that I craved to be there – and the amount of time I spent there – for some it was perceived as escaping from work or escaping from responsibility – from existence in the material realm. But it never felt like that for me.  It was a place of great importance, and learning. Of experiments and mysteries.

I didn’t understand it in this way then – but the important barrier (or bridge) that Thomas Aquinas College served in my travels isn’t just an inconvenience. It was a physical burden to have to pass through the gates of Thomas Aquinas College and walk the extra mile. But now, now I see that it was also a spiritual bridge that they controlled – or, more precisely, closely monitored.

At first, I used to believe that the college was not receptive to the hikers, and that they’d rather not have us there. The stern nature of the signs and security guards keeping everyone on the right path seemed bothersome to the pristine and peaceful environment that was cultivated on the campus grounds. But now I am re-thinking the role not as an inconvenience to the college, but instead as a chosen duty, or a higher calling, a sort of self-endowed guardianship over a gateway and control of the bridge, or the portal, from one dimension to the next. The college watched over my travels as I left the material world and prepared to ascend into the spiritual world. They were monitoring my travel into and out of God’s splendor – now I see that it was not an accident that the college situated itself in this manner. The perceived hacks and different techniques my friends and I learned to get around the expectation and presumed relationship with the College as gatekeeper and pathway enforcer were beneficial to us, but that did not diminish the college’s dominion over the bridge or the majority of other hikers. In fact, it was only because of our desire to return to the higher realms over and over again that the important role the college served became noticeable. Had I only hiked into the canyon on occasion, it would be easy to overlook. But having made this special place my literal second home, the college was always the first weigh point on the longer journey.

Thomas Aquinas College is not far from several other notable institutions of learning and foundations of spiritual significance. The Ojai Valley has been a center of new thought for well over a century now (and quite longer if considering indigenous cosmology). Just over four miles from the college campus as the crow flies, there is the Ojai Valley School, an alternative education school for children founded in 1911. And not far off, perhaps another mile or so further west, is the Thacher School, established even earlier in 1887. Situated close to the Thacher School is the Krishnamurti Foundation for America, another well-known institution. It was in 1922, in Ojai, where Jiddu Krishnamurti is said to have had a “life changing experience.” Directly to the west of Thomas Aquinas College, about five miles away and sitting atop the summit of Sulphur Mountain, is Meher Mount – a spiritual retreat center first established in 1946 and dedicated to Avatar Meher Baba. A little further west and centrally located in the small town of Ojai is the Krotona Theosophical Society – first founded in Hollywood in 1912, and later relocated to Ojai in 1922. Yogi Paramahansa Yogananda was known to visit the area and helped found his spiritual sanctuary in 1950 as part of his Self-Realization Fellowship. No doubt there are more places of spiritual and cultural significance in and around Thomas Aquinas College. The reputation of the area precedes it.

However, in taking a step back and thinking about the college’s place differently, and in general contemplating what it is about this area that has drawn so many to the area in search of enlightenment, I could not help but wonder – where the energy is coming from? Is it inherent in the land, as if by a magical grid or ley lines? Or is this energy coming in from somewhere else?

Just south of the campus and the sulphur springs, about two miles away, is another site that has reemerged as important – perhaps the most significant site of all – one that suggests broadening my horizons even further – and asking, or perhaps doing the work to begin to formulate, new questions, better questions, that stretch my understanding of the intelligent forces at work in our world.

COMSAT, a global satellite communications company, established its teleport site in Santa Paula in 1975. The site is shown on the COMSAT webpage under its ‘commercial’ heading. But, as of November 1, 2023, COMSAT, and SatCom Direct, are now part of Goonhilly. The technical components at the site and the capabilities of the site far exceed mere commercial endeavors.

For many years I just assumed that the satellite dishes and arrays that are visible from the highway, just south of the gates at Thomas Aquinas, were nothing special – just more telecommunications equipment for the burgeoning cellular telephone industry. Now I understand that the array there is unique – and possesses capabilities to look farther into space – into a cosmic void where one day we might discover the true meaning of intelligence.

Goonhilly “is one of the world’s premier space communication gateways providing control and uplinks for satellites and deep space missions.” With the recent acquisition of COMSAT Teleports, including the Santa Paula site, Goonhilly is expanding “all communication service offerings from LEO (Low Earth Orbit) right through commercial GEO to lunar and deep space.”

Goonhilly’s first site was located at Goonhilly Downs, near Helston on the Lizard Peninsula in Cornwall, England. Under a 999-year lease from British Telecom (BT), it was one of the largest stations in the world at its inception and still possesses some of the most advanced capabilities on the planet. Its parabolic dish, nicknamed ‘Arthur,’ was one of the first of its kind. Its current largest dish, called ‘Merlin’, is equally impressive in the modern era.

Now, connected to the Santa Paula site next to the Thomas Aquinas College (and others around the earth), Goonhilly has positioned itself in many ways as its own kind of keeper, or guardian, of our ability to move from one dimension to the next. Goonhilly maintains the flow of information as it is transformed from the physical experience into digitized atoms and bits that are parceled and scattered into the heavens. As the information slowly coalesces and comes back together again, it is Goonhilly that is now positioned to monitor and carefully watch over our access to intelligence and information as we pursue our rightful place in existence and amongst the higher realms.

Space signaling, and earth-based radar as a form of communication and wayfinding from one dimension to the next, is a technologically complex subject that I don’t ever think I will fully understand. Nor should I have to. As long as I am alive, I will keep my eyes (and my heart) open to discovering new ways around the barriers or tolled bridges or locked gates that slow me down and impede my journey. As long as I am alive, I will look for ways to lighten my pack and quicken my journey as I pass by the weigh points.

By Washington Sean – December 2023


Aaron Parecki

I took the High-Speed Brightline Train from Miami to Orlando with only two hours notice

It was 11am at the Fort Lauderdale airport, an hour after my non-stop flight to Portland was supposed to have boarded. As I had been watching our estimated departure get pushed back in 15 minute increments, I finally received the dreaded news over the loudspeaker - the flight was cancelled entirely. As hordes of people started lining up to rebook their flights with the gate agent, I found a quiet spot in the corner and opened up my laptop to look at my options.

The other Alaska Airlines flight options were pretty terrible. There was a Fort Lauderdale to Seattle to Portland option that would have me landing at midnight. A flight on a partner airline had a 1-hour connection through Dallas, and there were only middle seats available on both legs. So I started to get creative, and searched for flights from Orlando, about 200 miles north. There was a non-stop on Alaska Airlines at 7pm, with plenty of available seats, so I called up customer service and asked them to change my booking. Since the delay was their fault, there were no change fees even though the flight was leaving from a different airport.

So now it was my responsibility to get myself from Miami to Orlando by 7pm. I could have booked a flight on a budget airline for $150, but it wouldn't have been a very nice experience, and I'd have a lot of time to kill in the Orlando airport. Then I remembered the Brightline train recently opened new service from Miami to Orlando, supposedly taking less time than driving there.

Brightline Station Fort Lauderdale

Never having tried to take that train before, I didn't realize they run a shuttle service from the Fort Lauderdale airport to the train station, so I jumped in an Uber headed to the station. On the way there, I booked a ticket on my phone. The price from Miami to Orlando was $144 for Coach, or $229 for Premium class. Since this will probably be the only time I take this train for the foreseeable future, I splurged for the Premium class ticket to see what that experience is like.

Astute readers will have noticed that I mentioned I booked a ticket from Miami rather than Fort Lauderdale. We'll come back to that in a bit. Once I arrived at the station, I began my Brightline experience.

Walking into the station felt like something between an airport and a car rental center.

There was a small ticket counter in the lobby, but I already had a ticket on my phone so I went up the escalators.

At the top of the escalators was an electronic gate where you scan your QR code to go through. Mine didn't work (again, more on that later), but it was relatively empty and a staff member was able to look at my ticket on my phone and let me through anyway. There was a small X-ray machine; I tossed my roller bag and backpack onto the belt, kept my phone and wallet in my pocket, and walked through the security checkpoint.

Once through the minimal security checkpoint, I was up in the waiting area above the platform with a variety of different sections. There was a small bar with drinks and snacks, a couple large seating areas, an automated mini mart, some tall tables...

... and the entrance to the Premium lounge.

Brightline Station Premium Lounge

The Premium Lounge entrance had another electronic gate with a QR code scanner. I tried getting in but it also rejected my boarding pass. My first thought was that I had booked my ticket just 10 minutes earlier so it hadn't synced up yet, so I went back to the security checkpoint and asked what was wrong. They looked at my boarding pass, had no idea what was wrong, and let me into the lounge via the back employee-only entrance instead.

Once inside the lounge, I did a quick loop to see what kind of food and drink options there were. The lounge was entirely un-attended, the only staff I saw were at the security checkpoint, and someone occasionally coming through to take out dirty dishes.

The first thing you're presented with after entering the lounge is the beverage station. There are 6 taps with beer and wine, and you use a touch screen to make your selection and pour what you want.

On the other side of the wall is the food. I arrived at the tail end of the breakfast service, so there were pretty slim pickings by the end.

There were yogurts, granola, a bowl of bacon and egg mix, several kinds of pastries, and a bowl of fruit that nobody seemed to have touched. I don't know if this was just because this was the end of the morning, but if you were vegan or gluten free there was really nothing you could eat there.

There was also a coffee and tea station with some minimal options.

Shortly after I arrived, it rolled over to lunch time, so the staff came out to swap out the food at the food station. The lunch options were also minimal, but there was a bit more selection.

There was a good size meat and cheese spread. I'm not a big fan of when they mix the meat and cheese on the same plate, but there was enough of a cheese island in the middle I was reasonably confident I wasn't eating meat juice off the side of the cheeses. The pasta dish also had meat so I didn't investigate further. Two of the three wraps had meat and I wasn't confident about which were which so I skipped those. There was a pretty good spinach and feta salad, and some hummus as well as artichoke dip, and a variety of crackers. If you like desserts, there was an even better selection of small desserts as well.

At this point I was starting to listen for my train's boarding announcement. There was barely any staff visible anywhere, but the few people I saw made it clear the train would be announced over the loudspeakers when it was time. There was also a sign at the escalators to the platform that said boarding opens 10 minutes before the train departs.

The trains run northbound and southbound every 1-2 hours, so it's likely that you'll only hear one announcement for a train other than yours the entire time you're there.

The one train announcement I heard was a good demonstration of how quickly the whole process actually is once the train shows up. The train pulls up, they call everyone down to the platform, and you have ten minutes to get onto the train. Ten minutes isn't much, but you're sitting literally right on top of the train platform so it takes no time to get down there.

Once your train is called, it's time to head down the escalator to the train platform!

Boarding the Train

But wait, I mentioned my barcode had failed to be scanned a couple of times at this point. Let me explain. Apparently, in my haste in the back of the Uber, I had actually booked a ticket from Miami to Orlando, but since I was already at the Fort Lauderdale airport, I had gone to the Fort Lauderdale Brightline station since it was the closest. So the departure time I saw on my ticket didn't match the time the train arrived at Fort Lauderdale, and the ticket gates refused to let me in because the ticket didn't depart from that station. I don't know why none of the employees who looked at my ticket ever mentioned this. It didn't end up being a big deal because thankfully Miami was earlier in the route, so I essentially just got on my scheduled train 2 stops late.

So anyway, I made my way down to the platform to board the train. I should also mention at this point that I was on a conference call from my phone. I had previously connected my phone to the free wifi at the station, and it was plenty good enough for the call. As I went down the escalator to the platform, it broke up a bit in the middle of the escalator, but picked back up once I was on the platform outside.

There were some signs on the platform to indicate "Coach 1", "Coach 2" and "Coach 3" cars. However my ticket was a "Premium" ticket, so I walked to where I assumed the front of the train would be when it pulled up.

I got on the train on the front car marked "SMART" and "3", seats 9-17. It wasn't clear what "SMART" was since I didn't see that option when booking online. My seat was seat 9A, so I wasn't entirely sure I was in the right spot, but I figured better to be on the train than on the platform, so I just went in. We started moving shortly after. As soon as I walked in, I had to walk past the train attendant pushing a beverage cart through the aisles. I made it to seat 9, but it was occupied. I asked the attendant where my seat was, and she said it was in car 1 at the "front", and motioned to the back of the train. I don't know why their cars are in the opposite order you'd expect. So I took my bags back to car 1 where I was finally greeted with the "Premium" sign I was looking for.

I was quickly able to find my seat, which was not in fact occupied. The Premium car was configured with 2 seats on one side and 1 seat on the other side.

The Brightline Premium Car

Some of the seats are configured to face each other, so there is a nice variety of seating options. You could all be sitting around a table if you booked a ticket for 4 people, or you could book 2 tickets and sit either next to each other or across from each other.

Since I had booked my ticket so last minute, I had basically the last available seat in the car so I was sitting next to someone. As soon as I sat down, the beverage cart came by with drinks. The cart looked like the same type you'd find on an airplane, and even had some identical warning stickers on it such as the "must be secured for takeoff and landing" sign. The drink options were also similar to what you'd get on a Premium Economy flight service. I opted for a glass of prosecco, and made myself comfortable.

The tray table at the seat had two configurations. You could either drop down a small flap or the whole tray.

The small tray was big enough to hold a drink or an iPad or phone, but not much else. The large tray was big enough for my laptop with a drink next to it as well as an empty glass or bottle behind it.

Under the seat there was a single power outlet for the 2 seats with 120v power as well as two USB-C ports.

Shortly after I had settled in, the crew came back with a snack tray and handed me these four snacks without really giving me the option of refusing any of them.

At this point I wasn't really hungry since I had just eaten at the airport, so I stuffed the snacks in my bag, except for the prosciutto, which I offered to my seat mate but he refused.

By this point we were well on our way to the Boca Raton stop. A few people got off and on there, and we continued on. I should add here that I always feel a bit unsettled when there is that much movement of people getting on and off all the time. These stops were about 20-30 minutes away from each other, which meant the beginning of the ride I never really felt completely settled in. This is the same reason I prefer a 6 hour flight over two 3 hour flights. I like to be able to settle in and just not think about anything until we arrive.

We finally left the last of the South Florida stops, West Palm Beach, and started the rest of the trip to Orlando. A bunch of people got off at West Palm Beach, enough that the Premium cabin was nearly empty at that point. I was able to move to the seat across the aisle which was a window/aisle seat all to myself!

Finally I could settle in for the long haul. Shortly before 3, the crew came by with the lunch cart. The options were either vegetarian or non-vegetarian, which made the choice easy for me.

The vegetarian option was a tomato basil mozzarella sandwich, a side of fruit salad, and some vegetables with hummus. The hummus was surprisingly good, not like the little plastic tubs you get at the airport. The sandwich was okay, but did have a nice pesto spread on it.

After lunch, I opened up my computer to start writing this post and worked on it for most of the rest of the trip.

As the train started making a left turn to head west, the conductor came on the loudspeaker and made an announcement along the lines of "we're about to head west onto the newest tracks that have been built in the US in 100 years. We'll be reaching 120 miles per hour, so feel free to feel smug as we whiz by the cars on the highway." And sure enough, we really picked up the speed on that stretch! While we had reached 100-120mph briefly during the trip north, that last stretch was a solid 120mph sustained for about 20 minutes!

Orlando Station

We finally slowed down and pulled into the Orlando station at the airport.

Disembarking the train was simple enough. This was the last stop of the train so there wasn't quite as much of a rush to get off before the train started again. There's no need to mind the gap as you get off since there's a little platform that extends from the train car.

At the Orlando station there was a short escalator up and then you exit through the automated gates.

I assumed I would have to scan my ticket when exiting but that ended up not being the case. Which actually meant that the only time my ticket was ever checked was when entering the station. I never saw anyone come through to check tickets on the train.

At this point I was already in the airport, and it was a short walk around the corner to the tram that goes directly to the airport security checkpoint.

The whole trip took 176 minutes for 210 miles, which is an average speed of 71 miles per hour. When moving, we were typically moving at anywhere from 80-120 miles per hour.

Summary

The whole experience was way nicer than an airplane; I would take this over a short flight from Miami to Orlando any day. It felt similar to a European train, but with service closer to an airline.

The service needs to be better timed with the stops when people are boarding.

The only ticket check was when entering the station; nobody came to check my ticket or seat on the train, or even when I left the destination station.

While the Premium car food and drinks were free, I'm not sure it was worth the $85 extra ticket price over just buying the food you want.

Unfortunately the ticket cost was similar to that of budget airlines; I would have preferred the cost to be slightly lower. But even still, I would definitely take this train over a budget airline at the same cost.

We need more high speed trains in the US! I go from Portland to Seattle often enough that a train running every 90 minutes that was faster than a car and easier and more comfortable than an airplane would be so nice!

Thursday, 30. November 2023

Wrench in the Gears

Templated Thought Forms

I did an impromptu live today trying to sort through issues around communication of complex ideas, navigating information streams, and collective thought fields. It’s a bit late, so I don’t have the wherewithal to summarize at the moment, but here are the papers referenced if you want to explore them.

My testimony at the Philadelphia School Reform Commission About Ridge Lane 4 minutes: https://wrenchinthegears.com/2018/05/17/yes-i-am-an-advisor-for-ridge-lane-superintendent-hite-may-17-2018/

Ridge Lane LP Credo (Tridentine Creed) – https://www.ridge-lane.com/our-credo

Attilla Grandpierre: The Physics of Collective Consciousness: https://wrenchinthegears.com/wp-content/uploads/2023/11/The-Physics-of-Collective-Consciousness.pdf

Knowledgeworks iLearn Whitepaper: https://wrenchinthegears.com/wp-content/uploads/2023/11/Knowledgeworks-Connections-to-the-Future-iLearn.pdf

Dana Klisanin: Transception: The Dharma of Evolutionary Guidance Media – https://wrenchinthegears.com/wp-content/uploads/2023/11/Transception-The-Dharma-of-Evolutinary-Guidance-Media-Dana-Klisanin.pdf

Automated Human Biofield Assessment and Tuning – https://wrenchinthegears.com/wp-content/uploads/2023/11/Human-Emotion-Recognition-Analysis-and-Transformation-By-Bioenergy-Field-in-Smart-Grid.pdf

Gordana Vitaliano (Nanoengineering / Addiction) – Tapping Into Subconsciousness Processing: https://web.archive.org/web/20021217014453/https://vxm.com/NLPVitaliano.html

Kneoworld Map: https://web.archive.org/web/20210104142455/https://littlesis.org/oligrapher/6018-kneomedia-gamified-edu-tainment

Hal Puthoff , Austin Remote Viewing: https://earthtech.org/team/ https://ciaotest.cc.columbia.edu/olj/sa/sa_jan02srm01.html

Evan Baehr of Learn Capital at the Vatican: https://www.youtube.com/watch?v=NWLy3m5gXH4

 

Tuesday, 28. November 2023

Jon Udell

Puzzling over the Postgres query planner with LLMs

Here’s the latest installment in the series on LLM-assisted coding over at The New Stack: Puzzling over the Postgres Query Planner with LLMs. The rest of the series:

1 When the rubber duck talks back
2 Radical just-in-time learning
3 Why LLM-assisted table transformation is a big deal
4 Using LLM-Assisted Coding to Write a …

Monday, 27. November 2023

Altmode

On DMARC Marketing

Just before Thanksgiving, NPR’s All Things Considered radio program had a short item on DMARC, a protocol that attempts to control fraudulent use of internet domains by email spammers by asserting that messages coming from those domains are authenticated using DKIM or SPF. Since I have been working in that area, a colleague alerted me to the coverage and I listened to it online.

A couple of people asked me about my opinion of the article, which I thought might be of interest to others as well.

From the introduction:

JENNA MCLAUGHLIN, BYLINE: Cybercriminals love the holiday season. The internet is flooded with ads clamoring for shoppers’ attention, and that makes it easier to slip in a scam. At this point, you probably know to watch out for phishing emails, but it might surprise you to know that there’s a tool that’s been around a long time that could help solve this problem. It’s called DMARC – or the Domain Message Authentication, Reporting and Conformance Protocol – whew. It’s actually pretty simple. It basically helps prove the sender is who they say they are.

Of course it doesn’t help prove the sender is who they say they are at all; it expresses a request for what receivers should do when the message doesn’t authenticate. But I’ll forgive this one since it’s the interviewer’s misunderstanding.

ROBERT HOLMES: DMARC seeks to bring trust and confidence to the visible from address of an email so that when you receive an email from an address at wellsfargo.com or bestbuy.com, you can say with absolute certainty it definitely came from them.

(1) There is no “visible from address”. Most mail user agents (webmail, and programs like Apple Mail) these days leave out the actual email address and only display the “friendly name”, which isn’t verified at all. I get lots of junk email with addresses like:

From: Delta Airlines <win-Eyiuum8@Eyiuum8-DeltaAirlines.com>

This of course isn’t going to be affected by Delta’s DMARC policy (which only applies to email with a From address of @delta.com), but a lot of recipients are going to only see “Delta Airlines.” Even if the domain was visible, it’s not clear how much attention the public pays to the domain, compounded by the fact that this one is deceptively constructed.
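
For context, a DMARC policy is published as a DNS TXT record at _dmarc.<domain>. A minimal, purely illustrative record (example.com is a placeholder, not any real sender's policy) looks something like this:

_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

Receivers that evaluate DMARC look up this record for the domain in the From header and apply the requested disposition (none, quarantine, or reject) when a message has neither an aligned DKIM pass nor an aligned SPF pass. A deceptive lookalike domain such as the one above simply falls outside the scope of the real brand's record.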

(2) There is no absolute certainty. Even with a DKIM signature, in many cases a bogus Authentication-Results header field could be added, or the selector record in DNS could be spoofed by cache poisoning.
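
As a purely illustrative example of the header in question (host and domain names are placeholders), a receiving mail server records its own verification results in a field shaped roughly like this:

Authentication-Results: mx.example.org;
    spf=pass smtp.mailfrom=example.com;
    dkim=pass header.d=example.com;
    dmarc=pass header.from=example.com

Nothing in that header is cryptographically protected on its own; it is only as trustworthy as the receiving system's willingness to strip or ignore copies it did not add itself, which is why claims of absolute certainty overstate the guarantee.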

HOLMES: So the thing about good security – it should be invisible to Joe Public.

This seems to imply that the public doesn’t need to be vigilant as long as the companies implement DMARC. Not a good message to send. And of course p=none, which for many domains is the only safe policy to use, isn’t going to change things at all, other than to improve deliverability to Yahoo and Gmail.

HOLMES: I think the consequences of getting this wrong are severe. Legitimate email gets blocked.

Inappropriate DMARC policies cause a lot of legitimate email blockage as well.

When we embarked on this authentication policy thing (back when we were doing ADSP), I hoped that it would cause domains to separate their transactional and advertising mail, use different domains or subdomains for those, and publish appropriate policies for those domains. It’s still not perfect, since some receive-side forwarders (e.g., alumni addresses) break DKIM signatures. But what has happened instead is a lot of blanket requirements to publish restrictive DMARC policies regardless of the usage of the domain, such as the CISA requirement on federal agencies. And of course there has been a big marketing push from DMARC proponents that, in my opinion, encourages domains to publish policies that are in conflict with how their domains are used.
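
To sketch the kind of separation described here, a domain owner could in principle publish different policies for different mail streams; these records are hypothetical and not a recommendation for any particular domain:

_dmarc.example.com.          IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
_dmarc.billing.example.com.  IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
_dmarc.news.example.com.     IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"

Transactional mail that never passes through forwarders can safely carry p=reject, while the organizational domain, whose mail may traverse mailing lists and alumni-style forwarders, keeps a policy that matches how it is actually used, which is the opposite of a blanket requirement to publish restrictive policies everywhere.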

Going back to my earlier comment, I really wonder if domain-based policy mechanisms like DMARC provide significant benefit when the domain isn’t visible. On the other hand, DMARC does cause definite breakage, notably to mailing lists.

Saturday, 25. November 2023

Moxy Tongue

The AI + Human Identity Riddle

Some riddles are so complex that people just stop contemplating, and concede. Here on the VRM dev list, over many years, the struggle to explain structural concepts in motion has fought for words in a social context that disavows progress by design. There is only one place where "self-induced Sovereignty" is approachable in accurate human terms, and conversing the methods elsewhere, like Canada, Europe, China, or via the modern Administrative State, etc... is a failure at inception, and the process of grabbing the word to apply policy obfuscations on accurate use is intentional. As has been discussed plenty of times, the State-defined "Sovereign citizen movement" is the most dangerous ideology in existence, and the people who live under its definition are the most dangerous people in existence. I mean, have you seen them all out there doing what they do, acting all Sovereign unto themselves? Scary stuff, right?

"Human Rights" is another great example, used pervasively in Government administration, and by identity entrepreneurs in aggressively collating the un-identified/ un-banked/ un-civilized. Powerful words and concepts are always open to perversion by intent, and require courageous stewardship to hold accountable to the actual meaning conferred by utterance. Most western societies operate under the belief that human rights exist, and their society protects them. Yet, in defining the human rights for and unto the human population being administered, the order of operations is strictly accurate, where words documenting process fall well-short. 
In the US, human rights are administered rights. Prior to administration, there is no functional concept of human rights, that's why the UN applies its language to the most vulnerable, children, as it does... an administered right to administration of Nationality and Identity sets the stage for data existence. You, the Individual person, like all the rest of us, have no existence until you are legally administered, despite an ability to observe human existence prior to administration. "Civil Society", like math once did, exists in a state that denies the existence of zero - the unadministered pre-citizen, pre-customer, pre-Righted, pre-identified with actual existence as a human with Rights. Sorry, but you are going to need an administered ID for that.
If human rights existed, then human migration would be a matter of free expression, and access to Sovereign jurisdictions proclaiming support for "Human Rights" would need to quibble less about process of admittance. Simple in theory, but get ready for the administrative use of fear to induce human behavior in compliance to expected procedures.... "Terrorist Migrants Are Coming!!!!" ... ie, no actual, structural human rights here, move along. Only cult-politics can confront this with policy, as structural human rights have no meaningful expression in structural terms... just linguistics.
Now in China, who cares... human rights don't exist. The default commie-state of the newborn baby, dependent on others for survival, is extended throughout life. Little commie-baby-brains are the most compliant, most enforceable population on the planet, and structural considerations of participation are not to be found - bring on the social credit score. At home, or in the hospital, newborn families are not inherently tuned to the structural considerations of a new born life, they are tired, exhausted, hopefully in glee at the newly arriving human they have custody for. This is where the heist occurs. Systems love compliant people, they can do anything with them as their fodder. 
Data collection starts immediately; after all, that new baby needs administered existence in order to exist with rights. Right?
Data plantations running on endless databases, storing administered credentials in State Trusts, set the stage for the participation of living beings that will one day wake in a world that they themselves will have custody for, in theory. But in that moment, those little commie-baby-brains need to be administered by the State, need to be defined by State identification processes, in order to exist legally, and as a function of the future, those theories will yield to administered practices.
Structure yields results... 
How many people care? As "we" look around and communicate on this list, W2 employees with pensions become pervasive in all of these conversations. These structural participants, one-level removed from their Righted structure as citizens, even within "more perfect Sovereign unions", form sub-unions protecting their continuity as structural outcomes. What outcomes? What is the structural outcome they achieve? Like little commie-baby-brains, these adult participants cede their self-Sovereign integrity and representation, to induce group-action via group-think and group-coordination. It works really well, and makes these people "feel" really powerful in their time and place, inducing actions that try to scale their structural considerations for all people who remain out-of-step.
That's where "You" come in, and by example, where a more perfect union comes in, to protect the greatest minority, and default sub-unit of the human organism, the Individual. Here, the commie-baby-brain must stand accountable to the inherent structural integrity of an Individual human life with direct, personal living authority. Personal authority, the kind that really makes little adult-commie-brains upset as "people" who can't/don't/won't be able to express such authority with any confidence, and without any guidance from a group-administrator. You see it all the time, as the administrative system routes one's attention to channel 4 for programming, and channel 5 for another. Cult-capture is structurally assured by design, a coup by actual intent.
This sets the backdrop of work and conversation here. There are too many people on this list that will NEVER materialize an accurate understanding of the structural accuracy of human rights, and the Sovereign structure of human liberty and accountability, the kind that builds real "civil" societies. Instead, they will pervert words, quibble against words, use fear to harm dialogue, and insist on centralized authority to secure people from themselves, and the most scary people on the planet... people who think and act for themselves to stand up Sovereign jurisdictions as self-represented participants and sources of Sovereign authority. 
Too hard? Too old? Too young? Too compliant? Too accurate? 
Prior to any administrative act, actual human rights exist. (Zero Party Doctrine) Good people, with functionally literate adult minds, and the persistent choice to preserve human health, wealth, wisdom, liberty, and personal pursuits of happiness in time serve as the actual foundation of a dominant world-view with no equal. Own your own life, own your own work, own your own future, own your own administration as righted people in civil societies. The alternative is not civil, and not humane to Individuals, all people. Structure yields superior results....
As builders of AI systems serving such people, structural considerations trump hype. I will stop short of discussing technical methods ensuring such outcomes, but suffice it to say it is a choice to induce unending focus on such outcomes. It requires no marketing, it requires no venture capital, and it wins because it is supported by winners. People, Individuals all, who give accountability personally to the freedoms they express incrementally in the face of far too ample losers who are always one comma away from proclaiming your adult minds a threat to the existence of their little commie-baby-brains while begging for access to the world built by self-Sovereign efforts of leaders, the real kind.
AI is about raising the bottom up to the statistical mean in present form. Protecting leading edge human thought, and output, is going to require a new approach to human identity. Databases are the domain of AI, and humanity will not be able to compete as artifacts living on a databased plantation. 
A great civil society reset is required (Not the "Build Back Better" commie-baby-brain variety), and it is just a matter of time before it becomes essential for any person now participating in the masses as little commie-baby-brains despite believing "they" exist in some other way as a result of linguistics or flag flying. Watch the administration process devolve, as people, the bleeding kind, are conflated with "people", the linguistic variety, and have their "Rights" administered as "permissions" due to some fear-inducing event. Well documented on this list by pseudo-leaders.
Open source AI, local AI, rooted human authority... all in the crosshairs. Remember who you are up against. Universal derivation and sharing no longer exist in the same place at the same time. Declare your structural reality... human Individual leader, or statistical mean on a data-driven plantation. It is still not optional. People, Individuals all, must own root authority.

Friday, 24. November 2023

Phil Windleys Technometria

SSI is the Key to Claiming Ownership in an AI-Enabled World

I've been trying to be intentional about using generative AI for more and more tasks in my life. For example, the image above is generated by DALL-E. I think generative AI is going to upend almost everything we do online, and I'm not alone. One of the places it will have the greatest impact is in personal agents, and in whether or not those agents enable people to lead effective online lives.

Jamie Smith recently wrote a great article in Customer Futures about the kind of AI-enabled personal agents we should be building. As Jamie points out: "Digital identity [is how we] prove who we are to others"​​. This statement is particularly resonant as we consider not just the role of digital identities in enhancing personal agents, but also their crucial function in asserting ownership of our creations in an AI-dominated landscape.

Personal agents, empowered by AI, will be integral to our digital interactions, managing tasks and providing personalized experiences. As Bill Gates says, AI is about to completely change how you use computers. The key to the effectiveness of these personal agents lies in the robust digital identities they leverage. These identities are not just tools for authentication; they're pivotal in distinguishing our human-generated creations from those produced by AI.

In creative fields, for instance, the ability to prove ownership of one's work becomes increasingly vital as AI-generated content proliferates. A strong digital identity enables creators to unequivocally claim their work, ensuring that the nuances of human creativity are not lost in the tide of AI efficiency. Moreover, in sectors like healthcare and finance, where personal agents are entrusted with sensitive tasks, a trustworthy, robust, self-sovereign identity ensures that these agents act in harmony with our real-world selves, maintaining the integrity and privacy of our personal data.

In this AI-centric era, proving authorship through digital identity becomes not just a matter of pride but a shield against the rising tide of AI-generated fakes. As artificial intelligence becomes more adept at creating content—from written articles to artwork—the line between human-generated and AI-generated creations blurs. A robust, owner-controlled digital identity acts as a bastion, enabling creators to assert their authorship and differentiate their genuine work from AI-generated counterparts. This is crucial in combating the proliferation of deepfakes and other AI-generated misinformation, ensuring the authenticity of content and safeguarding the integrity of our digital interactions. In essence, our digital identity becomes a critical tool in maintaining the authenticity and trustworthiness of the digital ecosystem, protecting not just our intellectual property but the very fabric of truth in our digital world.

As we embrace this new digital frontier, the focus must not only be on the convenience and capabilities of AI-driven agents but also on fortifying our digital identities so that your personal agent is controlled by you. Jamie ends his post with five key questions about personal agents that we shouldn't lose sight of:

Who does the digital assistant belong to?

How will our personal agents be funded?

What will personal agents do tomorrow, that we can’t already do today?

Will my personal agent do things WITH me and FOR me, or TO me?

Which brands will be trusted to offer personal agents?

Your digital identity is your anchor in the digital realm, asserting your ownership, preserving your uniqueness, and fostering trust in an increasingly automated world, helping you operationalize your digital relationships. The future beckons with the promise of AI, but it's our digital identity that will define our place in it.

Thursday, 23. November 2023

Wrench in the Gears

Gratitude And Mixed Emotion: My Thanksgiving Evolution Is Still In Process

Tomorrow (well today, since I’m ten minutes late in getting this posted) is Thanksgiving in the United States. I’ve had mixed feelings about it since the February 2017 raid on Standing Rock where Regina Brave made her treaty stand. After watching MRAPs coming down a muddy, snowy hill to confront a Lakota grandmother and Navy veteran on Unicorn Riot’s livestream, it was hard to go back to being a parade bystander. It didn’t feel right watching the Major Drumstick float go by as we took in drum corps performances and waited for Santa to make his appearance on the Ben Franklin Parkway, officially opening the season of holiday excess. To be honest, I was kind of a downer.

Six years later, I have more life experience and clarity around cognitive domain management, identity politics, prediction modeling, and the strategic use of drama and trauma. I sense now that Standing Rock was likely part of an unfolding spectacle that was intended to set the stage for the use of indigenous identity as cover for faux green sustainability markets linked to web3 natural capital – the rights of nature literally tethered to digital ledgers in the name of equity and acknowledgement of past harms.

That is not to say protester concerns around the threats of pipelines to water systems weren’t valid. They were. It is not to dismiss the injustice of broken treaties, to diminish the horror of violence waged against native bodies, to undermine authentic efforts towards being in right relationship with the Earth and one another. No, the reasons people came to Standing Rock mattered. They did, but I expect few who participated would have ever realized the community that arose on that icy riverbank was viewed by myriad analysts as an emergent complex system, an extremely valuable case study at a time when we were on the cusp of AI swarm intelligence and cognitive warfare.

I’d come to know a man through my education activism who was there on the day of the raid, a person I considered a friend and mentor. We’ve drifted apart since the lockdowns. Such digital entanglements now seem pervasive and ephemeral. Were we really friends? Was it an act? In some extended reality play? Part of some larger campaign meant to position us where we were supposed to be years hence? I don’t think we’re meant to know. Perhaps like “degrees of freedom,” “degrees of uncertainty” are baked into the “many-worlds” equations humming within the data center infrastructure spewing out today’s digital twin multiverses. In any event, his research opened the door of my mind to the quiet, devastating treachery of social impact finance as well as the vital importance of indigenous spiritual practice and sovereignty. His ties to Utah’s complex and troubled history, a place that birthed virtual worlds of encrypted teapots under craggy mountains soaked in radioactive star dust on the shores of crystalline salt lakes sitting atop vast stores of transmitting copper ore, spun me into the space where I am now.

Looking back I held a simmering anger that hurt my family in ways I did not realize. It was probably an energetic frequency. We didn’t talk about it. We didn’t argue. Everything was fine, until lockdowns happened and suddenly it wasn’t. I felt betrayed having been what I considered a “good citizen” doing all the right things for so many decades and then abruptly having the scales fall from my eyes. This culture to which I had been habituated wasn’t at all what I thought it was. Nonetheless we were supposed to continue to perform our assigned roles as if nothing had changed. As long as we kept saying the assigned lines, things in middle-class progressive America would be ok. I was expected to paper over the rifts that were opening up in reality as I had known it, tuck away my disenchantment, my questions. Once one domino fell, they would all go, and that would be incredibly painful to everyone around me. And anyway, I didn’t have an answer that would reconcile the inconsistencies ready in my back pocket. There was a sad logic in it. If there was no easy fix, why wreck the status quo? Surely that wasn’t doing anyone any favors, right?

Nothing turned out like I thought it would. Suddenly, I was a planner without a plan. Today I have lots more information, and if anything, I recognize that I know less than I need to – that is than my mind needs to. In a world of information you can never pin it all down, organize it, make sense of it from an intellectual standpoint. But maybe it’s time to lead with the heart, at least that’s what all the techno-bros are saying. Maybe I should just shrug and let Sophia the robot guide me into some transformative meditation? Well, probably not. And I’ll pass on the Deepak Chopra wellness app, too. I foresee a future where rocks and water and trees are my partners in biophotonic exchange. At least that is what feels right for now. Patience.

For the moment I am on my own. I still love my small family with all my heart, and I really miss my dad every day. I have his watch with the scent of his Polo cologne on the band. It makes me tear up, a mix of poignant loss, and remembering hugs in his strong arms. It’s funny since I seem to have fallen outside of “civilized” time now. The days all run into one another and mostly I’m just aware of the contours of the seasons as I wait for the next phase of my life to start in the spring. Oh, that watch – there is a sense of irony in the universe for sure, a trickster energy I have to learn to appreciate more.

I have a small turkey brining in the fridge. I’ll be eating alone, but that’s ok. I don’t feel pressured to make all the fixings – sweet potatoes and broccoli will be fine. Maybe this weekend I’ll make an apple pie, my dad’s favorite. I’m downsizing. I’m leaning into less is more. I’m going to work on seeing playfulness in the world and practicing ways to embody the type of consciousness that might bring more of it in. I have a new friend, we’ll just say my Snake Medicine Show buddy who has been practicing this art for many years in a quest to move into right relationship and, well maybe vanquish is too strong a word, but at least neutralize what she calls the McKracken consciousness. She’s the kind of fun friend you want to have around to riff off of one another. I’m fortunate to have a small group of people in my life who despite my oddities manage to vibe with far-out concepts that stretch well beyond not only the norm, but a lot of the alternative modes of thinking. We are learning, together.

So for tomorrow I will concentrate on being grateful. Indigenous worldviews center gratitude. In spite of all of the disruptions to my year, I still have many blessings. Those who read my blog and watch my long videos, I count you among them. Thank you. Below is the stream we ran last night. It is the first in what will probably be an ongoing series about our trip from Colorado to Arkansas and back. My relocation plans are centered around Hot Springs now, so if you are in the area or have insight, do send them my way. I will include a map I made that weaves together photos from the trip and historical materials. You can access the interactive version here. Underneath I’m sharing a write up I did of insights gifted to me by my Snake Medicine Show friend. I love new tools for my toolbox. Maybe you will find it helpful on your journey.

Much love to you all; we are a wondrous work in progress, each and every one.

 

My summary of insights from Snake Medicine Show who gifted me with a guest post last December. You can read her lively linguistic offering, Emotional Emancipation – A Prayer of Proclamation, here.

We’ve been conditioned to perceive the world from a perspective that prioritizes matter. Such a view reduces our lived experience to leaderboards where our value is measured by the things we acquire: objects, stuff, credentials, prestige. And yet an open invitation has been extended. We can try on a different lens. What if we shift our worldview to center the dynamic potential of energy in motion? Rather than getting entangled by inconsequential nodes popping up here and there within the universe’s vast current, we can join as partners in a cosmic dance of fluid motion and unlimited possibility.

As authentic beings, grounded in truth, attuned to nature and the wonders of cosmic creation, we have the opportunity to dip into that current and reflect imaginative constructs into our shared reality. We are prisms of abundance.

The sea of shared consciousness is mutable, playful, and emergent. We can invite ideas into this dimension. However, once we do so, we have the responsibility to nurture them by giving them focused attention. Through creative partnerships, we can bring more energy to the process than we can acting on our own. As dancers in the current we hold space together, wombs to sustain modes of being beyond our conditioned expectations. We can choose to be patient and await what unfolds.

With proper tuning we will encounter guidance, grace, that directs us towards actions furthering a larger purpose. We may not even be fully aware of what that purpose is. As playful co-creators we should have faith and hold space for circuits to connect, activating the generative feedback that can begin to heal zero-sum consciousness. Mingle our unique bioenergetic frequencies with the understanding that resonant harmonies will guide sacred signals to the right receptor(s). It doesn’t take much to activate healing, just the right amount.

Show up with right relationship and the current will meet us there. We don’t need to know the right time or place; we just need to embody the right tune.

 

 

Monday, 20. November 2023

Talking Identity

Ethics vs Human-Centered Design in Identity

It was really nice of Elizabeth Garber to acknowledge me in the whitepaper that she co-authored with Mark Haine titled “Human-Centric Digital Identity: for Government Officials”. I recommend everyone read it, even if you aren’t in government, as it is a very strong and considerate effort to try and tackle a broad, complicated, but important topic. It reminded me that I was overdue to publish the talk I gave at Identiverse 2023, since it was her being in the audience that led us to have some nice conversations about our shared viewpoint on how Value-Sensitive Design and Human-Centric Design are key cogs in building ethical and inclusive digital identity systems. So, below is a re-recording of my talk, “Collision Course: Ethics vs Human-Centered Design in the New Perimeter”.

This talk was a challenging one for me to put together, because the subject matter is something I’ve been struggling to come to grips with. With digital identity becoming a critical component of a more digital-centric world, it seems clear that success and sustainability hinge on placing people and their societal values at the center of the architecture. But in doing so as part of my day-to-day work, I sometimes find the principles of human-centered design, inclusion, privacy-by-design, and ethics coming into conflict with each other. How are we, as identity practitioners, supposed to resolve these conflicts, navigating the challenge of building for the world that people actually live in, and not the one we wished they lived in? I increasingly found that there was no existing blueprint or guide that I could tap into.

Well, I’ve often found that nothing brings things into better focus than being forced to develop a talk around it, so that’s what I set out to do. Can’t say I have the answers, but I did try to lay out some approaches and ideas that I found helpful when faced with these questions in the global projects I’m involved in. As always, I would love to hear people’s thoughts and experiences as they relate to this topic.

Links

In the talk, I refer to a few resources that folks can use to learn more about Value Sensitive Design and Ethics in Tech. Below are links to the same:

Introduction to Value Sensitive Design
Value Sensitive Design and Information Systems
Translating Values into Design Requirements
Ethics for the Digital Age: Where Are the Moral Specs? (Value Sensitive Design and Responsible Innovation)
An Ethical Framework for Evaluating Experimental Technology
Ethics-by-Design: Project SHERPA
Value sensitive design as a formative framework

Other links from the talk:

It’s getting easier to make an account on Mastodon
An Introduction to the GDPR (v3), IDPro Body of Knowledge 1(5)
Impact of GDPR on Identity and Access Management, IDPro Body of Knowledge 1(1)
Code of Conduct: the Human Impact of Identity Exclusion by Women in Identity

Damien Bod

Improve ASP.NET Core authentication using OAuth PAR and OpenID Connect

This article shows how an ASP.NET Core application can be authenticated using OpenID Connect and OAuth 2.0 Pushed Authorization Requests (PAR) RFC 9126. The OpenID Connect server is implemented using Duende IdentityServer. The Razor Page ASP.NET Core application authenticates using an OpenID Connect confidential client with PKCE and using the OAuth PAR extension.

Code: https://github.com/damienbod/oidc-par-aspnetcore-duende

Note: The code in this example was created using the Duende example found here: https://github.com/DuendeSoftware/IdentityServer

Using Pushed Authorization Requests (PAR) improves the security of the authentication flow. In ASP.NET Core with PAR, the application is authenticated on the trusted back channel before any authorization request is sent. The authorization parameters are no longer sent in the URL, which reduces the risk of leaking them and prevents parameter pollution attacks such as redirect_uri injection. No parameters are shared in the front channel. The OAuth 2.0 Authorization Framework: JWT-Secured Authorization Request (JAR) RFC 9101 can also be used together with this to further improve authentication security.

Overview

The OAuth PAR extension adds an extra step before the standard OpenID Connect authorization code flow. The PAR-extended flow has three steps:

1. The client sends an HTTP request in the back channel with the authorization parameters, and the client is authenticated first. The body of the request has the OpenID Connect code flow parameters. The server responds with the request_uri.
2. The client uses the request_uri from the first step and authenticates. The server uses the flow parameters from the first request. As code flow with PKCE is used, the code is returned in the front channel.
3. The client completes the authentication using the code flow in the back channel, the standard OpenID Connect code flow with PKCE.
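
To make the first two steps concrete, here is a rough sketch of the exchange on the wire. The host name, endpoint paths, and parameter values are illustrative assumptions (only the client_id and redirect_uri mirror the client configuration shown below); the shape of the request and response is what matters.

POST /connect/par HTTP/1.1
Host: identity.example.com
Authorization: Basic <base64(client_id:client_secret)>
Content-Type: application/x-www-form-urlencoded

response_type=code&client_id=web-par&redirect_uri=https%3A%2F%2Flocalhost%3A5007%2Fsignin-oidc&scope=openid%20profile%20offline_access&state=<state>&code_challenge=<pkce-code-challenge>&code_challenge_method=S256

HTTP/1.1 201 Created
Content-Type: application/json

{ "request_uri": "urn:ietf:params:oauth:request_uri:<opaque-value>", "expires_in": 90 }

The front-channel redirect to the authorize endpoint then carries only the client_id and the request_uri; all other parameters are looked up server-side from the pushed request:

GET /connect/authorize?client_id=web-par&request_uri=urn%3Aietf%3Aparams%3Aoauth%3Arequest_uri%3A<opaque-value> HTTP/1.1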

Duende IdentityServer setup

I used Duende IdentityServer to implement the standard. Any OpenID Connect server which supports the OAuth PAR standard can be used. It is very simple to support this using Duende IdentityServer. The RequirePushedAuthorization property is set to true so that PAR is required for this client. The rest of the client configuration is a standard OIDC confidential client using code flow with PKCE.

new Client[]
{
    new Client
    {
        ClientId = "web-par",
        ClientSecrets = { new Secret("--your-secret--".Sha256()) },

        RequirePushedAuthorization = true,

        AllowedGrantTypes = GrantTypes.CodeAndClientCredentials,

        RedirectUris = { "https://localhost:5007/signin-oidc" },
        FrontChannelLogoutUri = "https://localhost:5007/signout-oidc",
        PostLogoutRedirectUris = { "https://localhost:5007/signout-callback-oidc" },

        AllowOfflineAccess = true,
        AllowedScopes = { "openid", "profile" }
    }
};

ASP.NET Core OpenID Connect client

The ASP.NET Core client requires extra changes. An extra back channel PAR request is sent in the OpenID Connect events. The OIDC events need to be changed compared to the standard ASP.NET Core OIDC setup. I used the Duende.AccessTokenManagement.OpenIdConnect nuget package to implement this and updated the OIDC events using the ParOidcEvents class from the Duende examples. The setup uses the PAR events in the AddOpenIdConnect configuration, which requires an HttpClient and the IDiscoveryCache interface from Duende.

services.AddTransient<ParOidcEvents>();

// Duende.AccessTokenManagement.OpenIdConnect nuget package
services.AddSingleton<IDiscoveryCache>(_ =>
    new DiscoveryCache(configuration["OidcDuende:Authority"]!));

services.AddHttpClient();

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(CookieAuthenticationDefaults.AuthenticationScheme, options =>
{
    options.ExpireTimeSpan = TimeSpan.FromHours(8);
    options.SlidingExpiration = false;

    options.Events.OnSigningOut = async e =>
    {
        // automatically revoke refresh token at signout time
        await e.HttpContext.RevokeRefreshTokenAsync();
    };
})
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = configuration["OidcDuende:Authority"];
    options.ClientId = configuration["OidcDuende:ClientId"];
    options.ClientSecret = configuration["OidcDuende:ClientSecret"];

    options.ResponseType = "code";
    options.ResponseMode = "query";
    options.UsePkce = true;

    options.Scope.Clear();
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("offline_access");

    options.GetClaimsFromUserInfoEndpoint = true;
    options.SaveTokens = true;
    options.MapInboundClaims = false;

    // needed to add PAR support
    options.EventsType = typeof(ParOidcEvents);

    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name",
        RoleClaimType = "role"
    };
});

// Duende.AccessTokenManagement.OpenIdConnect nuget package
// add automatic token management
services.AddOpenIdConnectAccessTokenManagement();
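
The OidcDuende values read above come from configuration. A minimal appsettings.json sketch could look like the following; the Authority URL is an assumed local development address for the IdentityServer host (it is not stated in the article), while the client id and secret mirror the IdentityServer client configuration:

{
  "OidcDuende": {
    // Authority is an assumed local development URL, not taken from the article
    "Authority": "https://localhost:5001",
    "ClientId": "web-par",
    "ClientSecret": "--your-secret--"
  }
}

The .NET JSON configuration provider tolerates comments, so the note above is safe to keep in a development file; in production the secret would normally come from a secret store rather than appsettings.json.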

The ParOidcEvents class is used to implement the events required by the OAuth PAR standard. This can be used with any token server which supports the standard.

/// <summary>
/// original code src:
/// https://github.com/DuendeSoftware/IdentityServer
/// </summary>
public class ParOidcEvents(HttpClient httpClient, IDiscoveryCache discoveryCache,
    ILogger<ParOidcEvents> logger, IConfiguration configuration) : OpenIdConnectEvents
{
    private readonly HttpClient _httpClient = httpClient;
    private readonly IDiscoveryCache _discoveryCache = discoveryCache;
    private readonly ILogger<ParOidcEvents> _logger = logger;
    private readonly IConfiguration _configuration = configuration;

    public override async Task RedirectToIdentityProvider(RedirectContext context)
    {
        var clientId = context.ProtocolMessage.ClientId;

        // Construct the state parameter and add it to the protocol message
        // so that we include it in the pushed authorization request
        SetStateParameterForParRequest(context);

        // Make the actual pushed authorization request
        var parResponse = await PushAuthorizationParameters(context, clientId);

        // Now replace the parameters that would normally be sent to the
        // authorize endpoint with just the client id and PAR request uri.
        SetAuthorizeParameters(context, clientId, parResponse);

        // Mark the request as handled, because we don't want the normal
        // behavior that attaches state to the outgoing request (we already
        // did that in the PAR request).
        context.HandleResponse();

        // Finally redirect to the authorize endpoint
        await RedirectToAuthorizeEndpoint(context, context.ProtocolMessage);
    }

    private const string HeaderValueEpocDate = "Thu, 01 Jan 1970 00:00:00 GMT";

    private async Task RedirectToAuthorizeEndpoint(RedirectContext context, OpenIdConnectMessage message)
    {
        // This code is copied from the ASP.NET handler. We want most of its
        // default behavior related to redirecting to the identity provider,
        // except we already pushed the state parameter, so that is left out
        // here. See https://github.com/dotnet/aspnetcore/blob/c85baf8db0c72ae8e68643029d514b2e737c9fae/src/Security/Authentication/OpenIdConnect/src/OpenIdConnectHandler.cs#L364
        if (string.IsNullOrEmpty(message.IssuerAddress))
        {
            throw new InvalidOperationException(
                "Cannot redirect to the authorization endpoint, the configuration may be missing or invalid.");
        }

        if (context.Options.AuthenticationMethod == OpenIdConnectRedirectBehavior.RedirectGet)
        {
            var redirectUri = message.CreateAuthenticationRequestUrl();
            if (!Uri.IsWellFormedUriString(redirectUri, UriKind.Absolute))
            {
                _logger.LogWarning("The redirect URI is not well-formed. The URI is: '{AuthenticationRequestUrl}'.", redirectUri);
            }

            context.Response.Redirect(redirectUri);
            return;
        }
        else if (context.Options.AuthenticationMethod == OpenIdConnectRedirectBehavior.FormPost)
        {
            var content = message.BuildFormPost();
            var buffer = Encoding.UTF8.GetBytes(content);

            context.Response.ContentLength = buffer.Length;
            context.Response.ContentType = "text/html;charset=UTF-8";

            // Emit Cache-Control=no-cache to prevent client caching.
            context.Response.Headers.CacheControl = "no-cache, no-store";
            context.Response.Headers.Pragma = "no-cache";
            context.Response.Headers.Expires = HeaderValueEpocDate;

            await context.Response.Body.WriteAsync(buffer);
            return;
        }

        throw new NotImplementedException($"An unsupported authentication method has been configured: {context.Options.AuthenticationMethod}");
    }

    private async Task<ParResponse> PushAuthorizationParameters(RedirectContext context, string clientId)
    {
        // Send our PAR request
        var requestBody = new FormUrlEncodedContent(context.ProtocolMessage.Parameters);

        var secret = _configuration["OidcDuende:ClientSecret"] ?? throw new Exception("secret missing");
        _httpClient.SetBasicAuthentication(clientId, secret);

        var disco = await _discoveryCache.GetAsync();
        if (disco.IsError)
        {
            throw new Exception(disco.Error);
        }

        var parEndpoint = disco.TryGetValue("pushed_authorization_request_endpoint").GetString();
        var response = await _httpClient.PostAsync(parEndpoint, requestBody);
        if (!response.IsSuccessStatusCode)
        {
            throw new Exception("PAR failure");
        }

        return await response.Content.ReadFromJsonAsync<ParResponse>();
    }

    private static void SetAuthorizeParameters(RedirectContext context, string clientId, ParResponse parResponse)
    {
        // Remove all the parameters from the protocol message, and replace with what we got from the PAR response
        context.ProtocolMessage.Parameters.Clear();
        // Then, set client id and request uri as parameters
        context.ProtocolMessage.ClientId = clientId;
        context.ProtocolMessage.RequestUri = parResponse.RequestUri;
    }

    private static OpenIdConnectMessage SetStateParameterForParRequest(RedirectContext context)
    {
        // Construct State, we also need that (this chunk copied from the OIDC handler)
        var message = context.ProtocolMessage;
        // When redeeming a code for an AccessToken, this value is needed
        context.Properties.Items.Add(OpenIdConnectDefaults.RedirectUriForCodePropertiesKey, message.RedirectUri);
        message.State = context.Options.StateDataFormat.Protect(context.Properties);
        return message;
    }

    public override Task TokenResponseReceived(TokenResponseReceivedContext context)
    {
        return base.TokenResponseReceived(context);
    }

    private class ParResponse
    {
        [JsonPropertyName("expires_in")]
        public int ExpiresIn { get; set; }

        [JsonPropertyName("request_uri")]
        public string RequestUri { get; set; } = string.Empty;
    }
}

Notes

It is simple to use PAR, and it adds improved authentication security at the cost of one extra request in the authentication flow. This should be used if possible. The standard can be used together with the OAuth JAR standard and even extended with OAuth RAR.
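For completeness, the sketch below shows one possible way to wire such an events class into an ASP.NET Core setup: register it as a typed HttpClient, register a discovery cache, and point the OpenID Connect handler at it via EventsType. The authority, client id, and other values are assumptions for illustration and are not taken from the article's code.

// Minimal wiring sketch (Program.cs), assumed values marked below.
// using IdentityModel.Client; using Microsoft.AspNetCore.Authentication.Cookies;
// using Microsoft.AspNetCore.Authentication.OpenIdConnect;
builder.Services.AddHttpClient<ParOidcEvents>();
builder.Services.AddSingleton<IDiscoveryCache>(_ =>
    new DiscoveryCache("https://localhost:44319")); // assumed authority

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
    options.Authority = "https://localhost:44319"; // assumed
    options.ClientId = "par-client";                // assumed
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.SaveTokens = true;
    // The handler resolves ParOidcEvents from DI and calls its
    // RedirectToIdentityProvider override, which sends the PAR request
    // before redirecting to the authorize endpoint.
    options.EventsType = typeof(ParOidcEvents);
});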

Links

https://github.com/DuendeSoftware/IdentityServer

OAuth 2.0 Pushed Authorization Requests (PAR) RFC 9126

OAuth 2.0 Authorization Framework: JWT-Secured Authorization Request (JAR) RFC 9101

OAuth 2.0 Rich Authorization Requests (RAR) RFC 9396

Thursday, 16. November 2023

Heres Tom with the Weather

RIP Karl Tremblay

This is sad news. Karl Tremblay died yesterday. Here are Les Cowboys Fringants at the Centre Bell. The band sings “L’Amérique pleure.” Tonight, the Montreal Canadiens paid tribute to him before the hockey game.

This is sad news. Karl Tremblay died yesterday. Here are Les Cowboys Fringants at the Centre Bell. The band sings “L’Amérique pleure.”

Tonight, the Montreal Canadiens paid tribute to him before the hockey game.

Wednesday, 15. November 2023

ian glazers tuesdaynight

Counselors in the Modern Era

Towards the end of 2019, I was invited to deliver a keynote at the OpenID Foundation Summit in Japan. At a very personal level, the January 2020 Summit was an opportunity to spend time with dear friends from around the world. It would be the last time I saw Kim Cameron in person. It would … Continue reading Counselors in the Modern Era

Towards the end of 2019, I was invited to deliver a keynote at the OpenID Foundation Summit in Japan. At a very personal level, the January 2020 Summit was an opportunity to spend time with dear friends from around the world. It would be the last time I saw Kim Cameron in person. It would include a dinner with the late Vittorio Bertocci. And it was my last “big” trip before the COVID lock down.

At the Summit, I was asked to talk about the “Future of Identity.” It was a bit of a daunting topic since I am no real futurist and haven’t been an industry analyst for a long time. So I set about writing what I thought the next 10 years would look like from the view of a practitioner. You can read what I wrote as well as see a version of me presenting this. 

A concept I put forward in that talk was one of “counselors”: software agents that act on one’s behalf to make introductions of the individual to a service and vice versa, perform recognition of these services and associated credentials, and prevent or at least inhibit risky behavior, such as dodgy data sharing. I provide an overview of these concepts in my Future of Identity talk at approximately minute 20.

Why even talk about counselors

That’s a reasonable question. I have noticed that there is a tendency in the digital identity space (and I am sure in others too) to marvel at problems. Too many pages spent talking about how something is a fantastically hard problem to solve and why we must do so… with scant pages of follow up on how we do so. Additionally, there’s another tendency to marvel at very technical products and services that “solve the problem.” Except they don’t. They solve a part of the problem or they are one of many tools needed to solve the problem. The challenges of digital identity management are legion and they manifest themselves in different ways to different industry sectors in different geographies. One can argue that while we have used magnificent tools to solve account management problems, we really haven’t begun to solve identity management ones. Counselors are a way to both humanize online interactions and make meaningful (as in meaningful and valuable to the individual) progress on solving the challenges of digital identity management.

Counselors in the Modern Era

Sitting through the sessions at Authenticate 2023, and being surrounded by a ton of super smart people, I realized that the tools to make counselors real are very much within our grasp. Among these tools, 4 are the keys to success:

Interface layer powered by generative AI and LLMs
Bilateral recognition tokens powered by passkeys
Potentially verifiable data powered by Verified Credentials
Safe browsing hints

Interface layer powered by generative AI and LLMs

At their core, counselors are active clients that run on a person’s device. Today we can think of these akin to personal digital assistants, password managers, and digital wallets. What is missing is a user interface layer that is more than a Teddy Ruxpin clone that only knows a few key phrases and actions accompanied by zero contextual awareness. What is needed is a meaningful conversational interface that is contextually aware. Generative AI and large language models (LLMs) are showing promise that they can power that layer. And these models are now running on form factors that could easily be mobile, wearable, and eventually implantable. This would enable the counselor to understand requests such as “Find me the best price for 2 premium economy seats to Tokyo for these dates in January” and “know” that I’ll be flying out of D.C. and am a Star Alliance flier. 

Recognition tokens powered by passkeys

We have got to get out of the authentication business. It fails dramatically and spectacularly and seemingly on a daily basis. We have to move to the business of enabling service providers and consumers to recognise each other. Both the original talk and my more recent Ceremonies talk speak to this need. A crucial puzzle piece for recognition is the use of cryptography. Right now the easiest way a normal human being can use cryptography to “prove” who they are is WebAuthn and, more generally, passkeys. Armed with passkeys, a counselor can ensure that a service recognizes the person and that the counselor recognizes the service. To be clear, today, passkeys and the ceremonies and experiences surrounding them are in the early stages of  global adoption… but it is amazing to see the progress that happened in the prior year and it bodes well for the future.

One thing to note is that passkeys as they work today provide a form of cryptographic proof that the thing interacting with a service is the same one you saw the day before, and, notionally, that the same human is associated with the thing. There is no real reason why, in theory, this couldn’t be flipped around such that the service has to provide a form of cryptographic proof that the service is the same one with which the thing interacted the day before. A counselor could broker these kinds of flows to ensure that the service recognizes the person that the counselor is working on behalf of and that the counselor can recognize the service.

Potentially verifiable data powered by verifiable credentials

One thing a counselor needs to do is to share data, on behalf of the individual, with a service. This data could be credit card information, home address, passport number, etc. Some of the data they would need to share are pieces of information about the individual from 3rd party authorities such as a local department of motor vehicles or employer. Ideally, the service would like a means to verify such information, and the individual, in some cases, would like the issuer of the information not to know where the information is shared. Here verified credentials (VCs) could play a role. Additionally, the service may want information about an individual that the individual provides and acts as the authority/issuer. Here too verified credentials could play a role. Standardized request and presentation patterns and technologies are crucially important and my hope is the VCs will provide them.

So why include the word “potentially” in the title of this section? There are many scenarios in which the service neither needs nor cares to  verify information provided by the individual. Said differently, not every use case is a high assurance use case (nor should it be) and not every use case is rooted in a regulated sector. Hopefully VCs will provide a standardized (or at least standardizable) means for data presentation that can span both use cases that require verification and those that do not. If not, we’ll always have CSV.

Safe interaction hints

While one’s street sense might warn you that walking down that dark alley or getting money from that shifty looking ATM isn’t a good idea, an online version of that same street sense isn’t as easily cultivated. Here we need to turn to outside sources. Signals such as suspect certificates, questionable privacy policies, and known malware drop sites can all be combined to inform the individual everything from “This isn’t the site you actually are looking for” to “I suggest you do not sign up on this service… here are 3 alternatives” to “I’ll generate a one-time credit card number for you here.” One can imagine multiple sources for such hints and services. From browser makers to government entities to privacy-oriented product companies and well beyond. This is where real differentiation and competition can and should occur. And this is where counselors move from being reasonably inert cold storage layers for secrets and data to real valuable tools for an online world.

The missing something: privacy 

At this point, I felt like I had identified the critical ingredients for a counselor: interface layer, recognition tokens, potentially verifiable data, and safe browsing hints… and then I mentioned this to Nat Sakimura. Nat has a way of appearing at the critical moment, saying 10 words, and disrupting your way of thinking. I joke that he’s from the future here to tell us what not to do in order to avoid catastrophe. And I have been lucky and privileged enough to have Nat appear from time to time.

This time he appeared to tell me that the four things I had identified were insufficient. There was something missing. Having safe browsing hints is not enough… what is missing are clear, processable and actionable statements about privacy and data use. A counselor can “read” these statements from a site or service, interpret them into something understandable for the individual, better informing them on how the service will behave, or at least how it ought to behave. Couple this with things like consent receipts, which the counselor can manage, and the individual has records of what the service provider said they would do and what the individual agreed to. There is an opportunity here for counselors to focus the individual’s attention on what is material for the individual and learn their preferences, such as knowing the individual will not accept tracking cookies.

From where will these counselors come

One can easily imagine a variety of sources of counselors. The mobile operating system vendors are best-placed to extend their existing so-called smart assistants to become counselors by leveraging their existing abilities to manage passwords, passkeys, and debit and credit cards, along with other forms of credentials. 3rd parties also could build counselors, much like we see with digital assistants, password managers, and digital wallets. I expect that there is a marketplace for services, especially safe browsing hints. Here, organizations from government entities to civil society organizations to privacy-oriented product companies could build modules, for lack of a better word, that would be leveraged by the counselor to enhance its value to the individual.

Regardless of where counselors originate, observability and auditability are key. An individual needs a means to examine the actions the automated counselor took and the reasons for those actions. They need a way to revoke past data sharing decisions and consents granted. And they need a means to “fire” their counselor and switch to a new one whilst retaining access, control, and history.

In conclusion

We, as an industry, have been working on identity management for quite some time. But, from some perspectives, we haven’t made progress. Pam Dingle once said something to me to the effect of, “We’ve built a lot of tools but we haven’t solved the problems. We are just at the point where we have the tools we need to do so.” We have solved many of the problems of user account management, but we have yet to solve the problems of identity management to the same extent. The magical future where I can put the supercomputer on my wrist to work in a way that delivers real value and not just interesting insights and alerts feels both disappointingly far away yet tantalizingly within our grasp. I believe that counselors are what is needed to extend our reach to that magical, and very achievable, future. 

To do this I believe there are five things required:

Ubiquitous devices, available to all regardless of geography and socio-economic condition, in all manner of form factors, which can run privacy-preserving LLMs and thus the interface layer for counselors
Maturation of passkey patterns including recovery use cases such that the era of shared secrets can be enshrined in history
Standardization of request and presentation processes of potentially verifiable data, along with standardized data structures
Trustable sources of safe interaction signals with known rules and standardized data formats
Machine-readable and interpretable privacy and data use notices coupled with consent receipts

The tools we need to unlock the magical future are real… or certainly real enough for proof of concept purposes. Combining those five ingredients makes for the magical future and a much more magical present… and this is the present in which I want to be.

[I am indebted to Andi Hindle for his help with this post. Always have a proper English speaker check your work — IG 11/15/2023]

Tuesday, 14. November 2023

Talking Identity

Thank You for Supporting the IdentityFabio Fundraiser

It is a testament to his enduring spirit that we all continue to find ways to cope with the absence of Vittorio from our identity community and from our lives. On the day that I learnt of the news, I poured out my immediate feelings in words. But as Ian put it so eloquently (like […]

It is a testament to his enduring spirit that we all continue to find ways to cope with the absence of Vittorio from our identity community and from our lives. On the day that I learnt of the news, I poured out my immediate feelings in words. But as Ian put it so eloquently (like only he can), we continued to look for ways to operationalize our sadness. In his case, he and Allan Foster set out to honor Vittorio’s legacy by setting up the Vittorio Bertocci Award under the auspices of the Digital Identity Advancement Foundation, which hopefully will succeed in becoming a lasting tribute to the intellect and compassion that Vittorio shared with the identity community. My personal attempt was to try and remember the (often twisted yet somehow endearingly innocent) sense of humor that Vittorio imbued into our personal interactions. And so, prodded by the inquiries from many about the fun (and funny) t-shirt I had designed just to get a good laugh from him at Identiverse, I created a fundraiser for the Pancreatic Cancer Action Network in his honor.

Thanks to all of you Vittorio stans and your incredible generosity, we raised $3,867.38 through t-shirt orders ($917.63) and donations ($2,949.75). I am obviously gratified by the support you gave my little attempt at operationalizing my sadness. But the bonus I wasn’t expecting was the messages people left on the fundraiser site – remembering Vittorio, what he meant to them, and in some cases how cancer affected their lives personally. In these troubling times, it was heartwarming to read these small signals of our shared humanity.

Thank you all once again. And love you Vittorio, always.

Monday, 13. November 2023

Jon Udell

Debugging SQL with LLMS

Here’s the latest installment in the series on LLM-assisted coding over at The New Stack: Techniques for Using LLMs to Improve SQL Queries. The join was failing because the two network_interfaces columns contained JSONB objects with differing shapes; Postgres’ JSONB containment operator, @>, couldn’t match them. Since the JSONB objects are arrays, and since the … Continue reading Debugging SQL w

Here’s the latest installment in the series on LLM-assisted coding over at The New Stack: Techniques for Using LLMs to Improve SQL Queries.

The join was failing because the two network_interfaces columns contained JSONB objects with differing shapes; Postgres’ JSONB containment operator, @>, couldn’t match them. Since the JSONB objects are arrays, and since the desired match was a key/value pair common to both arrays, it made sense to explode the array and iterate through its elements looking to match that key/value pair.

Initial solutions from ChatGPT, Copilot Chat, and newcomer Unblocked implemented that strategy using various flavors of cross joins involving Postgres’ jsonb_array_elements function.

The rest of the series:

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify

9 Learning While Coding: How LLMs Teach You Implicitly

10 How LLMs Helped Me Build an ODBC Plugin for Steampipe

11 How to Use LLMs for Dynamic Documentation

12 Let’s talk: conversational software development


Phil Windleys Technometria

dApps Are About Control, Not Blockchains

I recently read Igor Shadurin's article Dive Into dApps. In it, he defines a dApp (or decentralized application): The commonly accepted definition of a dApp is, in short, an application that can operate autonomously using a distributed ledger system.

I recently read Igor Shadurin's article Dive Into dApps. In it, he defines a dApp (or decentralized application):

The commonly accepted definition of a dApp is, in short, an application that can operate autonomously using a distributed ledger system.

From Dive Into dApps
Referenced 2023-11-12T15:39:42-0500

I think that definition is too specific to blockchains. Blockchains are an implementation choice and there are other ways to solve the problem. That said, if you're looking to create a dApp with a smart contract, then Igor's article is a nice place to start.

Let's start with the goal and work backwards from there. The goal of a dApp is to give people control over their apps and the data in them. This is not how the internet works today. As I wrote in The CompuServe of Things, the web and mobile apps are almost exclusively built on a model of intervening administrative authorities. As the operators of hosted apps and controllers of the identity systems upon which they're founded, the administrators can, for any reason whatsoever, revoke your rights to the application and any data it contains. Worse, most use your data for their own purposes, often in ways that are not in your best interest.

dApps, in contrast, give you control of the data and merely operate against it. Since they don't host the data, they can run locally, at the edge. Using smart contracts on a blockchain is one way to do this, but there are others, including peer-to-peer networks and InterPlanetary File System (IPFS). The point is, to achieve their goal, dApps need a way to store data that the application can reliably and securely reference, but that a person, rather than the app provider, controls. The core requirement for achieving control is that the data service be run by a provider who is not an intermediary and that the data model be substitutable. Control requires meaningful choice among a group of interoperable providers who are substitutable and compete for the trust of their customers.

I started writing about this idea back in 2012 and called it Personal Cloud Application Architecture. At the time the idea of personal clouds had a lot of traction and a number of supporters. We built a demonstration app called Forever and later, I based the Fuse connected car application on this idea: let people control and use the data from their cars without an intermediary. Fuse's technical success showed the efficacy of the idea at scale. Fuse had a mobile app and felt like any other connected car application, but underneath the covers, the architecture gave control of the data to the car's owner. Dave Winer has also developed applications that use a substitutable backend storage based on Node.

Regular readers will wonder how I made it this far without mentioning picos. Forever and Fuse were both based on picos. Picos are designed to be self-hosted or hosted by providers who are substitutable. I've got a couple of projects teed up for two groups of students this winter that will further extend the suitability of picos as backends for dApps:

Support for Hosting Picos—the root pico in any instance of the pico engine is the ancestor of all picos in that engine and thus has ultimate control over them. To date, we've used the ability to stand up a new engine and control access to it as the means of providing control for the owner. This project will allow a hosting provider to easily stand up a new instance of the engine and its root pico. For this to be viable, we'll use the support for peer DIDs my students built into the engine last year to give owners a peer DID connection to their root pico on their instance of the engine and thus give them control over the root pico and all its descendants.

Support for Solid Pods—at IIW this past October, we had a few sessions on how picos could be linked to Solid pods. This project will marry a pod to each pico that gets created and link their lifecycles. This, combined with their support for peer DIDs, makes the pico and its data movable between engines, supporting substitutability.

If I thought I had the bandwidth to support a third group, I'd have them work on building dApps and an App Store to run on top of this. Making that work has a few other fun technical challenges. We've done this before. As I said, Forever and Fuse were both essentially dApps. Manifold, a re-creation of SquareTag, is a large dApp for the Internet of Things that supports dApplets (is that a thing?) for each thing you store in it. What makes it a dApp is that the data is all in picos that could be hosted anywhere...at least in theory. Making that less theoretical is the next big step. Bruce Conrad has some ideas around that, which he calls the Pico Labs Affiliate Network.

I think the work of supporting dApps and personal control of our data is vitally important. As I wrote in 2014:

On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?

From The CompuServe of Things
Referenced 2023-11-12T17:15:48-0500

The choice is ours. We can build the world we want to live in.


Damien Bod

Authentication with multiple identity providers in ASP.NET Core

This article shows how to implement authentication in ASP.NET Core using multiple identity providers or secure token servers. When using multiple identity providers, the authentication flows need to be separated per scheme for the sign-in flow and the sign-out flow. The claims are different and would require mapping logic depending on the authorization logic of […]

This article shows how to implement authentication in ASP.NET Core using multiple identity providers or secure token servers. When using multiple identity providers, the authentication flows need to be separated per scheme for the sign-in flow and the sign-out flow. The claims are different and would require mapping logic depending on the authorization logic of the application.

Code: https://github.com/damienbod/MulitipleClientClaimsMapping

Setup

OpenID Connect is used for the authentication and the session is stored in a cookie. A confidential client using the OpenID Connect code flow with PKCE is used for both schemes. The client configuration in the secure token servers needs to match the ASP.NET Core configuration. The sign-in and the sign-out callback URLs are different for the different token servers.

The AddAuthentication method is used to define the authentication services. Cookies are used to store the session. The “t1” scheme is used to set up the Duende OpenID Connect client and the “t2” scheme is used to set up the OpenIddict scheme. The callback URLs are specified in this setup.

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect("t1", options => // Duende IdentityServer
{
    builder.Configuration.GetSection("IdentityServerSettings").Bind(options);
    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.SaveTokens = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name"
    };
    options.MapInboundClaims = false;
})
.AddOpenIdConnect("t2", options => // OpenIddict server
{
    builder.Configuration.GetSection("IdentityProviderSettings").Bind(options);
    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.SaveTokens = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name"
    };
});

The configurations which are different per environment are read from the configuration object. The source of the data can be the appsettings.json, Azure Key Vault, user secrets or whatever you use.

"IdentityProviderSettings": { // OpenIddict "Authority": "https://localhost:44318", "ClientId": "codeflowpkceclient", "ClientSecret": "--your-secret-from-keyvault-or-user-secrets--", "CallbackPath": "/signin-oidc-t2", "SignedOutCallbackPath": "/signout-callback-oidc-t2" }, "IdentityServerSettings": { // Duende IdentityServer "Authority": "https://localhost:44319", "ClientId": "oidc-pkce-confidential", "ClientSecret": "--your-secret-from-keyvault-or-user-secrets--", "CallbackPath": "/signin-oidc-t1", "SignedOutCallbackPath": "/signout-callback-oidc-t1" }

Sign-in

The application and the user can authenticate using different identity providers. The scheme is set up when starting the authentication flow so that the application knows which secure token server should be used. I added two separate controller endpoints for this. The Challenge request is then sent correctly.

[HttpGet("LoginOpenIddict")] public ActionResult LoginOpenIddict(string returnUrl) { return Challenge(new AuthenticationProperties { RedirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/", }, "t2"); } [HttpGet("LoginIdentityServer")] public ActionResult LoginIdentityServer(string returnUrl) { return Challenge(new AuthenticationProperties { RedirectUri = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/" }, "t1"); }

The UI part of the application calls the correct endpoint. This is just an HTTP link which sends a GET request.

<li class="nav-item"> <a class="nav-link text-dark" href="~/api/Account/LoginIdentityServer">Login t1 IdentityServer</a> </li> <li class="nav-item"> <a class="nav-link text-dark" href="~/api/Account/LoginOpenIddict">Login t2 OpenIddict</a> </li>

Sign-out

The application also needs to sign out correctly. A sign-out request is sent to the secure token server, not just handled locally in the application. To sign out correctly, the application must use the correct scheme. This can be found using the HttpContext features. Once the scheme is known, the sign-out request can be sent to the correct secure token server.

[Authorize]
public class LogoutModel : PageModel
{
    public async Task<IActionResult> OnGetAsync()
    {
        if (User.Identity!.IsAuthenticated)
        {
            var authProperties = HttpContext.Features
                .GetRequiredFeature<IAuthenticateResultFeature>();

            var schemeToLogout = authProperties.AuthenticateResult!.Ticket!
                .Properties.Items[".AuthScheme"];

            if (schemeToLogout != null)
            {
                return SignOut(new AuthenticationProperties
                {
                    RedirectUri = "/SignedOut"
                },
                CookieAuthenticationDefaults.AuthenticationScheme, schemeToLogout);
            }
        }

        await HttpContext.SignOutAsync(
            CookieAuthenticationDefaults.AuthenticationScheme);

        return Redirect("/SignedOut");
    }
}

Notes

Setting up multiple secure token servers or identity providers for a single ASP.NET Core application is relatively simple using the standard ASP.NET Core endpoints. Once you start using the provider-specific authentication NuGet client packages, it gets complicated, as the client libraries overwrite different default values, which breaks the other client flows. When using multiple identity providers, it is probably better not to use the client libraries and to stick to the standard OpenID Connect implementation.
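As mentioned in the introduction, the claims returned by the two providers differ and need mapping logic. A minimal sketch of one way to normalize them with ASP.NET Core's IClaimsTransformation is shown below; the claim type names are hypothetical and the exact mapping depends on the application's authorization requirements.

// Sketch only: normalize provider-specific claims into the claim types the
// application expects. The claim names used here are hypothetical.
// using Microsoft.AspNetCore.Authentication; using System.Security.Claims;
public class ProviderClaimsTransformation : IClaimsTransformation
{
    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var identity = principal.Identity as ClaimsIdentity;
        if (identity == null || !identity.IsAuthenticated)
        {
            return Task.FromResult(principal);
        }

        // Example: one provider returns "email", the other "emailaddress".
        if (identity.FindFirst("email") == null)
        {
            var alt = identity.FindFirst("emailaddress");
            if (alt != null)
            {
                identity.AddClaim(new Claim("email", alt.Value));
            }
        }

        return Task.FromResult(principal);
    }
}

// Registration in Program.cs:
// builder.Services.AddTransient<IClaimsTransformation, ProviderClaimsTransformation>();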

Links

https://learn.microsoft.com/en-us/aspnet/core/security

https://learn.microsoft.com/en-us/aspnet/core/security/authorization/limitingidentitybyscheme

https://github.com/damienbod/aspnetcore-standup-authn-authz

https://github.com/damienbod/aspnetcore-standup-securing-apis

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/claims

User claims in ASP.NET Core using OpenID Connect Authentication

Friday, 10. November 2023

Bill Wendels Real Estate Cafe

Can #Fee4Savings transform Class Action Lawsuits into Consumer Saving – BILLIONS annually?

Visiting RealEstateCafe’s website and wondering if we’ve been missing in action or still in business as the residential real estate brokerage industry is being slammed… The post Can #Fee4Savings transform Class Action Lawsuits into Consumer Saving – BILLIONS annually? first appeared on Real Estate Cafe.

Visiting RealEstateCafe’s website and wondering if we’ve been missing in action or still in business as the residential real estate brokerage industry is being slammed…

The post Can #Fee4Savings transform Class Action Lawsuits into Consumer Saving – BILLIONS annually? first appeared on Real Estate Cafe.

Thursday, 09. November 2023

Riley Hughes

How Vittorio Shaped my Perspective on SSI, and how he can Shape Yours

Photo credit: Brian Campbell from this article on the Ping Identity blog Vittorio Bertocci, much like many others in the identity space, had an important impact on my professional life. You can imagine how I felt when, a month following his tragic passing, I saw another blog post produced by the GOAT of creating understandable technical content about identity. Further, the subject of the post
Photo credit: Brian Campbell from this article on the Ping Identity blog

Vittorio Bertocci, much like many others in the identity space, had an important impact on my professional life. You can imagine how I felt when, a month following his tragic passing, I saw another blog post produced by the GOAT of creating understandable technical content about identity. Further, the subject of the post is my deepest area of knowledge: verifiable credential adoption (which was the topic of conversation for almost all my time spent with Vittorio).

Vittorio’s sage perspective on verifiable credentials is important for the IDtech community to understand. In this post, I want to outline how Vittorio influenced our direction at Trinsic and highlight a few important points from the recent post.

In 2017 I was a fresh face in the identity industry, pumped full of slogans and claims from the infinitely optimistic self-sovereign identity (SSI) evangelists who I surrounded myself with at the time. Having only seen one perspective, I fully believed that consumers could “own” their identity, that “data breaches would be a thing of the past”, and that verifiable credentials would usher in a new era of privacy maximalism.

The world needs idealists — but it also needs pragmatists. Vittorio became an archetype of the pragmatic energy that eventually worked its way into the culture and products at the company I cofounded in 2019, Trinsic. His directed questions and healthy skepticism of marvelous SSI claims came not from a Luddite spirit, but from deep experience. In a blog post about his involvement in the CardSpace project at Microsoft, he said, “When the user centric identity effort substantially failed to gain traction in actual products, with the identity industry incorporating some important innovations (hello, claims) but generally rejecting many of the key tenets I held so dear, something broke inside me. I became disillusioned with pure principled views, and moved toward a stricter Job to be done, user cases driven stance.”

For the last four years as a reusable identity infrastructure company, our developer tools for integrating verifiable credentials, identity wallets, and policy/governance tools have become quite popular. Thousands of developers have created dozens of applications that have acquired hundreds of thousands of end-users and issued close to a million credentials in production. This experience has given us a unique vantage point on patterns and methods for successfully deploying verifiable credentials in production. We’ve also spoken to many of these customers and other partners on our podcast and in private to understand these patterns more deeply.

I state all of this so that I can say the following with some credibility: Vittorio’s perspectives (and by extension Auth0’s) are a must-read for anyone working on user-centric identity. I’ll double click on a few of what I view to be the most important points below.

What do we need to do to make a classic OpenID Connect flow behave more like the drivers license for buying wine scenario in offline life?
The two main discrepancies we identified were:
Ability to use the token with multiple RPs
Ability to transact with an RP without IdP knowing anything about time and parties involved in the transaction

The first point I want to highlight is that Vittorio introduces verifiable credentials (VCs) by relating them to something his audience is familiar with — OIDC. This is not only a helpful practice for pitching products in general, but it embeds an important point for IDtech people: VCs are not a fundamental transformation of identity. VCs are an incremental improvement on previous generations of identity technology. (But one that I believe can enable exponentially better product experiences when done right.)

VCs will be adopted when they are applied to use cases that existing solutions fail to accommodate. It’s key for VC-powered products to demonstrate how VCs enable a problem to be solved in a new way — otherwise, buyers will opt for the safer federated solutions over VCs.

A classic example to illustrate my point is “passwordless login”. I’ve been hearing about it for 6 years, and yet never actually seen verifiable credentials be adopted for passwordless authentication. I believe the reason for this is that the two points above (ability to use the token with multiple RPs, IdP not knowing about the transaction) aren’t important enough for this use case, and that other, lighter-weight solutions can do it better.

We might say that there are too many cooks in the kitchen… I dare say this space is overspecified… A lot of work will need to happen in the marketplace, as production implementations with working use cases feel the pain points from these specs and run into a few walls for some of VCs to fully come to life.

Vittorio taught me about the history of OAuth, OpenID, OAuth2, and OpenID Connect. I learned about early, nonstandard iterations of “login with” buttons that had millions of active users. I learned about the market forces that led these divergent applications to eventually standardize.

Standardization is essential for adoption. But adoption is essential for knowing what to standardize (there’s nothing worse than standardizing the wrong thing)! Prematurely standardizing before adoption is a classic “cart before the horse” scenario. My conversations with Vittorio led me to write this catch-22 of interoperability post.

IDtech builders need to focus on building a good, adoptable product first. Then make it interoperable/compatible with other products second. This is a key design principle baked into Trinsic’s platform (e.g. whatever you build will inherit interoperability when it’s needed, but you won’t waste time figuring it out in the meantime).

[A misconception:] Centralized DBs will disappear… and in turn this would prevent some of the massive data leaks that we have seen in recent history. It’s unclear how that would work.

Vittorio correctly identified this as a misconception. Centralized databases indeed won’t disappear anytime soon. The notion that companies “won’t need to hold my data”, if it ever happens, will be far in the future.

The near-term disruption that will happen, however, is something I pointed out in a conversation with Vittorio that started on Twitter and moved offline. Service providers who don’t originate data themselves, but aggregate or intermediate between parties in a transaction, are at risk of disruption from verifiable credentials.

The example I use in the post linked above is Work Number. Employers give Work Number information about their employees to avoid fielding background screening calls. If employers gave that information directly to employees in a verifiable credential, however, Work Number’s role would need to change dramatically. Because of this threat, identity verification, student attestations, background screening, and other of these kinds of companies are among the first to adopt verifiable credentials.

Unless users decide to not present more data than necessary for particular operations, it is possible that they will end up disclosing more/all credential data just for usability sake.

This dynamic is Jevons paradox applied to identity — VCs counterintuitively risk creating worse privacy conditions, even with things like data minimization, because of the frequency of use. Nobody has a crystal ball, so it’s impossible to know whether this risk will materialize. Governance is the best tool at our disposal to reduce this risk and enable better privacy for people. I talk about this a fair bit in this webinar and plan to write a blog post about it in the future.

Users will typically already have credentials in their wallets and verifiers will simply need to verify them, in a (mostly) stateless fashion… However, we do have a multi-parties cold start problem. To have viable VCs we need effective, dependable and ubiquitous wallets. To have good wallets, we need relying parties implementing flows that require them, and creating real, concrete requirements for actual use. To incentivize RPs to implement and explore new flows, we need high value, ubiquitous credentials that make good business sense to leverage. But to get natural IdPs to create the infrastructure to issue such credentials, you need all of the above… plus a business case.

The chicken-and-egg problem (or, “cold start” problem) is tricky for almost all IDtech products. While there will always be exceptions to the rule, I have seen enough failure and success to feel confident in a somewhat concrete recipe for overcoming this obstacle.

Remove the wallet as a dependency. If a user needs to be redirected to an app store, download an app, step through onboarding steps, see an empty app, go scan QR codes to get credentials, all before it can actually be used… it’s extremely unlikely to be adopted. Instead, give users invisible “wallets” for their credentials. This is the #1 unlock that led to several of Trinsic’s customers scaling to hundreds of thousands of users.
If your entity can play the role of issuer (or IdP) then you’re in a great position. If you’re not, obtain your own data so that you can be.
Dig in with one or more companies and partner closely to build something very specific first with existing data.
Sell to use cases that aren’t well-served by existing OIDC or similar technologies.
Expand the markets you’re selling to by going into the long tail.
Focus on either low-frequency, high-value use cases or high-frequency, low-value applications.
Make it easy to integrate.

Shamefully, it took me 5 years of pattern matching to land at the conclusions that Vittorio and others saw much sooner. These are the same points that led to the adoption of OAuth/OIDC. And frankly, when you look at them, they are pretty obvious.

The main one is being able to disclose our identity/claims without issuers knowing. It is a civil liberty; it is a right. As more of our life moves online, we should be able to express our identity like we do it offline.

Privacy is an important point. This requirement, in particular, is a requirement for most governments to be involved. It’s also a biggie for any sensitive/”vice” industry (gambling, adult content, controlled substances, etc.) which historically is a driver of new technology due to having broad appeal and high frequency.

Once [the adoption flywheel] happens, it will likely happen all of a sudden… which is why it is really a good idea to stay up-to-date and experiment with VCs TODAY

This “slow… then all at once” dynamic is a critical insight, and very true. We’ve seen this over the last year in the identity verification segment. My first conversations with identity verification companies were at Sovrin in 2018. Despite consistently following along, there was no movement from anybody for years. Suddenly, after Onfido acquired Airside in May, Plaid, Persona, ClearMe, Instnt, Au10tix, and more have jumped into the fray with their own “Reusable ID” solutions.

Auth0 posits that governments will be the critical unlock for verifiable credentials. While I don’t think that’s wrong, we are seeing increased bottom-up adoption from the private sector, both from IDtech companies and verification providers of all kinds. Governments will play an important role, ideally anchoring wallets with high-assurance legal identity credentials and leading with standards that will produce interoperable solutions.

If you haven’t already, I encourage you to read the whole post. I’m grateful for the Auth0 team for shipping the post after Vittorio’s passing, so the world can benefit from his knowledge. You can also continue to learn from Vittorio through his podcast, which I’ve found to be a tremendous resource over the years.

If this topic interests you, check out the podcast I host, The Future of Identity. And if you have any feedback on this post, find me on X or LinkedIn — I’m always trying to get smarter and would love to know if I’m wrong about anything. 😊

Wednesday, 08. November 2023

Mike Jones: self-issued

On the journey to an Implementer’s Draft: OpenID Federation draft 31 published

OpenID Federation draft 31 has been published at https://openid.net/specs/openid-federation-1_0-31.html and https://openid.net/specs/openid-federation-1_0.html. It’s the result of concerted efforts to make the specification straightforward to read, understand, and implement for developers. Many sections have been rewritten and simplified. Some content has been reorganized to make its structure and

OpenID Federation draft 31 has been published at https://openid.net/specs/openid-federation-1_0-31.html and https://openid.net/specs/openid-federation-1_0.html. It’s the result of concerted efforts to make the specification straightforward to read, understand, and implement for developers. Many sections have been rewritten and simplified. Some content has been reorganized to make its structure and relationships more approachable. Many inconsistencies were addressed.

Fixing some inconsistencies resulted in a small number of breaking changes. For instance, the name “trust_mark_owners” is now consistently used throughout, whereas an alternate spelling was formerly also used. The editors tried to make all known such changes in this version, so hopefully this will be the last set of breaking changes. We published draft 31 now in part to get these changes out to implementers. See the history entries at https://openid.net/specs/openid-federation-1_0-31.html#name-document-history for a detailed description of the changes made.

A comprehensive review of the specification is still ongoing. Expect more improvements in the exposition in draft 32. With any luck, -32 will be the basis of the next proposed Implementer’s Draft.

We’re definitely grateful for all the useful feedback we’re receiving from developers. Developer feedback is gold!

Tuesday, 07. November 2023

@_Nat Zone

[November 19-22] An Invitation to BGIN Block #9, a Global Meeting on Blockchain Governance

Following the communiqué adopted at the June 2019 G20, chaired by Japan, which explicitly noted "the importance of multi-stakeholder dialogue in decentralized finance", the Blockchain Governance Initiative Network (hereafter BGIN…

The ninth general meeting (Block #9) of the Blockchain Governance Initiative Network (BGIN), which was established in April 2020 following the communiqué adopted at the June 2019 G20, chaired by Japan, that explicitly noted "the importance of multi-stakeholder dialogue in decentralized finance", will be held in Sydney, Australia, from November 19 to November 22. BGIN sessions are not panel discussions: although Main Discussants primarily lead each discussion, anyone can take part, and the results are kept as meeting notes that feed into subsequent document drafting. Many stakeholders from Japan, including the Financial Services Agency, the Bank of Japan, business, engineers, and academia, will attend on site and join the document-drafting discussions. On-site participation is of course best, but remote participation is also possible, so please consider joining (registration is available here).

Highlights of each day

The highlights of each day are as follows. (Adapted from the blog of Professor Matsuo of Georgetown University…)

Day 1 (Blockchain Governance)

Day 1 revisits the governance of blockchain, a technology developed at the grassroots level. Blockchain governance was also discussed at the Internet Governance Forum (IGF) held in Kyoto in October; members who took part in the IGF, financial regulators, and blockchain engineers will discuss the challenges of blockchain governance and work toward a shared understanding of what governance should look like. In addition, after a lecture on Ethereum governance from the Ethereum Foundation, participants will identify governance issues and discuss future documentation work.

Day 2 (Financial Applications)

On the morning of Day 2, after revisiting what decentralization means for financial applications of blockchain, participants will discuss how CBDCs, deposit tokens, stablecoins, crypto assets, DeFi, and other things that look like "money" but differ slightly from one another should work together.

The day starts with a keynote on decentralized finance in MakerDAO, followed by a discussion of the coordination of CBDCs, deposit tokens, stablecoins, crypto assets, and DeFi with central banks, academia, stablecoin operators, and blockchain engineers, covering the differences in their properties, how they should work together, and the direction of future documentation.

Next, the R&D strategy for digital asset standardization currently being considered by the US government will be discussed, with Carole House, who coordinated the executive order on digital assets while at the White House, as session chair, joined by the TC307 chair and members of NIST.

In the afternoon, workshop sessions split into two parallel tracks, with 90 minutes per topic spent on concrete document editing and discussion:

Failure points of stablecoins
Transparency of decentralized applications and the soundness of DeFi
CBDC and privacy
Smart contract security and governance

Day 3 (Identity, Key Management, Privacy)

In the morning, Fabian Schar, co-author with Ethereum developer Vitalik Buterin of the much-discussed Privacy Pools proposal, will give a keynote on a new approach to KYC/AML using Privacy Pools, followed by a discussion with Zooko Wilcox, head of ZCash and cypherpunk, on how to deploy it and how to build multi-stakeholder understanding.

After that, there will be a discussion of wallets, a key component of blockchain security, privacy, and business. Led by Daniel Goldscheider, head of the Open Wallet Foundation, the discussion will cover building secure wallets and combining them with multi-party computation.

In the afternoon, workshop sessions split into two parallel tracks, with 90 minutes per topic spent on concrete document editing and discussion:

Zero-knowledge proofs and their applications
Wallet accountability
The privacy impact of WorldCoin
Digital identity

Day 4 (Industry Sessions and Local Blockchain Sessions)

Day 4 centers on presentations and panels from sponsoring companies and organizations, with discussion of industry trends and challenges, together with an introduction to blockchain trends in Australia and discussion aimed at their further development.

For more information

Detailed information, including the official timetable, is available on the dedicated page of the BGIN website; please have a look there as well.


Jon Udell

Let’s Talk: Conversational Software Development

Here’s number 12 in the series on LLM-assisted coding over at The New Stack: Let’s Talk: Conversational Software Development I keep coming back to the theme of the first article in this series: When the rubber duck talks back. Thinking out loud always helps. Ideally, you get to do that with a human partner. A … Continue reading Let’s Talk: Conversational Software Development

Here’s number 12 in the series on LLM-assisted coding over at The New Stack: Let’s Talk: Conversational Software Development

I keep coming back to the theme of the first article in this series: When the rubber duck talks back. Thinking out loud always helps. Ideally, you get to do that with a human partner. A rubber duck, though a poor substitute, is far better than nothing.

Conversing with LLMs isn’t like either of these options, it’s something else entirely; and we’re all in the midst of figuring out how it can work. Asking an LLM to write code, and having it magically appear? That’s an obvious life-changer. Talking with an LLM about the code you’re partnering with it to write? I think that’s a less obvious but equally profound life-changer.

The rest of the series:

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify

9 Learning While Coding: How LLMs Teach You Implicitly

10 How LLMs Helped Me Build an ODBC Plugin for Steampipe

11 How to Use LLMs for Dynamic Documentation

Monday, 06. November 2023

Damien Bod

Using a strong nonce based CSP with Angular

This article shows how to use a strong nonce based CSP with Angular for scripts and styles. When using a nonce, the overall security can be increased and it is harder to carry out XSS attacks or other types of attacks in the web UI. A separate solution is required for development and production deployments. Code: […]

This article shows how to use a strong nonce based CSP with Angular for scripts and styles. When using a nonce, the overall security can be increased and it is harder to carry out XSS attacks or other types of attacks in the web UI. A separate solution is required for development and production deployments.

Code: https://github.com/damienbod/bff-aspnetcore-angular

When using Angular, the root of the UI usually starts from an HTML file. A meta tag with the CSP_NONCE placeholder was added, as well as the ngCspNonce attribute from Angular. The meta tag is used to pass the nonce to the Angular provider or to development npm packages. The ngCspNonce attribute is used by Angular, although this does not work without also adding the nonce to the Angular provider.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <meta name="CSP_NONCE" content="**PLACEHOLDER_NONCE_SERVER**" />
    <title>ui</title>
    <base href="/" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="icon" type="image/x-icon" href="favicon.ico" />
</head>
<body>
    <app-root ngCspNonce="**PLACEHOLDER_NONCE_SERVER**"></app-root>
</body>
</html>

The CSP_NONCE is added to the Angular providers. This is required, otherwise the nonce is not added to the Angular generated scripts. The nonce value is read from the meta tag header.

import { provideHttpClient, withInterceptors } from '@angular/common/http';
import { ApplicationConfig, CSP_NONCE } from '@angular/core';
import { secureApiInterceptor } from './secure-api.interceptor';
import {
  provideRouter,
  withEnabledBlockingInitialNavigation,
} from '@angular/router';
import { appRoutes } from './app.routes';

const nonce = (
  document.querySelector('meta[name="CSP_NONCE"]') as HTMLMetaElement
)?.content;

export const appConfig: ApplicationConfig = {
  providers: [
    provideRouter(appRoutes, withEnabledBlockingInitialNavigation()),
    provideHttpClient(withInterceptors([secureApiInterceptor])),
    {
      provide: CSP_NONCE,
      useValue: nonce,
    },
  ],
};

CSP in HTTP responses production

The UI now uses the nonce based CSP. The server can return all responses forcing this and increasing the security of the web application. It is important to use a nonce and not the self attribute as this overrides the nonce. You do not want to use self as this allows jsonp scripts. The unsafe-inline is used for backward compatibility. This is a good setup for production.

style-src 'unsafe-inline' 'nonce-your-random-nonce-string';
script-src 'unsafe-inline' 'nonce-your-random-nonce-string';

CSP style in development

Unfortunately, it is not possible to apply the style nonce in development due to the Angular setup. I used 'self' in development for styles. This works, but has problems, as you only discover style errors after a deployment, not during feature development. The later you discover errors, the more expensive they are to fix.

Replace the values in the index.html

Now that the Angular application can use the nonce correctly, the nonce needs to be updated with every page refresh or GET. The nonce is generated in the server part of the web application and is added to the index.html file on each response. It is applied to all scripts and styles.
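As a rough sketch of what the server side could look like (not necessarily how the linked repository implements it), a small piece of ASP.NET Core middleware can generate a nonce per request, replace the placeholder in the Angular index.html, and emit the matching CSP header. The placeholder name matches the index.html above; the file path and the exact CSP values are assumptions.

// Sketch only: per-request nonce generation and placeholder replacement.
// using System.Security.Cryptography;
app.Use(async (context, next) =>
{
    if (context.Request.Path == "/" || context.Request.Path == "/index.html")
    {
        var nonce = Convert.ToBase64String(RandomNumberGenerator.GetBytes(32));

        // Assumes the Angular build output is served from wwwroot.
        var html = await File.ReadAllTextAsync(
            Path.Combine(app.Environment.WebRootPath, "index.html"));
        html = html.Replace("**PLACEHOLDER_NONCE_SERVER**", nonce);

        // In production both styles and scripts use the nonce; in development
        // the styles fall back to 'self' as described above.
        context.Response.Headers["Content-Security-Policy"] =
            $"style-src 'unsafe-inline' 'nonce-{nonce}'; " +
            $"script-src 'unsafe-inline' 'nonce-{nonce}';";

        context.Response.ContentType = "text/html; charset=utf-8";
        await context.Response.WriteAsync(html);
        return;
    }

    await next();
});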

Links

https://nx.dev/getting-started/intro

https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP

https://github.com/damienbod/bff-auth0-aspnetcore-angular

https://github.com/damienbod/bff-openiddict-aspnetcore-angular

https://github.com/damienbod/bff-azureadb2c-aspnetcore-angular

https://github.com/damienbod/bff-aspnetcore-vuejs

Friday, 03. November 2023

Werdmüller on Medium

No, newsrooms don’t need to cede control to social media.

But they do need to evolve. Continue reading on Medium »

But they do need to evolve.

Continue reading on Medium »


Wrench in the Gears

Do You Wanna Play A Game?

After a month hiatus, I’m back east for the winter. Last night we streamed the second part of “Do You Want To Play A Game,” which explores the psycho-geography of gamified bio-hybrid relational computing. Check out the CV of Michael Mateas, a professor in computing and game science at UC Santa Cruz, for links about [...]

After a month hiatus, I’m back east for the winter. Last night we streamed the second part of “Do You Want To Play A Game,” which explores the psycho-geography of gamified bio-hybrid relational computing. Check out the CV of Michael Mateas, a professor in computing and game science at UC Santa Cruz, for links about research into automating communication between players and agents in interactive drama sessions. Or poke around the map below. In the next week or so Jason and I plan to present insights from our journey from Denver to Arkansas and back. Stay tuned.

Source: https://embed.kumu.io/5a929576b87ec9690a13a1b7be9fbb66#untitled-map?s=bm9kZS1YY1VlZ1hLeA%3D%3D

Part 1

Part 2

Thursday, 02. November 2023

Heres Tom with the Weather

Challenging Orwellian Language

A week ago, I made a post about the bizarre use of the phrase “right to self-defense” and today Ta-Nehisi Coates addressed this phrase. I keep hearing this term repeated over and over again: “the right to self-defense.” What about the right to dignity? What about the right to morality? What about the right to be able to sleep at night? Because what I know is, if I was complicit — and I am c

A week ago, I made a post about the bizarre use of the phrase “right to self-defense” and today Ta-Nehisi Coates addressed this phrase.

I keep hearing this term repeated over and over again: “the right to self-defense.” What about the right to dignity? What about the right to morality? What about the right to be able to sleep at night? Because what I know is, if I was complicit — and I am complicit — in dropping bombs on children, in dropping bombs on refugee camps, no matter who’s there, it would give me trouble sleeping at night. And I worry for the souls of people who can do this and can sleep at night.


Phil Windleys Technometria

Permissionless and One-to-One

In a recent post, Clive Thompson speaks of the humble cassette tape as a medium that had a weirdly Internet-like vibe. Clive is focusing on how the cassette tape unlocked creativity, but in doing so he describes its properties in a way that is helpful to discussions about online relationships in general.


Clive doesn't speak about cassette tapes being decentralized. In fact, I chuckle as I write that down. Instead he's focused on some core properties. Two I found the most interesting were that cassette tapes allowed one-to-one exchange of music and that they were permissionless. He says:

If you wanted to record a cassette, you didn’t need anyone’s permission.

This was a quietly radical thing, back when cassette recorders first emerged. Many other forms of audio or moving-image media required a lot of capital infrastructure: If you wanted to broadcast a TV show, you needed a studio and broadcasting equipment; the same goes for a radio show or film, or producing and distributing an album. And your audience needed an entirely different set of technologies (televisions, radios, projectors, record players) to receive your messages.

From The Empowering Style of Cassette Tapes
Referenced 2023-11-02T08:01:46-0400

The thing that struck me on reading this was the idea that symmetric technology democratizes speech. The web is based on asymmetric technology: client-server. In theory everyone can have a server, but they don't for a lot of reasons including cost, difficulty, and friction. Consequently, the web is dominated by a few large players who act as intervening administrative authorities. They decide what happens online and who can participate. The web is not one-to-one and it is decidedly not permissionless.

In contrast, the DIDComm protocol is symmetric and so it fosters one-to-one interactions that provide meaningful, life-like online relationships. DIDComm supports autonomic identity systems that provide a foundation for one-to-one, permissionless interactions. Like the cassette tape, DIDComm is a democratizing technology.

Photo Credit: Mix Tape from Andreanna Moya Photography (CC BY-NC-ND 2.0 DEED)

Thanks for reading Phil Windley's Technometria! Subscribe for free to receive new posts and support my work.

Wednesday, 01. November 2023

Mike Jones: self-issued

Hybrid Public Key Encryption (HPKE) for JOSE

The new “Use of Hybrid Public-Key Encryption (HPKE) with Javascript Object Signing and Encryption (JOSE)” specification has been published. Its abstract is: This specification defines Hybrid public-key encryption (HPKE) for use with Javascript Object Signing and Encryption (JOSE). HPKE offers a variant of public-key encryption of arbitrary-sized plaintexts for a recipient public key. HPKE works […]

The new “Use of Hybrid Public-Key Encryption (HPKE) with Javascript Object Signing and Encryption (JOSE)” specification has been published. Its abstract is:

This specification defines Hybrid public-key encryption (HPKE) for use with Javascript Object Signing and Encryption (JOSE). HPKE offers a variant of public-key encryption of arbitrary-sized plaintexts for a recipient public key.

HPKE works for any combination of an asymmetric key encapsulation mechanism (KEM), key derivation function (KDF), and authenticated encryption with additional data (AEAD) function. Authentication for HPKE in JOSE is provided by JOSE-native security mechanisms or by one of the authenticated variants of HPKE.

This document defines the use of the HPKE with JOSE.

Hybrid Public Key Encryption (HPKE) is defined by RFC 9180. There’s a whole new generation of specifications using it for encryption. The Messaging Layer Security (MLS) Protocol [RFC 9420] uses it. TLS Encrypted Client Hello uses it. Use of Hybrid Public-Key Encryption (HPKE) with CBOR Object Signing and Encryption (COSE) brings it to COSE. And this specification brings it to JOSE.

One of our goals for the JOSE HPKE specification is to keep it closely aligned with the COSE HPKE specification. That should be facilitated by having multiple authors in common, with Hannes Tschofenig and Orie Steele being authors of both, and me being a COSE co-chair.

Aritra Banerjee will be presenting the draft to the JOSE working group at IETF 118 in Prague. I’m hoping to see many of you there!

The specification is available at:

https://www.ietf.org/archive/id/draft-rha-jose-hpke-encrypt-01.html

Tuesday, 31. October 2023

Heres Tom with the Weather

Irwin: Dabbling with ActivityPub

It has been a year since I have blogged about my IndieAuth server Irwin. Prior to that, in Minimum Viable IndieAuth Server, I explained my motivation for starting the project. In the same spirit, I would like an activitypub server as simple to understand as possible. I thought it might be interesting to add the activitypub and webfinger support to an IndieAuth server so I have created an experi

It has been a year since I blogged about my IndieAuth server Irwin. Prior to that, in Minimum Viable IndieAuth Server, I explained my motivation for starting the project. In the same spirit, I would like an ActivityPub server that is as simple to understand as possible. I thought it might be interesting to add the ActivityPub and WebFinger support to an IndieAuth server so I have created an experimental branch ap_wip. An important part of this development has been writing specs. For example, here are my specs for handling the “Move” command, an important Mastodon feature.
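For readers unfamiliar with the WebFinger piece, the discovery step is small: the server answers /.well-known/webfinger with a JRD document whose "self" link points at the ActivityPub actor. Here is a hedged Express (TypeScript) sketch of that one endpoint; it is not Irwin's implementation, and the account, domain, and actor URL are invented for illustration.

import express from "express";

const app = express();
const DOMAIN = "example.com"; // hypothetical domain for illustration

// e.g. GET /.well-known/webfinger?resource=acct:tom@example.com
app.get("/.well-known/webfinger", (req, res) => {
  const resource = String(req.query.resource ?? "");
  if (resource !== `acct:tom@${DOMAIN}`) {
    return res.status(404).end();
  }
  // JRD response: the "self" link is what Mastodon follows to find the actor document.
  res.setHeader("Content-Type", "application/jrd+json");
  res.send(
    JSON.stringify({
      subject: resource,
      links: [
        {
          rel: "self",
          type: "application/activity+json",
          href: `https://${DOMAIN}/users/tom`,
        },
      ],
    })
  );
});

app.listen(3000);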

I still have about half a dozen items to do before I consider dogfooding this branch but hopefully I can do that soon.


Werdmüller on Medium

Return To Office is all about power

Enlightened employers will work on culture instead Continue reading on Medium »



Mike Jones: self-issued

On the Closing Stretch for Errata Corrections to OpenID Connect

The initial OpenID Connect specifications became final on February 25, 2014. While the working group is rightfully proud of the quality of the work and the widespread adoption it has attained, specification writing is a human endeavor and mistakes will inevitably be made. That’s why the OpenID Foundation has a process for publishing Errata corrections […]

The initial OpenID Connect specifications became final on February 25, 2014. While the working group is rightfully proud of the quality of the work and the widespread adoption it has attained, specification writing is a human endeavor and mistakes will inevitably be made. That’s why the OpenID Foundation has a process for publishing Errata corrections to specifications.

Eight issues were identified and corrected that year, with the first set of errata corrections being published on November 8, 2014. Since that time, suggestions for improvements have continued to trickle in, but with a 9+ year trickle, a total of 95 errata issues have been filed! They range from the nearly trivial, such as an instance of http that should have been https, to the more consequential, such as language that could be interpreted in different ways.

I’m pleased to report that, with a substantial investment by the working group, I’ve managed to work through all the 87 additional errata issues filed since the first errata set and incorporate corrections for them into published specification drafts. They are currently undergoing OpenID Foundation-wide review in preparation for a vote to approve the second set of errata corrections.

As a bonus, the OpenID Foundation plans to submit the newly minted corrected drafts for publication by ISO as Publicly Available Specifications. This should foster even broader adoption of OpenID Connect by enabling deployments in some jurisdictions around the world that have legal requirements to use specifications from standards bodies recognized by international treaties, of which ISO is one. Just in time for OpenID Connect’s 10th anniversary!

Monday, 30. October 2023

Mike Jones: self-issued

OpenID Summit Tokyo 2024 and the 10th Anniversary of OpenID Connect

I’m pleased to bring your attention to the upcoming OpenID Summit Tokyo 2024, which will be held on Friday, January 19, 2024. Join us there for a stellar line-up of speakers and consequential conversations! This builds on the successes of past summits organized by the OpenID Foundation Japan. For instance, I found the OpenID Summit […]

I’m pleased to bring your attention to the upcoming OpenID Summit Tokyo 2024, which will be held on Friday, January 19, 2024. Join us there for a stellar line-up of speakers and consequential conversations!

This builds on the successes of past summits organized by the OpenID Foundation Japan. For instance, I found the OpenID Summit Tokyo 2020 and associated activities and discussions both very useful and very enjoyable.

A special feature of the 2024 summit will be celebrating the 10th anniversary of the OpenID Connect specifications, which were approved on February 25, 2014. Speakers who were there for its creation, interop testing, and early deployments will share their experiences and lessons learned, including several key participants from Japan. As I recounted at EIC 2023, building ecosystems is hard. And yet we achieved that for OpenID Connect! We are working to create new identity ecosystems as we speak. I believe that the lessons learned from OpenID Connect are very applicable today. Come join the conversation!

Finally, as a teaser, I’m also helping the OpenID Foundation to plan two additional 10th anniversary celebrations at prominent 2024 identity events – one in Europe and one in the Americas. Watch this space for further news about these as it develops!

Friday, 27. October 2023

Phil Windleys Technometria

Cloudless: Computing at the Edge

New use cases will naturally drive more computing away from centralized cloud platforms to the edge. The future is cloudless.

Doc Searls sent me a link to this piece from Chris Anderson on cloudless computing. Like the term zero data that I wrote about a few weeks ago, cloudless computing is a great name that captures an idea that is profound.

Cloudless computing uses cryptographic identifiers, verifiable data, and location-independent compute1 to move apps to the data wherever it lives, to perform whatever computation needs to be done, at the edge. The genius of the name cloudless computing is that it gets us out of the trenches of dapps, web3, blockchain, and other specific implementations and speaks to an idea or concept. The abstractions can make it difficult to get a firm hold on the ideas, but it's important to get past the how so we can speak to the what and why.

You may rightly be skeptical that any of this can happen. Why will companies move from the proven cloud model to something else? In this talk, Peter Levine talks specifically to that question.

One of the core arguments for why more and more computing will move to the edge is the sheer size of modern computing problems. Consider one example: Tesla Full Self Driving (FSD). I happen to be a Tesla owner and I bought FSD. At first it was just because I am very curious about it and couldn't stand to not have first-hand experience with it. But now, I like it so much I use it all the time and can't imagine driving without an AI assist. But that's beside the point. To understand why that drives computing to the edge, consider that the round trip time to get an answer from the cloud is just too great. The car needs to make decisions onboard for this to work. Essentially, to put this in the cloudless perspective, the computation has to move to where the data from the sensors is. You move the compute to the data, not the other way around.2

And that's just one example. Levine makes the point, as I and others have done, that the Internet of Things leads to trillions of nodes on the Internet. This is a difference in scale that has real impact on how we architect computer systems. While today's CompuServe of Things still relies largely on the cloud and centralized servers, that model can't last in a true Internet of Things.

The future world will be more decentralized than the current one. Not because of some grand ideal (although those certainly exist) but simply because the problems will force it to happen. We're using computers in more dynamic environments than the more static ones (like web applications) of the past. The data is too large to move and the required latency too low. Cloudless computing is the future.

Notes

Anderson calls this deterministic computer. He uses that name to describe computation that is consistent and predictable regardless of how the application gets to the data, but I'm not sure that's the core idea. Location independence feels better to me.

An interesting point is that training the AI that drives the car is still done in the cloud somewhere. But once the model is built, it operates close to the data. I think this will be true for a lot of AI models.

Photo Credit: Cloudless Sunset from Dorothy Finley (CC BY 2.0 DEED - cropped)

Thanks for reading Phil Windley's Technometria! Subscribe for free to receive new posts and support my work.

Wednesday, 25. October 2023

Mike Jones: self-issued

BLS Key Representations for JOSE and COSE updated for IETF 118

Tobias Looker and I have published an updated Barreto-Lynn-Scott Elliptic Curve Key Representations for JOSE and COSE specification in preparation for IETF 118 in Prague. This is one of a suite of IETF and IRTF specifications, including BLS Signatures and JSON Web Proofs, that are coming together to enable standards for the use of JSON-based and CBOR-based […]

Tobias Looker and I have published an updated Barreto-Lynn-Scott Elliptic Curve Key Representations for JOSE and COSE specification in preparation for IETF 118 in Prague. This is one of a suite of IETF and IRTF specifications, including BLS Signatures and JSON Web Proofs, that are coming together to enable standards for the use of JSON-based and CBOR-based tokens utilizing zero-knowledge proofs.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-cose-bls-key-representations-03.html

CBOR Web Token (CWT) Claims in COSE Headers Draft Addressing IETF Last Call Comments

Tobias Looker and I have published an updated CBOR Web Token (CWT) Claims in COSE Headers specification that addresses the IETF Last Call (WGLC) comments received. Changes made were: Added Privacy Consideration about unencrypted claims in header parameters. Added Security Consideration about detached content. Added Security Consideration about claims that are present both in the […]

Tobias Looker and I have published an updated CBOR Web Token (CWT) Claims in COSE Headers specification that addresses the IETF Last Call (WGLC) comments received. Changes made were:

Added Privacy Consideration about unencrypted claims in header parameters.
Added Security Consideration about detached content.
Added Security Consideration about claims that are present both in the payload and the header of a CWT.
Changed requested IANA COSE Header Parameter assignment number from 13 to 15 due to subsequent assignments of 13 and 14.
Acknowledged last call reviewers.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-cose-cwt-claims-in-headers-07.html

The specification is scheduled for the IESG telechat on November 30, 2023.

Tuesday, 24. October 2023

MyDigitalFootprint

What has the executive team already forgotten about hard-fought lessons about leadership learned in COVID times?

Crisis situations are characterised by being urgent, complicated, nuanced, ambiguous and messy. The easy part is acknowledging that crisis presents exceptional and unprecedented challenges for organisations and leadership teams. In such periods, the stakes appear higher, and the decisions made can have far-reaching consequences.   The question of whether a leadership team should think

Crisis situations are characterised by being urgent, complicated, nuanced, ambiguous and messy. The easy part is acknowledging that crisis presents exceptional and unprecedented challenges for organisations and leadership teams. In such periods, the stakes appear higher, and the decisions made can have far-reaching consequences.

The question of whether a leadership team should think, act and behave differently during times of war, conflict, and crisis is undoubtedly open for debate. But what did the last global pandemic (crisis) teach us, and what lessons learned have we forgotten in the light of new wars? 


Pre-pandemic leadership framing about how to deal with a crisis.

The vast majority of how to deal with a crisis pre-pandemic was based on coaching, training and mentoring but lacked real experience of the realities because global crises do not happen at scale very often. Whilst essential to prepare, thankfully most directors never get to work in a crisis and learn. Pre-COVID, the structured and passed down wisdom focussed on developing the following skills.   

Be Adaptable: In times of crisis, including war and conflict, the operating environment becomes highly volatile and uncertain. A leadership team must become more adaptable and flexible to respond to rapidly changing circumstances. Directors are trained to be more willing to recognise and pivot their strategies and make quick decisions, unlike stable times, where longer-term planning is often feasible.

Unity and Cohesion: A leadership team should act cohesively during a crisis and drop the niggles and power plays. Clear communication and collaboration among executives and directors are essential to ensure everyone is aligned and working towards a common goal. Unity is critical in times of uncertainty to maintain the organisation's stability and morale.

Decisiveness: Crisis demands decisiveness, with fewer facts and more noise, from its leaders. In the face of adversity, a leadership team should be ready to make tough choices promptly which will be very different to normal day-to-day thinking. Hesitation can be costly, and the consequences of indecision are amplified and become more severe during a crisis. 

Resource Allocation: A crisis will strain resources, making efficient and effective allocation idealistic and not practical. A leadership team should reevaluate its resource allocation, prioritising the needs based on the best opinion today, which will mean compromise and sacrifice.  It is about doing your best, as it will never be the most efficient, effective or right. 

Risk Management: In times of crisis, certain risks are heightened. A leadership team must adjust its risk management strategy, potentially being more conservative and prudent to safeguard the people and the organisation's long-term viability.

This is a lovely, twee list; the items are obvious, highly relevant and important, but the reality is totally different. Leaders and directors quickly move past these ideals to the reality of crisis management.  The day-to-day stress and grind of crisis surfaces the unsaid, the hostile and the uncomfortable - all aspects we learned about in COVID, and they include:

Consistency: Maintaining a level of consistency across the leadership’s behaviour, regardless of the personal view and external circumstances. Drastic changes in leadership style create additional confusion and anxiety, creating an additional dimension to the existing crisis.

Ethical Compass: The moral and ethical compass of a leadership team should not waver in times of crisis. Principles such as honesty, integrity, acceptance, and respect (for all views and opinions that are legal) should be upheld, as compromising on ethics can lead to long-term damage to the individual and organisation's reputation.  Different opinions matter, as does the importance of ensuring they are aired and discussed openly, however hard and uncomfortable.  We might not agree because of our own framing, but that does not mean we actually know what is true or false. 

Strategic Focus: While adaptability is important, a leadership team should not lose sight of its agreed long-term strategic vision. Abrupt changes can disrupt the organisation's core mission and values. Strategies may need to be tweaked, but the overarching values and vision should remain consistent, even in the face of uncertainty.  If it does not - then there is a massively different issue you are facing.

Transparency: Honesty and transparency are essential, particularly during times of crisis. A leadership team should communicate openly with themselves, employees and stakeholders, providing them with a clear understanding of the challenges and the strategies being employed to overcome them.  Those prioritising themselves over the cause and survival need to be cut free. 

Legal and Regulatory Compliance: A leadership team should not compromise on legal and regulatory compliance, however much there is a push to the boundaries. Violating laws or regulations can lead to severe consequences that may outweigh any short-term benefits. Many will not like operating in grey areas, which might mean releasing them from the leadership team.

Crisis on Crisis: because we don't know what is going on in someone else's head, heart or home, individuals can quickly run into burnout.  We don’t know who has a sick child, a family member has cancer, lost a loved one or is just in a moment of doubt.  Each leadership team should assume that everyone in their team needs help and support constantly.  


What have we already forgotten?

Post-pandemic leadership quickly forgot about burnout, ethics, transparency and single-mindedness to revert to power plays, incentives and individualism.  It was easy to return to where we are most comfortable and where most experience exists - stability and no global crisis.  Interest rates and debt access are hard but are not a crisis unless your model is shot.   The congratulatory thinking focussed on the idea that we survived the global crisis and that it was a blip unlikely to be repeated.

The unique challenges and pressures of war demand adaptability, unity, decisiveness, and resource allocation adjustments - essential skills.  However, we have learned that this focus should not come at the expense of consistency, ethical integrity, strategic focus, transparency, and legal compliance. A leadership team's ability to strike this balance can determine the organisation's survival and success during the most trying times. Ultimately, leadership must adapt while maintaining its core values and principles to navigate the turbulent waters of wartime effectively.

Whether a leadership team should act differently in times of war is a matter of balance, but the lessons and skills we have need to be front and centre.  Today, focus on the team and spend more time than ever checking in on your team, staff, suppliers and those in the wider ecosystem.  Crisis and conflict destroy life and lives at many levels.


Monday, 23. October 2023

Aaron Parecki

OAuth for Browser-Based Apps Draft 15

After a lot of discussion on the mailing list over the last few months, and after some excellent discussions at the OAuth Security Workshop, we've been working on revising the draft to provide clearer guidance and clearer discussion of the threats and consequences of the various architectural patterns in the draft.


I would like to give a huge thanks to Philippe De Ryck for stepping up to work on this draft as a co-author!

This version is a huge restructuring of the draft and now starts with a concrete description of possible threats of malicious JavaScript as well as the consequences of each. The architectural patterns have been updated to reference which of the threats are mitigated by each pattern. This restructuring should help readers make a better informed decision by being able to evaluate the risks and benefits of each solution.

https://datatracker.ietf.org/doc/html/draft-ietf-oauth-browser-based-apps

https://www.ietf.org/archive/id/draft-ietf-oauth-browser-based-apps-15.html

Please give this a read, I am confident that this is a major improvement to the draft!


Werdmüller on Medium

The map-reduce is not the territory

AI has the potential to run our lives. We shouldn’t let it. Continue reading on Medium »



Phil Windleys Technometria

Internet Identity Workshop 37 Report

Last week's IIW was great with many high intensity discussions of identity by people from across the globe. We recently completed the 37th Internet Identity Workshop. We had 315 people from around the world who called 163 sessions. The energy was high and I enjoyed seeing so many people who are working on identity talking with each other and sharing their ideas. The topics were diverse. Verifiable

Last week's IIW was great with many high intensity discussions of identity by people from across the globe.

We recently completed the 37th Internet Identity Workshop. We had 315 people from around the world who called 163 sessions. The energy was high and I enjoyed seeing so many people who are working on identity talking with each other and sharing their ideas. The topics were diverse. Verifiable credentials continue to be a hot topic, but authorization is coming on strong. In closing circle someone said (paraphrasing) that authentication is solved and the next frontier is authorization. I tend to agree. We should have the book of proceedings completed in about a month and you'll be able to get the details of sessions there. You can view past Books of Proceedings here.

As I said, there were attendees from all over the world as you can see by the pins in the map at the top of this post. Not surprisingly, most of the attendees were from the US (212), followed by Canada (29). Japan, the UK, and Germany rounded out the top five with 9, 8, and 8 attendees respectively. Attendees from India (5), Thailand (3), and Korea (3) showed IIW’s diversity with attendees from APAC. And there were 4 attendees from South America this time. Sadly, there were no attendees from Africa again. Please remember we offer scholarships for people from underrepresented areas, so if you’d like to come to IIW38, please let us know. If you’re working on identity, we want you there.

In terms of states and provinces, California was, unsurprisingly, first with 81. Washington (32), British Columbia (14), Utah (11), Ontario (11) and New York (10) rounded out the top five. Seattle (22), San Jose (15), Victoria (8), New York (8), and Mountain View (6) were the top cities.

As always the week was great. I had a dozen important, interesting, and timely conversations. If Closing Circle and Open Gifting are any measure, I was not alone. IIW is where you will meet people who will help you solve problems and move your ideas forward. Please come! IIW 38 will be held April 16-18, 2024 at the Computer History Museum. We'll have tickets available soon.

Thanks for reading Phil Windley's Technometria! Subscribe for free to receive new posts and support my work.


@_Nat Zone

[October 27] Online talk event "Goodbye, Meaningless Encrypted ZIP Email" (commemorating the 37th Telecom Interdisciplinary Research Award)

The July 2020 issue of the IPSJ magazine Joho Shori, featuring the mini-special "Goodbye, Meaningless Encrypted ZIP Email Attachments", received a special commendation at the 37th (2021) Telecom Interdisciplinary Research Award (https://www.taf.or.jp/award/). Belatedly, we are holding an online talk event to mark the occasion.

Prof. Eto, Mr. Kusunoki, Prof. Uehara, Mr. Ohtaishi, and Sakimura will talk about how the mini-special came to be, behind-the-scenes stories from the editing process, and the state of PPAP since then.

We would also love to talk with everyone who joins. Given the time slot, please feel free to prepare a drink and join casually.

To help us get a rough headcount, if you would like to attend, please mark yourself as "Going" on the event page.

Date and time: Friday, October 27, 19:00-21:00
Place: Zoom, at the link below (up to 100 participants)

https://us02web.zoom.us/j/84056190037?pwd=WGVzbkhaS0NsTmx0dzNhR3l0N2lRdz09

How to participate: mark yourself as "Going" on the Facebook event below. There is no participation fee.

https://www.facebook.com/events/297711129877286/

For reference:
IPSJ magazine mini-special "Goodbye, Meaningless Encrypted ZIP Email Attachments"

That is all.


Jon Udell

The WordPress plugin for ActivityPub

I turned on the ActivityPub plugin for WordPress. On the left: my current Mastodon account at social.coop. On the right: my newly-AP-augmented WordPress account. While making the first AP-aware blog post I thought I’d preserve the moment.


Sunday, 22. October 2023

Mike Jones: self-issued

JSON Web Proofs specifications updated in preparation for IETF 118

David Waite and I have updated the “JSON Web Proof”, “JSON Proof Algorithms”, and “JSON Proof Token” specifications in preparation for presentation and discussions in the JOSE working group at IETF 118 in Prague. The primary updates were to align the BBS algorithm text and examples with the current CFRG BBS Signature Scheme draft. We […]

David Waite and I have updated the “JSON Web Proof”, “JSON Proof Algorithms”, and “JSON Proof Token” specifications in preparation for presentation and discussions in the JOSE working group at IETF 118 in Prague. The primary updates were to align the BBS algorithm text and examples with the current CFRG BBS Signature Scheme draft. We also applied improvements suggested by Brent Zundel and Alberto Solavagione.

The specifications are available at:

https://www.ietf.org/archive/id/draft-ietf-jose-json-web-proof-02.html
https://www.ietf.org/archive/id/draft-ietf-jose-json-proof-algorithms-02.html
https://www.ietf.org/archive/id/draft-ietf-jose-json-proof-token-02.html

Thanks to David Waite for doing the heavy lifting to update the BBS content. Thanks to MATTR for publishing their Pairing Cryptography software, which was used to generate the examples. And thanks to Alberto Solavagione for validating the specifications with his implementation.

Saturday, 21. October 2023

Mike Jones: self-issued

OAuth 2.0 Protected Resource Metadata updated in preparation for IETF 118

Aaron Parecki and I have updated the “OAuth 2.0 Protected Resource Metadata” specification in preparation for presentation and discussions at IETF 118 in Prague. The updates address comments received during the discussions at IETF 117 and afterwards. As described in the History entry, the changes were: Renamed scopes_provided to scopes_supported Added security consideration for scopes_supported […]

Aaron Parecki and I have updated the “OAuth 2.0 Protected Resource Metadata” specification in preparation for presentation and discussions at IETF 118 in Prague. The updates address comments received during the discussions at IETF 117 and afterwards. As described in the History entry, the changes were:

Renamed scopes_provided to scopes_supported (see the illustration below)
Added security consideration for scopes_supported
Use BCP 195 for TLS recommendations
Clarified that resource metadata can be used by clients and authorization servers
Added security consideration recommending audience-restricted access tokens
Mention FAPI Message Signing as a use case for publishing signing keys
Updated references
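For orientation, the protected resource metadata itself is a small JSON document about the resource server. The following TypeScript constant is a hedged illustration only; the field names are listed from memory of the draft and the values are invented, so consult the linked -01 text for the authoritative definitions.

// Illustrative only; e.g. served from a well-known URL under the resource's origin.
const resourceMetadata = {
  resource: "https://api.example.com",
  authorization_servers: ["https://as.example.com"],
  scopes_supported: ["read", "write"], // renamed from scopes_provided in this revision
  bearer_methods_supported: ["header"],
  jwks_uri: "https://api.example.com/jwks", // e.g. keys published for FAPI Message Signing
};

console.log(JSON.stringify(resourceMetadata, null, 2));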

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-oauth-resource-metadata-01.html

Fully-Specified Algorithms updated in preparation for IETF 118

Orie Steele and I have updated the “Fully-Specified Algorithms for JOSE and COSE” specification in preparation for presentation and discussions at IETF 118 in Prague. The updates address comments received during the discussions at IETF 117 and afterwards. Specifically, this draft adds descriptions of key representations and of algorithms not updated by the specification. See […]

Orie Steele and I have updated the “Fully-Specified Algorithms for JOSE and COSE” specification in preparation for presentation and discussions at IETF 118 in Prague. The updates address comments received during the discussions at IETF 117 and afterwards. Specifically, this draft adds descriptions of key representations and of algorithms not updated by the specification. See my original post about the spec for why fully-specified algorithms matter.

Hopefully working group adoption will be considered by the JOSE working group during IETF 118.

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-jose-fully-specified-algorithms-02.html

Friday, 20. October 2023

Talking Identity

How can Governments do Digital Identity Right?

I highly recommend that everyone read the “Human-Centric Digital Identity: for Government Officials” whitepaper that Elizabeth Garber and Mark Haine have written, even if you aren’t in government. Published by the OpenID Foundation and co-branded by twelve non-profit organisations, this paper offers a broad view of the global digital identity landscape, key considerati

I highly recommend that everyone read the “Human-Centric Digital Identity: for Government Officials” whitepaper that Elizabeth Garber and Mark Haine have written, even if you aren’t in government. Published by the OpenID Foundation and co-branded by twelve non-profit organisations, this paper offers a broad view of the global digital identity landscape, key considerations for government officials, and the path ahead to global interoperability. It is also grounded in the key role that digital identity plays in the wider international human rights agenda and the recent OECD Digital Identity Recommendations.

It is really difficult to write about digital identity when trying to put humanity (not individuals) AND society at the center. This paper is a very strong and considerate effort to try and tackle a broad, complicated, but important topic.

Elizabeth was kind enough to acknowledge me in the paper because of our shared viewpoint and discussions on how Value-Sensitive Design and Human-Centric Design are key cogs in building ethical and inclusive digital identity systems. But what she and Mark have tackled goes far beyond what I am able to do, and I do hope to see their work have a big impact on all the Identirati currently engaged in shaping the future of Digital Credentials and Digital Identity.

Wednesday, 18. October 2023

Moxy Tongue

Zero Party Doctrine

 Zero Party Doctrine; Missing In-Law, Socio-Economics, and Data Administration. The 'Zero Party Doctrine' (ZPD) rests on an observable truth: Reality has an order of operational authority that all Sovereign Law is derived of, by, from; People, Individuals All, self-representing their own living authority from the first moment of their birth, provided methods of Custodial care and the oppo

 Zero Party Doctrine; Missing In-Law, Socio-Economics, and Data Administration.


The 'Zero Party Doctrine' (ZPD) rests on an observable truth: Reality has an order of operational authority that all Sovereign Law is derived of, by, from; People, Individuals All, self-representing their own living authority from the first moment of their birth, provided methods of Custodial care and the opportunity for cooperation among peoples and their derived ID-entities in our world, structurally insures and ensures that this Sovereign reality is protected for all equally. 

As in math, this missing concept of 'Zero' in Law, Socio-Economics, and Data Administration is not merely an incremental change to our understanding or the mechanics of practice, it fundamentally alters how the Law, Socio-Economic, and Data Administration practices function. 

All Sovereign Law is derived of, by, for the Sovereign authority of Individual people, the sources of observable truth in every possible transaction that Humanity has ever recorded, and ever will. 

All Sovereign Laws, Governments, Markets, and their corporate tools of expression, in their final accounting, must give proper authority to the root administrator of permissioned transactions in any system derived by Law, Government, Process, or Market. 

All data, all accountability under the Law, shall afford party zero, the pre-customer, pre-citizen, pre-client, pre-accountable party to possess all appropriate, consequential, or derived data from their mere participation in the reality under which the Law, Government, Market is produced of, by for their civil, human benefit. No other such condition shall be legal or allowable. The Zero Party Doctrine fundamentally rewrites history, providing it a new origin story, one with zero included. 01010000 01100101 01101111 01110000 01101100 01100101 00100000 01001111 01110111 01101110 00100000 01010010 01101111 01101111 01110100 00100000 01000001 01110101 01110100 01101000 01101111 01110010 01101001 01110100 01111001 


Related: What is "Sovereign source authority"?

Saturday, 14. October 2023

Mike Jones: self-issued

What does Presentation Exchange do and what parts of it do we actually need? (redux)

I convened the session “What does Presentation Exchange do and what parts of it do we actually need?” this week at the Internet Identity Workshop (IIW) to continue the discussion started during two unconference sessions at the 2023 OAuth Security Workshop. I briefly summarized the discussions that occurred at OSW, then we had a vigorous […]

I convened the session “What does Presentation Exchange do and what parts of it do we actually need?” this week at the Internet Identity Workshop (IIW) to continue the discussion started during two unconference sessions at the 2023 OAuth Security Workshop. I briefly summarized the discussions that occurred at OSW, then we had a vigorous discussion of our own.

Key points made were:

There appeared to be rough consensus in the room that Presentation Exchange (PE) is pretty complicated. People had differing opinions on whether the complexity is worth it.
A lot of the complexity of PE comes from being able to request multiple credentials at once and to express alternatives (see the sketch after this list).
Ultimately, the verifier knows what kinds of credentials it needs and the relationships between them. PE tries to let the verifier express some of that to the wallet.
Code running in the verifier making choices about the credentials it needs will always be more powerful than PE, because it has the full decision-making facilities of programming languages – including loops, conditionals, etc.
Making a composite request for multiple credentials can have a better UX than a sequence of requests. In some situations, the sequence could result in the person having to scan multiple QR codes. There may be ways to avoid that, while still having a sequence of requests.
Some said that they need the ability to request multiple credentials at once.
Brent Zundel (a PE author) suggested that while wallets could implement all of PE, verifiers could implement only the parts they need.
Not many parties had implemented all of PE.
Torsten Lodderstedt suggested that we need feedback from developers.
We could create a profile of PE, reducing what implementers have to build and correspondingly reducing its expressive power.
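To illustrate the "multiple credentials plus alternatives" point, here is a hedged sketch of a Presentation Exchange presentation_definition, written as a TypeScript constant. The overall shape (input_descriptors, group, submission_requirements) follows my reading of the DIF PE specification; the identifiers and credential type names are invented for illustration.

// Hypothetical request: an employment credential, plus any one of two accepted identity documents.
const presentationDefinition = {
  id: "employment-and-id-check",
  submission_requirements: [
    { name: "Proof of employment", rule: "all", from: "A" },
    { name: "One accepted identity document", rule: "pick", count: 1, from: "B" },
  ],
  input_descriptors: [
    {
      id: "employment_credential",
      group: ["A"],
      constraints: {
        fields: [
          {
            path: ["$.type", "$.vc.type"],
            filter: { type: "array", contains: { const: "EmploymentCredential" } },
          },
        ],
      },
    },
    {
      id: "drivers_license",
      group: ["B"],
      constraints: {
        fields: [
          {
            path: ["$.type", "$.vc.type"],
            filter: { type: "array", contains: { const: "DriverLicenseCredential" } },
          },
        ],
      },
    },
    {
      id: "passport",
      group: ["B"],
      constraints: {
        fields: [
          {
            path: ["$.type", "$.vc.type"],
            filter: { type: "array", contains: { const: "PassportCredential" } },
          },
        ],
      },
    },
  ],
};

console.log(JSON.stringify(presentationDefinition, null, 2));

Even this small request encodes grouping, cardinality, and JSON-path matching rules for the wallet to evaluate, which is exactly where the complexity discussed above comes from.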

The slides used to summarize the preceding discussions are available as PowerPoint and PDF. There are detailed notes capturing some of the back-and-forth at IIW with attribution.

Thanks to everyone who participated for an informative and useful discussion. My goal was to help inform the profiling and deployment choices ahead of us.

P.S. Since Thursday’s discussion, it occurred to me that a question I wish I’d asked is:

When a verifier needs multiple credentials, they may be in different wallets. If the verifier tries to make a PE request for multiple credentials that are spread between wallets, will it always fail because no single wallet can satisfy it?

Fodder for the next discussion…


Just a Theory

JSON Path Operator Confusion

The relationship between the Postgres SQL/JSON Path operators @@ and @? confused me. Here’s how I figured out the difference.

The CipherDoc service offers a robust secondary key lookup API and search interface powered by JSON/SQL Path queries run against a GIN-indexed JSONB column. SQL/JSON Path, introduced in SQL:2016 and added to Postgres in version 12 in 2019, nicely enables an end-to-end JSON workflow and entity lifecycle. It’s a powerful enabler and fundamental technology underpinning CipherDoc. I’m so happy to have found it.

Confusion

However, the distinction between the SQL/JSON Path operators @@ and @? confused me. Even as I found that the @? operator worked for my needs and @@ did not, I tucked the problem into my mental backlog for later study.

The question arose again on a recent work project, and I can take a hint. It’s time to figure this thing out. Let’s see where it goes.

The docs say:

jsonb @? jsonpath → boolean
Does JSON path return any item for the specified JSON value?

'{"a":[1,2,3,4,5]}'::jsonb @? '$.a[*] ? (@ > 2)' → t

jsonb @@ jsonpath → boolean
Returns the result of a JSON path predicate check for the specified JSON value. Only the first item of the result is taken into account. If the result is not Boolean, then NULL is returned.

'{"a":[1,2,3,4,5]}'::jsonb @@ '$.a[*] > 2' → t

These read quite similarly to me: Both return true if the path query returns an item. So what’s the difference? When should I use @@ and when @?? I went so far as to ask Stack Overflow about it. The one answer directed my attention back to the jsonb_path_query() function, which returns the results from a path query.

So let’s explore how various SQL/JSON Path queries work and what values various expressions return.

Queries

The docs for jsonb_path_query say:1

jsonb_path_query ( target jsonb, path jsonpath [, vars jsonb [, silent boolean ]] ) → setof jsonb

Returns all JSON items returned by the JSON path for the specified JSON value. If the vars argument is specified, it must be a JSON object, and its fields provide named values to be substituted into the jsonpath expression. If the silent argument is specified and is true, the function suppresses the same errors as the @? and @@ operators do.

select * from jsonb_path_query(
    '{"a":[1,2,3,4,5]}',
    '$.a[*] ? (@ >= $min && @ <= $max)',
    '{"min":2, "max":4}'
) →

 jsonb_path_query
------------------
 2
 3
 4

The first thing to note is that a SQL/JSON Path query may return more than one value. This feature matters for the @@ and @? operators, which return a single boolean value based on the values returned by a path query. And path queries can return a huge variety of values. Let’s explore some examples, derived from the sample JSON value and path query from the docs.2

select jsonb_path_query('{"a":[1,2,3,4,5]}', '$ ?(@.a[*] > 2)');

    jsonb_path_query
------------------------
 {"a": [1, 2, 3, 4, 5]}
(1 row)

This query returns the entire JSON value, because that’s what $ selects at the start of the path expression. The ?() filter returns true because its predicate expression finds at least one value in the $.a array greater than 2. Here’s what happens when the filter returns false:

select jsonb_path_query('{"a":[1,2,3,4,5]}', '$ ?(@.a[*] > 5)');

 jsonb_path_query
------------------
(0 rows)

None of the values in the $.a array are greater than five, so the query returns no value.

To select just the array, append it to the path expression after the ?() filter:

select jsonb_path_query('{"a":[1,2,3,4,5]}', '$ ?(@.a[*] > 2).a');

 jsonb_path_query
------------------
 [1, 2, 3, 4, 5]
(1 row)

Path Modes

One might think you could select $.a at the start of the path query to get the full array if the filter returns true, but look what happens:

select jsonb_path_query('{"a":[1,2,3,4,5]}', '$.a ?(@[*] > 2)');

 jsonb_path_query
------------------
 3
 4
 5
(3 rows)

That’s not the array, but the individual array values that each match the predicate. Turns out this is a quirk of the Postgres implementation of path modes. From what I can glean, the SQL:2016 standard dictates something like these SQL Server descriptions:

In lax mode, the function returns empty values if the path expression contains an error. For example, if you request the value $.name, and the JSON text doesn’t contain a name key, the function returns null, but does not raise an error. In strict mode, the function raises an error if the path expression contains an error.

But the Postgres lax mode does more than suppress errors. From the docs (emphasis added):

The lax mode facilitates matching of a JSON document structure and path expression if the JSON data does not conform to the expected schema. If an operand does not match the requirements of a particular operation, it can be automatically wrapped as an SQL/JSON array or unwrapped by converting its elements into an SQL/JSON sequence before performing this operation. Besides, comparison operators automatically unwrap their operands in the lax mode, so you can compare SQL/JSON arrays out-of-the-box.

There are a few more details, but this is the crux of it: In lax mode, which is the default, Postgres always unwraps an array. Hence the unexpected list of results.3 This could be particularly confusing when querying multiple rows:

select jsonb_path_query(v, '$.a ?(@[*] > 2)')
from (values ('{"a":[1,2,3,4,5]}'::jsonb), ('{"a":[3,5,8]}')) x(v);

 jsonb_path_query
------------------
 3
 4
 5
 3
 5
 8
(6 rows)

Switching to strict mode by prepending strict to the JSON Path query restores the expected behavior:

select jsonb_path_query(v, 'strict $.a ?(@[*] > 2)')
from (values ('{"a":[1,2,3,4,5]}'::jsonb), ('{"a":[3,5,8]}')) x(v);

 jsonb_path_query
------------------
 [1, 2, 3, 4, 5]
 [3, 5, 8]
(2 rows)

Important gotcha to watch for, and a good reason to test path queries thoroughly to ensure you get the results you expect. Lax mode nicely prevents errors when a query references a path that doesn’t exist, as this simple example demonstrates:

select jsonb_path_query('{"a":[1,2,3,4,5]}', 'strict $.b');
ERROR: JSON object does not contain key "b"

select jsonb_path_query('{"a":[1,2,3,4,5]}', 'lax $.b');

 jsonb_path_query
------------------
(0 rows)

In general, I suggest always using strict mode when executing queries. Better still, perhaps always prefer strict mode with our friends the @@ and @? operators, which suppress some errors even in strict mode:

The jsonpath operators @? and @@ suppress the following errors: missing object field or array element, unexpected JSON item type, datetime and numeric errors. The jsonpath-related functions described below can also be told to suppress these types of errors. This behavior might be helpful when searching JSON document collections of varying structure.

Have a look:

select '{"a":[1,2,3,4,5]}' @? 'strict $.a';

 ?column?
----------
 t
(1 row)

select '{"a":[1,2,3,4,5]}' @? 'strict $.b';

 ?column?
----------
 <null>
(1 row)

No error for the unknown JSON key b in that second query! As for the error suppression in the jsonpath-related functions, that’s what the silent argument does. Compare:

select jsonb_path_query('{"a":[1,2,3,4,5]}', 'strict $.b');
ERROR: JSON object does not contain key "b"

select jsonb_path_query('{"a":[1,2,3,4,5]}', 'strict $.b', '{}', true);

 jsonb_path_query
------------------
(0 rows)

Boolean Predicates

The Postgres SQL/JSON Path Language docs briefly mention a pretty significant deviation from the SQL standard:

A path expression can be a Boolean predicate, although the SQL/JSON standard allows predicates only in filters. This is necessary for implementation of the @@ operator. For example, the following jsonpath expression is valid in PostgreSQL:

$.track.segments[*].HR < 70

This pithy statement has pretty significant implications for the return value of a path query. The SQL standard allows predicate expressions, which are akin to an SQL WHERE expression, only in ?() filters, as seen previously:

select jsonb_path_query('{"a":[1,2,3,4,5]}', '$ ?(@.a[*] > 2)');

    jsonb_path_query
------------------------
 {"a": [1, 2, 3, 4, 5]}
(1 row)

This can be read as “return the path $ if @.a[*] > 2 is true”. But have a look at a predicate-only path query:

select jsonb_path_query('{"a":[1,2,3,4,5]}', '$.a[*] > 2');

 jsonb_path_query
------------------
 true
(1 row)

This path query can be read as “Return the result of the predicate $.a[*] > 2”, which in this case is true. This is quite the divergence from the standard, which returns contents from the JSON queried, while a predicate query returns the result of the predicate expression itself. It’s almost like they’re two different things!

Don’t confuse the predicate path query return value with selecting a boolean value from the JSON. Consider this example:

select jsonb_path_query('{"a":[true,false]}', '$.a ?(@[*] == true)');

 jsonb_path_query
------------------
 true
(1 row)

Looks the same as the predicate-only query, right? But it’s not, as shown by adding another true value to the $.a array:

select jsonb_path_query('{"a":[true,false,true]}', '$.a ?(@[*] == true)');

 jsonb_path_query
------------------
 true
 true
(2 rows)

This path query returns the trues it finds in the $.a array. The fact that it returns values from the JSON rather than the filter predicate becomes more apparent in strict mode, which returns all of $.a if one or more elements of the array has the value true:

select jsonb_path_query('{"a":[true,false,true]}', 'strict $.a ?(@[*] == true)');

  jsonb_path_query
---------------------
 [true, false, true]
(1 row)

This brief aside, and its mention of the @@ operator, turns out to be key to understanding the difference between @? and @@. Because it’s not just that this feature is “necessary for implementation of the @@ operator”. No, I would argue that it’s the only kind of expression usable with the @@ operator.

Match vs. Exists

Let’s get back to the @@ operator. We can use a boolean predicate JSON Path like so:

select '{"a":[1,2,3,4,5]}'::jsonb @@ '$.a[*] > 2';

 ?column?
----------
 t
(1 row)

It returns true because the predicate JSON path query $.a[*] > 2 returns true. And when it returns false?

select '{"a":[1,2,3,4,5]}'::jsonb @@ '$.a[*] > 6';

 ?column?
----------
 f
(1 row)

So far so good. What happens when we try to use a filter expression that returns a true value selected from the JSONB?

select '{"a":[true,false]}'::jsonb @@ '$.a ?(@[*] == true)';

 ?column?
----------
 t
(1 row)

Looks right, doesn’t it? But recall that this query returns all of the true values from $.a, but @@ wants only a single boolean. What happens when we add another?

select '{"a":[true,false,true]}'::jsonb @@ 'strict $.a ?(@[*] == true)';

 ?column?
----------
 <null>
(1 row)

Now it returns NULL, even though it’s clearly true that @[*] == true matches. This is because it returns all of the values it matches, as jsonb_path_query() demonstrates:

select jsonb_path_query('{"a":[true,false,true]}'::jsonb, '$.a ?(@[*] == true)');

 jsonb_path_query
------------------
 true
 true
(2 rows)

This clearly violates the @@ documentation claim that “Only the first item of the result is taken into account”. If that were true, it would see the first value is true and return true. But it doesn’t. Turns out, the corresponding jsonb_path_match() function shows why:

select jsonb_path_match('{"a":[true,false,true]}'::jsonb, '$.a ?(@[*] == true)');
ERROR: single boolean result is expected

Conclusion: The documentation is inaccurate. Only a single boolean is expected by @@. Anything else is an error.

Furthermore, it’s dangerous, at best, to use an SQL standard JSON Path expression with @@. If you need to use it with a filter expression, you can turn it into a boolean predicate by wrapping it in exists():

select jsonb_path_match('{"a":[true,false,true]}'::jsonb, 'exists($.a ?(@[*] == true))');

 jsonb_path_match
------------------
 t
(1 row)

But there’s no reason to do so, because that’s effectively what the @? operator (and the corresponding, cleverly-named jsonb_path_exists() function) does: it returns true if the SQL standard JSON Path expression returns any results:

select '{"a":[true,false,true]}'::jsonb @? '$.a ?(@[*] == true)';

 ?column?
----------
 t
(1 row)

Here’s the key thing about @?: you don’t want to use a boolean predicate path query with it, either. Consider this predicate-only query:

select jsonb_path_query('{"a":[1,2,3,4,5]}'::jsonb, '$.a[*] > 6');

 jsonb_path_query
------------------
 false
(1 row)

But see what happens when we use it with @?:

select '{"a":[1,2,3,4,5]}'::jsonb @? '$.a[*] > 6';

 ?column?
----------
 t
(1 row)

It returns true even though the query itself returns false! Why? Because false is a value that exists and is returned by the query. Even a query that returns null is considered to exist, as it will when a strict query encounters an error:

select jsonb_path_query('{"a":[1,2,3,4,5]}'::jsonb, 'strict $[*] > 6');

 jsonb_path_query
------------------
 null
(1 row)

select '{"a":[1,2,3,4,5]}'::jsonb @? 'strict $[*] > 6';

 ?column?
----------
 t
(1 row)

The key thing to know about the @? operator is that it returns true if anything is returned by the path query, and returns false only if nothing is selected at all.

The Difference

In summary, the difference between the @? and @@ JSONB operators is this:

@? (and jsonb_path_exists()) returns true if the path query returns any values — even false or null — and false if it returns no values. This operator should be used only with SQL-standard JSON path queries that select data from the JSONB. Do not use predicate-only JSON path expressions with @?.

@@ (and jsonb_path_match()) returns true if the path query returns the single boolean value true and false otherwise. This operator should be used only with Postgres-specific boolean predicate JSON path queries that return data from the predicate expression. Do not use SQL-standard JSON path expressions with @@.

This difference of course assumes awareness of this distinction between predicate path queries and SQL standard path queries. To that end, I submitted a patch that expounds the difference between these types of JSON Path queries, and plan to submit another linking these differences in the docs for @@ and @?.

Oh, and probably another to explain the difference in return values between strict and lax queries due to array unwrapping.

Thanks

Many thanks to Erik Wienhold for patiently answering my pgsql-hackers questions and linking me to a detailed pgsql-general thread in which the oddities of @@ were previously discussed in detail.

Well almost. The docs for jsonb_path_query actually say, about the last two arguments, “The optional vars and silent arguments act the same as for jsonb_path_exists.” I replaced that sentence with the relevant sentences from the jsonb_path_exists docs, about which more later. ↩︎

Though omitting the vars argument, as variable interpolation just gets in the way of understanding basic query result behavior. ↩︎

In fairness, the Oracle docs also discuss “implicit array wrapping and unwrapping”, but I don’t have a recent Oracle server to experiment with at the moment. ↩︎


Markus Sabadello on Medium

JSON-LD VCs are NOT “just JSON”

Experiments with JSON-LD VC payloads secured by JWS vs. Data Integrity Detailed results: https://github.com/peacekeeper/json-ld-vcs-not-just-json In the world of Verifiable Credentials (VCs), it can be hard to keep track of various evolving formats and data models. A potpourri of similar-sounding terms can be found in specification documents, mailing lists and meeting notes, such as VCDM, VC-JWT,

Experiments with JSON-LD VC payloads secured by JWS vs. Data Integrity
Detailed results: https://github.com/peacekeeper/json-ld-vcs-not-just-json

In the world of Verifiable Credentials (VCs), it can be hard to keep track of various evolving formats and data models. A potpourri of similar-sounding terms can be found in specification documents, mailing lists and meeting notes, such as VCDM, VC-JWT, VC-JWS, JWT VCs, SD-JWT, SD-JWT-VC, SD JWS, VC JOSE COSE, SDVC, JsonWebSignature2020, etc.

Also, statements like the following can frequently be found in discussions:

“Can we make @context optional. It’s simpler and not always needed.”
“If you don’t want to use @context and just ignore it, you could.”
“You can secure a JSON-LD VC using JWT.”
“You can use SD-JWT for any JSON payload, including JSON-LD.”
“JSON-LD is JSON.”

One concrete question related to this is what it means if a VC using the JSON-LD-based W3C VC Data Model is secured by proof mechanisms that were designed without JSON-LD in mind.

Experimentation

To explore this, I conducted two experiments to describe what happens if you take a JSON-LD document based on the W3C VC Data Model, and you secure it once with JWS, and once with Data Integrity. The former signs the document, while the latter signs the underlying RDF graph of the document. The part where this gets interesting is the JSON-LD @context. For JWS, only the contents of the document matter. For Data Integrity, the contents of the @context matter as well. The two proof mechanisms have a rather different understanding of the “payload” that is to be secured.

Experiment #1: Data Integrity changes, JWS doesn’t

In the first experiment, we start with a JSON-LD document named example1a.input. This document references a JSON-LD @context https://example.com/context1/, and the contents of that @context are as in the file context1a.jsonld.

In a subsequent variation of this, we start with another JSON-LD document named example1b.input, which is equivalent to the above example1a.input. This document also references the same JSON-LD @context https://example.com/context1/, but now, the contents of that @context are as in the file context1b.jsonld, which is different from the file context1a.jsonld that was used above.

The result: If the contents of the JSON-LD @context change, even if the JSON-LD document stays the same, the Data Integrity signature also changes, whereas the JWS signature doesn’t change. This also means that verifying a Data Integrity signature would fail, whereas verifying a JWS signature would succeed.
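
A minimal sketch of the mechanism behind this result, assuming the jsonld.js library and Node's crypto module (neither named in the article): the document, the two context versions, and the custom document loader are simplified stand-ins for the repository's files, and SHA-256 digests stand in for what each proof mechanism would actually sign.

```typescript
// Sketch: the same JSON-LD document canonicalized against two different
// versions of the @context served at the same URL. The raw-bytes view (what a
// JWS covers) is unchanged; the canonicalized-RDF view (what Data Integrity
// covers) changes.
import * as jsonld from "jsonld";
import { createHash } from "crypto";

const doc = {
  "@context": "https://example.com/context1/",
  givenName: "Alice",
};

// Two different context documents that could be served for the same URL.
const contextV1 = { "@context": { givenName: "https://schema.org/givenName" } };
const contextV2 = { "@context": { givenName: "https://example.org/vocab#givenName" } };

// jsonld.js lets us control context resolution with a documentLoader.
const loaderFor = (ctx: object) => async (url: string) => ({
  contextUrl: null,
  document: ctx,
  documentUrl: url,
});

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

async function main(): Promise<void> {
  // JWS view: only the document bytes matter, so this digest never changes.
  console.log("JWS view:", sha256(JSON.stringify(doc)));

  // Data Integrity view: the canonicalized RDF graph depends on the context.
  for (const [name, ctx] of [["context v1", contextV1], ["context v2", contextV2]] as const) {
    const nquads = await jsonld.canonize(doc, {
      algorithm: "URDNA2015",
      format: "application/n-quads",
      documentLoader: loaderFor(ctx),
    });
    console.log(`Data Integrity view with ${name}:`, sha256(nquads));
  }
}

main().catch(console.error);
```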

Experiment #2: JWS changes, Data Integrity doesn’t

In the second experiment, we start with a JSON-LD document named example2a.input. This document references a JSON-LD @context https://example.com/context2a/, and the contents of that @context are as in the file context2a.jsonld.

In a subsequent variation of this, we start with another JSON-LD document named example2b.input, which is different from the above example2a.input. The difference is that the first document uses the term “givenName”, while the second document uses the term “firstName”. The second document references a different JSON-LD @context https://example.com/context2b/, and the contents of that @context are as in the file context2b.jsonld, which is different from the file context2a.jsonld that was used above. The difference is that the first @context defines the term “givenName”, while the second @context defines the term “firstName”; however, both map their term to the same URI, i.e. with equivalent semantics.

The result: Despite the fact that the JSON-LD document changes, the Data Integrity signature doesn’t change, whereas the JWS signature changes. This is because Data Integrity “understands” that even though the document has changed, the semantics of the RDF graph are still the same. JWS on the other hand “sees” only the JSON-LD document, not the semantics behind it. This also means that verifying a Data Integrity signature would succeed, whereas verifying a JWS signature would fail.
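
The same stand-in approach reproduces this second result, again assuming jsonld.js and Node's crypto module; the inline contexts and the example value are illustrative rather than the repository's actual files, and digests stand in for the two proof mechanisms.

```typescript
// Sketch: two documents that differ as JSON ("givenName" vs. "firstName") but
// whose contexts map both terms to the same IRI. Their canonicalized RDF is
// identical (Data Integrity view), while their serialized bytes differ (JWS view).
// Contexts are inlined here; the original experiment references them by URL.
import * as jsonld from "jsonld";
import { createHash } from "crypto";

const docA = {
  "@context": { givenName: "https://schema.org/givenName" },
  givenName: "Alice",
};

const docB = {
  "@context": { firstName: "https://schema.org/givenName" },
  firstName: "Alice",
};

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

async function main(): Promise<void> {
  // JWS view: the document bytes differ, so the digests differ.
  console.log("JWS view, doc A:", sha256(JSON.stringify(docA)));
  console.log("JWS view, doc B:", sha256(JSON.stringify(docB)));

  // Data Integrity view: both canonicalize to the same N-Quads, so the
  // digests are identical.
  const opts = { algorithm: "URDNA2015", format: "application/n-quads" } as const;
  console.log("Data Integrity view, doc A:", sha256(await jsonld.canonize(docA, opts)));
  console.log("Data Integrity view, doc B:", sha256(await jsonld.canonize(docB, opts)));
}

main().catch(console.error);
```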

Conclusion

Depending on your perspective, you could interpret the results in different ways. You could call Data Integrity insecure, since it depends on information outside the JSON document. You could also call JWS insecure, since it fails to secure the JSON-LD data model.

The real point of this article, however, is NOT to say that any of the mentioned data models or proof mechanisms are inherently insecure, but rather to raise awareness of the nuances. To say “JSON-LD is JSON” is correct on the document layer and wrong on the data model layer. Certain combinations of data models and proof mechanisms can lead to surprising results if they are not understood properly.

Thursday, 12. October 2023

Jon Udell

How to Use LLMs for Dynamic Documentation


Here’s #11 in the new series on LLM-assisted coding over at The New Stack:
How to Use LLMs for Dynamic Documentation

My hunch is that we’re about to see a fascinating new twist on the old idea of literate programming. Some explanations can, will, and should be written by code authors alone, or by those authors in partnership with LLMs. Others can, will, and should be conjured dynamically by code readers who ask LLMs for explanations on the fly.

The rest of the series:

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify

9 Learning While Coding: How LLMs Teach You Implicitly

10 How LLMs Helped Me Build an ODBC Plugin for Steampipe

Wednesday, 11. October 2023

Mike Jones: self-issued

OpenID Presentations at October 2023 OpenID Workshop and IIW


I gave the following presentation at the Monday, October 9, 2023 OpenID Workshop at CISCO:

OpenID Connect Working Group (PowerPoint) (PDF)

I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, October 10, 2023:

Introduction to OpenID Connect (PowerPoint) (PDF)

Public Drafts of Third W3C WebAuthn and FIDO2 CTAP Specifications


The W3C WebAuthn and FIDO2 working groups have been actively creating third versions of the W3C Web Authentication (WebAuthn) and FIDO2 Client to Authenticator Protocol (CTAP) specifications. While remaining compatible with the original and second standards, these third versions add features that have been motivated by experience with deployments of the previous versions. Additions include Cross-Origin Authentication within an iFrame, Credential Backup State, the isPasskeyPlatformAuthenticatorAvailable method, Conditional Mediation, Device-Bound Public Keys (since renamed Supplemental Public Keys), requesting Attestations during authenticatorGetAssertion, the Pseudo-Random Function (PRF) extension, the Hybrid Transport, and Third-Party Payment Authentication.
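
As one illustration of how a couple of these additions surface to web developers, here is a hedged browser-side sketch that feature-detects Conditional Mediation before requesting a passkey assertion. It uses the already-shipped isUserVerifyingPlatformAuthenticatorAvailable() check rather than the newer draft method named above, and the rpId and challenge values are placeholders for what a real relying party would supply.

```typescript
// Sketch: detect Conditional Mediation and a platform authenticator, then
// request an assertion that the browser can surface through autofill UI
// instead of an immediate modal prompt. Values below are placeholders.
async function signInWithPasskeyIfAvailable(): Promise<Credential | null> {
  if (!window.PublicKeyCredential) return null;

  const conditionalOk =
    typeof PublicKeyCredential.isConditionalMediationAvailable === "function" &&
    (await PublicKeyCredential.isConditionalMediationAvailable());
  const platformOk =
    await PublicKeyCredential.isUserVerifyingPlatformAuthenticatorAvailable();
  if (!conditionalOk || !platformOk) return null;

  return navigator.credentials.get({
    // "conditional" is the Conditional Mediation addition mentioned above.
    mediation: "conditional",
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // use a server-issued challenge in practice
      rpId: "example.com", // placeholder relying-party ID
      userVerification: "preferred",
    },
  });
}
```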

I often tell people that I use my blog as my external memory. I thought I’d post references to these drafts to help me and others find them. They are:

Web Authentication: An API for accessing Public Key Credentials, Level 3, W3C Working Draft, 27 September 2023
Client to Authenticator Protocol (CTAP), FIDO Alliance Review Draft, March 21, 2023

Thanks to John Bradley for helping me compile the list of deltas!

Saturday, 07. October 2023

Talking Identity

And Just Like That, He’s Gone


Writing this post is hard, because the emotions are still fresh and very raw. In so many ways, I feel like I was only just beginning to know Vittorio Luigi Bertocci. 

Of course, we all feel like we “know” him, because he has always been a larger-than-life ch