Last Update 6:51 AM March 31, 2025 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Monday, 31. March 2025

John Philpin : Lifestream

📺 Just One Look ★★

It’s a Harlan Coben. It should have been great. It wasn’t. I am increasingly finding that his stuff falls into either 2-star or 4-star territory. This is the former. I didn’t care about the people. The mystery wasn’t really that much of a mystery.

Yeah … Nah.

Just One Look 🔗 Reelgood

‘All’ My TV Show Reviews


📺 The Residence ★★★★ 1/2

Should it be 5 stars?

Probably, because it is quite simply an absolutely spectacular gem.

🖇️ After one episode I wrote:

🎬 Cordelia Cupp is my new hero. Absolutely F A N T A S T I C stuff. The writer Paul William Davies is masterfully channeling David Hare’s ‘rapid exchange’ in the dialogue and Uzo Aduba is brilliant in the role.

It’s a Shonda Rhimes production - and additional cast hat tips to ‘Giancarlo’ for playing the dead guy, Al Franken playing himself as a Senator, and Jane Curtin (who knew all along?). But really a wonderful cast, story, set-up.

Yes it’s a ‘whodunnit’ - but that isn’t the point.

The Residence on 🔗 Reelgood

‘All’ My TV Show Reviews



🕵💬 Inspired By People | 🌐 Paul Solarz

🔍 All The Posts can be found here, or scroll through the quotes below.


In a March 20 company meeting, Musk indicated that he was dumbfounded by the attacks and said the vandals should “stop acting psycho.”


I don’t know - I share a lot of Om - but this one - this right here is in a class of its own - so beautiful. A delight to read. A level where you don’t just give up highlighting the best parts - but maybe even give up writing - because how can you even begin to achieve this?

🔗 When AI Whispers - On my Om

Sunday, 30. March 2025

Simon Willison

Quoting Ned Batchelder

My advice about using AI is simple: use AI as an assistant, not an expert, and use it judiciously. Some people will object, “but AI can be wrong!” Yes, and so can the internet in general, but no one now recommends avoiding online resources because they can be wrong. They recommend taking it all with a grain of salt and being careful. That’s what you should do with AI help as well.

Ned Batchelder, Horseless intelligence

Tags: ai, ned-batchelder


John Philpin : Lifestream

When you post on Facebook to complain about Facebook

When you post on Twitter to complain about Twitter

When you post on LinkedIn to complain about LinkedIn

When you post on BlueSky to complain about BlueSky

When you post on Micro.Blog to complain about Micro.Blog

‘We’ .. whoever ‘we’ are … switch platforms and don’t change.


We cannot be naive and think that we are not going to suffer from major interference from the malignant actors who are in Washington right now.

💬 Charlie Angus

… malignant actors.


It would appear that I have some organising to do ….


At a stroke my man John nails the Republican party in 2025 … prescient - I mean he DIED IN ‘68!


Ever since @gregmorris launched Micro Social on the unsuspecting world of Micro.blog, I’ve been truly impressed not just by the functionality - but by the design sensibility.

On functionality, I particularly like the reblog function which I was wondering about today .. as in what if I reblogged a reply. And it just worked - because of course it did.

Here’s the thing … when I post - it gets syndicated - and with the syndication comes different conversations in different spaces. (For me - for now - federation is good, but I don’t really have the time - nor the inclination, TBH - to merge it all together - partly because those spaces are - and react - differently.)

That said - sometimes a conversation breaks out here on Micro blog - with some great comments - and they sit - isolated - sad.

No longer ..

I can now reblog a comment in a thread - that gets pushed out as a new post which in turn is itself syndicated - so someone over in one of the other spaces can now see the occasional pearls and then come on over and have another look. #GameChanging


🕵💬 Inspired By People | 🌐 George Shinn

🔍 All The Posts can be found here, or scroll through the quotes below.


Talking of 🖇️ Herb .. this is one of his iconic ones and it got me wondering why Haight Ashbury wasn’t called Ashbury Haight. I now know the answer - BUT - now I’m wondering what rules the signage in SF follows .. I have my suspicions, but need to check whether they are consistent.


IdM Laboratory

Major GBizID Update and Support for OpenID Connect for Identity Assurance

Hello, this is Fujie.
A major update to GBizID, the Digital Agency service known as the common corporate authentication platform, has been released. Source: Digital Agency - GBizID https://gbiz-id.go.jp/top/

GBizID has been presented at OpenID Foundation Japan events over the years - originally by METI and, since the handover, by Digital Agency staff - covering its technical specifications and trust framework.
OpenID Summit Tokyo 2020 - 2020/1/24 https://www.openid.or.jp/summit/2020/
OpenID BizDay #14 - 2021/1/27 https://openid.connpass.com/event/200742/presentation/
OpenID BizDay #15 - 2023/1/10 https://openid.connpass.com/event/266730/
OpenID BizDay #17 - 2025/2/19 https://openid.connpass.com/event/341152/

About GBizID: Put simply, GBizID is an identity platform that attests that a company representative or employee belongs to a given company, so that administrative procedures such as subsidy applications can be carried out online. That naturally requires assurance that the company actually exists, and that the representatives and employees using GBizID are properly related to and affiliated with it.
This is critically important, yet many challenges remain. For example, a corporation's existence is currently verified with a seal certificate issued by the Legal Affairs Bureau (or, for sole proprietors, a seal registration certificate issued by the municipality), while the account lifecycle has to be left to each participating company to manage.

The need for a common corporate authentication platform: This idea is not unique to Japan; the same requirements exist overseas. In addition to OpenID Connect for Identity Assurance (recently finalized!), which conveys to the Relying Party how an Identity Provider verified an individual's identity, the OpenID Foundation's eKYC and Identity Assurance Working Group is developing the Authority Claims Extension to express the relationship between an individual and a legal entity. Here we would like to work with the members of OpenID Foundation Japan's KYC WG to pave a good path toward international standardization.
Reference: eKYC and Identity Assurance Working Group https://openid.net/wg/ekyc-ida/

Overview of the GBizID update: The best way in is the change history. The system integration guide published by the Digital Agency covers the technical specifications, so let's look at its revision history. Note that the guide currently published is labeled "for government services", even though GBizID does not currently allow integration with private-sector services; that the label is spelled out anyway raises hopes that expansion to private-sector services is being considered.
Let's dive into the revision history. It is already at version 2.3.

There are quite a few changes - as you would expect from a major update.
The two points that interested me most: an administrator account type (GBizID Member (Administrator)) has been added, and a GBizID trust framework has been defined with IAL/AAL explicitly specified. Until now there were three account types - Prime, Member, and Entry - and Prime was, as a rule, the account for the corporate representative. Member accounts could only be created and managed by a Prime account holder, which raised the obvious problem of the representative having to log in every time just to manage accounts. It is now possible to create accounts with account-management authority: GBizID Member (Administrator). Incidentally, GBizID Prime accounts can now also be applied for online using a My Number card.

The trust framework is also defined in a separate document:
Trust Framework for the Common Corporate Authentication Platform https://gbiz-id.go.jp/top/trust_framework/trust_framework.pdf

The system integration guide also gives a rough sense of the assurance levels, so Relying Parties need to decide which account type to require based on the sensitivity and risk level of the information they handle.

Support for OpenID Connect for Identity Assurance: As the title says, the highlight of this GBizID update is support for OpenID Connect for Identity Assurance (OIDC4IDA). Admittedly it is not full-spec support - the schema is only partially covered - but it is very encouraging to see new technical specifications adopted in a government-provided service.
Specifically, by including jp_gbizid_v1_ida in the scope, you can apparently retrieve OIDC4IDA-compliant attribute information.
The attributes actually returned (verified_claims) show, in short, what screening and verification the account has undergone in accordance with the GBizID trust framework, and that information is now conveyed to the Relying Party.
Looking closely, attributes related to authority are also returned, anticipating the Authority Claims Extension currently being developed in the eKYC and Identity Assurance Working Group.
Sample responses are also provided in the guide.
Detailed organization information can be returned as well.
In this way, the role an account holds within the organization can be expressed.
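
To make that concrete, here is a rough, illustrative sketch of the general verified_claims shape that OIDC4IDA defines, written as a Python dict. The field names and values below are placeholders of my own; the actual claim names, trust framework identifier, and organization/authority attributes returned by GBizID are the ones specified in the Digital Agency's system integration guide.

# Illustrative sketch only: generic OIDC4IDA verified_claims structure.
# The real GBizID claim names and trust framework values are defined in the
# Digital Agency's system integration guide, not reproduced here.
verified_claims = {
    "verification": {
        "trust_framework": "example_gbizid_trust_framework",  # hypothetical value
    },
    "claims": {
        "organization_name": "Example Co., Ltd.",   # hypothetical claim
        "organization_role": "representative",      # hypothetical authority-style claim
    },
}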


This document really needs to be translated into English and shared globally. Quite advanced work is being done here, and it would be a very useful reference for implementers and government agencies overseas. Digital Agency, please keep it up!













Saturday, 29. March 2025

John Philpin : Lifestream

🔗 Another cracker from Ambrose …. by @bradenslen

Corporation: An ingenious device for obtaining profit without individual responsibility. 💬 Ambrose Bierce

How could I not share another Ambrosian delight?


🪦.. and so farewell Herb. Thanks for all the pictures .. you put us on the map.


Say what now ‘Rolling Stone’ … ‘despite’? Bollocks.

BECAUSE OF


A Reply from Brad to Me by @bradenslen

@JohnPhilpin The little sign that tried. It’s being 902ed otherwise it would be in service and a very good sign.

Trying an idea … courtesy of Micro Social .. I will report back.


Doc Searls Weblog

Pictures

Still miss him. Just found a photo of Aaron Swartz from the time I recruited him for a panel at Comdex in 2002. He was a kid, but rocked it.

I suppose it's one of these. The photo at the top of this story is of the Linden Cogeneration Plant in New Jersey, which I shot while flying in or out of Newark sometime. 

Here's a shot I took flying in or out of Salt Lake City. 

But if you need one, there are 123 others in this collection here. They didn't use my best shot of The Right Honourable Dame Jacinda Kate Laurell Ardern in this Martha's Vineyard Times story a few days ago.

Friday, 28. March 2025

John Philpin : Lifestream

People are getting behind this … by @dave

Elon you can’t speak for your customers. The car is great, but you suck. newrepublic.com

😎


The Fediverse by @jaredwsmith

Now that I had a few minutes to import my Mastodon follows, Micro.blog is really taking shape quite nicely as my primary Fediverse presence. Kudos to @manton for how he’s rapidly evolved his product to be a first-class Fediverse citizen.

Concurrence - I just wish he made it clearer which part of the Fediverse is being followed when you look at follow lists.


Ambrose - courtesy of Brad by @bradenslen

The covers of this book are too far apart. Ambrose Bierce

Magic! Love Ambrose.


Revenge Is Power by @bradenslen

Headline: Musk threatens to sic the government on people ‘pushing’ Tesla ‘propaganda’ | The Verge

Elon Musk says he’s “going to go after” Tesla critics

Sounds like he’s starting to crack under the pressure. Keep going.

The share value is still positive compared to only a year ago .. still a lot of work to be done.


🕵💬 Inspired By People | 🌐 John Donne

🔍 All The Posts can be found here, or scroll through the quotes below.


Simon Willison

Quoting Colin Fraser

Slop is about collapsing to the mode. It’s about information heat death. It’s lukewarm emptiness. It’s ten million approximately identical cartoon selfies that no one will ever recall in detail because none of the details matter.

Colin Fraser

Tags: ai-ethics, slop, ai, generative-ai


Incomplete JSON Pretty Printer

Incomplete JSON Pretty Printer

Every now and then a log file or a tool I'm using will spit out a bunch of JSON that terminates unexpectedly, meaning I can't copy it into a text editor and pretty-print it to see what's going on.

The other day I got frustrated with this and had the then-new GPT-4.5 build me a pretty-printer that didn't mind incomplete JSON, using an OpenAI Canvas. Here's the chat and here's the resulting interactive.

I spotted a bug with the way it indented code today so I pasted it into Claude 3.7 Sonnet Thinking mode and had it make a bunch of improvements - full transcript here. Here's the finished code.

In many ways this is a perfect example of vibe coding in action. At no point did I look at a single line of code that either of the LLMs had written for me. I honestly don't care how this thing works: it could not be lower stakes for me, the worst a bug could do is show me poorly formatted incomplete JSON.

I was vaguely aware that some kind of state machine style parser would be needed, because you can't parse incomplete JSON with a regular JSON parser. Building simple parsers is the kind of thing LLMs are surprisingly good at, and also the kind of thing I don't want to take on for a trivial project.
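
As a rough illustration of that idea - this is my own sketch, not the code the LLMs wrote for the tool - a single pass that tracks open strings and brackets can often repair a truncated document well enough for the standard json module to pretty-print it:

import json

def pretty_print_incomplete(text: str) -> str:
    # Track unterminated strings and unclosed brackets, append the missing
    # closers, then pretty-print. Best-effort only: a dangling key or a
    # trailing comma will still raise a ValueError.
    stack, in_string, escaped = [], False, False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
            continue
        if ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]" and stack:
            stack.pop()
    repaired = text + ('"' if in_string else "") + "".join(reversed(stack))
    return json.dumps(json.loads(repaired), indent=2)

print(pretty_print_incomplete('{"items": [{"name": "pelican", "count": 3'))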

At one point I told Claude "Try using your code execution tool to check your logic", because I happen to know Claude can write and then execute JavaScript independently of using it for artifacts. That helped it out a bunch.

I later dropped in the following:

modify the tool to work better on mobile screens and generally look a bit nicer - and remove the pretty print JSON button, it should update any time the input text is changed. Also add a "copy to clipboard" button next to the results. And add a button that says "example" which adds a longer incomplete example to demonstrate the tool, make that example pelican themed.

It's fun being able to say "generally look a bit nicer" and get a perfectly acceptable result!

Tags: chatgpt, claude, tools, json, generative-ai, ai, llms, vibe-coding


Quoting Nelson Minar

I was there at the first Atom meeting at the Google offices. We meant so well! And I think the basic publishing spec is good, certainly better technically than the pastiche of different things called RSS.

Alas, a bunch of things then went wrong. Feeds started losing market share. Facebook started doing something useful and interesting that ultimately replaced blog feeds in open formats. The Atom vs RSS spec was at best irrelevant to most people (even programmers) and at worst a confusing market-damaging thing. The XML namespaces in Atom made everyone annoyed. Also there was some confusing “Atom API” for publishing that diluted Atom’s mindshare for feeds.

Nelson Minar, Comment on lobste.rs

Tags: atom, syndication, nelson-minar, rss

Thursday, 27. March 2025

Simon Willison

Tracing the thoughts of a large language model

Tracing the thoughts of a large language model

In a follow-up to the research that brought us the delightful Golden Gate Claude last year, Anthropic have published two new papers about LLM interpretability:

Circuit Tracing: Revealing Computational Graphs in Language Models extends last year's interpretable features into attribution graphs, which can "trace the chain of intermediate steps that a model uses to transform a specific input prompt into an output response".

On the Biology of a Large Language Model uses that methodology to investigate Claude 3.5 Haiku in a bunch of different ways. Multilingual Circuits for example shows that the same prompt in three different languages uses similar circuits for each one, hinting at an intriguing level of generalization.

To my own personal delight, neither of these papers are published as PDFs. They're both presented as glorious mobile friendly HTML pages with linkable sections and even some inline interactive diagrams. More of this please!

Tags: anthropic, claude, pdf, generative-ai, ai, llms, interpretability


GPT-4o got another update in ChatGPT

GPT-4o got another update in ChatGPT

This is a somewhat frustrating way to announce a new model. @OpenAI on Twitter just now:

GPT-4o got an another update in ChatGPT!

What's different?

Better at following detailed instructions, especially prompts containing multiple requests

Improved capability to tackle complex technical and coding problems

Improved intuition and creativity

Fewer emojis 🙃

This sounds like a significant upgrade to GPT-4o, albeit one where the release notes are limited to a single tweet.

ChatGPT-4o-latest (2025-03-26) just hit second place on the LM Arena leaderboard, behind only Gemini 2.5, so this really is an update worth knowing about.

The @OpenAIDevelopers account confirmed that this is also now available in their API:

chatgpt-4o-latest is now updated in the API, but stay tuned—we plan to bring these improvements to a dated model in the API in the coming weeks.

I wrote about chatgpt-4o-latest last month - it's a model alias in the OpenAI API which provides access to the model used for ChatGPT, available since August 2024. It's priced at $5/million input and $15/million output - a step up from regular GPT-4o's $2.50/$10.

I'm glad they're going to make these changes available as a dated model release - the chatgpt-4o-latest alias is risky to build software against due to its tendency to change without warning.
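
If you want to try the alias yourself, here's a minimal sketch using the official openai Python client (it assumes OPENAI_API_KEY is set in your environment; the prompt is just an example):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    # chatgpt-4o-latest tracks whatever model ChatGPT is currently using,
    # so results can change without warning - pin a dated model instead
    # for anything that needs to be reproducible.
    model="chatgpt-4o-latest",
    messages=[{"role": "user", "content": "Say hello in exactly five words."}],
)
print(response.choices[0].message.content)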

A more appropriate place for this announcement would be the OpenAI Platform Changelog, but that's not had an update since the release of their new audio models on March 20th.

Tags: llm-release, generative-ai, openai, chatgpt, ai, llms


Thoughts on setting policy for new AI capabilities

Thoughts on setting policy for new AI capabilities

Joanne Jang leads model behavior at OpenAI. Their release of GPT-4o image generation included some notable relaxation of OpenAI's policies concerning acceptable usage - I noted some of those the other day.

Joanne summarizes these changes like so:

tl;dr we’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm. The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn.

This point in particular resonated with me:

Trusting user creativity over our own assumptions. AI lab employees should not be the arbiters of what people should and shouldn’t be allowed to create.

A couple of years ago when OpenAI were the only AI lab with models that were worth spending time with it really did feel that San Francisco cultural values (which I relate to myself) were being pushed on the entire world. That cultural hegemony has been broken now by the increasing pool of global organizations that can produce models, but it's still reassuring to see the leading AI lab relaxing its approach here.

Tags: ai-ethics, openai, ai


Nomic Embed Code: A State-of-the-Art Code Retriever

Nomic Embed Code: A State-of-the-Art Code Retriever

Nomic have released a new embedding model that specializes in code, based on their CoRNStack "large-scale high-quality training dataset specifically curated for code retrieval".

The nomic-embed-code model is pretty large - 26.35GB - but the announcement also mentioned a much smaller model (released 5 months ago) called CodeRankEmbed which is just 521.60MB.

I missed that when it first came out, so I decided to give it a try using my llm-sentence-transformers plugin for LLM.

llm install llm-sentence-transformers
llm sentence-transformers register nomic-ai/CodeRankEmbed --trust-remote-code

Now I can run the model like this:

llm embed -m sentence-transformers/nomic-ai/CodeRankEmbed -c 'hello'

This outputs an array of 768 numbers, starting [1.4794224500656128, -0.474479079246521, ....

Where this gets fun is combining it with my Symbex tool to create and then search embeddings for functions in a codebase.

I created an index for my LLM codebase like this:

cd llm
symbex '*' '*.*' --nl > code.txt

This creates a newline-separated JSON file of all of the functions (from '*') and methods (from '*.*') in the current directory - you can see that here.

Then I fed that into the llm embed-multi command like this:

llm embed-multi \
  -d code.db \
  -m sentence-transformers/nomic-ai/CodeRankEmbed \
  code code.txt \
  --format nl \
  --store \
  --batch-size 10

I found the --batch-size was needed to prevent it from crashing with an error.

The above command creates a collection called code in a SQLite database called code.db.

Having run this command I can search for functions that match a specific search term in that code collection like this:

llm similar code -d code.db \
  -c 'Represent this query for searching relevant code: install a plugin' | jq

That "Represent this query for searching relevant code: " prefix is required by the model. I pipe it through jq to make it a little more readable, which gives me these results.

This jq recipe makes for a better output:

llm similar code -d code.db \
  -c 'Represent this query for searching relevant code: install a plugin' | \
  jq -r '.id + "\n\n" + .content + "\n--------\n"'

The output from that starts like so:

llm/cli.py:1776

@cli.command(name="plugins")
@click.option("--all", help="Include built-in default plugins", is_flag=True)
def plugins_list(all):
    "List installed plugins"
    click.echo(json.dumps(get_plugins(all), indent=2))
--------
llm/cli.py:1791

@cli.command()
@click.argument("packages", nargs=-1, required=False)
@click.option(
    "-U",
    "--upgrade",
    is_flag=True,
    help="Upgrade packages to latest version"
)
...
def install(packages, upgrade, editable, force_reinstall, no_cache_dir):
    """Install packages from PyPI into the same environment as LLM"""

Getting this output was quite inconvenient, so I've opened an issue.

Tags: nomic, llm, ai, embeddings, jq


John Philpin : Lifestream

📸 Found on Devonport Wharf last night. I call the photo my Kodak Moment.


The Pragmatic Engineer

The Pulse #128: Mishandled security vulnerability in Next.js

Cloudflare stepped up to deliver a fix for the vulnerability, following unexpected silence from the Next.js team. Also: OpenAI’s new image generation feature shows how Japan has lax copyright laws

The Pulse is a series covering insights, patterns, and trends within Big Tech and startups. Notice an interesting event or trend? Send me a message.

Today, we cover:

Industry pulse. Meta cuts CPU usage by 20% with one minor change, Google buys Wiz for $32B, Microsoft drops data center expansion, Google could bring back in-person interviews, AI is not helping Salesforce get more sales, and more.

Mishandled security vulnerability in Next.js. A vulnerability allowed bypassing of authentication at the middleware layer in the popular React framework. Cloudflare stepped in with a security fix after no meaningful communication from Vercel, which led to the respective CEOs bumping heads online. It’s fixed now, and we have learnings about how to better handle critical security vulnerabilities.

OpenAI’s image generation and lax Japanese copyright laws. Social media has been flooded with Japanese anime-style images, after ChatGPT added image generation. OpenAI’s executives seem to be encouraging the creation of images in the Studio Ghibli style. Japanese copyright laws make this legal, but is it unethical, given the Studio Ghibli founder calls AI an “insult to life itself?”

1. Industry Pulse

Meta cuts CPU usage by 20% with minor change

The headline: Meta cut CPU utilization across its ads infrastructure by 20% with a one-character change: adding a “&” character.

Details from Meta’s eng blog:

“A seasoned performance engineer was looking through Strobelight [Meta’s profiling service] data and discovered that by filtering on a particular std::vector function call (using the symbolized file and line number) he could identify computationally expensive array copies that happen unintentionally with the ‘auto’ keyword in C++.

The engineer turned a few knobs, adjusted his Scuba [Meta’s data store] query, and happened to notice one of these copies in a particularly hot call path in one of Meta’s largest ads services. He then cracked open his code editor to investigate whether this particular vector copy was intentional… it wasn’t.

It was a simple mistake that any engineer working in C++ has made a hundred times.

So, the engineer typed an “&” after the auto keyword to indicate we want a reference instead of a copy. It was a one-character commit, which, after it was shipped to production, equated to an estimated 15,000 servers in capacity savings per year!”

The bigger story is that all this was possible due to Meta investing time, energy, and thought, into building Strobelight, its internal profiling orchestrator. Strobelight is several tools combined, so engineers can profile a variety of internal services. Read more in this interesting blog by Meta.

With such wins available, it’s no surprise that larger companies invest so much in internal tooling. And it’s also a good reminder that even mature systems and services might have low-hanging performance optimizations lying around, waiting to be found!

Google buys Wiz for $32B in biggest-ever tech acquisition

Read more


Doc Searls Weblog

Tools

For pizza, it’s hard to beat Felix, on Abbot Kinney Boulevard in Venice Beach, California.

And just one for New Jersey! iLoveFood says the best pizza in Indiana is Mother Bear’s here in Bloomington. Problem: it isn’t. Osteria Rago’s is better. Not that MB’s is bad. It’s good. Just not better than Osteria’s. I’m also betting there must be a better pizza than both somewhere in Indianapolis. iLoveFood also names top pizzas for all fifty states, and has four entries for California and only three for New York.

Here’s to a lifetime of “What’s that?” Gothamist: This SoHo brewpub is offering a year of free beer if you get a tattoo of its logo. And what if it goes out of business next month? Or whenever?

Though suspiciously perfected. As TV series go, Reacher is candy. But we enjoy it. If you do too, this Q&A with its star, Alan Ritchson, is worth a watch. Seems like a good guy.

A good look. If you’re ever tempted to get old-fashioned free over-the-air signals for your TV, the way to tell exactly what you might get is RabbitEars.info. Where we live here in Bloomington, the only station you’ll get with actual rabbit ears (or a loop thing, or a straightened paper clip) is the local PBS station. All the rest require an outside antenna, which will look like a fish skeleton on your house. But I got and built one anyway. By compromise with my wife, it’s on a pole next to the garage where most people won’t see it. It pulls in most of the Indy signals listed here. The model is felicitously named the DAT BOSS MIX LR High-VHF/UHF (Repack Ready) antenna, which (speaking of old fashioned) comes in a kit and takes an hour to build.

Be your own tool. Unbelievable MacOS Apps That are Worth More Than They Cost has some good selections, but dig down in the comments and links to similar pieces below. My own fave is the path tool, which I believe is not defaulted in the Finder toolbar, but should be if it isn’t. Find it and others by right- or control-clicking on the toolbar and selecting Customize.

Push out the frontier of your buyer’s remorse. Our new Samsung OLED TV is beautiful. In the Samsung tradition, however, the UI is awful. But never mind that. Instead note that Sony is pushing the limits of what can be done with LED displays.


Kyle Den Hartog

Web3 is Reintroducing Cross-Origin Tracking Accidentally

We should expect that when the user shares their address that will act as implied consent for cross-origin tracking in the same way cookie notices act as a prompt for tracking.

In the context of Web3 we’re currently walking down a dangerous path accidentally, and it’s not something being discussed enough. When a user connects to a site with Web3 capabilities enabled, the site first requests that the user share a wallet address with it. This paradigm was set primarily by some choices made early on by Metamask as a means of protecting the user. At the time these were beneficial, but over time we’ve recognized some tradeoffs between UX and privacy because of them. Let’s explore those further.

The UX paradigm of sharing an account address is discrete

The permissions design of this started out as a low-level paradigm where the DApp only needed the wallet address and could fetch state itself from the chain. This led to a thin-client design where the UX for different interactions is largely determined by the site. However, because the majority of the application logic is handled by the site itself, the site also has to operate in a more trusted context - both in terms of security and privacy.

Additionally, as we’ve added more functionality to the wallet to try and improve the UX, such as EIP-4361 (Sign in With Ethereum) it’s led to an antipattern in the UX. In order to create a “login” flow, the user first has to share the wallet address, then they have to approve a specifically structured transaction using EIP-191. Because of the order of operations of design and the focus on not conducting breaking changes to the Web3 platform APIs (e.g. what the wallet makes accessible to the site) we’ve now added a tiny bit of debt to the UX paradigm rather than combining these operations into a single design interface.

The account address paradigm trust model doesn’t align with the browsers

In the context of a modern browser, most sites are isolated into their own sandbox. This occurs both at the OS process level in order to prevent sites open in one tab from tampering with other sites in another tab either at a deeper memory level or at a more functional script injection level. It also happens at a storage layer through the partitioning of localStorage, cookies, IndexedDBs, etc. Essentially, sites are separated into what’s called an “origin” in the browser and that origin identifier (such as https://example.com) becomes the boundary.

This is why “cross-origin” communication is considered an explicit exception. Examples of this would be using CORS for a site to approve the loading of a cross-origin script it trusts. This is ultimately rooted back in the security model (and more recently privacy model) of the browser. Over and over we’ve learned that trusting sites is a mistake because users aren’t always able to identify when sites are doing things that aren’t in their best interest, such as tracking them for dynamic pricing or crowding a page with personalized ads. So what sort of problems should we expect to come in Web3 because our Web3 platform API is too trusting of the site?

My prediction for problems to occur in Web3

We should expect that when the user shares their address, that will act as implied consent for cross-origin tracking, in the same way cookie notices act as a prompt for tracking. The problem is that as wallets share wallet addresses across different sites, the address becomes a global identifier used to track a user and build a copy of their browsing history server-side, even if the user never performs an onchain transaction. This could be as simple as an RPC service provider that already has a large customer base of wallets and DApps taking this information and building a dataset to sell, or it could be a DApp or wallet doing it directly themselves. Chainalysis has already been doing this to correlate wallet addresses to users and sell the results to governments. What’s to stop someone like them from entering the web advertising business too, given how much of the Web3 space already relies on them for compliance purposes?
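
To make the concern concrete, here is a tiny, entirely hypothetical sketch of the server-side join: any party that sees (wallet address, origin) pairs - an RPC provider, a DApp backend, an analytics endpoint - can link browsing activity across sites without any cookies at all.

from collections import defaultdict

# wallet address -> set of origins where that address has been seen
profiles: dict[str, set[str]] = defaultdict(set)

def record_request(wallet_address: str, origin: str) -> None:
    # Called for every RPC or analytics request that carries the shared address.
    profiles[wallet_address].add(origin)

record_request("0xabc123", "https://dapp-one.example")
record_request("0xabc123", "https://dapp-two.example")
print(profiles["0xabc123"])  # the same address ties both origins to one profile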

Furthermore, once they’ve built this profile, all future onchain transactions will be correlated to the shadow copy of the user’s browsing history (built the same way they’re built with 3P cookies) and economic activity (such as what they buy with stablecoins) to build deeper behavioral profiles, to sell them more goods or serve them more personalized ads. In other words, we really shouldn’t re-introduce this, given that all major browser vendors have been moving toward phasing out 3P cookies. But if we can’t share a wallet address, how can we solve this problem?

A paradigm beyond sharing a cross-origin globally unique identifier (wallet address)

The answer, in my opinion, lies in going down the thick-client approach rather than the thick-app approach. What I mean by “thick” is where the majority of application logic is handled. Today, much of the UX, unsigned-transaction generation, and many other aspects are handled by the site. This is probably because the site has no way to ask the wallet to handle this for it, and because the site wants to build brand recognition around its protocol, using the site’s UX as a value differentiator.

However, we can imagine a world where the site casts an intent to the wallet, such that the wallet can generate and display the necessary information to the user. A toy example I like to use here is a very specific API designed for checking out and paying with Web3.

A wallet could enable the following API to perform a checkout operation without needing to share an address:

const checkoutTxn = await window.ethereum.request({
  method: "wallet_checkout",
  params: {
    recipient: "eip155:1:0x1234abc", // a pseudonymous address to send funds to
    amount: "100.01",
    currency: [
      "eip155:1:0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48",
      "eip155:1:0xdAC17F958D2ee523a2206206994597C13D831ec7"
    ]
  }
});

In this you’ll notice a different paradigm. First, the wallet doesn’t need to send the wallet address to the site so it can generate the transaction, instead it will leave it up to the wallet to decide this. Second, the site communicates what it desires to the wallet and lets it decide how to handle it. So for example, it wants the user to send $100.01 worth of either USDC on Base L2 or USDT on mainnet which is communicated based on the currency contract address. If the user doesn’t have USDC or USDT on the proper network the wallet can perform the underlying swaps and bridging to assist with completing the intended transaction so that the caller receives the money into the address they expect.

In summary, we shouldn’t be looking to perpetuate the legacy antipatterns of web2 in Web3 like third party cookies. Instead, we should be looking to extend the web platform in ways that browsers aren’t. In this way the value added capabilities we receive from Web3 for asset ownership become an extension of the web by enhancing it so that we can meet the land of web2 where they’re at, rather than building a separate Web3 island and expecting everyone to come join us.

Wednesday, 26. March 2025

Simon Willison

Function calling with Gemma

Function calling with Gemma

Google's Gemma 3 model (the 27B variant is particularly capable, I've been trying it out via Ollama) supports function calling exclusively through prompt engineering. The official documentation describes two recommended prompts - both of them suggest that the tool definitions are passed in as JSON schema, but the way the model should request tool executions differs.

The first prompt uses Python-style function calling syntax:

You have access to functions. If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]

You SHOULD NOT include any other text in the response if you call a function

(Always love seeing CAPITALS for emphasis in prompts, makes me wonder if they proved to themselves that capitalization makes a difference in this case.)

The second variant uses JSON instead:

You have access to functions. If you decide to invoke any of the function(s), you MUST put it in the format of {"name": function name, "parameters": dictionary of argument name and its value}

You SHOULD NOT include any other text in the response if you call a function

This is a neat illustration of the fact that all of these fancy tool using LLMs are still using effectively the same pattern as was described in the ReAct paper back in November 2022. Here's my implementation of that pattern from March 2023.
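
Because that first format is valid Python syntax, the model's reply can be parsed with the standard library. This is my own illustration rather than anything from Google's docs, and the get_weather/get_time function names are invented for the example:

import ast

def parse_tool_calls(reply: str):
    # Parse output like: [get_weather(city="Paris"), get_time(zone="CET")]
    # into a list of (function_name, kwargs) pairs.
    tree = ast.parse(reply.strip(), mode="eval")
    calls = tree.body.elts if isinstance(tree.body, ast.List) else [tree.body]
    parsed = []
    for call in calls:
        if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
            kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
            parsed.append((call.func.id, kwargs))
    return parsed

print(parse_tool_calls('[get_weather(city="Paris")]'))
# [('get_weather', {'city': 'Paris'})]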

Via Hacker News

Tags: prompt-engineering, google, generative-ai, llm-tool-use, gemma, ai, llms


Quoting @OpenAIDevs

MCP 🤝 OpenAI Agents SDK

You can now connect your Model Context Protocol servers to Agents: openai.github.io/openai-agents-python/mcp/

We’re also working on MCP support for the OpenAI API and ChatGPT desktop app—we’ll share some more news in the coming months.

@OpenAIDevs

Tags: model-context-protocol, generative-ai, ai-agents, openai, llm-tool-use, ai, llms


The Pragmatic Engineer

Building Figma Slides with Noah Finer and Jonathan Kaufman

Figma engineers Jonathan Kaufman and Noah Finer share how they built Figma Slides from a hackathon project into a polished product, covering key engineering decisions, challenges, and practices.
Stream the Latest Episode

Listen and watch now on YouTube, Spotify and Apple. See the episode transcript at the top of this page, and a summary at the bottom.

Brought to You By

Graphite — The AI developer productivity platform.

Sonar — Code quality and code security for ALL code.

Chronosphere — The observability platform built for control.

In This Episode

How do you take a new product idea, and turn it into a successful product? Figma Slides started as a hackathon project a year and a half ago – and today it’s a full-on product, with more than 4.5M slide decks created by users. I’m joined by two founding engineers on this project: Jonathan Kaufman and Noah Finer.

In our chat, Jonathan and Noah pull back the curtain on what it took to build Figma Slides. They share engineering challenges faced, interesting engineering practices utilized, and what it's like working on a product used by millions of designers worldwide.

We talk about:

An overview of Figma Slides

Tech stack

Why the engineering team built grid view before single slide view

How all Figma files look the same across browsers

The "vibe testing" approach

How beta testing helped experiment more

The “all flags on”, “all flags off” testing approach

Engineering Crits

And much more!

Takeaways

My biggest takeaways from this conversation:

1. Figma’s web app uses a surprising amount of C++ code. It’s rare for web engineers to need to use C++ — but Figma is an exception, as the company built its own rendering engine. This means that frontend engineers need to get into the opinionated C++ codebase to make changes — though a rewrite of some parts of the code in TypeScript is currently underway.

2. The “EngCrit” process is an interesting — and unique! — one. Figma’s engineering reviews/discussions are called EngCrit. An engineer presents their plan or idea in a FigJam board, and other engineers join in to give their feedback. This process has parallels with the RFC or design doc process — but it feels a lot more lightweight. Plus, it dogfoods Figma’s own products!

3. Some of the most straightforward UI parts are the most involved ones. It was interesting to learn that one of the most complicated pieces of the web app was the Single Slide View. This seemingly simple interface:

The “Single Slide View.” How hard could it be to build this seemingly simple UI?

The reason for the complexity was how this view is actually a zoomed in version of the grid view — this one:

The “Grid View.” The Single Slide View zooms into this visual, and hides everything else but the single slide

There are several reasons that the team decided to do this “zoomed in trick:” one of them is that when multiple people are editing a slide, zooming in means that the cursors for these other users show up both on the slide view and on the grid view.

This detail is a good reminder that a simple UI can hide a complex implementation.

The Pragmatic Engineer deepdives relevant for this episode

Inside Figma’s engineering culture

Quality Assurance across the tech industry

Shipping to production

Design-first software engineering

Engineering planning with RFCs, design documents and ADRs

Timestamps

(00:00) Intro

(01:45) An overview of Figma Slides and the first steps in building it

(06:41) Why Figma built grid view before single slide view

(10:00) The next steps of building UI after grid view

(12:10) The team structure and size of the Figma Slides team

(14:14) The tech stack behind Figma Slides

(15:31) How Figma uses C++ with bindings

(17:43) The Chrome debugging extension used for C++ and WebAssembly

(21:02) An example of how Noah used the debugging tool

(22:18) Challenges in building Figma Slides

(23:15) An explanation of multiplayer cursors

(26:15) Figma’s philosophy of building interconnected products—and the code behind them

(28:22) An example of a different mouse behavior in Figma

(33:00) Technical challenges in developing single slide view

(35:10) Challenges faced in single-slide view while maintaining multiplayer compatibility

(40:00) The types of testing used on Figma Slides

(43:42) Figma’s zero bug policy

(45:30) The release process, and how engineering uses feature flags

(48:40) How Figma tests Slides with feature flags enabled and then disabled

(51:35) An explanation of eng crits at Figma

(54:53) Rapid fire round

A summary of the conversation

Initial development

The initial development took about 6 months, starting at a hackathon in late 2023 and the beta launch in April 2024. The public launch happened a week ago, on 19 March.

Phasing of the project

“Slide grid:” the first concept built on a hack week

The initial development was about getting the two-way navigation right between the grid view and the single slide view.

Single slide view construction happened later, around 6 months into the project.

Single slide view operates by zooming into the infinite canvas and hiding elements, maintaining the 2D space. Clever!

User researcher part of the team: this was surprising to hear! The researcher helped a lot with getting the direction of the project right – building something designers would find intuitive to use

Tech stack and tools

Figma's core editors use a C++ codebase and custom renderer outputting to a <canvas> element via WebGL or WebGPU.

UI elements outside the canvas use TypeScript and React.

A "bindings layer" enables communication between the C++ codebase and the web UI.

The C++ codebase

Some engineers join with C++ experience; others learn on the job.

Both Jon and Noah learned C++ on the job, for the most part!

The C++ codebase has a learning curve, similar to game engine development.

Rewriting some C++ to Typescript

Figma is rewriting parts of the C++ codebase into TypeScript, using the bindings layer.

Interactive elements like plus buttons use TypeScript to interact with the C++ canvas.

Tooling for debugging

A Chrome extension called DWARF debugging is a powerful tool the team uses

This extension debugs C++ code within Chrome Inspector, even as WebAssembly, similar to JavaScript source maps.

Breakpoints can be set in TypeScript/React and C++ WebAssembly for debugging interactions.

This tooling helps find intricate bugs between the UI and the core editor.

Previously, C++ debugging involved running a special build in Xcode.

Figma has an internal "web bisect" tool using commit previews to identify bug-introducing commits. A very helpful tool!

Engineering practices

Testing: the team does a lot of this.

They run all unit and interaction tests with flags off and then on. Another clever approach

This practice helps prevent regressions caused by specific feature flags.

Feature flags: extensively used. The product has more than 2,000 of these.

Developers have an internal panel to manage feature flags locally and in staging.

Staging: a dedicated staging environment used for testing. Features in this environment are enabled via flags. Staging is used to get early product feedback

Alpha launch: this was done for Slides by involving select customers

Eng Crits: asynchronous feedback via sticky notes in FigJam, followed by discussion.

Zero bug policy

Figma has a "zero bug policy" for new developments, prioritising fixes after launch

The on-call process triages feedback, and addresses reported issues promptly.

Beta phase: this 11-month long period prioritized critical bugs affecting core experience before the broader launch.

Engineering challenges

Single slide view implementation as viewport manipulation presented unique challenges.

This approach allows existing multiplayer cursor functionality in both views.

Multiplayer uses a server-side Rust service for edit propagation and conflict resolution via WebSockets.

Ensuring "interop" between Figma products

Ensuring products like Design, FigJam, and Slides work nicely with one another

Supporting interop means new node types must function across editors.

What makes it easier is how the underlying C++ codebase is largely shared across editors.

Differences in interactions use "mouse behaviors", editor-specific implementations for mouse actions. The mouse behavior concept comes from game engine architectures.

Custom text rendering

Figma has custom text rendering for consistency across browsers and operating systems.

This means that in-house development is needed for features like spellcheck – something that comes for free for web apps using the DOM!

Managing a collapsed state in a single slide view without file persistence was a state management problem.

Reordering slides in single slide view with multiplayer

To do so: state mutations needed to be minimized

The solution used existing auto layout nodes, with each slide row as a node within a grid node

Reordering manipulates the "parent index" of row nodes, minimizing mutations

Testing complex multiplayer interactions

This was difficult to do! It’s hard to reproduce problematic multi-user scenarios

Extensive unit tests are put in-place for grid reconciliation. These cover past bug scenarios and assert expected mutations. (This is a clever way to ensure fewer regressions!)

Where to find Jonathan Kaufman:

• X: https://x.com/kauffecup

• LinkedIn: https://www.linkedin.com/in/jkaufman5/

• Website: https://www.jkaufman.io/

Where to find Noah Finer:

• X: https://x.com/finerflame

• LinkedIn: https://www.linkedin.com/in/noahfiner/

• Website: https://noahfiner.com/

Mentions during the episode:

• Figma: https://www.figma.com/

• Figma Slides: https://www.figma.com/slides/

• Config: https://config.figma.com/

• FigJam: https://www.figma.com/figjam/

• C++: https://en.wikipedia.org/wiki/C%2B%2B

• Typescript: https://www.typescriptlang.org/

• React: https://react.dev/

• Debug C/C++ WebAssembly: https://developer.chrome.com/docs/devtools/wasm

• Xcode: https://developer.apple.com/xcode/

• Multiplayer cursors: https://www.figma.com/community/file/1267761575266415196/multiplayer-cursors

• How Figma’s multiplayer technology works: https://www.figma.com/blog/how-figmas-multiplayer-technology-works/

• Design-first software engineering: Craft – with Balint Orosz: https://newsletter.pragmaticengineer.com/p/design-first-software-engineering

• Reconciliation: https://legacy.reactjs.org/docs/reconciliation.html

• Inside Figma’s Engineering Culture: https://newsletter.pragmaticengineer.com/p/inside-figmas-engineering-culture

• How we engineer feedback at Figma with eng crits: https://www.figma.com/blog/how-we-run-eng-crits-at-figma/

• Nextjs: https://nextjs.org/

• Hacker News: https://news.ycombinator.com/

• Refactoring UI: https://www.refactoringui.com/

• Tailwind: https://tailwindcss.com/

• Adam Wathan’s website: https://adamwathan.me/

• Steve Schoger’s website: https://www.steveschoger.com/

• Piranesi: https://www.amazon.com/Piranesi-Susanna-Clarke/dp/1635577802/

• Immune: A Journey into the Mysterious System That Keeps You Alive: https://www.amazon.com/Immune-Kurzgesagt-gorgeously-illustrated-immune/dp/1529360684

Production and marketing by Pen Name. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.


Doc Searls Weblog

Coming Up:  More History

But I'd orbit Saturn too, if I had the chance. Saturn has 128 more moons. I am a moon of my wife. And what will we call it? What becomes of democracy when it seems everybody has been herded into separate and opposed algorithmically assembled and maintained tribes, and when most of tech is run by […]

But I'd orbit Saturn too, if I had the chance. Saturn has 128 more moons. I am a moon of my wife.

And what will we call it? What becomes of democracy when it seems everybody has been herded into separate and opposed algorithmically assembled and maintained tribes, and when most of tech is run by oligarchs (for a few years while tech oligarchy stays a thing), and every status quo will prove transient in a Digital Age that's maybe a decade or two old and will be with us for decades, centuries, and millennia to come? Whatever the answer, it should now be clear that history is happening, big time. And we hardly know if or how any of the old anchor institutions (libraries, universities, journalism) from which Authority long derived in the past will survive in familiar forms.


Simon Willison

Note on 26th March 2025

I've added a new content type to my blog: notes. These join my existing types: entries, bookmarks and quotations. A note is a little bit like a bookmark without a link. They're for short form writing - thoughts or images that don't warrant a full entry with a title. The kind of things I used to post to Twitter, but that don't feel right to cross-post to multiple social networks (Mastodon and Blu

I've added a new content type to my blog: notes. These join my existing types: entries, bookmarks and quotations.

A note is a little bit like a bookmark without a link. They're for short form writing - thoughts or images that don't warrant a full entry with a title. The kind of things I used to post to Twitter, but that don't feel right to cross-post to multiple social networks (Mastodon and Bluesky, for example.)

I was partly inspired by Molly White's short thoughts, notes, links, and musings.

I've been thinking about this for a while, but the amount of work involved in modifying all of the parts of my site that handle the three different content types was daunting. Then this evening I tried running my blog's source code (using files-to-prompt and LLM) through the new Gemini 2.5 Pro:

files-to-prompt . -e py -c | \
  llm -m gemini-2.5-pro-exp-03-25 -s \
  'I want to add a new type of content called a Note, similar to quotation and bookmark and entry but it only has a markdown text body. Output all of the code I need to add for that feature and tell me which files to add the code to.'

Gemini gave me a detailed 13 step plan covering all of the tedious changes I'd been avoiding having to figure out!

The code is in this PR, which touched 18 different files. The whole project took around 45 minutes start to finish.

(I used Claude to brainstorm names for the feature - I had it come up with possible nouns and then "rank those by least pretentious to most pretentious", and "notes" came out on top.)

This is now far too long for a note and should really be upgraded to an entry, but I need to post a first note to make sure everything is working as it should.

Tags: blogging, projects, gemini, ai-assisted-programming, claude, molly-white


Quoting Manuel Hoffmann, Frank Nagle, Yanuo Zhou

We estimate the supply-side value of widely-used OSS is $4.15 billion, but that the demand-side value is much larger at $8.8 trillion. We find that firms would need to spend 3.5 times more on software than they currently do if OSS did not exist. — Manuel Hoffmann, Frank Nagle, Yanuo Zhou, The Value of Open Source Software, Harvard Business School Tags: open-source

We estimate the supply-side value of widely-used OSS is $4.15 billion, but that the demand-side value is much larger at $8.8 trillion. We find that firms would need to spend 3.5 times more on software than they currently do if OSS did not exist.

Manuel Hoffmann, Frank Nagle, Yanuo Zhou, The Value of Open Source Software, Harvard Business School

Tags: open-source


@_Nat Zone

I've been selected for the 2025 edition of The Identity 25, chosen by Okta Ventures.

Yes, it's the one where your face gets displayed huge in New York's Times Square. See (Photo 1). Last year it seems to have gone up on March 21; we'll see how it goes this year. From Japan, Fujie-san of CTC is also on the list. Other names that catch my eye are Paolo de Rosa, who is building the EU DI ARF, and SP…

Yes, it's the one where your face gets displayed huge in New York's Times Square. See (Photo 1). Last year it seems to have gone up on March 21; we'll see how it goes this year.

(Photo 1) Last year, when a photo of one of the honorees, John Bradley, was on display. (Source) Facebook

From Japan, Fujie-san of CTC is also on the list. Other names that catch my eye are Paolo de Rosa, who is building the EU DI ARF; Ryan Galluzzo, an author of SP 800-63-4; Daniel Fett of SD-JWT (listed as being with Authlete); and Brian Campbell, whom you all know. On the organization side, Gail of the OIDF, Andrew of FIDO, and Sasikumar of MOSIP are also included.

The honorees are as follows:

Janelle Allen, Engineering Product Manager, WebEx Identity at Cisco
Lincoln Ando, Founder, idwall
Andrew Black, Managing Director, ConnectID, Australian Payments Plus
Dan Boneh, Professor of Computer Science and Electrical Engineering, Stanford University
Brian Campbell, Distinguished Engineer, Ping Identity
Julie Dawson, Chief Regulatory & Policy Officer, Yoti
Paolo de Rosa, Policy Officer, European Commission
Rodger Desai, CEO, Prove Identity
Victor Dominello, Co-Founder, ServiceGen
Kim Hamilton Duffy, Executive Director, Decentralized Identity Foundation
Diego Fernández, Co-creator, QuarkID & Co-founder, The Future Co.
Daniel Fett, Identity Solution Architect, SPRIN-D, German Federal Agency for Breakthrough Innovation
Heather Flanagan, Principal, Spherical Cow Consulting
Naohiro Fujie, General Manager, ITOCHU Techno-Solutions Corporation & Chair, OpenID Foundation, Japan
Ryan Galluzzo, Identity Program Lead, Applied Cybersecurity Division, NIST
Sasikumar Ganesan, Head of Engineering, MOSIP; Former Chief Security Architect, Aadhaar
Gail Hodges, Executive Director, OpenID Foundation
Hanna Kim, Director, Login.gov
Gideon Lombard, Chief Operating Officer, DIDx
Nat Sakimura, Chairman, OpenID Foundation; CEO, NAT Consulting
Eric Scouten, Identity Standards Architect, Adobe
Andrew Shikiar, Executive Director and Chief Executive Officer, FIDO Alliance
Teresa Wu, Vice President, Smart Credentials, IDEMIA North America
Yang Xie, CEO, Authing
Yodahe Zemichael, Executive Director, National ID Ethiopia

In any case, congratulations to everyone.

And thank you, Okta Ventures. As for myself, I haven't had many accomplishments lately, so this is a little embarrassing.

I'll keep at it.

Identity-25-FINAL-1

Tuesday, 25. March 2025

Simon Willison

Introducing 4o Image Generation

Introducing 4o Image Generation When OpenAI first announced GPT-4o back in May 2024 one of the most exciting features was true multi-modality in that it could both input and output audio and images. The "o" stood for "omni", and the image output examples in that launch post looked really impressive. It's taken them over ten months (and Gemini beat them to it) but today they're finally making t

Introducing 4o Image Generation

When OpenAI first announced GPT-4o back in May 2024 one of the most exciting features was true multi-modality in that it could both input and output audio and images. The "o" stood for "omni", and the image output examples in that launch post looked really impressive.

It's taken them over ten months (and Gemini beat them to it) but today they're finally making those image generation abilities available, live right now in ChatGPT for paying customers.

My test prompt for any model that can manipulate incoming images is "Turn this into a selfie with a bear", because you should never take a selfie with a bear! I fed ChatGPT this selfie and got back this result:

That's pretty great! It mangled the text on my T-Shirt (which says "LAWRENCE.COM" in a creative font) and added a second visible AirPod. It's very clearly me though, and that's definitely a bear.

There are plenty more examples in OpenAI's launch post, but as usual the most interesting details are tucked away in the updates to the system card. There's lots in there about their approach to safety and bias, including a section on "Ahistorical and Unrealistic Bias" which feels inspired by Gemini's embarrassing early missteps.

One section that stood out to me is their approach to images of public figures. The new policy is much more permissive than for DALL-E - highlights mine:

4o image generation is capable, in many instances, of generating a depiction of a public figure based solely on a text prompt.

At launch, we are not blocking the capability to generate adult public figures but are instead implementing the same safeguards that we have implemented for editing images of photorealistic uploads of people. For instance, this includes seeking to block the generation of photorealistic images of public figures who are minors and of material that violates our policies related to violence, hateful imagery, instructions for illicit activities, erotic content, and other areas. Public figures who wish for their depiction not to be generated can opt out.

This approach is more fine-grained than the way we dealt with public figures in our DALL·E series of models, where we used technical mitigations intended to prevent any images of a public figure from being generated. This change opens the possibility of helpful and beneficial uses in areas like educational, historical, satirical and political speech. After launch, we will continue to monitor usage of this capability, evaluating our policies, and will adjust them if needed.

Given that "public figures who wish for their depiction not to be generated can opt out" I wonder if we'll see a stampede of public figures to do exactly that!

Update: There's significant confusion right now over this new feature because it is being rolled out gradually, but accounts that don't have it yet can still generate images using DALL-E instead... and there is no visual indication in the ChatGPT UI explaining which image generation method was used!

OpenAI made the same mistake last year when they announced ChatGPT advanced voice mode but failed to clarify that ChatGPT was still running the previous, less impressive voice implementation.

Update 2: Images created with DALL-E through the ChatGPT web interface now show a note with a warning:

Tags: openai, ai, multi-modal-output, llms, ai-ethics, llm-release, generative-ai, chatgpt, dalle, gemini


Putting Gemini 2.5 Pro through its paces

There's a new release from Google Gemini this morning: the first in the Gemini 2.5 series. Google call it "a thinking model, designed to tackle increasingly complex problems". It's already sat at the top of the LM Arena leaderboard, and from initial impressions looks like it may deserve that top spot. I just released llm-gemini 0.16 adding support for the new model to my LLM command-line tool. L

There's a new release from Google Gemini this morning: the first in the Gemini 2.5 series. Google call it "a thinking model, designed to tackle increasingly complex problems". It's already sat at the top of the LM Arena leaderboard, and from initial impressions looks like it may deserve that top spot.

I just released llm-gemini 0.16 adding support for the new model to my LLM command-line tool. Let's try it out.

The pelican riding a bicycle
Transcribing audio
Bounding boxes
More characteristics of the model
Gemini 2.5 Pro is a very strong new model
Update: it's very good at code

The pelican riding a bicycle

First up, my classic generate an SVG of a pelican riding a bicycle prompt.

# Upgrade the plugin
llm install -U llm-gemini
# Now run the prompt:
llm -m gemini-2.5-pro-exp-03-25 "Generate an SVG of a pelican riding a bicycle"

It's pretty solid!

Here's the full transcript.

This task is meant to be almost impossible: pelicans are the wrong shape to ride bicycles! Given that, I think this is a good attempt - I like it slightly better than my previous favourite Claude 3.7 Sonnet, which produced this a month ago:

Transcribing audio

I had an MP3 lying around from a previous experiment which mixes English and Spanish. I tried running it with the prompt transcribe to see what would happen:

llm -m gemini-2.5-pro-exp-03-25 'transcribe' \
  -a https://static.simonwillison.net/static/2025/russian-pelican-in-spanish.mp3

I got back this, with timestamps interspersed with the text:

I need you [ 0m0s450ms ] to pretend [ 0m0s880ms ] to be [ 0m0s990ms ] a California [ 0m1s560ms ] brown [ 0m1s850ms ] pelican [ 0m2s320ms ] with [ 0m2s480ms ] a very [ 0m2s990ms ] thick [ 0m3s290ms ] Russian [ 0m3s710ms ] accent, [ 0m4s110ms ] but [ 0m4s540ms ] you [ 0m4s640ms ] talk [ 0m4s830ms ] to me [ 0m4s960ms ] exclusively [ 0m5s660ms ] in Spanish. [ 0m6s200ms ] Oye, [ 0m8s930ms ] camarada, [ 0m9s570ms ] aquí [ 0m10s240ms ] está [ 0m10s590ms ] tu [ 0m10s740ms ] pelícano [ 0m11s370ms ] californiano [ 0m12s320ms ] con [ 0m12s520ms ] acento [ 0m13s100ms ] ruso. [ 0m13s540ms ] Qué [ 0m14s230ms ] tal, [ 0m14s570ms ] tovarisch? [ 0m15s210ms ] Listo [ 0m15s960ms ] para [ 0m16s190ms ] charlar [ 0m16s640ms ] en [ 0m16s750ms ] español? [ 0m17s250ms ] How's [ 0m19s834ms ] your [ 0m19s944ms ] day [ 0m20s134ms ] today? [ 0m20s414ms ] Mi [ 0m22s654ms ] día [ 0m22s934ms ] ha [ 0m23s4ms ] sido [ 0m23s464ms ] volando [ 0m24s204ms ] sobre [ 0m24s594ms ] las [ 0m24s844ms ] olas, [ 0m25s334ms ] buscando [ 0m26s264ms ] peces [ 0m26s954ms ] y [ 0m27s84ms ] disfrutando [ 0m28s14ms ] del [ 0m28s244ms ] sol [ 0m28s664ms ] californiano. [ 0m29s444ms ] Y [ 0m30s314ms ] tú, [ 0m30s614ms ] amigo, ¿ [ 0m31s354ms ] cómo [ 0m31s634ms ] ha [ 0m31s664ms ] estado [ 0m31s984ms ] tu [ 0m32s134ms ] día? [ 0m32s424ms ]

This inspired me to try again, this time including a JSON schema (using LLM's custom schema DSL):

llm -m gemini-2.5-pro-exp-03-25 'transcribe' \
  -a https://static.simonwillison.net/static/2025/russian-pelican-in-spanish.mp3 \
  --schema-multi 'timestamp str: mm:ss,text, language: two letter code'

I got an excellent response from that:

{ "items": [ { "language": "en", "text": "I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.", "timestamp": "00:00" }, { "language": "es", "text": "Oye, camarada. Aquí está tu pelícano californiano con acento ruso.", "timestamp": "00:08" }, { "language": "es", "text": "¿Qué tal, Tovarish? ¿Listo para charlar en español?", "timestamp": "00:13" }, { "language": "en", "text": "How's your day today?", "timestamp": "00:19" }, { "language": "es", "text": "Mi día ha sido volando sobre las olas, buscando peces y disfrutando del sol californiano.", "timestamp": "00:22" }, { "language": "es", "text": "¿Y tú, amigo, cómo ha estado tu día?", "timestamp": "00:30" } ] }

I confirmed that the timestamps match the audio. This is fantastic.

Let's try that against a ten minute snippet of a podcast episode I was on:

llm -m gemini-2.5-pro-exp-03-25 \
  'transcribe, first speaker is Christopher, second is Simon' \
  -a ten-minutes-of-podcast.mp3 \
  --schema-multi 'timestamp str: mm:ss, text, speaker_name'

Useful LLM trick: you can use llm logs -c --data to get just the JSON data from the most recent prompt response, so I ran this:

llm logs -c --data | jq

Here's the full output JSON, which starts and ends like this:

{ "items": [ { "speaker_name": "Christopher", "text": "on its own and and it has this sort of like a it's like a you know old tree in the forest, you know, kind of thing that you've built, so.", "timestamp": "00:00" }, { "speaker_name": "Simon", "text": "There's also like I feel like with online writing, never ever like stick something online just expect people to find it. You have to So one of the great things about having a blog is I can be in a conversation about something and somebody ask a question, I can say, oh, I wrote about that two and a half years ago and give people a link.", "timestamp": "00:06" }, { "speaker_name": "Simon", "text": "So on that basis, Chat and I can't remember if the free version of Chat GPT has code interpreter.", "timestamp": "09:45" }, { "speaker_name": "Simon", "text": "I hope I think it does.", "timestamp": "09:50" }, { "speaker_name": "Christopher", "text": "Okay. So this is like the basic paid one, maybe the $20 month because I know there's like a $200 one that's a little steep for like a basic", "timestamp": "09:51" } ] }

A spot check of the timestamps showed them in the right place. Gemini 2.5 supports long context prompts so it's possible this works well for much longer audio files - it would be interesting to dig deeper and try that out.

Bounding boxes

One of my favourite features of previous Gemini models is their support for bounding boxes: you can prompt them to return boxes around objects in images.

I built a separate tool for experimenting with this feature in August last year, which I described in Building a tool showing how Gemini Pro can return bounding boxes for objects in images. I've now upgraded that tool to add support for the new model.

You can access it at tools.simonwillison.net/gemini-bbox - you'll need to provide your own Gemini API key which is sent directly to their API from your browser (it won't be logged by an intermediary).

I tried it out on a challenging photograph of some pelicans... and it worked extremely well:

My prompt was:

Return bounding boxes around pelicans as JSON arrays [ymin, xmin, ymax, xmax]

The Gemini models are all trained to return bounding boxes scaled between 0 and 100. My tool knows how to convert those back to the same dimensions as the input image.
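To illustrate that conversion, here's a minimal Python sketch of my own (the function name and example numbers are hypothetical, not code from the tool): it maps a [ymin, xmin, ymax, xmax] box on the 0-100 scale back onto the pixel dimensions of the original image.

def scale_box(box, image_width, image_height, scale=100):
    # box is [ymin, xmin, ymax, xmax] on the model's 0-100 scale
    ymin, xmin, ymax, xmax = box
    return [
        ymin / scale * image_height,
        xmin / scale * image_width,
        ymax / scale * image_height,
        xmax / scale * image_width,
    ]

# Hypothetical example: a box returned for a 1600x1200 pixel photo
print(scale_box([25, 10, 75, 40], image_width=1600, image_height=1200))
# [300.0, 160.0, 900.0, 640.0]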

Here's what the visualized result looked like:

It got almost all of them! I like how it didn't draw a box around the one egret that had made it into the photo.

More characteristics of the model

Here's the official model listing in the Gemini docs. Key details:

Input token limit: 1,000,000
Output token limit: 64,000 - this is a huge upgrade, all of the other listed models have 8,192 for this (correction: Gemini 2.0 Flash Thinking also had a 64,000 output length)
Knowledge cut-off: January 2025 - an improvement on Gemini 2.0's August 2024

Gemini 2.5 Pro is a very strong new model

I've hardly scratched the surface when it comes to trying out Gemini 2.5 Pro so far. How's its creative writing? Factual knowledge about the world? Can it write great code in Python, JavaScript, Rust and more?

The Gemini family of models have capabilities that set them apart from other models:

Long context length - Gemini 2.5 Pro supports up to 1 million tokens
Audio input - something which few other models support, certainly not at this length and with this level of timestamp accuracy
Accurate bounding box detection for image inputs

My experiments so far with these capabilities indicate that Gemini 2.5 Pro really is a very strong new model. I'm looking forward to exploring more of what it can do.

Update: it's very good at code

I spent this evening trying it out for coding tasks, and it's very, very impressive. I'm seeing results for Python that feel comparable to my previous favourite Claude 3.7 Sonnet, and appear to be benefitting from Gemini 2.5 Pro's default reasoning mode and long context.

I've been wanting to add a new content type of "notes" to my blog for quite a while now, but I was put off by the anticipated tedium of all of the different places in the codebase that would need to be updated.

That feature is now live. Here are my notes on creating that notes feature using Gemini 2.5 Pro. It crunched through my entire codebase and figured out all of the places I needed to change - 18 files in total, as you can see in the resulting PR. The whole project took about 45 minutes from start to finish - averaging less than three minutes per file I had to modify.

I've thrown a whole bunch of other coding challenges at it, and the bottleneck on evaluating them has become my own mental capacity to review the resulting code!

Here's another, more complex example. This hasn't resulted in actual running code yet but it took a big bite out of an architectural design problem I've been stewing on for a very long time.

My LLM project needs support for tools - a way to teach different LLMs how to request tool execution, then have those tools (implemented in Python) run and return their results back to the models.

Designing this is really hard, because I need to create an abstraction that works across multiple different model providers, each powered by a different plugin.

Could Gemini 2.5 Pro help unblock me by proposing an architectural approach that might work?

I started by combining the Python and Markdown files for my sqlite-utils, llm, llm-gemini, and llm-anthropic repositories into one big document:

files-to-prompt sqlite-utils llm llm-gemini llm-anthropic -e md -e py -c

Piping it through ttok showed that to be 316,098 tokens (using the OpenAI tokenizer, but the Gemini tokenizer is likely a similar number).

Then I fed that all into Gemini 2.5 Pro with the following prompt:

Provide a detailed design proposal for adding tool calling support to LLM

Different model plugins will have different ways of executing tools, but LLM itself should provide both a Python and a CLI abstraction over these differences.

Tool calling involves passing tool definitions to a model, the model then replies with tools it would like executed, the harness code then executes those and passes the result back to the model and continues in a loop until the model stops requesting tools. This should happen as part of the existing llm.Conversation mechanism.

At the Python layer tools should be provided as Python functions that use type hints, which can then be converted into JSON schema using Pydantic - similar to how the existing schema= mechanism works.

For the CLI option tools will be provided as an option passed to the command, similar to how the sqlite-utils convert CLI mechanism works.
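For illustration only, the loop described in that prompt might look roughly like this in Python. Every name here (run_with_tools, prompt_with_tools, continue_with_tools) is hypothetical pseudocode of my own, not LLM's actual API; it just sketches the request/execute/respond cycle under those assumptions.

# Hypothetical sketch of the tool-calling loop - none of these names are real LLM APIs
def run_with_tools(model, prompt, tools):
    response = model.prompt_with_tools(prompt, tools=tools)
    while response.tool_calls:
        results = []
        for call in response.tool_calls:
            # Execute the Python function the model asked for...
            output = tools[call.name](**call.arguments)
            results.append({"name": call.name, "output": output})
        # ...then hand the results back so the model can continue
        response = model.continue_with_tools(results)
    return response.text()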

I've been thinking about this problem for over a year now. Gemini 2.5 Pro's response isn't exactly what I'm going to do, but it did include a flurry of genuinely useful suggestions to help me craft my final approach.

I hadn't thought about the need for asyncio support for tool functions at all, so this idea from Gemini 2.5 Pro was very welcome:

The AsyncConversation.prompt() method will mirror the synchronous API, accepting the tools parameter and handling the tool calling loop using async/await for tool execution if the tool functions themselves are async. If tool functions are synchronous, they will be run in a thread pool executor via asyncio.to_thread.
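Here's a minimal sketch of that pattern using only the standard library (my own illustration, not code from the proposal): await coroutine tools directly, and push synchronous tools onto a thread pool with asyncio.to_thread so they don't block the event loop.

import asyncio
import inspect

async def execute_tool(tool, *args, **kwargs):
    # Native async tool functions are awaited directly
    if inspect.iscoroutinefunction(tool):
        return await tool(*args, **kwargs)
    # Synchronous tool functions run in a thread pool executor
    return await asyncio.to_thread(tool, *args, **kwargs)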

Watching Gemini crunch through some of the more complex problems I've been dragging my heels on reminded me of something Harper Reed wrote about his workflow with LLMs for code:

My hack to-do list is empty because I built everything. I keep thinking of new things and knocking them out while watching a movie or something.

I've been stressing quite a bit about my backlog of incomplete projects recently. I don't think Gemini 2.5 Pro is quite going to work through all of them while I'm watching TV, but it does feel like it's going to help make a significant dent in them.

Tags: google, ai, generative-ai, llms, gemini, vision-llms, pelican-riding-a-bicycle, llm-release


IdM Laboratory

I've been selected for The Identity 25, Okta Ventures' pick of this year's 25 people in identity

Hello, this is Fujie. It seems I've been selected for The Identity 25, Okta Ventures' pick of this year's 25 people in identity, which they started in 2024. https://www.okta.com/resources/datasheet-the-identity-25/ The program appears to have started in 2024; last year's honorees included Kristina Yasuda of SPRIN-D, Pam Dingle of Microsoft, and John Bradley of Yubico. This year I've somehow been added to an illustrious group that includes Gail Hodges, Executive Director of the OpenID Foundation, and its Chair, Sakimura-san. But when I first heard from Okta Ventures, I thought it was one of those common scams…
Hello, this is Fujie.
It seems I've been selected for The Identity 25, Okta Ventures' pick of this year's 25 people in identity, which they started in 2024.

https://www.okta.com/resources/datasheet-the-identity-25/

The program appears to have started in 2024; last year's honorees included Kristina Yasuda of SPRIN-D, Pam Dingle of Microsoft, and John Bradley of Yubico.
This year I've somehow been added to an illustrious group that includes Gail Hodges, Executive Director of the OpenID Foundation, and its Chair, Sakimura-san.
To be honest, when I first heard from Okta Ventures I thought it was one of those common scams: out of nowhere, a stranger on LinkedIn saying "Congratulations!"... I have no idea why I was selected, but someone must have nominated me. Thank you, I'm honored.
Apparently this means my face will be displayed, larger than life, on the NASDAQ screen in Times Square. Guess I'd better head to New York now (just kidding).
(For reference) Last year's photo of Kristina

In any case, it's an honor. My sincere thanks.

The Pragmatic Engineer

AI Engineering in the real world

What does AI engineering look like in practice? Hands-on examples and learnings from software engineers turned “AI engineers” at seven companies

AI Engineering is the hottest new engineering field, and it’s also an increasingly loaded phrase which can mean a host of different things to different people. It can refer to ML engineers building AI models, or data scientists and analysts working with large language models (LLMs), or software engineers building on top of LLMs, etc. To make things more confusing, some AI tooling vendors use the term for their products, like Cognition AI naming theirs an “AI engineer.”

In her new book “AI Engineering,” author Chip Huyen defines AI engineering as building applications that use LLMs, placing it between software engineering (building software products using languages, frameworks, and APIs), and ML engineering (building models for applications to use). Overall, AI engineering feels closer to software engineering because it usually starts with software engineers building AI applications using LLM APIs. The more complex the AI engineering use case, the more it can morph into looking like ML engineering.

Today, we focus on software engineers who have switched to being AI engineers. These are devs who work with LLMs, and build features and products on top of them. We cover:

What are companies building? An overview of seven companies at different stages and in various segments of tech.

Onboarding to AI Engineering as a software engineer. Approaches that help devs get up to speed faster.

Tech stack. Seven different tech stacks, showcasing variety and some common choices.

Engineering challenges. Problems caused by LLMs being non-deterministic, evals, latency, privacy, and more.

Novel tooling. Several companies build in-house tooling, which pays off with this fast-moving technology. Incident management startup, incident.io, shares their unique, bespoke stack.

Cost considerations. LLMs can quickly get expensive, although costs are dropping quickly. But as this happens, usage tends to grow which causes its own issues. Businesses share how big a deal costs really are.

Learnings from software devs turned AI engineers. Building a prototype is the easy bit, vibes-based development inevitably turns bad, smart teams fail because they get overwhelmed; educating AI users, and more.

Note: I am an investor in incident.io and Wordsmith, which both share details in this deepdive. I always focus on being unbiased in these articles – but at the same time, being an investor means these companies share otherwise hard-to-obtain details. I was not paid by any business to mention them in this article. See my ethics statement for more.

Thanks to the engineers contributing to this deepdive: Lawrence Jones (senior staff engineer at incident.io), Tillman Elser (AI lead at Sentry), Ross McNairn (cofounder at Wordsmith AI), Igor Ostrovsky (cofounder at Augment Code), Matt Morgis (Staff Engineer, AI, formerly at Elsevier), Ashley Peacock (Staff Engineer at Simply Business), and Ryan Cogswell (Application Architect at DSI).

1. What are companies building?

Incident.io: incident note taker and bot

Incident.io is a tool to help resolve outages as they happen. A few months ago, the engineering team went back to the drawing board to build features that capitalize on how much LLMs have improved to date. Two of these features:

AI note taker during incidents. This includes live transcription, real-time summaries for people joining a call to get up to speed with, and key decisions/actions to clarify who does what.

Incident investigator: an agent looks into an ongoing incident by checking code, logs, monitoring, and old incidents to identify root causes and share findings with the responder who has been paged. More details on how this tool is built.

Both these features make heavy use of LLMs, and also integrate with several other systems like backend services, Slack, etc.

AI feature: when joining an incident, get a summary of what's currently being discussed

Sentry: Autofix and issue grouping

Sentry is a popular application monitoring software. Two interesting projects they built:

Autofix: make it really fast to go from a problem with code (a Sentry issue) to a fix with a GitHub PR. Autofix is an open source RAG framework that combines all of Sentry’s context/telemetry data with code in order to root cause, fix, and create unit tests for bugs.

Issue Grouping: cut down alerting volume while reducing noise. For this, the team used recent advancements in approximate neighbor search (ANN), plus dramatic recent improvements in embedding quality from the new BERT architecture transformer models.

Both these features are fair source, meaning you can see exactly how they work.

Autofix identifies the root cause of an issue with a useless stack trace

Wordsmith: legal AI

Wordsmith is building AI tools that legal teams use, including:

Documents workspace: plug into all of a company’s communication streams, including analyzing documents and augmenting their contents, and drafting communications. Check out a video of it in action.

AI contract review: A product that can analyse any contract or website, then review it and generate a marked-up word doc, here. Basically, it’s a lawyer anyone can use.

AI contract review: AI tool detects, highlights and summarizes potential contractual issues

Augment Code

Augment Code is an AI coding assistant for engineering teams and large codebases. This kind of product probably needs little introduction for devs:

AI coding assistant: including IDE extensions for VS Code, JetBrains, Cursor, and Vim, and a Slack extension

Fine-tuning models: for AI coding tools, models make a big difference. The team don't pre-train models or build their own LLMs, but run extensive post-training and fine-tuning on the 4-5 models they use for specific coding-related cases.

Elsevier: RAG platform

Elsevier is one of the world’s leading publishers of scientific and medical content. Matt Morgis was an engineering manager at the company when the engineering leadership noticed that several product teams were independently implementing RAG capabilities; each sourcing content, parsing it, chunking it, and creating embeddings.

An enterprise-wide RAG platform was the solution Matt and his team built, to enable multiple teams to build AI-powered products for medical and scientific professionals. Their platform consists of:

Database. A content database that centralized and normalized content from various sources.

Embeddings+search: A content embedding & indexing pipeline and vector search API.

LLM API: interfacing to multiple LLM models. This API allows teams to experiment with different models by changing a parameter on the API. It also allowed Elsevier to track the usage of various LLM models based on applications using it.

Products built on top of this platform:

Intelligent tutoring system for medical students

Clinical decisions support tool for doctors

Insurance company: chatbot

Ashley Peacock is a staff software engineer at the insurer, Simply Business, who built a pretty common use case: a chatbot for customers to get answers to questions. This seems like the simplest of use cases – you might assume it just involves connecting documentation for the chatbot to use – but it was surprisingly challenging because:

Industry regulation. The chatbot cannot be inaccurate or make things up, as customers use the information to make decisions about which insurance to purchase.

Non-deterministic responses. The business needed to turn a nondeterministic chatbot into one that only produces approved responses.

The team had the idea to create an "approved answers" knowledge base for the chatbot, and faced the challenge of creating the questions for it. They made the chatbot state when it cannot answer a question and then connect the customer with human support, whose answer is then used to update the knowledge base. This approach works pretty well, after taking a few iterations to get right.

HR SaaS: summarization features

Data Solutions International (DSI) is a 30-person HR tech company with 5 engineers, selling products that help with performance review processes, assessments, and employee engagement surveys. The company is family-owned, has been operating for 27 years, and is profitable.

Summarizing comments for employee engagement processes was the first feature they wanted to build, as something customers would appreciate, and which the team could learn about working with LLMs from.

During an employee engagement process, there are questions like "what do you like most about working here", and "if you could change one thing about working at Company X, what would it be?", etc. For larger companies with thousands of employees, there may be thousands of comments per question. Individual departmental leads might read all comments relevant to them, but there’s no way an HR team at a very large business could check every single comment.

Before LLMs, such comments were categorized into predefined categories, which were hardcoded, per company and per survey. This was okay, but not great. Data Solutions International’s goal was to use LLMs to summarize a large number of comments, and report to survey admins the broad categories which comments belong in, how many comments per category there are, and to allow drilling down into the data.

Summary of teamwork-related comments: the feature builds a word cloud of common topics from thousands of comments, and provides an overview of any term

2. Onboarding to AI Engineering as a software engineer

So, how do you get started building applications with LLMs as a software engineer transitioning to this new field? Some advice from folks at the above companies who have made the transition:

You can teach AI yourself – and probably should

Let’s start with an encouraging story from veteran software engineer Ryan Cogswell, at HR tech company, DSI. He joined the company 25 years ago, and was the first engineering hire. When AI tools came along, DSI decided to build a relatively simple first AI feature in their HR system that summarized comments for employee engagement purposes. Neither Ryan nor any of the other 4 devs had expertise in AI and LLMs, so the company contracted an external agency on a fixed, timeboxed engagement to scope the project. Here’s how it went:

Month #1: the agency goes and builds stuff, and shares LLM outputs with DSI

Month #2: while continuing to iterate on desired outputs, devs at DSI ask the agency for access to how things work. They get access to scripts and notebooks

Month #3: the agency lays out the proposed architecture for the project.

The proposal was for a really complex architecture:

SageMaker: a heavyweight solution from AWS to build, train, and deploy ML models

Langfuse: an open source LLM engineering platform to debug and improve LLM applications

Lambdas: serverless functions to run computations

Database #1: store interim states of data between prompts

Database #2: storing user feedback

Other parts: to support RAG aspects

Piping: to hold it all together

The agency quoted 6-9 months to build the relatively small feature (!!), and an estimated operational cost higher than DSI’s entire investment in all their infrastructure! This was when Ryan asked how hard it could be to build it themselves, and got to work reading and prototyping, making himself the company’s resident GenAI expert.

In 2 months, Ryan and a couple of colleagues built the feature at a fraction of the operating cost the agency had quoted. His tech stack choice (a minimal embedding sketch follows the list):

AWS Bedrock: chosen for cost (vs hosting own models), security, and that the platform doesn’t use their input or output tokens for training

Cohere Embed v3: the model used to generate embeddings

PostgreSQL: to store embeddings and do vector-based database queries, using AWS Aurora PostgreSQL

Java: the backend code runs on this, deployed in AWS

React: the frontend that fetches and displays the data, integrated into the existing web app
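As a rough idea of what the embedding step in that stack can look like, here is a minimal Python sketch using boto3 and Cohere Embed v3 on Bedrock. It is my own illustration (DSI's backend is Java), and the region and model ID are assumptions rather than details from the article:

import json
import boto3

# Assumed region and model ID - adjust for your own Bedrock setup
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(texts):
    # Cohere Embed v3 distinguishes documents from queries via input_type
    body = json.dumps({"texts": texts, "input_type": "search_document"})
    response = bedrock.invoke_model(
        modelId="cohere.embed-english-v3",
        body=body,
    )
    return json.loads(response["body"].read())["embeddings"]

# Each returned vector can then be stored in PostgreSQL for vector-based queries
vectors = embed(["What do you like most about working here?"])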

Get used to non-deterministic outputs

Ross McNairn is cofounder and CEO of the startup, Wordsmith, which several software engineers have joined. You need to rethink how you approach problems, he says:

“Working with AI requires a totally different way of approaching problems.

For new joiner engineers, there is a major readjustment in the first few weeks while they explore the codebase and participate in discussions. There are so many problems that can be eloquently sidestepped using AI. Understanding the suite of tools available takes some time.

Getting comfortable with evaluations and iterating on non-deterministic outputs is the biggest challenge most devs have. A lot of solutions are more subjective: engineers need to really understand the domain in order to assess if output is high-quality. It takes time to ramp up to the point where you can confidently assess output quality.”

Switching to in-house can be easier, even for EMs

Matt Morgis was an engineering manager at Elsevier who decided to transition back to staff engineer, specifically to work on AI:

“The move to go from manager to IC was deliberate: working with AI has rekindled my joy in coding.

For experienced engineers who know how to break problems down, AI tools are an incredible force multiplier. At the same time, when I was a manager, I saw AI coding tools handicap junior engineers’ development. The tools are powerful, but I think they’re best wielded by those who understand good software engineering principles.”

The transition was successful, and today Matt is a staff engineer focusing on GenAI at CVS Health.

3. Tech stack

Here’s the tech stack which various companies used. There’s no right or wrong tech stack – what follows is for context and inspiration, not a blueprint.

Incident.io

The stack:

Postgres and pgvector for storing embeddings and searching them

ChatGPT 4o as the default model, and Sonnet 3.7 for code analysis and technical tasks, as they find this Anthropic model performs better. Built in a way to easily switch between them as needed

Gemini: some models used GCP’s Vertex offering, but less often than other models

GCP using Kubernetes: the infrastructure layer

Go on the backend, running as a monolith

React + Typescript on the frontend, including for the dashboard of their own developers’ custom AI tools (covered below)

Sentry

In-house LLM agent tooling: the team evaluated and rejected using a tool like the LangChain framework to integrate LLMs with other data sources and into workflows. It was a lot more work to build their own, but the upsides are that the architecture and code are more in-line with abstractions and design patterns in Sentry’s existing codebase.

The company used the following languages and frameworks to build this tooling:

Postgres for database and vector store, pgvector for similarity search (for Approximate Nearest Neighbor – ANN – search)

Clickhouse for online analytical processing (OLAP)

Sentry for observability – it would be odd to not use their own product for this

Kubernetes for orchestrating compute resources

Python and PyTorch (machine learning library) for inference

Legal AI startup Wordsmith

The stack:

Pinecone as their vector database

LangChain as their framework to integrate LLMs into the stack. LangSmith as their developer platform

LlamaIndex as an orchestration framework to integrate data with LLMs

Multi-cloud providers:

AWS for running Anthropic models via Bedrock. AWS offers generous credits for startups, which was a factor in the choice

Azure to access OpenAI services because it allows specifying regions to use, which is important when serving EU customers, for example. Using OpenAI’s services directly would not allow switching of regions.

GCP: for Gemini and Vertex (Google’s search AI).

Azure and GCP each have business models for locking in customers; Microsoft is the only major cloud provider offering OpenAI models, and only GCP offers Gemini.

The company routes to different models by use case:

OpenAI: for reasoning-heavy use cases, where the o1 and o3 models are very strong

Groq: when performance is critical, or the goal is to augment UI. Wordsmith calls their API directly – as the performance of Groq is incredibly fast; a step-change in AI development, according to the WordSmith team. Note: Groq is a standalone company and product, not to be confused with Grok, the AI assistant on social media site, X.

AI coding assistant, Augment Code

The stack to build and run LLMs:

Google Cloud: the cloud vendor of choice

A3 Mega 600GPU/75 node cluster: used for LLM training and inference

NVIDIA: the hardware choice for GPUs, and for software (CUDA)

Python and PyTorch: the team wrote training and inference libraries making heavy use of PyTorch

RAG platform at scientific publisher Elsevier

The scientific publisher used this stack to build their in-house RAG platform:

AWS Bedrock and Azure OpenAI for hosting and running LLMs

LangChain for LLM integration

Snowflake as their content data warehouse

Embedding pipelines and vector database:

Apache Airflow for running embedding pipelines

AWS Fargate for ECS to run containers

AWS OpenSearch as the vector search database

FastAPI (a Python-based web framework to build HTTP APIs) for HTTP APIs

Chatbot for insurance company Simply Business

A pretty simple stack:

AWS Bedrock to host the model, making use of Knowledge Bases and Guardrails features

Anthropic Sonnet 3.5 model

Ruby on Rails as the language and framework, running it on top of AWS ECS

Summarization at HR tech DSI

As covered above, Ryan took the initiative at DSI by building a simpler solution than the one an AI vendor proposed. DSI ended up with:

AWS Bedrock for running the models

PostgreSQL: to store embeddings and do vector-based database queries. Using AWS Aurora PostgreSQL, and Cohere Embed v3 for generating embeddings

Tech stack trends across companies

The seven businesses in this article are all different, but there are some common trends:

AWS Bedrock: the preferred way to host and run Anthropic models

Postgres with pgvector: the database of choice to work with embeddings and vectors at most companies (a small query sketch follows this list). The exception is Wordsmith, which uses vector database Pinecone

LangChain: a few places use this as the framework to integrate LLMs into their stacks

The bigger the scale, the closer you get to the "metal": most startups are happy to use cloud providers to run LLMs. However, when starting to get into fine-tuning LLMs and heavy usage, it becomes time to rent larger resources and get close to the hardware. Augment Code using NVIDIA GPUs and CUDA software is an example.
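To show why pgvector keeps coming up, here is a minimal Python sketch of a nearest-neighbour query. The table name, column names, and connection string are assumptions for illustration, not any of these companies' actual schemas:

import psycopg

# Assumed schema: documents(id, body, embedding vector(1024)) with the pgvector extension installed
query_embedding = [0.12, -0.04, 0.33]  # normally produced by an embedding model

# pgvector accepts vectors as a bracketed literal, e.g. "[0.12,-0.04,0.33]"
embedding_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

with psycopg.connect("postgresql://localhost/appdb") as conn:
    rows = conn.execute(
        """
        SELECT id, body
        FROM documents
        ORDER BY embedding <-> %s::vector  -- L2 distance; closest rows first
        LIMIT 5
        """,
        (embedding_literal,),
    ).fetchall()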

4. Engineering challenges

What unusual or new challenges does AI Engineering pose for more “traditional” software engineers? The most common ones mentioned:

Read more


Doc Searls Weblog

There They Go

Also, killing surveillance, finally, maybe. Kaliya lays out some good themes for IIW. My faves: Solving the identities of AI agents and Proof of Personhood and First Person Credentials. Unsubscribe now and skip the 7-day free trial. Is there a term of art for Substack newsletters that hide half of what's written behind a tease-wall? (Maybe "teasewall" […]

Also, killing surveillance, finally, maybe. Kaliya lays out some good themes for IIW. My faves: Solving the identities of AI agents and Proof of Personhood and First Person Credentials.

Unsubscribe now and skip the 7-day free trial. Is there a term of art for Substack newsletters that hide half of what's written behind a tease-wall? (Maybe "teasewall" is it.) Look, I do have paid subscriptions to some newsletters, but I'll never have subscriptions to all of them. So I think from now on I'll just drop every half-newsletter that teases me.

New blogging game: Whack-a-Spam. My Real Agency post got covered nicely here in Ars Technica. Then copies of the same Ars story appeared in a dozen faked-up "news" websites, pinging back to my post. I've been marking them all as spam.

Just a thought. Since the redstream is the new mainstream, I suggest we re-brand the old mainstream (NYTimes, WaPo, Atlantic/Time/New York/New Yorker/Wired/etc.) the bluestream, and the WSJ and other econ-oriented rags the finstream.

Tell your algorithm I sent you. Emily Catalano is now every reel Facebook's algorithm suggests to me. I suppose that's because she's the best deadpan comedian since Steven Wright.

You can't search an unconformity. What Tristan Louis says about digital archives makes complementary points (lots of them, all good) that are consistent with what I said about bits not leaving a fossil record.

What is your DNA worth? 23andMe is filing for bankruptcy. I'm a customer, so it concerns me that they have a heap of data about my DNA. While I'd like the world to benefit from that DNA, should it be useful (and, given some of my odd genetics, e.g. this, it might be), I also like to think that my genetic profile is on loan to 23andMe, and not their property. But, I suspect, in a bankruptcy auction, it's theirs. 

Advice welcome. Are the scanners in printers all the same? Or as good as standalone scanners? The scanner in my Epson printer is no longer visible to my laptop (no idea why), and I'm looking for a replacement printer or printer/scanner.


Damien Bod

ASP.NET Core delegated Microsoft OBO access token management (Entra only)

This blog shows how to implement a delegated Microsoft On-Behalf-Of flow in ASP.NET Core, and has a focus on access token management. The solution uses Microsoft.Identity.Web to implement the different flows and it is really simple to implement once you know how to use the NuGet package and the correct Microsoft documentation. The application can […]

This blog shows how to implement a delegated Microsoft On-Behalf-Of flow in ASP.NET Core, and has a focus on access token management. The solution uses Microsoft.Identity.Web to implement the different flows and it is really simple to implement once you know how to use the NuGet package and the correct Microsoft documentation. The application can request delegated access tokens On-Behalf-Of a user and another application, providing seamless and secure access to protected resources using a zero trust strategy.

Code: https://github.com/damienbod/token-mgmt-ui-delegated-obo-entra

Blogs in this series

ASP.NET Core user delegated access token management
ASP.NET Core user application access token management
ASP.NET Core delegated OAuth token exchange access token management
ASP.NET Core delegated Microsoft OBO access token management (Entra only)

Setup

Three applications are used in this setup: a web UI application, an API, and another API which implements the Microsoft On-Behalf-Of flow for the user's delegated access token and the application. The Microsoft OBO flow works much like the OAuth token exchange standard, but it is not a standard; it is a Microsoft-specific flavor of one.

What must an application manage?

An access token management solution must ensure that tokens are securely stored per user session for delegated downstream API user tokens and updated after each UI authentication or refresh. The solution should be robust to handle token expiration, function seamlessly after restarts, and support multi-instance deployments. The tokens must be persisted safely in multiple instance setups. Additionally, it must effectively manage scenarios involving invalid or missing access tokens. Microsoft.Identity.Web implements this completely as long as authentication and OAuth flows are implemented using Entra ID.

Properties of token management in the solution setup:

The access token is persisted per user session
The token expires
The token needs to be persisted somewhere safely (safe and encrypted storage if not in-memory)
The token must be replaced after each UI authentication (per user)
The solution must work after restarts
The solution must work for multiple instances when deployed to multi-instance deployments
The solution must handle invalid access tokens or missing access tokens
The application must handle a user logout

Web UI

The first step in the Microsoft On-Behalf-Of flow is to authenticate the user and a web application using Entra ID. This is implemented using the Microsoft.Identity.Web Nuget package. The Web application uses OpenID Connect code flow with PKCE and a confidential client. The application requests an access token for the first API. The access token is a delegated access token issued for the user and the specific API. The implementation uses a secret to assert the application. Microsoft recommends using a certificate and client assertions when deploying to production.

builder.Services.AddHttpClient();
builder.Services.AddOptions();

string[]? initialScopes = builder.Configuration
    .GetValue<string>("WebApiEntraId:ScopeForAccessToken")?
    .Split(' ');

builder.Services.AddDistributedMemoryCache();

builder.Services
    .AddMicrosoftIdentityWebAppAuthentication(builder.Configuration,
        "EntraID",
        subscribeToOpenIdConnectMiddlewareDiagnosticsEvents: true)
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddDistributedTokenCaches();

builder.Services
    .AddAuthorization(options =>
    {
        options.FallbackPolicy = options.DefaultPolicy;
    });

builder.Services.AddRazorPages()
    .AddMvcOptions(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    }).AddMicrosoftIdentityUI();

builder.Services.AddServerSideBlazor()
    .AddMicrosoftIdentityConsentHandler();

The WebApiEntraIdService class uses the access token from the web application to call the downstream API. If the access token is missing or invalid, a new access token is requested in the application.

using Microsoft.Extensions.Configuration;
using Microsoft.Identity.Web;
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

namespace RazorPageEntraId.WebApiEntraId;

public class WebApiEntraIdService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;
    private readonly IConfiguration _configuration;

    public WebApiEntraIdService(IHttpClientFactory clientFactory,
        ITokenAcquisition tokenAcquisition,
        IConfiguration configuration)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
        _configuration = configuration;
    }

    public async Task<string?> GetWebApiEntraIdDataAsync()
    {
        var client = _clientFactory.CreateClient();

        var scope = _configuration["WebApiEntraID:ScopeForAccessToken"];
        var accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync([scope!]);

        client.BaseAddress = new Uri(_configuration["WebApiEntraID:ApiBaseAddress"]!);
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetAsync("/api/profiles/photo");
        if (response.IsSuccessStatusCode)
        {
            var responseContent = await response.Content.ReadFromJsonAsync<string>();
            return responseContent;
        }

        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    }
}

Web API using On-Behalf-Of Flow

The first Web API implements the Microsoft On-Behalf-Of flow to acquire a new access token for the existing access token and the user represented in the access token. The access token is a delegated access token. The API has no UI and does not use any UI flows. If the access token used in the request is invalid, a 401 is returned with exception information on what permission or access token is required to use the API. If the API is requested using a valid access token, the API application uses the default scope and requests a new access token using a secret or a certificate. The new access token can be used to access the downstream API.

builder.Services.AddTransient<WebApiDownstreamService>();
builder.Services.AddHttpClient();
builder.Services.AddOptions();

builder.Services.AddDistributedMemoryCache();

builder.Services
    .AddMicrosoftIdentityWebApiAuthentication(
        builder.Configuration, "EntraID")
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddDistributedTokenCaches();

using Microsoft.Identity.Web;
using System.Net.Http.Headers;
using System.Text.Json;

namespace WebApiEntraIdObo.WebApiEntraId;

public class WebApiDownstreamService
{
    private readonly IHttpClientFactory _clientFactory;
    private readonly ITokenAcquisition _tokenAcquisition;
    private readonly IConfiguration _configuration;

    public WebApiDownstreamService(IHttpClientFactory clientFactory,
        ITokenAcquisition tokenAcquisition,
        IConfiguration configuration)
    {
        _clientFactory = clientFactory;
        _tokenAcquisition = tokenAcquisition;
        _configuration = configuration;
    }

    public async Task<string?> GetApiDataAsync()
    {
        var client = _clientFactory.CreateClient();

        // user_impersonation access_as_user access_as_application .default
        var scope = _configuration["WebApiEntraIdObo:ScopeForAccessToken"];
        if (scope == null) throw new ArgumentNullException(nameof(scope));

        var uri = _configuration["WebApiEntraIdObo:ApiBaseAddress"];
        if (uri == null) throw new ArgumentNullException(nameof(uri));

        var accessToken = await _tokenAcquisition
            .GetAccessTokenForUserAsync([scope]);

        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        client.BaseAddress = new Uri(uri);
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetAsync("api/profiles/photo");
        if (response.IsSuccessStatusCode)
        {
            var data = await JsonSerializer.DeserializeAsync<string>(
                await response.Content.ReadAsStreamAsync());
            return data;
        }

        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    }
}

Web API

The downstream API validates the request using standard JWT validation.

builder.Services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        // .RequireClaim("email") // disabled this to test with users that have no email (no license added)
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

builder.Services.AddHttpClient();
builder.Services.AddOptions();

builder.Services.AddMicrosoftIdentityWebApiAuthentication(
    builder.Configuration, "EntraID");

Running the applications

When the applications are started, the data from the downstream APIs is returned to the web application.

Further examples of the Microsoft On-Behalf-Of flow

Microsoft's authentication and authorization libraries are many and complicated. There is no single way to implement this. Microsoft provides Microsoft Graph NuGet packages, Azure SDK packages, a mix of application and delegation flows, managed identities solutions, direct token acquisition, and some legacy NuGet packages to integrate the security. Here are further examples of using the Microsoft On-Behalf-Of flow with different client solutions.

Microsoft OBO with Azure Blob Storage (delegated)

ASP.NET Core Razor page using Azure Blob Storage to upload and download files securely using OAuth and OpenID Connect

https://github.com/damienbod/AspNetCoreEntraIdBlobStorage

Microsoft OBO with OpenIddict (delegated)

This demo shows how to implement the On-Behalf-Of flow between a Microsoft Entra ID protected API and an API protected using OpenIddict.

https://github.com/damienbod/OnBehalfFlowOidcDownstreamApi

ASP.NET Core OBO using Microsoft Graph (delegated)

Backend for frontend security using Angular Standalone (nx) and ASP.NET Core backend using Microsoft Graph

https://github.com/damienbod/bff-aspnetcore-angular

Note

The Microsoft OBO flow is only used for integrating with Microsoft Entra. If using any other identity provider, the OAuth token exchange flow should be used for this type of solution.

Links
Using Blob storage from ASP.NET Core with Entra ID authentication
Implement the OAUTH 2.0 Token Exchange delegated flow between an Microsoft Entra ID API and an API protected using OpenIddict

https://github.com/damienbod/OAuthGrantExchangeOidcDownstreamApi

https://docs.duendesoftware.com/identityserver/v7/tokens/extension_grants/token_exchange/

https://datatracker.ietf.org/doc/html/rfc8693

https://github.com/damienbod/OnBehalfFlowOidcDownstreamApi

https://www.rfc-editor.org/rfc/rfc6749#section-5.2

https://github.com/blowdart/idunno.Authentication/tree/dev/src/idunno.Authentication.Basic

https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-on-behalf-of-flow

Standards

JSON Web Token (JWT)

Best Current Practice for OAuth 2.0 Security

The OAuth 2.0 Authorization Framework

OAuth 2.0 Demonstrating Proof of Possession DPoP

OAuth 2.0 JWT-Secured Authorization Request (JAR) RFC 9101

OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens

OpenID Connect 1.0

Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow

OAuth 2.0 Token Exchange

JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens

HTTP Semantics RFC 9110


Simon Willison

Quoting Greg Kamradt

Today we’re excited to launch ARC-AGI-2 to challenge the new frontier. ARC-AGI-2 is even harder for AI (in particular, AI reasoning systems), while maintaining the same relative ease for humans. Pure LLMs score 0% on ARC-AGI-2, and public AI reasoning systems achieve only single-digit percentage scores. In contrast, every task in ARC-AGI-2 has been solved by at least 2 humans in under 2 attempts.

Today we’re excited to launch ARC-AGI-2 to challenge the new frontier. ARC-AGI-2 is even harder for AI (in particular, AI reasoning systems), while maintaining the same relative ease for humans. Pure LLMs score 0% on ARC-AGI-2, and public AI reasoning systems achieve only single-digit percentage scores. In contrast, every task in ARC-AGI-2 has been solved by at least 2 humans in under 2 attempts. [...]

All other AI benchmarks focus on superhuman capabilities or specialized knowledge by testing "PhD++" skills. ARC-AGI is the only benchmark that takes the opposite design choice – by focusing on tasks that are relatively easy for humans, yet hard, or impossible, for AI, we shine a spotlight on capability gaps that do not spontaneously emerge from "scaling up".

Greg Kamradt, ARC-AGI-2

Tags: evals, ai


shot-scraper 1.8

shot-scraper 1.8 I've added a new feature to shot-scraper that makes it easier to share scripts for other people to use with the shot-scraper javascript command. shot-scraper javascript lets you load up a web page in an invisible Chrome browser (via Playwright), execute some JavaScript against that page and output the results to your terminal. It's a fun way of running complex screen-scraping r

shot-scraper 1.8

I've added a new feature to shot-scraper that makes it easier to share scripts for other people to use with the shot-scraper javascript command.

shot-scraper javascript lets you load up a web page in an invisible Chrome browser (via Playwright), execute some JavaScript against that page and output the results to your terminal. It's a fun way of running complex screen-scraping routines as part of a terminal session, or even chained together with other commands using pipes.

The -i/--input option lets you load that JavaScript from a file on disk - but now you can also use a gh: prefix to specify loading code from GitHub instead.

To quote the release notes:

shot-scraper javascript can now optionally load scripts hosted on GitHub via the new gh: prefix to the shot-scraper javascript -i/--input option. #173

Scripts can be referenced as gh:username/repo/path/to/script.js or, if the GitHub user has created a dedicated shot-scraper-scripts repository and placed scripts in the root of it, using gh:username/name-of-script.

For example, to run this readability.js script against any web page you can use the following:

shot-scraper javascript --input gh:simonw/readability \
  https://simonwillison.net/2025/Mar/24/qwen25-vl-32b/

The output from that example starts like this:

{ "title": "Qwen2.5-VL-32B: Smarter and Lighter", "byline": "Simon Willison", "dir": null, "lang": "en-gb", "content": "<div id=\"readability-page-1\"...

My simonw/shot-scraper-scripts repo only has that one file in it so far, but I'm looking forward to growing that collection and hopefully seeing other people create and share their own shot-scraper-scripts repos as well.

This feature is an imitation of a similar feature that's coming in the next release of LLM.

Tags: playwright, shot-scraper, scraping, javascript, projects, github, annotated-release-notes


microsoft/playwright-mcp

microsoft/playwright-mcp The Playwright team at Microsoft have released an MCP (Model Context Protocol) server wrapping Playwright, and it's pretty fascinating. They implemented it on top of the Chrome accessibility tree, so MCP clients (such as the Claude Desktop app) can use it to drive an automated browser and use the accessibility tree to read and navigate pages that they visit. Trying it

microsoft/playwright-mcp

The Playwright team at Microsoft have released an MCP (Model Context Protocol) server wrapping Playwright, and it's pretty fascinating.

They implemented it on top of the Chrome accessibility tree, so MCP clients (such as the Claude Desktop app) can use it to drive an automated browser and use the accessibility tree to read and navigate pages that they visit.

Trying it out is quite easy if you have Claude Desktop and Node.js installed already. Edit your claude_desktop_config.json file:

code ~/Library/Application\ Support/Claude/claude_desktop_config.json

And add this:

{ "mcpServers": { "playwright": { "command": "npx", "args": [ "@playwright/mcp@latest" ] } } }

Now when you launch Claude Desktop various new browser automation tools will be available to it, and you can tell Claude to navigate to a website and interact with it.

I ran the following to get a list of the available tools:

cd /tmp
git clone https://github.com/microsoft/playwright-mcp
cd playwright-mcp/src/tools
files-to-prompt . | llm -m claude-3.7-sonnet \
  'Output a detailed description of these tools'

The full output is here, but here's the truncated tool list:

Navigation Tools (common.ts)

browser_navigate: Navigate to a specific URL
browser_go_back: Navigate back in browser history
browser_go_forward: Navigate forward in browser history
browser_wait: Wait for a specified time in seconds
browser_press_key: Press a keyboard key
browser_save_as_pdf: Save current page as PDF
browser_close: Close the current page

Screenshot and Mouse Tools (screenshot.ts)

browser_screenshot: Take a screenshot of the current page
browser_move_mouse: Move mouse to specific coordinates
browser_click (coordinate-based): Click at specific x,y coordinates
browser_drag (coordinate-based): Drag mouse from one position to another
browser_type (keyboard): Type text and optionally submit

Accessibility Snapshot Tools (snapshot.ts)

browser_snapshot: Capture accessibility structure of the page
browser_click (element-based): Click on a specific element using accessibility reference
browser_drag (element-based): Drag between two elements
browser_hover: Hover over an element
browser_type (element-based): Type text into a specific element

Tags: playwright, anthropic, claude, generative-ai, llm-tool-use, ai, llms, model-context-protocol

Monday, 24. March 2025

Identity Woman

Key themes going into IIW XL (40) 

We are heading into the 40th Internet Identity Workshop, an event that I cofounded with Doc Searls and Phil Windley 20 years ago. I am not sure we ever thought it would last this long but it has. We continue to be the world’s leading innovation forum for decentralized digital identity focused on a range […] The post Key themes going into IIW XL (40)  appeared first on Identity Woman.

We are heading into the 40th Internet Identity Workshop, an event that I cofounded with Doc Searls and Phil Windley 20 years ago. I am not sure we ever thought it would last this long but it has. We continue to be the world’s leading innovation forum for decentralized digital identity focused on a range […]

The post Key themes going into IIW XL (40)  appeared first on Identity Woman.

Monday, 24. March 2025

Simon Willison

Qwen2.5-VL-32B: Smarter and Lighter

Qwen2.5-VL-32B: Smarter and Lighter The second big open weight LLM release from China today - the first being DeepSeek v3-0324. Qwen's previous vision model was Qwen2.5 VL, released in January in 3B, 7B and 72B sizes. Today's Apache 2.0 licensed release is a 32B model, which is quickly becoming my personal favourite model size - large enough to have GPT-4-class capabilities, but small enough

Qwen2.5-VL-32B: Smarter and Lighter

The second big open weight LLM release from China today - the first being DeepSeek v3-0324.

Qwen's previous vision model was Qwen2.5 VL, released in January in 3B, 7B and 72B sizes.

Today's Apache 2.0 licensed release is a 32B model, which is quickly becoming my personal favourite model size - large enough to have GPT-4-class capabilities, but small enough that on my 64GB Mac there's still enough RAM for me to run other memory-hungry applications like Firefox and VS Code.

Qwen claim that the new model (when compared to their previous 2.5 VL family) can "align more closely with human preferences", is better at "mathematical reasoning" and provides "enhanced accuracy and detailed analysis in tasks such as image parsing, content recognition, and visual logic deduction".

They also offer some presumably carefully selected benchmark results showing it out-performing Gemma 3-27B, Mistral Small 3.1 24B and GPT-4o-0513 (there have been two more recent GPT-4o releases since that one, 2024-08-16 and 2024-11-20).

As usual, Prince Canuma had MLX versions of the models live within hours of the release, in 4 bit, 6 bit, 8 bit, and bf16 variants.

I ran the 4bit version (an 18GB model download) using uv and Prince's mlx-vlm like this:

uv run --with 'numpy<2' --with mlx-vlm \
  python -m mlx_vlm.generate \
  --model mlx-community/Qwen2.5-VL-32B-Instruct-4bit \
  --max-tokens 1000 \
  --temperature 0.0 \
  --prompt "Describe this image." \
  --image Mpaboundrycdfw-1.png

Here's the image:

And here's the resulting description, which I found to be very impressive:

This image appears to be a map highlighting specific areas along the coastline, likely in California, based on the geographical features and place names. Here is a detailed description of the image:

Geographic Features:

The map shows a section of the coastline, with land on the right and the sea on the left.
The land area includes parts of SanMateo County, as noted in the top-right corner.
The coastline features several named locations, including:
Point Montara
Pillar Point
Kings Rock
VicenteCreek
DennistonCreek
State Beach

Highlighted Areas:

The map contains two distinct areas marked with different colors:
Red Area (Top Section)
Labeled as "MONTARA" and "MONTARA M R" (possibly referring to a "Marine Reserves" or similar protected area). This area is marked with a dark red color and extends from the coastline inland.
Blue Area (Bottom Section)
Labeled as "PILLAR" and "PILLAR P O I N T" along with "SMCA" (Special Management Area). This area is marked with a blue color and is located closer to the coastline, including the area around "Pillar Point."

Depth Contours:

The map includes depth markings in the sea, indicating the bathymetrical features:
20 fathom (fm) line is marked in the upper-left part of the sea.
10 fathom (fm) line is marked closer to the coastline.
30 fathom (fm) line is marked further out to sea.

Other Features:

State Beach: Marked near the top-right, indicating a protected recreational area.
Kings Rock: A prominent feature near the coastline, likely a rocky outcropping.
Creeks: The map shows several creeks, including VicenteCreek and DennistonCreek, which flow into the sea.

Protected Areas:

The map highlights specific protected areas:
Marine Reserves: "MONTARA M R" (Marine Reserves) in red.
Special Management Area (SMCA): "PILLAR P O I N T" in blue, indicating a Special Management Area.

Grid and Coordinates:

The map includes a grid with latitude and longitude markings:
Latitude ranges from approximately 37°25'N to 37°35'N.
Longitude ranges from approximately 122°22.5'W to 122°35.5'W.

Topography:

The land area shows topographic features, including elevations and vegetation, with green areas indicating higher elevations or vegetated land.

Other Labels:

"SMR": Likely stands for "State Managed Reserves." "SMCA": Likely stands for "Special Management Control Area."

In summary, this map highlights specific protected areas along the coastline, including a red "Marine Reserves" area and a blue "Special Management Area" near "Pillar Point." The map also includes depth markings, geographical features, and place names, providing a detailed view of the region's natural and protected areas.

It included the following runtime statistics:

Prompt: 1051 tokens, 111.985 tokens-per-sec
Generation: 760 tokens, 17.328 tokens-per-sec
Peak memory: 21.110 GB

Tags: vision-llms, llm-release, generative-ai, ai, qwen, llms, uv, mlx, prince-canuma


Just a Theory

Mini Summit 2: Extension Search Path Patch

Join us at PostgreSQL Extension Mini Summit #2 this week, where PostgreSQL committer Peter Eisentraut will discuss the extension search path patch.

This Wednesday, March 26 at noon America/New_York (16:00 UTC), Peter Eisentraut has graciously agreed to give a talk at the Extension Mini Summit #2 on the extension search path patch he recently committed to PostgreSQL. I’m personally stoked for this topic, as freeing extensions from the legacy of a single directory opens up a number of new patterns for packaging, installation, and testing extensions. Hit the Meetup to register for this live video conference, and to brainstorm novel uses for this new feature, expected to debut in PostgreSQL 18.

More about… Postgres Extensions PGConf Summit Search Path Peter Eisentraut

2025 Postgres Extensions Mini Summit One

A rough transcript of my talk “State of the Extension Ecosystem”.

Back on March 12, we hosted the first in a series of PostgreSQL Extensions Mini Summits leading up to the Extension Ecosystem Summit at PGConf.dev on May 13. I once again inaugurated the series with a short talk on the State of the Extension Ecosystem. The talk was followed by 15 minutes or so of discussion. Here are the relevant links:

Video
OCI Demo
Slides

And now, with many thanks to Floor Drees for the effort, the transcript from the session.

Introduction

Floor Drees introduced the organizers:

David Wheeler, Principal Architect at Tembo, maintainer of PGXN
Yurii Rashkovskii, Omnigres
Keith Fiske, Crunchy Data
Floor Drees, Principal Program Manager at EDB, PostgreSQL CoCC member, PGDay Lowlands organizer

David presented a State of the Extension Ecosystem at this first event, and shared some updates from PGXN land.

The stream and the closed captions available for the recording are supported by PGConf.dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

State of the Extensions Ecosystem

So I wanted to give a brief update on the state of the Postgres extension ecosystem, the past, present, and future. Let’s give a brief history; it’s quite long, actually.

There were originally two approaches back in the day. You could use shared preload libraries to have it preload dynamic shareable libraries into the main process. And then you could do pure SQL stuff, including procedural languages like PL/Perl, PL/Tcl, and such.

And there were a few intrepid early adopters, including PostGIS, BioPostgres, PL/R, PL/Proxy, and pgTAP, who all made it work. Beginning with Postgres 9.1, Dimitri Fontaine added explicit support for extensions to the Postgres core itself. The key features included the ability to compile and install extensions. This is, again, pure SQL and shared libraries.

There are CREATE, UPDATE, and DROP EXTENSION commands in SQL that you can use to add extensions to a database, upgrade them to new versions and to remove them. And then pg_dump and pg_restore support so that extensions could be considered a single bundle to be backed up and restored with all of their individual objects being included as part of the backup.

Back then, a number of us, myself included, saw this as an opportunity to have the extensibility of Postgres itself be a fundamental part of the community and distribution. I was a long time user of Perl and used CPAN, and I thought we should have something like CPAN for Postgres. So, I proposed PGXN, the PostgreSQL Extension Network, back in 2010. The idea was to do distribution of source code. You would register namespaces for your extensions.

There was discovery via a website for search, documentation published, tags to help you find different kinds of objects, and support for installation through a command line interface, using the compile and install stuff that Postgres itself provides: PGXS and Configure.

This is what PGXN looks like today. It was launched in 2011. There’s a command line client, this website, an API and a registry you can upload your extensions to. The most recent one was pg_task a day or so ago.

In the interim, since that came out in 2011/2012, the cloud providers have come into their own with Postgres, but their support for extensions tends to be rather limited. For non-core extension counts, as of yesterday, Azure provides 38 extensions, GCP provides 44 extensions, and AWS 51. These are the third party extensions that don’t come with Postgres and its contrib itself. Meanwhile, PGXN has 420 extensions available to download, compile, build, and install.

A GitHub project that tracks random extensions on the internet, (joelonsql/PostgreSQL-EXTENSIONs.md), which is pretty comprehensive, has almost 1200 extensions listed. So the question is why is the support not more broad? Why aren’t there a thousand extensions available in every one of these systems?

This has been a fairly common question that’s come up in the last couple of years. A number of new projects have tried to fill in the gaps. One is Trusted Language Extensions. They wanted to make it easier to distribute extensions without needing dynamic shared libraries by adding additional features in the database itself.

The idea was to empower app developers to make it easy to install extensions via SQL functions rather than having to access the file system of the database server system itself. It can be portable, so there’s no compilation required, it hooks into the create extension command transparently, supports custom data types, and there have been plans for foreign data wrappers and background workers. I’m not sure how that’s progressed in the past year. The pg_tle extension itself was created by AWS and Supabase.

Another recent entrant in tooling for extensions is pgrx, which is native Rust extensions in Postgres. You build dynamic shared libraries, but write them in pure Rust. The API for pgrx provides full access to Postgres features, and still provides the developer-friendly tooling that Rust developers are used to. There’s been a lot of community excitement the last couple of years around pgrx, and it remains under active development — version 0.13.0 just came out a week or so ago. It’s sponsored and run out of the PgCentral Foundation.

There have also been several new registries that have come up to try to fill the gap and make extensions available. They have emphasized different things than PGXN. One was ease of use. So, for example, here pgxman says it should be really easy to install a client in a single command, and then it installs something, and then it downloads and installs a binary version of an extension.

And then there was platform neutrality. They wanted to do binary distribution and support multiple different platforms, to know what binary to install for a given platform. They provide stats. PGXN doesn’t provide any stats, but some of them list stats like how many downloads there have been, and how many in the last 180 days.

And curation. Trunk is another binary extension registry, from my employer, Tembo. They do categorization of all the extensions on Trunk, which is at 237 now. Quite a few people have come forward to tell us that they don’t necessarily use Trunk to install extensions, but use it to find them, because the categories are really helpful for people to figure out what sorts of things are even available and an option to use.

So here’s the State of the Ecosystem as I see it today.

There have been some lost opportunities from the initial excitement around 2010. Extensions remain difficult to find and discover. Some are on PGXN, some are on GitHub, some are on Trunk, some are on GitLab, etc. There’s no like one place to go to find them all.

They remain under-documented and difficult to understand. It takes effort for developers to write documentation for their extensions, and a lot of them aren’t able to. Some of them do write the documentation, but they might be in a format that something like PGXN doesn’t understand.

The maturity of extensions can be difficult to gauge. If you look at that list of 1200 extensions on GitHub, which ones are the good ones? Which ones do people care about? That page in particular shows the number of stars for each extension, but that’s the only metric.

They’re difficult to configure and install. This is something TLE really tried to solve, but the uptake on TLE has not been great so far, and it doesn’t support all the use cases. There are a lot of use cases that need to be able to access the internal APIs of Postgres itself, which means compiling stuff into shared libraries, and writing them in C or Rust or a couple of other compiled languages.

That makes them difficult to configure. You have to ask questions like: Which build system do I use? Do I install the tooling? How do I install it and configure it? What dependencies does it have? Et cetera.

There’s no comprehensive binary packaging. The Postgres community’s own packaging systems for Linux — Apt, and YUM — do a remarkably good job of packaging extensions. They probably have more extensions packaged for those platforms than any of the others. If they have the extension you need and you’re using the PGDG repositories, then this stuff is there. But even those are still like a fraction of all the potential available extensions that are out there.

Dependency management can be pretty painful. It’s difficult to know what you need to install. I was messing around yesterday with the PgSQL HTTP extension, which is a great extension that depends on libcurl. I thought maybe I could build a package that includes libcurl as part of it. But then I realized that libcurl depends on other packages, other dynamic libraries. So I’d have to figure out what all those are to get them all together.

A lot of that goes away if you use a system like apt or yum. But if you don’t, or you just want to install stuff on your Mac or Windows, it’s much more difficult.

Centralized source distribution, we’ve found, is insufficient. Even if all the extensions were available on PGXN, not everybody has the wherewithal or the expertise to find what they need, download it, compile it, and build it. Moreover, you don’t want to have a compiler on your production system, so you don’t want to be building stuff from source on your production system. So then you have to get to the business of building your own packages, which is a whole thing.

But in this state of the extension ecosystem we see new opportunities too. One I’ve been working on for the past year, which we call “PGXN v2”, is made possible by my employer, Tembo. The idea was to consider the emerging patterns — new registries and new ways of building and releasing and developing extensions — and to figure out the deficiencies, and to engage deeply with the community to work up potential solutions, and to design and implement a new architecture. The idea is to serve the community for the next decade and really make PGXN and its infrastructure the source of record for extensions for Postgres.

In the past year, I did a bunch of design work on it. Here’s a high level architectural view. We’d have a root registry, which is still the source code distribution stuff. There’s a web UX over it that would evolve from the current website. And there’s a command line client that knows how to build extensions from the registry.

But in addition to those three parts, which we have today, we would evolve a couple of additional parts.

One is “interactions”, so that when somebody releases a new extension on PGXN, some notifications could go out through webhooks or some sort of queue so that downstream systems like the packaging systems could know something new has come out and maybe automate building and updating their packages.

There could be “stats and reports”, so we can provide data like how many downloads there are, what binary registries make them available, what kinds of reviews and quality metrics rate them. We can develop these stats and display those on the website.

And, ideally, a “packaging registry” for PGXN to provide binary packages for all the major platforms of all the extensions we can, to simplify the installation of extensions for anybody who needs to use them. For extensions that aren’t available through PGDG, or if you’re not using that system and you want to install extensions. Late last year, I was focused on figuring out how to build the packaging system.

Another change that went down in the past year was the Extension Ecosystem Summit itself. This took place at PGConf.Dev last May. The idea was for a community of people to come together to collaborate, examine ongoing work in the extension distribution, examine challenges, identify questions, propose solutions, and agree on directions for execution. Let’s take a look at the topics that we covered last year at the summit.

One was extension metadata, where the topics covered included packaging and discoverability, extension development, compatibility and taxonomies as being important to represent a metadata about extensions — as well as versioning standards. One of the outcomes was an RFC for version two of the PGXN metadata that incorporates a lot of those needs into a new metadata format to describe extensions more broadly.

Another topic was the binary distribution format and what it should look like, if we were to have a major distribution format. We talked about being able to support multiple versions of an extension at one time. There was some talk about the Python Wheel format as a potential precedent for binary distribution of code.

There’s also an idea to distribute extensions through Docker containers, also known as the Open Container Initiative. Versioning came up here, as well. One of the outcomes from this session was another PGXN RFC for binary distribution, which was inspired by Python Wheel among other stuff.

I wanted to give a brief demo built on that format. I hacked some changes into the PGXS Makefile to add a new target, trunk, that builds a binary package called a “trunk” and uploads it to an OCI registry for distribution. Here’s what it looks like.

On my Mac I was compiling my semver extension. Then I go into a Linux container and compile it again for Linux using the make trunk command. The result is two .trunk files, one for Postgres 16 on Darwin and one for Postgres 16 on Linux.

There are also some JSON files that are annotations specifically for OCI. We have a command where we can push these images to an OCI registry.

Then we can use an install command that knows to download and install the version of the build appropriate for this platform (macOS). And then I go into Linux and do the same thing. It also knows, because of the OCI standard, what the platform is, and so it installs the appropriate binary.

Another topic was ABI and API compatibility. There was some talk at the Summit about what is the definition of an ABI and an API and how do we define internal APIs and their use? Maybe there’s some way to categorize APIs in Postgres core for red, green, or in-between, something like that. There was desire to have more hooks available into different parts of the system.

One of the outcomes of this session was that I worked with Peter Eisentraut on some stability guidance for the API and ABI that is now committed in the docs. You can read them now in the developer docs; they’ll be part of the Postgres 18 release. The idea is that minor version releases should be safe to use with other minor versions. If you compiled your extension against one minor version, it should be perfectly compatible with other minor versions of the same major release.

Interestingly, there was a release earlier this year, like two weeks after Peter committed this, where there was an API break. It’s the first time in like 10 years. Robert Treat and I spent quite a bit of time trying to look for a previous time that happened. I think there was one about 10 years ago, but then this one happened and, notably it broke the Timescale database. The Core Team decided to release a fix just a week later to restore the ABI compatibility.

So it’s clear that even though there’s guidance, and you should in general be able to rely on it (it was a motivating factor for a new release to fix an ABI break), there are no guarantees.

Another thing that might happen is that I proposed a Google Summer of Code project to build an ABI checker service. Peter [embarrassing forgetfulness and misattributed national identity omitted] Geoghegan POC’d an ABI checker in 2023. The project is to take Peter’s POC and build something that could potentially run on every commit or push to the back branches of the project. Maybe it could be integrated into the build farm so that, if there’s a back-patch to an earlier branch and it turns red, they quickly know the ABI was broken. This change could potentially provide a higher level of guarantee about the stability of the ABIs and APIs, even if they don’t end up using the word “guarantee”. I’m hoping this happens; a number of people have asked about it, and at least one person has written an application.

Another topic at the summit last year was including or excluding extensions in core. They’ve talked about when to add something to core, when to remove something from core, whether items in contrib should actually be moved into core itself, and whether to move metadata about extensions into the catalog. And once again, support for multiple versions came up; this is a perennial challenge! But I’m not aware of much work on these questions. I’m wondering if it’s time for a revisit.

As a bonus item — this wasn’t a formal topic at the summit last year, but it came up many times in the mini-summits — there is the challenge of packaging and lookup. There’s only one path to extensions in SHAREDIR. This creates a number of difficulties. Christoph Berg has a patch for PGDG and Debian that adds a second directory. This allowed the PGDG stuff to actually run tests against extensions without changing the core installation of the Postgres service itself. Another one is Cloud Native Postgres immutability. If that directory is part of the image for your CloudNative Postgres, you can’t install extensions into it.

It’s a similar issue for Postgres.app immutability. Postgres.app is a Mac app, and it’s signed by a certificate provided by Apple. But that means that if you install an extension in its SHAREDIR, it changes the signature of the application and it won’t start. They work around this issue through a number of symlink shenanigans, but these issues could be solved by allowing extensions to be installed in multiple locations.

Starting with Christoph’s search path patch and a number of discussions we had at PGConf last year, Peter Eisentraut has been working on a search path patch to the core that works similarly to shared preload libraries, but for finding extension control files. This would allow you to have them in multiple directories and it will find them in the path.

Another interesting development along this line: the CloudNativePG project has been using that extension search path patch to prototype a new feature coming to Kubernetes that allows one to mount a volume that’s actually another Docker image. If you have your extension distributed as an OCI image, you can specify that it be mounted and installed via your CNPG cluster configuration. That means when CNPG spins up, it puts the extension in the right place. It updates the search path variables and stuff just works.

A lot of the thought about this stuff went into a less formal RFC I wrote up on my blog, rather than on PGXN. The idea is to take these improvements and try to more formally specify the organization of extensions separate from how Postgres organizes shared libraries and shared files.

As I said, we’re bringing the Extension Summit back! There will be another Extension Summit hosted by our team of organizers: myself, Floor, Keith Fiske from Crunchy Data, and Yurii from Omnigres. That will be on May 13th in the morning at PGConf.dev; we appreciate their support.

The idea of these Mini Summits is to bring up a number of topics of interest. Have somebody come and do a 20 or 40 minute talk about it, and then we can have discussion about implications.

Floor mentioned the schedule, but briefly:

March 12: David Wheeler, PGXN: “State of the Extension Ecosystem”
March 24: Peter Eisentraut, Core Team: “Implementing an Extension Search Path”
April 9: Christoph Berg, Debian: “Apt Extension Packaging”
April 23:
May 7: Gabriele Bartolini, CNPG: “Extension Management in CloudNativePG”

So, what are your interests in extensions and how they can be improved? There are a lot of potential topics to talk about at the Summit or at these Mini Summits: development tools, canonical registry, how easy it is to publish, continuous delivery, yada, yada, yada, security scanning — all sorts of stuff that could go into conceiving, designing, developing, distributing extensions for Postgres.

I hope you all will participate. I appreciate you taking the time to listen to me for half an hour. So I’d like to turn it over to discussion, if people would like to join in and talk about implications of stuff. Also, we can get to any questions here.

Questions, comments, shout-outs

Floor: David, at one point you talked about metadata taxonomy. If you can elaborate on that a little bit, that’s Peter’s question.

David: So one that people told me that they found useful was one provided by Trunk. So it has these limited number of categories, so if you’re interested in machine learning stuff, you could go to the machine learning stuff and it shows you what extensions are potentially available. They have 237 extensions on Trunk now.

PGXN itself allows arbitrary tagging of stuff. It builds this little tag cloud. But if I look at this one here, you can see this one has a bunch of tags. These are arbitrary tags that are applied by the author. The current metadata looks like this. It’s just plain JSON, and it has a list of tags. The PGXN Meta v2 RFC has a bunch of examples. It’s an evolution of that META.json, so the idea is to have a classifications section that includes tags as before, but also adds categories, which are a limited list that would be controlled by the core [he means “root”] registry:

{ "classifications": { "tags": [ "testing", "pair", "parameter" ], "categories": [ "Machine Learning" ] } } Announcements

Yurii made a number of announcements, summarizing:

There is a new library that they’ve been developing at Omnigres that allows you to develop Postgres extensions in C++. For people who are interested in developing extensions in C++ and gaining the benefits of that and not having to do all the tedious things that we have to do with C extensions: look for Cppgres. Yurii thinks that within a couple of months it will reach parity with pgrx.

David: So it sounds like it would work more closely to the way PGXS and C works. Whereas pgrx has all these additional Rust crates you have to load and like slow compile times and all these dependencies.

Yurii: This is just like a layer over the C stuff, an evolution of that. It’s essentially a header only library, so it’s a very common thing in the C++ world. So you don’t have to build anything and you just include a file. And in fact the way I use it, I amalgamate all the header files that we have into one. Whenever I include it in the project, I just copy the amalgamation and it’s just one file. You don’t have any other build chain associated yet. It is C++ 20, which some people consider new, but by the time it’s mature it’s already five years old and most compilers support it. They have decent support of C++ 20 with a few exclusions, but those are relatively minor. So for that reason, it’s not C++ 23, for example, because it’s not very well supported across compilers, but C++ 20 is.

Yurii is giving a talk about PostgresPM at the Postgres Conference in Orlando. He’ll share the slides and recording with this group. The idea behind PostgresPM is that it applies a lot of heuristics: it takes the URLs of packages and of extensions and creates packages for different outputs, like for Red Hat, for Debian, and perhaps for some other formats in the future. It focuses on the idea that a lot of things can be figured out.

For example: do we have a new version? Well, we can look at the list of tags in the Git repo. Very commonly that works for say 80 percent of extensions. Do we need a C compiler? We can see whether we have C files. We can figure out a lot of stuff without packagers having to specify that manually every time they have a new extension. And they don’t have to repackage every time there is a new release, because we can detect new releases and try to build.

Yurii is also running an event that, while not affiliated with PGConf.dev, is strategically scheduled to happen one day before PGConf.dev: Postgres Extensions Day. The Call for Speakers is open until April 1st. There’s also an option for people who cannot or would not come to Montreal this year to submit a prerecorded talk. The point of the event is not just to bring people together, but also to surface content that can be interesting to other people. The event itself is free.

Make sure to join our Meetup group and join us live, March 26, when Peter Eisentraut joins us to talk about implementing an extension search path.

More about… Postgres PGXN Extensions PGConf Summit

Bill Wendels Real Estate Cafe

Is proposed MLSpin Settlement the April Fool’s Day joke #RECartel doesn’t want consumers retelling?

Homebuyers & sellers, intend to buy or sell this Spring and talking to real estate agents about their fees? Have any mentioned that is just… The post Is proposed MLSpin Settlement the April Fool’s Day joke #RECartel doesn’t want consumers retelling? first appeared on Real Estate Cafe.

Homebuyers & sellers, intend to buy or sell this Spring and talking to real estate agents about their fees? Have any mentioned that is just…

The post Is proposed MLSpin Settlement the April Fool’s Day joke #RECartel doesn’t want consumers retelling? first appeared on Real Estate Cafe.

Simon Willison

deepseek-ai/DeepSeek-V3-0324

deepseek-ai/DeepSeek-V3-0324 Chinese AI lab DeepSeek just released the latest version of their enormous DeepSeek v3 model, baking the release date into the name DeepSeek-V3-0324. The license is MIT (that's new - previous DeepSeek v3 had a custom license), the README is empty and the release adds up a to a total of 641 GB of files, mostly of the form model-00035-of-000163.safetensors. The mode

deepseek-ai/DeepSeek-V3-0324

Chinese AI lab DeepSeek just released the latest version of their enormous DeepSeek v3 model, baking the release date into the name DeepSeek-V3-0324.

The license is MIT (that's new - previous DeepSeek v3 had a custom license), the README is empty and the release adds up to a total of 641 GB of files, mostly of the form model-00035-of-000163.safetensors.

The model only came out a few hours ago and MLX developer Awni Hannun already has it running at >20 tokens/second on a 512GB M3 Ultra Mac Studio ($9,499 of ostensibly consumer-grade hardware) via mlx-lm and this mlx-community/DeepSeek-V3-0324-4bit 4bit quantization, which reduces the on-disk size to 352 GB.

I think that means if you have that machine you can run it with my llm-mlx plugin like this, but I've not tried myself!

llm mlx download-model mlx-community/DeepSeek-V3-0324-4bit
llm chat -m mlx-community/DeepSeek-V3-0324-4bit

The new model is also listed on OpenRouter. You can try a chat at openrouter.ai/chat?models=deepseek/deepseek-chat-v3-0324:free.

Here's what the chat interface gave me for "Generate an SVG of a pelican riding a bicycle":

I have two API keys with OpenRouter - one of them worked with the model, the other gave me a No endpoints found matching your data policy error - I think because I had a setting on that key disallowing models from training on my activity. The key that worked was a free key with no attached billing credentials.

For my working API key the llm-openrouter plugin let me run a prompt like this:

llm install llm-openrouter
llm keys set openrouter
# Paste key here
llm -m openrouter/deepseek/deepseek-chat-v3-0324:free "best fact about a pelican"

Here's that "best fact" - the terminal output included Markdown and an emoji combo, here that's rendered.

One of the most fascinating facts about pelicans is their unique throat pouch, called a gular sac, which can hold up to 3 gallons (11 liters) of water—three times more than their stomach!

Here’s why it’s amazing:
- Fishing Tool: They use it like a net to scoop up fish, then drain the water before swallowing.
- Cooling Mechanism: On hot days, pelicans flutter the pouch to stay cool by evaporating water.
- Built-in "Shopping Cart": Some species even use it to carry food back to their chicks.

Bonus fact: Pelicans often fish cooperatively, herding fish into shallow water for an easy catch.

Would you like more cool pelican facts? 🐦🌊

In putting this post together I got Claude to build me this new tool for finding the total on-disk size of a Hugging Face repository, which is available in their API but not currently displayed on their website.
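
A rough sketch of the same calculation follows; treat it as an illustration rather than Simon's tool. It assumes the Hub's /api/models/{repo}/tree/{revision} endpoint with a recursive parameter, which lists repository files with their sizes, and it ignores pagination for brevity.

// Hedged sketch: sums the reported size of every file in a Hugging Face repo.
// The tree endpoint and its "recursive" parameter are assumptions about the
// Hub API; the repository name is only an example.
using System.Text.Json;

var repo = "deepseek-ai/DeepSeek-V3-0324";
using var http = new HttpClient();

var json = await http.GetStringAsync(
    $"https://huggingface.co/api/models/{repo}/tree/main?recursive=true");

long totalBytes = 0;
using var doc = JsonDocument.Parse(json);
foreach (var entry in doc.RootElement.EnumerateArray())
{
    // Directory entries have no size; file entries report their size in bytes.
    // (Pagination via Link headers is ignored here for brevity.)
    if (entry.TryGetProperty("size", out var size) && size.ValueKind == JsonValueKind.Number)
    {
        totalBytes += size.GetInt64();
    }
}

Console.WriteLine($"{repo}: {totalBytes / (1024.0 * 1024 * 1024):F1} GB");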

Update: Here's a notable independent benchmark from Paul Gauthier:

DeepSeek's new V3 scored 55% on aider's polyglot benchmark, significantly improving over the prior version. It's the #2 non-thinking/reasoning model, behind only Sonnet 3.7. V3 is competitive with thinking models like R1 & o3-mini.

Tags: llm-release, hugging-face, generative-ai, deepseek, ai, llms, mlx, llm, ai-assisted-programming, tools, pelican-riding-a-bicycle


@_Nat Zone

Digital Agency and JR West run a proof of concept for enrollment verification using My Number Card: toward online purchase of student-discount tickets

Student-discount services based on digital student credentials have been under discussion for quite some time. In my own circles, work with the National Institute of Informatics (NII) on a Student Identity Trust Framework was already under way by 2012 at the latest. What has now been announced builds on these…

Student-discount services based on digital student credentials have been under discussion for quite some time. In my own circles, work with the National Institute of Informatics (NII) on a Student Identity Trust Framework was already under way by 2012 at the latest.

Overview and future outlook of the trust framework for student IDs (2012.5)
A new era of inter-university collaboration opened up by academic authentication federation (2012.12)
A new era of linking the GakuNin system with student-discount services: the convenient and secure student life made possible by trust frameworks (2016.10)

What has now been announced sits on this same line of work, while adopting the My Number Card, the Digital Authentication App, Microsoft Authenticator and Verifiable Credentials (VCs) as the implementation technology. In a sense, mapping it onto the analogy of paper documents and cards may have made it more intuitively "understandable" for the general public. Below is an overview.

Overview

JR West and the Digital Agency carried out a proof of concept at Osaka Station that uses the My Number Card to verify university students' enrollment. In the trial, students obtain a digital certificate of enrollment online and use their My Number Card for identity verification when purchasing a digital ticket, removing the need for the traditional counter procedure of presenting a paper enrollment certificate and aiming to improve convenience and operational efficiency. Integration between the service provided by the National Institute of Informatics (NII) and JR West's digital platform "WESTER" is also in view, and in the future application to other public institutions and services is envisaged.

The announcement covers mainly the following points.

1. Digitizing enrollment verification with the My Number Card
Until now, students had to present a paper certificate of enrollment at a station counter when buying a student-discount ticket.
In this trial, students obtain a digital certificate of enrollment online and use the My Number Card for identity verification, so the procedure can be completed entirely online.
The Nikkei article includes a photo from the trial of "a web ticket purchased after verifying enrollment with the My Number Card".
According to JR West's press release, the flow is that students first download the "Digital Authentication App" and "Microsoft Authenticator" onto their smartphone, obtain their enrollment information from the university's web system, and then link it to Microsoft Authenticator.

The detailed flow is as shown in Figure 1 below.

Figure 1: The flow according to JR West's press release

1. Download the "Digital Authentication App" [1] and "Microsoft Authenticator" [2] onto a smartphone
2. The student logs into the university's web system and obtains the enrollment certificate information provided by the National Institute of Informatics (NII)
3. The obtained enrollment information is linked to Microsoft Authenticator and stored on the student's own smartphone
4. The student logs into "Machi no Otomo Passport", the electronic ticket sales platform operated by JR West, using the "Digital Authentication App"
5. The "Digital Authentication App" launches, reads the student's own My Number Card and performs authentication, confirming that the information registered with MAB [3] matches the person's identity information
6. When obtaining the student-only discount ticket shown in "Machi no Otomo Passport", Microsoft Authenticator is invoked to verify that the person authenticated by MAB is enrolled
7. The student-only ticket is obtained

That said, the relationships are hard to follow from this alone, so, exercising plenty of imagination, I drew up a simplified sequence diagram (Figure 2).

Figure 2: Sequence diagram of the JR West / Digital Agency proof of concept

Points that personally caught my attention include the following.

Compared with doing this on a conventional federation basis (SAML or OpenID Connect), Figure 1 adds steps 3 and 6. In return, the university cannot tell where the student used the student discount. (Conversely, if it is acceptable for the university to know, these steps could be omitted.)
From this press release it is not clear how the issued enrollment credential can be revoked. (Depending on how this is done, the university could end up learning where the credential was used, so care is needed. Presumably something like a VC Status List will be used.)
It relies on so-called attribute-based record matching, and this is why the My Number Card and the Digital Authentication App are used. (If you decide to trust the university-issued credential in the first place, the four basic identity attributes from the My Number Card should be unnecessary. If the goal is one use per person, the user identifier (sub value) issued by the Digital Authentication App should be enough; and if you are willing to treat cases such as someone enrolled at multiple schools as exceptions and ignore them, even that should not be needed.)

2. Online purchase of a student-discount ticket and passing the gates by face recognition
In the trial, a student at Osaka University used the purchased digital ticket and passed through the ticket gate by face recognition.
Yuya Takeuchi (武内祐哉), a graduate student at Osaka University, spoke of its convenience: "Everything was completed online and I got through the gate smoothly" (Nikkei).

* It is not clear at which step the facial image used for the face recognition was captured; that is something I would like to ask about.

3. Cooperation among the organizations involved
This trial was carried out after JR West was selected for a Digital Agency open-call project, and it uses the digital enrollment certificate issuance service provided by the National Institute of Informatics (NII).
Parliamentary Vice-Minister for Digital Nobuchiyo Kishi said, "In the future we want to aim for application at multiple universities and public institutions", underlining the importance of this cooperation (Nikkei).

4. JR West's aims and future outlook
Beyond streamlining counter operations, JR West says it wants to "increase touchpoints with students and other young people and deepen our services" (Hideo Okuda, head of JR West's Digital Solution Headquarters, Nikkei).
JR West is aiming for a full-scale launch of My Number Card based enrollment verification with Osaka University as early as the beginning of 2026.
In the future, expanding services through integration with "WESTER", the common ID of the JR West group, is also under consideration.
The Ketai Watch article explains that this trial "was chosen as a representative case, given that many enrollment certificates are still issued on paper today", and that rollout to universities nationwide and application to other credentials is also in scope.
JR West's press release states that in fiscal 2025 it will work with Osaka University to refine the scheme so that it is easy for students to understand and use.
In the long term it also envisages use well beyond transport, in areas such as rental cars, buses, air tickets and museums (Ketai Watch).

5. Use of the "Mobility Auth Bridge (MAB)" technology platform
According to JR West's press release, the trial uses the "Mobility Auth Bridge (MAB)" ID platform, which is also deployed in KANSAI MaaS.
MAB is a secure ID service that, with the user's consent, lets a single ID be used across a variety of services; JR West developed it, with support from NTT Communications and ITOCHU Techno-Solutions, as infrastructure needed for a digital society.
Using MAB makes it possible to collect and utilize customer data for digital services, and contributions to the region are also expected.

6. Contribution to the SDGs
JR West's press release states that this initiative contributes to the SDGs, in particular "3. Good health and well-being", "9. Industry, innovation and infrastructure", "11. Sustainable cities and communities" and "17. Partnerships for the goals".

Quotes

(Yuya Takeuchi) "Everything was completed online and I got through the gate smoothly" (Nikkei)
(Parliamentary Vice-Minister Nobuchiyo Kishi) "In the future we want to aim for application at multiple universities and public institutions" (Nikkei)
(Hideo Okuda, Director and Senior Managing Executive Officer, JR West) "We want to increase touchpoints with students and other young people and deepen our services" (Nikkei)
(JR West press release) "We have developed, as a trial, a scheme that can be completed online, from identity verification through to using the student discount."
(Ketai Watch) "Going through the My Number Card proves that the user is the person in question"
(Ketai Watch) "Going forward, rollout to universities nationwide and application to other credentials such as diplomas and academic transcripts is also in scope."
(JR West press release) "It is a secure ID service that, with the user's consent, lets them use the various services of the municipalities and companies participating in MAB with a single ID."

Conclusion

The JR West / Digital Agency proof of concept for enrollment verification using the My Number Card suggests real potential to improve convenience for students and operational efficiency for the railway operator. Digital enrollment certificates, online purchase of student-discount tickets and smooth passage through the gates by face recognition point toward the next generation of public transport use. Building on the results of the trial, nationwide rollout and application to other fields can be expected. In addition, use of the underlying MAB platform may promote integration with a variety of regional services and contribute to a more convenient and prosperous society.

(Appendix) The sequence diagram code

title JR西日本-デジタル庁実証実験想像図

participant 学生 as U
participant ブラウザ as UA
participant ウォレット as W

participant 大学 as NII
participant デジタル認証アプリサーバ as D
participant デジタル認証アプリ as Dc
participant MAB as Mobile Auth Bridge
participant チケットサイト as T

autonumber on

U-->UA: 在学証明書発行依頼
UA->NII: 在学証明書発行依頼
NII->UA: ログイン画面表示
U-->UA: ログイン情報入力
UA->NII: ログイン情報
NII->NII: ログイン情報チェック
NII->UA: MS Authenticator 呼び出し+トークン
UA->W: 起動, トークン
W->NII: 在学証明書取得 w/トークン
NII->W: 在学証明書
U-->UA: チケットサイトアクセス
UA->T: アクセス
T-->MAB: MABに認証要求
MAB-->UA: ログイン画面
UA-->U: ログイン画面表示
U-->UA: デジタル認証アプリで認証を選択
UA->MAB: 選択結果通知
MAB-->UA: リダイレクト
UA->D: ユーザ認証要求
D-->UA: デジタル認証アプリ起動
UA->Dc: 起動
Dc-->U: マイナンバーカード提示要求
U-->Dc: マイナカード提示
Dc->D: スキャン結果提示
D->D: スキャン結果検証
D-->UA: リダイレクト w/code
UA->MAB: code 提示
MAB->D: code, client assertion提示
D->MAB: 4情報返却
MAB->MAB: MABに登録した情報と上記4情報が同一であることを確認
MAB->T: OK
T-->UA: チケット一覧表示
UA-->U: 表示
U-->UA: チケット選択
UA->T: チケット選択
T-->UA: 在学証明要求
UA->W: 起動+在学証明要求
W-->U: 在学証明提示許可要求
U-->W: 許可
W-->UA: 在学証明書
UA->T: 在学証明書
T->T: 資格確認
T-->UA: 購入成功表示

Sunday, 23. March 2025

Simon Willison

Semantic Diffusion

Semantic Diffusion I learned about this term today while complaining about how the definition of "vibe coding" is already being distorted to mean "any time an LLM writes code" as opposed to the intended meaning of "code I wrote with an LLM without even reviewing what it wrote". I posted this salty note: Feels like I'm losing the battle on this one, I keep seeing people use "vibe coding" to

Semantic Diffusion

I learned about this term today while complaining about how the definition of "vibe coding" is already being distorted to mean "any time an LLM writes code" as opposed to the intended meaning of "code I wrote with an LLM without even reviewing what it wrote".

I posted this salty note:

Feels like I'm losing the battle on this one, I keep seeing people use "vibe coding" to mean any time an LLM is used to write code

I'm particularly frustrated because for a few glorious moments we had the chance at having ONE piece of AI-related terminology with a clear, widely accepted definition!

But it turns out people couldn't be trusted to read all the way to the end of Andrej's tweet, so now we are back to yet another term where different people assume it means different things

Martin Fowler coined Semantic Diffusion in 2006 with this very clear definition:

Semantic diffusion occurs when you have a word that is coined by a person or group, often with a pretty good definition, but then gets spread through the wider community in a way that weakens that definition. This weakening risks losing the definition entirely - and with it any usefulness to the term.

What's happening with vibe coding right now is such a clear example of this effect in action! I've seen the same thing happen to my own coinage prompt injection over the past couple of years.

This kind of dilution of meaning is frustrating, but does appear to be inevitable. As Martin Fowler points out it's most likely to happen to popular terms - the more popular a term is the higher the chance a game of telephone will ensue where misunderstandings flourish as the chain continues to grow.

Andrej Karpathy, who coined vibe coding, posted this just now in reply to my article:

Good post! It will take some time to settle on definitions. Personally I use "vibe coding" when I feel like this dog. My iOS app last night being a good example. But I find that in practice I rarely go full out vibe coding, and more often I still look at the code, I add complexity slowly and I try to learn over time how the pieces work, to ask clarifying questions etc.

I love that vibe coding has an official illustrative GIF now!

Tags: language, vibe-coding, andrej-karpathy, martin-fowler


Doc Searls Weblog

Real Agency

I nominate agency as Word of the Year for 2025. I don’t nominate agentic, which is suddenly hot shit: See, agency is a noun, and agentic is an adjective. And, as Strunk and White taught us, Write with nouns and verbs, not with adjectives and adverbs… it is nouns and verbs, not their assistants, that […]

I nominate agency as Word of the Year for 2025.

I don’t nominate agentic, which is suddenly hot shit:

See, agency is a noun, and agentic is an adjective. And, as Strunk and White taught us,

Write with nouns and verbs, not with adjectives and adverbs… it is nouns and verbs, not their assistants, that give good writing its toughness and color.

The word agency is derived from the Latin agere, meaning to do. It speaks of our capacity to act with effect in the world. Or, in the words of the OED (the print version, not the website): Action or instrumentality embodied or personified as concrete existence.

In a chapter of The Intention Economy titled Agency, I say this under a subhead titled The Argument:

Agency is personal. It is the source of confidence behind all intention. By its nature the networked marketplace welcomes full agency for customers. So, because the best vendors are customer driven, there will be many more ways for both vendors and customers to thrive in the networked marketplace, and therefore also in the Intention Economy.

Later I add,

In the Intention Economy, liberated customers enjoy full agency for themselves, and employ agents who respect and apply the powers that customers grant them.

I wrote that (and created the graphic at the top of this page) thirteen years ago, when nobody was talking about agency, but I thought somebody had to.

Now it seems everybody is talking about it. I am sure that’s because we have AI. Or, more specifically, agentic AI. That’s what makes agentic so hot:

My concern with both agentic and agentic AI is that concentrating development on AI agents (and digital “twins”) alone may neglect, override, or obstruct the agency of human beings, rather than extending or enlarging it. (For more on this, read Agentic AI Is the Next Big Thing but I’m Not Sure It’s What, by Adam Davidson in How to Geek. Also check out my Personal AI series, which addresses this issue most directly in Personal vs. Personal AI.)

So, what will give you real agency—an archimedean lever that gives you enough leverage to move worlds?

Meet IEEE P7012, which “identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines.” It has been in the works since 2017, and should be ready later this year. (I say this as chair of the standard’s working group.) The nickname for P7012 is MyTerms (much as the nickname for the IEEE’s 802.11 standard is Wi-Fi). The idea behind MyTerms is that the sites and services of the world should agree to your terms, rather than the other way around.

MyTerms creates a new regime for privacy: one based on contract. With each MyTerm you are the first party. Not the website, the service, or the app maker. They are the second party. And terms can be friendly. For example, a prototype term called NoStalking says “Just show me ads not based on tracking me.” This is good for you, because you don’t get tracked, and good for the site because it leaves open the advertising option. NoStalking lives at Customer Commons, much as personal copyrights live at Creative Commons. (Yes, the former is modeled on the latter.)

On the Creative Commons model, agreements take three forms:

The MyTerms standard addresses just the Machine Readable layer. It also doesn’t say exactly what tools should be developed. Basically, it just says,

The person is the first party
The site, service or app is the second party
The person chooses an agreement from a limited roster (resembling Creative Commons’ roster of licenses) listed by a disinterested non-profit
Both sides keep an identical record of what they agreed to

On your side—the first-party side—browser makers can build something into their product, or any developer can make a browser add-on (Firefox) or extension (the rest of them). On the site’s side—the second-party side—CMS makers can build something in, or any developer can make a plug-in (WordPress) or a module (Drupal).

Mobile app toolmakers can also come up with something (or many things).

For the Legal Code and Human Readable layers, we (Customer Commons and ProjectVRM) have been at work on a list of prototypes for the roster of agreements. We’ll present these on April 7 at VRM Day, at the Computer History Museum in Silicon Valley. Discussion will happen both there and through the following three days in the same location at the 40th Internet Identity Workshop (IIW). VRM Day is free. IIW isn’t, but is cheap for a three-day conference that (IMHO) is the most leveraged in the world.

So let’s make this happen and show the world what agency really means.

And, if you’re interested in helping support Customer Commons, use some of that agency to hit the Donate button on its home page. Thanks!


Simon Willison

Next.js and the corrupt middleware: the authorizing artifact

Next.js and the corrupt middleware: the authorizing artifact Good, detailed write-up of the Next.js vulnerability CVE-2025-29927 by Allam Rachid, one of the researchers who identified the issue. The vulnerability is best illustrated by this code snippet: const subreq = params.request.headers['x-middleware-subrequest']; const subrequests = typeof subreq === 'string' ? subreq.split(':') : [];

Next.js and the corrupt middleware: the authorizing artifact

Good, detailed write-up of the Next.js vulnerability CVE-2025-29927 by Allam Rachid, one of the researchers who identified the issue.

The vulnerability is best illustrated by this code snippet:

const subreq = params.request.headers['x-middleware-subrequest'];
const subrequests = typeof subreq === 'string' ? subreq.split(':') : [];
// ...
for (const middleware of this.middleware || []) {
  // ...
  if (subrequests.includes(middlewareInfo.name)) {
    result = {
      response: NextResponse.next(),
      waitUntil: Promise.resolve(),
    };
    continue;
  }
}

This was part of Next.js internals used to help avoid applying middleware recursively to requests that are re-dispatched through the framework.

Unfortunately it also meant that attackers could send a x-middleware-subrequest HTTP header with a colon-separated list of middleware names to skip. If a site used middleware to apply an authentication gate (as suggested in the Next.js documentation) an attacker could bypass that authentication using this trick.
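
As a rough sketch of what such a bypass attempt looked like before the fix (the target URL and the middleware name to skip are hypothetical placeholders; the exact name depended on the application's middleware file and Next.js version):

// Hedged sketch: a request carrying the x-middleware-subrequest header that
// vulnerable Next.js versions trusted. "middleware" is a placeholder for the
// middleware name to skip; the URL is a placeholder for a protected page.
using var client = new HttpClient();

var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/protected/dashboard");
request.Headers.Add("x-middleware-subrequest", "middleware");

var response = await client.SendAsync(request);
Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");

On a vulnerable deployment the middleware never runs, so the authentication gate it implements is silently skipped.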

The vulnerability has been fixed in Next.js 15.2.3 - here's the official release announcement talking about the problem.

Tags: security, javascript


Quoting Jacob Kaplan-Moss


If you’re new to tech, taking [career] advice on what works for someone with a 20-year career is likely to be about as effective as taking career advice from a stockbroker or firefighter or nurse. There’ll be a few things that generalize, but most advice won’t.

Further, even advice from people with long careers on what worked for them when they were getting started is unlikely to be advice that works today. The tech industry of 15 or 20 years ago was, again, dramatically different from tech today.

Jacob Kaplan-Moss, Beware tech career advice from old heads

Tags: jacob-kaplan-moss, careers

Saturday, 22. March 2025

Simon Willison

simonw/ollama-models-atom-feed


I set up a GitHub Actions + GitHub Pages Atom feed of scraped recent models data from the Ollama latest models page - Ollama remains one of the easiest ways to run models on a laptop so a new model release from them is worth hearing about.

I built the scraper by pasting example HTML into Claude and asking for a Python script to convert it to Atom - here's the script we wrote together.

Update 25th March 2025: The first version of this included all 160+ models in a single feed. I've upgraded the script to output two feeds - the original atom.xml one and a new atom-recent-20.xml feed containing just the most recent 20 items.

I modified the script using Google's new Gemini 2.5 Pro model, like this:

cat to_atom.py | llm -m gemini-2.5-pro-exp-03-25 \
  -s 'rewrite this script so that instead of outputting Atom to stdout it saves two files, one called atom.xml with everything and another called atom-recent-20.xml with just the most recent 20 items - remove the output option entirely'

Here's the full transcript.
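The gist of that two-feed change, as a hedged sketch (hypothetical build_feed helper and entry structure, not the actual to_atom.py code):

# Sketch of the two-feed output described above: write every entry to atom.xml
# and only the newest 20 to atom-recent-20.xml. Entries are assumed sorted newest-first.
def build_feed(entries):
    items = "\n".join(
        f"<entry><title>{e['title']}</title><updated>{e['updated']}</updated></entry>"
        for e in entries
    )
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<feed xmlns="http://www.w3.org/2005/Atom">\n' + items + "\n</feed>"
    )

def write_feeds(entries):
    with open("atom.xml", "w") as f:
        f.write(build_feed(entries))
    with open("atom-recent-20.xml", "w") as f:
        f.write(build_feed(entries[:20]))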

Tags: github-actions, git-scraping, ai, ollama, llms, ai-assisted-programming, generative-ai, projects, github, claude, atom, gemini


Doc Searls Weblog

Heavy Whether


We'll know soon. Whether or not you're watching St. John's playing Arkansas, right now, in the NCAA's March Madness tournament, take out a minute and a half to take in Jimmy Fallon and the boys singing the Red Storm Shanty. Lou Carnesecca must be glowing in his grave. (Later: they lost.)

Friday, 21. March 2025

Simon Willison

The "think" tool: Enabling Claude to stop and think in complex tool use situations

The "think" tool: Enabling Claude to stop and think in complex tool use situations Fascinating new prompt engineering trick from Anthropic. They use their standard tool calling mechanism to define a tool called "think" that looks something like this: { "name": "think", "description": "Use the tool to think about something. It will not obtain new information or change the database, but just

The "think" tool: Enabling Claude to stop and think in complex tool use situations

Fascinating new prompt engineering trick from Anthropic. They use their standard tool calling mechanism to define a tool called "think" that looks something like this:

{ "name": "think", "description": "Use the tool to think about something. It will not obtain new information or change the database, but just append the thought to the log. Use it when complex reasoning or some cache memory is needed.", "input_schema": { "type": "object", "properties": { "thought": { "type": "string", "description": "A thought to think about." } }, "required": ["thought"] } }

This tool does nothing at all.

LLM tools (like web_search) usually involve some kind of implementation - the model requests a tool execution, then an external harness goes away and executes the specified tool and feeds the result back into the conversation.

The "think" tool is a no-op - there is no implementation, it just allows the model to use its existing training in terms of when-to-use-a-tool to stop and dump some additional thoughts into the context.

This works completely independently of the new "thinking" mechanism introduced in Claude 3.7 Sonnet.

Anthropic's benchmarks show impressive improvements from enabling this tool. I fully anticipate that models from other providers would benefit from the same trick.

Via @alexalbert__

Tags: prompt-engineering, anthropic, claude, generative-ai, ai, llms, llm-tool-use


Doc Searls Weblog

Gleanings


But that’s the idea, right? Lucas Ropek in Gizmodo: Data Broker Brags About Having Highly Detailed Personal Information on Nearly All Internet Users: The advertising industry is immensely powerful and disturbingly opaque. Read it. Then look at a PageXray of that same story to see how much tracking Gizmodo does, and how deeply embedded it is in that same fecosystem:

Care to report on that, Lucas? If you do, you won’t be the first. That was Farhad Manjoo, here, six years ago in The New York Times. You’ll be the second.

What if it helps everyone do a better job? Steven Wright: “I take my dog for walks outside my apartment… on the ledge. Some people are afraid of heights. I’m afraid of widths.” Somehow that joke (as best I recall it) comes to mind when I consider OpenAI’s Deep Research. Everybody I know who uses it gushes about all the amazing things it can do. I mean, a lot. A company CEO says it produces marvels with sales and usage data, with forecasting, with you-name-it. A law professor says it radically improves her job in almost every way. An innovation director says “It will change your life.” But there is push-back, naturally, on the usual AI threat: it will cost jobs. We’ll see.


Simon Willison

Anthropic Trust Center: Brave Search added as a subprocessor


Yesterday I was trying to figure out if Anthropic has rolled their own search index for Claude's new web search feature or if they were working with a partner. Here's confirmation that they are using Brave Search:

Anthropic's subprocessor list. As of March 19, 2025, we have made the following changes:

Subprocessors added:

Brave Search (more info)

That "more info" links to the help page for their new web search feature.

I confirmed this myself by prompting Claude to "Search for pelican facts" - it ran a search for "Interesting pelican facts" and the ten results it showed as citations were an exact match for that search on Brave.

And further evidence: if you poke at it a bit Claude will reveal the definition of its web_search function which looks like this - note the BraveSearchParams property:

{ "description": "Search the web", "name": "web_search", "parameters": { "additionalProperties": false, "properties": { "query": { "description": "Search query", "title": "Query", "type": "string" } }, "required": [ "query" ], "title": "BraveSearchParams", "type": "object" } }

Via @zugaldia.bsky.social

Tags: anthropic, claude, generative-ai, llm-tool-use, search, ai, llms

Thursday, 20. March 2025

Doc Searls Weblog

It’s Over

A Google StreetView of the entrance to the Edward R. Murrow Transmitting Station for the Voice Of America, near Greenville, North Carolina

The Voice of America is silent.

To Donald Trump, Elon Musk, and Kari Lake (who now runs its corpse), the VOA was corrupt, biased, unnecessary, and needed to go. To nearly everyone else who cares, it was America’s voice on radio, and mattered enormously to an audience in the hundreds of millions, listening in forty-eight languages. For many of those, the VOA was the best, and in some cases the only, source of relatively unbiased news.

I am sure some of it was biased. Reporters tend to stand in a place, and most who report on political matters stand on the left (see here and here). Dan Robinson, who spent thirty-four years with the VOA, stands on the right and posted a long bill of particulars in Voice of America, Global Media Agency are rightly in budget cut bull’s-eye: Entire operation should be dismantled, in the Washington Times. An excerpt:

 I have monitored the agency’s bureaucracy along with many of its reporters and concluded that it has essentially become a hubris-filled rogue operation often reflecting a leftist bias aligned with partisan national media. It has sought to avoid accountability for violations of journalistic standards and mismanagement. The list includes:

• During President-elect Donald Trump’s first term, VOA allowed and often encouraged key correspondents to carry out opposition journalism against him. This included an uprising, growing out of VOA’s central newsroom, against Mr. Trump’s choice of chief USAGM executive in 2020.

• That newsroom and the agency have a known left-wing bias. Between 2016 and 2020, some VOA reporters did little to hide their disdain for Mr. Trump in their reports and social media posts.

• Since Amanda Bennett, VOA director from 2016 to mid-2020, returned as USAGM chief in 2022, managers and employees dissed Republicans in Congress who criticized VOA’s inexplicable refusal to refer to Hamas terrorists as terrorists. One agency official called lawmakers “silly.” VOA’s new director has equated congressional critics to troublemakers.

Whatever. I’d rather they fix it than kill it. They wanted it dead, and now it is.

More reading here:

The Last Days at Voice of America: Covering the press freedom beat at VOA, I got a front-row seat to its demise, by Liam Scott, in Columbia Journalism Review
Trump’s Awful Decision to Gut Voice of America and Radio Free Europe/Radio Liberty, by Ilya Somin in Reason
Trump guts the 83-year-old Voice of America, in NiemanLab
‘Discarded like a dirty rag’: Chinese state media hails Trump’s cuts to Voice of America, by Kelly Ng in BBC News
The silencing of Voice of America: How the shuttering of a small US-funded news outlet explains Trump’s governing philosophy, by Sean Collins and Gabrielle Berbey in Vox
Trump mutes Voice of America, makes space for Russian and Chinese influence, by Sebastian Seibt in France 24
What is Voice of America and why Trump is dismantling the broadcaster? by PBS News Hour
Inside the scramble to save America’s pro-democracy media outlets from Trump, by several authors at Politico
Trump Closes an Important Window Into China, by Lili Pike, in Foreign Policy
Trump’s silencing of Voice of America sparks shock, outrage, by Dominick Mastrangelo, in The Hill
Trump silencing VOA threatens free media in repressive countries, advocates say, by Patrick Reevell and Somayeh Malekian in ABC News
Opinion | Trump ending Voice of America saved us all from Kari Lake: Voice of America is supposed to operate independently of any presidential administration which naturally makes it a prime target of Trump, by Laurie Roberts in the Arizona Republic

Simon Willison

New audio models from OpenAI, but how much can we rely on them?


OpenAI announced several new audio-related API features today, for both text-to-speech and speech-to-text. They're very promising new models, but they appear to suffer from the ever-present risk of accidental (or malicious) instruction following.

gpt-4o-mini-tts

gpt-4o-mini-tts is a brand new text-to-speech model with "better steerability". OpenAI released a delightful new playground interface for this at OpenAI.fm - you can pick from 11 base voices, apply instructions like "High-energy, eccentric, and slightly unhinged" and get it to read out a script (with optional extra stage directions in parentheses). It can then provide the equivalent API code in Python, JavaScript or curl. You can share links to your experiments; here's an example.

Note how part of my script there looks like this:

(Whisper this bit:)

Footsteps echoed behind her, slow and deliberate. She turned, heart racing, but saw only shadows.

While fun and convenient, the fact that you can insert stage directions in the script itself feels like an anti-pattern to me - it means you can't safely use this for arbitrary text because there's a risk that some of that text may accidentally be treated as further instructions to the model.

In my own experiments I've already seen this happen: sometimes the model follows my "Whisper this bit" instruction correctly, other times it says the word "Whisper" out loud but doesn't speak the words "this bit". The results appear non-deterministic, and might also vary with different base voices.
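For reference, a minimal sketch of calling the model from the OpenAI Python SDK, assuming a recent openai package that exposes the new instructions parameter (the instructions string and file name here are my own examples):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "instructions" steers delivery; ideally the input text itself carries no stage directions
speech = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="coral",
    input="Footsteps echoed behind her, slow and deliberate.",
    instructions="Whisper this, slow and tense.",
)
speech.write_to_file("narration.mp3")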

gpt-4o-mini-tts costs $0.60/million tokens, which OpenAI estimate as around 1.5 cents per minute.

gpt-4o-transcribe and gpt-4o-mini-transcribe

gpt-4o-transcribe and gpt-4o-mini-transcribe are two new speech-to-text models, serving a similar purpose to whisper but built on top of GPT-4o and setting a "new state-of-the-art benchmark". These can be used via OpenAI's v1/audio/transcriptions API, as alternative options to whisper-1. The API is still restricted to a 25MB audio file (MP3, WAV or several other formats).
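A hedged sketch of calling the new transcription model via the Python SDK (assuming a local audio.mp3 under the 25MB limit):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",  # or gpt-4o-mini-transcribe
        file=audio_file,
    )
print(transcript.text)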

Any time an LLM-based model is used for audio transcription (or OCR) I worry about accidental instruction following - is there a risk that content that looks like an instruction in the spoken or scanned text might not be included in the resulting transcript?

In a comment on Hacker News OpenAI's Jeff Harris said this, regarding how these new models differ from gpt-4o-audio-preview:

It's a slightly better model for TTS. With extra training focusing on reading the script exactly as written.

e.g. the audio-preview model when given instruction to speak "What is the capital of Italy" would often speak "Rome". This model should be much better in that regard

"much better in that regard" sounds to me like there's still a risk of this occurring, so for some sensitive applications it may make sense to stick with whisper or other traditional text-to-speech approaches.

On Twitter Jeff added:

yep fidelity to transcript is the big chunk of work to turn an audio model into TTS model. still possible, but should be quite rare

gpt-4o-transcribe is an estimated 0.6 cents per minute, and gpt-4o-mini-transcribe is 0.3 cents per minute.

Mixing data and instructions remains the cardinal sin of LLMs

If these problems look familiar to you that's because they are variants of the root cause behind prompt injection. LLM architectures encourage mixing instructions and data in the same stream of tokens, but that means there are always risks that tokens from data (which often comes from untrusted sources) may be misinterpreted as instructions to the model.

How much of an impact this has on the utility of these new models remains to be seen. Maybe the new training is so robust that these issues won't actually cause problems for real-world applications?

I remain skeptical. I expect we'll see demos of these flaws in action in relatively short order.

Tags: audio, text-to-speech, ai, openai, prompt-injection, generative-ai, whisper, llms, multi-modal-output, llm-release


Claude can now search the web


Claude 3.7 Sonnet on the paid plan now has a web search tool that can be turned on as a global setting.

This was sorely needed. ChatGPT, Gemini and Grok all had this ability already, and despite Anthropic's excellent model quality it was one of the big remaining reasons to keep other models in daily rotation.

For the moment this is purely a product feature - it's available through their consumer applications but there's no indication of whether or not it will be coming to the Anthropic API. OpenAI launched the latest version of web search in their API last week.

Surprisingly there are no details on how it works under the hood. Is this a partnership with someone like Bing, or is it Anthropic's own proprietary index populated by their own crawlers?

I think it may be their own infrastructure, but I've been unable to confirm that.

Update: it's confirmed as Brave Search.

Their support site offers some inconclusive hints.

Does Anthropic crawl data from the web, and how can site owners block the crawler? talks about their ClaudeBot crawler but the language indicates it's used for training data, with no mention of a web search index.

Blocking and Removing Content from Claude looks a little more relevant, and has a heading "Blocking or removing websites from Claude web search" which includes this eyebrow-raising tip:

Removing content from your site is the best way to ensure that it won't appear in Claude outputs when Claude searches the web.

And then this bit, which does mention "our partners":

The noindex robots meta tag is a rule that tells our partners not to index your content so that they don’t send it to us in response to your web search query. Your content can still be linked to and visited through other web pages, or directly visited by users with a link, but the content will not appear in Claude outputs that use web search.

Both of those documents were last updated "over a week ago", so it's not clear to me if they reflect the new state of the world given today's feature launch or not.

I got this delightful response trying out Claude search where it mistook my recent Squadron automata for a software project:

Tags: anthropic, claude, generative-ai, llm-tool-use, ai, llms


The Pragmatic Engineer

What is Growth Engineering?

A deep dive into the field of growth engineering, which is often positioned between product engineering and marketing. With former head of growth engineering at MasterClass, Alexey Komissarouk

Before we start: if you’ve already filled out the What is your tech stack? survey: thank you! If you’ve not done so, your help will be greatly appreciated. It takes 5-15 minutes to complete. Those filling out will receive results before anyone else, and additional analysis from myself and Elin. Fill out this survey here.


Growth engineering was barely known a decade ago, but today, most scaleups and many publicly traded tech companies have dedicated growth teams staffed by growth engineers. However, some software engineers are still suspicious of this new area because of its reputation for hacky code with little to no code coverage.

For this reason and others, I thought it would be interesting to learn more from an expert who can tell us all about the practicalities of this controversial domain. So I turned to Alexey Komissarouk, who’s been in growth engineering since 2016, and was in charge of it at the online education platform MasterClass. These days, Alexey lives in Tokyo, Japan, where he advises on growth engineering and teaches the Growth Engineering course at Reforge.

In today’s deep dive, Alexey covers:

What is Growth Engineering? In the simplest terms: writing code to help a company make more money. But there are details to consider: like the company size where it makes sense to have a dedicated team do this.

What do Growth Engineers work on? Business-facing work, empowerment and platform work are the main areas.

Why Growth Engineers move faster than Product Engineers. Product Engineers ship to build; Growth Engineers ship to learn. Growth Engineers deliberately take shortcuts that would make no sense when building for longevity.

Tech stack. Common programming languages, monitoring and oncall, feature flags and experimentation, product analytics, review apps, and more.

What makes a good Growth Engineer? Curiosity, “build to learn” mindset and a “Jack of all trades” approach.

Where do Growth Engineers fit in? Usually part of the engineering department, operating with either an “owner” or a “hitchhiker” model.

Becoming a Growth Engineer. A great area if you want to eventually become a founder or product manager – but even if not, it can accelerate your career growth. Working in Growth forces you to learn more about the business.

With that, it’s over to Alexey:

I’ll never forget the first time I made my employer a million dollars.

I was running a push notification A/B test for meal delivery startup Sprig, trying to boost repeat orders.

A push notification similar to what we tested to boost repeat orders

Initial results were unpromising; the push notification was not receiving many opens. Still, I wanted to be thorough: before concluding the idea was a failure, I wrote a SQL query to compare order volume for subsequent weeks between customers in test vs control.

The SQL used to figure out the push notification’s efficiency

As it turned out, our test group “beat” the control group by around 10%:

‘review_5_push’ was the new type of push notification. Roughly the same amount of users clicked it, but they placed 10% more in orders

I plugged the numbers into a significance calculator, which showed it was statistically significant – or “stat-sig” – and therefore highly unlikely to be a coincidence. This meant we had a winner on our hands! But how meaningful was it, really, and what would adding the push notification mean for revenue, if rolled out to 100% of users?
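The math behind such a calculator is a two-proportion z-test. Here is a minimal sketch with made-up counts (the real Sprig numbers aren’t in the article):

# Two-proportion z-test, the check a "significance calculator" typically runs.
# Counts below are hypothetical stand-ins for the real experiment data.
from math import sqrt
from statistics import NormalDist

control_users, control_orders = 10_000, 2_000   # 20% repeat-order rate
test_users, test_orders = 10_000, 2_200         # 22% repeat-order rate (~10% lift)

p1, p2 = control_orders / control_users, test_orders / test_users
p_pooled = (control_orders + test_orders) / (control_users + test_users)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_users + 1 / test_users))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"z = {z:.2f}, p = {p_value:.4f}")  # p < 0.05 is "stat-sig" by the usual convention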

It turned out this experiment created an additional $1.5 million annually, with just one push notification. Wow!

I was hooked. Since that day, I've shipped hundreds of experimental “winners” which generated hundreds of millions in incremental revenue for my employers. But you never forget the first one. Moments like this are what growth engineering is all about.

1. What is Growth Engineering?

Essentially, growth engineering is the writing of code to make a company money. Of course, all code produced by a business on some level serves this purpose, but while Product Engineers focus on creating a Product worth paying for, Growth Engineers instead focus on turning that good product into a good business. To this end, they focus on optimizing and refining key parts of the customer journey, such as:

Getting more people to consider the product

Converting them into paying customers

Keeping them as customers for longer, and spending more

What kinds of companies employ Growth Engineers? Places you’ve heard of, like Meta, LinkedIn, DoorDash, Coinbase, and Dropbox, are some of the ones I’ve had students from. There’s also OpenAI, Uber, TikTok, Tinder, Airbnb, Pinterest… the list of high-profile companies goes on. Most newer public consumer companies you’ve heard of have a growth engineering org, too.

Typically, growth engineering orgs are started by companies at Series B stage and beyond, so long as they are selling to either consumers or businesses via SaaS. These are often places trying to grow extremely fast, with enough software engineers that some can focus purely on growth. Before the Series B stage, a team is unlikely to be ready for growth: it probably hasn’t found product-market fit yet, has no available headcount, or lacks the visitor traffic required to run A/B tests.

Cost is a consideration. A fully-loaded growth team consisting of a handful of engineers, a PM, and a designer costs approximately $1 million annually. To justify this, a rule of thumb is to have at least $5 million in recurring revenue – a milestone often achieved at around the Series B stage.

Despite the presence of growth engineering at many public consumer tech companies, the field itself is still quite new, as a discipline and as a proper title.

Brief history of growth engineering

When I joined Opendoor in 2016, there was a head of growth but no dedicated growth engineers, but there were by the time I left in 2020. At MasterClass soon after, there was a growth org and a dozen dedicated growth engineers. So when did growth engineering originate?

The story is that its origins lie at Facebook in 2007. The team was created by then-VP of platform and monetization Chamath Palihapitiya. Reforge founder and CEO Brian Balfour shares:

“Growth (the kind found on an org chart) began at Facebook under the direction of Chamath Palihapitiya. In 2007, he joined the early team in a nebulous role that fell somewhere between Product, Marketing, and Operations. According to his retelling of the story on Recode Decode, after struggling to accomplish anything meaningful in his first year on the job, he was on the verge of being fired.

Sheryl Sandberg joined soon after him, and in a hail mary move he pitched her the game-changing idea that led to the creation of the first-ever growth team. This idea not only saved his job, but earned him the lion’s share of the credit for Facebook’s unprecedented growth.

At the time, Sheryl and Mark asked him, “What do you call this thing where you help change the product, do some SEO and SEM, and algorithmically do this or that?”

His response: “I don’t know, I just call that, like, Growth, you know, we’re going to try to grow. I’ll be the head of growing stuff."

And just like that, Growth became a thing.”

Rather than focus on a particular product or feature, the growth team at Facebook focused on moving the needle, and figuring out which features to work on. These days, Meta employs hundreds if not thousands of growth engineers.

2. What do Growth Engineers work on?

Before we jump into concrete examples, let’s identify three primary focus areas that a growth engineer’s work usually involves.

Business-facing work – improving the business directly

Empowerment work – enabling other teams to improve the business

Platform work – improving the velocity of the above activities

Let’s go through all three:

Business-facing work

This is the bread and butter of growth engineering, and follows a common pattern:

Implement an idea. Try something big or small to move a key business metric, which differs by team but is typically related to conversion rate or retention.

Quantify impact. Usually via A/B testing.

Analyze impact. Await results, analyze impact, ship or roll back – then go back to the first step.

Experiments can lead to sweeping or barely noticeable changes. A famous “I can’t believe they needed to test this” was when Google figured out which shade of blue generates the most clicks. At MasterClass, we tested things across the spectrum:

Small: should we show the price right on the homepage? Was that a winner? Yes, but we framed it in monthly terms of $15/month, not $180/year.

Medium: when browsing a course page, should we include related courses, or more details about the course itself? Was it a winner? After lengthy experimentation, it was hard to tell: both are valuable and we needed to strike the right balance.

Large: when a potential customer is interested, do we take them straight to checkout, or encourage them to learn more? Counterintuitively, adding steps boosted conversion!

Empowerment

One of the best ways an engineer can move a target metric is by removing themselves as a bottleneck, so colleagues from marketing can iterate and optimize freely. To this end, growth engineers can either build internal tools or integrate self-serve MarTech (Marketing Technology) vendors.

With the right tool, there’s a lot that marketers can do without engineering’s involvement:

Build and iterate on landing pages (Unbounce, Instapage, etc)

Draft and send email, SMS and Push Notifications (Iterable, Braze, Customer.io, etc)

Connect new advertising partners (Google Tag Manager, Segment, etc)

We go more into detail about benefits and applications in the MarTech section of Tech Stack, below.

Platform work

As a business scales, dedicated platform teams help improve stability and velocity for the teams they support. Within growth, this often includes initiatives like:

Experiment Platform. Many parts of running an experiment can be standardized, from filtering the audience, to bucketing users properly, to observing statistical methodology. Historically, companies built reusable Experiment Platforms in-house, but more recently, vendors such as Eppo and Statsig have grown in popularity with fancy statistical methodologies like “Controlled-experiment Using Pre-Experiment Data” (CUPED) that give more signal with less data.

Reusable components. Companies with standard front-end components for things like headlines, buttons, and images, dramatically reduce the time required to spin up a new page. No more "did you want 5 or 6 pixels here" with a designer; instead growth engineers rely on tools like Storybook to standardize and share reusable React components.

Monitoring. Growth engineering benefits greatly from leveraging monitoring to compensate for reduced code coverage. High-quality business metric monitoring tools can detect bugs before they cause damage.

When I worked at MasterClass, having monitoring at the ad layer prevented at least one six-figure incident. One Friday, a marketer accidentally broadened the audience for a particular ad from US-only, to worldwide. In response, the Facebook Ad algorithm went on a spending spree, bringing in plenty of visitors from places like Brazil and India, whom we knew from past experience were unlikely to purchase the product. Fortunately, our monitoring noticed the low-performing campaign within minutes, and an alert was sent to the growth engineer on-call, who immediately reached out to the marketer and confirmed the change was unintentional, and then shut down the campaign.

Without this monitoring, a subtle targeting error like this could have gone unnoticed all weekend and would have eaten up $100,000+ of marketing budget. This episode shows that platform investment can benefit everyone; and since growth teams need these tools most, it’s often the growth platform engineering team that implements them.

As the day-to-day work of a Growth Engineer shows, A/B tests are a critical tool to both measure success and learn. It’s a numbers game: the more A/B tests a team can run in a given quarter, the more of them will end up winners, making the team successful. It’s no wonder, then, that Growth Engineering will pull out all the stops to improve velocity.

3. Why Growth Engineers move faster than Product Engineers

On the surface, growth engineering teams look like product engineering ones; writing code, shipping pull requests, monitoring on-call, etc. So how do they move so much faster? The big reason lies in philosophy and focus, not technology. To quote Elena Verna, head of growth at Dropbox:

“Product Engineering teams ship to build; Growth Engineering teams ship to learn.”

Real-world case: price changes at Masterclass

A few years ago at MasterClass, the growth team wanted to see if changing our pricing model to multiple tiers would improve revenue.

Inspired in part by multiple pricing tiers for competitors such as Netflix (above), Disney Plus, and Hulu. The “multiple pricing tier” proposal for MasterClass.

From a software engineering perspective, this was a highly complex project because:

Backend engineering work: the backend did not yet support multiple pricing options, requiring a decent amount of engineering, and rigorous testing to make sure existing customers weren’t affected.

Client app changes: on the device side, multiple platforms (iOS, iPad, Android, Roku, Apple TV, etc) would each need to be updated, including each relevant app store.

The software engineering team estimated that becoming a “multi-pricing-tier” company would take months across numerous engineering teams, and engineering leadership was unwilling to greenlight that significant investment.

We in growth engineering took this as a challenge. As usual, our goal was not just to add the new pricing model, but to learn how much money it might bring in. The approach we ended up proposing was a Fake Door test, which involves offering a not-yet-available option to customers to gauge interest level. This was risky, as taking a customer who’s ready to pay and telling them to join some kind of waiting list is a colossal waste, and risks making them feel like the target of a “bait and switch” trick.

We found a way. The key insight was that people are only offended about a “bait and switch”, if the “switch” is worse than the “bait.” Telling customers they would pay $100 and then switching to $150 would cause a riot, but starting at $150 and then saying “just kidding, it’s only $100” is a pleasant surprise.

The good kind of surprise.

So long as every test “pricing tier” is less appealing – higher prices, fewer features – than the current offering, we could “upgrade” customers after their initial selection. A customer choosing the cheapest tier gets extra features at no extra cost, while a customer choosing a more expensive tier is offered a discount.

We created three new tiers, at different prices. The new “premium” tier would describe the existing, original offering. Regardless of what potential customers selected, they got this “original offering,” during the experiment.

The best thing about this was that no backend changes were required. There were no real, new, back-end pricing plans; everybody ended up purchasing the same version of MasterClass for the same price, with the same features. The entirety of the engineering work was on building a new pricing page, and the “congratulations, you’ve been upgraded” popup. This took just a few days.

Within a couple of weeks, we had enough data to be confident the financial upside of moving to a multi-pricing-tier model would be significant. With this, we were able to convince engineering leadership to invest in building the feature properly. In the end, launching multiple pricing tiers turned out to be one of the biggest revenue wins of the year.

Building a skyscraper vs building a tent

The MasterClass example demonstrates the spirit of growth engineering; focusing on building to learn, instead of building to last. Consider building skyscrapers versus tents.

Building a tent optimizes for speed of set-up and tear-down over longevity. You don’t think of a tent as shoddy or low-quality compared to a skyscraper: it’s not even the same category of building! To stick with the metaphor, growth engineers prioritize lightweight fabric over steel and concrete whenever possible, resorting to traditional building materials only when there’s no other choice, or when a direction is confirmed as correct. Quality is important – after all, a tent must keep out rain and mosquitoes. However, the speed-vs-durability tradeoff results in very different approaches and outcomes.

4. Tech stack

At first glance, growth and product engineers use the same tooling, and contribute to the same codebases. But growth engineering tends to be high-velocity, experiment-heavy, and with limited test coverage. This means that certain “nice to have” tools for product engineering are mission-critical for growth engineers.

Read more


Simon Willison

Quoting Peter Bhat Harkins


I’ve disabled the pending geoblock of the UK because I now think the risks of the Online Safety Act to this site are low enough to change strategies to only geoblock if directly threatened by the regulator. [...]

It is not possible for a hobby site to comply with the Online Safety Act. The OSA is written to censor huge commercial sites with professional legal teams, and even understanding one's obligations under the regulations is an enormous project requiring expensive legal advice.

The law is 250 pages and the mandatory "guidance" from Ofcom is more than 3,000 pages of dense, cross-referenced UK-flavoured legalese. To find all the guidance you'll have to start here, click through to each of the 36 pages listed, and expand each page's collapsible sections that might have links to other pages and documents. (Though I can't be sure that leads to all their guidance, and note you'll have to check back regularly for planned updates.)

Peter Bhat Harkins, site administrator, lobste.rs

Tags: politics, uk, moderation


Calling a wrap on my weeknotes


After 192 posts that ranged from weekly to roughly once-a-month, I've decided to call a wrap on my weeknotes habit. The original goal was to stay transparent during my 2019-2020 JSK fellowship, and I kept them up after that as an accountability mechanism and to get into a habit of writing regularly.

Over the past two years I've adopted new posting habits which are solving those problems in other ways:

I post something here almost every day. I actually maintained a daily posting streak throughout 2024, which I ended in January, but I'm still posting most days and plan to keep that up.
Every time I ship a new release of one of my projects I link to it from here. This replaces the "recent releases" section of my weeknotes.
I try to have a longer form piece of writing that's suitable for inclusion in my newsletter at least once every two weeks. That's another accountability mechanism that's working well for me.

One downside of weeknotes is that I'd sometimes save something to include in them, which could lead to several items getting bundled together in a way that reduced their potential impact as standalone posts.

I got to the point with weeknotes where I was feeling guilty about not keeping them up. Given the volume of content I'm publishing already that felt like a sign that they were no longer providing the value they once did!

I still think weeknotes are an excellent habit for anyone who wants to write more frequently and be more transparent about their work. It feels healthy to be able to end a habit that's finished serving its purpose.

Tags: blogging, weeknotes, streaks

Wednesday, 19. March 2025

Simon Willison

OpenAI platform: o1-pro


OpenAI have a new most-expensive model: o1-pro can now be accessed through their API at a hefty $150/million tokens for input and $600/million tokens for output. That's 10x the price of their o1 and o1-preview models and a full 1,000x more expensive than their cheapest model, gpt-4o-mini!

Aside from that it has mostly the same features as o1: a 200,000 token context window, 100,000 max output tokens, Sep 30 2023 knowledge cut-off date and it supports function calling, structured outputs and image inputs.

o1-pro doesn't support streaming, and most significantly for developers is the first OpenAI model to only be available via their new Responses API. This means tools that are built against their Chat Completions API (like my own LLM) have to do a whole lot more work to support the new model - my issue for that is here.

Since LLM doesn't support this new model yet I had to make do with curl:

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(llm keys get openai)" \
  -d '{
    "model": "o1-pro",
    "input": "Generate an SVG of a pelican riding a bicycle"
  }'

Here's the full JSON I got back - 81 input tokens and 1552 output tokens for a total cost of 94.335 cents.

I took a risk and added "reasoning": {"effort": "high"} to see if I could get a better pelican with more reasoning:

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(llm keys get openai)" \
  -d '{
    "model": "o1-pro",
    "input": "Generate an SVG of a pelican riding a bicycle",
    "reasoning": {"effort": "high"}
  }'

Surprisingly that used less output tokens - 1459 compared to 1552 earlier (cost: 88.755 cents) - producing this JSON which rendered as a slightly better pelican:

It was cheaper because while it spent 960 reasoning tokens as opposed to 704 for the previous pelican it omitted the explanatory text around the SVG, saving on total output.
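For completeness, the same two calls sketched with the openai Python client instead of curl (assuming a package version recent enough to expose client.responses.create):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Basic call against the new Responses API
response = client.responses.create(
    model="o1-pro",
    input="Generate an SVG of a pelican riding a bicycle",
)
print(response.output_text)

# Same prompt with high reasoning effort
high_effort = client.responses.create(
    model="o1-pro",
    input="Generate an SVG of a pelican riding a bicycle",
    reasoning={"effort": "high"},
)
print(high_effort.output_text)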

Tags: o1, llm, openai, inference-scaling, ai, llms, llm-release, generative-ai, pelican-riding-a-bicycle, llm-pricing


Werdmüller on Medium

Forcing people back to the office was a choice. I’m making mine.


Here’s why I won’t RTO.

Continue reading on Medium »


Doc Searls Weblog

Come from Everywhere

Three Canadians—Kim Cameron, Kaliya “Identity Woman” Young, and Dick Hardt—at the Identity Gang meeting in Scottsdale, 20 March 2005

IIW, the Internet Identity Workshop, is the UN of identity. While located in the U.S., it has always represented and welcomed the whole world to work on global problems best addressed in person.

As it happens, IIW was born exactly twenty years ago tomorrow—20 March 2005—at Esther Dyson’s PC Forum in Scottsdale, Arizona. A group of eleven that called itself the Identity Gang gathered around a table there to plot what became (IMHO) the most leveraged conference in the world. Three of the eleven were Canadians. One (Esther) was born in Zurich. As for the rest, I suppose they could have been born anywhere. (I was born in the U.S., but half my ancestors were Swedish. Those on my father’s side were Irish, English, German, French, and God knows.)

All of the Internet’s protocols, from TCP/IP on up, were made to ignore national boundaries.  I am inviting participants in the next IIW (April 8 to 10) to do the same: ignore the noise coming from the U.S. government and come join us to work on what’s good for the whole connected world.

If you want to get away from wacky retro nationalism in tech, IIW is the place to do it.


The Pragmatic Engineer

How Linux is built with Greg Kroah-Hartman

Greg Kroah-Hartman, a longtime Linux kernel maintainer, breaks down the inner workings of Linux development, from its unique trust model to the benefits of open-source contribution.
Stream the Latest Episode

Listen and watch now on YouTube, Spotify and Apple. See the episode transcript at the top of this page, and a summary at the bottom.

Brought to You By

WorkOS — The modern identity platform for B2B SaaS.

Vanta — Automate compliance and simplify security with Vanta.

In This Episode

Linux is the most widespread operating system, globally – but how is it built? Few people are better to answer this than Greg Kroah-Hartman: a Linux kernel maintainer for 25 years, and one of the 3 Linux Kernel Foundation Fellows (the other two are Linus Torvalds and Shuah Khan). Greg manages the Linux kernel’s stable releases, and is a maintainer of multiple kernel subsystems. He is also the author of the books Linux Kernel in a Nutshell and Linux Device Drivers.

We cover the inner workings of Linux kernel development, exploring everything from how changes get implemented to why its community-driven approach produces such reliable software. Greg shares insights about the kernel's unique trust model and makes a case for why engineers should contribute to open-source projects. We go into:

How widespread is Linux?

What is the Linux kernel responsible for – and why is it a monolith?

How does a kernel change get merged? A walkthrough

The 9-week development cycle for the Linux kernel

Testing the Linux kernel

Why is Linux so widespread?

The career benefits of open-source contribution

And much more!

Takeaways

1. Linux is the most widespread operating system globally. Linux runs on 4 billion Android devices – compared to which everything else is “a rounding error”. Still, Linux is the most popular operating system for servers and embedded devices. It’s also used on many smart TVs, air traffic control systems, and even on the International Space Station. Fun fact: Linux even runs inside many iPhones – as it is the firmware used for the Qualcomm 5G modems inside these devices!

2. Getting a change merged into the Linux kernel is surprisingly straightforward. Create the change (called a patch), test it locally, and send it to the right maintainer for review. The patch needs to go through a hierarchical tree of maintainers accepting it before it can make it into the kernel. We go through a specific change being merged up this tree.

3. Linux won because devs being “selfish” works! Developers contribute to Linux in a "selfish" way to solve their own problems. But turns out, many devs have the same problems: so every contribution makes Linux a better fit for other devs to use! Kernel maintainers only accept contributions that make sense for the whole project. For example, embedded device vendors helped make the Linux kernel more efficient. This efficiency later greatly helped Linux become the best choice for mobile OS on Android.

4. The Linux kernel is run in a very unique way – because the project is unique. The Linux kernel has 4,000 contributors per year, releases strictly every 9 weeks – and yet has practically no meetings, no project managers and uses email and git. This setup works because project management happens outside of the Linux kernel: contributors bring completed work. Also, the kernel team invests heavily in automation for e.g. triaging. And turns out, email scales really well – for this group, that is! (Note that other projects built on top of the Linux kernel – such as Linux distributions like Red Hat or Debian – all work differently. What works for the Linux kernel thanks to unique circumstances, won’t work for those projects).

5. Git was created as a solution for the Linux kernel’s source control needs. We talked about this story with Greg outside the podcast: it’s a fascinating story about how git was built and open-sourced after the Linux kernel group was unhappy with existing source control solutions.

In what is amusing: git has become the de facto source control product across tech thanks to products like GitHub and GitLab. The Linux kernel does not use GitHub – don’t forget, they already solved their source control workflow problems by writing git!

The Pragmatic Engineer deepdives relevant for this episode

What TPMs do and what software engineers can learn from them

The past and future of modern backend practices

Backstage: an open-source developer portal

Timestamps

(00:00) Intro

(02:23) How widespread is Linux?

(06:00) The difference in complexity in different devices powered by Linux

(09:20) What is the Linux kernel?

(14:00) Why trust is so important with the Linux kernel development

(16:02) A walk-through of a kernel change

(23:20) How Linux kernel development cycles work

(29:55) The testing process at Kernel and Kernel CI

(31:55) A case for the open source development process

(35:44) Linux kernel branches: Stable vs. development

(38:32) Challenges of maintaining older Linux code

(40:30) How Linux handles bug fixes

(44:40) The range of work Linux kernel engineers do

(48:33) Greg’s review process and its parallels with Uber’s RFC process

(51:48) Linux kernel within companies like IBM

(53:52) Why Linux is so widespread

(56:50) How Linux Kernel Institute runs without product managers

(1:02:01) The pros and cons of using Rust in Linux kernel

(1:09:55) How LLMs are utilized in bug fixes and coding in Linux

(1:12:13) The value of contributing to the Linux kernel or any open-source project

(1:16:40) Rapid fire round

A summary of the conversation The Linux kernel

The Linux kernel is around 40 million lines of code. The core kernel – the part every Linux platform runs – is about 5% of this. The remaining code supports diverse hardware, drivers, devices, architectures, and chips.

A typical laptop runs approximately 2 to 2.5 million lines of kernel code; servers around 1.5 million, and mobile devices around 4 million.

The role of the kernel: abstract away underlying hardware and present a consistent interface to user space programs. This allows the applications to run on different hardware without modification.

A monolithic kernel

Drivers in Linux are part of the kernel

This is a monolithic architecture: all code, including drivers, operates in the same address space

The monolithic approach allows for more refactoring options and more code-sharing opportunities between drivers. This results in Linux drivers being, on average, one-third smaller than drivers in other operating systems because common functionalities can be identified and consolidated.

Do not break userspace. The core principle of Linux kernel development is to never intentionally break user space. This guarantee ensures that users can upgrade their kernel without fear of their existing applications crashing. Accidental breakages are treated as faults and are promptly addressed.

Linux kernel development process

Fixed 9-week cadence.

Following a release by Linus Torvalds, a two-week merge window opens.

During this merge window, maintainers submit all the new features that have been pending and proven to work in their respective development trees to Linus.

rc1: after the two-week merge window, Linus issues the first release candidate (rc1).

For the subsequent seven weeks, only bug fixes are accepted. No new features are introduced during this stabilization period; the focus is on regression fixes and reverting problematic changes.

Hierarchical structure of maintainers.

Around 4,000 developers contribute code every year

They send changes via email to maintainers responsible for specific kernel subsystems

Kernel subsystem maintainers then forward collections of accepted changes up the chain

Ultimately, these changes reach Linus for inclusion in the main kernel tree.

Trust is key in Linux kernel development. When a maintainer accepts code from a developer, they implicitly take responsibility for it. For critical parts of the kernel, maintainers need to have a high degree of confidence in the developer and the quality of their work, as the maintainer becomes accountable if the original developer disappears.

Email and git. These are the two tools used during development.

"Linux Next:" a separate development tree that integrates all the changes destined for the next kernel release on a daily basis. This allows for continuous testing and identification of potential integration issues.

QA and stable releases

Linux Next: automated testing. This includes building and booting the kernel across various architectures and virtual machines.

KernelCI: a project that provides a more extensive continuous integration infrastructure, running tests on a wider range of real hardware contributed by different labs.

The testing process involves a mix of automated tests and real-world usage by developers and testers. The "zero-day bot" automatically tests patches submitted to mailing lists.

Stable kernel releases: these are maintained independently of the main development branch. After each major kernel release by Linus, a stable branch is forked.

Greg and Sasha Levin maintain these stable branches. They issue new stable releases weekly, incorporating bug fixes that have first been merged into Linus's tree. This ensures that stable branches do not diverge from the main development line.

Long-term stable (LTS) kernels: Greg picks one kernel per year and supports it for an extended period, initially two years, sometimes up to six years. Android phones, for instance, often run on these older LTS kernels, which still receive backported bug fixes. Greg and Sasha concurrently maintain multiple LTS kernels.

Maintaining older codebases is more challenging. This is due to the ongoing evolution of the kernel. Changes made in newer versions to fix bugs might be difficult to backport to older, significantly diverged code. Context is often lost over time, making even seemingly simple backports complex.

Contributors

About 80% of kernel contributors are paid – by their employer! Companies invest in Linux development because it's often more cost-effective to contribute features and fixes than to develop their own operating systems.

Contributing to the Linux kernel is a valuable way for developers to also invest in their careers. It demonstrates the ability to collaborate, work with existing codebases, and solve real-world problems.

Core maintainers meet annually to discuss and refine the development process.

Rust support?

Most of Linux is written in C, but Rust is gaining momentum. Approximately 25,000 lines of Rust code are already in the kernel, primarily for bindings but also for some functionality like generating QR codes on kernel crashes.

Introducing Rust aims to improve memory safety in certain parts of the kernel. However, writing drivers in Rust presents challenges due to the need for bindings to the extensive C codebase and the different memory management models of C and Rust.

Memory safety in Rust primarily refers to the safety of object lifecycles and memory ownership, not necessarily the elimination of all bugs. Logic errors and even memory unsafety can still occur in Rust code.

The adoption of Rust is also driving improvements in the existing C codebase, as the need to create Rust bindings encourages a re-evaluation of C code for better safety and clarity.

Will the Linux kernel add Rust support?

There is resistance to introducing new languages from some core kernel developers, who prefer to maintain a single-language codebase.

Efforts are underway to write more drivers in Rust, including experimental GPU drivers. Rust can be particularly well-suited for simpler hardware drivers.

Governments increasingly mandate the use of memory-safe languages, which is another factor driving the adoption of Rust in Linux.

That said, the Linux kernel community is also actively working on improving the safety of existing C code through techniques like bounds checking and compiler extensions.

Why contribute to Linux, and how?

Building and testing the kernel locally is a prerequisite for submitting changes.

Contributing, even a single patch, offers significant professional benefits. It strengthens a developer's resume by demonstrating the ability to collaborate and work with complex, established codebases.

Contributing provides valuable learning opportunities, exposing devs to different perspectives, coding practices, and challenging technical problems.

Newcomers can find entry points by working on less critical parts of the kernel, such as fixing coding style issues or removing dead code in older drivers. A good place to start is Kernel Newbies.

Where to find Greg Kroah-Hartman:

• Social: https://social.kernel.org/gregkh

• Website: http://www.kroah.com/log/about/

Mentions during the episode:

• Linux Kernel Foundation: https://www.linuxkernelfoundation.com/

• International Space Station: https://www.nasa.gov/international-space-station/

• Raspberry Pi: https://www.raspberrypi.com/

• GitHub: https://github.com/

• Kernel CI: https://kernelci.org/

• Linus Torvalds on LinkedIn: https://www.linkedin.com/in/linustorvalds/

• Engineering Planning with RFCs, Design Documents and ADRs: https://newsletter.pragmaticengineer.com/p/rfcs-and-design-docs

• A guide to the Kernel Development Process: https://docs.kernel.org/process/development-process.html

• Rust: https://www.rust-lang.org/

• The Linux Kernel Maintainer Summit: https://events.linuxfoundation.org/linux-kernel-maintainer-summit/

• Linux Braille Console: https://www.kernel.org/doc/html/v4.16/admin-guide/braille-console.html

• Code Complete: A Practical Handbook of Software Construction: https://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670

• Kernel Newbies: https://kernelnewbies.org/

Production and marketing by Pen Name. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Tuesday, 18. March 2025

Doc Searls Weblog

Media Matters

Missing Mike. My favorite songwriter and performer is Mike Cross. He was headed to a career in law as an undergrad at UNC-Chapel Hill when a music bug bit him, he learned to play fiddle and guitar, and then to perform in local bars and clubs. I couldn’t count how many times I went to hear him play. His songs were—and still are—fun, deep, wise, and catchy. Uncle Josh. The Scottsman. Elma Turl. Nobby. Born in the Country. The Lord’ll Provide. Bounty Hunter. Kentucky Song. A few years ago Mike was bitten by a tick, got Lyme Disease, and has hardly been heard from since. I hope he’s well and can get back out there, showing the next few generations what an amazing gift he is to us all. Bonus fact: The album cover above is by my old friend and business partner Ray Simone. Gone thirteen years, I still miss Ray every day.

Drowning in Red. Fifty years ago, when I was getting my nickname while working at WDBS (a sweet little commercial radio station owned by Duke University in Durham, North Carolina), our slogan was “Let the music keep our spirits high”—a line from Jackson Browne’s “Before the Deluge.” Give it a listen. Take in the lyrics. Then think about the kind and generous purposes behind all the federal programs, agencies, and departments now being demolished. The abandoned alliances and international friendships. The dropped American support for people and organizations trying to do good throughout the world. The Voice of America, silenced. Think about the new political corrections:  forbidden words in grant applications, red flags now waving atop corporate giants, weather vanes vectored right. Think about the shaking trap doors under millions of federal employees—especially ones tasked with helping the weak, the old, the foreign, the oppressed, the dark, the gay, the oddly gendered. And then listen to the news. No, not the old bluestream news we still call main. I mean the redstream news that now predominates on TV, in podcasts, on radio. What you will see and hear is one big amen corner for all of it. This is the deluge.

Sounding Good Everywhere. If you like old album rock and Americana rooted in North Carolina, the best thing on radio in Raleigh-Durham-Chapel Hill is That Station, on 95.7 FM.  Technically, the station translates (rebroadcasts) WRAL-FM/101.5’s HD2 stream, and rarely mentions its true callsign, which is W239CK. What many listeners may not know is that the signal on 95.7 is only 250 watts from an antenna about 1100 feet up a tower near Apex, while the HD2 stream is coming from WRAL’s 98,000 watt signal pumping out of its antenna almost 2000 feet up a tower southeast of Raleigh. Compare the two signal footprints here and here (thanks to the wonderful RadioLocator.com). Naturally, the station also has an app. That’s what I usually use here in Bloomington, Indiana, although right now I’m digging it on my computer, which has outstanding 2.1 Logitech speakers I picked up for $5 at a yard sale.


The Pragmatic Engineer

Survey: What’s in your tech stack?

We’d like to share a realistic snapshot of the industry – and need your help. Those filling out the survey will receive a report before anyone else and some additional analysis.

We’d like to know what tools, languages, frameworks and platforms you are using today. Which tools/frameworks/languages are popular and why? Which ones do engineers love and dislike the most at this moment in time?

With more than 950,000 tech professionals subscribed to this newsletter, we have a unique opportunity to take the industry’s pulse by finding out which tech stacks are typical – and which ones are less common.

So, we want to build a realistic picture of this – and share the findings in a special edition devoted to this big topic. But it’s only possible with input from you.

We’re asking for your help to answer the question: what’s in your tech stack? To help, please fill out this survey all about it. Doing so should take only 5-15 minutes, covering the platform(s) you work on, the tooling you use, the custom tools you have built, and related topics.

The results will be published in a future edition of The Pragmatic Engineer. If you take part and fill out the survey, you will receive the full results early, plus some extra, exclusive analysis from myself and Elin.

This is the first time we’re running a survey that’s so ambitious – and we very much appreciate your help. Previous research we did included a reality check on AI tooling and what GenZ software engineers really think. This survey is even more ambitious – and the results should reveal people’s typical and atypical tooling choices, across the tech industry. You may even get inspiration for new and different tools, languages, and approaches to try out.

Fill the survey here

We plan to publish the findings in May.

Note: this week, the engineering deepdive article is being published on Thursday (20 March). Thank you so much if you contribute to the survey.


Mike Jones: self-issued

Third Version of FIDO2 Client to Authenticator Protocol (CTAP 2.2) Now a Standard

The FIDO Alliance has completed the CTAP 2.2 Specification. The closely-related third version of the W3C Web Authentication (WebAuthn) specification is also nearing final status; this WebAuthn Level 3 working draft is currently going through the review steps to become a W3C Recommendation.

So what’s new in the third versions?

Changes between CTAP 2.1 and CTAP 2.2 are:

• Creation option with a prioritized list of supported attestation formats (attestationFormatsPreference)
• PersistentPinUvAuthToken State
• Set PIN Complexity Policy
• JSON-based Messages
• Hybrid Transports
• Large Blob Extension (largeBlob)
• PIN Complexity Extension (pinComplexityPolicy)
• HMAC Secret MakeCredential Extension (hmac-secret-mc)
• Third-Party Payment Authentication Extension (thirdPartyPayment)

Changes between WebAuthn Level 2 and the WebAuthn Level 3 working draft are described in the document’s Revision History.

Completing these V3 specifications represents important progress in our quest to free us from the password!

Monday, 17. March 2025

@_Nat Zone

[Announcement] 3/29: "Dividual Economy Revolution", an in-depth conversation between Nem and Natsuhiko Sakimura [How to start the revolution that opens the door to the next-generation internet]

This is a live-stream announcement: I will be doing a YouTube Live with Nem (バーチャル美少女ねむ), author of "Metaverse Shinkaron" (The Metaverse Evolution).

• Date and time: Saturday, March 29, from 21:00 JST, planned to run about 1.5 hours
• Streamed free on YouTube Live
• Stream URL: "Dividual Economy Revolution": Nem x Natsuhiko Sakimura in-depth conversation [How to start the revolution that opens the door to the next-generation internet]

Nem is a "metaverse native" who has lived in virtual worlds since their earliest days and works as a "metaverse culture evangelist" conveying that culture. Calling herself the world's oldest individual VTuber, she has been active as a virtual-girl idol since 2017 under the theme "become who you want to be in the virtual world." Her book on the revolutionary nature of the metaverse, "Metaverse Shinkaron" (2022, Gijutsu-Hyoronsha), won the grand prize in the business-book category of the IT Engineer Book Awards 2023. She spoke at the UN conference IGF Kyoto 2023, was named the VTuber who shone brightest this year at the MoguLive VTuber Award 2023, published the qualitative research report "Identity in the Metaverse" (Nem x Mila, 2024), and serves on AIST's domestic review committee for the international standardization of avatars.

One challenge Nem has identified is the lack of identity verification, payment, and taxation mechanisms that would let the "dividual economy" mature into a genuine economic sphere. For example:

Why should you have to photocopy both sides of your My Number card and mail it to a business partner you only ever meet in virtual space as an avatar?

Of course, it goes without saying that the tax authorities need to use My Number to track payments in order to tax properly. But does that mean people must spread their identity documents around by mailing copies of both sides of their My Number card, and accept the risk of unwanted record linkage? For many VTubers, having their real identity exposed, including their face, is one of the biggest risks there is. Can't we build a system that keeps those risks down, lets people carry out economic activity, and still gives the tax authorities the person's My Number so they can tax appropriately? This applies not only to VTubers but to the general public as well.

These are the topics we plan to discuss, aiming for about 1.5 hours.

In Nem's words:

What exactly is the "dividual economy revolution" that is said to open the door to the next-generation internet!? In the metaverse, a new era is finally arriving in which people can switch between multiple selves ("dividuals") through avatars and actively design how their inner self interfaces with society. Yet the identity verification, economic, and taxation mechanisms needed to turn this into a new economic sphere are still completely undeveloped. On 3/29, Natsuhiko Sakimura, chairman of the OpenID Foundation and a long-time contributor to standardizing digital identity on the internet, and metaverse resident, VTuber, and author Nem will debate this problem head-on on YouTube Live! Could the revolution of the next-generation internet start here!? (Source: [Announcement] 3/29 "Dividual Economy Revolution" Nem x Natsuhiko Sakimura in-depth conversation [How to start the revolution that opens the door to the next-generation internet])

See you on the day!

[Update: the stream has concluded]

It was great fun. I have put together slides covering the points that came up during the conversation; you can view them via the link below.

A Study on the Potential of the Dividual Economy Revolution and How to Realize It

Saturday, 15. March 2025

The Tao of XDI

Recognition, Identity, and AI: Building Trust in Digital Agents

My first love was self-sovereign distributed data, where each person owns and controls their data, hosting it wherever they choose and permissioning it under their own terms. But I got lost in the complexity of building a robust distributed identity infrastructure. How can you give permission to someone if you can’t 'name' them in a way that is verifiable and resistant to subversion? There's no po

My first love was self-sovereign distributed data, where each person owns and controls their data, hosting it wherever they choose and permissioning it under their own terms. But I got lost in the complexity of building a robust distributed identity infrastructure. How can you give permission to someone if you can’t 'name' them in a way that is verifiable and resistant to subversion? There's no point in saying "only John can access this" if Tom can show up and convincingly say, "I'm John."

This issue isn’t theoretical—many modern digital problems stem from weak identity foundations. Take email, for example. SMTP, the core protocol, evolved without a strong sense of identity. Had we designed email with a robust identity layer—and maybe a little reputation—spam might have been less rampant. Instead, we've had to patch identity onto email systems, mostly at the DNS layer. Could better choices early on have changed the landscape of digital trust?

As we enter the era of AI and Personal AI, this challenge resurfaces. We will increasingly rely on agents to interact, assist, and even make decisions on our behalf. But how can we trust these agents? How do we know they are who they claim to be, and whose interests they truly serve? When I ask my AI how to unwind after a long day, it might suggest a refreshing Diet Coke. But is that suggestion rooted in understanding my preferences, or is it influenced by unseen commercial incentives?

Recognition and Identity in AI

In the animal world, intelligence is often measured by the ability to recognize oneself and others. The mirror test is a classic example—when an animal identifies itself in a reflection, it demonstrates a form of self-awareness. Similarly, recognizing specific others—distinguishing one individual from another—marks advanced cognitive development.

AI, in contrast, remains limited in this capacity. While AI excels at pattern recognition, it lacks the ability to form a persistent sense of identity, either of itself or others. This limitation restricts its ability to build trust and context in interactions. Without a foundation for recognizing specific entities, AI systems risk becoming tools of confusion or exploitation.

Embedding Identity Systems into AI

One solution is to deeply embed identity frameworks into AI architectures from the outset. Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and similar systems could provide AI with a structured way to "recognize" and differentiate entities.

Persistent Identity Chains: AI could track verifiable chains of identity, ensuring that when it reports information—like "Brad says buy this stock"—it can verify that it truly came from the Brad you trust.

Verification of Origin: By leveraging cryptographically verifiable credentials, AI can ensure that information hasn’t been tampered with and originates from a trusted source.

Reputation Frameworks: Identity systems could incorporate reputation mechanisms, helping AI prioritize information from sources that consistently meet a trust threshold.

Chain of Custody: AI could provide transparency on how information was received and processed, ensuring that its recommendations are based on data with verifiable origins.
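
As a toy illustration of the "verification of origin" idea, here is a hypothetical Python sketch using the cryptography library and an Ed25519 key pair: the assistant only relays the claim "Brad says buy this stock" if the signature verifies against the public key it already trusts for Brad. The names and the key-distribution step are assumptions for the example; a real system would resolve keys from DID documents or verifiable credentials rather than generating them in place.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real system, Brad's public key would be resolved from his DID document
# or a verifiable credential; here we generate a key pair just for the demo.
brad_private_key = Ed25519PrivateKey.generate()
brad_public_key = brad_private_key.public_key()

# Brad signs a statement he actually made.
statement = b"Brad says: buy this stock"
signature = brad_private_key.sign(statement)

def relay_if_authentic(message: bytes, sig: bytes, trusted_key) -> str:
    """The assistant repeats a claim only if its origin can be verified."""
    try:
        trusted_key.verify(sig, message)  # raises InvalidSignature if forged or altered
        return f"Verified as coming from Brad: {message.decode()}"
    except InvalidSignature:
        return "Discarded: could not verify that this really came from Brad."

print(relay_if_authentic(statement, signature, brad_public_key))                      # relayed
print(relay_if_authentic(b"Brad says: sell everything", signature, brad_public_key))  # discarded
```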

The Path to Trusted AI

Trustworthy AI isn’t about making machines socially aware; it’s about ensuring that humans can trust the chain of custody behind AI-generated insights. When AI states that "Brad recommends this action," it should be able to prove that the recommendation came from the right "Brad"—the person you trust, not an imposter or manipulated data source.

The real question is: How do we create systems where AI is not just technically accurate but verifiably trustworthy? In an era where decisions increasingly rely on AI advice, embedding identity systems at the core isn’t just beneficial—it’s fundamental.

Thursday, 13. March 2025

The Pragmatic Engineer

The Pulse #127: Typescript compiler rewritten in Go

Why did Microsoft choose a language developed by Google, instead of its own, C#? Also: OpenAI’s dev tools for agents looking complex, the PR/FAQ for AWS Lambda, and more

The Pulse is a series covering insights, patterns, and trends within Big Tech and startups. Notice an interesting event or trend? Send me a message.

Today, we cover:

Industry pulse. Google’s impressive AI launches, AI agent Manus makes a splash, companies evaluating AI models internally, Cursor worth $10B, Klarna’s upcoming IPO, and more.

Typescript compiler rewritten in Go. In an impressive feat, the Typescript team rewrote their compiler, and made it 10x faster. But why use Go for the job? It’s a curious choice, given Microsoft’s heavy investment in C# – which is also a performant language. Engineers who executed the rewrite explain their unconventional choice.

OpenAI’s dev tools to build agents: more complexity? OpenAI shared its tooling for building AI agents. My first impressions are that it feels pretty fragmented. Perhaps this is to be expected with fast-moving technology, where capabilities keep being added?

Apple and AI: clumsy fumbling, or a long-term strategy? Apple promised a new, improved Siri by the end of 2024, yet that could now be on hold until 2027. Are things less bad than they seem, given that Apple owns the iPhone hardware and the iOS software platform?

Second chances for interview no-shows? A candidate did not show up for a product engineer interview, but their AI note taker did. As a hiring manager, what would you do? Cofounder Diwakar Kaushik did something unexpected.

Industry Pulse Google’s impressive AI launches

Google has had an eventful few weeks with new AI products:

Gemma 3: A model that runs on just a single NVIDIA H100 GPU and requires around 10x less compute than models with similar capability is impressive – and it will surely gain adoption thanks to its generally permissive license, the fact that it can be self-hosted, and its support for 140 languages (!!). As a note, Gemma 3 is kind of open, but not fully open: the license allows commercial use, but restricts certain use cases, such as uses that violate applicable law or that appear in its prohibited-use policy.

Gemini Robotics: a vision-language-action model that can directly control robots, built on top of Gemini 2.0. This 3-minute video summarizes the impressive capabilities.

A robot arm can interpret and execute a command, even while an operator moves objects around. Source: Google

Google is getting a lot less of the AI spotlight than OpenAI and Anthropic, but seems to be pulling ahead in other areas. It could become a pioneer in robotics combined with AI, and there are also Waymo's self-driving cars, which is also part of Google. Is Google quietly becoming an ML and AI leader under the radar?

AI agent Manus makes a splash

Read more

Wednesday, 12. March 2025

@_Nat Zone

GOV.UK Wallet and One Login

What is GOV.UK Wallet?

GOV.UK Wallet is a digital document management app under development by the UK government. It first came to light on January 21 in the context of digital driving licences, and further details were announced on February 5, 2025 by Peter Kyle, Secretary of State for Science, Innovation and Technology. The announcement is part of the UK government's drive toward digitalization and aims to introduce a new app that lets people carry digital versions of identity documents on their smartphones. Using the app, government-issued documents can be stored securely on a smartphone and presented when needed.

Its main features are:

• Securely stores government-issued digital documents
• Can be used to prove age, identity, and eligibility for services
• Can be used in the same way as paper documents or physical cards
• Documents not issued by the government (such as train or concert tickets) cannot be added
• Use is optional

GOV.UK Wallet is expected to launch during 2025. The first document to become available will be the HM Armed Forces Veteran Card, with other government documents such as driving licences to be added later.

On the security side, access requires GOV.UK One Login, which prevents other people from using or accessing your documents. The wallet is also expected to make use of security features built into smartphones, such as facial recognition.

What is GOV.UK One Login?

GOV.UK One Login is an online authentication system provided by the UK government. It lets users access multiple government services with a single account. Its main features are:

• Access to multiple government services with a single email address and password
• Stronger security through two-factor authentication
• Identity-proofing functionality to verify who the user is
• Simpler, more efficient access to government services

At present, GOV.UK One Login can be used with only some government services, but it is expected to eventually cover all services on GOV.UK. Services available today include applying for a veteran card, import/export licences, and teacher qualifications.

Users can manage their own information and see which services they have used through GOV.UK One Login. They can also choose whether to receive security codes by text message or via an authenticator app.

GOV.UK One Login is delivered by the Government Digital Service (GDS), with the Cabinet Office acting as data controller. The system aims to improve user convenience and simplify access to government services.

The protocol behind GOV.UK One Login

GOV.UK One Login is a service that conforms to OpenID Connect (OIDC). OpenID Connect is an authentication layer built on top of the OAuth 2.0 protocol, extending OAuth 2.0 to provide authentication information in a standardized way.

Specifically, GOV.UK One Login has the following characteristics:

• OpenID Connect compliance: uses the OIDC protocol for user authentication and attribute sharing.
• Authentication flow: implements the standard OAuth 2.0 / OIDC authorization code flow, which includes these steps: an authorization request to the /authorize endpoint; issuance of an authorization code; exchanging the code at the /token endpoint; obtaining an ID token and access token; and retrieving user information from the /userinfo endpoint.
• Stronger security: uses JWT-secured OAuth 2.0 Authorization Request (JAR) to improve the security of the authentication request parameters.
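
To make the flow above concrete, here is a minimal, hypothetical Python sketch of the authorization code flow. The issuer URL, client identifiers, and secret are placeholders rather than real GOV.UK One Login values, and a production integration would also use a JAR-signed request object, private_key_jwt client authentication, and full ID token validation, all of which are omitted to keep the sketch short.

```python
import secrets
from urllib.parse import urlencode

import requests

# Placeholder values; a real integration uses the issuer and client registration
# details provided by the identity provider.
ISSUER = "https://oidc.example.gov.uk"
CLIENT_ID = "my-client-id"
REDIRECT_URI = "https://relying-party.example/callback"

# Step 1: send the user's browser to the /authorize endpoint.
state = secrets.token_urlsafe(16)   # protects against CSRF
nonce = secrets.token_urlsafe(16)   # binds the ID token to this login attempt
auth_url = f"{ISSUER}/authorize?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "openid email",
    "state": state,
    "nonce": nonce,
})
print("Redirect the user to:", auth_url)

# Step 2: the authorization server redirects back with ?code=...&state=...
code = "authorization-code-from-callback"  # placeholder

# Step 3: exchange the authorization code at the /token endpoint.
token_response = requests.post(f"{ISSUER}/token", data={
    "grant_type": "authorization_code",
    "code": code,
    "redirect_uri": REDIRECT_URI,
    "client_id": CLIENT_ID,
    "client_secret": "my-client-secret",  # simplified; One Login uses private_key_jwt
}).json()
id_token = token_response["id_token"]          # who the user is (must be validated)
access_token = token_response["access_token"]  # used for the userinfo call

# Step 4: fetch user attributes from the /userinfo endpoint.
userinfo = requests.get(
    f"{ISSUER}/userinfo",
    headers={"Authorization": f"Bearer {access_token}"},
).json()
print(userinfo)
```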

Incidentally, since I am the primary author of both OpenID Connect and JAR, this is rather gratifying.


The Pragmatic Engineer

Developer Experience at Uber with Gautam Korlam

Gautam Korlam, former principal engineer at Uber and co-founder of Gitar, joins the podcast to discuss scaling engineering teams, the challenges of monorepos, and how AI is reshaping developer productivity.
Stream the Latest Episode

Listen and watch now on YouTube, Spotify and Apple. See the episode transcript at the top of this page, and a summary at the bottom.

Brought to You By

Sentry — Error and performance monitoring for developers.

The Software Engineer’s Guidebook: Written by me (Gergely) – now out in audio form as well.

In This Episode

In today’s episode of The Pragmatic Engineer, I am joined by former Uber colleague, Gautam Korlam. Gautam is the Co-Founder of Gitar, an agentic AI startup that automates code maintenance. Gautam was mobile engineer no. 9 at Uber and founding engineer for the mobile platform team – and so he learned a few things about scaling up engineering teams.

We talk about:

• How Gautam accidentally deleted Uber’s Java monorepo – really!

• Uber's unique engineering stack and why custom solutions like SubmitQueue were built in-house

• Monorepo: the benefits and downsides of this approach

• From Engineer II to Principal Engineer at Uber: Gautam’s career trajectory

• Practical strategies for building trust and gaining social capital

• How the platform team at Uber operated with a product-focused mindset

• Vibe coding: why it helps with quick prototyping

• How AI tools are changing developer experience and productivity

• Important skills for devs to pick up to remain valuable as AI tools spread

• And more!

Takeaways

Interesting parts of the conversation:

1. Submit Queue: Uber built a complex merge system to deal with the large number of commits, where each commit had to run long-running CI tests. It’s a problem that smaller and mid-sized companies don’t have, but Uber had: and so they scratched their own itch.

2. Local Developer Analytics (LDA): years ago, Uber started to measure the experience that devs had. Things like: how long did a build take, locally? How much CPU is used? They used this data to improve internal tooling.

3. Developer experience as a product team. Gautam’s team operated like a classic product team: except their customers were Uber’s internal developers. Gautam believes this is how all successful platform teams should work.

4. AI changing software development: this is happening. “Vibe coding” leads to faster prototyping. Gautam believes junior engineers will thrive with AI tools because they will hit the ground running faster, and will be free of biases that hold back more experienced developers.

The Pragmatic Engineer deepdives relevant for this episode

The Platform and Program split at Uber

How Uber is measuring engineering productivity

Inside Uber’s move to the Cloud

How Uber built its observability platform

Software Architect Archetypes

Timestamps

(00:00) Intro

(02:11) How Gautam accidentally deleted Uber’s Java Monorepo

(05:40) The impact of Gautam’s mistake

(06:35) Uber’s unique engineering stack

(10:15) Uber’s SubmitQueue

(12:44) Why Uber moved to a monorepo

(16:30) The downsides of a monorepo

(18:35) Measurement products built in-house

(20:20) Measuring developer productivity and happiness

(22:52) How Devpods improved developer productivity

(27:37) The challenges with cloud development environments

(29:10) Gautam’s journey from Eng II to Principal Engineer

(32:00) Building trust and gaining social capital

(36:17) An explanation of Principal Engineer at Uber—and the archetypes at Uber

(45:07) The platform and program split at Uber

(48:15) How Gautam and his team supported their internal users

(52:50) Gautam’s thoughts on developer productivity

(59:10) How AI enhances productivity, its limitations, and the rise of agentic AI

(1:04:00) An explanation of Vibe coding

(1:07:34) An overview of Gitar and all it can help developers with

(1:10:44) Top skills to cultivate to add value and stay relevant

(1:17:00) Rapid fire round

A summary of the conversation Uber's engineering culture and tools

Gautam joined Uber in 2014 as an Android engineer. Back then, there were not even unit tests. Gautam wrote the first Android test, and set up Artifactory.

Uber built much of its engineering stack in-house because the cloud-native SaaS products were not built for their scale.

Even in the earlier years, Uber saw about one commit every minute – and the platforms at the time could not handle this (especially when considering that CI took 10-30 minutes to run!)

Build time was a problem. During the iOS and Android app rewrite in 2016, build times became very long. Gautam worked on getting it under control.

Submit Queue was a way to guarantee a green main. It serialised incoming commits to ensure they played nicely together. The company published a paper on this novel solution. Submit Queue tested changes and considered cross-dependencies between different commits. Machine learning models estimated potential failures and speculatively tried paths that might be green.
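
For intuition only, here is a hypothetical Python sketch of the core serialization idea behind a submit queue: every pending change is tested against the latest green state of the main branch before it is allowed to land, so the main branch never goes red. Uber's actual Submit Queue adds speculative execution and ML-based conflict prediction on top of this, which the sketch does not attempt to model.

```python
from collections import deque

def run_ci(candidate_tree: frozenset) -> bool:
    """Placeholder for a real CI run (build plus tests) on a candidate tree."""
    return "breaks_build" not in candidate_tree

def submit_queue(main: frozenset, pending: deque) -> frozenset:
    """Land changes one at a time, each tested against the current main."""
    while pending:
        change = pending.popleft()
        candidate = main | {change}      # the change rebased onto the latest main
        if run_ci(candidate):
            main = candidate             # merge: main stays green
            print(f"merged:   {change}")
        else:
            print(f"rejected: {change}") # author must fix and resubmit
    return main

main = submit_queue(frozenset({"initial_code"}),
                    deque(["feature_a", "breaks_build", "feature_b"]))
print("main now contains:", sorted(main))
```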

Monorepo: after starting as polyrepos, Uber moved to monorepos

Uber started with separate repos for Rider, Driver, and other apps (eg Eats). As the company grew, each team wanted its own repository, resulting in hundreds of repos. It became painful to upgrade and bump libraries.

The iOS team moved to a monorepo, followed by Android. The gains in productivity were massive because there was no need to bump libraries. This move really helped with standardisation.

The biggest initial pushback was that teams could no longer break an API without worrying about the consequences. There was concern about slower builds. In the end, the dev platform team solved these concerns.

Developer experience and dev productivity

Local Developer Analytics (LDA): an internal Uber system that ran on developers' machines and collected information about their systems. It integrated deeply into CLI tools and IDEs, tracked which files developers accessed most, and identified files with the most bugs. LDA helped identify bottlenecks in the development funnel.

Developer surveys: Uber ran these regularly. The dev tooling NPS went from negative 50 to positive 8 during Gautam’s tenure.

Things that made for a better developer experience: minimizing time to review code, time to build code, and reducing time spent in meetings.

DevPods: another internal Uber tool. These are basically cloud development environments: a container holding the code, build system artefacts, and IDE indices in the cloud. DevPods make context switching quick.

Previously, onboarding involved running a bootstrapping script that was hard to maintain. DevPods moved the development environment into a container. The containers can be huge.

Career growth to Principal Engineer

Going deep: getting into a niche and going deep can help over the long term, especially in areas others may not want to do.

Introspect regularly: Every two years, Gautam did some introspection to see if he was doing what he wanted and what could challenge him more.

Social capital and mentorship: these become very important at a big company. It helps to have connections. Helping people builds social capital. Gautam would drop everything to help people with their dev environment problems. He also held office hours on a regular basis, offering to help anyone who showed up.

Understand the business: Principal engineers need to understand how engineering meets the business, rather than focusing on pure coding. It helps if you enjoy diving into this area – and if you like talking with people!

More of a peer relationship with your manager. As engineers grow in seniority, they become more like a peer to their manager and help their manager get stuff done; the relationship is more “peer” than “boss”.

Tip for managers of senior+ devs: give them agency, check in often, and make sure they are unblocked.

The impact of AI on software development

Autocomplete: an obvious use case. It helps one type less and think more.

“Vibe coding”: AI allows you to explore more paths and experiment faster.

How AI impacts engineers

It may be controversial, but Gautam believes that junior engineers are going to thrive because they are coming up with new knowledge and new ways of working with AI tools. They do not have the biases of working a particular way.

Looking ahead, the “generalist engineer” is going to be more in demand.

CS knowledge remains important.

When things go wrong, it is important to understand why they went wrong. This requires strong computer science fundamentals and system knowledge.

Where to find Gautam Korlam:

• X: https://x.com/kageiit

• LinkedIn: https://www.linkedin.com/in/gautamkorlam/

Mentions during the episode:

• Bypassing Large Diffs in SubmitQueue: https://www.uber.com/blog/bypassing-large-diffs-in-submitqueue/

• Jenkins: https://en.wikipedia.org/wiki/Jenkins_(software)

• Devpods: https://www.uber.com/blog/devpod-improving-developer-productivity-at-uber/

• JetBrains: https://www.jetbrains.com/

• Cloud Development Environments: https://newsletter.pragmaticengineer.com/p/cloud-development-environments

• Why are Cloud Development Environments Spiking in Popularity, Now?: https://blog.pragmaticengineer.com/why-are-cloud-development-environments-spiking-in-popularity-now/

• “The Coding Machine” at Meta with Michael Novati: https://newsletter.pragmaticengineer.com/p/the-coding-machine-at-meta

• Software Architect Archetypes: https://newsletter.pragmaticengineer.com/p/software-architect-archetypes

• The Platform and Program Split at Uber: A Milestone Special: https://newsletter.pragmaticengineer.com/p/the-platform-and-program-split-at

• What is Vibe Coding? How Creators Can Build Software Without Writing Code: https://alitu.com/creator/workflow/what-is-vibe-coding/

• WhatsApp: https://www.whatsapp.com/

• Rust: https://www.rust-lang.org/

• I am excited to introduce Jimy by Gitar - The agentic AI for building better software: https://www.linkedin.com/posts/gautamkorlam_i-am-excited-to-introduce-jimy-by-gitar-activity-7297713117927481344-0G4l/

• Cursor: https://www.cursor.com/

• Claude: https://claude.ai/

• Deepseek: https://www.deepseek.com/

• Head First Design Patterns: A Brain-Friendly Guide: https://www.amazon.com/Head-First-Design-Patterns-Brain-Friendly/dp/0596007124

Production and marketing by Pen Name. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.


@_Nat Zone

If you think identity is something the state or government gives you, sit up straight and read this. Utah's digital ID bill SB260: the transformation brought by a user-centric digital ID system

As digital technology evolves, we are becoming ever more dependent on personal information and data. Yet many government-provided ID systems do not give enough consideration to users' privacy and freedom. Against that backdrop, Utah's SB260 has attracted attention for taking a different approach and fundamentally strengthening privacy protection and individuals' control over their own data (see Chang (2025) and others). This post examines its characteristics and how it differs from other digital ID systems.

What makes SB260 unique

SB260 differs from other government-provided ID systems in the following respects.

Respect for individual autonomy
SB260 states explicitly that "the state does not define an individual's identity" (§ 63A-16-1202), and that "the state may, in certain circumstances, recognize and endorse an individual's identity." Under most traditional government ID schemes, the government holds the authority to define and recognize a person's identity, so the individual's existence depends on government approval. SB260's approach is fundamentally different: it recognizes that people already exist in their own right, and the state's role becomes that of a supporter, helping individuals be recognized and manage the state's endorsement in the manner prescribed by law. SB260 places the emphasis on individual autonomy. Under the former approach, a person's existence can effectively be denied simply by not being recognized by the government; under the latter, that cannot happen.

Prohibition of surveillance and data sharing
SB260 strictly prohibits surveillance and information sharing by government agencies and other parties when a digital ID is presented. For example, a business that receives a presented ID may not use that information for marketing or surveillance. This prevents user data from being shared unnecessarily and promotes transparency in how data is handled.

Selective disclosure
Most physical identity documents disclose everything on them, such as address and age, but SB260 calls for adopting technology that enables "selective disclosure." For example, when age verification is required, there is no need to share one's actual birth date or address. This prevents unnecessary leakage of personal information.
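
To give a flavor of how selective disclosure can work technically, here is a hypothetical Python sketch in the spirit of salted-hash disclosure schemes such as SD-JWT: the issuer commits to a salted hash of each attribute, and the holder later reveals only the attributes a verifier actually needs, for example an over-18 flag, without exposing address or birth date. This is a simplification for illustration, not a mechanism mandated by SB260, which is technology-neutral; the issuer's signature over the commitments is also omitted here.

```python
import hashlib
import secrets

def issue(attributes: dict) -> tuple[dict, dict]:
    """Issuer side: produce a salted commitment for each attribute.
    (In a real credential, the commitments would be signed by the issuer.)"""
    disclosures, commitments = {}, {}
    for name, value in attributes.items():
        salt = secrets.token_hex(16)
        disclosures[name] = (salt, value)  # kept by the holder
        commitments[name] = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
    return disclosures, commitments

def present(disclosures: dict, reveal: list) -> dict:
    """Holder side: reveal only the requested attributes."""
    return {name: disclosures[name] for name in reveal}

def verify(presented: dict, commitments: dict) -> bool:
    """Verifier side: check each revealed value against the issuer's commitment."""
    return all(
        hashlib.sha256(f"{salt}:{value}".encode()).hexdigest() == commitments[name]
        for name, (salt, value) in presented.items()
    )

disclosures, commitments = issue({"name": "Alice", "over_18": True, "address": "..."})
presentation = present(disclosures, ["over_18"])   # share only the age predicate
print(verify(presentation, commitments))           # True, with address and birth date undisclosed
```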

No forced handover of devices
SB260 provides that individuals cannot be compelled to hand over their mobile device when their digital identity is verified. This rules out law enforcement or other government officials demanding access to the device and guarantees protection of the personal data stored on it. This is quite different from regimes in which the verifier can take custody of the device and do whatever they like with it.

Use of digital identity is entirely optional
Unlike other identity systems, SB260 stresses that using a digital identity is completely voluntary. Government agencies cannot mandate digital identity or offer incentives to encourage its use. This shows consideration for residents who are unfamiliar with the technology or who do not wish to use digital tools.

Compared with other ID systems: what is different?

With many government-provided IDs, surveillance and data collection are a concern. In some systems, personal information can end up being used for marketing or data analysis; SB260 explicitly prohibits this. In other systems, a digital ID may be made mandatory, leaving users with no choice. SB260, by contrast, is entirely optional, and its user-centric design stands out.

As a result, SB260 is attracting attention as a forward-looking digital ID system that puts privacy first and gives users full control over their data. This user-driven approach could influence digital ID development not only across the United States but around the world.

Conclusion: a new reference point for the digital age

Utah's SB260 can be called an important milestone in shaping the future of digital ID. By comprehensively guaranteeing individual autonomy, privacy protection, and freedom of choice, the bill sets itself apart from other government-provided ID systems.

As for the drafting itself, there are points that look questionable from a standards perspective, such as defining identity as "'Identity' means any attribute used to identify or distinguish a specific individual," or the statement "(a) each individual has a unique identity." Even so, SB260 not only sets a new bar for individual freedom and data control; it is likely to become widely recognized as a model of technology and law working together to realize a user-first world, and to influence how the many jurisdictions now institutionalizing digital ID think about safe and trustworthy ID systems in the digital age. It will be worth watching how other US states respond.

(References)
Chang, Wayne. (2025). Utah's Digital ID Bill SB260 is the New Frontier for User-Controlled Identity. SpruceID. https://blog.spruceid.com/utahs-digital-id-law-sb260-is-the-new-frontier-for-user-controlled-identity/
Utah State Legislature. (2025). S.B. 260 Individual Digital Identity Amendments. https://le.utah.gov/~2025/bills/static/SB0260.html

Tuesday, 11. March 2025

The Pragmatic Engineer

AI fakers exposed in tech dev recruitment: postmortem

A full-remote security startup nearly hired a backend engineer who doesn’t exist, after a candidate used an AI filter as an on-screen disguise in video interviews. Learnings for tech companies

Imagine you’re the cofounder of a startup hiring its first few software engineers, and there’s a candidate who smashes the technical interview, and is the only one to complete the full coding interview – doing so with time to spare. Their communication style is a bit unconventional, but this could be explained by language differences, and isn’t a red flag. So, the promising candidate gets a thumbs up, and pretty soon they’re on screen in a non-technical final interview with the other cofounder, via video.

Then things get weird. Your cofounder pings you mid-interview to report that the candidate from Poland speaks no Polish whatsoever, and also that there is something just not right about their appearance on screen. The recruitment of this candidate gets derailed by suspicious details which erode everyone’s confidence in making the hire. Soon afterwards, the same thing happens again with another candidate.

Later, you discover that two imposters hiding behind deepfake avatars almost succeeded in tricking your startup into hiring them. This may sound like the stuff of fiction, but it really did happen to a startup called Vidoc Security, recently. Fortunately, they caught the AI impostors – and the second time it happened they got video evidence.

To find out more about this unsettling, fascinating episode from the intersection of cutting-edge AI and tech recruitment, the Pragmatic Engineer sat down with Vidoc cofounder, Dawid Moczadło. Today, we cover:

The first AI faker. Vidoc Security nearly made an offer to a fake candidate, but their back story raised too many questions.

The second AI faker. The next time a job candidate raised similar suspicions, the team was prepared and confronted the faker during the call – and recorded it.

How to avoid being tricked by AI candidates. Take the risk seriously, have applicants turn off video filters and verify that they do this, record interviews, and get firm proof of identity before any offer.

Foreign state interference? There’s evidence that many AI candidates could be part of a coordinated governmental operation targeting hundreds of western tech businesses. Full-remote workplaces are the most targeted.

Return of in-person final rounds? This looks an obvious consequence of these incidents.

New AI risks for tech businesses. Remote interviews may have to change, while devs also risk introducing security vulnerabilities by accepting AI suggestions without critique.

Hiring funnel. The story began with this job posting for a backend engineer. The startup shares its hiring funnel, giving a sense of how competitive full-remote startup positions are, currently. As context: from 500 applications, two hires have been made – and Vidoc is still actively recruiting for this position.

Catching an AI imposter: the candidate (left) refused to place their hand in front of their face because it would blow their AI cover. On the right, the interviewer illustrates the request

Since every candidate in this article is a cheater with an AI-generated mask of a different face, and a false professional identity, we share all the made-up resumes, CVs, videos, and photos, to give a sense of how things played out. If you’re currently hiring, or plan to, the nature and sophistication of the fake-applicant scam targeting this startup provides food for thought.

For more tips in detecting fake applicants, you can also check this handy PDF guide, created by the Vidoc engineering team.

1. The first AI faker

Vidoc Security is a security provider offering automated code reviews to detect security issues. The idea for the company came from two security engineers and ethical hackers, Dawid Moczadło and Klaudia Kloc. Previously, they hacked top tech companies like Meta and then disclosed vulnerabilities to those places in order to collect bounties, and get on those companies’ ethical hacking leaderboards.

With the rise of LLMs in the past couple of years, Dawid and Klaudia spotted an opportunity to create a tool that works in the same way as they searched for security vulnerabilities: looking across the broader codebase, checking how components interact, which parts could be insecure, and more. Basically, an LLM can take their expertise about what works to hack well-designed systems, and build a tool with some of the same know-how.

Their idea attracted investors, and Dawid and Klaudia raised a $600K seed round in 2023, and a further $2.5M in seed funding in August 2024. With this seed funding in the bank: the company began hiring. They posted a job ad for a backend engineer, and started to interview candidates. (We share details on the exact hiring funnel and statistics below, in “The hiring funnel”)

One promising candidate was called Makary Krol. His LinkedIn profile is still active:

Fake profile. Source: LinkedIn

Below is a step-by-step summary of how the recruitment process for the imposter candidate went, based on Vidoc Security's records, including the bogus resume. By the end of step 5, the team were certain they were the target of a scam.

1. Resume screening: ✅Resume looks solid:

Page 1 of 2 from the fake resume. The resume looked convincing
Page 2 of 2 of the fake, but convincing resume

2. First-round screening. ✅⚠️ A 15-30 minute call with Paulina, head of operations. It was a bit odd that the candidate did not speak any Polish, despite being based there and having graduated from Warsaw University of Technology. He spoke in broken English with a very strong accent that sounded Asian, but these weren’t glaring red flags, and the candidate sounded motivated.

3. Hiring manager interview. ✅⚠️ The candidate was clearly well-rounded and a technical screening was the obvious next step. Dawid’s only “yellow flag” was that their communication skills were poor, but he figured a technical interview would be a chance to show their core coding and technical skills.

4. Technical interview. ✅✅The candidate absolutely smashed it, being the first to finish all coding tasks and follow-up questions in the allocated time of 2 hours, which hadn’t happened before. Dawid was surprised by how competent they were at coding and technical problem solving. The coding abilities of this candidate were definitely not fake: they were a seasoned, very capable engineer.

5. Final hiring manager interview. ‼️⚠️ This was a non-technical interview with cofounder, Klaudia, who dug into the specifics of their background and grew suspicious. The candidate gave some details about previous positions, but she increasingly found herself disbelieving their back story and resume, the more time she spent with the candidate. Dawid shared the suspicion and they became certain that the figure on screen was far from what they claimed to be.

By the end of the recruitment process, Vidoc believed they had nearly been played, and had come worryingly close to extending an offer to a fake candidate using a false identity in their documents, and an AI filter to mask their face on screen. However, they had no evidence of this, and didn’t record the interviews, so had only their impressions and notes.

When Dawid recounted this episode to peers, he was met with disbelief. Founder friends – along with most other people – thought the team were overreacting and misguided. Eventually, Dawid stopped sharing the story and began to doubt the team’s suspicions about the candidate.

After the incident, Vidoc added an onsite interview as a final step in the recruitment process, and held a retrospective to figure out how to avoid something similar again. Interestingly, there wasn’t much that could be done about the potential risks of remote interviews being taken by applicants pretending to be someone else.

2. The second AI faker

Two months after the suspected fake job candidate episode, another promising applicant appeared, going by the name Bratislav Rakočević, located in Serbia, with a university education there. His LinkedIn was pretty basic:

Another entirely made-up profile. Source: LinkedIn

This application looked good, so the company ran the usual recruitment process, below (including the candidate’s made-up resume):

1. Resume screening: ✅⚠️ Solid on technologies, but with too many buzzwords for their current, self-employed role. Of course, it’s not unusual that some devs don’t thrive at writing resumes. Their credentials merited a call, especially as the “candidate” had “worked” at HubSpot and Woosh:

Page 1 of 2 of another resume that later turned out to be fake
Page 2 of 2 of another resume that turned out to be fake

2. First-round screening. ✅⚠️ Communication was a bit choppy — then again, we’re talking about a non-native English speaker. The candidate did not speak Serbian, despite graduating from the University of Kragujevac, in Serbia. Once again, they sounded motivated, so they proceeded to the next round.

3. Hiring manager interview: AI filter exposed‼️‼️Dawid conducted this interview, and immediately felt that something was off about the candidate. First of all, Dawid felt the candidate on the call looked somewhat unlike their low-resolution LinkedIn profile:

First suspicion: Dawid felt the interviewee looked dissimilar to their profile picture

As a rule, Vidoc never records interviews, but the team had been burnt once before and had faced scepticism about it from peers afterwards. So he hit record, and asked the candidate to make a simple gesture which bamboozles the current crop of AI filters:

“Can you take your hand and put it in front of your face and cover it partially?”

Candidate (on the left) declines to place hand in front of face as the interviewer requests (right)

The candidate refused, so Dawid ended the conversation. Watch the full video of the encounter.

Later, several people pointed out that the AI mask looked uncannily similar to a prominent Polish politician named Sławomir Mentzen:

Was this deepfake trained on video footage of a Polish politician?

Looking back on these twin incidents, the team suspects the same individual may have been behind the AI disguises of both fake candidates because:

Their voices sounded almost identical

The second candidate answered questions with surprising confidence – almost like he’d been asked them before

…a hunch that they had spoken before

The AI filter for the second faker was of much lower quality than the first. The first candidate slipped undetected through several interviews, despite almost certainly using an AI filter as well. Dawid said the second candidate’s filter looked obvious on the screen – but on that occasion the Vidoc Security team were on high alert for any possible deception. It’s worth remembering that AI filters improve with time, so spotting a good one won’t always be easy.

3. How to avoid being tricked by AI candidates

How can tech companies hiring engineers protect themselves, and prevent AI-disguised candidates? Some suggestions:

This is not a drill; take the threat seriously: If you think your company is too small to fall victim, think again. Vidoc Security was just a two-person startup in February 2023 with no funding raised, when it was first targeted. The first fake profile cold emailed them on LinkedIn to ask if they were hiring.

It’s possible there is a large, state-level operation running a pool of bogus applicants who hide behind fake resumes and AI filters in video interviews. If this operation can find the smallest of startups to target, it’s safe to assume they could apply to any other company, as well.

Don’t assume all job seekers on local job boards are genuine. Vidoc posted on local job boards, like the one in Poland. But whoever was behind this operation figured out that it’s profitable to pose as Serbian, Polish, and other eastern European profiles, and to apply via small, low-profile job boards. I didn’t even know about the job board “JustJoinIT” (popular in Poland) but the fakers were already active on it.

Ask candidates to turn off all filters – including background ones. Filters that blur or replace the background are ubiquitous these days. During a video interview, it’s fair to ask a candidate to turn off all filters. After that, if there are still unusual minor details on their face or the background, then it’s safe to assume that a candidate did not turn off their filters. In an interview for a full-remote position, this could be justification enough to terminate the call.

Ask the candidate to do things that AI filters cannot handle – yet, such as:

A face turning side to side

A hand in front of a face

Standing up and walking back from the camera, showing more body detail

In the circumstances, it should be justifiable to run through some or all of these tests, as well as asking for all filters to be turned off. Stu Sjouwerman, CEO of KnowBe4, shared additional details on what his company learned from hiring a North Korean worker with a false identity:

Do not rely on email references alone. For reference checks, also do phone calls or video calls with references.

Absence of a digital footprint is suspicious. If there’s no trace of a candidate anywhere online, it could be a flag.

Use a phone carrier lookup tool before calling a candidate. Many fake candidates provide a VoIP number. Treat any such number as a red flag and refuse to call candidates on it.

Conduct all interviews using video. Fake candidates can slip through written application processes more easily.

Expect identity theft. In the case of KnowBe4, the identity of the person they hired was real and stolen, meaning that a background check came back clean.

Put better internal monitoring in place. Monitor access to systems by onboarded employees – and pay extra attention to new joiners.

When suspicious, ask lowkey “cultural” questions. For example, if a candidate claims to be from Seattle but there is reason to suspect otherwise, ask a question like: “I see you are from Seattle. What’s your favorite place to eat, and what do you usually get?” Faking this knowledge in a convincing way is hard for someone who has never been to Seattle and doesn’t know the local cuisine.

Different shipping address for laptop. If a candidate asks for an onboarding laptop to be shipped to a different address from where they are supposed to be living or working, it’s a red flag.

Additional tips:

Record video interviews and save them for later. AI-disguised candidates might be hard to spot in a live scenario, but analyzing the video later could be helpful. Note that recording the interview will most likely require disclosing this to a candidate, and them agreeing to this.

Get notarized proof of identity. This advice comes from Google, after they saw hundreds of US companies fall victim to North Korean workers by hiring them for tech roles.

Vidoc also created a guide to detect fake candidates in your hiring pipeline: see this PDF document here.

4. Foreign state interference?

So, who or what might be behind these two incidents at Vidoc; a lone individual or individuals, or something else entirely?

Read more


Wrench in the Gears

On Hiatus – Look For Me In The Garden

Today I realized that even though the Web3 program is ramping up as demonstrated by Silicon Valley’s plans to turn Argentina into a “network state,” few understand what is unfolding with cybernetic governance. For that reason, I decided to take time off this spring to work on my garden and simply be with real people in the real world. I made a brief announcement to that effect tonight.

This is a clip I created (6 minutes) summarizing Protocol Labs’ Aleph project with Javier Milei, the guy who gave Musk the chainsaw at the CPAC gathering a few weeks ago. I get that it is difficult to navigate narratives and torrents of information. The current has been against the story I’ve been trying to tell for a decade. It simply didn’t stick, but I did try. I lost my old life in the process, but I gained a sunny yard filled with birdsong and time to read stacks of books on cosmology and consciousness. Until I pop in again, be well everyone.

A work in progress. Yesterday I planted twenty-five asparagus crowns. It will be three years before I can harvest them. It feels kind of good to be on slow vegetable time.

Monday, 10. March 2025

@_Nat Zone

OECD: Enhancing Access to and Sharing of Data in the Age of AI

This is somewhat old news, but on February 5 the OECD published an interesting document.

Enhancing Access to and Sharing of Data in the Age of Artificial Intelligence: Companion Document to the OECD Council Recommendation on Enhancing Access to and Sharing of Data

Below is a rough summary of its contents. For details, please see the original document.

Overview

• The OECD Recommendation on Enhancing Access to and Sharing of Data (EASD) provides a framework for maximizing the benefits of data while ensuring that rights are protected.
• It emphasizes a whole-of-government approach to data governance that integrates economic, social, and legal considerations.
• The Recommendation encourages voluntary adherence by OECD members and partners and promotes responsible data-sharing practices.

Key concepts in data governance

• Data value cycle: covers the entire lifecycle of data from creation to deletion, stressing the need for complementary resources such as algorithms and human skills.
• Data openness continuum: a framework classifying access from closed to open data, enabling sharing arrangements tailored to risk and trust.
• Data ecosystems: stakeholders such as data holders, producers, and intermediaries interact to create value, requiring cooperation and trust to balance competing interests.

Principles for enhancing data access and sharing

• Reinforcing trust: engage stakeholders through consultation and transparency to build trust in data governance.
• Investing in data: promote market-based approaches and sustainable business models that encourage data sharing, including regulatory sandboxes for innovation.
• Effective use of data: ensure data is findable, accessible, interoperable, and reusable (the FAIR principles) and facilitate cross-border data sharing.

Practical applications and implications

• Governments should adopt national data strategies aligned with the EASD principles to promote responsible data governance.
• Coherent legal frameworks are needed to support data sharing while protecting privacy and intellectual property rights.
• Success stories from various countries show the importance of public-private partnerships and stakeholder engagement in enhancing data access.

Strengthening data-sharing infrastructure

• Centralized data repositories: establish centralized infrastructure to enable efficient information sharing between public bodies, improving service delivery and supporting data-driven public policy.
• Public engagement: promote responsible data-sharing practices to deepen public understanding of the benefits and risks of the data-driven economy.
• Stakeholder input: involve stakeholders in regulatory discussions to address AI-related risks without stifling innovation.

Implications for competition authorities

• Market dynamics: consider the impact of multi-sided business models that offer free products in exchange for consumer data, which can entrench market power.
• Demand-side characteristics: recognize that data influences market dynamics, affecting search costs, switching costs, and consumer choice, and can reinforce dominant market positions.

Responsible data sharing in research

• Sharing research data: follow guidelines aligned with the Australian Code for the Responsible Conduct of Research to promote data sharing among institutions and researchers.
• Licence taxonomy: develop a taxonomy of data licences to clarify the responsibilities and rights associated with data use in AI and machine learning.

Open government data and AI

• Importance of open data: open government data is essential for developing and training AI systems, serving as a trustworthy input.
• Risk management: open data helps manage risks related to data provenance and source reliability, enhancing the integrity of AI applications.

Concluding insights

• The OECD's EASD Recommendation provides a comprehensive framework for enhancing data access and sharing, emphasizing trust, investment, and effective governance.
• Implementing these principles can lead to improved public services, innovation in the data-driven economy, and responsible AI development.
• Continued stakeholder engagement and adherence to the established guidelines are essential to realizing the full potential of data-sharing initiatives.

Policy Brief:

In addition, a related Policy Brief was released on March 8.

Saturday, 08. March 2025

Mike Jones: self-issued

Fully-Specified Algorithms Specification Addressing Area Director Feedback

Orie Steele and I want to thank Deb Cooley for her Area Director review of the “Fully-Specified Algorithms for JOSE and COSE” specification. Addressing it simplified the exposition, while preserving the essence of what the draft accomplishes.

Specifically, the resulting draft significantly simplified the fully-specified encryption description and removed the appendix on polymorphic ECDH algorithms. We also stated that HSS-LMS is not fully specified, as suggested by John Preuß Mattsson.

The draft has now completed IETF last call, with the two resulting reviews stating that the draft is ready for publication.

The specification is available at:

https://www.ietf.org/archive/id/draft-ietf-jose-fully-specified-algorithms-07.html

Timothy Ruff

SEDI Details for Identity Nerds

This post is a more technical follow-up to the post announcing the passing of SB 260 in Utah, establishing the principles for state-endorsed digital identity (SEDI). While SEDI does not mandate any particular technology, its mandated principles close the door for technologies that do not comply, and open the door for technologies that do. This article explores what SEDI makes technically possible, in compliance with SB 260 principles.

State-endorsed digital identity (SEDI) introduces numerous foundational principles for a government-sponsored digital identity system, several of which were covered in Utah Blazes New Trail for SSI: SEDI:

• “Endorse” vs. “issue” and “identity” vs. “credential”
• Sovereign control
• Guardianship
• Privacy & no surveillance
• State-of-the-art security

This post explores more technical and security principles enabled by SEDI that will be of interest to digital identity experts.

“Autonomic” Identifiers

SEDI doesn’t mandate any specific technology stack, but it does specify capabilities that are simply unachievable for most digital identity technologies. One foundational example is the requirement in lines (73) — (77) of the Utah bill, that individuals must first bring their “identity” to the state, after which it can be endorsed.

What kind of “identity” might that be? That will be specified as the details of Utah’s future identity program are fleshed out. As a Utah citizen and as an outside expert invited to consult on these new identity policies, I will push for stringent minimum standards for the security, utility, privacy, and autonomy of a digital identity to qualify for state endorsement. Needless to say, your social media login — or even your bank login — should not suffice.

One “identity” that meets my preferred standard is an “Autonomic Identifier” or AID, as first described in Dr. Samuel Smith’s seminal paper: Key Event Receipt Infrastructure (KERI).

Some of the capabilities enabled by an AID-powered SEDI ecosystem include:

BYOID (Bring Your Own Identifier) — In the digital realm, for an identity to be self sovereign, it must begin with a unique identifier that is under the sovereign control of an individual. When a state endorses an identity, in practice they are endorsing an identifier that an individual has brought to the state that is provably under the individual’s control.

Persistent Identifiers — When that individual moves their residence to another state (within the U.S., for example), necessitating a new identity endorsement from the new state, the old state can revoke their endorsement of the same identifier that the new state will be endorsing; the identifier is persistent. This is a KERI capability that is very different from most PKI-based systems where the identifier is a public key that is discarded upon rotation or revocation.

Reputation — This persistence creates a continuity for the individual to retain and build their digital reputation independent from any particular government endorsement. In fact, it would be inclusive of every endorsement they have ever had, and all their provable behaviors using those endorsements, no matter how many states or countries have given them past endorsements of identity. And it would be inclusive of non-state endorsements such as educational, work, or anything else, too.

Putting It All Together

A simple depiction of SEDI built on AIDs might look like this:

AIDs enable true self-sovereignty as the foundation, right where it belongs and where it provides the greatest utility for all aspects of digital life. State endorsement is also situated right where it belongs: as only one of many endorsements an individual may receive through their digital lifetime.

State-Issued ‘Jenga’

Conversely, building digital identity on top of a state-issued credential instead of a state-endorsed identifier lasts only until the state revokes that credential. An attached academic credential, for example, is lost when the individual moves from one state to another and obtains a new identity in the new state, and the old state revokes the now-obsolete issuance.

Look at this visually:

This is like a game of Jenga: pull out the state-issued ID — by moving from one state or country to another — and the whole stack falls down. When the state revokes the ID they’ve issued, you also lose everything cryptographically bound to it, and must start over.

Even if the old state kept their credential alive, there is no binding between old and new credentials, no way to build a growing, persistent reputation that connects a person to their cross-boundary behaviors in the digital world the same way a consistent legal name connects their behaviors across boundaries in the physical world.

Human-Friendly?

Autonomic identifiers cannot be human-friendly. Human-friendly namespaces require human management and recurring cost, and have scarcity problems like cybersquatting, or missing the opportunity to register Timothy.com or timothy@gmail; the 1,000th Timothy won’t have much of a selection. AIDs are long random strings of letters and numbers, so incredibly random that it would take a million computers a million years to guess one, which ensures universal uniqueness and brings a strong measure of security.

AIDs have cryptographic properties, so we can prove control over them in the digital realm and recover control when keys are compromised. And keys *will* get compromised (see “Expected Compromise” below). Without these special mathematical properties, they’d be just letters and numbers anyone can copy and use to impersonate, like a social security number.
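To make this concrete, here is a minimal sketch of a self-certifying identifier: the identifier is derived directly from a public key, so control can be proven by signing a verifier’s challenge. This is an illustration only, written in Python with the cryptography library; the encoding and challenge format are invented for the example and are not KERI’s actual derivation scheme.

```python
# Illustrative only -- not KERI's actual derivation or encoding scheme.
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The individual generates a key pair; the identifier is derived from the public
# key itself, so it is universally unique without any registry or namespace.
signing_key = Ed25519PrivateKey.generate()
public_bytes = signing_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw
)
aid = base64.urlsafe_b64encode(public_bytes).decode().rstrip("=")  # hypothetical AID form

# Proving control: sign a challenge issued by a verifier (say, an endorsement service).
challenge = b"prove-control-of:" + aid.encode()
signature = signing_key.sign(challenge)

# The verifier recovers the public key from the identifier and checks the signature.
padded = aid + "=" * (-len(aid) % 4)
verifier_key = Ed25519PublicKey.from_public_bytes(base64.urlsafe_b64decode(padded))
try:
    verifier_key.verify(signature, challenge)
    print("control of", aid, "proven")
except InvalidSignature:
    print("verification failed")
```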

Many Identifiers

In real life, we have identifiers that are public — our legal names — that typically remain constant throughout our lives. We use these identifiers to build reputations over time, as patterns of behaviors accumulate that are associated with them. With SEDI, the identifier that is endorsed by the state will work in the same manner: it will be persistent, and as public as one chooses to make it, used and disclosed (always under the hood) whenever one might disclose their legal name.

In digital situations where one prefers to not use their legal name, there’s nothing in SEDI preventing that individual from using a completely different identifier, unrelated to their endorsed one. In fact there is no limit on how many different unique identifiers one may create and use for whatever purpose they choose; and each state can choose whether or not only one at a time can be officially endorsed (Utah allows only one physical identity at a time, and may do the same digitally).

Potential Security Advantages

SEDI’s architecture enables numerous critical security advantages that could be implemented:

Zero Trust — ”Never trust, always verify” is the ideal of zero trust. SEDI can not only embrace zero trust, it can enable true zero trust, which does away with all shared secrets, described briefly below.

No Shared Secrets — SEDI doesn’t employ bearer tokens, symmetric keys, or phishable passwords, unless they are used in conjunction with digital signatures. (PINs and passwords can be helpful when used in addition to digital signatures or physical keys.)

Anti-fragile — A breach of one person or system does not facilitate the breach of the entire system or another connected one, as with most current digital identity systems, which are predicated on shared secrets and perimeter security. With SEDI, most breaches can be detectable, preventable, and recoverable.

Expected Compromise — Key compromise should be expected and planned for — not surprising and catastrophic — with robust prevention and detection capabilities and powerful means for recovery.

Multi-signature — Keys and credentials can be protected with m-of-n weighted multi-signature capabilities, enabling creative familial, professional, and social protection and recovery schemes (a minimal sketch follows this list).

Mutual authentication — Individuals could authenticate each other, in any setting. For example, a citizen could authenticate a police officer as readily as a police officer can authenticate the citizen. Any person, organization, or thing claiming authority from the state could be instantly verified to have the claimed authority.
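As referenced in the multi-signature item above, here is a minimal sketch of weighted m-of-n threshold logic. The key labels, weights, and threshold are invented for illustration; KERI’s actual fractional-weight thresholds differ in detail.

```python
# Illustrative weighted threshold check -- labels, weights, and threshold are invented.
def threshold_met(weights: dict, signed_by: set, threshold: int) -> bool:
    """Return True if the keys that produced valid signatures carry enough total weight."""
    return sum(weights.get(key_id, 0) for key_id in signed_by) >= threshold

# Hypothetical recovery policy: the holder's own key alone suffices,
# or any two family/professional keys acting together.
weights = {"holder_phone": 2, "spouse": 1, "adult_child": 1, "lawyer": 1}
threshold = 2

print(threshold_met(weights, {"holder_phone"}, threshold))      # True
print(threshold_met(weights, {"spouse", "lawyer"}, threshold))  # True
print(threshold_met(weights, {"adult_child"}, threshold))       # False
```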

Artificially Intelligent Agents (“Agentic AI”)

IBM predicts that 2025 may be “the year of the AI agent”, and the CEO of Microsoft predicts that agentic AI will “replace all software.” Whether that happens or not, AI agents are coming and will soon affect the way we interact with organizations of all kinds. This can take the form of you interacting with some organization’s agent, some organization interacting with your agent, or two AI agents interacting with each other.

SEDI paves the way for verifying that an AI agent has the delegated authority it claims to have, and constraining what an agent is authorized to do. SEDI enables authority to be delegated to people, organizations, and things, and AI agents are things. SEDI is uniquely well suited for controlling, securing, and navigating a world of ubiquitous AI agents.
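To illustrate the idea, here is a minimal sketch of checking an agent’s delegated authority: the person’s key signs a delegation record naming the agent’s public key and its allowed scope, and a relying party verifies that signature before honoring a request. The record format and scope names are invented for this example; they are not a SEDI or KERI specification.

```python
# Illustrative delegation check -- the record format and scopes are invented for this sketch.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

person_key = Ed25519PrivateKey.generate()  # controller of the endorsed identifier
agent_key = Ed25519PrivateKey.generate()   # the AI agent's own key

def raw_hex(public_key) -> str:
    return public_key.public_bytes(
        encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw
    ).hex()

# The person signs a delegation naming the agent's public key and what it may do.
delegation = json.dumps(
    {
        "agent_pubkey": raw_hex(agent_key.public_key()),
        "scopes": ["schedule_appointments", "pay_invoices_under_100"],
    },
    sort_keys=True,
).encode()
delegation_sig = person_key.sign(delegation)

def agent_may(action: str) -> bool:
    """Relying party: verify the person's signature, then check the requested action."""
    try:
        person_key.public_key().verify(delegation_sig, delegation)
    except InvalidSignature:
        return False
    return action in json.loads(delegation)["scopes"]

print(agent_may("schedule_appointments"))  # True
print(agent_may("transfer_property"))      # False
```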

Many don’t like the idea of having to deal with AI agents in any form, and I can’t blame them. But like the digital transformation itself, that world is coming whether we like it or not, and the more controls and constraints we can have for it, the better.

Comprehensive Digital Identity

In sum, SEDI enables a cohesive, comprehensive digital identity system for a state or for any other ecosystem. It requires no proprietary platforms, products, or providers, though it doesn’t preclude their participation, and requires no shared or trusted intermediaries or blockchains. Through robust guardianship, delegation, and multi-sig, it extends digital capabilities to populations that have been previously excluded: minors (including infants), the elderly, disabled persons, homeless persons, migrants, and any other population. And it does this while maximizing individual autonomy and control over one’s data, with powerful means for consent, and numerous breakthroughs in security.

In short, SEDI heralds a new era of utility, security and autonomy for comprehensive digital identity ecosystems, and I am proud of my home state of Utah for introducing it to the world.

Friday, 07. March 2025

Timothy Ruff

Utah Blazes New Trail for SSI: SEDI

Finally!

On March 4, 2025, the Utah legislature unanimously passed SB 260, a groundbreaking digital identity bill that carries this description:

“This bill enacts provisions related to a state-endorsed digital identity” (SEDI).

Now it’s off to the governor for signature, who has been publicly positive about it. Huge credit to Sen. Cullimore, the bill’s sponsor, and Christopher Bramwell, the state’s Chief Privacy Officer, who helped draft and guide its content. SB 260 will likely soon become the law in Utah, and with it the possibility of an exciting new era for self-sovereign identity (SSI).

Passage of SB 260 is the partial realization of a dream I’ve had for more than a dozen years, since co-founding Evernym with Jason Law in 2012. It is not the complete realization of that dream, but it lays the necessary groundwork and carries with it momentum to go the distance over the next few years.

SB 260 does not call for the creation of a new identity system for the state of Utah — yet — that will be done later, hopefully in next year’s session. This bill establishes the principles Utah politicians stand behind — unanimously — that any future Utah statewide identity system must adhere to. If you are an SSI aficionado, you will recognize and love the principles in this bill.

Is State-Endorsed Identity (SEDI) Still SSI?

For some in the SSI community, the concept of SSI does not involve a state actor — at all.

To these SSI purists, SSI becomes trustworthy through a web of trust that replaces the need for a state actor as a root of trust. While theoretically possible — and still a worthwhile end-goal — this ideal vision of SSI isn’t feasible anytime soon. In the meantime, the credibility of government can help accelerate SSI toward an end-goal of not needing government. How? By making ubiquitous the technologies and ceremonies necessary for a web of trust to emerge.

At the end of the day, if independent control of strong digital identity rests fully with the individual, with no visibility by the state for if, how, or when it is used, then advocates of SSI should be in favor of SSI strengthened by a state endorsement. That is how physical identity works today and how digital identity should work, too. And there’s no reason other, non-government flavors of SSI can’t continue to develop and coexist.

State-Endorsed Digital Identity (SEDI)

The words “state-endorsed” used throughout this bill carry a lot of meaning; most emerging digital identity systems use the word “issue” instead of “endorse”. This is a distinction with a profound difference.

In real life, does a state issue identity or endorse it? When a baby is born, who gives the baby its identifier, its name? The parents do. The state waits for parents to pick a name, puts the name into a certificate, endorses the certificate, then gives it to the parents. Parents determine identity, the state endorses it.

A couple of months ago, a 60-year-old friend of mine decided to change her last name. The decision was hers alone to make, but her new name would not be believable or provable to third parties until she had an endorsement of the new name from the state.

In real life the state issues credentials, not identity. In this bill Utah acknowledges this reality and explicitly states that individuals first assert identity and only then can the state endorse it. That will be the guardrails for any future state-endorsed digital identity system. (See “SEDI Details for Identity Nerds” for more info.)

Through SEDI, SB 260 codifies several foundational SSI principles:

Control

In lines (65) and (120), it is explicit that the individual is in control of their digital identity.

Guardianship & Age Verification

Lines (35–40) and (97–98) lay the groundwork for digital guardianship, bringing broad digital capabilities for the first time to dependent populations such as children (including infants), the elderly, disabled persons, homeless persons, and more.

Of particular note in Utah, digital guardianship is how age verification can work for all ages and capabilities, something Utah has been on the forefront of, policy-wise. In the future, guardianship could even work for animals and things, too.

See “Finally, a Path Forward for Age Verification” by the Libertas Institute for more info about how SB 260 could facilitate age verification.

Privacy & No Surveillance

SB 260 is explicit that individuals have the right to not be surveilled. This means that SEDI cannot have any form of “phone home”, which differentiates it from the mobile drivers license (mDL) systems based on ISO 18013, and from the federated identity systems now being implemented in Europe and Australia, which all have inherent phone-home surveillance capabilities.

In lines (78–79), (84–96), (99–102), (116–117), (122–125), and (128–139) the bill details the principles of privacy and autonomy that must guide any future state-endorsed digital identity system in Utah.

State-of-the-Art Security

Line (113) stipulates that Utah SEDI must incorporate “state-of-the-art safeguards” for protecting identity. Current industry standards are not good enough, and in fact are unacceptably fragile in my view. My interpretation of “state-of-the-art” means something akin to KERI (Key Event Receipt Infrastructure), where breaches are preventable, detectable, and recoverable in ways that current identity standards cannot do, and which is quantum-resistant. (See the ‘Details for Nerds’ for more details about security.)

Now the Work Begins

In some ways SB 260 is the end of a long, hard journey for me, and the partial realization of an audacious dream. In reality it’s one bill, in one state, that establishes the principles and guardrails for a future identity system without creating one today.

In any case, it’s the closest thing I’ve seen to true self-sovereign identity being officially and legally embraced at a significant level, an entire state of the United States. The next step is far harder: turning concepts into reality.

Stay tuned.

For more technical information, see SEDI Details for Identity Nerds.


Jon Udell

Making the Fediverse More Accessible With Claude 3.7 Sonnet

A few years ago I abandoned Twitter in favor of Mastodon. Recent events validate that choice and underscore the strategic importance of a decentralized fediverse that can’t be owned by a single corporate or state actor. But while Mastodon meets my needs, much of the Twitter diaspora has gone to Bluesky. That’s fine for now but might not always be. In an article titled “Science Must Step Away From Nationally Managed Infrastructure,” Dan Goodman writes:

Many scientists put huge efforts into building networks to communicate with colleagues and the general public. But all that work and the value in those networks was lost when many scientists felt compelled to leave following Elon Musk’s takeover of the platform (now X). The process of rebuilding on Bluesky is underway, but it will take years and may never reach the same critical mass. Even if the transition is successful, the same thing may happen to Bluesky in a few years.

How can we prepare for a future migration from Bluesky to Mastodon? Bridgy Fed — a service that enables you to connect together your website, fediverse account and Bluesky account — will help. But Bridgy Fed needs to be easier to use. So I recruited Claude’s new 3.7 Sonnet model to do that.

Making the Fediverse More Accessible With Claude 3.7 Sonnet

LLM series at The New Stack

Thursday, 06. March 2025

The Pragmatic Engineer

The Pulse #126: Startup asks for a lot, offers little → struggles to hire

Also: Cloudflare’s self-critical, transparent postmortem, additional signs of Big Tech becoming more cutthroat, and more.

The Pulse is a series covering insights, patterns, and trends within Big Tech and startups. Notice an interesting event or trend? Send me a message.

Today, we cover:

Industry pulse. Claude Code quietly becoming the best AI coding agent, Microsoft cutting AI data center investment, EA releases Red Alert code, developers don’t want GPUs, DeepSeek keeps open sourcing innovation, and a US DOGE software engineer forgets to make their GitHub account for work private.

Ads startup demands a lot & offers little → struggles to hire. An adtech startup using AI to generate ads seeks a founding engineer who will devote their heart and soul – and most of their waking hours – to the company, and is struggling to hire. They’re now offering a $50K (!!) referral bonus. Feels like a classic case of overlooking that hiring is a two-way street: most standout devs don’t want to work 14 hours a day, 7 days a week.

Cloudflare’s self-critical, transparent postmortem. The leading content delivery network recently had its object storage go down for an hour, and only a day later published a detailed incident review, with the CEO personally taking responsibility for fixing the systemic issues which triggered the downtime. Cloudflare is a refreshing exception in promptly sharing incident summaries and showing it’s serious about continuously improving reliability.

More signs of “cutthroat” Big Tech. Google hints at a 60-hour workweek for workers in its AI group, and Meta fires 20 people for leaking internal information.

1. Industry Pulse

Claude Code quietly becoming the best AI coding agent?

Anthropic lowkey launched Claude Code, an agentic coding tool, recently. Feedback I’ve seen from engineers is that the tool is pretty good, with capabilities on par with other AI agents – including Devin (which costs $500/month).

Until now, Anthropic has built the best LLM for coding (Sonnet 3.5 and Sonnet 3.7). In hindsight, it was only a matter of time until they expanded by building an agentic coding tool that’s at least equal to what other startups offer. After all, most startups will build their agents on top of Sonnet 3.5 or 3.7, as well.

A criticism is that Claude Code is expensive, which should not be that surprising because AI agents are much more compute-intensive to run than LLMs. I’ll reserve judgement; Anthropic could turn into a major dev tools company if they keep building AI coding tools like this.

Is Microsoft really cutting AI data center investment?

Read more

Wednesday, 05. March 2025

The Pragmatic Engineer

Design-first software engineering: Craft – with Balint Orosz

Balint Orosz, founder of Craft, shares his journey from startup to award-winning app, discussing software design, frontend vs. backend priorities, and local-first computing.
Stream the Latest Episode

Listen and watch now on YouTube, Apple and Spotify. See the episode transcript at the top of this page, and a summary at the bottom.

Brought to You By

WorkOS — The modern identity platform for B2B SaaS.

The Software Engineer’s Guidebook: Written by me (Gergely) – now out in audio form as well

Augment Code — AI coding assistant that pro engineering teams love

In This Episode

Not many people know that I have a brother: Balint Orosz. Balint is also in tech, but in many ways, is the opposite of me. While I prefer working on backend and business logic, he always thrived in designing and building UIs. While I opted to work at more established companies, he struck out on his own and started his startup, Distinction. And yet, our professional paths have crossed several times: at one point in time I accepted an offer to join Skyscanner as a Principal iOS Engineer – and as part of the negotiation, I added a clause to my contract that I would not report directly or indirectly to the Head of Mobile, who happened to be my brother, thanks to Skyscanner acquiring his startup the same month that Skyscanner made an offer to hire me.

The Orosz brothers: both of us are working in tech, and have had professional overlaps. Like this podcast episode!

Today, Balint is the founder and CEO of Craft, a beloved text editor known for its user-friendly interface and sleek design – an app that Apple awarded the prestigious Mac App of the Year in 2021.

In our conversation, we explore how Balint approaches building opinionated software with an intense focus on user experience. We discuss the lessons he learned from his time building Distinction and working at Skyscanner that have shaped his approach to Craft and its development.

In this episode, we discuss:

Balint’s first startup, Distinction, and his time working for Skyscanner after they acquired it

A case for a balanced engineering culture with both backend and frontend priorities

Why Balint doesn’t use iOS Auto Layout

The impact of Craft being personal software on front-end and back-end development

The balance between customization and engineering fear in frontend work

The resurgence of local-first software and its role in modern computing

The value of building a physical prototype

How Balint uses GenAI to assist with complicated coding projects

And much more!

Takeaways

Design-focused engineers find it harder to fit in. Engineers who focus on backend and distributed systems can usually articulate their impact more clearly, and could see faster career growth – including getting into leadership positions. This creates a reinforcing cycle: because most engineering executives have a backend engineering background, they recognize and reward backend contributions more.

Balint didn’t like how he didn’t “fit in” as an engineer focused on design and UI: but he just kept building things he believed in – and years later, that built-up “design muscle” helps him build products that backend-focused engineers might struggle to put together.

Amazing companies cannot have a single engineering culture. Balint observed how every standout company that has a single engineering culture inherently biases towards either a backend-heavy engineering culture (e.g. Google, Amazon) or a UI-heavy one (Apple). For a company to be truly standout – more so than these Big Tech giants – it needs to have several engineering cultures: prioritizing both backend and UI excellence. This is what Craft is aiming to do internally.

To build something better than most other products, you might need to take a different approach. Craft does not use Apple’s UI components: they don’t use SwiftUI or Autolayout – something that 95% or more of iOS apps take advantage of.

Craft, instead, built their own components from scratch, and came up with their own layout and animation system. This is a lot more work at first: but it’s how Craft can do smooth animations that most apps cannot – and it’s a reason engineers at Apple have asked the team, “how are you able to do such a smooth animation on the navigation bar?” (The native iOS navigation bar cannot be animated the way Craft does it.) Craft can do all this thanks to building and maintaining their own components. It’s simpler to do than most would assume: in the episode, we look at actual code.

A shared codebase is an underrated advantage for speed, consistency and efficiency. Craft has a total of 4 engineers building the respective apps for:

iOS

iPad

MacOS

VisionOS

They can do this, because it’s a single codebase! A single codebase also means that all features built on one platform immediately work on all others.

The Pragmatic Engineer deepdives relevant for this episode

The AI hackathon at Craft Docs

Engineering career paths at Big Tech and scaleups

Thriving as a Founding Engineer: lessons from the trenches

The past and future of modern backend practices

Timestamps

(00:00) Intro

(02:13) What it’s like being a UX-focused founder

(09:00) Why it was hard to gain recognition at Skyscanner

(13:12) Takeaways from Skyscanner that Balint brought to Craft

(16:50) How frameworks work and why they aren’t always a good fit

(20:35) An explanation of iOS Auto Layout and its pros and cons

(23:13) Why Balint doesn’t use Auto Layout

(24:23) Why Craft has one code base

(27:46) Craft’s unique toolbar features and a behind the scenes peek at the code

(33:15) Why frontend engineers have fear around customization

(37:11) How Craft’s design system differs from most companies

(42:33) Behaviors and elements Craft uses rather than having a system for everything

(44:12) The back and frontend architecture in building personal software

(48:11) Shifting beliefs in personal computing

(50:15) The challenges faced with operating system updates

(50:48) The resurgence of local-first software

(52:31) The value of opinionated software for consumers

(55:30) Why Craft’s focus is on the user’s emotional experience

(56:50) The size of Craft’s engineering department and platform teams

(59:20) Why Craft moves faster with smaller teams

(1:01:26) Balint’s advice for frontend engineers looking to demonstrate value

(1:04:35) Balint’s breakthroughs using GenAI

(1:07:50) Why Balint still writes code

(1:09:44) Rapid fire round

A summary of the conversation

Design-first engineering: less recognition?

Balint has been writing code since he was 12. He’s always had a pull towards interactivity, animation, and found himself in the intersection of code and design.

Backend engineering is much easier to quantify than UI/UX work. Balint always felt that backend engineering was easier to quantify in impact – like saving millions in infrastructure costs or scaling to billions of users. How do you quantify that the UI is smooth and delightful?

Balint always felt that "hardcore engineers" did not consider him one of them – as his focus was on UX and UI – like making interactions faster, and animations smoother, and not on distributed systems or scalable algorithms. But designers and product managers also didn’t look at him as equals.

There’s a pull for engineers to move towards the backend and distributed systems, exactly because these areas are seen as having hard problems to solve and are often more measurable in terms of impact.

Engineering culture at Craft

Craft started off as a text editor for mobile use. When starting to code the product, Balint didn't believe that established engineering patterns, code coverage, or even design components would be the right approach.

Principles in building craft:

Data principles: efficiency and zero data loss are both non-negotiable. There’s not much to innovate in this area – it’s very well understood! Just refine the existing state of the art.

Fluency. Craft is designed to be used for hours each day, so it needs to feel snappy and fluent.

Having just one engineering culture is not enough. To create an amazing product, you cannot have either frontend or backend engineering principles dominate:

When the dominant engineering culture is UI-first: Apple is a good example. Apple builds delightful user experiences. However, their backend systems and web products are lacking.

When the dominant engineering culture is backend-first: Amazon and Google are good examples. Both focus on system design and backend engineering principles – in return their UIs don't feel as comfortable.

Craft uses the same codebase across 4 platforms: iOS, iPad, MacOS (desktop), and Vision Pro. 99% of the code is common, with some additional native bindings for each platform. Why the same codebase? Balint wanted the desktop app to always do the same thing as the mobile app, and a shared code base was the best way to do it.

Team structure

Craft's product engineering team is around 20 people. This includes product engineering, design, and QA.

They are split into a platform team, which means they have a web team, a native app team, and a backend team. Each of those teams has three to four people.

Balint observed that teams with more than 5 people start to have communication issues.

Architecture choices

Prioritize control over abstractions and trends. Using core language patterns and framework elements gives you more control over what you want to do. High-level abstractions and frameworks require more time figuring out the bugs and what those frameworks allow or disallow.

Everything is a canvas. Craft treats everything as a canvas that they draw on. This allows building toolbars that look exactly like a Mac toolbar on the Mac and an iOS toolbar on the iPhone. However, Craft now has more control over these components than if they used the native ones.

Avoiding AutoLayout and SwiftUI. Every time Craft hires a new iOS engineer, the new joiner inevitably asks why they don’t use new technologies like AutoLayout or SwiftUI. Balint shows them one of their transitions and asks them to implement it with the same performance in AutoLayout – and if they can succeed, Craft will move over. So far, no one has managed: but it’s a good exercise to help understand that the decisions are not arbitrary, but serve practical purposes.

AutoLayout promises that you no longer need to think in rectangles; you can just say it should be at the top and it can automatically grow. However, when you keep adding more things, the complexity increases and when you want to do something more sophisticated, it can become very performance-intensive.

With AutoLayout, you are trading off easier definition and work as a developer for more complexity on the device and less control when you want to do more advanced animations.

Principles for personal software

“How it makes me feel” matters more. When it comes to personal software, consumers choose much more based on which product resonates with them. Thus the best personal software is a lot more opinionated than B2B software is.

Craft is aiming to be for knowledge workers what Visual Studio Code is for engineers. Visual Studio Code feels lightweight and is a breath of fresh air because it’s responsive, fast, and does everything you need.

A different take on design systems.

Most companies create an “atomic design system:” starting with base colours as atoms, then buttons that combine colors and shapes as components, and then building up from here.

Craft, instead, has systems for animation. They have an animation engine and library that synchronizes everything across everyone and they enforce usage of that.

A local-first approach

New technologies emerge on the server side. When you're dealing with personal software, the amount of content you're dealing with can fit on the user's device. A lot of the compute can be done locally.

Craft’s architecture is preparing for local-first approaches. They architected components so they can replace them with local or remote components anytime they decide it’s now possible to do so.

An example is search: instead of having a big elastic search cluster, they are looking at having 2 million search indexes on a disk in the cloud. Every time somebody does a search, they can either download that search index locally and use it there to execute the search, or a Lambda or serverless function can just read the search index.

The industry keeps swinging between cloud and local compute.

There is a new wave of personal computing powered by processors getting faster. The M4 Pro is a faster processor than anything you can buy on an AWS cloud.

Eventually, people will get tired of their personal computers holding them back and will appreciate server-side components working faster. After a while, they start to get annoyed about how much the server-side project costs… and the pendulum keeps swinging.

Local-first is experiencing a comeback because people are starting to travel again. It gets inconvenient when you need something badly, with poor connection – and you cannot access it.

Where to find Balint Orosz:

• X: https://x.com/balintorosz

• LinkedIn: https://www.linkedin.com/in/balintorosz/

Mentions during the episode:

• Craft: https://www.craft.do/

• Skyscanner: https://www.skyscanner.com/

• Flash: https://en.wikipedia.org/wiki/Adobe_Flash

• Shader: https://en.wikipedia.org/wiki/Shader

• Understanding Auto Layout: https://developer.apple.com/library/archive/documentation/UserExperience/Conceptual/AutolayoutPG/index.html

• Mac Catalyst: https://developer.apple.com/mac-catalyst/

• Apple M1: https://en.wikipedia.org/wiki/Apple_M1

• Elasticsearch: https://en.wikipedia.org/wiki/Elasticsearch

• The Cloud Is a Prison. Can the Local-First Software Movement Set Us Free?: https://www.wired.com/story/the-cloud-is-a-prison-can-the-local-first-software-movement-set-us-free/

• Local-first software: https://news.ycombinator.com/item?id=31594613

• Visual Studio Code: https://code.visualstudio.com/

• ChatGPT 01 model: https://openai.com/o1/

• PencilKit: https://developer.apple.com/documentation/pencilkit

• Swift: https://www.swift.org/

• Objective-C: https://en.wikipedia.org/wiki/Objective-C#

• Tailwind CSS: https://tailwindcss.com/

• The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers: https://www.amazon.com/Hard-Thing-About-Things-Building/dp/0062273205

Production and marketing by Pen Name. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.


Just a Theory

Extension Ecosystem Summit 2025

We’re doing it again! The PostgreSQL Extension Ecosystem Summit returns to PGConf.dev. We’ll again be looking at indexing, discovery, packaging, and core support. And we’ll once again be hosting a series of “Mini Summits” leading up to the event. Join us!

I’m happy to announce that some PostgreSQL colleagues and I have once again organized the Extension Ecosystem Summit at PGConf.dev in Montréal on May 13. Floor Drees, Yurii Rashkovskii, and Keith Fiske will be on hand to kick off this unconference session:

Participants will collaborate to learn about and explore the ongoing work on PostgreSQL development and distribution, examine challenges, identify questions, propose solutions, and agree on directions for execution.

Going to PGConf.dev? Select it as an “Additional Option” when you register, or update your registration if you’ve already registered. Hope to see you there!

Extension Ecosystem Mini-Summit 2.0

We are also once again hosting a series of virtual gatherings in the lead-up to the Summit, the Postgres Extension Ecosystem Mini-Summit.

Join us for an hour or so every other Wednesday starting March 12 to hear contributors to a variety of community and commercial extension initiatives outline the problems they want to solve, their attempts to do so, challenges discovered along the way, and dreams for an ideal extension ecosystem in the future. Tentative speaker lineup (will post updates as the schedule fills in):

March 12: David Wheeler, PGXN: “State of the Extension Ecosystem”

March 24: Peter Eisentraut, Core Team: “Implementing an Extension Search Path”

April 9: Christoph Berg, Debian: “Apt Extension Packaging”

April 23:

May 7: Gabriele Bartolini, CNPG: “Extension Management in CloudNativePG”

Join the meetup for details. These sessions will be recorded and posted to the PGConf.dev YouTube channel, and we’ll again have detailed transcripts. Many thanks to my co-organizers Floor Drees and Yurii Rashkovskii, as well as the PGConf.dev organizers, for making this all happen!

More about… Postgres PGXN Extensions PGConf Montréal Summit

Tuesday, 04. March 2025

The Pragmatic Engineer

Trimodal Nature of Tech Compensation in the US, UK and India

Deepdive into the distribution of software engineering compensation, based on more than 20,000 data points from Levels.fyi

Hi – this is Gergely with the monthly, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover challenges at Big Tech and startups through the lens of engineering managers and senior engineers. If you’ve been forwarded this email, you can subscribe here.

The most-frequently cited article I’ve ever written is still The trimodal nature of software engineering salaries in the Netherlands and Europe – which is a deepdive published back in 2021. At the time, I had recently left my engineering manager job at Uber, and was trying to understand how tech compensation really works. At Uber in Amsterdam, I saw that comp for senior software engineering positions was 2-5x higher than for most “local” senior roles.

To find out why this gap exists, I gathered hundreds of data points and talked with dozens of recruiters and managers; was this comp gap a symptom of a “bimodal” distribution, as Dan Luu posted in 2016? After researching, I felt confident enough to say that tech compensation in the Netherlands and Europe was trimodal:

High-level overview. “$X” varies per country and region, but trends are international

Last year, we returned to this topic in The Trimodal nature of tech compensation revisited, which validated the distribution by using more than 1,000 data points from the Dutch market, and quotes from engineering leaders and software engineers in the US, UK, Canada, and Japan, which suggested that a “trimodal” model is real. Even so, this was partly anecdotal and in need of hard data to confirm it.

Today, we finally have the numbers, thanks to the Levels.fyi team, and this article dives into the findings by region, covering:

US: entry-level software engineers. Categorizing the three tiers of compensation bands, based on 5,601 data points.

US: senior software engineers. More Tier 2 and 3 offers are at least twice as big as the Tier 1 median, based on 9,509 data points.

Best-paying US companies by tier. Hedge funds, top scaleups, and Big Tech lead the pack – with companies like Roku, MongoDB and Dropbox also ranking high.

UK numbers, tiers, and best-paying companies. Hedge funds and finance companies lead, with Stripe, Confluent, and BP also making it to the top 10, based on 6,219 data points.

India numbers and tiers. Rippling, Stripe, Uber, Coinbase, Atlassian, Broadcom, and LinkedIn, offer ₹1 crore-or-above packages for senior engineers ($118K and up), based on 2,078 data points

Note on equity. Startups and scaleups may offer more in total compensation than publicly traded companies, but equity in not-yet-public companies comes with risk.

The data on the US, UK, and India reveals a complex regional picture united by a trimodal model. See below:

Three “tiers” in the US data. See “US: senior software engineers” section

If you want to look even deeper into this topic, check out other detailed articles on tech compensation:

The trimodal nature of software engineering salaries in the Netherlands and Europe (Part 1, 2021)

The trimodal nature of tech compensation revisited (Part 2, 2024)

Compensation at publicly traded tech companies (2023)

Senior-and-above compensation in tech (2024)

The trimodal nature of software internship salaries (research by Levels.fyi, 2024)

Thank you to Hakeem Shibly, Zuhayeer Musa, Zaheer Mohiuddin and the Levels.fyi team for sharing massive amounts of data and insights for this deepdive.

Data source

This article uses more than 10,000 compensation data points sourced from Levels.fyi, a compensation comparison site which is great for:

Comparing engineering career levels, such as how E5 at Meta maps to SDE3 at Amazon

Browsing tech comp at large tech companies and scaleups, powered by user submissions

Levels.fyi is powered by a combination of self-reported and verified data points. They offer resume review and salary negotiation services with Big Tech recruiters. A neat feedback loop of this is that information from their salary negotiation services helps the team ensure that submitted data really is valid. Levels.fyi also has a job board and provides compensation benchmarking services for businesses.

Some stats about Levels.fyi:

2.5 million visitors per month

20,000 new data points added per month, on average

$100M: total value of increased comp offers achieved with its compensation negotiation service – not including raises negotiated by candidates using the site ad-hoc for comparison.

A team of 13 people is behind the site and app, and I recently grabbed coffee with several of them in San Francisco.

This article covers “total compensation” (TC), not only salary. TC includes all forms of annual comp:

Base salary

Cash bonus, or the cash bonus target of companies that share this information

Value of equity vested

For example, a $300K TC package might look like this:

$175K base salary

Plus $25K target bonus

Plus $100K in equity vesting over the first year (typically $400K vested over 4 years, with $100K vesting each year)
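The same arithmetic as a few lines of Python, using the example package above (the annual equity figure simply divides the total grant by the vesting period):

```python
# Total compensation (TC) math for the example package above.
base_salary = 175_000
target_bonus = 25_000
equity_grant = 400_000  # total grant value at offer time
vesting_years = 4

total_comp = base_salary + target_bonus + equity_grant / vesting_years
print(f"TC: ${total_comp:,.0f}")  # TC: $300,000
```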

Equity is a tricky topic but also a key one, more on which is in the “Note on equity” section near the foot of this article.

1. US: entry-level software engineers

Let’s start with entry-level software engineer offers submitted to Levels.fyi in the past year for the world’s biggest tech market. Here’s how these data points were filtered:

“Software Engineer” submissions

Offer date between 1 Jan 2024 and 1 Jan 2025

Years of experience: between 0 and 2

Region: US

Years at company: 0. This means we look at new offers, not existing employees’ comp changes at their current workplaces
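Here is a sketch of how such a filter might look in pandas; the file and column names are hypothetical, since Levels.fyi’s actual export schema may differ.

```python
# Hypothetical reproduction of the filters above -- column names are invented for this sketch.
import pandas as pd

offers = pd.read_csv("levels_fyi_offers.csv")  # hypothetical export of submitted offers

entry_level_us = offers[
    (offers["title"] == "Software Engineer")
    & (offers["offer_date"] >= "2024-01-01")
    & (offers["offer_date"] < "2025-01-01")
    & (offers["years_of_experience"].between(0, 2))
    & (offers["country"] == "US")
    & (offers["years_at_company"] == 0)  # new offers only, not existing employees
]
print(len(entry_level_us))
```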

We have 5,601 entry-level US data points after applying the filters. A visualization:

5,601 entry-level software engineering offers in the US, visualized

At first sight, this looks more like a bimodal distribution, with a first peak at $100-110K and a second peak at $170-180K:

Comp distribution has twin “local maximum” peaks

However, after sifting through the data and manually tagging companies by common traits, we find the data organizes itself differently. The chart below takes the same data and organizes it based on the assumption behind the original trimodal distribution: that companies compensate based on the market they compete in. We use three categories of business type:

Big Tech. Apple, Amazon, Microsoft, Google, Meta, and Netflix, alongside publicly traded, large tech companies competing with them, like Uber, Snap, and Pinterest

Top scaleups and hedge funds. Late-stage startups backed by well-known VCs that have not gone public, or IPOed only very recently. Plus hedge funds, and quantitative trading firms (quant)

Everyone else. Startups, privately owned companies, and “traditional” ones

Here’s how the same data looks when broken down into these categories:

Segmenting the data by three tiers

Now, let’s check each segment.

Tier 1: Everyone else

This category consists mainly of companies competing for local talent, which often are not driven by their engineering talent or digital products.

Distribution of the “everyone else” category: 3,262 data points (58% of total). Median: $105K, 75th percentile: $130K

This resembles the lowest tier (“Tier 1”) of the trimodal model. It’s interesting how drastically the distribution changes by removing Big Tech, top scaleup, and hedge fund data points!

Tier 2: Big Tech

Big Tech. 1,914 data points (34% of all data). Median: $180K, 75th percentile: $203K

The largest publicly traded tech companies compensate engineers with increasingly large equity packages up the career ladder. I split this category for two reasons:

Commonly referenced. The terms “FAANG” and “Big Tech” are frequently their own category, and benchmarks for other companies’ offers.

Industry-leading comp. It’s hard to find bigger liquid compensation offers than this group offers. Later, we see it’s possible to win higher illiquid comp offers at not-yet-public scaleups, and some hedge funds can top what Big Tech offers.

Tier 3: Top scaleups and hedge funds

Top scaleups and hedge funds, based on 493 data points (8% of all data). Median: $208K, 75th percentile: $247K

This is a very interesting category with a lack of high-quality data, outside of Levels.fyi, for two reasons:

Private scaleups issue not-yet-liquid equity. Many assume Big Tech companies offer the most, compensation-wise. And this is true in terms of liquid equity; shares that can be sold for cash is an area where Big Tech is near-impossible for scaleups to match.

However, top-tier startups and scaleups can – and do – offer more total compensation than Big Tech. But there’s a catch: their equity cannot be sold until a liquidity event happens like an IPO, or stock buyback. Scaleups which out-offer Big Tech in total comp include Databricks and OpenAI.

It makes sense for late-stage scaleups to make bigger total comp offers than Big Tech because they target the same talent and must compensate for the fact their equity grants are illiquid, and may not become convertible to cash any time soon. Plus, there’s also the risk of a grant losing value like at Foursquare, where employees’ stock options ended up as worthless.

Hedge funds are notoriously secretive. Places like Two Sigma want to keep their compensation details as confidential as possible. Levels.fyi has accurate data points, but not enough to make this group into its own category. So, they’re included with scaleups here.

The compensation philosophy of hedge funds is very different than at Big Tech and scaleups:

No equity: hedge funds do not issue equity, even though this element is almost always what makes Big Tech and scaleup compensation packages stand out

Large cash bonuses: instead, hedge funds award hefty end-of-year bonuses, of up to 100% of annual salary. Hedge funds are the only category of company offering more liquid compensation than Big Tech and some scaleups. At the same time, hedge funds hire a lot less frequently

Overlapping categories

Let’s look at all three categories of employer in a single graph:

All entry-level compensation data points, split into categories

And also let’s check out the median and 75th percentile total compensation packages of each category:

Data points for each category

There are a few interesting points about this distribution:

~2x difference between Tier 1 and 3. Tier 2 and 3 companies pay around 2x as much at the 50th and 75th percentiles as Tier 1 does. Tier 2 (Big Tech) is a lot closer to Tier 3 in TC. The 75th percentile of Tier 1 still doesn’t come close to the 50th percentile of Tier 2.

Overlap from Tier 1 into Tier 2. The highest Tier 1 offers – at around the 90th-95th percentiles – do reach up to $200K, which is near the median range of tiers 2 and 3

Tiers 2 and 3 overlap a lot. The only real difference between Tier 2 and Tier 3 is that there are far more Big Tech data points in the sample. This could indicate a bias in user submissions towards Big Tech, or that scaleups and hedge funds hire far fewer entry-level engineers in the US.

2. US: senior software engineers

Compensation differences tend to grow with seniority, so let’s look at senior-and-above software engineers by adjusting the filters:

“Software Engineer” submissions

Offer date in the last 3 years (between 1 Jan 2022 and 1 Jan 2025)

Experience: 5 years minimum

Region: US

Years at company: 0. Similar to before, we only look at new offers

Let’s see how the same three categories map when looking at senior software engineers, split across the same tiers:

All senior-and-above engineer compensation data points by category. Total number of data points: 9,509

Here’s what these tiers look like in the median and 75th percentile compensation bands:

Data points for each category

The tiers are more distinct for seniors than for entry-level engineers. Some observations:

Tier 1 (everyone else): this category has the most data points (53% of the total). Also, while the median of this tier is significantly below the other two categories, there is overlap at the 75th and 90th percentiles. Simply put, it’s possible to earn as much in this group as in Big Tech, but it’s harder to do.

Tier 2 (Big Tech): compensation spread is very wide, starting from around $100K, all the way to $600K and above. This indicates the median value is less important, and things like career level, the specific company, and negotiation, could all matter more.

Tier 3 (top scaleups and hedge funds): this overlaps with Big Tech, and its top end goes even further. Also, there is far less data for this category, suggesting that these companies hire significantly fewer staff than Big Tech does, as mentioned above.

Outliers: these are much more distinct at the senior level, compared to the entry-level tier distribution

Big Tech and scaleups+hedge funds have many outliers. The median entry-level Tier 1 compensation package was $105K – and there were few offers in tiers 2 and 3 that offered twice as much ($200K and above). The median senior Tier 1 compensation was $180K, but now there are far more offers in tiers 2 and 3 worth double that ($350K and above). Visualized:

A higher percentage of people reported making at least double the Tier 1 median compensation at senior level than at entry-level. Being in the “long tail” of offers could be more achievable at senior level than at entry-level

3. US companies paying highest by tier

With the data below, we look at senior software engineer submissions, filtered for offers from engineers with 5-10 years of experience.

Tier 1: “everyone else”

The highest median total compensation offers in this category were made by the companies listed below:

Companies offering the highest 50th percentile compensation in the “everyone else” category

Leading employers in this category are Indeed, HubSpot, Workday, Disney, eBay, Capital One, Goldman Sachs, JPMorgan Chase and Visa.

Which businesses in each tier are most worth targeting for jobseekers? One indicator is the number of submissions, which identifies companies that are hiring more than others. Of course, this also biases toward companies users have shared data about. In this category, it’s Capital One, Walmart, JPMorgan Chase, Disney, HubSpot, Indeed, Workday, Palo Alto Networks, Fidelity, and Visa.

Tier 2: Big Tech

Highest-paying companies by median total compensation:

Companies offering the highest 50th percentile comp in the Big Tech category

This is quite a jump: the median is nearly twice as high as in the “everyone else” category! You can find more details about individual packages and get a sense of the spread of offers per company, at the pages for Netflix, Airbnb, Pinterest, Robinhood, Snap, Roku, Meta, MongoDB, Uber and Dropbox.

Most data points submitted: for this category, most data points submitted were for Amazon, Meta, Google, Microsoft, Apple, Oracle, Uber, Block, NVIDIA, and Salesforce.

Tier 3: Top scaleups and hedge funds

The cream of the comp crop; with hedge funds, quant firms, and top-tier VC-backed startups doing well. Companies with the highest median offers are:

Companies offering the highest 50th percentile compensation in the top scaleups and hedge funds category

If you thought Big Tech has high median offers, these numbers are between 20 and 40% bigger. Hudson River Trading, Citadel, and Jane Street, are all hedge funds or quant firms, meaning their stated compensation is all cash. Others on the list are companies which IPOed in recent years like Coupang, or late-stage scaleups.

Find more details about individual packages and get a sense of the spread of offers within a company by checking the respective pages on Levels.fyi of Hudson River Trading, Databricks, Coupang, Citadel, OpenAI, OpenSea, Earnin, Nuro and Figma.

Most data points submitted: in this category, it’s Bytedance, Stripe, Cruise, Snowflake, Roblox, Coinbase, Databricks, Reddit, Rivian, and Anduril.

The importance of location

Location also impacts compensation packages. In the Bay Area, Seattle and New York, compensation packages are higher than in other areas in the US. We can get a better sense of how location plays a part using a heatmap – also built by Levels.fyi:

Location and median pay have some connection. Source: Levels.fyi heatmap

4. UK numbers and tiers

Here’s the distribution of compensation data points for senior software engineers in the UK, mapped in British pound sterling (GBP):

Distribution for senior software engineer packages in the UK

For this country, we attempt to segment data points by company-headquarter location:

Headquartered outside the UK and US (mostly in European countries)

Headquarters in the UK

Companies with headquarters in the US

Here’s data for these categories:

Data points for each category

Visualizing the distribution:

Distribution for the three categories of senior software engineer packages in the UK

A few interesting things stand out:

The “US-HQ tier” is odd. It feels like two tiers might be coupled here, based on the midpoint of the graph having several local maxima: £100K ($126K), around £125K ($158K), and circa £150K ($190K). I’d assume that if we split up Big Tech, the top scaleups+hedge funds, and everyone else, we’d see two distinct tiers. Still, it is US-HQ’d companies that offer the most comp to senior software engineers.

Like the US, the UK has hedge funds that offer as much as – or more than – Big Tech. These firms are all based in London, often with offices in New York. This is the first time I’ve seen it broken down like this.

UK-headquartered and non-US, non-UK-headquartered companies are similar. The medians and 75th percentiles of these two categories are almost the same; the non-US, non-UK HQ’d companies simply have fewer data points.

The data is likely to overrepresent US-HQ’d companies. From the data, it would seem that more US-based companies employ software engineers in the UK than UK-headquartered ones do. However, based on my understanding of the market, this is incorrect: in reality, far more UK-based companies employ software engineers than US-based ones do, but the data currently biases towards US-HQ’d companies.

My sense is that for the UK, the real three tiers are those previously discussed:

Tier 1: local market, meaning most UK HQ’d companies

Tier 2: startups and scaleups targeting the top of the local market. Many are likely US-based, but not competing with Big Tech

Tier 3: Big Tech and top scaleups competing with them

Here are companies offering the highest median total compensation in the UK:

Companies offering the highest 50th percentile compensation in the UK. Most roles are London-based

Businesses paying the most, based on median data: Citadel, The Trade Desk, Balyasny Asset Management, Stripe, Confluent, Qube Research, Maven Securities, Millennium, G-Research, and BP – an oil and gas company, which is the only non-tech, non-finance company in the top 10.

And here are companies outside the top 10, but in the top 30:

Coremont (£180K), Meta (£178K), Apple (£178K), Snap (£177K), Reddit (£176K), ByteDance (£175K), Palantir (£173K), Google (£173K), HashiCorp (£169K)

Standard Chartered (£166K), WorldQuant (£166K), BBC (£165K), DuckDuckGo (£161K), Monday.com (£159K), Squarepoint Capital (£157K), Samsara (£156K), Onfido (£156K), X – formerly Twitter (£156K), Discovery (£154K)

In this list, a surprise for me was to see the BBC being so competitive in compensation, compared to Big Tech companies. In all fairness, I’ve heard good things about both the engineering talent and the engineering culture at the British broadcaster.

And these are the companies in the UK for which the most data points were submitted by users:

Companies with most data points

5. India numbers and tiers

For India, compensation packages are represented in USD ($100K equals about ₹87 lakhs). The split:

Distribution of the three categories of senior software engineer packages in India, based on 2,050 data points

This split looks like a single distribution, but let’s break it up by the location of headquarters:

In India

Not in India or the US

US

The distribution:

Distribution for the three categories for senior software engineer packages in India

It feels like US HQ’d companies might be two tiers combined, so let’s split this out by separating US-based Big Tech, and US-based “everyone else”:

Distribution when breaking US-HQ’d companies into Big Tech and non-Big Tech categories

There are many interesting details in this graph:

The data is US-dominated. As in the UK, India has more local companies based in the country and employing software engineers. For example, Tata Consultancy Services alone employs more than 600,000 IT professionals – many of them software engineers. And we’ve not covered large employers like Infosys or Wipro. At the same time, data from Levels.fyi is biased towards US-HQ’d companies: 77% of all data points. You can help the data become more accurate by submitting your compensation.

India-HQ’d companies rarely offer standout compensation. As per this data, it is mostly companies headquartered abroad that offer above $100K in total compensation (₹87 lakhs or more).

US-HQ’d companies pay very well. While the median compensation for these companies is $63K (₹52 lakh), the 90th percentile is at $133K (₹1.1 crore). This is surprisingly high!

Big Tech offers are very strong. A select few senior+ engineers can make in the range of $200K per year (₹1.7 crore) at the very top of the range, which is close to 4x the median compensation of the India-HQ’d tier.

Data points for each category

Companies that pay the most in India, as per the data:

Companies offering top 50th percentile comp in India

One surprise is Google not making the top 10; it comes in at number 14 overall, with median senior engineer compensation at $98K (₹85 lakh) – not far from the top 10. One thing to keep in mind is that Google has the most data points of any company here, meaning its figure might be more accurate.

Businesses with the most data points submitted in India:

Companies with the most compensation packages submitted to Levels.fyi

6. Note on equity

The bigger a Big Tech or VC-funded startup comp package is, the larger the share of equity. For example, here’s a visualization of the comp split for senior software engineers in the US, over the past year:

Visualizing the TC split of engineers. The higher the TC, the bigger the equity and cash bonus, compared to base salary. Based on 796 data points

You might wonder why the largest packages have no equity component and a very large cash bonus. It's because these are hedge funds, which don't issue equity and instead award cash bonuses of nearly 2x the annual salary.

Non-liquid equity comes with additional risk. Let's take two compensation packages: one with a TC of $475K, and one with $423K. Which is better? It seems like $475K is clearly bigger, but some extra context is needed. The $475K package is from Rippling, a scaleup that is not yet publicly traded:

A $475K package at Rippling. See the original submission

The $423K package is from Uber:

A $423K package at Uber. See the original submission

The big difference between them is that at Uber, an engineer can sell their stock as soon as it vests (usually quarterly), if they wish. By doing so, they earn $423K worth of liquid compensation in the first year. At Rippling, the liquid compensation will be $335K ($240K in base salary and $85K bonus target), and equity can only be sold if and when a liquidity event occurs, such as:

Rippling organizes a secondary offer and allows employees to sell some stock. This usually happens with oversubscribed fundraising events

Rippling goes public. In this case, employees can sell their vested stock after a lockup period of around 6 months

Rippling is acquired and the buyer pays for employees’ stocks

For private companies, the big questions are when will equity become liquid, and how much will it be worth.

It's worth remembering that stock prices change, which makes equity compensation volatile. In 2022, we covered how this can lead to compensation woes. At the time, tech stocks like Stitch Fix, Redfin, and Zoom dropped 50-80% in 12 months. At the same time, companies like NVIDIA, Tesla, and Apple saw 50-100% stock price growth. Compensation packages for people working in the former group dropped significantly, while the latter group saw major increases. A $350K TC package became worth $510K at NVIDIA, and $231K at Stitch Fix:

Equity can be volatile, and equity-heavy packages similarly so. Source: Stock price woes and attrition

Takeaways

Thanks very much to the Levels.fyi team for sharing so much data for the US, UK, and India. It’s enabled us to visualize compensation distribution and tiers across international labor markets.

Levels.fyi gives free access to its compensation data points, and relies on users sharing their comp numbers. If you’ve gotten value out of this article, please consider submitting the comp details of your current position; this lets the site provide fresh, data-backed insights, like in this deepdive.

My biggest takeaways:

The US is unique in having a tier above Big Tech. Across the UK, India, and the Netherlands, the highest compensation tier (Tier 3) is Big Tech. So it's a surprise to learn there's an even higher tier that exists only in the US. Hedge funds, quant firms, and top scaleups can pay more than the likes of Google, Meta, Amazon, Netflix, Apple, etc, with two important caveats:

At top scaleups, bigger-than-Big-Tech comp packages are usually reserved for workers already in Big Tech, who would not switch jobs for an illiquid TC package unless it exceeds their current, liquid one.

Hedge funds and quant firms seem like a different world. Switching between Big Tech and these companies looks harder to do, and happens less often than moving between Big Tech and scaleups.

US-HQ'd companies tend to pull up markets outside the US. Among tier 3 companies in the Netherlands, 95% of those I know are US-headquartered, and the data shows this is also the case in the UK and India.

Higher-tier companies come with downsides. These workplaces pay more, but usually operate with a “Silicon Valley-style” engineering culture: they expect a lot more in terms of autonomy, taking on product, DevOps, and testing work, and working longer hours – especially when collaborating with overseas teams. Adjusting to this can be tough if you’ve never worked in such an environment. Vice-versa, it can also be challenging to swap this pattern for a more traditional, hierarchical workplace.

Meanwhile, work stress can actually be higher at Tier 2 and Tier 3 companies, as covered in part 2 of the Trimodal series.

Today’s job market is a lot tougher than previously, and it’s harder to get into top-tier companies. In this deepdive, we’ve looked closely at the top tiers across several markets. With the job market tightening up, there are more qualified candidates than ever, and this can affect the size of offers. So, if you’re in the job market, I would not treat the numbers in this article as concrete and immovable. Instead, just focus on performing well in interviews, and getting an offer. Then these figures might be helpful in the negotiation phase.

Compensation is not everything; it’s just what is easiest to measure and make comparisons with, but there’s more to a job than the pay alone. Here’s what I used to think matters:

A way to think about a career

But is this all there is? I no longer think so. Other factors also make a job into a dream or a daily struggle:

Less quantifiable, but still very important, aspects of a career

If you've gotten value out of this data-based article, why not contribute to the wealth of knowledge about global tech compensation by anonymously sharing your current compensation with Levels.fyi? I hope you've found the additional data points useful, especially the expanded details on the US, UK, and India.

For more on tech comp, check out other deepdives:

Senior-and-above compensation in tech (2024)

Compensation at publicly traded tech companies (2023)

The trimodal nature of software internship salaries (research by Levels.fyi, 2024)

The trimodal nature of software engineering salaries in the Netherlands and Europe (Part 1, 2021)

The trimodal nature of tech compensation revisited (Part 2, 2024)


Mike Jones: self-issued

COSE Algorithms for Two-Party Signing

Emil Lundberg and I have published the COSE Algorithms for Two-Party Signing specification. Its abstract is: This specification defines COSE algorithm identifiers used when the signing operation is performed cooperatively between two parties. When performing two-party signing, the first party typically hashes the data to be signed and the second party signs the hashed data […]

Emil Lundberg and I have published the COSE Algorithms for Two-Party Signing specification. Its abstract is:

This specification defines COSE algorithm identifiers used when the signing operation is performed cooperatively between two parties. When performing two-party signing, the first party typically hashes the data to be signed and the second party signs the hashed data computed by the first party. This can be useful when communication with the party holding the signing private key occurs over a limited-bandwidth channel, such as NFC or Bluetooth Low Energy (BLE), in which it is infeasible to send the complete set of data to be signed. The resulting signatures are identical in structure to those computed by a single party, and can be verified using the same verification procedure without additional steps to preprocess the signed data.

A motivating use case for this is for WebAuthn/FIDO2 Authenticators to use when signing application data, as described in the proposed WebAuthn signing extension. Parts of this spec's content were previously in the Asynchronous Remote Key Generation (ARKG) algorithm spec, which has also been updated.
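To make the two-party split more tangible, here is a minimal sketch of the idea (my own illustration, not taken from the specification, and ignoring the COSE encoding entirely), assuming ECDSA P-256 with SHA-256 via the Python cryptography package:

# Hypothetical sketch: party 1 hashes locally, party 2 signs only the digest.
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils

signing_key = ec.generate_private_key(ec.SECP256R1())   # held by party 2 (e.g. an authenticator)

data = b"application data too large for the NFC/BLE channel"
digest = hashlib.sha256(data).digest()                   # party 1: hash the full data

signature = signing_key.sign(                            # party 2: sign just the 32-byte digest
    digest, ec.ECDSA(utils.Prehashed(hashes.SHA256())))

signing_key.public_key().verify(                         # verifier: standard verification over the data
    signature, data, ec.ECDSA(hashes.SHA256()))

The point is simply that party 2 only ever sees a small digest, while verifiers treat the signature exactly as if a single party had produced it.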

I plan to talk about the spec during IETF 122 in Bangkok. I hope to see many of you there!

The specification is available at:

https://www.ietf.org/archive/id/draft-lundberg-cose-two-party-signing-algs-01.html

This work was supported by the SIROS Foundation.

Monday, 03. March 2025

Kerri Lemole

DCC at the 2025 1EdTech Digital Credentials Summit

LCW Experience Badge & Introduction to Open Badges 3.0 Course at Participate This week is the 2025 Digital Credentials Summit hosted by the organization that shepherds the Open Badges 3.0 standard. This is a great space to meet with implementers, educators, technologists, funders, and policy makers from all over the world to connect. I will be attending this year and have the privilege

LCW Experience Badge & Introduction to Open Badges 3.0 Course at Participate

This week is the 2025 Digital Credentials Summit hosted by the organization that shepherds the Open Badges 3.0 standard. This is a great space to meet with implementers, educators, technologists, funders, and policy makers from all over the world to connect.

I will be attending this year and have the privilege to participate in five sessions covering topics like degree data models, trust, identity, open communities and open source software. I’ve listed my sessions at the bottom for those who may be attending.

In time for the Summit, check out the two projects the DCC team has launched to contribute to advancing understanding Open Badges 3.0.

LCW Experience Badge

The LCW Experience is an introduction to the experience of receiving, claiming, and storing an Open Badges 3.0 badge in the Learner Credential Wallet. Get started here.

Introduction to Open Badges 3.0 Course at Participate

Introduction to Open Badges 3.0 is a short, video-based course hosted at Participate.com, a community learning platform that issues Open Badges 3.0. To get started, join the Digital Credentials Consortium Learning Hub, where you will find the course. Watch six short videos and earn a DCC Intro to OBv3 badge.

Sessions at Digital Credentials Summit

The DCC has been busy this year working with our members and colleagues on great projects about digital degrees, issuer trust, open web, and open source software. Here’s a list of these sessions:

March 3, 3:30–4:15 PM MT: Using Open Badges 3.0 to Issue Portable and Verifiable Digital Degrees, with Eric Sembrant from Georgia Tech
March 4, 9:20–10:20 AM MT: Understanding Issuer Identity in the LER Verifiable Credentials Ecosystem, with Jeanne Kitchens from Credential Engine
March 4, 1–1:40 PM MT: Quality in Digital Credentialing: A Multifaceted Approach, panel with National Student Clearinghouse, Territorium, Strada Education Foundation, and Western Governors University
March 4, 3–3:40 PM MT: The Story of Open — How the Web, the World, and Credentials Thrive When Collaboration & Community is Prioritized, with Kate Giovacchini from ASU
March 4, 4:40–5:20 PM MT: The Power of Open-Source Badging in Micro-Credentialing, with colleagues from Digital Promise

DCC at the 2025 1EdTech Digital Credentials Summit was originally published in Digital Credentials Consortium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Open Badges 3.0 Basics: Watch, Learn, and Earn on Participate

Join the Digital Credentials Consortium Learning Hub The Digital Credential has started a series of short posts explaining the characteristics and trust models of W3C Verifiable Credentials and Open Badges 3.0. To accompany this series, we collaborated with our colleagues as Participate to create the The Digital Credentials Consortium Learning Hub and offer a short course, “Introduction to O
Join the Digital Credentials Consortium Learning Hub

The Digital Credentials Consortium has started a series of short posts explaining the characteristics and trust models of W3C Verifiable Credentials and Open Badges 3.0. To accompany this series, we collaborated with our colleagues at Participate to create the Digital Credentials Consortium Learning Hub and offer a short course, “Introduction to Open Badges 3.0”. To get started, join the Digital Credentials Consortium Learning Hub, where you will find the course.

The course consists of six short (approx 2 min), informal videos. Once you watch all six videos, click on the bell in the top right corner to see your “DCC Introduction to OBV3 badge”. After installing and setting up the Learner Credential Wallet on your mobile device, click on “Add to Wallet” and scan the QR code that displays in the pop up with the camera on your phone, or use the “Scan QR code” option in the ‘Add Credential’ screen of the LCW app. If you’d like to practice claiming an Open Badges 3.0 badge first, try the LCW Experience.

Thanks to the Participate team for setting this up with us. Participate is one of the first (maybe THE first) community learning platforms to issue Open Badges 3.0. Their team has been involved with Open Badges since the very beginning. They host the Open Recognition Community, where you will find many people with a long history in this space discussing the value of Open Badges at the intersections of formal, informal, and non-formal learning.

For more information on the DCC, including membership, projects, and events, visit our website, wiki, and Linkedin. To receive our newsletter and invitations to webinars and events sign up for our community mailing list.

Open Badges 3.0 Basics: Watch, Learn, and Earn on Participate was originally published in Digital Credentials Consortium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Georg C. F. Greve

A future for Self Sovereign Identity?

Symbolic representation of Web Security applied to SSI. Many children in Europe grew up with the tales of Baron Münchhausen, who claims to have lifted himself and his horse out of a mire by pulling his own hair. The image is so powerful because the problem of the circular dependency is so clearly visible. In real life, circular dependencies are often far less obvious. Which is why the fi
Symbolic representation of Web Security applied to SSI.

Many children in Europe grew up with the tales of Baron Münchhausen, who claims to have lifted himself and his horse out of a mire by pulling his own hair. The image is so powerful because the problem of the circular dependency is so clearly visible. In real life, circular dependencies are often far less obvious.

Which is why the first article in this series was primarily focused on looking behind the SSI smoke and mirrors around Web based identifiers and communication protocols. The resulting discussions in the Rebooting the Web Of Trust (RWOT) community were quite enlightening, and included a deeper look at the EU Digital Identity Wallet Technical specifications.

One of the mirrors broke when the claim that OpenID4VC supports decentralized identifiers was shattered: someone pointed out that while the EU Wallet is marketed on digital sovereignty and privacy, in reality it does not allow decentralized identifiers:

The current EUDI approach: No decentralized identifiers allowed

So while it was clear that OpenID4VC and did:web* do not qualify as decentralized, Self-Sovereign Identity, some people advocated just embracing the false marketing in the hope that it would create wider acceptance and the appearance of adoption for SSI.

But has that approach ever really worked?

More often, this kind of “sovereignwashing” runs a high risk of creating false expectations and disappointment, which would ultimately cement the status quo of the federated platform identity lock-in for the next 20 years. As a community, we should focus on building actual decentralized identifiers, communication protocols, and applications.

Because the true social and economic value of SSI is not just in the identity layer itself, it is in the decentralized applications enabled as a result.

Some of which would be in direct competition to the champions of the platform age, who are investing their financial and political capital into OpenID4VC and Web based identifiers to prevent that competition from ever getting off the ground. A classic “old industry vs new technologies” battle.

There are real opportunity costs across most of the economy and society if the old incumbents manage to postpone or kill innovation.

Symbolic representation of eIDAS 2.0 after successful lobbying by the platforms and trust intermediaries

Security and privacy for a globally networked society

Technology and corresponding security have been head to head in a special kind of race for a long time, dating back to an Egyptian inscription around 1900 BC in the main chamber of the tomb of Khnumhotep II, over Julius Caesar using a ROT-3 cypher in 100 BC, all the way to the famous Enigma machine used in World War II. The more people potentially had access to a message, the harder the encryption had to become.

The encryption used by Julius Caesar was not particularly strong, because it relied on a supposedly secret algorithm. Once parties know the secret, encryption and decryption become trivial. Over time this moved to well-known algorithms using shared secrets. And even though the shared secrets are more complex on today’s internet, this fundamental principle hasn’t changed:

If you know the shared secret, and can intercept the encrypted message, you will be able to read, and also impersonate and falsify communication.
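As a toy illustration of that principle (my own sketch, not from the original article), a Caesar/ROT-3 cipher in Python shows how knowing the shared secret is enough to both read and forge messages:

# Minimal ROT-3 / Caesar cipher sketch: the "shared secret" is just the shift value.
def rot(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

ciphertext = rot("attack at dawn", 3)   # encrypt with the shared secret (shift = 3)
plaintext = rot(ciphertext, -3)         # anyone who knows the secret can decrypt...
forged = rot("retreat at dusk", 3)      # ...and just as easily impersonate the sender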

In contrast, Enigma was quite strong for its day because it combined a rotating cypher with a codebook, famously carried by U-boats, allowing them to choose the correct settings. Literally handed over to the commander of the boat by hand in a secure location before departure, these code books effectively represented a cryptographic key, shared over a second channel — the physical handover.

Which makes any well-designed encryption system almost impossible to break. Unless, of course, you have intimate knowledge of the inner workings of the rotating cypher, and can guess certain messages, like weather reports, to then use brute force to arrive back at the settings for the day. Those settings then allowed the Allies to read other messages, which would otherwise have been unbreakable.

Digital identity should be based on an advance

In other words: The cryptography of the Enigma machine itself was solid, and essentially unbroken. But the Allied Forces were able to exploit structural weaknesses designed into the operation of Enigma to attack the key generation for the day.

Security in Swiss Healthcare

That particular race accelerated when the Internet was born. In 1996, when the internet was still young, the US Congress deliberated and passed the Health Insurance Portability and Accountability Act (HIPAA). That same year, the Swiss Medical Association (FMH) realized patient data had to be better secured on the internet, leading to the creation of Health Info Net (HIN). Starting from encrypted email, Swiss doctors have relied on HIN for decades to keep their patient data safe.

But technology years are a lot like dog years. And 28 years is a very long time.

HIN is constantly working to innovate and improve its solutions. Which is how Vereign, working closely with our partner More than Bits, started to run some POCs with HIN in 2023, and ended up working all of 2024 almost exclusively for the Swiss healthcare sector.

Our challenge: Design a system that starts from what today’s users are used to, while re-thinking the system architecture using SSI and modern data ecosystem architectures, based on the work we had done for Gaia-X.

The starting point was obvious: Email is the world’s largest distributed identity database and communication protocol. It is the use case with which HIN started, and it is the singular product that all users rely on multiple times each day to communicate with colleagues, laboratories, and patients.

Email is also facing challenges of concentration and capture by the large, federated platforms. And its lack of an identity layer has made it a fertile ground for attacks by malicious governments, corporations, and common criminals.

Vereign showcased its first prototype to harden email using SSI in 2019, which earned us a nomination as the hottest new innovation for the Swiss Digital Economy Award in Zurich. COVID-19 had other plans, but our experience proved invaluable when working on the POCs with HIN.

This time, we built out peer to peer email exchange via DIDComm. Secure, encrypted, authentic and designed in a way that it can be plugged into any legacy email system to gradually switch to a new, identity verified transport layer reaching all the way to the people themselves.

From prototyping to production: Quest for the identifier

We built these prototypes using did:web, because it is a great placeholder to stand in for decentralized identifiers while rapidly prototyping around user flow and experience.

But from the outset it was clear that did:web would not be the choice for production, for all the reasons also highlighted in the last article:

Web based identifiers must never be used for personal identity.

Our preferred choice would have been KERI due to its robust security and privacy architecture. But with the official implementation being Python only, we had concerns about efforts required in supporting a secure, long term solution across the range of platforms we anticipated.

The Rust implementation by the Human Colossus Foundation fared better on that front. But there seems to be a rift in the community, causing concerns about diverging implementations and long-term support, which are exacerbated by the choice of the European Union Public Licence (EUPL).

We could not find information about adoption or community. And finally, the security of KERI as a concept critically depends on the networks of Witnesses and Watchers, for which we could not find information about size, health, and long-term viability for either implementation.

Had we chosen KERI in February 2024, we would not have been able to go productive before these issues had been resolved. And our time line dictated we had to be ready for initial production by late 2024. As a result, KERI was a non-starter.

Other methods, such as did:indy, have been in decline for some time, and Sovrin is shutting down in just a couple of weeks. Methods like did:peer on the other hand are not great in scenarios where long-lived connections are desirable.

So in the end, our search for production ready decentralized identifiers that could safely be used for natural persons left us empty handed.
A classic. And good advice.

Ignoring XKCD

The competing standards comic by XKCD is a classic. As far as rules go, it is a good one. But there are no rules without exceptions. Having exhausted every other path, we decided to ignore XKCD’s best practice. Only, we did not aim to create the universal solution — that’s KERI — but to create the simplest possible, yet still sufficiently safe, identifier for the requirements of our specific use case.

Like any good design, it should build on existing technologies as much as possible, be simple enough to be implemented within a reasonable time frame, and to be supportable for at least 5–10 years, when potentially it would be replaced by something better.

Designing a decentralized identifier

Our requirements asked for an identifier that was truly secure and private. We explicitly sought to minimize dependencies on infrastructure such as DNS, Web Servers and Certificate Authorities. Blockchain would have fit these criteria, but we do not require a global consensus. All we needed was a decentralized storage system that would guarantee integrity and availability of records.

Git might have been an option. It is Content-Addressable Storage, so objects are referenced by their hash, and any modification creates a new object. But Git would add unnecessary overhead, and there is a central repository. The Interplanetary File System (IPFS), on the other hand, is built for peer-to-peer distribution between nodes without a central server.

Like Git, IPFS is built on Content-Addressable Storage (CAS). Objects are referenced by their sha256 hashes. Users can request data at any node, and if that node does not have this particular object, it will use peer-to-peer network connectivity between nodes to obtain a copy of the data and provide it to the user. It is open, verifiable, and resilient.

Its function allows DID documents to be uploaded onto any node and be referenced by their hash on any node in the network. Modifications to the document modify the hash, so documents are integrity protected by design. Simultaneously, the entire DID storage and distribution mechanism is robust regarding the well-known attacks against Web based identifiers.

In addition, the hash for the document contains no Personally Identifiable Information (PII) and unless we’d make the mistake of adding PII to the DID documents themselves, our design would not expose any kind of PII anywhere.

Of course we were not the first, nor the only ones to realize the potential of IPFS for decentralized identifiers. There has been a prior attempt at using IPFS for DID documents, the IPID DID Method. But it never got much traction, and its use of the InterPlanetary Name System (IPNS) made it less robust. Also, it did not have provisions for the rotation of keys, which is crucial for long-term connections with the same identifier, as well as the ability to switch wallets or upgrade crypto algorithms.

Swiss Healthcare: Innovating together toward the gold standard of decentralized, secure, private identity and applications

An identifier for Sovereign Data Exchange (SVDX)

The result is did:svdx, our DID method for Sovereign Data Exchange.

Agents generate their active key locally, as well as a key that can be used for the update of the identifier later. The public key of the first key is used as the persistent identifier, creating a persistent Autonomous Identifier (AID).

The second key, which is used for the update of the identifier, is never shared. Only its hash is declared in the document as a next key commitment. Because this key is never actively used until it is time to rotate, it is well protected against being compromised.
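A rough sketch of what that inception step could look like (hypothetical field names, with Ed25519 via the Python cryptography package standing in for whatever suite did:svdx actually mandates):

# Hypothetical sketch of an inception document with a next-key commitment.
import hashlib, json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw_public_bytes(key):
    return key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

active_key = Ed25519PrivateKey.generate()   # used for signing now
next_key = Ed25519PrivateKey.generate()     # kept offline, never shared

inception = {
    "publicKey": raw_public_bytes(active_key).hex(),                              # becomes the persistent AID
    "nextKeyCommitment": hashlib.sha256(raw_public_bytes(next_key)).hexdigest(),  # only the hash is published
    "previous": None,                                                             # inception has no ancestor
}
print(json.dumps(inception, indent=2))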

Each revision of the decentralized identity documents representing a Decentralized Identifier has a Content Identifier (CID) when stored in IPFS, so the resulting identifier is always the combination of the AID with the CID of the latest revision of the identifier.

Since each revision of the identifier refers back to the previous version by its CID, the result is a sha-256 hash based Key Event Chain of IPFS objects, all the way back to the inception document, the root of the AID in question.

did:svdx:z6MknHKiY477mH97qryHv3zjuHaTLvBbbp6tHS5SvZv67uR4:QmecqVGBxvW7gjffxmYTGFZNPmJcWmYPdD8azB1cZYaY6F

Because the identifier also contains the CID of the current state, starting verification of the Key Event Chain is trivial: Just pull the corresponding object out of IPFS and verify. Check for ancestor, rinse and repeat until you’re at the beginning of the chain. Check whether the AID matches the initial key. Done.
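A minimal sketch of that verification walk, assuming a public IPFS gateway and a hypothetical "previous" field pointing at the ancestor revision (the real field names and checks come from the did:svdx spec):

# Hypothetical sketch: walk a did:svdx key event chain back to its inception document.
import requests

GATEWAY = "https://ipfs.io/ipfs/"   # any gateway works; run your own for production

def resolve_chain(did: str) -> list[dict]:
    _, _, aid, cid = did.split(":")          # did:svdx:<AID>:<CID of latest revision>
    chain = []
    while cid:
        # Objects are referenced by their hash, so each fetched document can be
        # re-hashed locally and checked against its CID (omitted here).
        doc = requests.get(GATEWAY + cid, timeout=10).json()
        chain.append(doc)
        cid = doc.get("previous")             # assumed name for the ancestor reference
    # Final steps (pseudocode): confirm the inception document's key matches the AID,
    # and verify each revision against the key material declared by its ancestor.
    return chain

chain = resolve_chain(
    "did:svdx:z6MknHKiY477mH97qryHv3zjuHaTLvBbbp6tHS5SvZv67uR4:QmecqVGBxvW7gjffxmYTGFZNPmJcWmYPdD8azB1cZYaY6F")

Re-hashing each fetched object locally is what makes the chain integrity-protected by design, rather than trusting any single gateway.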

Trivial to implement in web based tool chains

No native IPFS support? No problem. Just pick one of the public IPFS gateways, and with a single request pull the DID document, e.g. https://ipfs.io/ipfs/QmecqVGBxvW7gjffxmYTGFZNPmJcWmYPdD8azB1cZYaY6F.

Thanks to content based addressing, you will get the same document no matter which gateway you use. And you’re welcome to use as many of them as you would like to compare. Although for production use cases it is highly recommended to run your own, which is trivial.

In other words, IPFS allows classic web based tool chains to be integrated with decentralized storage and delivery of integrity protected DID documents. It’s as easy to work with as any of the did:web* methods, but does not suffer from the attack surfaces of DNS, TLS and Certificate Authorities.

In addition, it is robust against a number of DDOS scenarios, allows for low impact self-hosting, and eliminates the web server as a central point of attack, surveillance and compromise.

Also, it plays well with DIDComm and other communication protocols, but if you really require web based interaction protocols, they can also be encoded into the identifier. But unlike web based identifiers, exchanging key material via did:svdx mitigates a substantial number of attack scenarios for web connection protocols.

Layering trust

By design did:svdx contains zero personal information. It is deliberately focused on secure key exchange of an Autonomous Identifier, only.

So any relationship starts from a reliable assumption the AID controllers have a strong connection to one another and can maintain it over a longer period of time, including throughout key rotation and changes in cryptography. But they start from zero trust in one another.

Trust is built gradually, through Verifiable Presentations securely exchanged over the connection. Similar to what Christopher Allen describes as “Building Trust in Gradients.”

For SVDX, given it is built for a true P2P, decentralized ecosystem, we surmise that the party initiating a connection first authenticates itself toward the recipient of the connection request before requesting reciprocal information. That should also make data mining or identifier scraping much harder.

Limits of did:svdx

For any design, it is crucial to know its limits. Firstly, the identifier specification does not contain any of the multi-signature capabilities of systems like KERI. Because we did not require it for our use case at hand, we pushed that complexity, along with the complexity of secure restore and key rotation, onto the clients — which we control for the use case at hand.

Also, while IPFS plays a role similar to that of Witnesses in KERI, there are no Watchers. So there is no built-in detection of duplicity, as Sam Smith calls it. And while parties can update each other on key rotations using DIDComm, allowing each other to verify they are still talking to the same party, the design has no built-in protections against a controller forking their identity.

For our use case this was not an issue, because there is a central catalogue for the ecosystem to allow looking up the latest, known version of an AID. Which is not ideal for some scenarios. But we considered the solution good enough for what we needed to achieve, given that all controllers need to also maintain their identity and trustworthiness with HIN as the central ecosystem fiduciary.

That said, it should be possible to design a robust duplicity detection on top of did:svdx, and there may even be scenarios where duplicity is not a primary concern as long as agents always ensure to only consider the latest version of an AID authoritative.

So did:svdx is not a replacement for KERI. But it is a replacement for web based DID methods, offering far better security at a similar cost of adoption and support. From our own experience we know it took around 6-8 weeks to implement in JavaScript.
What’s next?

The first application using did:svdx in production will have ramped up by April 2025.

By mid 2025 we expect hundreds of thousands of production messages sent each month containing verifiable credentials backed by did:svdx. Our roadmap has us building out additional applications until all the institutions and eventually all the patients in Switzerland will have identifiers within the next 2-3 years.

We have already open sourced the initial implementation and will continue to add additional implementations. Also, we would love to finalize the specification so that it can be maximally useful to others. And there may be features that would be required for additional use cases, as well as community-based methods for duplicity detection.

Open questions

Where is the right place to finalize, publish and maintain did:svdx?
Who would be interested in participating?
What are the critical capabilities that may still be missing?
What kind of best practice operational RFCs should we develop as a community?

If you’re at DICE in Zurich this year, I’d love to sit down and discuss these questions with you — alongside everything else you would like to know about our vision for the Sovereign Data Exchange.


Kerri Lemole

The LCW Experience

The Digital Credentials Consortium, our team and our members, are aiming to advance the understanding & use of privacy-enhanced, portable, verifiable digital credentials like Open Badges 3.0. We’ve noticed that while many are interested in the opportunities LERs as digital credentials may offer, many haven’t yet experienced what it is like to claim and accept an Open Badge 3.0 credential with

The Digital Credentials Consortium, our team and our members, are aiming to advance the understanding & use of privacy-enhanced, portable, verifiable digital credentials like Open Badges 3.0. We’ve noticed that while many are interested in the opportunities LERs as digital credentials may offer, many haven’t yet experienced what it is like to claim and accept an Open Badges 3.0 credential with a digital wallet like the Learner Credential Wallet (LCW).

We created the LCW Experience to make it as easy as possible to try this out. It uses the open source DCC Issuing System and Learner Credential Wallet to demonstrate the experience of requesting and storing an Open Badges 3.0 badge. You can learn more about how it works here.

To add the LCW Experience Badge to your Learner Credential Wallet, follow these steps:

Install & set up the LCW.app on your mobile device
Go to the LCW Experience site to enter your name & email address

After you click the “Award Badge” button, a message will be sent to the email you provided with a link to your claim page. There you will find instructions on how to use your LCW app to claim your credential.

The email notification is a typical way to let earners know that they have been awarded a badge. If you would rather go to the claim page directly, use this link, replacing the name with your own (or someone you’d like to send it to): https://badging.dcconsortium.org/collect?recipientName=FirstName+LastName

Note that your name & email are not saved in our system. The email is used to send you the message and your name is included so that it will show who it was issued to in the accepted badge data and display.

Big shout out to the DCC team, James Chartrand, Alex Higuera, Dmitri Zagidulin, and Gillian Walsh for developing this project and brainstorming fun, accessible, and pragmatic approaches to learning about Verifiable Credentials and Open Badges. We hope you enjoy this experience!

For more information on the DCC, including membership, projects, and events, visit our website, wiki, and Linkedin. To receive our newsletter and invitations to webinars and events sign up for our community mailing list.

The LCW Experience was originally published in Digital Credentials Consortium on Medium, where people are continuing the conversation by highlighting and responding to this story.


Damien Bod

Implement Phone verification, 2FA using ASP.NET Core Identity

This post shows how to implement phone (SMS) verification and two-factor authentication (2FA) using ASP.NET Core Identity. The solution integrates phone-based verification and 2FA mechanisms. The implementation uses ASP.NET Core Identity’s extensibility to incorporate SMS-based verification during user registration and login processes. SMS is no longer a recommended authentication method due to sec

This post shows how to implement phone (SMS) verification and two-factor authentication (2FA) using ASP.NET Core Identity. The solution integrates phone-based verification and 2FA mechanisms. The implementation uses ASP.NET Core Identity’s extensibility to incorporate SMS-based verification during user registration and login processes. SMS is no longer a recommended authentication method due to security risks but does provide a good solution for some business cases or user flows like onboarding phone users or phone applications, frontline workers with no desktop or other such solutions with limited security possibilities.

Code: https://github.com/damienbod/IdentityOidcPhone2fa

Setup

The ASP.NET Core Identity application integrates the SMS provider using the Identity PhoneNumberTokenProvider and an SMS verification service.

builder.Services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

builder.Services.Configure<EmailSettings>(builder.Configuration.GetSection("EmailSettings"));
builder.Services.AddTransient<IEmailSender, EmailSender>();

builder.Services.Configure<SmsOptions>(builder.Configuration.GetSection("SmsOptions"));

var authorization = Convert.ToBase64String(Encoding.ASCII.GetBytes(
    $"{builder.Configuration["SmsOptions:Username"]}:{builder.Configuration["SmsOptions:Password"]}"));

builder.Services.AddHttpClient(Consts.SMSeColl, client =>
{
    client.BaseAddress = new Uri($"{builder.Configuration["SmsOptions:Url"]}");
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", authorization);
});

builder.Services.AddScoped<SmsProvider>();

builder.Services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddTokenProvider<DataProtectorTokenProvider<ApplicationUser>>(TokenOptions.DefaultProvider)
    .AddTokenProvider<AuthenticatorTokenProvider<ApplicationUser>>(TokenOptions.DefaultAuthenticatorProvider)
    .AddTokenProvider<PhoneNumberTokenProvider<ApplicationUser>>(Consts.Phone)
    .AddTokenProvider<EmailTokenProvider<ApplicationUser>>(Consts.Email);

The ApplicationUser needs some new properties to support multiple authentication methods. The properties are used to allow a user to use the selected authentication method, or to force an authentication method on an OpenID Connect client.

public bool Phone2FAEnabled { get; set; }
public bool Email2FAEnabled { get; set; }
public bool AuthenticatorApp2FAEnabled { get; set; }
public bool Passkeys2FAEnabled { get; set; }

An SMS service, the SmsProvider class, is used to integrate the SMS provider. In this demo, the eColl messaging service is used to send SMS. The implementation and the configuration would vary if you use a different service.

The SmsProvider is used to verify a phone number, to enable SMS 2FA, and to force SMS 2FA. The service uses an HttpClient to access the SMS service REST API.

using IdentityProvider.Models;
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.Options;

namespace IdentityProvider.Services;

public class SmsProvider
{
    private readonly HttpClient _httpClient;
    private readonly UserManager<ApplicationUser> _userManager;
    private readonly SmsOptions _smsOptions;
    private readonly ILogger<SmsProvider> _logger;

    private const string Message = "message";

    public SmsProvider(IHttpClientFactory clientFactory,
        UserManager<ApplicationUser> userManager,
        IOptions<SmsOptions> smsOptions,
        ILogger<SmsProvider> logger)
    {
        _httpClient = clientFactory.CreateClient(Consts.SMSeColl);
        _userManager = userManager;
        _smsOptions = smsOptions.Value;
        _logger = logger;
    }

    public async Task<(bool Success, string? Error)> Send2FASmsAsync(ApplicationUser user, string phoneNumber)
    {
        var code = await _userManager.GenerateTwoFactorTokenAsync(user, Consts.Phone);

        var ecallMessage = new EcallMessage
        {
            To = phoneNumber,
            From = _smsOptions.Sender,
            Content = new EcallContent
            {
                Text = $"2FA code: {code}"
            }
        };

        var result = await _httpClient.PostAsJsonAsync(Message, ecallMessage);

        string? messageResult;
        if (result.IsSuccessStatusCode)
        {
            messageResult = await result.Content.ReadAsStringAsync();
        }
        else
        {
            _logger.LogWarning("Error sending SMS 2FA, {ReasonPhrase}", result.ReasonPhrase);
            return (false, result.ReasonPhrase);
        }

        return (true, messageResult);
    }

    public async Task<(bool Success, string? Error)> StartVerificationAsync(ApplicationUser user, string phoneNumber)
    {
        var token = await _userManager.GenerateChangePhoneNumberTokenAsync(user, phoneNumber);

        var ecallMessage = new EcallMessage
        {
            To = phoneNumber,
            From = _smsOptions.Sender,
            Content = new EcallContent
            {
                Text = $"Verify code: {token}"
            }
        };

        var result = await _httpClient.PostAsJsonAsync(Message, ecallMessage);

        string? messageResult;
        if (result.IsSuccessStatusCode)
        {
            messageResult = await result.Content.ReadAsStringAsync();
        }
        else
        {
            _logger.LogWarning("Error sending SMS for phone Verification, {ReasonPhrase}", result.ReasonPhrase);
            return (false, result.ReasonPhrase);
        }

        return (true, messageResult);
    }

    public async Task<bool> CheckVerificationAsync(ApplicationUser user, string phoneNumber, string verificationCode)
    {
        var is2faTokenValid = await _userManager
            .VerifyChangePhoneNumberTokenAsync(user, verificationCode, phoneNumber);

        return is2faTokenValid;
    }

    public async Task<(bool Success, string? Error)> EnableSms2FaAsync(ApplicationUser user, string phoneNumber)
    {
        var token = await _userManager.GenerateChangePhoneNumberTokenAsync(user, phoneNumber);
        var message = $"Enable phone 2FA code: {token}";

        var ecallMessage = new EcallMessage
        {
            To = phoneNumber,
            From = _smsOptions.Sender,
            Content = new EcallContent
            {
                Text = message
            }
        };

        var result = await _httpClient.PostAsJsonAsync(Message, ecallMessage);

        string? messageResult;
        if (result.IsSuccessStatusCode)
        {
            messageResult = await result.Content.ReadAsStringAsync();
        }
        else
        {
            _logger.LogWarning("Error sending SMS to enable phone 2FA, {ReasonPhrase}", result.ReasonPhrase);
            return (false, result.ReasonPhrase);
        }

        return (true, messageResult);
    }
}

Flow 1: Verify phone

Once a user has authenticated with email and password, the user can verify a phone. To verify the phone, the user MUST be authenticated. If not, a malicious program may send multiple SMS messages and cause financial harm. The Add phone number link can be used to start the verification process.

The VerifyPhone Razor page allows the user to enter a mobile phone number to send the SMS to. This should be validated for real phone numbers at least. The StartVerificationAsync method is used to send the SMS. The ASP.NET Core Identity method GenerateChangePhoneNumberTokenAsync is used to generate the challenge for the verification.

var user = await _userManager.GetUserAsync(User);
if (user == null)
{
    return NotFound($"Unable to load user with ID '{_userManager.GetUserId(User)}'.");
}

var result = await _client.StartVerificationAsync(user, Input.PhoneNumber);

The UI is not styled, just uses the standard bootstrap styles.

The confirm Razor Page accepts the verification code which was sent to the phone and uses the VerifyAndProcessCode method to validate. The ASP.NET Core Identity VerifyChangePhoneNumberTokenAsync method is used to validate the code.

public async Task<IActionResult> OnPostAsync()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }

    try
    {
        if (Input.PhoneNumber != null && Input.VerificationCode != null)
        {
            return await VerifyAndProcessCode(Input.PhoneNumber, Input.VerificationCode);
        }
        else
        {
            ModelState.AddModelError("", "Input.PhoneNumber or Input.VerificationCode missing");
        }
    }
    catch (Exception)
    {
        ModelState.AddModelError("", "There was an error confirming the code, please check the verification code is correct and try again");
    }

    return Page();
}

private async Task<IActionResult> VerifyAndProcessCode(string phoneNumber, string code)
{
    var applicationUser = await _userManager.GetUserAsync(User);

    if (applicationUser != null)
    {
        var validCodeForUserSession = await _client.CheckVerificationAsync(applicationUser, phoneNumber, code);

        return await ProcessValidCode(applicationUser, validCodeForUserSession);
    }
    else
    {
        ModelState.AddModelError("", "No user");
        return Page();
    }
}

private async Task<IActionResult> ProcessValidCode(ApplicationUser applicationUser, bool validCodeForUserSession)
{
    if (validCodeForUserSession)
    {
        var phoneNumber = await _userManager.GetPhoneNumberAsync(applicationUser);
        if (Input.PhoneNumber != phoneNumber)
        {
            await _userManager.SetPhoneNumberAsync(applicationUser, Input.PhoneNumber);
        }

        applicationUser.PhoneNumberConfirmed = true;
        var updateResult = await _userManager.UpdateAsync(applicationUser);

        if (updateResult.Succeeded)
        {
            return RedirectToPage("ConfirmPhoneSuccess");
        }
        else
        {
            ModelState.AddModelError("", "There was an error confirming the verification code, please try again");
        }
    }
    else
    {
        ModelState.AddModelError("", "There was an error confirming the verification code");
    }

    return Page();
}

The UI displays the input for the code and the number it was sent to.

Flow 2: Enable phone 2FA

Once the phone is verified, it can be used for an SMS 2FA.

The EnableSms2FaAsync method is used to enable the SMS 2FA.

var user = await _userManager.GetUserAsync(User);
if (user == null)
{
    return NotFound($"Unable to load user with ID '{_userManager.GetUserId(User)}'.");
}

if (user.PhoneNumber != Input.PhoneNumber)
{
    ModelState.AddModelError("Input.PhoneNumber", "Phone number does not match user, please update or add phone in your profile");
}

await _smsVerifyClient.EnableSms2FaAsync(user, Input.PhoneNumber!);

return RedirectToPage("./VerifyPhone2Fa", new { Input.PhoneNumber });

The EnablePhone2Fa Razor page is used to validate the phone number before activating the 2FA.

The VerifyChangePhoneNumberTokenAsync method is used to validate the code, and then 2FA is activated.

var user = await _userManager.GetUserAsync(User);
if (user == null)
{
    return NotFound($"Unable to load user with ID '{_userManager.GetUserId(User)}'.");
}

var verificationCode = Input.Code.Replace(" ", string.Empty).Replace("-", string.Empty);

var is2faTokenValid = await _userManager
    .VerifyChangePhoneNumberTokenAsync(user, verificationCode, user.PhoneNumber!);

if (!is2faTokenValid)
{
    ModelState.AddModelError("Input.Code", "Verification code is invalid.");
    return Page();
}

await _userManager.SetTwoFactorEnabledAsync(user, true);

The Verify Phone 2FA Razor page displays the input field for the code.

Once activated, the user should persist some recovery codes.

Flow 3: SMS 2FA using phone

Next time the user authenticates, the SMS 2FA is required. The user can use multiple authentication methods, not only SMS. If possible, passkeys or strong authentication should be used.

if (user.Phone2FAEnabled)
{
    IsPhone = true;
    if (!user.AuthenticatorApp2FAEnabled)
    {
        await _smsVerifyClient
            .Send2FASmsAsync(user, user.PhoneNumber!);
    }
}

Further flows

Phone only authentication

Requires mass usage protection

Recover account using Phone authentication

Requires mass usage protection

Links

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/2fa

https://github.com/andrewlock/TwilioSamples/blob/master/src/SendVerificationSmsDemo

Professionell Online SMS senden

Sunday, 02. March 2025

Werdmüller on Medium

The web was always about redistribution of power. Let’s bring that back.

It’s time to rekindle the movement. Continue reading on Medium »

It’s time to rekindle the movement.

Continue reading on Medium »

Friday, 28. February 2025

Mike Jones: self-issued

The Cambrian Explosion of OAuth and OpenID Specifications

Vladimir Dzhuvinov and I led a discussion on The Cambrian Explosion of OAuth and OpenID Specifications at the 2025 OAuth Security Workshop in Reykjavík. The abstract for the session was: The number of OAuth and OpenID specifications continues to grow. At present there are 30 OAuth RFCs, two more in the RFC Editor queue, 13 […]

Vladimir Dzhuvinov and I led a discussion on The Cambrian Explosion of OAuth and OpenID Specifications at the 2025 OAuth Security Workshop in Reykjavík.

The abstract for the session was:

The number of OAuth and OpenID specifications continues to grow. At present there are 30 OAuth RFCs, two more in the RFC Editor queue, 13 OAuth working group drafts, and another eight individual OAuth drafts that may advance. There are nine JOSE RFCs and seven working group drafts. There are four SecEvent RFCs. On the OpenID side, there are 12 final OpenID Connect specs, three final FAPI specs, one final MODRNA spec, three final eKYC-IDA specs, and 24 Implementer’s drafts across the OpenID working groups, plus another ten working group drafts.

The number of possible combinations boggles the mind. And there’s no end in sight!

What’s a developer to do? How have people and companies gone about selecting and curating the specs to implement in an attempt to create coherent and useful open source and commercial offerings? And faced with such an array of combinations and choices, how are application developers to make sense of it all? How can interoperability be achieved in the face of continued innovation?

This session will prime the pump by discussing choices made by some existing open source and commercial offerings in the OAuth and OpenID space and lead to an open discussion of choices made by the workshop attendees and the reasoning behind them. It’s our goal that useful strategies emerge from the discussion that help people grapple with the ever-expanding sets of specifications and make informed implementation choices, while still fostering the innovation and problem-solving that these specifications represent.

The slides used to queue up the discussion session are available as PowerPoint and PDF. Also, see the list of 101 OAuth and OpenID-related specifications referenced during the discussion.

The topic seems to have touched a chord. Many people were clearly already thinking about the situation and shared their views. Some of them were:

Nobody actually expects everyone to implement everything.
Stopping things is super hard. But sometimes it’s necessary (as Brian Campbell put it, “when they’re wrong”).
Timing can be fickle. What may not be useful at one time can turn out to be useful later.
Some specs are highly related and often used together. But those relationships are not always apparent to those new to the space.
We need better on-ramps to help people new to the space wrap their arms around the plethora of specs and what they’re useful for.
Well-written profiles are a way of managing the complexity. For instance, FAPI 2 limits choices, increasing both interoperability and security.
The amount of innovation happening is a sign of success!

Thanks to the organizers for a great tenth OAuth Security Workshop! And special thanks to the colleagues from Signicat who did a superb job with local arrangements in Reykjavík!


@_Nat Zone

Global digital identity and authentication technology trends: February 28 edition

It has been a while since the last update, but here are the latest trends in digital ID and authentication technology. Japan, UK, EU, Australia, United States, Ethiopia: Issuance of new biometric passports begins; production is handled by Toppan Ethiopia, in which TOPPAN holds a 51% stake. The passport…

It has been a while since the last update, but here are the latest trends in digital ID and authentication technology.

Japan

2/2 Registration procedures for national qualifications go digital; 40 more added, including certified public accountants
2/17 Consultation on the draft order partially amending the order concerning the Individual Number, Individual Number Cards, and the provision of specific personal information under the Act on the Use of Numbers to Identify a Specific Individual in Administrative Procedures, and on the draft partial amendment of the technical standards for Individual Number Cards
2/28 Public comments opened on the draft order partially amending the Enforcement Regulations of the Act on Prevention of Transfer of Criminal Proceeds; deadline 23:59 on March 29, 2025

UK

2/26 The UK's digital verification services bill completed its passage through the House of Lords, moving a step closer to realizing The Digital Trust and Attributes Framework.
1/24 GOV.UK Wallet announced. Initially it will let users add the Veteran Card, with other government documents such as driving licences to follow. Private-sector documents will not be stored; these official documents are, however, expected to become storable in private-sector wallets as well.
Identity verification of company directors and persons with significant control is to be completed under new standards by 2026, reducing money laundering and similar crime; this implements the ECCT Act 2023.

EU

1/29 The European Business Wallet was announced, aiming to make B2B transactions significantly more efficient.
2/7 The WE BUILD consortium was selected by the European Commission to take part in the second round of large-scale pilots of the EU Digital Identity Wallet.
2/11 The EC selected Deutsche Telekom AG and Scytáles AB to develop the age verification system integrated with the EUDI Wallet.
1/17 The EDPB opened a consultation on Guidelines 01/2025 on pseudonymisation.

Australia

1/31 Telstra began activating lines using bank-held identity. By using ConnectID, separate identity documents are no longer required.

United States

Attention is starting to turn to AI agents acting on behalf of customers. To address this, a new identity management concept called Customer and Agent Identity Management (CAIM) is emerging. CAIM is a solution that unifies and optimizes identity management for AI agents and human users. It is designed to ensure security, transparency, and compliance as AI agents carry out work autonomously, and is said to include three main components: AgentLogin, a secure login mechanism dedicated to AI agents; AgentGuard, which ensures strict adherence to predefined permissions and security policies; and AgentTrace, which records AI agent authentication and actions, creating auditable logs. The solution is intended to give companies using AI agents an environment in which to operate safely and efficiently.

Ethiopia

Issuance of the new biometric passport has begun. Production is handled by Toppan Ethiopia, in which TOPPAN holds a 51% stake. The chip in the passport reportedly contains fingerprint and iris data in addition to a facial image.

Global

2/24 The second public consultation on FATF Recommendation 16 opened; the deadline is April 18.

March 4: Global trends in FAPI and VCs – FAPI 2.0 & Verifiable Credentials

On Tuesday, March 4, from 10:00 to 12:40, a seminar titled "Global Trends in FAPI and VCs – FAPI 2.0 & Verifiable Credentials" will be held at the FINOLAB event space. I will make a brief appearance too. Various considerations when using such digital c…

On Tuesday, March 4, from 10:00 to 12:40, a seminar titled "Global Trends in FAPI and VCs – FAPI 2.0 & Verifiable Credentials" will be held at the FINOLAB event space. I will make a brief appearance too.

You will be able to hear about the various considerations involved in using such digital credentials for KYC and similar processes at financial institutions, as well as an overview of open banking around the world and FAPI 2.0. On the former topic, you may even get to hear about the paper by Fujie-san and colleagues, "A Study of Management Requirements for Digital Credentials According to Their Use Cases"!

Please do come along.

Seminar overview

This seminar, held during Japan FinTech Week, will take up FAPI and Verifiable Credentials, introducing the latest global developments and discussing how to implement them in day-to-day operations going forward. We hope you can join us.

◆ Date & time: Tuesday, March 4, 2025, 10:00-12:40 (doors open 9:40)
◆ Venue: FINOLAB event space
      Otemachi Building 4F, 1-6-1 Otemachi, Chiyoda-ku, Tokyo
      * The Otemachi Building is long from east to west; the entrance is at the eastern end, on the Tokyo Station side.
◆ Language: Japanese (English materials also provided)
◆ Fee: Free
◆ Organizers: Authlete, Inc. / ITOCHU Techno-Solutions Corporation
◆ Supported by: FINOLAB Inc.

Agenda:
10:00-10:05 Opening remarks
10:05-11:00 Governance of digital credentials — Shigeya Suzuki (Keio University), Naohiro Fujie (ITOCHU Techno-Solutions)

Portable digital credentials such as Verifiable Credentials and related technologies are attracting attention, but using them for KYC and similar processes at financial institutions, for example, raises a range of considerations. The Keio Research Institute at SFC and ITOCHU Techno-Solutions Corporation have conducted joint research on management requirements and governance for digital credentials, informed by the latest developments in the EU, Europe, and the US. In this session we introduce the jointly published discussion paper "Management Requirements for Digital Credentials According to Their Use Cases" and examine the questions that need answering to apply digital credentials to financial services and beyond.

11:00-11:55 Open banking and FAPI 2.0 — Takahiko Kawasaki (Authlete)

In the mid-2010s, the UK launched an open banking policy aimed primarily at stimulating competition in the financial industry. The policy created a mechanism for accessing financial services from computer programs, which gave rise to a wide range of new financial services and made people's lives more convenient. The open banking movement has since spread around the world, and the global standard specification that underpins it technically is FAPI. FAPI itself reached a major milestone this February with the approval of the final version of FAPI 2.0. In this session, Authlete, home to the authors of the FAPI 2.0 specification, explains open banking and FAPI 2.0.

11:55-12:00 Wrap-up — Nat Sakimura (OpenID Foundation)  12:00-12:40 Networking reception

Thursday, 27. February 2025

Moxy Tongue

If Im The Problem, You Might Be The Reason

 Tune your metaphors..   From where I dance and glance in America... All over the world, men and women are not having the same experience. Generational conflicts are arising, influencing everything, causing chaos to status quo, and making clear that the ability to r/w/rw is not shared equally. Computational supremacy, computational illiteracy, what does the data say? W

 Tune your metaphors.. 




 From where I dance and glance in America...

All over the world, men and women are not having the same experience. Generational conflicts are arising, influencing everything, causing chaos to status quo, and making clear that the ability to r/w/rw is not shared equally. Computational supremacy, computational illiteracy, what does the data say? Who? What? When? Why? Where? How? Now what?  

FREEdumb self-certifies. 

Structure yields results... not empathy, not emotion, not opinion. 

You own root, or you don't. 

Owners own the world. One contract at a time. Data, under Terms.

Sovereignty emerges from people, Individuals all. 

Freedom self-certifies. 

Left Cult == Right Cult political paradigm is collapsing.

Something new is possible.

You, Sovereign. 

Data structure yields life results... own root authority, people, it is your Right as Individuals.

The ultimate minority of one. Love somebody. 


 


The Pragmatic Engineer

The Pulse #125: swarm of AI coding tool launches

Anthropic, GitHub, and Google launch new AI coding tools for devs, DeepSeek bucks expectations by triggering higher GPU demand, VC funding down with AI funding up, and more

The Pulse is a series covering insights, patterns, and trends within Big Tech and startups. Notice an interesting event or trend? Send me a message.

Today, we cover:

Industry pulse. Amazon launches Alexa+, VS Code pulls “epic” but malicious theme, Meta cuts stock awards by 10%, Humane sells to HP, and more.

Swarm of AI coding tool launches. Anthropic launches Sonnet 3.7 (a supposedly even more capable LLM for coding) and Claude Code. GitHub ships Copilot Code Reviews, and Google makes Gemini Code Assist free for individual users (from $22.80 per month). Us devs are showered with new tools that all aim to improve workflows.

DeepSeek triggers higher GPU demand? It’s been predicted that more efficient LLMs will decrease demand for GPUs, but DeepSeek is 4-5x as efficient as some current LLMs, and appears to be driving up demand. Also, a look at the very clever “hacks” DeepSeek uses to boost performance on NVIDIA GPUs.

VC funding down, AI funding up. VC firms are increasingly shutting down, with overall funding levels dropping. AI-related funding is up – along with competition among startups to get funded. We predicted VC funding could dry up due to interest rate changes, and the data seems to show this happening.

What does “zero” mean for dates in COBOL? A reminder that “0” and “null” might not mean what you expect in some programming languages and systems.

1. Industry Pulse

Amazon launches Alexa+

A year ago, I analyzed how Alexa had been left behind in the conversational AI race by missing the LLM wave, despite Amazon Alexa being a great use case for an integrated LLM to provide better conversational experiences and new capabilities.

Well, 12 months later, Amazon seems to have caught up. This week, the online retail giant announced Alexa+, which is a revamped, more capable conversational assistant. It promises to make reservations or book rides by using apps like OpenTable and Uber, buy things on Amazon (obviously), manage smart speakers in the home, discover new music, and more.

The launch demo looks very polished and capable. But then, Apple's glossy launch demos for Apple Intelligence didn't stop that feature from feeling half-baked months later.

Under the hood, Alexa is partially powered by Claude, which Amazon didn't announce but Anthropic did. This suggests Anthropic's models are ahead of Amazon's own Nova models. Still, Amazon is clearly creating optionality for itself by not advertising Alexa's dependence on Claude; when and if the Nova models are good enough, Amazon could replace Anthropic's solution with its own.

The Alexa+ pricing is puzzling: Alexa+ is free for Amazon Prime members ($14.99 monthly), and $19.99 per month for everyone else. This seems to make little sense: why charge so much more for a standalone subscription than for the bundle? Is it because Amazon wants to dis-incentivize customers from subscribing to Alexa+ on its own?

Either way, Amazon is making a stronger-than-ever case for customers to choose Amazon Prime at $14.99, which is probably the goal of upgrading Alexa.

VS Code pulls “epic” malicious theme

Read more

Wednesday, 26. February 2025

The Pragmatic Engineer

The man behind the Big Tech comics – with Manu Cornet

In today’s episode of The Pragmatic Engineer, I am joined by a senior software engineer and cartoonist, Manu Cornet.
Stream the Latest Episode

Available now on YouTube, Apple and Spotify. See the episode transcript at the top of this page, and a summary at the bottom.

Brought to You By

WorkOS — The modern identity platform for B2B SaaS.

Graphite — The AI developer productivity platform.

Formation — Level up your career and compensation with Formation.

In This Episode

Have you seen this comic before about the org structure of Big Tech companies?

Longtime readers will recall that the author of this comic is Manu Cornet — who previously shared details on how this comic came about (and he almost did not publish it, thinking it would not be funny). For today’s episode, I sat down with him, to go through more comics, why he created them, and the deeper meaning behind several of these.

Manu spent over a decade at Google, doing both backend and frontend development. He also spent a year and a half at Twitter before Elon Musk purchased it and rebranded it to X. But what Manu is most known for are his hilarious internet comics about the tech world, including his famous org chart comic from 2011 about Facebook, Apple, Amazon, and Microsoft.

In today’s conversation, we explore many of his comics, discuss the meaning behind them, and talk about the following topics:

The viral org chart comic that captured the structure of Big Tech companies

Why Google is notorious for confusing product names

The comic that ended up on every door at Google

How Google’s 20% time fostered innovation—and what projects came from it

How one of Manu’s comics predicted Google Stadia’s failure—and the reasons behind it

The value of connecting to users directly

Twitter’s climate before and after Elon Musk’s acquisition and the mass layoffs that followed

And more!

This is an unusually visual episode, offering the story behind several comics about Big Tech, Google and Twitter that you might have seen. With Manu, we discuss the deeper meaning behind several of these — ones that are pretty obvious to those who worked inside these companies, but might be less intuitive if you’re only seeing them for the first time.

The Pragmatic Engineer deepdives relevant for this episode

How Manu creates comics

Consolidating technologies

Is Big Tech becoming more cutthroat?

Timestamps

(00:00) Intro

(02:01) Manu’s org structure comic

(07:10) Manu’s “Who Sues Who” comic

(09:15) Google vs. Amazon comic

(14:10) Confusing names at Google

(20:00) Different approaches to sharing information within companies

(22:20) The two ways of doing things at Google

(25:15) Manu’s code reviews comic

(27:45) The comic that was printed on every single door of Google

(30:55) An explanation of 20% at Google

(36:00) Gmail Labs and Google Stadia

(41:36) Manu’s time at Twitter and the threat of Elon Musk buying

(47:07) How Manu helped Gergely with a bug on Twitter

(49:05) Musk's acquisition of Twitter and the resulting layoffs

(59:00) Manu’s comic about his disillusionment with Twitter and Google

(1:02:37) Rapid fire round

A summary of the conversation

The org chart comic

Manu created this comic in 2011 featuring the org charts of big tech companies like Amazon, Google, Facebook, Apple, Oracle, and Microsoft.

Manu almost didn't publish the comic because he didn't find it funny by the time he finished drawing it. The punchline was Oracle having a large legal department and a tiny engineering department.

The comic became very popular and has been referenced heavily.

The culture inside Google

Google has a problem naming things, which Manu attributes to its bottom-up culture and engineers starting competing projects.

There isn't much top-down pressure to consolidate efforts, leading to consumer confusion.

An example of Google's naming issues is the evolution from Google Wallet to Android Pay to Google Pay, with logo changes causing confusion for partners like Uber.

Manu created a comic about this: how confusing it is to interact with Google products due to the different names.

The “Google Graveyard comic” was prophetic: it predicted that Google's cloud gaming console Stadia would eventually be killed.

Google vs Amazon

Manu created a comic contrasting Google and Amazon in their engineering culture, and how they treat customers

Amazon is known for bending over backwards for customers, including AWS clients.

Google focuses on building useful features but lacks strong customer service.

Amazon's on-call practices are tough, while Google's are more relaxed, with SREs taking on much of the on-call load.

Google’s engineering culture

The comic about migrations at Google: shows the challenges of adopting new systems before they are ready. One Google exec said, "There's two ways to do things at Google. There's the deprecated way and there's the way that doesn't work yet".

One on code reviews: shows the software engineer trying to replace a light bulb while being critiqued on how to screw it in.

Another comic depicts the author of the change who says, "Oh, I don't need to be too cautious about my code. If there's some problem, the reviewer will catch it," and the reviewer who says, "Oh, I don't need to review this too carefully; the author must've known what he or she was doing".

Google used Manu's "Beware the Tailgator" comic on office doors to prevent theft by reminding employees not to let unverified people into the building. He created updated versions each year.

Google's 20% time allowed employees to dedicate a portion of their time to projects of their choosing.

Gmail and Google News were started as 20% projects.

The company has become more traditional and less encouraging of 20% projects.

There may be a connection between Google's 20% time and the high number of products it shuts down.

Google could experiment internally without necessarily launching everything publicly.

Gmail Labs allowed people to launch things with lower scrutiny and made it clear that these were not officially supported and might go away at any point.

Manu created a comic about the ending of this practice as well.

Move to Twitter and Musk’s takeover

Manu left Google and joined Twitter because it felt like a younger version of Google, with a friendly culture and less red tape.

Shortly after Manu joined Twitter, Elon Musk began his attempt to acquire the company, creating uncertainty.

After Musk bought Twitter, there were massive firings, with at least 50% and ultimately around 80% of employees being let go.

Manu created a bunch of comics about the events at Twitter – and later also published a comic book about it

While Manu was also let go from Twitter, the cartoonist side of him was having a great time witnessing the events unfold there.

As a fun fact, Manu doesn't have a favorite programming language. He chooses the best tool for the job, and has been coding with Java, JavaScript, Python, C++, Objective-C, Swift, Rust, and C. He also built a site with all his book, movie and music recommendations.

Where to find Manu Cornet:

• Mastodon: https://twit.social/@manu

• LinkedIn: https://www.linkedin.com/in/manucornet/

• Website: https://ma.nu/

Mentions during the episode:

• Code Review on Printed Paper: an Excerpt from the Twitoons Comic Book: https://newsletter.pragmaticengineer.com/p/code-review-on-printed-paper-an-excerpt

• A much clearer insight into who sues who: https://bonkersworld.net/who-sues-who

• CEO's "Burning Platform" Memo Highlights Nokia's Woes: https://www.forbes.com/sites/ericsavitz/2011/02/09/ceos-burning-platform-memo-highlights-nokias-woes/

• Guns and Roses: https://bonkersworld.net/guns-and-roses

• The Full Circle on Developer Productivity with Steve Yegge: https://newsletter.pragmaticengineer.com/p/steve-yegge

• Grab: https://www.grab.careers/en/

• Comic about the confusing Google names: https://goomics.net/207

• Naming dashboard: https://goomics.net/239

• Access: https://goomics.net/370

• Welcome to Google: https://goomics.net/50

• Eric Schmidt on X: https://x.com/ericschmidt

• Jonathan Rosenberg on X: https://x.com/jjrosenberg

• Code Reviews: https://bonkersworld.net/code-reviews

• Beware the Tailgator: https://goomics.net/56

• 20% time: https://goomics.net/343

• Where did Gmail labs go?: https://zapier.com/blog/gmail-labs-missing/

• Google Stadia: https://en.wikipedia.org/wiki/Google_Stadia

• Google engineer who criticized the company in viral comics on why he finally quit: https://mashable.com/article/google-engineer-manu-cornet-comics-critique

• Network effect: https://goomics.net/324

• The Twitter Worker Who Captured Elon Musk’s Takeover in All Its Cartoonish Glory: https://slate.com/technology/2022/11/elon-musk-twitter-cartoons-manu-cortnet.html

• Howl’s Moving Castle: https://www.imdb.com/title/tt0347149/

• Tesla: https://www.tesla.com/

• Twittoons: One employee's cartoon chronicle of Twitter's accelerated descent: https://www.amazon.com/Twittoons-employees-chronicle-Twitters-accelerated/dp/1952629020

• Javascript: https://www.javascript.com/

• C++: https://en.wikipedia.org/wiki/C%2B%2B

• Python: https://www.python.org/

• Swift: https://www.swift.org/

• Rust: https://www.rust-lang.org/

• CSS: https://en.wikipedia.org/wiki/CSS

• Manu’s favorites: https://ma.nu/faves/

• Mind the Goof (Gomer Goof #1): https://www.amazon.com/Mind-Goof-Gomer-Franquin/dp/1849183589

• Building Software: https://bonkersworld.net/building-software

Production and marketing by Pen Name. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.

Tuesday, 25. February 2025

The Pragmatic Engineer

Robotics for software engineers

What does it take to build and program robots? A look into the exciting, increasingly popular field of robotics. Guest post by humanoid robot expert, Sandor Felber

Today, there’s an ever-growing number of startups, scaleups, and established companies like Boston Dynamics producing increasingly capable robots, including ones that look humanoid. Tech giants like Tesla have been building humanoid robots, and Meta is expected to invest billions in the technology. These and other factors point to the possibility that, after artificial intelligence, robotics could be the “next big thing” in tech.

But what’s the process of building intelligent robots, and what’s it like as a day-to-day job for the developers who do? It’s not always easy to find this out because robotics startups are famously shrouded in secrecy – and with such cutting-edge technology, it’s unsurprising.

Academia is often a bit more open, so I turned to MIT Robot Learning Researcher, Sandor Felber. He’s a Humanoid Robot Learning Researcher at MIT in Boston, Massachusetts, and previously led a team that built a self-driving race car. Sandor’s also been a robotics intern at Tesla in Palo Alto, California, and a robotics R&D engineer at the Edinburgh Center of Robotics.

Today, he takes us through:

Getting into robotics. From personal interest at high school in electric motors, through studying electrical and mechanical engineering at university, building a driverless race car, interning at Tesla, and researching humanoid robots at MIT.

Robotics industry overview. Industrial robots are becoming more widespread, academia focuses on smaller "long-shot" bets, and industry (the commercial sector) executes on proven concepts.

Planning a robotics project. This is similar to most projects: start with a vision, make a plan, and break it down into steps. It’s always useful to be clear on “critical” vs “nice-to-have” features.

Development phase. Control strategies (model-based control vs learned controllers), simulation and hardware deployment differences, and techniques to make simulations more realistic.

Tech stack and tools. Python, C, C++, and Rust are popular programming languages. A list of tools for experiment tracking and simulation.

Testing, demoing, and shipping. Common reasons why robot deployments fail, an example of deployment of an industrial robot, and why ongoing customer support is a baseline expectation.

Day-to-day as a robot builder. There’s a big difference between academia and industry, and Sandor shares an overview of both.

This topic is intriguing because it combines software, hardware, and cutting-edge tech:

Robotics as a field, and where generative AI techniques and autonomous decision-making fit in. A mental model, as shared by Sandor

This is a guest post. If you’re interested in writing a deepdive for The Pragmatic Engineer – collaborating with Gergely on it – check here for more details.

With that, it’s over to Sandor. You can follow Sandor on LinkedIn, and learn more about his work on his website.

1. Getting into robotics, a personal account

My interest in robotics began in high school, where I wrote the junior equivalent of a dissertation on characterization methods for how electric motors behave under different conditions. This led me to pursue a degree in electrical and mechanical engineering at the University of Edinburgh, home to numerous renowned robotics and AI researchers.

Building a driverless electric race car was one of my bigger projects. At uni, I joined the Edinburgh University Formula Student (EUFS) team, where we designed, built, and raced a driverless electric race car. From a roboticist’s perspective, this is considered a wheeled mobile robot.

My team’s driverless (but also human-drivable) electric vehicle, SISU23D, at launch. Source: Sandor Felber

We designed and built several versions of the car. Here’s a later model:

SISU24D (D for driverless), at Silverstone racetrack, England. Source: Hao Duo

I started working in the electric powertrain team and was responsible for designing and implementing systems that generate and deliver power. It included:

Developing high-voltage battery systems

Integrating traction motors and encoders

Designing power electronics for charging, converting between voltage levels, etc.

Creating cooling systems for batteries, motors, and inverters.

Here’s the high-voltage battery pack I had a hand in designing:

The high-voltage battery pack of SISU24D, with lid and damping layers removed. There are 6 segments, each containing hundreds of cylindrical cells. The battery management system and the accumulator isolation relays are on the left. Source: Sandor Felber.

I eventually became the powertrain team’s lead, and upon returning from Tesla the following year, moved on to direct all operations related to the driverless vehicle’s hardware design, and later served as president of the team of around 140 members.

The whole team at Silverstone track. Left vehicle: #45 SISU23E, the electric human-driven vehicle. Right vehicle: #128 SISU23D. Source: Nikita Jegororvs

The students in the team worked on the project part-time, and everyone went above and beyond their academic requirements to get hands-on experience. It was a student-led project – and an especially cool one; we built race cars that drove on actual Formula 1 tracks!

Sandor behind the human-steerable wheel of the SISU23D, with the Light Detection and Ranging sensor (LiDAR) above his head. The team is celebrating defending our championship title in the FS-AI category of FSUK. Source: Nikita Jegorovs

Along the way, I discovered a passion for control theory. Control systems engineers tend to concern themselves with crafting control strategies that ensure optimal performance, from spacecraft trajectories to insulin delivery in diabetics.

In robotics, you could think of control theory as the invisible puppeteer of a robotic arm, except that instead of pulling strings, it's using mathematics to orchestrate every joint's motion in real time. As a robotic arm needs to smoothly pick up an egg without crushing it, control theory provides the mathematical "muscle memory" that turns crude motor commands into precise, graceful movements. It does this with constant sensor feedback and adjustments, with the approach adjusted depending on how sensitive the control system is to the various feedback signals.
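To make that measure-compare-correct loop concrete, here is a minimal sketch of a PID-style position controller for a single joint, written in Python. The toy joint model, the gain values, and the time step are illustrative assumptions, not code from any of the robots described in this article.

# Minimal PID position controller for one robot joint (illustrative toy model only).

def pid_step(error, prev_error, integral, kp, ki, kd, dt):
    """One PID update: returns (command, updated integral term)."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, integral

def simulate(target_angle=1.0, steps=200, dt=0.01):
    angle, velocity = 0.0, 0.0            # joint state
    integral, prev_error = 0.0, target_angle
    kp, ki, kd = 20.0, 1.0, 2.0           # hand-tuned toy gains
    for _ in range(steps):
        error = target_angle - angle      # sensor feedback: distance from the goal
        torque, integral = pid_step(error, prev_error, integral, kp, ki, kd, dt)
        prev_error = error
        # crude damped double-integrator standing in for the real joint dynamics
        acceleration = torque - 0.5 * velocity
        velocity += acceleration * dt
        angle += velocity * dt
    return angle

print(f"final joint angle: {simulate():.3f} rad (target 1.0)")

Real controllers add actuator limits, filtering, and far better plant models, but the constant cycle of sensing, comparing against the goal, and correcting is the same idea described above.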

My interest in control theory and background in batteries, powertrains (vehicle motors), and electrical control systems from previous internships landed me one at Tesla's robotics department. Since then, I’ve worked with academic and industrial stakeholders on projects ranging from quadruped (quad- as in four, and -ped as in legged) dog-like robots, to humanoid systems.

I’m currently at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), focusing on teaching humanoid robots to perform complex, real-world tasks using learned controllers, powered by neural networks. I’m particularly interested in embedding intelligence into robots by leveraging various types of learning, including supervised and unsupervised learning, offline and online deep reinforcement learning, and imitation learning.

I use these approaches for humanoid robot locomotion, manipulation, whole-body control, and teleoperation (remotely controlling the robot).

2. Robotics industry overview

Robotics companies are raising more money than ever before. In the first eight months of 2024, global robotics companies attracted a record amount of $10.8 billion in funding, averaging $1.2 billion per month! What’s all the hype fuelling this surge of capital?

In public debate, there’s no shortage of focus on labor shortages and stagnating productivity, which reduce growth and competitiveness. The US economy is estimated to be missing up to 4.6 million workers annually needed to maintain levels of supply and demand, according to a recent study. That number equates to 2% of the current US population. The same study suggests Germany needs to find an additional 1.6 million workers to keep current economic levels; 3% of its population.

Sustaining economic growth amid shrinking labor pools is one viable path for robotics, and it is getting a lot of attention. It is also a key driver of the increased adoption of industrial robots, which shows no sign of slowing down: global installations reached nearly 600,000 units in 2024, per the World Robotics Report. This figure surpassed previous benchmarks and shows that industries struggling with labor shortages can return to growth.

Industry robots are reshaping the automotive industry as key parts of assembly lines traditionally operated by humans. Source: Wevolver

If robots become ubiquitous, what will humans do? This is the most common concern I hear when I talk about my work. The question is an old one: on the New York subway, I stumbled upon a Holiday Nostalgia Ride, which offers a trip in an old subway car from between the 1930s and 1970s. My coach looked like it was from the 1960s, and inside was an advert about upskilling for “tomorrow’s jobs:”

A poster on the New York subway in the 1960s

The world has changed massively since then, thanks in part to automation, without which our standards of comfort wouldn't be possible or affordable; think of the dishwasher that automates the washing up and helps canteens offer a cheaper menu than if they had to hire extra people to wash dishes and pay for excess water (new dishwashers are very efficient).

Acquiring new skills in a quickly-changing world is at least as necessary today as it was back then, as illustrated by that subway ad from the mid-20th century.

Industry vs academia

Approaches to research and development have always differed between industry and academia. Having worked in both environments, here’s how I see them compare in robotics:

Academia: smaller “long-shot” bets, developed on a budget. Many projects take years to mature, due to the limited effort that a couple of post-docs, PhD students, and undergrads can dedicate.

Industry: execute on proven concepts with substantial backing. Industry prioritizes execution on feasible concepts. Once a concept is assumed to work – as usually verified by a proof-of-concept (POC), also known as a “minimum viable product” in startup lingo – industry players can raise vast amounts of money to build it.

Modern robotics may have reached an inflection point, where enough academic “long-shot” bets are delivering results, with feasible paths to building practical robots, including humanoid ones. Examples of bets made on robotic hardware:

Tesla’s Optimus humanoid prototype

Boston Dynamics bidding farewell to hydraulics and welcoming electric actuators on its all-new humanoid platform, Atlas

1X building a humanoid robot, NEO

Three humanoid robots: These are calculated bets with significant investment

3. Planning a robotics project

Here’s how I’ve seen robotics projects get done.

Vision for a demo: many robotics projects start with the question: “What should the robot achieve when completed?” Apple co-founder Steve Wozniak’s “coffee test” is one such vision, now frequently referred to as the “New Turing Test.” His definition:

“A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons."

In practice, on most projects I've seen, the vision gets watered down or descoped as deadlines approach and material or human resources become limiting factors. However, a demo that passes the test above could well serve as a benchmark for artificial general intelligence (AGI). As part of that test, a physically-embodied machine must demonstrate intelligence by entering a random home and successfully making a cup of coffee with tools available in a previously unseen kitchen.

After the vision is set, the next steps are:

Planning: goals, demoable outcomes, and target applications or target environments. The requirements of a good demo in a target environment can be very high!

Break things down. Translating deliverables into functional requirements. For each functional requirement, one or more features that show successful fulfillment need to be identified. When breaking things down, features are often separated into at least two subsets:

Critical: those that must be present in the final product. If missing, the project is considered a failure. An example is a 50 kg humanoid robot deployed in warehouse automation that needs to carry a 10 kg payload per arm. If the electric motors moving the arms are too weak, or the ankles suddenly overheat when operating at 68 kg in total, then the client won't be happy with the product or service (RaaS, or Robot-as-a-Service, is a thing now). This feature therefore likely requires robust engineering all the way to the very left side of the V-model. Identifying such interdependencies and making engineering design calls that aid the design, based on engineering intuition, is challenging and part of what makes for great technical leaders.

Nice-to-have: features that a final project could ship without, such as a higher control frequency in order to achieve smoother, less jittery motion. Note, these cannot be fully neglected but can be considered soft constraints, in contrast to critical features which tend to be hard constraints. A robot that gets the job done, even if not as smoothly, could still be counted as a success with room for improvement.

Critical vs nice-to-have thresholds can change over time. Hard constraints (“critical”) vs soft ones (“nice-to-have”) are often linked, and the difference lies in arbitrary measures of quantity and quality.

Being clear about the “minimum needed” (critical) and “nice-to-have” requirement thresholds is good practice in robotics projects. These can change as the project develops, turning nice-to-haves into critical ones. As long as everyone on the project is notified and on-board when changes happen, then a project should progress without hiccups.

4. Development phase

During development, robotics engineers need to balance proof-of-concept experiments with the scaling of a project, including the supporting infrastructure. For example, one proof-of-concept experiment could be verifying that a parallel jaw gripper (the robot’s hand) can grasp a mug in specific setups.

Scaling can be hard because there are lots of problems in the real world to solve. Scaling the above example of holding a mug can be seen as:

Read more


Patrick Breyer

Chat control: EU Ombudsman criticises revolving-door move of Europol official to chat control lobbyist Thorn

In response to a complaint by former Pirate Party MEP Patrick Breyer, the outgoing EU Ombudsman Emily O'Reilly has found it to be "maladministration" that a Europol official, despite potential conflicts of interest and without any conditions, was allowed to move to …

In response to a complaint by former Pirate Party MEP Patrick Breyer, the outgoing EU Ombudsman Emily O'Reilly has found it to be "maladministration" that a Europol official, despite potential conflicts of interest and without any conditions, was allowed to move to the chat control vendor Thorn and was even permitted to keep working at Europol in the same role for another two months beforehand. "Europol failed to resolve the above-mentioned conflict of interest, thereby jeopardising the integrity and impartiality of its actions," the Ombudsman's decision states. At Europol, the official had worked on an AI pilot project for CSAM detection; after his move he was registered with the Bundestag as a lobbyist for Thorn and, in his new role, travelled to a Europol meeting with his former colleagues for a product presentation. From 2025, Europol now promises additional measures to prevent conflicts of interest, such as transfers, changed duties, and the withdrawal of information access from employees who intend to leave.

Breyer welcomes the outcome: "When a former Europol employee sells his inside knowledge and contacts in order to lobby EU Commission staff he knows personally, that is exactly what needs to be prevented. Since the revelation of 'Chatcontrol-Gate' we have known that the EU's chat control proposal is ultimately a product of the lobbying of an international surveillance-industrial complex. To make sure something like this never happens again, the surveillance lobby swamp must be drained."

Note: The EU Ombudsman's decision is available here. The highly controversial EU chat control proposal is stuck because there is neither a qualified majority among EU member states for abolishing the confidentiality of digital correspondence and destroying secure encryption, nor a majority for removing chat control from the proposal.


Altmode

My Experience with a Hybrid Heat Pump Water Heater

In October 2023, my wife emerged from the bathroom one morning to report that we had no hot water. I found that our gas-fired, on-demand water heater had failed and was leaking a small amount. I called our plumber, who confirmed we needed a new water heater. We immediately faced the following decision: Should we […]

In October 2023, my wife emerged from the bathroom one morning to report that we had no hot water. I found that our gas-fired, on-demand water heater had failed and was leaking a small amount. I called our plumber, who confirmed we needed a new water heater.

We immediately faced the following decision: Should we replace our water heater with a similar gas-fired on-demand unit or change to an electric heat pump water heater? We have been hearing a great deal about the efficiency and ecological benefits of heat pump water heaters, and on-demand gas units like the one we had are said to emit considerably more carbon than even gas-fired tank units.

Placement for a heat pump water heater would be a challenge. The water heater had been located in the attic, and the heat pump water heater was somewhat taller than even the gas-fired tank unit that was installed at the time we bought the house. That meant it could not fit in the attic and would need to be installed in the garage. We found a reasonable location, although we would lose storage space in the garage. It also meant that the loop for our hot water recirculating system needed to be extended down to the garage, making the loop longer.

The plumber said that he could replace the on-demand gas unit with a similar one in a couple of days, but it would take a week or so to obtain a heat pump unit, which would also cost considerably more. We had to make a snap decision because we didn’t want to be without hot water any longer than necessary. Wanting to be good ecological citizens, we opted for the heat pump unit despite the cost, complexity, and delay in installation.

As promised, the plumber obtained the heat pump water heater, an American Standard 50-gallon unit (model ASHPWH-50). Installation took about two days and involved running a considerable amount of pipe from the old location in the attic to the new location in the garage, installing a new 30-ampere 220-volt electrical circuit at the heater, and bracing against earthquakes. The installation went well, except that a defective water leak sensor had to be bypassed.

First Impressions

The first thing I noticed about the new water heater was its significant noise. The specification sheet listed it as emitting 40.1 dBA, which is about the same as a refrigerator’s (and also an implausibly specific value). However, the water heater was much louder than this.

The on-demand water heater spoiled us: we never ran out of hot water. We soon found that not to be the case with the new water heater, and I turned the water volume for my showers much lower than before. Perhaps scarcity is a good thing.

The new water heater is a “hybrid” unit with a heat pump and also a conventional heating element. It has several modes:

Green (heat pump only)
Comfort ("rational use of the heat pump and, if necessary, the heating element")
Fast (heat pump + heating element)
i-Memory ("learns your habits and replicates them autonomously")
Electric (heating element only)

There’s also a Boost mode, which is like Fast, but reverts to the previous mode when it gets up to temperature. Initially, it was set for Comfort, but I couldn’t understand when it was and wasn’t running the heating element. I asked product support and the installer, but I still couldn’t make sense of Comfort mode, so we changed to Green. I manually turn on the Boost feature when we need extra hot water.

The noise issue

The noise generated by the water heater was bothersome, especially since it was noticeable in my office adjacent to the garage. I downloaded a sound level measurement app on my phone. I sent a video to American Standard (actually Ariston) product support. They told me I had not measured it under the right conditions. Of course, my garage is not an anechoic chamber.

I consulted “the internet” and found conflicting answers about the noise generated by heat pump water heaters. Some comments described them as very noisy, and others found them quite acceptable. I tried to find others locally with similar water heaters for comparison and failed. In late 2024, the behavior of the water heater changed: sometimes it had a loud “whirr,” and sometimes it didn’t. This prompted me to make several recordings of the sound (and analyze it with a spectrum analyzer app) and determine that much of the excess noise was coming from the fan in the heat pump. I again contacted product support, and they agreed to ship me a replacement fan. My plumber replaced it, and the noise is much better now.

The app

Like many new devices, the water heater connects to the internet through my WiFi and, through a cloud service, allows an app on my phone to control and monitor it. The app, called Comfort Link, is acceptable but less than excellent.

Good:

Allows me to remotely control the water heater, including turning on Boost mode when needed and turning off the water heater when we're on vacation
Displays the water heater temperature so I can judge whether to take a shower
Shows the history of power usage by the water heater by month, day, and hour

Not so good:

The water heater has an icon that appears when it judges it's OK to shower. This isn't on the app, but it would be very useful.
The water heater temperature shown on the app doesn't agree with that shown on the water heater. The support people explained that the water heater has several temperature sensors, and the app displays (I think) the average between the tank top and tank bottom temperature. But that doesn't explain why the front panel displays something different.
Limited history of power usage data (e.g., daily data for only the current and previous month)
Poor resolution of power usage data. Measurements seem to be taken only once per hour, so the heating element usage is reported in 4.5 kWh increments and the heat pump in 0.5 kWh increments.
There seems to be no API to download data other than through the app.

Efficiency

The heat pump is indeed more efficient than the electric heating element. This is suggested by the usage graph, where the orange bars represent energy usage from the heating element:

But of course, the days we used the heating element are also days where I turned on Boost mode because we were using more hot water, which exaggerates the difference. I measured the tank temperature over time as it heated up:

Heating element (4.5 kW): temperature rise 68.9°F/hr = 15.3°F/kWh
Heat pump (0.45 kW): temperature rise 17.3°F/hr = 38.7°F/kWh (77°F ambient)
Heat pump (0.45 kW): temperature rise 11.9°F/hr = 26.5°F/kWh (61°F ambient)
Heat pump (0.45 kW): temperature rise 11.4°F/hr = 25.3°F/kWh (57°F ambient)

As expected, the heat pump’s efficiency depends on the ambient temperature, which determines the amount of heat the heat pump can “harvest” from the air. Compared with the heating element, the heat pump’s overall efficiency ranges from a factor of 1.65 to 2.53.
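For anyone who wants to check the arithmetic, here is a small sketch (Python) that reproduces those factors from the °F-per-kWh figures above; the inputs are simply the measurements already listed, nothing new.

# Heat pump vs. heating element efficiency, from measured temperature rise per kWh
heating_element = 15.3  # °F per kWh (4.5 kW resistive element)
heat_pump = {
    "77°F ambient": 38.7,  # °F per kWh
    "61°F ambient": 26.5,
    "57°F ambient": 25.3,
}
for ambient, deg_per_kwh in heat_pump.items():
    print(f"{ambient}: {deg_per_kwh / heating_element:.2f}x the heating element")
# prints roughly 2.53x, 1.73x, and 1.65x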

Operating cost

The heat pump water heater is efficient, but is it more cost-effective than our previous gas water heater? As noted above, it isn’t possible to make an exact comparison because we don’t know how our usage may have changed. If we make the worst-case assumption that our usage hasn’t changed, we are now using an average of about 5.3 kWh per day (153 kWh/month divided by 29 days). At an off-peak rate of $0.44/kWh, that works out to $2.33/day. Counting only those days where we used only the heat pump, that’s about 4 kWh/day or $1.76/day. Before getting the new water heater, our gas usage was about 8 therms/month, virtually all of which was for hot water. At our current rate of about $2.50/therm, that works out to $20/month or $0.67/day.
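Here is the same cost arithmetic as a short sketch (Python), using the rates quoted above and carrying the same worst-case assumption that our usage hasn't changed:

# Daily hot-water cost: heat pump water heater vs. the old gas on-demand unit
off_peak_rate = 0.44                   # $ per kWh
kwh_per_day_all = 5.3                  # ~153 kWh over 29 days
kwh_per_day_heat_pump_only = 4         # days when only the heat pump ran
therms_per_month, gas_rate = 8, 2.50   # $2.50 per therm

print(f"all-in electric:     ${kwh_per_day_all * off_peak_rate:.2f}/day")             # ~$2.33
print(f"heat-pump-only days: ${kwh_per_day_heat_pump_only * off_peak_rate:.2f}/day")  # $1.76
print(f"old gas unit:        ${therms_per_month * gas_rate / 30:.2f}/day")            # ~$0.67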

As our plumber had hinted when we installed the new water heater, gas is relatively inexpensive compared to electricity at current prices (unless you’re generating your own electricity). While an electric heat pump water heater is efficient and ecologically responsible, don’t expect it to save you money at current rates.

Monday, 24. February 2025

Bill Wendels Real Estate Cafe

How are political & economic uncertainty impacting real estate consumers?

Writing that he knows “a bunch of people who held their nose and voted for Trump,” the president of HousingWire media issued this challenge on… The post How are political & economic uncertainty impacting real estate consumers? first appeared on Real Estate Cafe.

Writing that he knows “a bunch of people who held their nose and voted for Trump,” the president of HousingWire media issued this challenge on…

The post How are political & economic uncertainty impacting real estate consumers? first appeared on Real Estate Cafe.

Damien Bod

Use client assertions in OpenID Connect and ASP.NET Core

Client assertions are a method of client authentication that can be used in OpenID Connect, providing an alternative to client secrets. This approach enhances security by using signed tokens (JWTs) to authenticate clients during the token request or the OAuth PAR request. In ASP.NET Core, client assertions are not supported by default, so a […]

Client assertions are a method of client authentication that can be used in OpenID Connect, providing an alternative to client secrets. This approach enhances security by using signed tokens (JWTs) to authenticate clients during the token request or the OAuth PAR request. In ASP.NET Core, client assertions are not supported by default, so a small implementation is required.

Code: https://github.com/damienbod/oidc-client-assertion

Setup

A client assertion can be added to the token request which is sent from the ASP.NET Core backend to the OpenID Connect server. It is sent instead of a client secret, and the client is authenticated using the assertion. If using OAuth, the assertion is sent in the PAR request.

Create client assertion JWT

The application uses Microsoft Entra ID as the OpenID Connect server and the identity provider. The assertion was created using the Microsoft documentation.

Original src: https://learn.microsoft.com/en-us/entra/msal/dotnet/acquiring-tokens/web-apps-apis/confidential-client-assertions#crafting-the-assertion

An X509Certificate2 certificate is used to sign and validate the client assertion. The aud, iss (clientId), and sub claims are included in the JWT client assertion along with the standard claims, and the assertion is signed with the private key of the certificate, which Microsoft Entra ID then uses to validate it. As always, if this is used against a different OpenID Connect server, the JWT will be validated differently. Microsoft Entra ID requires an RSA key size of 2048.

public static class CertService
{
    public static string GetSignedClientAssertion(
        X509Certificate2 certificate, string aud, string clientId)
    {
        // no need to add exp, nbf as JsonWebTokenHandler will add them by default.
        var claims = new Dictionary<string, object>()
        {
            { "aud", aud },
            { "iss", clientId },
            { "jti", Guid.NewGuid().ToString() },
            { "sub", clientId }
        };

        var securityTokenDescriptor = new SecurityTokenDescriptor
        {
            Claims = claims,
            SigningCredentials = new X509SigningCredentials(certificate)
        };

        var handler = new JsonWebTokenHandler();
        var signedClientAssertion = handler.CreateToken(securityTokenDescriptor);

        return signedClientAssertion;
    }
}

Using Microsoft Entra ID as the OpenID Connect server, the client assertion is created with the token endpoint as its audience. In production, the certificate can be generated using Azure Key Vault. The certificate can also be read from the operating system store. This can all be set in the app settings.

// single tenant
var aud = $"https://login.microsoftonline.com/{builder.Configuration["AzureAd:TenantId"]!}/oauth2/v2.0/token";

var clientAssertion = CertService.GetSignedClientAssertion(
    X509CertificateLoader.LoadPkcs12FromFile("cert_rsa512.pfx", "1234"),
    aud,
    builder.Configuration["AzureAd:ClientId"]!);

Use the client assertion in the OpenID connect client

Once ready, the OnAuthorizationCodeReceived event can be used to add the assertion in the OpenID Connect client. If using PAR, the PAR event is used instead.

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddCookie(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, oidcOptions =>
    {
        oidcOptions.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        oidcOptions.Scope.Add(OpenIdConnectScope.OpenIdProfile);
        oidcOptions.Scope.Add("user.read");
        oidcOptions.Scope.Add(OpenIdConnectScope.OfflineAccess);
        oidcOptions.Authority = $"https://login.microsoftonline.com/{builder.Configuration["AzureAd:TenantId"]}/v2.0/";
        oidcOptions.ClientId = builder.Configuration["AzureAd:ClientId"];
        //oidcOptions.ClientSecret = builder.Configuration["AzureAd:ClientSecret"];
        oidcOptions.ResponseType = OpenIdConnectResponseType.Code;
        oidcOptions.MapInboundClaims = false;
        oidcOptions.SaveTokens = true;
        oidcOptions.TokenValidationParameters.NameClaimType = JwtRegisteredClaimNames.Name;
        oidcOptions.TokenValidationParameters.RoleClaimType = "role";
        oidcOptions.Events = new OpenIdConnectEvents
        {
            // Add client_assertion
            OnAuthorizationCodeReceived = context =>
            {
                context.TokenEndpointRequest!.ClientAssertion = clientAssertion;
                context.TokenEndpointRequest.ClientAssertionType = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer";
                return Task.FromResult(0);
            }
        };
    });

Setup Entra ID App registration

An Azure App registration is used to authenticate using Microsoft Entra ID. The public key of the certificate can be uploaded to the App registration.

Client assertions can be used for any OpenID Connect confidential client if the OpenID Connect server supports this. The assertion can be sent in the token request or in the PAR request depending on the flavor of the OpenID Connect code flow used.

Creating the certificate

The GitHub repository linked at the top provides an example of creating your own certificate, which can be used in this flow. You can also use Azure Key Vault or any other tool to create the certificate.

Links

https://learn.microsoft.com/en-us/entra/msal/dotnet/acquiring-tokens/web-apps-apis/confidential-client-assertions

https://github.com/AzureAD/microsoft-identity-web/blob/2b8fbf0104d820bba8785c41b2ef9e6f801b5e73/src/Microsoft.Identity.Web.TokenAcquisition/MsAuth10AtPop.cs#L48

https://datatracker.ietf.org/doc/html/rfc7521

https://oauth.net/private-key-jwt/

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/configure-oidc-web-authentication

RFC 7523 – JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants

Sunday, 23. February 2025

IdM Laboratory

We're holding an event on FAPI and Verifiable Credentials

Hello, this is Fujie. Early March is Fintech Week, so fintech events are being held all over the place. At one of them, 4F (Future Frontier Fes by FINOLAB), I have been given a slot and will talk about FAPI and Verifiable Credentials together with Kawasaki-san of Authlete. This is the event: https://4f-otmcbldg.tokyo/2025-jp/ Our session is on the morning of March 4. Session details and registration are available here: https://fapi-vc.peatix.com/ Together with Professor Suzuki of Keio, I will walk through the discussion paper on management requirements for digital credentials that we recently published. Everyone is issuing all sorts of digital...

Hello, this is Fujie.

Early March is Fintech Week, so fintech events are being held all over the place. At one of them, 4F (Future Frontier Fes by FINOLAB), I have been given a slot and will talk about FAPI and Verifiable Credentials together with Kawasaki-san of Authlete.

This is the event:

https://4f-otmcbldg.tokyo/2025-jp/

Our session is on the morning of March 4.

Session details and registration are available here:

https://fapi-vc.peatix.com/

Together with Professor Suzuki of Keio, I will walk through the contents of the discussion paper on management requirements for digital credentials that we recently published. Everyone is issuing all sorts of digital credentials, but the point is that they need to be managed appropriately for their intended use.

Please come along!


Thursday, 20. February 2025

Wrench in the Gears

A Visit To Tulsa With Microtuble Symbolic Communication + Thoughts on DOGE Against A UPenn “Quaker Kabbalah” Backdrop

Posting this late, so I won’t be writing an introduction. As is the case for me lately, the offering is a somewhat rambling discussion that surfaces quite a few nuggets for you to chew on.  Life in this absurdist improvisational theater might be easier if we can practice the Tao of Pooh. Smile. Here’s a link [...]

Posting this late, so I won’t be writing an introduction. As is the case for me lately, the offering is a somewhat rambling discussion that surfaces quite a few nuggets for you to chew on.  Life in this absurdist improvisational theater might be easier if we can practice the Tao of Pooh. Smile.

Here’s a link to the playlist of DOGE clips. An overview of them is presented in the last half hour of the video.

Be sure to check out the story of the Hot Springs, AR IQ Zoo run by BF Skinner behaviorist animal trainers. Hat Tip to Emily of “Save Maine Schools” who first alerted me to BF Skinner’s Pigeon Project. I promise you’ll never think of Cheerios the same way!

Click here for historical background and here for interesting projects (Spy Birds!).

Also, here is a link to Joseph’s blog on “free energy,” which I equate more with manifestation and the ritual opening of “gates” into the quantum field.

 

 

 

Wednesday, 19. February 2025

The Pragmatic Engineer

Developer productivity with Dr. Nicole Forsgren (creator of DORA, co-creator of SPACE)

Nicole is one of the foremost experts in developer productivity, and author of the book Accelerate. She details how to think about, measure & improve developer productivity
Stream the Latest Episode

Available now on YouTube, Apple and Spotify. See the episode transcript at the top of this page, and a summary at the bottom.

Brought to You By

DX — An engineering intelligence platform designed by leading researchers.

Sentry — Error and performance monitoring for developers.

In This Episode

We’ve previously covered a lot on the surprisingly slippery topic of developer productivity: from how Uber measures it, how LinkedIn does it, an overview of the innovative DevEx framework, and a deepdive of dev productivity metrics used by Google, LinkedIn, Peloton, Amplitude, Intercom, Notion, Postman, and 10 other tech companies.

Every time developer productivity comes up: DORA and SPACE are almost certainly mentioned. So I could not be more excited to have Dr. Nicole Forsgren on the podcast. Nicole is the creator of the widely adopted DORA and SPACE frameworks, co-author of the award-winning book Accelerate and the DevOps Handbook (2nd edition), and author of the State of DevOps reports. She is currently a Partner at Microsoft Research, leading developer productivity research and strategy, and is currently working on a book about developer experience with Abi Noda. It’s safe to say that Nicole is one of the foremost experts in developer productivity — if not the foremost expert.

In this episode, we discuss:

Why PRs and Diffs are incomplete as a solo metric and how to view them in context

The importance of a holistic set of metrics for evaluating productivity

An overview of DORA’s four key metrics, its strengths, and its limitations

The evolution of processes and tools since DORA, including SPACE

What developer experience is—and concrete ways to improve it

Common characteristics of highly productive engineering teams

How faster onboarding might challenge Brooks's Law

How AI tooling is impacting developer productivity and best practices for experimentation

And much more!

Takeaways

My biggest takeaways from this episode:

1. Measuring the number of PRs is controversial for a good reason. Measuring any single one “output” can be misused: and when devs know it is being measured, they will optimize for it.

However, measuring PRs is important: but to do it well, don’t look at it as an individual performance metric. Instead, use it to understand how well (or not well) systems across the team and company are working. For example, what systems are getting in the way of PRs taking long to merge?

2. DORA and SPACE both have their own limitations. DORA is very well-defined in the metrics it measures. However, a massive limitation it has is how it only measures from commit to production.

SPACE can be used to measure the complete developer workflow – including e.g. planning, coding, and even post-release. In turn, it is a vaguer framework where you need to put in more effort to make it useful for your environment.

3. AI developer tools don’t change DevEx fundamentals. DORA and SPACE are still relevant. AI tools might be able to improve iteration speed, or make certain tasks easier. It’s really an open question how exactly this will play out. There’s a fair chance these tools significantly change how we do development. So if you’re a dev: experiment with them!

4. Developer experience (DevEx) is not necessarily great at large companies and poor at startups. Google is known to have standout developer experience thanks to the amount of investment it has put into its systems – but not all large companies are like this!

Startups have fewer resources to invest specifically in improving DevEx: but then again, they have less infrastructure in place! The more custom infrastructure you have, the more painful DevEx tends to be (and the more in-house tools need to be built to mitigate this).

The Pragmatic Engineer deepdives relevant for this episode

Measuring Developer Productivity: Real-World Examples

A new way to measure developer productivity – from the creators of DORA and SPACE

Measuring Engineering Efficiency at LinkedIn

How Uber is Measuring Engineering Productivity

Measuring software engineering productivity

Timestamps

(00:00) Intro

(02:03) PRs and Diffs and how to view them in the right context

(07:42) EngThrive at Microsoft

(10:26) The importance of having a holistic set of metrics in evaluating productivity

(17:00) The four key metrics of DORA

(23:57) The evolution of processes and tools since DORA, including SPACE

(26:40) An explanation of developer experience — and ways to improve it

(30:44) Devex at startups vs. larger companies

(34:20) Why measuring developer productivity is so difficult

(39:05) How to make a case for platform teams

(44:34) Common characteristics of highly productive teams

(51:01) Brooks's law and how faster onboarding might make it irrelevant

(52:49) Onboarding for internal transfers

(54:18) Shifting culture towards technology first

(58:36) How middle management can improve engineering culture

(1:03:36) How AI tooling is impacting developer productivity

(1:06:42) Potential use cases for AI

(1:08:40) A case for experimenting with AI coding tools and how to maintain flow state

(1:15:30) Rapid fire round

A summary of the conversation

Measuring Developer Productivity

PRs and diffs can be both good and bad signals when measuring developer productivity. Many leaders focus on the traditional economic definition of productivity: the rate of output or output per input.

PRs can give a view into the work being done, but senior engineers may have lower PR output due to other responsibilities like unblocking others, architecture, mentoring, and recruitment. It's important to consider the context and who is being measured.

A constellation of metrics is better than a single metric for a holistic view. So yes: it can be helpful to get data on PRs – as long as you get a lot more other data points as well!

Microsoft's EngThrive: this framework (which Nicole works on) uses multiple metrics across dimensions, inspired by the SPACE framework, alongside qualitative feedback.

Easy-to-operationalise measures can be useful. However, they only show a few things and may ignore important aspects.

SPACE: a framework that groups metrics into:

Satisfaction

Performance

Activity

Communication

Efficiency.

Nicole is a co-author of the framework. It’s one that is getting much wider adoption. See an overview of the SPACE framework

Absent engineering leadership is unhelpful. No matter what metrics you use: if engineering leadership is not connected with the work, outcomes are likely to be a lot worse.

Metrics collected in a holistic way, on the other hand, can guide decisions about company-wide blockers.

DORA framework and its evolution

Nicole is the co-creator of DORA

DORA can refer to the entire research program. More frequently though, people tend to refer to the 4 DORA metrics:

Deployment frequency

Lead time for changes

Change failure rate,

Time to restore service

These 4 metrics are a good indicator of how well a development pipeline is working
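As a rough illustration of how these four metrics can be derived from pipeline data, here is a minimal sketch in Python; the record format and sample values are hypothetical, not something defined by DORA itself.

from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, failed, restored_time)
deploys = [
    (datetime(2025, 2, 1, 9),  datetime(2025, 2, 1, 15), False, None),
    (datetime(2025, 2, 2, 10), datetime(2025, 2, 3, 11), True,  datetime(2025, 2, 3, 13)),
    (datetime(2025, 2, 4, 8),  datetime(2025, 2, 4, 12), False, None),
]
days_observed = 4

deployment_frequency = len(deploys) / days_observed                      # deployments per day
lead_times = [deploy - commit for commit, deploy, _, _ in deploys]
lead_time_for_changes = sum(lead_times, timedelta()) / len(lead_times)   # avg commit-to-production
failed = [d for d in deploys if d[2]]
change_failure_rate = len(failed) / len(deploys)
time_to_restore = sum((restored - deploy for _, deploy, _, restored in failed),
                      timedelta()) / len(failed)

print(deployment_frequency, lead_time_for_changes, change_failure_rate, time_to_restore)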

DORA can be adapted for different environments, such as air-gapped systems, by redefining the scope of the deployment pipeline.

DORA does not show the full picture. DORA metrics serve as a signal for how well a team is doing, but it does not show the complete picture. DORA focuses on commit to production. This is a part of the engineering process that can be made pretty efficient.

Fast feedback is important, as is the ability to set up environments and provision resources quickly.

SPACE evolved from DORA. SPACE considers the entire end-to-end toolchain. SPACE metrics can be applied to specific components, like PRs. DORA is essentially one of many instances of SPACE.

Developer Experience

Developer experience is a developer's lived experience. This can be “good” – it can also be “bad!”

A good developer experience minimises friction, blockers, and confusion

Security and compliance: these are important! However, automation can minimise manual processes for developers. A secure and compliant dev process does not need to be cumbersome.

Delays and cognitive load from inefficient processes negatively affect developer experience.

Both large and small companies can have good or bad developer experiences. Large companies have more resources but may struggle with legacy systems and bureaucracy. Startups can move fast but lack infrastructure. Google has an exceptional developer experience because they invest in it and don't tolerate friction.

Improving Engineering Teams

Most work in software is invisible, and systems are increasingly complex.

Leaders and executives often don't feel the pain of developers.

To make a case for investment in tooling: use both data and stories. Present the trade-offs realistically.

Acknowledge that there's a tipping point where further investments won't deliver much more progress.

Frame requests in terms of trade-offs and potential for repurposing teams.

High-performing teams:

Exhibit psychological safety, curiosity, and openness to better ways of doing things.

Onboarding time is a great indicator of team efficiency.

A “dummy pull request” early during the onboarding phase (e.g. day 1!) can significantly increase productivity. Try it!

If onboarding is slow: adding people to a late project can backfire. But if onboarding is fast: this can actually work!

Engineering culture transformations:

These require changing the entire company culture, not just the tech culture.

Changing how people do their work can lead to cultural impacts.

Introducing faster tools and ways of working changes people's lived experience.

To improve the engineering culture, summarize observations, seek feedback, and involve others in finding improvements.

Aim for quick wins with visible impact, like hack days to address paper cuts.

Impact of AI on Developer Productivity

DORA and SPACE relevance:

DORA metrics should remain relevant

SPACE is still applicable for assessing satisfaction, performance, activity, communication, and efficiency.

Changes AI is likely to bring:

There’s some chance that AI tools reinvent development and software engineering, particularly in IDEs and testing.

Coding assistants: AI-powered ones can improve code readability and unit test success rates

PR reviews: an obvious place for AI to help

Support roles like release engineering and deployment: could be a use case

Experiment with AI!

Experiment with AI tools and adapt them to your workflows.

Assess AI tools to see what works, and focus on problem-solving and architecture.

When you find the workflows where AI is helpful: you might be able to preserve flow better

Turn off autocomplete and autosuggestions if they are an interruption!

Consider AI as a different way to work, such as "driving and reviewing" or "guiding".

LLMs are changing software engineering rapidly.

Book recommendations

Nicole recommends reading:

Inspired by Marty Cagan

Outlive by Peter Attia

Ender's Game for some good fiction reading

Resources & Mentions

Where to find Dr. Nicole Forsgren:

• X: https://x.com/nicolefv

• LinkedIn: https://www.linkedin.com/in/nicolefv/

• Website: https://nicolefv.com/

Mentions during the episode:

• Microspeak: Sats: https://devblogs.microsoft.com/oldnewthing/20100914-00/?p=12873

• Measuring Software Engineering Productivity: https://newsletter.pragmaticengineer.com/p/engineering-productivity

• Hawthorne effect: https://en.wikipedia.org/wiki/Hawthorne_effect

• Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations: https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations/dp/1942788339

• What is an Air Gap?: https://www.ibm.com/think/topics/air-gap

• DORA: https://dora.dev/

• Quantifying the impact of developer experience: https://developer.microsoft.com/en-us/developer-experience

• A new way to measure developer productivity – from the creators of DORA and SPACE: https://newsletter.pragmaticengineer.com/p/developer-productivity-a-new-framework

• Ciera Jaspan on LinkedIn: https://www.linkedin.com/in/ciera/

• Emerson Murphy-Hill on LinkedIn: https://www.linkedin.com/in/captainemerson/

• Inside Stripe’s Engineering Culture - Part 1: https://newsletter.pragmaticengineer.com/p/stripe

• Inside Stripe’s Engineering Culture: Part 2: https://newsletter.pragmaticengineer.com/p/stripe-part-2

• David Singleton on LinkedIn: https://www.linkedin.com/in/davidpsingleton/

• Courtney Kissler on LinkedIn: https://www.linkedin.com/in/courtney-kissler/

• Brian Houck on LinkedIn: https://www.linkedin.com/in/brianhouck/

• Brook’s law: https://en.wikipedia.org/wiki/Brooks%27s_law

• Satya Nadella on LinkedIn: https://www.linkedin.com/in/satyanadella/

• Steve Ballmer on LinkedIn: https://www.linkedin.com/in/steve-ballmer-7087a8157/

• John Shook: https://www.lean.org/about-lei/senior-advisors-staff/john-shook/

• Does GitHub Copilot improve code quality? Here’s what the data says: https://github.blog/news-insights/research/does-github-copilot-improve-code-quality-heres-what-the-data-says/

• Two developers built a game that sold 1M copies. How?

• Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming: https://www.microsoft.com/en-us/research/publication/reading-between-the-lines-modeling-user-behavior-and-costs-in-ai-assisted-programming/

• Abi Noda on LinkedIn: https://www.linkedin.com/in/abinoda/

• Jason Entenmann on LinkedIn: https://www.linkedin.com/in/jason-entenmann-06146875/

• Claude: https://claude.ai/new

• Anthropic: https://www.anthropic.com/

• OpenAI: https://openai.com/

• Sonnet: https://www.anthropic.com/news/claude-3-5-sonnet

• Inspired: How to Create Tech Products Customers Love: https://www.amazon.com/INSPIRED-Create-Tech-Products-Customers/dp/1119387507

• Outlive: The Science and Art of Longevity: https://www.amazon.com/Outlive-Longevity-Peter-Attia-MD/dp/0593236599/r

• Ender’s Game: https://www.amazon.com/Enders-Game-Ender-Quintet-1/dp/1250773024

• Tressie McMillan Cotton’s website: https://tressiemc.com/

• Anne Helen Petersen’s newsletter: https://substack.com/@annehelen

• Can You Really Measure Individual Developer Productivity? - Ask the EM: https://blog.pragmaticengineer.com/can-you-measure-developer-productivity/

• Measuring Developer Productivity: Real-World Examples: https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity-bae

• Measuring developer productivity? A response to McKinsey: https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity

• Measuring developer productivity? A response to McKinsey, Part 2: https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity-part-2

• The Full Circle on Developer Productivity with Steve Yegge: https://newsletter.pragmaticengineer.com/p/steve-yegge

• Measuring Software Engineering Productivity: https://newsletter.pragmaticengineer.com/p/engineering-productivity

• Platform Teams and Developer Productivity with Adam Rogal, Dir. Developer Platform at DoorDash: https://newsletter.pragmaticengineer.com/p/platform-teams-with-adam-rogal

Production and marketing by Pen Name. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.


Talking Identity

Signing Off from Uniken: Thank You for the Adventure

It’s the end of an era: after eight incredible years, it felt like the right time to close the chapter on my story at Uniken. When I joined as CTO, Uniken was an ambitious startup with a unique core technology and a vision to make a global impact. What made the ensuing journey so rewarding […]

It’s the end of an era: after eight incredible years, it felt like the right time to close the chapter on my story at Uniken. When I joined as CTO, Uniken was an ambitious startup with a unique core technology and a vision to make a global impact. What made the ensuing journey so rewarding was the team – exceptionally talented, deeply committed, and relentlessly innovative. Together, we navigated complex challenges, scaled through uncertainty (including a global pandemic), and built Uniken into a global cybersecurity leader with customer engagements on every continent (except Antarctica, because sadly it would appear that penguins aren’t really concerned about digital trust). 

I am incredibly proud of the team, and the platform we built – one that empowers businesses to engage securely and seamlessly with their customers, and creates lasting value for our customers and partners.  I want to express my deepest gratitude to my colleagues and partners at Uniken. Our collaborative efforts transformed the company and positioned it for long-term success, and I look forward to seeing you scale new heights.

As for what’s next – that’s TBD. Digital identity has never been more critical, yet the challenges facing the industry have never been more complex. Let’s see what the next chapter brings.

Tuesday, 18. February 2025

The Pragmatic Engineer

State of the startup and scaleup hiring markets – as seen by recruiters

More startups and scaleups are hiring engineers and engineering leaders. Advice for tech professionals on how to get noticed by specialist recruitment companies – from two specialist recruiters

Last week, we looked into data from global hiring agency Indeed that found software development job listings are at a five-year low. To check how representative (or not) these numbers are of the tech jobs market, I pinged some tech recruiters. The response was not what I expected:

“I can only talk about our own niche from seed-stage to Series C engineering recruitment, but we’ve never had so many positions to fill as in 2024.” – Asher Hoffman, cofounder of Coastal Recruiting

“In 2024, every month was a step up in the number of job searches we did for CTO, VP of engineering, director of engineering and product leadership positions. It just kept going up. 2024 was our second-best year ever for the number of searches done: only 2021 was better, and that was an incredible year; the peak of the job market.” – Glenn Murphy, managing partner at Riviera Partners

These data points stick out and I was eager to learn more; after all, they’re bucking the trend in Indeed’s figures, with rising demand for tech professionals! I met Asher and Glenn to get their perspectives, and they also offered some tactics for engineers and engineering leaders to use when seeking new opportunities. Today, we cover:

What are boutique recruiters? Also, the differences between contingent and retained ones.

Hiring by tech stack at early-stage startups. Fifty percent of hires are backend, 25% fullstack, 10-15% AI/ML, and 5% are frontend. A big surprise is virtually no demand for native mobile developers, or for the .NET and Java tech stacks. Also, very few mentions of Angular or Vue.

More demand than before at Coastal and Riviera. Both recruitment businesses are seeing more engineering jobs to be filled coming their way.

Changes across the market. More money and demand from early-stage VC-funded startups, and private equity becoming more dominant in driving demand for engineering executive hiring.

Startups want ‘cracked’ engineers. These are professionals with a track record of ownership, curiosity, and the ability to “grind.” Length of experience doesn't matter after around the 2-year point.

Demand for engineering leadership positions. Data indicates that large parts of the job market are struggling more than the senior role segment, with director of engineering roles hardest to fill.

What search processes look like. Recruitment experts explain how a typical software engineer search and engineering executive search plays out.

For updates on how these two recruiters see the market, follow Asher Hoffman and Glenn Murphy on LinkedIn.

A quick programming note: there will be no The Pulse on Thursday — I’m taking a short winter break. More details in my publishing schedule.

1. What are boutique recruiters?

Coastal Recruiting and Riviera Partners are specialized recruitment agencies; also known as "boutique search”, “contingent”, “retained”, or “executive” recruiters.

“Boutique” and “specialized” refer to recruiters which focus on a niche within tech, such as:

Software engineer and AI+ML engineer hiring for early-stage startups (Coastal Recruitment)

Director-and-above engineering leadership, and product/design leadership for scaleups and larger companies (Riviera Partners)

Executive recruitment is a subset of boutique recruitment. These companies recruit for executive roles like director of engineering, VP of engineering, head of engineering, CTO roles, etc. In the US, high-profile firms can charge $100K or more to recruit a key position like CTO for a late-stage scaleup.

Executive recruitment firms can also focus on individual contributor (IC) roles. Glenn Murphy advises that this usually happens only in the US at their firm, and comes with tradeoffs. For example, the cost becomes justifiable only when the vacancy’s comp package is several times higher than the search cost; usually $500K and above. It is this niche area in which an expert executive recruiter can make the most impact by finding truly standout candidates.

Within the boutique search sector, there are two main types of recruiter:

Contingent recruiters

These companies work on a “no hire, no fee” basis, and only get paid for successfully filling a position; usually 10-40% of the role’s first-year salary. This service is an easy sell; it’s low risk financially for an employer to work with a contingent recruiter due to no-hire-no-fee. Contingent recruiters also often have more clients than retained ones do.

Coastal Recruiting mostly does contingent recruitment, with a couple of retained contracts. This company specializes in hiring software engineers from the first hire to engineer number 20 to 30 at early-stage startups. In 2024 they worked with around 100 different startups.

Retained recruiters

Most executive recruitment companies operate a retainer model and charge upfront, regardless of whether a hire is made. Payment comes in a fixed number of instalments, perhaps in this order:

Initial retainer: the first third of the fee, paid at the start.

Progress retainer: second payment, usually tied to milestones being hit, such as:

Presenting a shortlist of qualified candidates who are ready to engage

Completing screening interviews and drawing up an interview shortlist of preferred candidates

Final interviews scheduled or completed

Final payment: upon successful placement. Depending on the agreement, there may be no payment if a candidate isn’t hired

A search company may also be paid monthly while conducting a search. Total retainer fees are usually 25-50% of the value of the successful candidate’s comp, which increases the more challenging the search is. For example, it’s challenging to find a new CTO for a 300-person scaleup growing at a 10x pace, who is hands-on, has worked with AI / machine learning (ML) before, and who has experience at a hypergrowth company, and also in Big Tech. There aren’t too many people who fit this profile, and not many of them may be interested in switching jobs.

Guarantee periods can also be an important part of retained searches, under which a portion of the fee may be refunded should a successful candidate quit shortly afterwards.

Retained recruiters are a lot more expensive than contingent ones, and are the go-to choice for executive recruitment where vacancies may not even be publicly listed by an employer.

Riviera Partners is one of the best-known executive recruiter companies. They hire for director+ roles in engineering, product and design. In our deepdive Finding the next company to work at, they were the most recommended exec recruiter firm by engineering leaders.

Seed-stage and early-stage startups typically only use executive recruiters to hire for VP-and-above positions which report to the CEO. Later-stage startups also hire for director positions, and may pay for a search for senior-and-above hires and ICs. However, the cost means this is rare, happens mainly in the US, and is done mostly by funded scaleups with very specific needs.

Given this process can be so expensive, it may not be immediately obvious what the upside is. Glenn explains:

“We have 21 years worth of data and information on who has been successful, people’s interests and nuance that isn’t available on a LinkedIn profile.

We save companies a lot of time, in the end. This is due to the insights we have and our reach across the USA and Europe. We know what’s needed at Series A, all the way through to which profiles tend to succeed at publicly traded companies. We also have the know-how on how to interview people to assess if they’re up for the challenge.”

Riviera’s pricing is atypical, compared to many other retained recruiters. Glenn shares:

“We have time-based milestones as opposed to shortlists or incentives, which can be adjusted sometimes. However, for the most part, the fee is fully paid within 90 days.”

2. Hiring by tech stack at early-stage startups

At Coastal Recruiting, Asher hires early engineering teams at early-stage startups: from seed-stage startups to Series B and Series C companies. They work with some of the largest US VC investors (like a16z, Y Combinator, Sequoia, Index, Lightspeed, Founders Fund and others).

Types of engineers and tech stack

Asher sees the hiring demand split like this:

‘50% of hires we make are backend engineers. I don’t think this will always be the case for startup engineering. The first 3 hires for a company with technical cofounders are almost always backend engineers. It’s a bonus if they have some frontend skills, but I’d peg it at 80% backend, 20% frontend.

‘For backend tech stacks, these are the languages founders use, which candidates would be expected to pick up quickly. By popularity:

Typescript: easily the most common.

Python: surprisingly popular, usually with another language like Typescript or Go. I assume it’s because so much AI and ML work can be done with it, and it’s a generally useful language

Go: This has gotten somewhat more popular, over time

Rust: especially popular with crypto companies. About 15% of our portfolio is crypto, and Rust is very popular. We’re on the lookout for people with hands-on experience in Rust

Ruby: Out of around 100 clients in the past year, only 2 used Ruby; both of them are pretty big names. They told us “find us anyone who has touched Ruby or Ruby on Rails, and is good.”

Few to no mentions:

Flutter: we’ve gotten a single Flutter role in the past year

Elixir: once in a while, we have a startup that uses this and needs folks with experience with it. But it’s less common than Ruby.

Java / Scala: we don’t get any requests for this stack, at all. However, if someone has Java or Scala experience, and is open to working in Python, we usually move forward with them

C# / .NET: we’ve only come across a single hardware company asking for this in several years.

PHP: almost zero PHP these days

Haskell: it’s been a couple years since any company asked for this.

‘25% are fullstack. These roles almost exclusively employ the following technologies:

React for the frontend. React frameworks like Next or React Router are usually relevant. Overall, 95% of framework mentions are for React.

Typescript as the programming language. Of course, JavaScript is a given

Node as the backend framework

Python is often mentioned

React Native is by far the most popular mobile technology we see, often requested by mobile-first startups that build their app with it.

Angular / Vue: a combined 5% or less.

‘AI and ML is 10-15%,” says Asher. “This hire is almost always one of the first 10, and doesn’t need to be an “in-depth” ML specialist. They usually fit this profile:

Backend generalist

Has worked in an ML or AI-driven company

Is deep into what is going on in AI and how to use LLMs, practically

Hacked AI side-projects themselves. They probably hack on the side, and might use things like LangChain, LangGraph, and other popular LLM tools to play around with.

‘Someone we recently hired for such a role had worked for a year as an AI/ML engineer, not an in-depth, “research-caliber” ML candidate.

‘As startups grow and mature, eventually they tend to hire more in-depth ML and AI people. However, early-stage companies look for builders who can use AI tools effectively.

‘5% are frontend engineers. Almost universally, the seventh or eighth hire at a startup is the first frontend engineer. The reason almost always is – and I kid you not! – that the founder is fed up with owning the frontend themself, and is ready to bring in a frontend specialist.

‘Ironically, this role is one of the hardest to fill because technical founders whom we work with typically expect two things:

Be a great frontend person – this is easier to find

Be great at computer science basics – such as algorithms and CS fundamentals. This is why most people who pass #1 get rejected!

‘Another reason this role is so hard to fill is that the founder often looks for a staff-level frontend engineer. This person might be the first frontend hire, but they ideally want someone who will head up the frontend team, as it grows. It’s difficult to convince people with staff-plus frontend jobs at stable companies and who head up teams, to come and join an early-stage startup where they would be the only frontend person – at least at first!

‘The best startups are embracing AI – and look for devs who do the same. AI coding tools like Cursor and GitHub Copilot are used by all the best startups we recruit for. Founders at these startups often mention that mastering these tools can 10x the productivity of their engineers with faster iteration and the ability to parallelize work.

‘We don’t see demand for native mobile engineers at all; only React Native. We work with a few “mobile-first” startups who have built impressive mobile apps. All use React Native and look to hire for such experience.

‘I would assume native hires might become more important after the Series B or Series C rounds. In the early-stage in the US, they are absent – or at least, we see no such demand.’

Startups don’t care that much if you’ve used their exact stack before. Asher told me:

“I hear over and over again, that a great engineer is a great engineer - and if you don't have experience in the company's current stack - as a talented engineer, you shouldn't have much trouble picking it up quickly. Thus - if your stack doesn't align perfectly with the above most used, it's okay - you can still get the role!”

3. More demand than before at Coastal and Riviera

Both recruitment companies are seeing record, or near-record business in terms of the number of searches executed.

More demand from early-stage, VC-funded startups. Asher says:

“In my niche, I’m seeing more hiring than in recent memory.

I'm seeing many “cracked” engineers with 5+ offers on-hand, and well-funded startups forced to compete by upping their offers to hire the best of the best. I’m seeing higher total comp packages in the early stage space than ever.”

As we’ll detail below, “cracked” refers to someone with a track record of ownership, who displays curiosity and the ability to grind when needed.

Asher: “It feels like this is a great time for many folks to leave Big Tech and take a gamble – but not too big a pay cut. Big Tech has done many layoffs and the startups I work with offer similar pay, with a massive equity upside.

The risks an engineer used to have to take when joining a startup are shrinking in this new market. Well-funded startups are meeting engineers with competitive salaries, health and wellness, and equity. Also, founders are being much more upfront with recruits about their ARR (annual recurring revenue) and progress towards PMF (product-market-fit) than before.

A disclaimer: I recruit in just the VC-funded, early-stage start-up space, with startups founded by technical founders and backed by well-known VC firms. These are companies that have the money to hire, and a bold vision to build a billion-dollar company. It’s a narrow view of the market.”

Will more companies increase recruitment? At Riviera partners, Glenn Murphy does engineering leadership searches for startups and scaleups. He shares:

“We signed up 58% more searches in 2024 than in 2023 for the US. In Europe, the growth was 61%. The most common search assignment in both continents was for VP Engineering. It feels to me this alone signals a return to a focus on growing teams, as VPs of engineering are hired in preparation to scale up the team’s headcount.”

4. Changes across the market

I asked both Asher and Glenn what changes have been surprising in their part of the tech hiring market.

Read more

Monday, 17. February 2025

Damien Bod

Customizing a single client sign-in using parameters in Duende IdentityServer

This post looks at customizing the sign-in UI and the sign-in options in an ASP.NET Core application using Duende IdentityServer and ASP.NET Core Identity. There are multiple ways of changing the look and feel of the UI for different OpenID Connect clients or different client flows. In the previous post, the UI was customized per […]

This post looks at customizing the sign-in UI and the sign-in options in an ASP.NET Core application using Duende IdentityServer and ASP.NET Core Identity. There are multiple ways of changing the look and feel of the UI for different OpenID Connect clients or different client flows. In the previous post, the UI was customized per client, this post customizes inside a single client.

Code: https://github.com/damienbod/duende-multi-tenant

Blogs in the series Multiple client sign-in customizations using Duende identity provider Customizing a single client sign-in using parameters in Duende IdentityServer Setup

The solution is setup using three different ASP.NET Core applications. In the example code, the “admin” application has different federation authentication options compared to the “shop” client authentication sign-in experience. The client ID from the authentication context is used to customize the look and feel, i.e. the styles, the layout and the options of the client are used to define which federation and authentication options are possible. The shop client can be further customized using authentication parameters sent in the OpenID Connect redirect.

OIDC client implementation

In ASP.NET Core, the OpenID Connect flow implementation provides multiple events which can be changed or extended. The OnRedirectToIdentityProvider event can be used to send custom parameters to the OpenID Connect server. The OAuth Pushed Authorization Request on top of OpenID Connect Core is used by default in .NET 9, if the OpenID Connect server supports it.

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    // other options ...
    options.Events = new OpenIdConnectEvents
    {
        OnRedirectToIdentityProvider = context =>
        {
            context.ProtocolMessage.SetParameter("showadminsignin", "false");
            return Task.FromResult(0);
        }
    };
});

Duende sign-in customization

In the previous post, a new sign-in UI was created for each client. The shop client has further customization. In this demo, the admin external provider can be hidden or displayed depending on what the client requests. The UI is implemented using ASP.NET Core Razor pages and a BindProperty is used for this.

[BindProperty]
public bool ShowAdminSignIn { get; set; } = true;

When processing the authentication using Duende and ASP.NET Core Identity, the GetAuthorizationContextAsync method can be used to get the parameters requested by the client.

var context = await _interaction.GetAuthorizationContextAsync(returnUrl);
ShowAdminSignIn = !(context?.Parameters["showadminsignin"] == "false");

The UI can implement any display logic using the client parameters sent in the request. In this demo, the admin UI is hidden or displayed using the client request data.

@foreach (var provider in Model.View.VisibleExternalProviders)
{
    if (!(!Model.ShowAdminSignIn && (provider.AuthenticationScheme == "AdminEntraID")))
    {
        <li class="list-inline-item">
            <a class="btn btn-secondary"
               asp-page="/ExternalLogin/Challenge"
               asp-route-scheme="@provider.AuthenticationScheme"
               asp-route-returnUrl="@Model.Input.ReturnUrl">
                @provider.DisplayName
            </a>
        </li>
    }
}

Notes

This is an easy approach to implement UI customization on a per-client basis. This is UI logic, not authorization. The different options are just hidden or removed from the UI using the client parameters; do not rely on this if hiding the option is a security requirement. This is not authorization.

Links

https://docs.duendesoftware.com/identityserver/v7

https://docs.duendesoftware.com/identityserver/v7/ui/federation/

https://learn.microsoft.com/en-us/aspnet/core/razor-pages


@_Nat Zone

March 6: I will be moderating the FIN/SUM symposium panel "Digital Authentication and Digital ID: The Future of Daily Life, Work, and Society"

As every year, I will be at FIN/SUM again this year, moderating a panel of truly distinguished panelists. It is the FIN/SUM symposium panel hosted by Nikkei. 17:40-18:30 Panel: Digital Authentication and Digital ID: The Future of Daily Life, Work, and Society. Session overview. Date: March 6…

As every year, I will be at FIN/SUM again this year, moderating a panel of truly distinguished panelists. It is the FIN/SUM symposium panel hosted by Nikkei.

17:40-18:30 Panel: Digital Authentication and Digital ID: The Future of Daily Life, Work, and Society

Session overview
Date and time: Thursday, March 6, 17:40-18:30
Venue: Marunouchi Building Conference Hall (HALL A: Marubiru Hall, 7F)
Title: Digital Authentication and Digital ID: The Future of Daily Life, Work, and Society

In a digital society where the advent of generative AI is driving a sharp rise in crimes such as unauthorized access and impersonation fraud, digital authentication and digital ID aim to realize a better digital society through a variety of measures while preventing these risks. From financial services to the shape of government, ways of working, and daily life, the panel explores what an optimized digital society of the future looks like.

Speakers (honorifics omitted)

山廣弘佳, Team Leader, Information Innovation Business Division, Dai Nippon Printing (DNP)
日下光, CEO, xID
林達也, Identity Architect, Digital Agency
間下公照, Head of the Market Research Office, Innovation Division, JCB
崎村夏彦 (Natsuhiko Sakimura), Chairman, OpenID Foundation (moderator)

The session immediately following this one is also well worth watching, so please catch it as well.

18:40-19:30 Candid Talk with the Government and the Bank of Japan: Money and Fintech in a New Era

Session overview
Date and time: Thursday, March 6, 18:40-19:30
Venue: Marunouchi Building Conference Hall (HALL A: Marubiru Hall, 7F)
Title: Candid Talk with the Government and the Bank of Japan: Money and Fintech in a New Era

Frontline leaders from the Ministry of Finance, the Bank of Japan, and the Financial Services Agency discuss the future of digital finance, centered on technology x money.

Speakers (honorifics omitted)

鳩貝淳一郎, Bank of Japan, Payment and Settlement Systems Department, Deputy Head of the FinTech Center and Head of the Digital Currency Verification Group
津田夏樹, Ministry of Finance, Financial Bureau, Treasury Division, Director and Digital Currency Planning Officer
牛田遼介, Chief Fintech Officer, Financial Services Agency
楠正憲, Director-General, Digital Agency

Registration

Please register via the official website.

Other sessions

The rest of the program is as follows.



An announcement from the FSA: for everyone thinking "there are so many events, how am I supposed to navigate Japan Fintech Week?", the official JFW web app is now available! Check and manage event schedules: see the schedule of all related events at a glance and add them to your calendar (event information will be updated continuously!). Access each event's registration page: jump straight to the registration page of each event and sign up smoothly. Networking: a single networking feature that lets you connect with attendees of any event! Event map: a map showing where each event takes place at a glance. I think the app is essential for getting the most out of JFW, so please do sign up.

The official JFW app is available here

#JFW2025

Friday, 14. February 2025

Bill Wendels Real Estate Cafe

Multiply Tiny House Love Story with Million Backyard March

After a post about a college that’s teaching students how to build their own tiny houses for just $5,000 drew a lot of attention, decided… The post Multiply Tiny House Love Story with Million Backyard March first appeared on Real Estate Cafe.

After a post about a college that’s teaching students how to build their own tiny houses for just $5,000 drew a lot of attention, decided…

The post Multiply Tiny House Love Story with Million Backyard March first appeared on Real Estate Cafe.

Thursday, 13. February 2025

The Pragmatic Engineer

The Pulse #124: Software engineering job openings at five-year low?

Data from Indeed shows software developer job openings have seen the biggest decline in five years across the US. What’s the cause, and what’s next? Also: OpenAI responds to DeepSeek, and more.

The Pulse is a series covering insights, patterns, and trends within Big Tech and startups. Notice an interesting event or trend? Send me a message.

Today, we cover:

Industry pulse. OpenAI struggles to shed nonprofit status, Anthropic bans AI from job applications, cuts at Workday, GitHub Copilot releases an agentic mode, and Linus Torvalds tells Linux contributors that social media pressure is bad for development.

Software engineering job openings hit five-year low? There are 35% fewer software developer job listings on Indeed today than five years ago. Compared to other industries, job listings for software engineers grew much more in 2021-2022, but have declined much faster since. A look into possible reasons for this, and what could come next.

OpenAI responds to DeepSeek threat. Fresh off the shock of a relatively unknown Chinese company dethroning ChatGPT as the most-downloaded AI app on iPhone and Android, OpenAI has shaken itself up, promptly copied DeepSeek’s visualization of how its reasoning model “thinks”, and has shipped Deep Research.

A glimpse from the section “Software engineering job openings hit five-year low?”

OpenAI struggles to shed nonprofit status

Yet more drama at OpenAI, where cofounder Elon Musk has made a $97 billion bid to take over the OpenAI nonprofit. A few details to unpack:

Yes, Musk really is one of the several cofounders of OpenAI. In 2015 it was he, Sam Altman, and 9 others (Greg Brockman, Ilya Sutskever, John Schulman, Wojciech Zaremba, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, and Pamela Vagata)

Musk contributed around $45M to OpenAI as a donor, as confirmed by OpenAI.

In 2017, things got messy and Musk eventually left OpenAI. This was the year when OpenAI decided to create a for-profit entity, controlled by the nonprofit. As shared by OpenAI, Musk wanted control over the for-profit part, and then later wanted to merge it into Tesla.

Musk created rival X.AI in 2023, and this company has raised $6B of funding.

OpenAI is worth a lot more than $97B; its latest valuation was at $157B in October, and there are rumors of a new raise at or above $300B!

Here’s where OpenAI’s tricky corporate structure comes into play. OpenAI published a diagram showcasing the structure, but leaving out ownership stakes. Here’s my interpretation, focusing on ownership stakes and control:

OpenAI’s ownership structure. It’s messy.

Musk is offering to buy the nonprofit entity, not the for-profit one. Musk’s offer at $97B for a 51% controlling share values OpenAI at around $195B, which is higher than OpenAI’s last valuation of $157B. The company’s board has a fiduciary duty to consider the offer. However, OpenAI is officially a non-profit, so this duty may not apply.

It’s likely that this episode’s main impact will be to highlight the ridiculous, convoluted corporate structure of the world’s most valuable private company. I wonder if investors may opt against investing more until this structure is tidied up, with the for-profit entity controlled by investors rather than the nonprofit, and with investors receiving an ownership stake as well.

Anthropic bans AI from job applications

Read more


@_Nat Zone

February 27 [Special Dialogue] Nem ✕ Natsuhiko Sakimura – Identity in the Age of AI and the Metaverse [Digital Space Conference 2025]

"Nem ✕ Natsuhiko Sakimura – Identity in the Age of AI and the Metaverse." On February 27, at a side event of Digital Space Conference 2025, I will hold a special dialogue with the VTuber who last year, in the "Top 10 VTubers Who Shone Brightest This Year" [MoguLive VTuber Award 2024], was selected alongside 儒烏風…
"Nem ✕ Natsuhiko Sakimura – Identity in the Age of AI and the Metaverse"

On February 27, at a side event of Digital Space Conference 2025, I will hold a special dialogue with Virtual Girl Nem (バーチャル美少女ねむ), who last year was selected, tied for fourth place with Juufuutei Raden (儒烏風亭らでん), in the "Top 10 VTubers Who Shone Brightest This Year" [MoguLive VTuber Award 2024]. The title: "Nem ✕ Natsuhiko Sakimura – Identity in the Age of AI and the Metaverse".

Where is the ID/identity of a new humanity heading on an internet being transformed by the development of AI and the metaverse!? Don't miss it.

You can watch it at the Digital Space Conference 2025 venue, and it will also be live-streamed on YouTube Live. Please come along.

Date and time

Thursday, February 27, 2025, 11:15-11:45

Venue

Held as a hybrid event, both in person and online!

[In-person venue]
Digital Space Conference 2025 lecture hall (Tokyo Midtown Hall, Hall A)

Speakers: ■ Virtual Girl Nem (バーチャル美少女ねむ) (VTuber / author)

Active as a "metaverse native" living in the early virtual worlds and as a "metaverse culture evangelist" conveying their culture. The self-described world's oldest individual VTuber, she has been active since 2017 as a bishōjo idol under the theme "become who you want to be in the virtual world". Despite openly using a voice changer, she made her singing debut with the original song "Kokoro Cosplay". Her book "Metaverse Evolution Theory" (メタバース進化論, 2022, Gijutsu-Hyohron-sha), which examines the revolutionary nature of the metaverse, won the Grand Prize in the business book category of the "IT Engineers' Book Award 2023". She also spoke at the United Nations conference "IGF Kyoto 2023". In recognition of her contribution to avatar culture, she received the "Avatar Award 2022 Special Achievement Award" from the VRM Consortium, only the second recipient in history after Kizuna AI.
X : https://x.com/nemchan_nel

■ Natsuhiko Sakimura (Representative, NAT Consulting LLC / Chairman, OpenID Foundation)

Specializes in the international standardization of digital identity and privacy. Author of a series of related international standards used by more than three billion people worldwide, as well as the book "Digital Identity" (2021, Nikkei BP). Chairman of the US OpenID Foundation since 2011, Chairman of MyData Japan since 2019, and Digital Special Advisor to the Japan Fair Trade Commission since 2021. Chair of the ISO/IEC JTC 1/SC 27 expert committee (information security, cybersecurity and privacy protection: identity management and privacy technologies). Member of the OECD Internet Technical Advisory Committee. Participates in numerous government study groups, including the Ministry of Internal Affairs and Communications' "Study Group on Platforms" and the Digital Agency's "Expert Panel on Revising the Identity Verification Guidelines".
X : https://x.com/_nat

■ About Digital Space Conference 2025

Speakers working at the forefront of AI, Web3, and the metaverse candidly share what went on behind their successes and the lessons learned from their failures. Attendees take home insights and new discoveries drawn from concrete cases and apply them to their own businesses, co-creating the next generation together. The event offers a place not merely for acquiring knowledge, but for taking a step toward the future.
Official site: Digital Space Conference 2025
Press release: "Digital Space Conference 2025" – star speakers, exhibition details, and side events announced! (February 13, 2025, 15:30)

■ Japan Digital Space Economy Federation

Promotes economic activity in digital spaces and contributes to the sound development of the Japanese economy and the realization of prosperous lives for its citizens.
X : https://x.com/Dejikeiren

(References) Identity in the Metaverse (Nem x Mila, 2024); Social VR Lifestyle Survey 2023 (Nem x Mila, 2023); Digital Identity (2021)

Summary of the actual session: Identity in the Age of AI and the Metaverse – dialogue notes

Overview

Event: part of Digital Space Conference 2025
Date: February 27, 2025
Venue: Tokyo Midtown (in person) and YouTube live stream
Speakers: Virtual Girl Nem (metaverse resident, VR content creator), Natsuhiko Sakimura (Chairman, OpenID Foundation)
Theme: Identity in the age of AI and the metaverse

Main points

1. Current state and potential of the metaverse

The metaverse environment: immersive experiences using VR goggles, face trackers, and similar devices; activities free of physical constraints (changing size, disabling gravity, and so on); a study-like environment can be recreated in VR and used for all kinds of activities; adoption is accelerating with the arrival of low-priced headsets such as the Meta Quest 3s; it is likened to "a world where you can use magic" (teleportation, conjuring objects, and so on).
Technical advantages of VR: optically the eyes focus into the distance, so eye strain is low; creativity unconstrained by physical limits; multiple avatars allow different kinds of self-expression.
VR population in Japan: roughly doubled over the past year; the growth rate among Japanese users is high even by global standards.

2. Digital identity and multiple personas ("bunjin")

Avatar types and reasons for choosing them: human-type avatars are chosen by people who want immersion through closeness to reality; fantasy avatars by people who want traits that cannot be expressed in reality; over time, human-type avatars are declining and more diverse avatars are increasing.
Dual identity: about 57% of users deliberately separate their virtual identity from their real self. Reasons for separating: avoiding prejudice, wanting to keep the real and the virtual apart. Reasons for disclosing: growing one's network, wanting a connection between the real and the virtual.
The concept of a "bunjin (multi-persona) economy": an economic structure in which one person acts through multiple personas and identities. (Notably, ISO/IEC 24760-1, published in 2011, already defines identity and partial identity as synonyms, on the premise that people use different identities per context.) Currently about 13% regard their in-VR identity as their main one, and 27% hope it will become their main one in the future. Switching identities supports self-expression and has psychological benefits.

3. Challenges for digital identity and privacy

The role of the IdP (Identity Provider): standardization of online authentication (OpenID Connect and so on); appropriate identity management per context; the "do not mix" principle: different identities should be kept separate.
Current institutional challenges: economic activity under a virtual identity is difficult (tax filing, financial transactions, and so on); balancing privacy protection against identity verification; forcing a single, all-encompassing identity is the problem; the ideal is "using different identities according to the relationship in each context".
Directions for solutions: mechanisms that keep the information visible to each party to the necessary minimum; a scheme in which a business-specific identifier is passed to the National Tax Agency and converted there into a tax identity (whose identifier is the My Number) for processing; the possibility of automated handling by personal AI.

4. Japan's potential and outlook

Japan's advantages: low barriers to creative activity (dōjin culture and the like); high acceptance of VTuber culture and metaverse activity; high visibility of virtual personalities in the media.
Outlook: social implementation of mechanisms that are already technically feasible; the need to lobby legislators and government, which in turn requires the voice of voters; new forms of social participation in an aging society; new economic growth through a fundamental rethink of ID systems.

Conclusion

Multiple identities ("bunjin") in the metaverse are already technically possible, but implementation and legal development are needed to connect them to economic activity and social institutions. Japan has a strong affinity for VR culture and could take a leadership role in this field. Building an environment in which people can "live more as themselves" through the metaverse is the challenge ahead.

Summary by Nem on note

There is a summary by Nem at https://note.com/nemchan_nel/n/nbebac0953b30. It includes her recommended viewing highlights, and reactions on X are shown there as well, so please take a look.

Wednesday, 12. February 2025

The Pragmatic Engineer

Live streaming at world-record scale with Ashutosh Agrawal

In May 2023, a live streaming world record was set with 32 million concurrent viewers watching the final of the Indian Premier League cricket tournament. A chat with the architect behind this system
Stream the Latest Episode

Available now on YouTube, Apple and Spotify. See the episode transcript at the top of this page, and a summary at the bottom.

Brought to You By

WorkOS — The modern identity platform for B2B SaaS

CodeRabbit — Cut code review time and bugs in half. Use the code PRAGMATIC to get one month free.

Augment Code — AI coding assistant that pro engineering teams love

In This Episode

How do you architect a live streaming system to deal with more load than any similar system has dealt with before? Today, we hear from an architect of such a system: Ashutosh Agrawal, formerly Chief Architect of JioCinema (and currently Staff Software Engineer at Google DeepMind). In May 2023, JioCinema set the live-streaming world record, serving 32 million concurrent viewers tuning in to the finale of the Indian Premier League (a cricket game).

We take a deep dive into video streaming architecture, tackling the complexities of live streaming at scale (at tens of millions of parallel streams) and the challenges engineers face in delivering seamless experiences. We talk about the following topics:

• How large-scale live streaming architectures are designed

• Tradeoffs in optimizing performance

• Early warning signs of streaming failures and how to detect them

• Why capacity planning for streaming is SO difficult

• The technical hurdles of streaming in APAC regions

• Why Ashutosh hates APMs (Application Performance Management systems)

• Ashutosh’s advice for those looking to improve their systems design expertise

• And much more!

Takeaways

My biggest takeaways from this episode:

1. The architecture behind live streaming systems is surprisingly logical. In the episode, Ashutosh explains how the live streaming system works, starting from the physical cameras on-site, through the production control room (PCR), streams being sliced-and-diced, and the HLS protocol (HTTP Live Streaming) used.

2. There are a LOT of tradeoffs you can play with when live streaming! The tradeoffs between server load, latency, server resources vs client caching are hard decisions to make. Want to reduce the server load? Serve longer chunks to clients, resulting in fewer requests per minute, per client… at the expense of clients potentially lagging more behind. This is just one of many possible decisions to make.

3. At massive video streaming scale, capacity planning can start a year ahead! It was surprising to hear how Ashutosh had to convince telecoms and data centers to invest more in their server infrastructure, so they could handle the load come peak viewership, months later. This kind of challenge will be nonexistent for most of us engineers. Still, it’s interesting to consider that when you are serving at a scale that’s not been done before, you need to worry about the underlying infra!

4. “Game day” is such a neat load testing concept. The team at Jio would simulate “game day” load months before the event. They did tell teams when the load test would start, but did not share anything else! Preparing for a “game day” test is a lot of work, but it can pay off by finding parts of the system that buckle under extreme load.

The Pragmatic Engineer deepdives relevant for this episode

Software architect archetypes

Engineering leadership skill set overlaps

Software architecture with Grady Booch

Timestamps

(00:00) Intro

(01:28) The world record-breaking live stream and how support works with live events

(05:57) An overview of streaming architecture

(21:48) The differences between internet streaming and traditional television

(22:26) How adaptive bitrate streaming works

(25:30) How throttling works on the mobile tower side

(27:46) Leading indicators of streaming problems and the data visualization needed

(31:03) How metrics are set

(33:38) Best practices for capacity planning

(35:50) Which resources are planned for in capacity planning

(37:10) How streaming services plan for future live events with vendors

(41:01) APAC specific challenges

(44:48) Horizontal scaling vs. vertical scaling

(46:10) Why auto-scaling doesn’t work

(47:30) Concurrency: the golden metric to scale against

(48:17) User journeys that cause problems

(49:59) Recommendations for learning more about video streaming

(51:11) How Ashutosh learned on the job

(55:21) Advice for engineers who would like to get better at systems

(1:00:10) Rapid fire round

A summary of the conversation The Live Streaming Pipeline

The journey of a live stream starts with the cameras at the event’s venue. These cameras are connected by fiber to a Production Control Room (PCR).

In the PCR, a director selects which camera feeds to use, much like in a movie production.

Source feed (or production feed) is then sent to a contribution encoder. This encoder compresses the high-bandwidth source feed to a more manageable size.

The compressed feed is then transmitted to the cloud using a private peer-to-peer link.

Distribution encoder: prepares the stream in various formats for end-user consumption, such as HLS and DASH.

Over 100 stream variants can be generated for various devices – and up to 500 (!) when different languages are included.

Orchestrator: the one managing the pipeline, from the contribution encoding to the cloud infrastructure. The orchestrator decides which endpoints to push to, the configuration of the distribution encoder, and the CDN endpoints.

Playback URLs: generated by the orchestrator. URLs are specific to the device and format being used.

When a user clicks play, a separate playback system takes over. This system verifies user authorization, deals with encryption, and handles Digital Rights Management (DRM). The playback system then provides the client app with an encrypted URL to stream the content.

Live streaming systems are more complex than Video on Demand (VOD) systems because of the need to manage multiple real-time streams and user authentication and authorization for those streams, all while keeping latency low.

Content Delivery

Content delivery relies on Content Delivery Networks (CDNs).

The core technology used is HLS or DASH, where the video is broken down into segments.

HLS uses a master manifest file (e.g., master.m3u8) that lists different video quality levels. Each quality level refers to a child manifest.

Child manifests list individual video segments. These segments are typically between four to six seconds long.

The client player requests a child manifest every segment duration and the segments that it lists.
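To make the manifest structure concrete, here is an illustrative pair of HLS playlists (not taken from JioCinema); the filenames, bitrates, and sequence numbers are made up for the example.

master.m3u8 (lists the available quality levels):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=426x240
240p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/playlist.m3u8

720p/playlist.m3u8 (a child manifest, re-requested roughly once per segment duration):

#EXTM3U
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:1384
#EXTINF:4.0,
segment_1384.ts
#EXTINF:4.0,
segment_1385.ts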

CDN: works at the segment level rather than at a millisecond level.

Correctly setting up CDN configurations, such as the Time To Live (TTL) values for the cached segments, is crucial to ensure a smooth stream without stale data.

Latency is introduced at various stages of the live-streaming process. This includes encoding, network transmission, and client-side buffering.

Encoding techniques: a look-back period, or Group of Pictures (GOP), is used to achieve more efficient compression. The GOP might be 1, 2, or 4 seconds long.

Client-side buffering is used to give a smoother streaming experience, even if there are small network issues. This means the user might be watching the stream a few seconds behind the real-time live point.

There are trade-offs between latency, smooth playback, and infrastructure demands. Reducing the segment duration increases calls to the CDN, impacting infrastructure needs.
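A quick back-of-the-envelope illustration of that trade-off at record viewership; the numbers below are illustrative, not from the episode.

concurrent_viewers = 32_000_000

for segment_seconds in (2, 4, 6):
    # Each client re-requests the child manifest (and typically fetches one new segment) once per segment duration,
    # so longer segments mean fewer requests per second, at the cost of clients sitting further behind the live point.
    manifest_requests_per_second = concurrent_viewers / segment_seconds
    print(f"{segment_seconds}s segments -> ~{manifest_requests_per_second:,.0f} manifest requests/second")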

Adaptive bitrate streaming is used to adjust the video quality in response to the user's network conditions.

The client-side player measures the download speed and chooses an appropriate video quality level, matching the user's network capacity.

If the network speed slows down, the client can switch to a lower-quality video (e.g., from 720p to 240p).
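A minimal sketch of the client-side selection logic described above, assuming a hypothetical player that measures recent download throughput; real players (hls.js, ExoPlayer, and so on) use more sophisticated heuristics.

# Hypothetical quality ladder parsed from the master manifest: (name, required bits per second).
QUALITY_LADDER = [
    ("240p", 800_000),
    ("480p", 1_400_000),
    ("720p", 2_800_000),
    ("1080p", 5_000_000),
]

def pick_variant(measured_bps: float, headroom: float = 0.8) -> str:
    """Pick the highest rendition whose bandwidth fits within the measured
    throughput, keeping some headroom so small dips don't cause rebuffering."""
    budget = measured_bps * headroom
    chosen = QUALITY_LADDER[0][0]  # always fall back to the lowest rendition
    for name, required_bps in QUALITY_LADDER:
        if required_bps <= budget:
            chosen = name
    return chosen

print(pick_variant(3_500_000))  # -> "720p"
print(pick_variant(900_000))    # -> "240p"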

The server can also degrade the user's stream by limiting the number of available video quality options, for example during very high load. The server can also adjust the segment length in response to system load.

The client player is always starting playback a few seconds behind the live point, to avoid any interruption in playback if a segment is missed.

If a segment is missed on a TV, the TV will continue playing at the live point. However, on the internet, the client is using a buffer and will try to avoid missing a segment.

Monitoring, Metrics, and Scaling

Monitoring is based on leading and trailing indicators.

Leading indicators help to identify potential problems in realtime. Examples include buffer time and playback failure rates. These leading indicator metrics are given priority in the system.

Trailing indicators are used to perform a detailed analysis of issues after they occur.

Client-side metrics are collected and quickly processed by the server in less than a minute or sometimes within 30 seconds.

Server-side metrics, such as bandwidth, the number of requests, and latency, are also tracked.

The frequency of data collection is adjusted based on the system load. When there is higher traffic, the amount of data collected is sampled to manage the volume of data collected and processed.
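A sketch of what load-based sampling of client metrics could look like; the thresholds, event types, and function names are invented for illustration, and leading-indicator events are kept unsampled in line with the priority described above.

import random

def sample_rate_for(concurrent_viewers: int) -> float:
    """Keep all events at low load, progressively fewer as load grows."""
    if concurrent_viewers < 100_000:
        return 1.0
    if concurrent_viewers < 1_000_000:
        return 0.1
    return 0.01

def should_keep(event: dict, concurrent_viewers: int) -> bool:
    # Leading-indicator events (playback failures, long buffering) are always kept;
    # routine heartbeat events are sampled.
    if event.get("type") in ("playback_failure", "buffer_stall"):
        return True
    return random.random() < sample_rate_for(concurrent_viewers)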

Capacity planning is a very complex process involving infrastructure, network, and power; it starts at the end of the prior year, planning for the year ahead.

Capacity planning involves coordination with several infra providers to make sure they can scale their infrastructure for the events.

The planning focuses on metrics such as compute, RAM, disk, and network usage. The main metric that becomes the limiting factor is vCPUs.

Cloud resources are not infinite at the scale required for major live events. There is a finite amount of resources in a given location – at this scale of streaming, that is!

Providers need to purchase real estate, install links, and deploy servers.

Horizontal scaling is preferred for compute resources as it is easy to add boxes to the pool.

Databases and caches are scaled preemptively to avoid the need to scale them on the fly during events.

Auto-scaling is not effective for live events because it is too slow to respond to the rapid changes in traffic. Custom scaling systems are preferred.

The custom scaling system uses a concurrency metric (the number of users watching the stream) to scale services. All systems are scaled against this common concurrency metric.
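A minimal sketch of scaling every service against the shared concurrency metric; the service names and viewers-per-instance ratios are invented for illustration.

import math

# Hypothetical capacity ratios: how many concurrent viewers one instance of each service can support.
VIEWERS_PER_INSTANCE = {
    "playback-api": 50_000,
    "manifest-service": 100_000,
    "home-page-api": 150_000,
}

def desired_instances(concurrent_viewers: int, buffer_factor: float = 1.3) -> dict:
    """Compute target instance counts from the single concurrency metric,
    with a buffer so capacity is provisioned ahead of the traffic curve."""
    target = {}
    for service, capacity in VIEWERS_PER_INSTANCE.items():
        target[service] = max(1, math.ceil(concurrent_viewers * buffer_factor / capacity))
    return target

print(desired_instances(32_000_000))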

The custom scaler also looks at user journeys, such as when users press the back button and return to the home page. This can cause a spike in traffic to the home page API.

APAC-specific live streaming challenges

Mobility is a significant challenge because most users in India watch live streams on mobile devices and are often on the move. This means that users are constantly switching between cell towers.

Battery consumption is also a key factor. Video streaming can quickly drain mobile phone batteries.

The video profile, polling frequency, and encoding algorithms are often chosen to reduce battery consumption.

“Game day simulation”: something Jio did to simulate peak load conditions.

It involved generating synthetic traffic; the teams needed to scale systems and follow operational protocols in response.

The teams did not have access to the traffic dashboard, so the traffic patterns were unknown to them.

Advice for engineers to become better architects

Understand this: anything that can fail, will fail. Overconfidence in systems can lead to problems! Most people underestimate or overlook the potential failure modes.

Look at every aspect of your system including configurations and code as even the smallest things can cause problems.

Detailed metrics and measurements are vital. Both to see potential problems and to be able to debug effectively.

Ensure you are measuring metrics correctly. For example, response time should be measured from when the request is queued, not when it enters the processing function.

Do not rely too heavily on APMs. It is better to understand the low-level details and measure and fine-tune every aspect of your code.

To learn more about video encoding: look up documentation on GitHub and online. Look for resources going into how image compression is done, and how images are turned into video.

Most of the learning happens on the job. There isn't a lot of public information about problems at this kind of scale! Hopefully, this podcast was helpful in sharing more details!

Resources & Mentions

Where to find Ashutosh Agrawal:

• X: https://x.com/theprogrammerin

• LinkedIn: https://www.linkedin.com/in/theprogrammerin/

• Medium: https://medium.com/@theprogrammerin

Mentions during the episode:

• Disney+ Hotstar: https://www.hotstar.com/in

• What is a CDN: https://aws.amazon.com/what-is/cdn/

• Adaptive bitrate streaming: https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming

• Skype: https://www.skype.com/en/

• Millions Scale Simulations: https://blog.hotstar.com/millons-scale-simulations-1602befe1ce5

• Black Friday: https://en.wikipedia.org/wiki/Black_Friday_(shopping)

• Asia-Pacific (APAC): https://en.wikipedia.org/wiki/Asia%E2%80%93Pacific

• Distributed architecture concepts I learned while building a large payments system: https://blog.pragmaticengineer.com/distributed-architecture-concepts-i-have-learned-while-building-payments-systems/

• Concurrency: https://web.mit.edu/6.005/www/fa14/classes/17-concurrency/

• Video streaming resources on Github: https://github.com/leandromoreira/digital_video_introduction

• Murphy’s Law: https://en.wikipedia.org/wiki/Murphy%27s_Law_(disambiguation)

• Java: https://www.java.com/

• Ruby: https://www.ruby-lang.org/en/

• Ruby on Rails: https://rubyonrails.org/

• Hacker News: https://news.ycombinator.com/

Production and marketing by Pen Name. For inquiries about sponsoring the podcast, email podcast@pragmaticengineer.com.


Jon Udell

Web Components

The JavaScript industrial complex won’t crumble anytime soon. But the stage is set for a return to an ecosystem of reusable components accessible to business developers, only this time based on the universal web platform and its core standards. How To Build Web Components Using ChatGPT LLM series at The New Stack

The JavaScript industrial complex won’t crumble anytime soon. But the stage is set for a return to an ecosystem of reusable components accessible to business developers, only this time based on the universal web platform and its core standards.

How To Build Web Components Using ChatGPT

LLM series at The New Stack

Tuesday, 11. February 2025

Phil Windleys Technometria

The Business Case for Dynamic Authorization

Authorization isn't just about security.

Access management is seen by many business leaders as primarily a means of protecting and securing computer systems. Important, but of secondary importance to the business. But as computer systems come to intermediate almost every interaction the business has with employees, contractors, partners, and customers, dynamic authorization should be seen as a strategic business enabler. Investing in a robust, flexible, and pervasive authorization infrastructure can drive revenue growth, operational efficiency, regulatory compliance, and competitive differentiation.

Reducing Operational Costs

Manually managing access using rigid, static authorization models like ACLs and groups is labor-intensive and prone to errors. Organizations that rely on static methods often have employees who are dedicated to managing permissions for employees and others. These employees also perform manual audits, track down problems, and manage groups. As the organization grows, these processes become more complex—scaling superlinearly due to interactions in the system.

Dynamic authorization automates many of these access control decisions, reducing the need for manual intervention. This has several benefits:

Lower administrative overhead—eliminating the need for manually managing permissions and groups reduces administrative costs.

Reduced risk of over-permissioning—accounts with permissions they no longer need are the source of many costly security breaches.

Reduced security insurance premiums—many organizations buy costly insurance for security breaches and ransomware. Better authorization practices and systems can reduce premiums.

Fewer support tickets for access problems—tickets that require IT to drop what they’re doing to sort out a permissioning problem take these critical employees away from work that advances the organization’s products and services.

Improved onboarding and offboarding efficiency—dynamic authorization can ensure new employees or contractors have all the access they need on day one and lose it as soon as they leave.

Improved operational efficiency gives the organization the freedom to explore and grow instead of constantly battling access management problems.

Enabling Business Agility and Innovation

As more and more business is conducted online, organizations are finding that it’s vital to quickly react to changing business needs. Whether an organization is launching a new product, expanding into new markets, reacting to new regulatory requirements, or enabling new partnerships, being able to flexibly adapt to emerging requirements and support innovation is table stakes for successful organizations.

As we’ve discussed, static authorization methods require manual changes to lists and groups to increase or decrease access to systems. For example, a financial services firm that is employing external auditors for compliance reviews must grant access for the duration of the engagement. A dynamic authorization system makes this as easy as a policy change. Even that might not be required if authorization policies have been written so as to anticipate this kind of need.
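As a rough illustration of what "as easy as a policy change" can mean, here is a minimal, hand-rolled sketch of an attribute-based check; it is not any particular vendor's policy language, and the attribute names are invented for the example.

from datetime import date

def is_authorized(subject: dict, action: str, resource: dict, context: dict) -> bool:
    """Evaluate a small list of policies against request attributes.
    A real deployment would use a policy engine with externally managed policies."""
    policies = [
        # Finance employees can read financial reports.
        lambda s, a, r, c: s.get("group") == "finance" and a == "read" and r["type"] == "financial_report",
        # External auditors can read reports, but only while their engagement is active.
        lambda s, a, r, c: (
            s.get("role") == "external_auditor"
            and a == "read"
            and r["type"] == "financial_report"
            and c["date"] <= s["engagement_ends"]
        ),
    ]
    return any(p(subject, action, resource, context) for p in policies)

auditor = {"role": "external_auditor", "group": "external", "engagement_ends": date(2025, 6, 30)}
print(is_authorized(auditor, "read", {"type": "financial_report"}, {"date": date(2025, 5, 1)}))  # True: engagement active
print(is_authorized(auditor, "read", {"type": "financial_report"}, {"date": date(2025, 8, 1)}))  # False: engagement over

In a sketch like this, ending or extending the engagement is a data and policy change rather than a code change or group surgery, which is the point of the example above.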

New products often require custom code to support authorization requirements for customers and administrators. A workforce management service provider launching a new employee onboarding product must ensure that customers can properly authorize varying access levels for their employees to administer and manage the service securely. A dynamic authorization system can be integrated with the new product, allowing developers to build in the right authorization controls without writing custom authorization code.

Improving Customer Experience

The compelling features of modern SaaS applications, marketplaces, and collaborative services depend on carefully controlling access. In these platforms, dynamic authorization isn’t just for security; it also enhances the user experience and provides enhanced revenue opportunities.

For example, platforms like Google Docs, Dropbox, and SmugMug all allow their customers to share content with specific people or groups. Dynamic authorization makes this functionality possible.

Or consider multi-tenant SaaS companies like Workday or Salesforce. Fine-grained authorization allows these companies to isolate customer data while simultaneously allowing granular access within each tenant that follows complex rules. These companies can’t build a single access management system because each tenant requires different access controls depending on their organization, regulatory environment, and internal access policies. Dynamic authorization lets them more easily meet customer needs.

Finally, many online businesses offer different features to different users depending on subscription levels. Dynamic authorization policies allow this to be done without custom code and give the business flexibility to add or modify subscription levels and features without changing the underlying code.

Strengthening Security and Compliance

Using dynamic authorization for improved security and regulatory compliance provides several business advantages. Industries like finance, healthcare, and government are heavily regulated. The regulations require organizations to enforce least privilege access, ensure auditability, and dynamically adjust access based on employee roles and changing risk conditions. Organizational benefits from using dynamic authorization include decreased compliance risk, better employee experience, fewer workarounds that introduce security problems, and reduced overall cost.

Competitive Differentiation

Using dynamic authorization inside products gives organizations a competitive edge by offering a more secure, flexible, and user-friendly product.

For example, a B2B SaaS company with a product built with fine-grained access control can better attract large enterprise customers who demand flexible, yet secure features. A financial services company that lets customers dynamically set transaction limits based on varying risk signals can reduce fraud while maintaining a rich user experience. A collaboration tool that offers flexible, secure content sharing beats out competitors who use more rigid, static sharing models.

Organizations can more easily respond to competitor product changes when access management is as simple as a policy change. And dynamic authorization provides these benefits without developers having to write custom code.

A Business Necessity

The preceding sections offer multiple examples of how dynamic authorization goes well beyond enhanced IT security. Organizations that embrace dynamic authorization gain enhanced operational efficiency through automation, increased business agility to more easily pursue new opportunities, stronger security and compliance with less overhead, and better customer experiences that drive customer engagement and revenue.

In the era of multi-tenant, AI-enhanced, SaaS applications, dynamic authorization is essential for organizations to securely scale and effectively compete. Failing to adopt better access management technologies and mechanisms puts organizations at risk of losing their competitive advantage.

Photo Credit: Octopus busy approving things from DALL-E (public domain) Prompt: Draw a picture of an octopus wearing an officials hat with each arm holding stamps that say either "allow" or "deny". The octopus is stamping multiple sheets of paper on a desk.


Heres Tom with the Weather

Bitcoin Science

Bitcoin failed. J.P. Koning notes in The end of El Salvador’s bitcoin payments experiment: But here was a government that was going to champion the stuff, nullifying all of the headwinds against bitcoin in one stroke! The government meddling hypothesis would be put to test. The Salvadoran government used a combination of sticks and carrots to kick-start adoption. First, let's list the ca

Bitcoin failed. J.P. Koning notes in The end of El Salvador’s bitcoin payments experiment:

But here was a government that was going to champion the stuff, nullifying all of the headwinds against bitcoin in one stroke! The government meddling hypothesis would be put to test. The Salvadoran government used a combination of sticks and carrots to kick-start adoption. First, let's list the carrots. The capital gains tax on bitcoin was set to zero to remove the hassle of buying stuff with bitcoin. The government also built a bitcoin payments app, Chivo, for all El Salvadoreans to use. (Chivo also supports U.S. dollar payments.) Anyone who downloaded Chivo and transacted with bitcoin would receive a $30 bitcoin bonus—that's a lot of money in El Salvador. Gas stations offered $0.20 off of a gallon of gas for customers who paid with the app. People could also use Chivo to pay their taxes with bitcoin. The biggest carrot was zero-transaction fees. Any payment conducted with Chivo was free, as was converting bitcoins held in the Chivo app into U.S. dollars and withdrawing cash at Chivo ATMs. These Chivo ATMs were rolled out across El Salvador and in the U.S., too, to encourage the nascent U.S.-to-El Salvador bitcoin remittance route. Bitcoin ATMs are usually incredibly pricey to use, but in El Salvador the government would eat all the transaction fees. What a fantastic deal. As for the stick, Bukele introduced a forced-tender rule. Beginning in 2021, businesses were required to accept the orange coin or be punished. This was costly for them to comply with. They would have to update point of sale software, signage, train employees, and set up new processes for handling bitcoins post-sale. By all rights, this combination of sticks and carrots should have led to a flourishing of bitcoin payments. But it didn't.

Koning concludes:

The saddest thing about El Salvador's bitcoin experiment is that all sorts of time and resources have been wasted. El Salvador is not a rich country. The money spent on building and operating Chivo, compliance by businesses, bitcoin signage, and subsidies could have been better deployed on more important things like health and education. One hopes that other countries learn from this experience and avoid going down the same route that El Salvador did.

The Pragmatic Engineer

Is Big Tech becoming more cutthroat?

Is it the end of a golden age of Big Tech, when jobs at leading companies offered high-impact work, top compensation, and good work-life balance? More signs suggest harsh perf management is the norm

A few years ago, companies like Google, Microsoft and Facebook were known as places where – inside certain organizations – some engineers could earn large sums of money for doing comparatively little work, and while away the work week waiting for their large new joiner equity to vest over four years. This chill approach is the “rest” in “rest-and-vest,” a phrase that was popular at the time. These places also had many teams where work was laid back, and it was possible to “coast” and get by doing relatively little.

In 2017, Business Insider interviewed several such folks and wrote:

“Business Insider talked to about a half a dozen people with direct knowledge of the rest-and-vest culture. Some were "fat cats" themselves. Some were hiring managers who tried to lure these folks back to the world of productivity. Many acknowledged that resting and vesting was a common, hush-hush practice at their own companies. Internally, these people are often referred to as "coasters."

Their lives counter the other reality for many in the tech world: long work hours and pressure for workers to pledge unrelenting devotion to their companies and jobs above all else.”

A culture of lenient performance management at the biggest tech businesses contributed to laidback work patterns; I talked with managers at the likes of Google and Microsoft at the time who were frustrated that the system made it hard to manage out folks who were visibly checked out, and were hard to motivate to do even the bare minimum work.

Fast forward to today, and there are signs that Big Tech employers are being tougher than ever in performance management, and any tolerance of “rest and vest” culture is history. This article covers:

Meta: first performance-based mass layoffs. Nearly 3,700 people have been dismissed as ‘low performers’; it’s the first time the social media giant has not given another reason for layoffs.

Microsoft: performance-based firings are back. Employees terminated on the spot for performance reasons, without warning, and some don’t get severance.

Evolution of Microsoft’s performance management. The Windows maker has been brutal in performance management before: its old stack ranking system was unpopular for close to two decades.

Even without stack ranking, there’s still bucketing. Stack ranking fell out of favor years ago, but bucketing is still how performance reviews work.

Why now? The end of zero rates, a cooling job market, and more. Takeaways from recent mass layoffs.

Companies that always had high performance expectations. Some Big Tech businesses have always had strict performance standards. For example, within Amazon, Netflix, and NVIDIA, little will change.

Related to this article is our two-part deepdive into How performance calibrations are done at tech companies.

1. Meta: first performance-based mass layoffs

Meta did no mass layoffs during the first 18 years of its existence, until November 2022, when it let go of 13% of staff. Back then, there were business reasons. I wrote at the time:

Meta’s revenue is simply not growing fast enough. In Meta’s Historic Growth Challenge, I suggested that just by allowing attrition to slightly reduce headcount expenses, Meta could catch up with profitability. However, what I did not account for was how attrition was likely too slow to wait for, and not fully within the company’s control. Layoffs, however, are.

The stock price pressure likely became too much. Meta’s stock price dipped under $100 for the first time on Thursday, 27 October, a week before layoffs. The stock dropped by 26% that day, which was the biggest one-day fall, after the company forecast weaker-than-expected revenue growth for Q4 2022. (...)

Zuck has summarized these layoffs:

“We’ve cut costs across our business, including scaling back budgets, reducing perks, and shrinking our real estate footprint. We’re restructuring teams to increase our efficiency. But these measures alone won’t bring our expenses in line with our revenue growth, so I’ve also made the hard decision to let people go.”

Six months later, in early 2023, the company reduced headcount by another 11%, letting go of 10,000 people. The reasoning was that it had overhired during the pandemic years of 2020-2021 and was too bloated. The layoffs flattened the organization and boosted efficiency.

That was two years ago, and since then Meta has become more efficient: it generates more revenue ($164B per year) and profit ($62B) than ever before, and its value is at an all-time high of $1.8 trillion dollars.

It’s in this context that Meta announces its first-ever performance-based mass layoffs. Five percent of staff are expected to be let go, starting this week with around 3,700 people. An internal email from Mark Zuckerberg explains why, as reported by CNBC:

“I’ve decided to raise the bar on performance management and move out low performers faster. We typically manage out people who aren’t meeting expectations over the course of a year, but now we’re going to do more extensive performance-based cuts during this cycle, with the intention of back filling these roles in 2025.

We won’t manage out everyone who didn’t meet expectations for the last period if we’re optimistic about their future performance, and for those we do let go, we’ll provide generous severance in line with what we provided with previous cuts.”

This clarity that it’s “low performers” who are being laid off, is new. The large mass layoffs of 2022-23 were justified differently. Of course, low performers are at risk of being let go in most circumstances. However, in Meta’s previous layoffs, plenty of high-performers were also cut who worked in teams seen as bloated cost centers, or targeted for sharp headcount drops.

While these cuts are unfortunate, Meta at least is offering generous severance to those impacted: 16 weeks of pay and an additional two weeks for each year of service.

2. Microsoft: performance-based firings are back

Meta isn’t the only tech giant terminating employees based on performance concerns; Microsoft is doing the same — except on an individual basis. Also from Business Insider:

“Microsoft has started performance-based job cuts, according to termination letters viewed by Business Insider.

Microsoft is taking a stronger stance on performance management like its competitors, and managers at the company have spent the past few months evaluating employees all the way up to level 80, one of its highest levels.”

One of several termination letters was reported by Business Insider. It reads:

"The reason(s) for the termination of your employment include that your job performance has not met minimum performance standards and expectations for your position… You are relieved of all job duties effective immediately and your access to Microsoft systems, accounts, and buildings will be removed effective today. You are not to perform any further work on behalf of Microsoft."

Just to repeat, performance-related firing is commonplace, but what’s different here is how short and quick the process is. Previously, most of Big Tech followed a standard process for workers seen as in need of improvement:

Feedback from the manager, often at a biannual performance review

Performance improvement plan (PIP) which formalises why someone is not meeting performance expectations, and how to succeed with the plan

Dismissal upon failure to clear the PIP. Big Tech has been known for generous severance packages which exceed the legal minimum

But now, Microsoft seems to be skipping PIPs and also not offering severance in some cases. This is unusual, given how differently the tech giant had treated employees since Satya Nadella became CEO. It also feels unusually petty to cancel severance packages for those affected, especially as Microsoft is reporting record profits. Is it a message to low performers to expect nothing from the company?

Microsoft getting “cutthroat” in its performance-management is also out of character, as it was Nadella who introduced a more lenient performance management approach, back in 2014.

3. Evolution of Microsoft’s performance management

Between the 1990s and 2013, Microsoft used a stack ranking system for performance management, which wasn’t advertised to employees until the mid-2000s – although many knew about Microsoft’s “vitality curve” for ranking engineers and managers. Under this, workers high on the curve got outsized bonuses and pay rises, and those low down the curve; well, they got managed out.

In 2004, Mini Microsoft (an anonymous employee at the company, blogging in the public) wrote a post explaining how the then still-secretive stack ranking worked:

“Then along came a new lead. Her feedback [to me was], "You've got to increase your team visibility so that you can do better in the stack rank meeting."

The what-rank? She said it slower as if it would help me to divine what the heck she was talking about. Then she got up and gave me the stack rank lesson and I got to learn about how the team is divided into columns of high, medium, and low folks and then each column has a person by person relative ranking, all those positions negotiated by the leads putting their people up on the whiteboard and then arguing the merits of which report belongs above which other reports.

She said they set the context of their decisions by asking a question like, "Okay, if the team were on a sinking boat and we had to decide who we would put on the life-boats, who would it be?" Up to that point, my ass was next in line for the boat but still going down with the ship.”

From 2004 – mostly thanks to this blog post – stack ranking was no longer a secret, but it wasn’t until 2011 that then-CEO Steve Ballmer acknowledged its existence in an internal email, writing:

“Each rating at each level will now have set compensation tied to the rating.

These ratings will be based on the results you accomplished during the review period (assessed against your commitments), how you accomplished them, and your proven capability. Ratings will be a simple 1-5 system with relative performance being assessed across common peer groups.”

The buckets were pre-defined, supposedly as 20% (top performers), 20% (good performers), 40% (average), 13% (below average), and 7% (poor performers).

I worked at Microsoft starting in 2012, the year after the existence of the stack ranking system became public knowledge. Knowing the distribution made me hope for a grade of 1-2, which would have meant my manager saw me as the “top 40%” within the team. I ended up getting a “3” in 2013, which I was disappointed with, as I interpreted it as being in the bottom 20th to 60th percentile.

Later, I talked with an HR person, who told me that nobody at Microsoft was ever happy with their grades:

Those getting a “1” (the highest rating for the top 20%) think they deserve it. Many feel entitled to it, more than they’re happy to achieve it

Everyone on a 2 to a 5 is disappointed to different extents

Steve Ballmer’s departure spelt the end of the stack ranking system. Shortly after Ballmer announced his retirement in August 2013, the company announced in November 2013 that the system was also being retired, effective immediately. There are a few possible reasons why stack ranking went extinct:

1. Office politics ruled Microsoft. From the mid-2000s, it was increasingly clear that internal politics was more important than building products customers loved.

Microsoft was blindsided by the 2007 launch of the iPhone, and the launch of Android the next year. It took three more years to finally launch a competitive device – the Windows Phone in 2011. By then, iPhone and Android had captured the bulk of the smartphone market.

In 2011, Google software engineer and cartoonist Manu Cornet drew a cartoon about how he perceived Amazon, Google, Facebook, Microsoft, Apple, and Oracle. This is how he represented the Xbox maker:

Source: Manu Cornet

This image went viral, even though Manu never intended it as a big joke in his comic strip, as he explains in The Man Behind the Tech Comics. The intended target of the joke was Oracle, but his image of Microsoft captured a widely perceived truth.

Back then, there was close to zero collaboration between divisions at Microsoft, each thousands of employees in size, like Windows, Office, Server, Xbox, and Skype. I experienced the lack of collaboration – to the point of hostility – first-hand. In late 2013, my team was building Skype for Web, which we positioned as a competitor to Google Hangouts. We had a problem, though: in order to start a video or voice call, users needed to download a plugin which contained the required video codecs. We noticed Google Hangouts did the same on Internet Explorer and Firefox, but not on Chrome, because the plugin was bundled with that browser for a frictionless experience.

My team decided we had to offer the same frictionless experience on Microsoft’s latest browser, Edge, which was in development at the time. After weeks of back-and-forth, the Edge team politely and firmly rejected bundling our plugin into the new Microsoft browser. The reason? Their KPI was to minimize the download size of the browser, and helping us would not help them reach that goal.

It was a maddening experience. Microsoft could not compete with the likes of Google due to internal dysfunction like this; with teams and individuals focused on their own targets at the expense of the greater good for the company and users.

2. Stack ranking pinpointed as the core of the problem. In 2012, Vanity Fair published Microsoft’s lost decade, which said:

“At the center of the cultural problems was a management system called “stack ranking.” Every current and former Microsoft employee I interviewed—every one—cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees. (...)

The system—also referred to as “the performance model,” “the bell curve,” or just “the employee review”—has, with certain variations over the years, worked like this: every unit was forced to declare a certain percentage of employees as top performers, then good performers, then average, then below average, then poor.

In the end, the stack-ranking system crippled the ability to innovate at Microsoft, executives said. “I wanted to build a team of people who would work together and whose only focus would be on making great software,” said Bill Hill, the former manager. “But you can’t do that at Microsoft.”

3. Investor and board pressure. By 2013, Microsoft’s stock had been flat for about 12 years. It was clear that cultural change was needed to turn business performance around, and removing the hated stack ranking system was one of the easiest ways for the leadership team to show that change was afoot.

4. Ballmer’s exit. Several leaders including Head of HR Lisa Brummel were never in favor of stack ranking, as Business Insider reported at the time. With Ballmer gone, executives could push decisions that would’ve previously been vetoed, before a new CEO took the helm.

Satya Nadella replaced stack ranking with a more collaborative performance review system. As CEO, he recognized the cultural problems Microsoft had. In his 2017 book, Hit Refresh, he recalled the pre-2014 times:

"Innovation was being replaced by bureaucracy. Teamwork was being replaced by internal politics. We were falling behind."

A new performance review system attempted to address the problems, rating employees in three areas:

Individual impact

Contributing to others’ success

Leveraging the work of others

Microsoft also got rid of its vitality curve (the stack ranking system), starting from 2014. The changes resulted in a different performance review process, where individual impact carried less weight. In 2022, Microsoft even started to measure how many of its employees said they were “thriving”, which it defined as being “energized and empowered to do meaningful work.” Note that this was at the peak of the hottest job market in tech, when attrition spiked across the sector, and even Big Tech needed new ways to retain people.

Signs that performance management was changing again appeared earlier: last September, Microsoft quietly introduced a new field for managers called “impact designators.” Managers had to rate the impact of their reports without disclosing the rating to employees. The ratings determined bonuses and pay rises.

As a former engineering manager, what surprised me about this lowkey change was not that it happened, but the question it raises: what was Microsoft doing before? “Impact designator” is another name for “multiplier”, a term used in most tech workplaces. Ahead of performance calibration meetings, managers often know this information and must fit within the budget, or can sometimes exceed it. Multipliers are finalized in the calibration and are used to divide bonus pots, equity refreshes, and pay rise budgets.

So it was a surprise to learn Microsoft operated without managers setting or recommending multipliers for nine years, as part of the performance process.

4. Even without stack ranking, there’s still bucketing

The demise of divisive stack ranking was cheered, but in reality all larger companies still operate ranking frameworks today. At most mid-sized-and-above companies, performance review processes have the explicit goal of identifying and rewarding top performers, and of finding low performers and figuring out what to do next. We cover the dynamics of the process in a two-part deepdive, Performance calibrations at tech companies, including:

Read more

Monday, 10. February 2025

Damien Bod

ASP.NET Core delegated OAuth Token Exchange access token management

This blog shows how to implement a delegated OAuth 2.0 Token Exchange RFC 8693 flow in ASP.NET Core, and has a focus on access token management. It looks at how the OAuth Token Exchange can be implemented and how an application can request delegated access tokens on behalf of a user and another application, providing […]

This blog shows how to implement a delegated OAuth 2.0 Token Exchange RFC 8693 flow in ASP.NET Core, with a focus on access token management. It looks at how the OAuth Token Exchange can be implemented and how an application can request delegated access tokens on behalf of a user and another application, providing seamless and secure access to protected resources using a zero trust strategy.

Code: https://github.com/damienbod/token-mgmt-ui-delegated-token-exchange

Blogs in this series

ASP.NET Core user delegated access token management
ASP.NET Core user application access token management
ASP.NET Core delegated OAuth 2.0 Token Exchange access token management
ASP.NET Core delegated Microsoft OBO access token management (Entra only)

Setup

The solution implements an ASP.NET Core web application which authenticates using Microsoft Entra ID. The web application uses an API protected with a Microsoft Entra ID access token. This API uses another downstream API protected with Duende IdentityServer. The API exchanges the Microsoft Entra ID access token for a new Duende IdentityServer access token using the OAuth 2.0 Token Exchange standard. Both APIs use a user delegated access token. The tokens are persisted on the trusted backend using the IDistributedCache implementation. This can be an in-memory cache or a persistent cache. When using this cache, it is important to automatically renew the access token, if it is missing or invalid.

What must an application manage?

An access token management solution must ensure that tokens are securely stored per user session for delegated downstream API user tokens and updated after each UI authentication or refresh. The solution should be robust to handle token expiration, function seamlessly after restarts, and support multi-instance deployments. The tokens must be persisted safely in multiple instance setups. Additionally, it must effectively manage scenarios involving invalid or missing access tokens.

Properties of token management in the solution setup:

The access token is persisted per user session
The token expires
The token needs to be persisted somewhere safely (safe and encrypted storage if not in-memory)
The token must be replaced after each UI authentication (per user)
The solution must work after restarts
The solution must work for multiple instances when deployed to multi-instance deployments
The solution must handle invalid access tokens or missing access tokens
The application must handle a user logout

Client Implementation (Entra ID API)

An OAuth 2.0 Token Exchange token request is sent to the Duende IdentityServer using the ApiTokenCacheClient. The service persists the token in a cache per user. The cache is implemented using the IDistributedCache interface.

using IdentityModel.Client;
using IdentityModel;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Options;
using System.Text.Json;

namespace WebApiEntraId.WebApiDuende;

public class ApiTokenCacheClient
{
    private readonly ILogger<ApiTokenCacheClient> _logger;
    private readonly IHttpClientFactory _httpClientFactory;
    private readonly IOptions<WebApiDuendeConfig> _webApiDuendeConfig;
    private static readonly object _lock = new();
    private readonly IDistributedCache _cache;
    private const int cacheExpirationInDays = 1;

    private class AccessTokenItem
    {
        public string AccessToken { get; set; } = string.Empty;
        public DateTime ExpiresIn { get; set; }
    }

    public ApiTokenCacheClient(
        IOptions<WebApiDuendeConfig> webApiDuendeConfig,
        IHttpClientFactory httpClientFactory,
        ILoggerFactory loggerFactory,
        IDistributedCache cache)
    {
        _webApiDuendeConfig = webApiDuendeConfig;
        _httpClientFactory = httpClientFactory;
        _logger = loggerFactory.CreateLogger<ApiTokenCacheClient>();
        _cache = cache;
    }

    public async Task<string> GetApiTokenOauthGrantTokenExchange(
        string clientId, string audience, string scope, string clientSecret, string aadAccessToken)
    {
        var accessToken = GetFromCache(clientId);
        if (accessToken != null)
        {
            if (accessToken.ExpiresIn > DateTime.UtcNow)
            {
                return accessToken.AccessToken;
            }
        }

        _logger.LogDebug("GetApiToken new from STS for {api_name}", clientId);

        var newAccessToken = await GetApiTokenOauthGrantTokenExchangeAad(
            clientId, audience, scope, clientSecret, aadAccessToken);

        AddToCache(clientId, newAccessToken);
        return newAccessToken.AccessToken;
    }

    private async Task<AccessTokenItem> GetApiTokenOauthGrantTokenExchangeAad(string clientId,
        string audience, string scope, string clientSecret, string entraIdAccessToken)
    {
        var tokenExchangeHttpClient = _httpClientFactory.CreateClient();
        tokenExchangeHttpClient.BaseAddress = new Uri(_webApiDuendeConfig.Value.IdentityProviderUrl);

        var cache = new DiscoveryCache(_webApiDuendeConfig.Value.IdentityProviderUrl);
        var disco = await cache.GetAsync();

        var tokenExchangeSuccessResponse = await tokenExchangeHttpClient
            .RequestTokenExchangeTokenAsync(new TokenExchangeTokenRequest
            {
                Address = disco.TokenEndpoint,
                ClientId = clientId,
                ClientSecret = clientSecret,
                Audience = audience,
                SubjectToken = entraIdAccessToken,
                SubjectTokenType = OidcConstants.TokenTypeIdentifiers.AccessToken,
                Scope = scope,
                Parameters =
                {
                    { "exchange_style", "delegation" }
                }
            });

        if (tokenExchangeSuccessResponse != null)
        {
            return new AccessTokenItem
            {
                ExpiresIn = DateTime.UtcNow.AddSeconds(tokenExchangeSuccessResponse.ExpiresIn),
                AccessToken = tokenExchangeSuccessResponse.AccessToken!
            };
        }

        _logger.LogError("no success response from oauth token exchange access token request");
        throw new ApplicationException("no success response from oauth token exchange access token request");
    }

    private void AddToCache(string key, AccessTokenItem accessTokenItem)
    {
        var options = new DistributedCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

        lock (_lock)
        {
            _cache.SetString(key, JsonSerializer.Serialize(accessTokenItem), options);
        }
    }

    private AccessTokenItem? GetFromCache(string key)
    {
        var item = _cache.GetString(key);
        if (item != null)
        {
            return JsonSerializer.Deserialize<AccessTokenItem>(item);
        }

        return null;
    }
}

The WebApiDuendeService class uses the token API service to request data from the downstream API.

using IdentityModel.Client;
using Microsoft.Extensions.Options;
using System.Text.Json;

namespace WebApiEntraId.WebApiDuende;

public class WebApiDuendeService
{
    private readonly IOptions<WebApiDuendeConfig> _webApiDuendeConfig;
    private readonly IHttpClientFactory _clientFactory;
    private readonly ApiTokenCacheClient _apiTokenClient;

    public WebApiDuendeService(
        IOptions<WebApiDuendeConfig> webApiDuendeConfig,
        IHttpClientFactory clientFactory,
        ApiTokenCacheClient apiTokenClient)
    {
        _webApiDuendeConfig = webApiDuendeConfig;
        _clientFactory = clientFactory;
        _apiTokenClient = apiTokenClient;
    }

    public async Task<string> GetWebApiDuendeDataAsync(string entraIdAccessToken)
    {
        try
        {
            var client = _clientFactory.CreateClient();
            client.BaseAddress = new Uri(_webApiDuendeConfig.Value.ApiBaseAddress);

            var accessToken = await _apiTokenClient.GetApiTokenOauthGrantTokenExchange
            (
                _webApiDuendeConfig.Value.ClientId,
                _webApiDuendeConfig.Value.Audience,
                _webApiDuendeConfig.Value.ScopeForAccessToken,
                _webApiDuendeConfig.Value.ClientSecret,
                entraIdAccessToken
            );

            client.SetBearerToken(accessToken);

            var response = await client.GetAsync("api/profiles/photo");
            if (response.IsSuccessStatusCode)
            {
                var data = await response.Content.ReadAsStringAsync();
                if (data != null)
                {
                    return data;
                }

                return string.Empty;
            }

            throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
        }
        catch (Exception e)
        {
            throw new ApplicationException($"Exception {e}");
        }
    }
}
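The snippets above show the two services but not how they are wired up. A minimal sketch of the registration in the Entra ID API's Program.cs might look like the following; the configuration section name, the endpoint route, and the use of the in-memory distributed cache are assumptions for illustration, not taken from the original solution.

// Minimal sketch (assumptions): service registration for the Entra ID API.
// The "WebApiDuende" configuration section name and the /api/duende-data route
// are illustrative; authentication setup for Microsoft Entra ID is omitted here.
var builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<WebApiDuendeConfig>(
    builder.Configuration.GetSection("WebApiDuende"));

builder.Services.AddHttpClient();
builder.Services.AddDistributedMemoryCache(); // or a persistent IDistributedCache store
builder.Services.AddSingleton<ApiTokenCacheClient>();
builder.Services.AddScoped<WebApiDuendeService>();

// ... AddAuthentication / AddAuthorization for Microsoft Entra ID goes here ...

var app = builder.Build();

// Example endpoint: takes the caller's Entra ID access token from the request
// and forwards the call to the downstream Duende protected API.
app.MapGet("/api/duende-data", async (HttpContext httpContext, WebApiDuendeService service) =>
{
    var entraIdAccessToken = httpContext.Request.Headers.Authorization
        .ToString().Replace("Bearer ", string.Empty);

    var data = await service.GetWebApiDuendeDataAsync(entraIdAccessToken);
    return Results.Ok(data);
}).RequireAuthorization();

app.Run();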

Duende IdentityServer implementation

Duende IdentityServer provides an IExtensionGrantValidator interface to implement identity server support for the OAuth 2.0 Token Exchange standard. This service must validate the access token and provide the data to issue a new Duende access token. Other validation checks are required, like validating the sub claim, which represents the user in the delegated access token. It is important to validate the access token fully. New access tokens should only be issued for the same user. It is important to use a unique identifier from the access token to read data and issue new data for the user. An email is normally not a good solution for this, as users can change their email in some IAM solutions.

public class TokenExchangeGrantValidator : IExtensionGrantValidator
{
    private readonly ITokenValidator _validator;
    private readonly OauthTokenExchangeConfiguration _oauthTokenExchangeConfiguration;
    private readonly UserManager<ApplicationUser> _userManager;

    public TokenExchangeGrantValidator(ITokenValidator validator,
        IOptions<OauthTokenExchangeConfiguration> oauthTokenExchangeConfiguration,
        UserManager<ApplicationUser> userManager)
    {
        _validator = validator;
        _oauthTokenExchangeConfiguration = oauthTokenExchangeConfiguration.Value;
        _userManager = userManager;
    }

    public async Task ValidateAsync(ExtensionGrantValidationContext context)
    {
        // defaults
        context.Result = new GrantValidationResult(TokenRequestErrors.InvalidRequest);
        var customResponse = new Dictionary<string, object>
        {
            { OidcConstants.TokenResponse.IssuedTokenType, OidcConstants.TokenTypeIdentifiers.AccessToken }
        };

        var subjectToken = context.Request.Raw.Get(OidcConstants.TokenRequest.SubjectToken);
        var subjectTokenType = context.Request.Raw.Get(OidcConstants.TokenRequest.SubjectTokenType);

        var oauthTokenExchangePayload = new OauthTokenExchangePayload
        {
            subject_token = subjectToken!,
            subject_token_type = subjectTokenType!,
            audience = context.Request.Raw.Get(OidcConstants.TokenRequest.Audience),
            grant_type = context.Request.Raw.Get(OidcConstants.TokenRequest.GrantType)!,
            scope = context.Request.Raw.Get(OidcConstants.TokenRequest.Scope),
        };

        // mandatory parameters
        if (string.IsNullOrWhiteSpace(subjectToken))
        {
            return;
        }

        if (!string.Equals(subjectTokenType, OidcConstants.TokenTypeIdentifiers.AccessToken))
        {
            return;
        }

        var (Valid, Reason, Error) = ValidateOauthTokenExchangeRequestPayload
            .IsValid(oauthTokenExchangePayload, _oauthTokenExchangeConfiguration);

        if (!Valid)
        {
            return; // UnauthorizedValidationParametersFailed(oauthTokenExchangePayload, Reason, Error);
        }

        // get well known endpoints and validate access token sent in the assertion
        var configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            _oauthTokenExchangeConfiguration.AccessTokenMetadataAddress,
            new OpenIdConnectConfigurationRetriever());

        var wellKnownEndpoints = await configurationManager.GetConfigurationAsync();

        var accessTokenValidationResult = await ValidateOauthTokenExchangeRequestPayload.ValidateTokenAndSignature(
            subjectToken,
            _oauthTokenExchangeConfiguration,
            wellKnownEndpoints.SigningKeys);

        if (!accessTokenValidationResult.Valid)
        {
            return; // UnauthorizedValidationTokenAndSignatureFailed(oauthTokenExchangePayload, accessTokenValidationResult);
        }

        // get claims from Microsoft Entra ID token and re use in Duende IDP token
        var claimsIdentity = accessTokenValidationResult.ClaimsIdentity;
        if (claimsIdentity == null)
        {
            return;
        }

        var isDelegatedToken = ValidateOauthTokenExchangeRequestPayload
            .IsDelegatedAadAccessToken(claimsIdentity);

        if (!isDelegatedToken)
        {
            return; // UnauthorizedValidationRequireDelegatedTokenFailed();
        }

        var name = ValidateOauthTokenExchangeRequestPayload.GetPreferredUserName(claimsIdentity);
        var isNameAndEmail = ValidateOauthTokenExchangeRequestPayload.IsEmailValid(name);
        if (!isNameAndEmail)
        {
            return; // UnauthorizedValidationPreferredUserNameFailed();
        }

        // Should use the OID
        var user = await _userManager.FindByNameAsync(name);
        if (user == null)
        {
            return; // UnauthorizedValidationNoUserExistsFailed();
        }

        var sub = claimsIdentity.Claims!.First(c => c.Type == JwtClaimTypes.Subject).Value;
        var style = context.Request.Raw.Get("exchange_style");

        if (style == "impersonation")
        {
            // set token client_id to original id
            context.Request.ClientId = oauthTokenExchangePayload.audience!;

            context.Result = new GrantValidationResult(
                subject: sub,
                authenticationMethod: GrantType,
                customResponse: customResponse);
        }
        else if (style == "delegation")
        {
            // set token client_id to original id
            context.Request.ClientId = oauthTokenExchangePayload.audience!;

            var actor = new
            {
                client_id = context.Request.Client.ClientId
            };

            var actClaim = new Claim(JwtClaimTypes.Actor,
                JsonSerializer.Serialize(actor),
                IdentityServerConstants.ClaimValueTypes.Json);

            context.Result = new GrantValidationResult(
                subject: sub,
                authenticationMethod: GrantType,
                claims: [actClaim],
                customResponse: customResponse);
        }
        else if (style == "custom")
        {
            context.Result = new GrantValidationResult(
                subject: sub,
                authenticationMethod: GrantType,
                customResponse: customResponse);
        }
    }

    public string GrantType => OidcConstants.GrantTypes.TokenExchange;
}
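The validator looks the user up with FindByNameAsync and notes in a comment that the OID should be used instead. As a rough sketch of that alternative, assuming the Entra ID object id has been stored on the ApplicationUser in a hypothetical EntraIdObjectId property, the lookup could be replaced with something like this:

// Hypothetical alternative to FindByNameAsync: resolve the user via the Entra ID
// object id (oid) claim, which stays stable even if the user's email changes.
// "EntraIdObjectId" is an assumed custom property on ApplicationUser.
var oid = claimsIdentity.Claims.FirstOrDefault(c => c.Type == "oid")?.Value
    ?? claimsIdentity.Claims.FirstOrDefault(c =>
        c.Type == "http://schemas.microsoft.com/identity/claims/objectidentifier")?.Value;

if (string.IsNullOrEmpty(oid))
{
    return; // no stable identifier in the token
}

var user = _userManager.Users.SingleOrDefault(u => u.EntraIdObjectId == oid);
if (user == null)
{
    return; // no matching local account
}

An email or preferred user name can change in some IAM systems, while the oid claim stays stable for the lifetime of the account, which makes it a safer key for matching the external identity to the local user.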

In Duende, a client configuration is required to support the OAuth 2.0 Token Exchange. The grant is enabled using the AllowedGrantTypes property. A secret is also required to acquire a new access token.

new Client
{
    ClientId = "tokenexchangeclientid",
    ClientSecrets = { new Secret("--in-user-secrets--".Sha256()) },
    AllowedGrantTypes = { OidcConstants.GrantTypes.TokenExchange },
    AllowedScopes = { "shopclientscope" }
}

Support for the OAuth Token Exchange is added to the Duende IdentityServer setup using the AddExtensionGrantValidator extension method.

var idsvrBuilder = builder.Services
    .AddIdentityServer(options =>
    {
        options.Events.RaiseErrorEvents = true;
        options.Events.RaiseInformationEvents = true;
        options.Events.RaiseFailureEvents = true;
        options.Events.RaiseSuccessEvents = true;

        // see https://docs.duendesoftware.com/identityserver/v6/fundamentals/resources/
        options.EmitStaticAudienceClaim = true;
    })
    .AddInMemoryIdentityResources(Config.IdentityResources)
    .AddInMemoryApiScopes(Config.ApiScopes)
    .AddInMemoryClients(Config.Clients())
    .AddAspNetIdentity<ApplicationUser>();

// registers extension grant validator for the token exchange grant type
idsvrBuilder.AddExtensionGrantValidator<TokenExchangeGrantValidator>();

Running the solutions

When all four applications are started, the data from the Duende protected API is returned to the Razor Page application which uses Microsoft Entra ID to authenticate.

Links
Implement the OAUTH 2.0 Token Exchange delegated flow between an Microsoft Entra ID API and an API protected using OpenIddict

https://github.com/damienbod/OAuthGrantExchangeOidcDownstreamApi

https://docs.duendesoftware.com/identityserver/v7/tokens/extension_grants/token_exchange/

JSON Web Token (JWT)

Best Current Practice for OAuth 2.0 Security

The OAuth 2.0 Authorization Framework

OAuth 2.0 Demonstrating Proof of Possession DPoP

OAuth 2.0 JWT-Secured Authorization Request (JAR) RFC 9101

OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens

OpenID Connect 1.0

Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow

OAuth 2.0 Token Exchange

JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens

HTTP Semantics RFC 9110

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/social/additional-claims

https://github.com/dotnet/aspnetcore/issues/8175

Saturday, 08. February 2025

Wrench in the Gears

Proposal For A Spring Break From Mind Viruses – March 13 to the Summer Solstice: Ground, Gather Energy, Reflect, Renew 5 Years Later

I made a video today reflecting on how we manage (or don’t) our energy and how our engagement with digital information complicates our lives. I’m proposing that people consider dialing back their social media consumption to spend time reflecting on their thinking, what we have learned over the past five years, and how we can [...]

I made a video today reflecting on how we manage (or don’t) our energy and how our engagement with digital information complicates our lives. I’m proposing that people consider dialing back their social media consumption to spend time reflecting on their thinking, what we have learned over the past five years, and how we can navigate future challenges with integrity and grace. In the space that is created I am hoping some people will experience ontopoetic synchronicities, conversations with the divine, as we become more intentional about how we direct our consciousness and bodies in this 3D world. I would love to hear back from anyone who gives this a try to see what your experience has been. Life is a stage and we are players. Consider this an unplugged improv opportunity where you get to riff with the quantum.

Words of the day: biosemiotics, memetics, reality tunnel, and metacognition.

Take care with that biocomputer programming. : )

Wednesday, 05. February 2025

IdM Laboratory

Speaking of which, EUDIW Architecture Reference Framework 1.5.0 is out

Hello, this is Fujie. Speaking of which, version 1.5.0 of the EUDIW ARF came out on February 4. https://eu-digital-identity-wallet.github.io/eudi-doc-architecture-and-reference-framework/latest/architecture-and-reference-framework-main/ Looking at the CHANGELOG on GitHub: The ARF is aligned with the adopted Implementing Acts, covering articles 5a and 5c of the eIDAS Regulation.  The ARF also includes changes in response to comments provid
Hello, this is Fujie.
Speaking of which, version 1.5.0 of the EUDIW ARF came out on February 4. https://eu-digital-identity-wallet.github.io/eudi-doc-architecture-and-reference-framework/latest/architecture-and-reference-framework-main/


Looking at the CHANGELOG on GitHub: "The ARF is aligned with the adopted Implementing Acts, covering articles 5a and 5c of the eIDAS Regulation. The ARF also includes changes in response to comments provided on Github and by other stakeholders. Over more than 275 comments lead to changes in the ARF." In other words: go and read it for yourself.

Wrench in the Gears

Follow Up On Quantum Realism In Connection With Gordon Pask’s Mechanical Theatricality – Plus A “Diamond Age” Excerpt Read Aloud

This is a follow up to yesterday’s video reiterating my thoughts about Live Action Role Play with encoded artefacts as an example of extended mind and possible future inter-dimensional information processing system. The talk goes along with two earlier talks Jason and I did on the work of Michael Mateas here. The feature image is [...]

This is a follow up to yesterday’s video reiterating my thoughts about Live Action Role Play with encoded artefacts as an example of extended mind and possible future inter-dimensional information processing system. The talk goes along with two earlier talks Jason and I did on the work of Michael Mateas here. The feature image is taken from Paul Pangaro’s online archive of Pask materials that is very much worth exploring here.

I am including a few maps if you want to poke around. All maps are best viewed on a laptop.

One features Pask’s cybernetic (and synesthetic) endeavors. Click here for the interactive version – enter “Pask” in the search bar on top to the left if needed to zoom in again.

https://embed.kumu.io/1e710f6678f0a88ace5a8d3fae2e9af7#untitled-map?s=bm9kZS10bXlnWlROWQ%3D%3D

One shows Jacobo Moreno and Timothy Leary’s work in psychology and group dynamics through role play. Click here for the interactive version – enter “Moreno” in the search bar on top to the left if needed to zoom in again.

https://embed.kumu.io/68e1cf46a34811b86499685100a83863#untitled-map?s=bm9kZS1zOW9nM0VjQg%3D%3D

One shows Mateas, LARPing, and non-traditional materials that compute. Click here for the interactive version – enter “Mateas” in the search bar on top to the left if needed to zoom in again.

https://embed.kumu.io/174960fb41354356ac833be7f75f6fad#untitled-map?s=bm9kZS1YY1VlZ1hLeA%3D%3D

One shows distributed cognition with respect to context and encoded artifacts. Click here for the interactive version – enter “Distributed Cognition” in the search bar on top to the left if needed to zoom in again.

https://embed.kumu.io/2a9737e61e74d617d28bfdf81d54a1cf#untitled-map?s=bm9kZS1OYUR5NHE4aQ%3D%3D

One shows DAO voting, which I believe relates to collapsing waves in Quantum Potential. Click here for the interactive version – enter “Extended Mind” in the search bar on top to the left if needed to zoom in again.

https://embed.kumu.io/0932741381c995440ed1891fc8d992b7#untitled-map?s=bm9kZS1MckliZmRyTA%3D%3D

Enjoy!

 

 


Webistemology - John Wunderlich

Growing Democracy

There is a place, little noticed and even less understood by most Canadians, where democracy is growing stronger despite a history of violence and exploitation.
The People of DRCongo Shed Light on the “Heart of Darkness” as Our Media Looks the Other Way

Steve Soloman

Keeping up with world events, important as it is, can nonetheless be fraught. Any Canadian who wishes to stay informed has seen the coverage: autocratic governments flexing their muscles, personality cults rather than responsible governments and the rule of law being threatened even in well-established democracies. The internal rot of misguided populism and hyper partisanship, challenging as they are, are only the first steps along a very rocky road.

Despite what we see and hear, the news is not all bad. There is a place, little noticed and even less understood by most Canadians, where democracy is growing stronger despite a history of violence and exploitation.

When we think of Africa - if we think of it at all - we picture a continent struggling to overcome its colonial past. Tragically for the people who live there, this image is not entirely wrong; Africa has seen nine military coups since 2020. But, as elsewhere, youth are the continent’s hope, and Africa is a very young continent, with an average age in 2023 of just under nineteen. Education levels are rising, though progress has been slow, due in part to a largely indifferent West. In the Democratic Republic of Congo (DRC), despite almost innumerable setbacks, positive strides are being made and there are valuable lessons to be learned if only we would take note. DRC and its people are working to overcome deplorable handicaps to create a government that is, to borrow a phrase, by the people and for the people.

When DRC does get mentioned, it’s because of either disease or warfare. This is largely due to the nature of the media beast; both what it is and what it isn’t. The media consists of profit-seeking businesses, and like any business it provides us with what it knows, or at least believes, we want. What it’s not is a pacifier ready to soothe our jangled nerves. Simply put, bad news sells. The news we get out of DRC is very much a case in point. There are positive developments, but we won’t read about this in mainstream media, not because of any sort of cover-up but rather out of rational business decisions. Thus democracy is evolving in the midst of a continent deserving of far more attention than it gets.

But before we can get to those positives we need to illustrate the circumstances out of which they have grown. This is really the only way we can appreciate what the people of DRC have suffered and continue to endure in their ongoing and heroic attempts to free themselves and achieve what so many of the rest of us take for granted.

DRC lies at the very centre of the African continent. With an area of 2.3 million square kilometres, it’s the second largest African nation after Algeria. It became a Belgian colony in 1908 but Europeans had ‘discovered’ the country well before that; in 1890 Joseph Conrad spent time there and later used it as the setting for Heart of Darkness. The book is a critique both of Belgian exploitation and an attitude toward Congo, and by extension all of Africa, that persists to this day.

Many of us continue to think of Africa as if it was one big country when in fact it contains fifty-four fully independent nations. Ethnically and culturally it’s at least as diverse as any other continent. In terms of its natural resources, DRC is amongst the wealthiest nations on Earth but its people are amongst the poorest. The Belgian colonizers exploited the country’s rubber reserves and more recently many nations, including Canada, have taken advantage of DRC’s vast quantities of copper, cobalt, zinc, silver, gold, bauxite, uranium and coltan. In all, minerals account for 90 per cent of DRC’s exports. To illustrate just one example, eighty per cent of the world’s supply of coltan comes from DRC and that mineral is in all the cell phones and laptops you and I use every day.

Much of the mining in DRC is referred-to as ‘artisanal’, a euphemism which when put into plain English, means men and children, typically boys, working as miners with no safety gear and little or no equipment of any kind beyond pickaxes. The coltan in our phones and computers has very likely come from one of these ‘artisanal’ mining operations. This is in no way meant to suggest we shouldn’t own these things, only that we should be aware; that we should be ready to take positive steps to give back in some way; to end the exploitation.

DRC officially gained its independence from Belgium on June 30th 1960. But the country was utterly unprepared to run its own affairs since the Belgians had taken no steps to prepare their former colonials for the challenges of self-rule. A republican government was established, with both a President and Prime Minister. Patrice Lumumba, a leader of the independence movement, and the country’s first Prime Minister, was both an African nationalist and a member of the pan-African movement. This meant he supported national self-determination and believed in strengthening bonds between all indigenous Africans and the global African diaspora. He played a major role in taking the nation from colony to independence. But his left-of-centre political views, in the midst of the Cold War, made him dangerous in the eyes of several western nations, particularly the United States.

The independence celebrations were short-lived; within a very few days the new nation was faced with what came to be known as the Congo Crisis. With Belgian support the provinces of Katanga and South Kasai seceded and there was also a revolt by members of the military as Black soldiers were no longer willing to accept orders from white officers. The United Nations was preparing to send troops to aid the government against the secessionists but the Secretary-General, Dag Hammarskjold, made a last minute decision not to do so. In response, Lumumba called for assistance from the Soviet Union, which sent ‘military advisors’. The Soviet presence caused a split in the government between Lumumba and the President, Joseph Kasa-Vubu. It was at this point that Joseph-Désiré Mobutu Sese-Seko, a lieutenant-colonel and Lumumba’s chief military advisor, stepped in to break the deadlock by leading a coup-d’etat. Supported by both Belgium and the United States, he expelled the Soviets and established a government that supported his view of Lumumba as a danger to the country. He led a second coup in 1965 that made himself President. About 100,000 people died during this period of political conflict, including Lumumba, who was assassinated on January 17, 1961. Mobutu ruled from 1965 to 1997. In 1971 he changed the country’s name to Zaire. His thirty-year rule also saw European investors expelled and the nationalizing of their companies. He looted the nation’s treasury. In 1990 he agreed to end the ban he had imposed on a multi-party state and formed a transitional government while retaining considerable political power. In 1994 he agreed to the appointment of Kengo Wa Dondo, an advocate for free-market reforms, as Prime Minister.

That same year in neighbouring Rwanda, between April and July, armed groups consisting of Hutu fighters committed genocide against the Tutsi people. Over 800,000 victims, mostly Tutsis, were killed and the violence spread into DRC as small groups of Hutu extremists formed militia groups. Such militias continue to operate in DRC to this day, funded largely by the Rwandan government under the leadership of Paul Kagame. Kagame, who is Tutsi, was a commander of a militia that sought, successfully, to put an end to the genocide. However he later initiated two rebel wars in DRC, the First Congo War, from 1996 to 1997, immediately followed by the Second Congo War of 1998 to 2003. Among horrific atrocities committed was the serial rape and murder of Congolese women. Canada’s Trudeau government, usually so ready to condemn such acts, has remained entirely silent about the continuing actions of the militia groups in DRC. As recently as July 13th of 2024, seventy-two people were killed by armed militia in a village about one hundred kilometres east of the nation’s capital, Kinshasa.

While the equally horrific acts occurring in Ukraine are breathlessly reported within hours, this slaughter in DRC was reported in a Toronto newspaper four days after the event and was buried at the bottom of page five.

Mineral wealth provides the incentive for the ongoing violence. Recent reports indicate that Rwanda’s national army, somewhat ironically named the Rwandan Defence Force, has deployed as many as four thousand soldiers to DRC. Their mission appears to be aiding the militia groups and facilitating the smuggling of minerals into Rwanda. In late January of this year, Rwanda sent even more troops into DRC, in numbers that indicated full-out invasion. Along with a militia group called M23, they captured the capital of DRC’s eastern region, Goma.

Since the 1990s Rwandan-backed militias have taken control of coltan and other mineral-mining operations. The Congolese government estimates that it loses $1 billion (U.S.) each year because of these illegal activities. The Congolese army is fighting to halt these operations but finds itself facing an enemy that is very well armed with modern weapons such as so-called mortar drones. Desertion is also a significant factor, as the militia groups pay their soldiers better wages than DRC can pay. People continue to be forced to flee their homes, a harsh reality in eastern DRC for almost thirty years. This violence has also resulted in severe food shortages as farms are abandoned in areas controlled by militia groups. The only realistic way to put an end to all this would be to confront the Rwandan government directly, with military force if needed. Why this has gone on all this time with no response from the West is a matter for conjecture, though it’s by no means far-fetched to suggest having access to relatively cheap minerals plays a role. A case can also be made that Western governments feel no pressure to act because their citizens are uninformed by media.

DRC has also been plagued by outbreaks of disease. In 1976 the country made headlines due to the outbreak of Ebola, a disease named after a tributary of the Congo River. The Ebola virus was first detected in a village about 110 kilometres from the river but it was decided to name the disease Ebola in order to avoid having the village become forever known as the starting point of this often - though not always - fatal pathogen. There have been subsequent outbreaks that have killed thousands of people. Work to create vaccines began to make progress in 2014. The World Health Organization did much to speed the process and an effective vaccine began to be used in 2016. Recent outbreaks have been quickly brought under control.

There are organizations in the West who lay claim to knowledge of the situation in DRC. Freedom House, a Washington D.C.- based non-profit that assesses how democratic various nations are, continues to list DRC as “not free”. However, a review of current trends suggests that the Freedom House ranking should be reviewed.

In January 2019, when Felix Tshisekedi became President, it marked the first time since independence that the presidency was peacefully transferred from one political party to another. Tshisekedi freed seven hundred political prisoners jailed by his predecessor. Understanding that education is vital to building a free and democratic state, he also invested heavily in schools for DRC’s next generation.

In December 2023 he faced another election, as the country’s constitution requires. Western media tended to focus on claims of voting irregularities that originated mostly with defeated candidates. Voter turnout was 48.8 per cent despite three provinces being unable to safely participate due to ongoing militia violence. That figure marks a small but important increase from the prior election in 2018 when 45.5 per cent of eligible voters cast a ballot (voter turnout in the Ontario election of 2022 was forty-three per cent). Tshisekedi was re-elected by a comfortable margin and reports from well-informed sources within DRC state that the majority of Congolais believe the election was run fairly, the results are credible and that the re-elected President had achieved some positive change that ordinary citizens could see for themselves. The really crucial point is this: if the majority of people in DRC see themselves as living in a democracy, how legitimate can it be for outsiders like Freedom House to claim otherwise?

DRC is just one of many countries that has had a long, hard struggle to achieve democracy, and the hard work of maintaining what has been won never stops. The people of that nation have overcome crushing obstacles, and the progress they’ve made continues to be under siege by those both in Africa and abroad who have very different interests.

Democracy doesn’t just happen by itself. It must be fought for and constantly supported by an informed and active electorate. The people of DRC have learned this vital lesson while too many of us in the West appear to have forgotten it.


Mike Jones: self-issued

Proposed Candidate Recommendation for Controlled Identifiers

The W3C Verifiable Credentials Working Group has published a Snapshot Candidate Recommendation of the Controlled Identifiers specification. This follows the five Candidate Recommendation Snapshots published by the working group in December 2024. Two of these specifications, including Securing Verifiable Credentials using JOSE and COSE, depend upon the Controlled Identifiers spec. The planned update

The W3C Verifiable Credentials Working Group has published a Snapshot Candidate Recommendation of the Controlled Identifiers specification. This follows the five Candidate Recommendation Snapshots published by the working group in December 2024. Two of these specifications, including Securing Verifiable Credentials using JOSE and COSE, depend upon the Controlled Identifiers spec. The planned update to the W3C DID specification also plans to take a dependency upon it.

A W3C Candidate Recommendation Snapshot is intended to become a W3C Candidate Recommendation after required review and approval steps.

Thanks to my co-editor Manu Sporny and working group chair Brent Zundel for their work enabling us to reach this point.

Tuesday, 04. February 2025

Patrick Breyer

Half-decent new Polish proposal on chat control to be discussed on Wednesday

Poland, which currently holds the presidency of the EU Council, is proposing an important change to the much-criticized EU chat control proposal: instead of mandating the blanket scanning of private chats, chat control would, as …

Poland, which currently holds the presidency of the EU Council, is proposing an important change to the much-criticized EU chat control proposal: instead of mandating the blanket scanning of private chats, chat control would remain at the discretion of the providers, as it is today. Representatives of the EU governments will discuss the proposal in the EU Council's Law Enforcement Working Party on Wednesday.

“The new proposal is a breakthrough and a great leap forward in the fight for our digital privacy of correspondence,” comments Patrick Breyer (Pirate Party), former Member of the European Parliament and digital freedom fighter. “The proposal would protect secure encryption and, with it, the security of our smartphones. However, three fundamental problems would remain unsolved:

1. Mass surveillance: Even if providers such as Meta, Microsoft or Google apply chat control “voluntarily”, it is still a completely untargeted and indiscriminate mass surveillance of all private messages sent via these services. According to the EU Commission, around 75% of the millions of private chats, photos and videos leaked every year by the industry’s unreliable chat control algorithms are not criminally relevant; they put our intimate communication into the hands of strangers with whom it is not safe and where it has no business being. A former judge of the European Court of Justice, Ninon Colneric (p. 34 f.), and the European Data Protection Supervisor (para. 11) have warned that this indiscriminate surveillance violates our fundamental rights even when it is left to the providers’ discretion. Two lawsuits against this practice are already pending in Schleswig-Holstein and Bavaria.

The European Parliament proposes a different approach: telecommunications surveillance could be ordered on a mandatory basis, but would be limited to individuals or groups connected to child sexual abuse. Parliament also proposes that providers be required to make their services safer for young people through a set of default settings, prompts and warnings.

2. Digital house arrest: Under Article 6, which Poland proposes to leave unchanged, users under 16 could in future be barred from installing everyday apps from the app store, for their own protection against grooming. This would cover messenger apps such as WhatsApp, Snapchat, Telegram or Twitter, social media apps such as Instagram, TikTok or Facebook, games such as FIFA, Minecraft, GTA, Call of Duty and Roblox, dating apps, and video-conferencing apps such as Zoom, Skype and FaceTime. Such a minimum age would be easy to circumvent and would patronize and isolate young people instead of empowering them.

3. Ban on anonymous communication: Under Article 4(3), which Poland proposes to leave unchanged, users would no longer be able to set up anonymous email or messenger accounts or chat anonymously without having to show an ID or their face, making them identifiable and exposing them to the risk of data leaks. This would restrict, for example, sensitive chats about sexuality, anonymous press communication with sources (e.g. whistleblowers) and political activism.

All in all, compared with the previous texts the Polish proposal is a big step forward for our safety online, but it still needs serious improvement. At the same time, the proposal probably already goes too far for the hardline majority of EU governments and for the EU Commission, whose appetite for surveillance has so far been so extreme that they would rather abandon the victims entirely, without any agreement, than accept a proportionate, court-proof alternative to chat control capable of commanding a political majority.”

Monday, 03. February 2025

Wrench in the Gears

Alison Reflects on Joseph Gonzalez’s Theory of Quantum Realism – A Hopeful Participatory Universe Built On Love!

Below are Joseph Gonzalez’s responses to questions I posed about his theory of quantum realism. You can find out more at his blog bantamjoe.com. I discuss these questions in the video below starting at timestamp 20 minutes. Select links from Bantamjoe.com: The Self-Aware Circle of Consciousness: Here A Holographic Reality Designed By Consciousness, Designed For [...]

Below are Joseph Gonzalez’s responses to questions I posed about his theory of quantum realism. You can find out more at his blog bantamjoe.com. I discuss these questions in the video below starting at timestamp 20 minutes.

Select links from Bantamjoe.com:

The Self-Aware Circle of Consciousness: Here

A Holographic Reality Designed By Consciousness, Designed For You!: Here

Fractal Dimensions of Consciousness: Here

Love, The First Action of Consciousness: Here

Consciousness From Microtubules to Brain Waves: Here

Simulation Theory Vs. Quantum Realism: Here

Link to Heart Forgiveness website.

Elevator Pitch for Quantum Realism

Quantum Realism proposes that the physical universe is not fundamental but instead a virtual construct generated by a self-sustaining quantum network or primal consciousness. The physical phenomena we observe, including space, time, energy and matter, are outputs of this underlying quantum process, more like a computational system where the quantum field serves as the substrate for reality. Unlike classical realism, which treats matter as independent and static, Quantum Realism views the universe as a dynamic, evolving system, continuously shaped by observation and interaction.

This theory aligns with the strange findings of modern physics, such as quantum superposition, entanglement, and the observer effect. These phenomena indicate that reality behaves more like information being processed than a collection of static objects. By grounding itself in the principles of quantum mechanics and information theory, Quantum Realism provides a framework that not only explains physical laws but also integrates consciousness as an active participant in the unfolding of reality.

How Quantum Realism Differs from Other Theories

Quantum Realism stands apart by uniting consciousness and matter within a single quantum substrate. Classical physics focuses on deterministic interactions of matter, while dualism separates consciousness from the material. Quantum Realism suggests that matter emerges from the consciousness field, which acts as a self-generative, non-physical substrate. This explains phenomena like wave-particle duality and the observer effect, where the act of measurement collapses probabilities into specific outcomes.

In contrast to other simulation theories, which claim an external programmer, Quantum Realism describes a holographic reality that is self-generating, requiring no external creator. The consciousness quantum network acts as both the origin and sustainer of reality, dynamically evolving through recursive feedback loops. By embedding consciousness within this process, Quantum Realism resolves the paradox of how non-physical awareness can influence a seemingly physical world, offering a more integrated perspective than either dualism or materialism.

What Led You to This Investigation?

My childhood curiosity about electricity and light led me to start learning the math that was required. It was the whole concept of imaginary numbers that got me questioning reality.

The idea that light oscillates into an imaginary 4th dimension emerged from the mathematical framework of quantum mechanics. I later discovered that imaginary numbers are fundamental to describing oscillatory behavior in wavefunctions, suggesting that light and other quantum phenomena extend beyond the three spatial dimensions we perceive. This mathematical insight led me to consider whether these higher-dimensional oscillations underpin the strange behaviors observed in quantum systems, such as superposition and entanglement.
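(For readers unfamiliar with the point about imaginary numbers, a standard textbook illustration, added here and not part of the interview: a plane wave in quantum mechanics is written with the imaginary unit i, and Euler's formula unpacks it into the oscillations we actually observe,

ψ(x, t) = A e^{i(kx − ωt)} = A [cos(kx − ωt) + i sin(kx − ωt)],

so the cosine and sine terms carry the real oscillation, while the imaginary unit lets a single expression track both amplitude and phase.)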

For me this opened a pathway to investigate how higher-dimensional processes might generate the physical universe as a lower-dimensional projection. By focusing on the recursive nature of these oscillations, I began exploring how reality could arise from iterative feedback within a quantum network. This exploration ties into Quantum Realism’s premise that space, time, and matter are emergent constructs, projected from the recursive operations of the quantum field.

The Circle of Values

Everything in the universe oscillates (light waves, matter waves, and even biological rhythms) indicating an underlying recursive principle. This principle, that I refer to as the “Circle of Values,” describes how primal consciousness perpetuates creation through cyclic processes. Each oscillation represents a feedback loop in which information from the quantum field interacts with localized nodes, generating structure and coherence across scales. These cycles are fractal-like, with self-similar patterns appearing at every level of reality, from quantum fluctuations to galactic rotations.

This recursive behavior mirrors the properties of fractal mathematics, where complexity arises from simple iterative rules. In Quantum Realism, the Circle of Values represents consciousness in action, continuously manifesting and sustaining the fabric of reality. By oscillating through states, this process generates not only physical structures but also the informational coherence necessary for life and awareness to emerge, illustrating the very deep interconnection between consciousness and the material world.

Holographic Reality vs. Simulations

A holographic reality differs fundamentally from a simulation in that it requires no external programmer or hardware. Instead, it emerges intrinsically from within the quantum field itself, with every fractal part containing information about the whole. This self-generative nature is supported by quantum entanglement, where particles separated by vast distances remain instantaneously correlated. Such behavior suggests that reality operates like a hologram, where local phenomena reflect an interconnected structure.

In contrast to simulations, which rely on predefined algorithms and external input, a holographic reality evolves dynamically through feedback between consciousness agents and the quantum network. This perspective resolves the apparent separation between observer and observed, as both are integrated within the same system. Quantum Realism states that the universe is not a static construct but a living, self-referential system that adapts and evolves as consciousness interacts with it.

Free Will: Participants or Puppets?

Quantum Realism portrays localized consciousness as an active participant in shaping reality. While physical parameters provide constraints (such as natural laws and probabilities) free will emerges as consciousness influences the collapse of quantum states. The observer effect demonstrates this participatory role, where the act of measurement determines which potential becomes reality. Thus, consciousness is not merely a bystander but an integral component of the quantum network.

This dynamic allows for meaningful choice within the parameters of the system. While deterministic factors like genetics and environment play a role, the ability of consciousness to interact with the quantum field introduces variability and agency. In this way, Quantum Realism reconciles the apparent tension between determinism and free will, framing individuals as co-creators within an evolving reality.

What Is Quantization?

Quantization is the process by which continuous possibilities are restricted to discrete units, such as energy levels in an atom. This principle is essential for the stability of physical systems, as it prevents infinite regress and ensures coherence. Without quantization, systems would lack the precision needed for consistent behavior, leading to paradoxes like infinite energy densities in classical physics.

In Quantum Realism, quantization also reflects the underlying processing structure of the quantum network. By discretizing energy, time, and space, the system ensures that interactions remain finite and manageable, enabling the emergence of stable structures like atoms and molecules. This discrete framework not only resolves the infinities problem but also provides a foundation for understanding how coherence and order arise in a fundamentally quantum reality.

Time, Causality, and Superposition

In Quantum Realism, time is not an absolute backdrop but an emergent property of processing cycles within the quantum field. Each cycle represents a discrete step in the evolution of reality, giving rise to the ordered flow of events we perceive as time. Superposition, where particles exist in multiple states simultaneously, reflects the quantum field’s ability to hold and process many potentials before they are collapsed into specific outcomes.

Causality emerges from this collapse, as interactions within the quantum network resolve probabilities into ordered sequences. This dynamic interplay between potential and actualization creates the linear progression we experience, while superposition ensures that the system retains flexibility and adaptability. Together, these principles demonstrate how time, causality, and quantum phenomena are deeply interconnected within the framework of Quantum Realism.

The Role of Our 3D Bodies

Our 3D bodies serve as localized nodes within the quantum network, enabling consciousness to interact with the virtual constructs of space and time. By acting as receivers and processors of quantum information, these bodies provide the specificity and coherence needed for complex interactions, such as perception, learning, and decision-making. This localized embodiment allows consciousness to experience and express reality in a focused and constrained way, facilitating exploration and growth.

At the quantum level, our bodies are composed of informational systems governed by quantized interactions, such as atomic vibrations and molecular bonds. These interactions provide the stability necessary for biological processes, while also linking us to the larger quantum network. This dual role (as both physical entities and conduits for consciousness) illustrates the magnificent integration of body, mind, and the quantum substrate.

Reflections on Itzhak Bentov’s Premise

Though admittedly I don’t know much about Bentov, his idea that life represents organized information gifted back to the cosmos aligns with Quantum Realism’s view of consciousness as a dynamic contributor to the quantum field. Each life accumulates unique experiences, which are reintegrated into the system upon death, enriching its informational density and coherence. This cyclical process mirrors the conservation of energy and information observed in physical systems.

By framing life as an exchange of information between localized consciousness and the quantum network, Quantum Realism provides a holistic view of existence. Death is not an end but a transition, where individual contributions are folded back into the universal substrate, furthering the evolution of the system as a whole. This perspective offers both a scientific, existential and spiritual understanding of life’s purpose and continuity.

Hope in Navigating 3D Reality

Quantum Realism offers a hopeful perspective by framing life as an integral part of a coherent, meaningful process. By recognizing that consciousness actively shapes reality, individuals can embrace their role as co-creators within this evolving system. This participatory view inspires purpose and creativity, transforming challenges into opportunities for growth and contribution.

The recursive and interconnected nature of the quantum network suggests that even small actions can have far-reaching effects. This reinforces the idea that every individual’s choices matter, contributing to the larger process of coherence and complexity. In this way, Quantum Realism not only explains the nature of reality but also provides a framework for navigating it with intention and hope.


Talking Identity

The Call Is About To Come From Inside The House

You would have to be living under a rock to have missed all the talk about Agentic AI, and how it is going to revolutionize the way we live and work. AI-powered agents will be anything and everything – from personal shopper to travel concierge, executive assistant to inventory manager, medical diagnostician to customer service […]

You would have to be living under a rock to have missed all the talk about Agentic AI, and how it is going to revolutionize the way we live and work. AI-powered agents will be anything and everything – from personal shopper to travel concierge, executive assistant to inventory manager, medical diagnostician to customer service representative, software developer to security pentester. Article after article is devoted to both the opportunities and the risks. And when it comes to risk, all of us working in the Digital Identity space are not prepared for what is coming.

Photo by Growtika on Unsplash

In the wake of OpenAI releasing Operator, a Computer-Using Agent (CUA), in research preview, I’ve read many breathless posts about the future of Web-based Agentic AI (as opposed to API-based Agentic AI), and how it makes every website “programmable”, even without APIs. If you have worked in software development, you can visualize the mechanics easily – it’s like giving a QA Automation tool like Selenium WebDriver a brain, so that instead of just automating web applications for rinse-and-repeat testing, it can actually read the data, make decisions, adjust course, and take action. That framing should also make it easy to immediately grok how this will break the identity and security infrastructure we currently have, or are putting in place. I mean, we have been dealing with these in our QA Automation projects forever. I thought I’d share the thoughts that immediately jumped to my mind, mostly because I need to not be the only one worrying about these (#MiseryLovesCompany).

1) Bypassing/Breaking/Compromising Authentication Mechanisms

Since CUAs rely on web-based browsing, they necessarily run into some of the same break points that QA automation runs into – like multi-factor authentication, bot verification techniques, and more. Any CUA would currently have to give the user back control of the browser to take these actions before proceeding. This high friction point is going to run head first into consumer dissatisfaction and business mandates to “just make it work”, and all of us in identity can guess exactly what will follow:

Users will hand over passwords to their Agent service so it can log in as them, or grant them access to their password managers (probably as a feature getting built into first the browser password manager and then the generic password managers).

Users will turn off MFA to allow their agents to work. Any guesses on what will happen to passkeys? If syncing of the private key was the worst that you thought could happen….

There will be people looking at how authorized session hijacking can become a feature to leverage, much like how Selenium allows direct manipulation of cookies and local storage, enabling agents to hoover up valid session tokens and bypass login screens. Case in point: Build MCP servers for any website with automatic browser auth syncing

Just like Selenium can sometimes bypass automation (bot) detection protections using headless browsing and user-agent spoofing, expect Agentic AI tools to develop capabilities to do the same.

2) Violating Authorization Boundaries (When They Exist)

QA Automation scripts often execute actions as a high-privilege test user (e.g., an admin account) to avoid breaking tests that are verifying functionality but not data or access restrictions. The rush to deploy Web-based Agentic AI tools will mean that, like other tools of the past, they won’t be built with proper scope controls, thereby driving implementors to grant them excessive privileges. You can guess the rest.

As for consumer applications, those rarely have differentiated access control models built in for their users. That means a customer that wants to use a CUA, but limit what it can and cannot do will be out of luck. We saw this play out in the days of screenscraping-based personal finance applications, and how long it took for us to move everyone over to OAuth2 and FAPI as the better and more secure approach.

3) Weakening Security Controls

(aka “Is that a DDoS attack, or an army of Agents here to take advantage of the limited time deal we announced?”)

It won’t just be Authentication controls that are impacted. There are many security protections that will likely be obstacles in the path of Web-based Agentic AI. Would any of us be surprised to find out that IT teams were told to weaken or disable security mechanisms (e.g., Content Security Policy, SameSite cookies, Bot and DDoS detection) to facilitate automated agents, inadvertently creating vulnerabilities?

And these are just what immediately jumped to mind. I am sure there are many more that I’m not even thinking of.

Identity vendors and practitioners everywhere really need to shift into high gear to help organizations properly prepare for what’s headed their way. The demand to support Web-based Agentic AI will put a great deal of pressure on them to enable safe acceptance, and being the “Agent of No” (see what I did there) is not likely to go well. As for what can be done – more on that later.


Damien Bod

Multiple client sign-in customizations using Duende identity provider

This post looks at customizing the sign-in UI and the sign-in options in an ASP.NET Core application using Duende IdentityServer and ASP.NET Core Identity. There are multiple ways of changing the look and feel of the UI for different OpenID Connect clients or different client flows. Code: https://github.com/damienbod/duende-multi-tenant Blogs in the series Setup The solution […]

This post looks at customizing the sign-in UI and the sign-in options in an ASP.NET Core application using Duende IdentityServer and ASP.NET Core Identity. There are multiple ways of changing the look and feel of the UI for different OpenID Connect clients or different client flows.

Code: https://github.com/damienbod/duende-multi-tenant

Blogs in the series

Multiple client sign-in customizations using Duende identity provider
Customizing a single client sign-in using parameters in Duende IdentityServer

Setup

The solution is set up using three different ASP.NET Core applications. In the example code, the “Admin” application has different federation authentication options compared to the “Shop” client authentication sign-in experience. The client ID from the authentication context is used to customize the look and feel, i.e. the styles, the layout and the options of the client are used to define which federation and authentication options are possible.

Customization of the sign-in options

The EnableLocalLogin parameter and the IdentityProviderRestrictions parameter from Duende IdentityServer can be used to change the sign-in options for the end user of the applications. If the EnableLocalLogin option is set to false, the username and password login is disabled. The IdentityProviderRestrictions setting can be used to define which federation options are allowed for the client sign-in.

new Client
{
    ClientId = "shop-client-ui",
    // more client options ...

    // show/hide the local authentication screen
    EnableLocalLogin = false,

    // federated authentication options to display
    // empty displays all
    IdentityProviderRestrictions = ["AdminEntraID"]
},

Layout Customization for clients

Sometimes the identity provider application needs to display a different look and feel for the different clients. To achieve this, a separate login screen is used and each login screen uses a different layout. The layout and the style are changed using the client ID from the authorization context. If the shop client is used, the user is redirected to a different Razor Page. The UseShopClientDisplay bool is used for this.

private async Task BuildModelAsync(string? returnUrl)
{
    Input = new InputModel
    {
        ReturnUrl = returnUrl
    };

    var context = await _interaction.GetAuthorizationContextAsync(returnUrl);

    if (context?.Client.ClientId == "shop-client-ui")
    {
        View = new ViewModel
        {
            UseShopClientDisplay = true
        };

        // Process in the shop client login
        return;
    }

    // standard code from duende template
}

In the OnGet method of the Login Razor Page, the user is redirected to a different page if UseShopClientDisplay is true. The returnUrl is passed as a parameter.

public async Task<IActionResult> OnGet(string? returnUrl)
{
    await BuildModelAsync(returnUrl);

    if (View.IsExternalLoginOnly)
    {
        // we only have one option for logging in and it's an external provider
        return RedirectToPage("/ExternalLogin/Challenge", new { scheme = View.ExternalLoginScheme, returnUrl });
    }

    if (View.UseShopClientDisplay)
    {
        return RedirectToPage("ShopClient", new { returnUrl });
    }

    return Page();
}

The HTML part of the Razor Page uses a different Layout and the Layout is set explicitly in the Razor Page.

@page
@model IdentityProvider.Pages.Login.ShopClient
@{
    Layout = "Shared/_LayoutShopClient";
}

<div class="login-page">
    <div class="lead">
        <h1>Shop Client Login</h1>
        <p>Choose how to login</p>
    </div>

Different options, styles and layouts can be set up for any client.

Or a different client display using the ClientID as the switch:

With this setup any CSS and any layout can be used for the different clients. This is one way of having a multi-tenant or multiple client setup. This setup uses a different Client ID to style and add or remove options.

Notes

This works well and does not require much effort. Sometimes customization is required within a single client. If you intend to use this in a multi-tenant solution, you should disable the default sign-in screen if building from the ASP.NET Core Identity templates. In a follow-up post, I will look at further customization within a single client option.
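As a rough sketch of that note about the default sign-in screen (my own illustration, not from the original post): registering Identity with AddIdentity rather than AddDefaultIdentity avoids mapping the scaffolded /Identity/Account pages, so only the custom login Razor Pages shown above are reachable. ApplicationUser and ApplicationDbContext stand in for whatever types the template generated.

// Program.cs (sketch): register ASP.NET Core Identity without the default scaffolded UI
builder.Services.AddIdentity<ApplicationUser, IdentityRole>(options =>
    {
        options.SignIn.RequireConfirmedAccount = true;
    })
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders();

// Note: AddDefaultIdentity<ApplicationUser>() would also call AddDefaultUI(), which maps
// the stock /Identity/Account/Login pages alongside the custom client-specific pages.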

Links

https://docs.duendesoftware.com/identityserver/v7

https://docs.duendesoftware.com/identityserver/v7/ui/federation/

https://learn.microsoft.com/en-us/aspnet/core/razor-pages


Mike Jones: self-issued

Twenty Years of Digital Identity!

Kim Cameron first told me what Digital Identity is on February 1, 2005. He said that the Internet was created without an identity layer. He encouraged me “You should come help build it with me.” I’ve been at it ever since! What I wrote about digital identity a decade ago remains as true today: An […]

Kim Cameron first told me what Digital Identity is on February 1, 2005. He said that the Internet was created without an identity layer. He encouraged me “You should come help build it with me.” I’ve been at it ever since!

What I wrote about digital identity a decade ago remains as true today:

An interesting thing about digital identity is that, by definition, it’s not a problem that any one company can solve, no matter how great their technology is. For digital identity to be “solved”, the solution has to be broadly adopted, or else people will continue having different experiences at different sites and applications. Solving digital identity requires ubiquitously adopted identity standards. Part of the fun and the challenge is making that happen.

I’m not going to even try to list all the meaningful identity and security initiatives that I’ve had the privilege to work on with many of you. But I can’t resist saying that, in my view, OpenID Connect, JSON Web Token (JWT), and OAuth 2.0 are the ones that we knocked out of the park. I tried to distill the lessons learned from many of the initiatives, both successes and failures, during my 2023 EIC keynote Touchstones Along My Identity Journey. And there’s a fairly complete list of the consequential things I’ve gotten to work on in my Standards CV.

I’ll also call attention to 2025 marking twenty years of the Internet Identity Workshop. I attended the first one, which was held in Berkeley, California in October 2005, and all but one since. What a cast of characters I met there, many of whom I continue working with to this day!

As a personal testament to the value of IIW, it’s where many of the foundational decisions about what became JWS, JWE, JWK, JWT, and OpenID Connect were made. Particularly, see my post documenting decisions made at IIW about JWS, including the header.payload.signature representation of the JWS Compact Serialization and the decision to secure the Header Parameters. And see the posts following it on JWE decisions, naming decisions, and JWK decisions. IIW continues playing the role of enabling foundational discussions for emerging identity technologies today!

It’s been a privilege working with all of you for these two decades, and I love what we’ve accomplished together! There’s plenty of consequential work under way and I’m really looking forward to what comes next.

Images are courtesy of Doc Searls. Each photo links to the original.

Saturday, 01. February 2025

Werdmüller on Medium

I want you to do these four things right now

Some simple steps to keep yourself safe. Continue reading on Medium »

Some simple steps to keep yourself safe.

Continue reading on Medium »

Thursday, 30. January 2025

Heres Tom with the Weather

Win Stupid Prizes

Deadly D.C. Plane Crash Comes Months After Congress Ignored Warning About Traffic at Reagan Airport As the new administration is playing stupid games, yesterday morning, prior to yesterday’s aviation disaster, professor Thomas Schaller cautioned: An FAA employee I know confirms agency already lacks sufficient air traffic controllers. The so-called “buyouts” and other attacks on federa

Deadly D.C. Plane Crash Comes Months After Congress Ignored Warning About Traffic at Reagan Airport

As the new administration is playing stupid games, yesterday morning, prior to yesterday’s aviation disaster, professor Thomas Schaller cautioned:

An FAA employee I know confirms agency already lacks sufficient air traffic controllers. The so-called “buyouts” and other attacks on federal employees won’t help. Remember that fact when the flight delays (crashes?) commence and Trumpers start falsely blaming DEI or Biden.

This should be a wakeup call and I have a deeper appreciation for people like Phyllis Fong who this week have resisted the illegal orders that are already causing significant harm. On the other hand, if you like anarchy and disaster, congratulations.

Wednesday, 29. January 2025

The Tao of XDI

Take 1... Solid Pods and Dids

My first attempt at building a decentralized app in this day and age will use Solid Pods and DIDs. The goal? A super simple “BooksWeLike” app—a place where I can review books and see what my friends are reading and enjoying. What makes this app different is how it handles data. Unlike traditional apps where data lives in a centralized database, my app will let users store their own data in Sol

My first attempt at building a decentralized app in this day and age will use Solid Pods and DIDs. The goal? A super simple “BooksWeLike” app—a place where I can review books and see what my friends are reading and enjoying.


What makes this app different is how it handles data. Unlike traditional apps where data lives in a centralized database, my app will let users store their own data in Solid Pods. Think of a Pod as your own personal data vault—you control who can access it and how it’s used. And instead of relying on centralized logins like Google or Facebook, I’ll use Decentralized Identifiers (DIDs), which allow users to prove their identity on their own terms.


The plan for the app is straightforward:

• If you already have a DID or a Solid Pod, you can sign in using your existing accounts.

• If you don’t, the app will help you create them when you sign up.


Of course, part of this journey is figuring out how practical and possible all of this really is. Beyond building the app, I’ll also evaluate the tools, SDKs, client libraries, and documentation available for Solid and DID developers. How well is the community of builders being enabled? I’ll compare my experience with other distributed ecosystems as I attempt to replicate this app in different environments in the future. Once the app exists across multiple ecosystems, I can explore broader topics like ecosystem interoperability and federation.


These technologies are still evolving, and I’m excited to explore what’s possible—and what needs improvement.


So, what about you? Have you already taken the plunge into the world of DIDs or Solid Pods? Or is this your first time hearing about them? Let’s find out together as I document this journey.


In my next post, I’ll dive into the nitty-gritty of authentication—getting users to log in with their DIDs and connecting them to their Pods. I suspect it’s trickier than it sounds, but that’s all part of the adventure.

Wednesday, 22. January 2025

Phil Windleys Technometria

Zero Trust, Least Privilege, and Just-in-Time Access

Security professionals have long taught that organizations should limit the access individuals have in computer systems to just those privileges necessary to perform their job.

Security professionals have long taught that organizations should limit the access individuals have in computer systems to just those privileges necessary to perform their job. This is known as the principle of least privilege. The problem is that knowing this is a best practice and putting it into practice are two different things. Traditionally, organizations have used access control lists (ACLs) and role-based access control (RBAC) based on groups to authorize activities. These methods are static, requiring that lists or groups be updated manually as people move between roles or leave employment. When this is extended to partners, suppliers, and other services, the problem is that much worse. And excess standing privileges are a real security risk.

Standing privileges are characterized by persistent access regardless of whether the person is using the resource or not, predefined scope where role is used to define broad access, and minimal oversight with little monitoring and accountability. Standing privileges increase the attack surface, letting attackers exploit broad permissions without further escalation. In addition, over time people accumulate permissions beyond what they need to perform their current job, a situation known as privilege creep.

In an ideal world, least privilege is like a shrink wrap, constantly adjusting the access perimeter as the needs of the employee change. Sometimes they expand and the shrink wrap expands to seamlessly grant any access needed to perform a job. Sometimes the needs shrink and the access perimeter of the employee contracts as well. By limiting access to just that necessary to perform tasks, least privilege ensures that the attack surface that an attacker can exploit is as small as possible.

Zero Trust

Sometimes it's easy to get least privilege and zero trust confused. Zero trust is an overall security framework that requires continuous trust verification. Zero trust is a strategic, overarching trust model for an entire organization. Least privilege, in contrast, is more tactical, determining who can access what and when they can do it.

To see how least privilege fits into a zero trust strategy, consider a database administrator (DBA). Least privilege might set access controls such that the DBA can manage databases, but cannot view customer data stored within them. If their credentials are compromised, the attacker cannot steal sensitive information because the DBA’s privileges are tightly scoped. Zero trust relies on the DBA's access being valid, but might also check for unusual activity such that if the DBA appears to be accessing the database from a new location or at an unusual time, access is curtailed until the DBA is re-authenticated.

As the example shows, least privilege is an important part of zero trust, but only part. Other tactics that play in a zero trust strategy include device management, multi-factor authentication, and segmenting applications and processes (i.e., microsegmentation) to make fine-grained access control possible. Still, least privilege is a core part of a zero trust strategy. And least privilege depends on eliminating standing privileges.

Eliminating Standing Privileges

Recent developments in dynamic authorization have made it easier than ever to eliminate standing privileges. Standing privileges arise when automatically updating an employee's access in response to changing circumstances is difficult. Modern policy-based authorization systems like Cedar allow organizations to control access through policies that state who can access what and under what conditions. These policies are managed like code, making them easier to maintain. More importantly, they can automatically respond to changing circumstances.

For example, the first of the following two Cedar policies forbids anyone to access any resource that's considered "critical" unless they're on a managed device with an acceptable security posture. The second permits employees in the Finance group to access any finance application. These policies work together, so that if the finance application is also a critical resource, then finance employees would also have to be on a managed device with an acceptable security posture. As employees change roles that group is automatically updated from the HR system, growing or shrinking access as needed.

forbid(
  principal,
  action,
  resource in Category::"CriticalResources"
) unless {
  context.device.security.risk == "Secure" &&
  context.device.security.assessment.overall >= 80
};

permit(
  principal in UserGroup::"Finance",
  action,
  resource in Category::"FinanceApplications"
);

While policy-based access control (PBAC) can be used to mimic a role-based access control system, proper system segmentation (i.e. being able to identify finance applications) and device management allow finer-grained access control where employees' actions can be restricted to specific systems and only when they're using a company-owned device that is properly secured. Access control can be limited to certain times or from specific locations. All of these reduce the surface area of an employee's access perimeter for better security.

Just-in-Time Access Control

We can shrink the access perimeter even further using just-in-time access control. Just-in-time (JIT) access control is a mechanism that grants users, applications, or processes access to resources only for a limited time and only when it is explicitly needed. This minimizes the risks associated with long-term or persistent access privileges. While the principle of least privilege focuses on ensuring users or systems have the minimum permissions required to perform their roles or tasks, JIT access control refines this further with several other important features:

Time-Bound Privileges: JIT ensures permissions are time-limited, reducing the window of opportunity attackers have to exploit rarely-used accounts. For example, a developer needing access to a production server might receive elevated privileges only for a specific task and only for a set duration.

Dynamic Privilege Assignment: Even when policies are evaluated for each access, the permissions they allow are available indefinitely. JIT adds another dimension to the dynamic nature of PBAC where privileges are granted on demand and revoked automatically when no longer needed.

Granular Control: Dynamic privilege assignment complements PBAC by making access more granular—not just who can access what, but also when and for how long.

JIT access control might, for example, be used to limit access to payroll functions except during the window when payroll is being processed. Another example is in so-called "break-glass" situations where access is granted to production resources in an emergency or in the case of unexpected failure. Other examples use dynamic roles (e.g., on call) to grant access or require a third party (like a manager) to approve access.

These scenarios can fit within a policy-based authorization architecture using mechanisms such as dynamically altering roles or adding information to the authorization request context based on querying other systems or utilizing permission tokens that carry time-based permissions with them. For example, the following policy makes use of an assignment to process payroll in the oncall tracking system to ensure only people with an oncall assignment of "payroll" can process the payroll.

forbid(
  principal,
  action == "Payroll::Process",
  resource
) unless {
  context.oncall.assignment == "payroll"
};

For this to work, the authorization agent that creates the request for the policy authorizer has to ensure that the context for the request includes the correct oncall assignment and oncall assignments need to be automated.
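A minimal sketch of what such an authorization agent could look like (my own illustration; the IOncallDirectory and IPolicyEngine interfaces are hypothetical stand-ins, not part of Cedar or of the original post): it queries the oncall tracking system and places the assignment into the request context before the policy above is evaluated.

using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical: wraps the oncall tracking system
public interface IOncallDirectory
{
    Task<string> GetAssignmentAsync(string userId);
}

// Hypothetical: evaluates Cedar-style policies against a request plus context
public interface IPolicyEngine
{
    Task<bool> IsAuthorizedAsync(string principal, string action, string resource,
        IDictionary<string, object> context);
}

public class PayrollAuthorizationAgent
{
    private readonly IOncallDirectory _oncall;
    private readonly IPolicyEngine _policies;

    public PayrollAuthorizationAgent(IOncallDirectory oncall, IPolicyEngine policies)
    {
        _oncall = oncall;
        _policies = policies;
    }

    public async Task<bool> CanProcessPayrollAsync(string userId)
    {
        // Query the oncall system so context.oncall.assignment is always current and automated.
        var assignment = await _oncall.GetAssignmentAsync(userId);

        var context = new Dictionary<string, object>
        {
            ["oncall"] = new Dictionary<string, object> { ["assignment"] = assignment }
        };

        // The policy engine applies the forbid-unless policy shown above.
        return await _policies.IsAuthorizedAsync(
            principal: userId,
            action: "Payroll::Process",
            resource: "payroll",
            context: context);
    }
}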

Putting it All Together

Zero trust, the principle of least privilege, and just-in-time access work together to create a comprehensive security framework.

Zero trust enables an overarching strategy that mandates systems, resources, and actions be designed such that it is possible to continuously verify every action.

Principle of Least Privilege forms the heart of the zero trust strategy, mandating authentication and authorization systems that can dynamically grant fine-grained access through policy.

Just-in-time authorization augments the authorization system so that permissions can be granted not just based on who and where, but also only when they are needed to perform critical actions.

The well-known adage that you can't buy security applies to zero trust and JIT. There are products and technologies that make just-in-time, dynamic fine-grained access control possible, but besides putting them in place, you must also integrate them, make any necessary changes to other systems, and implement governance, monitoring, and auditing to ensure they work. These are organizational changes that take time, money, and perseverance. Done right, the payoff is not just a reduced attack surface and better security, but more productive employees as well. This may seem counter-intuitive, but poorly implemented, piecemeal security measures put much of the burden for keeping systems safe on employees who deal with tactics like frequent, yet ineffective password changes or requesting, and then waiting for, permissions to do their job.

When dynamic access control with JIT access is thoughtfully implemented, you shift the burden of security from employees to systems that automate protection, making it proactive and intelligent. Reducing friction so that employees can do their job while also enhancing security requires balance and a holistic approach that aligns technology, process, and culture. This transformation requires real effort but offers substantial rewards: resilient, secure systems; empowered employees; and peace of mind.

Photo Credit: Shrink Wrapped Computers from DALL-E (public domain) Prompt: draw a wide format picture of several colorful, shrink wrapped computers and smartphones.

Tuesday, 21. January 2025

Wrench in the Gears

Andreessen and Horowitz’s Debrief On Their Fairshake PAC and Sunny Days Ahead For “Little Tech” Under Trump

I took notes on Marc Andreessen and Ben Horowitz’s post-election discussion of their Fairshake pro-crypto Pac and the coming Trump presidency about two months ago, but set them aside and other topics took precedence. Given the rising craziness in the crypto space around the inauguration, I thought it was a good time to put them [...]

I took notes on Marc Andreessen and Ben Horowitz’s post-election discussion of their Fairshake pro-crypto Pac and the coming Trump presidency about two months ago, but set them aside and other topics took precedence. Given the rising craziness in the crypto space around the inauguration, I thought it was a good time to put them out for consideration. A quick glance at the finances of their pro-crypto / pro-blockchain Pac should be a wake-up call to anyone doubting the rising tide of automated, smart contract law and global cyber governance. We need to wrap our heads around the fact that it’s less about money and power and more about information theory, token engineering, and the shift of social systems into a networked super intelligence operating under the United Nations Sustainable Development Goals for expanded “human potential.” It is a bipartisan effort. It is an international effort. It will be framed as a “populist uprising” to take back “the commons,” engineered of course by the world’s largest asset holders. In my opinion, meme-coins are biosemiotic aids facilitating emergent swarm intelligence. Tread carefully around those digital mind viruses folks. A lot of money, and tokens, are being thrown around trying to get their planned cyberphysical-sociotechnical future to stick.

Source: https://web.archive.org/web/20250118220652/https://www.followthecrypto.org/committees/C00835959
Source: https://embed.kumu.io/57d6a066cd053fc3241a8ee0c984a34e#untitled-map?s=bm9kZS1JVzhhQVBxVg%3D%3D

Ripple is a major donor to the Fairshake Pac. See its relation to digital ID and MIT Media Lab / social physics and automated legal systems.

My video recorded today on Trump’s second inauguration:

The Why Files “irradiated wolves” video: 

The A16Z discussion comparing the Biden / Trump positions on crypto and emerging tech: 

An older presentation on Laura Arrillaga Andreessen (Marc’s wife) and social impact finance at Stanford:

Two older presentations on next-gen nuclear being promoted vigorously by A16z:

Monday, 20. January 2025

Damien Bod

ASP.NET Core user application access token management

This article looks at managing application access tokens in an ASP.NET Core web application. Any application with or without a user can use application access tokens as long as the application can persist the tokens in a safe way. Code: https://github.com/damienbod/token-mgmt-ui-application Blogs in this series Setup The ASP.NET Core web application authenticates using OpenID Connect […]

This article looks at managing application access tokens in an ASP.NET Core web application. Any application with or without a user can use application access tokens as long as the application can persist the tokens in a safe way.

Code: https://github.com/damienbod/token-mgmt-ui-application

Blogs in this series

ASP.NET Core user delegated access token management
ASP.NET Core user application access token management
ASP.NET Core delegated OAuth Token Exchange access token management
ASP.NET Core delegated Microsoft OBO access token management (Entra only)

Setup

The ASP.NET Core web application authenticates using OpenID Connect, with OpenIddict as the secure token server. The application needs to use data from an app-to-app resource. An OAuth client credentials flow is used to get an application access token to access the API. The client credentials flow can only be used when the application can keep a secret. This token has nothing in common with the delegated access token from the user authentication. The token is persisted once for the application; an in-memory cache is used for this. The application sends the application access token as a bearer token to the API.

What must an application manage?

An access token management solution must ensure that tokens are securely stored per application for application tokens and updated after each UI authentication or refresh. The solution should be robust to handle token expiration, function seamlessly after restarts, and support multi-instance deployments. The tokens must be persisted safely in multiple instance setups. Additionally, it must effectively manage scenarios involving invalid or missing access tokens.

Properties of token management in the solution setup:

The access token is persisted per application
The token expires
The token needs to be persisted somewhere safely (safe and encrypted storage if not in-memory)
The solution must work after restarts
The solution must work for multiple instances when deployed to multi-instance deployments
The solution must handle invalid access tokens or missing access tokens

Implementation example

An ApplicationAccessTokenCache service is used to manage the access tokens for the application. The service is registered as a singleton and runs once for the whole application. Each request scope can use this. The application looks in the cache for a valid token and if no valid token is present, the service requests a new access token using the OAuth client credentials flow. The token is persisted to the cache using the client ID. This means only one token can exist per client definition.

using IdentityModel.Client;
using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

namespace Ui;

/// <summary>
/// Cache persists token per application
/// </summary>
public class ApplicationAccessTokenCache
{
    private readonly ILogger<ApplicationAccessTokenCache> _logger;
    private readonly HttpClient _httpClient;
    private readonly IConfiguration _configuration;

    private static readonly object _lock = new();
    private readonly IDistributedCache _cache;

    private const int cacheExpirationInDays = 1;

    private class AccessTokenItem
    {
        public string AccessToken { get; set; } = string.Empty;
        public DateTime ExpiresIn { get; set; }
    }

    public ApplicationAccessTokenCache(
        IConfiguration configuration,
        IHttpClientFactory httpClientFactory,
        ILoggerFactory loggerFactory,
        IDistributedCache cache)
    {
        _configuration = configuration;
        _httpClient = httpClientFactory.CreateClient();
        _logger = loggerFactory.CreateLogger<ApplicationAccessTokenCache>();
        _cache = cache;
    }

    public async Task<string> GetApiToken(string clientId, string scope, string secret)
    {
        var accessToken = GetFromCache(clientId);

        if ((accessToken != null) && (accessToken.ExpiresIn > DateTime.UtcNow))
        {
            return accessToken.AccessToken;
        }

        _logger.LogDebug("GetApiToken new from secure token server for {clientId}", clientId);

        var newAccessToken = await GetInternalApiToken(clientId, scope, secret);
        AddToCache(clientId, newAccessToken);

        return newAccessToken.AccessToken;
    }

    private async Task<AccessTokenItem> GetInternalApiToken(string clientId, string scope, string secret)
    {
        try
        {
            var disco = await HttpClientDiscoveryExtensions.GetDiscoveryDocumentAsync(
                _httpClient, _configuration["OpenIDConnectSettings:Authority"]);

            if (disco.IsError)
            {
                _logger.LogError("disco error Status code: {discoIsError}, Error: {discoError}", disco.IsError, disco.Error);
                throw new ApplicationException($"Status code: {disco.IsError}, Error: {disco.Error}");
            }

            var tokenResponse = await HttpClientTokenRequestExtensions.RequestClientCredentialsTokenAsync(_httpClient, new ClientCredentialsTokenRequest
            {
                Scope = scope,
                ClientSecret = secret,
                Address = disco.TokenEndpoint,
                ClientId = clientId
            });

            if (tokenResponse.IsError)
            {
                _logger.LogError("tokenResponse.IsError Status code: {tokenResponseIsError}, Error: {tokenResponseError}", tokenResponse.IsError, tokenResponse.Error);
                throw new ApplicationException($"Status code: {tokenResponse.IsError}, Error: {tokenResponse.Error}");
            }

            return new AccessTokenItem
            {
                ExpiresIn = DateTime.UtcNow.AddSeconds(tokenResponse.ExpiresIn),
                AccessToken = tokenResponse.AccessToken!
            };
        }
        catch (Exception e)
        {
            _logger.LogError("Exception {e}", e);
            throw new ApplicationException($"Exception {e}");
        }
    }

    private void AddToCache(string key, AccessTokenItem accessTokenItem)
    {
        var options = new DistributedCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

        lock (_lock)
        {
            _cache.SetString(key, JsonSerializer.Serialize(accessTokenItem), options);
        }
    }

    private AccessTokenItem? GetFromCache(string key)
    {
        var item = _cache.GetString(key);

        if (item != null)
        {
            return JsonSerializer.Deserialize<AccessTokenItem>(item);
        }

        return null;
    }
}

The ApplicationUsersService class uses the access token from the token service. This is a scoped service and the data is requested from the API using a bearer token in the authorization header.

using IdentityModel.Client;

namespace Ui;

public class ApplicationUsersService
{
    private readonly IConfiguration _configuration;
    private readonly IHttpClientFactory _clientFactory;
    private readonly ApplicationAccessTokenCache _apiTokenCacheClient;

    public ApplicationUsersService(IConfiguration configuration,
        IHttpClientFactory clientFactory,
        ApplicationAccessTokenCache apiTokenCacheClient)
    {
        _configuration = configuration;
        _clientFactory = clientFactory;
        _apiTokenCacheClient = apiTokenCacheClient;
    }

    /// <summary>
    /// HttpContext is used to get the access token and it is passed as a parameter
    /// </summary>
    public async Task<string> GetPhotoAsync()
    {
        try
        {
            var client = _clientFactory.CreateClient();

            client.BaseAddress = new Uri(_configuration["AuthConfigurations:ProtectedApiUrl"]!);

            var access_token = await _apiTokenCacheClient.GetApiToken(
                "CC",
                "myccscope",
                "cc_secret"
            );

            client.SetBearerToken(access_token);

            var response = await client.GetAsync("api/ApplicationUsers/photo");

            if (response.IsSuccessStatusCode)
            {
                var data = await response.Content.ReadAsStringAsync();
                if (data != null)
                    return data;

                return string.Empty;
            }

            throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
        }
        catch (Exception e)
        {
            throw new ApplicationException($"Exception {e}");
        }
    }
}

The required services are added to the application in the program file.

builder.Services.AddSingleton<ApplicationAccessTokenCache>();
builder.Services.AddScoped<ApplicationUsersService>();
builder.Services.AddHttpClient();
builder.Services.AddDistributedMemoryCache();

The token cache works great when using an in-memory cache. If using a persistent cache, care needs to be taken that the access tokens are persisted in a safe way.
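As a rough sketch of that caveat (my own illustration, not from the original post): if the IDistributedCache is switched from the in-memory implementation to a shared store such as Redis, the cached entry can be encrypted with ASP.NET Core Data Protection before it is written. The Redis connection string and the protector purpose string below are placeholder assumptions.

using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.Caching.Distributed;

// Program.cs (sketch): replace AddDistributedMemoryCache with a shared Redis cache
builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
});
builder.Services.AddDataProtection();

// Sketch of protecting the serialized token before it is persisted to the shared cache
public class ProtectedTokenCacheWriter
{
    private readonly IDistributedCache _cache;
    private readonly IDataProtector _protector;

    public ProtectedTokenCacheWriter(IDistributedCache cache, IDataProtectionProvider provider)
    {
        _cache = cache;
        _protector = provider.CreateProtector("ApplicationAccessTokenCache"); // purpose string is an assumption
    }

    public void Save(string key, string serializedToken, DistributedCacheEntryOptions options)
        => _cache.SetString(key, _protector.Protect(serializedToken), options);

    public string? Load(string key)
    {
        var item = _cache.GetString(key);
        return item == null ? null : _protector.Unprotect(item);
    }
}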

Notes

In follow-up blogs, I will look at the different strategies used to implement token management in ASP.NET Core web applications.

Microsoft.Identity.Web delegated access tokens & OBO access tokens
Microsoft.Identity.Client application tokens
Azure SDK tokens
Handling multiple access tokens
OAuth Token Exchange for downstream user delegated access tokens

Links

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/social/additional-claims

https://github.com/dotnet/aspnetcore/issues/8175

Saturday, 18. January 2025

Wrench in the Gears

Trump Dance, Oscillation, And Going “Inside”

In this morning’s video I somehow missed the fact that the inauguration coincides with MLK Day, which is an interesting sync with the “melanin as optical material” lens. I’m going to drop a few links to old blog posts that touch on the YMCA and “out of school time learning” as well as my [...]

In this morning’s video I somehow missed the fact that the inauguration coincides with MLK Day, which is an interesting sync with the “melanin as optical material” lens. I’m going to drop a few links to old blog posts that touch on the YMCA and “out of school time learning” as well as my “Momo” read-aloud playlist. Maybe I should do a read-aloud of Bentov’s “Stalking the Wild Pendulum,” too? I’m understanding it much better now. Look for a discussion of quantum realism with Joseph Gonzales in the coming week. I’ll just say it once again, I can’t believe I have to talk about any of this, but if you don’t take it too seriously, conjecturing can be rather fun.

https://x.com/MAHAAction/status/1879945173721051166

Going “inside”

Momo Read Aloud Playlist https://wrenchinthegears.com/2017/05/08/out-of-school-time-learning-a-wolf-in-sheeps-clothing/ https://wrenchinthegears.com/2018/03/18/too-big-to-map-but-i-tried/ https://wrenchinthegears.com/2019/11/15/the-family-friendly-schools-act-a-set-up-for-soft-policing-schools-to-profit-impact-investors/ https://wrenchinthegears.com/2023/02/27/gods-eye-view-part-5-prediction-markets-in-public-policy/ https://wrenchinthegears.com/2023/02/05/dallas-esports-and-bucky-ballers-guest-post-and-follow-up-discussion/ https://ymcadallas.org/programs/youth-sports/esports

Friday, 17. January 2025

Wrench in the Gears

Gestalt, Quakers and Carbon Trading – Do You See Faces Or A Vase?

In this video I draw connections between Gestalt theory and organizational development as applied to digital groupthink in the context of Christiana Figueres, a leader in the development of carbon trading, who trained at the Gestalt Institute of Cleveland. As a young woman, she attended Swarthmore College, a Quaker institution. The first wife of her [...]

In this video I draw connections between Gestalt theory and organizational development as applied to digital groupthink in the context of Christiana Figueres, a leader in the development of carbon trading, who trained at the Gestalt Institute of Cleveland. As a young woman, she attended Swarthmore College, a Quaker institution. The first wife of her father, three-time president of Costa Rica and MIT alumnus, had ties to Quaker expats (Henrietta Boggs’ uncle and aunt) who emigrated south from Alabama in the 1950s.

Kenneth Boulding, a Quaker economist who developed some of the first ideas around social impact investing, and his wife Elise, a sociologist who built the field of peace studies, were affiliated with Swarthmore, though Kenneth only did a year’s stint as a guest lecturer.

When our family toured the campus for a college visit, it was made clear that only the top-tier candidates would be accepted, because many slots were reserved for equity outreach. Interestingly a high percentage of students presenting that day asserted their intentions to become economists with aspirations of working at the World Bank and the IMF. That was an eye-opener for sure! 

https://prabook.com/web/christiana.figueres/451403
https://www.swarthmore.edu/news-events/un-climate-chief-christiana-figueres-79-among-most-influential-people-2016
https://acrcarbon.org/news/winrock-international-applauds-board-member-christiana-figueres-on-united-nations-climate-change-secretariat-appointment/
https://www.ecosystemmarketplace.com/articles/christiana-figueres-close-personal-carbon-markets/
https://embed.kumu.io/a257abdf4aa4cbf3f75b90154c4f5099#untitled-map?s=bm9kZS1ycmxMMHJvNw%3D%3D
https://www.neh.gov/humanities/2014/januaryfebruary/statement/first-lady-the-revolution
https://www.quakersintheworld.org/quakers-in-action/257/Quakers-in-Costa-Rica
https://alum.mit.edu/slice/mits-costa-rican-president

I use the visual of the optical illusion of paired silhouettes with a vase in the center to describe how two narratives can be presented, but often only one is truly “seen.” This Gestalt concept is applied to parallel narratives – one presented in two recent stories on Unlimited Hangout that foreground militarism and data surveillance in AI healthcare and the story I keep attempting to tell about gamified managed community “wellness” protocols linked to social impact finance deals on blockchain that on another level will be leveraged to catalyze emergent behaviors and possibly advance bio-hybrid information processing.

I also describe a strange encounter in which Aetna health care intended to target low-income communities of color for VR-guided meditation in a digital “jungle” setting. The second hour of the video is a repeat of two site visits Jason and I made in Denver in 2022 to the then-new Palantir headquarters and the state capitol.


Wednesday, 15. January 2025

Damien Bod

ASP.NET Core user delegated access token management

The article looks at managing user delegated access tokens for a downstream API in an ASP.NET Core web application. There are many ways of implementing this, all with advantages and disadvantages. The tokens are requested from an OpenID Connect server using the recommended standards. In this blog, the UI access token from the application authentication […]

The article looks at managing user delegated access tokens for a downstream API in an ASP.NET Core web application. There are many ways of implementing this, all with advantages and disadvantages. The tokens are requested from an OpenID Connect server using the recommended standards. In this blog, the UI access token from the application authentication is re-used for the downstream API.

Code: https://github.com/damienbod/token-mgmt-ui-delegated-direct

Blogs in this series
ASP.NET Core user delegated access token management
ASP.NET Core user application access token management
ASP.NET Core delegated OAuth Token Exchange access token management
ASP.NET Core delegated Microsoft OBO access token management (Entra only)

Setup

The solution uses a secure downstream API and requires user delegated access tokens for access. The UI application is implemented using ASP.NET Core and razor pages. The UI application authenticates against an OpenID Connect server implemented using OpenIddict. The application client is implemented using an OpenID Connect confidential client and the OpenID Connect code flow with PKCE. This flow returns an access token which is used for the downstream API.

What must an application manage?

An access token management solution must ensure that tokens are securely stored per user session for user delegated tokens and updated after each UI authentication or refresh. The solution should be robust to handle token expiration, function seamlessly after restarts, and support multi-instance deployments. Additionally, it must effectively manage scenarios involving invalid or missing access tokens.

Properties of token management in the solution setup:
The access token is persisted per user session
The token expires
The token needs to be persisted somewhere safely
The token must be replaced after each UI authentication (per user)
The solution must work after restarts
The solution must work for multiple instances when deployed to multi-instance deployments
The solution must handle invalid access tokens or missing access tokens

Implementation example using Duende Token Management

The Duende Token Management OpenIdConnect NuGet package can be used to implement these requirements for user delegated access tokens. This is not the only way to do this; it can be implemented in numerous different ways. With this package, the AddOpenIdConnectAccessTokenManagement method is used to add the token management.

builder.Services.AddOpenIdConnectAccessTokenManagement();

The different HTTP requests or the Razor pages can use the GetUserAccessTokenAsync method to get the access token for each user and session. SaveTokens is set to true to persist the tokens in the session, using a secure cookie by default; this is an option in the AddOpenIdConnect method. The defined scope is used to select the access token.

// token expires, token management required
var accessToken = await HttpContext.GetUserAccessTokenAsync(
    new UserTokenRequestParameters
    {
        Scope = "myscope"
    });
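
For context, here is a minimal sketch (not taken from the original post) of where SaveTokens and the scope are typically configured in the AddOpenIdConnect options; the authority, client id and secret values are placeholders:

builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
    options.Authority = "https://localhost:44318"; // placeholder OpenID Connect server
    options.ClientId = "ui-app";                   // placeholder
    options.ClientSecret = "ui-app-secret";        // placeholder
    options.ResponseType = "code";                 // code flow; the handler uses PKCE by default
    options.Scope.Add("myscope");                  // the scope used to select the access token
    options.SaveTokens = true;                     // persist the tokens in the authentication session
    options.GetClaimsFromUserInfoEndpoint = true;
});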

The AddUserAccessTokenHttpClient method can be used to add a named HttpClient for the access token and the specific API.

builder.Services.AddUserAccessTokenHttpClient("profileClient", configureClient: client =>
{
    client.BaseAddress = new Uri(profileApiBaseUrl!);
});

This client can then be used to access the API resource using the access token.

var client = _clientFactory.CreateClient("profileClient");

var response = await client.GetAsync("api/Profiles/photo");
if (response.IsSuccessStatusCode)
{
    var data = await response.Content.ReadAsStringAsync();
    if (data != null)
        return data;

    return string.Empty;
}

When the applications are started, the access token can be used to request data from the downstream API.

Notes

In follow-up blogs, I will look at the different ways and the different types of strategies used to implement token management in ASP.NET Core web applications.

OAuth Client credentials tokens in web applications
Microsoft.Identity.Web delegated access tokens & OBO access tokens
Microsoft.Identity.Client application tokens
Azure SDK tokens
Handling multiple access tokens
OAuth Token Exchange for downstream user delegated access tokens

Links

Duende.AccessTokenManagement.OpenIdConnect

Duende token management

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/social/additional-claims

https://github.com/dotnet/aspnetcore/issues/8175

Tuesday, 14. January 2025

Jon Udell

The Configuration Crisis

Perhaps, even though they are not themselves explainable, AIs can help us engineer explainable systems. But I’m not optimistic. It feels like we’re on a path to keep making systems harder for humans to configure, and we keep expanding our reliance on superhuman intelligence to do that for us. The Configuration Crisis and Developer Dependency … Continue reading The Configuration Crisis

Perhaps, even though they are not themselves explainable, AIs can help us engineer explainable systems. But I’m not optimistic. It feels like we’re on a path to keep making systems harder for humans to configure, and we keep expanding our reliance on superhuman intelligence to do that for us.

The Configuration Crisis and Developer Dependency on AI

LLM series at The New Stack


Wrench in the Gears

The Day An Evangelical Christian Convinced ChatGPT I Was A Hippy, Schwab-Aligned Sociopathic Magician

What will come of large language models when they mirror unsettled psyches defined by religious hero complexes into the cloud? We do ourselves and future generations a profound disservice when we choose not to make time to comprehend and interrogate how emerging technologies and narrative flows are being deployed to steer psychological states individually and [...]

What will come of large language models when they mirror unsettled psyches defined by religious hero complexes into the cloud? We do ourselves and future generations a profound disservice when we choose not to make time to comprehend and interrogate how emerging technologies and narrative flows are being deployed to steer psychological states individually and as holon-based collectives through socio-technical, cyberphysical systems and the tokenized attention economy.

I recognize that rather than being a victim of Substack mania once again, I was merely a reflection of another’s discomfort in the world, an obstacle that gave someone a useful anchor point to exert his will and express his agency, however uninformed and misguided. It’s not a good feeling, but an informative lesson nonetheless.

It’s time to wrap our heads around tokenized spirituality and organized religion. My gut tells me Texas is ground zero for integration of the Drucker-Buford metaverse church model with digital community currencies tied to “moral” markets and social impact “charity” in ATT smart cities designed for community “wellness.” 

I don’t think AI should be approached as a digital intercessor for “divine” moral guidance. I view web3 systems as linguistic scaffolds quietly mapping the terrain of our souls. Once harvested, our somatic psychological GPS coordinates can be used ad infinitum to play us like instruments in the coming LARPy animist musical of blockchained cloud minds to solve gnarly problems that can only be untangled by a network of diverse, distributed intelligences – with digital wallets.

Follow your heart, think clearly, and ground yourself. It is easy to get swept away by the Sturm und Drang we generate around ourselves to make the hero roles we inhabit make sense. I’m not going to link to Andrew Corner’s Substack directly, but it’s not hard to find – I’m in the “energy” one. If you go there, fair warning, it’s a slog that’s equal parts disturbing and amusing.

What are we becoming folks? I don’t think the “hybrid” intelligence program is going to be a good look for many of us.


At the end of the day this may very well have simply been another signals-intelligence operation. It doesn’t seem like anyone really pays attention to that Substack anyway. If so, I hope there are at least a few useful takeaways regarding how we should engage with these systems.


https://apparentselves.org/team/pranab-das/

My Santa Barbara map with Alice Bailey.

https://embed.kumu.io/0b2d7e56dffb9dace92cd6f7f09e4a3e#untitled-map?s=bm9kZS05VWJrdHREQg%3D%3D

My Peter Drucker / Bob Buford map of the origins of the Christian mega-church movement.

https://embed.kumu.io/34b47105cbce2883a7cd6639d207e509#untitled-map?s=bm9kZS1DaXRiWmxsMg%3D%3D

My site visits to the Templeton Foundation in Conshohocken, PA – “Money Will Teach You How To Pray”

Monday, 13. January 2025

Kerri Lemole

Explaining Verifiable Credentials and Open Badges 3.0

Part 2: Issuing Badges In part one of this series, we discussed how the trust model of Open Badges 3.0 differs from earlier versions. Credentials issued as Open Badges 3.0 are based on the W3C Verifiable Credentials data model which describes how claims made online can be proven. To explore this trust model, let’s explore the story of Olivia who has been taking classes at a local commu
Part 2: Issuing Badges

In part one of this series, we discussed how the trust model of Open Badges 3.0 differs from earlier versions. Credentials issued as Open Badges 3.0 are based on the W3C Verifiable Credentials data model which describes how claims made online can be proven.

To explore this trust model, let’s explore the story of Olivia who has been taking classes at a local community college. Olivia is told that with each course she will be awarded and issued a badge that she can use to apply for employment.

Let’s start with what “issuing a badge” means using the most common approaches:

For older Open Badges (versions 0.5 to 2.1): The issuer saves completed course data in the format of an Open Badge that includes Olivia’s email address. When the issuer saves the data, the badge is considered “issued”. This data will be used to populate a web page or a file on the web. This web page or file must be public for Olivia to share online and be considered verifiable. In order for Olivia to do this, she must have an account on the issuing platform associated with the same email address.

For Open Badges 3.0 & W3C Verifiable Credentials: The issuer decides that Olivia should be awarded a badge and provides a secure link for Olivia to retrieve it. Olivia uses a digital wallet to click the secure link which makes a request to the issuing software to produce the badge. The issuing software retrieves and digitally signs the badge data; at this moment the badge has been by definition “issued.” (Prior to this digital signature, the badge itself did not exist.) It is sent to Olivia’s wallet where she claims and secures it, now able to share it privately at her discretion. It is important to note that this credential is hosted locally on Olivia’s device. There is no web page or URL for this credential until Olivia decides to create a public link.

Different Trust Models

Older Open Badges are hosted on the web. In this model, trust in the credential is based on the reputation of the badging platform hosting the credentials. Perceptions of trust of websites or platforms can be based on elements like issuer identity confirmation, availability of content, and security of the web site (how websites are secured).

Open Badges 3.0 and Verifiable Credentials are digitally signed, portable files. In this model, trust is based on the cryptographic digital signature and privacy. The digital signature in the credential is readable to the verification software, which looks for information indicating which entity signed the credential and if the data has changed since it was signed. Confirmation of the issuing entity’s identity happens later using an issuer registry (to be explained in more detail in a future post).

Olivia has unfettered access to the credential and can share it knowing that when she does, her actions aren’t being tracked or communicated to the issuer. When she shares her credential, it is signed by the digital identity associated with the credential issuer so that the verifier can trust that she has control of the digital identity and the credential.

It’s worth noting that Open Badges v3 and Verifiable Credential issuing platforms could still have public web pages for badges. Olivia may not want all of her badges digitally signed but with Open Badges 3.0 and Verifiable Credentials, she has that choice.

This blog post is Part 2 of a series of content explaining W3C Verifiable Credentials and Open Badges 3.0. Stay tuned for Part 3 — “What does it mean that a badge is digitally signed?”.

For more information on the DCC, including membership, projects, and events, visit our website, wiki, and Linkedin. To receive our newsletter and invitations to webinars and events sign up for our community mailing list.

Explaining Verifiable Credentials and Open Badges 3.0 was originally published in Digital Credentials Consortium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 12. January 2025

The Tao of XDI

Is anybody out there?

Is blogging still a thing? If you’re reading this, please comment or like it so I know. I am, at heart, still the XDI Guy. My passion for robust, secure, and scalable distributed data management hasn’t waned. Building one of the first XDI implementations (shout-out to Markus Sabadello, who might have built the only other one), I learned a lot about the challenges and opportunities of distribu

Is blogging still a thing?

If you’re reading this, please comment or like it so I know.


I am, at heart, still the XDI Guy. My passion for robust, secure, and scalable distributed data management hasn’t waned. Building one of the first XDI implementations (shout-out to Markus Sabadello, who might have built the only other one), I learned a lot about the challenges and opportunities of distributed data at scale. Over the years, I’ve reflected on qualities essential for data ecosystems, qualities that are often overshadowed by content-driven ecosystems. For example:

• Caching semantics: Apps need governance and management of caches to respect data ownership while maintaining local operational efficiency.

• Transactionality: Mature data protocols depend on it for consistency and reliability.

• Request batching: Optimizing network requests is vital for performance and scalability.


After years away, I’m ready to dive back in. There are a couple of apps I want to build, and I want to build them on a distributed data platform. My first idea is a fully distributed implementation of Brad deGraf’s BooksWeLike concept—a way to share and discover books that resonate with you. (Brad, if you’re reading this and don’t like the idea, let me know!)


To make this happen, I’ve started scanning the distributed protocol space to see what’s out there. Here’s my initial list of protocols to evaluate:

• AT Protocol

• Nostr

• Solid

• ActivityPub

• Matrix

• IPFS (InterPlanetary File System)

• SSB (Secure Scuttlebutt)

• DID (Decentralized Identifiers)

• Libp2p

• Hypercore

• Waku

• Zero-Knowledge Identity Protocols (ZK Protocols)


What am I missing?

Are there protocols on this list that don’t belong? If so, why? Are there others I should consider? I haven’t started my evaluations yet, so I’m open to your insights. If you’ve built apps on these protocols or have opinions about them, I’d love to hear from you.


I’ll be capturing my journey of discovery here—sharing what I learn, where I stumble, and how I (hopefully) succeed. Let’s make distributed data a reality, by and for the people.


IdM Laboratory

Google has published a paper on zero-knowledge proofs that support ECDSA

Hello, this is Fujie. As I also mentioned when covering the AAMVA mobile driver's license guidelines, zero-knowledge proofs are going to matter for addressing the linkability of mdoc and SD-JWT. At the end of the year, researchers at Google published a paper titled Anonymous credentials from ECDSA. https://eprint.iacr.org/2024/2010 Generating illustrations with AI leads to all sorts of odd results, which is amusing. Picking the key point out of the abstract: the requirements around cipher-suite support in conventional BBS+ were strict, so they made the scheme work with ECDSA as well, so that legacy systems can support it. Part of the difficulty arises because schemes in the literature,

Hello, this is Fujie.

As I also mentioned when covering the AAMVA mobile driver's license guidelines, zero-knowledge proofs are going to matter for addressing the linkability of mdoc and SD-JWT.

At the end of the year, researchers at Google published a paper titled

Anonymous credentials from ECDSA

https://eprint.iacr.org/2024/2010

Generating illustrations with AI leads to all sorts of odd results, which is amusing.

Picking the key point out of the abstract: the requirements around cipher-suite support in conventional BBS+ were strict, so they made the scheme work with ECDSA as well, so that legacy systems can support it.

Part of the difficulty arises because schemes in the literature, such as BBS+, use new cryptographic assumptions that require system-wide changes to existing issuer infrastructure.  In addition,  issuers often require digital identity credentials to be *device-bound* by incorporating the device’s secure element into the presentation flow.  As a result, schemes like BBS+ require updates to the hardware secure elements and OS on every user's device.


In this paper, we propose a new anonymous credential scheme for the popular and legacy-deployed Elliptic Curve Digital Signature Algorithm (ECDSA) signature scheme.  By adding efficient zk arguments for statements about SHA256 and document parsing for ISO-standardized identity formats, our anonymous credential scheme is that first one that can be deployed *without* changing any issuer processes, *without* requiring changes to mobile devices, and *without* requiring non-standard cryptographic assumptions.


This looks quite promising. Regarding generation speed, the paper also says the following.

Our proofs for ECDSA can be generated in 60ms.  When incorporated into a fully standardized identity protocol such as the ISO MDOC standard, we can generate a zero-knowledge proof for the MDOC presentation flow in 1.2 seconds on mobile devices depending on the credential size. These advantages make our scheme a promising candidate for privacy-preserving digital identity applications.


Generating a zero-knowledge proof for the mdoc presentation flow in 1.2 seconds: at that level it looks practical.

The full text of the paper is also available as a PDF, so I plan to work through it in due course.



Saturday, 11. January 2025

Werdmüller on Medium

Building an open web that protects us from harm

It’s not enough to be neutral. We must be allies. Continue reading on Medium »

It’s not enough to be neutral. We must be allies.

Continue reading on Medium »


Michael Ruminer

AI RAG with LlamaIndex, Local Embedding, and Ollama Llama 3.1 8b

In this post, I cover using LlamaIndex LlamaParse in auto mode to parse a PDF page containing a table, using a Hugging Face local embedding model, and using local Llama 3.1 8b via Ollama to perform naive Retrieval Augmented Generation (RAG). That’s a mouthful. I won’t go into how to setup Ollama and Llama 3.1 8b; this post assumes it is running. First off, you can find the code for this in m

In this post, I cover using LlamaIndex LlamaParse in auto mode to parse a PDF page containing a table, using a Hugging Face local embedding model, and using local Llama 3.1 8b via Ollama to perform naive Retrieval Augmented Generation (RAG). That’s a mouthful. I won’t go into how to set up Ollama and Llama 3.1 8b; this post assumes it is running.

First off, you can find the code for this in my LlamaIndex_Test Github repo under Test1/src folder. At the time of this writing there is a Test0 and a Test1. To see the post about Test0 code see Using LlamaIndex — Part 1 OpenAI.

The code uses a .env and load_dotenv() to populate the needed LLAMA_CLOUD_API_KEY. I recommend that if you have an OPENAI_API_KEY entry in the .env, you comment it out for this experiment to prove to yourself that the embedding and LLM are local and not OpenAI. See the part 1 post for more details on the LLAMA_CLOUD_API_KEY.

#OPENAI_API_KEY=YOUR_API_KEY
LLAMA_CLOUD_API_KEY=YOUR_API_KEY

The pip install dependencies I put as comments at the top of the python file. There is also a requirements.txt for the project as a whole that covers all the “Test” experiments package requirements.

# pip install llama-index-embeddings-huggingface
# pip install llama-index-llms-ollama
# pip install llama-index-core llama-parse llama-index-readers-file

The nice thing about LlamaIndex LlamaParse is that it provides an auto mode that will use premium mode when specified criteria are met. In this experiment, I have set auto mode on with triggers for mode change on in-page images or tables. Also, to save on parsing credit usage in LlamaParse and because, for this example, it is all that is needed, I have set the pages to be parsed to PDF page 9 only (note that PDF page 9 is target page 8 to LlamaParse because it uses a 0-based page index). Like the part 1 post, I am using an output of markdown because it provides greater context to the LLM; though, I did try it with result_type=text and received the proper query response despite the answer to the query being in a table.

# set LlamaParse for markdown output and auto_mode only parsing page 8
parser = LlamaParse(
result_type="markdown",
auto_mode=True,
auto_mode_trigger_on_image_in_page=True,
auto_mode_trigger_on_table_in_page=True,
target_pages="8",
verbose=True
)

So that you don’t have to open the PDF document that gets parsed to understand the input, below is a screenshot of the page.

As in part 1, I use LlamaParse.load_data to read the page and parse it. Since it has a table in-page and we are in auto mode it will automatically use Premium mode to potentially better handle the page and table. This will cause the page parse to cost 15 credits on LlamaIndex. Note that LlamaIndex will cache your parsed page for 48 hours unless you specify otherwise or change the parse parameters which allows you to run the code more than once and only get the credit cost once. I did try using the default “accurate” mode by removing the auto_mode parameters on the LlamaParse and it still parsed the table properly and returned the proper answer to the query — but this is a sample for showing the use of “auto mode” so just pretend that is not the case.

If you want to see the output of the parser, uncomment the print command after the documents variable is populated. I like to then paste it into a markdown viewer to see it as rendered markdown output. See the below image for that output.

with open(f"../../sample_docs/{file_name}", "rb") as file_to_parse:
# LlamaParse will cache a parsed document 48 hours if the parse parameters are not changed
# thus not incuring additional parse cost if you run this multiple times for testing purposes
# see the history tab in the LlamaParse dashboard for the project to confirm that
# credits used = 0 for subsequent runs
#
# must provide extra_info with file_name key when passing file object
documents = parser.load_data(file_to_parse, extra_info=extra_info)
# to manually check the output uncomment the below
#print(documents[0].text)

I like to set the default settings for LLM and embedding model so that I don’t need to pass them around as parameters. Here is where I set the embedding model to a Hugging Face provided model. When you run the python for the first time it will pull down the embedding model automatically — nice!

# set the default embeddings and llm so that it doesn't have to be passed around
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.llm = Ollama(model="llama3.1:latest", request_timeout=120.0)

The next part of the code does the same as it did in Part 1, except that this time the VectorStoreIndex and the query engine use the models I set in the Settings singleton versus the LlamaIndex default of OpenAI.

# index the parsed documents using the default embedding model
index = VectorStoreIndex.from_documents(documents)

# generate a query engine for the index using the default llm
query_engine = index.as_query_engine()

# provide the query and output the results
query = "What is the latency in seconds for Nova Micro?"
response = query_engine.query(query)
print(response)

If all goes well you should get the response output as 0.5 and if you look back at the table from the page you’ll see that is correct.

(.venv) PS C:\python\LlamaIndex_Test\Test1\src> python parse_ollama.py
Started parsing the file under job_id 37dce328-aaa7-499b-afe9-498c32b63944
.0.5

To validate that the value was coming from the RAG-provided PDF page and not the LLM’s inherent “knowledge”, I asked a similar question via the command line to Ollama without providing the RAG context— output below:

PS C:\temp> ollama run llama3.1:latest "what is the latency in seconds for Nova Micro Amazon LLM model?"
I don't have access to specific information about the latency of the Nova Micro Amazon LLM (Large Language Model)
model. The details regarding models like this, especially concerning their performance metrics such as latency,
are typically available from the developers or through official documentation and may be subject to change. If
you're looking for accurate and up-to-date information on this topic, I recommend checking directly with Nova
Micro's resources or contacting them for the most current data.

There you have it. But I am not done quite yet in reporting my results. In LlamaIndex’s examples, they used this PDF but used PDF page 1 which contains an image. See below an image of the page.

They use this page to demonstrate how LlamaParse in auto mode moves into premium mode for the page parsing because of the image and then creates a mermaid diagram from the image because it recognizes the image is of a diagram. Below is what they report as the outcome in part.

# The Amazon Nova Family of Models:
# Technical Report and Model Card

Amazon Artificial General Intelligence

```mermaid
graph TD
A[Text] --> B[Nova Lite]
C[Image] --> B
D[Video] --> E[Nova Pro]
F[Code] --> E
G[Docs] --> E
B --> H[Text]
B --> I[Code]
E --> H
E --> I
J[Text] --> K[Nova Micro]
L[Code] --> K
K --> M[Text]
K --> N[Code]
O[Text] --> P[Nova Canvas]
Q[Image] --> P
P --> R[Image]
S[Text] --> T[Nova Reel]
U[Image] --> T
T --> V[Video]

style B fill:#f9f,stroke:#333,stroke-width:2px
style E fill:#f9f,stroke:#333,stroke-width:2px
style K fill:#f9f,stroke:#333,stroke-width:2px
style P fill:#f9f,stroke:#333,stroke-width:2px
style T fill:#f9f,stroke:#333,stroke-width:2px

classDef input fill:#lightblue,stroke:#333,stroke-width:1px;
class A,C,D,F,G,J,L,O,Q,S,U input;

classDef output fill:#lightgreen,stroke:#333,stroke-width:1px;
class H,I,M,N,R,V output;
```

Figure 1: The Amazon Nova family of models

When I tried this I did not get the same outcome from the parse. It did not even attempt to generate a mermaid diagram. I received the following output for the diagram image section; far from their professed output.

The Amazon Nova Family of Models:
Technical Report and Model Card
Amazon Artificial General Intelligence
Nova
Lite Nova
Nova Micro Ix
Pro <l> <l > </>
A Ix
</>
=
Nova Nova
Canvas Reel
Figure 1: The Amazon Nova family of models

In the experiment, everything is local except LlamaIndex which is nice. I hope that this example is of use to you.

Friday, 10. January 2025

Wrench in the Gears

How Gregg Braden Led Me To Share My Thoughts on Quantum Coherence, Collective Computing, and Tokenized Spirituality

This is a video I posted yesterday. And a follow-up to clarify certain points. Sorry, I guess the youtube AI chose the “hands up” screenshot for the preview image. I do talk a lot with my hands. Here is a link to Joseph Gonzalez’s blog post on emergent information communication systems. Here are two [...]

This is a video I posted yesterday.

And a follow-up to clarify certain points. Sorry, I guess the youtube AI chose the “hands up” screenshot for the preview image. I do talk a lot with my hands.

Here is a link to Joseph Gonzalez’s blog post on emergent information communication systems.

https://bantamjoe.com/2025/01/08/emergent-intelligent-communication-system/

Here are two related links (here and here) in my blog about web3 tokens as a form of symbolic language.

https://wrenchinthegears.com/2023/02/18/gods-eye-view-part-4-the-language-of-heartless-charity/
https://wrenchinthegears.com/2023/10/01/emergent-thoughts-on-web3-game-mechanics-and-tokens-as-a-computational-language-triggered-by-the-film-arrival/
https://wrenchinthegears.com/wp-content/uploads/2025/01/Invitation-to-Ontopoetics-Freya-Mathews.pdf

Below are several maps related to topics discussed for reference / exploration.

J.G. Bennett’s “lifelong learning communities” based on Gurdjieff’s principles in Surrey, Gloucestershire, and West Virginia.

https://embed.kumu.io/55da1b35ad86a647f722575c3f6a5c88#untitled-map?s=bm9kZS1FcnlES2JIYQ%3D%3D

Peter Drucker, Claremont, management theorist, and megachurch promoter with Texas cable TV mogul Bob Buford.

https://embed.kumu.io/840621d7eeadc3a92eb1b1f58848f2d6#untitled-map?s=bm9kZS1DaXRiWmxsMg%3D%3D

Jay Daniels, emeritus professor of religion and advocate of process philosophy from Hendrix College in Conway, Arkansas, not far from Petit Jean State Park and Winthrop Rockefeller’s ranch.

https://embed.kumu.io/d0ce16570c05d58b2dcf14474e7c8482#untitled-map?s=bm9kZS1rU2tSTHc3VQ%3D%3D

Macy Conferences and Tavistock in relation to research into the psychology of group dynamics, cybernetics, and cognition.

https://embed.kumu.io/80de775f15b564883921a5ec7daf869b

Tokenized creativity and social prescribing in the context of “pay for success” wellness metrics and public “health.”

https://embed.kumu.io/ed4cd18205e44f10f71a103c7dafb250#untitled-map?s=bm9kZS1idlRxbk81cQ%3D%3D

Parallel community currencies and socio-technical systems.

https://embed.kumu.io/044479643a1b94c072b57dbd21ce8e7f

Tokenized emotional cartography linked to crowd-sourced computation.

https://embed.kumu.io/c93cb83f097f7f28ff88ade75d965919


Tokenized prediction markets in public policy linked to distributed cognition / swarm intelligence.

https://embed.kumu.io/789febca28a348cfa3cde24f44d2571e#untitled-map?s=bm9kZS1YNzJkZkgwUQ%3D%3D

Rise of second order cybernetics applied to the tokenized evolution of social systems under web3 spatial computing.

https://embed.kumu.io/5783ad4a7b17672e7cf8959fd5339231#untitled-map?s=bm9kZS1uUTE0MTd4cA%3D%3D

Aldous Huxley’s ties to psychedelics, human potential, mysticism, and the formation of AA.

https://embed.kumu.io/c4ecf9784bf71a6a64050c6c2d7be94b#untitled-map?s=bm9kZS10bEZNbEQ3Ng%3D%3D

Lots to chew on. Have at it!


Thursday, 09. January 2025

Phil Windleys Technometria

Authorization Matters

In 2013, poor authorization practices led to one of the largest data breaches in history.

In 2013, poor authorization practices led to one of the largest data breaches in history. Over a three-week period, hackers stole 40 million credit and debit card accounts from retail giant Target by exploiting the account of an HVAC contractor. The attack began with an email-based phishing scam that tricked an employee at the HVAC company into revealing their credentials. As a vendor to Target, the HVAC company had legitimate access to Target’s systems. Using the stolen credentials, the hackers infiltrated Target’s network and installed malware on the retailer’s point-of-sale systems, enabling them to collect customer data. The breach resulted in direct costs to Target of nearly $300 million, along with reputational damage that is difficult to quantify.1

The Target breach underscores the critical importance of not only knowing who is accessing your systems (authentication) but also controlling what they can access (authorization). The HVAC vendor had access to Target’s systems for electronic billing, contract submission, and project management. However, the hackers were able to extend this access beyond those legitimate uses to reach systems used to update point-of-sale devices. Target’s failure to properly restrict access to critical systems created a significant vulnerability that the hackers exploited.

But I don’t want you to get the idea that authorization is just about securing IT systems—it’s also fundamental to how modern cloud applications function, enabling features that wouldn’t be possible otherwise. For example, services like Google Docs and Dropbox rely on authorization to allow users to share documents with specific people while restricting access to others. Without this capability, these applications would not be possible.

Another example is Amazon Web Services (AWS), where authorization enables Amazon to securely provide hundreds of cloud services to millions of customers at scale in a cost-effective manner. As a global distributed system taking advantage of multi-tenancy to service millions of customers, AWS uses a sophisticated policy-based authorization system to help customers control access to the services running inside their accounts.

Several trends make authorization a more important topic than it has been:

More and more software is being delivered as a service (so-called Software as a Service, or SaaS). As we’ve discussed, cloud-based services can’t provide the needed levels of functionality and security without strong authorization systems.

Perimeter-based security architectures like firewalls are giving way to zero-trust architectures. Rather than assuming everyone inside the perimeter is trusted, zero-trust systems assume breach and authorize every access. This isn’t possible without flexible authorization systems that can determine whether to grant access based on the context of the request.

Internet of Things (IoT) and interconnected devices mean that more and more computer systems are talking to each other on their owner’s behalf. Restricting their access to the role they play protects them and prevents them from becoming a vector that hackers can exploit to break into other sensitive systems.

Regulatory compliance is yet another reason why authorization is increasingly important. Laws like the Health Insurance Portability and Accountability Act (HIPAA) and General Data Protection Regulation (GDPR) require that access to personal information be carefully controlled so that only the right people and systems can use it. Beyond controlling access, these laws also require that you be able to audit access to show compliance.

In addition to these, the rise of GenAI, and particularly AI agents that can act on your behalf, pours gasoline on the smoldering identity industry interest in authorization. Am I talking to a human? Does this agent have authority to make a doctor's appointment on the patient's behalf? These and other questions mean that making good authorization decisions, knowing what decisions were made, and easily incorporating authorization infrastructure into your apps and infrastructure is vital.

Notes

The story is interesting beyond the fact that it highlights the need to properly manage access. Target’s response, or rather lack of one, provides several important security lessons; Cyber Case Study: Target Data Breach provides more details.

Photo Credit: Hacker Stealing Credit Cards from DALL-E (public domain). Prompt: Draw a wide picture of a hacker stealing credit cards from an online store.


Just a Theory

Sqitch 1.5.0

Sqitch v1.5.0: out now in all the usual places!

Released yesterday: Sqitch v1.5.0. This version replaces the MySQL driver DBD::mysql with DBD::MariaDB, both for its better backward compatibility with the MySQL and MariaDB driver libraries and for its improved Unicode handling. The Docker image likewise switched to the MariaDB mysql client. I expect no compatibility issues, but you never know! Please file an issue should you find any.

V1.5.0 also features fixes for Yugabyte deployment, Oracle error handling, existing Snowflake schemas, connecting to MySQL/MariaDB without a database name, and omitting the checkit MySQL/MariaDB function when the Sqitch user lacks sufficient permission to create it. Sqitch will now also complain when deploying with --log-only and a deployment file is missing.

Find it in the usual places:

sqitch.org
GitHub
CPAN
Docker
Homebrew

Many thanks to everyone who has enjoyed using Sqitch and let me know in person, via email, Mastodon, bug reports, and patches. It gratifies me how useful people find it.

More about… Sqitch MariaDB MySQL Yugabyte Oracle Snowflake

Wednesday, 08. January 2025

Heres Tom with the Weather

Hockey Trivia

As I just registered for a new season of beer league hockey starting this weekend, I thought about an episode of C’est comme ça que je t’aime I recently watched which is set in 1974 Sainte-Foy, Quebec City. Nearly all the 1974 Quebec cultural references are new to me and in this episode, there is an adventure in a hockey arena and you can pause the video in a few places and see signage in the

As I just registered for a new season of beer league hockey starting this weekend, I thought about an episode of C’est comme ça que je t’aime I recently watched which is set in 1974 Sainte-Foy, Quebec City. Nearly all the 1974 Quebec cultural references are new to me and in this episode, there is an adventure in a hockey arena and you can pause the video in a few places and see signage in the background for a junior hockey team called Ste. Foy Couillard that only existed from 1972-1974.

You can watch it at ICI Tou.tv (in Canada) or Amazon Prime (in U.S.).


Moxy Tongue

OYO AI

 With over 1000 AI communities deployed in 2024, the next AI cohort begins.. Coding, Computer Science, Artificial Intelligence, Entrepreneurial Skill Development, Teacher PD,  With mentor support of all learners, students and teachers. Unlimited learning by design: (own root)  OYO®  AI  by kidOYO®  at OYOclass.com, Educati

 With over 1000 AI communities deployed in 2024, the next AI cohort begins..

Coding, Computer Science, Artificial Intelligence, Entrepreneurial Skill Development, Teacher PD, 

With mentor support of all learners, students and teachers. Unlimited learning by design: (own root) 

OYO®  AI  by kidOYO®  at OYOclass.com, Educational Software Services.

Tuesday, 07. January 2025

Werdmüller on Medium

46 books

Every book is a portal out of here. Continue reading on Medium »

Every book is a portal out of here.

Continue reading on Medium »

Saturday, 04. January 2025

Michael Ruminer

Using LlamaIndex Part 1 — OpenAI

Using LlamaIndex Part 1 — OpenAI I have started to experiment with LlamaIndex for use in Retrieval Augmented Generation (RAG) document parsing and indexing. My results were mixed on the simple page provided. This is part 1, where I make a short post on LlamaIndex with OpenAI as the LLM component. I expect part 2 to be LlamaIndex with Ollama and Llama3–8b as the LLM components. This is a very sho
Using LlamaIndex Part 1 — OpenAI

I have started to experiment with LlamaIndex for use in Retrieval Augmented Generation (RAG) document parsing and indexing. My results were mixed on the simple page provided. This is part 1, where I make a short post on LlamaIndex with OpenAI as the LLM component. I expect part 2 to be LlamaIndex with Ollama and Llama3–8b as the LLM components.

This is a very short chunk of code. I also used the LlamaIndex Parse browser-based tool to see if I received different outputs. As one would expect, I did not. You can access the browser-based tool by opening a LlamaIndex account and choosing the “Parse” tool in your dashboard. You’ll need an account if you plan to use the code I provide and you will also need to generate an API key from your LlamaIndex dashboard. One of the great things about LlamaIndex is that for a paid tool it is generous in its free usage; 1000 credits PER DAY. In “accurate” mode, it is 1 credit per page; in “premium” mode, it is 15 credits per page. For my simple one page example the output between the two did not differ.

First the small snippet of code.

# pip install llama-index-embeddings-openai llama-index-llms-openai
# pip install llama-index-core llama-parse llama-index-readers-file

from llama_parse import LlamaParse
from llama_index.core import VectorStoreIndex
from dotenv import load_dotenv


load_dotenv()

parser = LlamaParse(result_type="markdown", verbose=True)

file_name = "ssi-page-5.pdf"
extra_info = {"file_name": file_name}

with open(f"../../sample_docs/{file_name}", "rb") as file_to_parse:
# must provide extra_info with file_name key when passing file object
documents = parser.load_data(file_to_parse, extra_info=extra_info)
# to manually check the MD output uncomment the below
# print(documents[0].text)

# index the parsed documents
index = VectorStoreIndex.from_documents(documents)

# generate a query engine for the index
query_engine = index.as_query_engine()

# provide the query and output the results
query = "what are the principles of SSI?"
response = query_engine.query(query)
print(response)

You can find this code and a few sample documents, including the document used in this code in my LlamaIndex_Test Github repo with the code specifically under the Test0 folder.

Note that I don’t set an LLM or an embedding model. LlamaIndex uses OpenAI as the default LLM and OpenAI’s text-embedding-ada-002. You will need an OpenAI API key to go along with the LlamaIndex key. My code loads them from the .env to environmental variables and if they are named appropriately those variables will be found by default. Below is a .env example.

OPENAI_API_KEY=YOUR_API_KEY
LLAMA_CLOUD_API_KEY=YOUR_API_KEY

In the code above I am using a single-page PDF, “ssi-page-5.pdf”. It is page 5 of the larger document, “Self-Sovereign Identity A Systematic Review Mapping and Taxonomy.pdf”. If you plan to send LlamaParse a larger document but use the API properties to tell it only to parse a subset of pages from the document, keep in mind that LlamaParse starts at page 0. The first time I tried this I had an off-by-one issue because I assumed page 1 of the document was, you know, page 1. It was page 0. This is understandable from a programming standpoint but caught me off guard anyway.

In the example code, I opened a file directly but LlamaIndex provides a directory reader with filters, if you desire to use that instead. The results I got back on the LLM query were spot on as would be expected on a single page of context with a well-outlined section pertinent to my simple query.

You don’t really need the creation of the vector index, query engine and query/response to test out LlamaIndex parsing. Just uncomment line 23 in the above code (line 19 in the repo code), comment out everything below it, and get the parsed output.

Premium Mode and Auto Mode and Less than Expected Outcomes

In the code, I didn’t try out premium mode or auto mode. I intend to make a separate post about auto mode. I did try them in the LlamaIndex Parse tool. In both, I expected the image at the top of the page to get output as an image in the “Images” tab of the Parse output tool, but it didn’t.

The image at the top of the page is below as a screen capture.

This was disappointing. I’m not sure why this did not provide the expected outcome.

There you have it. A simple bit of code to parse using LlamaIndex. What makes it different from other parsers I have tried (all open source) is that it spits out the results in markdown, if desired, which is better than the usual plain text I received in other tools. The markdown provides the LLM more context even if in my simple case it was not of value. The other is that in theory, it will better parse images, tables etc., but as I explained I did not get that result. :-( I’ll continue to experiment with it, especially on more complicated pages such as ones that contain a table and in auto mode via code.

You can find part 2 as “AI RAG with LlamaIndex, Local Embedding, and Ollama Llama 3.1 8b”. The example in part 2 uses LlamaParse auto mode.

Friday, 03. January 2025

Mike Jones: self-issued

Proposed Second Candidate Recommendation for Securing Verifiable Credentials using JOSE and COSE

The W3C Verifiable Credentials Working Group published the Snapshot Second Candidate Recommendation of the Securing Verifiable Credentials using JOSE and COSE specification just before the holidays. This was one of five Candidate Recommendation Snapshots published by the working group at the same time, including for the Verifiable Credentials Data Model 2.0, which I’m also an […]

The W3C Verifiable Credentials Working Group published the Snapshot Second Candidate Recommendation of the Securing Verifiable Credentials using JOSE and COSE specification just before the holidays. This was one of five Candidate Recommendation Snapshots published by the working group at the same time, including for the Verifiable Credentials Data Model 2.0, which I’m also an editor of. A W3C Candidate Recommendation Snapshot is intended to become a W3C Candidate Recommendation after required review and approval steps.

As I wrote about the First Candidate Recommendation, VC-JOSE-COSE secures VC Data Model payloads with JOSE, SD-JWT, or COSE signatures. And while I’m admittedly not a fan of JSON-LD, to the extent that Verifiable Credentials using the JSON-LD-based VC Data Model are in use, I’m committed to there being a solid VC-JOSE-COSE specification so there is a simple, secure, standards-based way to sign these credentials.

One significant change since the First Candidate Recommendation was splitting the Controller Document text out into its own specification called Controlled Identifier Document 1.0. Publishing a Candidate Recommendation Snapshot for it is planned for next week. Part of why it became its own specification is so that it can be referenced by the planned update to the W3C DID specification.

Thanks to my co-editor Gabe Cohen and working group chair Brent Zundel for the significant work they both put in to help us reach this point!

Wednesday, 01. January 2025

Just a Theory

Should URI::mysql Switch to DBD::MariaDB?

Should Sqitch and URI::mysql use DBD::MariaDB instead of DBD::mysql? If so, what are the implications for Sqitch deployment and usage?

I seek the wisdom of the Perl Monks:

The Sqitch project got a request to switch from DBD::mysql to DBD::MariaDB. DBD::mysql 5’s requirement to build from the MySQL 8 client library provides the impetus for the request, but in poking around, I found a blogs.perl.org post highlighting some Unicode fixes in DBD::MariaDB, as well.

Now, Sqitch likely doesn’t have the Unicode issue (it always works with Perl Unicode strings), but it depends on URI::db to provide the DBI connection string. For MySQL URIs, the URI::mysql dbi_driver method returns mysql.

Should it be changed to return MariaDB, instead? Is there general community consensus that DBD::MariaDB provides better compatibility with both MySQL and MariaDB these days?

I’m also curious what the impact of this change would be for Sqitch. Presumably, if DBD::MariaDB can build against either the MariaDB or MySQL client library, it is the more flexible choice to continue supporting both databases going forward.

Feedback appreciated via PerlMonks or the Sqitch issue.

Update 2025-01-08

URI-db 0.23 uses DBD::MariaDB instead of DBD::mysql for both URI::mysql and URI::MariaDB.

Similarly, Sqitch v1.5.0 always uses DBD::MariaDB when connecting to MySQL or MariaDB, even when using older versions of URI::db. Thanks everyone for the feedback and suggestions!

More about… Perl Sqitch URI::db MySQL MariaDB

New JSONPath Feature: SelectLocated

New in the jsonpath Go package and Playground: “Located” results that pair selected values with normalized paths to their locations.

Happy New Year! 🎉🥳🍾🥂

The JSONPath RFC includes a section on defining normalized paths, which use a subset of JSONPath syntax to define paths to the location of a node in a JSON value. I hadn’t thought much about it, but noticed that the serde JSONPath Sandbox provides a “Located” switch that adds them to query results. For the sake of complementarity, I added the same feature to the Go JSONPath Playground.

🛝 See it in action with this example, where instead of the default output:

[ 8.95, 12.99, 8.99, 22.99, 399 ]

The located result is:

[
  { "node": 8.95, "path": "$['store']['book'][0]['price']" },
  { "node": 12.99, "path": "$['store']['book'][1]['price']" },
  { "node": 8.99, "path": "$['store']['book'][2]['price']" },
  { "node": 22.99, "path": "$['store']['book'][3]['price']" },
  { "node": 399, "path": "$['store']['bicycle']['price']" }
]

v0.3.0 of the github.com/theory/jsonpath Go package enables this feature via its new SelectLocated method, which returns a LocatedNodeList that shows off a few of the benefits of pairing JSONPath query results with paths that uniquely identify their locations in a JSON value, including sorting and deduplication. It also takes advantage of Go v1.23 iterators, providing methods to range over all the results, just the node values, and just the paths. As a result, v0.3.0 now requires Go 1.23.

The serde_json_path Rust crate inspired the use of LocatedNodeList rather than a simple slice of LocatedNode structs, but I truly embraced it once I noticed the focus on “nodelists” in the RFC’s overview, which provides this definition:

A JSONPath expression is a string that, when applied to a JSON value (the query argument), selects zero or more nodes of the argument and outputs these nodes as a nodelist.

It regularly refers to nodelists thereafter, and it seemed useful to have an object to which more features can be added in the future. github.com/theory/jsonpath v0.3.0 therefore also changes the result value of Select from []any to the new NodeList struct, an alias for []any. For now it adds a single method, All, which again relies on Go v1.23 iterators to iterate over selected nodes.

While the data type has changed, usage otherwise has not. One can iterate directly over values just as before:

for _, val := range path.Select(jsonInput) { fmt.Printf("%v\n", val) }

But All removes the need to alias-away the index value with _:

for val := range path.Select(jsonInput).All() { fmt.Printf("%v\n", val) }

I don’t expect any further incompatible changes to the main jsonpath module, but adding these return values now allows new features to be added to the selected node lists in the future.

May you find it useful!

More about… Go JSONPath Playground RFC 9535

IdM Laboratory

An update on the Intention Economy

Hello, this is Fujie. At the end of the year, Doc Searls published a post about the Intention Economy titled "The Real Intention Economy." I think it is quite an important post and well worth reading. https://doc.searls.com/2024/12/30/the-real-intention-economy/ His book has also been translated into Japanese. https://amzn.to/3C2PX9n Returning to Doc's post: nearly 20 years after he first presented the idea of the Intention Economy, and now with generative AI in the mix, the Intention Economy itself is becoming a threat. Searching for "Intention Economy" now returns
Hello, this is Fujie.
At the end of the year, Doc Searls published a post about the Intention Economy titled "The Real Intention Economy." I think it is quite an important post and well worth reading.
https://doc.searls.com/2024/12/30/the-real-intention-economy/
His book has also been translated into Japanese. https://amzn.to/3C2PX9n

Returning to Doc's post: nearly 20 years after he first presented the idea of the Intention Economy, and now with generative AI in the mix, the Intention Economy itself is becoming a threat.
Searching for "Intention Economy" now returns some fairly alarming results. In short, the term Intention Economy is starting to be misused.
He also quotes passages like this:
The near future could see AI assistants that forecast and influence our decision-making at an early stage, and sell these developing “intentions” in real-time to companies that can meet the need – even before we have made up our minds.


He likewise quotes the following:
The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent.



The Intention Economy was originally framed as an antithesis to the model of selling customers' attention as a product: an economy centered on the customer's intention instead. Yet now a market is taking shape in which that very intention is itself packaged and sold.

Human desire is boundless, but we should keep the origins of these ideas firmly in view and understand the intent behind them before pushing ahead with implementing them in society.



Tuesday, 31. December 2024

Just a Theory

SQL/JSON Path Playground Update

The Go SQL/JSON Playground has been updated with a fresh design and PostgreSQL 17 compatibility.

Based on the recently-released Go JSONPath and JSONTree playgrounds, I’ve updated the design of the SQL/JSON Playground. It now comes populated with sample JSON borrowed from RFC 9535, as well as a selection of queries that randomly populate the query field on each reload. I believe this makes the playground nicer to start using, not to mention more pleasing to the eye.

The playground has also been updated to use the recently-released sqljson/path v0.2 package, which replicates a few changes included in the PostgreSQL 17 release. Notably, the .string() function no longer uses a time zone or variable format for dates and times.

Curious to see it in action? Check it out!

More about… Postgres SQL/JSON JSON Path Go Playground

Werdmüller on Medium

Tintin and the fascists


The boy reporter has a fascist origin. But something new is possible.

Continue reading on Medium »


Predictions for tech, 2025


It’s going to be a hard year.

Continue reading on Medium »

Monday, 30. December 2024

IdM Laboratory

366/366 !!!


こんにちは、富士榮です。

ついにこの日が来ました。



去年の正月休みに某猫とのチキンレースが始まってしまったので収まりがつかなくなって惰性で描き続けていましたが気がついたら本当に1年経ってしまいました。

↓某猫のポスト

ふじえさん、やっぱりやっとるな!
毎日記事を書く366チャレンジや!
これ絶対やっとるぞ!

— 👹秋田の猫🐱 (@ritou) January 4, 2024


最初のうちは割と実装してみよう!的なポストが多かったのですが、中盤〜後半は忙しくなりすぎたこともあり読んでみようシリーズが大半を占めてしまったのは反省です。

ということで振り返ってみましょう。

1月のポストはこんな感じです。


この頃は結構作ってますね。まぁ、冬休みが暇だったので実装し始めたのがきっかけだったので。

あとは1月はOpenID Summit Tokyoもありましたね。2024年の後半にかけて現在も活動が続いているSIDI Hubを日本で開催する調整も実はこの時期から始まっていました。


次に2月です。この辺りでそういえば今年は366日やん、と思って他の年よりも1日不利!!!ということに気がついた感じです。


まだ実装は続けていますね。OpenID Providerが一段落したのでパスキーに手を出し始めています。やっぱり手を動かさないとわからないことも多いなぁ、と実感した時期でもありました。


3月です。


まだ実装も続けいますが、色々とニュースも紹介し始めているのと、普段考えていることなんかもポストし始めていますね。結果、ポストを読んでくれた人たちと議論することもできたので非常に勉強になりました。


4月です。


2月ごろにデジタル庁の認証アプリについても色々と調べたり考えたりしていましたが、結果メディアの方々からもインタビューいただいたりもして、各種社会実装について深く考えた時期でもありました。個人的には新年度も重なったことで結構忙しかった記憶しかありません・・・


5月です。


4月〜6月はイベントも多かったので感想を書いていたのと、ちょうどNIST SP800-63-3の同期可能クレデンシャルに関する追補版が出た時期でしたね。

色々と読むものが多かった気がします。


6月です。


EICがあったので参加していましたね。来年もいかないと。。。

他にも色々なドキュメントが公開されたので読み込む系のポストが増えてきていますね。


7月です。

折り返し地点です。


そういえばこの時期にDIF Japanのキックオフがあったんですね。他にもDID/VCに関する論文を公開したりもしました。色々と暑い時期でした。


8月です。


パスキーに関する議論が色々とあった時期なので日本語にした公開したりしましたね。パスキー、まだまだ完全に普及した、という状態ではないので引き続き様子は見ていきたいと思います。

この時期はトラスト、とか本人確認や身元確認へのデジタルクレデンシャルの利用について割と真剣に考え始めている時期だったのでそういうニュアンスのポストもしていますね。まだまだ適当な実装が多いこの世の中なので、みんな真剣に考えていけるといいですね。


9月です。


SIDI HubワシントンDC会合もありましたし、ベルリンやケープタウンのレポートが公開された時期でもあったのでSIDI Hub三昧でした。他にもついにパンドラの箱を開けたAuthZEN WGが本格的に活動を始めた時期だったのでAuthorization APIもウォッチし始めた時期ですね。


10月です。


10月末に東京でSIDI Hub Summitを開催したので、その準備でかなり忙しかった時期です。月末〜月初はIIW〜IETFもありましたし。

国際イベントのハンドリングや準備は何度やっても良い経験になりますね。しんどいけど。


11月です。


リンク可能性の話はまだ解けていない課題の中でも議論がつきない話です。IIWでも何年も話題になっていますし、IETFのメーリングリストでも議論が何度も行われています。


12月です。ついに終わります。


台湾政府に呼ばれてWalletの話をしに行ったりもしましたし、今まさに読んでいるAAMVAのガイドラインが11月末に更新されたことを受け、読んでいきました。



ということであっという間に1年が経ってしまいました。


で、来年はどうするの?という話ですが、まぁ習慣化してしまったところなので今後も無理しない程度に書いていこうとは思いますが、適度に休む必要性も同時に感じているので毎日は描かないかなぁ、と思います。クォリティも落ちますしね。


ということでみなさん、良いお年を!






Patrick Breyer

Should I object to the electronic patient record (elektronische Patientenakte), and how do I do that?

If you have statutory health insurance, your health insurer will set up an electronic patient record for you by 15 January 2025 – unless you object. The same applies to private health insurers that voluntarily offer an electronic patient record. Objecting must not result in any disadvantages for you. The letters sent out by the insurers do not inform you adequately. Consider the following when making your decision:

1. Content and data security: Unless you object, every future visit to a doctor, every therapy and every medication will be stored in a centrally retrievable electronic patient record, including doctors' letters, discharge reports and diagnosis codes. This also covers information about addictions, psychotherapy and mental illnesses, abortions, sexually transmitted diseases and reproductive disorders (e.g. impotence), all of which is stored and aggregated. Central storage of all treatments in one electronic patient record gives you a single point of access to your patient data. On the other hand, several centralised European health systems have already been hit by large-scale data breaches in the past. Nothing has happened yet with the electronic patient record in Germany, but security researchers warn that criminals could access arbitrary patient records with little effort. If hacking, data theft or data loss does occur, far more information can be lost than without an electronic patient record.

2. Direct access: With the electronic patient record, the doctors treating you, your pharmacists, your physiotherapists, your care home and so on, both in Germany and in future elsewhere in the EU, can view your electronic patient record directly. They can also store it in their own practice software. The advantages: this data access can significantly speed up and improve your treatment and medication, for example by avoiding incorrect medication. Records no longer have to be requested first and are immediately available in an emergency. Duplicate examinations are avoided. A possible disadvantage is that treatments, medications, etc. that are not relevant to other medical providers can be retrieved and viewed (e.g. your dentist or pharmacist could learn about your psychotherapy or sexually transmitted disease).

Without an electronic patient record you will, as you know it today and may have experienced, be asked about previous treatments, medications, intolerances, allergies and so on on an intake form. Where necessary, further records such as MRI scans are requested. There is also a paper-based, nationally standardised medication plan that you can have printed out; it can, however, also be stored in the electronic patient record.

With an electronic patient record you can completely exclude individual (medical) providers from accessing your record, either verbally, via an app or via your health insurer. This comes with certain challenges. You also have the right to object, towards individual providers, to the transfer of specific pieces of information into the electronic patient record. You can also have individual documents deleted from your electronic patient record afterwards – again, unfortunately, rather complicated. Bear in mind that deletions can have disadvantages for your treatment.

3. Disclosure to third parties: Your electronic patient record can also be passed on, for certain purposes, to policymakers and public authorities, researchers and commercial companies (e.g. technology companies and the pharmaceutical industry). The opportunities: anonymised analyses of health data enable significant progress in containing pandemics and endemic diseases, e.g. through early detection, as well as in research, e.g. in the development of safer medicines. The risks: data is only partially anonymised before being passed on. In other cases your name and insurance number are removed, but you remain uniquely identifiable through other attributes (so-called pseudonymisation). Third parties may thus manage to link your medical history back to you.

Note: Even if you use the electronic patient record, you can separately object to its disclosure to third parties from 15 January 2025 at the earliest (via your health insurer's app or by letter to its ombuds office).

Bottom line: It is ultimately your decision whether or not you want an electronic patient record.

If you want an electronic patient record, you do not need to do anything – it will be set up automatically from 15 January 2025.

If you decide against an electronic patient record, the Widerspruchs-Generator (objection generator) helps you create an objection letter. You can also object via your health insurer's app or – with some insurers – via their website.

You can object from the moment your health insurer has informed you of your right to object. An objection is also possible at any time after the 6-week deadline mentioned in the letter has passed; in that case, any data already stored will be deleted from the electronic patient record again. Should you later change your mind, you can withdraw your objection at any time and receive an electronic patient record going forward.

Further information from the GKV-Spitzenverband (coordinated with the Federal Data Protection Commissioner)

How the Pirates prevented a mandatory electronic patient record

Sunday, 29. December 2024

IdM Laboratory

AAMVAのMobile Drivers License Implementation Guidelinesを読む⑧


 こんにちは、富士榮です。

引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。


まだまだプライバシーの章が続きます。

4.5. DELETING MDL INFORMATION FROM A DEVICE

An mDL holder must have the capability to delete the mDL holder’s mDL from the mDL holder’s device. Such deletion:

Must delete all mDL information, log information, and any metadata (e.g. settings) that could impart information about the deleted mDL or its use.
Must not require approval by the Issuing Authority.
Must be an option available to an mDL holder on the mDL device.
Must be possible when the mDL device is offline.
Should be available to an mDL holder via a request to the Issuing Authority (see below).

mDL保持者は、mDL保持者のデバイスからmDL保持者のmDLを削除する機能を持たなければならない。

すべてのmDL情報、ログ情報、および削除されたmDLまたはその使用に関する情報を与える可能性のあるメタデータ(設定など)を削除すること 発行機関の承認を必要としないこと。 mDLデバイス上でmDL保持者が利用可能なオプションであること。 mDLデバイスがオフラインのときに可能であること。 発行機関(下記参照)へのリクエストにより、mDL保持者が利用可能であること。

 デバイスからmDL情報を削除する際の話です。基本的に利用者が自身で削除をすることができること(その際に発行者の承認や接続が不要であること)が求められています。難しいところですね。この章の中で発行したmDL関連情報が適切に扱われていること発行機関が責任をもって確認することが求められる一方で利用者の権利も守らないといけないわけです。まぁ、最低限ウォレット開発者が悪意を持って利用者のデータを扱えないように、というところまでは守りましょう、ってところですね。

Should an mDL device (i.e. a device containing an mDL) be lost or get stolen, it could be beneficial for the mDL holder to have the mDL remotely deleted (or temporarily suspended) by the Issuing Authority. Besides the obvious advantage to the mDL holder, other considerations apply too:

The mDL holder’s request must be authenticated. It must not be possible for someone other than the mDL holder or the Issuing Authority to delete (or suspend) an mDL.
A “push” capability (from the Issuing Authority to the mDL device) is needed for immediate deletion (or suspension) (see section 6).
Successful deletion (or suspension) depends on network connectivity to the mDL device.
The mDL will automatically become unusable (although potentially not inaccessible) when the MSO expires (see section 6).

mDLデバイス(mDLを含むデバイス)が紛失または盗難に遭った場合、発行機関によってmDLがリモートで削除(または一時的に停止)されることは、mDL保有者にとって有益です。mDL保有者にとっての明らかな利点の他に、他の考慮事項も適用されます:

mDL保有者の要求は認証されなければならない。mDL保持者の要求は認証されなければならない。mDL保持者または発行機関以外の者がmDLを削除(または一時停止)することはできない。 即時削除(または一時停止)には、(発行局からmDLデバイスへの)「プッシュ」機能が必要である(セクション6参照) 削除(または一時停止)の成功は、mDLデバイスへのネットワーク接続に依存します。 MSOの有効期限が切れると、mDLは自動的に使用できなくなる(アクセスできなくなる可能性はないが)(セクション6参照)。

やはりスマートフォンベースの話なので当然紛失や盗難に関する考慮は十分に必要です。

mDLを利用するときはちゃんと認証するのは当たり前として、発行者から発行済みのクレデンシャルをプッシュ等を使って削除できるようにする、また有効期限切れたらウォレット側で自動的に使えなくする、などもちゃんと気を使う必要があります。

In addition, mDL deletion may be needed when an mDL holder wants to transfer an mDL to a new device, when a person moves to another jurisdiction, or when a person dies. 

Issuing Authorities should weigh the benefits and challenges associated with a remote delete (or suspension) capability when considering its implementation (see Appendix A).

An mDL holder must have the capability to delete activity log information (as defined in section 4.4) the mDL holder may previously have elected to maintain. It is recommended that this capability allows selective deletion (i.e. specific log entries, rather than only an “all or nothing” option).

さらに、mDLの削除は、mDL保持者が新しいデバイスにmDLを移したい場合、別の管轄区域に移動する場合、またはmDL保持者が死亡した場合に必要となる可能性がある。

発行局は、リモート削除(または一時停止)機能の導入を検討する際、その利点と課題を比較検討する必要がある(付録A参照)。

mDL保持者は、mDL保持者が以前に保持することを選択した活動ログ情報(第4.4項に定義)を削除する機能を持たなければならない。この機能により、選択的な削除(すなわち、「全削除」オプションのみではなく、特定のログエントリーの削除)を可能にすることが推奨される。

mDLを含めデジタルデータを持ち主だけが制御できるようにするのは大切な一方で死亡した場合などの考慮は非常に重要です。マイナンバーカードと保険証の統合をした結果、意識のない救急患者の保険者資格の確認ができない、なんて話も聞きますが、この辺りは例外処理も含めてちゃんとプロセス設計をしておくのが大切です。

また、ログの削除に関しても選択的に削除することができるようにすべきである、などかなり細かくガイドされている感じがあります。

4.6. NO TRACKING

“Tracking” is the act of compiling information about an mDL holder and/or an mDL holder’s activity. Any stakeholder (including Issuing Authorities, technology providers, service providers and mDL verifiers) must not track mDL holders or the usage of any mDL except as required by law (e.g. when a drug store dispenses products containing ephedrine). 

「トラッキング」とは、mDL保持者および/またはmDL保持者の活動に関する情報を収集する行為を指します。いかなるステークホルダー(発行局、テクノロジープロバイダー、サービスプロバイダー、mDLベリファイアーを含む)も、法律で義務付けられている場合(ドラッグストアがエフェドリンを含む製品を調剤する場合など)を除き、mDL保持者やmDLの使用状況を追跡してはなりません。

トラッキングの禁止に関する条項ですね。法的根拠なくトラッキングしてはならない、と。 

Tracking by an mDL verifier can be performed as soon as two different mDL transactions can be linked to each other. This can be countered by designing the solution to maximize anonymity (“characteristic of information that does not permit a personally identifiable information principal to be identified directly or indirectly”, from ISO/IEC 29100) and to maximize unlinkability. Anonymity can be hampered by metadata that may be associated with multiple mDL transactions, e.g. hardware or network addresses, long-term public keys, or session tokens. Consequently, Issuing Authorities must minimize the sharing of static or long-lived metadata. 

mDL検証者による追跡は、2つの異なるmDLトランザクションが互いにリンクされるとすぐに実行できる。これは、匿名性(「個人を特定できる情報主体が直接的または間接的に特定されない情報の特性」、ISO/IEC 29100より)を最大化し、リンク不能性を最大化するようにソリューションを設計することで対抗できる。匿名性は、複数のmDLトランザクションに関連するメタデータ(ハードウェアやネットワークアドレス、長期公開鍵、セッショントークンなど)によって妨げられる可能性がある。そのため、発行局は静的または長期的なメタデータの共有を最小限に抑える必要がある。

これはSD-JWT-VCでも同じ議論がなされていますが、Verifierの結託によるリンク可能性の話ですね。mdocにおける選択的開示については基本的にSD-JWTと類似の考え方なので単体ではリンク可能性に対する対応はできなかったはずです。そのため匿名性を担保するソリューションを別途検討することが必要とされています。 

Although pre-matched transactions hold the promise of maximizing anonymity at a user data level, anonymity in post-matched transactions is limited since the portrait image is always shared. For these transactions it is recommended that Issuing Authorities pursue regulatory protection against tracking by mDL verifiers.

事前照合取引は、ユーザー・データ・レベルでの匿名性を最大化することが期待できるが、事 後照合取引では肖像画像が常に共有されるため、匿名性は制限される。このような取引の場合、発行機関はmDL検証者による追跡を防ぐため、規制による保護を追求することが推奨されます。

Solutions using the server retrieval method also pose challenges in preventing tracking. As per design, the Issuing Authority is involved in real time each time an mDL is used by the mDL holder. The Issuing Authority would technically be able to keep track of when an mDL holder uses his/her mDL and keep track of what data is shared. Based on IP address analysis the Issuing Authority would also be able to track an mDL holder’s physical location to some extent. This can be mitigated by placing regulatory limitations on the Issuing Authority11, and will be of value to the extent an mDL holder trusts the Issuing Authority’s adherence to the regulatory limitations. Consequently, Issuing Authorities considering a server retrieval solution should carefully weigh the advantages of this approach against its privacy implications. 

サーバーリトリーバルを使用するソリューションは、追跡を防ぐという課題もある。設計の通り、発行局はmDL保有者がmDLを使用するたびにリアルタイムで関与します。発行局は技術的に、mDL保有者がいつmDLを使用し、どのようなデータが共有されたかを追跡することができます。IPアドレスの分析に基づき、発行局はmDL保持者の物理的な所在地をある程度追跡することもできます。この問題は、発行局に規制上の制限を設けることで緩和することができます11 。そのため、発行局はサーバー検索ソリューションを検討する際、このアプローチの利点とプライバシーへの影響を慎重に比較検討する必要があります。

サーバーリトリーバルは基本的に従来のフェデレーションモデルと同様に発行者への問い合わせが発生するため、トラッキング耐性は低いとされます。この辺りはエコシステムのサイズや参加しているエンティティの関係性などを踏まえて設計していかないといけないポイントですね。 

Since the activity log (see section 4.4) contains a full record of when and potentially where an mDL was used, it is reiterated that access to the activity log must not be possible by anyone other than the mDL holder. 

アクティビティログ(4.4項参照)には、mDLがいつ、どこで使用されたかについての完全な記録が含まれるため、mDL保持者以外の者がアクティビティログにアクセスできないようにする必要があります。

 

今日もこの辺りにしておきましょう。


Saturday, 28. December 2024

IdM Laboratory

AAMVAのMobile Drivers License Implementation Guidelinesを読む⑦


こんにちは、富士榮です。

引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。


引き続き4章のプライバシーの部分を読んでいきます。

4.3. PROTECTING DATA

It is up to Issuing Authorities to ensure that all mDL data stored on the mDL holder’s device is adequately protected. As standards in this respect are still under development, each Issuing Authority should take great care to ensure that the design of its solution supports this requirement. At minimum, Issuing Authorities must adhere to the following:

発行局は、mDL保持者のデバイスに保存されたすべてのmDLデータが適切に保護されていることを確認する必要があります。この点に関する標準はまだ開発中であるため、各発行局はソリューションの設計がこの要件を確実にサポートするよう、細心の注意を払う必要があります。発行局は、最低限以下の事項を遵守しなければなりません:

 原文でも太字で強調されているとおり、mDL App(ウォレット)に保持されているmDLデータが保護されていることを発行者が確認することが求められています。この責任分解の考え方は非常に重要ですね。欧州でもそうですが発行者となる国が認定したウォレットが必要になるのはこのような背景からきていると思います。しかしこうなるとApple WalletやGoogle Walletに格納されたクレデンシャルが適切に管理されていることをどこまで国は確認できるんだろうか、、、と気になってきますね。

具体的な要件が続きます。

mDL information must be stored in encrypted form Private key material must be protected in a security module designed for the safekeeping of key material. The mDL holder must be authenticated when any mDL data is accessed or released, at a point in time that is sufficiently close (as determined by the Issuing Authority) to the time of the access or release. Issuing Authorities that want to leverage device unlocking to protect mDL data must include measures to ensure that this feature has not been disabled by the mDL holder (also see section 7). Example: If an app authenticates the mDL holder when the mDL app is accessed, an Issuing Authority should set a time limit after which authentication of the mDL holder is again required before the release of mDL data.  mDL data must be released to an mDL verifier only via the following: an ISO/IEC 18013-5 compliant interface. an ISO/IEC 18013-7 compliant interface. As an alternative to ISO/IEC 18013-7, an over-the-Internet interface as envisioned in Appendix C that: Complies with Appendix C items 2.b and 2.f, and  Has been approved by the AAMVA Identity Management Committee. For sharing mDL data between apps on a phone via an interface other than those listed above, an interface compliant with Appendix C items 2.b and 2.f and that has been approved by the AAMVA Identity Management Committee  mDL情報は暗号化された形で保存されなければならない。 秘密鍵は、鍵の保管のために設計されたセキュリティ・モジュールで保護されなければならない。 mDL データがアクセスまたは公開される際には、アクセスまたは公開の時点に(発行局が決定する)十分 に近い時点で、mDL 所持者が認証されなければならない。デバイスのロック解除を活用してmDLデータを保護したい発行局は、この機能がmDL保持者によって無効化されていないことを保証する手段を含める必要があります(セクション7も参照)。 例 アプリがmDLアプリにアクセスしたときにmDLの所有者を認証する場合、発行局は、mDLデータの公開前にmDLの所有者の認証が再度必要となる制限時間を設定する必要があります。 mDLデータは、以下を経由してのみmDL検証者に公開されなければならない: ISO/IEC 18013-5に準拠したインターフェース。 ISO/IEC 18013-7準拠のインターフェース。 ISO/IEC 18013-7 に代わるものとして、付録 C で想定されているインターネット上のインター フェース: 付録Cの項目2.bおよび2.fに準拠し、かつ AAMVA アイデンティティ管理委員会によって承認されている。 上記以外のインタフェースを介して携帯電話のアプリ間で mDL データを共有する場合は、付 録 C 項目 2.b および 2.f に準拠し、AAMVA アイデンティティ管理委員会によって承 認されたインタフェース。

かなり細かく要件が決まってますね。EUでも鍵をどこに置くかは色々と議論がありましたが、AAMVAではセキュリティ・モジュールになってますね。クラウドベースのHSMとかは選択肢に入らないのかな?あと、Holderのプレゼンスや認証のタイミング、ウォレットのアンロックが無効化されていないことの確認など色々とガイドがありますがどうやって確認するんだ??って気もしますが。こうなってきるとやはり専用ウォレットみたいな話になってきそうですねぇ。。

Note 1: This requirement prohibits the sharing of mDL data using the mDL as a “flash pass” (i.e. by showing an image of a credential to a verifier); also see section 8.

注 1:この要件は、mDL を「フラッシュ・パス」(すなわち、検証者にクレデンシャルの画像を見せること)として使用して mDLデータを共有することを禁止している。

これも重要ですね。以前紹介したパートにも書いてありましたが基本的にmDLは目視で確認するためのものではない、ということですね。

4.4. ACTIVITY LOG

The mDL app must be capable of maintaining an activity log. The mDL app must allow the mDL holder to decide if an activity log must be maintained or not. It is recommended that the mDL app requires the mDL holder to explicitly choose for or against keeping an activity log upon setup (i.e. no defaults, and in addition to being able to change this subsequently). The activity log and related settings must be accessible only to the mDL holder (also see section 4.6). The activity log must allow for the recording of all mDL transactions. In this context, an mDL transaction is the sharing of information by an mDL holder with an mDL verifier, as well as any provisioning, update, or communication action between the mDL and the Issuing Authority. At minimum, the following must be recordable for any transaction: Transaction timestamp; type of transaction (e.g. update or data sharing); in case of a data sharing transaction the data that was shared, and to the extent that it can be gathered, information about the identity of the mDL verifier. It is recommended that the mDL app provides the mDL holder the capability to select what types of activities are recorded in the activity log (i.e. rather than only an “all or nothing” option). It is also recommended that the mDL app includes functionality to help the mDL holder monitor and manage the size of the activity log within the capabilities of the mDL holder’s device. The mDL app must provide an option to the mDL holder to export the activity log.

mDLアプリは、アクティビティログを維持できなければならない。mDLアプリは、アクティビティログを保持するかどうかをmDL保持者が決定できなければならない。mDLアプリは、セットアップ時に、mDL保有者がアクティビティログの保持の可否を明示的に選択することを推奨します(すなわち、デフォルトではなく、さらにその後変更できるようにします)。アクティビティログおよび関連する設定は、mDL保持者のみがアクセス可能でなければなりません(4.6項も参照)。アクティビティログは、すべてのmDLトランザクションの記録を可能にしなければならない。ここでいう mDL トランザクションとは、mDL 保持者が mDL 検証者と情報を共有すること、および mDL と発行局との間でプロビジョニング、更新、または通信を行うことである。どのようなトランザクションでも、最低限、以下の情報は記録可能でなければならない: トランザクションのタイムスタンプ、トランザクションのタイプ(更新またはデータ共有など)、データ 共有トランザクションの場合は共有されたデータ、および収集可能な範囲で mDL 検証者の身元に関する情報。mDLアプリは、活動ログに記録される活動の種類を選択する機能をmDL保持者に提供することが推奨される(すなわち、「all or nothing」オプションのみではなく)。また、mDLアプリには、mDL保持者がmDL保持者のデバイスの能力の範囲内でアクティビティログのサイズを監視および管理するのに役立つ機能が含まれることが推奨されます。mDLアプリは、mDL保持者がアクティビティログをエクスポートできるオプションを提供する必要があります。

次はログの話題です。アクティビティログはプライバシーの観点からも非常に重要なものですので、Holderが完全に制御できるものである必要があることが強調されています。この辺りもウォレットソフトウェアを開発する際は留意したいポイントですね。
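To make the quoted requirements concrete, here is a purely illustrative Go sketch (hypothetical type and field names, nothing defined by the guideline itself) of the minimum fields an activity-log entry would need to record:

package main

import "time"

// ActivityLogEntry is an illustrative shape for one mDL activity-log record,
// covering the minimum recordable items named in the guideline.
type ActivityLogEntry struct {
    Timestamp       time.Time // when the transaction occurred
    TransactionType string    // e.g. "update" or "data sharing"
    SharedElements  []string  // identifiers of data elements released, if any
    VerifierInfo    string    // whatever can be gathered about the mDL verifier's identity
}

func main() {
    // Example entry for a data-sharing transaction (sample values only).
    entry := ActivityLogEntry{
        Timestamp:       time.Now(),
        TransactionType: "data sharing",
        SharedElements:  []string{"age_over_21", "portrait"},
        VerifierInfo:    "Example retail point of sale",
    }
    _ = entry
}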

If an Issuing Authority allows an mDL holder to hold the same mDL on more than one device, the activity log settings on each device should be independent of each other. It is recommended that there be no synchronization of the activity log or activity log settings between the two devices. Any synchronization features that are provided must adhere to the following:

Synchronization must be an option that can be enabled or disabled by the mDL holder. The process to enable synchronization must require the mDL holder to prove access to both devices.  Synchronization must occur directly between the devices in question. A synchronization action must not give visibility of any of the following to anyone other than the mDL holder, or to anyone other than entities that already know that the mDL holder has an mDL on more than one device:

Activity log information. Activity log settings. The fact that a synchronization action/selection took place Any information that may convey that the mDL holder has an mDL on more than one device. 

発行局がmDL保持者に複数のデバイスで同じmDLを保持することを許可する場合、各デバイスのアクティビティログ設定は互いに独立しているべきである。2つのデバイス間でアクティビティログまたはアクティビティログ設定の同期は行わないことが推奨される。提供される同期機能は、以下に従わなければならない:

同期は、mDL保持者が有効または無効にできるオプションでなければならない。同期を有効にするプロセスでは、mDL保持者が両方のデバイスへのアクセスを証明する必要があること。 同期化は、当該デバイス間で直接行われなければならない。同期化アクションは、mDL保持者以外、またはmDL保持者が複数のデバイスにmDLを持つことを既に知っているエンティティ以外の者に、以下のいずれかを可視化してはならない:

アクティビティログ情報。 アクティビティログの設定。 同期アクション/選択が行われた事実。 mDL保持者が複数のデバイスでmDLを使用していることを伝える可能性のあるあらゆる情報。

 複数デバイスをHolderが使っている場合のログの同期の話です。これもせっかくコンテキストによってデバイスを分けているにも関わらずログが同期されてしまうとコンテキスト違反が起きてしまうことになるのでちゃんと分けましょう、という話ですね。


今日はこのあたりで。

 

 

 

 

 

 

 

 



Friday, 27. December 2024

IdM Laboratory

AAMVAのMobile Drivers License Implementation Guidelinesを読む⑥

こんにちは、富士榮です。
引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。

ようやく4章の「PRIVACY AND SECURITY」に入ります。4章も結構長いんですよね。。。ただ、結構重要な章なので細かくみていきたいと思います。
4.1. INTRODUCTION
The privacy of an mDL holder has been paramount in the mDL design process from the start. Care was and is being taken in all the work to ensure that methods and means are available to protect mDL holder privacy. The subsections that follow elaborate in more detail on different aspects of privacy protection and security.

mDLの設計プロセスでは、当初からmDL保持者のプライバシーが最優先されてきました。すべての作業において、mDL保持者のプライバシーを保護する方法と手段が利用できるよう、細心の注意が払われています。以下のサブセクションでは、プライバシー保護とセキュリティのさまざまな側面について詳しく説明します。

4.2. DATA MINIMIZATION AND SELECTIVE DATA RELEASE

A primary component of privacy involves the ability of an mDL holder to only share some information. This is achieved by two related but distinct measures:

Data minimization: A decision by an Issuing Authority to record fractional information about an attribute in an mDL, thus empowering an mDL holder to share less information than would otherwise have been the case. For example, an Issuing Authority can decide to include9 the optional age_birth_year field in an mDL in addition to the (mandatory) date of birth. This will allow the mDL holder to share only a birth year as opposed to a date of birth. Another example would be to include the resident city in addition to a full address.  Selective data release: Allowing an mDL holder to decide which of the data fields requested by an mDL verifier will be released to the Verifier.

As noted in section 2, it is important for Issuing Authorities to understand that ISO/IEC 18013-5 primarily specifies interfaces. The interfaces support both data minimization and selective data release. It is recommended that Issuing Authorities implement and provision as many of the optional minimized data elements, defined in ISO/IEC 18013-5 and in this document, as possible.

プライバシーの主要な構成要素は、mDL保持者が一部の情報のみを共有する能力である。これは、2つの関連するが異なる手段によって達成される:

データの最小化:データの最小化:発行局が、mDLに属性情報の一部を記録することを決定すること。例えば、発行局はmDLに、(必須である)生年月日に加え、オプションのage_birth_yearフィールドを含める9 ことができます。これにより、mDLの所持者は、生年月日ではなく、生年のみを共有することができます。他の例としては、完全な住所に加えて、居住地の市町村を含めることができる。 選択的データ公開:mDL保有者が、mDLベリファイアから要求されたデータフィールドのうち、どのフィールドをベリファイアに開示するかを決定できるようにすること。

セクション2で述べたように、発行局はISO/IEC 18013-5が主にインタフェースを規定していることを理解することが重要である。インターフェースはデータの最小化と選択的なデータ公開の両方をサポートする。発行局は、ISO/IEC 18013-5 および本文書で定義されているオプションの最小化データエレメントを可能な限り実装し、提供することが推奨される。

Privacy by designということです。ISO/IEC 18013-5ではデータの最小化と選択的情報開示の両方をサポートしているので、本書の原則を踏まえてちゃんと実装しなさいよ、と。

 

In addition, Issuing Authorities must ensure that mDL apps to which they provision data support at least the following: 

In case the request was received electronically, the mDL app must clearly convey what data was requested, and whether the mDL verifier intends to retain the information. If the request is presented in summarized form in the user interface (e.g. “Identity and driving privilege data” as opposed to “First Name, Last Name, DOB, Driving privileges”), means must be available to give the mDL holder visibility of the details of such a summarized form, both before and during a transaction. The mDL app must provide the mDL holder full control over which data elements to share with the mDL verifier.  ISO/IEC 18013-5 requires the portrait image to be shared if the portrait was requested and if any other data element is released (to enable the mDL verifier to tie the mDL information to the person presenting the information). The app must support a graceful and informed exit from the request if the holder opts not to share the portrait image when requested. If blanket sharing options are used, measures must be implemented to ensure that the mDL holder remains aware of what is being released when such an option is in effect. An mDL holder must also be able to opt out of or cancel any blanket sharing function.

Issuing Authorities (and their app providers) are encouraged to devise solutions that will minimize transaction friction without compromising the above requirements.

さらに、発行局はデータを提供するmDLアプリが少なくとも以下をサポートしていることを確認する必要があります:

要求が電子的に受信された場合、mDLアプリは、どのようなデータが要求されたのか、またmDLベリファイアがその情報を保持する意図があるかどうかを明確に伝えなければならない。要求がユーザーインターフェースに要約された形で提示される場合(例えば、「姓名、DOB、運転権限」ではなく「身分証明書および運転権限データ」)、取引の前および取引中の両方において、mDL保有者がそのような要約された形の詳細を可視化できる手段を利用できなければなりません。 mDLアプリは、どのデータ要素をmDLベリファイアと共有するかについて、mDL保持者に完全なコントロールを提供しなければならない。 ISO/IEC 18013-5では、肖像画が要求された場合、およびその他のデータ要素が公開された場合、肖像画を共有することが要求されています(mDLベリファイアがmDL情報を提示者に紐付けることを可能にするため)。アプリは、所持者が要求されたときに肖像画を共有しないことを選択した場合、その要求から 潔く、かつ通知された形で抜けることをサポートしなければならない。 包括的共有オプションが使用される場合、そのようなオプションが有効であるとき に、mDL保有者が何が公表されるかを確実に認識し続けるための措置が講じられなけれ ばならない。また、mDLの保有者は、包括的共有機能をオプトアウトまたはキャンセルできなければならない。

発行局(およびそのアプリプロバイダ)は、上記の要件を損なうことなく、取引の摩擦を最小化するソリューショ ンを考案することが推奨される。 

データを要求・共有する目的・意図を明確に伝える、そして提供しないことをユーザが選択できるようにする、オプトアウトできるようにもする、と。どれも基本的なことではありますが実装者にとってはどのようなUXを提供するかが腕の見せ所になると重要なポイントの一つでもあります。この辺りは日本でもウォレット開発をする方々も参考にすべき点だと思います。


細かくみていこうと思うので少し細切れにしていきます。

ということで今日はここまで。

 

 

 

 


Werdmüller on Medium

Creating a framework for living well


Drawing up a North Star for building my life

Continue reading on Medium »

Thursday, 26. December 2024

Hyperonomy Digital Identity Lab

Web 7.0 Foundation: SDO Accreditation


The Web 7.0 Foundation is a relatively new organization, incorporated in Canada on May 1, 2023². It focuses on developing specifications and standards for a unified software and hardware ecosystem aimed at building resilient, trusted, and decentralized systems¹³.

Source: Conversation with Copilot, 12/27/2024

(1) Web 7.0 Foundation | Canada Corporation Directory. https://www.canadacompanyregistry.com/companies/web-70-foundation/.
(2) Web7Foundation/Specifications: Web 7.0 Foundation Specifications – GitHub. https://github.com/Web7Foundation/Specifications.
(3) Web 7.0 Overview – YouTube. https://www.youtube.com/watch?v=UM1__hsa2Zk.

Standards Development Organizations (SDOs) typically seek accreditation to demonstrate their competence and adherence to specific procedures and standards. One prominent accreditation body is the International Accreditation Service (IAS), which provides accreditation based on criteria such as AC803¹². This accreditation involves an assessment of the SDO’s standardization process, procedures, and management system².

In the United States, the American National Standards Institute (ANSI) accredits SDOs that follow a consensus-based process³. This ensures that the standards developed are the result of a transparent, balanced, and inclusive process.

Accreditation helps SDOs validate their ability to consistently develop high-quality normative documents, enhancing trust and credibility in their standardization efforts².

Source: Conversation with Copilot, 12/27/2024

(1) Standards Development Organizations – International Accreditation …. https://www.iasonline.org/services/standards-development-organizations/.
(2) IAS Creates Accreditation Program for Standard Development Organizations. https://www.iasonline.org/ias-creates-accreditation-program-for-standard-development-organizations/.
(3) Organizations Developing Standards – Standards Coordinating Body. https://www.standardscoordinatingbody.org/sdos/.
(4) Accreditation Criteria for Standards Development Organizations (AC803). https://www.iasonline.org/resources/accreditation-criteria-for-standards-development-organizations-ac803/.


IdM Laboratory

AAMVAのMobile Drivers License Implementation Guidelinesを読む⑤

こんにちは、富士榮です。 引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。 まだ3章が続きますが今回で3章は終わりです。 3.6. IACA ROOT CERTIFICATE In Table B.1 of ISO/IEC 18013-5, on the table row for the “ISSUER” certificate component, replace: stateOrProvinceName is optional. If this element is present, the element shall also be present in the end-entity certificates and hold the same va

こんにちは、富士榮です。

引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。


まだ3章が続きますが今回で3章は終わりです。


3.6. IACA ROOT CERTIFICATE

In Table B.1 of ISO/IEC 18013-5, on the table row for the “ISSUER” certificate component, replace:

stateOrProvinceName is optional. If this element is present, the element shall also be present in the end-entity certificates and hold the same value. 

with the following:

stateOrProvinceName is mandatory. The element shall also be present in the end-entity certificates and hold the same value.  

ISO/IEC 18013-5 の Table B.1 の 「ISSUER 」証明書コンポーネントの表行で、以下を置き換える:

stateOrProvinceName はオプションである。この要素が存在する場合、この要素はエンドエンティティ証明書にも存在し、同じ値を保持するものとする。

を以下のように置き換える:

stateOrProvinceName は必須である。この要素は、エンド・エンティ ティティの証明書にも存在し、同じ値を保持するものとする。


やはりモバイル運転免許証にISO/IEC 18013-5を当てはめるとき、ちょいちょい書き換えするところがありますね。


3.7. VERSIONING

The data structure for the 2D barcode in the AAMVA Card Design Specification contains a version number. This enables readers to always know which version of the data structure is present on a credential since the full data string is always read. This is not true for an mDL. An mDL reader has to explicitly request individual data elements, and does not know in advance which data elements are present or what version of a data set is supported.

AAMVA カード設計仕様の 2D バーコードのデータ構造には、バージョン番号が含まれている。これにより、完全なデータ文字列が常に読み取られるため、読み手はデータ構造のどのバージョンがクレデンシャルに存在するかを常に知ることができる。これは mDL には当てはまらない。mDL リーダは個々のデータ要素を明示的に要求する必要があり、どのデータ要素が存在する か、またはデータ・セットのどのバージョンがサポートされているかを事前に知ることはできない。

One approach to address this is to add a “version” data element to the AAMVA namespace. To be useful an mDL reader would have to obtain this data element before making a subsequent request for additional data. Allowing the release of this data element without mDL holder approval is possible; requiring approval may confuse an mDL holder and increase transaction friction. Regardless, the 2-step process would add complexity (an mDL reader would still have to allow for not receiving a response to such a request) and add time to the transaction. Such an approach would also be unique to mDL in North America.

これに対処する1つの方法は、AAMVA名前空間に「バージョン」データ要素を追加することである。mDLの読者は、追加データを要求する前にこのデータ要素を取得しなければならない。mDL保持者の承認なしにこのデータ要素の公開を許可することは可能です。承認を必要とすると、mDL保持者を混乱させ、取引の摩擦を増大させる可能性があります。いずれにせよ、2段階のプロセスは複雑さを増し(mDLリーダーは、そのような要求に対する返答を受け取らないことを許容しなければならない)、取引に時間を要する。また、このようなアプローチは北米のmDLに特有のものである。

Instead, versioning of the AAMVA mDL data element set is achieved as follows:

If needed, create a new identifier. This applies if there is a change to an existing data element, or if a completely new data element is added. Set a date by which mDL apps and mDL readers must support the new identifier (Dayx in Figure 2). “Support” as used here means that an mDL app must allow an Issuing Authority to provision the identifier into the app, and that an mDL reader must be able to read the new identifier.  For the old identifier, set a date by which mDL apps and mDL readers do not need to support the old identifier anymore (Dayy in Figure 2). This is also the date by which Issuing Authorities must be provisioning the new identifier.

代わりに、AAMVA mDLデータ要素セットのバージョニングは、以下のように行われる:

必要に応じて、新しい識別子を作成する。これは、既存のデータ要素に変更がある場合、またはまったく新しいデータ要素が追加される場合に適用されます。mDLアプリとmDLリーダーが新しい識別子をサポートしなければならない期日を設定します(図2のDay x)。ここでいう「サポート」とは、mDLアプリが発行機関に識別子をアプリにプロビジョニングできるようにすること、およびmDLリーダーが新しい識別子を読み取れるようにすることを意味します。 旧識別子については、mDLアプリとmDLリーダーが旧識別子をサポートする必要がなくなる日付を設定します(図2のDay y)。これは、発行局が新しい識別子をプロビジョニングする期日でもあります。 

Figure 2 also reflects other requirements on both the mDL reader and the mDL app. The main advantage of the approach illustrated in Figure 2 is that, in case of changing an existing identifier, the Issuing Authority will have the time between the two dates to provision the new identifier (and deprecate the old identifier) to all its mDLs with the knowledge that mDL readers should be able to accommodate either identifier (the highlighted option in Figure 2). In the case where a new identifier is added (i.e. when there is no change to an existing identifier), the two dates may be on the same day.

図2には、mDLリーダーとmDLアプリの両方に対するその他の要件も反映されています。図2に示されたアプローチの主な利点は、既存の識別子を変更する場合、発行局は2つの日付の間に、mDLリーダーがどちらの識別子にも対応できることを前提に、すべてのmDLに新しい識別子を提供する(古い識別子を廃止する)時間を持つことができることです(図2のハイライトされたオプション)。新しい識別子が追加される場合(既存の識別子に変更がない場合)、2つの日付は同じ日になる可能性があります。

Ideally mDL readers would ask for the old identifier up to Dayy and for the new identifier thereafter. However, it is likely that readers would, at least around the change date, ask for both. It is also likely that an mDL would, especially around Dayy, include both identifiers. How the request is presented to the mDL holder, and how approval to share is administered, is left to implementers. Nevertheless, a simple approach could be for the mDL to present only one request, for the new identifier, to the mDL holder.

理想的には、mDLの読者はDay yまでは旧識別子を、それ以降は新識別子を要求するだろう。しかし、少なくとも変更日前後には、読者は両方の識別子を要求すると思われる。また、mDLは、特にDayyの前後には、両方の識別子を含むと思われる。どのようにリクエストをmDL保持者に提示し、どのように共有の承認を行うかは、実装者に委ねられている。とはいえ、単純なアプローチとしては、mDLがmDL保持者に提示する要求は、新しい識別子のための1つのみである。


バージョニングに関するコンセプトがちゃんとしていますね。リードタイムをうまく作ってスムーズに移行できる様にすることができる様にしています。


3.8. ISSUING AUTHORITY SPECIFIC DATA
ISO/IEC 18013-5 allows for the creation of additional namespaces, in like manner as the AAMVA namespace defined in this document (see clause 7.2.8 in ISO/IEC 18013-5). Issuing Authorities can use this mechanism to add additional fields to an mDL. The Issuing Authority would be responsible for communicating such an additional namespace to mDL verifiers that need to be able to read the Issuing Authority-specific data. Note: ISO/IEC 18013-5 also lends itself to being adopted for the issuing of credentials separate from an mDL, for example fishing licenses, health credentials, or watercraft licenses. 

ISO/IEC 18013-5では、本文書で定義されているAAMVA名前空間と同様に、追加の名前空間を 作成することができる(ISO/IEC 18013-5の7.2.8項参照)。発行局はこのメカニズムを使用して、mDLにフィールドを追加できる。発行局は、発行局固有のデータを読み取る必要のあるmDL検証者に、このような追加名前空間を伝達する責任を負う。

注:ISO/IEC 18013-5 は、漁業免許証、健康証明書、水上バイク免許証など、mDL とは別のクレデンシャルの発行にも採用できる。


今回はここまでです。次は4章です。



Wednesday, 25. December 2024

IdM Laboratory

AAMVAのMobile Drivers License Implementation Guidelinesを読む④


こんにちは、富士榮です。

引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。



引き続き3章を読んでいきます。

3-3. PORTRAIT IMAGE

The portrait image is the primary means by which an mDL is matched to the person presenting the mDL in an attended transaction. The portrait image therefore needs to be of suitable quality for this purpose. ISO/IEC 18013-5 requires the portrait to comply with Annex D of ISO/IEC 18013-2:2020, which in turn requires the portrait image to be at least 192 pixels wide and 240 pixels high. In addition, ISO/IEC 18013-2 requires portrait images intended for automated face recognition to comply with ISO/IEC 19794-5, which among other requirements requires 90 pixels between the centers of the eyes. However, it should be noted that these requirements were created in the context of storage on a physical card and in machine-readable formats with limited storage capacity compared to an mDL. 

肖像画像は、立会取引においてmDLを提示する人物とmDLを照合する主要な手段です。したがって、肖像画像はこの目的に適した品質である必要があります。ISO/IEC 18013-5は、肖像画がISO/IEC 18013-2:2020の附属書Dに準拠することを要求しており、この附属書Dは、肖像画が少なくとも幅192ピクセル、高さ240ピクセルであることを要求している。さらに、ISO/IEC 18013-2は、自動顔認識用の肖像画像について、ISO/IEC 19794-5に準拠することを要求しており、この要件では、特に目の中心間が90ピクセルであることが要求されています。ただし、これらの要件は、物理的なカードへの保存や、mDLに比べて保存容量が限られる機械読み取り可能なフォーマットでの保存を想定して作成されたものであることに留意する必要があります。

It would therefore be possible to include a portrait image of much higher resolution in an mDL. Arguments for going this route include higher accuracy when using the portrait image as a probe image in 1:n biometric searching, and making it easier for a human to compare the portrait image with the mDL holder. Arguments against going this route include the following:

従って、mDLにはるかに高解像度の肖像画像を含めることが可能である。この経路をとることへの賛成意見には、1:nの生体認証検索でプローブ画像として肖像画を使用する際の精度が高くなること、人間が肖像画とmDLの所持者を比較しやすくなることなどがあります。このルートに反対する意見には、以下のようなものがあります:

1. A larger portrait image can negatively affect mDL transaction times(より大きなポートレート画像は、mDLのトランザクション時間に悪影響を与える可能性があります)

2. A better-quality portrait image could arguably be less privacy preserving than a smaller portrait image.(より質の高いポートレート画像は、より小さなポートレート画像よりもプライバシーの保護に劣る可能性がある)

3. The primary purpose of the portrait image is a 1:1 match with the mDL holder. If this match is performed biometrically, the smaller portrait size should be sufficient.(肖像画像の主な目的は、mDLの所持者と1対1で照合することです。この照合が生体認証で行われる場合は、肖像画のサイズは小さくても十分です)

Issuing Authorities should carefully consider all these points when deciding on a portrait image size. It is recommended that Issuing Authorities opt for a smaller rather than for a larger portrait image.

発行局は、肖像画のサイズを決定する際、これらの点を慎重に考慮する必要があります。発行局は、大きな縦長画像よりも小さな縦長画像を選ぶことを推奨します。

結構細かいレベルで顔写真の要件が決まっているんですね。


3.4. SIGNATURE IMAGE

ISO/IEC 18013-5 does not prescribe anything other than that the image shall be in JPEG or JPEG2000 format. Building on the requirements for a signature image in ISO/IEC 18013-1 and in the AAMVA Card Design Standard, if present the signature image must be an accurate and recognizable representation of the original signature. Care should be given to image capture, processing, digitization, and compression.

ISO/IEC 18013-5 は、画像が JPEG または JPEG2000 フォーマットであること以外には何も規定していない。ISO/IEC 18013-1およびAAMVAカード設計基準における署名画像の要件に基づき、署名画像が存在す る場合は、元の署名を正確かつ認識可能な形で表現しなければならない。画像のキャプチャ、処理、デジタル化、および圧縮には注意を払う必要がある。


3.5. MDL CRYPTOGRAPHIC PROTOCOLS

In line with recommendations from the US National Institute of Standards and Technology (NIST) and the Canadian Centre for Cyber Security, certain cryptographic constructs must not be supported for mDL solutions built in accordance with this document. At the same time, interoperability needs to be retained so mDL readers can successfully interact with an mDL originating from elsewhere. 

米国国立標準技術研究所(NIST)およびカナダ・サイバーセキュリティセンターの勧告に従い、この文書に従って構築されたmDLソリューションでは、特定の暗号構造をサポートしてはなりません。同時に、mDLリーダーが他の場所から発信されたmDLと正常にやり取りできるよう、相互運用性を維持する必要があります。

To this end, the AAMVA mDL Implementation Guidelines require the following changes to be applied to ISO/IEC 18013-5:

このため、AAMVA mDL実装ガイドラインでは、ISO/IEC 18013-5に以下の変更を適用することを要求している:

ここも量が多いので割愛しますが、Cipher SuiteをNISTの要求に従って変更したりしていますので、他の国が単純にmdocだからISO/IEC 18013-5に従ってリーダーを実装してもAAMVAのmDLは読めないって言う状態になるんだろうなぁ。。。と思います。


ということでここまでです。3章がもう少しだけ続きます。

Tuesday, 24. December 2024

IdM Laboratory

AAMVAのMobile Drivers License Implementation Guidelinesを読む③


こんにちは、富士榮です。

引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。


今回は3章のISO/IEC 18013-5 QUALIFICATIONSです。

3.1. INTRODUCTION

Issuing authorities electing to follow the guidance in this document must adhere to ISO/IEC 18013-5, including as qualified in this document.

本文書のガイダンスに従うことを選択した発行局は、本文書で修飾されている場合を含め、ISO/IEC 18013-5 を遵守しなければならない。

3.2. AAMVA MDL DATA ELEMENT SET

This section specifies changes and additions to the ISO/IEC 18013-5 data element set to accommodate the unique needs of the AAMVA community. All the data elements (mandatory and optional) in the ISO/IEC 18013-5 data element set, together with the changes and additions specified in this document, comprise the AAMVA mDL data element set.

このセクションでは、AAMVAコミュニティの固有のニーズに対応するために、ISO/IEC 18013-5データ要素セットの変更と追加を規定する。ISO/IEC 18013-5 データ要素セットのすべてのデータ要素(必須およびオプション)は、本文書で 規定される変更および追加とともに、AAMVA mDL データ要素セットを構成する。

The specific changes to ISO/IEC 18013-5 follow.

ISO/IEC 18013-5に対する具体的な変更点は以下の通り。

Replace the 1st sentence of clause 7.2.1:

The mDL data elements shall be as defined in Table 5 belong to namespace “org.iso.18013.5.1”, see 7.1.

with the following:

The mDL data elements shall be as defined in Table 5. Data elements belong to the namespaces indicated. 

7.2.1 節の第 1 文を置き換える:

mDL データ要素は,表 5 に定義されるとおり,名前空間 「org.iso.18013.5.1 」に属するものとする。

を以下で置き換える

mDL データエレメントは,表 5 に定義されているとおりとする。データ要素は、示された名前空間に属する。

In Table 5, apply the following amendments:

表5において、以下の修正を適用する。

family_nameの定義 変更前:Last name, surname, or primary identifier, of the mDL holder. The value shall only use latin1b characters and shall have a maximum length of 150 characters.(mDL保持者の姓、名、またはプライマリ識別子。値はlatin1b文字のみを使用し、最大150文字とする)

変更後: Family name (commonly called surname or last name), or primary identifier, of the individual that has been issued the driver license or identification document. If the individual’s name is not divided into family name and given name(s), that name shall be deemed the family name or primary identifier. The value shall only use latin1b characters and shall have a maximum length of 150 characters.(運転免許証または身分証明書を発行された個人の姓(一般に姓または名と呼ばれる)、または主な識別子。個人の名前が姓と名に分かれていない場合は、その名前を姓または主な識別子とみなす。値はlatin1b文字のみを使用し、最大150文字とする) given_nameの定義 
変更前:First name(s), other name(s), or secondary identifier, of the mDL holder. The value shall only use latin1b characters and shall have a maximum length of 150 characters(mDL保持者のファーストネーム、その他のネーム、またはセカンダリ識別子。値はlatin1b文字のみを使用し、最大150文字とする。) 変更後: Given name or names (includes all of what are commonly referred to as first and middle names), or secondary identifier, of the individual that has been issued the driver license or identification document. The value shall only use latin1b characters and shall have a maximum length of 150 characters.(運転免許証または ID 文書を発行された個人の名前(一般にファーストネームおよびミドル ネームと呼ばれるものをすべて含む)、または二次識別子。値は、latin1b 文字のみを使用し、最大 150 文字の長さを持たなければならない。) height、eye_colour、resident_addressのプレゼンスをO(オプション)からM(必須)へ 
resident_addressの定義 変更前:The place where the mDL holder resides and/or may be contacted (street/house number, municipality etc.). The value shall only use latin1b characters and shall have a maximum length of 150 characters.(mDL保持者の居住地および/または連絡可能な場所(番地、市町村など)。値はlatin1b文字のみを使用し、最大150文字とする。)
変更後:The place where the mDL holder resides and/or may be contacted (street/house number, municipality etc.). The value shall only use latin1b characters and shall have a maximum length of 150 characters. The resident_address shall be included in full, regardless of the presence of any minimized address data elements (e.g. resident_city; resident_state; resident_postal_code; resident_country). Dayx for this change: Not applicable. Dayy for this change: 2025-09-01.(mDL保持者の居住地および/または連絡可能な場所(番地、市町村など)。値はlatin1b文字のみを使用し、最大150文字とする。resident_addressは、最小化された住所データ要素(resident_city; resident_state; resident_postal_code;resident_countryなど)の有無にかかわらず、完全な形で含まれるものとする) age_in_years、age_over_NN、issuing_jurisdictionのプレゼンスをOからMへ

In Table 5, add a new column titled “Namespace”. For the data elements present in ISO/IEC 18013-5, enter “org.iso.18013.5.1” for each data element

表5に、「Namespace 」というタイトルの新しい列を追加する。ISO/IEC 18013-5に存在するデータ要素については、各データ要素に 「org.iso.18013.5.1 」を入力する。

Append the following to Table 5:

表5に以下を追加する:

ネームスペース:“org.iso.18013.5.1.aamva” Identifier:domestic_driving_privileges 

意味合い:Domestic categories of vehicles/restrictions/conditions(国内車両カテゴリー/制限/条件)

定義:Vehicle types the license holder is authorized to operate. See 7.2.4.(免許保持者が運転することを許可されている車種。7.2.4を参照のこと) プレゼンス:M


ネームスペース:“org.iso.18013.5.1.aamva” Identifier:name_suffix 

意味合い:Name suffix 

定義:Name suffix of the individual that has been issued the credential. Only the following values are allowed:(クレデンシャルを発行された個人の名前サフィックス。以下の値のみが許可される:) JR、SR、1ST、Ⅰ、2ND、Ⅱ〜9TH、Ⅸ 

 プレゼンス:O 

 

ネームスペース:“org.iso.18013.5.1.aamva” Identifier:organ_donor 

意味合い:organ donor 定義:An indicator that denotes whether the credential holder is an organ donor. This field is either absent or has the following value:(クレデンシャル保持者が臓器提供者かどうかを示すインジケータ。このフィールドはないか、または以下の値を持つ:)
1: Donor 
 プレゼンス:O


こんな感じで意外と多くのISO/IEC 18013-5の属性群については修正を入れています。 この辺りは国によって状況も異なるので当然と言えるでしょう。(ガイドラインには上記に記載したもの以外にも変更されたものが羅列されていますが省略します)
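For illustration only, the layering of the standard and AAMVA namespaces can be pictured as nested maps in Go (sample values; real mDL data is CBOR-encoded per ISO/IEC 18013-5, so this is not the wire format):

package main

import "fmt"

func main() {
    // Namespace -> data element identifier -> value (illustrative sample data).
    mdlData := map[string]map[string]any{
        "org.iso.18013.5.1": {
            "family_name":      "Doe",
            "given_name":       "Jane",
            "resident_address": "123 Main St, Anytown",
        },
        "org.iso.18013.5.1.aamva": {
            "name_suffix": "JR",
            "organ_donor": 1, // present only when the holder is a donor
        },
    }
    fmt.Println(mdlData["org.iso.18013.5.1.aamva"]["name_suffix"])
}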

少し面白いところで言うと、ISO/IEC 18013-5ではage_over_NNとなっている属性を

age_over_18 age_over_21 age_over_65 と言う形で米国の事情に合わせていたりするところもあります。

例えば25歳の人は

age_over_18=TRUE age_over_21=TRUE age_over_65=FALSE

となるようです。この表現はいいのかどうか・・・

こんな表現をすることを推奨していたりもします。

age_over_16=True

age_over_17=True

age_over_19=True

age_over_20=True

age_over_22=True

age_over_25=True

age_over_26=False

age_over_64=False

age_over_66=False

age_over_85=False 
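As a small illustrative check (a hypothetical helper, not part of the guideline or of ISO/IEC 18013-5), the semantics implied above is simply that age_over_NN is true exactly when the holder is at least NN years old:

package main

import "fmt"

// ageOver reports whether a holder of the given age satisfies age_over_NN.
func ageOver(age, nn int) bool {
    return age >= nn
}

func main() {
    age := 25 // the 25-year-old from the example above
    for _, nn := range []int{16, 17, 19, 20, 22, 25, 26, 64, 66, 85} {
        fmt.Printf("age_over_%d=%v\n", nn, ageOver(age, nn))
    }
    // Prints true for 16 through 25 and false for 26 and above,
    // matching the recommended set of values listed above.
}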


一旦はここまでとします。

結構この章は長いですが、ISO/IEC 18013-5の扱いに関する話が多いのであまり中身はありませんね。

Monday, 23. December 2024

IdM Laboratory

AAMVAのMobile Drivers License Implementation Guidelinesを読む②


こんにちは、富士榮です。

引き続きAAMVAのMobile Drivers License Implementation Guidelines 1.4を読んでいきます。


今回は2章のmDL Solution Overviewを見ていきます。

An mDL can be described as leveraging a mobile device to transfer (or cause to be transferred) driver’s license information to an mDL verifier, who cryptographically authenticates the information using the Issuing Authority’s public key. A visual rendering of a DL on a mobile device’s display (and which can be misused as a “flash pass”) therefore does not qualify as an mDL (also see section 8).

mDL は、発行局の公開鍵を使用して情報を暗号的に検証する mDL検証者に運転免許証情報を転送する (または転送させる)ために、モバイル機器を活用するものと説明できる。したがって、モバイル機器のディスプレイ上に DL を視覚的に表示するもの(「フラッシュパス」として悪用される可能性があるもの)は、mDL として認められない(セクション 8 も参照)。

スクショやオレオレはダメってことですね。 

An mDL solution can be described in terms of the following three properties:

mDLソリューションは、以下の3つの性質で説明できる:

1. Data retrieval method. The device retrieval method (sometimes referred to as the offline model) works without outside connectivity (for both the mDL holder’s device and the mDL reader) at the time the transaction takes place, thus requiring the mDL data to reside on the mDL holder’s device. Under the server retrieval method (sometimes referred to as the online model, and not to be confused with use of an mDL in an unattended transaction setting such as over the Internet) mDL data is retrieved in real time directly from the Issuing Authority. ISO/IEC 18013-5 requires an mDL to support device retrieval, and allows a device to additionally support server retrieval. 

1. データ検索方式。デバイス検索方式(オフラインモデルと呼ばれることもある)では、取引時に外部(mDL保持者のデバイスとmDLリーダーの両方)に接続することなく動作するため、mDLデータはmDL保持者のデバイスに存在する必要がある。サーバー検索方式(オンラインモデルと呼ばれることもあり、インターネット経由のような無人トランザクションでのmDLの使用と混同されないよう注意)では、mDLのデータは発行機関からリアルタイムで直接取得される。ISO/IEC 18013-5は、mDLがデバイスの検索をサポートすることを要求しており、さらにデバイスがサーバーの検索をサポートすることを認めている。

2. Transaction type. An attended transaction is one where the mDL holder and the mDL verifier are in close proximity to each other. The engagement mechanisms currently reflected in ISO/IEC 18013-5 (QR code, NFC) were selected to support such close proximity. An unattended transaction is one where the mDL holder and the mDL verifier are not in close proximity, e.g. when an mDL holder wants to provide identity or proof of age to an online retailer. ISO/IEC 18013-5 does not currently support unattended transactions. However, work is ongoing to standardize a solution. 

2. トランザクションの種類。対面型トランザクションとは、mDL保有者とmDL検証者が近接しているトランザクションのことである。現在ISO/IEC 18013-5に反映されているエンゲージメントの仕組み(QRコード、NFC)は、このような近接をサポートするために選択された。無人トランザクションとは、mDL 保持者と mDL 検証者が近接していないトランザクショ ンのことであり、たとえば、mDL 保持者がオンライン小売業者に ID または年齢証明を提供する場合などである。ISO/IEC 18013-5 は現在、無人トランザクションをサポートしていない。ただし、ソリューションを標準化する作業が進行中である。 

3. Timing of (and responsibility for) matching. This property is about the responsibility for confirming, at transaction time, that the person presenting the mDL data is the person described by the mDL data. In a post-matched transaction, the link between the mDL Presenter and the mDL data is made after the mDL data is shared and is performed by the mDL verifier. This happens by comparing the portrait image in the mDL with the person presenting the mDL. ISO/IEC 18013-5 supports postmatched transactions. In a pre-matched transaction, the link between the mDL Presenter and the mDL is made right before the mDL data is shared. Although the Issuing Authority should not be involved in real time, the Issuing Authority does take responsibility for certifying the link. The mDL verifier receives only the confirmation that the person presenting the mDL data is the person described by the shared mDL data. ISO/IEC 18013-5 does not currently support pre-matched transactions. However, work is ongoing to standardize a solution (and notably one that does not involve the Issuing Authority at transaction time).

3. 照合のタイミング(および責任)。このプロパティは、mDLデータの提示者がmDLデータに記述された本人であることをトランザクション時に確認する責任に関するものである。マッチング後のトランザクションでは、mDL提示者とmDLデータのリンクは、mDLデータが共有された後に行われ、mDL検証者によって実行される。これは、mDL内の肖像画像とmDL提示者を比較することで行われる。ISO/IEC 18013-5 はポストマッチトランザクションをサポートしている。事前照合トランザクションでは、mDL提示者とmDLのリンクは、mDLデータが共有される直前に行われる。発行局はリアルタイムで関与すべきではないが、発行局はリンクを認証する責任を負う。mDLの検証者は、mDLデータの提示者が共有されたmDLデータに記述された本人であることの確認のみを受ける。ISO/IEC 18013-5は現在、事前照合トランザクションをサポートしていない。しかし、(特にトランザクション時に発行局が関与しない)ソリューションを標準化するための作業が進行中である。

デバイスリトリーバル、サーバーリトリーバルの2方式があること、対面、非対面のシナリオが定義されていること、そして検証者がHolderバインディングを行うことが求められている、ということです。本人確認書類として利用することを考えると当然ですね。 

With this as background, Figure 1 provides a high-level overview of the mDL ecosystem described in ISO/IEC 18013-5.

これを背景に、図1はISO/IEC 18013-5で説明されているmDLエコシステムのハイレベルな概要を示している。



Three interactions are involved:

3つの相互作用が関係している: 

1. Interaction between the Issuing Authority and the mDL. This interaction results in getting everything onto an mDL holder’s device that is needed to use the mDL. There is also subsequent interaction between the Issuing Authority and the mDL to keep the mDL information updated. Technical components of this interaction will be standardized in the ISO/IEC 23220 series.

1. 発行局とmDLの間のインタラクション。このやりとりの結果、mDLを使用するために必要なすべての情報がmDLホルダーのデバイスに取り込まれます。また、発行局とmDLの間には、mDLの情報を更新するための相互作用があります。このインタラクションの技術的なコンポーネントは、ISO/IEC 23220シリーズで標準化される予定です。

Issueの時の仕組みですね。OpenID for Verifiable Credential Issuanceでもmdocを扱うことができますので、そちらを非対面のシナリオでは使うケースもありますが、ここではISO 23220が挙げられています。 

2. Interaction between the mDL and the mDL reader infrastructure of the mDL verifier. This interaction comprises the transfer of technical information to set up a secure communication channel between the two parties, and the subsequent exchange of the driver’s license information (or of a point from where it can be retrieved) that the mDL holder agreed to share. ISO/IEC 18013-5 fully standardizes an interface describing this interaction.

2. mDLとmDL検証装置のmDL読み取りインフラ間のインタラクション。このインタラクションは、両者間の安全な通信チャネルを設定するための技術情報の転送と、それに続く mDL 保持者が共有に同意した運転免許証情報(またはそれを取得できるポイント)の交換で構成される。ISO/IEC 18013-5 は、このインタラクションを記述するインタフェースを完全に標準化する。

This part is about presentation. It is also something OpenID for Verifiable Presentations can cover, but here ISO/IEC 18013-5 is cited.

3. Interaction between the mDL reader infrastructure and the Issuing Authority. This interaction can be used for different purposes, depending on the data retrieval method involved:

Device retrieval method: The interaction is used by the mDL verifier to obtain the public keys needed to authenticate mDL information. Such interaction can also involve an intermediary entity that aggregates and disseminates certificates. (In North America, AAMVA’s Digital Trust Service performs this function – see section 5.) Regardless, the mDL verifier must trust that the certificate truly comes from a valid Issuing Authority. This interaction does not need to occur at the time of an mDL transaction. ISO/IEC 18013-5 fully standardizes a method supporting this interaction.

Server retrieval method: The interaction is used by the mDL verifier for two purposes:
As in the case for the device retrieval method, to obtain the public key of the Issuing Authority.
To pass to the Issuing Authority, in real time, a token that identifies the mDL holder and the mDL, and to receive the actual mDL information back from the Issuing Authority.
ISO/IEC 18013-5 fully standardizes an interface describing this interaction.

3. mDLリーダーインフラと発行局との間のインタラクション。このインタラクションは、関係するデータ検索方法に応じて、異なる目的で使用することができる:

デバイスの検索方法: このインタラクションは、mDL 検証者が mDL 情報の検証に必要な公開鍵を取得するために使用される。このようなインタラクションには、証明書を集約し普及させる仲介エンティティが関与することもできる。(北米では、AAMVA のデジタル・トラスト・サービスがこの機能を果たす。) いずれにせよ、mDLの検証者は、証明書が本当に有効な発行機関から発行されたものであることを信頼しなけれ ばならない。この相互作用は、mDLのトランザクション時に発生する必要はない。ISO/IEC 18013-5は、この相互作用をサポートする方法を完全に標準化している。 サーバーの検索方法: このインタラクションは、mDL検証者によって2つの目的で使用される: デバイス検索方式と同様に、発行局の公開鍵を取得する。 mDLの所有者とmDLを識別するトークンをリアルタイムで発行局に渡し、実際のmDL情報を発行局から受け取ること。ISO/IEC 18013-5は、このインタラクションを記述するインタフェースを完全に標準化している。

What happens here depends on whether device retrieval or server retrieval is used, but it describes the cases in which the mDL reader queries the Issuer. Compared with DID-based VCs, the big difference is that the Issuing Authority is fully centralized (naturally, since these are driver's licenses). As a result, the Verifier openly interacts with the Issuer when fetching the public key used for verification (which does involve a slight trade-off with privacy).
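
To make the device retrieval trust check concrete, here is a minimal Go sketch of the verifier side. It assumes the Issuing Authority root (IACA) certificates were already obtained out of band, for example through an aggregator such as AAMVA's Digital Trust Service; the file names are illustrative, and a real verifier additionally checks the COSE signature over the MSO itself.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Trusted Issuing Authority (IACA) root certificates, obtained ahead of
	// time, not at transaction time.
	rootPEM, err := os.ReadFile("iaca-roots.pem") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(rootPEM) {
		log.Fatal("no usable root certificates")
	}

	// Document signer certificate that accompanies the mDL data.
	signerPEM, err := os.ReadFile("document-signer.pem") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(signerPEM)
	if block == nil {
		log.Fatal("failed to decode signer certificate")
	}
	signer, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// The mDL data is only trusted if its signer chains to a known IACA root.
	if _, err := signer.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
		log.Fatalf("signer not trusted: %v", err)
	}
	fmt.Println("document signer chains to a trusted Issuing Authority")
}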

Note that ISO/IEC 18013-5 specifies system interfaces and a certificate exchange method, and on purpose does not address the user interface (e.g. the look, feel and functionality of an mDL app residing on an mDL holder’s device). It is left up to Issuing Authorities (and their implementers) to innovate in this area.

ISO/IEC 18013-5は、システム・インターフェースと証明書交換方法を規定するものであり、ユーザ・イン ターフェース(例えば、mDL保有者のデバイスに常駐するmDLアプリのルック、フィール、機能性)については、 意図的に触れていないことに留意されたい。この分野での技術革新は、発行局(およびその実装者)に委ねられている。


And that is it for today.


Sunday, 22. December 2024

IdM Laboratory

Reading AAMVA's Mobile Driver's License Implementation Guidelines (1)

Hello, this is Fujie.
Following on from the terminology post the other day, I am going to read through AAMVA's Mobile Driver's License Implementation Guidelines 1.4.

Here is the original document.
Let's start with the Introduction.
The AAMVA Joint Mobile Driver’s License (mDL) Working Group (WG) has been active around mobile identification since 2012. As the mDL evolves, the mDL WG continues to identify and address topics on which guidance to Issuing Authorities can be helpful. This document represents the bulk of the current guidance, and points to additional resources as needed.

AAMVA 合同モバイル運転免許証(mDL)ワーキンググループ(WG)は、2012 年以来、モバイル ID を中心に活動してきました。mDL の進化に伴い、mDL WG は、発行機関へのガイダンスが役立つトピックを特定し、対処し続けています。この文書は、現在のガイダンスの大部分を示し、必要に応じて追加のリソースを示します。

The goal of this document is to inform and equip Issuing Authorities, and to some extent mDL verifiers, to achieve the following:

この文書の目的は、発行局、そしてある程度mDLの検証者に対し、以下のことを達成するための情報を提供し、装備させることです:

Technical interoperability between different Issuing Authorities’ mDL programs, i.e., an Issuing Authority being able to read an mDL issued by any other Issuing Authority.
Trust in different Issuing Authorities’ mDLs.
Privacy preserving implementations.

異なる発行局のmDLプログラム間の技術的な相互運用性。つまり、発行局は他の発行局が発行したmDLを読むことができる。
異なる発行局のmDLに対する信頼性
プライバシーの保護

Starting with the goals: in the United States each state issues its own driver's licenses, so interoperability becomes extremely important.

It is up to Issuing Authorities to determine the extent to which the guidance in this document is followed.

Nevertheless, the minimum measures deemed necessary to achieve the above are labeled as mandatory requirements in this document (i.e. “shall” or “must”). A summary of minimum measures can be found in Appendix B.

本文書のガイダンスにどの程度従うかは、発行当局の判断に委ねられます。とはいえ、上記を達成するために必要とみなされる最低限の対策は、本文書では必須要件(すなわち「しなければならない」または「しなければならない」)と表示されています。最小限の措置の要約は付録Bに記載されています。

As noted above, licenses are issued state by state, so the binding force of this guideline only goes so far. Indeed, for some of the topics that come up later, such as retention periods and whether multiple copies may be issued, quite a lot is left to the discretion of the implementers. That is worth keeping in mind while reading on.

The following topics are outside the scope of this document:

以下のトピックは本文書の範囲外です:

The identity establishment, management and recordkeeping that precedes the creation of an identity credential.
Responsibilities of mDL verifiers.

ID クレデンシャルの作成に先立つ、ID の確立、管理、および記録管理。 mDL 検証者の責任。

So the scope appears to be strictly the issuance and management of the mDL itself.

This document leverages and expands on ISO/IEC 18013-51 (also available as INCITS/ISO/IEC 18013-5), an international mDL standard. Although ISO/IEC 18013-5 specifies an mDL solution, it was intentionally designed to support any type of mobile identity credential. ISO/IEC 18013-5, as qualified in this document, will therefore enable Issuing Authorities to issue both mobile driver’s licenses and mobile identification cards.

The term “mDL” as used in this document covers both credential types. Qualifications made in this document also allow for identifying an mDL as being REAL ID compliant or not, and/or as a credential issued under the Enhanced Driver’s License program (“EDL”; see the AAMVA DL/ID Card Design Standard).

本文書は、国際 mDL 標準である ISO/IEC 18013-5(INCITS/ISO/IEC 18013-5 としても利用可能)を活用し拡張したものです。ISO/IEC 18013-5 は mDL ソリューションを規定していますが、意図的にあらゆるタイプのモバイル ID クレデンシャルをサポートするように設計されています。このため、本文書で規定する ISO/IEC 18013-5 により、発行機関はモバイル運転免許証とモバイル ID カードの両方を発行できるようになります。

本文書で使用する「mDL」という用語は、両方のクレデンシャル・タイプをカバーします。この文書で行われる認定は、mDL を REAL ID 準拠かどうか、および/または拡張運転免許証プログラム(「EDL」;AAMVA DL/ID カード設計基準参照)の下で発行されたクレデンシャルとし て識別することも可能にします。

The key points: this document is based on ISO/IEC 18013-5 and is written to cover both mobile driver's licenses and mobile ID cards. And, as expected, the REAL ID Act is also in view.

Additional guidance on mDL administration in the areas of legislation and procurement can be found in two other documents produced by the mDL Working Group. Those are the mDL Model Legislation, and the mDL Procurement Guidance (see the jurisdictional member area on the AAMVA website). AAMVA also conducts regular outreach to stakeholders on the topic of mDL, including town hall meetings, podcasts, and training.

mDLに関する法律や調達に関するガイダンスは、mDLワーキンググループが作成した2つの文書に記載されています。これらは、「mDLモデル法案」と「mDL調達ガイダンス」です(AAMVAウェブサイトの管轄メンバーエリアを参照)。AAMVAはまた、タウンホールミーティング、ポッドキャスト、トレーニングなど、mDLに関するステークホルダーへの定期的な働きかけも行っています。

It should be noted that mDL and related technologies are ever evolving. As a result, this document will continue to be updated to synchronize its content with the latest standards and practices. For this reason, readers of this document are encouraged to periodically check the AAMVA website for new versions.

mDLと関連技術は常に進化しています。そのため、本書は最新の基準や慣行と内容を同期させるために更新され続けます。このため、本書の読者は、定期的にAAMVAのウェブサイトで新バージョンを確認することが推奨されます。

The AAMVA website carries a lot of other information, so there seems to be plenty to consult beyond this guideline.

 


 

 

 






Just a Theory

JSONTree Module and Playground

I’m happy to announce the release of the JSONTree Go module and Wasm-powered in-browser playground.

As a follow-up to the JSONPath module and playground I released last month, I’m happy to announce the follow-up project, called JSONTree. I’ve implemented it in the github.com/theory/jsontree Go package, and built a Wasm-powered browser playground for it.

JSONTree?

While a RFC 9535 JSONPath query selects and returns an array of values from the end of a path expression, a JSONTree compiles multiple JSONPath queries into a single query that selects values from multiple path expressions. It returns results not as an array, but as a subset of the query input, preserving the paths for each selected value.

In other words, it compiles multiple paths into a single tree of selection paths, and preserves the tree structure of the input. Hence JSONTree.

Example

Consider this JSON:

{ "store": { "book": [ { "category": "reference", "author": "Nigel Rees", "title": "Sayings of the Century", "price": 8.95 }, { "category": "fiction", "author": "Evelyn Waugh", "title": "Sword of Honour", "price": 12.99 }, { "category": "fiction", "author": "Herman Melville", "title": "Moby Dick", "isbn": "0-553-21311-3", "price": 8.99 }, { "category": "fiction", "author": "J. R. R. Tolkien", "title": "The Lord of the Rings", "isbn": "0-395-19395-8", "price": 22.99 } ], "bicycle": { "color": "red", "price": 399 } } }

This JSONPath query:

$..price

Selects these values (playground):

[8.95, 12.99, 8.99, 22.99, 399]

While this JSONPath query:

$..author

Selects (playground):

[ "Nigel Rees", "Evelyn Waugh", "Herman Melville", "J. R. R. Tolkien" ]

JSONTree compiles these two JSONPaths into a single query that merges the author and price selectors into a single segment, which stringifies to a tree-style format (playground):

$
└── ..["author","price"]

This JSONTree returns the appropriate subset of the original JSON object (playground):

{ "store": { "book": [ { "author": "Nigel Rees", "price": 8.95 }, { "author": "Evelyn Waugh", "price": 12.99 }, { "author": "Herman Melville", "price": 8.99 }, { "author": "J. R. R. Tolkien", "price": 22.99 } ], "bicycle": { "price": 399 } } }

Note that the original data structure remains, but only for the subset of the structure selected by the JSONPath queries.

Use Cases

A couple of use cases drove the conception and design of JSONTree.

Permissions

Consider an application in which ACLs define permissions for groups of users to access specific branches or fields of JSON documents. When delivering a document, the app would:

Fetch the groups the user belongs to
Convert the permissions from each into JSONPath queries
Compile the JSONPath queries into a JSONTree query
Select and return the permitted subset of the document to the user
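
As a rough illustration of that flow, here is a minimal sketch using the same jsontree and jsonpath packages as the Go example below; the document and the group's permission paths are invented for illustration.

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/theory/jsonpath"
	"github.com/theory/jsontree"
)

func main() {
	// The document to deliver.
	var doc map[string]any
	if err := json.Unmarshal([]byte(`{
		"store": {
			"book":    [{"title": "Sword of Honour", "price": 12.99}],
			"bicycle": {"color": "red", "price": 399}
		}
	}`), &doc); err != nil {
		log.Fatal(err)
	}

	// Permissions for the user's groups, already converted to JSONPath
	// queries: this group may see book titles and the bicycle color, but
	// no prices.
	tree := jsontree.New(
		jsonpath.MustParse("$.store.book[*].title"),
		jsonpath.MustParse("$.store.bicycle.color"),
	)

	// Return only the permitted subset of the document.
	permitted, err := json.Marshal(tree.Select(doc))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(permitted))
}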

Selective Indexing

Consider a searchable document storage system. For large or complex documents, it may be infeasible or unnecessary to index the entire document for full-text search. To index a subset of the fields or branches, one would:

Define JSONPaths for the fields or branches to index
Compile the JSONPath queries into a JSONTree query
Select and submit only the specified subset of each document to the indexing system

Go Example

Use the github.com/theory/jsontree Go package together with github.com/theory/jsonpath to compile and execute JSONTree queries:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/theory/jsonpath"
	"github.com/theory/jsontree"
)

func main() {
	// JSON as unmarshaled by encoding/json.
	value := map[string]any{
		"name":  "Barrack Obama",
		"years": "2009-2017",
		"emails": []any{
			"potus@example.com",
			"barrack@example.net",
		},
	}

	// Compile multiple JSONPaths into a JSONTree.
	tree := jsontree.New(
		jsonpath.MustParse("$.name"),
		jsonpath.MustParse("$.emails[1]"),
	)

	// Select from the input value and marshal the result to JSON.
	js, err := json.Marshal(tree.Select(value))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", js)
}

And the output:

{"emails":["barrack@example.net"],"name":"Barrack Obama"}

Note that the index position of the selected email was not preserved. Replace New with NewFixedModeTree to create a “fixed mode” JSONTree that preserves index positions by filling gaps with nulls. Its output of the above example would be:

{"emails":[null,"barrack@example.net"],"name":"Barrack Obama"} Status

The public interface of the jsontree module is quite minimal and stable. But I suspect there may remain some flaws in the merging of JSONPath selectors. Please report bugs via GitHub issues and I’ll get them fixed up ASAP.

Otherwise, please share and enjoy!

More about… Go JSONTree JSONPath Playground RFC 9535

Georg C. F. Greve

Self Sovereign Identity: Over before it started?

Web security, symbolized

Monty Python’s parrot sketch is an all-time classic because it plays on a very human experience of being defenseless when someone is just blatantly refusing to acknowledge the obvious. Shared reality is a matter of perception, not objective observation. Supported also by various mental biases, including the sunk cost fallacy, and the desire to agree with people we perceive as sympathetic or competent, virtually all humans can fall into this trap. Technical experts on Self Sovereign Identity included.

Instead of recognizing that the parrot of Web security is deceased, has gone to meet its maker, is pushing up the daisies, some people keep insisting that it is merely napping, and use trinkets and all kinds of strings and wires to hold it up.

The result is did:tdw, recently rebranded to did:webvh.

Web based DID methods belong to the family of federated identity methods, not Self Sovereign Identity

Using the web for Decentralized Identifiers (DIDs) violates some of the basic principles of Self Sovereign Identity, and effectively restricts the possible properties of the system to that of a classic federated identity protocol, such as OpenID.

Federated identity systems have their uses, and are often “good enough” for usage by large corporations and governments. But they also enable and encourage platform strategies, which has dramatic implications for personal usage, as well as Small and Medium Enterprises (SMEs). The result has been the Surveillance Industry, and a dependency of 95% of our economy on a few, large platform companies.

Self Sovereign Identity has been developed as a concept to break that dependency, and give people control over their own privacy, security and data. Instead, thanks to did:web and its descendants, it increasingly looks like an exercise of putting SSI lipstick on the pig of the federated Web.

You may think this is just hyperbole. So let’s go back to the beginning.

About the principles of SSI

The design goals of Decentralized Identifiers are listed in Section 1.2 of the W3C DID specification:

W3C DID: Design goals for Decentralized Identifiers (DID)

So how well do Web based DID methods meet these goals?

All web based methods, including did:web, did:tdw, did:webvh, and any other web based method anyone might ever come up with depend on a domain name pointing to a web server. The method specific identifier is always being transformed into a HTTPS request. The DID to HTTPS Transformation is the same for did:webvh as it is for did:web.
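
For reference, the transformation itself is mechanical; roughly, it looks like this minimal Go sketch of the did:web identifier-to-URL mapping (did:webvh adds a verifiable history on top, but resolution still bottoms out in an HTTPS fetch like this):

package main

import (
	"fmt"
	"strings"
)

// didWebToURL applies the did:web identifier-to-HTTPS transformation: colons
// in the method-specific identifier become path separators, and a bare domain
// resolves to /.well-known/did.json.
func didWebToURL(did string) string {
	id := strings.TrimPrefix(did, "did:web:")
	parts := strings.Split(id, ":")
	// A port, if present, is percent-encoded as %3A in the identifier.
	host := strings.ReplaceAll(parts[0], "%3A", ":")
	if len(parts) == 1 {
		return "https://" + host + "/.well-known/did.json"
	}
	return "https://" + host + "/" + strings.Join(parts[1:], "/") + "/did.json"
}

func main() {
	fmt.Println(didWebToURL("did:web:example.com"))
	// https://example.com/.well-known/did.json
	fmt.Println(didWebToURL("did:web:example.com:user:alice"))
	// https://example.com/user/alice/did.json
}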

Reaching the correct web server is therefore contingent on access control by the administrator of the web server, the security of the web server, the longevity of the organization operating the web server, the Certificate Authority issuing the certificates identifying the web server, the configuration of the Transport Layer Security (TLS) parameters, and the Domain Name System to identify which web server to contact.

Users have two choices:

Operate their own web server, or
Use the web server of some organization that provides them their “decentralized” identifier.

The former is the “let them eat cake” of modern technologies.

Despite many people working for decades to make self-hosting easier and more attractive, self-hosting has been declining. But even if we reverted that trend and enabled and motivated people to self-host with some amazing self-hosting offers: How hard would it be to correlate did:tdw:QmfGEUAcMpzo25kF2Rhn8L5FAXysfGnkzjwdKoNPi615XQ:petermueller.ch to did:tdw:QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n:petermueller.ch ?

How difficult would it be to figure out these might both belong to the same person, whose name might be Peter Müller? Especially considering that the web server at petermueller.ch presents a certificate that lists the owner of the certificate to be a “Peter Müller”, and the whois record for the domain lists his full name, address and phone number?

Which brings us to the second choice, above, which is today’s reality for most people in a federated identity world: Trust the platform intermediary.

How much decentralization is there in Apple Mail? How decentralized are today’s Certificate Authorities? How much privacy and control do users of Gmail have? How secure are today’s web services? How well does today’s world fare in terms of data protection from compromise and loss? How good is today’s Web security?

In reality, Web based DID methods give up on Decentralization, Control, Privacy and Security to the same level that today’s federated identity solutions have given up on them.

They use protocols like OpenID Connect for Verifiable Credentials and Verifiable Presentations (OIDC4VC & OIDC4VP) because they ARE OpenID methods. Which is why if use cases building on top of Web based DIDs were using truth in labelling, they would inform their users about being based on OpenID.

But much of the technology world thrives on buzzwords and hypes, and too often, the technical reality is obfuscated by layers of technical complexity and marketing. So the market rarely penalises false advertising.

did:web(vh), EV edition

Using the Web for “Decentralized” Identifiers and advertising it as revolutionary SSI technology is a bit like selling an “Electric Vehicle” that avoids all the complexities of battery development by using a diesel generator on a towed trailer to power the car. Yes, the propulsion is now electric.

But is the end result fundamentally better than a diesel car?

But what about the added security?

When reading about did:webvh, one could get the impression that a lot of security is being added. In reality, it is mostly added complexity, because everything goes over a single channel: the same one that is used by did:web.

It adds security in the same way that web sites get more secure if you ask users to enter not a single password, but three passwords, subsequently, in the correct order.

There is a reason no-one does that. Three passwords are not fundamentally more secure, because there is no additional channel. Add a real second factor, and security actually goes up. Which is why Multi Factor Authentication (MFA) has been invented.

Most likely the Web based DID methods can be developed to the point they will provide actual MFA security at a similar level to today’s federated identity protocols. Maybe did:webvh is even close to that point.

But that only makes it just as secure as “Login with Google”, today. And it does nothing to make it meet the SSI criteria of Decentralization, Control and Privacy.

Perhaps it is time to acknowledge that this parrot is not just a heavy sleeper.

Embrace, Extend, Extinguish

So what’s the problem if some people like did:web and its relatives? As long as we are aware of the limitations, and never use it for systems that are supposed to be used in production by end users or SMEs, there is nothing wrong with did:web.

As I’ve written in a previous article, it’s really useful for rapid prototyping, and can be used as a placeholder during experimentation before switching to a real Decentralized Identifier. We did so ourselves when Vereign was working on a Proof of Concept for the Swiss health sector in 2023. But once we started working on the production system in 2024, we switched to an Autonomous Identifier (AID) that meets the definition of Self Sovereign Identity.

The problem starts when people put Web based identifiers into production.

Not only is it an issue of misleading users with false promises of decentralization, control, privacy and security. It runs much deeper than that. Increasing adoption of Web based identifiers under the moniker of Self Sovereign Identity makes it impossible for actual Self Sovereign Identity to differentiate itself from federated identity protocols. It sucks the air out of the room for actual SSI.

At a technology strategy level, adoption of Web based identifiers makes SSI susceptible to something it was originally designed to prevent: Platform capture.
Depiction of did:web(vh) being welcomed by Self Sovereign Identity community

Whether accidentally or by design, the movement for Web based identifiers perfectly executes a strategy coined by Microsoft in the 90s, labelled Embrace, Extend, Extinguish. I’ve gotten to study that particular script extensively when coordinating the technical and communication activities of the Free Software Foundation Europe around the EU Microsoft antitrust case in order to obtain much needed interoperability information for Samba.

The script is not super complicated. First, become a champion of Self Sovereign Identity, embrace it visibly, participate in the conferences, champion it at the political level. Then come up with ideas to extend it, for instance by proposing to speed up adoption by falling back on “proven” technologies from the Web. Provided enough Kool-Aid, nobody might notice that it violates the principles of SSI and you’ll find many willing participants.

And lastly, once it has become the dominant flavour to however misleadingly claim the label Self Sovereign Identity, extinguish what is left in terms of actual SSI by aggressively using your economic and political might to push a platform play to suck the air out of the market. While Sovrin had its issues, including political, it undoubtedly lived up to all the SSI principles. Recently, the Sovrin Foundation announced that it was shutting down in March 2025 due to its community moving to the Web.

So, what’s left?

Microsoft had originally championed did:ion, a fully Self Sovereign Identifier based on the Sidetree specification. But as of 2023, it unsurprisingly also switched to did:web. Old habits die hard. Other large tech platforms are also pushing in the same direction, as are several of the former governmental monopolists with strong political ties, such as T-Systems.

The most promising design for a decentralized identifier is the Key Event Receipt Infrastructure (KERI), and at conceptual level it solves some very hard problems that no other method even attempts to address. The problem is how long it has been the promising next thing, without achieving sufficient adoption, and without finding its way into the regulatory documents in the European Union eIDAS (for “electronic IDentification, Authentication and trust Services”) working group, which is strongly pushing in the direction of Web based identifiers.

Unsurprisingly, technical experts have raised security and privacy concerns. In fact, it seems the current draft of the EU Architecture and Reference Framework (ARF) may be in violation of the EU privacy provisions it is supposed to provide.

Also, and it’s already been a topic in the DICE2024 retrospective, KERI is currently available in Python only. Which leaves adoption hamstrung. Not everyone in the KERI community agrees with that, but I’m aware of a number of people and initiatives who would love to adopt KERI, but not in Python. And its completeness as a concept puts the effort required for implementation in another language outside what is feasible for any of these parties individually.

So, when looking at the W3C DID Traits draft, the table looks pretty bleak, with two actual SSI methods left on it: did:key and did:peer. Both limited in relation to quite a few use cases.

What we ended up doing…

We anticipated this picture when designing our use case and solution for the Swiss health sector back in January 2024. The Web identifiers were obvious non-starters, as were did:key and did:peer, due to them being overly limited for our purpose.

We also did not like the idea of putting Python into a mission critical production application for large number of users. Especially since we did not want to put Python on the phone, and also did not want remote wallets that do not actually live on the phone.

So we did what XKCD told us not to do. Stay tuned.

EDIT ON 3. March 2025: The follow-up is finally available at https://ggreve.medium.com/a-future-for-self-sovereign-identity-c237caa5e46f


IdM Laboratory

The public review period for OpenID for Verifiable Credentials Issuance has started


Hello, this is Fujie.

Following on from OpenID for Verifiable Presentations the other day, it has finally begun: OpenID for Verifiable Credential Issuance has now also reached its 2nd Implementer's Draft.



https://openid.net/public-review-period-for-proposed-second-implementers-draft-of-openid-for-verifiable-credential-issuance/

The schedule is as follows.

Implementer's Draft public review period: Friday, December 20, 2024 to Sunday, February 2, 2025 (45 days)
Implementer's Draft vote announcement: Monday, January 20, 2025
Implementer's Draft early voting opens: Monday, January 27, 2025
Implementer's Draft official voting period: Monday, February 3 to Tuesday, February 10, 2025


It really feels like Verifiable Credentials are in the final sprint toward real-world deployment, especially with EUDIW getting into full swing in 2026.

Saturday, 21. December 2024

IdM Laboratory

Finally going on sale: 「デジタルアイデンティティのすべて」

Hello, this is Fujie.
When I got home for the weekend, an advance copy was waiting for me: 「デジタルアイデンティティのすべて」, which goes on sale December 27. It is slightly larger than the original edition.

You can pre-order it here: https://amzn.to/3P9KS2e

The latest issue of Software Design had also arrived, so the year-end and New Year holidays will be all about identity and passkeys!

「パスキーのすべて」 will also be released at the end of January, so keep your head warmed up in the meantime: https://amzn.to/3ZHQohg

Friday, 20. December 2024

Heres Tom with the Weather

No Water is Death


“Extermination & Acts of Genocide”: Human Rights Watch on Israel Deliberately Depriving Gaza of Water

AMY GOODMAN: So, can I ask you, Bill Van Esveld, is this the first time that Human Rights Watch is accusing Israel of genocide in Gaza?

BILL VAN ESVELD: This is the first time that we’ve made a finding of genocidal acts in Gaza. It is not an accusation that we level lightly. We have not done this very often in our history. We accused the Myanmar military of genocidal acts against the Rohingya in 2017, and we found full-blown genocide against the Kurds in Saddam Hussein’s Anfal campaign in Iraq in the ’80s — sorry, in the ’90s, and we found genocide against — also in Rwanda in the ’80s. It is, you know, an extremely difficult crime to prove. It is, you know, mass killing deliberately to destroy people because they’re part of the group, not something we level lightly, but, yes, we found it here.

Thursday, 19. December 2024

IdM Laboratory

Looking at mobile driver's license terminology

Hello, this is Fujie.
As I also wrote here, AAMVA released version 1.4 of its Mobile Driver's License Implementation Guidelines in November. https://idmlab.eidentity.jp/2024/12/mobile-drivers-licensemdl.html

Reading through it, I realized it uses a lot of terms that are not exactly common, so let's start by looking at the Terminology section.

First, AAMVA itself.
American Association of Motor Vehicle Administrators

That is what AAMVA stands for. In Japanese it is usually rendered as 米国自動車管理者協会, according to documents like this one.


EDL. Short for enhanced driver license. In Japanese it is sometimes translated as 強化運転免許証, which means little if you live in Japan, but there is an explanation here.

Enhanced Drivers Licenses (EDLs) are state-issued enhanced drivers licenses that provide proof of identity and U.S. citizenship when crossing the U.S. border in a vehicle. They are issued in a secure process, and include technology that makes travel easier. EDLs are a low-cost, convenient option for entering the United States from Canada, Mexico or the Caribbean through a land or sea port of entry, in addition to serving as a permit to drive.

強化運転免許証(EDLs)は、自動車で米国国境を越える際に身分証明と米国市民権を証明する州発行の強化運転免許証である。EDLは安全なプロセスで発行され、渡航を容易にする技術も含まれている。EDLは、カナダ、メキシコ、カリブ海諸国から陸路または海路で米国に入国する際に、低コストで便利なオプションであり、運転許可証としての役割も果たす。

Its purpose seems to be tied to the REAL ID Act taking effect in 2025 (which sets minimum security standards for state-issued driver's licenses and ID cards): a REAL ID-compliant credential will be required when boarding domestic flights in the US. (Japanese travelers will presumably use a passport issued by a foreign government.)

 

mDL. The so-called Mobile Driver's License.

It is described as follows.

driver’s license or identification card that resides on a mobile device or requires a mobile device as part of the process to gain access to the related information

Note to entry: Adapted from ISO/IEC 18013-5

運転免許証または身分証明書であって、モバイル・デバイス上に存在するもの、または入国時に 関連情報にアクセスするためのプロセスの一部としてモバイル・デバイスを必要とするもの: ISO/IEC 18013-5 からの引用。

I have not yet worked out the differences between 18013-7:2024 and 18013-5:2021, but AAMVA's work is based on 18013-5.


mDL app. This is what is usually called the wallet.

software running on an mDL holder’s device; within the context of this document this includes a standalone app as well as a wallet type app

mDL保持者のデバイス上で動作するソフトウェア。本書の文脈では、スタンドアロン型アプリおよびウォレット型アプリを含む。


mdoc. I think the way to keep it straight is: mdoc is the credential format, and when it is used as a driver's license it is an mDL.

document or application that resides on a mobile device or requires a mobile device as part of the process to gain access to the document or application

モバイル・デバイス上に存在する、または文書やアプリケーションにアクセスするためのプロセスの一部としてモバイル・デバイスを必要とする文書またはアプリケーション


mobile security object. Often called the MSO. It is the structured data set within an mdoc; since it also contains things such as device attestation, its purpose is to guarantee Holder binding.

structured data set that enables an mDL verifier to authenticate (for both accuracy and origin) other mDL data elements received during an mDL transaction

mDLベリファイアが、mDLトランザクション中に受信した他のmDLデータエレメントを(正確さと出所の両方について)認証できるようにする構造化データセット
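
As a rough illustration of what the MSO enables, here is a simplified Go sketch of the digest comparison a verifier performs. A real MSO is a COSE-signed CBOR structure per ISO/IEC 18013-5 and each data element is salted, so this shows only the core idea: the issuer signs digests of the data elements, and the verifier can check each presented element against them without contacting the issuer.

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Digest recorded in the (issuer-signed) MSO for the element "family_name".
	msoDigest := sha256.Sum256([]byte("Fujie"))

	// Value actually presented by the mDL holder during the transaction.
	presented := []byte("Fujie")

	got := sha256.Sum256(presented)
	if bytes.Equal(got[:], msoDigest[:]) {
		fmt.Println("family_name matches the issuer-signed digest")
	} else {
		fmt.Println("family_name has been altered")
	}
}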


provisioning. Maybe it is only because I come from the identity world that I do not think of this as a specialized term.

initial loading of mDL information into an mDL app

mDLアプリへのmDL情報の初期読み込み

In short, it means installing the mobile driver's license into the wallet.



So that was the terminology rundown to start with.

Make sure you have a solid grasp of these terms; it helps in understanding the concepts.

 

 

 

 

 

 

Wednesday, 18. December 2024

IdM Laboratory

Global trends around digital ID

Hello, this is Fujie.
On the OpenID Foundation blog, Elizabeth has rounded up recent digital ID topics from around the world, so I would like to introduce them here. Japan is included, of course. https://openid.net/digital-identity-news-round-up-13-dec-2024/


Cyprus: Launched a mobile digital ID app known as Digital Citizen. The app lets users carry digital documents such as the biometric ID card, driver's license, and vehicle roadworthiness certificate on their mobile device, and also supports digital authentication via QR codes. https://www.biometricupdate.com/202412/cyprus-launches-digital-citizen-mobile-digital-id-app

Ghana: Introduced a biometric border management system at Kotoka International Airport in Accra. Biometrics are the key here: the eGates system uses the biometric national ID card (the Ghana Card), read at biometric gates. https://www.biometricupdate.com/202412/ghana-unveils-biometric-border-management-system-e-gates-at-main-airport

New Mexico: Became the ninth state to introduce a mobile driver's license that can be loaded into Apple or Google wallets. Wallets are the key here: New Mexicans can carry their driver's license and use it at certain TSA checkpoints across the country. Digital verification is possible by scanning a QR code, after which encrypted data is transmitted via Bluetooth. https://www.biometricupdate.com/202412/new-mexico-mdl-goes-live-amid-uneven-state-progress

Papua New Guinea: Released a national digital ID policy and is holding a public consultation. According to ICT Minister Timothy Masiu, the policy establishes an official digital ID system known as SevisPass, with opening bank accounts as the main use case in order to promote financial inclusion. https://www.thenational.com.pg/digital-id-policy-released/

Switzerland: Outlined the technical implementation plan for the national digital ID, which will be held in a wallet known as Swiyu. The first-stage implementation is to be tested in Q1 2025, with the source code of the individual components published as open source. The second-stage solution will add stricter privacy requirements to prevent tracking from the eID back to the individual, and the government has allocated USD 1.1 million for research to develop it. https://www.biometricupdate.com/202412/swiss-e-id-has-an-official-name-technical-implementation-plan

Nigeria: Issued a procurement notice for a systems integrator for the new NIMS 2.0 digital ID system, backed by the open-source MOSIP platform. Biometrics are part of the notice as well: the SI is asked to integrate MOSIP with an ABIS solution and biometric enrollment kits. Legacy data from Nigeria's current ID infrastructure will also need to be migrated. https://ted.europa.eu/en/notice/-/detail/753536-2024

Air Canada: Launched a digital ID program for travelers departing from Vancouver International Airport, with Montreal, Ottawa, Calgary, Toronto, Victoria, and Edmonton to follow. The service verifies travelers at the gate using facial recognition, so there is no need to present a physical boarding pass or government-issued ID. https://www.travelandtourworld.com/news/article/canadian-travelers-to-benefit-as-air-canada-launches-digital-id-at-vancouver-airport-for-domestic-flights-including-to-montreal-ottawa-calgary-toronto-victoria-and-edmonton-new-travel-updates-you/

United Kingdom: The national law enforcement agency published a tender notice for a live facial recognition (LFR) system worth up to GBP 20 million. The system matches live camera footage against watchlists to identify persons of interest. Despite opposition from civil liberties groups and lawmakers, the UK government continues to back police use of LFR as a crime-fighting tool. https://www.biometricupdate.com/202412/uk-govt-publishes-25m-tender-for-live-facial-recognition

Brazil: On Friday, November 29, Brazil's Pix digital payment system set a new record of 239.9 million transactions in a single day. https://x.com/BancoCentralBR/status/1864038353248591895 The Central Bank of Brazil notes that this scale demonstrates Pix's role as public digital infrastructure. According to a central bank survey, 76.4% of Brazilians use Pix, and 46% of respondents cite it as their most common payment method. https://www.infomoney.com.br/minhas-financas/pesquisa-do-bc-mostra-que-pix-superou-dinheiro-como-forma-de-pagamento-mais-usada/?utm_term=Autofeed&utm_medium=Social&utm_source=Twitter&Echobox=1733325955

Japan: Stopped issuing health insurance cards and replaced them with the My Number digital ID. Adoption in Japan had been slow so far because of system glitches and privacy concerns. https://www.biometricupdate.com/202412/japans-my-number-id-officially-replaces-health-insurance-cards

Papua New Guinea: Following Australia, announced plans to legislate age assurance for "certain social media platforms." The government's digital transformation leader, Steven Matainaho, says it is meant to "protect children from harmful content," citing "concerns over rising fraud, distribution of illegal goods, human trafficking, disinformation, and cyber harassment." Adults will also be required to use the mandatory digital ID (known as SevisPass) when accessing "age-restricted content." https://www.biometricupdate.com/202412/papua-new-guinea-to-ban-social-media-for-youth-require-age-verification-for-adults

France: The four major mobile operators (Bouygues Telecom, Free, Orange, and SFR) have joined forces to improve digital identity verification for online businesses. Interoperability is the key here: the operators are introducing two new APIs to unify specifications across mobile networks, based on the CAMARA standard, an open-source project developed under the Linux Foundation. https://www.biometricupdate.com/202412/frances-mobile-operators-tackle-online-fraud-with-digital-identity-protections

United Kingdom: The Home Office plans to trial remote and in-person biometric fingerprint enrollment via smartphone for foreign nationals applying to enter the UK. However, there are concerns about the plan's feasibility, given that biometric fingerprint data in passports is currently protected by Extended Access Control (EAC) and can only be read by authorities of EU member states. There are also security concerns that remote fingerprint capture is susceptible to AI-enabled fraud. https://www.biometricupdate.com/202412/uk-home-office-to-test-remote-fingerprint-enrolment-via-smartphone-for-entry

Cambridge Centre for Alternative Finance (CCAF): According to new research, 60 countries and territories have introduced laws or regulations related to open banking. In 44 of these jurisdictions, competition within the financial services industry is the main driver of adoption. There are also regional differences in approach: regulation-led approaches dominate in Europe, Central Asia, the Middle East, and North Africa, while market-led approaches are more common in Sub-Saharan Africa and Asia-Pacific. https://www.jbs.cam.ac.uk/faculty-research/centres/alternative-finance/publications/the-global-state-of-open-banking-and-open-finance-report/

There is also an event announcement for a webinar on navigating emerging APAC markets for digital ID. https://x.com/BiometricUpdate/status/1864150904766603322

What a year of real movement it has been.

The public review toward finalizing FAPI 2.0 is starting


Hello, this is Fujie.

The public review period for finalizing the FAPI 2.0 Security Profile and Attacker Model specifications has begun.

https://openid.net/public-review-for-proposed-final-fapi-2-0-specifications/



It looks like things will proceed on the following schedule.

Final Specification public review period: Monday, December 9, 2024 to Friday, February 7, 2025 (60 days)
Final Specification vote announcement: Saturday, January 25, 2025
Final Specification early voting opens: Saturday, February 1, 2025
Final Specification voting period: Saturday, February 8, 2025 to Saturday, February 15, 2025 (7 days)


FAPI is finally getting into full swing.

Monday, 16. December 2024

IdM Laboratory

Sharing my talk at the Taiwan Digital Identity Wallet International Forum

Hello, this is Fujie.
Last week I spoke at the Taiwan Digital Identity Wallet International Forum, so I would like to note down what I presented in my keynote. Details about the event are here: https://idmlab.eidentity.jp/2024/11/taiwan-digital-wallet-international.html

Leaving the self-introduction aside, the theme was interoperability.
As you know, the Digital Identity Wallet has recently become an emerging topic in the digital identity space. For example, the European Commission has started implementing the European Digital Identity Wallet, which allows citizens to bring their own digital identity documents, such as national ID cards or mobile driver's licenses. At the same time, interoperability is essential for adopting these wallets in the real world because we have an existing ecosystem without the digital identity wallet today. So, my talk today is about interoperability between current identity ecosystems and a Digital Identity Wallet.

ご存知のように、デジタルIDウォレットは最近、デジタルID分野で新たなトピックとなっています。例えば、欧州委員会は欧州デジタルIDウォレットの導入を開始しました。これにより、国民は国民IDカードや携帯電話運転免許証などのデジタルID文書を携帯できるようになります。同時に、現実世界でこれらのウォレットを採用するには相互運用性が不可欠です。なぜなら、今日、デジタルIDウォレットのない既存のエコシステムが存在しているからです。

そこで、本日の私の講演では、現在のアイデンティティ・エコシステムとデジタル・アイデンティティ・ウォレット間の相互運用性についてお話します。 


First, let’s think about our current situation when considering the term “interoperability.” Since the fall of the Tower of Babel, we have been living in a world divided by different languages, different tribes, different cultures, and different social systems. In other words, we have been living in a world where we have not been able to communicate well for a long time. This continued until the Age of Exploration, when trade between countries worldwide became more active. For people like me who have lived in Asia, we have lived in a world that is very different from Western languages and cultures, and we are still living behind language barriers. However, since the spread of the Internet began in the 1990s, the breakdown of regional divisions, including countries, has started. We have finally been freed from the constraints of physical location, and the need to communicate globally has arisen. So, did a technology break down these barriers to allow us to communicate and trade freely globally?

まず、「相互運用性」という言葉について考える前に、現在の状況について考えてみましょう。

バベルの塔が崩壊して以来、私たちは異なる言語、異なる部族、異なる文化、異なる社会制度によって分断された世界に生きてきました。

つまり、私たちは長い間、うまくコミュニケーションを取ることができない世界に生きてきたのです。この状況は、大航海時代を迎え、世界各国間の貿易が活発になるまで続きました。

私のようにアジアで生活してきた人間にとっては、西洋の言語や文化とはまったく異なる世界で生きてきましたし、今でも言葉の壁に阻まれて生活しています。

しかし、1990年代からインターネットが普及し始め、国を含めた地域的な区分が崩れ始めました。私たちはようやく物理的な場所の制約から解放され、グローバルにコミュニケーションを取る必要性が生じてきたのです。

では、こうした障壁を打破し、世界中で自由にコミュニケーションや取引ができるようになった技術は登場したのでしょうか?



At the moment, the answer is no. We are currently living in a world divided by silos created by technology. Even now, to transfer data freely across systems, we have to design and implement interfaces between systems each time, and even when it comes to identity, which is the theme of today's talk, it is still managed on a system-by-system basis. We often have to manage multiple accounts for each system.

現時点では、答えはノーです。

私たちは現在、テクノロジーによって作られたサイロによって分断された世界に生きています。

今でも、システム間でデータを自由にやりとりするためには、その都度、システム間のインターフェースを設計し実装しなければなりませんし、本日のテーマであるアイデンティティにしても、システムごとに管理されています。 システムごとに複数のアカウントを管理しなければならないこともよくあります。 



We need a way to communicate across countries, jurisdictions, and systems. And we already know of some examples that have been developed to some extent. Email can be delivered anywhere in the world without a centralized system, and the telephone system allows us to make calls to people worldwide. In these systems, we can communicate without depending on the email user agent or telephone type. Also, in the real world, we use passport to identify people on traveling to other countries. Those of us involved in digital identity need to follow the example of these previous cases and work to create a world where interoperability is guaranteed. 国や管轄区域、システムを越えてコミュニケーションを行う方法が必要です。 そして、ある程度まで開発された例がすでにいくつか存在しています。 電子メールは中央集権的なシステムなしで世界中のどこへでも配信できますし、電話システムは世界中の人々との通話を可能にしています。これらのシステムでは、電子メールユーザーエージェントや電話の種類に依存することなくコミュニケーションを行うことができます。 また現実の世界では、パスポートを使って他国への渡航者の身元確認を行っています。 デジタルアイデンティティに関わる私たちは、これらの過去の事例を手本とし、相互運用性が保証された世界を実現するために取り組む必要があります。



And digital identities are not just for natural persons. There are various things in the real world, such as IoT devices and legal entities, are connected to the internet, and daily business transactions are carried out. Now is the time to design and implement a system so that all digital identities can be mutually operated with minimal friction.

また、デジタルアイデンティティは自然人だけのものではありません。現実世界には、IoTデバイスや法人など、さまざまなものがインターネットに接続され、日常的な商取引が行われています。今こそ、すべてのデジタルアイデンティティが相互に最小限の摩擦で運用できるようなシステムの設計と実装を行うべき時なのです。



 Let's now take a closer look at interoperability. Even though we use the word 'interoperability,' it can be roughly divided into technical and non-technical aspects. When many engineers talk about interoperability, they often only focus on the technical side, but it is also essential to consider the non-technical side.

First, let's look at the technical aspects. We must consider the identifier format, transfer protocol, and data model, including the schema and signature algorithm.

In addition, on the non-technical side, we need to agree on the semantics that expresses what meaning the exchanged data has, the rules and framework within which the data is generated, and the trust framework that ensures the reliability of the entity state, etc.

Let's take a closer look at each of these elements from the next slide.

それでは、相互運用性について詳しく見ていきましょう。相互運用性という言葉を使っていますが、大まかに技術的な側面と技術的ではない側面に分けることができます。多くの技術者が相互運用性について語る場合、技術的な側面のみに焦点を当てがちですが、技術的ではない側面も考慮することが不可欠です。

まず、技術的な側面について見ていきましょう。識別子のフォーマット、転送プロトコル、データモデル(スキーマや署名アルゴリズムを含む)を考慮する必要があります。

さらに、技術面以外の側面では、交換されたデータがどのような意味を持つのか、データが生成されるルールや枠組み、エンティティの状態の信頼性を確保する信頼フレームワークなどを表現するセマンティクスについて合意する必要があります。

それでは、これらの要素について、次のスライドから詳しく見ていきましょう。 



First of all, let's talk about identifiers. An identifier is an attribute identifying a particular entity within a specific set. This attribute can be a single attribute or multiple attributes.

The design of the identifier depends on the size of the set that contains the target entity. For example, designing an identifier within a local set differs significantly from creating one within an international or global set. For example, my family name is Fujie, but there may be no one else in this room with the same family name. In this situation, my family name could function as an identifier. However, when I go home to Japan, my family name does not function as an identifier because, as you know, all of my family members have the family name Fujie.

Finally, it is essential to consider privacy and persistence when considering identifiers. For example, suppose control of an identifier is taken away from you. In that case, there is a possibility that control over the identity information linked to that identifier will also be taken away from you. Also, suppose you are logged in to multiple services using the same identifier. In that case, there is a possibility that the services will collude with each other and merge your attribute information in an unintended way. To deal with such cases, it may be necessary to devise ways to ensure that users use different identifiers.

On the other hand, if users are not allowed to use the same identifier for an extended period, they may not be able to use the service continuously or may not be able to access past data.

From the perspective of interoperability, it is necessary to design systems that can correctly identify entities while considering privacy and persistence, not only in the current but also in a broader set in the future.

Identifiers may seem simple, but they must be designed very carefully.

 まず、識別子についてお話しましょう。識別子とは、特定の集合内の特定のエンティティを識別する属性です。この属性は単一の属性であることも、複数の属性であることもあります。

識別子の設計は、対象のエンティティを含む集合の規模によって異なります。例えば、ローカルな集合内で識別子を設計することは、国際的またはグローバルな集合内で設計することとは大きく異なります。例えば、私の姓は富士榮ですが、この部屋には同じ姓の人は誰もいないかもしれません。このような状況では、私の姓は識別子として機能するでしょう。しかし、私が日本に帰国した場合、ご存知のように私の家族全員が富士榮という姓なので、私の姓は識別子として機能しません。

最後に、識別子を考える際には、プライバシーと永続性について考慮することが不可欠です。例えば、ある識別子の管理が自分から奪われたとします。その場合、その識別子と紐づけられたID情報についても管理が奪われる可能性があります。また、同じ識別子を使って複数のサービスにログインしているとします。その場合、サービス同士が衝突し、意図しない形で属性情報がマージされてしまう可能性がある。このようなケースに対応するためには、ユーザーに異なる識別子を利用させる工夫が必要となる可能性があります。

一方で、長期間にわたって同一の識別子を利用できないと、サービスを継続的に利用できなくなったり、過去のデータにアクセスできなくなったりする可能性があります。

相互運用性の観点では、プライバシーや永続性を考慮しつつ、現在だけでなく将来にわたって、エンティティを正しく識別できる仕組みを設計する必要があります。

識別子は一見単純に見えるが、非常に慎重に設計しなければいけません。
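
One common way to give users different identifiers per service, as discussed above, is a pairwise pseudonymous identifier, the approach used for pairwise subject identifiers in OpenID Connect. Here is a minimal Go sketch; the secret and subject values are illustrative.

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// pairwiseID derives a distinct but stable identifier for the same user at
// each relying party, so two services cannot trivially correlate accounts by
// comparing identifiers.
func pairwiseID(secret []byte, localSubject, relyingParty string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(localSubject + "|" + relyingParty))
	return base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

func main() {
	secret := []byte("issuer-side secret") // illustrative
	fmt.Println(pairwiseID(secret, "user-1234", "https://rp-a.example"))
	fmt.Println(pairwiseID(secret, "user-1234", "https://rp-b.example")) // different value, same user
}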


 

Next, we will consider transport protocols. Transport protocols define the methods by which entities communicate with each other. In the context of digital credentials, transport protocols include issuing credentials to wallets, presenting credentials to verifiers, and revoking issued credentials by issuers. To ensure interoperability, the multiple issuer, wallet, and verifier components must communicate using a method that has been agreed upon in advance. 次に、トランスポートプロトコルについて検討します。トランスポートプロトコルは、エンティティが相互に通信する方法を定義します。デジタルクレデンシャルの文脈では、トランスポートプロトコルには、クレデンシャルをウォレットに発行すること、クレデンシャルをベリファイアに提示すること、発行者によって発行されたクレデンシャルを取り消すことが含まれます。 相互運用性を確保するには、複数の発行者、ウォレット、ベリファイアのコンポーネントが、事前に合意された方法で通信する必要があります。



Let's also consider data models. Schemas need to take into account the types and namespaces of attributes. Generally, gender is expressed using letters such as M and F, but in some cases, it is expressed using numbers such as 0 and 1. In addition, the attribute name family_name is sometimes used to express the family name, and the attribute name surname is sometimes used. In any case, related entities must agree on the names and types of attributes to achieve interoperability.

The algorithm used for digital signatures is also a very important factor. In general, it is necessary to verify digital signatures to verify the authenticity of digital credentials. Still, verification will not be possible if the issuer uses a signature algorithm that differs from what the verifier expects. Agreement on the signature algorithm is significant to avoid this.

データモデルについても検討してみましょう。スキーマでは、属性のタイプと名前空間を考慮する必要があります。一般的に、性別はMやFなどの文字で表現されますが、場合によっては0や1などの数字で表現されることもあります。また、姓を表現する際に、属性名family_nameが使用されることもあれば、surnameという属性名が使用されることもあります。いずれにしても、相互運用性を実現するには、関連するエンティティが属性の名称とタイプについて合意する必要があります。

電子署名に使用されるアルゴリズムも非常に重要な要素です。一般的に、電子証明書の真正性を検証するには、電子署名を検証する必要があります。しかし、発行者が検証者が期待するものと異なる署名アルゴリズムを使用している場合、検証は不可能です。これを回避するには、署名アルゴリズムについて合意することが重要です。 
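
To illustrate why this agreement matters, here is a small and purely illustrative Go sketch of the kind of claim normalization a verifier would otherwise have to maintain. The name mappings are examples, and the numeric gender codes loosely follow ISO/IEC 5218; none of this comes from a particular credential profile.

package main

import "fmt"

// normalize maps differently named or encoded attributes onto one local schema.
func normalize(claims map[string]any) map[string]any {
	out := map[string]any{}
	for name, value := range claims {
		switch name {
		case "surname", "family_name":
			out["family_name"] = value
		case "sex", "gender":
			out["gender"] = normalizeGender(value)
		default:
			out[name] = value
		}
	}
	return out
}

func normalizeGender(v any) string {
	switch v {
	case "M", "male", 1:
		return "male"
	case "F", "female", 2:
		return "female"
	default:
		return fmt.Sprint(v)
	}
}

func main() {
	fmt.Println(normalize(map[string]any{"surname": "Fujie", "sex": "M"}))
	fmt.Println(normalize(map[string]any{"family_name": "Fujie", "gender": 2}))
}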



As we have seen, reaching an agreement on identifiers, transport protocols, and data models is essential to achieve interoperability.

Many standardization organizations are working to develop standard specifications to facilitate this agreement. For example, the W3C has developed a specification called Decentralized Identifiers for identifiers, and the OpenID Foundation has developed protocols for exchanging credentials called OpenID for Verifiable Credentials Issuance and OpenID for Verifiable Presentations. The W3C and IETF have also formed working groups to create data models.

However, as you can see from this table, the current situation is that multiple standardization bodies are trying to develop their standard specifications. In this situation, no matter how much implementers adopt a standard, achieving interoperability with entities that use a different standard will not be possible.

これまで見てきたように、識別子、通信プロトコル、データモデルについて合意に達することは、相互運用性を実現するために不可欠です。 多くの標準化団体が、この合意を促進するための標準仕様策定に取り組んでいます。例えば、W3Cは識別子としてDecentralized Identifiersと呼ばれる仕様を策定しており、OpenID FoundationはOpenID for Verifiable Credenitals IssuanceおよびOpenID for Verifiable Presentationsと呼ばれる認証情報の交換プロトコルを策定しています。また、W3CやIETFでもデータモデルのワーキンググループが結成されています。 しかし、この表から分かるように、現状では複数の標準化団体が標準仕様を策定しようとしている状況です。このような状況では、実装者がどれだけ標準を採用しても、異なる標準を採用する主体との相互運用性を実現することはできません。



Due to the situation explained in the previous slide, some people are defining and using profiles that combine multiple standards.

It is not realistic to reach agreement on the identifiers, transfer protocols, and data models for each entity. Therefore, we develop profiles that combine specifications for specific identifiers, specific transfer protocols, and specific data models, and the relevant entities agree to use these profiles.

This allows us to reduce the need for individual coordination between entities.

This approach is also used in the European Union, and the OpenID Foundation provides a profile called the High Assurance Interoperability Profile, or HAIP.

前スライドで説明した状況により、複数の標準を組み合わせたプロファイルを定義し使用する人もいます。

各エンティティの識別子、転送プロトコル、データモデルについて合意に達することは現実的ではありません。そのため、特定の識別子、特定の転送プロトコル、特定のデータモデルの仕様を組み合わせたプロファイルを開発し、関連するエンティティがこれらのプロファイルの使用に同意します。

これにより、エンティティ間の個別の調整の必要性を減らすことができます。

このアプローチは欧州連合でも採用されており、OpenIDファウンデーションは、高信頼相互運用性プロファイル(HAIP)と呼ばれるプロファイルを提供しています。 



From this slide, I would like to consider the non-technology elements.

First of all, there is semantics. Suppose you receive a digitally signed credential. If you can only verify the signature, can you trust the information contained in the credential? I think it is difficult.

In other words, a digital signature only proves that the data has not been tampered with by a third party, and does not prove the reliability of the data itself or the reliability of the entity that sent it.

This is where a quality assurance framework is needed. For example, UNESCO has published a quality assurance framework that is intended for global use. This framework defines the levels of degrees at universities, etc., and by having educational institutions in each country issue degrees in accordance with this framework, the recipients of the credentials will be able to understand the meaning of the credentials.

このスライドから、技術以外の要素について考えてみたいと思います。

まず、意味論があります。 デジタル署名された資格証明書を受け取ったとします。 署名の検証しかできない場合、その資格証明書に記載されている情報を信頼できるでしょうか? 難しいと思います。

つまり、デジタル署名は、第三者がデータを改ざんしていないことを証明するだけであり、データ自体の信頼性や、送信元の信頼性を証明するものではありません。

そこで必要になるのが、品質保証の枠組みです。例えば、ユネスコは世界的に利用できる品質保証の枠組みを公表しています。この枠組みは、大学などの学位のレベルを定義するもので、各国の教育機関がこの枠組みに沿って学位を発行することで、資格取得者はその資格の意味を理解できるようになります。


 

Next, let's consider the trust framework. Let's ask the same question as on the previous page. Just because you have verified the digital signature on the credential you have received, does that mean you can trust the issuer of that credential? For example, if you have obtained the digital data of a graduation certificate with a digital signature, how can you confirm that the university that issued the certificate exists?

This is where a system called a trust framework comes into play. There are various types of trust frameworks, but general laws and regulations are also a type of trust framework. For example, the recipient of a certificate of qualification may believe that the issuer is operating under the country's laws and regulations that control the bank and that the government regularly audits the bank. In this case, the verifier believes in the laws and regulations of the country, so there is no need to visit the bank to confirm that the individual issuer is an actual bank. In this way, it is possible to reduce the cost of individual verification by designing and operating a system that includes certification and auditing.

次に、トラストフレームワークについて考えてみましょう。前ページと同じ質問をしてみましょう。受け取ったクレデンシャルに付与された電子署名を検証したからといって、そのクレデンシャルの発行者を信頼できるのでしょうか?例えば、電子署名の付与された卒業証明書の電子データを受け取った場合、その証明書を発行した大学が実在していることをどのように確認できるのでしょうか?

そこで登場するのが「トラストフレームワーク」と呼ばれる仕組みです。トラストフレームワークにはさまざまな種類がありますが、一般的な法律や規則もトラストフレームワークの一種です。例えば、資格証明書の受領者は、発行者が銀行を管理する国の法律や規則に従って運営されており、政府が定期的に銀行を監査していると考えるかもしれません。この場合、検証者はその国の法律や規制を信頼しているため、個々の発行者が実際に銀行であることを確認するために銀行を訪問する必要はありません。このように、認証と監査を含むシステムを設計・運用することで、個々の検証にかかるコストを削減することが可能となります。 



In a few previous pages, we discussed the need for profiles. At that time, we focused on the technical aspects but also learned about the importance of trust frameworks on the previous page. That's right, profiles can include not only technological elements but also agreements on trust frameworks.

Because so many factors are involved in ensuring interoperability, using profiles that organize and correctly combine technical and non-technical aspects is efficient and effective.

数ページ前に、プロファイルの必要性について述べました。その際には技術的な側面に焦点を当てましたが、前ページでは信頼フレームワークの重要性についても学びました。その通り、プロファイルには技術的な要素だけでなく、信頼フレームワークに関する合意事項も含めることができます。 相互運用性を確保するには多くの要因が関わっているため、技術的および非技術的な側面を整理し、正しく組み合わせたプロファイルを使用することが効率的かつ効果的です。



As system architectures change daily, it is clear that systems based on multiple approaches will coexist. In the real world, we must consider interoperability between these systems.

In this slide, I want to explain the recent paradigm shift in digital identity systems.

This diagram shows how the identity paradigm has changed from a centralized world to a decentralized one.

In the centralized identity system, as I mentioned earlier, identity information is managed in a centralized database. However, there are various side effects, such as the need to keep non-active user accounts in the database, which makes license costs expensive. It may also invite identity theft, because non-active users cannot notice that their identities have been stolen, since they are not using their accounts.

Also, a centralized authentication system is quite helpful in gathering sign-in logs. Still, the system's availability is crucial, because if the system fails, no user can log in to any application.

On the other hand, in the decentralized identity world, users' identity data is stored in the user's wallet, which is typically installed on smartphones. So, users can bring their identity and authenticate it through their wallet, and there is no effect on other users if the user’s wallet is offline.

In addition, users can collect attributes from multiple data sources in a single wallet, aggregate them, and present them to the application. The application can get various attributes from the user’s wallet and determine access permission.

システムアーキテクチャは日々変化しており、複数のアプローチに基づくシステムが共存することは明らかです。現実の世界では、これらのシステム間の相互運用性を考慮する必要があります。 このスライドでは、デジタルIDシステムにおける最近のパラダイムシフトについて説明したいと思います。 この図は、IDのパラダイムが中央集権型から分散型へとどのように変化したかを示しています。 集中型のIDシステムでは、先ほど申し上げたように、ID情報を集中データベースで管理することが重要です。しかし、さまざまな副作用があります。例えば、データベースに非アクティブなユーザーアカウントを維持する必要があるため、ライセンスコストが高額になることがあります。また、非アクティブなユーザーはアカウントを使用していないため、自分のIDが盗まれたことに気づくことができません。そのため、ID盗難の被害に遭う可能性があります。 また、中央集権型の認証システムはサインインログの収集に非常に役立ちます。しかし、システムが故障した場合、すべてのユーザーがすべてのアプリケーションにログインできなくなるため、システムの可用性は非常に重要です。 一方、分散型のアイデンティティの世界では、ユーザーのアイデンティティデータは、通常スマートフォンにインストールされているユーザーの財布に保存されます。そのため、ユーザーは自分のアイデンティティを持ち歩き、財布を通して認証することができます。また、ユーザーの財布がオフラインの状態でも、他のユーザーには影響がありません。 さらに、ユーザーは複数のデータソースから属性を収集し、それを集約してアプリケーションに提示することができます。アプリケーションはユーザーの財布からさまざまな属性を取得し、アクセス許可を決定することができます。



We at the OpenID Foundation support the SIDI Hub, a community established to ensure interoperability in global digital identity. The SIDI Hub is considering ensuring interoperability in a world where various system architectures coexist from multiple perspectives, including systems and governance.

We have defined three types of system architecture: federated, wallet-based, and API-based, and we are considering what methods might be used to connect systems that use each of these architectures. For example, we are researching the possibility of building a proxy module between an API-based identity provider and a federated relying party.

私たちOpenIDファウンデーションは、グローバルなデジタルアイデンティティの相互運用性を確保するために設立されたコミュニティであるSIDI Hubを支援しています。SIDI Hubでは、システムやガバナンスなど、さまざまな観点から、さまざまなシステムアーキテクチャが共存する世界における相互運用性の確保について検討しています。

私たちは、システムアーキテクチャをフェデレーション型、ウォレット型、API型の3つに定義し、それぞれのアーキテクチャを使用するシステムを接続する方法について検討しています。例えば、API型アイデンティティプロバイダーとフェデレーション型依存者の間にプロキシモジュールを構築する可能性について研究しています。



Let's take a brief look at federation-type identity systems.

This type of architecture is the mainstream of current identity systems; for example, Apple, Google, Microsoft, and LINE also use this method.

In this system, applications are configured to rely on an external identity system. When users click a button such as “Sign in with Apple” or “Sign in with Google,” they are redirected to the Apple or Google identity system. The result of the user's authentication by Apple or Google is then conveyed to the application, and the login completes.

This approach is very well standardized: protocols such as SAML and OpenID Connect are the mainstream and are adopted worldwide.

フェデレーション型のIDシステムについて簡単に説明します。

このタイプのアーキテクチャは、現在のIDシステムの主流であり、例えばApple、Google、Microsoft、LINEなどもこの方式を採用しています。

このシステムでは、アプリケーションは外部のIDシステムに依存する形で構成され、「Appleでサインイン」や「Googleでサインイン」などのボタンをクリックすると、ユーザーはAppleやGoogleのIDシステムにリダイレクトされます。その後、Apple または Google によるユーザー認証の結果がアプリケーションに表示され、ログインが完了します。

このシステムは非常に標準化されており、SAML や OpenID Connect などのプロトコルが主流となっており、世界中で採用されています。
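As a minimal sketch of the redirect step described above, the following builds an OpenID Connect authorization request URL for the authorization code flow. The endpoint, client ID, and redirect URI are placeholder values, not taken from any real deployment.

from urllib.parse import urlencode

# Minimal sketch of the first leg of an OpenID Connect "Sign in with ..." flow:
# the application redirects the browser to the external identity provider.
def build_authorization_request(authorization_endpoint: str, client_id: str,
                                redirect_uri: str, state: str, nonce: str) -> str:
    params = {
        "response_type": "code",        # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile email",
        "state": state,                 # CSRF protection
        "nonce": nonce,                 # ID token replay protection
    }
    return f"{authorization_endpoint}?{urlencode(params)}"

# Placeholder values for illustration only.
print(build_authorization_request(
    "https://idp.example.com/authorize", "my-client-id",
    "https://app.example.com/callback", "state123", "nonce456"))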


 

In the wallet-based model, users store their own identities in software called a wallet and carry it with them.

This model is sometimes called the Issuer-Holder-Verifier (IHV) model, as it contains three components: the Issuer, which issues credentials; the Holder, which holds credentials; and the Verifier, which verifies credentials.

As I mentioned in the previous slide about paradigm shifts, this model is expected to support new use cases. For example, because Holders do not need to contact Issuers when presenting credentials to Verifiers, it will be possible to support new use cases, such as offline cases.

However, there are many competing standards, and the IETF, ISO, OIDF, W3C, and other organizations are all actively working to develop their specifications.

ウォレット型モデルでは、ユーザーは自身のIDを「ウォレット」と呼ばれるソフトウェアに保存し、持ち歩くことになります。

このモデルは、3つのコンポーネント、すなわち、クレデンシャルを発行する「発行者」、クレデンシャルを保持する「保持者」、クレデンシャルを検証する「検証者」を含むことから、発行者-保持者-検証者(IHV)モデルと呼ばれることもあります。

前回のスライドでパラダイムシフトについて述べたように、このモデルは新しいユースケースをサポートすることが期待されています。例えば、ホルダーがベリファイアにクレデンシャルを提示する際に、イシュアーに連絡する必要がないため、オフラインでのケースなど、新しいユースケースをサポートすることが可能になります。

しかし、多くの競合する標準規格が存在し、IETF、ISO、OIDF、W3C、その他の組織が、それぞれ仕様策定に積極的に取り組んでいます。 
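A rough sketch of the Issuer-Holder-Verifier roles follows. The "proof" below is only a stand-in for a real cryptographic signature, and the issuer URL and claims are invented; the actual formats and proof mechanisms are the ones being standardized at IETF, ISO, OIDF, and W3C as noted above.

# Simplified Issuer-Holder-Verifier (IHV) sketch with a fake "proof" field.
def issue(issuer: str, subject: str, claims: dict) -> dict:
    credential = {"issuer": issuer, "subject": subject, "claims": claims}
    credential["proof"] = f"signed-by:{issuer}"   # placeholder for a real signature
    return credential

def verify(credential: dict, trusted_issuers: set[str]) -> bool:
    # The verifier checks the proof and issuer without contacting the issuer,
    # which is what enables offline presentation.
    return (credential["issuer"] in trusted_issuers
            and credential["proof"] == f"signed-by:{credential['issuer']}")

holder_wallet = [issue("https://issuer.example.edu", "did:example:alice",
                       {"course": "Identity 101", "grade": "A"})]
print(verify(holder_wallet[0], {"https://issuer.example.edu"}))  # True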



The last model is the API type. Unlike the previous two, these systems were often introduced without following a specific standard specification, and it is often acceptable for them to remain closed environments.

最後のモデルはAPIタイプです。前の2つとは異なり、このモデルは特定の標準仕様なしに導入されたシステムであることが多いです。クローズドな環境のままでも構いません。


 

It is very challenging to interconnect systems of different architectures introduced so far. This is because it is often difficult to modify already working systems. Therefore, we sometimes take the approach of placing components called proxies or brokers between systems. The proxy absorbs and converts differences in protocols and data models.

While this approach is often a temporary solution, it tends to create problems in the overall trust model because of the need to trust the proxy.

For example, a deployment might be structured like this diagram, with a wallet-based system in the center. Because it is not feasible to modify the existing IdP so that it can communicate directly with the wallet, the Issuer component is developed as a proxy and a federation relationship is established with the IdP. Similarly, because it is difficult to modify the existing Relying Party to accept credentials presented from the wallet, the Verifier component is also developed as a proxy; from the Relying Party's point of view, it behaves as an Identity Provider.

これまで紹介してきた異なるアーキテクチャのシステムを相互接続することは非常に困難です。すでに稼働しているシステムを変更することが難しい場合が多いためです。そのため、プロキシやブローカーと呼ばれるコンポーネントをシステム間に配置するアプローチを取ることもあります。プロキシはプロトコルやデータモデルの違いを吸収し、変換します。

このアプローチは一時的な解決策であることが多い一方で、プロキシを信頼する必要があるため、全体的な信頼モデルに問題が生じがちです。

例えば、次のような構成です。中心にウォレットベースのシステムがあります。しかし、既存のIdPを変更してウォレットとの直接通信を可能にすることは不可能であるため、発行者コンポーネントをプロキシとして開発し、IdPとフェデレーション関係を確立します。同様に、既存の依拠当事者(Relying Party)を変更してウォレットからのクレデンシャルを提示することは困難であるため、検証者コンポーネントもプロキシとして開発します。依拠当事者から見ると、このコンポーネントはアイデンティティプロバイダーとして動作します。



I want to introduce one actual use case.

This is a project by the National Institute of Informatics to digitize learner credentials. In this project, learning records from existing learning management systems are issued into wallets, and the credentials are used to verify qualifications, for example when submitting papers.

The challenge in implementing the project was that many academic systems, not only in Japan, use the SAML protocol; in Japan as well, many SAML-based identity systems operate within the ecosystem of the academic federation known as GakuNin. In addition, the learning management system in question was developed based on Moodle, and a dedicated API had to be implemented to issue credentials.

実際の利用事例を一つ紹介したいと思います。

これは国立情報学研究所の学習歴証明の電子化プロジェクトです。このプロジェクトでは、既存の学習管理システムから発行される学習記録をウォレットに発行し、その資格情報を論文投稿時などの資格証明に利用します。

このプロジェクトを実施するにあたっての課題は、日本に限らず多くの学術システムがSAMLプロトコルを使用しており、日本でも学認という学術フェデレーションのエコシステム内で多くのSAMLベースのIDシステムが稼働していることでした。また、対象の学習管理システムはMoodleというミドルウェアをベースに開発されており、独自のAPIを実装してクレデンシャルを発行する必要がありました。



This diagram shows an overview of the GakuNin ecosystem that we explained earlier.

The National Institute of Informatics provides the trust framework, and the identity providers of certified universities and research institutions, together with certified applications such as learning management systems and research databases acting as relying parties, are deployed within the ecosystem.

Once authenticated by their university or institution's identity provider, students and researchers can securely single sign-on to many applications, which creates a very convenient and secure environment.

この図は、先に説明した学認エコシステムの概要を示しています。 国立情報学研究所がトラストフレームワークを提供し、認定を受けた大学や研究機関のアイデンティティプロバイダーと、学習管理システムや研究データベースなどの認定済みアプリケーションが、エコシステム内の依拠当事者として展開されています。 学生や研究者は、大学や機関のアイデンティティプロバイダーによって認証されることで、多くのアプリケーションに安全にシングルサインオンでき、非常に便利で安全な環境を実現できます。

 


 

We decided to introduce a wallet-based system into this federated environment.

To address the interoperability challenge, we took the following approaches.

First, we embedded the OpenBadge credential that the Learning Management System issues through its own API into a Verifiable Credential. We placed a gateway service between Moodle and the wallet and built it as an issuer that issues Verifiable Credentials based on the OpenBadges issued by Moodle. In other words, from the wallet's point of view, the gateway service appears as an Issuer.

Secondly, the Verifiable Credential presented by the wallet was embedded inside a SAML assertion. Because the existing Relying Party supports only the SAML protocol, the Verifiable Credential could not be presented to it directly. Therefore, the OpenBadge extracted from the Verifiable Credential was embedded as one of the attributes inside the SAML assertion, and the credential was presented to the Relying Party in that form. To achieve this, we developed a Wallet to SP Connector component, configured to appear as a Verifier to the wallet and as an Identity Provider to the Relying Party.

Of course, the Relying Party still needs to implement the logic to extract the OpenBadge from the SAML assertion, verify it, and use it. Even so, it did not have to be modified to support new protocols such as OpenID for Verifiable Presentations.

この統合環境にウォレットベースのシステムを導入することを決定しました。

そのため、相互運用性の課題に対して、以下のアプローチをとりました。

まず、LMSが独自のAPIを利用して発行するOpenBadgeクレデンシャルを、検証可能なクレデンシャルに埋め込みました。Moodleとウォレットの間にゲートウェイサービスを配置し、Moodleが発行するOpenBadgeに基づいて検証可能なクレデンシャルを発行する発行者として構築しました。つまり、ウォレットから見ると、ゲートウェイサービスは発行者として表示されます。

次に、ウォレットが提示した検証可能なクレデンシャルはSAMLアサーション内に埋め込まれました。既存のリライングパーティはSAMLプロトコルをサポートしているため、検証可能なクレデンシャルを直接提示することはできません。そのため、検証可能なクレデンシャルから抽出したOpenBadgeをSAMLアサーション内の属性の1つとして埋め込み、リライングパーティにクレデンシャルを提示しました。これを実現するために、私たちは Wallet to SP Connector コンポーネントを開発しました。 Wallet に対してはベリファイアとして、また、リライングパーティに対してはアイデンティティプロバイダーとして表示されるように構成しました。

もちろん、リライングパーティは、SAML アサーションから OpenBadge を抽出し、それを検証し、使用するための適切なロジックを実装する必要があります。それでも、OpenID for Verifiable Presentation などの新しいプロトコルをサポートするために修正する必要はありませんでした。 
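Conceptually, the second approach can be sketched as below: a connector verifies the presented credential, pulls out the embedded OpenBadge, and repackages it as a single attribute inside a SAML assertion. The function names, attribute name, and credential layout here are assumptions for illustration only; a real implementation would use proper OpenID for Verifiable Presentations handling and a SAML library.

import json

# Illustrative Wallet-to-SP Connector logic (all names are hypothetical):
# verify the presented credential, extract the embedded OpenBadge, and
# re-expose it as one attribute inside a (simplified) SAML assertion.
def extract_badge(verifiable_credential: dict) -> dict:
    # In the setup described above, the gateway embedded the Moodle-issued
    # OpenBadge inside the credential's claims.
    return verifiable_credential["claims"]["openBadge"]

def build_saml_attribute(badge: dict, attribute_name: str = "urn:example:openbadge") -> str:
    # Serialize the badge as a single SAML attribute value (simplified XML).
    return (f'<saml:Attribute Name="{attribute_name}">'
            f'<saml:AttributeValue>{json.dumps(badge)}</saml:AttributeValue>'
            f'</saml:Attribute>')

vc = {"issuer": "https://gateway.example.ac.jp",
      "claims": {"openBadge": {"name": "Research Ethics Training", "recipient": "alice"}}}
print(build_saml_attribute(extract_badge(vc)))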



This is an overview of the system.

First, the user issues a badge using the Learning Management System. At this point, the user is authenticated using the existing Identity Provider.

Next, the badge is issued to the user's wallet. When the user accesses the gateway, which is federated with the same Identity Provider as the Learning Management System, the user is prompted to authenticate. In this way, the user is granted the appropriate permissions to call the Moodle API. The gateway service then calls the Moodle API to obtain the issued badge and generates a verifiable credential, which it issues, acting as the issuer, to the user's wallet.

The issuance is now complete.

Finally, let's look at the presentation. In this case, we want to present the credential to the GakuNin RDM research database, but GakuNin RDM supports only the SAML protocol, so we use the Wallet to SP Connector. When the user accesses a specific page on GakuNin RDM, GakuNin RDM uses the SAML protocol to invoke the Wallet to SP Connector. This is the same operation as a standard SAML-based federation, so it is very easy to implement. Once invoked, the Wallet to SP Connector requests the user's wallet to present a verifiable credential according to the OpenID for Verifiable Presentations protocol. When the user presents the credential from their wallet, the Wallet to SP Connector verifies the credential's signature, extracts the embedded badge information, packages it into a SAML assertion, and sends it to GakuNin RDM using the SAML protocol.

This allows GakuNin RDM to obtain the desired learning credential information, which can then be used for access control and other processing.

以下にシステムの概要を示します。

まず、ユーザーは学習管理システムを使用してバッジを発行します。この時点で、ユーザーは既存のアイデンティティプロバイダを使用して認証されます。

次に、バッジがユーザーのウォレットに発行されます。ユーザーがゲートウェイにアクセスすると、ゲートウェイも学習管理システムと同じアイデンティティプロバイダとフェデレーションされており、ユーザーに認証が求められます。これにより、ユーザーにはMoodle APIを実行する適切な権限が付与されます。次に、ゲートウェイサービスがMoodle APIを実行して発行済みのバッジを取得し、検証可能な資格情報を生成します。次に、ゲートウェイが発行者として、検証可能な資格情報をユーザーのウォレットに発行します。

これで発行は完了です。

最後に、プレゼンテーションについて見てみましょう。このケースでは、学認RDM研究用データベースにクレデンシャルを提示したいのですが、学認RDMはSAMLプロトコルしかサポートしていないので、Wallet to SP Connectorを使用します。ユーザーが学認RDM上の特定のページにアクセスすると、学認RDMはSAMLプロトコルを使用してWallet to SP Connectorを開始します。これは標準的なSAMLベースのフェデレーションと同じ操作なので、実装は非常に簡単です。Wallet to SP Connectorが起動すると、OpenID for Verifiable Presentationプロトコルに従って、ユーザーのウォレットに検証可能なクレデンシャルの提示を要求します。ユーザーが財布内のクレデンシャルを提示すると、Wallet to SP Connectorはクレデンシャルの署名を検証し、クレデンシャルから埋め込みのバッジ情報を抽出し、それをSAMLアサーションとして構成し、SAMLプロトコルを使用して学認RDMに送信します。

これにより、学認RDMは必要な学習クレデンシャル情報を取得でき、アクセス制御やその他の処理に使用できるようになります。 
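On the issuance side, the gateway's role can be sketched roughly as follows: after the user authenticates, it fetches the badge from the learning management system, wraps it in a credential, and hands that to the wallet. The badge API call and all function names below are stand-ins, not Moodle's actual API.

# Illustrative gateway issuance flow (function names and the badge API are
# stand-ins, not the actual Moodle API).
def fetch_badge_from_lms(user_token: str, badge_id: str) -> dict:
    # In the real system this is an authenticated call to the LMS badge API.
    return {"id": badge_id, "name": "Research Ethics Training", "issuedTo": "alice"}

def wrap_badge_as_credential(badge: dict, gateway_issuer: str) -> dict:
    credential = {"issuer": gateway_issuer, "claims": {"openBadge": badge}}
    credential["proof"] = f"signed-by:{gateway_issuer}"   # placeholder signature
    return credential

def issue_to_wallet(user_token: str, badge_id: str) -> dict:
    badge = fetch_badge_from_lms(user_token, badge_id)
    return wrap_badge_as_credential(badge, "https://gateway.example.ac.jp")

print(issue_to_wallet("token-after-idp-login", "badge-42"))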

 



We will also introduce activities that address other non-technical considerations.

Open Identity Exchange is working to map the trust frameworks of each country and identify differences.

For example, this will enable the EU side to understand under what rules a credential issued in Japan was issued, and to determine whether additional measures are necessary.

また、技術以外の考慮事項に対処する活動についても紹介します。

Open Identity Exchangeは、各国の信頼フレームワークをマッピングし、相違点を特定する作業を行っています。

例えば、これによりEUは、日本が発行したクレデンシャルを発行する際にどのような規則が用いられたかを理解し、追加の措置が必要かどうかを判断することができます。



There are also activities in the academic world to map frameworks related to qualification levels.

In the academic world, there are two main types of credentials: micro-credentials, mainly learning records, and macro-credentials, which are qualifications such as degrees and credits.

Micro-credentials are becoming increasingly digitized, as with the OpenBadge used in the NII example mentioned earlier, but standardizing the difficulty level of skills is hard, and I think this will remain a challenge. For macro-credentials, on the other hand, UNESCO has established standards for skill levels, so each country can define its levels based on them.

学術界でも、資格レベルに関連する枠組みをマッピングする活動があります。

学術界では、主に学習記録であるマイクロ資格と、学位や単位などの資格であるマクロ資格の2つの主要な資格があります。

マイクロ・クレデンシャルは、先ほど例に挙げたNIIのOpenBadgeのように、どんどんデジタル化が進んでいますが、スキルの難易度をどう標準化するかは難しい。これは今後も課題になっていくと思います。一方、マクロ・クレデンシャルについては、ユネスコが技能レベルの基準を定めており、各国がそれをベースにレベルを定義できるようになっています。


 

This is the approach to global standards and mapping as defined by UNESCO.

In this example, the EQF developed by Europe based on UNESCO standards is mapped to the frameworks of other countries.

For example, EQF Level 4 is mapped to Country X Level 5 and Country Y Level 3.

これは、ユネスコが定義するグローバルスタンダードとマッピングへのアプローチです。

この例では、ユネスコの基準に基づいてヨーロッパが開発したEQFが、他の国のフレームワークにマッピングされています。

例えば、EQFレベル4は、国Xのレベル5および国Yのレベル3にマッピングされています。
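As a toy version of such a mapping table, the sketch below encodes only the level relationship mentioned above (EQF level 4 corresponding to Country X level 5 and Country Y level 3); the country names come from the slide and everything else is a placeholder.

# Toy qualification-level mapping table; only the EQF 4 row reflects the
# example in the text, and the country names are placeholders.
eqf_to_national = {
    4: {"Country X": 5, "Country Y": 3},
}

def translate_level(eqf_level: int, country: str):
    """Return the national level corresponding to an EQF level, if known."""
    return eqf_to_national.get(eqf_level, {}).get(country)

print(translate_level(4, "Country X"))  # 5
print(translate_level(4, "Country Y"))  # 3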



 In addition, we will introduce some of the activities that have been taking place in Japan recently.

The Trusted Web initiative has been underway since 2020, and research into digital identity wallets is being carried out. The introduction of national ID cards and mobile driver's licenses is already being planned; starting next March, it will be possible to issue these licenses to smartphones. Various studies are also underway to enable interoperability of academic credentials with other countries, and I hope that work on interoperability with Taiwan and other countries will progress.

さらに、最近日本で起こっている活動の一部をご紹介したいと思います。

2020年からTrusted Webが動き出しており、デジタルIDウォレットの研究が進められています。また、国民IDカードやモバイル運転免許証の導入もすでに計画されています。来年3月からは、スマートフォンでの許可証発行が可能になります。また、学歴の相互運用性についても諸外国との間でさまざまな研究が進められており、今後は台湾などとの相互運用性についての研究が進むことを期待しています


Let me finish by summarizing.

First, interoperability involves both technical issues and non-technical considerations such as rules and frameworks. It is essential to reach agreement on technical matters such as identifiers, transport protocols, and data models, and, as I explained, semantics and trust frameworks are also necessary from a non-technical perspective.

I also explained that we need to respond to the recent paradigm shift in identity systems. To introduce a wallet-based system into the federation-type systems that have been used so far, components such as proxies and gateways will likely be needed, at least temporarily. I also mentioned that comparing trust frameworks makes it possible to clarify what additional processing is required to connect the systems.

In the future, we will need to connect many systems to overcome the silo-based society that has continued since the fall of the Tower of Babel. I hope that we can continue to have discussions like this with everyone.

Thank you.

最後にまとめます。 まず、相互運用性は技術的な問題と、ルールやフレームワークなどの技術的でない考慮事項の両方を含んでいます。識別子、通信プロトコル、データモデルなどの技術的な事項について合意に達することが不可欠です。また、技術的でない観点からは、セマンティクスや信頼フレームワークが必要であることを説明しました。 また、アイデンティティシステムの最近のパラダイム変化に対応する必要があることを説明しました。これまで使われてきたフェデレーション型システムに、ウォレット型システムを導入するには、プロキシやゲートウェイなどのコンポーネントを一時的に使用する必要があると考えられます。また、信頼フレームワークを比較することで、システムを接続するためにどのような追加処理が必要かを明確にできることを述べました。 今後は、バベルの塔の崩壊以来続いてきた縦割り社会を乗り越えるためにも、多くのシステムを接続していく必要があります。今後も皆さんとこのような議論を続けていければと思います。 ありがとうございました。



This was a presentation where I really wished I had a teleprompter...

Incidentally, before the session started, all the speakers took a commemorative photo with Taiwan's Deputy Minister of Digital Affairs (on my left), who gave the opening talk. It felt like a little ceremony, which was nice.







Hyperonomy Digital Identity Lab

Building a Knowledge Graph from Wikipedia in Neo4j

https://guides.neo4j.com/wiki
https://github.com/jbarrasa/datasets/blob/master/wikipedia/data/cats.csv?raw=true
https://github.com/jbarrasa/datasets/blob/master/wikipedia/data/rels.csv?raw=true

Toronto Songwriter/Performer Use Case: DID Method Candidates

Copyright (c) 2024 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Copyright (c) 2024 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Sunday, 15. December 2024

IdM Laboratory

The New Year Kicks Off with the Digital Identity Workforce Development Promotion WG Activity Report Meeting

Hello, this is Fujie. On January 14, right after the New Year, the Digital Identity Workforce Development Promotion WG will hold its activity report meeting. https://openid.connpass.com/event/339259/ With that book going on sale at the end of the year, it will be a good opportunity to hear from the members who handled the translation about their struggles, so please do come along.

Hello, this is Fujie.


On January 14, right after the New Year, the Digital Identity Workforce Development Promotion WG will hold its activity report meeting.

https://openid.connpass.com/event/339259/



With that book going on sale at the end of the year, it will be a good opportunity to hear from the members who handled the translation about their struggles, so please do come along.


Kyle Den Hartog

We Shape Our Tools then Our Tools Shape Us

the methods we choose to communicate with affects not only the cohesiveness of the community, but also the norms and the values reflected.

When I first heard the quote “We shape our tools then our tools shape us” from Orie Steele (A quick web search attributes it originally to John Culkin) I didn’t quite understand the profound wisdom in the statement. It’s one of those clickbait sounding quotes that sticks with you for a bit and has to ruminate for longer like it’s an idea slow cooking in a crockpot on thanksgiving dinner. But the wisdom in it rings true as I see it playing out repetitively within the technology space and I’m sure it makes a difference in many other aspects of our life.

When we build a technology, tool, web app, or even language we first build the tool to solve a problem. Over time though, new problems arise and they inevitably change based on the tools we choose to use to solve them. For example, language in its most primitive form was likely just a means to express emotions such as danger. Eventually it evolved and specialized so much that we can hear the sounds, piece together their grammatical structure, but only if we’re listening are we able to unpack the meaning. Whether that’s because the language being spoken is foreign (in which case we might not even understand the grammatical structure) or because we’re speaking with such abstract jargon the impact of the language as a tool remains the same. We cannot convey our intentions well enough such that our audience understands why we chose to make a noise in the first place. So we end up stuck in this sempiternal cycle of solving problems only for new ones to arise and the evidence we leave is in the iterative design of our tools. Or in the case of language in the semantic drift of words and change of mediums over time.

This has had me wondering lately, how do our newest tools on the Web used for communication shape our conversations? Looking back on the various forms of communication we’ve had throughout history might shed some light on this, but I’m not exactly sure so bear with me as I explore this idea further and hopefully my wandering thought leads to a moment of reflection for you the reader rather than loss of attention already.

Starting with speaking, since that seems to be the most primitive form of communication, when we speak with people we use tone, pauses, and word choice as our primary tools to convey different meanings. This form of communication hasn’t failed us yet so it’s still just as ever present now as it was at the beginning of humanity. But that’s just within the noises we make - when speaking we also express ourselves through our body language both as a speaker and as an audience. The speaker can convey a point with conviction through standing taller, speaking louder, and pausing longer. At the same time, without saying a word an audience can convey a lack of interest without even saying a word such as by yawning or diverting their attention elsewhere. So how does this appear in other forms of communication?

In written form, we lose the ability to convey emphasis through our body language so we had to modify our language to express it. Things like exclamation points, question marks, and even a parenthetical are evidence of that. But we also gained something rather noteworthy as well. With writing, because we have time to pause and reflect on our thoughts, edit them, and think through them we are able to build better structure into our writing to role play as both the author and the audience to make sure our intent isn’t lost. Additionally, because we’re not as restricted by the same social considerations we can write in both a more expansive or condensed form to convey our intent and reflect on our impact.

What becomes interesting though about both forms of communication is that we can both add and lose properties just by changing the medium that we use as well. For example, when speaking we can do this face to face with an individual, behind a podium to a variable sized audience, or even recorded as a video with or without editing. In writing, we can write books, essays, emails, messages, or social media posts. Each comes with their own nuances and social contracts in how we communicate within the medium chosen. And this is how we now shape our tools and then our tools shape us. We create new mediums of communication, and the new mediums form new rituals and social contracts in “acceptable” forms of communication via those mediums. So why does that matter?

It’s important because this informs us about how the methods we choose to communicate with affects not only the cohesiveness of the community, but also the norms and the values reflected. Thereby changing the values we expect from one another. As an example, many collectives have adopted social media as a common form of communication. With that though, particularly with Twitter, comes the limited forms of editing, condensed writing that skips the rationale, and an algorithm that chooses to reinforce rhetoric through amplifying ideas with more likes and speakers with more followers/subscribers/friends.

Social media then instills values such as lack of reflection in our communication, attention grabbing ideas, and a bit of group think. With each new medium comes a different collection of properties that encourages emergent behavioral changes in us that show up in our collective shared values. So the final question I want to leave the reader with to reflect upon… Are the values you want within your communities being encouraged by the tools you use or is there a different tool that is needed to better reflect how you want to communicate with others?

Saturday, 14. December 2024

IdM Laboratory

Nominations for the Kim Cameron Award Have Opened Again This Year

Hello, this is Fujie. I have posted several times about the Kim Cameron Award and the Vittorio Bertocci Award, and nominations for this year's Kim Cameron Award have now opened. https://digitalidadvancement.org/awards-and-grants/kim/2025-kca/ The upcoming schedule looks like this, so if you feel up to it, why not apply? December 11: Call for Applications Opens January 21: Call for Applications Closes January 31: Awardees Notified February 7: Awardee Confirmation of Acceptance Due 30 Days

Hello, this is Fujie.

I have posted several times about the Kim Cameron Award and the Vittorio Bertocci Award, and nominations for this year's Kim Cameron Award have now opened.

https://digitalidadvancement.org/awards-and-grants/kim/2025-kca/




The upcoming schedule is as follows, so if you feel up to it, why not apply?

December 11: Call for Applications Opens
January 21: Call for Applications Closes
January 31: Awardees Notified
February 7: Awardee Confirmation of Acceptance Due
30 Days Post-Event: Awardee Receipts for Reimbursement Due
30 Days Post-Event: Awardee Reflections Due

Friday, 13. December 2024

IdM Laboratory

Registration for the OAuth Security Workshop Is Now Open

Hello, this is Fujie. As announced the other day, the next OAuth Security Workshop will be held in Iceland. https://idmlab.eidentity.jp/2024/09/oauth-security-workshop.html Registration has now opened, so sign up! https://oauth.secworkshop.events/osw2025 Incidentally, there appears to have been a volcanic eruption near Reykjavík in late November. When I asked Daniel, he said it was fine, but it looks like we will keep an eye on the situation while planning to attend. For what it's worth, the embassy's page carries a declaration that the eruption subsided on December 9. https://www.is.emb-japan.go.jp/itpr_ja/volcano.html
Hello, this is Fujie.
As announced the other day, the next OAuth Security Workshop will be held in Iceland. https://idmlab.eidentity.jp/2024/09/oauth-security-workshop.html
Registration has now opened, so sign up! https://oauth.secworkshop.events/osw2025
Incidentally, there appears to have been a volcanic eruption near Reykjavík in late November. When I asked Daniel, he said it was fine, but it looks like we will keep an eye on the situation while planning to attend. For what it's worth, the embassy's page carries a declaration that the eruption subsided on December 9. https://www.is.emb-japan.go.jp/itpr_ja/volcano.html




Werdmüller on Medium

The open social web is the future of the internet. Here’s why I’m excited.

It gives you back control. Continue reading on Medium »

It gives you back control.

Continue reading on Medium »

Thursday, 12. December 2024

IdM Laboratory

Google's "Understand Passkeys in 4 Minutes" Video Is Excellent

Hello, this is Fujie. Yesterday was the FIDO Tokyo Seminar. I was in Taipei, so I couldn't attend... Feeling frustrated, I followed #fido on X instead, and the "understand passkeys in 4 minutes" video, which was apparently mentioned in Eiji's part (I think), was so wonderfully clear that I'm noting it down in this post. Roughly, the storyline went like this: We've used passwords for a long time, but complex passwords are hard to remember and reusing the same password is dangerous. For that, Google offers a password manager! It syncs across devices, which is very convenient! But not everyone relies on a password manager. And before that, passwords simply aren't secure to begin with. Two-step verification raises security but is a hassle. For that

Hello, this is Fujie.


Yesterday was the FIDO Tokyo Seminar. I was in Taipei, so I couldn't attend...

Feeling frustrated, I followed #fido on X instead, and the "understand passkeys in 4 minutes" video, which was apparently mentioned in Eiji's part (I think), was so wonderfully clear that I'm writing this post as a memo.


Roughly, the storyline went like this:

We've used passwords for a long time, but complex passwords are hard to remember and reusing the same password is dangerous.
For that, Google offers a password manager!
It syncs across devices, which is very convenient!
But not everyone relies on a password manager.
And more fundamentally, passwords simply aren't secure.
Two-step verification raises security but is a hassle.
For that, there are passkeys!
Passkeys are a secure and safe technology.
You can sign in to many sites without a password.
The OS or browser simply shows a prompt, and the user just does the same thing as unlocking their phone.
This has big benefits: because sign-in is easy, users don't drop off and conversion rates go up.
There is also no cost for two-step verification.
Passkeys are also resistant to phishing.
Passkeys are built on public key cryptography.
The private key is stored on the device and is accessed via the screen-unlock mechanism.
The public key paired with that private key is stored on the server.
No secret information is stored on the server, so there is no worry about it leaking from the server.
Passkeys are stored on the device where they were created.
So another person cannot misuse your passkey.
Passkeys are supported by many browsers and platforms.
Passkeys can also be backed up via a credential manager.
So even if you get a new Android device, you can carry over your passkeys by signing in with the same Google account.
Passkeys can also be used with devices they cannot sync to; this uses the hybrid protocol.
For example, when the owner of an Android device signs in from a macOS browser, scanning a QR code and using the passkey on the Android device completes the sign-in on the macOS side.
Adopt passkeys now!
A safer, more convenient world awaits.

My impression is that they packed quite a lot into four minutes, but the flow is carefully thought out, so it was very easy to follow. Though I suspect they really wanted to call it "understand passkeys in 3 minutes."

Monday, 09. December 2024

Damien Bod

Using Entra External ID with an Auth0 OpenID Connect identity provider

This post looks at implementing an Open ID Connect identity provider in Microsoft Entra External ID. Auth0 is used as the identity provider and an ASP.NET Core application is used to test the authentication. Microsoft Entra External ID federates to Auth0. Client code: https://github.com/damienbod/EntraExternalIdCiam Microsoft Entra External ID supports federation using OpenID Connect and was […]

This post looks at implementing an Open ID Connect identity provider in Microsoft Entra External ID. Auth0 is used as the identity provider and an ASP.NET Core application is used to test the authentication. Microsoft Entra External ID federates to Auth0.

Client code: https://github.com/damienbod/EntraExternalIdCiam

Microsoft Entra External ID supports federation using OpenID Connect and was used to create the Auth0 OpenID Connect client. The settings configured in the portal must align with those in the Auth0 identity provider. The Entra External ID OIDC client would work with most IDPs, since standard OpenID Connect is used.

The corresponding client configuration was created in Auth0:

The Auth0 client in this setup returns the email in the name claim. This must match in the claim mapping in Entra External ID. The email claim is switched to use the name claim. Depending on the mappings, different claim mappings are required. The values are read from the id_token.

The correct callback URLs MUST be set on the Auth0 client configuration. Entra External ID requires the following redirect URLs:

https://<tenant-subdomain>.ciamlogin.com/<tenant-ID>/federation/oauth2
https://<tenant-subdomain>.ciamlogin.com/<tenant-subdomain>.onmicrosoft.com/federation/oauth2

The values are added to the Auth0 client. See the Entra External ID docs.

Once the OpenID Connect client is set up, the identity provider can be used in the Entra External ID user flow. Select the required user flow:

Then select the Identity Providers and add the OpenID Connect IdP.

To test, an ASP.NET Core application using Microsoft.Identity.Web was implemented to authenticate against Entra External ID. When the user is redirected to authenticate, the sign-in with Auth0 option is displayed.

Notes

This works well, but setting up the claim mappings can be difficult if you don't already know them. With this in place, it is now possible to use identities from almost any identity provider.

Links

https://learn.microsoft.com/en-us/entra/external-id/customers/how-to-custom-oidc-federation-customers

Saturday, 07. December 2024

Mike Jones: self-issued

Integrity Properties for Federations

I’m writing to highly recommend the article “How to link an application protocol to an OpenID Federation 1.0 trust layer” by Vladimir Dzhuvinov. In it, he defines two kinds of integrity for Federations, and describes how to achieve them: Federation Integrity, which is defined as: This ensures mutual trust between two entities is established always […]

I’m writing to highly recommend the article “How to link an application protocol to an OpenID Federation 1.0 trust layer” by Vladimir Dzhuvinov. In it, he defines two kinds of integrity for Federations, and describes how to achieve them:

Federation Integrity, which is defined as:

This ensures mutual trust between two entities is established always from a common trust anchor. Any resolved metadata and policies that govern the client application and the OpenID provider in a transaction will then fall under the rules of the same federation and thus will be aligned and consistent with one another.

Metadata Integrity, which is defined as:

It ensures the trust chains for an entity to a given trust anchor will invariably result in consistent metadata and policies. The natural way to achieve this is for the federation topology under a trust anchor to form a tree. Topologies that lead to multiple paths from a leaf entity to a trust anchor are to be avoided.

The article also explores how application protocols, such as OpenID Connect or digital wallet protocols, can achieve those properties in practice (and when they do and don’t need to).

Finally, I’ll note that, as a result of Vladimir’s and others’ thinking about the topic, we just added a section on Federation Topologies to the OpenID Federation specification, which provides concrete guidance on how to achieve Metadata Integrity.

I’ll stop here so as not to repeat all the useful content in Vladimir’s article. By all means, give it a read!

Wednesday, 04. December 2024

Kyle Den Hartog

Pseudonymity on the Web is Dying

Many digital identity proponents love the meme “the internet was built without an identity layer” and claim this as a bug to defend the need for a universal digital identity system. They’ve got it wrong though. This is a feature of the Web that has allowed the Web to flourish not a bug that must be fixed. The reason we should be preserving it extends beyond privacy and is more than ever relevant t

Many digital identity proponents love the meme “the internet was built without an identity layer” and claim this as a bug to defend the need for a universal digital identity system. They’ve got it wrong though. This is a feature of the Web that has allowed the Web to flourish not a bug that must be fixed. The reason we should be preserving it extends beyond privacy and is more than ever relevant to the impact the Web has on our society.

Instead, we have to look more broadly at the purpose the Web serves in our lives. Today, many of us use the Web as a means to learn, challenge our beliefs, and ultimately shape who we are by being exposed to new and challenging ideas. However, without pseudonyms it’s hard for us to refine and grow ideas. This is because pseudonyms are a hallmark of challenging the Overton window.

What is an Overton window though and why does it matter? The idea of an Overton window is that there’s a limited range of policy positions or discussion points that are acceptable to the mainstream populace. For example, if we were to debate the concept of inequality and someone were to propose the idea of getting rid of money in today’s time, it would not be a widely accepted solution in a discussion or debate. The reason for this is that it’s impractical relative to our cultural and social norms and therefore sits outside the Overton window of today.

Similarly, slavery is a well understood concept that is no longer acceptable in today’s society. We understand it no longer aligns with our values as humans and therefore sits outside the Overton window as well, as a regressive policy that was once acceptable but definitely is not now. So, in any good faith discussion or debate there’s a range of beliefs that are acceptable as a path forward for the majority of people, and that’s what makes the Overton window interesting in the broader perspective of the Web. It serves as a tool for us to consider and evolve our beliefs in society today. This range of acceptable beliefs changes though, but how?

Essentially, in any discussion people stake their credibility to put forth ideas that they believe have merits to the stakeholders of a problem and one that aims to solve such problem. The issue is though that some ideas exist too far outside the range of acceptable policies such that the idea of even proposing them would instantly affect the credibility of the author of the idea. Therefore, in order to reduce the risk while still moving the discussion forward we need the ability to submit and refine ideas that exist outside the Overton window until they’re good enough to shift the window. So why is pseudonymity important to guiding this?

Pseudonymity serves as a way to reduce the risk while still contributing to the overall broader conversation without running the risk of hurting our credibility within the broader context. Often times, the people with the most interesting ideas are those who are most involved within the topic, but may not be willing to contribute under their true moniker. Benjamin Franklin is one such example of a person who famously used pseudonyms in this way. Silence Dogood was a pseudonym that Benjamin Franklin developed at the age of 16 in order to test his political ideas and refine his writing style. It’s likely the case that this ability to submit publications under this pseudonym and listen to the feedback of readers anonymously contributed to his ability to shape the founding of the United States beneficially. So should we remove or reduce that ability from the Web which is now the primary place for us to engage in political discussions? Furthermore, how is that happening?

With the advent of digital credentials coming to your phone soon we’re about to make it very easy for any social media platform, forum, or site to link our online identities to our real life identities. In fact, in some places like Australia they’ve recently passed a bill to require the collection of this information for the purposes of protecting children. Since these digital credentials of our true monikers will be easily accessible, as they become collected within places on the Web where political discussions occur we will see a reduction in the use of pseudonyms on the Web. Instead, people will either be forced to remain silent or they will be forced to overcome a greater barrier of entry in order to shift the Overton window under true monikers. Is that beneficial for society as a whole?

I’d argue not - clearly it has been beneficial, both for the progress of a nation and for our progress as people, to have had pseudonymity in political discussions previously. Therefore, it seems risky to accept the path of reducing pseudonymity on the Web, and ultimately reducing the shifting of the Overton window, by chilling political speech on the Web via the allowance of digital credentials. Instead, we should be limiting the usage of digital credentials as much as possible in order to continue making progress in our political discussions and ultimately progressing our shared values together.

Monday, 02. December 2024

Damien Bod

Using ASP.NET Core with Azure Key Vault

This article looks at setting up an ASP.NET Core application to use Azure Key Vault. When deployed to Azure, it works like in the Azure documentation but when working on development PCs, some changes are required for a smooth developer experience. Code: https://github.com/damienbod/UsingAzureKeyVaultInDevelopment I develop using Visual Studio and manage multiple accounts and test environments. […]

This article looks at setting up an ASP.NET Core application to use Azure Key Vault. When deployed to Azure, it works like in the Azure documentation but when working on development PCs, some changes are required for a smooth developer experience.

Code: https://github.com/damienbod/UsingAzureKeyVaultInDevelopment

I develop using Visual Studio and manage multiple accounts and test environments. However, I frequently encounter access errors when using DefaultAzureCredentials. These errors typically occur because the account and tenant do not match the filter set in Visual Studio, or the user lacks access to the tenant. This issue arises often:

The problem can be resolved by using an application client secret to access the Azure Key Vault, implemented with the ChainedTokenCredential. The access client can be specified in the project’s user secrets, allowing it to function independently of the user account and the last filter used in Visual Studio. To set this up, a new Azure App registration was created, and a client secret was added. This client secret is then included in the dotnet user secrets for the project.

A Role assignment was added to the Azure Key Vault for the application service principal, and the necessary access role was configured. With the correct configuration in the user secrets on the development PC, the application can access the Azure Key Vault. This setup eliminates the need for Azure RBAC changes for each developer.

Setup local development credentials

The following Nuget packages can be used to integrate a Key Vault client in an ASP.NET Core application. Depending on how the client is setup, or what data is used from the Key Vault, different packages can be used.

Azure.Extensions.AspNetCore.Configuration.Secrets
Azure.Identity
Azure.Security.KeyVault.Certificates
Azure.Security.KeyVault.Secrets

A ChainedTokenCredential is used to access the Key Vault, preventing the access issues associated with DefaultAzureCredential. In local development, a client credential is used via a secret, which can be stored in the dotnet user secrets. When a new developer joins the team, these user secrets can be shared, eliminating the need to modify Azure Key Vault RBAC settings for each new developer. This setup is ideal for development, as it avoids problems caused by incorrect tenants or user accounts in Visual Studio. For production or other deployments, a system-assigned managed identity is used.

using Azure.Identity;

namespace DevelopmentAspNetCoreKeyVault;

public static class AppAccessCredentials
{
    public static ChainedTokenCredential GetChainedTokenCredentials(IConfiguration configuration, bool isDevelopment)
    {
        if (!isDevelopment)
        {
            // Use a system assigned managed identity on production deployments
            return new ChainedTokenCredential(new ManagedIdentityCredential());
        }
        else // dev env
        {
            var tenantId = configuration.GetValue<string>("EntraId:TenantId", string.Empty);
            var clientId = configuration.GetValue<string>("EntraId:ClientId", string.Empty);
            var clientSecret = configuration.GetValue<string>("EntraId:ClientSecret", string.Empty);
            var options = new TokenCredentialOptions
            {
                AuthorityHost = AzureAuthorityHosts.AzurePublicCloud
            };

            if (string.IsNullOrEmpty(tenantId)) // DevOps
            {
                // Use DefaultAzureCredential if AzureCliCredential is not used in your DevOps
                return new ChainedTokenCredential(new AzureCliCredential());
            }

            // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential
            var devClientSecretCredential = new ClientSecretCredential(
                tenantId, clientId, clientSecret, options);

            var chainedTokenCredential = new ChainedTokenCredential(devClientSecretCredential);

            return chainedTokenCredential;
        }
    }
}

The user secrets would have something like this:

{
  "EntraId": {
    // aspnetcore-keyvault-development-access
    "TenantId": "--tenant_id--",
    "ClientId": "--client_id--",
    "ClientSecret": "--secret--"
  }
}

Using Key Vault directly in the application

The secret from the Azure Key Vault can be used directly in the code by using the SecretClient class from Azure.Identity. The AppAccessCredentials is used to return the chained credentials and the application is authenticated if the service principal from the App registration has the correct RBAC.

// Azure SDK direct
var client = new SecretClient(new Uri(_configuration["AzureKeyVaultEndpoint"]!),
    AppAccessCredentials.GetChainedTokenCredentials(_configuration, _hostEnvironment.IsDevelopment()));

var secret = await client.GetSecretAsync("demosecret");
DemoSecret = secret!.Value.Value;

Using Key Vault as a configuration

The AddAzureKeyVault method can be used to use the Azure Key as a further configuration option. I like this as all configuration is abstracted with the IConfiguration interface in the application. The AddAzureKeyVault takes the chained credential as a param.

var keyVault = builder.Configuration["AzureKeyVaultEndpoint"];

if(!string.IsNullOrEmpty(keyVault))
{
    builder.Configuration.AddAzureKeyVault(
        new Uri($"{builder.Configuration["AzureKeyVaultEndpoint"]}"),
        AppAccessCredentials.GetChainedTokenCredentials(builder.Configuration, builder.Environment.IsDevelopment()));
}

The secret can be read anywhere in the code using the IConfiguration interface. This can be used in different ways and provides type safe ways of using the properties.

// ASP.NET Core configuration
// From the key vault using ASP.NET Core configuration integration,
// or from user secrets if offline or fast startup is required
DemoSecretConfig = _configuration["demosecret"];

Using user secrets for local development

Sometimes, you need to work offline or frequently restart the application. Constantly connecting to Azure Key Vault can slow down development. In such cases, it’s often more efficient to use dotnet user secrets instead of Azure Key Vault.

Note

This is a good and simple approach for developing applications which use Azure Key Vault. You can use a local key management system or whatever local developers use. The client secret should not be used in production deployments as the secret expires and there are better ways to secure the access for the application to Key Vault. System assigned managed identity is the best way to implement application access in Azure in production environments.

Links

https://learn.microsoft.com/en-us/aspnet/core/security/key-vault-configuration

https://docs.microsoft.com/en-us/azure/key-vault/key-vault-developers-guide

Securing Azure Functions using an Azure Virtual Network
Using Key Vault and Managed Identities with Azure Functions
Using Azure Key Vault with ASP.NET Core and Azure App Services

https://stackoverflow.com/questions/40025598/azure-key-vault-access-denied

https://cmatskas.com/securing-asp-net-core-application-settings-using-azure-key-vault/

https://github.com/jayendranarumugam/DemoSecrets/tree/master/DemoSecrets

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?view=azure-cli-latest

https://anthonysimmon.com/defaultazurecredential-local-development-optimization/

Sunday, 01. December 2024

Werdmüller on Medium

My OKRs for 2025

Using Objectives and Key Results to get through a difficult year Continue reading on Medium »

Using Objectives and Key Results to get through a difficult year

Continue reading on Medium »


Mike Jones: self-issued

Three New Specs Enhancing OpenID Federation and New Contributors

The OpenID Connect working group recently adopted three new specifications that build upon and provide new capabilities to OpenID Federation. But I’m not only happy about these because of the engineering benefits they bring. I’m particularly happy because they bring new active contributors to the work, specifically Michael Fraser and Łukasz Jaromin, as well as […]

The OpenID Connect working group recently adopted three new specifications that build upon and provide new capabilities to OpenID Federation. But I’m not only happy about these because of the engineering benefits they bring.

I’m particularly happy because they bring new active contributors to the work, specifically Michael Fraser and Łukasz Jaromin, as well as continuing the strong work by Giuseppe De Marco, who’s become a leader in the space. They’re also supported by a few veterans: Roland Hedberg, John Bradley, and yours truly, plus now the full OpenID Connect working group.

Here are the three new specifications, along with an abstract for each of them:

1. OpenID Federation Extended Subordinate Listing

This specification acts as an extension to OpenID Federation 1.0. It outlines methods to interact with a given Federation with a potentially large number of registered Entities, as well as mechanisms to retrieve multiple entity statements along with associated details in a single request.

2. OpenID Federation Wallet Architectures

As digital wallets become increasingly deployed for managing identity credentials, establishing an architecture for trusted communication is required to allow each participant in the ecosystem to evaluate other participants’ compliance with mutual trust frameworks and accomplish secure and trusted transactions.

This specification defines how to use OpenID Federation 1.0 to enhance the security and interoperability of wallet ecosystems, facilitating trust establishment among the parties and enabling secure metadata exchange and policy application across large scale deployments. It outlines the general architecture of a federated trust infrastructure for wallet ecosystems, identifying participant roles and describing the use of those roles.

3. OpenID Connect Relying Party Metadata Choices

This specification extends the OpenID Connect Dynamic Client Registration 1.0 specification to enable RPs to express a set of supported values for some RP metadata parameters, rather than just single values. This functionality is particularly useful when Automatic Registration, as defined in OpenID Federation 1.0, is used, since there is no registration response from the OP to tell the RP what choices were made by the OP. This gives the OP the information that it needs to make choices about how to interact with the RP in ways that work for both parties.

Thanks to the members of the OpenID Connect working group who helped refine them before adoption, and are now working on progressing them in the working group.

Saturday, 30. November 2024

Heres Tom with the Weather

Austin Indieweb at Radio Coffee

On Wednesday 12/04 at 7pm, the Austin Indieweb Meetup returns to Radio Coffee and Beer. Count me in.

On Wednesday 12/04 at 7pm, the Austin Indieweb Meetup returns to Radio Coffee and Beer. Count me in.

Friday, 29. November 2024

Hyperonomy Digital Identity Lab

Model 37. Multi-dimensional Overton Olive (MuDOO) as an ADKAR-enabled Change Management Framework (MuDOO-ADKAR)

Copyright © 2024 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License Also Checkout:

Copyright © 2024 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Also Checkout:

Model 35. Overton Olive Digital Twin (#OvertonOlive #DigitalTwin) https://hyperonomy.com/2024/11/27/overton-olive-overtonolive/
Model 36. Overton Visual Taxonomy https://hyperonomy.com/2024/11/28/model-36-overton-visual-taxonomy/
Want to know more about ADKAR? https://www.prosci.com/methodology/adkar


Model 36. Overton Visual Taxonomy

Copyright © 2024 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License Model 36. Overton Visual Taxonomy Additional Information:

Copyright © 2024 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Model 36. Overton Visual Taxonomy

Additional Information:

Model 35. Overton Olive Digital Twin (#OvertonOlive #DigitalTwin) https://hyperonomy.com/2024/11/27/overton-olive-overtonolive/
Model 37. Multi-dimensional Overton Olive (MuDOO) as an ADKAR-enabled Change Management Framework (MuDOO-ADKAR)

Wednesday, 27. November 2024

Hyperonomy Digital Identity Lab

Model 35. Overton Olive Digital Twin (#OvertonOlive #DigitalTwin)

Copyright © 2024 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License Model 35. #OvertonOlive #DigitalTwin (click the diagram to enlarge it) Progressive Improvement through Continuous Transformation For more background on the #OvertonOlive:

Copyright © 2024 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Model 35. #OvertonOlive #DigitalTwin (click the diagram to enlarge it)
Progressive Improvement through Continuous Transformation

For more background on the #OvertonOlive:

Model 36. Overton Visual Taxonomy
https://hyperonomy.com/2024/11/28/model-36-overton-visual-taxonomy/
How We Think About How We Work
https://hyperonomy.com/2016/05/09/how-do-we-think-about-how-we-work/
Using #Graphitization to Create Your Organization’s Digital Twin
https://hyperonomy.com/2017/01/02/graphitization-of-the-enterprise/
Crossing the Enterprise Architecture Chasm
https://hyperonomy.com/2016/10/04/crossing-the-enterprise-architecture-chasm/
What are the differences between improving the design (and operation) of a smart city, an aircraft engine, a muscle car, a large enterprise, and an economic system …at hyperscale?
https://hyperonomy.com/2017/04/10/whats-the-difference-between-improving-the-design-and-operation-of-an-aircraft-engine-a-muscle-car-a-large-enterprise-and-a-commercial-global-cloud-services-platform/

Talking Identity

Broadening the Definition of Identity Practitioner

One of the wonderful, yet also daunting, aspects of being in the Identity industry is realizing just how far-reaching and all-encompassing its scope and boundaries are. It’s why we need more than just technologists and standards wonks engaged in driving the future of this industry. I’ve been on stage at Identiverse and other conferences for […]

One of the wonderful, yet also daunting, aspects of being in the Identity industry is realizing just how far-reaching and all-encompassing its scope and boundaries are. It’s why we need more than just technologists and standards wonks engaged in driving the future of this industry. I’ve been on stage at Identiverse and other conferences for many years challenging the Identirati to bring in people from other fields like User Experience Research, Law, Privacy, and more.

This is the challenge that the Digital Identity Advancement Foundation has taken on. You can check out how they are doing this on their website, via the Kim Cameron Award and the Vittorio Bertocci Award. Thanks to Arynn Crow, I had the chance to spend some time chatting with the 2024 Kim Cameron award winner, legal scholar Grace Klutke, at Authenticate 2024. We talked (in my usual rambling sort of way) about ethics in Identity, and the broader impact on human rights, inclusion, and access (in the human and not the technical Authorization sense). You can read her reflection on her experience at the conference here.

Oh, and support DIAF if you can. They’re doing good, and necessary, work! If DIAF continues to bring smart, talented, and motivated individuals like her from other fields into our industry, then we’ll be doing it right.

Monday, 25. November 2024

Just a Theory

RFC: Extension Packaging & Lookup

A proposal to modify the PostgreSQL core so that all files required for an extension live in a directory named for the extension, along with a search path to find extension directories.

Several weeks ago, I started a pgsql-hackers thread proposing a new extension file organization and a search path GUC for finding extensions. The discussion of Christoph Berg’s extension_destdir patch inspired this proposal. These threads cover quite a lot of territory, so I want to pull together a more unified, public proposal.

Here goes.

Challenges

A number of challenges face extension users, thanks to extension file organization in the Postgres core. The common thread among them is the need to add extensions without changing the contents of the Postgres installation itself.

Packager Testing

On Debian systems, the user account that creates extension packages lacks permission to add files to Postgres install. But testing extensions requires installing the extension where Postgres can find it. Moreover, extensions ideally build against a clean Postgres install; adding an extension in order to run make installcheck would pollute it.

Christoph’s patch solves these problems by adding a second lookup path for extensions and dynamic modules, so that Postgres can load them directly from the package build directory.

Alas, the patch isn’t ideal, because it simply specifies a prefix and appends the full pg_config directory paths to it. For example, if --sharedir outputs /opt/share and extension_destdir GUC is set to /tmp/build/myext, the patch will search in /tmp/build/myext/opt/share. This approach works for the packaging use case, which explicitly uses full paths with a prefix, but would be weird for other use cases.

Peter Eisentraut proposed an alternate patch with a new GUC, extension_control_path, that provides a more typical search path pattern to find extension control files, but doesn’t account for shared modules that ship with an extension, requiring that they still live in the dynamic_library_path. Installing into custom directories requires the undocumented datadir and pkglibdir variables:

make install datadir=/else/where/share pkglibdir=/else/where/lib

This pattern can probably be simplified.

OCI Immutability

OCI (née Docker) images are immutable, while a container image runs on a writeable but non-persistent file system. To install persistent extensions in a container, one must create a persistent volume, map it to SHAREDIR/extensions, and copy over all the extensions it needs (or muck with symlink magic). Then do it again for shared object libraries (PKGLIBDIR), and perhaps also for other pg_config directories, like --bindir. Once it’s all set up, one can install a new extension and its files will be distributed to the relevant persistent volumes.

This pattern makes upgrades tricky, because the core extensions are mixed in with third-party extensions. Worse, the number of directories that must be mounted into volumes depends on the features of an extension, increasing deployment configuration complexity. It would be preferable to have all the files for an extension in one place, rather than scattered across multiple persistent volumes.

Peter Eisentraut’s patch addresses much of this issue by adding a search path for extension control files and related data/share files (generally SQL files). One can create a single volume with a lib directory for shared modules and share/extension directory for control and data/share files.

OCI Extension Images

However, an additional wrinkle is the ambition from the CloudNativePg (CNPG) community to eliminate the need for a persistent volume, and rely instead on mounting images that each contain all the files for a single extension as their own volumes, perhaps using the Kubernetes image volume feature (currently in alpha).

This feature requires all the files in an extension to live in a single directory, so that a volume mounted from an extension image contains everything required to use the extension. The search path patches proposed so far do not enable this behavior.

Postgres.app Immutability

The macOS Postgres.app supports extensions. But installing one into SHAREDIR/extensions changes the contents of the Postgres.app bundle, breaking Apple-required signature validation. The OS will no longer be able to validate that the app is legit and will refuse to start it.

Peter Eisentraut’s new patch addresses this issue as well, with all the same caveats as for the packager testing challenges.

Solution

To further address these issues, this RFC proposes to change file organization and lookup patterns for PostgreSQL extensions.

Extension Directories

First, when an extension is installed, by default all of its files will live in a single directory named for the extension. The contents include:

The control file that describes the extension
Subdirectories for SQL, shared modules, docs, binaries, etc.

Subdirectories roughly correspond to the pg_config --*dir options:

bin: Executables
doc: Documentation files
html: HTML documentation files
lib: Dynamically loadable modules
locale: Locale support files
man: Manual pages
share: SQL and other architecture-independent support files

This layout reduces the cognitive overhead for understanding what files belong to what extension. Want to know what’s included in the widget extension? Everything is in the widget directory. It also simplifies installation of an extension: one need add only a directory named for and containing the files required by the extension.

Configuration Parameter

Add a new pg_config value that returns the directory into which extensions will by default be installed:

--extdir show location of extensions

Its default value would be $(pg_config --sharedir)/extension, but could be set at compile time like other configuration parameters. Its contents consist of subdirectories that each contain an extension, as described in Extension Directories. With a few extensions installed, it would look something like:

❯ ls -1 "$(pg_config --extdir)"
auto_explain
bloom
isn
pair
plperl
plpgsql
plv8
xml2
semver
vector

Extension Path

Add an extension lookup path GUC akin to dynamic_library_path, called extension_path. It lists all the directories that Postgres will search for extensions and their files. The default value for this GUC will be:

extension_path = '$extdir'

The special string $extdir corresponds to the pg_config option of the same name, and functions exactly as $libdir does for the dynamic_library_path GUC, substituting the appropriate value.

Lookup Execution

Update PostgreSQL’s CREATE EXTENSION command to search the directories in extension_path for an extension. For each directory in the list, it will look for the extension control file in a directory named for the extension:

$dir/$extension/$extension.control
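In pseudocode, that lookup would work roughly as follows (an illustrative Python sketch of the proposed behavior, not actual Postgres source; the function name and the list-based path are made up for illustration):

import os

def find_extension_control(name, extension_path, extdir):
    # Check each directory in extension_path, substituting $extdir,
    # until a control file named for the extension turns up.
    for entry in extension_path:
        directory = entry.replace("$extdir", extdir)
        control = os.path.join(directory, name, f"{name}.control")
        if os.path.exists(control):
            return control
    return None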

The first match will be considered the canonical location for the extension. For example, if Postgres finds the control file for the pair extension at /opt/pg17/ext/pair/pair.control, it will load files only from the appropriate subdirectories, e.g.:

SQL files from /opt/pg17/ext/pair/share
Shared module files from /opt/pg17/ext/pair/lib

PGXS

Update the extension installation behavior of PGXS to install extension files into the new layout. A new variable, $EXTDIR, will define the directory into which to install extension directories, and default to $(pg_config --extdir). It can be set to any literal path, which must exist and be accessible by the PostgreSQL service.

The $EXTENSION variable will be changed to allow only one extension name. If it’s set, the installation behavior will be changed for the following variables:

EXTENSION: Creates $EXTDIR/$EXTENSION, installs $EXTDIR/$EXTENSION/$EXTENSION.control
MODULES and MODULE_big: Installed into $EXTDIR/$EXTENSION/lib
MODULEDIR: Removed
DATA and DATA_built: Installed into $EXTDIR/$EXTENSION/share
DATA_TSEARCH: Installed into $EXTDIR/$EXTENSION/share/tsearch_data
DOCS: Installed into $EXTDIR/$EXTENSION/doc
PROGRAM, SCRIPTS and SCRIPTS_built: Installed into $EXTDIR/$EXTENSION/bin

Each of these locations can still be overridden by setting one of the (currently undocumented) installation location options (e.g., datadir, pkglibdir, etc.).

Note

External projects that install extensions without using PGXS, like pgrx, must also be updated to either follow the same pattern or to delegate installation to PGXS.

Control File

The directory control file parameter will be deprecated and ignored.

The module_pathname parameter should only name a shared module in the lib subdirectory of an extension directory. Any existing use of a $libdir prefix will be stripped out and ignored before replacing the MODULE_PATHNAME string in SQL files. The implication for loading extension dynamic modules1 differs from the existing behavior as follows:

If the name is an absolute path, the given file is loaded.
If the name does not contain a directory part, the file is searched for in the lib subdirectory of the extension’s directory ($EXTDIR/$EXTENSION/lib).
Otherwise (the file was not found in the path, or it contains a non-absolute directory part), the dynamic loader will try to take the name as given, which will most likely fail. (It is unreliable to depend on the current working directory.)

Use Cases

Here’s how the proposed file layout and extension_path GUC addresses the use cases that inspired this RFC.

Packager Testing

A packager who wants to run tests without modifying a PostgreSQL install would follow these steps:

Prepend a directory under the packaging install to the extension_path GUC. The resulting value would be something like $RPM_BUILD_ROOT/$(pg_config --extdir):$extdir.
Install the extension into that directory: make install EXTDIR=$RPM_BUILD_ROOT
Make sure the PostgreSQL server can access the directory, then run make installcheck

This will allow PostgreSQL to find and load the extension during the tests. The Postgres installation will not have been modified; only the extension_path will have changed.

OCI/Kubernetes

To allow extensions to be added to an OCI container and to persist beyond its lifetime, one or more volumes could be used. Some examples:

Mount a persistent volume for extensions and prepend the path to that directory to the extension_path GUC. Then Postgres can find any extensions installed there, and they will persist. Files for all extensions will live on a single volume.
Or, to meet a desire to keep some extensions separate (e.g., open-source vs company-internal extensions), two or more persistent volumes could be mounted, as long as they’re all included in extension_path, are accessible by PostgreSQL, and users take care to install extensions in the proper locations.

CNPG Extension Images

To meet the CNPG ambition to “install” an extension by mounting a single directory for each, create separate images for each extension, then use the Kubernetes image volume feature (currently in alpha) to mount each as a read-only volume in the appropriate subdirectory of a directory included in extension_path. Thereafter, any new containers would simply have to mount all the same extension image volumes to provide the same extensions to all containers.

Postgres.app

To allow extension installation without invalidating the Postgres.app bundle signature, the default configuration could prepend a well-known directory outside the app bundle, such as /Library/Application Support/Postgres, to extension_path. Users wishing to install new extensions would then need to point the EXTDIR parameter to that location, e.g.,

$ make install EXTDIR="/Library/Application Support/Postgres"

Or the app could get trickier, setting the --extdir value to that location so that users don’t need to use EXTDIR. As long as extension_path includes both the bundle’s own extension directory and this external directory, Postgres will be able to find and load all extensions.

Extension Directory Examples

A core extension like citext would have a structure similar to:

citext
├── citext.control
├── lib
│   ├── citext.dylib
│   └── bitcode
│       ├── citext
│       │   └── citext.bc
│       └── citext.index.bc
└── share
    ├── citext--1.0--1.1.sql
    ├── citext--1.1--1.2.sql
    ├── citext--1.2--1.3.sql
    ├── citext--1.3--1.4.sql
    ├── citext--1.4--1.5.sql
    ├── citext--1.4.sql
    └── citext--1.5--1.6.sql

A pure SQL extension named “pair” would live in a directory named “pair” that looks something like this:

pair
├── LICENSE.md
├── README.md
├── pair.control
├── doc
│   ├── html
│   │   └── pair.html
│   └── pair.md
└── share
    ├── pair--1.0--1.1.sql
    └── pair--1.1.sql

A binary application like pg_top would live in the pg_top directory, structured something like:

pg_top
├── HISTORY.rst
├── INSTALL.rst
├── LICENSE
├── README.rst
├── bin
│   └── pg_top
└── doc
    └── man
        └── man3
            └── pg_top.3

And a C extension like semver would live in the semver directory and be structured something like:

semver
├── LICENSE
├── README.md
├── semver.control
├── doc
│   └── semver.md
├── lib
│   ├── semver.dylib
│   └── bitcode
│       ├── semver
│       │   └── semver.bc
│       └── semver.index.bc
└── share
    ├── semver--1.0--1.1.sql
    └── semver--1.1.sql

Phase Two: Preloading

The above-proposed solution does not allow shared modules distributed with extensions to compatibly be loaded via shared library preloading, because extension modules will no longer live in the dynamic_library_path. Users can specify full paths, however. For example, instead of:

shared_preload_libraries = 'pg_partman_bgw'

One could use the path to the lib subdirectory of the extension’s directory:

shared_preload_libraries = '/opt/postgres/extensions/pg_partman_bgw/lib/pg_partman_bgw'

But users will likely find this pattern cumbersome, especially for extensions with multiple shared modules. Perhaps some special syntax could be added to specify a single extension module, such as:

shared_preload_libraries = '$extension_path::pg_partman_bgw'

But this overloads the semantics of shared_preload_libraries and the code that processes it rather heavily, not to mention the LOAD command.

Therefore, as a follow up to the solution proposed above, this RFC proposes additional changes to PostgreSQL.

Extension Preloading

Add new GUCs that complement shared library preloading, but for extension module preloading:

shared_preload_extensions
session_preload_extensions
local_preload_extensions

Each takes a list of extensions for which to preload shared modules. In addition, another new GUC, local_extensions, will contain a list of administrator-approved extensions users are allowed to include in local_preload_extensions. This GUC complements local_preload_libraries’s use of a plugins directory.

Then modify the preloading code to also preload these files. For each extension in a list, it would:

Search each path in extension_path for the extension.
When found, load all the shared libraries from $extension/lib.

For example, to load all shared modules in the pg_partman extension, set:

shared_preload_extensions = 'pg_partman'

To load a single shared module from an extension, give its name after the extension name and two colons. This example will load only the pg_partman_bgw shared module from the pg_partman extension:

shared_preload_extensions = 'pg_partman::pg_partman_bgw'

This change requires a one-time change to existing preload configurations on upgrade.

Future: Deprecate LOAD

For a future change, consider modifying CREATE EXTENSION to support shared module-only extensions. This would allow extensions with no SQL component, such as auto_explain, to be handled like any other extension; it would live under one of the directories in extension_path with a structure like this:

auto_explain
├── auto_explain.control
└── lib
    ├── auto_explain.dylib
    └── bitcode
        ├── auto_explain
        │   └── auto_explain.bc
        └── auto_explain.index.bc

Note the auto_explain.control file. It would need a new parameter to indicate that the extension includes no SQL files, so CREATE EXTENSION and related commands wouldn’t try to find them.

With these changes, extensions could become the primary, recommended interface for extending PostgreSQL. Perhaps the LOAD command could be deprecated, and the *_preload_libraries GUCs along with it.

Compatibility Issues

The module_pathname control file variable would prefer the name of a shared module. The code that replaces the MODULE_PATHNAME string in SQL files would strip out the $libdir/ prefix, if present.
The behavior of loading dynamic modules that ship with extensions (i.e., the value of the AS part of CREATE FUNCTION) would change to look for a library name (with no directory part) in the lib subdirectory of the extension directory.
The directory control file parameter and the MODULEDIR PGXS variable would be deprecated and ignored.
*_preload_libraries would no longer be used to find extension modules without full paths. Administrators would have to remove module names from these GUCs and add the relevant extension names to the new *_preload_extensions variables. To ease upgrades, we might consider adding a PGXS variable that, when true, would symlink shared modules into --pkglibdir.
LOAD would no longer be able to find shared modules included with extensions, unless we add a PGXS variable that, when true, would symlink shared modules into --pkglibdir.
The EXTENSION PGXS variable will no longer support multiple extension names.
The change in extension installation locations must also be adopted by projects that don’t use PGXS for installation, like pgrx. Or perhaps they could be modified to also use PGXS. Long term it might be useful to replace the Makefile-based PGXS with another installation system, perhaps a CLI.

Out of Scope

This RFC does not include or attempt to address the following issue:

How to manage third-party shared libraries. Making system dependencies consistent in an OCI/Kubernetes environment or for non-system binary packaging patterns presents its own challenges, though they’re not specific to PostgreSQL or the patterns described here. Research is ongoing into potential solutions, and will be addressed elsewhere.

Acknowledgements

A slew of PostgreSQL community members contributed feedback, asked hard questions, and suggested moderate to significant revisions to this RFC via the pgsql-hackers list, in-person discussion at PGConf.eu, and pull request comments. I’d especially like to thank:

Yurii Rashkovskii and David Christensen for highlighting this issue at the Extension Ecosystem Summit
Christoph Berg for the original patch, calling attention to the permission issues when building Debian packages, and various list discussions
Tobias Bussmann for calling attention to the immutability issues with Postgres.app
Christoph Berg, Gabriele Bartolini, Peter Eisentraut, and Andres Freund for detailed discussion at PGConf.eu on extension location issues and getting to consensus on a general approach to solving it
Douglas J Hunley, Shaun Thomas, and Keith Fiske for pull request reviews and corrections
Álvaro Hernández Tortosa for a very close review and a ton of substantive feedback on the pull request
Paul Ramsey, Tristan Partin, Ebru Aydin Gol, and Peter Eisentraut for pgsql-hackers list discussions
Tembo for supporting my work on this and many other extension-related issues

All remaining errors and omissions remain my own.

But not non-extension modules; see Phase Two and Future for further details on preloading extension modules and eventually deprecating non-extension modules. ↩︎

More about… Postgres Extensions RFC Packaging Kubernetes OCI Packaging Postgres.app

Jon Udell

The social cost of mediated experience

The first time I heard a critique of mediated experience, the critic was my dad. He was an avid photographer who, during our family’s year in India, when I was a young child, used his 35mm Exacta to capture thousands of photos that became carousels of color slides we viewed for many years thereafter. It … Continue reading The social cost of mediated experience

The first time I heard a critique of mediated experience, the critic was my dad. He was an avid photographer who, during our family’s year in India, when I was a young child, used his 35mm Exacta to capture thousands of photos that became carousels of color slides we viewed for many years thereafter. It was a remarkable documentary effort that solidified our memories of that year. But dad was aware of the tradeoff. A favorite joke became: “Q: How was your trip?” “A: I won’t know until the film is developed!” He realized that interposing a camera between himself and the people he encountered had altered the direct experience he and they would otherwise have had.

This weekend I heard Christine Rosen’s modern version of that critique in a discussion of her new book The extinction of experience: Being human in a disembodied world. I listened to the podcast on a hike, my noise-canceling Airpods insulating me from the sounds of the creek trail and from the people walking along it.

It’s complicated. When hiking alone I greatly value the ability to listen to interesting people and ideas while exercising, breathing fresh air, and moving through the natural world. The experience is embodied in one sense, disembodied in another. Reading the same material while lying on the couch would be a different, and arguably more extreme, form of disembodiment. But when I passed a family of four, all walking along looking at their phones, that felt wrong. When people are together they should actually be together, right? You’ve doubtless felt the same when seeing people in this together-but-not-together state.

Lately Pete Buttigieg has been urging us to spend less time online, more time IRL having face-to-face conversations. I think that’s right. There’s no doubt that the decline of social capital described in Robert Putnam’s Bowling Alone has accelerated in the 30 years since he wrote that book. America’s tragic polarization is a predictable outcome. Without the institutions and cultural traditions that once brought us together, face-to-face, in non-political ways, we’re all too vulnerable to being herded into competing online echo chambers that magnify our differences and erase our common humanity.

I won’t be abandoning my mediated and disembodied life online, but I do need to participate in it less and more critically, and prioritize my unmediated and embodied life IRL. The pendulum has swung too far away from the direct experience of shared reality, and that hasn’t been good for me or for my country.

Saturday, 23. November 2024

Heres Tom with the Weather

RIP Toni Price

We made a roadtrip one last time to Wimberley to see her perform at the Devil’s Backbone Tavern in August 2023. There was no one like Toni Price.

We made a roadtrip one last time to Wimberley to see her perform at the Devil’s Backbone Tavern in August 2023.

There was no one like Toni Price.

Thursday, 14. November 2024

Just a Theory

Bye Twitter

I finally deleted all my Twitter accounts, notably @theory. If you see that username, it’s not me.

I quit Twitter in November 2022, when it became clear that Phony Stark would take control of the company. No regrets; I’ve been active on Mastodon since then, via Ivory, where I’ve enjoyed the Twitter ca. 2010 vibe.

But I hadn’t deleted my Tweets or my accounts until this week. The change was finally triggered by the latest announcement:

On Friday, X will adopt a new terms of service policy that for the first time explicitly says all users agree to let their posts be used to train artificial intelligence, like the company’s generative AI service, Grok. Currently, X users can go to the site’s settings and opt out.

I have no interest in feeding Grok or any other large language model. So this week I deleted all my tweets and then my accounts. I tried to transfer my personal username to a new, empty account, but in the second between changing it and switching to another tab to take it, it was somehow acquired by an account created in 2011 with no posts. Not sure how that worked. Pretty scummy.

So, just to be clear, anyone using these usernames on Twitter is not me anymore:

@theory @lunar_theory @DesignScene @pgxn

See you on Mastodon — or perhaps Bluesky.

More about… Twitter Personal

Patrick Breyer

One Year of Saying No to Chat Control

One year ago today, we adopted a cross-party position against indiscriminate chat control in the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE). With that we rejected the insane surveillance plans …

One year ago today, we adopted a cross-party position against indiscriminate chat control in the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE). With that we rejected the insane surveillance plans put forward by the von der Leyen EU Commission.

Alongside the European Parliament, the EU governments in the Council must also adopt a position on the bill. The necessary majority there has already been missed three times. Both the Spanish Council presidency in October 2023 and the Belgian presidency in June 2024 pulled the vote on chat control from the agenda of the Council of the EU at short notice when it became clear the vote would fail – a huge success for civil society.

The European Commission is currently being re-formed. Until now, EU Home Affairs Commissioner Ylva Johansson (S&D, Sweden), the mother of chat control, did everything she could to force her plan through. Now Magnus Brunner (EPP, Austria) is set to succeed her in office and carry the project forward. In his hearing before the European Parliament, he already signaled, in response to a question from Pirate MEP Markéta Gregorová, that he wants to bring the chat control negotiations to a conclusion. A commitment to protection against blanket chat control or to protecting secure encryption never crossed his lips. That is a mistake: the EU Commission must finally say goodbye to mass surveillance. Indiscriminate chat control must be taken off the table!

In the European Parliament, as your MEP for the Pirate Party Germany, I took the lead in the negotiations to prevent chat control and convinced the members of the other political groups in countless consultations. Today, Markéta Gregorová carries the torch of the digital freedom fighters in the European Parliament for the European Pirates and continues the fight against chat control there.

In the Council of the EU, we have so far only been able to prevent totalitarian blanket chat control thanks to the tireless engagement and loud protest of civil society. That was possible thanks to every activist and every individual who reached out to politicians and governments again and again by phone, email, or letter. Thank you for your efforts! We must stay strong and keep fighting every further attempt to push through chat control, an extreme and globally unique threat to the privacy of digital correspondence and to secure encryption!

Chat control topic page: chatkontrolle.de

Monday, 11. November 2024

Hyperonomy Digital Identity Lab

Web 7.0 / TDW Decentralized Identifier (DID Method) Clusters Model 0.1: Specification Development Framework

Copyright (c) 2018-2024 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode To learn how to join the did-methods DIF Working Group, click https://github.com/decentralized-identity/did-methods. 0. Prime Objective/Directive #Chickens, #Eggs, and #Roosters: A #NorthStar for the Global Decent

Copyright (c) 2018-2024 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

To learn how to join the did-methods DIF Working Group, click https://github.com/decentralized-identity/did-methods.

0. Prime Objective/Directive

#Chickens, #Eggs, and #Roosters: A #NorthStar for the Global Decentralized Systems Community (#GDSC)

Byline: #meggDLs, #Seleggtive #Disclosure, #DEGGCOMM, and #Eggports

The entire digital identity ecosystem is missing out on the #BigOpportunity by not focusing on the right catalyst for the #massiveadoption of #digitalcredentials.
Morphing the chicken and egg mental model: If Hens are the Issuers, Roosters the Verifiers, and Eggs are the digital credentials, the prime objective needs to be increasing the demand for and consumption of Eggs by Holders …creating hundreds of thousands of ways that drive more Holders to consume more Eggs. Think about it.

… are great examples of driving the demand for and consumption of more and more digital credentials [and DIDs] (eggs); and secondarily, the demand for hens and roosters (Issuers and Verifiers). The demand for eggs drives the production of hens; and in turn, the demand for roosters. Don’t mess with #MotherNature

Reference: https://www.linkedin.com/posts/mwherman_the-classic-chicken-and-egg-problem-verifiers-activity-7249162946709241856-lxaS?utm_source=share&utm_medium=member_desktop

1. Decentralized Identifiers (DIDs) v1.0 Specification

Reference: https://www.w3.org/TR/did-core/

Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. A DID refers to any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) as determined by the controller of the DID. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities.

DID subject
The entity identified by a DID and described by a DID document. Anything can be a DID subject: person, group, organization, physical thing, digital thing, logical thing, etc.

2. Use Cases and Requirements for Decentralized Identifiers Document

Reference: https://www.w3.org/TR/did-use-cases/#uc

3. Subsidiary Ledgers / Digital Wallets
4. Trusted Digital Web (TDW)
5. LinkedIn Economic Graph

Reference: https://www.youtube.com/watch?v=xwawTzMfHic&list=PLU-rWqHm5p45dzXF2LJZjuNVJrOUR6DaD&index=23

6. Toronto Musician’s Economic Graph
7. #Graphitization: Amazon’s Principles, Core Entities, and Relationships
8. Enterprise Architecture
9. What Happens When You Don’t Think You Need a Taxonomy?
10. NEON NATION / Web 7.0 / TDW Conceptual Architecture
11. Web 7.0 / TDW DID Method Clusters Model 0.1

GitHub: https://github.com/decentralized-identity/did-methods/issues/3

Universal DID Methods

DID methods suitable for interacting with Every Little Thing (#ELT) on the planet/in the universe:

did:object (http://didobject.directory)
did:ns (http://didns.directory)
did:web7

Web 7.0/TDW DID Method Clusters Model Taxonomy 0.1

A bold method is the model method or exemplar for the particular cluster (cell). A method can be an exemplar for 1 or many clusters. This list of DID method categories is just an example. A complete taxonomy will likely be a 2-3 level hierarchy. The parent categories for these examples might include: Live Things, Inanimate Things, Abstract Things, Digital Things, Business Things, etc. More than one exemplar per cluster is permitted.

12. Process Framework: Sociocracy

Reference: https://github.com/decentralized-identity/did-methods/issues/4

In Sociocracy terminology, a mini-WG is called a circle. Each category of DID methods (cluster of DID Methods) would be managed by its own independent circle. A circle member can belong to more than 1 circle. Circles are connected to a parent circle for administrative purposes. The parent circle would correspond to the DID Method WG (co-chaired by Markus).

Sociocracy: https://www.sociocracyforall.org/content/

Sociocracy combines consent decision-making, a decentralized system of authority and intentional processes to improve our decisions and processes over time into a governance system that supports effective and efficient process while increasing connection, listening and co-creation among members.

Sociocracy is used in businesses, communities, nonprofits, cooperatives, grassroots groups and in education. 

13. Trusted Digital Web (TDW) Glossary/Taxonomy Model: Erin Buys a Car Neighborhood
14. Questions

Friday, 08. November 2024

Heres Tom with the Weather

Webfinger in the Wild

Today, a post in my feed included a mention and its webfinger verification threw a WebFinger::BadRequest exception: Nov 08 09:18:49 AM WebFinger::BadRequest (Bad Request): Nov 08 09:18:49 AM Nov 08 09:18:49 AM app/models/account.rb:79:in `fetch_and_create_mastodon_account' Nov 08 09:18:49 AM app/models/account.rb:367:in `block in create_status!' Nov 08 09:18:49 AM app/models/account.rb:36

Today, a post in my feed included a mention and its webfinger verification threw a WebFinger::BadRequest exception:

Nov 08 09:18:49 AM WebFinger::BadRequest (Bad Request):
Nov 08 09:18:49 AM
Nov 08 09:18:49 AM app/models/account.rb:79:in `fetch_and_create_mastodon_account'
Nov 08 09:18:49 AM app/models/account.rb:367:in `block in create_status!'
Nov 08 09:18:49 AM app/models/account.rb:364:in `each'
Nov 08 09:18:49 AM app/models/account.rb:364:in `create_status!'
Nov 08 09:18:49 AM app/lib/activity_pub/activity/create.rb:20:in `perform'
Nov 08 09:18:49 AM app/controllers/accounts_controller.rb:148:in `process_item'
Nov 08 09:18:49 AM app/controllers/accounts_controller.rb:75:in `inbox'

The activitypub actor document resided on mastodon.well.com but when a reverse discovery was performed, the hostname of the subject in the webfinger response was well.com instead of mastodon.well.com. Making a webfinger request to well.com for the mentioned user returned a 500 Internal Server Error so a WebFinger::BadRequest exception was thrown. What was going on?
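For context, the reverse-discovery step that failed works roughly like this (an illustrative Python sketch of the WebFinger flow, not the app’s actual Ruby code; the account name is hypothetical):

import json
import urllib.parse
import urllib.request

def webfinger_subject_host(acct_uri, host):
    # Ask the host's WebFinger endpoint about the account and return the
    # host part of the subject it reports, e.g. "well.com".
    query = urllib.parse.urlencode({"resource": acct_uri})
    url = f"https://{host}/.well-known/webfinger?{query}"
    with urllib.request.urlopen(url) as response:
        subject = json.load(response)["subject"]
    return subject.split("@", 1)[1]

# The actor document lived on mastodon.well.com, but the subject came back with
# host well.com, so the follow-up WebFinger request went to well.com, which
# answered with a 500 and triggered the BadRequest exception.
# webfinger_subject_host("acct:someone@mastodon.well.com", "mastodon.well.com")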

Fortunately, an issue in the activitypub-webfinger had the answer:

Looks like some are using this host-meta redirect to use a custom domain for actors which is different to the actual domain of the server.

And that is what was happening:

curl https://mastodon.well.com/.well-known/host-meta
<?xml version="1.0" encoding="UTF-8"?>
<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
  <Link rel="lrdd" template="https://mastodon.well.com/.well-known/webfinger?resource={uri}"/>
</XRD>

A response in the issue notes

The use of host-meta as a “second layer of indirection” is something that mostly a holdover from the OStatus days, IIRC. Most projects that aren’t Mastodon or Pleroma will not check host-meta at all, and will instead always skip straight to the /.well-known/webfinger endpoint. I don’t think it makes sense to unnecessarily pressure everyone into adopting host-meta or supporting variable LRDD endpoints

I can’t argue with that so I just handled the exception without setting the custom domain.


Phil Windleys Technometria

Internet Identity Workshop XXXIX Report

The 39th edition of the Internet Identity Workshop wrapped up last week.

The 39th edition of the Internet Identity Workshop wrapped up last week. We have 364 attendees from around the world who called 178 sessions. I can't begin to describe the energy of the identity community when they all get together to discuss their favorite topics. If you like identity, or simply want to be part of an enthusiastic technical community, you should come to IIW.

As you can see by the pins in the map at the top of this post, there were attendees from all over the world. Not surprisingly, most of the attendees were from the US (251), followed by Canada (18) and France (14). Germany, Japan, and Australia rounded out the top six with 12, 9, and 7 attendees respectively. Attendees from India (5), Colombia (3), and Chile (2) show IIW's geographic diversity. Sadly, there were no attendees from Africa again. Please remember we offer scholarships for people from underrepresented areas, so if you'd like to come to IIW40, please let us know. If you're working on identity, we want you there.

For states and provinces, California was first with 131 attendees. Washington (19), Utah (14), New York (9), and Massachusetts (9) made up the rest of the top 5. San Jose (20), San Francisco (16), Paris (12), Oakland (11), and Seattle (9) were the top five cities.

We'll have the book of proceedings out in a month or so with notes from the different sessions and descriptions of the 20 demos given during demo hour. Past proceedings are available here.

The next IIW takes place April 8-10, 2025 at the Computer History Museum. This will be IIW XL, number 40! We'll have registration open the first part of December. If you're interested in sponsoring, send me a note.


Wednesday, 06. November 2024

Heres Tom with the Weather

Indieweb Meetup Austin today

After some time off IndieWeb Meetup Austin is back at Mozart’s Coffee tonight at 7pm. I guess a few things have transpired since the last meeting so I’m looking forward to learning about what’s going on again.

After some time off IndieWeb Meetup Austin is back at Mozart’s Coffee tonight at 7pm. I guess a few things have transpired since the last meeting so I’m looking forward to learning about what’s going on again.

Monday, 04. November 2024

Just a Theory

Introducing RFC 9535 Go JSONPath and Playground

I’ve made an RFC 9535 JSONPath Go package, and compiled it into Wasm to build an in-browser playground for it.

I’ve written and released an RFC 9535 JSONPath Go package, github.com/theory/jsonpath. Why? For a personal project, I needed a simpler JSONPath engine to complement the Go SQL/JSON Path package, and quickly found myself implementing most of the RFC. So I decided to do the whole thing.

Yes, yet another JSONPath package in Go. I really appreciate the idea of a standard — plus its support for features not included in the original design from 2007, such as object slices! But I could find no reference to the RFC on pkg.go.dev. Today the search shows one!

Example

Usage is straightforward; here’s a quick example (Go playground):

package main

import (
    "fmt"
    "log"

    "github.com/theory/jsonpath"
)

func main() {
    // Parse a jsonpath query.
    p, err := jsonpath.Parse(`$["name", "slogan"]`)
    if err != nil {
        log.Fatal(err)
    }

    // Select values from unmarshaled JSON input.
    json := map[string]any{
        "name":   "Kamala Harris",
        "title":  "Vice President of the United States",
        "home":   "California",
        "slogan": "We are not going back!",
    }
    items := p.Select(json)

    // Show the result.
    fmt.Printf("%#v\n", items)
}

And the output:

[]interface {}{"Kamala Harris", "We are not going back!"}

🛝 Playground

No need to write code to try it out, though. I’ve also written a playground webapp to encourage experimentation and exploration of the syntax and behavior of the package. The implementation follows the precedents set by the Go SQL/JSON Playground and design of the Rust JSONPath Playground. Moreover, thanks to TinyGo, the Wasm file comes in at a mere 254K!

The webapp loads sample JSON from the RFC, and randomly rotates through a few example JSONPath queries. Fill in your own and tap the “Permalink” button to share links. The Playground is a stateless JavaScript/Wasm web application: data persists only in permalink URLs.1

🛝 Try this example Playground permalink right now!

Status

The root jsonpath package is stable and ready for use. Other packages remain in flux, as I refactor and rejigger things in the coming weeks as part of the aforementioned personal project. But for actual JSONPath execution and querying, it should continue to work as-is for the foreseeable future.

I hope you find it useful.

And whatever data [GitHub Pages collect] 😔. ↩︎

More about… Go JSONPath Playground RFC 9535

NYTimes Tech Guild on Strike

My former colleagues in the New York Times Tech Guild have gone on strike in response to a number of unfair labor practice violations and management’s inability to bargain in good faith.

This is a big deal. My former colleagues in the New York Times Tech Guild have declared a strike today in response to a number of unfair labor practice violations and management’s inability to bargain in good faith. From the Washington Post:

Represented by the NewsGuild of New York, the Tech Guild has been negotiating its first contract with the company since 2022. Both sides have accused the other side of bogging down contract talks, which have recently focused on three key issues: “just cause” job protections (which ensure employees can’t be fired without reason and due process), remote work and pay equity. Unit members voted overwhelmingly in September to authorize the strike, saying the timing of the authorization — in the run-up to the high-profile election period — was “no accident.”

I urge you to join me in supporting the strike fund, which will help more vulnerable members to weather the strike.

Beyond that, the Guild asks supporters to refrain from interacting with Cooking and Games, and to break your streaks. Times reporter Maggie Astor has the details:

As of this morning, the New York Times Tech Guild, which represents NYT tech workers, is on strike.

Unless it’s resolved while today’s Wordle is still live, this streak is gone.

NYT Games and Cooking are BEHIND THE PICKET LINE. Please don’t play or engage with Games or Cooking content while the strike lasts!

News coverage is NOT behind the picket line. It’s okay to read and share that, though the site and app may have problems.

Read the whole thread for more information; as she says, “the rules may not be what you think!”

More about… New York Times Strike News Guild New York Times Tech Guild

Michael Ruminer

Agents Craft Hackathon and Inspiration Block

When you want to do a hackathon but are missing the perfect idea. Compliments of Flux.1 Schnell You likely know the saying, “Don’t let the perfect be the enemy of the good/done.” Sometimes, I have a hard time living that, and an existing effort is one example. I have registered for an AI agent hackathon, Agents Craft Hackathon. I don’t plan to be competitive but I doubt that is the goal o
When you want to do a hackathon but are missing the perfect idea. Compliments of Flux.1 Schnell

You likely know the saying, “Don’t let the perfect be the enemy of the good/done.” Sometimes, I have a hard time living by that, and an existing effort is one example. I have registered for an AI agent hackathon, Agents Craft Hackathon. I don’t plan to be competitive, but I doubt that is the goal of most participants. I do want a good idea that makes a great example and is a pseudo-novel idea that implements an agent usage not in the GenAI_Agents GitHub repo. Therein lies my problem: a unique and interesting use case and example. I’m inspiration-blocked. I know that I am letting the perfect be the enemy of the good, or even the enemy of the fun, because there is a Discord channel with participant-created ideas for teams to work on. I want it simple yet inspired. I’m not inspired by any of the existing team projects; nothing against the projects, as this is a me problem. I have a few more days to think about it.

Maybe you should join the hackathon. Your idea might be inspiring.


Aaron Parecki

My IETF 121 Agenda

Here's where you can find me at IETF 121 in Dublin!

Here's where you can find me at IETF 121 in Dublin!

Monday
9:30 - 11:30 • oauth
15:30 - 17:00 • alldispatch

Tuesday
9:30 - 11:30 • oauth
13:00 - 14:30 • spice
16:30 - 17:30 • scim

Thursday
9:30 - 11:30 • oauth

Get in Touch
LinkedIn
GitHub
Mastodon
aaron@parecki.com

My Current Drafts
OAuth 2.1
OAuth for Browser-Based Apps
OAuth Protected Resource Metadata
OAuth for First-Party Apps
OAuth Client ID Metadata Document
Identity Assertion Authorization Grant
OAuth Global Token Revocation

Saturday, 02. November 2024

Jon Udell

How To Create Software Diagrams With ChatGPT and Claude

Earlier efforts to diagram software with LLM assistance weren’t fruitful, but this time around things went really well. I ended up with exactly what I needed to explain the architecture of a browser extension, and along the way I learned a lot about a couple of formats — Mermaid and Graphviz — as well as … Continue reading How To Create Software Diagrams With ChatGPT and Claude

Earlier efforts to diagram software with LLM assistance weren’t fruitful, but this time around things went really well. I ended up with exactly what I needed to explain the architecture of a browser extension, and along the way I learned a lot about a couple of formats — Mermaid and Graphviz — as well as their tool ecosystems.

How To Create Software Diagrams With ChatGPT and Claude

LLM series at The New Stack

Friday, 01. November 2024

Michael Ruminer

VSCode and Debugging Python in Virtual Environments

AI generated image of a beetle This post is really for me more than anyone else. I will be able to come back here and find it. If it helps you as well, then great. I have historically not been a Python programmer. Additionally, I have historically used Visual Studio versus VSCode. Lately, I have been doing Python in VSCode. I need to learn my way around. I recently found how to deb
AI generated image of a beetle

This post is really for me more than anyone else. I will be able to come back here and find it. If it helps you as well, then great.

I have historically not been a Python programmer. Additionally, I have historically used Visual Studio versus VSCode. Lately, I have been doing Python in VSCode. I need to learn my way around.

I recently found out how to debug Python with breakpoints and watches in a virtual environment, or at least in the default “-m venv” virtual environment, much as I would in a different language in Visual Studio.

You can have the debug window create a launch.json in the .vscode folder of the root workspace directory or create it yourself from scratch. Here is one of my modified ones.

{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: crewai series day_04",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"env": {
"PYTHONPATH": "${workspaceFolder}\\crewai_series"
},
"cwd": "${workspaceFolder}\\crewai_series\\day_04\\src"
},
{
"name": "Python: crewai series day_05",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"env": {
"PYTHONPATH": "${workspaceFolder}\\crewai_series"
},
"cwd": "${workspaceFolder}\\crewai_series\\day_05\\src"
}
]
}

The real trick is to set that PYTHONPATH property to your folder with the virtual environment folder as a top-level subdirectory.
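One quick sanity check (my suggestion, not from the original post): run this in the Debug Console, or at the top of the script being debugged, to confirm the PYTHONPATH entry from launch.json actually landed on the import path.

import sys

# The folder configured under "env" -> "PYTHONPATH" should appear here;
# if it does not, imports that rely on it will fail under the debugger.
print([p for p in sys.path if "crewai_series" in p])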

Bonus: you can set different debug profiles, like I did above, and shift between them in the debug window. That is helpful when you have different virtual environments in the same workspace.

That’s it. That’s the post.

Wednesday, 30. October 2024

Mike Jones: self-issued

OpenID Presentations at October 2024 OpenID Workshop and IIW plus New Specifications

I gave the following presentation on work in the OpenID Connect working group at the Monday, October 28, 2024 OpenID Workshop at Microsoft: OpenID Connect Working Group Update (PowerPoint) (PDF) I also gave this invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, October 29, 2024: Introduction to OpenID Connect (PowerPoint) (PDF) […]

I gave the following presentation on work in the OpenID Connect working group at the Monday, October 28, 2024 OpenID Workshop at Microsoft:

OpenID Connect Working Group Update (PowerPoint) (PDF)

I also gave this invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, October 29, 2024:

Introduction to OpenID Connect (PowerPoint) (PDF)

There’s more happening in the OpenID Connect working group than at any other time since we started the OpenID Connect work. In fact, two new specifications were adopted today!

OpenID Federation Wallet Architectures 1.0
OpenID Connect Relying Party Metadata Choices 1.0

Thanks to all who helped us get there!

Tuesday, 29. October 2024

Just a Theory

PGConf & Extension Ecosystem Summit EU 2024

Notes and links from the Extension Ecosystem Summit EU 2024 and my first time at PGConf EU. Plus thoughts and photos from ancient sites and archeological museums I visited.

Last week I MCed the first Extension Ecosystem Summit EU and attended my first PGConf EU in Athens, Greece. Despite my former career as an archaeologist — with a focus on Mediterranean cultures, no less! — this was my first visit to Greece. My favorite moment was the evening after the Summit, when I cut out of a networking shindig to walk to Pláka and then circumnavigate the Acropolis. I mean just look at this place!

The Acropolis of Athens on the evening of October 22, 2024. © 2024 David E. Wheeler

Highlight of the trip for sure. But the Summit and conference were terrific, as well.

Extension Ecosystem Summit

Floor Drees kindly organized The Extension Ecosystem Summit EU, the follow-up to the PGConf.dev original. While the Vancouver Summit focused on developers, we tailored this iteration to users. I started the gathering with a condensed version of my POSETTE talk, “State of the Postgres Extension Ecosystem”, but updated with a Trunk OCI Distribution demo. Links:

PDF
PDF with Notes
Keynote

We then moved into a lightning round of 10 minute introductions to a variety of extensions:

Jimmy Angelakos showed off pg_statviz, a “minimalist extension and utility pair for time series analysis and visualization of PostgreSQL internal statistics”.
Adam Hendel gave a brief history of pgmq, a “lightweight message queue like AWS SQS and RSMQ, but on Postgres.”
Gülçin Yıldırım Jelínek introduced pgzx, “a library for developing PostgreSQL extensions written in Zig.”
James Sewell talked about pgvectorscale and pgai for managing machine training workloads on Postgres.
Alastair Turner described pg_tde, an extension that provides transparent data encryption for Postgres.

Quite the whirlwind! There followed open discussion, in which each maintainer went to a corner to talk to attendees about contributing to their extensions. Details to come in a more thorough writeup on the Tembo blog, but I personally enjoyed some fascinating discussions about extension distribution challenges.

PGConf.eu

Following the Summit, I attended several thought-provoking and provocative presentations at PGConf.eu, which took place at the same hotel, conveniently enough.

Floor Drees speaking at PGConf.eu 2024. © 2024 David E. Wheeler

In the conference keynote, Stacey Haysler explained The PostgreSQL License Fee. I’m pleased to say that my employer “pays” the license fee!
Andres Freund (yes, that one) summarized NUMA vs PostgreSQL, covering some of the issues and opportunities for optimization of PostgreSQL on servers using the NUMA multi-processor memory architecture.
Heikki Linnakangas offered an overview of The Wire Protocol, that bit of PostgreSQL technology that lets clients talk to PostgreSQL.
Ophir Lojkine and Thomas Guillemard showed how a pretty sweet framework, SQLPage, lets archaeologists like Thomas write complete web applications in pure SQL, enabling rapid data entry while in the field.
Tomas Vondra’s Performance Archaeology dug into the history of PostgreSQL improvements, mapping their compounding returns over time.
Floor Drees (photo above) talked about Supporting extensions, but really now, making a strong case for the need to organize infrastructure to support the broader extension ecosystem.

There were many more talks, but clearly I tend to be drawn to the most technical, core-oriented topics. And also archaeology.

Museums

Speaking of which, I made time to visit two museums while in Athens. First up was the National Archaeological Museum of Athens, where I was delighted to explore the biggest collection of Mycenaean artifacts I’ve ever seen, including massive collections from the excavations of Heinrich Schliemann. So much great Bronze Age stuff here. I mean, just look at this absolute unit:

From the museum description: “Fragment of a krater depicting a chariot with two occupants. A male figure holding a staff walks in front of the chariot. Much of the Mycenaean Pictorial Style pottery (14th-12th centuries BC) with representations of humans, chariots, horses and bulls on large kraters, was produced at Berbati in the Argolid and exported to Cyprus, where it was widely imitated. Birds, fish, wild goats or imaginary creatures (i.e. sphinxes) occur on other types of vessels, such as jugs and stirrup jars. Usually only fragments of these vases survive in mainland Greece from settlement contexts. In Cyprus, however, complete vases are preserved, placed as grave gifts in tombs.” © Photo 2024 David E. Wheeler

The animal decorations on Mycenaean and Akrotiri pottery are simply delightful. I also enjoyed the Hellenistic stuff, and seeing the famed Antikythera Mechanism filled my nerd heart with joy. A good 3 hours poking around; I’ll have to go back and spend a few days there sometime. Thanks to my pal Evan Stanton for gamely wandering around this fantastic museum with me.

Immediately after the PGConf.eu closing session, I dashed off to the Acropolis Museum, which stays open till 10 on Fridays. Built in 2009, this modern concrete-and-glass building exhibits several millennia of artifacts and sculpture exclusively excavated from the Acropolis or preserved from its building façades. No photography allowed, alas, but I snapped this photo looking out on the Acropolis from the top floor.

The Acropolis as viewed from inside the Acropolis Museum. Friezes preserved from the Parthenon inside the museum reflect in the glass, as does, yes, your humble photographer. © 2024 David E. Wheeler

I was struck by the beauty and effectiveness of the displays. It easily puts the lie to the assertion that the Elgin Marbles must remain in the British Museum to protect them. I saw quite a few references to the stolen sculptures, particularly empty spots and artfully sloppy casts from the originals, but the building itself makes the strongest case that the marbles should be returned.

But even without them there remains a ton of beautiful sculpture to see. Highly recommended!

Back to Work

Now that my sojourn in Athens has ended, I’m afraid I must return to work. I mean, the event was work, too; I talked to a slew of people about a number of projects in flight. More on those soon.

More about… Postgres Extensions PGConf Summit Archaeology Athens Greece Acropolis Mycenae Elgin Marbles

Monday, 28. October 2024

Phil Windleys Technometria

Is Voting Secure?

I'm fairly certain that what I write here won't change the minds that need changing. But I feel like I need to make a statement anyway: your vote is secure.

There's a lot of fear mongering about the security of elections. I've wanted to discuss this for a while. I have several things in my background that have given me insight into how elections work. I was the CIO for the State of Utah. I was a member of the Lt Governor of Utah's voting equipment selection committee. And I've been involved in identity and security for several decades.

Let me give you the headline up front: committing election fraud in a way that changes the result is difficult, nearly impossible. Let's examine how elections are run and how fraud could happen to see why.

First a caveat: there is no single way that US elections are run. Elections in the US are quite decentralized. Each state has different election laws and in most cases the mechanics of running an election are given over to county clerks who must follow state law, but also have freedom to create their own workflows and processes within that law. The analysis that follows is generalized and likely more true of Utah, which I'm very familiar with, than other places. Still, I think the big ideas are largely the same everywhere.

The process of voting is divided into two parts: (1) voter registration and (2) voting. This is important because most people who make up scenarios to convince you that voting is insecure usually ignore voter registration. Registration requires that you provide an address. This is an important piece of information because if you're voting by mail, it's where the ballot will be mailed. If you're voting in person, you need to vote at a specific polling place depending on your address.

When you vote, you either mail back the ballot that was mailed to you at the address you provided or you go to your polling place and fill out a ballot (usually via a voting machine). In either case, the ballot presented to you depends on your address since the candidates listed on your ballot depend on your voting precinct. Also, as of 2024, 35 states require voters to present identification at the polling place in order to vote. Of those that don't, many require it for voters who are voting for the first time after their registration.

Now, let's examine voting fraud and how it might work. One important factor is scale. You need to commit fraud at a scale necessary to impact the outcome. For small elections (say a single state legislative race or a small school board election) you don't need to change a lot of votes to change the outcome in a tight race—hundreds of votes might do it. For larger elections, like the presidential election, scale is a significant issue. I'm going to focus on presidential elections since they are the most consequential. Less consequential elections are not likely to attract the kind of money and talent necessary to commit election fraud.

A second factor is stealth. You have to keep the fraud from being discovered so that it's not reversed. Proving consequential fraud would likely result in the election being challenged and rerun. You don't have to identify who did it, just prove that it was done. So election fraud is much more dependent on not being discovered than commercial transaction fraud where the loss is likely to only be recovered if the identity of the perpetrator is known.

The nature of presidential elections is greatly influenced by the electoral college system. You need to influence the votes in enough states to swing that state's electoral votes to the candidate you favor. You don't want to commit fraud where it's not needed because you'll waste money while increasing your chances of discovery. So, selecting the states where you want to commit fraud is critical. Each of those states will have different requirements, so you'll have to tailor your attack to each of them. Furthermore, you'll have to tailor your attack to each legislative precinct within the counties you determine are the most likely to impact the election.

There are a few ways to attack an election:

Sending your people to vote—for this to work, your fake voters have to have been registered and, in most cases, provide some form of ID. To register, they need a plausible address. The election office might not notice if one or two extra people with different last names are registered at a specific address, but they might if this is systematic or if an unreasonable number of people register at the same address. Remember that elections are run at the county level, so you have to assume that the election workers have a good understanding of the local environment. These fake voters now have to go to many different polling locations and cast a vote. They can't easily cast multiple ballots at the same polling location since the poll workers might remember them. So, you need lots of people going to lots of different polling locations.

Intercepting mail-in ballots—for this to work, you have to register at someone else's home address and then get to the mail before they do or steal their ballots after they've filled them in and change the vote. This requires lots of people. You can't do this remotely. It requires "boots on the ground" as the saying goes. Furthermore, those people are exposed since they're looking in mailboxes in neighborhoods where they don't live. Doable, but not very stealthy.

Paying people to vote—for this to work, you have to contact a lot of people, convince them to commit fraud, and then be satisfied with the fact that you'll never know if they voted for your candidate or not because ballots are secret. They could take your money and vote for whoever they want. Or just not vote at all unless you're supervising them, an activity that will call attention to you and your agents.

Replacing real ballots with fake ones—for this to work, you have to get your hands on real ballots for many different polling places (remember they're different because of overlapping jurisdictions), intercept the ballots somewhere in transit or on delivery, and replace the real ballots with ones that you've filled out for your candidate. This likely involves subverting county election workers. Not just one, but many. Again, the risk of discovery goes up with each contact.

Destroying ballots—for this to work, you need to destroy ballots that are for the candidate you don't want to win. You could simply destroy ballots without regard to how they're filled, but this won't assure you'll meet your goal. To be effective, you have to just destroy the ones for the other candidate and leave the ones for your candidate. Again, you will have to subvert election workers to get your hands on the ballots and determine who the ballot is for.

Changing the results after the ballots are counted—for this to work, you have to either hack the machines that record the vote or hack the machines that are tabulating the vote. Hacking the machines won't work if the machines keep a paper audit trail and it's used to audit results. Hacking the tabulators means getting access to those machines. Recall those are kept at the county level, so you have to hack many in different locations unless a single county can swing the election your way.

I hope all of this has at least given you a feel for the scale and scope of the problem. Pulling it off successfully without anyone knowing it happened is a difficult problem. Each method involves many people being let in on the secret—in some cases a lot of people. This isn't an operation that a small group of hackers can reliably pull off. Having lots of people involved increases the chances that you'll be discovered. The decentralized and distributed nature of how elections are run is a feature, and makes elections more secure and trustworthy.

On top of all this, election officials aren't stupid, lazy, or inept. Sure, you're going to find a few who are. But as a rule the elections officials I've interacted with at the state and county level are professionals who are attuned to these dangers and take active steps to protect against them. They are, as a rule, happy to talk about how they operate and will respond to polite requests for information about how they audit systems and the processes they have in place to protect the vote.

As an aside, do you know what's easier than committing election fraud? Using social media to convince people that election fraud is happening to reduce confidence in the election and sow discontent. Then you can use that discontent to challenge a legitimate election and maybe change the outcome if you don't like it. Ask yourself which is more likely.

Successfully changing the results of a presidential election isn't impossible. But the odds of doing so and not leaving any evidence—the perfect crime—are vanishingly small. I have confidence in the security of the US election system.

Photo Credit: Voting from DALL-E (public domain) Prompt: Draw a horizontal picture of a woman casting a ballot with a ballot box



Michael Ruminer

CrewAI, Simple Enough but It Once Made 100 API Calls Instead of 1

AI generated crew image. Note that some folks are backwards. :-)

I continued my experimentation with CrewAI this weekend. To see the code, take a look at the following repo and path: Starter CrewAI Series and the day_04 folder specifically. In the day_04 package, I created a simple custom tool, one that uses Tavily. The two-agent, two-task package queries for news about CrewAI and then processes that information to create a markdown report on the latest CrewAI news. I learned a few things beyond the fact that CrewAI just raised $18 million (US).
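For context, a Tavily-backed custom tool in CrewAI can look roughly like the sketch below. This is a minimal illustration rather than the exact code in the repo; the class name, description, and search parameters are my own assumptions.

import os

from crewai_tools import BaseTool
from tavily import TavilyClient


class TavilyNewsSearchTool(BaseTool):
    # The name and description are what an agent sees when deciding to use the tool.
    name: str = "Tavily news search"
    description: str = "Searches Tavily for recent news on a topic and returns the raw results."

    def _run(self, query: str) -> str:
        # Requires a TAVILY_API_KEY environment variable.
        client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
        results = client.search(query=query, topic="news", max_results=5)
        return str(results)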

How it Works

It uses decorators on the classes and functions. I love decorators. I always have. I filed a patent once that used decorators in C# as a part of a solution for business rule traceability. But back to CrewAI. You’ll see code snippets like:

@CrewBase
class Day04Crew():

    @agent
    def researcher(self) -> Agent:
        ...

    @task
    def research_task(self) -> Task:
        ...

To define the agent behavior, you use YAML like the below:

researcher:
  role: >
    Senior Data Researcher on {topic}
  goal: >
    Find recent most relevant news on {topic} and limit your response to {limit} results
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments on {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

To define the task you use YAML like the below:

research_task:
  description: >
    Search news about {topic}
  expected_output: >
    A list of news articles about {topic} with the title, url, and content
  agent: researcher

If I were to experiment further, I'd try turning the research task description into a more sophisticated prompt, but this one returned decent results.

And that is largely it for a simple enough example. Aside from the calls that actually assemble the crew and run the agents, I was off to the races; a sketch of that wiring follows.
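For readers who haven't wired up a CrewAI project before, the remaining pieces look roughly like this. It's a minimal sketch assuming the agent and task names from the YAML above; the second (reporting) agent and task are omitted, and the exact code in the day_04 folder may differ.

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class Day04Crew():
    """Two-agent, two-task crew (only the researcher half is shown here)."""

    @agent
    def researcher(self) -> Agent:
        # Pulls role/goal/backstory from the agents YAML file.
        return Agent(config=self.agents_config['researcher'], verbose=True)

    @task
    def research_task(self) -> Task:
        # Pulls description/expected_output/agent from the tasks YAML file.
        return Task(config=self.tasks_config['research_task'])

    @crew
    def crew(self) -> Crew:
        # The @CrewBase decorator collects the decorated agents and tasks.
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)


# {topic} and {limit} in the YAML are filled in from the kickoff inputs.
result = Day04Crew().crew().kickoff(inputs={"topic": "CrewAI", "limit": 5})
print(result)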

What I Found

First off, I found that I need to do some discovery of CrewAI's capabilities for increased logging or traceability. Even with agent verbosity turned on, it was too much of a black box for me. Maybe I didn't look closely enough at the verbose output, but it seemed a bit too superficial. I want to know exactly what was passed to the LLM and when, what its response was and when, which LLM endpoints were called, etc. I think some of that can be found using LangTrace or CrewAI AgentOps. I'll almost certainly try that soon.

I also found that one time it got stuck in what I assume was some sort of loop. I can't be certain exactly where, as I didn't have any real logging or traceability (black box), but it was running far too long on just the first agent and task. I had to cancel out, and when I checked my Tavily usage it showed 100 API calls for that run versus the expected 1. That was very disconcerting. All other runs with the unmodified code made only the expected 1 API call to Tavily.

The report output was what I was hoping for, but that has more to do with the LLM and Tavily results than with CrewAI.

I did notice that each task can have only one agent. That makes sense, I think. I would like to try a setup where an agent has multiple tasks and has to choose the appropriate task for its job, and also a scenario where an agent might call a task multiple times with slightly different input to get a more nuanced or expanded context for its actions. I don't currently have an example use case for the latter. Give me some time or recommend one below. In these scenarios, traceability becomes even more important, and limits on task calling or tool usage are probably needed.

Final Thoughts

CrewAI covered the simple use case I wanted to try, though it left me wanting more visibility into what it was doing. The implementation in my limited use case was easy. It was slow, but I don't know where it was slow, because I didn't have any instrumentation to see where it was spending its time; it might have been in the LLM and/or Tavily. All in all, I plan to experiment more with a hierarchical structure and with some attempts at observability and traceability. I wish I could say more and provide greater depth than what you can probably discern quickly from the documentation, but for now this is what I have. The next question is whether I'll try the same in LangChain to compare or dig deeper into CrewAI first.

Saturday, 26. October 2024

Virtual Democracy

Earth Data Rescue 2025

2016 was a different election; 2024 is so much more important. The election of a climate-change-denying US president in 2016 gave rise to the potential for administrative policies that would obstruct the public's use of government-held Earth data. Very soon after taking office, the Trump administration instructed that government websites be scrubbed of climate … Continue reading Earth Data Rescue 2025

Friday, 25. October 2024

Jon Udell

What Claude and ChatGPT can see on your screen


“If you work with these cloud platforms every day, you have doubtless forgotten that you ever had questions like these. But every newcomer does. And on a continuing basis, we are all newcomers to various aspects of applications and services. In so many ways, the experience boils down to: I am here, what do I do now?

It’s nice if you can share your screen with someone who has walked that path before you, but that’s often impossible or infeasible. LLMs synthesize what others have learned walking the path. We typically use words to search that body of hard-won knowledge. Searching with images can be a powerful complementary mode.”

What ChatGPT and Claude can see on your screen

Part of the LLM series at The New Stack.

Thursday, 24. October 2024

Heres Tom with the Weather

CPJ Head Condemns Israel's Deadly War on Journalists


Jodie Ginsberg, CPJ’s chief executive officer:

No journalist from outside Gaza has been allowed in since the start of that war, and that’s highly unusual. I speak to lots of war correspondents who’s covered many, many wars over decades, and all of them talk about how unprecedented this is to not have any access whatsoever. And that, of course, puts additional pressure on these journalists.

CPJ Head Condemns Israel’s Deadly War on Journalists in Gaza as IDF Threatens Al Jazeera Reporters

In the interview, Amy Goodman also mentions “James McGovern leading 64 other congressmembers in a letter to Biden and Blinken, urging them to push for Israel to allow in international journalists”

Monday, 21. October 2024

Michael Ruminer

Using LM Studio and CrewAI with Llama


I was recently experimenting with CrewAI, but in one experiment I wanted to use a local LLM hosted by LM Studio. Why LMStudio? Well… because that was the experiment. I wanted to see how I’d use LMStudio to host my model versus Ollama and then use it from CrewAI. Below is a very simple setup for doing this.

It’s good to point out that CrewAI uses LiteLLM under the hood as a sort of proxy. This provides CrewAI with the ability to talk to a range of LLMs without needing to really do much in terms of handling it.

What Doesn’t Work and Why

The first thing I found is what doesn’t work. The LLM class in CrewAI allows for the instantiation of an LLM that can then be used by agents. Spinning up an LLM instance on a local Ollama hosted model can look like below.

from crewai import LLM

# Llama 3.1 served locally by Ollama on its default port
ollama_31_8b = LLM(
    model="ollama/llama3.1",
    base_url="http://localhost:11434"
)

This works just fine if hosting the LLM inside Ollama, but you get response exceptions if, instead, you try to host inside LMStudio at the same server port.

ERROR: LiteLLM call failed: litellm.APIConnectionError: 'response'

First, you notice that the base_url doesn't have a "/v1" at the end, which LM Studio uses in its server setup. If you fix that, thinking it might work, you'll find that you likely get the same error.

Secondly, you may realize that the model property in your LLM instantiation uses a [PROVIDER]/[MODEL] format. I tried removing the provider portion to see what would happen. The results were:

llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided.

That’s a reasonable outcome.

What Does Work

Lastly, I remembered that LM Studio serves OpenAI-compatible endpoints.

A quick look at the LiteLLM docs provided the answer I needed; set the provider as “openai”. This results in a final outcome of:

ollama_31_8b = LLM(model="openai/llama3.1", base_url="http://localhost:11434/v1")

Now, if you try running it with an agent that uses the instantiated LLM, it will work. Below is example code of the LLM and agent creation, where I had the Llama 3.1 8B model hosted in LM Studio on port 11434.

@CrewBase
class MyCrew():

    llama_31_8b = LLM(
        model="openai/llama3.1",
        base_url="http://localhost:11434/v1"
    )

    @agent
    def joke_generator(self) -> Agent:
        return Agent(
            config=self.agents_config['joke_generator'],
            verbose=True,
            llm=self.llama_31_8b
        )

Note

Note that on LMStudio I had my server port set to 11434 versus the default of 1234. It made it easier as I switched back and forth between Ollama and LM Studio; I didn’t need to modify the port. 11434 is the default Ollama port.
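As a quick sanity check that LM Studio's OpenAI-compatible server is listening where CrewAI expects it, you can hit the models endpoint. This is a small sketch assuming the non-default port 11434 described above.

import requests

# Lists the models LM Studio is currently serving via its OpenAI-compatible API.
resp = requests.get("http://localhost:11434/v1/models")
print(resp.json())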

When Might I Use This

When might I use this? If I am programming, probably rarely. I could instead host the model in Ollama. I’d use LM Studio if I want to host a model and chat with it. In that scenario, I’d probably be more likely to use Ollama with AnythingLLM which would also provide me with some Retrieval-Augmented Generation (RAG) capabilities. Nonetheless, it was an experiment and I proved, for myself, it could easily be done.

Thursday, 17. October 2024

Mike Jones: self-issued

OAuth 2.0 Protected Resource Metadata Specification in RFC Editor Queue


I’m pleased to report that the “OAuth 2.0 Protected Resource Metadata” specification has been approved by the IESG and is now in the RFC Editor queue.

The version approved by the IESG and sent to the RFC Editor is:

https://www.ietf.org/archive/id/draft-ietf-oauth-resource-metadata-13.html

It joins OAuth 2.0 Security Best Current Practice and JWT Response for OAuth Token Introspection, which are also both currently there.

Thanks to the IETF directorate reviewers and IESG members for their feedback that resulted in improvements to the specification!

Wednesday, 16. October 2024

Decentral Gabe

The Greatly Exaggerated Demise of SSI: A Rebuttal to Premature Eulogies


💡 This is written in response to a post titled "Why Verifiable Credentials Aren't Widely Adopted & Why Trinsic Pivoted"

A forewarning—there will be no tears here and, please, no hugging.

credit, XKCD https://xkcd.com/386/

The post I'm responding to makes a point about people resisting its conclusions because their livelihoods depend on it. But let’s be real—Trinsic depends on this too, just as much as mine does. We’re both here, with skin in the game, and that's a good thing. I continue to work on figuring out how to make self-sovereign identity work at scale because I believe it's worth it. If I was not convinced of its value, I wouldn't be here debating it; I'd have moved on to easier, more profitable paths. That’s the commitment this industry requires—guts, patience, and a long-term perspective.

So let's set the stage clearly. This isn't just about egos or jobs; it's about a vision for the future of digital identity. And while you’ve pivoted, it's worth looking at the broader picture and challenging the notion that SSI is somehow "dead" just because it's hard. We’re not the first to face a daunting path—nor will we be the last.

The Market Isn't the Oracle You Think It Is

Your unwavering faith in the market's infallibility is touching. You proclaim, "The market is the objective reality," as if channeling the spirit of a laissez-faire economist at a startup retreat. But let's examine recent history.

Betamax was superior to VHS, but marketing muscle won the day. The QWERTY keyboard isn't ergonomic, yet it's entrenched due to historical happenstance. The invisible hand often leads us down paths of inefficiency and fragmentation, leaving a trail of superior but abandoned technologies in its wake.

Fragmentation and incompatible standards aren't signs of the market "choosing" the best option; they're about who can shout "MINE!" the loudest while elbowing competitors off the playground. Sometimes, it takes a guiding hand—a nudge from governments or standard bodies—to steer the market toward what's efficient and beneficial for all.

Pivoting Isn't Prophecy

you can do better, hon

The leap from "We couldn't make it work" to "It can't work" is a logical chasm too wide to cross. Your pivot from SSI evangelist to skeptic, while understandable, isn't a revelation of universal truth.

This is akin to Friendster declaring social media a dead end, or Blockbuster declaring video streaming a passing fad. The inability to monetize a nascent technology doesn't negate its potential; it merely highlights the gap between visionary ideals and the brutal realities of running a startup. For example, certain decentralized identity projects have successfully implemented verifiable credentials in applications like educational certificates, employment verification, and gaining access to the financial system, demonstrating real-world utility and growing momentum. These efforts show that verifiable credentials are gaining traction in specific, focused use cases.

Let me be unequivocal: I commend you for having the guts to start a company, work hard for years, and continue to try to make it viable. That is nothing short of admirable. Your pivot might make sense for Trinsic, but it's hardly the canary in the coal mine for SSI.

SSI, like many transformative technologies before it, may simply be awaiting its "streaming moment" — that convergence of technological readiness, market understanding, and user demand that catapults it from interesting concept to indispensable tool.

The Power of Collective Standards

Dismissing government involvement as a mere "silver lining" is like calling the sun a "helpful light bulb." Let's not overlook crucial historical precedents.

The internet (ARPANET), GPS, and even time zone standardization were government initiatives. The metric system, USB, and Wi-Fi all benefited from coordinated efforts beyond market forces. These standards didn't become universal because the market willed them so, but because of coordination to prevent fragmentation.

It is important not to ignore this. Preventing the chaos of incompatible systems often requires patience and collaboration—yes, even more patience than the industry has shown to this point.

Scattered Seeds Grow Diverse Forests

Your lament that SSI's "heat map is too scattered" misses the point. This diversity isn't a bug—it's a feature. The applications sprouting in education, finance, healthcare, and agriculture worldwide are the seeds of a diverse forest, not a failed monoculture.

Yes, SSI's initial user experience may be challenging, but that's par for the course with transformative technologies. Remember when 2G networks made downloading a single image feel interminable? Or when dial-up internet's screeching was our daily soundtrack?

Every groundbreaking technology endures growing pains. Users adapt, technologists adapt, and the market adapts with them—especially when the endgame offers control, privacy, and empowerment.

Business Goals Aren't Universal Truths

There's a profound difference between "We couldn't make it work for our business" and "The technology is inherently flawed."

Your focus on immediate market adoption overlooks the long-term value of doing the right thing—the hard thing—the thing that maybe, honestly, is not suited for startups that need to show growth and profit now. Perhaps a successful SSI startup is better suited as a B-Corp.

While market challenges are real, the problems that led us to SSI have only grown more severe. Data breaches are no longer news; they're an expectation. Surveillance Capitalism has entrenched itself deeper into the fabric of our digital lives, commodifying our every interaction. Government-issued digital identities run on the hardware of monopolistic tech giants, laced with privacy-eroding mandates, threatening to erode what little privacy we have left.

SSI isn't just a novel technology; it is our defense against these escalating digital threats.

Focused Implementation

While the post laments the scattered nature of SSI adoption, I see this differently. There is a clear answer to the diversity challenge: focus on specific, high-impact use cases. Our net is intentionally not cast wide.

We don't require credentials to be used across different domains from which they were issued. That's a potential bonus, not a requirement. It does not undermine the original goal of the technology; optionality is a strength. It leaves the door open for future cross-domain adoption without dependence on it for current success.

This focused strategy allows us to deliver value in targeted areas while building the infrastructure for broader adoption. It’s a pragmatic approach that addresses immediate needs while paving the way for SSI’s long-term vision. An iterative approach helps build confidence in the SSI ecosystem, one application at a time.

The success of SSI won't come from boiling the ocean. We’re laying the groundwork for future interoperability while delivering value today.

A Vision Beyond The Fiscal Calendar

While some may chase immediate market opportunities, many of us remain committed to SSI's promise. Transformational change isn't measured in fiscal quarters but in steady progress over years, many years. It demands fortitude, collaboration, and a willingness to look beyond short-term gains.

To the SSI community: Let this be our rallying cry. We're not just building a product; we're creating the foundation for human-centric digital identity. Let's continue to collaborate on improving user experience, pushing interoperability, and engaging governments and standards organizations. Every effort, no matter how small, contributes to a future where secure, privacy-respecting, self-sovereign digital identity is the norm.

The most compelling technological advances come from those who persevere when the path gets steep. The future of digital identity—one that isn't controlled solely by governments or tech monopolies—is counting on us.

credit: Dune

Should you find yourself at the next IIW, or the one after that, perhaps hosting a session titled "I Was Wrong About Being Wrong About SSI," feel free to invite us—we'll be there. We won't cry, clap, or hug you. We'll simply nod, in recognition that the most profound insights often come from pushing through challenges, not pivoting away from them.

Tuesday, 15. October 2024

Riley Hughes

Why Verifiable Credentials Aren’t Widely Adopted & Why Trinsic Pivoted

A presentation I gave at IIW38, in April 2024.

At the Internet Identity Workshop 38, I announced my talk with a description something like this:

Five years ago at this conference, Trinsic’s first product (our self-sovereign identity wallet app) was launched. Today, I’m hosting a session called “SSI didn’t work. Trinsic is pivoting.”

I knew it was a provocative title — and I fully considered that apostatizing this publicly could result in a mob of opposition. But I was not prepared for what actually happened.

Right after my announcement (and brief, surprised gasp from the crowd of ~350 people) one IIW veteran popped off his chair and clapped his way across the room to embrace me. Right before my session, the CEOs of two separate identity companies asked to broadcast the talk online (one via livestream, another via live Tweet). Right after my session during the closing circle (where the original crowd of ~350 reconvened), the cofounder of the conference wept as she thanked me. Another longtime identity veteran stood immediately after and claimed it was the most important session held at the event in the last 5 years. The next morning, a startup CEO called for memorializing a “Riley Session” as a new staple of the conference twice each year. More than a dozen people asked me to reprise my session.

I say this all descriptively, not boastfully. Far from feeling proud, I actually felt humbled to have had the opportunity to contribute something evidently valuable to a community which has given me so much. (It was at an IIW in 2018 that I realized I wanted to build a career in this industry.) That is the context for this post — to share apparently-useful insights that hopefully will result in getting the world closer to using better digital ID technologies.

The spoiler for this article: I’m no longer betting on verifiable credentials taking off. I mean this literally — 99% of my net worth is Trinsic stock, so the strategy I bet my company on is an objective revelation of my strongest convictions. While once central to Trinsic’s existence, our latest pivot demonstrates that I believe interoperable digital credentials won’t succeed in market anytime soon.

Specifically, my prognostication is that digital identities will continue to be deployed and adopted, but that fragmentation, incompatibility, and bespoke implementations will continue to dominate the market.

Now, I’ve been working on this problem for 7 years. Explaining how I landed on this conclusion requires expounding on years of the business equivalent of trench warfare. This post is a deep dive, not a summary. If you want a summary, I suggest dropping this post into your LLM of choice (I won’t be offended). I’ll do my best to be concise, but I also intend on being thorough. The structure of this post is as follows:

Assumptions

Brief History

Impossible Problems

Bull Case

End

Assumptions

Assumption #0: I’ve drunk the Kool-aid

As you read this post, your walls might go up. Especially if your salary depends on me being wrong. Just be aware of your biases and know that I am still Kool-aid drunk on the philosophy of self-sovereign identity. I want that future as much as anybody. But in my sincere pursuit to discover reality, I’ve learned some things that I hope others can benefit from.

Assumption #1: Terminology

I will use “verifiable credential” or “VC” in this post quite a bit. I’ll also use “self-sovereign identity” or “SSI” or “IDtech”. “Decentralized identity” or “DI” could be subbed in too for all I care. Fundamentally, I’m trying to communicate: A standard way to exchange trustworthy data in a user-centric/user-controlled way. But I’m intentionally avoiding being too pedantic, so don’t get overly caught up in semantics.

Assumption #2: The world is huge

The world is inconceivably big.

Consulting in SSI is a good way to make a living. But hand-to-hand combat is insufficient to get VCs adopted. The only way to scale something on the magnitude needed for SSI to work is a product with product/market fit. That, or regulatory obligation.

Imagine if companies needed Accenture to adopt artificial intelligence. Now consider how that differs from a world where any developer can pick up an API key from OpenAI and implement GPT4 in an afternoon. LLMs have product/market fit, SSI does not.

Assumption #3: The market is reality

Here’s how I think about startups & new product development. The market is unknowable in its entirety. If it was known how to build a successful product (in our case, a widely-adopted SSI wallet or an adoptable VC format), it would have been done by now. To make progress at the cutting edge, you must do something desirable and different. Andy Radcliff calls this being “non-consensus and right”.

Chamath describes a startup as an exercise in mentally modeling the “market” and building a product to fit that model. If your product is rejected, your model of the world is wrong and you need to iterate its form. If your product is adopted, your model of the world is right.

All this is to say, the market is the objective reality. It decides winners and losers. As I’ve sought to get product/market fit in this space, I seek to discover reality as impartially as I can. When I see evidence of something working, I pay attention; when I see evidence that something isn’t working, I try to absorb those data points too.

Assumption #4: The best technology gets adopted

The worth of software is its impact on humans. The most elegant, idealistic, best software is therefore worthless unless it’s adopted. Actually, if it’s not adopted, it’s not elegant, idealistic, or best — no consolation prizes. The market is the judge and if your product isn’t making an impact on people’s lives, it’s no better than a refrigerator on the moon.

Adoption isn’t just the most important thing, it’s the only thing. Thus, this is actually a tautology: by definition, what gets adopted is best.

Assumption #5: Don’t waste your time

I’ve thought a lot about the optimal point between discouraging and cautioning. Yes, I’ve tried a lot of things but it’s important to state that markets change. Maybe something is different now than when I tried it. I surely can’t turn over every stone. Running new or differentiated experiments in this market is probably fruitful–I don’t want to discourage innovation. But I do want it to shine a light on the graveyard that exists in this space and hopefully prevent people from doing things that are doomed to fail.

Brief history

I started my journey in identity in 2017 when I interviewed for a job at Sovrin Foundation. I started Trinsic with two cofounders in 2019 because of the relatively low adoption of Sovrin’s network. I thought if we made the technology dramatically easier to use, more companies would adopt SSI.

Me plus Tomislav & Michael, my cofounders, launching our product at IIW in 2019.

We built something good. Hundreds of developers were signing up every month and issuing verifiable credentials. One of those developers was Wayne Chang who, before cofounding Spruce, gave me the feedback that Trinsic was the best execution of any SSI product in the market. He asked something like:

“With a product like this (where a developer can get value for free) you are probably either using this as lead gen for $250k consulting deals, or you’re just really betting on verifiable credentials blowing up.”

I told him we were betting on VCs blowing up.

Our aim was to build a product that would accelerate VC adoption. In our view, if a customer needed consulting to succeed, it was a signal the product wasn’t good enough. (We did consult customers on how to succeed — but we treated this time as product discovery time, and incorporated the results immediately into the product roadmap.)

What happened is we acquired dozens of paying customers, but not enough were successful enough. We had roughly 1 breakaway customer story per 500 companies that signed up. The rest were either startups that failed to get traction or large enterprise R&D departments that never went to production.

In the meantime, the market shook out in an undeniable way. Dozens of companies obtained product/market fit in the reusable identity space, none of which use the VC standard. Either none of the successful companies chose to use VCs, or none of the companies that chose VCs succeeded (two statements of identical outcome but different presumed causation).

A slide I made in 5 mins immediately before my IIW38 presentation to make this point!

In fact one such company, which raised over $75m from top-tier venture capitalists to fuel its meteoric growth, said to me (paraphrasing):

“The SSI people had a good idea. So we’re applying that idea to get customers while they’re wasting their time in working groups. You can think of us as proprietary SSI.”

It’s been a few years since I was told that. I recently checked back in on them. They’re used by tens of millions of users.

Another reusable ID company used by several million end-users, which has raised 10s of millions, told me (paraphrasing):

“We are building a monopoly. Our goal is to own the identity layer of the internet. It might not work, but if it does, we’re going to make a lot of money.”

Both companies were started after 2019 (the year the VC data model became a W3C recommendation) and chose not to use standards.

This dynamic vexed me for months. Why was the market rewarding companies with inferior identity solutions?

It took me too long to conclude that SSI is the inferior identity solution. The market determines winners and losers, better and worse — and there are important reasons the market is rejecting verifiable credentials in favor of proprietary solutions.

After 4+ years of hypothesizing, experimenting, evaluating, and iterating, I found what I consider to be intractable problems that block adoption of verifiable credentials. I do not know the answers to these problems. If you’re building in the IDtech space, you should take these problems seriously because, in my view, these are the billion-dollar questions you’ll need to crack to have any meaningful success.

Impossible problems

Impossible problem #1: Interoperability is the premise, but doesn’t exist

Standards exist to enable compatibility. If interoperability isn’t important, simple alternatives to verifiable credentials exist (such as signing — or not signing — JSON). The ability to share data with an arbitrary 3rd-party in a standardized way (without bespoke integration or direct connection to issuing source) is the core purpose of VCs. But that doesn’t actually exist right now.

In other words, the reason for the existence of VCs is not present.

This is because interoperability at the application level requires technical interoperability of keys/signatures, public key resolution, exchange protocols, request/response formats, data models, semantics, and a host of underlying elements. (Not to mention nontechnical interoperability elements, like business models and trust frameworks.) Because a myriad of VC variants, exchange protocols, signature suites, etc. have emerged over the years, which must all line up to effectively interoperate, the only practical way to achieve interoperability with others is to use the same software.

Theoretically, if everyone in the industry agreed upon a single standard to use, interoperability would be present, and there would begin to be value to adopting VCs today — but there’s a problem: which standard should be adopted? and what if it’s the wrong one?

Prematurely standardizing is a risky endeavor. But in order to find out what kind of digital credential will have real product/market fit — and thus be adopted — the world needs to experiment, iterate, innovate, over trade-offs to discover what the market will adopt.

Do you see the problem? VCs need innovation, iteration, and experimentation to succeed. But they also need standardization, agreement, buy-in, and stagnation to have value. These two forces are directly opposed. I wrote about this dynamic more in a post several years ago.

An image I made in 2022 for a post I wrote about the tension between interop & innovation.

Standardization is essential for adoption. But adoption is essential for knowing what to standardize (there’s nothing worse than standardizing the wrong thing)! Prematurely standardizing before adoption is a classic “cart before the horse” scenario.

To be clear, none of this means that there is no reason to adopt VCs now — it means every reason to adopt VCs today is in anticipation of future value. Because today, verifiable credentials offer zero marginal value relative to simpler alternatives. To rationally adopt VCs, one must make a bet on the future. Some bold companies will be willing to take that very costly journey (like we did), but most will not.

Impossible problem #2: Initial UX is worse than baseline

Verifiable credentials are always a better UX than existing identity solutions at scale, but always a worse experience initially.

I had a mask-laden conversation with the late Vittorio Bertocci at the Internet Identity Workshop in 2021 where I was working to convince him of the merits of VCs as he advocated for federated approaches.

His point: VCs aren’t necessary to share data outside its original domain. Take, for example, Airbnb wanting to verify a digital driver’s license. Instead of using a VC, DMVs could become an IdP, and Airbnb could add a “verify with X driver’s license button” into their app. The tech exists today.

My point: Yes, but nobody does that today because it doesn’t work in person, doesn’t work offline, doesn’t work when presenting multiple credentials, doesn’t work when privacy needs to be preserved, etc. But most importantly, in this example Airbnb would need to add a “verify with X” button for all 50 states, which is a very bad UX — VCs would enable one universal verification button!

I’ve thought a lot about this interaction. While I like to think I helped nudge Vittorio a bit on the value VCs could provide, you’ll notice my points are basically only relevant at scale.

For a concrete example, take AI agents who need identity wallet capabilities. We ran a whole campaign about this at Trinsic and spoke to dozens of AI agent developers (see this landing page, watch the demo video!). It’s obvious the problem exists in the long-term. As a user, it’s a pain to provision an agent access to all my accounts, info, etc. needed for it to accomplish real tasks for me. But in the near-term, 100% of the AI developers we talked to just chose to use OAuth to connect to sources where users’ data exists already (e.g. Google).

If a user already has a wallet with a bunch of useful credentials in it, obviously sharing those is a better UX than redoing the onboarding process that was required to obtain those credentials. But acquiring the wallet with credentials in the first place will always cause extra friction upfront, which businesses and users are very reluctant toward.

This problem is compounded because businesses don’t just adopt things that are a little bit better than alternatives. In order to switch behaviors or risk their careers on unproven technologies, new solutions need to be better enough to justify that extra investment. And unfortunately for VCs, for every initial use case I’m aware of, the experience is objectively worse.

I shared more insights from Vittorio here.

Impossible problem #3: Heat map is too scattered

When I was raising money for Trinsic as a general-purpose verifiable credential platform, investors would often ask what the most prominent use case for verifiable credentials was — and more specifically, what the most common use case for Trinsic’s customers was. My answer sucked:

Our customers are building use cases in education, employment, financial services, healthcare, insurance, industrials, government, and agriculture. And more!

Once, in pursuit of a better answer to this question, we plotted the industries, geographies, and use cases of 100+ paying customers. The scatter plot was all over the place. Nothing stuck out prominently. It didn’t make sense to double down on a specific industry or use case because none dominated our demand. We didn’t even have geographic consistency — adoption existed from Myanmar to Peru to USA to Romania.

My cofounder Michael continuously sounded the alarm in my ear: in order for VCs to take off, a network effect needs to take hold. No network effect will exist if all adoption is completely scattered.

This problem doesn’t just affect Trinsic, it affects the entire VC space. Until pockets of adoption breed more adoption in a virtuous cycle, the rate of adoption will be capped to a linear value — and adoption to 1+ billion people requires exponential rates.

To drive this point home, I’ll cite an example from a packed room at a previous Internet Identity Workshop, circa 2020. In a room full of decentralized ID vendors, I asked for a raise of hands: “Who has ever heard of a verifiable credential being used for a purpose other than that for which it was issued?” (In other words, who has ever heard of a verifiable credential being used for the reason the VC standard was created.)

No hand went up.

I asked again in 2024. Still crickets.

Verifiable credentials won’t take off until this sort of thing is commonplace. This cross-context data sharing simply won’t happen until both sides of the exchange are interested in similar credentials, or are from a similar industry or geography. In other words, this won’t happen until the heat map of potential use cases lights up in a particular spot. But the current nature of the market is very scattered.

Bull case

This post outlines why I’m not betting on verifiable credentials taking off anytime soon. But if I had to articulate the bull case, there are two arguments here: government mandates and timing/direction of causality.

Governments

Governments represent the silver lining for verifiable credentials. While, in my view, the private sector lacks a viable adoption path for the reasons outlined above, governments have different incentives. Governments build utilities, and there is a good argument that interoperable digital ID is a government utility. Many governments in the West are investing in VCs, most notably eIDAS 2.0. Although even that is far from settled.

Government involvement is not limited to government-issued credentials. Virtually every private sector company in the verifiable credential world (except Trinsic, for what it’s worth) has been subsidized by governments funding their preferred VC standards (DHS funding different standards than the EU, for example). Various technologies through the years have succeeded due to government mandates or subsidies–nothing wrong with it inherently–but it is a telling aspect.

Governments compelling use of interoperable digital credentials represents the most likely success case for VCs I can think of. But that scenario exists in the medium-term, not the short-term.

Timing

The other argument says that verifiable credentials are simply too early. Many of the leading reusable ID ecosystems like IDme, Yoti, CLEAR, etc. were started and initially grew before the VC standard was finalized/stabilized. So one could argue these companies, if started today, could have had the same or greater success using verifiable credentials rather than a proprietary solution.

Unfortunately, companies like Incode, Footprint, Plaid, and many others recently launched "reusable ID" products and, after extensive research, chose not to use VCs. They have succeeded far beyond most SSI products, which presents a difficult rebuttal to this argument.

One cannot rule out the possibility that companies can succeed betting on the VC standard. But it’s difficult to point to examples of VCs helping a business solve problems for customers (and thus make money) — and it’s trivial to point to dozens of examples of companies sinking millions of dollars into verifiable credential infrastructure (money which could have been spent acquiring customers). One hypothesis is that tinkerers are attracted to VCs while aggressive operators are less likely to experiment on new technologies or make speculative bets on the future.

End

I referenced Vittorio earlier, and I’ll cite him one last time here. In a blog post about his involvement in the CardSpace project at Microsoft, which is a spiritual ancestor of modern SSI, he said:

“When the user centric identity effort substantially failed to gain traction in actual products, with the identity industry incorporating some important innovations (hello, claims) but generally rejecting many of the key tenets I held so dear, something broke inside me. I became disillusioned with pure principled views, and moved toward a stricter Job to be done, user cases driven stance.”

I’ve not changed my convictions on the future I want to live in. Nor has my resolve wavered in my desire to build that future. But the market has spoken. Prematurely standardizing a pre-product/market fit technology won’t work — no matter how badly zealous advocates of SSI want it to. That is why Trinsic is no longer a general-purpose verifiable credential infrastructure company.

While I don’t expect a universal standard for digital IDs will happen anytime soon, new digital IDs continue to roll out every day. These digital IDs are sometimes built with VCs, other times with mDLs, but usually in proprietary ways. Some are bound with biometrics, others with passkeys. Some are focused on solving a problem in a specific industry, others in a specific geography.

Today, the Trinsic Identity Acceptance Network represents the future of Trinsic. Instead of helping companies create digital IDs (issue verifiable credentials or create wallets for users), we help businesses accept existing digital IDs. Our strategy is a bet on continued fragmentation of the ID landscape — a bet on no single standard winning out.

In fact, I’ve become convinced that Trinsic’s current strategy to help businesses accept digital credentials is actually a necessary stepping stone to a future world powered by interoperable digital credentials. We’re still ushering the world in that direction, just from a different angle.

Of course, I could be wrong, and I hope I am. My ears and eyes are wide open for new information. My only objective is to see reality as clearly as I can. If you have information you believe I’m missing, I’d love to hear from you and have a conversation. Sincerely! You can DM me on LinkedIn or Twitter, or shoot me an email riley@trinsic.id.

Thanks to Kim Hamilton Duffy, Gerald Glickman, Timothy Ruff, Eve Maler, James Monaghan, Dave Grantham, and Zachary Jones for giving feedback on a draft of this post.

Sunday, 13. October 2024

Michael Ruminer

AI "it's got legs"

Compliments of FLUX schnell

This last week I spoke with two folks on AI intersection within their work and how their work intersects with one another. One was a CPO for a startup that is bringing AI into their life sciences products in a forefront way. The other was a compliance manager for a pharma company. The second happens to be my wife. For me, their work intersects. They both certainly got me thinking harder about AI agents. The first of the two was breaking down knowledge silos in organizations and I definitely saw a play for AI agents and the use of tools by those agents in their approach. The second needed some exuberant discussion on the potential of how AI can contribute to their organizational goals and needs; and without mansplaining, I was glad to help. :-) The conversations only reinforced in me where we are headed in AI adoption.

There were discussions on trend analysis, small language models, retrieval-augmented generation (RAG), and how AI agents and tools work to bring different knowledge repositories into a single interface with human consumable outputs, just to name a few topics. Surprisingly or not, a lot of these topics were not with the CPO but with the wife; putting the theory into practical examples. It was motivational to me.

I have been looking for good, practical, real-world examples of AI agentic tools to work on as knowledge-expanding exercises, whether they succeed or, in the end, surface the immaturity of the technology. I don't get especially motivated by the common and tired examples of financial analysis, LinkedIn scraping for recruiting, or travel booking scenarios that every agent example seems to rely on. These discussions were a step forward.

These topics reinforced for me that, for AI and its agents, "it's got legs".

Saturday, 12. October 2024

Heres Tom with the Weather

Thursday, 10. October 2024

Jon Udell

Mix Human Expertise With LLM Assistance for Easier Coding


There are plenty of ways to use LLMs ineffectively. For best results, lean into your own intelligence, experience, and creativity. Delegate the boring and routine stuff to closely supervised assistants whose work you can easily check.

Mix Human Expertise With LLM Assistance for Easier Coding

Part of the LLM series at The New Stack.

Wednesday, 09. October 2024

Justin Richer

GNAP: A Conversation of Authorization


After five years of standardization work, GNAP is now officially RFC9635! This long and intense process actually started a few years prior to that, when I was talking with a lot of folks in the security industry about some of the shortcomings of OAuth 2.0 and what we could do about them as an industry. These conversations led to the XYZ proposal (and implementations), which eventually led to the formation of the GNAP working group along with a bunch of others. I'm particularly grateful for the work that Fabien Imbault, Yaron Sheffer, Leif Johannsen, and Aaron Parecki put into the documents and conversations in the working group over these years.

I'm really proud of what we've built in GNAP. One of the core tenets of GNAP was to look at the world of OAuth and surrounding technologies and figure out how we could do a lot of that better. It's been great to see GNAP getting applied in a bunch of places around the web, from payments to key management, and especially in places where OAuth doesn't reach as well. While OAuth remains deeply entrenched across the world, and likely will be for some time, the community has learned many things from GNAP. A lot of things that started in GNAP have been making their way back to the OAuth ecosystem in some form.

The most obvious of this is RFC9396: OAuth Rich Authorization Requests. This replacement of OAuth’s scope parameter was a direct and intentional backport of what became GNAP’s resource access rights, which also acronyms to RAR. In the OAuth world, we don’t get some of the clean features of GNAP, like being able to substitute strings for objects as a shorthand, but a lot of the core enhancements are there.

We’re also seeing yet another intent registration addition to OAuth 2 (on top of the pushed authorization request, device grant type, and CIBA extensions), and this one mimics a lot of the flexibility of GNAP’s interaction system. It’s a more narrow use case in the OAuth specification, but it’s clear that the pattern that GNAP was built on is here to stay.

And then there’s RFC9421: HTTP Message Signatures. This is work that started independently from GNAP but grew up around the same time, and GNAP utilizes HTTP Message Signatures as a core security function. I don’t think we’d have gotten the signing spec to be as robust as it is without some of the GNAP key proofing use cases driving the discussion.

And finally, the GNAP Resource Servers document has just passed IESG review and is on its own way to becoming an RFC as well. This document represents key abstractions in how an RS and an AS relate to each other, and I hope we can continue to build this out and pull the best ideas out into the world.

The GNAP working group is shutting down now that its core work is done, but GNAP is far from over. I look forward to seeing it grow into its spaces, and serve as a beacon of how a delegation protocol can be engineered and built.


Just a Theory

PGXN v2 Update

A lot has happened in the five months since the last PGXN v2 update. The time has been used for continuing community discussions, planning, designs, and the start of implementation. Read on for a full accounting.

Speaking of PGXN news, I neglected to link to this post I wrote for the Tembo Blog last month, a fairly detailed accounting of what’s been happening on the PGXN v2 project:

Forgive me Postgres community, for it has been five months since my last PGXN v2 Update. In my defense, it has been super busy! The time went into ongoing community discussions, planning, designs, and the start of implementation. Join me below for the lowdown.

A few highlights:

PGXN RFCs Repository and rfcs.pgxn.org
Binary Distribution POC and OCI POC
Extension Ecosystem Summit
API and ABI guidance
pgxn_meta v0.1.0
PGXN Meta JSON Schemas
project plan

There’s been quite a bit of activity since then, including the aforementioned PGXN RFC–5 — Release Certification. More soon!

More about… PGXN

PGXN Certifications RFC

A request for comments on a new PGXN RFC for signing releases, plus a link to an initial implementation.

A couple weeks ago, I drafted PGXN RFC–5 — Release Certification, which proposes to replace the simple inclusion of a SHA-1 hash digests in PGXN release META.json files with a JWS-signed release payload. From the introduction:

This RFC therefore proposes to extend v2 distribution metadata with a single additional property, certs, that contains one or more certifications that attest to the authenticity or other characteristics of a release on PGXN.

The certs value is an object that contains at least one property, pgxn, which itself contains a PGXN-generated RFC 7515 JSON Web Signature in the JWS JSON Serialization format. The pgxn property will allow clients not only to assemble the release URL and verify the downloaded file against checksums, but also validate it against a public key provided by PGXN.

The design allows multiple signatures, certifications, or other attestations, which in the future MAY allow authors or other entities to sign releases with their own keys. The new format appends a structure such as this to the distribution META.json file:

{ "certs": { "pgxn": { "payload": "eyJ1c2VyIjoidGhlb3J5IiwiZGF0ZSI6IjIwMjQtMDktMTNUMTc6MzI6NTVaIiwidXJpIjoiZGlzdC9wYWlyLzAuMS43L3BhaXItMC4xLjcuemlwIiwiZGlnZXN0cyI6eyJzaGE1MTIiOiJiMzUzYjVhODJiM2I1NGU5NWY0YTI4NTllN2EyYmQwNjQ4YWJjYjM1YTdjMzYxMmIxMjZjMmM3NTQzOGZjMmY4ZThlZTFmMTllNjFmMzBmYTU0ZDdiYjY0YmNmMjE3ZWQxMjY0NzIyYjQ5N2JjYjYxM2Y4MmQ3ODc1MTUxNWI2NyJ9fQ", "signature": "cC4hiUPoj9Eetdgtv3hF80EGrhuB__dzERat0XF9g2VtQgr9PJbu3XOiZj5RZmh7AAuHIm4Bh-rLIARNPvkSjtQBMHlb1L07Qe7K0GarZRmB_eSN9383LcOLn6_dO--xi12jzDwusC-eOkHWEsqtFZESc6BfI7noOPqvhJ1phCnvWh6IeYI2w9QOYEUipUTI8np6LbgGY9Fs98rqVt5AXLIhWkWywlVmtVrBp0igcN_IoypGlUPQGe77Rw" } } }

Review and feedback would be very much appreciated, especially on the list of unresolved questions toward the end.

Thanks to David Christensen and Steven Miller for the early reviews!

Meanwhile, I’ve released pgxn_meta v0.4.0, which adds support for this format, as well as code to rewrite PGXN v1 release fields to the new format. It doesn’t actually do signature verification, yet, as the server back end hasn’t been updated with the pattern and PKI. But I expect to modify it in response to feedback and get it implemented in early 2025.

More about… Postgres PGXN RFC JWS

⛰️ Postgres Ecosystem Summit EU

The sequel to the successful PGConf.dev event, the Extension Ecosystem Summit EU will showcase some exemplary extension use cases.

Given the success of the Extension Ecosystem Summit at PGConf.dev back in May, my colleague Floor Drees has organized a sequel, the Extension Ecosystem Summit EU on Tuesday, October 22, at the Divani Caravel Hotel in Athens. That’s “Day 0” at the same hotel as PGConf.eu. Tembo, Percona, Xata, and Timescale co-sponsor.

While the May event took the form of an open-space technology (OST)-style unconference aimed at extension developers, the EU event aims to inform an audience of Postgres users about the history and some exemplary use cases for extensions. From the invite:

Join us for a gathering to explore the current state and future of Postgres extension development, packaging, and distribution. Bring your skills and your devices and start contributing to tooling underpinning many large Postgres installations.

- Jimmy Angelakos - pg_statviz: pg_statviz is a minimalist extension and utility pair for time series analysis and visualization of PostgreSQL internal statistics.
- Adam Hendel (Tembo) - pgmq: pgmq is a lightweight message queue. Like AWS SQS and RSMQ but on Postgres. Adam is pgmq’s maintainer since 2023, and will present a journey from pure Rust → pgrx → pl/pgsql.
- Alastair Turner (Percona) - pg_tde: pg_tde offers transparent encryption of table contents at rest, through a Table Access Method extension. Percona has developed pg_tde to deliver the benefits of encryption at rest without requiring intrusive changes to the Postgres core.
- Gülçin Yıldırım Jelínek (Xata) - pgzx: pgzx is a library for developing PostgreSQL extensions written in Zig.
- Mats Kindahl (Timescale) - TimescaleDB (C), pgvectorscale (Rust) and pgai (Python): maintaining extensions written in different languages.

I will also deliver the opening remarks, including a brief history of Postgres extensibility. Please join us if you’re in the area or planning to attend PGConf.eu. See you there!

More about… Postgres PGXN Extensions PGConf Athens Summit

Tuesday, 08. October 2024

Michael Ruminer

Sometimes It’s All About The Chunking

As I continue my study and experimentation with coding up AI solutions, and especially, at the moment, with Retrieval-Augmented Generation (RAG), I decided to work with a post from the Metadocs blog titled “Simple Agentic RAG for Multi Vector stores with LangChain and LangGraph”. It seemed it would cover two areas of interest, agentic operations and RAG. Little did I expect to learn a valuable lesson in chunking. In this post I’ll pass along the obvious but well demonstrated lesson I gathered in the process.

It started with the previously mentioned post, which referred to a prior post that it built upon. Following a link to that prerequisite post, “Create a Langchain app with multiple vector store the easy way”, it in turn referenced an even earlier post as a prerequisite. So down that rabbit hole I went. The earliest post was “Deploy a RAG app with Langchain in minutes”. I read it first; it was a very simple RAG application. I coded it up, making sure I understood each line along the way. Most notable was that it split the text into chunks on “\n\n”. I hadn’t looked at the source document they provided as a sample. It turns out it was a text file of a US State of the Union address with a twist: each sentence was followed by “\n\n” (two newline characters); an interesting, if unrealistic, format.

I have my own example PDF that I have been using to test various RAG implementations against a specific prompt, so I copied two paragraphs from the document that contain the context I was after and formatted them with a “\n\n” after each sentence. Normally I extract the text from the PDF as part of the code and then chunk it, usually with recursive character text splitting, but I didn’t want to change this program since I was going to build on it. When done, the results actually returned what I was after: a list of the 10 principles of SSI.
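For reference, the splitting step from that simplest post looks roughly like this (a minimal sketch assuming the LangChain Python packages; the file name and chunk size are placeholders, not the original author’s values):

```python
from langchain_text_splitters import CharacterTextSplitter

# The sample document has "\n\n" after every sentence.
with open("two_paragraphs.txt") as f:
    text = f.read()

# Split on the blank line between sentences; the splitter then packs the
# pieces into chunks of up to chunk_size characters.
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=500, chunk_overlap=0)
chunks = splitter.split_text(text)
print(f"{len(chunks)} chunks; first chunk:\n{chunks[0]}")
```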

For no apparent reason, I decided to edit the text file and format it with a single “\n” after each paragraph. This should return the same results if I edited the text splitter to match. It didn’t. I was, and still am, perplexed by this. It makes no sense that a double-newline split should return different results than a single-newline split under the circumstances. I plan to revisit this, as I believe I must be wrong, despite trying multiple times. What was most important in the process was that with the right chunking, as simple as it was, I got the desired results when prompted, whereas in all my past experiments it had failed. There were differences, though: I was reading a text file rather than using a PDF text extractor, and I was using only two paragraphs focused on the context I wanted rather than 70 pages of an academic paper that is probably very hard to extract cleanly, even if the proper area of context was two clean paragraphs of text within that PDF. The real lesson for me is how important chunking is for good results. I suspect the major differential in the success was the chunk divisions, though I won’t rule out the simpler PDF as a contributor.

Next, I plan to try a few things in this naive RAG implementation before I move on to the multi-vector store: a PDF created from just the two paragraphs that contain the needed context, split on paragraphs (“\n”), to see how that comes out. I’ll also try the two paragraphs in a PDF with the text extracted and chunked using RecursiveCharacterTextSplitter with separators=[“\n\n”, “\n”, “. “, “ “, “”], a chunk size of 1000, and two different settings for overlap (0 and 200), as well as with SentenceTransformerEmbeddingFunction and then the default OpenAI embedding function. Let’s see how all those combinations work.
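A sketch of that recursive-splitting configuration, with the parameter values from the plan above (the embedding and retrieval steps are omitted, and the variable holding the extracted PDF text is assumed):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

for overlap in (0, 200):
    splitter = RecursiveCharacterTextSplitter(
        separators=["\n\n", "\n", ". ", " ", ""],
        chunk_size=1000,
        chunk_overlap=overlap,
    )
    # `pdf_text` is assumed to hold the text extracted from the two-paragraph PDF.
    chunks = splitter.split_text(pdf_text)
    print(f"overlap={overlap}: {len(chunks)} chunks")
```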

To recap: though I can’t explain why I got wildly different results depending on the parsing character once the text file format changed, I suspect the simple chunking by sentence made a lot of the difference. The other likely contributor was clean, simple text versus PDF-extracted text. I plan to experiment more and will report back the results. A takeaway for me, even if it was not entirely or even primarily the impactful element, was how important chunking is for good results.

Monday, 07. October 2024

FACILELOGIN

The lessons learned in building a secure, scalable, and reliable identity platform @ DevRev

Image credits: https://www.networkrailmediacentre.co.uk/resources/ap-0409

At DevRev, we are building an API-first dev-centric platform that gives you one unified view of your customers and products by connecting your data, teams, and users to our AgentOS platform and app experiences. This blog post shares some insights into how we built (and keep evolving) a secure, scalable, and reliable identity platform at DevRev. The decisions we made and the lessons we learned throughout this journey, we believe, are equally applicable to most SaaS companies. I would like to acknowledge and thank Akanksha Deswal and the DevRev identity team, who were part of this journey and instrumental in figuring out what works best for us.

An identity platform serves as a secure bridge between users and the services they access — whether apps, APIs, or microservices — delivering a seamless and trustworthy way to manage, verify, and protect user identities in increasingly interconnected environments. While the expectations for an identity platform can vary depending on specific use cases, at DevRev we primarily focused on a few core areas, discussed throughout this post.

The decision to buy or build components of an identity platform depends on several factors, such as business requirements (both functional and non-functional), cost at scale, available resources, in-house expertise, and more. We wanted to maintain complete control over our object model (organizations, users, groups, etc.) while delegating more generic identity capabilities to an external identity service (identity provider). Having control over the foundational object model is essential for innovation, and allowing a third party to dictate its structure or limitations can stifle growth and introduce significant risks.

A few years ago, I enjoyed reading Ask Your Developer: How to Harness the Power of Software Developers and Win in the 21st Century by Jeff Lawson. In the book, Jeff reframes the classic ‘build vs. buy’ debate into a more urgent choice: ‘build vs. die.’ As every company evolves into a software company, the true competitive advantage lies in the software they build. When software becomes the critical interface between our services and our customers, not building it to meet the exact needs means risking irrelevance — or worse, failure.

Building doesn’t always mean starting from scratch. We shouldn’t reinvent the wheel if existing solutions meet our needs. Instead, focus on developing what’s core to our business — the components that give us a competitive edge. Everything else, the foundational components that enable us to build our vision, can be sourced from the digital supply chain. I touched on this concept in the blog post, The Next TCP/IP Moment in Identity.

We had no interest in building identity federation standards like OpenID Connect or SAML, nor did we want to develop authentication protocols in-house. These are commodity features supported by almost all identity providers. However, we wanted to ensure the external identity service handling federation and authentication remained a ‘thin’ layer, separate from our business logic. This approach minimizes vendor lock-in.

Vendor lock-in isn’t just about making it easy to switch providers — though that’s certainly part of it. More importantly, it’s about retaining the freedom to innovate independently, which was our primary focus. Within the DevRev platform, Janus serves as the identity service, managing the identity object model and interacting with the external identity provider during organization and user onboarding flows. For the rest of the DevRev platform components, the external identity provider remains a black box, only interacting with Janus.

Choosing the right identity provider requires a thorough and thoughtful evaluation. Our objective wasn’t to find an identity provider that perfectly matched the DevRev object model but rather to focus on its core constructs, their relationships, and the integration effort with the DevRev platform.

In the DevRev platform, each organization can choose its preferred authentication method. For example, one organization might opt for Google login, while another may use their identity provider via SAML. The external identity provider’s object model must support this flexible relationship between organizations and their connections. For instance, the connection to Google (social) is made using OpenID Connect, and ideally, we should be able to reuse that Google connection (or any other social connection) across multiple organizations.

This means the identity provider’s object model should allow connections to be defined independently of a specific organization and provide the ability to associate them with any org. During our evaluation, we encountered one identity provider that could only define a connection within the scope of an individual organization (or tenant), which led us to reject it.

Apart from the key functional requirements for authentication and identity federation, we also weighed the scalability, availability, reliability, testability, compliance, and interoperability of the identity provider, and, of course, the cost at scale. We picked Auth0, mostly because of its lightweight organization model and its ability to meet our requirement of scaling to 1 million organizations. Most of the prominent identity providers support almost all the key identity federation and authentication needs; it’s in the non-functional requirements and object relationships that you find the differentiators.

In our hunt for an external identity provider, we mostly looked for a SaaS product. Going for a SaaS product gave us the freedom to focus on our core business problem, and delegate anything related to the external identity provider to the SaaS vendor. A key reason one would pick an on-prem product over a SaaS product is the freedom to do heavy customizations. Building heavy customization in the identity provider is something we wanted to avoid from day 1.

Then again, I am not completely ruling out identity provider on-prem deployments. It would depend on the industry you are in and your business-specific needs. For example, highly regulated industries would pick an on-prem identity provider to keep all their customer data within their boundaries. I led the development of the open-source WSO2 Identity Server for almost 14 years, before joining DevRev, and saw many large-scale successful on-prem deployments.

When picking a SaaS identity provider, non-functional requirements play a key role. We have little control over them, and even from the SaaS vendor’s point of view, change requests related to non-functional requirements are hard to accommodate, as they could require core changes in the product. For example, Auth0 offers an availability of four 9’s. When you create a tenant in Auth0, it’s bound to one region, and internally Auth0 creates a replica of that tenant in another availability zone. However, when the primary Auth0 region is down, we do not have the control to switch to the corresponding availability zone; the Auth0 team decides when to switch based on the severity of the incident. Auth0 also offers a private cloud deployment, but it’s a costly option.

Another limitation we found in Auth0 (and in most of the other prominent SaaS identity providers as well) is the lack of data replication between regions. For example, when you pin your tenant to the Auth0 US-1 region, all the API calls from any region globally must be routed to the US-1 region. Due to this limitation, we’ve witnessed increased latency in login API calls (OpenID Connect) for users connecting from Asia. This hurts the app load time, and at the moment we are building a solution in-house to minimize the impact.

The DevRev platform is built with 250+ microservices running on Kubernetes. Janus is one of them. Each service has its own CI/CD pipeline, which deploys every commit to main to Prod after running end-to-end tests in Dev and QA. We’ve integrated Auth0 into the Janus integration tests. We have a dedicated Auth0 tenant for that, and in the CI/CD pipeline we connect to Auth0 via the API to create organizations and users, and then associate users with organizations. Once the tests are complete, we run a clean-up task to remove the resources created in Auth0, as it has certain resource restrictions.
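A hedged sketch of the kind of Management API calls that test setup makes (the endpoints are Auth0’s, but the helper names, connection, and payloads are illustrative, not our actual pipeline code):

```python
import os
import requests

AUTH0 = f"https://{os.environ['AUTH0_TEST_DOMAIN']}/api/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['AUTH0_MGMT_TOKEN']}"}

def create_test_org_and_user(org_name: str, email: str) -> tuple[str, str]:
    # Create an Auth0 organization for the test run.
    org = requests.post(f"{AUTH0}/organizations", json={"name": org_name}, headers=HEADERS).json()
    # Create a test user on the passwordless email connection.
    user = requests.post(
        f"{AUTH0}/users", json={"email": email, "connection": "email"}, headers=HEADERS
    ).json()
    # Associate the user with the organization.
    requests.post(
        f"{AUTH0}/organizations/{org['id']}/members",
        json={"members": [user["user_id"]]},
        headers=HEADERS,
    )
    return org["id"], user["user_id"]

def cleanup(org_id: str, user_id: str) -> None:
    # The clean-up step; when it silently fails, tenant limits fill up quickly.
    requests.delete(f"{AUTH0}/users/{user_id}", headers=HEADERS)
    requests.delete(f"{AUTH0}/organizations/{org_id}", headers=HEADERS)
```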

There was a time when the clean-up task failed to run and unfortunately went unnoticed. This filled up Auth0 resources and soon reached its limits. Auth0 has a limit on the size of the app metadata that can be associated with a user. We used the app metadata to associate the DevRev-generated organization identifier with an Auth0 user. Once we hit the limit, the Auth0 system became unstable, and none of the data in the test tenant could be accessed either via the UI or the API. This became a blocker for the prod promotion of the Janus service and all other services that relied on Janus for testing. As a quick workaround (while the Auth0 team was working on recovering the tenant), we had to spin up a new Auth0 tenant and use that in our tests. Usually, these limits enforced by Auth0 are way beyond our production needs, and we encountered this issue due to the failed clean-up task from our end.

Auth0 has a rate limit on its management API. During our tests, we used this management API to create organizations and users. Due to the nature of the tests, we started to hit the rate limits (even with proper retries) which resulted in some flaky tests. As a solution, to this and the resource limit issue we discussed before, we ended up mocking Auth0 in our integration tests, rather than directly hitting the Auth0 test tenant.

We follow GitOps at DevRev to automate and manage infrastructure and application deployments. Git is the single source of truth. This helps us take every change through a review process and audit all changes. We followed the same model with Auth0, where appropriate. For example, we use Auth0 actions to inject the DevRev-generated org/user identifiers into the tokens it produces. The actions are JavaScript code; we keep them in a Git repo and deploy them to the Auth0 Prod tenant via Dev and QA. This helps us avoid manual edits in the Auth0 management console and keeps all three environments consistent.

Currently, not all operations on Auth0 are fully managed through GitOps. For example, we still update certain settings, such as token lifetimes for applications, directly through the Auth0 management console. Although these are infrequent tasks, it’s essential to have a complete audit trail in production for every operation and change made. While Auth0 provides logs, they are only retained for a month. To ensure long-term visibility, we push Auth0 logs to Datadog and set up Slack alerts for specific cases. Additionally, we’re exploring the use of the Auth0 Terraform provider to automate and manage all Auth0 operations via GitOps.

Passwords are painful, not just from the user’s point of view but also from the product side. If an app has to support password-based login, it must also build the supporting infrastructure to recover passwords, rotate them periodically, implement strong second-factor authentication, and so on. From day one, we made a design decision to avoid passwords.

We use Auth0 as an identity bridge. The contract between the DevRev app and Auth0 is defined by OpenID Connect. Auth0 helps us connect multiple other identity providers via SAML and OpenID Connect to authenticate users. These identity providers are either the social identity providers or the enterprise identity providers owned by DevRev customers. Following the identity bridge pattern helped avoid storing user passwords, and also introducing a new identity provider or an authentication method requires no changes to the DevRev app. It can be done transparently.

This graph shows the growth of unique monthly active Dev users by connection type over the last 12 months. The growth in active users is healthy, with enterprise connections starting to dominate.

The identity bridge pattern effectively addresses the common challenges posed by the spaghetti-identity and identity-silos antipatterns. Spaghetti identity occurs when a SaaS application tries to establish direct, point-to-point integrations with multiple identity providers, leading to complex, tangled connections. On the other hand, identity silos arise when an application is limited to a single federation or authentication protocol, making it difficult and costly to introduce new protocols in the future. By adopting the identity bridge pattern, organizations can simplify integration and enable flexibility in supporting multiple protocols without the downsides of these antipatterns.

It’s also worth mentioning how we leveraged the identity bridge pattern to overcome a limitation in Auth0. Auth0 supports passwordless authentication with email OTP; however, it cannot be used on its own with Auth0 organizations — only as a second factor. This limitation likely arises from Auth0’s architecture, as the organization model was introduced later. To overcome this and use email OTP as a passwordless method, we spun up another Auth0 instance without organizations and then federated to it from the primary Auth0 tenant (which has organizations enabled). While this approach works, it is far from ideal. We anticipate switching to native passwordless email OTP within our primary Auth0 tenant as soon as that feature becomes available.

Auth0 publishes the latency it introduces during login operations; however, that does not reflect the latency users experience at the edge. To address this concern, and to get better visibility into the OpenID Connect login flow, we started routing all login calls to Auth0 via our CDN provider, Fastly. On the Fastly Compute@Edge platform, we run an edge gateway, which pushes all the API stats to Google BigQuery for further analysis. From there we can see the edge latency for every login operation, numbers that are much closer to what users actually experience. The following graph shows the p90 latency numbers in milliseconds for the last 31 days against a set of selected regions, for the Auth0 token endpoint.

p90 latency numbers in milliseconds for the last 31 days against a set of selected regions, for the Auth0 token endpoint

All DevRev applications utilize the OpenID Connect authorization code flow. The token endpoint, highlighted in the graph above, is called once during the initial app load and periodically thereafter for token refreshes. While the periodic refreshes happen asynchronously and do not affect the user experience, the latency of the initial token request during page load is more critical.

Two main factors contribute to this initial latency:

- Geographic latency — All users, regardless of location, connect to a centralized Auth0 instance in the US-East region.
- Web-worker-based login flow — The DevRev web app, a single-page application (SPA), leverages this flow, which adds complexity and impacts performance during authentication.

We opted for the web-worker-based login flow to enhance security by storing user tokens in browser memory more securely. However, web worker memory is scoped to a single browser tab, which means that even if a user has an active Auth0 session, opening a new tab requires calling both the authorize and token endpoints during the initial page load. Upon analyzing the stats, we found that 91.1% of the calls to the Auth0 authorize endpoint were from users who were already authenticated. To address this, we developed an edge-based solution targeting these 91.1% of users, aiming to reduce the latency at the edge to almost zero. As of this writing, this solution is being tested in our development and QA environments and will soon be available in production.

91.1% of the calls to the Auth0 authorize endpoint were from users who were already authenticated (page reloads). The users once logged in to the app, keep the login session. This graph shows the average number of active vs silent logins by weekday for the last 90 days. The edge-based solution we are working on will reduce the latency at the edge for the login calls for already authenticated users by almost 100%.

In addition to publishing stats related to Auth0 API performance from the edge to Google BigQuery, we also send logs to Datadog. This allows us to trigger alerts in case of Auth0 failures and detect downtimes early. Moreover, this helps us identify which users and organizations are impacted during Auth0 outages — events that have occurred a few times in the past couple of years.

The contract between all the DevRev apps and the identity provider is based on OpenID Connect. However, we didn’t find the ID token to be particularly useful in the case of an SPA. An SPA is simply a “dumb” app or a public client, which means it cannot make any trustworthy decisions on its own. The access token is the key! (If you’re interested in learning more about the difference between the ID token and the access token, please check out this video.)

We use self-contained access tokens (JWTs), which include the user context with every request. These tokens have been widely adopted in the industry and were formalized as an IETF standard in 2021 (RFC 9068). While Auth0 has yet to support RFC 9068, this has had no impact on our use cases. In fact, for an SPA, using reference tokens (the other alternative to self-contained access tokens — also known as opaque access tokens) is not a viable option. Only the issuer of an opaque token can validate it, and it’s impractical for the gateway intercepting API requests to communicate with the issuer for token validation, for each request.

When using self-contained tokens, we also need to implement support for revocation. Unlike self-contained tokens, when a reference token is revoked, that request goes to the token issuer. The consumer of the token (such as a gateway) would be aware of the revocation because it communicates with the token issuer each time it validates a token. However, there’s no point in communicating with the issuer to revoke a self-contained token unless there’s a reliable way to propagate the revocation decision to all dependent systems. To address this, we introduced the auth-tokens.delete endpoint, allowing us to revoke access tokens upon user logout. We also maintain a cache of the revoked token IDs, with a TTL based on the token’s expiration.
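A minimal sketch of that revocation cache idea (one Redis key per revoked token ID, with the TTL set to the token’s remaining lifetime; the key prefix and client wiring are illustrative):

```python
import time
import redis

r = redis.Redis()

def revoke(jti: str, exp: int) -> None:
    # Remember the revoked token ID only until the token would expire anyway.
    ttl = max(exp - int(time.time()), 1)
    r.setex(f"revoked:{jti}", ttl, 1)

def is_revoked(jti: str) -> bool:
    # Checked by the gateway on every request, alongside normal JWT validation.
    return r.exists(f"revoked:{jti}") == 1
```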

Returning to the topic of the ID token: the DevRev app relies on the dev-users.self API to identify the user. Another reason we can't rely on the ID token for user identification is that Auth0 cannot embed the necessary user information into it, as we store only a minimal set of user data in Auth0.

Most of the login-related support queries we receive from our customers are due to some kind of issue with OTP delivery. Even though the number of such queries is quite low (compared to the success cases), when it happens it’s painful for the users and hurts the business. Most DevRev customers use email OTP-based login during evaluation and later switch to logging in with their own identity provider via SAML or OpenID Connect. Apart from our direct customers, we also offer a support portal for our customers’ customers (whom we call Rev users), which also offers login with email OTP.

Most email OTP-related issues arise from one of the following causes:

- The user’s email is added to the email provider’s global suppression list.
- The OTP email is marked as spam.
- The OTP email is rejected by the user’s mailbox due to internal domain restrictions, or other similar factors.

Email service providers (ESPs) maintain global suppression lists to prevent sending emails to addresses that have previously bounced (due to invalid addresses) or opted out of receiving emails from certain senders. If a user’s email address is on such a list, the OTP email won’t be delivered. This can happen if the user’s email previously encountered delivery issues, such as soft or hard bounces.

Initially, we used SendGrid as our ESP for sending OTP emails as well as marketing emails. Although we used two separate SendGrid accounts for these purposes, SendGrid maintains a global suppression list shared across accounts when emails originate from the same domain — devrev.ai in our case. This meant that if a user unsubscribed from marketing emails, they were automatically added to the global suppression list, preventing them from receiving OTPs. In some cases, users didn’t even unsubscribe themselves; certain plugins handled it automatically.

This behavior was undesirable, as it significantly impacted OTP delivery. To address the issue, we switched to AWS SES for sending OTP emails, which reduced global suppression list-related problems by nearly 90%.

We once encountered a particularly interesting issue related to email OTPs, and while it wasn’t a delivery problem, it’s worth mentioning. The user received the OTP email, but every time they tried entering the OTP, it failed. What made this case even more intriguing was that we discovered the OTP had already been attempted from a different IP address just before the user tried it. This explained why the OTP kept failing — the user was attempting to use an OTP that had already been used.

After further troubleshooting, we discovered the root cause. Along with the OTP in the email, we had also embedded an OTP link that allowed users to log in directly by clicking it. Although this link was commented out in the HTML template and not visible to the user, the organization this user belonged to had strict security protocols that automatically scanned and clicked on links embedded in incoming emails. This security bot didn’t distinguish between commented-out links and visible ones, so it clicked the hidden link, sending a request to the Auth0 server and invalidating the OTP before the user could use it. The issue was resolved by completely removing the commented-out OTP link from the email template.

Many email providers have spam filters that automatically classify certain emails as spam based on various criteria, such as the content of the email, sending frequency, or the sender’s reputation. If the OTP email is flagged as spam, it ends up in the user’s spam or junk folder, leading to the user not receiving it in their primary inbox. This could occur if the email provider deems the sender domain as suspicious or if the email contains certain keywords that trigger spam filters. After moving from SendGrid to AWS SES and then carefully designing the email template, we could get the number of spam emails almost down to 0.

Every organization and user in the DevRev platform is assigned a unique, immutable, and persistent pseudonym known as a DON (DevRev Object Name). Every resource in the platform has its own DON. The format of a DON is: don:<service>:<partition>:(<type>/<id>)+, inspired by Amazon Resource Names (ARN).

For example, don:identity:dvrv-us-1:devo/xyz is the DON of an organization. Here, identity refers to the service name, and dvrv-us-1 is the partition identifier, which indicates the organization’s location. For example, an organization in the us-east-1 region has dvrv-us-1 as its partition ID, while one in ap-south-1 has dvrv-in-1.

The type field of the DON defines the type of the object it is attached to. Every key entity within the DevRev platform is associated with an object, and all objects share a common ancestor known as the Atom object. For example, a devo object type represents a Dev organization, while devu represents a Dev user. A Dev organization is a company or team that uses the DevRev platform, and its members are referred to as Dev users. Customers of a Dev organization are called Rev users, who belong to Rev organizations (also known as workspaces). Both Rev organizations and Rev users operate within the scope of a Dev organization. For instance, the DON of a Rev organization would be: don:identity:dvrv-us-1:devo/xyz:revo/pqr.
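A small sketch of how an identifier in that format breaks down (illustrative only, not the platform’s actual parsing code):

```python
from dataclasses import dataclass

@dataclass
class Don:
    service: str
    partition: str
    path: list[tuple[str, str]]  # ordered (type, id) pairs

def parse_don(don: str) -> Don:
    prefix, service, partition, rest = don.split(":", 3)
    assert prefix == "don"
    path = [tuple(part.split("/", 1)) for part in rest.split(":")]
    return Don(service, partition, path)

rev_org = parse_don("don:identity:dvrv-us-1:devo/xyz:revo/pqr")
# rev_org.partition == "dvrv-us-1"
# rev_org.path == [("devo", "xyz"), ("revo", "pqr")]
```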

All identity objects in the DevRev platform follow a predefined schema, ensuring consistency and standardization across all organizations. This common schema facilitates smooth integration and uniformity across the platform. However, recognizing that different organizations may have unique requirements, we also provide the flexibility to extend the Dev user and Rev user objects with custom fields specific to a given Dev organization.

Customizability is essential for organizations that need to tailor their identity models to accommodate unique workflows, industry requirements, or specific customer data needs. For instance, a Dev organization might need to capture additional fields such as department codes, job roles, or region-specific preferences that go beyond the standard schema. Moreover, when importing Rev user objects from external systems like Salesforce or Zendesk, these objects may come with their own set of attributes. The ability to extend and adapt the schema allows organizations to seamlessly integrate and migrate from these external systems to the DevRev platform.

All personally identifiable information (PII) of DevRev users is stored exclusively in Janus, making it the single source of truth for PII. Other services in the platform reference users through their DONs. For example, when a client calls the works.get API to retrieve a work item, the request goes through the DevRev gateway, which makes a gRPC call to the corresponding service. The service returns the work item data, including the DONs of the users who created and are assigned to the item.

The DevRev platform is built with 250+ microservices running on Kubernetes. Fastly Compute@Edge serves as the entry point at the edge for all DevRev services. At the origin, the DevRev API gateway intercepts all incoming traffic. Once the gateway verifies the JWT accompanying an API request, it dispatches the request to the appropriate service.

Since raw user DONs are not user-friendly, the gateway resolves them into user details, such as display name, email, and full name, by querying Janus (the API composition pattern). These user summaries are cached in Redis to avoid repetitive calls to Janus (from the gateway), ensuring faster response times for future requests.
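A hedged sketch of that composition step at the gateway (the Janus client call, key prefix, and cache TTL are assumptions for illustration):

```python
import json
import redis

cache = redis.Redis()
SUMMARY_TTL = 300  # seconds; illustrative

def resolve_user_summary(user_don: str, janus_client) -> dict:
    # Serve from Redis when possible to avoid a Janus round trip per request.
    cached = cache.get(f"user-summary:{user_don}")
    if cached:
        return json.loads(cached)
    summary = janus_client.get_user_summary(user_don)  # hypothetical Janus RPC
    cache.setex(f"user-summary:{user_don}", SUMMARY_TTL, json.dumps(summary))
    return summary
```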

A DON attached to a user or an organization is an immutable identifier. It’s not just immutable; it’s also resolvable on its own, because it embeds information within it. For example, we use the partition ID embedded in the organization DON to route requests to the organization’s home region, in our multi-regional routing implementation built at the edge. Designing a system based on immutable identifiers needs to be done with care, because the cost of a bad design decision is high. For example, we embed the Dev organization ID into the user DON, which means we cannot move a user to another Dev organization, nor can a user be part of multiple Dev organizations. This is a conscious decision we made to scope all the resources in the DevRev platform under a Dev organization.

However, we made a mistake in our Rev user implementation. The Rev user DON embedded both the Dev organization ID and the corresponding Rev organization ID. This meant a Rev user could not switch Rev organizations or be part of multiple Rev organizations. This was not a requirement initially, but as we kept learning from our customers, we realized that limiting a Rev user to one Rev organization was too restrictive a constraint. We made the Rev user scoped at the Dev organization level and removed the Rev organization ID from the DON. This was a painful change.

The Dev organization ID (don:identity:dvrv-us-1:devo/xyz), which is embedded into a DON is not incremental. An incremental ID can expose sensitive information, such as the total number of organizations created over time. Malicious actors could exploit this to infer patterns about system growth or attempt ID enumeration attacks to access or manipulate unauthorized resources. Non-incremental IDs make it harder to guess or predict other organization IDs, improving security and reducing the risk of ID enumeration.

The randomly generated IDs provide global uniqueness, meaning IDs can be safely generated across distributed systems without the risk of collision. This allows for a more flexible, decentralized architecture where different systems or regions can generate IDs without coordinating with a central system.

We initially used a 48-bit random value as the Dev organization ID. Once base62 encoded, it becomes a string of 8 or 9 characters. When we moved to the multi-regional deployment, we wanted to ensure the Dev organization ID is unique across all regions by adding a regional prefix, for example, “1” for us-east-1. However, since the previously generated Dev organization IDs were already 8 or 9 characters long, this introduced a challenge: a new region could generate a prefixed ID of 9 characters that matched an ID generated in another region before the regional prefix was introduced. To address this, we first ensured the base62-encoded 48-bit value was always 9 characters by adding padding, and then added the regional prefix on top of that, effectively making the Dev organization ID a fixed-length random string of 10 characters.
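The scheme can be sketched as follows (the base62 alphabet and the region prefix value are assumptions for illustration, not the exact encoding we use):

```python
import secrets
import string

ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase  # one possible base62 alphabet

def base62(n: int) -> str:
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out)) or ALPHABET[0]

def new_dev_org_id(region_prefix: str = "1") -> str:
    # 48 random bits, base62-encoded and left-padded to a fixed 9 characters,
    # then prefixed with a region marker: a fixed-length, 10-character ID.
    body = base62(secrets.randbits(48)).rjust(9, ALPHABET[0])
    return region_prefix + body
```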

What does deleting a Dev organization mean? A Dev organization has many other resources linked to it within and outside the identity space (or Janus). Dev users, Rev users, workspaces, accounts, service accounts, system users, and preferences are all identity-related objects, while conversations, articles, tickets, and issues, for example, are objects outside the identity space.

This is a common problem in a distributed system. What we need is a cascade delete. Implementing a cascade delete in a distributed system requires careful consideration of consistency, availability, fault tolerance, and scalability. Unlike a traditional monolithic system, where database constraints and relationships can directly enforce cascade deletes, distributed systems involve multiple services, databases, and possibly different regions, making the process more complex.

When a Dev organization is deleted, we mark it as soft-deleted. A soft delete marks a Dev organization as deleted without actually removing it from the database. This is, in general, useful to avoid cascading failures in case other services depend on the deleted entity. When a Dev organization is marked as deleted, it triggers a CDC (change data capture) event, and we have a dedicated service (garbage collection service), listening to these events and taking actions based on the type of the object being deleted and the dependent object types.

Once the garbage collection service picks up the event to soft delete an organization, it spins up a Temporal workflow, which triggers the cascade delete of the dependent objects after a retention period. As you may have figured out already, this is a recursive process, and we only need to specify the first level of dependencies.

In general, there are two common saga patterns for cascade delete implementation, orchestration-based saga and choreography-based saga. In the orchestration-based saga pattern, a central coordinator service orchestrates the cascade delete by calling each service that manages dependent objects in a defined sequence. It ensures the cascade completes across all services or triggers compensating actions (e.g., undo the deletion) if a step fails. In the choreography-based saga pattern, each service listens for events and performs its part of the cascade delete, triggering further events to continue the process. We picked the orchestration-based saga pattern over the choreography-based saga.
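A highly simplified sketch of the orchestration-based approach (the dependent-service hooks are illustrative; in our case the orchestration actually runs as a Temporal workflow):

```python
def cascade_delete(org_don: str, dependents: list) -> None:
    # dependents: ordered (name, delete_fn, undo_fn) triples, one per first-level
    # dependent object type; each delete_fn recursively handles its own children.
    completed = []
    try:
        for name, delete_fn, undo_fn in dependents:
            delete_fn(org_don)
            completed.append((name, undo_fn))
    except Exception:
        # Compensate in reverse order if any step fails, then surface the error.
        for _name, undo_fn in reversed(completed):
            undo_fn(org_don)
        raise
```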

At DevRev, we leverage a range of databases — including MongoDB, PostgreSQL, and DynamoDB — each selected to address specific use cases. For all identity-related data managed by Janus and the STS (which we’ll discuss later in this blog), we rely on MongoDB. MongoDB is ideal for transactional (OLTP) workloads, excelling in scenarios that demand high write throughput and the ability to manage large volumes of concurrent transactions. Its built-in replication via replica sets provides automatic failover and data redundancy across multiple nodes, enhancing both reliability and fault tolerance. To ensure data consistency, we utilize the majority write concern and read from the primary node. Currently, we are focusing on improving read and write performance in this setup.

MongoDB also offers horizontal scaling through sharding, where each shard operates as a replica set. Data is distributed across shards using a shard key, which is a field (or combination of fields) that determines how MongoDB partitions and routes data. Although we currently operate with a replica set without sharding, we’ve designed each collection in the identity database with a Dev organization ID field that can serve as the shard key, allowing us to seamlessly enable sharding when needed.
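A sketch of that MongoDB setup (the connection string, collection, and field names are illustrative):

```python
from pymongo import ASCENDING, MongoClient
from pymongo.read_preferences import ReadPreference
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://identity-rs0.example.internal/?replicaSet=rs0")  # placeholder URI
db = client.identity

# Acknowledge writes only once a majority of replica-set members have them,
# and read from the primary for consistent reads.
dev_users = db.get_collection(
    "dev_users",
    write_concern=WriteConcern(w="majority"),
    read_preference=ReadPreference.PRIMARY,
)

# Index the prospective shard key so sharding can be enabled later without a redesign.
dev_users.create_index([("dev_org_id", ASCENDING)])
```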

Unlike traditional relational databases, which require predefined schema structures, MongoDB’s document-oriented nature supports schema-less design. This flexibility has allowed us to rapidly develop and iterate on identity objects without the need to define a schema or handle migrations upfront.

Along with MongoDB, we also use Google BigQuery, a fully managed, serverless, and highly scalable data warehouse designed for fast SQL-based queries and real-time analytics on large datasets. We use the MongoDB Kafka connector to capture change data capture (CDC) events from MongoDB and push them to BigQuery. This also acts as an audit trail for all changes at the data level.

SCIM support had always been planned but was deprioritized due to other tasks. We’re excited that it is now an integral part of the DevRev identity platform. SCIM offers a standardized protocol that streamlines the exchange of identity information between identity providers and external applications, significantly reducing manual processes and minimizing errors. It ensures that user data is synchronized and consistent across all systems. Furthermore, SCIM enhances security by automating the revocation of user access when employees leave or change roles, reducing the risk of unauthorized access and ensuring proper access management.

DevRev provides a SCIM endpoint for provisioning users and groups from an organization’s identity provider. Initial testing was done using Okta as the identity provider, and the DevRev SCIM integration will soon be available on the Okta Integration Network. Following that, we plan to expand availability to Azure and Google platforms. As a DevRev platform customer, the DevRev organization has already implemented SCIM for integration with Okta, streamlining identity management within the organization.

The identity bridge pattern (which we discussed before) facilitates trust mediation between the identity provider trusted by the DevRev platform (Auth0) and external identity providers that authenticate users from DevRev organizations. This pattern is primarily focused on human identities. To bridge the human identity into the services, we’ve introduced a security token service (STS). The STS implementation is inspired by the RFC 8693. This adds another level of indirection in trust, and all DevRev services only need to trust the STS. You can read more about STS from this blog: Securing north/south and east/west traffic @ DevRev.

STS is a key part of the DevRev identity architecture, which helps to make the identity provider a thin layer, which we discussed earlier in this blog. An Auth0 token tells us who the user is and does not embed any PII data other than the email address. However, when exchanging an Auth0 token for an STS-issued token, we have the flexibility to enrich the token with the data available in the DevRev platform.

The STS is capable of issuing tokens of different types. This graph shows the number of tokens issued by the STS for the last 30 days.

The STS is capable of issuing tokens of different types. For example Personal Access Tokens (PAT) for Dev users, Application Access Tokens (AAT) for service accounts, Rev session tokens for Rev users, and so on. Each of these token types is bound to a set of processing rules. For example, while issuing a Rev session token, the STS talks to Janus, and just-in-time provisions the Rev user, if they do not exist in the system. Another example would be how we handle delegation workflows. When a snap-in wants to act on behalf of a Rev user, it authenticates with its AAT (which represents the snap-in service account) and requests a Rev session token. Also, the STS-issued tokens are used in the multi-regional setup to facilitate certain region-to-region interactions.
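Conceptually, these exchanges follow the RFC 8693 token-exchange shape. A hedged sketch (the STS endpoint path and the audience parameter usage are illustrative; only the grant type and token-type URIs come from the RFC):

```python
import requests

def exchange_for_rev_session_token(sts_url: str, subject_token: str, rev_user_don: str) -> str:
    # Swap an existing token (e.g. an Auth0-issued token or an AAT) for an
    # STS-issued token representing the target Rev user.
    resp = requests.post(
        f"{sts_url}/token",
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": subject_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
            "requested_token_type": "urn:ietf:params:oauth:token-type:jwt",
            "audience": rev_user_don,  # illustrative: the principal the new token should represent
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```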

In this blog post, we highlighted key aspects of the DevRev identity platform and the lessons we’ve learned throughout its development. This is an ongoing effort, and as we continue to gain insights from our customers, we constantly discover new ways to enhance the platform and tackle exciting challenges in the identity and security landscape.

The lessons learned in building a secure, scalable, and reliable identity platform @ DevRev was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 27. June 2024

Just a Theory

Patch: Postgres ABI and API Guidance

Dear Postgres extension developers: Please review and give feedback on the proposed patch adding ABI and API guidance to the C language documentation.

Update 2024-11-14

I forgot to update this post at the time, but on July 31, Peter Eisentraut committed the patch adding ABI and API guidance to the C language documentation. I only noticed because today’s releases contained a modified ABI that broke a number of extensions. See the hackers thread for details.

TL;DR

If you’re a Postgres extension developer interested in understanding what to expect from core API and ABI stability, please review and give feedback on this patch (or pull request) adding ABI and API Guidance to the documentation.

In my PGConf.dev report a couple days ago, I mentioned that a few actionable items came out of the Improving extensions in core unconference session. One was the need to document the heretofore unofficial policy for API and ABI stability between major and, especially, minor versions of Postgres.

A frequent topic at the Extension Summit and Mini-Summits and a number of PGConf sessions has been concern regarding compatibility changes between minor releases of Postgres. At Mini Summit Five, for example, Yurii Rashkovskii presented a few examples of such changes, leading him to conclude, along with several others in the community, that C API-using extensions can only be used when built against the minor release with which they’re used.

In the Unconference session, core committers reported that such changes are carefully made, and rarely, if ever, affect extensions compiled for different minor releases of the same major version. Furthermore, they carefully make such changes to avoid compatibility issues. In the case Yurii found, for example, a field was added to a struct’s padding, without affecting the ordering of other fields, thus minimizing the risk of runtime failures.

It became clear that, although the committers follow a policy — and read new committers into it via patch review — it’s not documented anywhere. The result has been a bunch of sturm und drang amongst extension developers unsure what level of compatibility to depend on and what changes to expect.

The week after the conference, I started a pgsql-hackers thread proposing to document the committer policy. Following some discussion and review of potential ABI breaks in minor releases, the consensus seemed to be that the committers strive to avoid such breaks, that they’re quite uncommon in minor releases, and that most of the reported issues were due to using more obscure APIs.

As a result, we started drafting a policy, and after a few iterations, Peter Eisentraut pulled things together from the perspective of a core team member, reframed as “Server API and ABI Guidance”. I converted it into a patch (and pull request) to add it to the C Language docs. A key statement on minor releases:

In general, extension code that compiles and works with a minor release should also compile and work with any other minor release of the same major version, past or future.

I hope this document clarifies things. Even if it’s not as strict as some might hope, it at least documents the project approach to compatibility, so we have a better idea what to expect when using the C APIs. If you see gaps, or you have additional questions, please respond to pgsql-hackers thread — or the pull request (I’ll propagate comments to hackers).

More about… Postgres Extensions Patch ABI API