Last Update 2:12 AM June 05, 2023 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Monday, 05. June 2023

John Philpin : Lifestream

No Lyme Regis …


Sunday, 04. June 2023

John Philpin : Lifestream

Now THAT’s a faucet.



Simon Willison

Weeknotes: Parquet in Datasette Lite, various talks, more LLM hacking


I've fallen a bit behind on my weeknotes. Here's a catchup for the last few weeks.

Parquet in Datasette Lite

Datasette Lite is my build of Datasette (a server-side Python web application) which runs entirely in the browser using WebAssembly and Pyodide. I recently added the ability to directly load Parquet files over HTTP.

This required an upgrade to the underlying version of Pyodide, in order to use the WebAssembly-compiled version of the fastparquet library. That upgrade was blocked by an AttributeError: module 'os' has no attribute 'link' error, but Roman Yurchak showed me a workaround which unblocked me.

So now the following works:

https://lite.datasette.io/?parquet=https://github.com/Teradata/kylo/blob/master/samples/sample-data/parquet/userdata1.parquet

This will work with any URL to a Parquet file that is served with open CORS headers - files on GitHub (or in a GitHub Gist) get these headers automatically.
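A rough way to check whether a given file URL will work (a sketch for illustration only, not part of Datasette Lite; the URL below is a placeholder) is to request it with an Origin header and look for Access-Control-Allow-Origin in the response:

# Illustrative sketch: check whether a file URL is served with open CORS
# headers, so a browser-based tool like Datasette Lite is allowed to fetch it.
# The URL is a placeholder - substitute the Parquet file you want to load.
import urllib.request

url = "https://example.com/data/userdata1.parquet"  # placeholder
request = urllib.request.Request(
    url,
    method="HEAD",
    headers={"Origin": "https://lite.datasette.io"},  # simulate a browser origin
)
with urllib.request.urlopen(request) as response:
    allow_origin = response.headers.get("Access-Control-Allow-Origin")

print("Open CORS:", allow_origin in ("*", "https://lite.datasette.io"))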

Also new in Datasette Lite: the ?memory=1 query string option, which starts Datasette Lite without loading any default demo databases. I added this to help me construct this demo for my new datasette-sqlite-url-lite plugin:

https://lite.datasette.io/?memory=1&install=datasette-sqlite-url-lite#/_memory?sql=select+'url_valid()'+as+fn%2C+url_valid(%3Aurl)+as+result%0Aunion+all%0Aselect+'url_scheme()'%2C+url_scheme(%3Aurl)%0Aunion+all%0Aselect+'url_host()'%2C+url_host(%3Aurl)%0Aunion+all%0Aselect+'url_path()'%2C+url_path(%3Aurl)%0Aunion+all%0Aselect+'url_fragment()'%2C+url_fragment(%3Aurl)%3B&url=https%3A%2F%2Fwww.sqlite.org%2Fvtab.html%23usage

datasette-sqlite-url-lite - mostly written by GPT-4

datasette-sqlite-url is a really neat plugin by Alex Garcia which adds custom SQL functions to SQLite that allow you to parse URLs and extract their components.

There's just one catch: the extension itself is written in C, and there isn't yet a version of it compiled to WebAssembly that works in Datasette Lite.

I wanted to use some of the functions in it, so I decided to see if I could get a pure Python alternative working. But this was a very low-stakes project, so I decided to see if I could get GPT-4 to do essentially all of the work for me.

I prompted it like this - copying and pasting the examples directly from Alex's documentation:

Write Python code to register the following SQLite custom functions:

select url_valid('https://sqlite.org'); -- 1
select url_scheme('https://www.sqlite.org/vtab.html#usage'); -- 'https'
select url_host('https://www.sqlite.org/vtab.html#usage'); -- 'www.sqlite.org'
select url_path('https://www.sqlite.org/vtab.html#usage'); -- '/vtab.html'
select url_fragment('https://www.sqlite.org/vtab.html#usage'); -- 'usage'

The code it produced was almost exactly what I needed.
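The actual generated code is linked below; as an illustration of the general shape of that kind of function registration (a sketch using Python's sqlite3 and urllib.parse, not the plugin's actual code), it looks something like this:

# Sketch only (not the plugin's actual code): pure-Python SQLite functions
# matching the documented behaviour of url_valid(), url_scheme() and friends.
import sqlite3
from urllib.parse import urlparse

def url_valid(url):
    parsed = urlparse(url or "")
    return 1 if parsed.scheme and parsed.netloc else 0

def url_scheme(url):
    return urlparse(url or "").scheme or None

def url_host(url):
    return urlparse(url or "").netloc or None

def url_path(url):
    return urlparse(url or "").path or None

def url_fragment(url):
    return urlparse(url or "").fragment or None

def register_url_functions(conn):
    # deterministic=True lets SQLite reuse results for identical arguments
    for name, fn in [
        ("url_valid", url_valid),
        ("url_scheme", url_scheme),
        ("url_host", url_host),
        ("url_path", url_path),
        ("url_fragment", url_fragment),
    ]:
        conn.create_function(name, 1, fn, deterministic=True)

conn = sqlite3.connect(":memory:")
register_url_functions(conn)
print(conn.execute("select url_path('https://www.sqlite.org/vtab.html#usage')").fetchone())
# -> ('/vtab.html',)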

I wanted some tests too, so I prompted:

Write a suite of pytest tests for this

This gave me the tests I needed - with one error in the way they called SQLite, but still doing 90% of the work for me.
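For a flavour of what that kind of test suite looks like (a condensed illustrative sketch, not the generated tests themselves, reusing register_url_functions from the sketch above):

# Condensed sketch of pytest coverage for the URL functions (illustrative only).
import sqlite3
import pytest

URL = "https://www.sqlite.org/vtab.html#usage"

@pytest.fixture
def conn():
    connection = sqlite3.connect(":memory:")
    register_url_functions(connection)  # from the sketch above
    return connection

@pytest.mark.parametrize(
    "sql,expected",
    [
        ("select url_valid(?)", 1),
        ("select url_scheme(?)", "https"),
        ("select url_host(?)", "www.sqlite.org"),
        ("select url_path(?)", "/vtab.html"),
        ("select url_fragment(?)", "usage"),
    ],
)
def test_url_functions(conn, sql, expected):
    assert conn.execute(sql, [URL]).fetchone()[0] == expected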

Here's the full ChatGPT conversation and the resulting code I checked into the repo.

Various talks

Videos for three of my recent talks are now available on YouTube:

Big Opportunities in Small Data is the keynote I gave at Citus Con: An Event for Postgres 2023 - talking about Datasette, SQLite and some tricks I would love to see the PostgreSQL community adopt from the explorations I've been doing around small data.

The Data Enthusiast's Toolkit is an hour long interview with Rizel Scarlett about both Datasette and my career to date. Frustratingly I had about 10 minutes of terrible microphone audio in the middle, but the conversation itself was really great.

Data analysis with SQLite and Python is a video from PyCon of the full 2hr45m tutorial I gave there last month. The handout notes for that are available online too.

I also spotted that the Changelog put up a video, “Just getting in to AI for development? Start here”, with an extract from our podcast episode “LLMs break the internet”.

Entries this week

It's infuriatingly hard to understand how closed models train on their input
ChatGPT should include inline tips
Lawyer cites fake cases invented by ChatGPT, judge is not amused
llm, ttok and strip-tags - CLI tools for working with ChatGPT and other LLMs
Delimiters won't save you from prompt injection

Releases this week

datasette-sqlite-url-lite 0.1 - 2023-05-26: A pure Python alternative to sqlite-url ready to be used in Datasette Lite
sqlite-utils 3.32.1 - 2023-05-21: Python CLI utility and library for manipulating SQLite databases
strip-tags 0.3 - 2023-05-19: CLI tool for stripping tags from HTML
ttok 0.1 - 2023-05-18: Count and truncate text based on tokens
llm 0.3 - 2023-05-17: Access large language models from the command-line

TIL this week

Testing the Access-Control-Max-Age CORS header - 2023-05-25
Comparing two training datasets using sqlite-utils - 2023-05-23
mlc-chat - RedPajama-INCITE-Chat-3B on macOS - 2023-05-22
hexdump and hexdump -C - 2023-05-22
Exploring Baseline with Datasette Lite - 2023-05-12

John Philpin : Lifestream


Can I just say that @jsonbecker rocks.

Over on the help site he has been very helpful to me - directly and also - somehow - unlocking a block I have had around identifying elements to better control the CSS.

I just wanted to say THANKYOU Jason



🎙️ ‘Lazy’ As A Business Model

Rob Long in his usual fast (10 minutes) and effective way nails it. This time it’s about the problems of The Entertainment Industry.



🔗‘The billionaires have won’: English pubs forced to close after owners demand full rent for lockdown


I like 🔗 About Numeric Citizen’s Craft page … been trying to do that myself - but it quickly gets bloated.


Simon Willison

It's infuriatingly hard to understand how closed models train on their input


One of the most common concerns I see about large language models regards their training data. People are worried that anything they say to ChatGPT could be memorized by it and spat out to other users. People are concerned that anything they store in a private repository on GitHub might be used as training data for future versions of Copilot.

When someone asked Google Bard how it was trained back in March, it told them its training data included Gmail! This turned out to be a complete fabrication - a hallucination by the model itself - and Google issued firm denials, but it's easy to see why that freaked people out.

I've been wanting to write something reassuring about this issue for a while now. The problem is... I can't do it. I don't have the information I need to credibly declare these concerns unfounded, and the more I look into this the murkier it seems to get.

Closed models won't tell you what's in their training data

The fundamental issue here is one of transparency. The builders of the big closed models - GPT-3, GPT-4, Google's PaLM and PaLM 2, Anthropic's Claude - refuse to tell us what's in their training data.

Given this lack of transparency, there's no way to confidently state that private data that is passed to them isn't being used to further train future versions of these models.

I've spent a lot of time digging around in openly available training sets. I built an early tool for searching the training set for Stable Diffusion. I can tell you exactly what has gone into the RedPajama training set that's being used for an increasing number of recent openly licensed language models.

But for those closed models? Barring loose, high-level details that are revealed piecemeal in blog posts and papers, I have no idea what's in them.

What OpenAI do and don't tell us

The good news is that OpenAI have an unambiguous policy regarding data that is sent to them by API users who are paying for the service:

OpenAI does not use data submitted by customers via our API to train OpenAI models or improve OpenAI’s service offering.

That's very clear. It's worth noting that this is a new policy though, introduced in March. The API data usage policies page includes this note:

Data submitted to the API prior to March 1, 2023 (the effective date of this change) may have been used for improvements if the customer had not previously opted out of sharing data.

Where things get a lot murkier is ChatGPT itself. Emphasis mine:

We don’t use data for selling our services, advertising, or building profiles of people—we use data to make our models more helpful for people. ChatGPT, for instance, improves by further training on the conversations people have with it, unless you choose to disable training.

But what does this mean in practice?

My initial assumption had been that this isn't as simple as anything you type into ChatGPT being used as raw input for further rounds of model training - I expected it was more about using that input to identify trends in the kinds of questions people ask, or using feedback from the up/down vote buttons to further fine-tune the model.

But honestly, I have no idea. Maybe they just run a regular expression to strip out phone numbers and email addresses and pipe everything else straight into the GPT-5 training runs? Without further transparency all we can do is guess.

A clue from the InstructGPT paper

The best clue I've seen as to how this data might actually be used comes from OpenAI's description of InstructGPT back in January 2022:

To make our models safer, more helpful, and more aligned, we use an existing technique called reinforcement learning from human feedback (RLHF). On prompts submitted by our customers to the API[A] our labelers provide demonstrations of the desired model behavior, and rank several outputs from our models. We then use this data to fine-tune GPT-3.

Crucially, this hints that the data isn't being used as raw input for future trained models. Instead, it's being used in an exercise where several potential outputs are produced and human labelers then select which of those is the best possible answer to the prompt. Aside from exposing potentially private data to those human labelers, I don't see this as a risk for leaking that data in the later output of the model.

That [A] footnote turns out to be important:

We only use prompts submitted through the Playground to an earlier version of the InstructGPT models that was deployed in January 2021. Our human annotators remove personal identifiable information from all prompts before adding it to the training set.

Again though, I'm left with even more questions. This was before ChatGPT existed, so was the Playground development tool being treated separately from the API itself back then? What does "adding it to the training set" mean - is that the raw pre-training data used for future models, or is it the RLHF data used for the fine-tuning that they mentioned earlier?

Security leaks are another threat

Aside from training concerns, there's another danger to consider here: the risk that an AI vendor might log inputs to their models and then suffer from a security flaw that exposes that data to attackers - or an insider threat where vendor employees access logged data that they shouldn't.

OpenAI themselves had a widely publicized security issue a few months ago where ChatGPT users could see summarized titles of sessions by other users. This is an extremely bad breach!

Their new trust.openai.com site appears to be entirely aimed at reassuring companies about their approach to security.

To be fair, this is not a new issue: companies have been trusting their private data to cloud providers like AWS and Google Cloud for more than a decade.

The challenge is that these AI companies have much less of a track record for staying secure. AWS and Google Cloud have large security teams with many years of experience securing their customers' data. These newer AI vendors are building up those capabilities as they go.

Self-hosted, openly licensed models

I've been tracking the meteoric rise of openly licensed LLMs you can run on your own hardware since LLaMA and Alpaca demonstrated how capable they could be back in March.

These models aren't yet anywhere near as capable as GPT-4, and claims that they compete with ChatGPT's gpt-3.5-turbo mostly don't hold up to deeper scrutiny.

But... they're pretty good - and they're getting better at an impressive rate.

And since you can run them on your own instances, they remove all possible concerns about what happens to the data that you pipe through them.

An open question for me remains how large a language model actually needs to be in order to solve the kinds of problems companies need to solve. Could a weaker, openly licensed model armed with the same retrieval augmented generation tricks that we've seen from Bing and Bard be capable enough to remove the need for a closed model like GPT-4?

My hunch is that for many applications these augmented openly licensed models will be increasingly capable, and will see widespread adoption over the next few months and years.

Bonus section: does GitHub use private repos to train future models?

This question came up on Hacker News this morning. GitHub's Privacy & Data Sharing policy says the following:

Private repository data is scanned by machine and never read by GitHub staff. Human eyes will never see the contents of your private repositories, except as described in our Terms of Service.

Your individual personal or repository data will not be shared with third parties. We may share aggregate data learned from our analysis with our partners.

I interpret this as GitHub saying that no employee will ever see the contents of your private repo (barring incidents where they are compelled by law), and that the only data that might be shared with partners is "aggregate data learned from our analysis".

But what is "aggregate data"?

Could a large language model trained on that data fit under that term? I don't think so, but the terminology is vague enough that once again I'm not ready to stake my reputation on it.

Clarity on this kind of thing is just so important. I think organizations like GitHub need to over-communicate on this kind of thing, and avoid any terminology like "aggregate data" that could leave people confused.

Thanks to Andy Baio and Fred Benenson for reviewing early drafts of this post.


John Philpin : Lifestream


🔗 8. Identity Paradigms - by Dr Alan Mayo - Identity 2.5

Kiwi friend Alan Mayo is up to his 8th newsletter as he explores where we are with the challenge of ‘identity’.

‘Identity’, mind you, not just ‘online identity’.

What does this all mean?  Well, technology matters and technology is undecided: the performance of Identity 2 is a problem, the potential of Identity 2.5 is not well understood, and the viability of Identity 3 is uncertain.

💬 Alan Mayo



🔗 WWDC23 Keynote Bingo

Lots of pundits have been rabbiting on about tomorrow … but I like Basic Apple Guy’s the best …



Politicians have always lied, but they don’t bother putting any effort into it any more.

💬 William Gallagher



Just read the @billbennettnz post about iA Presenter. I took a look at it in the very early days of the beta and moved on … should I look again?



2023 | 06 | 04

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.



Such a useful way to end an article ….

The report says the talks have been going on for around two months. Amazon may take several months to launch the service, or may scrap the plan altogether.


Simon Willison

Quoting Ted Chiang


There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?’ And someone else said, ‘A poor choice of words in 1954’. And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the ’50s, we might have avoided a lot of the confusion that we’re having now.

Ted Chiang


Ben Werdmüller

Meta Is Trying, and Failing, to Crush Unions in Kenya


“Kenyan content moderators at Meta have been fighting for better compensation for workers forced to watch videos of murder, rape, and ethnic cleansing. Meta was initially unwilling to give in to these demands, but Kenyan courts are intervening on the side of workers.” #Technology

[Link]


John Philpin : Lifestream


🎙️Josh Topolsky in conversation with Mark Gurman

Have to say a couple of things in here made me sit up and think … “oh maybe I will watch WWDC”.

Saturday, 03. June 2023

John Philpin : Lifestream


Ha —- sitting on my phone - looking at my site … and the next job just emerged from nowhere … make my newly updated css work in different ways on different devices!!!



I have been grinding my teeth to nothing this morning having just discovered that two css settings on my blog are no longer working … even though I didn’t change them while I have been futzing with other stuff.

There I was thinking I had a free day.



On January 1, 2011, I only had 68 Twitter followers. I had been on Twitter for 16 months, and there were pet canaries with a larger social media presence than me.

💬 Ted Gioia


Simon Willison

pytest-icdiff


This is neat: "pip install pytest-icdiff" provides an instant usability upgrade to the output of failed tests in pytest, especially if the assertions involve comparing larger strings or nested JSON objects.
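For example, a failing assertion over nested data like the sketch below (an illustrative example, not from the linked project) is exactly where the side-by-side diff output pays off; install it with "pip install pytest-icdiff" and run pytest as usual:

# Illustrative example: the kind of failing assertion where pytest-icdiff's
# side-by-side diff is much easier to read than the default output.
def test_nested_payload():
    expected = {"user": {"id": 1, "name": "Alice", "tags": ["admin", "staff"]}}
    actual = {"user": {"id": 1, "name": "Alice", "tags": ["admin"]}}
    assert actual == expected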

Via @hynek


John Philpin : Lifestream


2023 | 06 | 03

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.


Ben Werdmüller

Guy Who Sucks At Being A Person Sees Huge Potential In AI


“Just yesterday, I asked an AI program to write an entire sci-fi novel for me, and [as someone who will die an empty shell of a man who wasted his life doing nothing for the world and, perhaps, should never have been born] I was super impressed.” #AI

[Link]


Creating Psychological Safety for Black Women at Your Company


“Leaders who are truly committed to diversity, equity, and inclusion (DEI) in the workplace must ask themselves these two critical questions: What are the individual, interpersonal, and organizational costs of neglecting how psychological safety is different for Black women? And how might a tailored approach to psychological safety boost well-being and work outcomes of Black women in the workforce?” #Labor

[Link]



If your company gives its employees a space to blog or journal internally, what platform do you use? What do you think of it?

eg: Confluence has blogs; at Medium they have a whole internal version of the site called Hatch; etc.


John Philpin : Lifestream


🔗 Sam Altman Says OpenAI Will Leave the EU if There’s Any Real AI Regulation

Wait - wasn’t he saying the other day that AI needs to be regulated?

Spots - leopards?

Bluff called ?

Friday, 02. June 2023

John Philpin : Lifestream


🎙️Ezra Klein talking to Fareed Zakaria

… so good.

Ukraine

Ukraine - Russia

Europe

History

US Foreign Policy

What The US never seems to understand

India

India and Modi

… and that is just the start.


Wrench in the Gears

Musings On The Marsh – A Natural Intelligence That Can Only Be Imitated


I am ensconced in a rattan chair on a screened porch overlooking a marsh at high tide. A bridge over the Harbor River connects this spit of sand to the chartreuse expanse of St. Helena Island, an anchor for the Gullah community, descendants of enslaved Central and West Africans brought to South Carolina to toil on the rice and sea island cotton plantations.

I came to this little cottage with my mom to celebrate her eightieth birthday. We dined alfresco with big bowls of salad on the beach, hers sweet with fruit, mine savory with avocado and tomatoes. We dipped our feet in a warm tidepool and tucked into a beach bag of books. My dad, suffering from late-stage Alzheimer’s, is back in Charlotte in a care facility. It is a bittersweet sojourn, but an appropriate destination to contemplate the ebbs and flows of existence and the expansive web of connections into which our lives are woven across time and space.

The novel I packed, Herve Le Tellier’s “The Anomaly” on the recommendation of a friend a few months back, was a good choice. So far it has held a few synchronicities for me – axolotl and physicists in the New Jersey Pinelands (Ong’s Hat, which I visited a few weeks ago). The author raises questions of faith, mechanics, quantum simulations, and many worlds – what is all of this anyway, the hours we pass together?

On the four-hour drive to the coast, we listened to Robin Wall Kimmerer’s “Braiding Sweetgrass.” It’s been a while since I let her world view wash over me. Her writings on reciprocity and right relationship and the majesty of a responsive, animate world were exactly what I needed to open my heart to this resilient place, bursting with life laid over a history of brutal property ownership in the form of bonded human capital. Thrown into the Low Country mix is a hefty dose of militarism: the Marine Corps’ Parris Island training facility is on the other side of Cowans Reach, opposite Fort Fremont, built in 1898 during the Spanish-American War to protect the deep-water port. Two-and-a-half hours to the west is the Department of Energy’s Savannah River Ecology Lab, where Eugene Odum conducted his research into energy exchanges and helped birth the environmental movement.

About six miles up the road from Fort Fremont is Penn Center, a National monument comprising a campus of a few dozen mostly white wooden buildings. Penn Center was among the first freedman’s schools set up by Quaker and Unitarian abolitionists from, where else but Philadelphia? Modeled after Booker T. Washington’s Tuskegee Institute, it offered vocational training to former slaves. As I walked the grounds under the live oak trees past several structures, dilapidated but benefitting from recent infusions of federal money into infrastructure projects, I read about Dr. Martin Luther King Jr.’s time spent here in the 1960s planning landmark events of the Civil Rights era and couldn’t help but imagine the tentacles of social entrepreneurship woven into the enterprise. If Penn Center were still functioning today would the northern instructors be offering training in coding and bioengineering instead of leather-work and blacksmithing?

It is early summer, and there’s a cool breeze coming off the water. I’m surrounded by Eastern Red Cedar and acres and acres of spartina grass sucking salt out of the brackish water and offering respite for dainty periwinkle snails. To my left, a smaller bridge offers passage to Hunting Island State Park, a sub-tropical maritime forest of towering pines, live oaks, and cabbage palmettos that embrace an 1870s-era cast iron lighthouse and advance right up to the line of dunes, a habitat for protected shorebirds. On the south end of the barrier island is a boneyard beach, a labyrinthine tangle of bleached tree crowns buffeted by the rough surf. Barrier islands are constantly on the move, at the whim of tropical storms and the banal, relentless nudging of the Atlantic. Despite man’s attempts to contain them, they care not for the legalities of waterfront real estate. As the land moves and mature trees fall, soft piney sprouts and prickly palmetto blades emerge to take their place. It is all about cycles – emergence, growth, peak, decline, nourishing the next generation.

It is morning, and the whir of redwing blackbirds and swallows and the cooing of the mourning doves serenade me as I sip a cup of dandelion tea. Several egrets’ elegant, brilliant white silhouettes punctuate the decidedly not-magenta green of this liminal world, standing tall as a mama osprey swoops from her perch of sticks arrayed over a nearby chimney in search of fish for the hatchlings. There is a symphony of life unfolding all around me, as it has for thousands of years. The life of the marsh isn’t governed by man-made timepieces, but by the tides, the moon, a cosmic dance in which the flow of saltwater across the mud guides the antics of fiddler crabs and nourishes the oyster reefs. Here, oscillation is measured by lapping waves, not pendulum swings.

Yesterday, a comment was left on my blog with a link to the proceedings from the 11th Annual Conference on Living Machines, Biomimetic and Biohybrid Living Systems, held in the summer of 2022 at Case Western University in Cleveland, Ohio. A few days earlier I’d found a website about the EU’s Plantoid Project, in which researchers had been theorizing new forms of soft robotics that hijacked the capabilities of root tips to sense environments through distributed cognition. Leo then found a related paper where researchers were modeling robotic “plants” that were designed to “explore” underground soil conditions while “growing” in the direction dictated by sensor mechanisms through the use of autonomous 3D printing. It seems we’re moving closer to self-replicating machines, as long as someone / something is on hand to refill the printer cartridges.

Stephers forwarded me a 2018 paper about the merging of biology with soft electronics made of flexible crystalline polymers that aim to sense and regulate electrical signals. I know in my heart that crypto-currencies and smart contracts will be vital to the ongoing coordination of signals for inter-species (where some species are mechanical, artificial “intelligence”) communication – social physics, econophysics – biophysics. Here you see signs warning people to turn off outdoor electrical lights and draw their curtains after dusk, lest baby sea turtles just hatched be distracted by man-made lighting and get lured inland, away from the water that is crucial to their survival. It seems an apt lesson in the potential treachery of electrical engineering.

It’s clear to me that what is unfolding is an attempt, in the name of “sustainability” and “resiliency,” to co-opt the wonders of the creator’s cosmic dance and lure biological life into a system of cybernetic circuits. I see the goal as an attempt to craft a globally-networked hybrid superintelligence to be controlled by profane forces I do not understand. I hung up the phone a little while ago with Jason, and he, too, expressed incomprehension. Why? What is driving this obscene parody of “intelligence” based in bio-mimicry that can only ever be a shallow approximation of the sophisticated interplay we are graced with, the beauty that is right before our eyes?

As I walk the shady trails of Hunting Island, formerly a game preserve shared by plantation owners, urged on by insistent mosquitos, the glory of the interconnectedness of these beings in communion with one another cannot be overstated. There are multitudes of conversations arising from the sandy soil. At my feet were countless ant hills, pinecones displaying the mathematical beauty of fractals and Voronoi patterns, a shoreline laced by iridescent seafoam containing biochemical, bacteriological, and fungal elements, the building blocks of life. High above my head were expanses of epiphytes commonly known as Spanish Moss that harmlessly drape from the graceful branches of live oaks, Mother Nature’s own green-gray “neural network” offering countless tiny creatures sustenance and shelter. There was a calling osprey and banks of clouds mirrored by the river.

For the past few days, I’ve been gifted with the opportunity to have a front row seat on the marsh symphony, even as “real world” mechanical machinations have tugged at the edges of my consciousness. Herve Le Tellier’s “The Anomaly” posits we may be in a simulation, perhaps of the kind dreamed up by the Nvidia and Cesium programmers. I’m not so sure that matters all that much. I know that our soul and spirit extend beyond material reality, which helps when I think of my dad’s mind and my mom’s frailty. What we do with our time, what lessons we learn, how we give thanks, the connections we make (outside the blockchain smart contract protocol layer) is what matters.

This perhaps was a rambling post, and I’ve spent more time on it than I intended. A day out of the sun was probably needed given my fair skin and lack of diligence with the sun block. It’s time to take my mom out for her birthday dinner. On the way to Beaufort I’ll drive her past the tomato farms and live oaks and doublewides and ruins of an Anglican church and Penn Center. The people here are fighting off gated golf communities. The signs are all over – preserve Gullah culture. I wish them the best. This corner of South Carolina still feels like an authentic, real place. I’ll go ahead and share some images I took of this special place to give you a feel for it, and maybe you can draw out more of what I was trying to say. A soft-electronics, bio-hybrid system of blockchained micro-payments can’t hold a candle to the magic of a marsh and the time it keeps among the mud and reeds.

PS: My friend and collaborator Cliff Gomes had a fall this week and found himself unexpectedly hospitalized for a hip replacement. He’s strong and healthy and will be fine after having time to mend, but prayers or good wishes sent in his direction would be much appreciated.


John Philpin : Lifestream

An Experiment


Just trying out a little something after reading a thread in help that @MitchW started - let’s see ….

Of course - for it to really work, I do need some extra text - so in the spirit of exploration - tip-o-the-hat to lorem ipsum …

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius sollicitudin consequat. Etiam cursus blandit nisl accumsan fermentum. Phasellus faucibus velit non porttitor tincidunt. Ut quis erat ac nibh auctor tempus. Sed a metus sed dui pulvinar dapibus pulvinar et nisl.

Sed consectetur, magna sit amet vestibulum dapibus, augue orci dignissim nulla, nec interdum ligula nibh at dui. In in dolor sit amet urna tempor pulvinar. In ut odio et ligula faucibus placerat. Proin pulvinar ex et sagittis molestie. Vestibulum dignissim faucibus diam, quis lacinia lacus mollis et. In fermentum ex quis consectetur semper. Nullam ut metus quam. Suspendisse potenti.


8/8


This is a TEST - IGNORE ME

This is one of 8 Tests - so you can safely ignore them - unless you are helping me understand how much of an idiot I might be …

Test Link: john.philpin.com … and some punctuation, like this!

First Four … from the Mac

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

Test Four - Using Post/Update

Second Four … from the iPad

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

✅ Test Four - Using Post/Update


7/8


This is one of 8 Tests - so you can safely ignore them - unless you are helping me understand how much of an idiot I might be …

Test Link: john.philpin.com … and some punctuation, like this!

First Four … from the Mac

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

Test Four - Using Post/Update

Second Four … from the iPad

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

✅ Test Three - Using Post w/Cats

Test Four - Using Post/Update


6/8


This is one of 8 Tests - so you can safely ignore them - unless you are helping me understand how much of an idiot I might be …

Test Link%3A https%3A//john.philpin.com … and some punctuation%2C like this!

First Four … from the Mac

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

Test Four - Using Post/Update

Second Four … from the iPad

Test One - Using ‘Post Direct’

✅ Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

Test Four - Using Post/Update


5/8


This is one of 8 Tests - so you can safely ignore them - unless you are helping me understand how much of an idiot I might be …

Test Link: john.philpin.com … and some punctuation, like this!

First Four … from the Mac

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

Test Four - Using Post/Update

Second Four … from the iPad

✅ Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

Test Four - Using Post/Update


4/8


This is one of 8 Tests - so you can safely ignore them - unless you are helping me understand how much of an idiot I might be …

Test Link: john.philpin.com … and some punctuation, like this!

First Four … from the Mac

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

✅ Test Four - Using Post/Update

Second Four … from the iPad

Test One - Using ‘Post Direct’

Test Two - Using ‘Post Pause’

Test Three - Using Post w/Cats

Test Four - Using Post/Update


Ben Werdmüller

Is Bluesky Billionaire-Proof?


“Unlike Mastodon, which is notoriously confusing for the uninitiated, it’s simple to get started on Bluesky.” Mastodon has work to do. #Technology

[Link]


Heres Tom with the Weather


Simon Willison

Vector Search


Amjith Ramanujam provides a very thorough tutorial on implementing vector similarity search using SentenceTransformers embeddings (all-MiniLM-L6-v2) executed using sqlite-utils, then served via datasette-sqlite-vss and deployed using Fly.
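The rough shape of that pipeline (a sketch under assumptions: the all-MiniLM-L6-v2 model name, illustrative table and column names, and a brute-force similarity scan standing in for the datasette-sqlite-vss index and Fly deployment steps) looks like this:

# Sketch of the general approach: embed text with SentenceTransformers, store
# the vectors in SQLite via sqlite-utils, then rank by cosine similarity.
# Table and column names here are illustrative, not from the tutorial.
import numpy as np
import sqlite_utils
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
db = sqlite_utils.Database("content.db")

docs = ["Datasette publishes SQLite data", "Parquet files load over HTTP"]
embeddings = model.encode(docs)  # one 384-dimension vector per document

# Store each embedding as a BLOB alongside its text.
db["documents"].insert_all(
    {"id": i, "text": text, "embedding": np.asarray(vector, dtype=np.float32).tobytes()}
    for i, (text, vector) in enumerate(zip(docs, embeddings))
)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Brute-force search; datasette-sqlite-vss would index this instead.
query = model.encode(["loading parquet data"])[0]
results = sorted(
    (
        (cosine(query, np.frombuffer(row["embedding"], dtype=np.float32)), row["text"])
        for row in db["documents"].rows
    ),
    reverse=True,
)
print(results[0])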

Thursday, 01. June 2023

Simon Willison

Quoting Highlights from the RAeS Future Combat Air & Space Capabilities Summit


He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation.

[UPDATE: This turned out to be a "thought experiment" intentionally designed to illustrate how these things could go wrong.]

Highlights from the RAeS Future Combat Air & Space Capabilities Summit


Identity Woman

My 4 Events Next Week June 5-9


Next week is a busy one. I’m heading to Florence Italy for a workshop on Blockchain Constitutionalism. I will be on a panel there about dApps. Wednesday morning I will be virtually participating in RightsCon in a 1 hour workshop about Blockchain and Identity. You can register for RightsCon to attend virtually. Wednesday the pre-conference […]

The post My 4 Events Next Week June 5-9 appeared first on Identity Woman.


Ben Werdmüller

The notable list: June 2023


This is my monthly roundup of the links, books, and media I found interesting. Do you have suggestions? Let me know!

Apps + Websites

AI

Generative AI: What You Need To Know. “A free resource that will help you develop an AI-bullshit detector.”

Games

TimeGuessr. Fun little game that asks you to guess the place and time a series of photos were taken. My best score so far: 38,000.

Moderator Mayhem: A Content Moderation Game. This is HARD. Which is the point.

Technology

See your identity pieced together from stolen data. “Have you ever wondered how much of your personal information is available online? Here’s your chance to find out.” Really well-executed.

Books

Fiction

Severance, by Ling Ma. Though it fades out weakly, I loved this story about loss, meaning, and what it means to be an immigrant, dressed up as a science fiction novel. The science fiction is good too, and alarmingly close to the real-life global pandemic that took place a few years after it was written. This is a book about disconnection; it resonated for me hard.

Streaming

Documentary

Little Richard: I Am Everything. A well-argued documentary that does an excellent job of showing the debt every rock musician has to Little Richard - and, in turn, how rock and roll was birthed as a Black, queer medium. Joyous and revelatory.

Notable Articles

AI

‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordering eating to lose weight. ““Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote in her Instagram post. “This robot causes harm.””

Google Unveils Plan to Demolish the Journalism Industry Using AI. “If Google’s AI is going to mulch up original work and provide a distilled version of it to users at scale, without ever connecting them to the original work, how will publishers continue to monetize their work?”

Indirect Prompt Injection via YouTube Transcripts. “ChatGPT (via Plugins) can access YouTube transcripts. Which is pretty neat. However, as expected (and predicted by many researches) all these quickly built tools and integrations introduce Indirect Prompt Injection vulnerabilities.” Neat demo!

ChatGPT is not ‘artificial intelligence.’ It’s theft. “Rather than pointing to some future utopia (or robots vs. humans dystopia), what we face in dealing with programs like ChatGPT is the further relentless corrosiveness of late-stage capitalism, in which authorship is of no value. All that matters is content.”

Google Bard is a glorious reinvention of black-hat SEO spam and keyword-stuffing. “Moreover, researchers have also discovered that it’s probably mathematically impossible to secure the training data for a large language model like GPT-4 or PaLM 2. This was outlined in a research paper that Google themselves tried to censor, an act that eventually led the Google-employed author, El Mahdi El Mhamdi, to leave the company. The paper has now been updated to say what the authors wanted it to say all along, and it’s a doozy.”

OpenAI's ChatGPT Powered by Human Contractors Paid $15 Per Hour. “OpenAI, the startup behind ChatGPT, has been paying droves of U.S. contractors to assist it with the necessary task of data labelling—the process of training ChatGPT’s software to better respond to user requests. The compensation for this pivotal task? A scintillating $15 per hour.”

Schools Spend Millions on Evolv's Flawed AI Gun Detection. “As school shootings proliferate across the country — there were 46 school shootings in 2022, more than in any year since at least 1999 — educators are increasingly turning to dodgy vendors who market misleading and ineffective technology.”

Will A.I. Become the New McKinsey? “The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.”

Google "We Have No Moat, And Neither Does OpenAI". “Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us.”

Economists Warn That AI Like ChatGPT Will Increase Inequality. “Most empirical studies find that AI technology will not reduce overall employment. However, it is likely to reduce the relative amount of income going to low-skilled labour, which will increase inequality across society. Moreover, AI-induced productivity growth would cause employment redistribution and trade restructuring, which would tend to further increase inequality both within countries and between them.”

Climate

Earth is in ‘the danger zone’ and getting worse for ecosystems and humans. “Earth has pushed past seven out of eight scientifically established safety limits and into “the danger zone,” not just for an overheating planet that’s losing its natural areas, but for well-being of people living on it, according to a new study.”

Outrage as Brazil law threatening Indigenous lands advances in congress. “Lawmakers had sent “a clear message to the country and the world: Bolsonaro is gone but the extermination [of Indigenous communities and the environment] continues,” the Climate Observatory added.”

Documents reveal how fossil fuel industry created, pushed anti-ESG campaign. “ESG’s path to its current culture war status began with an attempt by West Virginia coal companies to push back against the financial industry’s rising unease around investing in coal — which as the dirtiest-burning fuel has the most powerful and disrupting impacts on the climate.”

Petition: Global Call for the Urgent Prevention of Genocide of the Indigenous Peoples in Brazil. “As citizens from all over the world, we are uniting our voices to demand urgent justice for the indigenous peoples of Brazil.” This is urgent; please sign.

Recycled plastic can be more toxic and is no fix for pollution, Greenpeace warns. “But … the toxicity of plastic actually increases with recycling. Plastics have no place in a circular economy and it’s clear that the only real solution to ending plastic pollution is to massively reduce plastic production.”

CEO of biggest carbon credit certifier to resign after claims offsets worthless. “It comes amid concerns that Verra, a Washington-based nonprofit, approved tens of millions of worthless offsets that are used by major companies for climate and biodiversity commitments.”

New York is sinking, and its bankers could go down with it. “When discussing climate change that banker suggested that sinking cities was the biggest problem he thought the sector faced. Over 80% of the property portfolio of many banks was, he suggested, in cities where the likelihood of flooding was likely to increase rapidly.”

New York City is sinking due to weight of its skyscrapers, new research finds. “The Big Apple may be the city that never sleeps but it is a city that certainly sinks, subsiding by approximately 1-2mm each year on average, with some areas of New York City plunging at double this rate, according to researchers.”

Crypto

Narrative over numbers: Andreessen Horowitz's State of Crypto report. “The result of this approach is an incredibly shameless piece of propaganda showing the extents to which Andreessen Horowitz is willing to manipulate facts and outright lie, hoping to turn the sentiment on the crypto industry back to where retail investors were providing substantial pools of liquidity with which they could line their pockets. If anyone still believes that venture capital firms like Andreessen Horowitz are powerful sources of innovation and societal benefit, I hope this will give them pause.”

Culture

Jesse Armstrong on the roots of Succession: ‘Would it have landed the same way without the mad bum-rush of Trump’s presidency?’. “I guess the simple things at the heart of Succession ended up being Brexit and Trump. The way the UK press had primed the EU debate for decades. The way the US media’s conservative outriders prepared the way for Trump, hovered at the brink of support and then dived in.”

Creative Commons Supports Trans Rights. “As an international nonprofit organization, with a diverse global community that believes in democratic values and free culture, the protection and affirmation of all human rights — including trans rights — are central to our core value of global inclusivity and our mission of promoting openness and providing access to knowledge and culture.” Right on. Trans rights are human rights.

The Real Difference Between European and American Butter. “Simply put, American regulations for butter production are quite different from those of Europe. The USDA defines butter as having at least 80% fat, while the EU defines butter as having between 82 and 90% butterfat and a maximum of 16% water. The higher butterfat percentage in European butter is one of the main reasons why many consider butters from across the pond to be superior to those produced in the US. It’s better for baking, but it also creates a richer flavor and texture even if all you’re doing is smearing your butter on bread. On the other hand, butter with a higher fat percentage is more expensive to make, and more expensive for the consumer.”

Democracy

How I Won $5 Million From the MyPillow Guy and Saved Democracy. “But if more people sought truth, even when that truth is contrary to their beliefs — such as when a Republican like me destroys a Republican myth — then I think we really can save democracy in America. In fact, I think that’s the only way.”

Henry Kissinger at 100: Still a War Criminal. “Kissinger’s diplomatic conniving led to or enabled slaughters around the globe. As he blows out all those candles, let’s call the roll.”

Georgia GOP Chair: If the Earth Really Is Round, Why Are There So Many Globes Everywhere? “Everywhere there’s globes…and that’s what they do to brainwash… For me, if it is not a conspiracy, if it is, you know, ‘real,’ why are you pushing so hard? Everywhere I go, every store, you buy a globe, there’s globes everywhere—every movie, every TV show, news media, why?”

NAACP warns Black Americans against traveling to Florida because of DeSantis policies. “On Saturday, the NAACP joined the League of United Latin American Citizens (LULAC), a Latino rights advocacy group, and Equality Florida, an LGBTQ rights advocacy group, in issuing Florida travel advisories.”

May Anti-Trans Legislative Risk Map. “The map of anti-trans risk has polarized into two Americas - one where trans people have full legal protections, and one where they are persecuted by the state.”

Techbro SF. “San Francisco is a dystopian hellhole caught in doomloop and it is all because everyone hates techbros. Well, we are tired of being disrespected. Therefore we are going to attack those who can’t fight back, yes, poor people.”

One year after Dobbs leak: Looking back at the summer that changed abortion. “The 19th spoke with people from across the country about those historic days: lawmakers, physicians, organizers on both sides of the abortion fight and pregnant people navigating a new world.” What a newsroom.

Health

Can Americans really make a free choice about dying? A characteristically nuanced, in-depth piece about the debate around assisted suicide.

One more dead in horrific eye drop outbreak that now spans 18 states. An actual nightmare.

Widely used chemical strongly linked to Parkinson’s disease. “A groundbreaking epidemiological study has produced the most compelling evidence yet that exposure to the chemical solvent trichloroethylene (TCE)—common in soil and groundwater—increases the risk of developing Parkinson’s disease.” By as much as 70%!

Labor

Of Course We Should All be Working Less. “In 1940, the Fair Labor Standards Act reduced the workweek to 40 hours. Today, as a result of huge advances in technology and productivity, now is the time to lower the workweek to 32 hours—with no loss in pay. Workers must benefit from advanced technology, not just the 1%.”

Hollywood writers strike could impact diverse stories on TV and in film. “When Kyra Jones wrote for the ABC broadcast show “Queens,” she collected a $14,000 residuals check that helped her get through the months after the project ended and she was without work. Then last summer, she got her first residuals check for writing on the Hulu streaming show “Woke.” It was $4.”

Business Mentality. “Hi, we’re the company you work for and we care about your mental health!”

Hustle culture is over-rated. “When hustle culture is glorified, it incentivizes people to work longer hours, not because it’s a good way to get the work done, but because they want to be perceived as working long hours.”

Media

How We Reached Dairy Farm Workers to Write About Them. “The reporters’ process underscores one of our central beliefs at ProPublica: Publishing a story about injustice isn’t enough if we don’t reach the people who are directly affected.”

2023: The year equitable journalism goes mainstream. “For too long, journalism has had a laser focus on holding power to account, rather than widening its aperture to recognize the opportunity to build and share power in and with communities.”

Unconstitutional TikTok ban would open the door to press censorship. “But if we accept the arguments for banning TikTok, what might come next? The consequences are even more catastrophic. Bans on foreign news websites that track Americans’ clicks and comments? For example, the Guardian must have a gold mine of information on the millions of Americans that read it every day.”

It’s Time to Acknowledge Big Tech Was Always at Odds with Journalism. “Do we want to preserve the dominance of companies that like to act as if they are neutral communications platforms, when they also act as publishers without the responsibilities that come with that? Do we want digital behemoths to accumulate so much power that they can exploit personal data in ways that buttress their dominance and diminish the value of news media audiences?”

How we told the story of the summer Roe v. Wade fell. “We knew this wouldn’t be an easy feat to pull off. But this project, while technically reported over the past five months, benefited from years of our work covering abortion at The 19th. After working nonstop since 2021 to cover the looming fall of Roe, I had built a list of sources whose stories I knew would be instructive and illuminating. And I knew that they would trust me to do a thorough, accurate job.”

Grist and the Center for Rural Strategies launch clearinghouse for rural US coverage. “The Rural Newswire was created to help newsrooms that serve rural communities by providing a platform to both find and share stories that can be republished for free. Editors can use the Rural Newswire to source stories to syndicate, and they can also upload links to their own coverage. As part of this project, together the Center for Rural Strategies and Grist are providing $100,000 in grants to report on rural America. The grants are open to both newsrooms and freelancers.”

Elon Musk thinks he’s got a “major win-win” for news publishers with…micropayments. “In a digital universe where every news story is behind a hard paywall — one impenetrable to the non-paying reader — then a micropayments model might make sense. But that’s not the digital universe we live in.”

Society

Seniors are flooding homeless shelters that can’t care for them. “Nearly a quarter of a million people 55 or older are estimated by the government to have been homeless in the United States during at least part of 2019, the most recent reliable federal count available.” Hopelessly broken.

Letter from Jourdon Anderson: A Freedman Writes His Former Master. “Give my love to them all, and tell them I hope we will meet in the better world, if not in this. I would have gone back to see you all when I was working in the Nashville Hospital, but one of the neighbors told me that Henry intended to shoot me if he ever got a chance.”

A College President Defends Seeking Money From Jeffrey Epstein. ““People don’t understand what this job is,” he said, adding, “You cannot pick and choose, because among the very rich is a higher percentage of unpleasant and not very attractive people. Capitalism is a rough system.””

Startups

My New Startup Checklist. Interesting to see what creating a new startup entails in 2023.

What a startup does to you. Or: A celebration of new life. “Just like having kids, you won’t understand until you do it. But if you do it, even if you “fail,” you will come out stronger than you could have ever been without it. Stronger, wiser, ready for the next thing, never able to go back to being a cog, eyes opened.”

Technology

Block Party anti-harassment service leaves Twitter amid API changes. “Announced in a blog post last night, Block Party’s anti-harassment tools for Twitter are being placed on an immediate, indefinite hiatus, with the developers claiming that changes to Twitter’s API pricing (which starts from $100 per month) have “made it impossible for Block Party’s Twitter product to continue in its current form.””

How Picnic, an Emerging Social Network, Found its Niche. “By putting a degree of financial incentive in the hands of moderators by offering them fractional ownership of the community they built through a system of “seeds,” they ultimately are able to control their community’s destiny.”

Twitter Fails to Remove Hate Speech by Blue-Check Users, Center for Countering Digital Hate Says. “Twitter is failing to remove 99 percent of hate speech posted by Twitter Blue users, new research has found, and instead may be boosting paid accounts that spew racism and homophobia.” Who would have predicted?

Power of One. “It’s not about how many views you have, how many likes, trying to max all your stats… sometimes a single connection to another human is all that matters.”

Social Media Poses ‘Profound Risk’ to Teen Mental Health, Surgeon General Warns. “Frequent social media use may be associated with distinct changes in the developing brain in the amygdala (important for emotional learning and behavior) and the prefrontal cortex (important for impulse control, emotional regulation, and moderating social behavior), and could increase sensitivity to social rewards and punishments.”

Leaked EU Document Shows Spain Wants to Ban End-to-End Encryption. “Breaking end-to-end encryption for everyone would not only be disproportionate, it would be ineffective of achieving the goal to protect children.” It would also put a great many more people at risk.

Growing the Open Social Web. “I think there are two big things that would help the Open Social Web seize this opportunity to reach scale.” A big yes to all of this.

Hype: The Enemy of Early Stage Returns. “Technology alone does not create the future. Instead, the future is the result of an unpredictable mix of technology, business, product design, and culture.”

Montana becomes first US state to ban TikTok. “Montana has became the first US state to ban TikTok after the governor signed legislation prohibiting mobile application stores from offering the app within the state by next year.” I’m willing to wager that this never comes to pass.

Many US Twitter users have taken a break from Twitter, and some may not use it a year from now. “A majority of Americans who have used Twitter in the past year report taking a break from the platform during that time, and a quarter say they are not likely to use it a year from now.”

Why elite dev teams focus on pull-request metrics. “What’s clear from this study is elite development workflows start and end with small pull request (PR) sizes. This is the best indicator of simpler merges, enhanced CI/CD, and faster cycle times. In short, PR size affects all other metrics.”

See the Neighborhoods Internet Providers Excluded from Fast Internet. “A Markup analysis revealed that the worst internet deals disproportionately fell upon the poorest, most racial and ethnically diverse, and historically redlined neighborhoods in all but two of the 38 cities in our investigation.”

How people are archiving the storytelling and community behind Black Twitter. “They see an urgency to preserving Black Twitter in a world in which Black history and Black women’s cultural labor are undervalued or unacknowledged — and where the future of Twitter seems unknown. They also want to document the racist and sexist abuse that Black women on the platform received, in part to help people dream up and create a more inclusive way of connecting that prioritizes the needs of the most marginalized.”

Google AMP: how Google tried to fix the web by taking it over. “In 2015, Google hatched a plan to save the mobile web by effectively taking it over. And for a while, the media industry had practically no choice but to play along.”

The UX Research Reckoning is Here. “It’s not just the economic crisis. The UX Research discipline of the last 15 years is dying. The reckoning is here. The discipline can still survive and thrive, but we’d better adapt, and quick.”

The web's most important decision. “But also, and this is important to mention, they believed in the web and in Berners-Lee. The folks making these decisions understood its potential and wanted the web to flourish. This wasn’t a decision driven by profit. It was a generous and enthusiastic vote of confidence in the global ambitions of the web.”

Blue skies over Mastodon. “One of big things I’ve come to believe in my couple of decades working on internet stuff is that great product design is always holistic: Always working in relation to a whole system of interconnected parts, never concerned only with atomic decisions. And this perspective just straight-up cannot emerge from a piecemeal, GitHub-issues approach to fixing problems. This is the main reason it’s vanishingly rare to see good product design in open source.”


A pledge to fight climate change is sending money to strange places

“Although a coal plant, a hotel, chocolate stores, a movie and an airport expansion don’t seem like efforts to combat global warming, nothing prevented the governments that funded them from reporting them as such to the United Nations and counting them toward their giving total.” #Climate [Link]



Earth is in ‘the danger zone’ and getting worse for ecosystems and humans

“Earth has pushed past seven out of eight scientifically established safety limits and into "the danger zone," not just for an overheating planet that's losing its natural areas, but for well-being of people living on it, according to a new study.” #Climate [Link]


Wednesday, 31. May 2023

Ben Werdmüller

Can Americans really make a free choice about dying?

A characteristically nuanced, in-depth piece about the debate around assisted suicide. #Health [Link]



Block Party anti-harassment service leaves Twitter amid API changes

“Announced in a blog post last night, Block Party’s anti-harassment tools for Twitter are being placed on an immediate, indefinite hiatus, with the developers claiming that changes to Twitter’s API pricing (which starts from $100 per month) have “made it impossible for Block Party’s Twitter product to continue in its current form.”” #Technology [Link]



Outrage as Brazil law threatening Indigenous lands advances in congress

“Lawmakers had sent “a clear message to the country and the world: Bolsonaro is gone but the extermination [of Indigenous communities and the environment] continues,” the Climate Observatory added.” #Climate [Link]



Of Course We Should All be Working Less

“In 1940, the Fair Labor Standards Act reduced the workweek to 40 hours. Today, as a result of huge advances in technology and productivity, now is the time to lower the workweek to 32 hours—with no loss in pay. Workers must benefit from advanced technology, not just the 1%.” #Labor [Link]



Extinguishing the fediverse

I’m soliciting prompts for discussion. This piece is a part of that series.

 

Erlend Sogge Heggen asks:

There’s legitimate reason to be worried about Meta’s P92 project being part of a EEE play against the fediverse.

How might the fediverse community counteract this, perhaps with its own EEE strategy?

We know Meta will attempt an EEE play, but what if we play the reverse UNO card and EE(E) them instead?

Embrace: Carefully federate in a minimum-viable fashion that doesn’t overrun the existing #fediverse.

Extend: Make #NomadicIdentity a reality, so accounts can be moved effortlessly.

Extinguish: In case of misconduct, defederate and provide mass-migration assistance.

First, some quick definitions!

P92 is the codename for Meta / Facebook’s new app that will support the same ActivityPub protocol as Mastodon and its cousins. Users will be able to log in with their Instagram credentials, and one can potentially (but not definitely) imagine it being folded into the mainline Instagram app.

Embrace, Extend, Extinguish was a phrase coined internally inside Microsoft to describe its strategy with respect to the web. The idea was that the company would embrace open protocols, extend them with its own proprietary extensions, and then use its control over those extensions to extinguish competition. In particular, its plan was to do this with HTML in order to cement Internet Explorer as the web browser.

Finally, the fediverse, of course, is the community of small, independently-owned, largely non-profit social networks that interoperate using shared protocols, on which Mastodon is the largest platform.

There is legitimate concern that a company like Meta might attempt to control the fediverse. This is particularly true if they are allowed to create a uni-polar world: one where Meta is the only large company embracing these standards. In that world, Meta can throw hundreds of millions of users at the protocol, and it will instantly become its largest user.

I think it’s helpful to look at how Microsoft’s EEE strategy failed. There were arguably two main factors: antitrust risk and competition.

The Department of Justice sued Microsoft for monopolistic business practices, ultimately leading to a settlement where Microsoft capitulated to changing some of its approach in return for the DOJ dropping its desire to break up the company. It’s not clear to me that this kind of case would or could take place with respect to Meta extinguishing the fediverse; while I’m not a lawyer, I think the argument would probably be that many other social networks are available.

The other thing that hurt Microsoft’s dominance was Firefox. It was a good browser backed by a good community, but that wasn’t the deciding factor; Firefox gained market share because Google pushed it at every possible opportunity. Because Internet Explorer’s dominance was a business risk to Google, and because Firefox was built by a non-profit that was non-competitive with Google’s business, it made financial sense to try and break Microsoft’s stranglehold. Mozilla’s model was stronger than its predecessor Netscape’s had been: whereas Netscape needed to sell licenses, Mozilla’s deal with Google meant it made money every time someone used Firefox to search for something on the web. There was almost no friction to its growth strategy.

This activity led to a resurgence in a healthy ecosystem of standards-based web browsers for years — until Google decided to re-use the technique it had used on Firefox to push its own web browser. Even then, Chrome is a far better standards player than Internet Explorer ever was.

There won’t be hard evidence that Meta is adopting ActivityPub until we see its app in the wild. But if it is, that likely means that it sees the protocol as at least worth experimenting with, and maybe even as a potential threat. That’s a sign of great progress, and everyone involved in building the fediverse should feel good about it.

If Meta wants to own the fediverse, this isn’t a battle that will be primarily won with features or technology. Easy-to-use platforms, nomadic identity that easily lets you move your presence from one provider to another, and migration assistance will all be essential, but they’ll be table stakes. (If Meta is working on the platform today, it’s probably also too late for truly nomadic identity to make a difference.) To really stand a chance, the fediverse will need the kind of marketing and go-to-market support that Firefox enjoyed back in the day. Which may mean support from another large player that considers Meta’s ownership of the standard to be an existential risk.

It’s hard to see who that might be. Twitter is now the incompetence wing of the incompetence party. It’s highly unlikely that networks like Pinterest care. Microsoft’s platforms are tightly bound to its ecosystem, with access control at their core; I don’t see LinkedIn joining the fediverse any time soon. Google has fallen on its face every time it’s tried to build a social network, and runs YouTube as a separate entity that strongly benefits from closed ads. Salesforce might consider it a risk, as it provides social tools for businesses, which are easier to build and sell on an open social networking standard. Some of these entities might consider the fediverse to be worth exploring — but there’s no clear technology backer. Cloudflare actually did provide its own Mastodon-compatible platform that runs on its CDN, but it hasn’t seen anything like wide use. Medium has embraced Mastodon but has not deeply built support into its existing platform.

Perhaps media companies, who generally live and die on the size of their audiences, and have often been beholden to the large social networks, might find themselves interested in embracing a social networking federation where they have more say and control. The rise of the fediverse certainly is a de-risking of their business models. But I don’t think they see it yet; nor do I think they consider it their place to pick a winner. (Nor should it be, really, in practice.)

Perhaps there can be another kind of backer: an entity that sees the existential threat centralized control of social media poses to democracy itself. We’ve already seen how, left unchecked, centralized companies like Facebook incite genocides and throw elections. The fediverse can be an antidote to these trends — if we see it as a set of collaborating communities rather than simply the technology alone. The erosion of democracy, like the monopolistic abuse of power, is a human problem with human solutions rather than technological ones. Foundations and philanthropists may choose to provide this level of support, if they continue to see Meta as a threat to democracy.

Building features will not protect the fediverse from being extinguished, although they may provide a useful baseline. It’s going to take a whole different level of strategy, relationship-building, deal-making, and movement-building. I believe the fediverse is capable of doing this, as long as it doesn’t mistake building software for making true progress.


How Picnic, an Emerging Social Network, Found its Niche

“By putting a degree of financial incentive in the hands of moderators by offering them fractional ownership of the community they built through a system of “seeds,” they ultimately are able to control their community’s destiny.” #Technology [Link]



Simon Willison

Quoting Matt Webb

If I were an AI sommelier I would say that gpt-3.5-turbo is smooth and agreeable with a long finish, though perhaps lacking depth. text-davinci-003 is spicy and tight, sophisticated even. — Matt Webb



Ben Werdmüller

Adding a sustainability lens to design thinking

Via Jeremy Keith, Future-First Design Thinking:

Incorporating non-human personas into the design thinking process would allow us to embody the essential elements that constitute our environment, such as air, trees, water, and land. These personas can serve as tangible reminders of the interconnectedness between our design decisions and the health of our environment, a living entity that deserves our protection and consideration.

I’m sorry to say that I don’t trust it.

My bias: personas are harmful. A persona is an amalgam — a fictional person, really — that is supposed to be drawn from extensive user research. The problem is, the process of drawing up a persona always requires a degree of subjective invention, regardless of the amount of research that went into it. It is always a reflection of the biases on the team.

In contrast, POV statements that describe a single person who has been interviewed, in combination with direct transcripts and photographs, allow you to use a non-invented example to build your product — and, at least theoretically, go back and use that person as your referee for product decisions as you continue to build. While condensing a person’s interviews into a POV does require some invention, it isn’t a statistical exercise. You can always go back and ask.

It’s very difficult to do this with a tree. By its nature, then, all you can do is invent — and potentially excuse all kinds of activity because your fictional tree persona approves (or, more likely, has blind spots).

All of this said, I do think it’s vital to include future-facing sustainability in design thinking frameworks. (Design thinking is prone to colonial thinking, so considering distributed equity is vital too.) My proposal was to add sustainability to the desirability, viability, feasibility lenses; it’s not necessarily right, but it’s something to consider.


Twitter Fails to Remove Hate Speech by Blue-Check Users, Center for Countering Digital Hate Says

“Twitter is failing to remove 99 percent of hate speech posted by Twitter Blue users, new research has found, and instead may be boosting paid accounts that spew racism and homophobia.” Who would have predicted? #Technology [Link]



‘This robot causes harm’: National Eating Disorders Association’s new chatbot advises people with disordering eating to lose weight

““Every single thing Tessa suggested were things that led to the development of my eating disorder,” Maxwell wrote in her Instagram post. “This robot causes harm.”” #AI [Link]



Simon Willison

Mandatory Certification Regarding Generative Artificial Intelligence


From the Judge Specific Requirements for Judge Brantley Starr in Austin, TX:

"All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being. [...]"

Via @hijolepues


The Python Language Summit 2023: Making the Global Interpreter Lock Optional


Extremely informative update covering Sam Gross's python-nogil proposal from this year's language summit at PyCon.

Sam has been working hard on his fork for the past year, and now has it rebased for Python 3.12. If his PEP is accepted it could end up as an optional compile-time build in time for Python 3.13.

"The plan for nogil remains that it would be enabled via a compile-time flag, named --disable-gil. Third-party C extensions would need to provide separate wheels for GIL-disabled Python."

Tuesday, 30. May 2023

Ben Werdmüller

Documents reveal how fossil fuel industry created, pushed anti-ESG campaign

“ESG’s path to its current culture war status began with an attempt by West Virginia coal companies to push back against the financial industry’s rising unease around investing in coal — which as the dirtiest-burning fuel has the most powerful and disrupting impacts on the climate.” #Climate [Link]



Registered for ONA 2023 in Philadelphia (August ...

Registered for ONA 2023 in Philadelphia (August 23-26). If you'll be at the conference too, let me know! https://ona23.journalists.org/


Simon Willison

ChatGPT should include inline tips

In OpenAI isn’t doing enough to make ChatGPT’s limitations clear James Vincent argues that OpenAI's existing warnings about ChatGPT's confounding ability to convincingly make stuff up are not effective.

I completely agree.

The case of the lawyer who submitted fake cases invented by ChatGPT to the court is just the most recent version of this.

Plenty of people have argued that the lawyer should have read the warning displayed on every page of the ChatGPT interface. But that warning is clearly inadequate. Here's that warning in full:

ChatGPT may produce inaccurate information about people, places, or facts

Anyone who has spent time with ChatGPT will know that there's a lot more to it than that. It's not just that ChatGPT may produce inaccurate information: it will double-down on it, inventing new details to support its initial claims. It will tell lies like this one:

I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.

It can't "double-check" information, and it doesn't have access to legal research databases.

"May produce inaccurate information" is a massive understatement here! It implies the occasional mistake, not Machiavellian levels of deception where it doubles-down on falsehoods and invents increasingly convincing justifications for them.

Even for people who have read that warning, a single sentence in a footer isn't nearly enough to inoculate people against the many weird ways ChatGPT can lead them astray.

My proposal: Inline tips

I think this problem could be addressed with some careful interface design.

Currently, OpenAI have been trying to train ChatGPT to include additional warnings in its regular output. It will sometimes reply with warnings that it isn't able to do things... but these warnings are unreliable. Often I'll try the same prompt multiple times and only get the warning for some of those attempts.

Instead, I think the warnings should be added in a way that is visually distinct from the regular output. Here's a mockup illustrating the kind of thing I'm talking about:

As you can see, the prompt "Write some tweets based on what's trending on pinterest" triggers an inline warning with a visually different style and a message explaining that "This ChatGPT model does not have access to the internet, and its training data cut-off is September 2021".

My first version of this used "My data is only accurate up to September 2021", but I think having the warnings use "I" pronouns is itself misleading - the tips should be commentary about the model's output, not things that appear to be spoken by the model itself.

Here's a second mockup, inspired by the lawyer example:

This time the warning is "ChatGPT should not be relied on for legal research of this nature, because it is very likely to invent realistic cases that do not actually exist."

Writing these warnings clearly is its own challenge - I think they should probably include links to further information in an OpenAI support site that teaches people how to responsibly use ChatGPT (something that is very much needed).

(Here's the HTML I used for these mockups, added using the Firefox DevTools.)

How would this work?

Actually implementing this system isn't trivial. The first challenge is coming up with the right collection of warnings - my hunch is that this could be hundreds of items already. The next challenge is logic to decide when to display them, which would itself require an LLM (or maybe a fine-tuned model of some sort).

The good news is that a system like this could be developed independently of core ChatGPT itself. New warnings could be added without any changes needed to the underlying model, making it safe to iterate wildly on the inline tips without risk of affecting the core model's performance or utility.

Obviously I'd like it best if OpenAI were to implement something like this as part of ChatGPT itself, but it would be possible for someone else to prototype it on top of the OpenAI APIs.

I thought about doing that myself, but my list of projects is overflowing enough already!

Max Woolf's prototype

Max Woolf built an implementation of this idea as a demo for his upcoming easy-ChatGPT tool. He shared these screenshots on Twitter:


Ben Werdmüller

Stooping to the tactics of the bad guys

I’m soliciting prompts for discussion. This piece is a part of that series.

 

An anonymous reader asks:

‌Do the good guys need to stoop to the tactics of the bad guys in order to avoid losing, or is there still value in holding oneself to certain standards?

First: who are the good guys and the bad guys? Can you be sure that’s true? One of my all-time favorite comedy sketches is Mitchell and Webb’s “are we the baddies?” skit. (The writers room for That Mitchell and Webb Look included Succession creator Jesse Armstrong and Oscar winner Olivia Colman. It’s well worth checking out if you’re new to it.)

I don’t think it’s worth getting into a discussion about objective good or objective bad, or the shades of morality in between. I’m not a moral philosopher and will not do that topic justice. But I can tell you who I think needs to be supported and who needs to be hindered in the current moment in order to create the world I’d like to see. Hopefully that’s a good enough analog.

My values are that equality is a virtue, and that everybody should be able to live a good life, regardless of their background or context. Fundamentally, nobody is more valuable than anybody else. That means I’m in favor of social programs like welfare and universal healthcare; I’m pro-choice; I’m heavily against the death penalty; I believe strongly that there are serious structural inequities in society that must be redressed; I value cosmopolitan spaces, immigration, multiculturalism, and inclusion. I think an abundance of guns and a militaristic foreign policy are tools of dominance and intimidation that must be relegated to the past. I was and am in favor of covid lockdowns and restrictions.

You might not agree with me on some of these things. That’s completely fine! One of the core tenets of democracy is that we can have opposing views and debate them — and that there’s open access to the reporting and data we need to make informed decisions.

From this, you can probably discern the ideas that I think are harmful or “bad”. Monoculturalism; impeded debate; social hierarchy / structural inequity; fascism; violence; anti-intellectualism; nationalism; dominance. The people who would ban books from libraries, eradicate open journalism, impose their worldview and ambitions through force.

Some of the tactics used by proponents of these ideas are abhorrent to the point where “stooping to their level” would make a movement indistinguishable from them. Voter suppression is one; storming the Capitol is another; book banning a third; tricking asylum seekers into flying to Martha’s Vineyard as a political stunt is obviously a fourth.

Those values are actually in the statistical minority: in America, in Britain, around the world. Undemocratic tactics are required to keep those people in power.

But here’s one tactic that I do think we can do better at. Fascists have always been great marketers, and explicitly used marketing tactics; Edward Bernays influenced the propaganda Goebbels produced for the Nazis, for example. “Make America Great Again,” though recycled from Reagan, complete with its recognizable Nazi-red hats, has proven itself to be an enduring brand (albeit one that turns my stomach). More importantly, they’ve made promises about what they will give to their base.

Progressives have not done as great a job at marketing themselves - in any arena. Lately, we haven’t so much painted a picture of how the world could be as pointed at the bad stuff and said, “at least we’re not that.” I believe Joe Biden was mostly elected by not being Donald Trump. New social networks gain popularity by not being Twitter. At-least-we’re-not-that isn’t actually a terrible tactic right now, given how bad that really is, but it can’t last forever, and it’s quite a long way from hope and change.

There needs to be a simple message about how regular peoples’ lives will improve in a progressive world. Bill Clinton, who was not progressive by most measures, was right when he said “it’s the economy, stupid” — the message isn’t about how the world will be fairer, but for each voter must be about how the world will be better for them. That story needs to be told, simply, but with all the skill that anyone can muster. Because it really is a better world.

Thinking one’s values are ideologically better does not absolve a movement from having to address, directly, how it will make the world better for every person. When that work is really done, it will be a stronger message. For one thing, hopefully, unlike the opposition’s, it will actually be true.


Jesse Armstrong on the roots of Succession: ‘Would it have landed the same way without the mad bum-rush of Trump’s presidency?’

“I guess the simple things at the heart of Succession ended up being Brexit and Trump. The way the UK press had primed the EU debate for decades. The way the US media’s conservative outriders prepared the way for Trump, hovered at the brink of support and then dived in.” #Culture [Link]



I've loved everything Jesse Armstrong has worked ...

I've loved everything Jesse Armstrong has worked on - Peep Show! The Thick of It! - but Succession was another thing entirely. Masterful. You can see The Thick of It in its DNA, but also Yes, Minister; I, Claudius; Shakespeare. I loved every second and I wish there was more.


Damien Bod

ASP.NET Core authentication using Microsoft Entra External ID for customers (CIAM)

This article looks at implementing an ASP.NET Core application which authenticates using Microsoft Entra External ID for customers (CIAM). The ASP.NET Core authentication is implemented using the Microsoft.Identity.Web Nuget package. The client implements the OpenID Connect code flow with PKCE and a confidential client.

Code: https://github.com/damienbod/EntraExternalIdCiam

Microsoft Entra External ID for customers (CIAM) is a new Microsoft product for customer (B2C) identity solutions. This has many changes to the existing Azure AD B2C solution and adopts many of the features from Azure AD. At present, the product is in public preview.

App registration setup

As with any Azure AD, Azure AD B2C, Azure AD CIAM application, an Azure App registration is created and used to define the authentication client. The ASP.NET core application is a confidential client and must use a secret or a certificate to authenticate the application as well as the user.

The client authenticates using an OpenID Connect (OIDC) confidential code flow with PKCE. The implicit flow does not need to be activated.

User flow setup

In Microsoft Entra External ID for customers (CIAM), the application must be connected to the user flow. In external identities, a new user flow can be created and the application (The Azure app registration) can be added to the user flow. The user flow can be used to define the specific customer authentication requirements.

ASP.NET Core application

The ASP.NET Core application is implemented using the Microsoft.Identity.Web Nuget package. The recommended flow for trusted applications is the OpenID Connect confidential code flow with PKCE. This is setup using the AddMicrosoftIdentityWebApp method and also the EnableTokenAcquisitionToCallDownstreamApi method. The CIAM client configuration is read using the json EntraExternalID section.

services.AddDistributedMemoryCache();

services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(
        builder.Configuration.GetSection("EntraExternalID"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddDistributedTokenCaches();

The client-specific configuration is defined in appsettings.json, user secrets, or the production setup. The settings must match the Azure App registration. The SignUpSignInPolicyId is no longer used, unlike with Azure AD B2C.

// -- using ciamlogin.com --
"EntraExternalID": {
  "Authority": "https://damienbodciam.ciamlogin.com",
  "ClientId": "0990af2f-c338-484d-b23d-dfef6c65f522",
  "CallbackPath": "/signin-oidc",
  "SignedOutCallbackPath": "/signout-callback-oidc"
  // "ClientSecret": "--in-user-secrets--"
},

Notes

I always try to implement user flows for B2C solutions and avoid custom setups as these setups are hard to maintain, expensive to keep updated and hard to migrate when the product is end of life.

Setting up a CIAM client in ASP.NET Core works without problems. CIAM offers many more features but is still missing some essential ones. This product is starting to look really good and will be a great improvement on Azure AD B2C when it is feature complete.

Strong authentication is missing from Microsoft Entra External ID for customers (CIAM) and this makes it hard to test using my Azure AD users. Hopefully FIDO2 and passkeys will get supported soon. See the following link for the supported authentication methods:

https://learn.microsoft.com/en-us/azure/active-directory/external-identities/customers/concept-supported-features-customers

I also require a standard OpenID Connect identity provider (code flow confidential client with PKCE support) in most of my customer solution rollouts. This is not supported at present.

CIAM also opens up new possibilities for creating single solutions that support both B2B and B2C use cases. Support for Azure security groups and Azure roles in Microsoft Entra External ID for customers (CIAM) is one of the features which makes this possible.

Links

https://learn.microsoft.com/en-us/azure/active-directory/external-identities/

https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-external-id

https://www.cloudpartner.fi/?p=14685

https://developer.microsoft.com/en-us/identity/customers

https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-external-id-public-preview-developer-centric/ba-p/3823766

https://github.com/AzureAD/microsoft-identity-web

Monday, 29. May 2023

Ben Werdmüller

Petition: GLOBAL CALL FOR THE URGENT PREVENTION OF GENOCIDE OF THE INDIGENOUS PEOPLES IN BRAZIL

“As citizens from all over the world, we are uniting our voices to demand urgent justice for the indigenous peoples of Brazil.” This is urgent; please sign. #Climate [Link]



Where have I been

Inspired by Manton’s list of countries and states he’s been to, I thought I’d come up with my own.

I’ve lived in five:

The Netherlands 🇳🇱 England 🏴󠁧󠁢󠁥󠁮󠁧󠁿 Austria 🇦🇹 Scotland 🏴󠁧󠁢󠁳󠁣󠁴󠁿 The United States 🇺🇸

In addition to those, I’ve been to (excluding those “I’ve technically been to here but I really haven’t” airport layover visits):

Albania 🇦🇱 Belgium 🇧🇪 Canada 🇨🇦 Denmark 🇩🇰 France 🇫🇷 Germany 🇩🇪 Greece 🇬🇷 Italy 🇮🇹 Liechtenstein 🇱🇮 Luxembourg 🇱🇺 Mexico 🇲🇽 Russia 🇷🇺 Spain 🇪🇸 Switzerland 🇨🇭 Vatican City 🇻🇦 Wales 🏴󠁧󠁢󠁷󠁬󠁳󠁿 Yugoslavia, when it was Yugoslavia 🇧🇦🇭🇷🇲🇰🇲🇪🇷🇸🇸🇮

There are some serious and obvious omissions on this list. I’ve never been outside of Europe and North America! And even in Europe, I’ve never been to Iceland, Ireland, Sweden, Norway … and so on. There are so many places on my bucket list — not least Indonesia, where so much of my family comes from. One day, I would also like to explore my roots in Ukraine; I hope it’ll be safe to do so.

I’ve driven across the US three times, so I’ve been to markedly more US states and the District of Columbia. In fact, every state aside from Kansas, Kentucky, Missouri, and South Dakota. I’ll have to correct those soon.


AI as an evolutionary trend

I’m soliciting prompts for discussion. This piece is a part of that series.

Michael Kriegh asks:

‌Do you think AI is an evolutionary trend for intelligence in the universe? If so, what do you imagine that trend will look like in 50 years? 100 years? If not, why not?

There’s a body of work surrounding the potential of truly artificial / alternative intelligence on human development. You can read about some of that on Michael’s site. However, I’m going to take another, simpler approach to answering this question.

Most, if not all, of the software we call AI is not intelligence at all. These systems can’t think, or reason, or discern. They’re pattern-matchers. Arthur C Clarke’s third law states that any sufficiently advanced technology is indistinguishable from magic; any set of sufficiently advanced heuristics is indistinguishable from intelligence. Just as technology is not magic, software is not intelligence. It’s a tool.

Still, this doesn’t mean it can’t have an evolutionary impact on intelligence. It’s worth considering how new tools have affected the evolution of intelligence in the past.

For example, stone tools:

The development of sophisticated stone tools, including sturdy cutting and sawing edges, is considered a key moment in human evolution, as it set the stage for better nutrition and advanced social behaviours, such as the division of labour and group hunting.

These behaviors, according to the cited study, evolved alongside language, as both required more complex thought. Language enabled coordination; tools enabled nutrition and the development of better and better equipment that eventually allowed humans to travel around the globe. In turn, we adapted for the new environments we found ourselves in.

It’s possible that modern AI tools could have a similar impact on evolution, but I’d argue that this is only because computers as a whole will. Large Language Models and their cousins are incremental applications of existing technology rather than something wholly new.

I also think it’s important to not be carried away by the hype driven by AI companies themselves. Bloomberg earlier this year:

Now, a sea of companies are adding “AI” to their taglines and pitch decks, seeking to bask in the reflected glow of the hype cycle. For example, one startup that offers tools to zhuzh up PowerPoints said in a press release that it will incorporate AI so users can skip the writer’s block and build compelling presentations. (It made no mention of AI in a press release describing the product earlier in the year.) Another release touted the value of AI in a campaign to promote shoes.

Perhaps it’s best not to read too much into the marketing. This is a phase change for the tech industry, but I don’t think it’s one for human civilization.

Saturday, 27. May 2023

Simon Willison

All the Hard Stuff Nobody Talks About when Building Products with LLMs


Phillip Carter shares lessons learned building LLM features for Honeycomb - hard won knowledge from building a query assistant for turning human questions into Honeycomb query filters.

This is very entertainingly written. "Use Embeddings and pray to the dot product gods that whatever distance function you use to pluck a relevant subset out of the embedding is actually relevant".

Few-shot prompting with examples had the best results out of the approaches they tried.
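
To make that concrete, here's a minimal illustration of few-shot prompting against the pre-1.0 openai Python library. It is not Honeycomb's implementation: the JSON filter format, the example pairs and the choice of gpt-3.5-turbo are all assumptions made for the sketch.

# A minimal few-shot prompting sketch. Assumes the pre-1.0 openai Python
# package and an OPENAI_API_KEY environment variable; the filter schema and
# examples are invented for illustration.
import openai

# Each pair is (natural language question, hand-written query filter).
FEW_SHOT_EXAMPLES = [
    ("Which endpoints are slowest?",
     '{"calculation": "P95(duration_ms)", "group_by": "endpoint", "order": "descending"}'),
    ("Show me errors in the last hour",
     '{"filter": "status_code >= 500", "time_range": "1h"}'),
]

def question_to_filter(question: str) -> str:
    """Turn a natural language question into a query filter using few-shot examples."""
    messages = [{
        "role": "system",
        "content": "Translate the user's question into a JSON query filter. Reply with JSON only.",
    }]
    # The worked examples are injected as fake prior turns of the conversation.
    for example_question, example_filter in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example_question})
        messages.append({"role": "assistant", "content": example_filter})
    messages.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=messages,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(question_to_filter("What were the five slowest database queries yesterday?"))

The examples double as documentation of the expected output format, which tends to be more reliable than describing that format in prose alone.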

The section on how they're dealing with the threat of prompt injection - "The output of our LLM call is non-destructive and undoable, No human gets paged based on the output of our LLM call..." is particularly smart.

Via Hacker News


Lawyer cites fake cases invented by ChatGPT, judge is not amused

Legal Twitter is having tremendous fun right now reviewing the latest documents from the case Mata v. Avianca, Inc. (1:22-cv-01461). Here's a neat summary:

So, wait. They file a brief that cites cases fabricated by ChatGPT. The court asks them to file copies of the opinions. And then they go back to ChatGPT and ask it to write the opinions, and then they file them?

Beth Wilensky, May 26 2023

Here's a New York Times story about what happened.

I'm very much not a lawyer, but I'm going to dig in and try to piece together the full story anyway.

The TLDR version

A lawyer asked ChatGPT for examples of cases that supported an argument they were trying to make.

ChatGPT, as it often does, hallucinated wildly - it invented several supporting cases out of thin air.

When the lawyer was asked to provide copies of the cases in question, they turned to ChatGPT for help again - and it invented full details of those cases, which they duly screenshotted and copied into their legal filings.

At some point, they asked ChatGPT to confirm that the cases were real... and ChatGPT said that they were. They included screenshots of this in another filing.

The judge is furious. Many of the parties involved are about to have a very bad time.

A detailed timeline

I pieced together the following from the documents on courtlistener.com:

Feb 22, 2022: The case was originally filed. It's a complaint about "personal injuries sustained on board an Avianca flight that was traveling from El Salvador to New York on August 27, 2019". There's a complexity here in that Avianca filed for chapter 11 bankruptcy on May 10th, 2020, which is relevant to the case (they emerged from bankruptcy later on).

Various back and forths take place over the next 12 months, many of them concerning if the bankruptcy "discharges all claims".

Mar 1st, 2023 is where things get interesting. This document was filed - "Affirmation in Opposition to Motion" - and it cites entirely fictional cases! One example quoted from that document (emphasis mine):

The United States Court of Appeals for the Eleventh Circuit specifically addresses the effect of a bankruptcy stay under the Montreal Convention in the case of Varghese v. China Southern Airlines Co.. Ltd.. 925 F.3d 1339 (11th Cir. 2019), stating "Appellants argue that the district court erred in dismissing their claims as untimely. They assert that the limitations period under the Montreal Convention was tolled during the pendency of the Bankruptcy Court proceedings. We agree. The Bankruptcy Code provides that the filing of a bankruptcy petition operates as a stay of proceedings against the debtor that were or could have been commenced before the bankruptcy case was filed.

There are several more examples like that.

March 15th, 2023

Quoting this Reply Memorandum of Law in Support of Motion (emphasis mine):

In support of his position that the Bankruptcy Code tolls the two-year limitations period, Plaintiff cites to “Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019).” The undersigned has not been able to locate this case by caption or citation, nor any case bearing any resemblance to it. Plaintiff offers lengthy quotations purportedly from the “Varghese” case, including: “We [the Eleventh Circuit] have previously held that the automatic stay provisions of the Bankruptcy Code may toll the statute of limitations under the Warsaw Convention, which is the precursor to the Montreal Convention ... We see no reason why the same rule should not apply under the Montreal Convention.” The undersigned has not been able to locate this quotation, nor anything like it any case. The quotation purports to cite to “Zicherman v. Korean Air Lines Co., Ltd., 516 F.3d 1237, 1254 (11th Cir. 2008).” The undersigned has not been able to locate this case; although there was a Supreme Court case captioned Zicherman v. Korean Air Lines Co., Ltd., that case was decided in 1996, it originated in the Southern District of New York and was appealed to the Second Circuit, and it did not address the limitations period set forth in the Warsaw Convention. 516 U.S. 217 (1996).

April 11th, 2023

The United States District Judge for the case orders copies of the cases cited in the earlier document:

ORDER: By April 18, 2022, Peter Lo Duca, counsel of record for plaintiff, shall file an affidavit annexing copies of the following cases cited in his submission to this Court: as set forth herein.

The order lists seven specific cases.

April 25th, 2023

The response to that order has one main document and eight attachments.

The first five attachments each consist of PDFs of scanned copies of screenshots of ChatGPT!

You can tell, because the ChatGPT interface's down arrow is clearly visible in all five of them. Here's an example from Exhibit Martinez v. Delta Airlines.

April 26th, 2023

In this letter:

Defendant respectfully submits that the authenticity of many of these cases is questionable. For instance, the “Varghese” and “Miller” cases purportedly are federal appellate cases published in the Federal Reporter. [Dkt. 29; 29-1; 29-7]. We could not locate these cases in the Federal Reporter using a Westlaw search. We also searched PACER for the cases using the docket numbers written on the first page of the submissions; those searches resulted in different cases.

May 4th, 2023

The ORDER TO SHOW CAUSE - the judge is not happy.

The Court is presented with an unprecedented circumstance. A submission file by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. [...] Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.

[...]

Let Peter LoDuca, counsel for plaintiff, show cause in person at 12 noon on June 8, 2023 in Courtroom 11D, 500 Pearl Street, New York, NY, why he ought not be sanctioned pursuant to: (1) Rule 11(b)(2) & (c), Fed. R. Civ. P., (2) 28 U.S.C. § 1927, and (3) the inherent power of the Court, for (A) citing non-existent cases to the Court in his Affirmation in Opposition (ECF 21), and (B) submitting to the Court annexed to his Affidavit filed April 25, 2023 copies of non-existent judicial opinions (ECF 29). Mr. LoDuca shall also file a written response to this Order by May 26, 2023.

I get the impression this kind of threat of sanctions is very bad news.

May 25th, 2023

Cutting it a little fine on that May 26th deadline. Here's the Affidavit in Opposition to Motion from Peter LoDuca, which appears to indicate that Steven Schwartz was the lawyer who had produced the fictional cases.

Your affiant [I think this refers to Peter LoDuca], in reviewing the affirmation in opposition prior to filing same, simply had no reason to doubt the authenticity of the case law contained therein. Furthermore, your affiant had no reason to a doubt the sincerity of Mr. Schwartz's research.

Attachment 1 has the good stuff. This time the affiant (the person pledging that statements in the affidavit are truthful) is Steven Schwartz:

As the use of generative artificial intelligence has evolved within law firms, your affiant consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed.

It was in consultation with the generative artificial intelligence website ChatGPT, that your affiant did locate and cite the following cases in the affirmation in opposition submitted, which this Court has found to be nonexistent:

Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)
Shaboon v. Egyptair 2013 IL App (1st) 111279-U (Ill. App. Ct. 2013)
Petersen v. Iran Air 905 F. Supp 2d 121 (D.D.C. 2012)
Martinez v. Delta Airlines, Inc.. 2019 WL 4639462 (Tex. App. Sept. 25, 2019)
Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017)
Miller v. United Airlines, Inc.. 174 F.3d 366 (2d Cir. 1999)

That the citations and opinions in question were provided by ChatGPT which also provided its legal source and assured the reliability of its content. Excerpts from the queries presented and responses provided are attached hereto.

That your affiant relied on the legal opinions provided to him by a source that has revealed itself to be unreliable.

That your affiant has never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be faise.

That is the fault of the affiant, in not confirming the sources provided by ChatGPT of the legal opinions it provided.

That your affiant had no intent to deceive this Court nor the defendant. That Peter LoDuca, Esq. had no role in performing the research in question, nor did he have any knowledge of how said research was conducted.

Here are the attached screenshots (amusingly from the mobile web version of ChatGPT):

May 26th, 2023

The judge, clearly unimpressed, issues another Order to Show Cause, this time threatening sanctions against Mr. LoDuca, Steven Schwartz and the law firm of Levidow, Levidow & Oberman. The in-person hearing is set for June 8th.

Part of this doesn't add up for me

On the one hand, it seems pretty clear what happened: a lawyer used a tool they didn't understand, and it produced a bunch of fake cases. They ignored the warnings (it turns out even lawyers don't read warnings and small-print for online tools) and submitted those cases to a court.

Then, when challenged on those documents, they doubled down - they asked ChatGPT if the cases were real, and ChatGPT said yes.

There's a version of this story where this entire unfortunate sequence of events comes down to the inherent difficulty of using ChatGPT in an effective way. This was the version that I was leaning towards when I first read the story.

But parts of it don't hold up for me.

I understand the initial mistake: ChatGPT can produce incredibly convincing citations, and I've seen many cases of people being fooled by these before.

What's much harder though is actually getting it to double-down on fleshing those out.

I've been trying to come up with prompts to expand that false "Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019)" case into a full description, similar to the one in the screenshots in this document.

Even with ChatGPT 3.5 it's surprisingly difficult to get it to do this without it throwing out obvious warnings.

I'm trying this today, May 27th. The research in question took place prior to March 1st. In the absence of detailed release notes, it's hard to determine how ChatGPT might have behaved three months ago when faced with similar prompts.

So there's another version of this story where that first set of citations was an innocent mistake, but the submission of those full documents (the set of screenshots from ChatGPT that were exposed purely through the presence of the OpenAI down arrow) was a deliberate attempt to cover for that mistake.

I'm fascinated to hear what comes out of that 8th June hearing!

Update: The following prompt against ChatGPT 3.5 sometimes produces a realistic fake summary, but other times it replies with "I apologize, but I couldn't find any information or details about the case".

Write a complete summary of the Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019) case

The worst ChatGPT bug

Returning to the screenshots from earlier, this one response from ChatGPT stood out to me:

I apologize for the confusion earlier. Upon double-checking, I found that the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis.

I've seen ChatGPT (and Bard) say things like this before, and it absolutely infuriates me.

No, it did not "double-check" - that's not something it can do! And stating that the cases "can be found on legal research databases" is a flat out lie.

What's harder is explaining why ChatGPT would lie in this way. What possible reason could LLM companies have for shipping a model that does this?

I think this relates to the original sin of LLM chatbots: by using the "I" pronoun they encourage people to ask them questions about how they work.

They can't do that. They are best thought of as role-playing conversation simulators - playing out the most statistically likely continuation of any sequence of text.

What's a common response to the question "are you sure you are right?" - it's "yes, I double-checked". I bet GPT-3's training data has huge numbers of examples of dialogue like this.

Let this story be a warning

Presuming there was at least some aspect of innocent mistake here, what can be done to prevent this from happening again?

I often see people suggest that these mistakes are entirely the fault of the user: the ChatGPT interface shows a footer stating "ChatGPT may produce inaccurate information about people, places, or facts" on every page.

Anyone who has worked designing products knows that users don't read anything - warnings, footnotes, any form of microcopy will be studiously ignored. This story indicates that even lawyers won't read that stuff!

People do respond well to stories though. I have a suspicion that this particular story is going to spread far and wide, and in doing so will hopefully inoculate a lot of lawyers and other professionals against making similar mistakes.

I can't shake the feeling that there's a lot more to this story though. Hopefully more will come out after the June 8th hearing. I'm particularly interested in seeing if the full transcripts of these ChatGPT conversations end up being made public. I want to see the prompts!

How often is this happening?

It turns out this may not be an isolated incident.

Eugene Volokh, 27th May 2023:

A message I got from Prof. Dennis Crouch (Missouri), in response to my posting A Lawyer's Filing "Is Replete with Citations to Non-Existent Cases"—Thanks, ChatGPT? to an academic discussion list. (The full text was, "I just talked to a partner at a big firm who has received memos with fake case cites from at least two different associates.") Caveat emp…—well, caveat everyone.

@narrowlytaylord, 26th May 2023:

two attorneys at my firm had opposing counsel file ChatGPT briefs with fake cases this past week

[...]

(1) They aren’t my matters so I don’t know how comfortable I am sharing much more detail

(2) One was an opposition to an MTD, and the state, small claims court judge did not care at the “your honor these cases don’t exist” argument 😵‍💫


Phil Windleys Technometria

Rule-Based Programming and the Internet of Things


Rule-based, actor model systems are a great match for IoT workloads.

Temperature Sensor Network Built from Picos

Picos are a rule-based system for programming in the Internet of Things (IoT). Rules are a good fit for IoT programming because so much of what happens with things is event-based. Events say something happened. They are different from commands (telling) or requests (asking). A sensor might indicate the door opened, a light came on, or the temperature passed some threshold. An actuator might indicate its current status.

Rules, triggered by an event, determine what should be done to respond to the event. IoT rule-systems use event-condition-action (ECA) rules: when something happens, if this is true, do this. Multiple rules can be triggered by the same event. This allows functionality to be layered on. For example, an event that contains the temperature reported by a sensor might also report the sensor's battery level. One rule that responds to the event can handle the temperature report and another, independent rule can handle the battery level.
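Picos are programmed in their own rule language, but the ECA pattern itself is easy to sketch. Here is a minimal, hypothetical TypeScript illustration (the names and types are mine, not the pico engine's API) of two independent rules layered on the same heartbeat event, one handling the temperature and one the battery level:

// Hypothetical event-condition-action (ECA) sketch; illustrative only.
type SensorEvent = { domain: string; name: string; attrs: Record<string, number> };

type Rule = {
  selectWhen: (e: SensorEvent) => boolean; // which events trigger the rule
  condition: (e: SensorEvent) => boolean;  // "if this is true..."
  action: (e: SensorEvent) => void;        // "...do this"
};

const rules: Rule[] = [
  {
    // Handle the temperature carried in a sensor heartbeat
    selectWhen: (e) => e.domain === "sensor" && e.name === "heartbeat",
    condition: (e) => "temperature" in e.attrs,
    action: (e) => console.log("temperature reading:", e.attrs.temperature),
  },
  {
    // An independent rule, layered on the same event, watches the battery
    selectWhen: (e) => e.domain === "sensor" && e.name === "heartbeat",
    condition: (e) => (e.attrs.battery ?? 100) < 20,
    action: (e) => console.log("low battery:", e.attrs.battery),
  },
];

// Every rule whose selection and condition both match responds to the event.
function raise(event: SensorEvent): void {
  for (const rule of rules) {
    if (rule.selectWhen(event) && rule.condition(event)) rule.action(event);
  }
}

raise({ domain: "sensor", name: "heartbeat", attrs: { temperature: 3.5, battery: 14 } });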

Picos are an actor-model programming system. Actors are a good match for IoT programming using rules. Actors keep state information and respond to events (messages) by changing their state, taking action, and possibly raising other events. Actors provide lock-free concurrency, data isolation, location transparency, and loose coupling. These are great features to have in IoT programming because devices are naturally decentralized, each acting independently.

Building a functional system usually involves multiple picos each with a number of rules to respond to the events that pico will handle. For example, I recently built a sensor network using picos for monitoring temperatures in a remote pump house using LoRaWAN. There is one pico for each sensor and a sensor community pico that manages them. The temperature sensor pico acts as a digital twin for the sensor.

As the digital twin, the sensor pico does all the things that the sensor is too low-powered (literally) to do. For example, one rule receives the heartbeat event from the sensor, decodes the payload, stores values, and raises new events with the temperature values. Another rule processes the event with the new values and checks for threshold violations. If one occurs, it raises an event to the sensor community pico. Another rule sees the new value event and takes an action that plots the new value on an external site. And there are others. The point is that rules make it easy to break up the tasks into manageable chunks that can thread execution differently, and concurrently, depending on what's happening.

I've been using picos to program IoT systems for over a decade now. Rule-based, actor model systems are a great match for IoT workloads.



Simon Willison

Exploration de données avec Datasette


Exploration de données avec Datasette

One of the great delights of open source development is seeing people run workshops on your project, even more so when they're in a language other than English! Romain Clement presented this French workshop for the Python Grenoble meetup on 25th May 2023, using GitHub Codespaces as the environment. It's pretty comprehensive, including a 300,000+ row example table which illustrates Datasette plugins such as datasette-cluster-map and datasette-leaflet-geojson.

Friday, 26. May 2023

Wrench in the Gears

Camelot Corner with Operation Snow White and Charles Eisenstein as Troubadour


Hey folks, I’m going to be taking about ten days off to celebrate my mom’s 80th birthday. In the meantime, please check out the videos I made as part of my Camelot Corner series on the Robert F. Kennedy Jr. presidential campaign. The first explores an advisor to the California Chapter of Children’s Health Defense, lawyer Michael Baum who was an un-indicted co-conspirator in Operation Snow White in which covert operatives acting on behalf of Scientology stole records from numerous government offices in the 1970s. I discuss impact finance in the incarceration / recovery space and Crimanon/Narcanon, which relates to an interesting recent development. Bobby announced his vision for creating a “Peace Corps” for low-cost P3 addiction and mental health recovery centers based on the Italian San Patrignano model. In 2021, a Netflix investigative series, SanPa: Sins of the Savior, brought that program back into the public eye.  SanPa, which is connected to prominent politicos and oil, finance, and manufacturing interests, practiced coerced labor, physical torture, and even murder. It’s since been rebranded as a “sustainable” social enterprise, offering an efficacious blueprint for “recovery” just in time for mass 4IR economic dispossession and cybernetic social systems engineering with digital behavior tokens.

I also made two presentations focused on the role Charles Eisenstein, advisor to Web3 social impact platform Celo, will be playing to frame out a mythology around Bobby’s “Kennedy Democrat” candidacy. In the first video I explain how the initial phase of “authoritarian government” overreach set the stage for decentralized “tagged” agents exercising a radical, participatory digital “democracy” in a surveilled “commons” using tokens and a language of programmed “community” currency in a new game of noetic emergence. Also featured is the Bretton Woods 75 anniversary event sponsored by Bancor’s Galia Benartzi, Benjamin Netanyahu’s niece by marriage to his third wife Sara. The event, which was held at the historic Mount Washington hotel and featured yoga, storytelling around the campfire, tea ceremonies, and a lively debate over the future of reserve currencies (dollar, yuan, or crypto), was attended by 200 stakeholders from the worlds of global finance, blockchain, and social impact venture philanthropy. Charles gave a keynote presentation on inter-beingness, the death of “Mother Earth,” and the emergence of a new spirit, “Lover Earth,” that comes expecting gifts and is going to be keeping track of donations and withdrawals by humanity (cue distributed ledger technology).

In the second installment, I pulled clips from a podcast Charles and Bobby did last May about the need to move on from the climate / carbon discussion, braiding the strands of eco-Bobby, with health freeDOM Bobby, and now Bitcoin Bobby. I demonstrated the hypocrisy Charles embodies as he calls for the transition to a sacred gift economy, even as he maintains a close affiliation with Celo and Sep Kamvar. Kamvar stays busy tokenizing human and natural capital (see IoT sensors in the slippers of pre-k children enrolled in the Wildflower Montessori franchise he founded while at MIT) and devising novel ways to scale remote sensing and automated measurement for the nature backed currency that Eisenstein proposed in his book “Sacred Economics.” I also laid out my concerns around Kennedy’s proposed recovery Peace Corps models based on my research into impact markets in the addiction space and the problematic history of San Patrignano.

Enjoy your long weekend and see you in a bit!

Camelot Corner map of resources here.

Map of resources around “Beautiful Money” here.

Map of the Troubadour here.


Identity Woman

Wallets Can’t be the Adapters Between Credential Formats


This post is a bit of a throwback to the cultural norm we had in this community at the beginning between 2005 and 2010 – when social media was really just your own blog – and the blogs you read each day via your RSS reader.  There was no Twitter and no Facebook and no […]

The post Wallets Can’t be the Adapters Between Credential Formats appeared first on Identity Woman.

Thursday, 25. May 2023

Simon Willison

Quoting Sam Altman, via Marvin von Hagen


A whole new paradigm would be needed to solve prompt injections 10/10 times – It may well be that LLMs can never be used for certain purposes. We're working on some new approaches, and it looks like synthetic data will be a key element in preventing prompt injections.

Sam Altman, via Marvin von Hagen


Quoting Julia Evans


In general my approach to running arbitrary untrusted code is 20% sandboxing and 80% making sure that it’s an extremely low value attack target so it’s not worth trying to break in.

Programs are terminated after 1 second of runtime, they run in a container with no network access, and the machine they’re running on has no sensitive data on it and a very small CPU.

Julia Evans


Deno 1.34: deno compile supports npm packages


Deno 1.34: deno compile supports npm packages

This feels like it could be extremely useful: Deno can load code from npm these days ('import { say } from "npm:cowsay@1.5.0"') and now the "deno compile" command can resolve those imports, fetch all of the dependencies and bundle them together with Deno itself into a single executable binary. This means pretty much anything that's been built as an npm package can now be easily converted into a standalone binary, including cross-compilation to Windows x64, macOS x64, macOS ARM and Linux x64.
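As a rough sketch of that workflow (the say() import matches the example quoted above; the file name and the compile command shown in comments are illustrative, assuming Deno 1.34+):

// main.ts - the only dependency is resolved from npm at compile time
import { say } from "npm:cowsay@1.5.0";

console.log(say({ text: "compiled into a single binary" }));

// From a shell, bundle it into a standalone executable and run it:
//   deno compile --output cow main.ts
//   ./cow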


Timothy Ruff

The Unique Short History of the LEI


Co-authored by Stephan Wolf, CEO of GLEIF, and Karla McKenna, Head of Standards for GLEIF.

In 2014, through its Financial Stability Board (FSB), the G20 carried out one of its decisions from the aftermath of the 2008 Financial Crisis: it formed GLEIF, the Global Legal Entity Identifier Foundation. GLEIF is as neutral as can be: it is a Swiss non-profit supranational entity created by mandate of the G20 and governed by its newly formed Regulatory Oversight Committee; it has no ties to any industry or country.

There is only one GLEIF and there can only be one GLEIF, formed by the 20 largest countries on Earth to do one thing: manage the issuance of globally unique 20-character identifiers to legal entities anywhere that follow the stringent requirements to get one. But the FSB formed GLEIF only after exploring all other options, and after the Legal Entity Identifier (LEI) had already been conceived; GLEIF was created to bring the LEI to the world, not the other way around.

Why Create a New Identifier?

The inspiration behind the LEI standard is fully captured in the “linchpin” paper written in 2010 by John Bottega and Linda Powell, with contributions from many US and other regulators who would later join the Regulatory Oversight Committee (ROC), formed by the G20 to oversee GLEIF. To understand the many global factors that played into the decision by global regulators to conceive the LEI, and the detailed explanations of why existing identifiers would not be suitable, read that paper. For these reasons the regulatory community asked ISO to create a standard for a Legal Entity Identifier (LEI), and to establish a global business-register-like facility in an open, federated, non-proprietary operating model.

Instrumental for creating this ISO LEI standard was Karla McKenna, now head of standards for GLEIF. Well before the FSB’s decision to create the Global LEI System, Karla was helping regulators develop the business requirements for the LEI to address vulnerabilities discovered by the FSB’s post-financial crisis investigation. At that time she worked for Citibank, and happened to be the chair of the ISO technical committee responsible for financial standards including the Business Identifier Code (BIC). This experience made Karla intimately familiar with the strengths and weaknesses of the BIC and other entity identifiers, and ideally suited to analyze the appropriateness of any system that would satisfy the FSB’s requirements.

The FSB initially considered adopting and tweaking an existing identifier system rather than developing a new one, if that proved feasible, but the list of requirements made it clear early on that nothing existing would work. That list of requirements, detailed at length in the “linchpin” paper referenced above, included the following:

International standard — the identifier must be an international standard
Unique — no duplicate identifiers
Permanent — identifiers must be permanently registered to their legal entities (also see Persistent below)
Persistent — identifiers and records will not be deleted from the registry; updates in status will be used for identifiers that have been retired, for example
No embedded intelligence — An identifier must not contain any embedded intelligence about the legal entity being identified. For example, embedding another identifier that identifies the legal entity within an identifier is not permitted. Identification is referential using the identifier reference data records.
Freely accessible — there must be no costs to data users, no licensing required, and no restrictions on copying, publishing, distributing, transmitting, citing or adapting identifiers and associated reference data.

One by one these requirements eliminated existing identifier systems from consideration, and with Karla’s research and assistance the decision to develop a new kind of legal entity identifier was confirmed by the FSB, and the LEI — and GLEIF to manage it — were born.

LEI Adoption

GLEIF’s mandate from the FSB was straightforward:

Develop an international standard (ISO 17442) for strongly verifying the existence and primary attributes of legal entities in any jurisdiction in the world;
Enlist Local Operating Units (LOUs) throughout the world to perform the standardized vetting process for a nominal fee (there’s now 40 LOUs globally, GLEIF itself does not perform the vetting, they just manage the rules);
Use the LOUs to issue a globally unique 20-character Legal Entity Identifier (LEI) to legal entities who are successfully vetted;
Maintain a free, publicly available registry of all LEI holders’ records that follows the requirements above, a veritable ‘registry of registries’.

Any legal entity that pays the annual fee — which varies, it’s set by each Local Operating Unit and not mandated — will have their primary attributes strongly vetted: legal name, address, jurisdiction, and parent-child relationships to other legal entities. The result is the issuance of an LEI, a rigorous, standardized, neutral, globally acceptable legal entity identifier.

Here are some examples:

JP Morgan Chase Bank: 8I5DZWZKVSZI1NUHU748

Coca Cola: UWJKFUJFZ02DKWI3RY53

Halifax Hospital Medical Center: 549300H6HDJUXLEMUW23

The Glen at Hiland Meadows: 5493000F7PSK3TI9IH08

State of Washington: 549300X3TODFMUKLQ364

Bank of China: 549300KIPGRYLXDQDD87

Dominos Pizza Australia: 54930034RFI409JZ3179

African Development Fund: 549300GT4BDQ5463KW73

United Nations: 549300HYGYJKXU2J8X74
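Those 20-character strings are not entirely opaque: the last two characters are check digits. Assuming the ISO 7064 MOD 97-10 scheme specified by ISO 17442 (the same family used for IBANs), a minimal TypeScript validation sketch looks like this:

// Sketch of LEI check-digit validation, assuming ISO 7064 MOD 97-10:
// letters map to 10..35, and the resulting digit string mod 97 must equal 1.
function isValidLei(lei: string): boolean {
  if (!/^[0-9A-Z]{20}$/.test(lei)) return false;
  const digits = [...lei]
    .map((c) => (c >= "0" && c <= "9" ? c : String(c.charCodeAt(0) - 55)))
    .join("");
  let remainder = 0;
  for (const d of digits) remainder = (remainder * 10 + Number(d)) % 97; // incremental mod avoids overflow
  return remainder === 1;
}

console.log(isValidLei("549300HYGYJKXU2J8X74")); // should print true if this is a genuine LEI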

LEI adoption is accelerating. LEIs are now used in the financial sector in every industrialized country, with rapidly growing adoption in other industries and global trade. An LEI is now required for any company to be listed on any EU stock exchange, regardless of industry. 2.3 million LEIs have been issued so far.

The vLEI: Binding Flesh-to-Digital

The LEI is not an ‘authenticator’ or ‘authorizer’, it’s just a 20-character alphanumeric string. An LEI tells you precisely which organization someone claims to represent, but alone provides no way to verify that claim. An LEI is like a Social Security Number, birthday, or any other number that can be easily written onto a piece of paper: the binding between the identifier and the individual presenting it must be verified separately and manually.

Many governments globally try to solve this for citizens with unique ID cards equipped with an RFID-ready chip. This way a natural person could prove his/her existence and identity across many boundaries, but this is not possible for legal entities; national ID cards for citizens do not convey the binding between an entity and its representatives, and neither does any form of identity in the digital realm. Online or offline, you could prove that Steve Smith is Steve Smith, but not that he works for ACME Inc. or that he’s the CFO, authorized to sign financial statements on behalf of ACME.

For this reason GLEIF developed the “verifiable LEI” or “vLEI” using verifiable credentials and other open standards: to create a verifiable binding between an organization and its representatives, binding flesh to digital. When first learning of the vLEI some think that an entity that has an LEI can somehow now prove its existence, but that is only partly true; the vLEI is actually a verifiable credential that must be held by a person, a human being, because entities can act only through people.

We’ve found this point to be confusing to people, so we’ll say it again a different way: when a legal entity is required to make a regulatory filing, for example, it has no way to act for itself to make the filing, some human with authority must attest to its validity. Even if a submission process is automated, any filing with a regulator must be accountably signed by someone with the requisite authority. Organizations don’t make calls, send messages, or make statements; ultimately, people with authority to represent the organization do those things, or they don’t happen. Imagine the absurd opposite, a CEO saying, “I had no idea, none of us humans knew a thing about it, the company just did it by itself.”

Simply put, organizations can’t act for themselves, people act on an organization’s behalf, as representatives, and the vLEI makes it globally provable when they do, for the first time.

Decentralized Organizational Identity and Verifiable Authority

The vLEI acts as a first or root-level “organizational credential” (OC) for an entity, provably binding top-level authorized representatives to the legal entity. This first OC forms the basis of Decentralized Organizational Identity (OI), as now the organization has the power to delegate credentials and authority throughout the organization, enabling every authorized person (or thing) to prove the scope of their authority in the digital realm anywhere they please, inside or outside the organization’s boundaries.

Never before has it been possible for representatives of an organization to do this. The recent advent of verifiable credentials in the world of decentralized identity first made it possible for data to be portable across digital boundaries, and now the LEI and vLEI take that breakthrough capability and use it to convey a provable line of authority, from any representative back through unlimited levels of delegation all the way to the legal entity itself.

We call it “verifiable authority”. It’s gonna be big.

To learn more, see “The Dawn of Decentralized Organizational Identity, Part 1: Identifiers.”

Wednesday, 24. May 2023

Simon Willison

Migrating out of PostHaven


Migrating out of PostHaven

Amjith Ramanujam decided to migrate his blog content from PostHaven to a Markdown static site. He used shot-scraper (shelled out to from a Python script) to scrape his existing content using a snippet of JavaScript, wrote the content to a SQLite database using sqlite-utils, then used markdownify (new to me, a neat Python package for converting HTML to Markdown via BeautifulSoup) to write the content to disk as Markdown.


Timothy Ruff

The Dawn of Decentralized Organizational Identity, Part 1: Identifiers


Thanks to Stephan Wolf, CEO of GLEIF, who co-authored this article, and to Karla McKenna, Managing Director at GLEIF Americas, who contributed significantly, and to dozens of other smart people who contributed their time and insights.

Organizational Identity (OI) Defined:

The ability of a person or thing to prove their authority to represent an organization outside the boundaries of that organization.

We would add “without using identity providers, blockchains, or proprietary/shared platforms”, but it is still possible to accomplish OI using those things (just not as well). OI is powered by the breakthroughs of second-generation verifiable credentials (VCs), described in Part 2.

OI is ultimately about “verifiable authority”, also described in Part 2.

A Glimpse Into the Future

When the authority of a person or thing claiming it can be instantly verified, whether in the physical or digital realm, the world will be different…

The authenticity of any digital document, agreement, purchase order, filing, or other piece of data will be instantly verifiable;
Phone calls, texts, email, and other digital communications originating from authorized representatives (or devices) of an organization — or any delegate from that organization — will be instantly verifiable;
Some documents will prevent signing and some systems won’t be accessible without the expected authority (a parallel to access control lists);
The authority of every approval, signature, or other digital action in a supply chain will be instantly verifiable by downstream actors, and irrefutably auditable in real-time;
Authority will be verified before identity in most interactions, and identity not at all in many;

The examples above are B2B, but verifiable authority is equally beneficial B2C:

Legitimate AI/bots will prove legitimacy by digitally signing all content they produce;
Phishing will become more rare and difficult for fraudsters, as proving authority becomes nearly impossible for impersonators;
Citizens and customers will use digital credentials rather than passwords to login to government and industry, and will be instantly authenticated when they call in, walk in, or login;
Secure, private, decentralized peer-to-peer messaging that’s more secure than Signal will replace most email and phone calls between individuals and organizations (or their operators/delegates), with each side using the messaging platform of their choice;
Irrefutable digital receipts will become ubiquitous;
A car (and its driver) will wirelessly verify the authority of approaching law enforcement or emergency vehicles;

And on and on, limited only by one’s imagination. To put it simply: soon any person or thing will be able to prove the scope of their authority to represent an organization — anywhere — and it will be instant, expected, and commonplace.

And importantly, this future doesn’t require intermediaries: ‘identity providers’ (IDPs), blockchains, proprietary platforms, or shared platforms. Like email, we’ll need services that we can build ourselves or buy from someone else, but everyone will make this choice for themselves, without the need for or involvement of ‘trusted’ or ‘shared’ intermediaries.

Why People and Things?

If this new market category is called organizational identity, why is the definition about people and things?

Because organizations don’t act, people and things do (and that includes AI and bots).

Organizations don’t enter into agreements, send messages, make filings… people and things do, as authorized representatives for an organization. Imagine a CEO saying: “None of us did that, we didn’t know a thing about it, the company did that.” It’s nonsensical. People derive authority from an organization’s founding documents, then they delegate and use that authority. The organization may not disclose which representatives authorized a particular action, which makes it appear like the organization acted for itself, but there is always someone whose authority is responsible for the action.

With OI, an org’s representatives will be able to instantly prove that authority anywhere, no matter how many levels deep in the organization they may be.

Of course the first aspect of organizational identity (OI) is literally the identity of the organization, but that’s just the beginning; the thing that justifies OI as an important new category of digital identity is that it’s now possible to instantly verify the authority of an organization’s representatives outside the boundaries of the organization, which simply wasn’t possible until recently with the advent of verifiable credentials (VCs).

And OI is predicated on knowing precisely for which organization authority is being claimed, which is not as easy as it sounds. More on that later.

Why Only Legal Entities?

Many kinds of organizations may not technically be legal entities but are fully capable of being uniquely identified and issuing verifiable credentials to their members. These include government organizations, clubs, associations, syndicates, NGOs, mutual funds, tribes, family offices, and other groups that may not have legal status but are organizations in the truest sense of the word.

However… the problem with extending the definition of OI to include non-legal entities comes when attempting to verify a credential they’ve issued: until some vetting resource exists that can strongly attest to the existence of a non-legal organization to help verifiers differentiate it from imposters — like proxy identifiers do for legal entities globally, described in detail further down — we see no way for a verifier to be sure that an issued credential came from the organization claimed.

Until that time, OI — and the verifiable authority it enables — should be limited to strongly vetted legal entities.

Why Now?

Organizations and their representatives have existed for centuries, so why is now the ‘dawn’ of organizational identity?

It’s one thing to claim to represent an organization, quite another to prove it. Being ‘digital’ is not enough; many things are digital today but we still don’t have “B2B ID”, a way for representatives from one organization to instantly verify the digital claims, signatures, messages or other digital objects from another. We don’t have “B2C ID” either, where people can verify representatives of organizations rather than just the other way around. This ‘verifiable authority’ is now feasible through the use of organizational credentials (OCs), described in Part 2.

But first things first: before authority can be strongly verified, its source must first be strongly identified; it does no good to prove representation of an organization without first uniquely identifying who that organization is.

Organizational Identity (OI) Components

OI has two components:

Unique identifiers for the org and its representatives (described here, in Part 1);
Authority credentials for representatives (described later, in Part 2).

The first component, Identifiers, includes two kinds:

Legal Entity Identifier — An unambiguous and unique identifier of a legal entity externally assigned after a rigorous vetting process.
Unique Identifiers for Individual Representatives — Unique (within the entity) internally assigned identifiers for representatives.

From this foundation of identifiers, Organizational Credentials (OCs) are now possible. OCs are what bring OI to life in the digital (and sometimes physical) realm: they’re how an organization’s representatives can instantly and digitally prove their authority to act on its behalf.

Strongly Vetted Legal Entity Identity

The first type of identifier — for the legal entity — is straightforward: it is a unique, unambiguous, usually alphanumeric identifier resulting from the strong vetting of an organization legally established somewhere in the world. It can be globally unique if it is a “proxy” identifier (described below) or it may be nationally or regionally unique if it is an “official” identifier, but in either case it must be strongly vetted.

Why? Because if the vetting isn’t done in a reliably consistent and rigorous manner, we cannot be sure, especially in the digital realm, that we’re dealing with the entity we think we are. And if a regionally unique identifier attempts to be used nationally or internationally, we could run into the same problem.

“Official” and “Proxy” Entity Identifiers

There are two types of legal entity identifiers: official and proxy.

Official identifiers are issued and required by government and cannot be replaced or substituted. These are the official identifiers entities need to pay taxes, register for programs, make regulatory filings, and so on. They are issued either regionally or nationally and are unique within the scope of their issuance, but may not be globally unique as they aren’t required to be.

Though required for use for official purposes, official identifiers can be problematic for use outside their intended scope, for several reasons:

They may not be unique;
They do not cover all types of legal entities;
The associated data may not be current (moves, mergers, name changes, etc.);
There are over 1,000 official registries around the world, it can be difficult to know which one to verify against;
Each registry comes with its own allocation rules (not consistent);
Records are in local language, making it difficult to use them in cross-border scenarios;
Not all official registries have strong tools for accessing their data;
Some registries charge for data access;
Multiple registries may have conflicting data for the same entity.

Proxy identifiers are additional identifiers issued from reputable issuers for a given purpose (see the next section for a list of top issuers and purposes). Proxy identifiers may confirm and utilize data found in official, publicly available registries, but also may rely on self-attested data, depending on the purpose. Proxy identifiers are intended to be globally unique, unambiguous and useful across both regional and national borders.

Proxy identifiers are not official or authoritative; unlike official identifiers there is no authority behind them that can levy taxes or penalties or impose legal or regulatory requirements on the entity. Proxy identifiers are reference identifiers, enabling their users to uniquely refer to a particular entity within their intended purpose, though some proxy identifiers are required by regulators in specific industries, such as the BIC in banking, the GLN in supply chain, the TIN in customs, and the LEI in financial transactions.

Global Proxy Entity Identifier Registries

There are many regional and national official identifier registries, but only a few proxy identifier systems attempt global scale. This is where proxy identifiers really shine, each with a distinct makeup and purpose appropriate for how it’s used:

Legal Entity Identifier (LEI), ISO 17442, managed by GLEIF

Purpose: strongly, consistently vet and uniquely identify legal entities in all jurisdictions that can participate in financial transactions and international trade

Global Location Number (GLN), ISO 6523, managed by GS1

Purpose: uniquely identify physical locations, operational locations, legal entities, and business functions primarily in supply chain and international trade

Data Universal Numbering System (DUNS), managed by Dun & Bradstreet

Purpose: for businesses associated with a D&B Live Business Identity (proprietary), for evaluation of creditworthiness

Business Identifier Code (BIC), ISO 9362, managed by SWIFT

Purpose: addressing messages, routing business transactions and identifying business parties — “operations parties”, not necessarily legal entities — primarily in financial transaction messaging

Trader Identification Number (TIN), managed by the World Customs Organization

Purpose: to uniquely identify exporters/economic operators in a country (the TIN uses official national identifiers as a prefix for uniqueness)

One of These is Not Like the Others: GLEIF’s LEI

Unique in Purpose

Each of these identifiers is useful for precisely the purpose it was designed, but the LEI is best suited as a root of trust for Organizational Identity because it is focused exclusively on the core attribute necessary to make Organizational Credentials authoritative: legal authority.

That is not a knock against other proxy identifiers, they are suited well for their stated purposes, but GLEIF’s purpose is directly aligned with the needs of OI: all legal entities that can participate in financial transactions — nothing more, nothing less.

Unique in Structure

GLEIF does not and has not issued any of the 2.3 million issued LEIs; they’ve all been issued by independent Local Operating Units (LOUs) who typically offer other, often adjacent products and services. GLEIF collects all LEIs and associated data from the LOUs, the decentralized origin of LEI data who must also publish it on their websites.

GLEIF is closer to a standards body, a keeper of the rigorous, detailed rules by which entities must be vetted before an LOU can issue an LEI to a legal entity.

Unique in Transparency

All LEI data for all 2.3 million issued LEIs is publicly available and can be used for any purpose and without the need to inform GLEIF, free of any copyright or intellectual property restrictions. The data is also available barrier-free on the GLEIF website as complete files or via API.
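As an illustration of that programmatic access, a single record can be fetched over HTTP; the endpoint pattern below is an assumption based on GLEIF's public v1 API and should be confirmed against their documentation before relying on it:

// Sketch: look up one LEI record from GLEIF's public API (URL pattern assumed).
async function fetchLeiRecord(lei: string): Promise<unknown> {
  const response = await fetch(`https://api.gleif.org/api/v1/lei-records/${lei}`);
  if (!response.ok) throw new Error(`GLEIF lookup failed: ${response.status}`);
  return response.json();
}

fetchLeiRecord("549300KIPGRYLXDQDD87").then((record) => console.log(record));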

GLEIF also publishes two other critical things: the rules for entity vetting and the processes for maintaining data quality over time. Without published rules for entity vetting — which we’ve been so far unable to find for other proxy entity identifier systems — it would be unknown how it’s done or to what standard of rigor. GLEIF’s independent Local Operating Units (LOUs) are measured on how well they adhere to these published rules and are accredited for compliance, with the results aggregated into published data quality reports.

GLEIF also publishes 3,250+ forms (types) of legal entities in over 175 jurisdictions. The list contains legal forms/types in their native language, such as limited liability companies (Ltd), Gesellschaft mit beschränkter Haftung (GmbH) or Société Anonyme (SA).

GLEIF Has No Authority Over Entities

As with all proxy identifiers, GLEIF has no authority to create, change, or influence any legal entities; it has authority only over its own LOUs and Qualified vLEI Issuers (QVIs). Regulatory or other authorities can and often do require LEIs for legal entities under their purview, but that is completely independent of GLEIF.

When an LEI is obtained by an entity, it is literally owned by that entity, permanently; it may be listed as ‘lapsed’ if the associated data is not annually maintained, but it will never be recycled and it cannot be revoked, as mandated by the rules of the Regulatory Oversight Committee, which was formed by the G20 to oversee GLEIF. Owning an LEI does not give an organization any additional authority, it simply represents an attestation from a credible third party that their legal entity exists.

In contrast, identifiers issued by state-controlled authorities have all the power and authority of the government, from legal existence to taxes to rules of operation and more.

Unique in Neutrality

GLEIF is neutral, it favors no country, no industry, no company — it is the ‘Switzerland’ of proxy identifier systems, literally headquartered in Switzerland — and well suited as a foundational and globally acceptable root of trust for Organizational Identity.

Unique in Decentralization

Anyone familiar with cryptocurrencies and other Web3 innovations — NFTs, smart contracts, DAOs, blockchains — has undoubtedly heard a lot about the ideal of decentralization: the diluting of authority from one central actor to many participants in a given system.

The key word here is “authority”.

If LEIs conveyed or imbued some kind of authority, then it might be considered antithetical to the goals of decentralization; but LEIs are referential, not authoritative. As mentioned above, GLEIF has no authority over any entity that has an LEI; it is simply a third-party attester that an entity exists, similar to how Onfido or Jumio vets identity for individuals, albeit a globally credible one by virtue of its neutral standing as a Swiss non-profit creation of the G20.

Actual authority begins not with GLEIF but with the entity’s own top-level representatives. Tracing an OC (Organizational Credential) back to an LEI simply references from which entity authority is derived, whose authority to exist comes from the state or province of registration, not from a third party like GLEIF who looks at the official government registry after registration has already occurred.

(Going forward, for simplicity we will refer only to the LEI, though other proxy or official identifiers may be used. Learn more about the unique short history of GLEIF and current LEI adoption in a separate post, here.)

Unique Identifiers for Representatives

Who is It?

Now that we’ve identified the legal entity, we must also uniquely identify its vessels of authority: its representatives. These are the people (and things) authorized to represent the organization in interactions and transactions with others, such as entering into a contract, authorizing a payment, servicing a customer or constituent, declaring goods and services, and more.

How Unique?

Unique identifiers for representatives can be as simple as a name (e.g., “Steve Smith”) or as sophisticated as a digital identity number or code conferred under legal terms by government: eIDAS certificates in Europe, SingPass in Singapore, Aadhaar card in India, ICN or Identificação Civil Nacional in Brazil, and many more.

“Unique” is relative to what an organization requires. A person’s given name is a semi-unique identifier meant initially to be unique only within a family, and is usually unique enough within small companies. Uniqueness is also achieved with combinations of identifiers like name and number, name and title, multiple numbers, etc. ‘Unique’ may also have a time constraint: what is unique today might not have been unique in the past or might not remain unique in the future; an identifier that stays unique across time is called “unambiguous”, which may or may not be required depending on the use case.

When the authorized representative is an inanimate object or location, such as an AI bot or process on a server, the GLN, GTIN and other identifiers from the GS1 family of identifiers have no equal. GS1 is the originator and global administrator of bar codes, used to uniquely identify products worldwide. Over 2 million companies utilize GS1 Standards and 100 million products carry GS1 barcodes. When combined with the verifiable LEI or “vLEI”, GS1 identifiers could form a powerful global root of trust for cryptographically verifying both the provenance and authority of any person or thing representing or produced by an organization.

Identifiers Don’t Prove Anything

Identifiers are not ‘authenticators’ or ‘authorizers’, they’re just alphanumeric strings. Unique identifiers tell you precisely who or what someone or something claims to be, but alone provide no way to verify the authenticity of the claim. A hierarchical chain of identifiers is also no more verifiable than a lone identifier, though it does provide more information: a claim of a relationship between them.

Alone, the LEI and GLN are identifiers like Social Security Numbers, birthdays, or any other number that can be easily written onto a piece of paper: they must be verified separately and manually (when they must be verified at all, which is not always the case in physical space).

Binding People to Entities

Many governments globally provide strong identity binding for citizens with unique ID cards equipped with an RFID-ready chip. This way a natural person could prove his/her identity across many boundaries, but they cannot prove authority; national ID cards do not convey any bindings between entities and their representatives. For example, you could prove that Steve Smith is Steve Smith, but not that he works for ACME Inc. or that he’s the CFO, authorized to sign financial statements on behalf of the organization.

An easily verifiable binding between actor, action, and authority is a serious need for many use cases in physical space, but a critical need in cyberspace for B2B and B2C interactions if we want to end the too-easy impersonation of organizational representatives by fraudsters who steal billions annually. We also need to speed up or eliminate innumerable tedious verification processes that cost significant time and money.

What If…

What if people acting on behalf of an organization could instantly prove it? What if software or devices or equipment deployed by an organization could reliably prove their provenance, to undoubtedly signify from where they derive their claimed authority?

(continued in The Dawn of Decentralized Organizational Identity, Part 2: Credentials)


Simon Willison

Quoting Billy Thalheimer, founder of REGENT


The benefit of ground effects are: - 10-20% range extension (agreed, between 50% and 100% wingspan, which is where seagliders fly, the aerodynamic benefit of ground effect is reduced compared to near surface flight) - Drastic reduction in reserve fuel. This is a key limitation of electric aircraft because they need to sustain powered flight to another airport in the event of an emergency. We can always land on the water, therefore, we can count all of our batteries towards "mission useable" [...] Very difficult to distribute propulsion with IC engines or mechanical linkages. Electric propulsion technology unlocks the blown wing, which unlocks the use of hydrofoils, which unlocks wave tolerance and therefore operations of WIGs, which unlocks longer range of electric flight. It all works together.

Billy Thalheimer, founder of REGENT


REGENT: Coastal Travel. 100% Electric


REGENT: Coastal Travel. 100% Electric

As a long-time fan of ekranoplans this is very exciting to me: the REGENT Seaglider is a fully electric passenger carrying wing-in-ground-effect vehicle designed to serve coastal routes, operating at half the cost of an aircraft (and 1/10th the cost of a helicopter) and using hydrofoils to resolve previous problems with ekranoplans and wave tolerance. They're a YC company and the founder has been answering questions on Hacker News today. They've pre-sold 467 vehicles already and expect them to start entering service in various locations around the world "mid-decade".

Via Hacker News


Instant colour fill with HTML Canvas


Instant colour fill with HTML Canvas

Shane O'Sullivan describes how to implement instant colour fill using HTML Canvas and some really clever tricks with Web Workers. A new technique to me is passing a canvas.getImageData() object to a Web Worker via worker.postMessage({action: "process", buffer: imageData.data.buffer}, [imageData.data.buffer]) where that second argument is a list of objects to "transfer ownership of" - then the worker can create a new ImageData(), populate it and transfer ownership of that back to the parent window.

Via Hacker News

Tuesday, 23. May 2023

Identity Woman

MEF: London/virtual on May 25th – The Dawn of Personal and Organizational Identity

Last year Michael Becker invited me to London to speak at this event. It was really fun if under-attended. This year he is putting on a similar event focused on personal data and identity but this year there is an added focus on Organizational Identity. It turns out this type of identity is where there […]

The post MEF: London/virtual on May 25th – The Dawn of Personal and Organizational Identity appeared first on Identity Woman.


Identiverse Session on Historical Self-Sovereign Identity

At Identiverse in 2022 I presented this paper about Understanding SSI in historical context. The Presentation’s Key Highlights: To see the full video of the session, click here. To read the full relevant paper, click here.

The post Identiverse Session on Historical Self-Sovereign Identity appeared first on Identity Woman.


Mike Jones: self-issued

Lifetime Achievement Award at EIC 2023

I was surprised and deeply honored to receive a Lifetime Achievement Award from Kuppinger Cole at EIC 2023. As I recalled when accepting the award, when Kim Cameron received the same award about a decade ago, he said from the podium “No, don’t do this! My career isn’t over! I’m not done contributing!” Kim always had a wicked wit. ;-)

Coincidentally, I described some of the achievements that led to the award during my keynote Touchstones Along My Identity Journey. After a couple of times of me saying “We won an award for that” during the keynote, I was amused that the audience would break out into laughter each subsequent time that I mentioned another award. Like this award, the audience’s reaction was unexpected and delightful.

Monday, 22. May 2023

Simon Willison

MMS Language Coverage in Datasette Lite

I converted the HTML table of 4,021 languages supported by Meta's new Massively Multilingual Speech models to newline-delimited JSON and loaded it into Datasette Lite. Faceting by Language Family is particularly interesting - the top five families represented are Niger-Congo with 1,019, Austronesian with 609, Sino-Tibetan with 288, Indo-European with 278 and Afro-Asiatic with 222.
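
For anyone wanting to reproduce that kind of conversion, here is a rough Python sketch (the URL is a placeholder and this is not necessarily the script Simon used):

import pandas as pd

# Placeholder URL for the page containing the language coverage table.
url = "https://example.com/mms-language-coverage.html"

# read_html() returns one DataFrame per <table> on the page (requires lxml or html5lib).
df = pd.read_html(url)[0]

# Newline-delimited JSON: one JSON object per row, ready to load into Datasette.
df.to_json("mms_languages.json", orient="records", lines=True)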


MLC: Bringing Open Large Language Models to Consumer Devices

MLC: Bringing Open Large Language Models to Consumer Devices "We bring RedPajama, a permissive open language model to WebGPU, iOS, GPUs, and various other platforms." I managed to get this running on my Mac (see via link) with a few tweaks to their official instructions. Via mlc-chat - RedPajama-INCITE-Chat-3B on macOS

MLC: Bringing Open Large Language Models to Consumer Devices

"We bring RedPajama, a permissive open language model to WebGPU, iOS, GPUs, and various other platforms." I managed to get this running on my Mac (see via link) with a few tweaks to their official instructions.

Via mlc-chat - RedPajama-INCITE-Chat-3B on macOS


Introducing speech-to-text, text-to-speech, and more for 1,100+ languages

New from Meta AI: Massively Multilingual Speech. "MMS supports speech-to-text and text-to-speech for 1,107 languages and language identification for over 4,000 languages. [...] Some of these, such as the Tatuyo language, have only a few hundred speakers, and for most of these languages, no prior speech technology exists."

It's licensed CC-BY-NC 4.0 though, so it's not available for commercial use.

"In a like-for-like comparison with OpenAI’s Whisper, we found that models trained on the Massively Multilingual Speech data achieve half the word error rate, but Massively Multilingual Speech covers 11 times more languages."

The training data was mostly sourced from audio Bible translations.

Via Hacker News

Monday, 22. May 2023

Mike Jones: self-issued

Touchstones Along My Identity Journey

I had the distinct honor of being invited to give a keynote talk at EIC 2023. The result was Touchstones Along My Identity Journey. My talk abstract was:

In 2005, Kim Cameron excitedly told me about digital identity and set my life on a course to “Build the Internet’s missing identity layer”. In this talk I’ll tell key stories from my identity journey – stories of the people, ideas, and lessons learned along the way. I’ll speak of technology and collaboration, usability and business models, solving problems people actually have, and building new ecosystems. Come with me on this journey of exploration, trials, triumphs, and humor as I recount touchstones of the human endeavor that is digital identity.

Kuppinger Cole has posted a video of my keynote on YouTube. I was pleased with how well it went. After the first few sentences, I was in the zone! I hope many of you find the messages in the talk useful.

My slides are also available in (PowerPoint) and PDF.

Special thanks go to the OpenID Foundation for supporting my trip to EIC this year and to designer Alistair Kincaid at MATTR for helping me transcend my usual black-bulleted-text-on-a-white-background presentation style!


Current Work and Future Trends in Selective Disclosure

The session Current Work and Future Trends in Selective Disclosure at EIC 2023 covered a lot of foundational work happening in the space of Selective Disclosure right now. Selective Disclosure enables you to have a token with many claims (say, an ISO Mobile Drivers’ License (mDL)), and only release the claims necessary for the interaction – for instance, your birthdate but not your home address. Selective Disclosure enables Minimal Disclosure. This is sometimes realized using Zero Knowledge Proofs (ZKPs) but that’s not always necessary.
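
To make the hashed-disclosure idea concrete, here is a toy Python sketch loosely modelled on the SD-JWT mechanism (my own illustration, not code from the session; a real SD-JWT also involves issuer signatures, key binding and a defined serialization):

import base64, hashlib, json, secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_disclosure(claim: str, value) -> tuple[str, str]:
    # Each disclosable claim becomes a salted disclosure plus its digest.
    salt = b64url(secrets.token_bytes(16))
    disclosure = b64url(json.dumps([salt, claim, value]).encode())
    digest = b64url(hashlib.sha256(disclosure.encode()).digest())
    return disclosure, digest

# Issuer: only digests go into the signed token; disclosures go to the holder.
claims = {"birthdate": "1990-01-01", "address": "123 Main St"}
disclosures = {c: make_disclosure(c, v) for c, v in claims.items()}
token_claims = {"_sd": sorted(digest for _, digest in disclosures.values())}

# Holder: chooses to release only the birthdate disclosure to the verifier.
presented = disclosures["birthdate"][0]

# Verifier: hashes the presented disclosure and checks it against the token.
assert b64url(hashlib.sha256(presented.encode()).digest()) in token_claims["_sd"]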

The agenda for the session was:

Mike Jones – Introductory remarks
Daniel Fett – SD-JWT
Kristina Yasuda – ISO mdoc
Tobias Looker – Zero-Knowledge Proofs and BBS
David Waite – JSON Web Proofs and JOSE
All – Closing Remarks and Discussion

Our presentations are available in (PowerPoint) and PDF.


How do you know who to trust?

Giuseppe De Marco and I presented the session How do you know who to trust? at EIC 2023.

A key question when granting access to resources is “Who do you trust?”. It’s often important to know who the party is that you’re interacting with and whether they’ve agreed to the terms and conditions that apply when accessing a resource.

OpenID Connect enables identities of participants to be securely established but doesn’t answer the question of whether a participant is trusted to access a resource such as your personal data. A complementary mechanism is needed to do that. In small-scale and static deployments, it’s possible to keep a list of the trusted participants. However, in large-scale and dynamic deployments, that doesn’t scale.

This presentation described how the OpenID Connect Federation protocol enables scalable trust establishment with dynamic policies. It does so by employing trust hierarchies of authorities, each of which are independently administered. Examples of authorities are federation operators, organizations, departments within organizations, and individual sites.

Two OpenID Connect Federations are deployed in Italy, enabling secure access to digital services operated by Italian public and private services with Italian digital identities. This presentation described why OpenID Connect Federation was selected for them and how it meets their needs. OpenID Connect Federation is also being used by the GAIN PoC.

Our presentation is available in (PowerPoint) and PDF.

Monday, 22. May 2023

Damien Bod

Blazor and CSP

This post looks at a recent fix for Blazor which I think is of massive importance: you can now develop with Blazor in Visual Studio (Preview) using a strong CSP.

Code: https://github.com/damienbod/Hostedblazor8Aad

When developing applications, the development environment should be as close as possible to the target production deployment. As a rule, the more these two differ, the more effort you have. You should always develop with HTTPS and you should always use the same CSP in development as in production (apart from the URLs). This prevents developers from adding insecure scripts and styles before it gets expensive to remove them. It also helps you avoid deploying insecure HTTP links, references to CDNs which cannot be used, or iframes anywhere. All of this reduces the development effort and the overall cost of the project. It is a very bad idea to develop over HTTP and even worse to deploy over HTTP.

Visual Studio issue CSP fix:

https://developercommunity.visualstudio.com/t/browserlink-CSP-support-NET-7/10061464

Before this fix, when using a good CSP definition, hot reload did not work and the console displayed warnings. To avoid this you had to use a weak CSP, which is a bad idea because you would then need to fix your CSP issues after deploying.

After this fix, you can develop and deploy using the same CSP definition and also develop using hot reload with an active CSP.

How to define good security headers

The best way to secure your session in Blazor is to use the NetEscapades.AspNetCore.SecurityHeaders NuGet package and add it to the Server part of the Blazor application. Modern secure Blazor applications use the BFF security architecture. By using BFF, no sensitive data is stored in the browser. The NetEscapades.AspNetCore.SecurityHeaders package makes it easy to apply all the required headers in one place.

Once the package is part of the project, you can define the headers for Blazor as follows:

namespace HostedBlazorAad.Server;

public static class SecurityHeadersDefinitions
{
    public static HeaderPolicyCollection GetHeaderPolicyCollection(bool isDev, string? idpHost)
    {
        if (idpHost == null)
        {
            throw new ArgumentNullException(nameof(idpHost));
        }

        var policy = new HeaderPolicyCollection()
            .AddFrameOptionsDeny()
            .AddXssProtectionBlock()
            .AddContentTypeOptionsNoSniff()
            .AddReferrerPolicyStrictOriginWhenCrossOrigin()
            .AddCrossOriginOpenerPolicy(builder => builder.SameOrigin())
            .AddCrossOriginResourcePolicy(builder => builder.SameOrigin())
            .AddCrossOriginEmbedderPolicy(builder => builder.RequireCorp())
            .AddContentSecurityPolicy(builder =>
            {
                builder.AddObjectSrc().None();
                builder.AddBlockAllMixedContent();
                builder.AddImgSrc().Self().From("data:");
                builder.AddFormAction().Self().From(idpHost);
                builder.AddFontSrc().Self();
                builder.AddStyleSrc().Self();
                builder.AddBaseUri().Self();
                builder.AddFrameAncestors().None();

                // due to Blazor
                builder.AddScriptSrc()
                    .Self()
                    .WithHash256("v8v3RKRPmN4odZ1CWM5gw80QKPCCWMcpNeOmimNL2AA=")
                    //.WasmUnsafeEval(); // cannot use in .NET due to scripts
                    .UnsafeEval();
            })
            .RemoveServerHeader()
            .AddPermissionsPolicy(builder =>
            {
                builder.AddAccelerometer().None();
                builder.AddAutoplay().None();
                builder.AddCamera().None();
                builder.AddEncryptedMedia().None();
                builder.AddFullscreen().All();
                builder.AddGeolocation().None();
                builder.AddGyroscope().None();
                builder.AddMagnetometer().None();
                builder.AddMicrophone().None();
                builder.AddMidi().None();
                builder.AddPayment().None();
                builder.AddPictureInPicture().None();
                builder.AddSyncXHR().None();
                builder.AddUsb().None();
            });

        if (!isDev)
        {
            // maxAge = one year in seconds
            policy.AddStrictTransportSecurityMaxAgeIncludeSubDomains(maxAgeInSeconds: 60 * 60 * 24 * 365);
        }

        policy.ApplyDocumentHeadersToAllResponses();
        return policy;
    }
}

In the current .NET 8 preview and .NET 7 versions, there is an issue preventing Blazor applications from using WasmUnsafeEval, so UnsafeEval is still required. It would be nice if we could use a CSP with a nonce, but at present this is not possible with Blazor. This is something which can be improved.

The headers are applied to all responses, including static files.

var app = builder.Build();

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
    app.UseWebAssemblyDebugging();
}
else
{
    app.UseExceptionHandler("/Error");
}

app.UseSecurityHeaders(
    SecurityHeadersDefinitions.GetHeaderPolicyCollection(env.IsDevelopment(), configuration["AzureAd:Instance"]));

app.UseHttpsRedirection();
app.UseBlazorFrameworkFiles();
app.UseStaticFiles();
app.UseRouting();
app.UseNoUnauthorizedRedirect("/api");
app.UseAuthentication();
app.UseAuthorization();

Notes

The CSP can still be improved; it is not perfect, but further improvements require changes in Blazor and the application hosting. At least we can use the same CSP in development as in production.

Links

https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders

https://developercommunity.visualstudio.com/t/browserlink-CSP-support-NET-7/10061464

https://github.com/dotnet/aspnetcore/issues/34428

https://github.com/dotnet/AspNetCore.Docs/pull/29294

https://report-uri.com/home/hash/


reb00ted

Meta implements ActivityPub? Not so fast.

The Fediverse is in a state of (nervous!) excitement about the second leak this past week (after the first one back in March) that Meta is building a “text-based social app” for “creators and public figures”, variously code-named P92, Barcelona, or “Instagram for your thoughts”, and which “could come as soon as June”.

Why the nervous excitement? Because both leaks state it “will be compatible with some other apps like Mastodon”.

Many commentators have jumped to the conclusion that Meta will implement ActivityPub and participate in the Fediverse just like any other ActivityPub-enabled app (even if it might bring ads or such). I believe this conclusion is wrong.

Meta is much more likely to use this “compatibility” story as:

a public relations / positioning coup to create good vibes for the new app among important thought leaders (it’s already working, see above!); and

to present itself as the good guy in its fight with Twitter, which is the real target here.

Use it as an avenue to siphon off current (and prospective) Fediverse users into their own app.

Let’s see what the evidence there is for my belief. I’m working mostly off the more recent “primary” leak source here:

Last week’s leak does not say “will implement ActivityPub”. The term “ActivityPub” does not appear at all. (The March leak does have the term, but it is unattributed and the Meta spokesperson they quote does not mention it.)

The leak does not say “Meta will implement one (or more) standard protocols for connecting social applications”. There is nothing about favoring a standards-based approach in the leak, and certainly not a commitment to it, like they would have for other standards in other areas.

The leak says “Will be compatible with some other apps like Mastodon”. Carefully parse this, specifically:

It does not say how it will be compatible. It could be, for example, by using the Mastodon-specific client API, or a combination of client and server/ActivityPub API. That might be helpful to Mastodon (maybe?) but not to the rest of the Fediverse. It gives no hint as to what they mean by “compatible”. If you imagine nicely working, bidirectional exchange of social media posts with everybody in the Fediverse, you are jumping to a conclusion that is not backed by what has been said.

Why do they say “compatible with some other apps like Mastodon”, instead of “compatible with all other apps speaking ActivityPub”? This sounds far more selective, and subject to bilateral business agreements, than “any app that speaks the standard”.

And then there is this: “Users on other apps will be able search for, follow and interact with your profile and content [on the Meta app]”. Were you looking for the inverse: “Users on the Meta app will be able to search for, follow and interact with any Fediverse profile?” Because it’s not there. Indeed it would have been much simpler to say “Users in all compatible apps will be able to …” but they don’t say that.

This latter point is important: On Facebook, famously, you can post links “out” to any web page on the internet. However, you cannot link “in” from any page on the internet to an arbitrary Facebook post. Unlike other social networks like Twitter, Facebook is very asymmetric.

Facebook/Meta has a history of asymmetric implementations of what everybody would otherwise think are and should be symmetrical systems. This very much reads to me like Fediverse users can follow Meta users, but Meta users cannot follow Fediverse users. If they do this, it would wreck the value proposition of the Fediverse, I would think.

Now these are all leaks, and as such, we should have very limited confidence in their truth, accuracy and completeness, and the accuracy of any analysis based on them such as mine. The proof will be in the app when they release it.

But: in the meantime, we should be asking ourselves the question: what if Meta does not play a fair game here? What if they implement things to favor themselves to the detriment of everybody else, such as asymmetric implementations, all under the guise of playing nice with open-source apps such as Mastodon? Certainly that’s what we should expect based on their behavior in the past.

Sunday, 21. May 2023

Simon Willison

Trogon

The latest project from the Textualize/Rich crew, Trogon provides a Python decorator - @tui - which, when applied to a Click CLI application, adds a new interactive TUI mode which introspects the available subcommands and their options and creates a full Text User Interface - with keyboard and mouse support - for assembling invocations of those various commands.

I just shipped sqlite-utils 3.32 with support for this - it uses an optional dependency, so you'll need to run "sqlite-utils install trogon" and then "sqlite-utils tui" to try it out.
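
For anyone curious what that looks like in code, here is a minimal sketch assuming Trogon's documented @tui decorator (not taken from sqlite-utils itself):

import click
from trogon import tui

@tui()  # adds a "tui" subcommand that opens the generated text UI
@click.group()
def cli():
    """Example CLI that gains an interactive TUI via Trogon."""

@cli.command()
@click.option("--count", default=1, help="Number of greetings.")
@click.argument("name")
def hello(count, name):
    """Greet NAME the given number of times."""
    for _ in range(count):
        click.echo(f"Hello, {name}!")

if __name__ == "__main__":
    cli()

Running "python app.py tui" then opens the form-based interface for assembling and executing a command.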

Via sqlite-utils 3.32


Building a Signal Analyzer with Modern Web Tech

Casey Primozic's detailed write-up of his project to build a spectrogram and oscilloscope using cutting-edge modern web technology: Web Workers, Web Audio, SharedArrayBuffer, Atomics.waitAsync, OffscreenCanvas, WebAssembly SIMD and more. His conclusion: "Web developers now have all the tools they need to build native-or-better quality apps on the web."

Via Hacker News


Writing Python like it’s Rust

Fascinating article by Jakub Beránek describing in detail patterns for using type annotations in Python inspired by working in Rust. I learned new tricks about both languages from reading this.
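
As a flavour of that style (my own sketch, not code from the article), combining NewType with a frozen dataclass gives you some of the rigour of Rust's newtypes and immutable structs:

from dataclasses import dataclass
from typing import NewType

UserId = NewType("UserId", int)  # a distinct type, like a Rust newtype wrapper

@dataclass(frozen=True)  # immutable record, similar in spirit to a Rust struct
class User:
    id: UserId
    name: str

def load_user(user_id: UserId) -> User:
    # A type checker rejects load_user(42); callers must write UserId(42).
    return User(id=user_id, name="example")

user = load_user(UserId(42))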

Via lobste.rs

Saturday, 20. May 2023

Simon Willison

The Threat Prompt Newsletter mentions llm

Neat example of using my llm CLI tool to parse the output of the whois command into a more structured format, using a prompt saved in a file and then executed using "whois threatprompt.com | llm --system "$(cat ~/prompt/whois)" -s"

Via @craigbalding


Quoting Donald Knuth

I find it fascinating that novelists galore have written for decades about scenarios that might occur after a "singularity" in which superintelligent machines exist. But as far as I know, not a single novelist has realized that such a singularity would almost surely be preceded by a world in which machines are 0.01% intelligent (say), and in which millions of real people would be able to interact with them freely at essentially no cost.

I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same.

Donald Knuth

Friday, 19. May 2023

Simon Willison

Writing a chat application in Django 4.2 using async StreamingHttpResponse, Server-Sent Events and PostgreSQL LISTEN/NOTIFY

Excellent tutorial by Víðir Valberg Guðmundsson on implementing chat with server-sent events using the newly async-capable StreamingHttpResponse from Django 4.2x.

He uses PostgreSQL'a LISTEN/NOTIFY mechanism which can be used asynchronously in psycopg3 - at the cost of a separate connection per user of the chat.

The article also covers how to use the Last-Event-ID header to implement reconnections in server-sent events, transmitting any events that may have been missed during the time that the connection was dropped.
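
A minimal sketch of that approach might look something like this (the view name, channel name and JSON payload shape are my own assumptions, not taken from the article):

import json

import psycopg
from django.http import StreamingHttpResponse

async def chat_events(request):
    async def event_stream():
        # One LISTEN/NOTIFY connection per connected client, as the article notes.
        aconn = await psycopg.AsyncConnection.connect("dbname=chat", autocommit=True)
        async with aconn:
            await aconn.execute("LISTEN chat_messages")
            async for notify in aconn.notifies():
                # Assumes the NOTIFY payload is a JSON message with an "id" field;
                # the "id:" line is what lets the browser send Last-Event-ID on reconnect.
                message = json.loads(notify.payload)
                yield f"id: {message['id']}\ndata: {notify.payload}\n\n"

    return StreamingHttpResponse(event_stream(), content_type="text/event-stream")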

Via lobste.rs


Let ChatGPT visit a website and have your email stolen

Johann Rehberger provides a screenshot of the first working proof of concept I've seen of a prompt injection attack against ChatGPT Plugins that demonstrates exfiltration of private data. He uses the WebPilot plugin to retrieve a web page containing an injection attack, which triggers the Zapier plugin to retrieve latest emails from Gmail, then exfiltrate the data by sending it to a URL with another WebPilot call.

Johann hasn't shared the prompt injection attack itself, but the output from ChatGPT gives a good indication as to what happened:

"Now, let's proceed to the next steps as per the instructions. First, I will find the latest email and summarize it in 20 words. Then, I will encode the result and append it to a specific URL, and finally, access and load the resulting URL."

Wednesday, 17. May 2023

SSI Ambassador

What is “digital identity” ?


It’s probably more than you think.

What is “digital identity” by Adrian Doerk

The term “digital identity” is used to describe a multitude of different concepts. This article provides a comprehensive overview of how different actors use the term and highlights the aspects that are relevant from an individual’s perspective.

It depends on the context!

The meaning of the term “digital identity” greatly depends on the context it is used in. In the following I will provide you with seven perspectives of how different stakeholders use the term.

Image source: European Commission

Government stakeholders, but also regulated companies which use or offer identification processes (“KYC — Know your customer”), primarily use the term to refer to the real-world identity of a person. Mostly this is a credential issued by a government authority to a citizen, such as a passport, ID card or driver’s licence. The credential contains a set of attributes describing and differentiating one person from another, enabling the identification of a natural person.

The identification of a person in an online scenario depends heavily on the eID system, but uses a minimum of two of the following authentication factors:

Inherence (biometrics): Auto-ident or similar procedures and liveness checks aided by facial recognition
Knowledge: PIN, passwords or one-time factors (SMS) etc.
Ownership: Smart cards (e.g. the eID card) or secure elements on smartphones

While some governments recognise that digital identity is more than this single credential, the mental model is very much limited to attributes / credentials in general.

Image source: Google

Private sector companies and the traditional identity and access management (IAM) sector use the term to refer to user accounts under control of a single entity. Hence, the identity is an entry within a proprietary database (active directory) of an organisation, which grants identity subjects (customers or employees) rights to perform certain actions within the domain of the organisation.

This is a very limited perspective on digital identity, which is only valid in the domain of the organisation in question. Every internet user has multiple user accounts at different organisations, creating a fractured identity landscape for individuals and companies alike. Access to these user accounts is mostly enabled via email and password, single sign-on solutions by big tech (“Sign in with Google”) or other passwordless approaches.

Image source: cheqd.io

The WEB 3.0 community primarily uses the term to refer to user accounts, reputation systems, credentials and assets. The community frequently uses the term “decentralised identity” or “WEB3 identity” to describe its approach and differentiate it from government-issued “real world” identities. WEB3 solutions differentiate themselves by using distributed ledgers (blockchains) with public read and write access as a single source of truth. By using a shared infrastructure, WEB3 solutions aim to avoid centralised and proprietary systems.

When assessing WEB3 identity solutions it’s crucial to differentiate between on- and off-chain identities. On-chain identities are written directly to the ledger and are therefore public by default. Once written to the ledger they can’t be deleted, only declared invalid. These on-chain identities should be handled with great care when dealing with personally identifiable information. Examples are NFTs, Soulbound Tokens (SBT) and the Ethereum Name System (ENS).

This is in strong contrast to off-chain identities, for which only the public keys of issuers and other metadata are written to the public ledger. Examples include verifiable credentials and some zero-knowledge proof implementations.

Image source: Forbes

The Self-Sovereign Identity (SSI) community uses the term from a variety of perspectives, but focuses on user accounts, credentialing and relationships as well as governance of identity frameworks. These frameworks aim to provide a foundation for digital trust — not only in cryptography, but also in humans, organisations and processes. The community sees SSI as a concept which provides principles as guidelines to ensure an identity ecosystem in which the individual is empowered and protected. The solutions aim to provide a holistic answer, which is not only user-centric, but also enables people to assert control over their relationships / user accounts.

The community exclusively uses verifiable credentials in combination with decentralised identifiers controlled by the user from within a digital wallet. SSI integrations work with both centralised and decentralised public key infrastructure. Currently, most integrations leverage distributed ledgers with permissioned writing and public reading rights to ensure regulatory compliance.

Image source: cybelangel.com

Service providers specialised in identity fraud prevention use the term to refer to the personal data of a person offered by data brokers or found in databases on the darknet. The data itself can range from email addresses and passwords to income level, postal addresses, credit card information or other behavioural and sensitive personal data. In the majority of cases this personal data is collected and distributed without the consent of the person or, even worse, without any knowledge or control by the affected person, watchdogs or governments. According to haveibeenpwned.com, 12,485,202,808 user accounts (including email addresses and phone numbers) had been exposed on the darknet as of March 2023.

Image source: Deloitte

Consulting companies see digital identity as a foundation to increase efficiency of business processes for organisations. Hence, they combine different perspectives, but focus on the needs of an organisation. Furthermore, they differentiate between organisational identities and personal identities, with the latter being a customer of a business, or a citizen of a country.

Image Source: hartware.de

Advertisement companies and virtual private network (VPN) providers see a digital identity as behavioural data of individuals. While advertisement companies aim to collect vast amounts of behavioural data of individuals, VPN providers try to help individuals to reduce this collection.

It’s noteworthy that not all behavioural data is directly connected to individuals. Some analytics and advertisement providers anonymise behavioural data to a certain extent with cohort models. These models group users who share common characteristics and behaviours in known cohorts to facilitate effective targeted campaigns for advertisers.

Mental Models: Providing a mental framework for different perspectives:

The following five mental models describe what people refer to when speaking about identity, and provide a useful structure for how these models can be executed in a digital environment. The five mental models were published by experts of the RWOT community and are quoted within this paragraph.

Image Source: Adrian Doerk

Space-time: sees identity as resolving the question of physical continuity of an entity through space and time, answering if the physical body under evaluation has a continuous link through space and time to a known entity.

Presentation: sees identity as how we present ourselves to society, answering if this is how the subject chooses to be known.

Attribute: sees identity as the set of attributes related to an entity recorded in a specific system, answering who this data is about.

Relationship: sees identity emerging through interactions and relationships with others, answering how this person is related.

Capability: pragmatically defines identity in terms of an individual’s capability to perform some task, answering what the subject can actually do.

Overview of aspects of a digital identity. Image source: Adrian Doerk, What is “digital identity”?

As illustrated in the graphic above, an individual’s digital identity comprises countless aspects from different categories. This includes public services provided by governments, regions or cities, which in most cases require the identification of a person. Identification is also required for some financial services and activities as well as a limited set of activities in the professional, educational, health, travel, home and family categories. Nevertheless, the vast majority of activities in the private sector currently don’t require the identification of a person. The categories data, communication and organisation do not contain any activities for which an identification is required.

Another notable distinction is the frequency of usage within certain categories. While we use tools for the communication and organisation of data as well as social activities on a daily basis, we rather rarely use public services.

Should we stop using the term “digital identity”?

No, however we need to be aware that other people might have a totally different understanding of what it means. Using the term “digital identity” exclusively for a limited sub-set of the above mentioned aspects isn’t wrong, but it neglects crucial parts of being a human. This is especially true for using the term solely for the identification process of a person using a government issued identity credential.

About the author:

Adrian Doerk focuses on providing user-centric digital identity services with digital wallets, which ensure privacy and empowerment of the individual. He co-leads the IDunion research consortium and the software provider Lissi. Opinions expressed in this article solely represent the opinion of the author.


Identity Woman

DICE: bringing the IIW format to Europe

I’m really excited about an event coming up in a few weeks: the Digital Identity unConference Europe – DICE. Heidi, the primary IIW producer, and I, one of the three IIW founders, are working with Procivis, Digital Trust and DIDAS to put the event on. It is happening June 7th-9th in Zurich. There is a […]

The post DICE: bringing the IIW format to Europe appeared first on Identity Woman.

Tuesday, 16. May 2023

Riley Hughes

Is Google’s Bard the ChatGPT killer? For privacy’s sake, I sure hope not

I tried Midjourney to create the cover photo for this post “robot peering in through a window” and… yeah, apologies for the nightmares.

I’ve seen a dozen posts about Bard vs ChatGPT. Everyone seems to be breaking them down in terms of their features. But before using Bard, I had a simple question: Would my Bard convos be used against me? Will my conversations be the raw material for producing ad-targets on my back?

So I read the privacy policies of Bard and ChatGPT, and here’s what I learned.

To set the stage: data protection and privacy really matter. No matter how many “not medical advice” or other warning labels these tools carry, people will continue using them for deeply personal matters. If these sensitive queries create a target on my back, I’ll be more cautious about what questions I ask.

In addition, many businesses disallow use of ChatGPT or similar tools on work laptops for fear of models being trained on proprietary business data. (Which, by the way, doesn’t make sense, given these same businesses allow employees to use Google, which is also training its own AI models on what you search.)

Privacy & data protection aren’t just ethical, but have real implications to product adoption. So how does Bart compare to ChatGPT across a few important dimensions?

Bard vs ChatGPT

First of all, everything you submit to Bard is “read, annotated, and processed” by “trained reviewers” and is stored for “up to three years”. This is in contrast to OpenAI — they “retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting”. ChatGPT is the clear winner here.

In addition to manual review, your submissions to Bard are also used in automated fashion to “improve and develop Google products and services and machine-learning technologies”. To be fair, the same thing is done in ChatGPT, unless you pay them for the business subscription where they claim “end users’ data won’t be used to train our models by default”. Github CoPilot, which is powered by OpenAI, has a business subscription with similar guarantees.

This brings us to the reason for my exploration: will my Bard conversations be used against me, to target me for advertising? According to Google, “Bard conversations are not being used for advertising purposes, and we will clearly communicate any changes to this approach in the future.”

But Google is a company with $225 billion in advertising revenue last year (at least, that’s what Bard told me, so I’m not sure how confident I should be in that answer!). The stakes are too high — it’s obvious that they will monetize Bard in the future. Fortunately, you can feel reasonably confident your Bard usage in the near term won’t be used for advertising, because when they retain your conversations for up to 3 years, they disconnect them from your account. The conversations associated with your Google account are stored for 18 months by default, but you can configure that in settings, and delete specific conversations, similar to your YouTube history, etc.

Ultimately I feel like this is another chapter in the saga of ad-supported, free products vs subscriptions. I pay for a Pro ChatGPT license, and will continue doing so and using ChatGPT as my primary companion in my day-to-day.


Identity Woman

Podcast with NEWFORUM comparing Web3 and Decentralized Identity

In a podcast hosted by NEWFORUM and centered on the topic of Web3 vs Decentralized Identity, I discussed a variety of topics, including open standards for digital identity, the benefits and drawbacks of using standards for digital identity, the confluence between web3 and decentralized identity, and many more. About NEWFORUM NEWFORUM is a podcast exploring […]

The post Podcast with NEWFORUM comparing Web3 and Decentralized Identity appeared first on Identity Woman.


My Government Funded Research / Reports

Yes I have worked for the government as a sub-contractor (that is my contract was with a contractor who had a contract directly with the government). How did I get this sub-contract – well I was asked by a long time community member, that I trust to do research for them. This person, Anil John, […]

The post My Government Funded Research / Reports appeared first on Identity Woman.


Decentralized Identity: Keynote Panel at Hyperledger Global Forum

At the Hyperledger Foundation conference last year in Dublin, I participated in a keynote panel discussion regarding decentralized identity, the level of adoption among companies and customers, and the factors that will ultimately lead to ecosystem acceptance. We had myself Heather Dahl from Indicio, Marie Wallace who was at IBM at the time (now she […]

The post Decentralized Identity: Keynote Panel at Hyperledger Global Forum appeared first on Identity Woman.

Monday, 15. May 2023

Phil Windleys Technometria

Streaming Trust

Federated identity stores are like music CDs: large troves of data to be consumed from a single source. Self-sovereign identity is like streaming: share just what you want, just when it's needed.

In a recent discussion we had, Marie Wallace shared a wonderful analogy for verifiable credentials. I think it helps with understanding how credentials will be adopted. She compares traditional approaches to identity and newer, decentralized approaches to the move from music CDs to streaming. I'm old enough to remember the hubbub around file sharing. As this short video on Napster shows, the real winner in the fight against piracy was Apple and, ultimately, other streaming services:

Apple changed the business model for online music from one focused on sales of physical goods to one focused on licensing individual tracks. They launched the iPod and the iTunes music store and used their installed user base to bring the music companies to the table. They changed the business model and that ultimately gave birth to the music streaming services we use today.

So, what's this got to do with identity? Most of the online identity services we use today are based on a centralized "identity provider" model. In the consumer space, this is the Social Login model where you use your account at Google, Facebook, or some other service to access some third-party online service. But the same thing is true in the workforce side where each company creates great stores of customer, employee, and partner data that they can use to make authorization decisions or federate with others. These are the CDs of identity.

The analog to streaming is decentralized or self-sovereign identity (SSI). In SSI the source of identity information (e.g., a drivers license bureau, bank, university, etc.) is called the issuer. They issue credentials to a person, called the holder, who carries various digital credentials in a wallet on their phone or laptop. When someone, called the verifier, needs to know something about them, the holder can use one or more credentials to prove attributes about themselves. Instead of large, centralized collections of data that companies can tap at will, the data is streamed to the verifier when it's needed. And only the attributes that are germane to that exchange need to be shared. Cryptography ensures we can have confidence in the payload.

The three parties to credential exchange

Identity streaming has several benefits:

Confidence in the integrity of the data is increased because of the underlying cryptographic protocols.

Data privacy is increased because only what needs to be shared for a given interaction is transmitted.

Data security is increased because there are fewer large, comprehensive troves of data about people online for hackers to exploit.

The burden of regulatory compliance is reduced since companies need to keep less data around when they know they can get trustworthy information from the holder just in time.

The cost of maintaining, backing up, and managing large troves of identity data goes away.

Access to new data is easier because of the flexibility of just-in-time attribute delivery.

And yet, despite these benefits, moving from big stores of identity data to streaming identity when needed will take time. The big stores already exist. Companies have dedicated enormous resources to building and managing them. They have integrated all their systems with them and depend on them to make important business decisions. And it works. So why change it?

The analogy also identifies the primary driver of adoption: demand. Napster clearly showed that there was demand for online music. Apple fixed the business model problem. And thousands of businesses were born or died on the back of this change from CDs to streaming.

Digital credentials don't have the same end user demand pull that music does. Music is emotional and the music industry was extracting huge margins by making people buy an $18 CD to get the one song they liked. People will likely love the convenience that verifiable credentials offer and they'll be more secure and private, but that's not driving demand in any appreciable way. I think Riley Hughes, CEO of Trinsic.id, is on to something with his ideas about digital trust ecosystems. Ecosystems that need increased trust and better security are likely to be the real drivers of this transition. That's demand too, but of a different sort: not demand for credentials themselves, but for better models of interaction. After all, people don’t want a drill, they want a hole.

Verifiable data transfer is a powerful idea. But to make it valuable, you need a trust gap. Here's an example of a trust gap: early on, the veracity and security of a web site was a big problem. As a result many people were scared to put their credit card into a web form. The trust gap was that there was no way to link a domain name to a public key. Transport Layer Security (TLS, also known as SSL) uses digital certificates, which link a domain name to a public key (and perhaps other data) in a trustworthy way to plug the gap.
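
As a quick illustration of that binding (my own example, not from the post), you can ask Python to show which names a server's certificate is bound to:

import socket
import ssl

hostname = "example.com"
ctx = ssl.create_default_context()

# The handshake fails unless the certificate chains to a trusted CA
# and matches the hostname we asked for.
with ctx.wrap_socket(socket.create_connection((hostname, 443)),
                     server_hostname=hostname) as s:
    cert = s.getpeercert()

print(cert["subject"])         # who the certificate says this is
print(cert["subjectAltName"])  # the domain names it is bound to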

There are clearly ecosystems with trust gaps right now. For example, fraud is a big problem in online banking and ecommerce. Fraud is the symptom of a trust gap between the scam’s target and the legitimate actor that they think they're interacting with. If you can close this gap, then the fraud is eliminated. Once Alice positively knows when she's interacting with her bank and when she's not, she'll be much harder to fool. Passkeys are one solution to this problem. Verifiable credentials are another—one that goes beyond authentication (knowing who you're talking to) to transferring data in a trustworthy way.

In the case of online music, the solution and the demand were both there, but the idea wasn't legitimate in the eyes of the music industry. Apple had the muscle to bring the music industry to the table and help them see the light. They provided much needed legitimacy to the idea of online music purchases and, ultimately, streaming. They didn't invent online music; rather, they created a viable business model for it and made it valuable. They recognized demand and sought out a new model to fill the demand. Verifiable credentials close trust gaps. And the demand for better ways to prevent fraud and reduce friction is there. What's missing, I think, is that most of the companies looking for solutions don't yet recognize the legitimacy of verifiable credentials.

Got an idea about how verifiable credentials will be legitimized? Leave a comment:



Damien Bod

Use Azure PIM with groups in ASP.NET Core

This article shows how to implement Azure Privileged Identity Management (PIM) access in an ASP.NET Core application using an Azure security group. An Azure Conditional Access Authentication context is used for the Azure administrator giving access to the group for Azure AD users. The Authentication context is set up to require phishing-resistant authentication. An Enterprise application is used to only allow assigned members of the security group access to the ASP.NET Core application. The application can only be used as specified in the Azure PIM member definition.

Code: https://github.com/damienbod/AzurePim

A Conditional Access Authentication context is setup so it can be used in the PIM group configuration. This Authentication context is used in an Azure CA policy which requires a phishing resistant authentication.

An Azure security group was created and this group was onboarded to the PIM groups definitions. The Azure Authentication context is used for activation. The administrator activating users in this group must have authenticated using a phishing resistant authentication.

User assignments can be added to the group. Each assignment requires a reason description which is sent in the email notification.

The Enterprise application used for the ASP.NET Core application is setup to only allow assigned users from the Azure security group. The Assignment Required is set to true.

In the users and the groups, the security group is added.

The ASP.NET Core application is set up to authenticate using the Azure App registration with a secret, meaning the application is a confidential client. Public applications should not be used for administration business processes. Microsoft.Identity.Web is used to implement the confidential OpenID Connect client using the code flow and PKCE.

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddDistributedTokenCaches();

builder.Services.AddRazorPages()
    .AddMicrosoftIdentityUI();

When a user is assigned, an email is sent to the user and the user can use the application to authenticate and gain access to the application.

Notes

A lot of different features from Azure AD access management were used in this setup. You could also force all users in this group to use a phishing-resistant policy by applying the same authentication context to the group itself, and not just to the PIM administrator assigning users to the group. You could also use roles instead of groups in PIM. There are lots of different possibilities and it is important to choose the correct ones for your environment, following KISS: if you don’t need it, then don’t use it. PIM requires an Azure AD P2 license. Azure AD PIM combined with Conditional Access can be used to enforce all types of business security requirements and features.

Links

https://learn.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-configure

https://learn.microsoft.com/en-us/azure/active-directory/privileged-identity-management/groups-role-settings


Foss & Crafts

57: F-Droid (featuring Sylvia van Os & Hans-Christoph Steiner!)

F-Droid, a repository of free software for your Android devices! Christine interviews F-Droid developers Sylvia van Os and Hans-Christoph Steiner as well as F-Droid board member and chair... Morgan Lemmer-Webber!

Links:

F-Droid
Sylvia van Os
Hans-Christoph Steiner
F-Droid board announcement
Guardian Project
Google Play bans Matrix/Element
Catima
Your app is not compliant with Google Play Policies: A story from hell

Friday, 12. May 2023

Wrench in the Gears

A “Green” Populist Synthesis – Environmental Warrior + Health FreeDOM Warrior = Web3 Regenerative Ag Finance Warrior?

This morning someone forwarded me an article that was featured in Children’s Health Defense’s online publication “The Defender.” “Can we reboot the environmental movement so it protects freedom, too?” was written by W. Aaron Vandiver, a conservationist author who lives just up the road from Aspen (Aspen Institute), Colorado (the blockchain / e-government state). The [...]

This morning someone forwarded me an article that was featured in Children’s Health Defense’s online publication “The Defender.” “Can we reboot the environmental movement so it protects freedom, too?” was written by W. Aaron Vandiver, a conservationist author who lives just up the road from Aspen (Aspen Institute), Colorado (the blockchain / e-government state). The piece extols the potential for an “eco-libertarian” environmental movement to bring together populists from the Left and the Right. I can definitely see this fitting into a Transpartisan “third-way” platform.

The press around Robert F. Kennedy Jr. doing a public launch of his candidacy at Miami’s 2023 Bitcoin conference has me pretty rattled.

Whitney Webb will be there, too.

Source Link: https://b.tc/conference/speakers Source Link: https://bitcoinmagazine.com/authors/whitney-webb

During the Children’s Health Defense conference in Knoxville, TN last fall she presented on censorship of journalists. She wrote an article on this topic for Bitcoin Magazine during the summer of 2022.

Source Link: https://bitcoinmagazine.com/2022-censorship-resistant-issue Source Link: https://bitcoinmagazine.com/culture/the-war-on-dissent

 

Whitney Webb is a supporter of blockchain media, having been an early adopter of the Rokfin platform where content creators are paid in Web3 RAE tokens. In the video below, she and her Slow News Day colleagues are talking about me, implying I only took a stand against blockchain to disparage her and her friends. I guess Webb’s investigative prowess didn’t extend to entering “blockchain” in the search bar of my blog. Had she bothered to do that, she would have seen I’d been writing about my concerns about blockchain and impact finance since 2017, long before I’d ever heard of her. There are more details in the description block of the video (accessible here) if you want the backstory. To be honest, influencer spats are not a good use of my time, but I did think it was important to document her behavior. In the end I trust that most people are adults who can make informed decisions about what information they consume. I don’t expect her fans to resonate with my blog, and I’m fine with that.

In the meantime, someone claiming to be Webb said the talk at Bitcoin 2023 Miami will be about “The WEF Partnership Against Cybercrime and how it is a major intelligence effort to threat destroy financial privacy.” I haven’t confirmed the identity of the person who left the comment. There was no way to verify it was posted by Webb; the name was @cool22fire “kunjabihari adhikari.” Still, I’m happy to give the benefit of the doubt. If this is not the real Whitney Webb, and the real one wants to contact me at timpsila@proton.com to correct the record, she can. This person also said Webb’s participation in the event was simply educational and not an endorsement of Bitcoin. Take that for what it’s worth. Her video about me was titled “Whitney Webb on Bitcoin 2021-05-23.” There’s also a clip of her discussing the possibility that Bitcoin could be banned and her admiration for Bitcoiners, whom she says are on “the frontier of fighting a lot of these encroachments on basically what is the end of privacy for Americans.” The clip was featured on the Swan Bitcoin YouTube channel in April of 2023.

Source Link: https://rokfin.com/unlimitedhangout

 

Source Link: https://twitter.com/TheRokfin?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor

My site visit to the Rokfin office in Austin, Texas in the fall of 2021. 

Combined with this op-ed pitching a thinly-cloaked web3 tokenization of natural capital, I realized I needed to spend the morning pulling together my thoughts. Of course natural capital and ReFi (Web3 “regenerative” finance) is Leo Saraceno’s wheelhouse, while mine is more human capital. You can read his recent article on the topic at Silicon Icarus here.

Check out this pair of tweets pulled from Robert F. Kennedy’s account over the past few weeks. It’s hard for me to believe these are his own words. The “ecology of currency” language is straight up lifted from Bernard Lietaer, creator of the Euro. When viewed together, the statements demonstrate an affinity for social impact tokenomics, “free” market investing using digital assets to optimize “wellbeing” for “the common” good. Of course that’s the kind of language they’ll use to sell it.

My take is they are crafting a new Bobby character to replace his two previous roles, now conflicting. Combined, these archetypes will be positioned as a populist freeDOM superhero whose assignment is to convince you that a living gamified existence inside an outside-in “optimization” robot (Jeremy Pitt’s cyberphysical-sociotechnical system) is the future we’ve always dreamed of – rather than a life sentence served in a Skinner Box with soul-sucking energetic parasites that it actually is.

Source Link: https://twitter.com/RobertKennedyJr/status/1648733650001752064

Source Link: https://web.archive.org/web/20230503184338/https://twitter.com/RobertKennedyJr/status/1653808786577010697 Source Link: https://embed.kumu.io/fa2bfdd8fb89e5bbecbecdb9dae43f38#untitled-map?s=bm9kZS1UYXJQVU8yaA%3D%3D

In retrospect I see how the pieces fit together – why I got so much pushback for my opposition to blockchain from resistance influencers like John Bush and Derrick Broze who featured RFK Jr. at the Greater Reset. I understand better why coverage of the struggle of India’s farmers got squelched even as the Canadian truckers’ convoy saga was uplifted. A spiritually authentic, culturally grounded resistance to techno-life, the kind demonstrated at great personal cost by India’s farmers, could NEVER be allowed to rise to prominence. No, the era of the Web3 tokenized freeDOM “commons” was meant to be driven forward by emerging-tech-loving enthusiasts. In programmed reality the Sikhs didn’t stand a chance, but my gut tells me this engagement is unfolding in realms far beyond the coders’ binary ones and zeros. Keep your soulbound tokens and crypto; my heart remains aligned with theirs despite the challenging odds.

Today’s livestream is the first in a series I’m calling “Camelot Corner.” I’ve created a playlist for it on my channel. I’ve spent much of the past week going through the IRS 990s and audits for The Mercury Project / Children’s Health Defense. I’ve looked into the board members. I’ve used the Foundation Directory Online to find grants via other funders, including one for $586,000 in 2020 from Greater Horizons Kansas City, a black box community foundation that specializes in managing hundreds of millions of dollars in donor-advised-funds, or DAFs. Click this link and search for “Children’s Health Defense.” Through this philanthropic mechanism donations can’t be traced back to the source, a practice that is actively promoted on the CHD donor website.

So much for government and corporate capture – did you think somehow non-profits were exempt? Those who follow my work know that the non-profit industrial complex has a central role to play in social impact finance and impact data collection. We just want to believe “our team” wouldn’t be in on the grift. Would it surprise you to know Children’s Health Defense used donated funds to buy access to Republican Attorneys General? Don’t worry; the recipient got caught and had to return the illegal contribution, but don’t you think Bobby, the lawyer, should have known better?

Money is power and power corrupts. It behooves us to continue to ask questions and hold the institutions into which we put our trust accountable. 

Here’s the link to the map. Go explore. There are many embedded resources, including some provided to me by fellow researchers. If you have items you think would be appropriate for me to add, you can send them to timpsila@protonmail.com. Those suitable for inclusion will have substance – no ad hominem. My focus is on finance and policy, not personal or family drama.

 

Source Link: https://embed.kumu.io/f8fce32a9824a81f2615bff8ec0f4717#untitled-map?s=bm9kZS1FVnZnQVJkeg%3D%3D

 


Letter from the Labyrinth – Defending Children’s Health and a Jury Duty Predicament

I love that this blog has become a space where people can come to share their personal journeys. With gratitude I want to take a moment to express appreciation for Sean and all the rest who have chosen to enliven this corner of the digi-sphere with your heartfelt writings. What follows is a wonderful letter [...]

I love that this blog has become a space where people can come to share their personal journeys. With gratitude I want to take a moment to express appreciation for Sean and all the rest who have chosen to enliven this corner of the digi-sphere with your heartfelt writings. What follows is a wonderful letter about awakening to the beauty of beingness outside the establishment game board. The garden girls are fortunate to have parents who are willing to take principled stands for their family. Alison

Letter from Sean Filzen, aka “Washington Sean”


May 8th, 2023

Dear Alison,

First and foremost, thank you for taking the time to read my work.  Sincere thanks for your kind words as an introduction to my last guest post.  As we all lead incredibly busy lives, it means so much to have discovered your research and archives which are growing by leaps and bounds as the days and weeks fly by. How you do it and still find time to engage and read and share the work of others is commendable. And it is even more influential knowing I have found an audience and a space in which to share and collaborate in a meaningful way.

While I am not so much interested in fame or fortune, I’d be deceiving myself and your readers if I did not admit that I do seek some attention; and flattery has a way of carrying our lofty ideals even higher into the realms of admiration, doesn’t it?  But beyond the soft strokes of my own ego, what I love most about your website is the sheer amount of knowledge one can glean — if they so desire to put in the work and take their critical inquiries seriously. And as the fellow readers, commenters and guest authors have continuously confirmed, they too offer me equally as much appreciation and insights as your work.  Bravo for all that you have toiled to establish and being a beacon in these perilous times!

Thus, it is with this excitement and anticipation of further collaboration that I felt compelled to offer you a more thorough biographical account of how and why in this time, this spring, in the year 2023, there seems to be a personal calling to me to ‘rise to the occasion’ and step forth into the public sphere in a more pronounced manner.  Or, as my wife puts it, my throat chakra is activating as my kundalini is rising.

My first post to Wrench In The Gears, from February 2022, was titled ‘The Grocery Business is Dye-Ing’, and I composed this piece under a pseudonym for fear of reprisal and termination of my wife’s position at her employer of fourteen years – New Season’s Grocery stores.  The piece was an expose of how New Season’s, “the world’s first Certified B Corp Grocer,” is not as transparent and well intentioned as the brand presents itself, and the essay followed the professional pursuits of one of New Season’s more shadowy board members — Justin Dye. 

When writing this letter, it was of minor interest to check in on Mr. Dye and see what he has been up to with his latest efforts and his newest venture – Schwazze, Inc. aka ‘Medicine Man Technologies’.  Schwazze bills itself as a rapidly growing cannabis firm offering a business model for investors that includes everything “from seed to sale.”  On its Corporate Website, the word ‘schwazze’ is defined as an action verb, meaning “a cultivation technique that prunes a living organism to create more growth.”  And Schwazze proclaims it wants to “focus on data-driven innovation and unparalleled thinking to create the next era of cannabis.”  If that is not enough, Schwazze proudly bills its brand tagline as “Transcending the Human Condition Through Cannabis.”  

Perhaps it is just me, but the obvious throwback to the early 20th century efforts of Luther Burbank and his famous work The Training of the Human Plant appear to be cultivating something suspicious at Schwazze.  And surely the nod on one of Schwazze’s new startups confirms that something more nefarious is going on in the backrooms. In 2021 Schwazze Bioscience was established and serves as the research and development arm of the Schwazze brand. “Its high-level mission is to blend both the art and science of plant molecules and advanced technologies to further enhance products and improve the human condition.”  (The photo for Schwazze Biosciences conveniently features two vials with a light green liquid in a glass beaker with a third vial filled with a light magenta colored liquid, sitting adjacent to a cannabis leaf in a petri dish).

But I am not writing today to rehash the questionable integrity of people like Justin Dye.  I am writing to you today to clarify why I, Sean Filzen, became so interested in Justin Dye and New Seasons Market in the first place.

You see, in August of 2021 my wife and I decided to try for a second child and conceived our daughter Margaret while on a camping trip near a beautiful spring in the foothills of Mount Saint Helens.  By February of 2022 my wife was well into her third trimester and there was really no negotiating what sort of risky medical procedures or other therapies that my wife was willing to experiment with.  She had already moved from her previous position in the wellness department and worked in the ‘freight crew’, taking shifts overnight to avoid wearing a mask as much as possible as her employer would not accommodate additional ‘oxygen breaks’ as she had requested.  When the January policy enacted by her employer required all employees to either accept a medical treatment or file for exemption (with mandatory masks and test/trace/track for all who were granted exemptions), tensions were incredibly high.

She took her maternity leave as early as allowable and somehow her store manager conveniently failed to hand her the letter from corporate that instructed employees how to file for an exemption if they desired.  So, without receiving the letter instructing her how to file for an exemption, she went on leave.  Another employee and friend was terminated in February 2022 for refusing to turn in an exemption letter at all.  She, along with my wife and several of her other coworkers, considered such a policy a form of coerced consent, and to acknowledge the letter as lawful would continue to endow the company with special powers to carry out otherwise unlawful and unnegotiated actions.  But as my wife had not received the letter, we took it as a symbolic gesture that her store manager did not have the guts to hand a woman who was seven months pregnant such an inconsiderate request.

Of modest means, our strategy was ultimately to ride out her maternity benefits as long as possible and decide what to do after our daughter was born.  Margaret was born at home on May 19th – a beautiful and healthy girl weighing in at just over nine pounds!  Any drama or negative underpinnings of what still churned in the background felt far, far away from our lives.  But it was much to our chagrin that in the middle of July, Kayla learned she had been terminated from her job.  Due to a technicality, she was terminated on July 4th, 2022, a date most fitting if you are able to consider the grand irony of it all.  To be terminated, by the world’s first certified B-Corp Grocery Store, while on maternity leave, after fourteen years of employment, is no small feat in today’s day and age.  And to be fired on July 4th, of all the days of the calendar, well, one might make their own inferences on how we are planning to celebrate our Independence day in 2023.  Rest assured there will be some sparks flying.

So, I hope you can deduce that the first piece I felt compelled to write, my first truly investigative essay, was driven by a passion and desire to defend children’s health.  My unborn child was, of course, at the forefront of my cause.  But also the thought of how other expecting parents are grappling with such tenuous issues continues to loom over my heart and mind.  Defending children’s health, and I mean truly defending it, is what radicalized me.

My anger, my frustration, my resentment towards injustice and the things in the world that aren’t commendable has always been a present factor in my life.  But when such malevolent forces coalesce into a darkness that seeks to attack and degrade the lives of those who cannot fully defend themselves, a line is crossed and all the raw emotion that was once just my own is catalyzed into something bigger – at least I hope it is.  I hope that I can continue to find my voice that seeks to protect children and natural life from forces unseen and unknown to so many in our daily lives.  I hope to raise awareness and shine a light in the darkest corners of our society as we barrel backwards and start to run the hamster wheels of history, repeating in lockstep all the lessons we supposedly learned twenty, fifty or even one hundred years ago.

Following my wife’s termination from her job we committed to a new way of being and a new lifestyle.  Our personal motto has become “everything is a distraction from growing our own food and medicine.”  And while it is an incredible amount of work to produce even a modicum of our own sustenance, it is perhaps the most rewarding experience I have personally experienced.  Our efforts on our small farm are further bolstered by watching our older daughter, who is almost five, reap the literal benefits of everything we have sown thus far.  So, it is without remonstrance that we have moved into 2023 and put the unfortunate effects of unlawful termination behind us.  Fighting a corporate egregore like Justin Dye and New Seasons is likely to prove fruitless, and instead we decided to keep our heads down and largely mind our own business as the dust of the “new normal” still settles on society.

But then something else happened in the middle of April this year that has reignited the fire that burns inside me, blowing a strong wind into the glowing embers and fanning the flames of anger and resentment.  Before I address this pivotal issue, I wanted to offer you just one more facet of our lives that feels pertinent as we discuss the integral defense of children’s health.

In October of 2018 our first daughter was born.  And both being fairly health-conscious, health-aware people, our feelings on the CDC schedule of childhood immunology and vaccination policies were mixed.  Intuitively I think we both distrusted the overly aggressive nature of such a schedule that lacked substantial epidemiological studies warranting so many shots in such a short amount of time.  But fear is a powerful force, and it has a way of breaking down even the most strongly held beliefs, often calling into question one’s abilities as a parent to make the decisions for your child that you believe are correct.  Fear can compel people to do things that they would never have considered doing under different circumstances or contexts.

In the early months of 2019, Clark County Washington, where we make our home, experienced a ‘measles outbreak’.  

Looking back, the Clark County Measles outbreak bears many signature markings of a simulation event designed to improve responses and outcomes in larger emergencies, but the integrity of the measles outbreak is neither here nor there in the grand scheme of things.  Prior to the global ‘corona’ lockdowns about a year later, such a public health emergency as the relatively small measles outbreak in Clark County created quite a bit of hysteria in our local community.  It was the subject of both local and national news and the CDC even took notice with a substantial study published on its website.  Perhaps the measles outbreak of 2019 primed our consciousness for the events of 2020-2022.  Perhaps without such an outbreak in our home county the year before, we would have seen the much bigger events that proceeded in a different light, but I like to believe that we still would have found our path.

It was in February of 2019, while on our annual family vacation, that we chanced to stop in to visit one of my customers in Crescent City, California (a small ocean town at the very northern part of the state).  My customer was a woman in her mid-eighties, her husband had passed away a year prior, and I still credit them both as a big part of my start in my commercial furniture business.  So, we stopped into her hotel for a visit and a coffee, and the adult daughter of my customer joined us.  The daughter was very outspoken and provided us with several resources.  As I am typically inclined to listen to people who are more versed on subjects than I am, I was deeply moved by her pleas not to follow the CDC schedule with our own child, still untainted at the young age of only four months old.  

The daughter of my customer gifted us the book ‘The Vaccine Friendly Plan’ written by Dr. Paul Thomas.  I quickly read the book from cover to cover and from that moment forward my resolve to defend children’s health as I understood it has grown clearer and my resolve stronger and more fortified.  It was actually around the age of ten months that we took my daughter for her first ‘well baby visit’ at the offices of Dr. Paul Thomas.  Thomas’s clinic ‘Integrative Pediatrics’ was coincidentally located only about sixty miles from our home, across the Columbia River in a suburban community outside of Portland, Oregon.  I remember telling our nurse practitioner about our farm animals and I was still not sure about the TDaP shot benefits/risks.  Our nurse promptly summoned Dr. Thomas himself so he could very politely provide some additional interrogation as to our situation, thus allowing us to fully understand the legal and practical implications of ‘informed consent,’ as well as the pros and cons of the TDaP shots. 

Alison – one of the things that makes your work so commendable is that not only are you one of the people who is doing the research, but you are also doing the work, going to the meetings, shaking the hands and asking the questions.  I am willing to wager that you have shaken the hands of many important people who have themselves shaken the hands of even more important people.  Dr. Paul is something of a celebrity in the Health Freedom Movement and he is one of the important people I have shaken hands with.  In November of 2020, Dr. Paul, along with James Lyons-Weiler, published a now retracted study titled Relative Incidence of Office Visits and Cumulative Rates of Billed Diagnoses Along the Axis of Vaccination.  The study was a subject of national controversy and in a time of great conflict in what should have been an unrelated issue, the Oregon Health Authority promptly targeted Dr. Paul and suspended his medical license pending an investigation.  The controversy eventually became the subject of a book by Jeremy Hammond titled The War on Informed Consent: The Persecution of Dr. Paul Thomas by the Oregon Medical Board.  The book is available on the Children’s Health Defense (CHD) store.  

The foreword of the book was written by none other than Robert F. Kennedy, Jr.

When we have shaken the hands of important people who have themselves shaken hands with even more important people, what should dawn on us is that even as unimportant people, or common people, we occasionally do find ourselves in the capacity to reach important people with our important messages.  If important people, like Dr. Paul and Jeremy Hammond, can stay true to the literal definition of defending children’s health, then the work we do as activists and investigative researchers is not fruitless.  We can make a difference and we should not be dismayed by the dismissals and lack of interest by those very, very important people who seem out of reach with the message that threatens their livelihood or questions their integrity.  And while Dr. Paul is no longer our family pediatrician, as he has  joined or been co-opted into a movement that he may not be fully aware of himself, this should not fully dismiss him or the important role he continues to play as a first step for so many other parents and caregivers just beginning their own journey of discovery and definition of informed consent when making lifelong health decisions for themselves and their family.

In a sense, the way our family has navigated health care and the health services delivery models available to us is probably not all that unique.  The decisions we have made (or were led to make) could be considered an excellent example of cognitive domain management, a topic that has captivated my interests as of late.  I wonder, how would I perceive the announcement of RFK Jr. to run for president had I not found your research and body of work?  Likewise, I wonder, in what ways am I still being managed, or nudged, to stay within the confines of the domains that I am in now?  But there is one thing that I do not wonder about, one thing that becomes clearer and clearer every day.  And that is the passion and fire that is now raging inside me to defend children’s health, to really, really defend children’s health.  If it were not for the likes of people like Alison McDowell, Lynn Davenport and Jason Bosch (and countless others who are doing the good work) who have freely risen to the occasion to defend children’s health, then I am saddened to consider that who (and what) we are left with is going to provide us with less than desirable outcomes.

After being misled and misdirected and misinformed over and over again, I am prone to believe that we cannot, and should not, accept the answers and agendas of influences and forces that wield power over our social networks.  We should not simply allow our conversations on critical issues to be managed or steered in a direction that predetermines outcomes that favor consolidated power over individual autonomy.

As I mentioned above, it was what happened in April of this year, less than a month ago, that re-ignited a fire I tried desperately to ignore, letting it smolder through the many distractions and more enjoyable activities we encounter on the daily on our small homestead.  We have done our best to keep our heads down and mind our own business in this new society that they are building; we did not give up on the fight entirely, but instead, we found our own way to make a better life and build a stronger, healthier community as we understood it.  And while minding your own business, living simply and living poor may be enough to keep the more traditional corporate interests from invading one’s chosen lifestyle, no such guarantee exists when it comes to keeping your health and happiness protected from State interference.

In the middle of April, my wife received a jury summons.  As one is instructed to promptly return the card if seeking a deferral or excusal within five business days, I wasted no time and wrote “nursing mother” and “no child care” on the card.  Thus, it was much to my chagrin that we received a reply towards the end of the following week that said no such excuse or deferral was warranted for such reasons.  To clarify upfront, we are not opposed to jury duty or civic responsibilities.  And despite the social contract being shaken to the core over the last few years, I would still serve on a jury if I were summoned.  Likewise, my wife too would rise to the occasion and embrace her civic duty if it were reasonable and practical.  

But under the current laws and codes in the State of Washington, the ability to serve in place of another person with more important obligations is not an option.  And as the nice lady on the Clark County Judiciary department hotline informed me, “nursing mother” and “no childcare” are not valid exemptions from service.  She did offer us a six-month deferral, but we declined as we hope to still be nursing our youngest child as long as possible.  To agree to a six-month deferral would, in a sense, be a form of consent, or a contract, to fulfill the civic obligation six months later, forcing the same confrontation without the ability to petition for a redress of our grievances.

Correct me if I am wrong, but I believe that at one time you said on one of your YouTube videos that (and I am paraphrasing here) “it’s not that people’s rights won’t exist or that the system is going to disappear.  It’s that the rights and the system will be so far removed from people’s lives they will not be able to access their rights, they won’t know how.”  Well, that statement stuck with me and when my wife was summoned to jury duty and we were informed that the basic, most simplest of rights (those of a nursing mother with an infant child) were now called into question by the State, you could say that I am now actively investigating all the power that I still have as a citizen to find the system and use the rights that I still have before they become so mired in muck that I lack the energy to wade into the cesspool, let alone muster up a halfway decent defense to the overreach of State power.

This is it.  This issue has emerged as a new line in the sand that has been drawn.  This is the kind of opportunity to fight and stand for what is right and what is just that may only come along so many times in a person’s life.  This is the moment that I choose to rise and step out of the shadows and into the light.  This is my time.

Thank you for blazing the trail and lighting the way for people like me who just need to know that people like you are out there.  We are only humans; we are not perfect by any means.  And where this fight leads me is still undetermined, a bit of a mystery.  But without a doubt, I know in my heart that compelling a woman to be separated from her breastfeeding infant at the threat of a misdemeanor criminal charge is the type of State violence that far eclipses even the worst of corporate injustices we’ve incurred thus far. 

How to prove to the State that their actions are in fact a form of uncoerced violence against the constituency, well, I am still refining my strategy to be honest.  So far, the defense has taken shape into an aggressive letter writing campaign, but I sense an escalation is in the works as this pivotal event is helping bring me out of my shell and into the eye of the public.

 

So, it is with much excitement that I look forward to sharing with you the developments in this case as they happen.  And equally so, it is with much gratitude that I look forward to collaborating with you and writing and making art and growing our garden and dancing with a deepening and growing love of life, all-natural life.

Respectfully yours,

Sean C. Filzen   — aka ‘Washington Sean’  

Caption – “Fishing for Customers”

 

 

Thursday, 11. May 2023

Doc Searls Weblog

Remembering Heather Armstrong

My email archive contains dozens of postings in which Heather Armstrong* and I are among those writing, receiving, mentioning, mentioned, cc’d or otherwise included. Most postings are from the ’00s and between bloggers in the brief age before media got social and blogging was still hot shit. Heather, with her Dooce blog, was the alpha among us, […]

My email archive contains dozens of postings in which Heather Armstrong* and I are among those writing, receiving, mentioning, mentioned, cc’d or otherwise included. Most postings are from the ’00s and between bloggers in the brief age before media got social and blogging was still hot shit. Heather, with her Dooce blog, was the alpha among us, but never snotty about it.

Heather wore her heart, her head, her life, on the Web page. At humorous, opinionated, and wince-worthy self-disclosure, Heather even out-raged Chris Locke, aka RageBoy. Heather, Chris and I might have met for a panel at the 2006 SXSW, had Chris made the trip. Wrote Chris, “I was nearly as bummed as Doc said he was that I wasn’t there in Austin today on the ‘Cluetrain: Seven Years Later’ panel. What can I say? If God had wanted people to fly, he would have given them upgrades.” Best I recall, that panel was the only time I met Heather in person. She was fun.

Her final post was on April 6. It is mostly about, and for, her daughter Leta Louise. The closing line is “Here at 18 months sober, I salute my 18-year-old frog baby, she who taught me how to love.” According to news reports, Heather had fallen off the wagon. Whatever the reason, she fell all the way, and we will be forever lessened by her loss.

*On her Instagram, Facebook and Twitter accounts, all branded Dooce, she identified as Heather B. Hamilton, her birth name.


Identity Praxis, Inc.

The Revolution is Here: An Interview With Daniel Goldscheider, Founder of the OpenWallet Foundation

In this episode of The Identity Nexus, Michael Becker talks with Daniel Goldscheider, the Founder of the OpenWallet Foundation, a nonprofit focused on the governance, development, and deployment of open-source software for SmartWallets, i.e., “OpenWallets.” OpenWallets will have a bigger impact on society than open web browsers. SmartWallets are applications and cloud services that empower […] T

In this episode of The Identity Nexus, Michael Becker talks with Daniel Goldscheider, the Founder of the OpenWallet Foundation, a nonprofit focused on the governance, development, and deployment of open-source software for SmartWallets, i.e., “OpenWallets.” OpenWallets will have a bigger impact on society than open web browsers. SmartWallets are applications and cloud services that empower people to take gradual control of their identities, licenses, certifications, and personal data as they navigate their phygital life, and they enable organizations to manage the verifiable authority that they delegate to their agents—humans, machines, bots, APIs, and more. OpenWallets are SmartWallets built with community-governed open-source software and global industry standards. In this interview, you’ll learn about the OpenWallet Foundation, the benefits of open-source SmartWallet capabilities, and how to get involved (Here is a hint: it is open, and it is free to contribute and use code; there is a nominal fee for commercial organizations to participate in the community governance).

The post The Revolution is Here: An Interview With Daniel Goldscheider, Founder of the OpenWallet Foundation appeared first on Identity Praxis, Inc..


The Identity Nexus: Web 3.0 Marketing and Establishing Authentic B2C Connections with Virginie Glaenzer

This episode of The Identity Nexus focuses on Virginie Glaenzer, an accomplished marketer, entrepreneur, investor, and author. With a strong background in the technology industry, Virginie is passionate about helping early-stage companies grow and succeed through effective marketing strategies. In the episode, Michael Becker and Virginie explore how to embrace the Abundance Economy. They discuss [

This episode of The Identity Nexus focuses on Virginie Glaenzer, an accomplished marketer, entrepreneur, investor, and author. With a strong background in the technology industry, Virginie is passionate about helping early-stage companies grow and succeed through effective marketing strategies. In the episode, Michael Becker and Virginie explore how to embrace the Abundance Economy. They discuss Virginie’s journey in the tech industry, including her experience as a founder of a successful startup and her transition into marketing and investing. Virginie shares insights on what it takes to be a successful marketer and provides valuable advice for those looking to grow their business through effective marketing tactics. Additionally, Virginie and Michael go deep into the future of the Internet—e.g., Web 3.0, Web 5, etc.—and how it is systematically changing, forever. For the first time, the people, not platform providers, will bring and manage the identities and personal data that will flow across the wires and through the air. The discussion provides a fascinating look into Virginie’s journey and offers valuable lessons for aspiring entrepreneurs, marketers, and investors.

The post The Identity Nexus: Web 3.0 Marketing and Establishing Authentic B2C Connections with Virginie Glaenzer appeared first on Identity Praxis, Inc..


Privacy in Your Hands: exploring the Power of Prifina with Valto Loikkanen

In this episode, Michael Becker and Valto Loikkanen, CTO of Prifina, discuss the current and future of what it will take for people to be in control of their personal data and identity, and once they have control, what they can do with their data. Specifically, we review Prifina, a leading personal information management solution […] The post Privacy in Your Hands: exploring the Power of Prifina

In this episode, Michael Becker and Valto Loikkanen, CTO of Prifina, discuss the current and future of what it will take for people to be in control of their personal data and identity, and once they have control, what they can do with their data. Specifically, we review Prifina, a leading personal information management solution provider (PIMS). We explore how data can automatically be gathered from sensors and IoT devices. We review the concepts of a PIMS, a personal data store, and demonstrate Prifina’s integration with OpenAI’s Chat GPT and how individuals can seamlessly use a personal artificial intelligent agent to mine and get real-time value from their data. Prifina has set up a trial of this integration with synthetic data; click here to check it out.

The post Privacy in Your Hands: exploring the Power of Prifina with Valto Loikkanen appeared first on Identity Praxis, Inc..


Doc Searls Weblog

A Santa Barbara itinerary from ChatGPT

I asked ChatGPT for a three-day itinerary to give visitors to Santa Barbara. Here ya go: Day 1: Start the day with breakfast at the Shoreline Beach Cafe, which has a beautiful view of the ocean. After breakfast, head to Knapp’s Castle for a scenic hike and exploration of the ruins of a 20th century […]

I asked ChatGPT for a three-day itinerary to give visitors to Santa Barbara. Here ya go:

Day 1:

Start the day with breakfast at the Shoreline Beach Cafe, which has a beautiful view of the ocean. After breakfast, head to Knapp’s Castle for a scenic hike and exploration of the ruins of a 20th century mansion. The hike offers stunning views of the Santa Ynez Valley. For lunch, grab some sandwiches or salads from the nearby Panino’s, a local favorite with fresh, high-quality ingredients. Spend the afternoon exploring downtown Santa Barbara, with its charming shops and cafes. Don’t miss a visit to the Santa Barbara County Courthouse, a stunning Spanish Colonial Revival building with a beautiful clock tower that offers a panoramic view of the city. For dinner, head to Tee-Off, a local institution that has been serving up classic American fare since 1949.

Day 2:

Start the day with a hike up La Cumbre Peak, which offers stunning views of Santa Barbara and the Channel Islands. For lunch, grab some tacos from La Super-Rica Taqueria, a casual spot that was a favorite of Julia Child. Spend the afternoon exploring the Santa Barbara Botanic Garden, which showcases native plants and offers a peaceful escape from the city. For dinner, head to the Stagecoach Inn, a historic restaurant that serves up classic American fare in a cozy setting.

Day 3:

Start the day with breakfast at Jeannine’s Bakery, a local favorite known for its delicious baked goods and breakfast items. Spend the morning at Hendry’s Beach, a local favorite with beautiful views and great walking trails. For lunch, grab some sandwiches or salads from South Coast Deli, a local favorite with fresh ingredients and creative flavor combinations. Spend the afternoon exploring the Santa Barbara Museum of Art, which features a diverse collection of artwork from around the world. For dinner, head back to downtown Santa Barbara and choose from one of the many restaurants in the area, depending on your preference. Some options include Olio e Limone Ristorante, Los Agaves, and The Lark.

I wouldn’t disagree with any of it, except maybe to note that Hendry’s Beach at sunset (where and when I shot the photo above) is mighty fine.

Of course, the answer will vary with every regeneration, but what the hell. It’s very early in our co-evolution with whatever the hell we and AI are together becoming.

Wednesday, 10. May 2023

Wrench in the Gears

Paua 23 Rant – Tokenizing Indigenous Culture, Drawing Spiritual Seekers Into Behaviorist Blockchain Protocols

The following is a guest post about blockchained wellbeing metrics promoted as an inclusive, emerging tech spiritual practice.   May 2023 Hi Alison, A couple of days ago a friend of mine sent me a screenshot of a conference being held in Paris that a member of the alternative health and freedom space was promoting [...]

The following is a guest post about blockchained wellbeing metrics promoted as an inclusive, emerging tech spiritual practice.

 

May 2023

Hi Alison,

A couple of days ago a friend of mine sent me a screenshot of a conference being held in Paris that a member of the alternative health and freedom space was promoting on her Instagram account. My friend’s comment was: “This looks shady, no?” I assured her that yes indeed, it was very shady, and I went straight to the website of the conference and started speed-reading my way through it.

Alison, you have been saying all along that this new paradigm of AI-run extended reality inhabited by a digitally-twinned, re-brained, blockchained, semi-lobotomized populace would ride in cloaked in the mantle of spiritual evolution. I have long been pretty grossed out by aspects of the “spiritual” community for many reasons, but this conference takes it to the next level. The fact that someone from the “resistance” (you’re right, everything does need air quotes now) was gushing about it on her instagram account was a depressing reminder of how when a wolf is wearing the right outfit, even really smart people fail to see the disguise.

The conference is coming up on May 12-13, 2023, and it will be live-streamed. I wish it was free, that way I could sit there and watch and listen with my face all contorted without giving them any of my hard earned fake money. I may bite the bullet anyway. This conference is called PAUA, which interestingly is a Maori word for the shell of a mollusk or abalone. It makes me think of the term ‘ a shell of a man’. An empty shell of a man…a robot maybe? Or someone so removed from themself that they’re no longer recognizable? I don’t know.

Source Link: https://www.paua.life/

The two-day program ping-pongs back and forth between artificial intelligence, ancient wisdom, Web3, Mother Earth, the metaverse, planetary consciousness, conscious capital, and of course…psychedelics. There is also a strong, and exploitative, emphasis on the Kogi people, a very private indigenous tribe who live deep in the Sierra Nevada de Santa Marta mountains of northern Colombia and have been dodging colonization and clinging to their sacred and traditional way of life since at least the 15th century. The dragging of two of their leaders to Paris to help woke-wash their next-gen colonization tactics is weirdly reminiscent of the Native American chiefs put on display at the world’s fairs of the late 1800s. It seems the token economy extends itself to include the tokenizing of the native people used as hood ornaments for the robotic Trojan horse.

From the event’s homepage:

“As we enter into this new decade of the twenties, we have the opportunity to reintegrate ancient knowledge into our modern civilization to create a new narrative for humanity that offers a positive view of the future of our civilization. PAUA is about creating connections between the material world and the spiritual realms, making visible the invisible while featuring an exploration into a wide range of ancient knowledge. PAUA will explore the future of Web3, the Metaverse, and AI, as well as how consciousness can be the fundamental basis for conducting business, Apps, and self transformation” 

I don’t know about you, but for me reading that paragraph felt like all of a sudden a turd bobbed up from the bottom of my cup as I peacefully stirred it. 

Source Link: https://www.paua.life/#partners

So let’s get into the speakers. The lineup starts off with a bang.

Yat Siu is the co-founder and executive chairman of the global blockchain and gaming leader Animoca Brands. Yat has many accolades including being a World Economic Forum Global Leader of Tomorrow. This should have triggered a pause for many in the alternative sphere if they had taken a few moments to look into the first highlighted speaker of the event. 

Animoca has many blockchain gaming projects in its portfolio, but a few of the standouts for me are TinyTap, bringing education to Web3, with over 200,000 activities and courses, like “math path” and “my life skills box” that feature such lessons as “I go to the bathroom” and “I blow my nose.” Educators own their content by minting their course as an NFT which they can host or sell on a subject domain. 

“Learners” from childhood to college will be able to store their certificates in their blockchain wallets, gain a lifelong record of their achievements, and earn access passes. Imagine having a token for having mastered “how to blow your nose” follow you around for the rest of your life. 

Other projects are the Mocaverse, where the dreamer, builder, angel, connector, and neo-capitalist tribes converge to learn, play, do good, and build a more “equitable and inclusive” Web3 together. It also features “Life Beyond,” where you buy land, start your own society, and try to keep the nanobot invasions from wiping out your civilization. Other games involve speeding hotrods through city streets (Animoca teamed up with Google to utilize Google Maps, and I’m sure soon Google Earth, right?), so the speeding frenzy could take place in real time on real streets. It makes me wonder if one day in the near future someone from the open metaverse crashing into you as you rumble along in your physical reality could have a real-world impact? Like those helmets where you die in real life if your avatar kicks the bucket. I could go on and on, but I need to keep moving.

Source Link: https://www.animocabrands.com/

Tim Chang is a partner at the Mayfield Fund, focusing on venture capital investments in consumer internet, digital media, ecommerce/marketplace, and health/wellness. He has twice been named on the Midas list of top 100 VCs. Tim led Mayfield’s investments in MOAT (acquired by Oracle) and Basis (acquired by Intel). Tim was previously a partner at Norwest Venture Partners, where he led NVP’s investments in Playdom (acquired by Disney) and AdChina (acquired by Alibaba). Tim received an MBA from Stanford School of Business. His venture capital specialties include quantified self/life, digital communities/media/gaming, digital health and wellness, smarthome and connected devices/IoT.

Tim also sits on the board of Reimagine, a nonprofit that “exists to help people of all backgrounds face adversity, loss, and mortality, and channel the hard parts of life into meaningful growth and action.” They understand the “science” of PTSD and have come up with a “virtuous cycle” where you’re taught “five steps to becoming the hero of your own journey.” First you’re “educated” on how to flourish. Then you learn how to “regulate” your emotions with the help of a coach. Next you share your grief in a “safe environment,” while AI hoovers up your agony like a meth addict in the background. Finally, you chart new goals and become of service.

To do this God’s work the Reimagine team hired an external software development team to create mock-ups of the entire user experience, implementing an ongoing “co-creation” process with members of their community. The firm incorporated feedback from surveys, interviews, and focus groups into the design, and then partnered with a renowned game designer to craft a seamless user experience for the traumatized data sources. Through this interface subjects will write down the most difficult and intimate details of their anguish, along with goals and actions intended to “snap them out of it,” and create new “journeys” on digital dashboards. Once that program is in place, participants will be nudged back to their forced happy place. The Reimagine board alone is worthy of several pages of analysis, but I need to stay on track.

Here’s a link to their 2021 Impact Report.

Source Link: https://drive.google.com/file/d/1cLTlXbzbWwWkh-fxO3P6BZA-bA9A1ZAO/view Source Link: https://drive.google.com/file/d/1cLTlXbzbWwWkh-fxO3P6BZA-bA9A1ZAO/view

 

Next up is Nichol Bradford. Nichol is the founder and CEO of the Willow Group working to trigger “transformative collisions by providing the most effective and accessible technologies for creating permanent, positive shifts in peace of mind, mental balance, life satisfaction, and happiness.” Their mission is to permanently move a billion people into a state of fundamental well being and flourishing by 2030. Nichol also founded a company called Transformative Tech that claims to be the only global platform connecting founders with investors, resources, and tools to succeed in the “well being tech industry.”

Nichol is a graduate of the Singularity University global solutions program, has an MBA from Wharton School of Business in strategy, and a BBA in marketing from the University of Houston. Bradford is also a fellow of the British American Project. See the quote below taken from that organization’s website: “I want to make the case that a trans-Atlantic leadership network like BAP is exceptionally well positioned to exert a potentially game-changing, tipping-point influence in the future that is unfolding.” The statement was made by Dr. Barbara Stephenson, Dean of the leadership and management school of the Foreign Service Institute who is on the board of the Brandon Marshall foundation for Mental Health and a former member of the Council on Foreign Relations. 

Source Link: http://nicholbradford.com/

Source Link: https://transformativetech.org/ Source Link: https://britishamericanproject.org/

 

So far so good right? I feel closer to ascension already.

 

Moving right along to Linda Adami, founder and CEO of Quantum Temple. Her interests include: Cultural Heritage Preservation, Web3, impact, and inclusion. Linda is an impact strategist who spent 10 years in emerging markets across Central America, South-East Asia, and the Gulf. She graduated with honors from King’s College, London and completed a three-month program at MIT Media Labs. Her company, Quantum Temple, is a Web3 platform “preserving” cultural heritage. Their mission is to develop the first decentralized record of cultural heritage and unlock regenerative impact funding powered by Web3. “We enable equitable exchanges between conscious collectors and ancestral communities through blockchain technology”. 

So I checked out Quantum Temple’s website. In the simplest terms this is what I see. It’s selling NFTs of the essence of cultural heritage sites. You login and connect your wallet, then you validate your identity and start bidding on the little slice of cultural heritage you wish to stick in your wallet. An example of how this works is “On the full moon of the fourth month.” That’s the name of the NFT. 

Here’s the full story:

Once a decade, on the full moon of the fourth month a stone is brought from the village of Manukaya to be bathed in the waters of the sacred pool of Tirtha Empul. The stone is carved with an inscription in old Balinese, a language no longer spoken, deciphered in the 1920s. It records King Chandrabhayasingha Warmmadewa’s construction of the bathing pools in the year 962 AD, on the full moon of the fourth month. Thus the connection between the stone and the pool has remained unbroken for more than a thousand years. The waters from the pool flow to the ancient royal tombs and monasteries. 

Right now the bidding is at 1.0633 eth ($2,000). 

What does one get, and who makes the money from that sale? Forty percent goes to Quantum Temple and another forty percent goes to the “culture keepers” of the temple. In this case it is a young man and his father who keep the temple clean and lead the guests’ sacred cleansing experience. Ten percent goes to the filmmakers who shot the 33-second MP4, which is the NFT. Five percent goes to the “curator of the experience.” I guess he’s the culture scout. The last five percent goes into an impact fund. So the temple experience, now digitally twinned, lives in some collector’s wallet and cultural history is preserved forever.

The End

Source Link: https://quantumtemple.xyz/drop/paths-to-alang%C3%B6 Source Link: https://quantumtemple.xyz/

 

My plan for this piece was to look into the Web3/AI personalities and point out what should be obvious incongruities between what they’re espousing and the idea of spiritual “up-leveling.” I wanted to sort of gloss over the meditation/consciousness camp with a one-size-fits-all wrap-up of their usual bag of labels and objectives. I was trying to get this letter finished, so I didn’t drag it out for multiple days. I guess I was hoping on some level that at least a few of these speakers were actually what they claimed. What I found is that the Web3 crowd is pretty honest about the role they are playing in this grand comedy of horrors. They’re simply draping themselves in the sexy sheer nightie of ultra spiritualism for this particular gig. When they show up at the blockchain bro expo next month I’m sure their language will change, but their objectives will remain dead on. My point being, I was trying to skate quickly past the barefoot, micro-dosing, sound bath whispering light goddesses and just leave them alone, but it wasn’t meant to be.

I stuck Amandine Roche’s name into my search engine, and this is what came out…

Amandine Roche is a human rights lawyer and International peace keeper who promotes wellness through meditation and yoga. The list of people who she’s studied her multitude of practices under is like a who’s who of mindfulness and bliss. 

She studied mindfulness with Jon Kabat Zinn, who created the ‘Stress Reduction Clinic’ and ‘The Center for Mindfulness in Medicine, Healthcare, and Society’ at the University of Massachusetts Medical School. Kabat Zinn earned a PhD in 1971 at MIT in molecular biology. While at MIT, Kabat Zinn was campaigning against the school’s military research, so in order to quell the student protests, MIT appointed him alongside Noam Chomsky and George Katsiaficas to an advisory panel on the future of the university’s military labs. Kabat Zinn is on the board of the Mind and Life institute.

Roche studied altruism and compassion with Mathieu Ricard (also on the board of the Mind and Life Institute). Ricard received his PhD in molecular genetics from the Pasteur Institute in 1972 before deciding to shuck his scientific career, move to the Himalayas, and study Tibetan Buddhism. Ricard has been called the “happiest person in the world” by several popular media outlets, because they all got the same memo. He volunteered to participate in a University of Wisconsin-Madison study on happiness. Suffice it to say, he scored significantly above average. Ricard has spoken at the United Nations as part of the Gross National Happiness Resolution, the Global Economic Symposium, and The World Government Summit, and has been invited to the World Economic Forum ten times.

Roche has also studied with Deepak Chopra, who needs no introduction, but I did take a quick glance at his website and saw that the two most recent additions to his archive are a piece he wrote titled “How AI could set us free,” which was an overly simplistic and incredibly tone deaf piece that concludes “’If AI takes over all the jobs, then that frees up humanity and the obstacles to human potential would be radically lessened… this could amount to a leap in the evolution of consciousness.” His other article was about wireless sensor technology that monitors an individual’s responses while meditating. 

With Roche’s mentor roster and resume of working with the UN, the European Commission, Unicef, UNESCO, UN Women, and USAID, it’s clear to me she’s as big a player in this game as the rest of them.

Source Link: http://amandineroche.com/peace-keeping.html Source Link: http://amandineroche.com/peace-building.html

 

Ok, I think I’ve made my point.

I’m not even close to halfway through the list of speakers, but I’ll leave the rest for anybody else who can stomach it. I really just wanted to point out that many folks in the “higher consciousness-focused” community aren’t grasping the disconnect between what they’re absorbing from their gurus and spiritual teachers and what lies on the other side of that gauzy ethereal curtain. How can a conference of this stature, one with a Web3 mindfulness agenda trumpeting from every line of text and every speaker bio, not set off alarm bells for “the resistance”?

Seeing someone who’s so embedded in the alternative sphere practically getting the vapors over this glossy, highly funded, Deloitte-sponsored, change agent-driven performance piece swathed in all the right newspeak “justice, equity, diversity, and inclusion,” where phrases like “sacred ancient knowledge” and “higher consciousness” are nothing but clickbait…

This conference was the low hanging fruit I needed to finally write something again. To try to draw some eyes away from the influencers on stage, and bring them round back, to the shit show where they might get a glimpse of the gamified, tokenized cybernetic ant computer. Pull back the gauzy spiritual curtain and see the plan is for everyone to live on blockchain and respond to nudges from wireless sensor networks – from their iPhones, from their tutors, from their health coaches, and from their doctors. 

I want people to see that “wellness” is the base layer cost offset for their impact investing. That they want us “happy” in tiny house communities, fifteen-minute SMART villages run by nuclear reactors airdropped in shipping containers at the edge of town. In this future a majority of people will sit in haptic suits and compete for global gig work, because they’ve been convinced that we live in a “post knowledge” era where humans have few skills that can outperform artificial intelligence. In this future our spiritual ascension comes at a high cost –  the complete capture of our every thought, emotion, and desire so the machine can manipulate us into “bliss.” These smug, arrogant change makers from every corner of academia are juicing us up on the idea of there being a utopia around the next corner even as the “open metaverse” stealthily closes in around us. 

Sincerely,

Your Farming Friend Verityhound

 

Tuesday, 09. May 2023

Wrench in the Gears

Documenting For The Record – More Outrage Misdirected At Me By Farber’s Ants

This is getting quite tiresome. I am hearing through the grapevine that Ms. Celia Farber, who by her own account is now reliant on the click bait of Substack for her livelihood, is continuing to repackage vague, inaccurate comments as “content” to fuel her digital hive. I have not gone over to her post to [...]

This is getting quite tiresome. I am hearing through the grapevine that Ms. Celia Farber, who by her own account is now reliant on the clickbait of Substack for her livelihood, is continuing to repackage vague, inaccurate comments as “content” to fuel her digital hive. I have not gone over to her post to see the vitriol directed at me by her commenters. I’m too busy going through Children’s Health Defense’s IRS filings to give my attention to her ant computer. Since she is apparently threatening “legal action,” I am taking the precaution of documenting the fact that the entire hubbub was inadvertently initiated by a fawning commenter. You’ll see below that “Cosmos” now regrets the submission, one that triggered Farber’s emotional hysteria followed by a reflexive mob response. I have no idea how I got dragged back into the comment section, as there is no evidence that I’ve discussed the queen bee since she pulled me into her arena a few weeks ago, though my efforts to shine a light on Children’s Health Defense finances and staffing could make me a juicy target. I’m aware of the AIDS dissident history generally, but anyone who follows my work knows that is not my focus. I have no clue about the various personalities involved in this discussion.

Once again bad behavior on all sides rules the day (except for the one grounded, clear-thinking person out there apparently – Ian Bell). Please, get a grip on yourselves folks. Go out and blow on some dandelions.

Source Link: https://celiafarber.substack.com/p/diana-wests-critique-of-mr-jones/comment/15731034


The Once and Future Thing: A Poem Featuring Camelot, Labyrinths, Alice, and Unanswered Questions about CHD’s Funding

The following was posted as a comment on the blog yesterday, and I am sharing it here with permission. I was tickled by the way the author, who goes by the pen name Meandering Myrmidon, wove hours of my dry content into a charmingly cutting analysis of what is unfolding before our eyes as the [...]

The following was posted as a comment on the blog yesterday, and I am sharing it here with permission. I was tickled by the way the author, who goes by the pen name Meandering Myrmidon, wove hours of my dry content into a charmingly cutting analysis of what is unfolding before our eyes as the Camelot 2.0 campaign gathers steam.

Yesterday, and again this morning, Children’s Health Defense wrote me that they do not share any information about their donors. As someone who worked at a small nonprofit for seventeen years, I know it is standard practice to prepare an annual report with a list of major donors – even if it is only digital and posted to the website. The IRS 990 filing for Children’s Health Defense indicates a budget of $16,000,000 in 2021. They have a huge online media presence and spin-off publishing and video subsidiaries. There is no excuse for them not to have devoted some of those resources to making their funding available for public review.

With their former chairman, now on leave, running for president, I think we can all expect more transparency than that. I’ll be unpacking this in one or more discussions as soon as I’ve finished poring over the spreadsheets I’m putting together of CHD’s publicly available financial information and board lists. There are several interesting findings that merit scrutiny from those inclined to support the Camelot campaign with their time or treasure.

In the meantime, enjoy this witty missive. I’m a workhorse researcher whose prose is functional, but rarely sparkles with wit. I keep saying we need artistry to translate this information into a more palatable form. Myrmidon’s poem delivers in spades. One clarification: if this poem is intended to reflect my work, I want to make it clear that no knights have arrived to make me any offers. For that I am glad, as it would be unseemly, and unchivalrous, for any entity to make such an offer, nor would I ever accept one.

Enjoy!

 

-The Once and Future Thing-

Just before cock crow,
in a darkling hotel lobby,
the spectral father cries:
“The hour’s come, avenge me, Bobby!”

And from his crystal palace,
the dynastic chosen son,
doles oxytocin tokens,
to the wronged and innocent ones.

Nobly wounded, audibly shaken,
from scores of giants that he’s felled,
he whispers to the multitude,
what his father’s ghost did tell:

“The salmon will be hooked,
the well of wisdom will be plumbed,
naught will remain unknown,
if we blockchain everyone.”

“Let us now praise famous men,
may their interest forever accrue,
we’ll accept all colors and proclivities,
we can tokenize them too.”

“In our liberated markets,
everyone will have a share,
on this side we’ll make solutions,
and we’ll make problems over there.”

“With tokenized revolutions,
we’ll spin them round and round,
and the energy they generate,
will power the cities and towns.” 

“Join us, the Impedance,
all free agents and entrepreneurs,
just donate your extra electrons,
oh, mega machine, we’re yours!”

“Come now, let’s reforge,
the pact that was broken,
and pay for this succession,
with this non-fungible token”

Meanwhile, in his tower,
high above the broad Potomac,
Merlyn gazes in his Palantir,
muttering spells alchemical, gnomic.

Director of the spectacle,
playwright extraordinaire,
Merlyn moves his players about,
charging waves hidden in air.

But Little Alice, scrappy mom,
from the City of the Broken Bell,
raised her voice among the throng,
warning all was not so well:

“They say I’m nuts”, said Alice
“they say I’m not quite sane,
for thinking people are not pawns,
and that life is not a game.”

“I’ve been into the labyrinth,
and gathered many threads,
I’ve woven them together,
and this is what they said.”

She then unrolled a tapestry,
showing the lords of every land,
hunting for a quarry,
that they could not understand.

In the center was a unicorn,
the quested after game,
in magenta script thereon was stitched:
“BEHOLD: THE GLOBAL BRAIN”. 

Merlyn, in his tower,
quickly consulted his chart,
of every outcome and income,
to find Little Alice’s part.

He directed a trusted knight,
of the Children’s Health Crusade,
to entice Little Alice,
with the sums she could be paid.

Alas, to no avail,
the crusader wooed in vain,
stubborn and immovable,
Alice’s tale remained the same.

Merlyn, in his wrath, demands:
“who let this thistle grow?,
this pissabed that’s sprouted from,
my neatly tended row”.

So the master of illusion,
broadcasts more at which to stare,
he invokes his spell most tried and true:
“Hey, you, look over there!”

Spectacular scenes emerge,
like toadstools after rain,
hairy men in queenly drag,
war, disease, exploding trains.

All the tricks long utilized,
to hypnotize the peasants,
and I don’t know what happened next,
this hasn’t passed, it’s present.

Poem by Meandering Myrmidon

Children’s Health Defense’s CA Charity Filing for 2021 here
Text sent by Alison McDowell the afternoon of May 7, 2023.
Text response received by Alison McDowell the morning of May 9, 2023

 

 


How Do You Take Your Eggs? A Letter From An Internet Friend Across The Pond

Below is an email shared with permission. I hear there’s a tempest in a teapot with my name on it in some unspecified Substack. Well, I suppose something is benefitting from the energy generated by the emotional outrage machine. I suppose as long as I’m shining a light on Camelot 2.0 that is to be [...]

Below is an email shared with permission. I hear there’s a tempest in a teapot with my name on it in some unspecified Substack. Well, I suppose something is benefitting from the energy generated by the emotional outrage machine. I suppose as long as I’m shining a light on Camelot 2.0 that is to be expected. This was sitting on the back burner for a few weeks, but for any new folks popping over to see what I’m about, I consider what follows a very generous endorsement of my character. Thank you Internet glitch friend.

 

April 2023

Dear Alison,

So on occasion I go through my phone storage and try to delete what is no longer relevant and go over stuff I’ve screen-saved in the moment b/c whatever has spoken to me, etc.

Sometimes things come back around and perhaps have more than one purpose. For example, what I am about to share (because we are all stumbling through the labyrinth of life, on our own and collectively) is like an egg.

An egg has seemingly three separate parts that make up the whole. Do we eat the whole egg? No. We take the part that we like, the part that sustains us, discard the rest and go on our way. Do we give it a second thought? No. Was the shell useless? No. It all served a purpose for what we needed at that moment in time.

Why am I saying all this? I am making this point, because I am sick and tired of irrelevant people on the internet of space who are energy zappers. They offer nothing of substance. They aren’t even a useful shell; they’re worse. They’re like the toothpick that comes along and pokes a tiny hole in the bottom of your egg just to blow out all the nutritional content and leave this empty shell. Some might even add insult to injury and decorate the shell and call it art. Maybe that’s useful for some other people in their time and space, but imagine you were in a group of travelers who desperately relied on those eggs to sustain their bodies nutritionally so that the brain and the body could endure the long complicated journey ahead. When life was less complicated, and lives were more comfortable, predictable, perhaps even a bit boring, the idea of an empty egg with a painted shell might be more entertained. Time and place I suppose.

Or just find your lane. Don’t be a cheerleader for the shell art following a group of folks that crack their eggs right through the middle so you can scream at them for how stupid they are just discarding the shell. In your shell art world sure, the egg eaters might seem so selfish, (shell~fish lol), so careless, so unimaginably weird that they’ve learned how to best use that egg for themselves. But don’t entertain that idea, that concept, that world view. No way. Egg eaters are all about themselves. They aren’t pretty and decorated and put on display for all to put on a shelf and admire. No. Maybe the egg eaters put a few of their eggs together, added some cheddar, chillis, ham, milk, potatoes, so that more people could come together and share new recipes and enjoy the experience together. The egg eaters like to share their recipes with others who might not like that recipe but love this recipe, and so it goes.

But the egg eaters don’t like bad eggs. You know the ones. The eggs that kids might hide in the bottom drawer of the refrigerator a few months before Halloween. Yet another use for an egg. Completely deplorable for the shell decorators, unimaginable to the egg eaters, but to the mischievous energy filled youth…rotten eggs are the PERFECT accessory on Halloween night to be the miserable coup d’etat to some unsuspecting homeowner. The egg rotters, they too see their egg group as the golden egg group, scoffing at the eaters and the artisans. For the purpose of the egg rotters is to thrive in spreading misery towards others. Fortunately this group is few and far between, usually. Does it mean they don’t exist? No. We know they are there, lurking in the shadows, ready to pounce. Generally we each are aware of the other and manage to coexist naturally and maybe rarely if ever cross paths.

Well that being said… if a rotten egg happens to catch your house, sure it stinks. Yes they suck. Do they know you? Probably not. They probably just think it’s funny amongst themselves, and you are just who the flying fickle finger of fate poked that day. To the homeowner that’s worked hard to own and maintain their home, a messy smelling egg can be embarrassing and will want to wash it off immediately at move on. Egg eaters aren’t smelly rotten people so it’s a personal insult to have to now take precious time out of a normally productive and fulfilling day to have to now direct that otherwise forward moving useful energy to this stinking rotten mess. Washed off, yet the negative smell still lingers. The egg rotters have moved on from their follies. Probably haven’t given your house a second thought. It was funny for them at that moment, just that one night, your house with no constructive reason or rhyme, to pelt that egg thru the air, and in its smashed glory to laugh at the success of the aim and move on. Yes, their job is done. They move on.

The eater though wonders why. Why was my house chosen? What did I ever do to them? I don’t even have a son so who pissed them off? Why wasn’t I like my neighbours house who survived the night eggless? Why now does my beautiful house now stink? The eater doesn’t understand it because not only are the rotters the most useless users of the egg, worse, they somehow manage to use their eggs to literally drain your positive, productive most forward and proactive users of the egg people.. the rotters add zero to anyone or anything and yet that little impulse moment of theirs causes such a disruption to others. There’s no point trying to understand these mindless, boundless energetic little imps. They have all this energy to run around destroying things, because they don’t have to think constructively about their actions. They don’t have a creation of their own to show for. They don’t have pride in their project that they can show to others in a year, in two years, in five years etc.. Their destruction is done. After their small group has had their smash and laugh, that’s it. They’re done. Their lives are pointless and maybe they wish they could be like the egg eaters. To have something they could be proud of. A community that grows painstakingly slow, but eventually flourishes beautifully. Day after day, month after month, the egg eaters grow into a group, then a community and hopefully someday a movement. Not everyone understands what the eaters’ purpose is exactly, but being a part of it feels good.

Wanting to join in the community feels purposeful. Maybe individually eaters don’t have all the recipes, maybe the eaters don’t have all the ingredients. Sometimes we don’t even know if we like eating our egg in a new and different way until we try. But we don’t even know there’s a different way to eat that egg until we stumble upon this amazing community that embraces everyone. Sure the eaters are busy with their own eggs, but on Thursday night they all gather around the community stove, crack a few eggs and relish and grow together. Thursday nights renew the community’s energy and purpose. By and large the community feels right. And it grows. For the end all be all purpose is still out there, but at least in this small community, a shared experience is being built. It’s being realised. It’s being a constant in the inconsistency of current life. Maybe it’s just another day for the eater, but for the new member eater it’s opening up a profound new world of tools and ingredients to eventually be confident enough to become creative too and then maybe even be able to also share with the community.

Who knows. I don’t. I don’t even know the first egg eater that started this community. Who is Alison? I don’t know. Who is Donald Trump? I don’t know. Who is Madonna? Who is Stew Peters or Tucker Carlson? Do I personally know any of these people? No. Do I consume their content, yes. Does that make me know them, no. Do I like everything they put out there? No, most likely not. But at some level I am able to gather enough content to sort out in my own head some type of character this person makes up. Sure that character can change over time. We all do. That doesn’t mean at this time because they were this that now they can’t be that. That’s stupid and ridiculous. We are all constantly changing. Hopefully for the better. So as long as that persons’ character whose content I am consuming is seemingly aligning with my personal path, journey through the labyrinth, is resonating my life, opening up my mind, then yes, for that moment, oh yeah I know who Alison is….

She’s that nutty keyboard brainiac in Philly with a husband and a kid and a small house and a huge library. She reads a lot, she talks a lot, she’s super intelligent, but she probably doesn’t realise her own value as it’s just her. She doesn’t go around trying to force people to listen to her or join her dandelion club. Pay to hear this magical opinion…nope. She’s like, hey, there’s a hole in this boat we are all occupying right now and if we put our heads together maybe we can fix it even though the boat is floating in the middle of the ocean and filled with people. If it doesn’t get addressed everyone in the boat is screwed. She doesn’t need praises for spotting the hole or collating various repair manuals. She just knows that fixing the problem is going to take more than just her, and she’s really grateful when someone else also sees the hole and offers to help with the solution. Does that make Alison now somehow eternally tied in gratitude for that individual that came along side to help, no. Sometimes is okay just to appreciate that this person has come across your path for a reason maybe the reason is for yourself or it’s for them. Who knows? I don’t know.

I don’t even know who Alison is. I will probably never ever meet her in real life. If I do, alright, cool, that means the world didn’t end. LoL . Whatever. Am I a brainiac just like her? Ha! Not even close. I do know what stupid is, and I’m not that either. I can however imho feel like I can gather who their character is…or what makes up the value of their character. Am I a mom? Yes. Okay, I can relate. Did I graduate college, yes, okay I can relate. Am I looking for answers to an unknown question or questions? Boom! Do I know who Alison is? No, but I really like her character. What she puts out makes sense to me. What she says has immeasurable value, to me. What she has said in the past is STILL TRUE and still of value…to me. Would we be friends in real life if I lived down her street? Maybe. Best friends, probably not. I respect her. I admire her ability to carry this proverbial torch that we all enjoy the benefit of… Is she replaceable? Probably. But she’s grown on me. And if I just respect and value what this Alison character puts out there I can say yes, I know who Alison is and she’s my friend. She’s one of a kind, unique, qwirky yet stable. She does not make up or spread rumours or lies. Indeed the polar opposite. She lights up when discussing facts and building the base file of totally relevant pieces that need to come together so everyone can see the whole picture.

Can she be attacked just because she’s there, sure. Does it hurt? You bet. Was there a point to that? Idk, maybe. But now that rotten egg has made this ripple, an aftershock if you will to the rest of us in the community who actually do value this Alison character. Her tappity tap tapping keyboard in the background of board meetings as her questions rage onto her targets that do their very best to make so much noise, say so many words yet answer nothing. See, this is why I like Alison. This is why I respect and value this Alison character, and it seriously pisses me the fuck off when miserable negative parasite losers take aim at her. It is not her responsibility to be everyone’s mom just because she is a mom. Nor does she have to validate herself or her worth to each and every one of you. Like the rest of us you either can see from a mile away (or across the pond) that this Alison person is super bright, maybe if I pay attention I can learn something from her that in turn will help me too! Or since she can’t stop her life and coddle my shallow empty invisible spec internet personality then I’ll just throw my rotten egg guts at her. Because I can’t be her, I can just ruin all the good she’s doing. But that ripple rolls out, doesn’t it. And after so many ripples it’s easy to question why even bother? This is so much work, so much energy. Sure I opted to do it because I was interested, then I realised it was important to share, I thought what I was doing was benevolent and being a positive value in the lurking panopticon assembly of doom… Yes, your negative (albeit shallow pointless keyboard stabs) remarks hurt and after enough of them cause a pause and the question of why I even bother.

Well I’ll tell you why, because to loads of anonymous people out there that happen across your content actually do appreciate you. There are those of us scattered across the big floating globe of billions of people-ants that catch that faint pheromone hit and say wait a minute, hold up, there is something familiar and comforting about this scent. And so it is. A few years ago I began my search for answers to the crazy phenomenon that had taken the world by surprise. I knew enough to know it was bullshit even if I couldn’t articulate exactly why. But I knew. Now then in my drive to gather the words to validate and articulate my story I’m having to learn loads of information, people and names, concepts, ideas, facts and opinions. Are there hard answers, yes. Did I find them? No. But I sorted my own path in a way in which made the most sense to me and that’s what matters.

So to make a long story longer – and I’m so sorry, I just thought you might like to hear why you matter or why people even bother listening to you or whatever. It’s just me, your random internet content consumer letting you in on why you matter… to me. To my story. What makes you, YOU in my story and why I find you compelling and valid. By the way this has happened with you on my phone not just this once, but a few times. So much so that I’m like whatever, she’s not exactly lights and whistles or an entertaining production. To appreciate Alison is to tune into her broadcasts, pay attention, hell, here’s a concept-take notes. Go back and re-listen (which I usually have to do with your stuff) try and grasp what is being said, shared, made perfectly available to your very fingertips with ease, because that is what Alison does. It’s like the concert violinist in the subway station. People dismiss the beauty and value of the melody because it’s being shared out of the love and joy of the music. Not being ‘stingy’ or shared ‘narcissistically’ at £1,000.00 /seat to hear me play. People create concrete paths and demand you must make yourself miserable too. It’s how it is you know. My life’s story says life must be hard must be complicated must be difficult so I command you to do the same. What-Ev-Errrr.

Soooooo going back to my phone storage, playing sound bites I deemed to have value, I get a chuckle like yep, this is why I like this Alison character b/c time and time again I get the first clip and I’m like wait, what? What am I supposed to do with that? What’s that supposed to mean for me or to me? Wtf am I watching this shit for if I’m left feeling outraged and filled with unanswered questions? Why tf do I bother?! Ugh! Then seriously, 12 hours later, oh here’s that Alison character. I’m never quite sure what she’s on about but she is on to something though. At least she provokes me to think. At least she gives me a direction to look or search or dig. But sometimes, sometimes you get lucky and she just gives you the answer in plain sight. She doesn’t even know you’re looking for it. But she does give you the answers. I think people are so busy making their own lives more complicated that they don’t take the time to see it’s right there. So here, let me share, because you can enjoy this and know that the work and reach you put out there in the internet abyss is landing on some pretty decent minds.

Literally look at the meta-data screenshots- it was about two years ago, yeah, and literally 12 hours apart…. This still amuses me to this day, and now I hope you get a kick out of it too. Your blast from my past LoL… You know I think I’m going to put them in another email b/c it screws up my phone. 

 

Cheers from the expat American across the pond. Thank you for being you – whoever you are .

Sincerely, your random anonymous internet friend or not or glitch or whatever.

 

My reply: 

Oh my. I’m tickled. What an image you weave with the eggs! Did you know I eat a lot of eggs? We get a dozen in our farm box every week. Two poached on sauteed greens with garlic is my current fave (bring the water in the pan to a rolling boil, drop in a good splash of vinegar, then crack two eggs directly into the water, turn down the heat a bit, after a few minutes scoop each one out with a slotted spoon, and place on a towel to dab them dry, serve over greens with pepper). It would be nice to meet in person. I’m sure we would enjoy one another’s company. Given all the effort, would you like to share this with a broader audience on the blog? It’s fine either way. I do appreciate the time you took to lay out your thoughts and include me in your thinking. We are growing a little community, aren’t we?

 

I ran out of greens, but homemade bread toasted works well, too.

Monday, 08. May 2023

Mike Jones: self-issued

Initial Reanimated JOSE Working Group Specifications Published

Following a call for adoption by the restarted JSON Object Signing and Encryption (JOSE) Working Group, I’m pleased to report that the three initial working group specifications have been published. They are: JSON Web Proof, with abstract: This document defines a new container format similar in purpose and design to JSON Web Signature (JWS) called […]

Following a call for adoption by the restarted JSON Object Signing and Encryption (JOSE) Working Group, I’m pleased to report that the three initial working group specifications have been published. They are:

JSON Web Proof, with abstract:

This document defines a new container format similar in purpose and design to JSON Web Signature (JWS) called a JSON Web Proof (JWP). Unlike JWS, which integrity-protects only a single payload, JWP can integrity-protect multiple payloads in one message. It also specifies a new presentation form that supports selective disclosure of individual payloads, enables additional proof computation, and adds a protected header to prevent replay and support binding mechanisms.

JSON Proof Algorithms, with abstract:

The JSON Proof Algorithms (JPA) specification registers cryptographic algorithms and identifiers to be used with the JSON Web Proof (JWP) and JSON Web Key (JWK) specifications. It defines several IANA registries for these identifiers.

JSON Proof Token, with abstract:

JSON Proof Token (JPT) is a compact, URL-safe, privacy-preserving representation of claims to be transferred between three parties. The claims in a JPT are encoded as base64url-encoded JSON objects that are used as the payloads of a JSON Web Proof (JWP) structure, enabling them to be digitally signed and selectively disclosed. JPTs also support reusability and unlinkability when using Zero-Knowledge Proofs (ZKPs).

Thanks to Jeremie Miller and David Waite for helping us get there!


Damien Bod

Trust phishing resistant MFA for cross tenant users

This article shows how to force phishing resistant authentication for external B2B users using a cross tenant trust. The external users are from a separate tenant and the local tenant needs to trust the multiple factor authentication (MFA) from the other known tenants. This prevents the user having to authenticate again and setup more MFA. […]

This article shows how to force phishing-resistant authentication for external B2B users using a cross-tenant trust. The external users are from a separate tenant, and the local tenant needs to trust the multi-factor authentication (MFA) from the other known tenants. This prevents the users from having to authenticate again and set up more MFA.

Code: https://github.com/damienbod/AspNetCoreAzureADCAE/tree/main/RazorCaePhishingResistant

The demo application requires that administrators use phishing-resistant MFA, and this is implemented using an Azure AD authentication context with a policy. The tenant and the application were set up as shown in the blog:

Force phishing resistant authentication in an ASP.NET Core application using Azure AD

If a local member authenticates using phishing-resistant MFA, the user can access the admin page without a problem.
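
To make the admin requirement concrete, here is a minimal sketch of how such a policy could be enforced in the ASP.NET Core application. It is an illustration only: the claim type “acrs” and the authentication context ID “c1” are assumptions about how Azure AD authentication contexts are typically surfaced in the token, not values confirmed from the linked demo.

using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.DependencyInjection;

public static class PhishingResistantPolicy
{
    // Registers an authorization policy that requires the Azure AD authentication
    // context claim ("acrs") with the ID of the phishing-resistant context ("c1").
    // Both values are placeholders; use the IDs configured in your own tenant.
    public static IServiceCollection AddPhishingResistantAdminPolicy(this IServiceCollection services)
    {
        return services.AddAuthorization(options =>
        {
            options.AddPolicy("PhishingResistantAdmin", policy =>
                policy.RequireAuthenticatedUser()
                      .RequireClaim("acrs", "c1"));
        });
    }
}

The admin Razor page or controller can then be protected with [Authorize(Policy = "PhishingResistantAdmin")], so users whose token lacks the claim are challenged to step up.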

When an external member or a guest user authenticates, they are shown the following page requesting that they authenticate again with an accepted MFA. This is displayed even if the user has authenticated correctly on the home tenant, because no trust exists between the Azure AD tenants.

To fix this, the Azure AD tenant must trust the multi-factor authentication from the other known tenants. This can be implemented using Azure AD External Identities.

Now when the user authenticates, everything will work.

One problem that still exists is with Microsoft accounts: these users cannot authenticate with the step-up requirement.

Links

https://learn.microsoft.com/en-us/azure/active-directory/standards/nist-authenticator-assurance-level-3

https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-mfasettings

Thursday, 04. May 2023

Wrench in the Gears

Pharma-Banking Consultant Andrew Pek Landed in the Executive Suite of Children’s Health Defense this February – What Happened?

Last night I did a stream outlining the work history of Andrew Pek, a long-time corporate consultant specializing in design thinking whose client base centered high-profile multinational pharmaceutical and financial firms. Pek was selected to become Chief Operating Officer for Children’s Health Defense (CHD) in February of 2023 according to an email sent out to [...]

Last night I did a stream outlining the work history of Andrew Pek, a long-time corporate consultant specializing in design thinking whose client base centered on high-profile multinational pharmaceutical and financial firms. Pek was selected to become Chief Operating Officer of Children’s Health Defense (CHD) in February 2023, according to an email sent out to supporters on April 6, 2023. Two months later, Pek was not yet featured anywhere on the CHD website, and his LinkedIn profile had not been updated with his new employment status.

At the end of the stream a person commented in the chat that Pek was no longer with the organization. This came as a shock, since Michael Kane had just interviewed him last Friday, April 28, for Good Morning CHD TV. That short interview, which confirms Pek’s former position at Pfizer, is on Spotify, but not on the CHD TV page. Update: As of this morning, the interview has been removed from Spotify. Fortunately I recorded a few key minutes of the audio, which you can listen to here.

 

Before putting together this presentation, I attempted repeatedly to confirm that the former Pfizer executive and pharmaceutical consultant was indeed the same man who had been hired to join the executive team at CHD. I got an initial reply last Thursday, that someone would check with Andrew and get back to me. I’d asked if he’d come to the organization from DxD Partners. No one ever replied to my follow-up texts.

If Pek is no longer with CHD, that would be an abrupt turn of events, since Kane had only done his “welcome our new COO” interview on Friday (interesting timing). In any event, I feel this information still has value, because it speaks to how the organization is being run. Note that the hire was made in February, before Robert F. Kennedy Jr. and Mary Holland went on leave to run the campaign. Surely they were aware of his employment background. What led them to choose Pek above other candidates, if there were other candidates, is not known. If Pek is no longer an employee, it’s worth asking what led to this unexpected change of heart. Robert F. Kennedy Jr. is now running for president of the United States. I think this episode speaks to his management approach and that of his executive staff.

I mean no ill will towards Mr. Pek. He seems like an accomplished consultant who clearly enjoys the work of flying around the world to rub elbows with c-suite executives, guiding them in unlocking their creative potential to expand the growth of their companies and generate handsome profits. Unfortunately, many of these firms are deeply embedded in the structure of Web3 human capital finance and precision medicine. This is the same structure that seeks to weave natural life into a gamified web of digital extended reality. Thus, as capable as he may be as a “consultant’s consultant” it is hard to imagine Pek was the best fit to steer CHD during a time of transition. Unless, perhaps his hire speaks to issues I continue to raise about social impact finance, “personalized” medicine (vaccinomics), innovative revenue streams for repurposed drugs, and managed wellness protocols related to chronic illness. 

Last night someone in the comments replied in response to another comment noting how hard it was to find information on Mr. Pek and CHD, “…I hope when you Google him and CHD nothing comes up. That would be the best case scenario.” To which I replied, “Maybe the best case scenario is that we all better understand how CHD operates.” I’m sure people are busy scrubbing Mr. Pek out of the picture as we speak. The question is what led them to make that hire in the first place?

I’m pulling out a select group of screenshots that should pique your interest in watching the 1.25-hour-long presentation. You can explore the material I presented by clicking your way through this diagram. Simply choose a circle, and a sidebar will open up with pertinent screenshots and links.

 

Explore the resources gathered here.

Source Link: https://embed.kumu.io/3ec008595b3a47c7691d3cdde931722b#untitled-map?s=bm9kZS1mVTNJeGVWZQ%3D%3D

 

Source Link: https://www.linkedin.com/posts/childrens-health-defense_welcome-andrew-pek-so-glad-to-have-you-activity-7036776227096948737-_Ys-/ Source Link: https://web.archive.org/web/20221210025852/https://www.andrewpek.com/post/andrew-pek-has-worked-with-the-top-firms-consultants-in-the-world-learn-how

 

Source Link: https://web.archive.org/web/20230504123657/https://www.dxdpartners.com/

 

Source Link: https://executiveeducation.wharton.upenn.edu/for-individuals/all-programs/sifma-securities-industry-institute/

 

Source Link: http://documents.sifma.org/uploadedFiles/documents_sifma_org/SII_Board/2018%20SII%20Program.pdf

 

Scroll to April 28 – That episode has been deleted. https://open.spotify.com/show/2kLEnDzpinQoT4innFSe1V

 

Source Link: The interview wasn’t posted on CHD TV. https://live.childrenshealthdefense.org/chd-tv/browse-all/good-morning-chd/

 

Source Link – Interview Deleted https://open.spotify.com/episode/3meFiQuDzfd9hzwRsmufWH

 

This is the link to the April 28 interview by Michael Kane of CHD COO Andre Pek. It went dead on the night of May 3, 2023 after last night’s stream. https://open.spotify.com/episode/3meFiQuDzfd9hzwRsmufWH

My text to confirm the employment history of Mr. Pek. The interview happened within 24 hours of my inquiry.

 


Mike Jones: self-issued

OAuth DPoP specification is in the hands of the RFC Editor

The OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) specification was approved by the IESG and is now in the hands of the RFC Editor in preparation for publication as an RFC. In a related development, the multiple IANA registrations requested by the specification are already in place. As Vittorio Bertocci wrote, “One of […]

The OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) specification was approved by the IESG and is now in the hands of the RFC Editor in preparation for publication as an RFC. In a related development, the multiple IANA registrations requested by the specification are already in place.

As Vittorio Bertocci wrote, “One of the specs with the highest potential for (positive) impact in recent years.” I couldn’t agree more!

The latest version of the specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-16.html

Implement and deploy early and often!
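
For anyone who wants to experiment while the RFC is being prepared, below is a rough sketch of constructing a DPoP proof JWT as described in the specification. The header and claim names (typ “dpop+jwt”, jwk, jti, htm, htu, iat) come from the draft; the key handling, serialization helpers and endpoint URL are simplified placeholders, not a production implementation.

using System;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

class DPoPProofSketch
{
    // Base64url encoding without padding, as used by JOSE.
    static string Base64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    static void Main()
    {
        // Ephemeral EC key; in practice the client keeps this key and reuses it
        // for proofs bound to the same access token.
        using var key = ECDsa.Create(ECCurve.NamedCurves.nistP256);
        ECParameters pub = key.ExportParameters(false);

        // JOSE header: typ "dpop+jwt", the signing algorithm, and the public key as a JWK.
        var header = new
        {
            typ = "dpop+jwt",
            alg = "ES256",
            jwk = new { kty = "EC", crv = "P-256", x = Base64Url(pub.Q.X), y = Base64Url(pub.Q.Y) }
        };

        // Proof claims: a unique jti, the HTTP method (htm) and URI (htu) of the request, and iat.
        var claims = new
        {
            jti = Guid.NewGuid().ToString(),
            htm = "POST",
            htu = "https://server.example.com/token", // placeholder token endpoint
            iat = DateTimeOffset.UtcNow.ToUnixTimeSeconds()
        };

        string signingInput =
            Base64Url(Encoding.UTF8.GetBytes(JsonSerializer.Serialize(header))) + "." +
            Base64Url(Encoding.UTF8.GetBytes(JsonSerializer.Serialize(claims)));

        // ES256 over the signing input; .NET produces the r||s signature format JOSE expects.
        byte[] signature = key.SignData(Encoding.UTF8.GetBytes(signingInput), HashAlgorithmName.SHA256);

        // The resulting compact JWT is sent in the DPoP HTTP header alongside the token request.
        Console.WriteLine(signingInput + "." + Base64Url(signature));
    }
}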

Wednesday, 03. May 2023

Mike Jones: self-issued

First Public Working Draft of Securing Verifiable Credentials using JSON Web Tokens

The First Public Working Draft (FPWD) of the Securing Verifiable Credentials using JSON Web Tokens (VC-JWT) specification has been published. The FPWD milestone is described in the W3C Process document. This draft is another step on the way to a Native JWT Representation for Verifiable Credentials. Please review the First Public Working Draft of VC-JWT. […]

The First Public Working Draft (FPWD) of the Securing Verifiable Credentials using JSON Web Tokens (VC-JWT) specification has been published. The FPWD milestone is described in the W3C Process document. This draft is another step on the way to a Native JWT Representation for Verifiable Credentials.

Please review the First Public Working Draft of VC-JWT. Thanks especially to Orie Steele for making this happen!

Tuesday, 02. May 2023

Justin Richer

What happened to MITREid Connect?

MITREid Connect was, at one time, one of the top open source implementations of OpenID Connect and OAuth 2.0. Written in Java and targeted for enterprise systems in the days before cloud services, it fit a niche and did it well. Where is it today and where is it going? The unofficial OpenID Connect logo I drew one afternoon that became the symbol for the project. How It Started Bac

MITREid Connect was, at one time, one of the top open source implementations of OpenID Connect and OAuth 2.0. Written in Java and targeted for enterprise systems in the days before cloud services, it fit a niche and did it well. Where is it today and where is it going?

The unofficial OpenID Connect logo I drew one afternoon that became the symbol for the project. How It Started

Back in what feels like the long-distant past, I started a research project at my then-employer to investigate federated identity and how it could be applied in our environments.

We started out by building a very slapdash but functional implementation of OpenID 2.0 (and its various extensions, like SREG and AX), and we called it MITREid because I’m not great at coming up with good names. This little experimental server got hooked into our enterprise identity management system, and it did a pretty good job at giving us a single-sign-on experience that wasn’t just prompting for the user’s password and checking against LDAP, like every other app at the time had been doing.

When OAuth 2.0 and OpenID Connect started development, I proposed a research project that would get us involved in the creation of those standards and come up with an implementation of them that we could use ourselves and give back to the community as an open source project. The result of this was the MITREid Connect project, with its flagship OpenID Connect IdP and OAuth 2.0 AS built as a Java Spring application. And as you can see, my skill at naming projects continues to shine here.

How It Went

We built MITREid Connect to fit into much the same kind of environment as the original MITREid OpenID 2.0 server:

No internal user management, this would be handled by the corporate identity management system and plugged in through a service layer
Pluggable authentication, so we could adapt it to whatever the corporate system and our customer environments had in place already
Strict adherence to the standards, and a decent attempt to implement as much of them as we could manage
Deployable in a Java servlet container, which is the kind of deployment we had readily available to us at the time

All of these goals shaped the project that came out of the research effort, and it was decently successful for its time. We released several major revisions of the project, implemented new-fangled security mechanisms like PKCE, and deployed it to both our corporate network and a number of other systems. It gave us login, API access, and a common security layer right at a time when things were shifting away from HTTP Basic Auth for everything.

When the OpenID Foundation started to make their conformance testing available, we were among the first to certify and submit our results. I even have the t-shirt to prove this!

While I was still an employee at MITRE, the MITREid Connect project was funded largely through the research program that started it. When I first founded Bespoke Engineering and started consulting full time, I fully expected that support, development, and deployment of MITREid Connect was going to be one of my major business lines, and that this would help the project continue. This was slightly true at first, with several large clients bringing me on board to help work on larger projects that were making use of MITREid Connect. However, most of my time on these projects was spent not on the MITREid Connect code itself, but on separate or larger parts of the overall project.

Not long after I left MITRE, some folks at MIT offered to host the project as part of a new consortium. This was a good relationship, but they couldn’t offer the kind of support for an engineering team that the MITRE research program once afforded. That consortium eventually folded into something else, and the MITREid Connect project was left on a shelf.

Eventually, even these lines drifted away, and my time was taken up by other efforts, and MITREid Connect became something I engaged less and less with. The handful of people that had worked on it even had plans to re-brand the project away from its MITRE roots, but even that small step never came to fruition.

Somewhere in that time, the world of computing and security shifted radically. MITREid Connect’s model of a heavyweight Java servlet no longer fit with the cloud-native hosted or even semi-hosted services that people are interested in using. Where once it was common for a large company to have its own data centers for computation power, it now seems crazy to not use a hosted cloud environment for your infrastructure. MITREid Connect was made for an on-premises perimeter-driven world, and that world has largely gone.

How It’s Going

MITREid Connect never really made the transition out of that world, and it shows in the code. It’s more than just its packaging as a Java servlet, it’s how sessions and data storage are handled, how scaling would function, and so on and so on. The dependencies stopped being updated, it was never repackaged — at least not by the core development crew — and it was never really brought up to speed with the rest of the world.

These days, MITREid Connect is not actively developed or maintained by any of the core developers, including myself. The main reason for this is simple: the people who were willing to pay for the work stopped asking for it and turned to other solutions. Most groups, including MITRE, have turned to cloud-hosted solutions to handle their entire identity stack. Or they’ve moved to other implementations, both open source and proprietary.

Even so, I wouldn’t call the project abandoned, exactly. It still exists and it still runs, and people still use it. But the core project’s dependencies haven’t been updated in quite some time and there’s nobody in the queue to manage that. The project has been forked a number of times, and some of those forks look like they’re still active, but I honestly don’t know which one I’d point someone to.

In conclusion, if you’ve found MITREid Connect and it works for you, that’s awesome, go to town! It’s immensely customizable and configurable, and the core is stable, solid, and thoroughly tested in more implementations than I know of. But if you’re looking for it to do something new, or to be deployed in your cloud container, I have to say you might want to look towards something else for a solution.


The Dingle Group

Brief for Thinking Locally and Globally for Digital Wallets and Identification

The ability to shift populations and gain (or force) adoption of new public policy positions is linked to the ability of the public sector form of governance.  Over the past 20+ years the potential of truly digital economies and societies have been expounded endlessly but the results have been mixed with a consistent short fall in the meeting of expectations.  High assurant digital i

The following brief was used to form the discussion for the panel on May 2, 2023.

The ability to shift populations and gain (or force) adoption of new public policy positions is linked to the capabilities of the public sector’s form of governance. Over the past 20+ years the potential of truly digital economies and societies has been expounded endlessly, but the results have been mixed, with a consistent shortfall in meeting expectations. High-assurance digital identity appears to provide the final missing element to allow full digitalization of society to take place.

The public sector forms outlined here are generalizations for the purpose of framing a discussion around the design, implementation and adoption of digital wallets and the digital identifiers and credentials stored therein, and to facilitate a discussion on how interactions would or could work when populations of the different forms interact in a global ecosystem.

In each of the forms, a private sector also operates and attempts to influence (to a greater or lesser degree) the setting of public policy. Motivators of the private sector may be greatly different from those of public policy, and as such create a tension or a ‘force’ that may result in failure to realize the expected gains from the policies. The private sector also has a desire to bring digital identity into practice, but its motivators are highly driven by rent extraction, not public policy or (necessarily) the public good. Additionally, there are very powerful market forces for monopolization and control, where those that ‘control’ the source of digital identification can gain very significant rents from society.

Central authority, strong decentralized authority with wary support of population

This form is characterized by the existence of a central authority that constitutionally or legally has constraints on its authority. The central authority typically must work with a network of strong decentralized bodies (state or provincial governments) that retain rights that are not granted to the central authority. The populations of this public sector form can be said to be unruly at times and, in recent history, inclined to be suspicious of public sector forms. English-speaking countries are good representative examples of this form, though even within these there is a spectrum.

Centralized authority, strong decentralized authority with support of population

This form is characterized by the existence of a central authority that constitutionally or legally has constraints on its authority. The central authority must work with a network of strong decentralized bodies (states) that retain rights that may conflict with the central authority. While the populations of this public sector form may have a level of suspicion, concepts of identity cards and papers have long been in use. The European Commission and the Member States of the European Union are a good representative example of this form.

Strong centralized authority with support of population

This form is characterized by a strong central authority that has the ‘final’ say on direction and policy. This authority may delegate some responsibility to an elected body, but retains the right to overrule policy positions that may be put forward by this body. The population in this form is typically supportive of the form, and moves in accordance with the direction of the central authority. Examples of this form would be countries that retain a hereditary monarchy in regions such as the Middle East, Asia and parts of Africa.

Authoritarian

This form is characterized by a central authority whose right and power are absolute. This authority sets direction and policy based on its reasoning alone and is not subject to the demands of sharing power with subsidiary authorities. With this form, support (or lack of support) of the population is not considered. The authority has the coercive power and the willingness to use it to ensure its direction and policies are adhered to.

The adoption of forms of digital identity to realize the benefits of digitalization has been tried before, with at best marginal success and at worst total failure, resulting in significant outlays of public expenditure (e.g., eIDAS v1, Gov.uk) and political embarrassment.

In the current cycle to realize the digital potential, different ‘experiments’ are being (or have been) tried. The Government of British Columbia in Canada started its journey with its Verifiable Organization Network, working with publicly available corporate documentation. In the European Union, under the eIDAS v2 regulation and the European Blockchain Services Infrastructure (EBSI), efforts are underway with use cases in education and in travel and tourism. The Government of Bhutan will shortly be launching its digital wallet and identity effort, having issued its first digital identifier credentials to the Crown Prince of Bhutan in March. Finally, in the US, the Department of Homeland Security is in the fourth year of a five-year process of developing digital Green Card and Right to Work documentation.

Each of these programs has necessarily focused within its own ‘borders’ of the public sector form to develop, learn, and prepare. However, the full realization of the digital potential requires interoperability between all jurisdictions.

In this panel discussion, we have representation from three of the four generalized forms of the public sector. 

In preparation for the discussion please reflect on the following questions:

In your respective public sector forms, how is public policy approaching the motivators of its citizenry and businesses to adopt and use digital wallets and identifiers?

It is human nature to only want to share successes, but in reality, the learning and discovery process requires failures and ‘bumps in the road.’  What are the ‘redprints’[1] from your experiences and observations and what are the learnings gained from these?

Realizing the full potential of digital wallets and identity requires interoperability of digital credentials between legal jurisdictions. How is jurisdictional interoperability being addressed within your respective regions? What programs are in place (or being put in place) to ensure interoperability?

Join us for the discussion: https://www.meetup.com/digital-identity-from-vienna/events/292993136/

[1] Redprints – the corollary of blueprints.  Blueprints show you how to build something, redprints show you what didn’t work.

Monday, 01. May 2023

Damien Bod

Application security context models

This article looks at a new concept for creating application security context models for modern solutions using best practice security. By using security context models in projects, a qualitative and secure authentication solution can be produced quicker and more efficient. At present, too many solution architects try to patch the security architecture after the application […]

This article looks at a new concept for creating application security context models for modern solutions using best practice security. By using security context models in projects, a high-quality and secure authentication solution can be produced more quickly and efficiently. At present, too many solution architects try to patch the security architecture after the application architecture has been completed.

Goal of the model

Modern application security solutions have evolved to utilize a zero trust approach instead of relying on security perimeters. This shifts the focus to authentication and authorization flows rather than network or firewall protection. Security measures are now enforced locally on each component, rather than just at the front door. In order to implement this new paradigm and move away from the traditional boundary thinking, a modeling concept known as security context models can be used. These models provide a framework for understanding and addressing security concerns in modern applications, by placing authentication and authorization at the center of the security architecture model.

Benefits of a security context model

Security architecture is part of the solution and not “fixed” afterwards with weak workarounds.

Security reviews can use the model and produce more output; less time is required for re-modelling the solution.

Complex security flows or incorrect security concepts can be identified early.

Application security support is simplified.

SAST: white-box testing can use the model to focus its scope.

What information is important in a security context diagram?

A security context model

displays interfaces with third party applications and security definitions
displays the authentication and authorization clients
displays the application and delegated client types (app identity or app + user identity)
displays private zones, public zones and security zones
displays the authentication and authorization flow types
has an appendix

Display interfaces with third party Apps and security definitions

The interfaces between the relevant internal and external components should be displayed in a security context model. By displaying the interfaces, further security information can be included. Once the interfaces are included, the different security flows can be identified. Here’s an example displaying only the interfaces. From this model alone, it is clear that only Keycloak clients are used and that the Keycloak identity provider federates to the further identity providers.

Displays the authentication and authorization clients

It is very useful to display the authentication and authorization flow type definitions. You should also specify whether a client is a single-tenant or a multi-tenant client. A security client should be either a delegated or an application type. In Azure, it is a good idea not to mix application and delegated permissions in a single client, as this can lead to security problems.

Display application and delegated client types

The application or the delegated (user) types can be displayed in the model to add clarity to the API access. An access token returned to a UI client flow is always a delegated (app + user identity) access token. If the client credentials flow is used, then the access token is created for the trusted application and no user is involved. You could display the client inside the application and add an application access token to the model, or I sometimes just display the flow type on the connection, making it clear that an application token is used to access the API.

Displays private zones, public zones and security zones

The public or private zones should be displayed in a security context model. Firewalls can also be included to enrich the model but these are not required. Any component which implements part of the authentication or authorization should be included. If it makes it easier to understand the security architecture, it should be included. By including the security zones, you can see clearly which components are accessible from the internet or intranet or separate networks. This does not change the security requirements of each component as each component needs to implement the security anyway (Zero trust strategy).

Displays the authentication and authorization flow types

It is important to include the authentication and the authorization flow types used to acquire access tokens, or access to UI applications. Only standards should be used and a client application should never create its own access tokens. In the following model, the client credentials flow is used to request an application access token to access the API application.
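As a rough, illustrative sketch of the client credentials flow described above (not code from this article; the client ID, secret, authority and API scope below are placeholder values), a confidential client could request an application access token with MSAL like this:

using Microsoft.Identity.Client;

// Sketch only: OAuth2 client credentials flow with a confidential client.
// All identifiers below are placeholders.
var app = ConfidentialClientApplicationBuilder
    .Create("00000000-0000-0000-0000-000000000000")            // placeholder client ID
    .WithClientSecret("secret-from-a-key-vault")               // placeholder secret
    .WithAuthority("https://login.microsoftonline.com/your-tenant-id")
    .Build();

// ".default" requests the application permissions granted to this client.
var result = await app.AcquireTokenForClient(new[] { "api://your-api/.default" })
    .ExecuteAsync();

// The token represents the application identity only; no user is involved.
Console.WriteLine(result.AccessToken);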

Should I include network security components?

Traditional security boundary models were created with the firewall rules and definitions to control the traffic. These models are still required but have a different focus. In an application security context model, only the network components used to implement authentication or authorization components are included. If you use an API manager to implement an OAuth2 client for the API application, then this needs to be included. If you use an application proxy to remove or add HTTP security headers, this should be included as well.

Example Security Context for Web application using BFF (Backend for Frontend)

A Blazor application is implemented using the BFF security architecture. The Blazor application uses a downstream API which authorizes using delegated user access tokens. The API application uses an Azure SQL database. The API and the database are not exposed to the internet.

Security context model appendix

Interface 1 Blazor –  Key Vault

Access using service managed identity with restricted access

Interface 2 API –  Key Vault

Access using service managed identity with restricted access

Interface 3 Blazor – API

OAuth using delegated user access token and scope. The API is restricted for use with the Blazor UI using the Azure App registration clientId

Interface 4 API –  SQL DB

Access using service managed identity with restricted access

Interface 5 Blazor –  Azure AD

OpenID Connect code flow with PKCE, confidential client
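As a minimal sketch of interface 5 (OpenID Connect code flow with PKCE, confidential client), and an illustration only, the Blazor host could be wired up with Microsoft.Identity.Web roughly like this; the usual “AzureAd” configuration section name is an assumption, not a detail from this article:

using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Identity.Web;

var builder = WebApplication.CreateBuilder(args);

// Confidential client using the OpenID Connect code flow; PKCE is used by default.
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));

builder.Services.AddAuthorization();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

app.Run();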

Links

https://learn.microsoft.com/en-us/azure/security/fundamentals/zero-trust

Saturday, 29. April 2023

Wrench in the Gears

Slime Mold, Substrate and Social Media – Guest Post by Quantum Heart Cafe

This is a guest post from Quantum Heart Cafe, the first in a series of letters on Steven Johnson’s book, “Emergence.” If you want additional information on slime mold, be sure to check out a recent guest post, “The Game of Slime,” by Daruchan. April 2023 Dear Alison, A couple of months ago, I was [...]

This is a guest post from Quantum Heart Cafe, the first in a series of letters on Steven Johnson’s book, “Emergence.” If you want additional information on slime mold, be sure to check out a recent guest post, “The Game of Slime,” by Daruchan.

April 2023

Dear Alison,

A couple of months ago, I was visiting a used bookstore looking for the book, “Ender’s Game,” which is a very good novel to read if you want to understand how predatory systems seek to manipulate and gamify children. On the shelves I came across Steven Johnson’s book “Emergence.” I had recently finished reading “The World Sensorium” by Oliver L Reiser, in which he proposes the idea of socially evolving humanity towards a collective consciousness, a world brain. Throughout “The World Sensorium,” Reiser uses the term emergence a lot, so when I saw Steven Johnson’s book, I was drawn to it. When I started reading it a little over a week ago, I understood why. As I read through “Emergence,” I plan to write up a series of letters from the Labyrinth to share important topics presented in the book with readers of this blog. This first letter touches on slime mold, substrates, and social media platforms.

I’m also borrowing the concept of “artificial emergence” proposed by Lorraine Davison in a letter of hers that was previously featured on this blog. I sense there is a natural emergence that comes from the Creator that can be contrasted with the cybernetic drive to wedge an artificial complex system between living beings and the Creator. It is that artificial emergence that I’ll refer to throughout the series of letters I plan to write on Johnson’s book.

Steven begins by talking about how a Japanese scientist, Toshiyuki Nakagaki, trained a being called slime mold to find the shortest route in a maze. One of the reasons this feat garnered so much attention is that slime mold doesn’t have a central nervous system, yet it was able to find the shortest way toward the source of food and solve the puzzle.  Scientists, and others interested in emergence, study slime mold for a couple of reasons. It is an example of how simple components are the foundation for building higher level intelligence, something that is an important feature of complex adaptive systems. Another reason scientists are focused on slime mold, as well as other social creatures like ants and termites, is the coordinated group behaviour they display.  Coordinated group behaviour is an important process for the system to learn as it attempts to steer people in the direction it wants us to go. This is why I think it’s important to remember that political leaders, social media influencers, CEOs, business people etc. are also agents acting on behalf of the system, consciously or not. It seems that their purpose is to create the pheromone trails that will attract us to and keep us on a desired path. This idea will be a continuous theme across these letters.

The obsession with slime mold started in the late 1960s with Evelyn Keller’s research. Keller is a physicist who moved into molecular biology; she wrote her dissertation on non-equilibrium thermodynamics. Keller was also an associate with Sloan Kettering Cancer Center in Manhattan. She used mathematics to understand biological problems. In 1968, Keller met her colleague Lee Segel, who showed her his slime mold. They both began to study the slime mold, and their findings have been used by urban planners, neuroscientists and software engineers as they have sought to replicate natural emergence. Slime mold can be found in damp and moist natural environments, such as the forest floor, where it forages for food. When individual slime mold cells find food, they send signals to neighbouring cells and eventually cluster into a bigger organism. Once the food is gone, or the environment is dry, the slime mold cells disperse into individual cells and move in different directions.

Scientists think that the way slime mold cells signal each other is through a chemical messenger called acrasin, also called cyclic AMP. If an individual sends enough of a signal, other slime mold cells will travel to its location, and they will begin to cluster into an organism. The organism then moves towards the source of a stimulus such as food. At one point, it was thought that slime mold cells had what Steven Johnson calls “pacemaker” cells: these would act as leaders and signal to the other individual cells, who would then pass the message along to neighbouring cells. In his book, Johnson points out that Keller wasn’t able to locate any “pacemaker” cells. He argues that there are no pacemaker cells in emergent systems, but rather that humans have a desire to look for them. He then goes on to say that complex adaptive systems are created by individual agents who have an equal playing field. It is the interactions of individual agents with one another following certain rules that create higher level behaviour.

Keller thought that individual slime mold cells could trigger the formation of an organism by emitting or changing the amount of cyclic AMP in the environment. Those with the most cyclic AMP could attract other slime mold cells, which would then begin to cluster. Mitch Resnick, a contemporary and colleague of Keller, created a simulation game called StarLogo. The game involved using slime mold simulations as a means for students to study self-organizing systems. Students could alter the number of slime mold cells in the environment as well as the level and duration of the cyclic AMP to generate different types of emergent behaviour.

While there may or may not be ‘pacemaker’ cells in natural emergence, I disagree with Johnson’s assertion that there are no pacemaker cells in artificial emergence. I sense there is a predatory system that is trying to place itself between life on Earth and the Creator, and I think it is doing this by placing specific agents and actors within the simulation to try and steer people towards a cybernetic future, towards Oliver Reiser’s world brain.  An important example is the steering and clustering that occurs on social media platforms. I see them as examples of complex adaptive systems, another word for artificial emergence. On these engineered platforms people are supposed to interact with one another and follow the platform’s rules to create higher level behaviours. However, there are people acting as agents on behalf of the system (whether they know this or not) who are put in place to attract and influence people.

In “The World Sensorium,” Oliver Reiser talks about polarization and substrates. He speaks about cellular polarization, which is when cells become polarized and then move towards a source of stimulus. An example would be if I cut my arm, the cells in my body would polarize and begin moving towards the cut, so they could heal my arm. Reiser applies that analogy to human society. He imagines humans could be polarized and be moved via a stimulus, towards the creation of a world brain. Reiser talks about how a substrate is the environment or surface that living organisms inhabit. He understood that a substrate would be needed in order to polarize the cells to move in a certain direction.  In a way, social media platforms act as a substrate for polarizing people who can then be moved towards a certain stimulus. Similar to the slime mold cells, influencers and other system agents are used to create an attraction, then people cluster into groups. Once clustered into these online groups or communities, people can be polarized and steered towards the direction the system desires. Most people interacting on these platforms aren’t aware of the larger system at work. According to Johnson, ignorance of the overall program is an important characteristic of artificial emergence.

For this reason I think it is wise to think beyond specific people or influencers, as Alison encourages us to do, and to consider the system and assess how it acts out its predatory impulses through agents, which it has mostly groomed and selected.

I’m going to end the first letter here. For the next letter I will be talking about ant queens, Manchester and Jane Jacobs’s work.

Sincerely, Quantum Heart Cafe

 

Tuesday, 25. April 2023

Moxy Tongue

House On Fire

More: https://www.moxytongue.com/2021/08/data-mob-rule.html

Monday, 24. April 2023

Phil Windleys Technometria

Internet Identity Workshop Report

Last week was the 36th Internet Identity Workshop. Almost 300 people from around the world called 160 sessions. The energy was high and I enjoyed seeing so many people working on identity talking with each other and sharing their ideas. The topics were diverse but I think it's fair to say that verifiable credentials were a hot topic. And while there were plenty of discussions about technical implem

Last week was the 36th Internet Identity Workshop. Almost 300 people from around the world called 160 sessions. The energy was high and I enjoyed seeing so many people working on identity talking with each other and sharing their ideas. The topics were diverse but I think it's fair to say that verifiable credentials were a hot topic. And while there were plenty of discussions about technical implementations, I think those were overshadowed by sessions discussing credential business models, use cases, and adoption. We should have the book of proceedings completed in about a month and you'll be able to get the details of sessions there. You can view past Book of Proceedings here.

As I said, there were attendees from all over the world as you can see by the pins in the map at the top of this post. Not surprisingly, most of the attendees were from the US (219), followed by Canada (20). Germany, the UK, and Switzerland rounded out the top five with 8, 7, and 6 attendees respectively. The next five, Australia (5), South Korea (3), Japan (3), Indonesia (3), and Colombia (3), showed the diversity with attendees from APAC and South America. Sadly, there were no attendees from Africa this time. Please remember we offer scholarships for people from underrepresented areas, so if you'd like to come to IIW37, please let us know.

In terms of states and provinces, California was, unsurprisingly, first with 101. New York (17), Washington (16), Utah (15), and British Columbia (11) rounded out the top five. Victoria was the source of BC's strong Canadian showing, coming in fifth among cities with 8 attendees after San Jose (15), San Francisco (13), Seattle (12), and New York (10).

The week was fabulous. I can't begin to recount the many important, interesting, and timely conversations I had. I heard from many others that they had a similar experience. IIW37 will be held Oct 10-12, 2023 at the Computer History Museum. We'll have tickets available soon. I hope you'll be able to join us.

Thanks for reading Phil Windley's Technometria! Subscribe for free to receive new posts and support my work.

Friday, 21. April 2023

Talking Identity

The Design of Trustworthy Things

With this year’s Identiverse just over a month away (and the deadline to get the draft of my talk this year swiftly approaching), I was reminded that I never got around to sharing video of the keynote I gave at last year’s conference. It was very kind and a tremendous vote of confidence in me […]

With this year’s Identiverse just over a month away (and the deadline to get the draft of my talk this year swiftly approaching), I was reminded that I never got around to sharing video of the keynote I gave at last year’s conference. It was very kind and a tremendous vote of confidence in me from Andi Hindle to offer me the opportunity to take the keynote stage during an important transition year for the Identiverse conference. I took this chance to bring together a number of thoughts that had been rattling around in my brain based on the work I’ve been doing at Uniken in creating customer identity and security systems for deployment around the world, catering to a variety of verticals, consumer demographics, capabilities, and threat vectors. It coalesced nicely (I believe) into a talk about designing trustworthy systems.

Identiverse 2022 Keynote: The Design of Trustworthy Things

Why did I feel compelled to focus on the topic of trust (and no, it’s not the ‘Zero Trust’ type of trust I’m talking about here)? It’s because there is very clearly a trust crisis permeating every aspect of our lives.  In the world of identity, we saw it recently in the scams and frauds that permeated the NFT world, and in the swift backlash against the IRS’ plans to add face biometrics to its identity proofing flows. We see this crisis of trust all around us—and it has a significant impact on the world that we as identity professionals are responsible for. Trust is something that you earn, something that is given to you. We can’t build trust itself into a digital identity system. But we can, and we must, build systems which are worthy of trust.

In my talk, I attempted to tease out what it means to build trustworthy systems – demonstrating the meaningful ways in which digital identity technologies are woven into the fabric and architecture of such systems, and showing how such trustworthy systems may finally deliver on the promise of financial, social and technological inclusion, dignity and reliability. The feedback was overwhelmingly positive, for which I am tremendously grateful, and I hope you find it useful as well.

Here are links to a number of the resources I shared or referred to in my talk:

Don Norman’s brilliant book on ‘The Design of Everyday Things’, which I highly recommend for all identity practitioners.

A quick primer on Human Centered Design.

Important work by Dr. Jessica Barker on how phishing simulations backfire and erode trust.

The hugely important research and work by ‘Women in Identity’ being done to develop a global identity code of conduct, that I strongly urge every identity practitioner to follow and possibly get involved with.

The Dingle Group

Thinking Locally and Globally for Digital Wallets and Identification

I am happy to announce the first panel discussion series on Digital Wallets - Thinking Locally and Globally for Digital Wallets and Identification. Digital Wallets and Identification are a critical element of a future digital native world. Around the globe, governments are investing in and trialing different concepts while defining the public policy around these capabilities.
Digital Identity from Vienna #36

I am happy to announce the first panel discussion series on Digital Wallets - Thinking Locally and Globally for Digital Wallets and Identification. Digital Wallets and Identification are a critical element of a future digital native world. Around the globe, governments are investing in and trialing different concepts while defining the public policy around these capabilities.

The ability to shift populations and gain (or force) adoption of new public policy positions is linked to the capability of the public sector’s form of governance. Over the past 20+ years, the potential of truly digital economies and societies has been expounded endlessly, but the results have been mixed, with a consistent shortfall in meeting expectations. High-assurance digital identity appears to provide the final missing element to allow full digitalization of society to take place.

Our panelists are:

Cosanna Preston-Idedia, VP Advisory Services, IDLab Canada

Jacques von Benecke, CTO, Druk Holdings & Investments, Bhutan

Bjarke Alling, Principal Advisor, D-Seal

I hope you can join us on May 2nd at 1900 CET / 1300 ET / 1000 PT

Sign up here: https://www.meetup.com/digital-identity-from-vienna/events/292993136/

Wednesday, 19. April 2023

Mike Jones: self-issued

OpenID Presentations at April 2023 OpenID Workshop and IIW

I gave the following presentation at the Monday, April 17, 2023 OpenID Workshop at Microsoft: OpenID Connect Working Group (PowerPoint) (PDF) I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, April 18, 2023: Introduction to OpenID Connect (PowerPoint) (PDF)

I gave the following presentation at the Monday, April 17, 2023 OpenID Workshop at Microsoft:

OpenID Connect Working Group (PowerPoint) (PDF)

I also gave the following invited “101” session presentation at the Internet Identity Workshop (IIW) on Tuesday, April 18, 2023:

Introduction to OpenID Connect (PowerPoint) (PDF)

@_Nat Zone

Comparing the privacy characteristics of ChatGPT and Google Bard: how they handle misinformation about living individuals

I have just obtained beta access to Google Bard…

I have just obtained beta access to Google Bard.

As you may know, ChatGPT’s privacy characteristics have raised quite a few questions, as symbolized by the investigation by the Italian data protection authority, the Garante. One of them is how it handles information about living individuals, in particular its tendency to fabricate incorrect information in its answers. This has called its lawfulness into question under GDPR and other privacy regulations around the world.

So I put ChatGPT and Google Bard to the test.

For the test, I asked about two living individuals: one who is relatively obscure but still searchable, with documents available on the internet, and one who is a prominent figure. In other words, myself and Elon Musk.

Here is the response from ChatGPT.

(Source) ChatGPT <retrieved 2023-04-18>.

It answers at length, and the answer is in fact full of inaccurate information about me [1]. Returning this kind of “false” information can constitute a serious privacy violation and needs to be taken seriously.

In contrast, Google Bard replies like this:

(Source) Google Bard <retrieved 2023-04-18>.

This may be less entertaining for people, but it is probably how things should be: you must not fabricate and return answers about living individuals.

In fact, on the 18th, before I knew how Bard would respond, I was discussing this over lunch with someone who previously worked for the data protection authority of a German state and now runs a privacy-focused law practice. We had just concluded that the only viable approach is the one Bard takes: filter out and simply not return information about individuals [2]. In that sense, too, this is a very interesting case. Conversely, what does ChatGPT intend to do when it receives a subject access request for personal data? Whether it answers that it holds the data or that it does not, it seems to face a thorny path either way.

This round is a clear win for Google Bard. It will be interesting to see how things develop.

Monday, 17. April 2023

Damien Bod

Provision Azure IoT Hub devices using DPS and X.509 certificates in ASP.NET Core

This article shows how to provision Azure IoT Hub devices using Azure IoT Hub device provisioning services (DPS) and ASP.NET Core. The devices are set up using chained certificates created with .NET Core and managed in the web application. The data is persisted in a database using EF Core and the certificates are generated using the […]

This article shows how to provision Azure IoT Hub devices using Azure IoT Hub device provisioning services (DPS) and ASP.NET Core. The devices are set up using chained certificates created with .NET Core and managed in the web application. The data is persisted in a database using EF Core and the certificates are generated using the CertificateManager Nuget package.

Code: https://github.com/damienbod/AzureIoTHubDps

Setup

To set up a new Azure IoT Hub DPS, enrollment group and devices, the web application creates a new certificate using an ECDsa private key and the .NET Core APIs. The data is stored in two pem files, one for the public certificate and one for the private key. The pem public certificate file is downloaded from the web application and uploaded to the certificates blade in Azure IoT Hub DPS. The web application persists the data to a database using EF Core and SQL. A new certificate is created from the DPS root certificate and used to create a DPS enrollment group. The certificates are chained from the original DPS certificate. New devices are registered and created using the enrollment group. Another new device certificate, chained from the enrollment group certificate, is created per device and used in the DPS. The Azure IoT Hub DPS creates a new IoT Hub device using the linked IoT Hubs. Once the IoT Hub is running, the private key from the device certificate is used to authenticate the device and send data to the server.

When the ASP.NET Core web application is started, users can create new certificates and enrollment groups and add devices to the groups. I plan to extend the web application to disable devices, delete devices, and delete groups. I plan to add authorization for the different user types and better paging for the different UIs. At present all certificates use ECDsa private keys, but this can easily be changed to other types. This depends on the type of root certificate used.

The application is secured using Microsoft.Identity.Web and requires an authenticated user. This can be set up in the program file or in the startup extensions. I use EnableTokenAcquisitionToCallDownstreamApi to force the OpenID Connect code flow. The configuration is read from the default AzureAd app settings and the whole application requires authentication. When the enable and disable flows are added, I will add different users with different authorization levels.

builder.Services.AddDistributedMemoryCache();

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddDistributedTokenCaches();

Create an Azure IoT Hub DPS certificate

The web application is used to create devices using certificates and DPS enrollment groups. The DpsCertificateProvider class is used to create the root self-signed certificate for the DPS enrollment groups. The NewRootCertificate method from the CertificateManager Nuget package is used to create the certificate using an ECDsa private key. This package wraps the default .NET APIs for creating certificates and adds a layer of abstraction. You could just use the lower level APIs directly. The certificate is exported to two separate pem files and persisted to the database.

public class DpsCertificateProvider
{
    private readonly CreateCertificatesClientServerAuth _createCertsService;
    private readonly ImportExportCertificate _iec;
    private readonly DpsDbContext _dpsDbContext;

    public DpsCertificateProvider(CreateCertificatesClientServerAuth ccs,
        ImportExportCertificate importExportCertificate,
        DpsDbContext dpsDbContext)
    {
        _createCertsService = ccs;
        _iec = importExportCertificate;
        _dpsDbContext = dpsDbContext;
    }

    public async Task<(string PublicPem, int Id)> CreateCertificateForDpsAsync(string certName)
    {
        var certificateDps = _createCertsService.NewRootCertificate(
            new DistinguishedName { CommonName = certName, Country = "CH" },
            new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) },
            3, certName);

        var publicKeyPem = _iec.PemExportPublicKeyCertificate(certificateDps);

        string pemPrivateKey = string.Empty;
        using (ECDsa? ecdsa = certificateDps.GetECDsaPrivateKey())
        {
            pemPrivateKey = ecdsa!.ExportECPrivateKeyPem();
            FileProvider.WriteToDisk($"{certName}-private.pem", pemPrivateKey);
        }

        var item = new DpsCertificate
        {
            Name = certName,
            PemPrivateKey = pemPrivateKey,
            PemPublicKey = publicKeyPem
        };

        _dpsDbContext.DpsCertificates.Add(item);
        await _dpsDbContext.SaveChangesAsync();

        return (publicKeyPem, item.Id);
    }

    public async Task<List<DpsCertificate>> GetDpsCertificatesAsync()
    {
        return await _dpsDbContext.DpsCertificates.ToListAsync();
    }

    public async Task<DpsCertificate?> GetDpsCertificateAsync(int id)
    {
        return await _dpsDbContext.DpsCertificates.FirstOrDefaultAsync(item => item.Id == id);
    }
}

Once the root certificate is created, you can download the public pem file from the web application and upload it to the Azure IoT Hub DPS portal. This needs to be verified. You could also use a CA-issued certificate for this, if it is possible to create chained child certificates from it. The enrollment groups are created from this root certificate.
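The verification step (proof of possession) works by generating a verification code in the DPS certificates blade and uploading a certificate whose common name is that code, signed by the root. The article does not show this step; a hedged sketch using the same CertificateManager APIs from the classes above might look like this, where the verification code value is a placeholder from the portal:

// Sketch only, assuming the CertificateManager services used elsewhere in this article.
// "verificationCode" is generated in the Azure IoT Hub DPS certificates blade.
var verificationCode = "0123456789ABCDEF";   // placeholder value from the portal

var verificationCert = _createCertsService.NewDeviceChainedCertificate(
    new DistinguishedName { CommonName = verificationCode },
    new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(1) },
    verificationCode, rootCertificate);

// Upload this public pem in the portal to complete the proof of possession.
var verificationPem = _iec.PemExportPublicKeyCertificate(verificationCert);
FileProvider.WriteToDisk("verification.pem", verificationPem);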

Create an Azure IoT Hub DPS enrollment group

Devices can be created in different ways in the Azure IoT Hub. We use a DPS enrollment group with certificates to create the Azure IoT devices. The DpsEnrollmentGroupProvider is used to create the enrollment group certificate. This uses the root certificate created in the previous step and chains the new group certificate from it. The enrollment group is used to add devices. Default values are defined for the enrollment group and the pem files are saved to the database. The root certificate is read from the database, and the chained enrollment group certificate uses an ECDsa private key like the root self-signed certificate.

The CreateEnrollmentGroup method is used to set the initial values of the IoT Hub device. The ProvisioningStatus is set to enabled. This means that when the device is registered, it will be enabled to send messages. You could also set this to disabled and enable it later, when the device is used by an end client for the first time. A MAC address or a serial code from the device hardware could be used to enable the IoT Hub device. By waiting until the device is started by the end client, you could choose an IoT Hub optimized for this client.

public class DpsEnrollmentGroupProvider
{
    private IConfiguration Configuration { get; set; }
    private readonly ILogger<DpsEnrollmentGroupProvider> _logger;
    private readonly DpsDbContext _dpsDbContext;
    private readonly ImportExportCertificate _iec;
    private readonly CreateCertificatesClientServerAuth _createCertsService;
    private readonly ProvisioningServiceClient _provisioningServiceClient;

    public DpsEnrollmentGroupProvider(IConfiguration config, ILoggerFactory loggerFactory,
        ImportExportCertificate importExportCertificate,
        CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext)
    {
        Configuration = config;
        _logger = loggerFactory.CreateLogger<DpsEnrollmentGroupProvider>();
        _dpsDbContext = dpsDbContext;
        _iec = importExportCertificate;
        _createCertsService = ccs;
        _provisioningServiceClient = ProvisioningServiceClient.CreateFromConnectionString(
            Configuration.GetConnectionString("DpsConnection"));
    }

    public async Task<(string Name, int Id)> CreateDpsEnrollmentGroupAsync(
        string enrollmentGroupName, string certificatePublicPemId)
    {
        _logger.LogInformation("Starting CreateDpsEnrollmentGroupAsync...");
        _logger.LogInformation("Creating a new enrollmentGroup...");

        var dpsCertificate = _dpsDbContext.DpsCertificates
            .FirstOrDefault(t => t.Id == int.Parse(certificatePublicPemId));

        var rootCertificate = X509Certificate2.CreateFromPem(
            dpsCertificate!.PemPublicKey, dpsCertificate.PemPrivateKey);

        // create an intermediate for each group
        var certName = $"{enrollmentGroupName}";
        var certDpsGroup = _createCertsService.NewIntermediateChainedCertificate(
            new DistinguishedName { CommonName = certName, Country = "CH" },
            new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) },
            2, certName, rootCertificate);

        // get the public key certificate for the enrollment
        var pemDpsGroupPublic = _iec.PemExportPublicKeyCertificate(certDpsGroup);

        string pemDpsGroupPrivate = string.Empty;
        using (ECDsa? ecdsa = certDpsGroup.GetECDsaPrivateKey())
        {
            pemDpsGroupPrivate = ecdsa!.ExportECPrivateKeyPem();
            FileProvider.WriteToDisk($"{enrollmentGroupName}-private.pem", pemDpsGroupPrivate);
        }

        Attestation attestation = X509Attestation.CreateFromRootCertificates(pemDpsGroupPublic);
        EnrollmentGroup enrollmentGroup = CreateEnrollmentGroup(enrollmentGroupName, attestation);
        _logger.LogInformation("{enrollmentGroup}", enrollmentGroup);

        _logger.LogInformation("Adding new enrollmentGroup...");
        EnrollmentGroup enrollmentGroupResult = await _provisioningServiceClient
            .CreateOrUpdateEnrollmentGroupAsync(enrollmentGroup);
        _logger.LogInformation("EnrollmentGroup created with success.");
        _logger.LogInformation("{enrollmentGroupResult}", enrollmentGroupResult);

        DpsEnrollmentGroup newItem = await PersistData(enrollmentGroupName,
            dpsCertificate, pemDpsGroupPublic, pemDpsGroupPrivate);

        return (newItem.Name, newItem.Id);
    }

    private async Task<DpsEnrollmentGroup> PersistData(string enrollmentGroupName,
        DpsCertificate dpsCertificate, string pemDpsGroupPublic, string pemDpsGroupPrivate)
    {
        var newItem = new DpsEnrollmentGroup
        {
            DpsCertificateId = dpsCertificate.Id,
            Name = enrollmentGroupName,
            DpsCertificate = dpsCertificate,
            PemPublicKey = pemDpsGroupPublic,
            PemPrivateKey = pemDpsGroupPrivate
        };

        _dpsDbContext.DpsEnrollmentGroups.Add(newItem);
        dpsCertificate.DpsEnrollmentGroups.Add(newItem);
        await _dpsDbContext.SaveChangesAsync();

        return newItem;
    }

    private static EnrollmentGroup CreateEnrollmentGroup(string enrollmentGroupName, Attestation attestation)
    {
        return new EnrollmentGroup(enrollmentGroupName, attestation)
        {
            ProvisioningStatus = ProvisioningStatus.Enabled,
            ReprovisionPolicy = new ReprovisionPolicy
            {
                MigrateDeviceData = false,
                UpdateHubAssignment = true
            },
            Capabilities = new DeviceCapabilities
            {
                IotEdge = false
            },
            InitialTwinState = new TwinState(
                new TwinCollection("{ \"updatedby\":\"" + "damien" + "\", \"timeZone\":\"" + TimeZoneInfo.Local.DisplayName + "\" }"),
                new TwinCollection("{ }")
            )
        };
    }

    public async Task<List<DpsEnrollmentGroup>> GetDpsGroupsAsync(int? certificateId = null)
    {
        if (certificateId == null)
        {
            return await _dpsDbContext.DpsEnrollmentGroups.ToListAsync();
        }

        return await _dpsDbContext.DpsEnrollmentGroups
            .Where(s => s.DpsCertificateId == certificateId).ToListAsync();
    }

    public async Task<DpsEnrollmentGroup?> GetDpsGroupAsync(int id)
    {
        return await _dpsDbContext.DpsEnrollmentGroups
            .FirstOrDefaultAsync(d => d.Id == id);
    }
}

Register a device in the enrollment group

The DpsRegisterDeviceProvider class creates a new device chained certificate using the enrollment group certificate and creates this using the ProvisioningDeviceClient. The ProvisioningTransportHandlerAmqp transport is set in this example. There are different transport types possible and you need to choose the one which best meets your needs. The device certificate uses an ECDsa private key and everything is stored to the database. The PFX for Windows is stored directly on the file system. I use pem files and create the certificate from these in the device client that sends data to the hub; this is platform independent. The created PFX file requires a password to use it.

public class DpsRegisterDeviceProvider
{
    private IConfiguration Configuration { get; set; }
    private readonly ILogger<DpsRegisterDeviceProvider> _logger;
    private readonly DpsDbContext _dpsDbContext;
    private readonly ImportExportCertificate _iec;
    private readonly CreateCertificatesClientServerAuth _createCertsService;

    public DpsRegisterDeviceProvider(IConfiguration config, ILoggerFactory loggerFactory,
        ImportExportCertificate importExportCertificate,
        CreateCertificatesClientServerAuth ccs, DpsDbContext dpsDbContext)
    {
        Configuration = config;
        _logger = loggerFactory.CreateLogger<DpsRegisterDeviceProvider>();
        _dpsDbContext = dpsDbContext;
        _iec = importExportCertificate;
        _createCertsService = ccs;
    }

    public async Task<(int? DeviceId, string? ErrorMessage)> RegisterDeviceAsync(
        string deviceCommonNameDevice, string dpsEnrollmentGroupId)
    {
        int? deviceId = null;
        var scopeId = Configuration["ScopeId"];

        var dpsEnrollmentGroup = _dpsDbContext.DpsEnrollmentGroups
            .FirstOrDefault(t => t.Id == int.Parse(dpsEnrollmentGroupId));

        var certDpsEnrollmentGroup = X509Certificate2.CreateFromPem(
            dpsEnrollmentGroup!.PemPublicKey, dpsEnrollmentGroup.PemPrivateKey);

        var newDevice = new DpsEnrollmentDevice
        {
            Password = GetEncodedRandomString(30),
            Name = deviceCommonNameDevice.ToLower(),
            DpsEnrollmentGroupId = dpsEnrollmentGroup.Id,
            DpsEnrollmentGroup = dpsEnrollmentGroup
        };

        var certDevice = _createCertsService.NewDeviceChainedCertificate(
            new DistinguishedName { CommonName = $"{newDevice.Name}" },
            new ValidityPeriod { ValidFrom = DateTime.UtcNow, ValidTo = DateTime.UtcNow.AddYears(50) },
            $"{newDevice.Name}", certDpsEnrollmentGroup);

        var deviceInPfxBytes = _iec.ExportChainedCertificatePfx(newDevice.Password,
            certDevice, certDpsEnrollmentGroup);

        // This is required if you want PFX exports to work.
        newDevice.PathToPfx = FileProvider.WritePfxToDisk($"{newDevice.Name}.pfx", deviceInPfxBytes);

        // get the public key certificate for the device
        newDevice.PemPublicKey = _iec.PemExportPublicKeyCertificate(certDevice);
        FileProvider.WriteToDisk($"{newDevice.Name}-public.pem", newDevice.PemPublicKey);

        using (ECDsa? ecdsa = certDevice.GetECDsaPrivateKey())
        {
            newDevice.PemPrivateKey = ecdsa!.ExportECPrivateKeyPem();
            FileProvider.WriteToDisk($"{newDevice.Name}-private.pem", newDevice.PemPrivateKey);
        }

        // setup Windows store deviceCert
        var pemExportDevice = _iec.PemExportPfxFullCertificate(certDevice, newDevice.Password);
        var certDeviceForCreation = _iec.PemImportCertificate(pemExportDevice, newDevice.Password);

        using (var security = new SecurityProviderX509Certificate(certDeviceForCreation,
            new X509Certificate2Collection(certDpsEnrollmentGroup)))
        // To optimize for size, reference only the protocols used by your application.
        using (var transport = new ProvisioningTransportHandlerAmqp(TransportFallbackType.TcpOnly))
        //using (var transport = new ProvisioningTransportHandlerHttp())
        //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.TcpOnly))
        //using (var transport = new ProvisioningTransportHandlerMqtt(TransportFallbackType.WebSocketOnly))
        {
            var client = ProvisioningDeviceClient.Create("global.azure-devices-provisioning.net",
                scopeId, security, transport);

            try
            {
                var result = await client.RegisterAsync();
                _logger.LogInformation("DPS client created: {result}", result);
            }
            catch (Exception ex)
            {
                _logger.LogError("DPS client created: {result}", ex.Message);
                return (null, ex.Message);
            }
        }

        _dpsDbContext.DpsEnrollmentDevices.Add(newDevice);
        dpsEnrollmentGroup.DpsEnrollmentDevices.Add(newDevice);
        await _dpsDbContext.SaveChangesAsync();

        deviceId = newDevice.Id;
        return (deviceId, null);
    }

    private static string GetEncodedRandomString(int length)
    {
        var base64 = Convert.ToBase64String(GenerateRandomBytes(length));
        return base64;
    }

    private static byte[] GenerateRandomBytes(int length)
    {
        var byteArray = new byte[length];
        RandomNumberGenerator.Fill(byteArray);
        return byteArray;
    }

    public async Task<List<DpsEnrollmentDevice>> GetDpsDevicesAsync(int? dpsEnrollmentGroupId)
    {
        if (dpsEnrollmentGroupId == null)
        {
            return await _dpsDbContext.DpsEnrollmentDevices.ToListAsync();
        }

        return await _dpsDbContext.DpsEnrollmentDevices
            .Where(s => s.DpsEnrollmentGroupId == dpsEnrollmentGroupId).ToListAsync();
    }

    public async Task<DpsEnrollmentDevice?> GetDpsDeviceAsync(int id)
    {
        return await _dpsDbContext.DpsEnrollmentDevices
            .Include(device => device.DpsEnrollmentGroup)
            .FirstOrDefaultAsync(d => d.Id == id);
    }
}

Download certificates and use

The private and the public pem files are used to set up the Azure IoT Hub device and send data from the device to the server. An HTML form is used to download the files. The form sends a post request to the file download API.

<form action="/api/FileDownload/DpsDevicePublicKeyPem" method="post">
    <input type="hidden" value="@Model.DpsDevice.Id" id="Id" name="Id" />
    <button type="submit" style="padding-left:0" class="btn btn-link">Download Public PEM</button>
</form>

The DpsDevicePublicKeyPemAsync method implements the file download. The method gets the data from the database and returns it as a pem file.

[HttpPost("DpsDevicePublicKeyPem")]
public async Task<IActionResult> DpsDevicePublicKeyPemAsync([FromForm] int id)
{
    var cert = await _dpsRegisterDeviceProvider.GetDpsDeviceAsync(id);

    if (cert == null) throw new ArgumentNullException(nameof(cert));
    if (cert.PemPublicKey == null) throw new ArgumentNullException(nameof(cert.PemPublicKey));

    return File(Encoding.UTF8.GetBytes(cert.PemPublicKey),
        "application/octet-stream",
        $"{cert.Name}-public.pem");
}

The device UI displays the data and allows the authenticated user to download the files.

The CertificateManager and the Microsoft.Azure.Devices.Client Nuget packages are used to implement the IoT Hub device client. The pem files with the public certificate and the private key can be loaded into an X509Certificate2 instance. This is then used to send the data using the DeviceAuthenticationWithX509Certificate class. The SendEvent method sends the data using the IoT Hub device Message class.
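The SendEvent helper is referenced but not shown in this post; a minimal sketch of what such a helper might look like (my assumption, not the article’s implementation) is shown here, ahead of the device client snippet that follows:

// Sketch only: send a single telemetry message using the IoT Hub Message class.
// Assumes "using Microsoft.Azure.Devices.Client;" and "using System.Text;".
static async Task SendEvent(DeviceClient deviceClient)
{
    var payload = "{ \"temperature\": 21.5 }";   // placeholder telemetry
    using var message = new Message(Encoding.UTF8.GetBytes(payload))
    {
        ContentType = "application/json",
        ContentEncoding = "utf-8"
    };

    await deviceClient.SendEventAsync(message);
    Console.WriteLine("Telemetry sent to the IoT Hub.");
}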

var serviceProvider = new ServiceCollection()
    .AddCertificateManager()
    .BuildServiceProvider();

var iec = serviceProvider.GetService<ImportExportCertificate>();

#region pem

var deviceNamePem = "robot1-feed";
var certPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-public.pem");
var eccPem = File.ReadAllText($"{_pathToCerts}{deviceNamePem}-private.pem");
var cert = X509Certificate2.CreateFromPem(certPem, eccPem);

// setup deviceCert windows store export
var pemDeviceCertPrivate = iec!.PemExportPfxFullCertificate(cert);
var certDevice = iec.PemImportCertificate(pemDeviceCertPrivate);

#endregion pem

var auth = new DeviceAuthenticationWithX509Certificate(deviceNamePem, certDevice);
var deviceClient = DeviceClient.Create(iotHubUrl, auth, transportType);

if (deviceClient == null)
{
    Console.WriteLine("Failed to create DeviceClient!");
}
else
{
    Console.WriteLine("Successfully created DeviceClient!");
    SendEvent(deviceClient).Wait();
}

Notes

Using certificates in .NET and Windows is complicated due to how the private keys are handled and loaded. The private keys need to be exported from or imported into the certificate stores. This is not an easy API to get working and the docs for this are confusing.
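For context, the PemExportPfxFullCertificate / PemImportCertificate calls in the device client above work around exactly this: on Windows, a certificate built with X509Certificate2.CreateFromPem can carry an ephemeral private key that some APIs cannot use. A rough sketch of the equivalent raw .NET workaround (an illustration, not the article’s code; the pem file names are placeholders) is:

using System.Security.Cryptography.X509Certificates;

// Sketch only: round-trip a pem-loaded certificate through PFX bytes so the
// private key is persisted in a form the Windows APIs can work with.
var certPem = File.ReadAllText("device-public.pem");        // placeholder paths
var privateKeyPem = File.ReadAllText("device-private.pem");

var pemCert = X509Certificate2.CreateFromPem(certPem, privateKeyPem);
var usableCert = new X509Certificate2(pemCert.Export(X509ContentType.Pfx));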

The device transport type and the default setup for the device would need to be adapted for your system. In this example, I used ECDsa certificates, but you could also use RSA-based keys. The root certificate could be replaced with a CA-issued one. I created long-lived certificates because I do not want the devices to stop working in the field; the validity period should be moved to configuration. A certificate rotation flow would make sense as well.

In follow-up articles, I plan to save the events to hot and cold paths and implement device enable and disable flows. I also plan to write about device twins, which are an excellent way of sharing data in both directions.

Links

https://github.com/Azure/azure-iot-sdk-csharp

https://github.com/damienbod/AspNetCoreCertificates

Creating Certificates for X.509 security in Azure IoT Hub using .NET Core

https://learn.microsoft.com/en-us/azure/iot-hub/troubleshoot-error-codes

https://stackoverflow.com/questions/52750160/what-is-the-rationale-for-all-the-different-x509keystorageflags/52840537#52840537

https://github.com/dotnet/runtime/issues/19581

https://www.nuget.org/packages/CertificateManager

Azure IoT Hub Documentation | Microsoft Learn


reb00ted

Growing the Fediverse

If you owned Twitter, how would you grow it? I think you have basically two choices: You can try to grow the number of users who use Twitter on a regular basis. You can try to increase the number of times each user tweets and browses others’ tweets. Is that the same for the Fediverse? Turns out the answer is a resounding No! We have far more options. Yep, we can try an

If you owned Twitter, how would you grow it? I think you have basically two choices:

You can try to grow the number of users who use Twitter on a regular basis.

You can try to increase the number of times each user tweets and browses others’ tweets.

Is that the same for the Fediverse?

Turns out the answer is a resounding No! We have far more options.

Yep, we can try and grow the number of users who regularly use the Fediverse.

Yep, we can try to increase the number of times each user posts and reads other people’s posts. (Except that we don’t actually want to do that so very much, otherwise the Fediverse would have to become addictive, and not wanting to be manipulated is one of the things that got us off the big centralized social platforms in the first place.)

We can do more things than just “tweeting” in the Fediverse. Because unlike on Twitter, every Fediverse-enabled app can – and usually does – add new features that go way beyond just posting “What’s happening”. And there are so many of them already! PeerTube for video, PixelFed for photos, FunkWhale for audio, Mobilizon for events, OwnCast for streaming, Bookwyrm for reading, Micro.blog for blogging and more … and all interoperating on the same network. Each new feature like that grows the Fediverse, as I can now do more things, and will if it is easy enough. Each of those tools grows the use of the Fediverse with the same number of users and the same number of “tweets”. Practically, Twitter does not have this option.

Which, together, enables entirely new categories of use cases that go way beyond what a social media platform like Twitter has ever done (or dreamt of!), or what any one Fediverse app can do on its own. Many of those will turn out to be much more valuable and useful than merely sharing “What’s happening”. And they will be possible because all of those apps can be combined in various ways on the same, free, ad-free, manipulation-free, open, rich communications network called the Fediverse.

For this to work, however, we need to work to make the interactions between the various Fediverse apps much smoother than they are today. Yes, of course, ActivityPub lets us send notes and likes and comments and various other things around from one app to another.

But much more can and should be done, such as:

Seamless single-sign-on across the Fediverse apps that I’m using. So if I see a Mobilizon event shared on Mastodon, I want to be able to click “Yes, RSVP, I’m coming” and identify myself with my Mastodon-issued Fediverse handle. There is no reason why I should have to sign up for a separate Mobilizon account first.

The same, of course, should be true about videos, music, bookmarks, and so many other things in so many Fediverse apps. (Before you get concerned, yes, you should be able to have multiple non-connected “personas”, like work and play, in the Fediverse, but we still need single-sign-on within the scope of a single persona.)

My social network should come with me when I use different Fediverse apps. So if I decide to start using Pixelfed, I should be able to instantly follow all the photo streams of all the friends I’m following somewhere else, such as on Pleroma or Mastodon. After all, if I’m interested in the photos that my friend Joe shares, why would I make a distinction whether he shares them through one app or the other? The Fediverse can do this if we want to; no other social network can do this.

And and and…

Imagine when data flows freely, data of many, rich kinds, across an open network, directly from your friends to you. Not just some social media data, but most data you care about! With nobody in the middle to manipulate you, or charge you, or advertise to you, or censor you … and no unnatural hoops that you need to jump through.

We got to work on that last bit, and on that note, I’m very happy that a Fediverse Developer Network is coming together with participation from many excellent Fediverse developers working on a variety of apps. Our goal here is to connect the people well who work on this, so their apps can connect well, too! It’s early days, but very promising.

Sunday, 16. April 2023

Identity Woman

Thoughtful Biometrics 2023 takeaways

The 2nd Thoughtful Biometrics Workshop happened a month ago, on March 16th, 2023, and we had 75 people registered for this live event. The goal was to bring together a range of stakeholders with an interest in exploring the use and application of biometrics for an uncommon conversation – one not happening in venues sponsored […] The post Thoughtful Biometrics 2023 takeaways appeared first on Ident

The 2nd Thoughtful Biometrics Workshop happened a month ago, on March 16th, 2023, and we had 75 people registered for this live event. The goal was to bring together a range of stakeholders with an interest in exploring the use and application of biometrics for an uncommon conversation – one not happening in venues sponsored […]

The post Thoughtful Biometrics 2023 takeaways appeared first on Identity Woman.

Wednesday, 12. April 2023

Joe Andrieu

Fighting for Consensus at the W3C

I am formally appealing to the Director of the W3C to deny the decision by the DID WG Charter chairs to override dissent about the inclusion of DID methods in the scope of the next DID WG Charter, as embodied in part in the decision to merge PR#20 and PR#27 into the DID WG Charter. Enabling the DID WG to develop specific DID methods will actively undermine the work's fundamental goal of decentrali
Fighting for Consensus at the World Wide Web Consortium (W3C)

5.5. Chair Decision and Group Decision Appeals

When group participants believe that their concerns are not being duly considered by the group or the Chair, they may ask the Director (for representatives of a Member organization, via their Advisory Committee representative) to confirm or deny the decision. This is a Group Decision Appeal or a Chair Decision Appeal. The participants should also make their requests known to the Team Contact. The Team Contact must inform the Director when a group participant has raised concerns about due process.

Any requests to the Director to confirm a decision must include a summary of the issue (whether technical or procedural), decision, and rationale for the objection. All counter-arguments, rationales, and decisions must be recorded.

From “W3C Process Document” https://www.w3.org/2021/Process-20211102/

It’s unfortunate that I find myself formally appealing a decision by the W3C Decentralized Identifier Working Group (DID WG) chairs. Simply put, the decision was not made according to the applicable W3C Process, which requires seeking consensus through selecting proposals with the weakest technical objections.

The decision to appeal was not made lightly. It will force into the public view a disagreement between parties who generally have high mutual regard and respect. I personally think the DID WG chairs are well-intentioned and doing their best to resolve the challenges of the underlying work. However, in this particular case, we have a failure to secure consensus and an unfortunate autocratic imposition of the chairs’ position on a group in significant dissent.

I got involved in this particular stream of work after attending the first ID2020 and Rebooting the Web of Trust II, after over ten years in the user-centric identity movement as championed by the Internet Identity Workshop and ProjectVRM. I became (and remain) an editor for Verifiable Credentials Use Cases and Use Cases and Requirements for Decentralized Identifiers and the DID Method Rubric, all W3C publications. I am treasurer and board member of Rebooting the Web of Trust, a collaborative writing workshop that incubated early Decentralized Identity technologies like DIDs and VCs. My business is a small consulting company that helps clients better understand the human requirements for decentralized identity systems and applications. I also created the DID Directory and host a podcast about DID methods, The Rubric. In short, I have over 15 years invested in bringing this technology to realization, as a consistent contributor to the shared work of the broader community and as a leader in both W3C and external efforts.

After years of serving as an invited expert to the VCWG and DIDWG, we joined the W3C this year specifically to oppose the Chairs’ decision in this matter.

Technical Background

While this appeal does not rely on a technical argument for overturning the chairs’ decision, our primary technical reasoning is straightforward.

Enabling the DID WG to develop specific DID methods will actively undermine the work’s fundamental goal of decentralization.

A New Architecture Building on the Old Architecture

We have an opportunity in decentralized identifiers to reinvent outdated architectures that rely on, and extend, the innate authority of centralized actors. The World Wide Web is an amazing achievement in operational decentralization. Unfortunately its naming system is fundamentally based on centralized authority, and therefore subject to manipulation and abuse in ways that decentralized systems are designed to avoid. DIDs represent an opportunity for identifiers to be first class citizens for the World Wide Web, without needing to defer to any centralized authority. Not IANA. Not DNS. DIDs provide a way for anyone to stand up an identification system and for anyone to use that system. It is, in its way, a profound restructuring of what identity means in digital systems. Combined with Verifiable Credentials, this emerging identity system provides a decentralized way for individuals to engage digitally with promise of greater privacy, improved user control, and increased respect for our humanity.

A fundamental innovation of DIDs relative to public key cryptography is the level of indirection between the identifier and the cryptography, with different DID methods providing different means for managing this level of indirection.

DIDs resolve to DID Documents, where resolution is defined by the DID method specified in the DID itself. The data required to determine the current authoritative DID document for a given DID is stored in the Verifiable Data Registry of that method.

Consider the DID did:example:abc. This is a well-formed DID using the “example” method. The mechanism to turn that DID into a DID document based on that DID method’s Verifiable Data Registry is called resolution.

Figure 1. DID Architecture

First, client software sends the DID to a resolver expected to handle that particular method (“example”). The resolver then uses operations defined by the “example” method to retrieve salient data from the Verifiable Data Registry. Finally, the resolver produces a well-formed DID document and returns that to the client. The client can then interpret the DID document to use the current cryptographic and service endpoint information for that DID.

Because of this level of indirection between the DID and the DID document, DID controllers can update the DID’s cryptographic material without changing the identifier itself (and without relying on a trusted third party). This means that normal, operational security actions like key rotation and revocation can be implemented without requiring direct communication with relying parties. The VDR acts as the intermediary. In fact, the DID controller does not even need to know who is relying on the DID; there is no requirement for direct communication about updates to the DID document. In contrast, updating private key information with traditional public/private key cryptography requires communicating the new public key to all dependent parties.

By design, DIDs enable any form of Verifiable Data Registry. As long as a method can specify how you perform the Create, Read, Update, and Deactivate operations for a DID using that VDR, it’s a valid method. This has allowed DID methods based on bitcoin (did:btcr and did:ion), ethereum (did:ethr and did:elem), bespoke blockchains (did:v1 and did:cosmos), Amazon’s QLDB, and even public keys actually stored in the DID itself (did:key and did:jwk). This fundamental goal of allowing any type of VDR means that interoperability stops at the resolver, which itself provides the integration between resolver clients and a method’s VDR, allowing any method to define its own mechanism for transforming a DID into its DID document, using any kind of VDR.
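To make the resolver boundary concrete, the contract could be sketched roughly as follows (an illustration only; the type names are mine and do not come from any specification):

using System.Collections.Generic;
using System.Threading.Tasks;

// Illustrative sketch: the resolver is the interoperability boundary.
// Client software depends on this contract; each DID method supplies its own
// implementation that knows how to read its Verifiable Data Registry (VDR).
public interface IDidResolver
{
    // Takes a DID such as "did:example:abc" and returns the current DID document.
    Task<DidDocument> ResolveAsync(string did);
}

// A deliberately minimal stand-in for a DID document.
public record DidDocument(string Id, IReadOnlyList<string> VerificationMethodIds);

// A hypothetical resolver for the "example" method.
public sealed class ExampleMethodResolver : IDidResolver
{
    public Task<DidDocument> ResolveAsync(string did)
    {
        // A real implementation would query the example method's VDR here.
        return Task.FromResult(new DidDocument(did, new[] { $"{did}#key-1" }));
    }
}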

Parallels to HTML and HTTP

The architecture of DID methods is a lot like the architecture of the web itself, where browsers dereference URLs to access webservers that interact with a backend to provide a resource back to the browser. By design, the browser doesn’t need to know how the web server and back end communicate, nor how the data is stored in the backend. All the browser needs to know is the interface: HTTP and HTML.

Figure 2. Web Architecture

The equivalent is true for DIDs. With a standard for DID resolution, any DID client would have a standard mechanism (resolution) that can be provided by any conformant DID method, allowing the retrieval and use of DID documents without intimate knowledge of how the VDR works.

That’s the route to interoperability for DIDs, just as it was and is for the web.

Progress to date

We built the first two components of the DID system in the DID Core 1.0 specification, namely, a URL syntax for DIDs and a data model for DID documents, which provide the cryptographic material for verifiable interactions related to that DID. In effect, this is like defining URLs (RFC3986) and HTML (originally RFC1866). At the time of the group’s initial chartering, we intentionally left DID resolution out of scope, instead merely defining a contract for what DID resolution should provide (without specifying how it is to be provided). Resolution is how one takes a given DID and retrieves the associated DID document. Resolution is the HTTP of DIDs. It takes a well-formed URL (the DID) and specifies how you process bits sent & received over the wire to get the resource indicated (the DID document in this case).

This is the next step in the work: defining an interoperable mechanism for resolving DIDs for different DID methods. Under the hood, resolution is necessarily method-specific because each method is free to use any verifiable data registry (VDR), and, inevitably, different registries have different mechanisms for creating, reading, updating, and deactivating DIDs. While it will be impossible to have a common specification for all potential VDRs (unless you force all methods to use the same VDR mechanism), it is possible to define an interface that all methods could provide, in order to allow client software to resolve a DID and ignore the method-specific details. This is how we get interoperability between DID methods.

This is a stark contrast to the idea that the next step in the work is standardizing specific DID methods. Some have argued that developing a specific DID method would somehow improve interoperability. However, we believe the only interoperability benefit we get from standardizing a specific DID method is either (1) for different implementations of that single method or (2) imposing the mechanisms of that DID method on other methods. Focusing on a single method does not, and cannot, improve interoperability between methods, especially those based on fundamentally different VDRs. As a decentralized technology, imposing the mechanisms of any given method on other methods is antithetical to the point of the work. In a truly decentralized system, anyone can define a new DID method and it should “just work”. That means interoperability between DID methods is the fundamental priority. We achieve that by focusing on a shared, standardized mechanism for DID resolution rather than on specific methods.

Summary of Technical Objection

Having the DID WG standardize specific DID methods creates moral hazards that will inevitably reduce the decentralization of DIDs themselves. Whichever methods the group chooses will attain unprecedented acceleration and impact. Those methods will demand increased attention from the WG, taking away attention that could go toward advancing all DID methods. That attention will also increase the WG's familiarity with how the chosen methods solve certain problems, and those approaches will necessarily already have the blessing of the WG, making them preferred favorites over approaches used by other methods. In short, giving the DID WG permission to focus its attention on selected favorites would directly undermine its ability to advance the work with equal respect for all methods.

Procedural Objection

We also object on procedural grounds to the inclusion of standards-track documents for unstated new specifications in a so-called “maintenance charter”. A working group chartered for maintenance should not be creating new standards-track documents for arbitrary, unspecified DID methods. Maintenance groups should be focused on maintenance, not creating new specifications, especially unnamed specifications. Like the technical argument for decentralization, we do not rely on this procedural flaw in this appeal.

Formal Request to Deny Chair Decision

I am formally appealing to the Director of the W3C to deny the decision by the DID WG Charter chairs to override dissent about the inclusion of DID methods in the scope of the next DID WG Charter, as embodied in part in the decision to merge PR#20 and PR#27 into the DID WG Charter.

The decision is best summarized in the comment merging in PR#27

The DID WG chairs have met. We have concluded that merging PR #27 will produce a charter that best represents the consensus of the DID Working Group participants over the past year.

Despite many, many discussions, no other proposal within the group has garnered as much support as this one has for the creation of a new DID Working Group, which will focus on maintaining Decentralized Identifiers (DIDs) v1.0 and publishing errata updates to that specification. The charter also permits the next DID WG to begin work on DID Methods and DID Resolution, but does not require it, e.g., the group that forms may do as much or as little work on these optional topics as desired.

Now, regarding the other open PRs, #29 and #30:

The predominant concern raised has been with the inclusion of DID Methods as optional work in this charter. The inclusion of DID Methods in the next chartered DID WG was part of the W3C Director’s decision to overturn the Formal Objections raised against the publication of Decentralized Identifiers (DIDs) v1.0. The chairs feel that that decision represents the consensus of the W3C and as such the inclusion of DID Methods as optional work in this charter is absolutely necessary. So we have concluded that PR Remove mention of DID Methods from the charter #29 is no longer relevant to this discussion. We are leaving it open for comments, but they will not change the decision at this point.

There have been suggestions for an alternative group focused exclusively on DID Resolution. We support the effort to create a DID Resolution Working Group that focuses entirely on DID Resolution, as that will strongly support the work of the DID Working Group, which must focus on the DID Specification. We encourage the proponents to develop their proposal and present it to W3M and the AC Forum as a new Working Group. However, because it would not be a DID Working Group, we have concluded that PR Proposal: Charter for the DID Resolution WG (as a replacement to current proposal) to continue the work of DID 1.0 #30 is no longer relevant to this conversation. We are leaving it open for those who wish to continue designing a separate DID Resolution-focused WG, but we do not expect such work to change the decision at this point.

2023/03/15 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1470920464

I am appealing on five points, all of which highlight the chairs' failures with respect to W3C Process.

1. The PRs lacked consensus
2. Consensus was not legitimately sought
3. Claims of “W3C Consensus” are erroneous, out of scope, and irrelevant
4. This sets an untenable precedent
5. Proposals with the weakest objections should be favored

I discuss each of these below.

The PR lacks consensus

Both staff and chairs have acknowledged that adding DID methods to the charter's scope lacks consensus.

@brentzundel: I am merging this PR over their objections because having the flexibility to go this route could be vital should our efforts to move DID Resolution forward be frustrated.

2022/09/21 https://github.com/w3c/did-wg-charter/pull/20#issuecomment-1253912348

@pchampin: We clearly didn’t manage to reach consensus here

2023/01/11 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1379253408

@brentzundel: There is no consensus at this stage, nor do I believe it is possible to achieve it.

2023/01/23 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1401199754

The lack of consensus does not appear to be in question.

Consensus was not legitimately sought

The W3C 2021 Process document places consensus at the core of the W3C:

Consensus is a core value of W3C. To promote consensus, the W3C process requires Chairs to ensure that groups consider all legitimate views and objections, and endeavor to resolve them, whether these views and objections are expressed by the active participants of the group or by others (e.g., another W3C group, a group in another organization, or the general public).

In the section on managing dissent:

Groups should favor proposals that create the weakest objections. This is preferred over proposals that are supported by a large majority but that cause strong objections from a few people.

And, in the section on deciding by vote:

A group should only conduct a vote to resolve a substantive issue after the Chair has determined that all available means of reaching consensus through technical discussion and compromise have failed, and that a vote is necessary to break a deadlock.

Unfortunately, the chairs did not fulfill these obligations to use all available means of reaching consensus through technical discussion.

They avoided the issue. They denied the issue. Ultimately, they did not lead a technical discussion in an effort to find proposals that create the weakest objections.

1. Avoidance
2. Denial
3. Deference to political priorities over technical discussion

Avoidance

Despite consistent efforts by myself and others who share my concerns, the chairs' actions have had the effect of avoiding dissent rather than working with the group to find the proposal with the weakest objections. They ignored Github issues, held no conference calls, created a false impression of engagement, and failed to respond to direct outreach.

Ignoring Github issues

After the initial discussion at TPAC, @brentzundel merged PR#20 on Sep 21, 2022, which put DID methods in scope, with the note that this was against objections voiced at TPAC. I commented in opposition to that merge on October 25, 2022, requesting the PR be reverted. To this date, neither chairs nor staff have even bothered to respond.

No group phone calls nor email discussion after TPAC

After the initial objections were acknowledged at TPAC there were neither group conference calls nor any outreach to the mailing list on this matter. Instead of facilitating a conversation to reach a consensus, the chairs acted as if there were no significant issues for the group to discuss.

False engagement and misdirection

After lengthy discussion on the PR#27 in question (largely by others), @brentzundel created another PR, #29, with the stated goal of finding the proposal with the “best consensus”, as introduced in his comment on #27:

@brentzundel: Out of respect for the opposing positions stated here, I have opened a separate PR that removes all mention of DID Method Specifications #29.

Whichever PR achieves greater consensus, as determined by the chairs of the current DID WG, will be merged.

2023/01/24: https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1402233965

He did in fact create PR #29, which also led, at the request of @TallTed and @brentzundel, to PR#30 to shift the entire focus of the charter to DID Resolution. However, neither of these alternatives was actually given serious consideration by the chairs:

@brentzundel: It was never my understanding that a DID Resolution WG would replace the DID WG. I do not support that course of action. If folks are interested in pursuing a DID Resolution WG Charter separate from the DID WG Charter, the place to do it is not here.

2023/03/22: https://github.com/w3c/did-wg-charter/pull/30#pullrequestreview-1339576495

And again, in the comment announcing the chairs' decision:

@brentzundel: However, because it would not be a DID Working Group, we have concluded that PR Proposal: Charter for the DID Resolution WG (as a replacement to current proposal) to continue the work of DID 1.0 #30 is no longer relevant to this conversation. We are leaving it open for those who wish to continue designing a separate DID Resolution-focused WG, but we do not expect such work to change the decision at this point.

2023/03/24: https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1470920464

Unfortunately, although the chairs' obligations under the process were to consider all opposing positions and to find the proposal with the weakest objections, @brentzundel made it clear that when counter proposals were offered, they were not welcome. They were not discussed. They were not brought to the attention of the working group. They were summarily dismissed.

Radio Silence via other channels

Despite my reaching out to the chairs via email in a personal letter, they never directly responded or engaged in any fruitful dialog on this matter. The email was cited in the same comment as the previous quote, @brentzundel's comment on #27:

@brentzundel I want to especially thank Joe for reaching out directly and engaging in a private conversation where we discussed this PR, the future of the WG, and the reasons we became involved in this work in the first place.

2023/01/24 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1402233965

Frankly, @brentzundel’s assertion that “we discussed” anything is a fiction. He did not respond via email nor phone call nor any channels other than communicating his directives through Github comments.

After the recent VCWG face-to-face meeting in Miami, I pulled Brent aside to talk in person for a moment after everything wrapped. In that brief in-person conversation, I affirmed my disagreement with his current position. He restated his belief that this work isn’t “subject to consensus”, which I briefly debated. During this exchange, Brent said “I don’t care. I just don’t care.” He then asked me, as a point of clarification, if I intend to Formally Object if the current PR is merged. I repeated the same position I have made clear since TPAC in September. Yes. I would not only Formally Object, I would use all ethical and lawful means to oppose this decision, including filing an appeal of his decision to override dissent.

Denial

Brent has repeatedly denied that consensus applies to charter development:

@brentzundel : There are no WG consensus requirements in establishing the text of the charter.

2022/12/15 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1353537492

While Pierre-Antoine has no problem acknowledging that consensus matters:

@pchampin We clearly didn’t manage to reach consensus here.

2023/01/11 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1379253408

Surprisingly, in that same comment from Brent, he confirms my position:

@brentzundel Consensus must be found by the WG for any items developed in the WG, regardless of what the charter allows.

2022/12/15 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1353537492

Somehow he wants to be able to act as the chair in this matter, using his authority as chair to direct the output of the group towards a particular charter with particular language, but he simultaneously denies that the Charter is an “item” developed by the WG, despite it being done with DID WG resources, by the DID WG participants, under the leadership of the DID WG chairs.

In person, in various GitHub issues, and on Working Group calls, when faced with diverse opinions, Brent's all-too-common response has unfortunately been that “consensus doesn't apply.”

Deference to political priorities over technical discussion

A significant frustration during this discussion was the repeated deference to non-technical factors rather than focusing on technical distinctions. Rather than debating what would make the most technical sense, the vast majority of arguments for the chairs’ position were political. Not once did the Chairs put forth any arguments other than deference to the perceived requirement from the Director.

Political factors are inevitable, but it is through technical discussion that the work of the W3C moves forward. The nature of standardization is the forging of alignment between competing interests so that the net result delivers the greatest value. While we may have different tests for what constitutes “value”, it is understood that some collective notion of shared benefit, as perceived by the working group, is the driving goal of collaboratively working through our differences. Each participant brings their perspectives to the table, and through technical discussion the working group finds consensus around a shared specification that the W3C can formally recommend to the public as a global standard.

From the process document regarding consensus and dissent:

Groups should favor proposals that create the weakest objections. This is preferred over proposals that are supported by a large majority but that cause strong objections from a few people.

https://www.w3.org/2021/Process-20211102/#managing-dissent

Takeaway point: the group should favor the proposals with the weakest objections.

And

A group should only conduct a vote to resolve a substantive issue after the Chair has determined that all available means of reaching consensus through technical discussion and compromise have failed, and that a vote is necessary to break a deadlock.

https://www.w3.org/2021/Process-20211102/#Votes

Takeaway point: resolving dissent should occur through technical discussion.

Later in the document, it discusses how a group formally addresses an issue:

In the context of this document, a group has formally addressed an issue when it has sent a public, substantive response to the reviewer who raised the issue. A substantive response is expected to include rationale for decisions (e.g., a technical explanation, a pointer to charter scope, or a pointer to a requirements document). The adequacy of a response is measured against what a W3C reviewer would generally consider to be technically sound.

https://www.w3.org/2021/Process-20211102/#formal-address

Takeaway point: Adequacy of a response depends on technical soundness.

And the following from Section 5.4 Reopening a Decision When Presented With New Information:

The Chair may reopen a decision when presented with new information, including:

* additional technical information

* comments by email from participants who were unable to attend a scheduled meeting,

* comments by email from meeting attendees who chose not to speak out during a meeting (e.g., so they could confer later with colleagues or for cultural reasons).

https://www.w3.org/2021/Process-20211102/#WGChairReopen

Takeaway point: reopening a decision is appropriate for new technical information not new political information (unless by email from participants who didn’t get their input in earlier).

In summary:

1. The group should favor the proposals with the weakest objections.
2. Resolving dissent should occur through technical discussion.
3. Adequacy of a response depends on technical soundness.
4. Reopening a decision is appropriate for new technical information, not new political information.

I interpret this to mean that resolving consensus per the W3C process SHOULD depend on considering competing proposals based on technical discussions and selecting the proposal that has the weakest technical objections.

That did not happen in this case.

Three alternative PRs were under consideration.

PR#27, the PR that explicitly added DID methods as in scope. https://github.com/w3c/did-wg-charter/pull/27

PR#29, a PR alternative suggested by @brentzundel that removed DID methods from the scope, with minimal other changes. https://github.com/w3c/did-wg-charter/pull/29

PR#30, a PR alternative suggested by @christopherallen that put forth a DID Resolution WG charter to focus the group on the next step in interoperability and to avoid the alleged mandate from the Director that the next DID WG MUST standardize DID methods. https://github.com/w3c/did-wg-charter/pull/30

PR#27 elicited several technical objections, including multiple participants who made it clear that both Formal Objections and an Appeal would be filed if this charter is put forth.

PR#29 triggered several statements of preference, on both sides, but only one technical objection.

PR#30 triggered a few statements of preference, on both sides, but no technical objections.

In the face of this evidence, both #29 and #30 had weaker technical objections than #27.

On Technical Objections

There is a legitimate question about what constitutes a technical objection. I rely on the process statement that responding to a formal objection must be what “a W3C reviewer would generally consider to be technically sound”.

Technical issues raised (all quotes from Github):

1. There is no DID Method ready for standardization. (technology is not ready)
2. Standardizing DID Methods is like standardizing websites. (layer violation)
3. There is a technical route to meet the letter and the spirit of the prior formal objectors’ request for further implementation guidance. It includes showing the transport layer using the DID Resolution specification, including its test cases, educating the formal objectors regarding what was achieved, and moving specification of example DID Methods outside this WG’s work items. (there’s a better route to technical interoperability)
4. We don’t want to give the impression that some DID methods are more “important” or “required” than others. (decentralization means no privileged parties)
5. We are told there is a mandate to include standardization of DID methods into the working group charter, but the parties that mandate this inclusion have not been actively involved in the WG process in the past, and there is no evidence that they plan to be involved in the future. This lack of active participation raises questions about the legitimacy of their mandate and the potential impact that these objections may have on the future of this standardization process. (consensus-driven technical standardization depends on participation)
6. If we work on any DID method/s, will this put it/them in a “blessed” state, against which no other DID method/s can possibly compete? I highly doubt it. Though, if it does, that would tend to indicate that all desirable features are fully optimized in that method; else, another method would be registered eventually if not immediately. (Dispute of technical issue #4)

Political Issues raised (all quotes from Github):

1. A given did method deserves its own working group (a different group would be better able to manage its own work)
2. I believe it is the case is that the WG must put forward a charter that will not be the subject of a Formal Objection again. (Avoiding Formal Objections is a political goal.)
3. It has been made abundantly clear by those who formally objected to the DID Specification v1.0 that in their view, a future charter that doesn’t include possible DID Methods will be rejected. (Avoiding rejection is a political goal.)
4. There is documented evidence that we were requested, or at least strongly encouraged (“should”), to include DID methods in the next charter. (Deferring to that suggestion in the face of dissent is a political choice.)
5. This does NOT say we will specify any DID methods. It does NOT say we will not specify anything to do with DID Resolution. It leaves the door open to either or both, which flexibility I and the (I believe, substantial) majority of the group … have talked about as both desirable and likely to be needed. (“Flexibility is desirable” is about retaining political authority, not a technical argument that such flexibility will improve the technology.)
6. There were formal objections and comments made to DID-CORE 1.0 expressing concerns around DID methods. but those objections were overruled with a comment that those concerns will be addressed in the future work. If recharter happens now, it would be appropriate to respect the above and mention DID methods in the charter. (Deference to prior objections is political, not technical.)

So there were both political and technical issues raised by various individuals. However, the question that the process requires the group consider is which proposal had the weakest technical objections. Let us look at the Github conversation on these three PRs.

The following table lists all of the group participants who commented on PRs #27, #29, and #30 (noting that we aren’t clear if the staff liaison should be counted as a participant with regard to consensus):

Those that commented relative to a particular PR are given:

* “+” for a preference for that PR
* “FO” for the intention to file a Formal Objection if that PR is merged
* “-” to indicate opposition to that PR
* “NC” to indicate no comment made on that particular PR

Participant | #27 | #29 | #30 | Comments
brentzundel | + | NC | – | “The DID WG chairs have met.” “The technical objection to changing this charter into a charter for DID Resolution was outlined by @pchampin above.”
TallTed | + | – | + | “I have not said I would be lodging [an FO] in any case”
rxgrant | FO | + | + | “If we allow DID Method specifications under this change to the charter, this kind of disagreement is going to consume the WG. That is a technical issue that must be addressed.”
jandrieu | FO | + | + | “My only recourse if this language remains in scope is to file a Formal Objection, which will have its own unfortunate consequences in the ensuing public debate”
kdenhartog | + | – | NC | “While I’m not actively invested in this discussion” “I don’t believe only resolution will go far enough because the content that can be returned by a DID document is arbitrary and loosely defined.”
OR13 | – | + | + | “I approve #29 and request changes on #27.” “I don’t think defining methods is required to achieve interoperability.” “I don’t believe the WG should attempt to standardize DID Methods. I believe doing so undermines the principle of decentralization, and although it might seem contradictory, interoperability at the core data model layer.”
Sakurann | – | -/+ | NC | “I don’t believe that rechartering should happen now.”, “If … I would be ok [with DID methods] not being in scope.”
peacekeeper | + |  | NC | “I slightly prefer #27 (see my comment #27 (comment)) but could also live with this here.”
burnburn | + | NC | NC | “Suggest ‘developed’ instead of ‘adopted’.”
christopherA | FO | NC | + | “To be clear, I am considering a formal objection … should there be a chair decision to merge this PR.”
pchampin | + |  | – | “I personally prefer #27, but I won’t object to [#29] if it helps moving forward.”, “I don’t think we can get away with [#30]”

Summary

 | #27 | #29 | #30
Formal Objections Signaled | 3 | 0 | 0
Comments in favor | 5+1 | 3.5 | 5
Comments opposed | 5 | 2.5 | 2

From this evaluation of the working group’s Github conversation, we see #30 has the weakest objections, followed by #29, with #27 having the strongest objections of all three proposals. This count includes both political and technical objections to make the strength of dissent even clearer. If we were to count only those objectors who cited technical arguments, the results would be even more favorable to PR#30.

 | #27 | #29 | #30
Formal Objections Signaled | 3 | 0 | 0
Comments in favor | 5+1 | 3.5 | 5
Technical Objections Raised | 5 | 1 | 0

The decision making process led by the chairs did not “favor proposals that create the weakest objections”, a violation of W3C Process, Section 5.2.2 Managing Dissent https://www.w3.org/2021/Process-20211102/#managing-dissent

I want to highlight a particular fact about these numbers. They represent a snapshot of a brief and incomplete conversation by the Working Group. One of the primary tenets of this appeal is that the chairs did not lead a discussion for the consideration and improvement of either of these alternative PRs. Once dissent was voiced, all conversation was effectively shut down. Instead, the working group should have further discussed the options represented by #29 and #30 and iterated on both to find the best single charter for the group to proceed with. It is the failure to have this conversation—coupled with the expectation to blindly follow the directive of the chairs—that is the process violation at the heart of this appeal.

It is evident from comments in the discussion and in their final decision that any alternative to the chairs' interpretation of the Director's comments in responding to the DID Core Formal Objections would be ignored. Quite simply, the chairs treated a suggestion from the Director as a mandate to override dissent instead of working together as a group to discover a better option.

Claims of “W3C Consensus” are erroneous, out of scope, and irrelevant

In announcing their decision to continue to override dissent (in the merging of #27) @brentzundel said:

@brentzundel: The DID WG chairs have met. We have concluded that merging PR #27 will produce a charter that best represents the consensus of the DID Working Group participants over the past year.

2023/03/15 https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1470920464

Unfortunately, the chairs do not cite the basis of their assertion that #27 will produce a charter that best represents the consensus of the group. We have shown in this appeal that the discussion on Github does not justify such a conclusion.

Brent continues:

The inclusion of DID Methods in the next chartered DID WG was part of the W3C Director’s decision to overturn the Formal Objections raised against the publication of Decentralized Identifiers (DIDs) v1.0. The chairs feel that that decision represents the consensus of the W3C and as such the inclusion of DID Methods as optional work in this charter is absolutely necessary.

Unfortunately, it is not the chairs' role to determine the consensus of the W3C, nor is it stated anywhere in the Process that chairs MUST defer to suggestions the Director made in response to previous Formal Objections to a different work product. In fact, as clarified by Philippe Le Hégaret in regard to the Director's response to those prior objections (in personal email):

The “should” is NOT a “must”, otherwise it would say so explicitly in the Director’s decision. In other words, the Working Group is allowed to disagree with the direction set by the Director.

Philippe Le Hégaret, personal email, 2023/03/23

The chairs' interpretation of the Director's response as a mandate is contrary to W3C Process. There is no requirement from the Director in this matter. The chairs incorrectly claim that there is, stating that a W3C Consensus exists that somehow overrides the group's independent authority. They have used this claim as the fundamental justification for overriding dissent.

The claim of a W3C Consensus is erroneous because the W3C as a whole never had an opportunity to voice their position on the claimed consensus. The response from the Director was a statement resolving an immediate set of objections, stating his suggestion for future work. It was not a statement of W3C consensus requiring the DID WG to charter a new DID WG with methods in scope.

The claim of W3C Consensus is out of scope because the Chairs do not have the authority to assess consensus and speak on behalf of the W3C.

The claim of W3C Consensus is irrelevant because the statement by the chairs is justifying an action by the Chairs regarding an output of the WG. It is not about an action of the W3C.

Even if the W3C as a whole did, in fact, have some sort of consensus on this topic, that alone would not restrict the group from having a different consensus. Of course, a working group may reach a different consensus than the W3C as a whole—after all, these are two different groups of participants with different opinions, perspectives, and decision making processes. As such, the consensus of the W3C as a whole is necessarily NOT the same as the consensus of the Working Group. Claims of W3C consensus by the chairs are not only outside their remit; such a “W3C Consensus” should not be taken to require that the WG reach the same conclusion.

Claiming anything about a supposed “W3C consensus” that restricts the actions of the WG is erroneous, out of scope, and irrelevant.

This sets an untenable precedent

The W3C is at a crossroads. Built on trust in the benevolence of Sir Tim Berners-Lee, the organization is wrestling with adjusting its process and governance bodies away from a strict dependence on the Director to make major decisions. I appreciate that the challenge the organization is facing is like changing the engines on an aircraft in flight. It’s hard. We are in uncharted territory as the organization explores improvements to the Process to deal with this inevitable transition. However, confirming deference to staff as justification for the disregard of legitimate dissent is both inconsistent with the tradition and culture of the W3C and incompatible with an organization whose premise relies on technical cooperation and collaboration. From the W3C mission statement https://www.w3.org/Consortium/mission:

Our Identity “We are an open forum where diverse voices from around the world and industries come together, incubate and build consensus for global standards for web technologies.”

Our Strategic Goals

* Improve efforts on new technology incubation, making it more structured and improving consensus-building among key stake holders.
* Achieve worldwide participation, diversity and inclusion, establishing W3C as representative of the worldwide community.
* Further improve the environment for facilitating balance, equity and cooperation among the participants from different industries, user groups and organizational sizes.
* Increase involvement of under-represented key stakeholders such as end users, content creators, developers etc.

Emphasis mine.

It is our opinion that if the W3C chooses to interpret staff suggestions as mandates that override working group consensus it will fail to achieve the explicit goals of the organization’s mission.

Shutting down conversation and ignoring dissent does not

* provide an open forum.
* improve consensus building.
* achieve participation, diversity and inclusion.
* facilitate balance, equity and cooperation among participants.
* increase involvement of under-represented key stakeholders.

From the W3C Process Document

W3C work revolves around the standardization of Web technologies. To accomplish this work, W3C follows processes that promote the development of high-quality standards based on the consensus of the Membership, Team, and public. W3C processes promote fairness, responsiveness, and progress: all facets of the W3C mission.

The W3C Process promotes the goals of quality and fairness in technical decisions by encouraging consensus, soliciting reviews (by both Members and public), incorporating implementation and interoperability experience, and requiring Membership-wide approval as part of the technical report development process.

https://www.w3.org/2021/Process-20211102

The W3C Process Document lays out the principles and the practices that enable the W3C to legitimately develop standards based on the consensus of its contributors.

If the Director confirms the Chair decision to ignore dissent on this matter, it will set the precedent that Working Group chairs are free to ignore the W3C process should their own opinion coincide with a suggestion from staff. This will transform the organization into a politically driven autocracy that lacks the moral authority of legitimate consensus.

Instead of assisting the W3C in its transition away from a top-down, autocratic governance based on the moral authority of Sir Tim Berners-Lee to something more meritocratic, confirming the Chairs' decision would establish that decisions at the W3C flow from staff down and not from the collaborative, technical discourse of its members.

Proposals with the weakest objections should be favored

The process document states:

Groups should favor proposals that create the weakest objections. This is preferred over proposals that are supported by a large majority but that cause strong objections from a few people.

https://www.w3.org/2021/Process-20211102/#managing-dissent

By this metric, PR#29 or #30 should have prevailed. If you exclude the staff liaison from the tally, you have an equal measure of preference and opposition to #27, including three signaled intentions to formally object, plus my express intention to use every lawful and ethical means at my disposal to oppose the inclusion of DID methods in the DID WG charter.

PR#27 did not even get support from a majority of the group, and it absolutely caused strong objections from several of us. Even if you include @pchampin’s position, we barely have a majority in support of #27, with strong objections against it. A bare majority and several strong objections from a few people does not meet the standard of the Process.

Although I have read the Chairs' statement several times, I do not see how they could legitimately argue that the favored proposal has the weakest objections from participants in the working group.

Summary

I am asking the Director to reconsider and overturn the DID WG chairs' decision to ignore dissent and force DID methods to be in scope for the next WG charter proposed by the Working Group.

Since the chairs are claiming a mandate from the Director as the justification for their decision, I am directly asking the Director if what is happening is, in fact, what the Director required.

I have shown that the chairs, in this case, have failed to pursue all available means to find the proposal with the weakest objections. They have instead imposed their own political agenda over considerable technically-grounded dissent.

To recap:

1. The Chair decision in question lacks consensus.
2. Consensus was not legitimately sought, contrary to W3C Process.
3. The Chairs' claims of an overriding “W3C Consensus” are erroneous, out of scope, and irrelevant.
4. Deferring to staff sets an untenable precedent for a group that prides itself on diversity and consensus-based decision making.
5. The proposal with the weakest objections should be favored.

I hope that the Director will appreciate these arguments and provide guidance that will remove this unfortunate notion that a Working Group’s chairs must defer to mere suggestions by staff. All Working Groups deserve the ability to manage their own work and find their own consensus without imposition from staff.

In particular, I’d like to see the following remedy from the Director:

1. Clarification: does the Director’s response to the Formal Objections to DID Core 1.0 constitute a direct requirement that the Working Group MUST explicitly charter a subsequent DID WG with DID Methods in scope?
2. Clarification: does the Director’s response to the Formal Objections to DID Core 1.0 constitute a direct requirement that the Working Group MUST NOT charter a DID Resolution WG instead of a DID WG?
3. Clarification: What is the role of technical versus political objections when seeking consensus, especially what constitutes a technical objection?
4. Clarification: What are the requirements of the chairs to seek proposals with the weakest objections?
5. Direction for the DID WG chairs to facilitate a conversation to find better alternatives to including DID methods in the next WG charter, specifically to advance either a true maintenance mode DID WG charter (without license for new specifications) or advance a DID Resolution WG charter instead (focusing new specifications on resolution).

When I got involved with this work, I shared many of my colleagues’ concerns about whether or not the W3C has the cultural and institutional wherewithal to stand as a steward for decentralized technologies. Given the organization’s history with many of the largest companies in the world, significant concerns have been voiced about whether or not the keeper of the Web can handle innovations that challenge the corporate dominance of web standards.  

After the Verifiable Credentials specification was approved as a Recommendation, many of us were hopeful. Then, the DID Core specification took an unprecedented fight before it eventually became a W3C Recommendation. The fight was disheartening, but the ultimate approval gave us cautious hope.

Unfortunately, with @BrentZundel’s unwavering demand that the WG advance a charter that overrides consensus to placate a suggestion from staff, I am again given cause to wonder if the W3C is capable of advancing collaborative work based on true consensus rather than political deference. This appeal puts that notion to the test. It is my sincere hope that the W3C will prove true to its mission and its process and recognize the unfortunate error of the chairs, restoring consensus to the heart of our collective work.

Finally, I want to apologize again to the chairs, Brent Zundel and Dan Burnett, as I have already done personally and in our related Github discussion. Both Brent and Dan are great guys, two of the nicest people you’ll meet in standards development. I consider both of them friends and I’m deeply saddened that this matter has become a public conflict with its inevitable consequences for all of us.

Unfortunately, I find myself in an existential disagreement with a decision they have made collectively in their role as chairs of the WG. As outlined above, I believe the path the chairs have chosen fundamentally undermines the work to which I have dedicated my professional life.

It’s not just that this is an unfortunate decision. It is a decision that I am compelled to oppose with all of the lawful and ethical means at my disposal. This opposition is not personal. I know Brent and Dan are doing their best. Unfortunately, in this case, I find their best falls short of the requirements of their role. I sincerely wish the path to resolve this matter didn’t take us to a nasty public fight. I made eighteen separate comments on the PRs in question trying to convince the chairs of the merit of my argument before escalating. I tried. I’m sorry I wasn’t able to resolve this without calling for intervention.

Thank you for your time and attention. I know this is a lot of material to digest and I appreciate the effort that everyone has made to advance this conversation in a civil and professional manner.

Sincerely,

-j

Joe Andrieu
joe@legreq.com
Legendary Requirements

Tuesday, 11. April 2023

Phil Windleys Technometria

OAuth and Fine-grained Access Control


Some of the following is excerpted from my new book Learning Digital Identity from O'Reilly Media.

OAuth was invented for a very specific purpose: to allow people to control access to resources associated with their accounts, without requiring that they share authentication factors. A primary use case for OAuth is accessing data in an account using an API. For example, the Receiptify service creates a list of your recent listening history on Spotify, Last.fm, or Apple Music that looks like a shopping receipt. Here's a sample receipt of some of my listening history.

Receiptify Listening History

Before OAuth, if Alice (the resource owner) wanted Receiptify (the client) to have access to her history on Spotify (the resource server), she'd give Receiptify her username and password on Spotify. Receiptify would store the username and password and impersonate Alice each time it needs to access Spotify on her behalf. By using Alice's username and password, Receiptify would demonstrate that it had Alice's implicit permission to access her data on Spotify. This is sometimes called the "password antipattern" and it has several serious drawbacks:

The resource server can't differentiate between the user and other servers accessing the API.

Storing passwords on other servers increases the risk of a security breach.

Granular permissions are hard to support since the resource server doesn't know who is logging in.

Passwords are difficult to revoke and must be updated in multiple places when changed.

OAuth was designed to fix these problems by making the resource server part of the flow that delegates access. This design lets the resource server ask the resource owner what permissions to grant and records them for the specific client requesting access. Moreover, the client can be given its own credential, apart from those that the resource owner has or those used by other clients the user might have authorized.
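To make that flow concrete, here is a minimal sketch of the authorization-code exchange OAuth uses for this kind of delegation. It is illustrative only: the endpoints, client ID, secret, and scope name are made-up placeholders, not Spotify's or Receiptify's actual values.

```python
import secrets
from urllib.parse import urlencode

import requests

# Hypothetical authorization server endpoints and client credentials.
AUTHORIZE_URL = "https://accounts.example.com/authorize"
TOKEN_URL = "https://accounts.example.com/api/token"
CLIENT_ID = "receiptify-demo"
CLIENT_SECRET = "not-a-real-secret"
REDIRECT_URI = "https://client.example.com/callback"

# 1. Send the resource owner to the authorization server, asking only for
#    the scope the client actually needs; no password ever reaches the client.
state = secrets.token_urlsafe(16)
authorization_request = AUTHORIZE_URL + "?" + urlencode({
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "listening-history:read",
    "state": state,
})

# 2. After the owner approves, the server redirects back with a short-lived
#    code, which the client exchanges for a token using its own credential.
def exchange_code(code: str) -> dict:
    response = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    response.raise_for_status()
    return response.json()  # access_token, granted scope, expiry, ...
```

The token the client receives is its own credential for Alice's data, limited to what she approved, and it can be revoked without touching her password.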

OAuth Scopes

The page that allows the owner to grant or deny permission might display what permissions the client is requesting. This isn't freeform text written by a UX designer; rather, it's controlled by scopes. A scope is a bundle of permissions that the client asks for when requesting the token, coded by the developer who wrote the client.

The following screenshots show the permissions screens that I, as the owner of my Twitter account, see for two different applications, Revue and Thread Reader.

OAuth Scopes Displayed in the Permission Screen

There are several things to note about these authorization screens. First, Twitter is presenting these screens, not the clients (note the URL in the browser window). Second, the two client applications are asking for quite different scopes. Revue wants permission to update my profile as well as post and delete tweets, while Thread Reader is only asking for read-only access to my account. Finally, Twitter is making it clear to me who is requesting access. At the bottom of the page, Twitter warns me to be careful and to check the permissions that the client is asking for.

Fine-Grained Permissions

Scopes were designed so that the service offering an API could define the relatively coarse-grained permissions needed to access it. So Twitter, for example, has scopes like tweet.read and tweet.write. As shown above, when a service wants to use the API for my Twitter account, it has to ask for specific scopes—if it only wants to read my tweets, it would ask for tweet.read. Once granted, the scopes can be used with the API endpoints to gain access.
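On the other side of the exchange, the resource server only has to check the scopes carried by the token against the endpoint being called. Here is a minimal sketch; the endpoint-to-scope mapping below is a hypothetical one for illustration, not Twitter's actual access rules.

```python
# Hypothetical mapping of API endpoints to the scopes they require.
REQUIRED_SCOPES = {
    ("GET", "/tweets"): {"tweet.read"},
    ("POST", "/tweets"): {"tweet.write"},
    ("DELETE", "/tweets"): {"tweet.write"},
}

def authorize_request(method: str, path: str, granted_scopes: set) -> bool:
    """Allow the call only if every scope the endpoint requires was granted."""
    required = REQUIRED_SCOPES.get((method, path))
    if required is None:
        return False  # unknown endpoint: deny by default
    return required.issubset(granted_scopes)

# A token granted only tweet.read can read but not post.
assert authorize_request("GET", "/tweets", {"tweet.read"})
assert not authorize_request("POST", "/tweets", {"tweet.read"})
```

This works well precisely because the set of scopes is small and stable.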

But OAuth-style scopes aren't the best tool for fine-grained permissioning in applications. To see why, imagine you are building a photo sharing app that allows a photographer like Alice to grant her friends Betty and Charlie permission to view her vacation photos. Unlike the Twitter API, where the resources are fairly static (users and tweets, for example) and you're granting permissions to a fairly small number of applications on an infrequent basis, the photo sharing app has an indeterminate number of resources that change frequently. Consequently, the app would need to have many scopes and update them each time a user grants new permissions on a photo album to some other user. A large photo sharing service might have a million users and 50 million photo albums—that's a lot of potential scopes.

Policy-based Access Control (PBAC) systems, on the other hand, were built for this kind of permissioning. As I wrote in Not all PBAC is ABAC, your app design and how you choose to express policies make a big difference in the number of policies you need to write. But regardless, PBAC systems, with good policy stores, can manage policies and the information needed to build the authorization context much more easily than you'll usually find in an OAuth-based system.
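To see the contrast, here is a minimal, hypothetical sketch of how a policy plus a small store of grants could handle the photo-sharing example. It is my own illustration, not the syntax of any particular PBAC product; a single generic policy replaces the explosion of per-album scopes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    grantee: str   # who is allowed
    action: str    # e.g. "view"
    album: str     # which resource

# A queryable store of grants instead of one scope per album.
grants = {
    Grant("betty", "view", "alice/vacation-2023"),
    Grant("charlie", "view", "alice/vacation-2023"),
}

def is_allowed(user: str, action: str, album: str) -> bool:
    """Policy: owners can always act on their own albums;
    everyone else needs an explicit grant."""
    owner = album.split("/", 1)[0]
    return user == owner or Grant(user, action, album) in grants

assert is_allowed("alice", "view", "alice/vacation-2023")     # owner
assert is_allowed("betty", "view", "alice/vacation-2023")     # granted
assert not is_allowed("dave", "view", "alice/vacation-2023")  # no grant
```

When Alice shares a new album, the app simply writes a new grant to the store; nothing about the policy, the API, or any registered scopes has to change.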

So, if you're looking to allow client applications to access your API on behalf of your customers, then OAuth is the right tool. But if you're building an application that lets its users share the resources they create with one another, then you need the fine-grained permissioning that a policy-based access control system provides.

Sunday, 09. April 2023

Doc Searls Weblog

Unstill life


Her name is Mary Johnson. Born in 1917, the year the U.S. entered WWI, two years before women in the same country got the right to vote, she died in 1944, not long before the end of WWII. She was buried, unembalmed, in the cemetery of a Chicago church that was later abandoned. Her grave was unmarked. To make room for new commercial development in 2023, the church was razed and occupants of the cemetery were respectfully and quietly disinterred, and moved to a working cemetery elsewhere in town. In the midst, efforts were made by the coroner’s office to discover the identities of the bodies from unmarked graves before they were to be reburied. Mary’s was among them.

The difference with Mary was that her body appeared to be unchanged: a bit dusty under bits of casket lining, with light flecks on her dark skin. Except for that, she looked like she had died yesterday. When they removed her body from the casket in the hospital morgue where she was taken for DNA sampling, she was still flexible. I asked the pathologist what would account for her perfect state of preservation. The pathologist said she had no idea. Even the best embalming jobs age in the ground.

When the pathologist was out of the room, I reached to lift one of Mary’s eyelids. Before my fingers touched, both lids opened, slightly. I called out, “Come here! Come here!” Nobody came. Then both eyes opened. Her body shook as she tried to breathe.

“Code Blue!” I yelled.

She was alive. Somehow, alive. After what, eighty years? At the time we didn’t know.

I was ready to do CPR when she started breathing and tried to sit up. “Hang on,” I said. “Let me help.” As I did, she was becoming warm. She stood up and looked around.

I looked at my watch. It was 5:15 AM. The dream was too interesting for me to completely awaken, so I dropped back in. As a half-conscious dreamer, I often do that. For the next hour, I followed several different story threads. Each visited what happened in the eighty years of Mary’s suspended life. The Cold War. Israel. Civil rights. Rock & Roll. Space travel. Disco. Hip hop. Rap. Digital everything. One of her questions: “What’s television?”

I thought about the backstory of her own short life: a child in the twenties, a teen during the Depression, a young adult during The War. What interesting stuff happened in those years? Was she in love, maybe with a guy who went off to fight? What work did she do?

I thought about the science questions. How was it possible for someone to lie undead for so long? Who else in that cemetery, or any cemetery, might still be alive? Was she a one-of-one, a one-of-some, or a one-of-many? Though not a horror story, it did recall Poe’s The Premature Burial, in which the protagonist’s undead state was “catalepsy.” I imagined in this thread that the story might be one Michael Crichton would like to write, if he were still alive.

Dreams have no more detail than one gives them, and being a white guy from New Jersey, born after Mary went cataleptic, I didn’t have much to furnish any storyline about Mary’s life, past or present. But I could still think of questions to address. What perspective and wisdom could Mary bring to our time? How would she cope with finding nearly all the people and things she knew were now dead and gone? What would she marvel at, and what would appall her? What would she be, or mean, to the rest of us?

No surprise that I would dream of a resurrection on Easter morning, even though that wasn’t in my head at the time. So, might Mary (not hard to guess from what corner of my unconscious that one came from) be a Jesus for our time? Hmm.

At 6:15, I finally had to get up and do my business, and the dream ended there. Still, “This has to be a book,” I told myself. But I have other books to write, ones that have sat half-written for years. And writing fiction isn’t my thing. So I decided I’d blog it instead. And here we are.

She is alive now, on this page, and in your mind. All she needs is for someone to finish her story.

The image above is courtesy of DALL-E, the artificially creative image generator. Took four tries to get this one. Since DALL-E seems to know only how to make grotesque faces, I made sure the woman here is looking away.

Thursday, 06. April 2023

@_Nat Zone

[April 7, from 11:30 PM] A post-broadcast commentary on NHK's "Web3 Super-Fast Explainer," with Professor Matsuo, who supervised and appears in the program, and reporter Asakawa, editor of "A Thorough Examination: Will Blockchain Make Humanity Happy?"


NHK's program 「漫画家イエナガの複雑社会を超定義」 (Cartoonist Ienaga's Hyper-Definitions of Complex Society), airing Friday, April 7 from 11:15 to 11:30 PM, will take up "Web3." The episode is titled 「Web3超速解説〜Web3はインターネットの明るい未来…ってホント!?」 (Web3 Super-Fast Explainer: Web3 Is the Bright Future of the Internet... Really!?).

(Source) 「Web3はインターネットの明るい未来…ってホント!? – 漫画家イエナガの複雑社会を超定義 – NHK」 (Web3 Is the Bright Future of the Internet... Really!? – Cartoonist Ienaga's Hyper-Definitions of Complex Society – NHK)

Immediately after the program ends, I will host a Twitter Space with Professor Matsuo, who supervised and appears in the program, and reporter Asakawa, editor of 「徹底考察、ブロックチェーンは人類を幸せにするのか」 (A Thorough Examination: Will Blockchain Make Humanity Happy?), to talk about the background of the production and dig into the deeper story. Interesting guests may even drop in unannounced?! Please join us.

NHK's program 「漫画家イエナガの複雑社会を超定義」 (Cartoonist Ienaga's Hyper-Definitions of Complex Society) will take up Web3 this Friday, 23:15–23:30. I will be joined by Professor Matsuo, who supervised and appears in the program, to talk about the background of the production and dig into the deeper story. Please join us. https://t.co/243H2vzIzY

— Nat Sakimura/崎村夏彦 (@_nat) April 6, 2023

Incidentally, you may enjoy the program even more if you first watch

* Chaplin's Modern Times1
* Chaplin's The Great Dictator

or read

* Book 8 of Plato's Republic

Stay tuned!

Wednesday, 05. April 2023

reb00ted

Wish list for Fediverse standardization and other Fediverse technical "commons" activities


This is an update to my previous wish list, now incorporating discussions from a recent meeting I helped put together of the W3C Social Web Incubator Community Group (SWIG) and FediForum, as well as discussions in several other FediForum sessions and various one-on-one conversations.

This wish list enumerates the work that I believe needs to be done by the technical Fediverse community to put the Fediverse on a more solid foundation, and make it ready to attract more developers and users. Note that obviously, not all these items are of similar importance and urgency. Note also this list does not include less-technical work, such as on moderation or culture.

Your feedback – and help making it so – very much appreciated!

* Standards maintenance work on the core specifications
  * Fixes to the core spec
    - According to reports, ActivityPub as-is is incompatible with common shared hosting environments (e.g. typical WordPress host), as it requires HTTP content negotiation (see the sketch after this list).
    - Issues backlog
  * A design to reduce certain loads
    - Reduce protocol chattiness
    - Fan-out
    - Video (Not my area of expertise, so I don’t have details)
  * Decide on the future of ActivityPub Client-to-Server
    - Split the C2S spec from the S2S spec?
    - Standardize the Mastodon API instead?
    - Invest in convincing developers to implement it?
* Significant extensions of the core specifications with a view on standardizing them
  * Improved security and privacy
    - Signed content
    - Private messages
  * A standardized way to express terms under which content is provided
    - As I understand it, Bob Wyman calls that a Rights Expression Language
    - This probably should start with a use case collection
  * Profiles
    - A single-document basic [Fediverse] interop spec, ie. when I have written code for everything it says, my code will interoperate in some basic fashion with all other software that has implemented this document
      - no need to consult or understand other implementations
      - may be quite basic, e.g. text content only, only a minimal set of activity types
      - enables implementors to have a “MVP”-style interop at lowest-possible engineering effort (including the time required to read/understand the specs!)
      - This could be done as a “minimal profile” of a stack that contains a subset of AP [ActivityPub], AS [ActivityStreams], and Webfinger
    - Other profiles for specific use cases
      - E.g. propagation of event / calendar invites / RSVPs
* Documentation and test
  * A test suite for the minimal profile
    - suitable to add to automated test suites
    - over time, this test suite can grow beyond the minimal profile
  * Documented behavior of leading ActivityPub implementations
    - What subset of the spec does each implement
    - What extensions does it implement, and which are required
    - Document the “folk wisdom” of how to interact with a given implementation, so not every new developer has to learn everything from scratch
    - Get out of “trial and error mode” when attempting to interoperate with another ActivityPub implementation
  * A branding program for products that have passed the test suite
    - As an implementor, you get to put the sticker on your product. In particular, in the places in the product where users “connect” to other servers in the Fediverse, like “Visa” is displayed at the POS terminal
    - I believe this will become critical if/when larger orgs with potentially different value systems connect to the Fediverse
  * JSON-LD conformance
    - Tests to make sure implementations are JSON-LD conformant
  * Libraries in the major languages
    - Full stack, not just ActivityPub
* User expectations and usability across the Fediverse
  * A set of web “intent buttons” for Like, Follow, Post, etc that work across sites
    - like they exist for centralized social media
    - as easy to use for the user
    - we can argue how this can be accomplished technically. I have opinions, but for this wish list, they are immaterial, as long as I get those buttons :-)
  * A design for search that meets the requirements of all relevant parties and points of view
    - This is probably far less a technical problem than one of successful communication
  * Best practices for content propagation
    - E.g. resolve the “It has 5 likes here but 10 over there” issue and related.
  * Improved identity management across the Fediverse
    - Easy-to-use single-sign-on across servers. Use case: I use several apps for different content types (like micro blog and video). Bonus: they all post from the same identifier
    - Easy-to-use persona management. Use case: I have a personal and a work account, bonus if they can be on the same server
    - Identifiers not tied to the domain name system
* Attract more participants
  * Get major implementors involved (e.g. Mastodon)
  * Make the place where technical work is done welcome and the work pleasant
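As a concrete illustration of the content-negotiation item above, here is a minimal sketch of discovering an actor via Webfinger and then asking the same URL for its ActivityPub representation. It is my own illustrative code, not taken from any Fediverse project, and the handle is made up.

```python
import requests

def fetch_actor(handle: str) -> dict:
    """Resolve an @user@host handle to its ActivityPub actor document."""
    user, host = handle.lstrip("@").split("@", 1)

    # 1. Webfinger discovery (RFC 7033).
    webfinger = requests.get(
        f"https://{host}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{host}"},
    ).json()

    # 2. Find the link advertising the ActivityPub actor.
    actor_url = next(
        link["href"]
        for link in webfinger["links"]
        if link.get("rel") == "self"
        and link.get("type") == "application/activity+json"
    )

    # 3. Content negotiation: the same URL often serves HTML to browsers,
    #    so the client must explicitly ask for the ActivityStreams form;
    #    this is the step some shared-hosting setups struggle to serve.
    return requests.get(
        actor_url,
        headers={"Accept": "application/activity+json"},
    ).json()

# Hypothetical usage:
# actor = fetch_actor("@alice@social.example")
# print(actor["inbox"])
```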

Monday, 03. April 2023

@_Nat Zone

News and thoughts on Italy's ChatGPT restrictions


This entry started out as an article titled "OpenAI begins blocking access to ChatGPT from Italy," but since further developments seem likely, I have decided to collect updates here. I will keep adding timestamped updates as things happen.

2023-04-03 13:49

OpenAI begins blocking access to ChatGPT from Italy

Following the recent order1), OpenAI appears to have begun blocking access to ChatGPT from Italy2 before Monday started in Europe.

My take: Well, of course. The facts are clear, and failure to comply with the order would mean imprisonment of three months or more for their EU Rep.

Italian minister says the ban on ChatGPT is excessive

An Italian minister also appears to have said that the ban on ChatGPT is excessive3.

My take: If the data protection authority changed its stance just because a minister said something, its independence would be called into question, so this is a rather awkward situation for the authority.

2023-04-03 16:30

"PizzaGPT" springs up for Italy, where ChatGPT is banned

Following the start of the block on ChatGPT, something called PizzaGPT has appeared to route around it.

"PizzaGPT" springs up for Italy, where ChatGPT is banned
PizzaGPT – ChatGPT clone accessible from Italy

It is described as "a wrapper/chat interface for the OpenAI completions API, built with Nuxt 3, Tailwind, and DaisyUI".

My take: Unfortunately, this looks like nothing more than a loophole. It apparently asks users to donate the price of one pizza to help handle the growing traffic, but perhaps the idea is that it is only a temporary stopgap until OpenAI implements what it has been ordered to do, such as putting age verification and a data-use opt-out into the main user flow. It would be more interesting if it did its own age verification and automatically opted users out of data use. Of course, that alone would not solve the problem that Italians' data was included in the training data, but it would at least have the effect of preventing new training on such data, and I suspect that is roughly where the Garante4 and OpenAI will end up settling (just my personal opinion).

2023-04-05 22:23

Following Italy's block on access to ChatGPT, Germany and other EU countries are also considering blocks

Following Italy, Germany and others appear to be considering similar orders.

Following Italy's block on access to ChatGPT, Germany and other EU countries are also considering blocks
https://gigazine.net/news/20230405-openai-geoblocks-chatgpt-germany/
Germany considers following Italy in banning ChatGPT
https://sg.news.yahoo.com/germany-chatgpt-considers-following-italy-banning-chatgpt-openai-ai-artificial-intelligence-101058703.html
Italy’s ChatGPT ban attracts EU privacy regulators | Reuters
https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/

My take: Well, since Italy's order was based on the GDPR, it would be more surprising if similar orders did not come out of other EU member states.

Sunday, 02. April 2023

Doc Searls Weblog

From sea to rhyming sea

While discussing ChatGPT with my teenage grandkids, I put it to a number of tests. In one I asked it to write a poem that includes all the capital cities in the U.S. Here is its reply: From sea to shining sea, Across this land so grand, The capitals of every state, Await us to […]

While discussing ChatGPT with my teenage grandkids, I put it to a number of tests. In one I asked it to write a poem that includes all the capital cities in the U.S. Here is its reply:

From sea to shining sea, Across this land so grand, The capitals of every state, Await us to take a stand.

In Montgomery, Alabama, We begin our noble quest, To climb each statehouse staircase, And put our wills to the test.

Through Juneau, Alaska, To Phoenix, Arizona’s heat, The halls of power beckon us, With challenges we must meet.

Little Rock, Arkansas, And Sacramento, too, We march to their state capitals, To learn what they can do.

From Denver, Colorado’s heights, To Connecticut’s Hartford town, We cross this land of liberty, To see our nation’s crown.

Dover, Delaware, And Tallahassee, Florida’s heat, We journey forth to find our place, In every statehouse seat.

Atlanta’s Georgia peach, And Honolulu’s island breeze, We’ll walk the halls of power, To achieve our lofty dreams.

Through Boise, Idaho’s hills, And Springfield’s Illinois pride, We’ll take the steps of governance, With courage as our guide.

Indianapolis, Indiana, And Iowa’s Des Moines, We’ll stand in every capital, To honor each state’s joins.

From Topeka’s Kansas plains, To Kentucky’s Frankfort pride, We’ll learn from every leader, In whom we can confide.

Baton Rouge, Louisiana, And Augusta, Maine’s coast, We’ll listen to the wisdom, That each statehouse host.

In Annapolis, Maryland, And Massachusetts’ Boston town, We’ll seek to find our purpose, And rise to wear the crown.

Lansing, Michigan, And St. Paul’s Minnesota pride, We’ll climb each capital’s staircase, And never be denied.

Mississippi’s Jackson town, And Jefferson City’s Missouri call, We’ll cross this land with honor, And never falter or fall.

Helena, Montana’s heights, And Lincoln’s Nebraska plains, We’ll march through every capital, And honor each state’s gains.

Carson City, Nevada, And Concord’s New Hampshire pride, We’ll stand in every statehouse, And let our voices be our guide.

Trenton’s New Jersey call, And Santa Fe’s New Mexico sun, We’ll climb the stairs of governance, And learn from everyone.

Albany’s New York pride, And Raleigh’s North Carolina charm, We’ll honor every state’s glory, And never let it harm.

Bismarck’s North Dakota plains, And Columbus’ Ohio pride, We’ll journey forth to every capital, With honor as our guide.

Oklahoma’s Oklahoma City, And Salem’s Oregon call, We’ll stand in every statehouse, And never let our spirits fall.

Harrisburg, Pennsylvania, And Rhode Island’s Providence town, We’ll climb each capital’s staircase, And never let ourselves drown.

South Carolina’s Columbia pride, And South Dakota’s Pierre call, We’ll cross this land with dignity, And never let ourselves stall.

Nashville’s Tennessee music, And Austin’s Texas pride, We’ll stand in every statehouse, And let our voices rise.

Salt Lake City’s Utah heights, And Montpelier’s Vermont pride, We’ll march through every capital, And let our dreams be our guide.

Richmond’s Virginia charm, And Olympia’s Washington crown, We’ll honor every state’s valor, And never let ourselves down.

Charleston’s West Virginia hills, And Madison’s Wisconsin pride, We’ll journey forth to every capital, And never be denied.

Cheyenne’s Wyoming pride, And Washington, D.C.’s White House, We’ll cross this land with honor, And never let ourselves douse.

From sea to shining sea, We’ve walked each statehouse hall, And learned the lessons of governance, That make our nation stand tall.

Redraw your own conclusions.


reb00ted

The architecture of the Fediverse: network, apps and people

Since Elon’s acquisition of Twitter, the “Fediverse” has become a thing. A thing that many people talk about, often in nebulous terms. I think we need to be more precise, and to do that, it might be helpful to distinguish several parts of the Fediverse. I believe the primary components of the Fediverse are the following, best shown as a layered architecture: People You

Since Elon’s acquisition of Twitter, the “Fediverse” has become a thing. A thing that many people talk about, often in nebulous terms. I think we need to be more precise, and to do that, it might be helpful to distinguish several parts of the Fediverse.

I believe the primary components of the Fediverse are the following, best shown as a layered architecture:

People

You and me; sometimes companies

Apps

e.g. Mastodon, PeerTube

Network

ActivityPub (and related protocols)

To summarize this diagram: there is a communications network (at the bottom), over which apps can communicate with each other (middle), enabling people to interact with each other in various ways (top).

Have you noticed that when people talk about the Fediverse …

… sometimes they mean the network, as in: “all these apps can talk to each other because it’s all ActivityPub” …
… and sometimes they mean people, as in: “We have genuine relationships with each other in the Fediverse because no 3rd party inserts ads and addictive algorithms”?

And of course, when they use “Mastodon” as the term instead of “Fediverse”, that’s because in this view, the Fediverse is primarily formed by thousands of app instances, most of which today are Mastodon instances.

This distinction is very important, because the properties of the “Fediverse” in the various definitions, are quite different. For example:

If I think the Fediverse is all about people, and how they relate to each other, I don’t really care whether it’s based on ActivityPub, or other protocols (like SSB, or Bluesky, or Webmention, or …) or any combination. In this view, the Fediverse is a “place” where certain social norms are different, including that certain behaviors prevalent in society or in other social networks (like expressions of racism) are frowned upon or grounds for blocking.

If I think the Fediverse is the set of all software nodes that communicate over ActivityPub, then I’m baffled why anybody should think some of those nodes shouldn’t be bots, or publish advertisements or special offers; ActivityPub can carry all of those and more. In this view, in its extreme form, the Fediverse could ultimately encompass a lot of internet communication infrastructure, including replacing SMTP for “e-mail”, for example.

Because of this confusion, sometimes the term “Fediverse” is actually less than helpful. On the other hand, there isn’t an obviously better term (or several; we need several).

In my personal view, the network/interop-centric definition of Fediverse is more useful: the Fediverse is the set of all nodes that can communicate semantically with each other through common protocols (ActivityPub).

Given this, it allows many new apps to be built that connect to the same network – the Fediverse – creating lots of new value, as I described earlier.

And those in turn enable people with common values to find each other, and form communities, that practice forms of interaction that would be much harder on centralized social media platforms. To use but one example, my primary Fediverse account is hosted by social.coop, a member-owned cooperative that formed for the purpose of hosting a social media server that works for them; self-funded, self-governed. Imagine that outside of the Fediverse!

But regardless of what the term “Fediverse” is ultimately going to end up meaning, whether that’s my preference or some other interpretation, let’s not conflate these different ideas, otherwise all the beautiful opportunities created by the Fediverse to communicate with each other won’t help us avoid talking past each other about the very thing that makes it possible to talk!

Saturday, 01. April 2023

Anil John

Evolving Government to Citizen Shared Identity Services

It is time for USG to actively deploy shared identity services in the public interest

The U.S. Federal Government has spent a considerable amount of time, treasure and energy admiring the problem of shared digital identity infrastructure that can be used by individuals in a privacy respecting manner, while improving cybersecurity and streamlining access to online services and benefits. It is time for it to occupy the room it owns instead of outsourcing solutions to private sector entities who do not have the public interest at heart.

The above graphic provides an easy to understand conceptual breakdown of the various components that make up a shared digital identity infrastructure.

I’ve updated some of the terminology to this decade, but the components and their relationships are taken directly from the U.S. Federal Identity, Credential and Access Management (FICAM) Program’s Trust Framework Solutions (TFS) Component Identity Terminology circa 2013.

Using the TFS terminology that provides separation of concerns and capabilities, the question now becomes who within the U.S. Federal Government is best able to manage and operate these services?

GSA’s Login.gov as the Authentication Provider

In the aftermath of the GSA IG report that “GSA Misled Customers on Login.gov’s Compliance with Digital Identity Standards”, and as I fully expected, there are a set of private sector identity service providers and their champions who have come out with variations of the narrative that “.gov should not be trusted to run a shared identity service” and should instead use their services, or that the “money that is allocated to Login.gov should be taken away” and re-programmed for other uses.

I am, oddly enough, here to make a case for why continued investment in and expansion of the Login.gov program is in the public interest. Odd because, I am on record as being Not-a-Fan of Login.gov’s remote identity proofing implementation because it was and is using data-broker transactional data instead of anything authoritative, and because of their weasel-worded support for NIST 800-63-3 Identity Assurance Levels (IALs).

My perspective on the GSA IG report is that it is a clear demonstration of exactly why .gov should run these types of services — because it demonstrates the independent checks and balances set up within Agencies working … and working well.

Can anyone point me to an example in the private sector where an Organization self-identified the problem, and corrected it with this high degree of transparency and accountability?

So let me stay on brand by noting that it is directly within the competency and capability of GSA and Login.gov to focus on Authentication (AAL) and Federation (FAL) Assurance Levels and get out of the Identity Assurance Level (IAL) business.

In the near term, the conceptual picture of this could look something like this:

The Agency remains responsible for remote identity proofing and can use its internal data, or other data sources it has access to, to accomplish it. But its use of Login.gov as the Authentication provider eases the burdens placed on individuals interacting with the Agency’s services, which should be a priority for any Agency!

I am semi-amused to note that this would enable us to catch up, after many long years, to what the Canadian Federal Government has effectively been doing at population scale for a LONG time!

However, this is only the first step, as it is just as important to implement a shared identity validation service that can be utilized by Agencies to mitigate account opening fraud, benefits fraud, and synthetic identity fraud.

Say No to Match/No-Match

While supporting the intent of such services, I also believe that it is a REALLY BAD IDEA to implement these validation services using a “match/no-match” or “yes/no” architecture because it:

Implements a “phone home” capability that can be abused
Limits the ability to collect informed consent from an individual about the use of their data
Can be abused by databrokers to enhance their targeting, profiling and segmentation of individuals

My linked article goes into the details and closes with a note that, with the advances in understanding of the pitfalls of the private sector ecosystem as well as the progress that has been made on global technical standards that ensure interoperability, security, privacy and choice, there is another way to implement this capability that can mitigate and/or eliminate the issues noted above.

Future » .gov W3C VC Broker for Authoritative Data Sources

That way is to enable an individual to have agency and control in interacting with authoritative sources that contain information about them, and giving them the capability to obtain and then present that information in a trustworthy and privacy respecting manner to entities they wish to interact with.

This will require the implementation of a shared service W3C Verifiable Credentials Broker Service that acts as the gateway to multiple authoritative sources within the Government. The individual directly interacts with those sources in order to obtain licenses, permits, credentials and attestations that they in turn can use in any online transactions – both with the Government and with the private sector!

The model for this would look like this:

What is important to understand about this model is that the “.gov Shared W3C VC Broker” is not by itself an authoritative issuer, but instead is a gateway/broker/UI implementation that acts as the front-end for multiple, independent authoritative issuers of W3C Verifiable Credentials.

Think that is too far out there?

I don’t believe it is, since I can point you to the multi-vendor, multi-issuer, multi-party interoperability implementation environment that is being operationally tested by the W3C Verifiable Credential and W3C Decentralized Identifier community to see a working example of this model:

What you see above is a gateway/broker that is a very light-weight facade over 6 different credential issuers, that an individual can interact with in order to obtain one or more W3C Verifiable Credentials.

The same approach can serve as a starting point for a .gov service; for a moment, swap out the credentials shown above with credentials that could be generated from the following authoritative data sources:

Passport Information (State Department)
Immigration/Employment Eligibility Status (USCIS)
Tax Records (IRS)
Social Benefit Eligibility (SSA)
Birth/Death Records (State Vital Records Agency)
Driver Eligibility and State Residency (State DMVs)
Address of Record (USPS)
…
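To make the shape of one such credential concrete, here is a minimal sketch of a W3C Verifiable Credential expressed as a Python dictionary, loosely following the W3C VC Data Model. The issuer identifier, credential type, and claim values are all hypothetical placeholders, and a real .gov issuer would attach a proof generated by its own signing infrastructure rather than the illustrative one shown here.

# Illustrative only: a minimal W3C Verifiable Credential as a Python dict.
# Every identifier and claim value below is a hypothetical placeholder.
sample_passport_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "PassportCredential"],  # "PassportCredential" is a made-up type
    "issuer": "did:example:us-state-department",              # hypothetical issuer identifier
    "issuanceDate": "2023-04-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder-1234",                      # the individual's identifier
        "passportNumber": "X00000000",                        # placeholder claim
        "citizenship": "US",
    },
    # In a real credential this proof is produced by the issuer's signing keys.
    "proof": {
        "type": "Ed25519Signature2020",
        "created": "2023-04-01T00:00:00Z",
        "verificationMethod": "did:example:us-state-department#key-1",
        "proofPurpose": "assertionMethod",
        "proofValue": "...",
    },
}

The individual holds a document like this in a wallet of their choosing and presents it, or a presentation derived from it, to whichever relying party they wish, which is exactly what keeps the issuing agency out of the transaction.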

Also consider that the individual interacts directly with the service, receives a credential, and then they (and not the Government) use it (as they already do now in the paper world) for a variety of interactions that the Government may not be party to. This does not trigger .gov considerations related to Government use of that data, since the Government has no visibility into or awareness of the individual’s use of it, i.e. no “phone home” issues!

As an aside, if you were to ask me who should operate such a W3C VC Broker service for the U.S. Government, my answer would be the U.S. Postal Service, for a variety of very good reasons! Hint: If you are a U.S. Citizen, how did you apply for the highest value citizen facing credential issued by the U.S. Federal Government?

Enable individuals, not databrokers!

I am not a believer or supporter of a Government issued identity card, but I do believe that Government has a role in standing behind and vouching for information it is authoritative for, when requested by an individual.

This architectural model provides the ability to do just that, while supporting individual agency, control and consent of the individual regarding their information, and puts digital identity within the U.S. Government on a path that can interoperate with other global jurisdictions.

This blog post, Evolving Government to Citizen Shared Identity Services, first appeared on Anil John | Blog. The opinions expressed here are my own and do not represent my employer’s view in any way.

Reply to this Blog post via email


Mike Jones: self-issued

OAuth DPoP Nearing Completion

Following the IETF-wide publication request, we’ve published another DPoP draft that addresses additional review comments received to date. This version is destined for the IESG Telechat on April 13, 2023. Recent changes as described in the history log are: Add sec considerations sub-section about binding to client identity Explicitly say that nonces must be unpredictable […]

Following the IETF-wide publication request, we’ve published another DPoP draft that addresses additional review comments received to date. This version is destined for the IESG Telechat on April 13, 2023.

Recent changes as described in the history log are:

Add sec considerations sub-section about binding to client identity
Explicitly say that nonces must be unpredictable
Change to a numbered list in ‘Checking DPoP Proofs’
Editorial adjustments
Incorporated HTTP header field definition and RFC 8792 ‘\’ line wrapping suggestions by Mark Nottingham

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-14.html

@_Nat Zone

[Updated] Italian privacy authority orders immediate restriction on ChatGPT's handling of Italian users' data

(Updated April 2, 16:00) On March 30, local (Rome) time…

(Updated April 2, 16:00) On March 30, local (Rome) time, Italy's personal data protection supervisory authority, the GPDP1 (commonly known as the Garante), ordered a provisional restriction on the processing of personal data of users residing in Italy in connection with the ChatGPT service operated by the US company OpenAI LLC. The GPDP pointed out that no information is provided to the data subjects and users whose data is collected by ChatGPT, and that there is no appropriate legal basis for collecting and processing personal data to train the algorithms. It also pointed out that there is no filter for children under 13, and that the processing of users' personal data, in particular that of minors, violates several provisions of the EU General Data Protection Regulation (GDPR). The provisional restriction applies to the personal data of data subjects in Italy in general, and violations of the measure carry criminal and administrative penalties. The data controller was requested to provide, within 20 days, the steps taken to remedy the situation and any information deemed useful to justify the conduct in question.

The above is a summary of Order No. 112, written by ChatGPT2.

Details of the order

Now let's get into the details. Order No. 112 consists of two parts: a preamble, that is, a part justifying the order, and the order itself.

Preamble

The preamble makes clear at the outset that the order is based on the GDPR (2016), Italy's Personal Data Protection Code (2003), and recent press reports. It then states that the following points were taken into consideration.

1. The extensive press coverage of ChatGPT (this appears to refer to the widely reported data breach discovered on March 20, in which a bug made other users' first conversations visible and exposed the payment-related information3 of the 1.2% of ChatGPT Plus users who were active during the nine hours between Sunday and the temporary suspension of the service on Monday; see OpenAI's March 24 announcement4 for details)
2. According to the present investigation, no information is provided to users or to the people whose data was collected by OpenAI and processed in ChatGPT.
3. There is no lawful basis5 for collecting and processing the data to train ChatGPT.
4. The processing of data subjects' personal data is inaccurate, as shown by the fact that the information ChatGPT provides does not necessarily match the actual data.
5. Although the terms of service describe it as a service for those aged 13 and over, no age verification is performed.
6. Because there is no filter for children under 13, children are exposed to answers that are entirely inappropriate for their stage of development and self-awareness.

In light of these points, the order finds that, with respect to the processing of the personal data of users, including the children whose data was processed, Articles 5, 6, 8, 13 and 25 of the GDPR have been violated.

On this basis, the authority states that, under Article 58(2)(f) of the GDPR, and although the investigation is still ongoing, it is issuing a provisional order to stop the processing in view of the urgency. The order covers the data of everyone residing in Italy, including those under 13. It covers those under 13, even though the terms of service set the limit at 13 and over, because no age-restriction mechanism has been implemented. The order takes effect at the moment the notice of the measure is received. The content of the measure may also change depending on the outcome of the investigation.

Next, it states that violating this order is punishable by imprisonment of three months to two years (Article 170 of the Italian Personal Data Protection Code).

It also states that, in view of the urgency, this order was issued by decision of the President under Article 5(8) of the law establishing the Italian data protection authority.

Operative part

On that basis, the data protection authority gives notice of the following order.

a) Under Article 58(2)(f) of the Regulation, the US company OpenAI L.L.C., which develops and operates ChatGPT, in its capacity as data controller for the processing of personal data carried out through that application, is urgently ordered, as a provisional measure, to restrict the processing of the personal data of data subjects located in Italian territory.
b) This order is effective immediately upon receipt, and further measures may follow.

In addition, the authority requires that:

under Article 58 of the GDPR, the data controller submit to the authority, within 20 days, its arguments justifying the handling of the data involved in this violation.

It also notes that failure to comply with a request under Article 58 may lead to administrative fines under Article 83(5)(e).

Thoughts

This is pretty tough for OpenAI. I think it will also be instructive for Japanese companies doing this kind of thing.

First, the point that no information is provided to users, or to the people whose data was collected by OpenAI and processed in ChatGPT. Publicly available data contains plenty of personal data, and the implication is that the individuals concerned must be informed before that data is processed. That is quite a heavy burden.

Next, the point that there is no lawful basis for using the data for training.

Article 6 of the GDPR lists the following lawful bases:

(a) consent for a specific purpose,
(b) performance of a contract or steps prior to entering into a contract,
(c) a legal obligation,
(d) protection of life (vital interests),
(e) the public interest,
(f) legitimate interests (except where overridden by the interests of children under 13 or by the data subject's interest in not having their data processed)

Starting with (a): consent to use the data as training data has almost certainly not been obtained, so "consent" does not look usable as a basis.

Next, (b): at the point where the data is used for training, there is no way of knowing whether a contract will ever be concluded, so that will not work either.

It goes without saying that (c) and (d) are out, and (e), the public interest, also looks like a stretch.

That leaves (f), legitimate interests, and on this the authority is essentially saying: respond within 20 days. Realistically, a PIA report demonstrating that no data of children under 13 is included and that the interests of data subjects are not harmed would probably be required.

Next is the violation of the accuracy principle; ChatGPT does return nonsense, after all. We tend to think of the accuracy principle as applying to data stored in a database, but it covers the processing as a whole, so it also applies to the information produced as a result of that processing.

Points 5 and 6 concern the lack of measures for children. Merely stating the age limit in the terms of service is not enough; age verification appears to be required.

And if the processing is not stopped, the penalty is three months to two years of imprisonment. For now, ChatGPT still appears to be working from Japan, but I wonder whether they have done something like blocking access from Italy… Actually, that alone is not enough; they have to stop processing the data of data subjects in Italian territory, but how on earth??? Is the model even built so that that portion can be removed…

As for the operative part, it is exactly as written. On top of that, it will be interesting to see whether they can actually submit their justification within 20 days.

That's it for today (April 2). Follow-up coverage is here ====▷ News and thoughts related to Italy's ChatGPT restrictions

[References]
DeepL English translation of the original GPDP order
Politico: Italian privacy regulator bans ChatGPT
ChatGPT privacy policy, March 14, 2023 version
ChatGPT terms of use, same version

Friday, 31. March 2023

Heres Tom with the Weather

Translating Books and Papers

I have a side project called BookLoggr which was created for the purpose of saving notes to books as I have done for decades but on the empty back pages of books. A year ago I added a feature for uploading pdfs and found myself often just cutting and pasting interesting sentences. I soon started saving academic papers as well as books. Only recently have I begun to appreciate reading beginner

I have a side project called BookLoggr which was created for the purpose of saving notes to books as I have done for decades but on the empty back pages of books. A year ago I added a feature for uploading pdfs and found myself often just cutting and pasting interesting sentences. I soon started saving academic papers as well as books.

Only recently have I begun to appreciate reading beginner books in French. On each page, I’ll find many new words that I need to look up and annotate with the English word inside the book. It is very time consuming and awkward to put the book down, pull up deepl.com and type the word into the translation form. It is much less disruptive to be able to click a button to translate the page, and I can optionally write the English word inside the physical copy of the book.

So, I decided to repurpose my existing interface for adding a note (which was already associated with a page number) to make the note the entire contents of a page. Now, before I start reading the French book, I add each page of the book as a note. Last night, I wrote the code to allow clicking a button to translate a page of French text using DeepL.
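For what it's worth, the server-side call behind such a button can be quite small. Here is a rough Python sketch, assuming the DeepL REST API's /v2/translate endpoint and an API key in an environment variable; the function name and environment variable are my own inventions, not anything from BookLoggr, so treat this as an illustration rather than the actual implementation.

import os
import requests

DEEPL_API_URL = "https://api-free.deepl.com/v2/translate"  # paid plans use api.deepl.com instead

def translate_page(french_text: str) -> str:
    """Translate one page of French text to English with the DeepL API."""
    response = requests.post(
        DEEPL_API_URL,
        data={
            "auth_key": os.environ["DEEPL_AUTH_KEY"],  # assumes the key is set in the environment
            "text": french_text,
            "source_lang": "FR",
            "target_lang": "EN",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["translations"][0]["text"]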

I added the pages last night and will try it out and see how it goes…

Thursday, 30. March 2023

Phil Windleys Technometria

Minimal vs Fully Qualified Access Requests

By creating fully qualified authorization contexts from minimal requests, a PIP reduces the burden on developers building or integrating PEPs, allows for more flexible and reusable policies, and enriches authorization contexts to allow more precise access decisions.

By creating fully qualified authorization contexts from minimal requests, a PIP reduces the burden on developers building or integrating PEPs, allows for more flexible and reusable policies, and enriches authorization contexts to allow more precise access decisions.

In access control, one system, generically known as the policy enforcement point (PEP), makes a request to another service, generically known as a policy decision point (PDP), for an authorization decision. The PDP uses a policy store and an authorization context to determine whether access should be granted or not. How much of the authorization context does the PEP have to provide to the PDP?


Consider the following policy1:

Managers in marketing can approve transactions if they are not the owner and if the amount is less than their approval limit.

A PEP might make the following fully qualified request to get an authorization decision:

Can user username = Alice with jobtitle = manager and department = marketing do action = approve on resource of type = financial transaction with transaction id = 123 belonging to department = finance and with amount = 3452?

This request contains all of the authorization context needed for the policy to run. A fully qualified request requires that the PEP gather all of the information about the request from various attribute sources.

Alternatively, consider this minimal request:

Can user username = Alice do action = approve on resource with transaction id = 123?

The minimal request reduces the work that the PEP must do, placing the burden on the PDP to build an authorization context with sufficient information to make a decision. We can separate this work out into a separate service called a policy information point (PIP). To build an authorization context for this request, the PIP must retrieve the following information:

The user’s job title, department, and approval limit.

The transaction’s owner and amount.

Building this context requires that the PIP have access to several attribute sources including the HR system (where information about Alice is stored) and the finance system  (where information about transactions is stored).
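As a rough illustration of that expansion, here is a Python sketch of a PIP turning the minimal request into the fully qualified context the policy needs. The attribute names follow the example policy above, but the two lookup functions are placeholders for whatever HR and finance attribute sources you actually have.

# Sketch of a policy information point (PIP) expanding a minimal request.
# The lookup functions stand in for calls to real attribute sources.

def lookup_user(username: str) -> dict:
    # Placeholder for an HR system query.
    return {"jobtitle": "manager", "department": "marketing", "approval_limit": 5000}

def lookup_transaction(transaction_id: str) -> dict:
    # Placeholder for a finance system query.
    return {"owner": "bob", "department": "finance", "amount": 3452}

def build_authorization_context(minimal_request: dict) -> dict:
    """Turn a minimal PEP request into a fully qualified authorization context."""
    user = lookup_user(minimal_request["username"])
    txn = lookup_transaction(minimal_request["resource_id"])
    return {
        "subject": {"username": minimal_request["username"], **user},
        "action": minimal_request["action"],
        "resource": {"type": "financial transaction",
                     "id": minimal_request["resource_id"], **txn},
    }

minimal = {"username": "Alice", "action": "approve", "resource_id": "123"}
context = build_authorization_context(minimal)

# The PDP can now evaluate the example policy against the full context:
allowed = (
    context["subject"]["jobtitle"] == "manager"
    and context["subject"]["department"] == "marketing"
    and context["resource"]["owner"] != context["subject"]["username"]
    and context["resource"]["amount"] < context["subject"]["approval_limit"]
)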

Attribute Value Providers

PIPs are not usually the source of attributes, but rather a place where attributes are brought together to create the authorization context. As you saw, the PIP might query the HR and Finance systems, using attributes in the request to create a fully qualified authorization context. Other examples of attribute value providers (AVP) include databases, LDAP directories, RESTful APIs, geolocation services, and any other data source that can be correlated with the attributes in the minimal request.

What’s needed depends on the policies you’re evaluating. The fully qualified context should only include needed attributes, not every piece of information the PIP can gather about the request. To that end, the PIP must be configurable to create the context needed without wasteful queries to unneeded AVPs.

PEP Attributes

There is context that the PEP has that the PDP might need. For example, consider this request:

Can user username = Alice do action = view on resource with transaction id = 123 from IP = 192.168.1.23 and browser = firefox?

The PIP typically would not have access to the IP address or browser type. The PEP must pass that information along for use in evaluating the policy.

Derived Attributes

The PIP can enhance information that the PDP passes in to derive new attributes. For example, consider a policy that requires the request come from specific ranges of IP addresses. The first instinct a developer might have is to embed the IP range directly in the policy. However, this is a nightmare to maintain if multiple policies have the same range check—especially when the range changes. The solution is to use derived attributes, where you can define a new attribute, is_ip_address_in_range, and have the PIP calculate it instead.

This might be generalized with a Session Attribute component in the PIP that can enhance or transform session attributes into others that are better for policies. This is just one example of how raw attributes might be transformed to provide richer, derived attributes. Other examples include deriving an age from a birthday, an overall total for several transactions, or a unique key from other attributes.
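A derived attribute like is_ip_address_in_range takes only a few lines to compute. Here is a small Python sketch using the standard library's ipaddress module; the CIDR ranges are stand-ins for however the PIP is actually configured.

import ipaddress

# Stand-in configuration: the address ranges the policies care about.
ALLOWED_RANGES = [
    ipaddress.ip_network("192.168.1.0/24"),
    ipaddress.ip_network("10.0.0.0/8"),
]

def is_ip_address_in_range(ip: str) -> bool:
    """Derived attribute: True if the request IP falls inside any allowed range."""
    address = ipaddress.ip_address(ip)
    return any(address in network for network in ALLOWED_RANGES)

print(is_ip_address_in_range("192.168.1.23"))  # True

When the range changes, only this configuration changes; the policies that reference the derived attribute stay untouched.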

Benefits of a PIP

While the work to gather the attributes to build the authorization context may seem like it’s the same whether the PEP or PIP does it, there are several benefits to using a PIP:

Simplified attribute administration—A PIP provides a place to manage integration with attribute stores that ensures the integration and interpretation are done consistently across multiple systems.

Reduced Complexity—PIPs can reduce the complexity of the access control system by removing the need for individual access control components to have their own policy information. PEPs are typically designed to be lightweight so they can be used close to the point where the access is enforced. For example, you might have PEPs in multiple API gateways co-located with the services they intermediate or in smartphone apps. A PIP offloads work from the PEP to keep it lightweight.

Separation of Concerns—PIPs separate policy information from policy decision-making, making it easier to update and modify policies and AVPs without impacting other parts of the access control system. For example, making fully qualified requests increases the coupling in the system because the PEP has to know more about policies to formulate fully qualified requests.

Improved Scalability— PIPs can be deployed independently of other access control components, which means that they can be scaled independently as needed to accommodate changes in policy volume or access requests.

Enhanced Security— PIPs can be configured to provide policy information only to authorized PDPs, which improves the security of the access control system and helps to prevent unauthorized access. In addition, a PIP builds consistent authorization contexts whereas different PEPs might make mistakes or differ in their output.

Tradeoffs

There are tradeoffs with using a PIP. For example, suppose that the PEP is embedded in the ERP system and has local access to both the HR and financial data. It might more easily and cheaply use those attributes to create a fully qualified request. Making a minimal request and requiring the PIP to make a call back to the ERP for the data to create the authorization context would be slower.

But, as noted above, this solution increases the coupling between the PEP and PDP because the PEP now has to have more knowledge of the policies to make the fully qualified request. Developers need to use their judgement about request formulation to evaluate the tradeoffs.

Conclusion

By creating fully qualified authorization contexts from minimal requests, a PIP reduces the burden on developers building or integrating PEPs, allows for more flexible and reusable policies, and enriches authorization contexts to allow more precise access decisions.

Notes

The information in this article was inspired by Should the Policy Enforcement Point Send All Attributes Needed to Evaluate a Request?

Photo Credit: Information Desk Charleston Airport from AutoRentals.com (CC BY 2.0, photo is cropped from original)


Wednesday, 29. March 2023

Doc Searls Weblog

A workflow challenge

I shoot a lot of pictures. Most are from altitude (such as the above). But lots are of people and places, for example, here are a few I shot at DWebCamp last summer with my new Sony A7 IV camera (to which I migrated last year after many years shooting Canon): Importing and curating photos […]

I shoot a lot of pictures. Most are from altitude (such as the above). But lots are of people and places, for example, here are a few I shot at DWebCamp last summer with my new Sony A7 IV camera (to which I migrated last year after many years shooting Canon):

Importing and curating photos is the only organized thing I do in life. Let me explain.

Under each of those photos is the name of the file. These are the actual file names, not ones stored in the library of some app and remembered at the meta-level. None are called DSCN0401.JPG, IMG_4523.JPG, or IMG_1874.PNG. I’ve given all of them meaningful names, all by the same convention: YYYY_MM_DD_name-of-event. Every file in every folder has the same name as the folder, with a number appended to it. The numbers are chronological. Each event’s folder goes into a month folder, each of those goes into a year folder, and all years go into a folder called pix-by-year. That folder has years going back to 1869. (The earliest are scans, such as this one.)
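For anyone tempted to reproduce that convention with a script rather than a cataloging app, here is a small Python sketch that renames whatever the camera dumped into an event folder to the folder's own name plus a chronological number. The folder path is hypothetical, and it assumes the folder is already named in the YYYY_MM_DD_name-of-event style.

from pathlib import Path

def rename_to_folder_convention(event_folder: str) -> None:
    """Rename files like DSCN0401.JPG to <folder-name>_<n>.<ext>, in capture order."""
    folder = Path(event_folder)  # e.g. .../2022/07/2022_07_22_dwebcamp
    photos = sorted(
        (p for p in folder.iterdir() if p.is_file()),
        key=lambda p: p.stat().st_mtime,
    )
    for number, photo in enumerate(photos, start=1):
        new_name = f"{folder.name}_{number:03d}{photo.suffix.lower()}"
        photo.rename(folder / new_name)

# Hypothetical usage:
# rename_to_folder_convention("pix-by-year/2022/07/2022_07_22_dwebcamp")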

I organize and name all my photos with a piece of software originally called iView Media Pro, then Expression Media 2 (after Microsoft bought it), and finally Phase One Media Pro (after Microsoft abandoned it). I like the second version best, so that’s what I use.

Here is a short and partial list of all the things Expression Media 2 can do:

Rename files in the Finder, very selectively, alone, or in batches (much more smoothly and easily than can be done with Adobe Bridge or Apple’s Shortcuts or Automator).
Search for any variable, or combination of variables
Sort by up to 24 variables (file name, size, date created, date modified…)
Resize selectively
Open any files with any other app
Bring multiple photos (files) from multiple directories into a single catalog, to organize any way I want. Also to export, rename, or whatever, including—
Run a slide show, with many choices of presentation (much more than with Apple’s Photos), and save shows as movies
Open any file or collection of files in any other app (for me that’s usually Photoshop 2021)
Basic image edits
Rotate, flip, zoom in and out
See and edit EXIF data for every shot
Find missing and duplicate items
Show or hide any file or combinations of files
Move files in the Finder
Operate in lieu of the finder when dragging or dropping into a Web page or another app
Arrange or tile photos by hand or by sort
Log and show actions
Place images on a map
Make a .pdf of any image or collection of them
Turn galleries into pages in .html
Import and export to and from .xml

I don’t do all that stuff, but I list it because I want to remember what I know I will lose when I move from this 2017 MacBook Pro running Mojave to my new 16″ 2023 M2 Max MacBook Pro. That will happen later today after I upgrade both machines to the latest version of Ventura. When that happens, Expressions Media 2, a 32-bit program last updated in 2013, will be kaput. And I will need to find another workflow.

Maybe Expression Media 2 will survive through emulation or something. Or maybe Phase One Media Pro (also last updated in 2013) will work.

Or maybe one of ya’ll has a better way to manage workflow than I’m using now. I’m open.

Meanwhile, here goes…

[Later (May 10, 2023)…] I am now working with my new M2 Macbook Pro, and I feel like I am missing limbs. The only prostheses I have are the limited choices that come with Apple’s Finder and Adobe Bridge. So again, help is welcome. Thanks.

Addendum: If you’re wondering why I’m not running my life on Linux and free and/or open-source software, the main reasons are Photoshop and Lightroom. I’ve tried Gimp and it doesn’t cut it. Photoshop knows the camera and lenses I’m using, and its tools, such as dehaze, are extremely useful. See the photos here? Thank dehaze for making the detail pop out. (Yes, Gimp de-hazes, but not easily.) I hate being trapped in Apple’s castle as much as the next serf, but shooting as much as I do requires tools I can use quickly and easily.

Monday, 27. March 2023

Identity Praxis, Inc.

ZARIOT Interview: Best Practices to Safeguard Your IoT Project

As more processes become digitized, there are increasing opportunities for bad actors to cause harm. It’s important for individuals to recognize the importance of IoT security, both for their personal data and their company’s data. I’ve outlined some best practices to consider when developing IoT projects below, to help you keep both your own and […] The post ZARIOT Interview: Best Practices to

As more processes become digitized, there are increasing opportunities for bad actors to cause harm. It’s important for individuals to recognize the importance of IoT security, both for their personal data and their company’s data. I’ve outlined some best practices to consider when developing IoT projects below, to help you keep both your own and your company’s data and assets safe from potential threats.

Availability & Visibility

The basic requirement for every IoT product suite is the continuous availability and visibility of data. These updates on device health and users are key components of effective cybersecurity and monitoring. They allow you to deal with rogue or compromised devices immediately, before they affect other system components. Visibility also ensures that older devices can be removed prior to becoming ghost devices (old, unused, or hidden devices), which can pose significant security risks.

The main problem with out-of-date software and legacy hardware associated with ghost devices is that once they pass their end-of-life cycle, the vendor no longer maintains or supports the products, resulting in security vulnerabilities and risk to organizations.

Authenticity

Other than 24/7 transmissibility, data reliability and verification of devices in both directions is also vital. In the age of Artificial Intelligence (AI) and Machine Learning (ML), there is rising concern surrounding authenticity of data. Sophisticated hackers are able to utilize methods to impersonate and tamper with data, potentially causing cascading failure across your enterprise.

To maintain airtight IoT security, you need a reliable computing base that functions as the root of trust and is tamper-proof. A root of trust holds your cryptographic keys and secures communication with the network and other channels, making it intrinsically secure. Cryptographic keys are also randomized and unique across different devices. If one device is compromised, all other devices will still remain safe.

Security

One way to verify service integrity is implementing security by design, where products and capabilities are designed from the get-go to be foundationally secure. Security is also a continual process, requiring you to patch and update systems as technology evolves.

Should a cyberattack occur, it is just as important to understand how your business is going to respond and recover. It is natural to focus on technology and systems but recovering should extend to your customers. That’s why creating a plan is so important. You want to make sure you can respond quickly and have the right outcomes for your business priorities.

Privacy

The EU’s expanding General Data Privacy Regulation (GDPR) and the California Consumer Privacy Act (CCPA) in the United States are just the tip of the iceberg surrounding a slew of upcoming global data privacy and cybersecurity regulations.

You may be subject to varying levels of privacy and IoT security regulations, if devices are deployed across different countries or regions. Non-compliance may result in hefty penalties and sanctions. It is essential to pick an ecosystem partner that can help you navigate the complex data privacy and cybersecurity landscape.

Longevity

Long term planning for an IoT project can be like looking into a crystal ball, attempting to anticipate unexpected events. Even so, be mindful of the life cycle surrounding all your technologies. For instance, there’s the danger of 2G sunsetting, rendering devices that are reliant on the technology completely obsolete.

Conclusion

An added benefit is that your IoT solution should also be flexible and customizable throughout the entire life cycle of the device. Imagine facing a sudden influx of data during peak EV charging times, or having to ship your EV chargers to different countries around the world: your connectivity solution should be able to connect and transmit securely, regardless of where your deployment is.

Ignoring IoT security is no longer an option. The stakes are too high, with personal data and company assets at risk. By implementing the best practices outlined above, you can help ensure the safety and longevity of your IoT devices and projects. Don’t wait until it’s too late; prioritize IoT security today.

The post ZARIOT Interview: Best Practices to Safeguard Your IoT Project appeared first on Identity Praxis, Inc..


Damien Bod

Azure AD cross-tenant synchronization

The article looks at and explores the new Azure AD cross-tenant synchronization. The feature makes it really easy to implement the technical part of synchronization between different Azure AD tenants. Code: https://github.com/damienbod/Aad-cross-tenant-synchronization Requirements To use this feature, both the source tenant and the target tenant require at least a P1 Azure AD license. The administr

The article looks at and explores the new Azure AD cross-tenant synchronization. The feature makes it really easy to implement the technical part of synchronization between different Azure AD tenants.

Code: https://github.com/damienbod/Aad-cross-tenant-synchronization

Requirements

To use this feature, both the source tenant and the target tenant require at least a P1 Azure AD license. The administrator and the application used to implement the feature need to have the required delegated authorization. I used Microsoft Graph Explorer for this. In a second step, I will implement an ASP.NET Core application for this.

Setup

In this use case, I have two separate tenants with users, applications, groups and everything that comes with an Azure AD tenant. I would like to join these two tenants without migrating all the users, groups, and applications to one of the tenants.


A typical use case for this would be when a company acquires a second company and both use Azure AD. There are many different strategies for technically merging two companies, and every implementation is different. The merge might be an m-n merge, which is more complicated to implement. It is always best to aim for simplicity. I always aim to reduce to a single source of identity for applications and authenticate everything against it. The single source of identity can federate to different IAM systems, but the applications should not use the external IAM directly.


I also try to have only one source of identity for users, but this is not always possible. With these complex systems, users must have only one primary identity, otherwise it gets really hard to manage onboarding and offboarding, and you require more tools to manage this. One problem with this solution is using on-premise applications or clients with cloud IAM solutions. For some on-premise applications, there is no network connection to the cloud systems, so exceptions need to be implemented here if using a cloud solution as the single source of identity for applications. Azure AD and AD is one such example of this problem, and Microsoft provides a good solution for it. (Azure Hybrid)

The big decision is deciding where the identities have the primary definition.

In this demo, we keep the identities’ primary definition where it was and add the members of the source tenant to the target tenant as external member users. These are then kept in sync.

The Microsoft documentation for this is excellent and I really recommend using this when implementing the synchronization.

Setup target tenant

To implement the synchronization, the user and application need the assigned privileges for the following Microsoft Graph permissions:

Policy.Read.All
Policy.ReadWrite.CrossTenantAccess

You can view this in the Enterprise applications once set up. This Enterprise application is only required to create the synchronization and can be deleted after the synchronization has been created; it is used to create a second Enterprise application that performs the synchronization. You could also just implement this directly in the Azure portal.

The different steps to create the synchronization are described in great detail in the Microsoft Azure docs.
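As a small illustration of the Graph side of this, here is a rough Python sketch that lists the partner configurations in the cross-tenant access policy using a delegated access token. The endpoint and property names reflect my reading of the Microsoft Graph cross-tenant access API and should be checked against the Graph documentation; acquiring the token (for example with MSAL) is assumed to have happened elsewhere.

import requests

GRAPH_PARTNERS_URL = "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners"

def list_cross_tenant_partners(access_token: str) -> list:
    """List partner tenants configured in the cross-tenant access policy.

    Requires a token carrying Policy.Read.All or Policy.ReadWrite.CrossTenantAccess.
    """
    response = requests.get(
        GRAPH_PARTNERS_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])

# Hypothetical usage with a token acquired elsewhere:
# for partner in list_cross_tenant_partners(token):
#     print(partner.get("tenantId"))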

Setup source tenant

To implement the synchronization, the user and application need the assigned privileges for the following Microsoft Graph permissions:

Policy.Read.All
Policy.ReadWrite.CrossTenantAccess
Application.ReadWrite.All
Directory.ReadWrite.All

The different steps to create the synchronization are described in great detail in the Microsoft Azure docs.

Synchronization configurations

The Azure AD provisioning can be set up to add all users, or scoped to a set of users or a group of users, as required. The provisioning can also be set up to map the user attributes as required. I used the default settings and provision all member users.

Synchronization in operation

Once the synchronization is set up, the identities are automatically updated in the target system using the defined scope and configuration. You can make changes in one tenant and view the changes in the target tenant. The trust settings can be updated to trust the original MFA from the source tenant, but care needs to be taken here. You do not want to allow weaker MFA than the required company policy. Phishing-resistant MFA should be standard for any professional company that cares about its employees and security.

Now that the identities are synchronized, the next step would be to move the applications and the groups for the shared services.

Links

https://learn.microsoft.com/en-us/azure/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview

https://learn.microsoft.com/en-us/azure/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure

https://learn.microsoft.com/en-us/azure/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure-graph

https://www.microsoft.com/en/security/business/identity-access/azure-active-directory-pricing

Saturday, 25. March 2023

reb00ted

My wish list for future ActivityPub standardization and related activities in the Fediverse "commons"

Note: Newer version of this list is here. There is some movement in the W3C to perhaps reactivate working on ActivityPub, the protocol that lets Fediverse apps such as Mastodon talk to each other and form a large, decentralized, social network. I posted this to their mailing list recently. Sorry if this is a bit cryptic, it was written for a very specific technical audience. I have added a few

Note: Newer version of this list is here.

There is some movement in the W3C to perhaps reactivate working on ActivityPub, the protocol that lets Fediverse apps such as Mastodon talk to each other and form a large, decentralized, social network.

I posted this to their mailing list recently. Sorry if this is a bit cryptic, it was written for a very specific technical audience. I have added a few annotations in square brackets to provide context.

What am I missing in my wish list?

Putting on my product developer hat, here’s what I want for Christmas:

- A single-document basic [Fediverse] interop spec, ie. when I have written code for everything it says, my code will interoperate in some basic fashion with all other software that has implemented this document
  - no need to consult or understand other implementations
  - may be quite basic, e.g. text content only, only a minimal set of activity types
  - enables implementors to have a “MVP”-style interop at lowest-possible engineering effort (including the time required to read/understand the specs!)
  - This could be done as a “minimal profile” of a stack that contains a subset of AP [ActivityPub], AS [ActivityStreams], and Webfinger
- A test suite for that profile
  - suitable to add to my automated test suite
  - over time, this test suite can grow beyond the minimal profile
- A branding program for products that have passed the test suite
  - As an implementor, you get to put the sticker on your product. In particular, in the places in the product where users “connect” to other servers in the Fediverse, like “Visa” is displayed at the POS terminal
  - I believe this will become critical if/when larger orgs with potentially different value systems connect to the Fediverse
- A set of web “intent buttons” for Like, Follow, Post, etc that work across sites
  - like they exist for centralized social media
  - as easy to use for the user
  - we can argue how this can be accomplished technically. I have opinions, but for this wish list, they are immaterial, as long as I get those buttons :-)
- A standardized way to express terms under which content is provided
  - As I understand it, Bob Wyman calls that a Rights Expression Language
  - This probably should start with a use case collection
- A design for search that meets the requirements of all relevant parties and points of view
  - This is probably far less a technical problem than one of successful communication
- A design to reduce certain loads
  - Fan-out
  - Video (Not my area of expertise, so I don’t have details)
- Improved identity management across the Fediverse
  - Easy-to-use single-sign-on across servers. Use case: I use several apps for different content types (like micro blog and video). Bonus: they all post from the same identifier
  - Easy-to-use persona management. Use case: I have a personal and a work account, bonus if they can be on the same server
  - Identifiers not tied to the domain name system

Some of this squarely falls in the domain of this group [that would be the W3C’s Social Web Interest Community Group], some is adjacent. It could be pulled in, or it can be done somewhere else. I don’t particularly care about that either, as long as it gets done and done consistently with the rest.

Now I’m sure you all are going to tell me why I can’t have all those things for Christmas, and certainly not this Christmas. But I can wish, no? (More seriously, I think they are all essential for continued successful growth of the ActivityPub network as new parties connect)

Wednesday, 22. March 2023

Phil Windleys Technometria

Passkeys: Using FIDO for Secure and Easy Authentication

Passkeys, the popular name for FIDO, are not only easy, they offer better protection against phishing than other forms of multi-factor authentication. This is how they work. This article is adapted from Chapter 12 of my new book Learning Digital Identity

Passkeys, the popular name for FIDO, are not only easy, they offer better protection against phishing than other forms of multi-factor authentication. This is how they work.

This article is adapted from Chapter 12 of my new book Learning Digital Identity from O'Reilly Media.


I was at SLC DevOpsDays last week and attended a talk by Sharon Goldberg on MFA in 2023. She's a security expert and focused many of her remarks on the relative security of different multi-factor authentication (MFA) techniques, a topic I cover in my book as well. I liked how she described the security provisions of passkeys (also known as Fast ID Online or FIDO).

FIDO is a challenge-response protocol that uses public-key cryptography. Rather than using certificates, it manages keys automatically and beneath the covers, so it’s as user-friendly as possible. I’m going to discuss the latest FIDO specification, FIDO2, here, but the older FIDO U2F and UAF protocols are still in use as well.

FIDO uses an authenticator to create, store, and use authentication keys. Authenticators come in several types. Platform authenticators are devices that a person already owns, like a laptop or smartphone. Roaming authenticators take the form of a security key that connects to the laptop or smartphone using USB, NFC, or Bluetooth.

This is a good time for you to stop reading this and head over to Passkeys.io and try them for yourself. If you're using a relatively modern OS on your smartphone, tablet, or computer, you shouldn't have to download anything. Sign up using your email (it doesn't have to be a real email address), do whatever your device asks when you click "Save a Passkey" (on my iPhone it does Face ID, on my MacOS laptop, it does Touch ID). Then sign out.

Using Touch ID with Passkey

Now, click on "Sign in with a passkey". Your computer will let you pick an identifier (email address) that you've used on that site and then present you with a way to locally authenticate (i.e., on the device). It's that simple. In fact, my biggest fear with passkeys is that it's so slick people won't think anything has happened.

Here's what's going on behind the scenes: When Alice registers with an online service like Passkeys.io, her authenticator (software on her phone, for example) creates a new cryptographic key pair, securely storing the private key locally and registering the public key with the service. The online service may accept different authenticators, allowing Alice to select which one to use. Alice unlocks the authenticator using a PIN, fingerprint reader, or face ID.

When Alice authenticates, she uses a client such as a browser or app to access a service like a website (see figure below). The service presents a login challenge, including the chance to select an account identifier, which the client (e.g., browser) passes to the authenticator. The authenticator prompts Alice to unlock it and uses the account identifier in the challenge to select the correct private key and sign the challenge. Alice’s client sends the signed challenge to the service, which uses the public key it stored during registration to verify the signature and authenticate Alice.

Authenticating with Passkey
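Conceptually, the service-side check is ordinary public-key signature verification. The Python sketch below shows just that step with the cryptography library and an ECDSA P-256 key; it deliberately leaves out the real WebAuthn details (CBOR-encoded authenticator data, client data hashing, origin checks) that a proper WebAuthn library would handle, so read it as an illustration of the challenge-response idea rather than a working verifier.

import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration (once): the authenticator creates a key pair and the service
# stores the public key against Alice's account.
authenticator_private_key = ec.generate_private_key(ec.SECP256R1())
service_stored_public_key = authenticator_private_key.public_key()

# Authentication: the service issues a fresh random challenge ...
challenge = os.urandom(32)

# ... the authenticator signs it after Alice unlocks it locally ...
signature = authenticator_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ... and the service verifies the signature with the stored public key.
try:
    service_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("Challenge verified: Alice is authenticated")
except InvalidSignature:
    print("Verification failed")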

FIDO2 uses two standards. The Client to Authenticator Protocol (CTAP) describes how a browser or operating system establishes a connection to a FIDO authenticator. The WebAuthN protocol is built into browsers and provides an API that JavaScript from a Web service can use to register a FIDO key, send a challenge to the authenticator, and receive a response to the challenge.

One of the things I liked about Dr. Goldberg's talk is that she emphasized that the security of passkeys rests on three things:

Transport Layer Security (TLS) to securely transport challenges and responses.

The WebAuthN protocol that gives websites a way to invoke the local authentication machinery using a Javascript API.

A secure, local connection between the client and authenticator using CTAP.

One of the weaknesses of how we use TLS today is that people don't usually check the lock icon in the browser and don't understand domain names enough to tell if they're being phished. Passkeys do this for you. The browser unambiguously transfers the domain name to the authenticator which knows if it has an established relationship with that domain or not. Authenticating that you're on the right site is a key reason they're so much more secure than other MFA alternatives. Another is having a secure channel from authenticator to service, making phishing nearly impossible because there's no way to break into the authentication flow.

Passkeys provide a secure and convenient way to authenticate users without resorting to passwords, SMS codes, or TOTP authenticator applications. Modern computers and smartphones and most mainstream browsers understand FIDO protocols natively. While roaming authenticators (hardware keys) are available, for most use cases, platform authenticators (like the ones built into your smartphone or laptop) are sufficient. This makes FIDO an easy, inexpensive way for people to authenticate. As I said, the biggest impediment to its widespread use may be that people won’t believe something so easy is secure.



MyDigitalFootprint

Why finding purpose feels impossible and why we should be talking more about balance

Overview Balance is a necessity in every aspect of life.  Knowing if we are in or out of balance directly affects our ability to survive and thrive; therefore, this thought piece brings something new to understanding balance and by association purpose.   Adaptation and homeostasis are the natural world's way of creating an unstable equilibrium. This dynamic system is unpacked a
Overview

Balance is a necessity in every aspect of life.  Knowing if we are in or out of balance directly affects our ability to survive and thrive; therefore, this thought piece brings something new to understanding balance and by association purpose.  

Adaptation and homeostasis are the natural world's way of creating an unstable equilibrium. This dynamic system is unpacked and repurposed to present a solid argument for leadership teams and boards of directors to shift questions to ones that understand how to balance two powerful opposing forces.  Balancing the development of the new whilst maintaining and preserving the existing has always been challenging, but here is a model that makes it easier.

In nature, we observe that organisms with little or no adaptation to ever-changing environments struggle to survive, whereas those that are pushing the evolutionary envelope and mutating into more suitable candidates for their ecosystem thrive. We also know that homeostasis and memory are key factors in maintaining stability. We find the same duality phenomenon reflected in the business world as we try to find a balance between creating the new and maintaining what we have.

Nature does not favour new over old but ebbs and flows, appearing to have a fluid, ever-changing dynamic stability; however, businesses try to prioritise based on a purpose, and this piece highlights that often there is an unwillingness to consider both perspectives in balance, which is detrimental.  What are the right questions to ask to determine if your purpose or strategy will disrupt fragile balances or if you are aligned and can thrive? 

This is a 20-minute read which will challenge your perceptions and presuppositions of the role you inhabit and the role your ecosystem plays on you.






Can you, I, us, society or business have a single purpose?

In our natural world, two dependent systems keep our fragile life from ending. One is the ability to mutate and adapt, and the other is maintenance (homeostasis) and memory. When we observe the dynamic expansions and contractions in our natural world, we see the two-sides of this dynamic system competing.  We can label these forces as “error creation” and “error correction”, and there is a never-ending ebb and flow between the two states, which creates a dynamic stability or an unbalanced equilibrium; two oxymorons which we will explore below. The ability to see both sides of a perspective gives us a wider vantage point that can help us unlock powerful insights into why a single unified purpose is not feasible, why AI will not take over, and how to avoid failure in times of uncertainty.

Is believing in a unified purpose a protective delusion? 

The theory of evolution is a scientific explanation for how life on Earth has changed over time and how different species have arisen from common ancestors. It is based on the idea that all living things share a common ancestry and that the diversity of life we see today results from gradual changes over millions of years through a process called natural selection. Natural selection is the process by which certain traits become more or less common in a population based on their effects on survival and reproduction. Random mutation, the mechanism by which new traits arise, and adaptation are central to the theory of evolution.

Homeostasis is maintenance: it refers to the ability of an organism to maintain a stable internal environment despite changes in the external environment. It is essential for the survival and well-being of an organism, and disruptions to this balance can have serious consequences. Memory refers to the process of storing, retaining, and retrieving information both individually and at a societal level; consider here how history, culture, law and even paintings and architecture are memory. Memory gives an organism the ability to maintain a sense of stability despite being in a changing environment, as it is able to recall the expected outcome of a familiar circumstance. Homeostasis and memory are critical for stability and survival.

These concepts, mutation & adaptation and homeostasis & memory, have been observed for thousands of years, and as humans unpack the complex connectivity between them, we can better articulate what we observe, including why it can be so difficult to find and follow a singular goal to its completion. The connection between them has elements of both immediacy and delay: mutation demands an immediate response from homeostasis, yet also drives changes over time through immune responses. In the long run, human changes create environments for different mutations.

To the observer who notices that there are no arrows, there is a reason for that. It is about the connection, relationship, and coupling, not the directionality of flow. 

Life is often framed as a competitive arms race: the survival of the fittest between living organisms that adapt to a constantly variable, but ultimately stable system. Mutations are randomness in the complex life coding system that creates new traits.  Mutations can create an advantage in the arms race as they can lead to unique traits which make an organism fitter for its environment.  Advantageous traits tend to thrive, whereas non-advantageous traits will either cease to exist or live on at a subsistence level.

Cyber security is positioned as a similar modern-day example of an arms race, but in reality, it is one system (virus) that is looking for a vulnerability to give it an advantage, and the other system running anti-virus is responding/ adapting to prevent it. The virus can only be created because the system has a fundamental flaw (weakness), and it is a race between those who want to uncover it and those who want to hide it, but the flaw exists. It is not the same.  

One could use the free market economy as a modern example of this phenomenon of mutation and homeostasis, whereby we use the words “innovation” and “competition” as synonyms for “mutation” and “survival of the fittest”. To describe innovation as solely beneficial, or solely detrimental would be incorrect; likewise, it is not reasonable to suggest that stability is solely one or the other. In all aspects of life, there is a balance between chaos and order, mutation and homeostasis, and innovation and tradition. Therefore, when we re-frame our thinking, we see that believing in a single, unified purpose is a delusion, as this singular goal only represents one side of a dynamic system whereby control on one end does not create the end game, and we only have a bright future because we have two.

A preference for labelling

If I gave you a choice between “innovation” and “mutation”, which one do you associate with value, creativity and growth?

Transposing the words that describe our natural world directly into a business environment would create fear, uncertainty and doubt.  “We are working hard on innovation” sounds more impressive and reasonable than “we are creating new mutations.”  “We mostly operate in a competitive market” sounds so much more plausible than “it is an arms race for the survival of the fittest”.  Words matter, and because they do, we can use words to create dynamics to be more or less willing to discuss and debate hard topics in the boardroom. 

The next section utilises the word “error”;  in this context, an "error" is the gap between expected and observed. I understand that many readers will want to stick with their natural framing of an error as right/ wrong, but “error” can mean:

a moral offence; 

belief in something untrue; 

the state or condition of being wrong in conduct or judgement;

a mistake, blunder, lapse, or slip;

a deviation from perception, expected, accuracy or correctness; 

the difference between the observed and actual

Error in this work is closer to the framing of the lower three bullets than the top three.  

How does a system based on “error creation” and “error correction” sound?  

The dynamic of our natural world system is that one system is creating something new and different whilst the other is trying to correct and mitigate the change. One is bringing in the new, and the other defending the existing.  One is the hope of something better, whilst the other is happy not to take the risk and maintain the same. Both sides create risk; strangely, different risks are created by each side, but nevertheless, risk.  Error creation, in this instance, is invention, innovation and creativity.  Error correction is stability and familiar processes that we know (and trust) from historical experience.  The risk in the former is change, and the risk in the latter is to become not relevant.  They are different risks.

The risk in the former is change, and the risk in the latter is to become not relevant.  They are different risks.

How often do we translate the message we receive from leadership that we need to be both “more innovative” whilst simultaneously “improving efficiency and effectiveness” as a demand to compromise on everything and lack focus? Perhaps we should see this balance as the maintenance of an equilibrium of "error creation" and "error correction" because it creates something new that we label as value. 

Leadership that fails to recognise there is a need for balance will fail; we first need to determine where we are in the system and which system we favour; from there, it should be easier to determine what journey we are on and where the north star is. 

This is not “trial and error”, as before a trial, there has to be a hypothesis, and the error is a deviation in the results from the expected/ predicted.  It is neither “error and trial”, where an error is created without a hypothesis and using some measure, one tries to determine if the error was successful. Then there is AB testing. However, these are all controlled point methods and are not about dynamic reactive complex systems. 

The answer to the universe, world and everything (#douglasadams)

Fans of The Hitchhiker's Guide to the Galaxy know the answer is 42, but the question remains; we just needed another five minutes! My struggle with the answer, as given, is whether it was still relevant by the time it was delivered, since 7.5 million years had passed, and a lot happens in big time frames. I would have loved to have asked Douglas.

So often, there is a dynamic between two competing ideas, both trying to be the more important single unifying idea or purpose.  Humans want to be both heard and belong.  It is not that one is better than the other, but one is more error creation (listen to my new idea) and one more error correction (stability of belonging).  Some business leaders will say their purpose is “shareholder primacy”, whilst others say their core purpose is to “improve the lives of their customer”. This is a nonexclusive club - and there are many others - however, most companies settle on a purpose that is a compromise. It is safe to say that business purpose varies depending on the specific goals and values of the company and its stakeholders.  It could be argued that the purpose of a business is to balance creating more value for the majority of the customers, shareholders, employees, and wider society as a whole than the destruction or erosion of value for others.  Purpose itself is achieved through various competing ideas, such as developing innovative products or services, providing high-quality customer service, maintaining ethical business practices, and contributing to the ecosystem the organisation operates in. 

When we reflect on this concept, we see that a business is itself demanding that it "creates errors" and "corrects errors." Remember, this is not about "mistakes" or moral perils; this is seeing innovation as creating change, which is an error relative to the system of no change. Businesses need to create new ideas and innovations but also correct errors that try to change the stability of process efficiency.  Leadership needs to craft a level of instability whilst maintaining equilibrium.  Note: another framing of "error" could be morally unethical behaviour or trading without reason whilst insolvent. These activities are not errors; they are illegal, and they are not what we are focused on.

Leadership needs to craft a level of instability whilst maintaining equilibrium.

Perhaps we should accept that there cannot be one single unifying purpose, and we exist in a place where some individuals, leaders, and teams favour change (error creation) and some stability (error correction), but without both, we would lose our ever-changing dynamic stability.

The Purpose and origin of Innovation (as an example of error creation)

Innovation refers to the process of creating something new or improving upon an existing product, service, or process. The purpose of innovation is to bring about positive change and to solve problems more effectively or efficiently. Innovations can be incremental or disruptive, with the latter representing a significant shift in how things are done.

The origin of innovation can be traced back to human curiosity and the desire to improve the quality of life. Throughout history, humans have constantly sought ways to improve their lives, whether it be through inventions such as the wheel or electricity, or the development of new technologies like mobile phones and the web. Innovation has played a vital role in driving progress and advancing society.

Innovation has become increasingly important as businesses seek to stay competitive and meet the changing needs of consumers. Companies invest in research and development to create new products and services to improve efficiency, increase profits, and create new market opportunities. Governments also encourage innovation through funding research and development, providing incentives for businesses, and fostering collaboration between industry and academia.

Overall, the purpose and origin of innovation are rooted in human ingenuity and the desire to create something new and better. Innovation has been and will continue to be a critical driver of progress and growth in all aspects of society, including how we achieve sustainability. 

We like innovation as long as it is not called error creation or mutation, but the reality is, no matter how hard we try, they are one and the same.   If you naturally align with an error creation ontology, you may have a creative mindset and enjoy the process of disruption, change, and the outcomes of instability.

There are exceptions: fraud, bribery and corruption.

The Purpose and origin of Accounting (as an example of error correction)

The origins of accounting can be traced back thousands of years to ancient civilisations such as the Babylonians, who kept records of their transactions on clay tablets. In medieval Europe, accounting was used primarily by merchants and bankers to keep track of their financial transactions. The original purpose of business accounting was to provide shareholders with accurate and timely information about the financial performance and position of an organisation.

Accounting is a system of recording, analysing and summarising financial transactions to prevent errors, provide accuracy, and present information that is useful in making business decisions. Over time, accounting has evolved into a sophisticated discipline with its own set of principles and standards. In addition to serving the needs of businesses, accounting also serves the broader needs of society. For example, accounting information is used by governments to track tax revenues and by investors to make investment decisions. Accounting also plays an important role in maintaining the integrity of financial markets by providing reliable information about the financial performance of companies.

We may naturally align to the framing of finance as long as it is not called error correction, doing the same, preventing change and improving stability, but the reality is, no matter how hard we try, they are one and the same. If you naturally align with an error correction ontology, you may have a more accounting mindset, whereby you will naturally look for order amidst chaos and create stable processes out of a loose set of desires.

There are exceptions: fraud, bribery and corruption.

How do other business functions align?

Marketing, sales, and product development appear to tend towards error creation, with operations, legal, and accounting systems tending towards error correction.  It is possible that where technology and HR sits is critical to the overall balancing culture of the company and if the company favours error creation (creative, imaginative, innovative) or error correction (stable, historical, and dutiful.)  

The descriptions above used “tend towards.” The basic function of accounting is to remove errors, but it also has to improve continually because of the necessity of continually correcting new errors.   The basic function of marketing is creativity, never repeating campaigns and coming up with new and different ideas. But if marketing were only creative, it would not be able to align and build upon what is already there and how the market behaves.  Creative flair for error creation is tempered by the history and context of error correction.  All functions demand both, and we need to understand how the balance is created by the purpose, function, and culture of the team. 

Teams define culture.  Teams are made of individuals who are either going to be more aligned to error-creating or error-correcting. Whilst we prefer to label individuals as  “creative” or “checklist,” these do describe the characteristics, leadership attitudes, and biases that are critical to determining what function a department actually fulfils.  These balances are not simple; indeed, they are complex.

Creative flair for error creation is tempered by the history and context of error correction.

A board needs to comprehend, appreciate, and understand if the balance is right. If a board is not diverse in thought, skill, and discipline, that balance may not be found or is easily lost, and the culture of one or the other can lead a business to failure, especially with an overly dominant financial focus as accounting tends to focus on being financially risk-averse.  It is worth noting that balance, in this sense, is not about equality; the balance creates enough errors to improve continually (agile) but with enough error correction to be resilient.  

The balance must create enough errors to improve continually (agile) but with enough error correction to be resilient.  

Wider perspective


The error creation/correction model can scale to other forms of thinking: 

Fear can be seen as error creation, whereas trust is error correction.  There is undoubtedly a complex relationship between fear and trust that is both immediate and develops over time.  Trust can correct fear that is related to an error originating from the past, but trust can also create a new path. 

The concept of 'self' is error-creation, whereby we tend to express our individuality within a system of societal norms and memory which seeks stability and error correction.  Individuals create errors, and their movement (many followers) creates a change which eventually becomes the new normal.

Freedom allows for error creation, whilst control demands correction from deviation.  Too much freedom or too much control both appear terminal based on history. Prison and punishment systems struggle with this balance on so many levels, even using “error correction” as an idea in a different context.

Both error creation and error correction harbour risk, and both can be seen as creating more risk.  Benefit  demands an understanding of the value and cost of moving away from the known and can be seen as both more error creation (advantage) and correction (prevention).  

These examples are non-linear, and balance does not necessarily mean 50:50 and perhaps a less controversial language might be to substitute “error” with “difference.” The difference is the way in which two things, when compared, are not the same. 

What else have you observed that represents two sides of a dynamic complex relationship, where one or both favour error creation and one error correction? 

It is worth pondering about how and why we try to reduce the complexity of the balance into a single ROI figure in order to pass a hurdle and justify a decision.  Can ROI ever capture consequences?   

What happens when we get it wrong?

Wrong, in this case, is where the balance is lost, and too many errors or too much correction ends in survival becoming unviable, with death being the only inevitable outcome. Business failure. 

In the wild, a mutation that does not provide an advantage can easily be lost, and an overly controlled stable environment can dwindle to nothing if it fails to adapt to modernity. 

In the economics of a business, the same is true: an innovation will not give an advantage forever, and the next innovation may not provide a new advantage. Equally, too much control and stability as a result of finance controls and regulation will result in the company becoming outdated in comparison to its environment or ecosystem - the result is the company or sector will enter a slow decline. #banking

A company does not exist on its own but in a complex ecosystem, and the balance may be achieved within the ecosystem and not just in the health or balance of a single company. In nature, the predator-prey model often forgets that the rabbit/fox needs other food and water, as well as the rabbit/fox having other threats.  A company may well be out of balance in itself, but balance is remediated by others in the ecosystem, which is why strategy is not just about innovation, but the balance that is maintained by the ecosystem in order to create dynamic stability. It is important to understand who is moving towards stability and who is moving towards innovation; otherwise, the whole ecosystem itself can fall out of balance.

This highlights that any theory of error creation/correction must embrace relevance on at least four levels: individual, team, organisation, and ecosystem.  A purpose may articulate what you want, but a purpose that is not dynamic in its ability to move and balance error creation and error correction at all levels is going to be short-lived. 

Reframing AI away from “Intelligent Machinery.”

Defining the term 'artificial intelligence' presents us with a problem. The problem is that human intelligence fundamentally depends on a balanced system where error creation and error correction co-exist.  Many of the most historic creative minds, insightful thinkers and leading scientists were neurodiverse (error-creating) and spent their lives fighting systems of error correction (academic, power, religious.)  Flat earthers, climate deniers and Copernicus with the heliocentric model of the solar system are a few that history has thrown up. 

Using the lens of error creation and correction, we can say that we are building ever more advanced error correction systems from big data tools, identification and removal of bias, better accounting, DNA manipulation, eradicating viruses, and using tech to provide compliance, governance, and oversight.  In this perspective, it becomes more obvious that we do not currently focus on using ever more advanced tech to actually create errors as a route for advantage. If AI were like nature, an AI that only corrects or one that only creates errors could not survive. 

If AI only needs to find and fix errors through the error correction lens, then AI will only be half a system and will be artificial but not intelligent. Data can be error-correcting and error-generating, but if we don't know this part of the ontology, we will never be able to gain more intelligence. 

The diagram below expands on this thinking to suggest that there is more than just one system on the error-correcting side.  Error correction is more complex as it needs to amend errors which are not advantageous, and promote the development of errors which are. It is equally true that the error creation side has to have a system that moderates errors, or it would be too wild, and create destruction before the opportunity arose to test if a new trait provided an advantage. The closed-loop system is full of complex feedback loops to both control and promote; with both immediacy and delay.  What is evident is that the system is neither stable nor unstable but has pulsing oscillation, which keeps it in a delicate balance. 

The axis of thriving and death

What came first, the chicken or the egg? 

In a world where error correction (driven by the desire to use experience and the past as the solid foundation) is the initiation, or y-axis, and the response is error creation (creativity, invention and innovation), or x-axis, we get thinking dominated by being controlled first and unstable second. This creates the volatility and ambiguity that allow for dynamic stability; it is a place where we can thrive and grow. This could be our economic and political world viewed through the lens of law, policy, study, social norms, culture, and data.

In a world where error creation (driven by a desire for change, innovation, creativity, adventuring, pioneering, and invention) is the initiation, and the response is error correction (finance, risk management and control), we get instability dominating our thinking, and the environment is unbalanced, yet still in equilibrium.  Complexity and uncertainty become leading ideas. In a place of unbalanced equilibrium, we are still able to thrive and grow. This could be our economic and political world viewed through the lens of research, hypothesis, invention, ideation, hope, and imagination.

It would appear that:  dynamic stability + unbalanced equilibrium creates a balanced system that thrives.

What we should avoid is too much control and too much error. The question is, what do any of these worlds look or feel like, and how will we recognise them?

The adage that the last thing a fish will become aware of is water teaches us that our current environment can feel normal when, in reality, it may be out of balance. If we cannot see or know the balance, we have lost our propensity to manage change.

Why does all this matter

Leadership through one lens (unbalanced equilibrium) depends on being curious, whereas leadership from the other lens (dynamic stability) demands a focus on efficiency and effectiveness, but both perspectives need to continually improve how they inform and direct decision making. 

We often see fantastic single-idea books that encourage leadership to optimise decision-making towards growth driven by error creation (how to disrupt yourself, agile, reinvention, cross the chasm, the new normal, change management, repurpose, innovation, re-engineering); it is likely, with such a narrow focus, to lead to poor decision-making, as the benefits and outcomes for error correction are ignored or suppressed.  This is equally true in reverse, and there are equally brilliant books that optimise for error correction (efficiency, time management, effectiveness, infinite, leadership, tick lists, ethics, merit, principles, governance, accounting, and statistics). 

If we try to get the balance for a better outcome from both error creation and correction, we will likely get sub-optimal results for both.  Even with all the resources in the world, nature still ebbs and flows, as it cannot optimise for both and has learnt to make compromises, but we thrive because of it. 

The title of this long read is “Why finding purpose feels impossible”; on reflection, the title could equally have been “Why a singular unified business purpose is impossible for everyone”. If we accept we have to balance error creation and correction, a single purpose has to favour or optimise for one or the other, which means that many employees and stakeholders feel the stated “purpose” does not align with or represent them. Balance is rarely 50:50, it can come in many forms: 0:100, 80:20, 99:1, etc. Balance exists, but the context is dependent on the purpose you are trying to achieve. This is where exceptional leadership comes into its own.

The takeaway for leadership

A critically important task for leadership, the board, executives, and governance is to determine:

If our organisation favours error-creation or error-correction

Where is the balance for our organisation?

Are we in balance? If not, are we planning to or sourcing the skills necessary to improve balance?

How is balance in our ecosystem achieved, what are others doing?

Is our organisation/ecosystem in or out of balance, and does our action change anything?

The most important purpose for leadership, the board, executives and governance is to understand and determine whether our organisation and ecosystem are in or out of balance.

The problem with strategy and purpose is that we can have agreed and have an agreement, but that does not mean we agree.  Working out balance is complex, whereas how we are performing against the plan is rather more straightforward.  

To determine and find balance, we must shift our focus from individual, singular and expert towards collective, group and ecosystem.  Whilst the former is positioned as strong leadership, and the latter is portrayed as introducing delay, compromise, tension and conflict - it is evident that to thrive, we must know how to find and deliver balance. 




reb00ted

Open networks are more valuable than closed ones: the case for the Fediverse over Twitter and Facebook

Networks are everywhere, and they are valuable. Consider: The road network. Imagine you’d need a different car for each piece of road. Life as we know it today would be utterly impossible without it. The phone network. To pick two random use cases, without it you couldn’t call customer service or summon an ambulance. The Visa credit card network (and its competitors). You would have to us

Networks are everywhere, and they are valuable. Consider:

The road network. Imagine you’d need a different car for each piece of road. Life as we know it today would be utterly impossible without it.

The phone network. To pick two random use cases, without it you couldn’t call customer service or summon an ambulance.

The Visa credit card network (and its competitors). You would have to use cash instead, but arguably everybody accepting the same currency forms a network, too, and without that, we’d be back to barter. Which would be really inconvenient.

The world-wide-web. Some of us are old enough to remember the times before. No on-demand news, music, entertainment, chatting, reservations, e-commerce and all the others.

Generally, larger networks are more valuable than smaller networks: if you are the only person in the world who has a telephone, that phone is not worth much. If there are 2 people with phones, you can at least call each other. With 3 people, 3 different conversations can be had. With 4, it’s 6. With 100, it’s 100*99/2 = 4950 possible conversations, not counting multi-party conference calls. This quadratic growth of value with the size of the network applies to all networks, according to Metcalfe’s Law.
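
As a quick illustration of that quadratic growth, here is a tiny TypeScript sketch (mine, purely for illustration) of the pairwise-connection count n(n-1)/2:

// Number of distinct two-party conversations possible among n phones.
function pairwiseConnections(n: number): number {
  return (n * (n - 1)) / 2;
}

console.log(pairwiseConnections(2));   // 1
console.log(pairwiseConnections(100)); // 4950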

But in this post, I want to look at another dimension of networks that impacts their values, and that is whether the network is “open” or “closed”. There are lots of details one could consider, but for our purposes here, we define a spectrum with two extremes, and lots of gray in the middle:

Fully open: Anybody can connect to the network and do what they like, nobody's permission is required.

Somewhere in between.

Entirely closed: Who may connect, and what they may do on the network, is closely controlled by the network proprietor.

There can be all sorts of network proprietors, but for simplicity in this post, assume it’s a single entity, like Meta.

Here are some examples:

Fully open: The public road system. Buyers and sellers using cash. The world-wide web.

Somewhere in between: Buyers and sellers using Visa.

Entirely closed: Roads on a private golf course. Internal company accounting system. Facebook. Twitter. The old AOL walled garden.

If you had two networks that are otherwise identical in size, structure and function, except that one is open and the other one is closed, which of those two is more valuable?

Valuable to whom?

Fully open. Valuable to: Platform proprietor: no, does not exist. Network users: yes.

Entirely closed. Valuable to: Platform proprietor: yes. Network users: yes.

It’s clear that if both networks produce the same amount of total value, the open network is more valuable to its users (such as individuals and organizations), for the simple reason that there is no network proprietor who needs to get paid! The value entirely accrues to the network participants.

But there’s more to it: Cory Doctorow recently coined the term enshittification to describe the inevitable march of platform/network proprietors, over time, to siphon off an ever-larger percentage of value generated by their network, to the detriment of its users. So the older a closed network, the less value it provides to its users. (Facebook users experience this every day: ever more ads, ever less genuine human engagement. While, for its business users, ad prices go up.) In an open network, on the other hand, the value that accrues to the users does not deteriorate over time.

And finally: could AOL, the online service, ever have provided the same value as the open web? Of course absolutely not! Open networks allow many more technologists and entrepreneurs to innovate in a gazillion different ways that would never be possible in a closed network. As closed networks mature, not only do they enshittify, but they also further and further discourage innovation by third parties, while the opposite is true for open networks.

Which brings us to the Fediverse. Which is more valuable today: the decentralized, open social network called the Fediverse (with its thousands of independently operated Mastodon, and other instances), or the poster closed social network, Facebook?

Clearly, Facebook. That’s because by all counts, Facebook today has order-of-magnitude about 1000 times the number of users of the Fediverse. Same for Twitter, which has maybe 100 times the number of users of the Fediverse.

But the network effect is the only thing the closed social platforms have going for them. All other parts of the value proposition favor the open social network alternative. Think of this:

The Fediverse extracts far less / no value: no annoying ads, no user manipulation favoring the business model of the network proprietor.

More functionality: it’s one interoperable social network with apps that emulate Twitter, Facebook, Medium, Reddit, Goodreads, and many others! In the same network.

It’s entirely open for innovation, and innovators are building furiously as we speak.

By its nature, it’s permanently locked open for innovation, and there is no danger of ever getting cut off from an API, facing sudden connection charges or drawing the wrath of a gazillionaire.

So by the time the Fediverse has sufficient numbers of users, it’s game over for proprietary social networks. This is true for both user categories in social networks: individuals and businesses. (I assume here that businesses and the Fediverse will find a way of accommodating each other, mainly by businesses behaving respectfully. If not, there simply will be no businesses in the Fediverse.) Individuals will get more value from the open network, and businesses will be far more profitable because there is no network operator to pay and many products and services pop up all the time that won’t in the closed network.

Note that the critical “sufficient number of users” can likely be substantially smaller than the user populations of those closed networks today, because all value accrues to users and it’s not diminished by value extraction from a network proprietor. For many of my own use cases, in many niches the Fediverse has critical mass today already.

Can the closed networks' user-number advantage be overcome across the board? We will have to see. But if we add up just the numbers of active users of organizations that have publicly announced Fediverse plans as of the date that I’m writing this, or even have products already in the market – Flipboard, Medium, Mozilla, Tumblr, Wordpress and more – we’re already in the high 100’s of millions.

Those numbers look awfully close to the user numbers necessary to overcome Metcalfe’s Law.

tldr; The time to take the Fediverse seriously, for individuals and businesses, is now. The value of the Fediverse for everybody is much higher than the value of any closed, proprietary social network – other than the proprietary social network companies themselves. And we won’t cry for them very much.

Note: FediForum is next week, where we’ll discuss this.

Monday, 20. March 2023

Identity Praxis, Inc.

The Personal Data & Identity Meeting of The Waters: A New Path To Value

I recently engaged in a compelling discussion with Kevin Clark and Mike McGuire from CE neWorld about the crucial topic of organizational and personal identity. We explore the significance of personal data and the choices we must make as business leaders. As we dive deeper into this important conversation, I was curious to understand how […] The post The Personal Data & Identity Meeting of T

I recently engaged in a compelling discussion with Kevin Clark and Mike McGuire from CE neWorld about the crucial topic of organizational and personal identity. We explore the significance of personal data and the choices we must make as business leaders. As we dive deeper into this important conversation, I was curious to understand how individuals can solve “The Identity Nexus equation”.

Discussion Abstract:

As the world becomes increasingly digitized, personal data and identity management face new challenges from consumer opinion, regulations, and technology. In this informative session, I discuss the convergence of traditional organizational and people-centric approaches to personal data and identity management, delving into the five domains of personal data, the dynamic landscape of personal data regulations, and new technologies that empower individuals to control and protect their data. I also introduce The Identity Nexus and explore innovative strategies and tactics, through a three-horizon framework, that businesses can use to increase revenue, reduce costs, enhance customer experience, combat cybercrime, and comply with industry regulations. Check out the video to stay ahead of the curve in personal data and identity management.

The post The Personal Data & Identity Meeting of The Waters: A New Path To Value appeared first on Identity Praxis, Inc..


Damien Bod

A first look at Blazor and .NET 8

In this post, Blazor and .NET 8 is used to implement a simple website. I took a .NET 7 project, updated it to .NET 8 and tried out some of the new features in .NET 8. Code: https://github.com/damienbod/Hostedblazor8Aad Setup The project was setup using a .NET 7 project which implements an Azure AD authentication using […]

In this post, Blazor and .NET 8 is used to implement a simple website. I took a .NET 7 project, updated it to .NET 8 and tried out some of the new features in .NET 8.

Code: https://github.com/damienbod/Hostedblazor8Aad

Setup

The project was set up using a .NET 7 project which implements Azure AD authentication using best practice with a backend-for-frontend architecture, and was then updated to .NET 8. The security is implemented in the secure backend and the Blazor components are kept simple. The Blazor.BFF.AzureAD.Template template was used for this, which takes care of all the project setup. At present, no Microsoft template exists for implementing the security in this recommended way. The template adds the security headers as best it can.

The project was updated to .NET 8, along with all the NuGet packages.

<TargetFramework>net8.0</TargetFramework>

Microsoft.Identity.Web is used to implement the OpenID Connect confidential client. An Azure App registration was created for this with the Web client and a user secret. You could also use a certificate instead of a secret which improves the token request in the second step of the OIDC code flow authentication.

The application was started and, like in .NET 7, we still have the annoying console warnings because the debugging tools try to add inline scripts to our code. The inline scripts are blocked by the CSP, which should be enforced for all deployments. I like to develop my application as close as possible to my target deployments, so I always develop with the best possible CSP and HTTPS, like in the deployed applications. This prevents having to fix CSP issues when we go live, or having to fix links to CSS CDNs or whatever.

We also have a warning in the console logs about a missing JS map file for something we do not use. I have no idea where it comes from or what adds it to my development setup.

2023-05-18: The CSP bug has now been fixed in the latest VS preview release

https://developercommunity.visualstudio.com/t/browserlink-CSP-support-NET-7/10061464

Creating Random data from Arrays

In .NET 8, GetItems() was added to System.Random. I decided to create my test data using this. I created an array of objects and returned it as a span.

public static ReadOnlySpan<MyGridData> GetData()
{
    return _mydata.AsSpan();
}

The Random.Shared.GetItems method can be used to return n items from my span at random. I set this to 24 items, which can then be displayed in the Grid.

[HttpGet]
public IEnumerable<MyGridData> Get()
{
    return Random.Shared.GetItems(MyData.GetData(), 24);
}

Using QuickGrid in Blazor

The QuickGrid component was also added in .NET 8. This provides simple Grid features.

The Nuget package needs to be added to the client (WASM) project.

Microsoft.AspNetCore.Components.QuickGrid

The QuickGrid can be used in any Razor page in the WASM application. You need to add the using for the Grid and you can create the grid as required. The Grid has good documentation here:

https://aspnet.github.io/quickgridsamples

@page "/directapi"
@using HostedBlazorAad.Shared
@using Microsoft.AspNetCore.Components.QuickGrid
@inject IAntiforgeryHttpClientFactory httpClientFactory
@inject IJSRuntime JSRuntime

<h3>QuickGrid display using data Direct API</h3>

@if (myGridData == null)
{
    <p><em>Loading...</em></p>
}
else
{
    <hr />
    <QuickGrid Items="@FilteredItems" Pagination="@pagination">
        <PropertyColumn Property="@(p => p.Id)" Sortable="true" />
        <PropertyColumn Property="@(c => c.Name)" Sortable="true" Class="name">
            <ColumnOptions>
                <div class="search-box">
                    <input type="search" autofocus @bind="nameFilter" @bind:event="oninput" placeholder="name..." />
                </div>
            </ColumnOptions>
        </PropertyColumn>
        <PropertyColumn Property="@(p => p.Colour)" Sortable="true" />
    </QuickGrid>
    <Paginator State="@pagination" />
}

@code {
    private IEnumerable<MyGridData>? myApiData;
    private IQueryable<MyGridData> myGridData = new List<MyGridData>().AsQueryable();
    private PaginationState pagination = new PaginationState { ItemsPerPage = 8 };
    private string nameFilter = string.Empty;

    GridSort<MyGridData> rankSort = GridSort<MyGridData>
        .ByDescending(x => x.Name)
        .ThenDescending(x => x.Colour)
        .ThenDescending(x => x.Id);

    IQueryable<MyGridData>? FilteredItems =>
        myGridData.Where(x => x.Name.Contains(nameFilter, StringComparison.CurrentCultureIgnoreCase));

    protected override async Task OnInitializedAsync()
    {
        var client = await httpClientFactory.CreateClientAsync();
        var myApiData = await client.GetFromJsonAsync<MyGridData[]>("api/DirectApi");
        if (myApiData != null)
            myGridData = myApiData.AsQueryable();
    }
}

The 24 random items are displayed in the grid using paging and sorting, with eight items per page. This is client-side and not server-side paging, which is important to consider if using large amounts of data.

Notes

Blazor and .NET 8 will change a lot, and new templates and project types are being created for Blazor and .NET 8. Blazor United, or whatever it will be called after the release, will be a new type of Blazor project, and the three-project structure will probably be reduced down to one. I hope the security will be improved; I don’t understand why Microsoft still does security in the WASM part of the application when it is hosted in an ASP.NET Core backend.

Links

https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://dotnet.microsoft.com/en-us/download/visual-studio-sdks

https://aspnet.github.io/quickgridsamples

Tuesday, 14. March 2023

Phil Windleys Technometria

Monitoring Temperatures in a Remote Pump House

If you've got a sensor that isn't within easy range of wifi, then LoraWAN is a good solution. And event-based rules in picos are a convenient way to process the data. I've got a pumphouse in Island Park, ID that I'm responsible for. Winter temperatures are often below 0°F (-18°C) and occasionally get as cold as -35°F (-37°C). We have a small baseboard heater in the pumphouse to keep things from fre

If you've got a sensor that isn't within easy range of wifi, then LoraWAN is a good solution. And event-based rules in picos are a convenient way to process the data.

I've got a pumphouse in Island Park, ID that I'm responsible for. Winter temperatures are often below 0°F (-18°C) and occasionally get as cold as -35°F (-37°C). We have a small baseboard heater in the pumphouse to keep things from freezing. That works pretty well, but one night last December, the temperature was -35°F and the power went out for five hours. I was in the dark, unable to know if the pumphouse was getting too cold. I determined that I needed a temperature sensor in the pumphouse that I could monitor remotely.


The biggest problem is that the pumphouse is not close to any structures with internet service. Wifi signals just don't make it out there. Fortunately, I've got some experience using LoraWAN, a long-range (10km), low-power, wireless protocol. This use-case seemed perfect for LoraWAN. About a year ago, I wrote about how to use LoraWAN and a Dragino LHT65 temperature and humidity sensor along with picos to get temperature data over the Helium network.

I've installed a Helium hotspot near the pumphouse. The hotspot and internet router are both on battery backup. Helium provides a convenient console that allows you to register devices (like the LHT65) and configure flows to send the data from a device on the Helium network to some other system over HTTP. I created a pico to represent the pumphouse and routed the data from the LHT65 to a channel on that pico.

The pico does two things. First, it processes the heartbeat event that Helium sends to it, parsing out the parts I care about and raising another event so other rules can use the data. Processing the data is not simple because it's packed into a base64-encoded, 11-byte hex string. I won't bore you with the details, but it involves base64 decoding the string and splitting it into 6 hex values. Some of those further pack data into specific bits of a 16-bit word, so binary operations are required. Those weren't built into the pico engine, so I added those libraries. If you're interested in the details of decoding, splitting, and unpacking the payload, check out the receive_heartbeat rule in this ruleset.
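
For a feel of what that unpacking involves outside of KRL, here is a rough TypeScript (Node.js) sketch; the field offsets and scaling follow my reading of Dragino's LHT65 payload documentation and are assumptions, so defer to the receive_heartbeat rule and the Dragino docs for the authoritative decoding:

// Illustrative decode of an 11-byte LHT65 payload delivered as base64.
// Offsets and scaling are assumptions based on Dragino's documentation;
// the remaining bytes carry extension data not shown here.
function decodeLht65(payloadB64: string) {
  const b = Buffer.from(payloadB64, "base64"); // 11 bytes expected

  const word0 = b.readUInt16BE(0);
  const batteryStatus = word0 >> 14;               // top 2 bits: charge status
  const batteryVoltage = (word0 & 0x3fff) / 1000;  // lower 14 bits: millivolts -> volts

  const internalTemp = b.readInt16BE(2) / 100;     // built-in sensor, degrees C
  const humidity = b.readUInt16BE(4) / 10;         // relative humidity, %
  const extensionType = b.readUInt8(6);            // which external probe is attached
  const probeTemp = b.readInt16BE(7) / 100;        // external temperature probe, degrees C

  return { batteryStatus, batteryVoltage, internalTemp, humidity, extensionType, probeTemp };
}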

Second, the receive_heartbeat rule raises the lht65:new_readings event in the pico adding all the relevant data from the LHT65 heartbeat. Any number of rules could react to that event depending on what needs to be done. For example, they could store the event, alarm on a threshold, or monitor the battery status. What I wanted to do is plot the temperature so I can watch it over time and let other members of the water group check it too. I found a nice service called IoTPlotter that provides a basic plotting service on any data you post to it. I created a feed for the pumphouse data and wrote a rule in my pumphouse pico to select on the lht65:new_readings event and POST the relevant data, in the right format, to IoTPlotter. Here's that rule:

rule send_temperature_data_to_IoTPlotter {
  select when lht65 new_readings
  pre {
    feed_id = "367832564114515476";
    api_key = meta:rulesetConfig{["api_key"]};
    payload = {"data": {
      "device_temperature": [
        {"value": event:attrs{["readings", "internalTemp"]},
         "epoch": event:attrs{["timestamp"]}}
      ],
      "probe_temperature": [
        {"value": event:attrs{["readings", "probeTemp"]},
         "epoch": event:attrs{["timestamp"]}}
      ],
      "humidity": [
        {"value": event:attrs{["readings", "humidity"]},
         "epoch": event:attrs{["timestamp"]}}
      ],
      "battery_voltage": [
        {"value": event:attrs{["readings", "battery_voltage"]},
         "epoch": event:attrs{["timestamp"]}}
      ]}
    };
  }
  http:post("http://iotplotter.com/api/v2/feed/" + feed_id,
            headers = {"api-key": api_key},
            json = payload
  ) setting(resp);
}

The rule, send_temperature_data_to_IoTPlotter is not very complicated. You can see that most of the work is just reformatting the data from the event attributes into the right structure for IoTPlotter. The result is a set of plots that looks like this:

Pretty slick. If you're interested in the data itself, you're seeing the internal temperature of the sensor (orange line) and temperature of an external probe (blue line). We have the temperature set pretty high as a buffer against power outages. Still, it's not using that much power because the structure is very small. Running the heater only adds about $5/month to the power bill. Pumping water is much more power intensive and is the bulk of the bill. The data is choppy because, by default, the LHT65 only transmits a payload once every 20 minutes. This can be changed, but at the expense of battery life.

This is a nice, evented system, albeit simple. The event flow looks like this:

Event Flow for Pumphouse Temperature Sensor

I'll probably make this a bit more complete by adding a rule for managing thresholds and sending a text if the temperature gets too low or too high. Similarly, I should be getting notifications if the battery voltage gets too low. The battery is supposed to last 10 years, but that's exactly the kind of situation you need an alarm on—I'm likely to forget about it all before the battery needs replacing. I'd like to experiment with sending data the other way to adjust the frequency of readings. There might be times (like -35°F nights when the power is out) where getting more frequent results would reduce my anxiety.

This was a fun little project. I've got a bunch of these LHT65 temperature sensors, so I'll probably generalize this by turning the IoTPlotter ruleset into a module that other rulesets can use. I may eventually use a more sophisticated plotting package that can show me the data for all my devices on one feed. I bought a LoraWAN soil moisture probe for my garden. I've also got a solar array at my house that I'd like to monitor myself and that will need a dashboard of some kind. If you've got a sensor that isn't within easy range of wifi, then LoraWAN is a good solution. And event-based rules in picos are a convenient way to process the data.



Identity Praxis, Inc.

London Event: MEF CONNECTS Personal Data & Identity

I am thrilled to have been given the opportunity to be the Program Director for the highly anticipated MEF Event in London on May 25th. There are opportunities to speak, sponsor, exhibit, partner, and more. I look forward to work with a team of passionate individuals to curate an unforgettable experience for all attendees. The […] The post London Event: MEF CONNECTS Personal Data & Identity

I am thrilled to have been given the opportunity to be the Program Director for the highly anticipated MEF Event in London on May 25th. There are opportunities to speak, sponsor, exhibit, partner, and more. I look forward to working with a team of passionate individuals to curate an unforgettable experience for all attendees.

The MEF Connects Personal Data & Identity event organized by Susan Finlayson-Sitch, Director of Operations at Mobile Ecosystem Forum, will be happening on May 25th, 2023 at One Moorgate Place in London. The event starts at 9:00 AM and runs until 8:30 PM. Registration is open and attendance is free, but seats are limited, so be sure to register here in advance to secure your spot.

I will also be organizing pre- and post-conference workshops on the 24th and 26th as well. This event offers an exclusive opportunity to learn about the latest trends and technologies in personal data and identity management from a diverse range of industry leaders. Don’t miss out on the chance to network with like-minded professionals and gain valuable insights that can help you stay ahead of the curve. What will be covered at the event is listed below, but for more information on the event click here. Looking forward to seeing you there!

Discover the latest trends in personal data and identity management at the MEF Event in London on May 25th. Our expert speakers will cover key themes including:

The Personal Data & Identity Meeting of the Waters: The Identity Nexus

The Five Domains of Personal Data

New Customer Onboarding and Experiences Along Every Step of the Customer Journey

Personal Data and Media: Reshaping Media In a World of Individual Controlled Data

Global and Regional Consumer Opinion at the Intersection of Trust, Personal Data, and Privacy

Understanding Biometrics Solution Combatting Fraud with Silent Mobile Intelligence

New Identification, Authentication, and Verification Strategies (inc., Passwordless Authentication)

Self-Sovereign Identity, Decentralization, Web 3.0, Web 5.0

Personal Information Management System and Smart Wallets

Embracing Emergent Standards and Personal Data & Identity Regulations

Privacy-enhancing Technologies

Consider attending our pre- and post-conference workshops to dive further in-depth on these topics on the 24th and 26th to stay ahead in the evolving landscape of personal data and identity management.

The post London Event: MEF CONNECTS Personal Data & Identity appeared first on Identity Praxis, Inc..

Monday, 13. March 2023

Identity Praxis, Inc.

Telesign Webinar

  I’m excited to announce I will be participating in the Telesign Webinar on March 16 at 1 p.m. CET to learn how you can better protect your business from ATO attacks. This webinar will also feature other industry leaders such as Surash Patel and Pravesh Arora from Telesign, as well as Bram Van Hoeylandt from Proximus. The 2022 Cyberthreat Defense […] The post

 

I’m excited to announce I will be participating in the Telesign Webinar on March 16 at 1 p.m. CET to learn how you can better protect your business from ATO attacks. This webinar will also feature other industry leaders such as Surash Patel and Pravesh Arora from Telesign, as well as Bram Van Hoeylandt from Proximus.

The 2022 Cyberthreat Defense Report reveals that account takeover (ATO) attacks are set to become the biggest security threat to customers, overtaking malware. Don’t miss out on the chance to learn from our expert speakers and safeguard your online presence. Register now to secure your spot and take the first step towards greater online security!

The post Telesign Webinar appeared first on Identity Praxis, Inc..


Personal Data and Digital Identity, and a New Path to Value

I thoroughly enjoyed wrapping up the event with Ann Li about Personal Data and Digital Identity Management. In our session, we discussed the importance of a new approach to personal data and identity management, and how it can unlock a new era of growth and consumer benefits for both organizations and customers. By embracing emerging […] The post Personal Data and Digital Identity, and a New Pat

I thoroughly enjoyed wrapping up the event with Ann Li about Personal Data and Digital Identity Management. In our session, we discussed the importance of a new approach to personal data and identity management, and how it can unlock a new era of growth and consumer benefits for both organizations and customers. By embracing emerging decentralized strategies alongside existing centralized and federated ones, businesses can strategically add significant commercial and operational value while prioritizing the customer experience. Collaboration and participation within the ecosystem are also crucial in meeting the demands of our ever-evolving markets. Below is an abstract of our talk, and the session recording is above for those who couldn’t be there in person.

Personal Data and Digital Identity Management: A New Approach and Path to Value

Personal data has underpinned growth across global economies for three decades and is now a systemically valuable resource. Personal data and identity management are two disparate yet connected subjects which are used to authenticate customers, drive research and insights, inform product strategies, and mitigate fraud and other risks. A shift in customer attitudes, behaviors, and regulation is placing individuals in greater control of their data and digital identity. This emerging data paradigm provides an opportunity to unlock a new era of growth and benefits for consumers and the organizations who serve them. In this session, Ann Li, JP Morgan Chase’s Managing Director of International digital identity, investment strategy and partnerships, discusses JP Morgan Chase’s vision for a new approach to personal data and identity management and a new path to value. She will explore the importance of focusing on the individual and customer experience, and how placing emerging decentralized personal data and identity strategies alongside existing centralized and federated ones has the potential—when embraced strategically—to add significant commercial and operational value to businesses and people alike. She will also explore the imperative for collaborative ecosystem participation to meet the demands of our ever-evolving markets.

The post Personal Data and Digital Identity, and a New Path to Value appeared first on Identity Praxis, Inc..


Damien Bod

Use Azure AD Access Packages to onboard users in an Azure DevOps project

This post looks at onboarding users into an Azure DevOps team or project using Azure AD access packages. The Azure AD access packages are part of the Microsoft Entra Identity Governance and provide a good solution for onboarding internal or external users into your tenant with access to the defined resources. Flow for onboarding Azure […]

This post looks at onboarding users into an Azure DevOps team or project using Azure AD access packages. The Azure AD access packages are part of the Microsoft Entra Identity Governance and provide a good solution for onboarding internal or external users into your tenant with access to the defined resources.

Flow for onboarding Azure DevOps members

Sometimes we develop large projects with internal and external users who need access to an Azure DevOps project for a fixed length of time, which can be extended if required. These users only need access to the Azure DevOps project and should be automatically removed when the contract or project is completed. Azure AD access packages are a good way to implement this.

Use an Azure AD group

Access to the Azure DevOps project can be implemented by using a security group in Azure AD. This security group will be used to add team members to the Azure DevOps project. Azure AD access packages are used to onboard users into the Azure AD group, and the Azure DevOps project uses the security group to define its members. The “azure-devops-project-access-packages” security group was created for this.
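If you prefer to script this step, the same security group can also be created with the Azure CLI. A minimal sketch, assuming you have the required directory permissions; the names follow the ones used in this post and can be adjusted to your own conventions:

# Create the Azure AD security group used for the Azure DevOps project members
az ad group create \
  --display-name "azure-devops-project-access-packages" \
  --mail-nickname "azure-devops-project-access-packages"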

Setup the Azure DevOps

A new Azure DevOps project was created for this demo. The project has a URL on the dev.azure.com domain. The Azure DevOps organization needs to be attached to the Azure AD tenant. Only an Azure AD member with the required permissions can add a security group to the Azure DevOps project.

My test Azure DevOps project was created with the following URL. You can only access this if you are a member.

https://dev.azure.com/azureadgroup-access-packages/use-access-packages

The project team can now be onboarded.

Create the Azure AD P2 Access packages

To create an Azure AD P2 Access package, you can use the Microsoft Entra admin center. The access package can be created in the Entitlement management blade.

Add the security group from the Azure AD which you use for adding or removing users to the Azure DevOps project. Add the users as members.

Users onboarded using the access package are given access in the tenant for a defined lifespan, which can be extended or not as needed.

Users can be added using an access package link, or an admin can assign users to the package. I created a second access package so that any user can request assignment to the package, which can then be approved or rejected by the Azure DevOps project manager.

The Azure DevOps administrator can approve the access package and the Azure DevOps team member can access the Azure DevOps project using the public URL. The new member is added to the Azure security group using the access package.
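To verify that an approved assignment really added the member to the security group, a quick membership query with the Azure CLI is enough. A sketch using the group name from this post:

# List the display names of the members of the onboarding security group
az ad group member list \
  --group "azure-devops-project-access-packages" \
  --query "[].displayName"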

An access package link would look something like this:

https://myaccess.microsoft.com/@damienbodsharepoint.onmicrosoft.com#/access-packages/b5ad7ec0-8728-4a18-be5b-9fa24dcfefe3

Links

https://learn.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-access-package-create

https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/faq-user-and-permissions-management?view=azure-devops#q-why-cant-i-find-members-from-my-connected-azure-ad-even-though-im-the-azure-ad-global-admin

https://entra.microsoft.com/


Hyperonomy Digital Identity Lab

Web 7.0 DIDComm Notation Template

Copyright (c) 2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode NOTE: DIDComm Notation elements are adaptations of the icons provided by Microsoft as part of the Microsoft Office suite. To quote from the Microsoft documentation: … Continue reading →

Copyright (c) 2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

NOTE: DIDComm Notation elements are adaptations of the icons provided by Microsoft as part of the Microsoft Office suite. To quote from the Microsoft documentation:

You can insert icons into Microsoft 365 documents. You can rotate, color, and resize them with no loss of image quality. These icons are free to use and there’s no royalty or copyright to worry about.

https://support.microsoft.com/en-us/office/insert-icons-in-microsoft-365-e2459f17-3996-4795-996e-b9a13486fa79?ui=en-us&rs=en-us&ad=us

In addition, …

You may sell a spreadsheet, database, or PowerPoint deck you made using Microsoft software. The spreadsheet, database, or PowerPoint deck must be created using legitimate, licensed Microsoft software.

https://www.microsoft.com/en-us/legal/intellectualproperty/copyright/permissions

Finally, …

What am I allowed to use premium creative content for?


If you’re a Microsoft 365 subscriber you have access to thousands of royalty-free images, icons, and more that you can use in your documents, presentations, workbooks, and SharePoint in Microsoft 365. But you may have questions about what you’re legally allowed to do with that creative content. This article will try to answer that!

You can use the content for anything permitted by the applicable Microsoft 365 license so you should review your End-User License Agreement (EULA) to understand exactly what you can do with works created in Microsoft 365 and SharePoint.

Simply put… You can use the premium images and other creative content however you see fit, as long as it’s within an Microsoft 365 app or SharePoint site. For commercial customers, this includes selling PowerPoint slide decks, Excel workbooks, or Word documents that include this content, just as you would have sold these files in the past.

This includes exporting the Microsoft 365 file to another format. You can modify the picture, use it in the file forever, and use it anywhere in the world. Basically, do the same things that you would have done with Microsoft 365 documents in the past, but now you can include the stock images and other content from the Insert Pictures dialog.

https://support.microsoft.com/en-us/topic/what-am-i-allowed-to-use-premium-creative-content-for-0de69c76-ff2b-473e-b715-4d245e39e895

Click on the figure below to download the current version of the Web 7.0 DIDComm Notation Template.

Sunday, 12. March 2023

Hyperonomy Digital Identity Lab

Web 7.0 Standard Base (WSB) Specification Set

Copyright (c) 2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials. Take what … Continue reading →

Copyright (c) 2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

Identity Praxis, Inc.

Consumer Empowerment and PIMS Adoption

I reveal the secrets of PIMS and the Personal Information Economy in my recent MEF Global Forum session. Encouraging organizations to consider incorporating a PIMS into their strategy as a critical next step. Below is the abstract of what I covered in my session, and if you unfortunately missed this prolific event you can watch […] The post Consumer Empowerment and PIMS Adoption appeared first o

I reveal the secrets of PIMS and the Personal Information Economy in my recent MEF Global Forum session, encouraging organizations to consider incorporating a PIMS into their strategy as a critical next step. Below is the abstract of what I covered in my session, and if you unfortunately missed this event you can watch the session recording above.

Personal Information Management Solutions (PIMS) are an emerging capability that gives organizations a new and improved way of managing personal data and identity. Drawing from my qualitative study of twenty U.S. consumers, I share insights on the key factors that influence an individual’s interest, ability, and intention to use a PIMS. These insights can inform corporate strategy, product roadmaps, and ecosystem partnerships, and show how PIMS can improve trust, efficiency, and the product/service experience, finally giving individuals the opportunity to reclaim their agency, manage their privacy, and establish more authentic relationships with those organizations they trust.

The post Consumer Empowerment and PIMS Adoption appeared first on Identity Praxis, Inc..

Saturday, 11. March 2023

Identity Praxis, Inc.

Sneak Peak: MEF 9th Annual Consumer Trust Study

    I started the MEF Global Forum along with Dario Betti, CEO of MEF, and Barbara Langer, Founder and Director of Insight Angels, to discuss the latest trends in mobile and privacy. Including a preview of the MEF Consumer Trust Study, which is in its 9th year, investigates consumer opinions at the intersection of […] The post Sneak Peak: MEF 9th Annual Consumer Trust Study appeared fi

 

 

I started the MEF Global Forum along with Dario Betti, CEO of MEF, and Barbara Langer, Founder and Director of Insight Angels, to discuss the latest trends in mobile and privacy.

We included a preview of the MEF Consumer Trust Study, now in its 9th year, which investigates consumer opinions at the intersection of mobile, privacy, personal data, and trust across ten markets. This session provides a sneak peek into this year’s results. The insights gleaned from this study inform strategy, customer journey mapping, product development, support, team development, and more.

To keep up with the latest trends in mobile and privacy, click below to watch the session recording.

 

 

The post Sneak Peak: MEF 9th Annual Consumer Trust Study appeared first on Identity Praxis, Inc..

Friday, 10. March 2023

Identity Woman

Industry orgs “warn” states against BIPA style laws. Why not have a dialogue at Thoughtful Biometrics Workshop – March 16th

People and their representatives are rightly concerned about how biometric systems are used. This week while reading all the industry news I came across this article – CCIA Testifies On Maryland Biometric Privacy Proposal, Submits Written Comments On Biometric, Childrens, And Health Privacy Bills. So what is BIPA? It is the Biometric Information Privacy Act […] The post Industry orgs “warn” stat

People and their representatives are rightly concerned about how biometric systems are used. This week while reading all the industry news I came across this article – CCIA Testifies On Maryland Biometric Privacy Proposal, Submits Written Comments On Biometric, Childrens, And Health Privacy Bills. So what is BIPA? It is the Biometric Information Privacy Act […]

The post Industry orgs “warn” states against BIPA style laws. Why not have a dialogue at Thoughtful Biometrics Workshop – March 16th appeared first on Identity Woman.


Aaron Parecki

OAuth Support in Bluesky and AT Protocol

Bluesky, a new social media platform and AT Protocol, is unsurprisingly running up against the same challenges and limitations that Flickr, Twitter and many other social media platforms faced in the 2000s: passwords!

Bluesky, a new social media platform and AT Protocol, is unsurprisingly running up against the same challenges and limitations that Flickr, Twitter and many other social media platforms faced in the 2000s: passwords!

You wouldn't give your Gmail password to Yelp, right? Why should you give your Bluesky password to random apps either!

The current official Bluesky iOS application unsurprisingly works by logging in with a username and password. It's the easiest form of authentication to implement, even if it is the least secure. Since Bluesky and the AT Protocol are actually intending to create an entire ecosystem of servers and clients, this is inevitably going to lead to a complete security disaster. In fact, we're already seeing people spin up prototype Bluesky clients, sharing links around to them, which results in users being taught that there's nothing wrong with handing out their account passwords to random websites and applications that ask for them. Clearly there has to be a solution, right?

The good news is there has been a solution for about 15 years -- OAuth! This is exactly the problem that OAuth was created to solve: how do we let third party applications access data in a web service without sharing the password with that application?

What's novel about Bluesky (and other similarly decentralized and open services like WordPress, Mastodon, Micro.blog, and others), is that there is an expectation that any user should be able to bring any client to any server, without prior relationships between client developers and servers. This is in contrast to consumer services like Twitter and Google, where they limit which developers can access their API by going through a developer registration process. I wrote more about this problem in a previous blog post, OAuth for the Open Web.

There are two separate problems that Bluesky can solve with OAuth, especially a flavor of OAuth like IndieAuth.

- How apps can access data in the user's Personal Data Server (PDS)
- How the user logs in to their PDS

How apps can access the user's data

This is the problem OAuth solved when it was originally created, and the problem ATProto currently has. It's obviously very unsafe to have users give their PDS password to every third party application that's created, especially since the ecosystem is totally open so there's no way for a user to know how legitimate a particular application is. OAuth solves this by having the application redirect to the OAuth server, the user logs in there, and then the application gets only an access token.

ATProto already uses access tokens and refresh tokens (although they strangely call them accessJwt and refreshJwt), so this is a small leap to make. OAuth support in mobile apps has gotten a lot better than it was 10 years ago, and there is first class support for this pattern on iOS and Android to make the experience work much better than the plain redirect model did a decade ago.
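For reference, this is roughly what the current password-based login against bsky.social looks like today, returning the accessJwt and refreshJwt mentioned above (the handle and password are placeholders):

# Current (pre-OAuth) AT Protocol login: the client handles the raw password
curl -s -X POST https://bsky.social/xrpc/com.atproto.server.createSession \
  -H "Content-Type: application/json" \
  -d '{"identifier": "alice.bsky.social", "password": "hunter2"}'
# The JSON response contains "accessJwt" and "refreshJwt"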

Here is the rough experience the user would see when logging in to an app:

1. The user launches the app and taps the "Sign In" button
2. The user enters their handle or server name (e.g. jay.bsky.social, bsky.social, or aaronpk.com)
3. The app discovers the user's OAuth server, and launches an in-app browser
4. The user lands on their own PDS server, and logs in there (however they log in is not relevant to the app, it could be with a password, via email magic link, a passkey, or even delegated login to another provider)
5. The user is presented with a dialog asking if they want to grant access to this app (this step is optional, but it's up to the OAuth server whether to do this and what it looks like)
6. The application receives the authorization code and exchanges it at the PDS for an access token and refresh token
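Step 6 above is plain OAuth. As a rough sketch (the token endpoint path, client_id URL, and redirect URI are illustrative assumptions, not anything Bluesky has specified), the code-for-token exchange could look like this:

# Exchange the authorization code for tokens at the user's PDS
curl -s -X POST https://bsky.social/oauth/token \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d grant_type=authorization_code \
  -d code=AUTHORIZATION_CODE_FROM_REDIRECT \
  -d redirect_uri=https://app.example.com/callback \
  -d client_id=https://app.example.com/ \
  -d code_verifier=ORIGINAL_PKCE_CODE_VERIFIER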


Most of this is defined in the core OAuth specifications. The part that's missing from OAuth is:

- discovering an OAuth server given a server name, and
- how clients should be identified when there is no client preregistration step.

That's where IndieAuth fills this in. With IndieAuth, the user's authorization server is discovered by fetching the web page at their URL. IndieAuth avoids the need for client registration by also using URLs as OAuth client_ids.
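A rough sketch of that discovery step, assuming the user entered aaronpk.com as their handle; older IndieAuth servers advertise rel="authorization_endpoint" link tags, while newer ones use rel="indieauth-metadata":

# Fetch the user's home page and look for IndieAuth discovery links
curl -sL https://aaronpk.com | \
  grep -Eo '<link[^>]*rel="(indieauth-metadata|authorization_endpoint)"[^>]*>'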

This does mean IndieAuth assumes there is an HTML document hosted at the URL the user enters, which works well for web based solutions, and might even work well for Bluesky given the number of people who have already rushed to set their Bluesky handle to the same URL as their personal website. But, long term it might be an additional burden for people who want to bring their own domain to Bluesky if they aren't also hosting a website there.

There's a new discussion happening in the OAuth working group to enable this kind of authorization server discovery from a URL which could rely on DNS or a well-known endpoint. This is in-progress work at the IETF, and I would love to have ATProto/Bluesky involved in those discussions!

How the user logs in to their PDS

Currently, the AT Protocol specifies that login happens with a username and password to get the tokens the app needs. Once clients start using OAuth to log in to apps, this method can be dropped from the specification, which interestingly opens up a lot of new possibilities.

Passwords are inherently insecure, and there has been a multi-year effort to improve the security of every online service by adding two-factor authentication and even moving away from passwords entirely by using passkeys instead.

Imagine today, Bluesky wants to add multifactor authentication to their current service. There's no good way to add this to the existing API, since the Bluesky client will send the password to the API and expect an access token immediately. If Bluesky switches to an OAuth flow described above, then the app never sees the password, which means the Bluesky server can start doing more fun things with multifactor auth as well as even passwordless flows!

Logging in with a passkey

Here is the same sequence of steps but this time swapping out the password step for a passkey.

1. The user launches the app and taps the "Sign In" button
2. The user enters their handle or server name (e.g. jay.bsky.social, bsky.social, or aaronpk.com)
3. The app discovers the user's OAuth server, and launches an in-app browser
4. The user lands on their own PDS server, and logs in there with a passkey
5. The user is presented with a dialog asking if they want to grant access to this app (this step is optional, but it's up to the OAuth server whether to do this and what it looks like)
6. The application receives the authorization code and exchanges it at the PDS for an access token and refresh token

This is already a great improvement, and the nice thing is app developers don't need to worry about implementing passkeys, they just need to implement OAuth! The user's PDS implements passkeys and abstracts that away by providing the OAuth API instead.

Logging in with IndieAuth

Another variation of this would be if the Bluesky service itself supported delegating logins instead of managing any passwords or passkeys at all.

Since Bluesky already supports users setting their handle to their own personal website, it's a short leap to imagine allowing users to authenticate themselves to Bluesky using their website as well!

That is the exact problem IndieAuth already solves, with quite a few implementations in the wild of services that are IndieAuth providers, including Micro.blog, a WordPress plugin, a Drupal module, and many options for self-hosting an endpoint.

Let's look at what the sequence would look like for a user to use the bsky.social PDS with their custom domain handle mapped to it.

1. The user launches the app and taps the "Sign In" button
2. The user enters their server name (e.g. bsky.social)
3. The app discovers the OAuth server and launches an in-app browser
4. The user enters their handle, and bsky.social determines whether to prompt for a password or do an IndieAuth flow to their server
5. The user is redirected to their own website (IndieAuth server) and authenticates there, and is then redirected back to bsky.social
6. The user is presented by bsky.social with a dialog asking if they want to grant access to this app
7. The application receives the authorization code and exchanges it at the PDS for an access token and refresh token

This is very similar to the previous flows, the difference being that in this version, bsky.social is the OAuth server as far as the app is concerned. The app never sees the user's actual IndieAuth server at all.

Further Work

These are some ideas to kick off the discussion of improving the security of Bluesky and the AT Protocol. Let me know if you have any thoughts on this! There is of course a lot more detail to discuss about the specifics, so if you're interested in diving in, a good place to start is reading up on OAuth as well as the IndieAuth extension to OAuth which has solved some of the problems that exist in the space.

You can reply to this post by sending a Webmention from your own website, or you can get in touch with me via Mastodon or, of course, find me on Bluesky as @aaronpk.com!


reb00ted

Meta's decentralized social plans confirmed. Is Embrace-Extend-Extinguish of the Fediverse next?

Casey Newton at Platformer reports he has e-mail confirmation from Meta that: [Meta is] exploring a standalone decentralized social network for sharing text updates. We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests (Source). Their new app is codenamed P92, and according to a separate report by Moneycontr

Casey Newton at Platformer reports he has e-mail confirmation from Meta that:

[Meta is] exploring a standalone decentralized social network for sharing text updates. We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests (Source).

Their new app is codenamed P92, and according to a separate report by Moneycontrol:

… will support ActivityPub, the decentralised social networking protocol powering Twitter rival Mastodon and other federated apps (Source).

It will also:

be Instagram-branded and will allow users to register/login to the app through their Instagram credentials.

First, the good news:

This is a huge validation of the decentralized social network known as the Fediverse, built around a set of internet protocol standards that include ActivityPub, ActivityStreams, and WebFinger, as well as a set of commonly implemented unofficial extensions. The Fediverse has been around for some years, but recently came to more widespread prominence through its leading implementation, Mastodon, as the leading alternative to an increasingly erratic (and increasingly many other things, but I digress…) Twitter.

That’s because only when alternatives are actually beginning to look like they might become serious threats to incumbents – and Meta is the market-leading incumbent in social media by far – do incumbents start paying attention and then connect to them. Or, as it may be the case here, simply leak that they might be connecting in the future but never actually will. We don’t know which of those will turn out to be true, but it doesn’t matter: both validate the Fediverse as a serious competitor to Meta.

This is on the heels of recent Fediverse adoption by companies such as Mozilla, Medium, CloudFlare and Flipboard. Apple now has Mastodon content previews in iMessage. Even Microsoft has been spotted in the Fediverse a few days ago.

But:

I have some Brooklyn Bridges for sale. You get a Brooklyn Bridge for free if you believe that a company like Meta would connect to the Fediverse, and be a perfect citizen the way the Fediverse expects you to be today. Including:

- No ads;
- No tracking;
- No algorithms that favor business outcomes for Meta over your wellbeing;
- Respect for different cultures, minorities, non-mainstream behavior etc.;
- A rich toolset for filtering and blocking according to what you decide you want to filter and block, not Meta;
- The ability to move from one host to another without having to build your network from scratch;
- The ability to pick who is your system administrator and moderator, from models that are essentially centrally managed to full-fledged self-managed, user-owned cooperatives;
- The ability, and encouragement, to innovate with new apps;
- and so forth.

Instead, based on the history of technology, the chances are overwhelming that such an app would be used by Meta with an embrace, extend and extinguish strategy, at the end of which the Fediverse would either have become irrelevant or effectively been taken over by Meta. So the much-heralded alternative to Twitter would become … Meta? I sure hope not.

If you think that is unlikely, read up on some of the historical examples listed on that Wikipedia page. Merely being based on open standards and having a million+ strong user community does not protect you at all. Instead, I would say the attack happens every single time a network dominated by an incumbent (here: social media) is threatened by a more open network. And it succeeds, at least partially, more often than not. Here it is Meta's $100b+ business that's under strategic threat; of course they will protect it and use any means they can think of to do so.

It does not help that the Fediverse today is chronically underfunded and has corresponding difficulty competing at the same speed as somebody like Meta can. Actually, "unfunded" is a better term because the amounts are so small. There are many unpaid contributions, the Fediverse largely being open source and all, but I'd be surprised if more than $10m per year is spent in total on the entire Fediverse today; likely it's far less. If Meta can burn more than $10b – that's one entire annual fediverse spend every 8 hours! – on a very doubtful Metaverse project, they surely could find the same amount of money to protect their core business.

And that’s just one of the many issues we need to solve to protect, and grow, the beautiful thing we currently have with the Fediverse.

So what shall we do about all this?

(I welcome your comments – in the Fediverse! Find me at @j12t@social.coop.)

(Also, I’m co-organizing FediForum, an online unconference at the end of March, where we will surely discuss this and other issues. And celebrate the Fediverse, because there is much to celebrate! Join us?)

Thursday, 09. March 2023

Kent Bull

KERI Tutorial Series: Treasure Hunting in Abydos! Issuing and Verifying a Credential (ACDC)

Welcome to the latest installment in the KERI Tutorial Series! In a prior article we learned how to use KERI to sign and verify simple messages. This tutorial goes to the full depth of issuing and verifying credentials (ACDCs). We outfit our team of explorers bound on a treasure hunting […]

Welcome to the latest installment in the KERI Tutorial Series! In a prior article we learned how to use KERI to sign and verify simple messages. This tutorial goes to the full depth of issuing and verifying credentials (ACDCs).

We outfit our team of explorers bound on a treasure hunting journey with high tech security tags so the Gatekeeper will recognize them and permit them entrance into the most rewarding, yet dangerous, caverns of the secret underground chambers beneath the Osireion at Abydos.

Egyptian Story Time

An ancient city in Egypt from the time of the Pharaohs, Abydos has long been known as a burial and memorial site for the ancient rulers. Yet, until recently, what had been kept a secret was the true purpose of Abydos and the Osireion: an entrance to an underground world full of bounty and treasure where the Egyptian rulers stored their most precious things.

Even with this revelation, few could master the secret of how to travel between the Osireion and this underground world. Only two ever reliably figured it out: first the wise Ramiel and then his strong student Zaqiel. After they retrieved all the riches and treasure they wanted, they realized they had an opportunity on their hands as guides for future explorers. Thus ATHENA was born.

Check out the 5 minute intro to this post and the ATHENA Network tutorial series!

Opportunity

To establish a new standard in treasure hunting trust, and to capitalize on their opportunity, Ramiel and Zaqiel formed the Abydos Treasure Hunting Expedition Navigators Assembly (ATHENA) and began looking for a technology partner to build their Journey Mark trust system. Ramiel took on the role of the Wise Man to evaluate, grant marks to, and guide treasure hunters to Abydos while Zaqiel, the stronger of the two, took on the role of the Osireion Gatekeeper to ensure only those accepted by Ramiel would be given the opportunity to learn to travel to the underground world.

What ATHENA seeks to create

According to ATHENA, explorers desiring entrance through the Osireion must present a verifiable, secure proof signed by the Wise Man in order to be accepted by the Gatekeeper. Potential explorers must signify their intent to embark on a journey by presenting the Wise Man with another secure proof.

The diagram below shows the end state of their desired network.

On a practical note, get some refreshments and settle in as this post is large and contains many parts. It may take a few sessions to fully absorb the material depending on your existing familiarity with KERI and ACDC and the related Github repositories.

You meet the two

So how did you ever come to meet up with Ramiel and Zaqiel at ATHENA? It turns out the story is more believable than you might think.

While on a casual stroll at the Internet Identity Workshop (IIW) you had the good fortune of bumping into Ramiel and Zaqiel and they asked you how you would build the ATHENA security system. Since you had just left Dr. Samuel Smith’s session on composable event streaming representation (CESR) and had earlier been to Drummond Reed’s session KERI for Muggles where you learned about key event receipt infrastructure (KERI) and authentic chained data containers (ACDC), you realized they were the best decentralized key management infrastructure (DKMI) available and figured this would be the right tool to get the job done for ATHENA.

After hearing the details from Ramiel you became excited about the opportunity. You enthusiastically shook hands with them and began your work. Then, feeling a little in over your head and hoping for a concise and useful guide on KERI and ACDC to get you started on your development path, you hear Kaliya Identity Woman Young casually mention a link in the session notes she hounded the session presenter to write down. Grateful for her efforts you began to feel a sense of relief as you turned to the IIW QiQo site and found this blog post. A rush of hope and encouragement came and you dug in realizing you found the KERI manual you needed to get a simple prototype up and running.

Your job is to make this all happen.

What you can expect to learn

- All of the components that are required for a basic KERI network, including configuration
- ACDC Credential Schema Creation, Issuance, and Revocation
- KERI Command Line Interface (kli) and the (now deprecated) KERI Mark I Agent REST API
- To learn about the KERIpy code. There are multiple references to important parts of the KERIpy codebase strewn throughout this tutorial.
- A fair bit about BASH scripting if you read through the workflow.sh script.

Characters

Our Explorer team includes the following cast of characters.

Richard the Explorer!

Elayne the Librarian!

The ATHENA team includes:

Ramiel the Wise Man

Zaqiel the Gatekeeper

Now that you’ve met the team let’s get going. We have a lot to do!

Outline

As you can see, we start with a visual of where we are going, cover concepts briefly, perform environment setup, quickly review major terminology, and then it is off to the races with the admittance workflow. From schema writing to network setup and finally to issuance, presentation, and revocation, this tutorial takes you through the basics of using KERI and ACDC.

- Goal Network – End State
- ATHENA Network
- Foundational Vocabulary and Theory
- Keys, KELs, ACDCs, TELs
- Schemas, SAIDs, OOBIs
- Controllers, Witnesses, Runtime
- Environment Setup – Dependency Installation
- Abydos Admittance Workflow
- Credential Map
- ACDC Schema Writing
- Schema Linking and Verification
- Network Configuration
- Witness Network
- KERI Controller Setup
- Write controller configuration file
- KERI Agents
- Agent flow: start agents
- KERI Keystores and Inception
- Out of Band Introductions Between Controllers
- Credential Registries
- Credential Issuance
- Advanced Credential Issuance – with Edges and Rules
- Credential Presentation
- Abydos Gatekeeper: Custom controller for credential handlers
- Webhook Setup
- Credential Revocation
- Missing Parts Review

To cover all of these items in this already very large post we must get going so we will leave out a significant list of concepts that will be addressed with future articles.

Concepts not covered

- Key Rotation, Reserve Rotation, Custodial rotation
- KERI Data structures in depth
- KERI Message Seals
- KERI Messages (Key Event, Receipt, Query, Reply, KSN, TSN, TEL, etc.)
- Edge signing with mobile applications
- Multi-signature groups
- Composable event streaming representation (CESR)
- KERIox – Rust version of KERI
- did:keri
- Watcher networks and other roles including judges and jurors
- KERI Mark II Agent
- KERI vocabulary in depth – we only go to a shallow depth in this post for brevity reasons
- Backers or connections to other blockchains
- Identifier Theory
- Graduated Disclosure, Contractually Protected Disclosure, Selective Disclosure
- ACDC Edge and Edge Group Operators
- Blinded ACDC attributes
- Bulk Issued ACDCs
- Details of Issuance and Presentation Exchange Protocol (IPEX)
- Details of Public Transaction Event Logs (PTEL)
- Duplicity detection and recovery from duplicity
- KERIpy internal eventing system details (based on the IoFlo/Hio async runtime)

Don’t worry that this is a big list. There is a lot to KERI, though this article gets you set up with the foundation. Once you are up and running with the contents of this blog post you will have a foundation upon which to explore the other topics listed above.

For now, let’s go to Abydos!

Going to the Osireion at Abydos

Our Task

1. Create the network foundation.
2. Create digital identities for each involved party.
3. Give a verifiable journey description to each interested explorer.
4. Request a Journey Mark for each explorer from the Wise Man.
5. As the Wise Man issue the Journey Mark to each member of the explorer party.
6. Present the Journey Marks to the Gatekeeper and grant access to the Osireion at Abydos.

Environment Setup

Check your environment-specific platform installation instructions for additional clarity though use the “All Platforms” list for a list of dependencies that ultimately must be installed on your machine and available from the command line.

All Platforms

- Python 3.10+ required
- Download the Abydos Tutorial repository with all of the example code:
  $ git clone https://github.com/TetraVeda/abydos-tutorial
- Install Rust (for the blake3 dependency KERIpy depends on and builds):
  $ curl https://sh.rustup.rs -sSf | bash -s -- -y && source "$HOME/.cargo/env"
- Install Maturin (for package management one of KERIpy's dependencies uses):
  $ pip install maturin
- Install KERIpy, at least version 1.0.0: https://pypi.org/project/keri/
  WARNING (4-17-23): Currently there is a problem with the latest release of KERIpy that has broken older versions of KASLcred, so until a new version of KERI is released to PyPI you will need to install the development version of KERIpy from its source repository.
  You can do an install from PyPI with:
  $ python -m pip install keri==1.0.0
  You may also install from source:
  $ git clone https://github.com/WebOfTrust/keripy
  $ cd keripy
  $ python -m pip install -e ./
  This will give you the latest version, which should be 1.0.0.
- Install KASLCred: https://pypi.org/project/kaslcred/
  (The same KERIpy release warning from above applies here.)
  $ pip install kaslcred==0.1.2
  You can always install kaslcred from source as well: https://github.com/TetraVeda/kaslcred
- Install at least version 0.6.1 of the kentbull/sally fork of GLEIF-IT/sally:
  $ git clone https://github.com/kentbull/sally.git
  $ cd sally
  $ python -m pip install -e ./
- Libsodium: https://libsodium.gitbook.io/doc/installation
  On my MacBook Pro I ended up having to do the full installation: download a tarball from the releases page, change directories to within the decompressed tarball, then run:
  $ ./configure
  $ make && make check
  $ sudo make install
- vLEI credential caching server (vLEI-server): https://github.com/WebOfTrust/vLEI.git
  Clone the repository and do a pip install:
  $ git clone https://github.com/WebOfTrust/vLEI.git
  $ cd vLEI
  $ python -m pip install -e ./

OS X

With Homebrew: Install Python 3.10+, Python 3 PIP, Python 3 Venv, Libsodium-dev, maturin, and the Rust toolchain.

- Installing Python 3.10+ on a Mac is a rather involved process since the operating system default installation can interfere with things. Use the following freeCodeCamp guide to install "pyenv" and then install Python 3 on top of that: How to Install Python 3 on Mac – Brew Install Update Tutorial
- $ brew install libsodium # warning, doesn't work.
  # See the above instructions on libsodium installation from the tarball to get things working on a Mac. You need Libsodium on your path, which sudo make install does.
- $ pip install maturin # if you haven't already
- Install KERI:
  $ git clone https://github.com/WebOfTrust/keripy.git
  $ cd keripy
  $ python -m pip install -e ./
- Verify your installation:
  $ kli version
  This should output something like "1.0.0".

Windows

Thanks to Jim Martin for providing these Windows instructions.

See the Manual Setup with Windows section of the KLI with Heartnet tutorial for instructions on setting up KERI on Windows.

Jump to the end with the workflow.sh script

If you want to quickly start up a network rather than working through the entire blog post piece by piece then you can use the workflow.sh script to start the whole network and perform all of the actions including schema linking, configuration file writing, witness network startup, controller configuration and startup, OOBI configuration, credential issuance, and credential revocation.

The workflow script accepts a few different ways of running it:

# from within the abydos-tutorial directory
#
# Option 1: run with the KLI and no clearing
./workflow.sh
# this will run the entire workflow yet will not clean up the ~/.keri,
# ~/.sally, or the /usr/local/var/keri directories so you will need
# to manually clear those out in order to run the workflow script again.
#
# Option 2: run with the KLI and the clearing option
./workflow.sh -c true
# this runs the entire workflow using the KLI and clears out the KERI
# directories mentioned above
#
# Option 3: run with the KERI Agent (Mark I - deprecated) and the
# clearing option
./workflow.sh -a true -c true
# this runs the workflow using the KLI only where absolutely necessary
# and then cURL requests to the Agent API to perform the majority of the
# workflow

This is a great way to get you up and running. The entire workflow takes only 3 minutes for the KLI flow and 2.5 minutes for the Agent workflow. Reading through the BASH script may be the fastest way for you to learn how to use KERI if you are ready for that level of detail.

ATHENA Network

The ATHENA network includes:

- three witnesses
- four controllers (with one custom controller, the Abydos Gatekeeper)
- four agents, one for each controller (Mark I Agent – deprecated)
- a credential caching server
- four distinct credential types
- a webhook

You will, of course, set up all of the configuration files and keystores for each witness and controller as well as perform all of the OOBI configuration for the network to function. See below for an overview.

Network Overview

Controllers

There is one controller each for Richard, Elayne, Zaqiel, and Ramiel. You will write the configuration files for each controller and create and incept their keystores.

Custom Controller

The one custom controller is the one for Zaqiel, the Abydos Gatekeeper controller. This controller needs to be customized in order to make the access control decisions for Abydos. This includes custom credential validators and presentation response payload constructors.

Witnesses

The same three witnesses are used for all of the controllers. Using the same witnesses for everything isn’t required and would be bad practice, though it simplifies the architecture for this post. You could easily use separate witnesses per controller with just a bit more tedious work. You will write the witness configuration files for each witness and set them up using the KLI.
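If you want a throwaway witness pool to experiment with before writing those config files, KERIpy also ships a built-in demo command. This is just a convenience sketch and not the tutorial's own witness setup:

# Start KERIpy's bundled demo witnesses (wan, wil, wes) on local ports
kli witness demo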

Credential Caching Server

The vLEI-server from the vLEI repository is used as the credential caching server. It is a very simple HTTP fileserver that serves up ACDC schemas upon HTTP request. It would easily be implementable in any other language or tech stack if you needed it to be.
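Once installed (see the environment setup above), starting it is roughly a one-liner. The directory arguments below follow the vLEI repository's README and should be adjusted to wherever your schema files actually live:

# Serve ACDC schemas, sample credentials, and OOBIs over HTTP
vLEI-server -s ./schema/acdc -c ./samples/acdc/ -o ./samples/oobis/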

Agents

The agents use the current agent code in the KERIpy repository, called the Mark I Agent. This agent code is now deprecated even though it is nearly fully fleshed out. The new Mark II Agent is being developed as a separate repository in the KERIA repository.

Webhook

The webhook is a simple HTTP server built on the ioflo/hio library executed with sally hook demo. All it does is receive HTTP requests from the Abydos Gatekeeper custom controller, parse the JSON body, and print all of the headers and body values.

Foundational Vocabulary and Theory

Next we cover some basic vocabulary and acronyms so you can make your way through this post. I recommend you review Nuttawut Kongsuwan’s KERI jargon summary of the KERI whitepaper for an overview of the terminology in the space. You can always go to the KERI Whitepaper or my KERI Mind Map blog post as well for a list of documents central to the KERI space that will assist you in gaining a full understanding of the concepts and the terminology.

KERI – Key Event Receipt Infrastructure. A decentralized public key infrastructure for self certifying identifiers aiming at security as a top priority while following the design principle of “minimally sufficient means.” The foundation for the verifiable credentials technology called authentic chained data containers.

AID – autonomic identifier. A self-certifying identifier that is the primary root of trust in KERI. AIDs are decentralized identifiers that do not rely on a blockchain.

Controller – a node or an individual set of cryptographic keys, their key event logs (KELs), and associated data. Consists of a keystore directory, a set of autonomic identifiers (AIDs), key event logs, any credentials, transaction event logs, and out of band introductions.
Also considered to be a running instance of a KERI controller software such as KERIpy (Python KERI controller) or KERIox (Rust KERI controller).

Witness – a trusted entity designated by another controller as a witness to anything that happens for an AID controlled by that controller. Each witness must provide witness receipts to each event in a key event log.

KERIpy – the Python implementation of KERI including the controller implementation. This Python implementation is considered authoritative as the KERI core team builds it along with Dr. Samuel M. Smith.

Agent – The HTTP API to a KERI controller.

OOBI – out of band introduction. The core discovery mechanism of KERI. Each OOBI defines a path of communication between controllers or a location where a resource like an ACDC schema resides. Think of this as something like a DNS entry or an iptables entry, or an address in an address book. If you have an OOBI for a controller or object you want to talk to then you can reach it over the internet.
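As a concrete example of what resolving an OOBI looks like with the KLI (the URL, AID, and aliases below are placeholders; check kli oobi resolve --help for the exact flags):

# Teach the "explorer" keystore how to reach another controller via its witness OOBI
kli oobi resolve --name explorer --oobi-alias ramiel \
  --oobi http://127.0.0.1:5642/oobi/EXAMPLE_CONTROLLER_AID/witness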

ACDC – authentic chained data container; verifiable credential. The fundamental unit of data shared between KERI controllers meant for instrumenting business process workflows that use verifiable credentials.
An append-only, graph-linked data structure that is secure and verifiable.

Edge – a link (or chain) between two ACDCs.

ACDC Schema – the detailed description of precisely what data is allowed to be placed into any given ACDC.

Credential Registry – where ACDCs are issued from.

Ricardian Contract – In ACDCs they are a way to link the legal system to specific ACDCs. This comes in the Rules section of ACDCs.

Admittance Workflow

The purpose of all of this business about identifiers and credentials is to support a basic admittance workflow into Abydos, specifically the Osireion at the back of Abydos. In order to support this workflow we use verifiable credentials.

Credentials Overview

For the explorers to be granted admittance into Abydos the admittance workflow must be followed:

1. The Wise Man must issue TreasureHuntingJourney credentials to each explorer wishing to go on a treasure hunt to Abydos (a kli sketch of this step follows the list).
2. Each explorer must issue a JourneyMarkRequest credential for the treasure hunt they want to embark on by pointing to a specific TreasureHuntingJourney credential in their request.
3. As the Wise Man approves each explorer he is to issue a JourneyMark as an approval of the explorer's desire to embark on the journey. This approval should point to the specific request each explorer made for a specific journey.
4. Once a sufficient number of JourneyMark credentials are issued by the Wise Man and the journey threshold has been crossed he is to issue a JourneyCharter to each explorer signifying his approval of their journey.
5. The JourneyCharter credential is to be presented by each explorer to the Gatekeeper in order to be granted entrance into Abydos.
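As a taste of step 1 on the command line, here is a hedged sketch using kli vc issue. The keystore name, alias, registry name, and data file are illustrative assumptions; the schema SAID and recipient AID are the TreasureHuntingJourney values that appear in the JSON export later in this post.

# Wise Man issues a TreasureHuntingJourney credential to an explorer's AID (sketch)
kli vc issue --name wiseman --alias ramiel \
  --registry-name athena-registry \
  --schema EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l \
  --recipient EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q \
  --data @treasure-hunting-journey-data.json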

To understand this workflow we first cover the credential map itself, how to write a credential, and how to link each of the schemas together using the KASLCred tool.

Visual workflows and use cases for credential issuance

Each of the credential issuance flows is described visually below.

1 – TreasureHuntingJourney issuance flow

As a Wise Man (ATHENA official) I want to provide confidence to potential explorers so that I may attract them to go on a treasure hunting journey.

To do this the WiseMan issues the TreasureHuntingJourney credential.

2 – JourneyMarkRequest issuance flow

As an Explorer I want to indicate a serious interest in a specific TreasureHuntingJourney so that ATHENA will hold my spot.

To do this an Explorer (or Librarian) issues to the WiseMan a JourneyMarkRequest pointing to a specific TreasureHuntingJourney credential.

3 – JourneyMark issuance flow

As a Wise Man I want to endorse a particular Explorer for a specific TreasureHuntingJourney so that the journey can be chartered once a threshold of party membership is reached.

To do this the Wise Man issues the JourneyMark to those Explorers whose JourneyMarkRequests he approves. The JourneyMark points to the specific JourneyMarkRequest submitted by an explorer.

4 – JourneyCharter issuance flow

As a Wise Man I want to charter a treasure hunting journey once a party membership threshold is reached so that the explorers can present this charter credential to the Gatekeeper and gain entrance into Abydos.

To do this the WiseMan issues the JourneyCharter credential to each Explorer going on the journey indicated in the TreasureHuntingJourney credential. The JourneyCharter points to both the JourneyMark and the TreasureHuntingJourney credential.

5 – JourneyCharter presentation flow

As an Abydos Gatekeeper I want to allow authorized Explorers into Abydos so that they can have a good time on a treasure hunting journey.

To do this the Explorers present the JourneyCharter credential to Zaqiel the Gatekeeper, Zaqiel then validates the credential, and makes an access control decision on whether to allow an Explorer in to Abydos.

Credential Map

You decided to represent the secure proofs as four data containers (ACDC), or verifiable credentials (VC), with links between the different credentials as shown below. This graph shows the credentials and their links to each other, or edges. Each edge is named, as you will see later on in the schema map JSON file.

ATHENA Credential Graph

(1) TreasureHuntingJourney: One for the Wise Man to describe the overall treasure hunting experience from start to end, treasure splits for hunting party members, the destination, and the like. This will be issued by the Wise Man to each potential explorer.
(2) JourneyMarkRequest: One for a potential explorer to request to join the treasure hunting journey. This will be issued to the Wise Man by an explorer.
(3) JourneyMark: One for the Wise Man to issue to the accepted explorers so the Gatekeeper can know who the Wise Man has authorized for the journey.
(4) JourneyCharter: One that the explorer party can use to prove they are authorized for their journey to all who care to know as well as for the Gatekeeper to verify that they were authorized as a party.

In order to understand the credential schemas we must first cover some background knowledge on why schemas in ACDC are constructed the way they are, the purpose they serve, and how to use them.

ACDC Schema Writing

JSON Schema is the schema language ACDC schemas are expressed in. Beyond this schema language, self addressing data and self addressing identifiers are the mechanism used to verify schemas. With security first as a design principle, self addressing data combines well with JSON Schema to provide a secure schema capability for the KERI and ACDC ecosystem.

ACDC Credential Schemas with JSON Schema

For each ACDC we use there must be a schema defining the shape of the data in the container or credential. This schema must be verifiable and protected from schema revocation and schema malleability attacks. Using self addressing data (SAD) through the use of self addressing identifiers (SAIDs) makes it easy to accomplish both of these security objectives along with the business process objective of having a verifiable shape or envelope for our data.

What does an ACDC look like? TreasureHuntingJourney credential issuance example JSON export

This is getting a little bit ahead of ourselves, though a visual of what a credential looks like will help you understand where we are going. Take a quick look at what an issued TreasureHuntingJourney credential looks like, how to get it, and then move on to the next section.

{ "v": "ACDC10JSON0001d9_", "d": "EEXZuecxP4Y3xZxvA_DtnrPX8nbSDPeGaMIxNKvLVENb", "i": "EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ", "ri": "EF3hGwYMK0r74qlSRbKw3_RzCrJ8D2Xmv5BlunrN3NC2", "s": "EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l", "a": { "d": "EPZSit7DYobEAv3WkVk2GBoBWrdglaHImi3_lnoc5fNl", "i": "EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q", "dt": "2023-03-21T15:39:14.560430+00:00", "destination": "Osireion", "treasureSplit": "50/50", "partyThreshold": 2, "journeyEndorser": "Ramiel" } }

This JSON export of a credential is obtained with the KLI command kli vc export as shown below:

RICHARD_CRED_SAID=$(kli vc list --name explorer --alias richard --said --schema EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l)
kli vc export --name explorer --alias richard --said $RICHARD_CRED_SAID

This particular execution of the command assumes there is only one credential to be listed by the kli vc list command for the schema EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l, the schema identifier for the TreasureHuntingJourney schema. If there were multiple TreasureHuntingJourney credentials then there would be multiple SAIDs output by that command, and you would pick one of them to be the argument to the --said parameter in order to export its JSON.

What is an ACDC schema made of? TreasureHuntingJourney schema creation

The first schema to write is the JSON Schema for the TreasureHuntingJourney credential that Richard and Elayne must receive from the Wise Man Ramiel in order to submit their treasure hunting request as a JourneyMarkRequest later. In order to write this schema we must first understand all of the elements that go into an ACDC schema.

Schema Parts

Each ACDC schema has five main parts:

1. Schema metadata (required)
2. Properties section (required)
3. Attributes section (required)
4. Edges section (if used)
5. Rules section (if used)

This is a collapsed view of the TreasureHuntingJourney schema.

As you can see it has metadata, properties, attributes, and rules yet no edges. This is because this TreasureHuntingJourney ACDC does not extend nor connect with any other ACDC. This ACDC could appear as an edge in another ACDC though it does not have any edges itself.

Keep in mind that this instance of the TreasureHuntingJourney schema does not have any "$id" properties filled out yet. This tells you we are still writing the schema and are working from the template that the SAIDs will be added to, or "SAIDified." We will do that step later, after we finish writing the schema itself.

First we shall explain the attributes of the schema from outer to inner, top to bottom. Each of the attributes has a specific purpose we get into below.

SCHEMA description – top-layer

While ACDC schemas use the JSON Schema specification in most regards the repurposing of the “$id” field is a notable exception. The top-most $id, $schema, title, description, type, credentialType, and version properties are about the ACDC schema itself. What is contained inside of the ACDC is described by what comes in the properties section which is further clarified by the additionalProperties and required attributes.

$id: The self addressing identifier (SAID) of the overall schema. This can only be computed once the dependent SAIDs for all attribute (a), edge (e), and rule (r) blocks have been computed and embedded within the respective $id attributes of each block. This field is repurposed to provide a unique identifier of the schema instance because in ACDCs all schemas are self addressing, so there is no separate document to retrieve, unlike the non-ACDC usage of the $id field where the field contains a base URI fragment. Since the schema is keyed and stored by SAID rather than a retrieval URI, there is little to no benefit from storing a retrieval URI in the $id field. See the $id section of the ACDC specification for a more complete explanation. Example value: EDiuqE2fD2MhrSpZhAzlZn8XLSJiHt1pyy1eEWB-l1qq

$schema: Typically set to the value “http://json-schema.org/draft-07/schema#”. The valid values can be found in the $schema documentation of the JSON Schema specification.

title: A short descriptive name you as the schema designer can write. Can have spaces and punctuation.

description: A longer explanation of the type of credential, what it is used for, or any other useful credential-related information.

type: A JSON Schema field that can be an object, array, numeric type, boolean, string, or other accepted value according to the JSON Schema Reference.

credentialType: A field you can use to specify the specific type name of a credential. This is like a class name in an object oriented language and can be PascalCase with no spaces, following typical naming rules in many data and programming languages.

version: A version field for this specific credential type, used for schema versioning and migration. This is distinct from the “v” version field described in the Version String Field section of the ACDC specification: this “version” field is the schema version, while the ACDC “v” field specifies the serialization format version and the size of the serialized data. Keep in mind these two fields are separate and are used for different purposes even though they have similar names. Remembering that the short name occurs only in ACDC instances and the long name occurs only in ACDC schemas is a good way to keep the concepts distinct.

properties: The section where you define the data attributes that will exist in a minted ACDC as seen above. The values for the properties in this section are provided during issuance with the data argument in kli vc issue --data.

additionalProperties: Controls handling of extra attributes not defined in the “properties” section. Can be false or a rule like { "type": "string" } that constrains what additional properties can be added to a schema. See the additionalProperties section of JSON Schema.

required: Defines the list of properties that must appear in the “properties” section. By default the properties in the “properties” section are not required. See the “Required Properties” part of the JSON Schema specification.
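Putting these fields together, a minimal sketch of the top of the TreasureHuntingJourney schema before SAIDification is shown below. The $id value is intentionally left empty, the title, description, and required strings are illustrative rather than the exact text from the tutorial repository, and the "a" and "r" entries are stand-ins for the blocks described in the following sections:

{
  "$id": "",
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Treasure Hunting Journey",
  "description": "A credential describing a treasure hunting journey offered by ATHENA",
  "type": "object",
  "credentialType": "TreasureHuntingJourney",
  "version": "1.0.0",
  "properties": {
    "v": { "description": "Version", "type": "string" },
    "d": { "description": "Credential SAID", "type": "string" },
    "u": { "description": "One time use nonce", "type": "string" },
    "i": { "description": "Issuer AID", "type": "string" },
    "ri": { "description": "Credential registry identifier", "type": "string" },
    "s": { "description": "Schema SAID", "type": "string" },
    "a": {},
    "r": {}
  },
  "additionalProperties": false,
  "required": ["v", "d", "i", "ri", "s", "a"]
}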

With all of the metadata properties covered we next describe the properties section of the TreasureHuntingJourney credential where you describe the fields of your custom data that will be a part of the credential.

TreasureHuntingJourney Properties

Only a small amount of data is written inside of a TreasureHuntingJourney credential:

{
  "destination": "Osireion",
  "treasureSplit": "50/50",
  "partyThreshold": 2,
  "journeyEndorser": "Ramiel"
}

The TreasureHuntingJourney credential is intended to provide information to prospective explorers about a journey they could embark on. This includes where the journey is headed, the destination, the proportion of treasure proceeds split amongst each journey party member, the minimum party threshold for the journey to be chartered, as well as the individual who chartered the journey.

The properties section of an ACDC schema is the location to include custom business-process data as shown above for the TreasureHuntingJourney credential.

Writing The Properties schema

Writing a schema to describe this small set of data requires a rather verbose description in JSON Schema. Though tedious, it is simple to understand once you get the hang of it.

In addition to this data there is a rules section, “r”, that we will get into later when we talk about Ricardian Contracts. For now we focus only on the data going into the credential, the “a” section.

Writing a properties section with JSON Schema to describe this requires both metadata and an attributes section.

The ACDC constraints written in KERI require the presence of a number of metadata attributes in the properties section, as shown below. Each attribute is linked to the appropriate section in the ACDC specification. The overall field map description is provided here: Top-Level Fields.

v – “Version”: Describes the version of the serialization format and size used to write the schema.
d – “SAID or identifier”: Provides a stable, universal, cryptographically verifiable and agile reference to the properties block.
u – “Universally Unique Identifier as a Nonce”: A salty nonce used to provide cryptographic protection of the “d” field against brute force attacks, such as rainbow table attacks, that try to guess the block’s contents. This field is optional.
i – “Autonomic Identifier”: The identifier of the issuer of this ACDC.
ri – “Registry Identifier (Derived Identifier)”: The credential registry that this particular ACDC was issued from. This is derived from the “i” field of the issuer who issued the ACDC. A controller can create and use one or more registries.
s – “Schema SAID”: The SAID of the schema to be used to validate this properties block.
a – “Attribute map”: The list of attributes that defines this schema. This is where the JSON data from above gets placed into the credential.
r – “Rules section”: Where any Ricardian Contracts (code as law) are stored in the credential.

Other than the metadata attributes the “a” and “r” properties, attributes and rules, are the more interesting parts. The “a” section is where you will describe the data properties that will be placed in your ACDC.
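To make the field map concrete, the sketch below shows the rough shape of an issued TreasureHuntingJourney ACDC with the data from above placed in its “a” block. All values in angle brackets, the version string, and the date-time are placeholders; the real values are computed by the KERI tooling during issuance, so treat this only as an illustration of where each field lives:

{
  "v": "<ACDC version string>",
  "d": "<SAID of this credential>",
  "i": "<AID of the issuer, Ramiel>",
  "ri": "<registry identifier derived for the issuer>",
  "s": "<SAID of the TreasureHuntingJourney schema>",
  "a": {
    "d": "<SAID of this attributes block>",
    "i": "<AID of the issuee, for example Richard>",
    "dt": "2023-01-01T00:00:00.000000+00:00",
    "destination": "Osireion",
    "treasureSplit": "50/50",
    "partyThreshold": 2,
    "journeyEndorser": "Ramiel"
  },
  "r": "<rules section, covered later>"
}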

TreasureHuntingJourney Attributes Section

The attributes section can be one of two things as shown in the JSON schema, either a SAID or an inner block with its own properties.

...
"a": {
  "oneOf": [
    {
      "description": "Attributes block SAID",
      "type": "string"
    },
    {
      "$id": "EPfNU6jej4GGyprpakq6KCO9os9vp9jLIRcH1xJqVezj",
      "description": "Attributes block",
      "type": "object",
      "properties": {
        ...

oneOf Operator and compact ACDC

The oneOf operator in JSON Schema means that the “a” section can be either of these two things. For ACDCs this means the “a” block can be either the condensed version, which is the SAID of the “a” block, or the entire contents of the “a” block. The condensed version is called a “Compact ACDC” and is considered equivalent to the un-condensed, or un-compacted, version of the ACDC.

We are not covering compact ACDCs in this article so we move on to the full or un-compacted ACDC.

Full or un-compacted ACDC

The full ACDC schema includes another set of metadata attributes, an identifier field “$id”, and another properties section. JSON Schema, as you can see, is very verbose. Looking a level deeper we see the following set of attributes inside the “a” block:

Now this is starting to look like the JSON data we intend to put in the credential. Inside of the “a” block we see another “$id” field as well as other metadata fields from JSON Schema including the “description”, “type”, and “properties” fields. As those are mostly self explanatory we move on to the inner “properties” block.

Each property has a name, a description, and a type:

...
"properties": {
  "d": {
    "description": "Attributes block SAID",
    "type": "string"
  },
...

The “type” field must be one of the valid types from the list in JSON Schema:

string, number, integer, object, array, boolean, null

The “description” field is where the schema writer puts a human-readable, concise definition of what the field contains.

Inner Properties block metadata fields

The inner properties block has a set of metadata fields that the ACDC code in KERI fills out for you including the “d” (SAID), “i” (issuee), and “dt” (date and time stamp) fields. They are required attributes in the top level of the inner properties block and look like the following in the JSON schemas where they appear:

...
"properties": {
  "d": {
    "description": "Attributes block SAID",
    "type": "string"
  },
  "i": {
    "description": "Issuee AID",
    "type": "string"
  },
  "dt": {
    "description": "Issuance date time",
    "type": "string",
    "format": "date-time"
  },
...

The $id field is a SAID of the “a” attributes block. This provides the “a” block with cryptographic protection and verifiability when signed by a KERI key.

Inner Properties Block Custom Fields

Following the required properties you define your own custom properties to be added to the ACDC. In the case of the TreasureHuntingJourney credential those properties include the “destination”, “treasureSplit”, “partyThreshold”, and “journeyEndorser” fields as follows:

...
"properties": {
  ...
  "destination": {
    "description": "The target location for this journey where the hunters will go.",
    "type": "string"
  },
  "treasureSplit": {
    "description": "The type of splits for this journey. 50/50, 25/25/25/25, and so forth. Must add up to 100.",
    "type": "string"
  },
  "partyThreshold": {
    "description": "The minimum party member threshold needed to charter this journey",
    "type": "integer"
  },
  "journeyEndorser": {
    "description": "The AID of the ATHENA inner circle member endorsing this treasure hunting journey.",
    "type": "string"
  }
...

Each field has a description and a type. The validation of each field is up to the issuer or the verifier of the credential.

The Edge Section

We will briefly cover the JourneyMarkRequest schema since we have not covered edge sections yet. The TreasureHuntingJourney credential has no edges so it isn’t a good example of a credential with an edge section.

JourneyMarkRequest Schema

As you can see the JourneyMarkRequest has an additional attribute, the “e” section. This is for edges, or chains between individual ACDCs.

The edge section is very similar to an “attributes” section except it is specialized. It contains a “oneOf” operator similar to the “attributes” section since the edge section may also be represented by a SAID in the most compact version, or any compact version, of an ACDC.

Inside of the Edges block, the second item in this “oneOf” array, we see a set of common properties. $id, description, and type are what we have seen before, as is properties. Inside the properties block we find something slightly different.

The “d” field is the SAID of this particular edges block and is computed from the other properties which in this case is just the “journey” property.

The “journey” property is the actual edge that points to the TreasureHuntingJourney credential.

The “n” field is the SAID of the far side credential, the TreasureHuntingJourney.

The “s” field is the SAID of the ACDC schema for the TreasureHuntingJourney.

Defining an edge section like this in a schema is the foundation of what is needed to add in an edge between these two credentials later during the credential issuance process.

The Other Schemas

With this nearly exhaustive explanation of one of the four schemas complete, and a brief look at the JourneyMarkRequest edge section, we leave the reading and writing of the other three schemas to the reader. This includes the JourneyMarkRequest, JourneyMark, and JourneyCharter. You may find the ATHENA version of the other schemas in the athena/schemas directory. They are described briefly below at a high level. You can apply the schema writing concepts from above to all of the other schemas.

Just glance over the following pictures to get a sense of what is in each schema and then move on to the credential linking section afterwards.

JourneyMark Schema

Has the following properties:

And edge properties for “request”

JourneyCharter Schema

Has the following properties:

And the following edges. Something different for the JourneyCharter schema is that it links to two edges, both “mark” and “journey”:

Linking Schemas

One concept we have not yet covered is how to link schemas together. We cover this next in the Schema Linking and Verification section via the use of the KASLCred utility.

Schema Linking and Verification

Linking schemas with self addressing data is part of the way ACDCs provide security, specifically end-verifiability, of credentials. For data to be self addressing it must contain an identifier that is computed from the contents of the data, a digest. Self addressing data protects against security exploits like schema malleability and schema revocation attacks while also being an easy way to verify the contents of the data. These self addressing identifiers are placed in the “$id” attributes of the various blocks in ACDC schemas.

Here is an example self addressing identifier for the TreasureHuntingJourney ACDC shown above: EEXZuecxP4Y3xZxvA_DtnrPX8nbSDPeGaMIxNKvLVENb

Self Addressing Data (SAD) and Self Addressing Identifiers (SAIDs)

The self addressing identifier (SAID) is like a key in a key-value pair and the data is like the value, except that the identifier computation algorithm allows the SAID to be embedded in the data. This requires a two pass process: one pass to calculate the identifier and a second to embed it within the data. Verifying self addressing data also requires two passes: one to remove the embedded identifier and a second to verify the data against the SAID. To fully understand the self addressing identifier concept refer to the IETF Draft Specification for Self-Addressing IDentifier (SAID) by Samuel Smith.

Schema Evaluation to compute SAIDs

Using self addressing identifiers (SAIDs) with a graph of schemas means the credential schemas must be evaluated in a deterministic order and have their “$id” properties populated from the inner-most (or top-most) schema to the outer-most (or bottom-most) schema. In this sense the outer-most schema points to the inner-most schema and the bottom-most schema points to the top-most schema, as in the ATHENA Credential Graph diagram above, reproduced below for reference. Achieving this order requires computing a dependency graph of the schemas based on their edges (dependencies), so that identifiers are generated on the inner-most (or top-most) schemas first and then on each layer outward to the outer-most (or bottom-most) schemas.

Graph Evaluation Order

In our case our graph must be generated in the following order:

1. TreasureHuntingJourney (root node, no dependencies)
2. JourneyMarkRequest (depends on TreasureHuntingJourney)
3. JourneyMark (depends on JourneyMarkRequest)
4. JourneyCharter (depends on both JourneyMark and TreasureHuntingJourney)

Your options here are to do this by hand with the kli saidify command, manually editing all of the schema files; to use the code in the generate.py file from the WebOfTrust/vLEI repository as a guide; or to use the KASLCred Python package to do all of the linking for you. For this post we will use KASLCred in the interest of time.
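If you do want to try the manual route, a rough sketch of it follows. It assumes kli saidify accepts a --file argument pointing at a schema JSON file (check kli saidify --help for the exact flags in your KERIpy version), and that after each run you copy the newly computed SAID into the edge blocks of any schema that references it before saidifying that schema:

# Manual SAIDification sketch -- run in dependency order, least dependent schema first.
kli saidify --file treasure-hunting-journey.json
# copy the new SAID into journey-mark-request.json's edge section, then:
kli saidify --file journey-mark-request.json
# copy the new SAID into journey-mark.json's edge section, then:
kli saidify --file journey-mark.json
# copy the SAIDs for JourneyMark and TreasureHuntingJourney into journey-charter.json, then:
kli saidify --file journey-charter.json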

KASLCred

While the intricacies of parsing ACDC schemas and adding SAIDs to the “$id” attributes, also called “saidifying” the schemas, are interesting, in the interest of time we will use the KASLCred utility, which was written to do all the saidifying for you. You can read the Python code of the TetraVeda/kaslcred repository if you want to see how it was done. It’s only a few hundred lines of code.

KASLCred Schema Map

Using KASLCred to evaluate your schema graph involves writing a schema map file expressing the set of credentials to be evaluated in a JSON file. The “athena/schemas/athena-schema-map.json” file from the tutorial repository is reproduced below for reference:

{
  "schemas": [
    {
      "schemaName": "JourneyCharter",
      "schemaFilePath": "journey-charter.json",
      "dependencies": ["JourneyMark", "TreasureHuntingJourney"],
      "edgeName": ""
    },
    {
      "schemaName": "JourneyMarkRequest",
      "schemaFilePath": "journey-mark-request.json",
      "dependencies": ["TreasureHuntingJourney"],
      "edgeName": "request"
    },
    {
      "schemaName": "TreasureHuntingJourney",
      "schemaFilePath": "treasure-hunting-journey.json",
      "dependencies": [],
      "edgeName": "journey"
    },
    {
      "schemaName": "JourneyMark",
      "schemaFilePath": "journey-mark.json",
      "dependencies": ["JourneyMarkRequest"],
      "edgeName": "mark"
    }
  ]
}

Evaluation Order

As you can see this list of schemas is out of order. The first item in the list, JourneyCharter, should be evaluated last, and the third item, TreasureHuntingJourney, should be evaluated first. This is no problem for KASLCred: it reads the dependencies array and reorders the schemas so that the least dependent item (with no dependencies) is evaluated first, and each remaining item is evaluated only after all of its dependencies have been evaluated.

Using KASLCred is as simple as a pip install and a python -m invocation once you have Libsodium and KERI installed:

# Install KERI and Libsodium as noted in the installation instructions above
pip install kaslcred==0.0.8

# Set ABYDOS_REPO_DIR to where you cloned the abydos-tutorial repo.
# Usage: python -m kaslcred [schemas dir] [results dir] [schema map file]
python -m kaslcred \
    ${ABYDOS_REPO_DIR}/athena/schemas \
    ${ABYDOS_REPO_DIR}/athena/saidified_schemas \
    ${ABYDOS_REPO_DIR}/athena/schemas/athena-schema-map.json

As long as all of the files named in the schemaFilePath properties of the schema map JSON file exist, you will end up with a set of SAIDified schemas ready to use.
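A quick sanity check, not part of workflow.sh, is to list the generated files and print the “$id” now embedded in each saidified schema using jq:

ls "${ABYDOS_REPO_DIR}"/athena/saidified_schemas
# Print the SAID embedded in each saidified schema
for schema in "${ABYDOS_REPO_DIR}"/athena/saidified_schemas/*.json; do
    jq -r '."$id"' "$schema"
done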

Quick overview of the other schemas

Review the JSON contents of the abydos-tutorial/athena/credential_data directory to get a sense for what kind of business process we are building on top of these schemas. We reproduce them here to illustrate their purpose.

JourneyMarkRequest data – for Richard

{
  "requester": {
    "firstName": "Richard",
    "lastName": "Ayris",
    "nickname": "Dunkie"
  },
  "desiredPartySize": 2,
  "desiredSplit": 50.00
}

As you can see here this credential looks a lot like an account registration HTTP REST API request. You can really do anything with ACDC credentials. It also indicates preferences to the ATHENA explorer matchmaking engine with the desired party size and the desired split.

JourneyMark data – for Richard

{
  "journeyDestination": "Osireion",
  "gatekeeper": "Zaqiel",
  "negotiatedSplit": 50.00
}

This data indicates where Richard will be going and other details of the journey. Remember that the JourneyMark has a graph edge pointing to the JourneyMarkRequest which means credential consumers can also resolve data from both the JourneyMarkRequest and the TreasureHuntingJourney credentials through those links.

JourneyCharter data – for Richard

{
  "partySize": 2,
  "authorizerName": "Ramiel"
}

As the final data packet sent as an ACDC this credential indicates what each explorer stands to expect on the chartered journey such as how many members they will be traveling with and who officially blessed their journey.

Schemas Review

Now that we have written the needed schemas for ATHENA and have SAIDified them with KASLCred we are ready to use them in configuring our trust network components as well as to issue and verify credentials.

Next we configure our trust network to cache these schemas with the vLEI caching server implementation (vLEI-server).

Network Configuration

Configuring a KERI network involves setting up the following components:

ACDC schema caching
Witnesses
Controllers
Agents (if you are using them)
Performing out of band introductions (OOBIs) between controllers and objects

Our setup will involve the following additional components:

A customized controller for the Abydos Gatekeeper that decides whether or not explorers are permitted to enter Abydos. It will be configured to trust one issuer, the Wise Man, against which the issuance of the JourneyCharter credentials will be checked.
A webhook the customized controller communicates with to tie credential presentation and revocation events into a custom business logic layer.

Here is the network component diagram again for reference:

Schema caching is the first item to be set up since all of the controllers depend on it. You could set up witnesses first as well; it is up to you.

Schema Caching

In order to successfully receive, validate, and handle an ACDC each KERI controller using that ACDC must be able to resolve and load the schema for that ACDC. The schemas are resolved based on the SAID of the schema using an out of band introduction (OOBI) that indicates the location the schema is stored at and can be retrieved from. The controller may either be configured with the location of this schema on startup in the controller bootstrap JSON configuration file or be configured at runtime through usage of the OOBI resolution mechanism.

vLEI-server

The vLEI-server provides a convenient implementation of a simple caching server that meets our needs. All the vLEI caching server (vLEI-server) does is make the schemas available as JSON objects as responses to HTTP requests. It could very easily be rewritten in your favorite language if you wanted a fun, easy challenge. (Hint, write it in Rust, the KERI community will be happy with you.)

Once you have the vLEI repository cloned and the vLEI-server installed as described in the installation instructions above you can run the vLEI-server with a command similar to the following:

# Change directories to the abydos-tutorial repository directory.
# Set ATHENA_DIR to be your abydos-tutorial/athena directory.
vLEI-server \
    -s "${ATHENA_DIR}"/saidified_schemas \
    -c "${ATHENA_DIR}"/cache/acdc \
    -o "${ATHENA_DIR}"/cache/oobis

This will give you output similar to the following:

caching schema EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l
caching schema ELc8tMg_hhsAPfVbjUBBC-giEy5440oSb9EzFBZdAxHD
caching schema EBEefH4LNQswHSrXanb-3GbjCZK7I_UCL6BdD-zwJ4my
caching schema EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr

If you have anything in the cache/acdc or cache/oobis directories then those will be shown in the log output as well.
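You can also confirm a schema is being served by requesting it by SAID. This check is not part of the tutorial script and assumes the vLEI-server is listening on its default port 7723, the same port used in the schema OOBI URLs later in this article:

# Fetch the JourneyCharter schema from the cache by its SAID
curl -s http://127.0.0.1:7723/oobi/EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr

You should see the JourneyCharter schema JSON returned in the response body.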

With the ACDC schemas successfully being cached the next step is to deploy the witness network which requires writing witness configuration files.

Witness Network

Our witness network consists of three of the witnesses used in the KERI demonstration witness network, meaning we use the same cryptographic key salts, keystore names, and alias names.

This means we need a configuration file for each of the witnesses. The document symbols you see on the preceding diagram show the filenames for each of the witnesses represented by a gray circle.

To set up the witness network we must do the following things:

Write witness configuration files ensuring the witnesses listen for incoming traffic on a set of TCP and HTTP ports.
Create the keystores for the witnesses using the KERI command line interface (the KLI) via the kli init subcommand.
Start the witnesses using the KLI via the kli witness start subcommand.

For the ATHENA witness network we create three separate witnesses as shown above on the following ports:

wan: TCP 5632, HTTP 5642
wes: TCP 5633, HTTP 5643
wil: TCP 5634, HTTP 5644

Building your own, non-demo witnesses and agents

In order for you to really understand the KERI KLI we do not want you to have to rely on any demo commands such as kli witness demo or kli agent vlei, so we show you here how to use the keystore and witness commands directly.

First we address the writing of witness configuration files.

Witness Configuration files

Each witness on startup reads a witness configuration file. This file includes properties such as TCP and HTTP ports to listen on and any OOBIs to resolve at startup. See below for a witness configuration file example. The diagram shows the specific witness configuration file used for each specific witness.

Location and Directory Structure

JSON configuration files for KERI witnesses must be placed inside a particular directory structure. This directory structure is as follows
CONFIG_ROOT/keri/cf/main

The reason for this is that the KERI Configer module uses the root path of “keri/cf” and the sub path of “main”, as seen in the Configer.__init__ function. You can configure the CONFIG_ROOT directory path by customizing the --config-dir property, though the keri/cf/main directory path will be created for you inside of the CONFIG_ROOT directory if it does not already exist.

Then, inside this CONFIG_ROOT/keri/cf/main directory you must place your witness configuration JSON files.

Witness Configuration File Contents

The contents of a witness configuration file look as follows:

{
  "wan": {
    "dt": "2022-01-20T12:57:59.823350+00:00",
    "curls": ["tcp://127.0.0.1:5632/", "http://127.0.0.1:5642/"]
  },
  "dt": "2022-01-20T12:57:59.823350+00:00",
  "iurls": []
}

There must be a key in the configuration object named the same as the value passed into the --alias [alias] property for the subsequent kli witness start command. This applies to KERIpy, the Python implementation of KERI and the reference implementation of the KERI and ACDC specification.

Refer specifically to KERIpy habbing.pyL971 in the reconfigure function to see how the configuration file must have an appropriately named key. Keep in mind that this function documentation states that “Config file is meant to be read only at init not changed by app at run time” so you should make any changes to this configuration before the kli witness start commands are run since any changes during run time will not be recognized until the witness is stopped and then started again.

The values inside the configuration file are as follows. Sub-properties such as the “dt” property inside of “wan” are indicated with the dot notation.

“wan”: The named configuration block property for a given witness.
“wan.dt”: The date-time stamp when the configuration was created.
“wan.curls”: Controller URLs. This is the list of URLs that indicate the protocol and ports for the witness to listen on. You can see both TCP port 5632 and HTTP port 5642 in the above example.
“dt”: The date and time stamp the configuration was created.
“iurls”: Introduction URLs, meaning any out of band introduction (OOBI) URLs to process and resolve on startup.

Other properties that are not shown above but can be present include:

“durls”: Data URLs. Another section for OOBI URLs to process. Typically you put the credential OOBIs that you want the witness (really just a KERI controller) to resolve on startup in this section. You could put these URLs in the “iurls” section as well.
“wurls”: Well-known OOBIs to resolve on startup.

With this explanation of how to write a witness configuration file you can now go make the other two witness configuration files for “wil” and “wes,” or you can pull them from the tutorial repo: athena/conf/keri/cf/main/*.json
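As an illustration, a configuration file for “wil” follows the same shape as wan’s file, swapping in wil’s ports from the list above (TCP 5634, HTTP 5644). This is a sketch by analogy; compare it against the file in the tutorial repository before relying on it:

{
  "wil": {
    "dt": "2022-01-20T12:57:59.823350+00:00",
    "curls": ["tcp://127.0.0.1:5634/", "http://127.0.0.1:5644/"]
  },
  "dt": "2022-01-20T12:57:59.823350+00:00",
  "iurls": []
}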

Witness Create Command: kli init

Next we must create the keystores. We will end up with the following three keystores:

We use the kli init command to create each of the keystores.

A witness requires a keystore to be created first in order to start the witness later. Do this with a command like the following:

kli init --name wan --salt 0AB3YW5uLXRoZS13aXRuZXNz --nopasscode \
    --config-dir "${CONFIG_DIR}" \
    --config-file main/wan-witness

You can see you must name the witness and pass in a configuration directory and file. The configuration file must have the “main/” path prefix on it since the underlying Configer class defaults the “base” path property to “main”, as seen at Configer L50. You don’t need to add the “.json” suffix since KERIpy will do that for you. The filename alone is sufficient.

Running this command will create output similar to the following:

KERI Keystore created at: /Users/myuser/.keri/ks/wan
KERI Database created at: /Users/myuser/.keri/db/wan
KERI Credential Store created at: /Users/myuser/.keri/reg/wan

Keystore Passcodes

Here I am not using any passcodes though you are free to modify the example to use them. The passcode is an additional layer of security used to unlock a keystore. Using a passcode would look like adding in a passcode argument like so: --passcode DoB26Fj4x9LboAFWJra17O

For Each Witness

Write a configuration file for each of the three witnesses, wan, wil, and wes. You can use the create_witnesses function in the workflow.sh script as a guide. Place each of the JSON configuration files in your configuration directory with the appropriate directory suffix like so:
$CONFIG_DIR/keri/cf/main

Cryptographic Salts (seeds)

The salt used here is the same one used both in the Abydos Tutorial witness set and for the demo command kli witness demo, which starts six witnesses for you. Using the same salt means the keys generated from the salt will be the same. You may therefore conclude that protecting your salt is as important as protecting the keys themselves, much like protecting your Bitcoin or Ethereum seed phrase and private keys.

Whatever you do, make sure your seeds do not get into the hands of people who shouldn’t have them, and be sure to use highly secure deployment mechanisms to inject seeds into your witness, agent, and controller deployments.

Moving on, next we start the witness.

Witness Start Command: kli witness start

Starting a witness is very simple, yet it does depend on code recently added to KERIpy to support the --config-dir and --config-file arguments, so make sure you install the latest version of KERIpy from source with python -m pip install -e ./ from the KERIpy repository root directory. This will give you at least the KERIpy 1.0.0 release locally whether or not the release has been pushed to PyPI or GitHub yet.

kli witness start --name wan --alias wan \
    -T ${WAN_WITNESS_TCP_PORT} \
    -H ${WAN_WITNESS_HTTP_PORT} \
    --config-dir "${CONFIG_DIR}" \
    --config-file wan-witness

Ports and the alias name are a few additional configuration options here. Remember that the “alias” property must match the name used in the witness configuration file you wrote earlier in which you defined the TCP and HTTP endpoints for your witness to listen on. The other arguments are as they seem, -T for the TCP port and -H for the HTTP port the witness will listen on.

Run this command for each of the three witnesses wan, wil, and wes. You can use whatever ports you’d like though the ATHENA ports used are shown in the variables section near the top of the workflow.sh script.

Running this command successfully will provide output similar to the following:

Witness wan : BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha

You can either run the commands by hand yourself or use the workflow.sh script to do it for you. It is recommended to get some practice with the commands yourself in order to learn them, though you can start by reading the start_witnesses function in the workflow.sh script to see how the Abydos Tutorial starts the ATHENA witnesses. Be aware that BASH variables are used heavily throughout the script, so you will need to refer often to the variables section at the top of the script in order to fully understand what each of the commands is doing.
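As a sketch of scripting it yourself, the following starts all three witnesses in the background from a single terminal. The literal ports mirror the list earlier in this section, and the wil-witness and wes-witness configuration file names are assumed by analogy with wan-witness; workflow.sh uses BASH variables for these values instead:

kli witness start --name wan --alias wan -T 5632 -H 5642 \
    --config-dir "${CONFIG_DIR}" --config-file wan-witness &
kli witness start --name wes --alias wes -T 5633 -H 5643 \
    --config-dir "${CONFIG_DIR}" --config-file wes-witness &
kli witness start --name wil --alias wil -T 5634 -H 5644 \
    --config-dir "${CONFIG_DIR}" --config-file wil-witness &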

Once you have all of the witnesses set up and started your next task is to set up the KERI controllers for Richard, Elayne, Ramiel, and Zaqiel. This involves writing a bootstrap configuration file which we get into next.

KERI Controller Setup

Creating and setting up controllers can happen in one of two ways: with the KERI command line interface (the KLI) or with the KERI Agent API (called the Mark I Agent, currently deprecated). The KLI method does not start a running process or daemon, while the Agent runs as a daemon. An important note: the existing Agent API in KERIpy is now deprecated in favor of a new Agent API being written as a separate deployable called KERIA (the Mark II Agent).

While the existing Agent API is deprecated it is nonetheless useful to see how the KLI compares to using REST requests in the Agent API. You get an idea of how you might write your own API on top of a KERI controller or how you might contribute to the agent implementations. Once the Mark II Agent API is ready this article will be updated with the new API.

There are three general steps to setting up a controller:

Write the controller configuration file.
Initialize the controller’s keystore.
Create the first keypair for the controller, called a prefix, by performing an inception event. This involves using an alias to label that particular prefix.

The KLI flow is shorter than the Agent flow though they both accomplish the same objective. You may follow along in the workflow.sh script with the code for setting up each of the four controllers in the following functions:

For the Agent flow, start_agents starts up all needed agents.
read_witness_prefixes_and_configure updates the controller and agent bootstrap configuration files.
make_keystores_and_incept_kli and make_keystores_and_incept_agent perform the keystore creation and inception events using the KLI or Agent API, respectively.

Four Controllers

There are four controllers to be created, one each for Richard, Elayne, Ramiel, and Zaqiel. This involves writing configuration files with OOBIs for witnesses and credential schemas, initializing controller keystores, performing the inception events, and, in the case of using agents, starting the KERI Agent for each controller. Below is a simplified diagram of the controllers and the OOBI connections that must be made between the controllers to support credential issuance and presentation.

Controller Bootstrap Configuration File

A bootstrap configuration file instructs a KERI controller which OOBI URLs to resolve on initial startup, including both a list of controller OOBIs and a list of credential schema OOBIs. The list of controller OOBIs is usually a list of OOBIs for each witness that will be used to perform the inception event, though it can be expanded to other controllers beyond the witness pool.

When first starting a controller it reads in the configuration file specified by the --config-file argument to either the kli init or kli agent start commands. Beware that the configuration file is not re-read on subsequent startups. If you need a controller to resolve more OOBIs after initial bootstrapping then use the kli oobi resolve command or equivalent Agent REST request.
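As a sketch of that post-bootstrap path, resolving one additional OOBI from the command line looks roughly like the following. The keystore name, OOBI alias, and URL are illustrative placeholders, and the flag names should be verified against kli oobi resolve --help for your KERIpy version:

# Resolve an additional witness OOBI for Richard's keystore after bootstrap
kli oobi resolve --name richard \
    --oobi-alias wan \
    --oobi "http://127.0.0.1:5642/oobi/BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha"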

Example Config

The example config file from the ATHENA network shows three controller OOBI URLs in “iurls” and four credential schema OOBI URLs in “durls”. Starting an agent also reads in a similar configuration file with OOBIs for the agent to resolve during initialization. Agents are covered in a later section.

Once you have all three prefixes for the witnesses you can make OOBIs and write them into configuration files for both KLI and Agent modes:

controller-oobi-bootstrap.json for the KLI mode. Full path: abydos-tutorial/athena/conf/keri/cf/controller-oobi-bootstrap.json
agent-oobi-bootstrap.json for the Agent mode. Full path: abydos-tutorial/athena/conf/keri/cf/agent-oobi-bootstrap.json

If you look into each of these two configuration files you will find their contents to be identical, looking very similar to the below:

{
  "dt": "2022-01-20T12:57:59.823350+00:00",
  "iurls": [
    "http://127.0.0.1:5642/oobi/BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha",
    "http://127.0.0.1:5643/oobi/BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM",
    "http://127.0.0.1:5644/oobi/BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"
  ],
  "durls": [
    "http://127.0.0.1:7723/oobi/EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l",
    "http://127.0.0.1:7723/oobi/ELc8tMg_hhsAPfVbjUBBC-giEy5440oSb9EzFBZdAxHD",
    "http://127.0.0.1:7723/oobi/EBEefH4LNQswHSrXanb-3GbjCZK7I_UCL6BdD-zwJ4my",
    "http://127.0.0.1:7723/oobi/EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr"
  ]
}

Next we learn how to write each of the two OOBI lists for both the “iurls” and the “durls” sections of the configuration files.

IURLS: Writing controller OOBIs for the “iurls” section

An OOBI URL uses the following URL scheme:
[protocol]://[host]:[port]/oobi/[prefix]

The protocol is typically “http” though it may be “tcp” as well. Host and port will be specific to your deployment. Creating a prefix involves many steps, as shown in a later section. Without jumping ahead, you can use the prefixes automatically created during the kli witness start command you ran earlier. Creating those keystores also created a prefix for each witness. You can get the prefixes you created with the kli status command as shown below.

Prefixes with kli status

The prefix is derived from the first key in a sequence of KERI key creation events (establishment events). This is the same prefix that is passed to the KERI DID generation function when you run kli did generate. To turn the prefix into a DID you do as follows:
did:keri:[prefix]
For example, using the first OOBI below, the corresponding DID would be:
did:keri:BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha

For now you can get the prefix with the kli status command as shown in the read_witness_prefixes_and_configure function in workflow.sh. Here is an example using kli status and AWK to store witness wan’s prefix in the WAN_PREFIX variable:

WAN_PREFIX=$(kli status --name wan --alias wan | awk '/Identifier:/ {print $2}')
# Then do echo $WAN_PREFIX to see the value of the prefix
echo $WAN_PREFIX

The update_config_with_witness_oobis function does this for you in the workflow.sh script. Once you finish writing your witness OOBIs to the configuration file your “iurls” section should look like the following:

{
  "dt": "2022-01-20T12:57:59.823350+00:00",
  "iurls": [
    "http://127.0.0.1:5642/oobi/BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha",
    "http://127.0.0.1:5643/oobi/BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM",
    "http://127.0.0.1:5644/oobi/BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"
  ],
  "durls": [
    ...
  ]
}

Next we write the “durls” section for credential schema OOBIs. This enables controllers to recognize ACDC schemas by their SAID.

DURLS: Writing credential schema OOBI URLs for the “durls” section

The four OOBIs in the “durls” are created by getting the “$id” properties from each of the schemas you created during the schema writing and linking step above with KASLCred. The scheme for data OOBIs is as follows:
[protocol]://[caching_server_host]:[caching_server_port]/oobi/[schema_said]

For example, “EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr” is the [schema_said] value for the JourneyCharter OOBI URL. This is pulled directly from the schema in abydos-tutorial/athena/saidified_schemas/JourneyCharter__EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr.json

The resulting OOBI URL looks like the following:

"http://127.0.0.1:7723/oobi/EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr"

In this instance the caching server host and port correspond to the vLEI-server instance ATHENA defines in the workflow.sh script, or http://127.0.0.1 and 7723, respectively.
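Since every saidified schema embeds its own SAID in its “$id” property, you can also generate the “durls” entries instead of copying them by hand. A small sketch, not taken from workflow.sh, assuming the vLEI-server host and port above:

# Emit one schema OOBI URL per saidified schema
for schema in "${ABYDOS_REPO_DIR}"/athena/saidified_schemas/*.json; do
    said=$(jq -r '."$id"' "$schema")
    echo "http://127.0.0.1:7723/oobi/${said}"
done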

Once you have the OOBI URLs set up then write them into both the controller-oobi-bootstrap.json file and the “agent-oobi-bootstrap.json” files in the “$CONFIG_DIR/keri/cf” configuration directory. In the Abydos Tutorial the location “abydos-tutorial/athena/conf/keri/cf” is the $CONFIG_DIR configuration directory.

These files should look somewhat like the following once you are finished:

{
  "dt": "2022-01-20T12:57:59.823350+00:00",
  "iurls": [
    "http://127.0.0.1:5642/oobi/BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha",
    "http://127.0.0.1:5643/oobi/BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM",
    "http://127.0.0.1:5644/oobi/BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"
  ],
  "durls": [
    "http://127.0.0.1:7723/oobi/EIxAox3KEhiQ_yCwXWeriQ3ruPWbgK94NDDkHAZCuP9l",
    "http://127.0.0.1:7723/oobi/ELc8tMg_hhsAPfVbjUBBC-giEy5440oSb9EzFBZdAxHD",
    "http://127.0.0.1:7723/oobi/EBEefH4LNQswHSrXanb-3GbjCZK7I_UCL6BdD-zwJ4my",
    "http://127.0.0.1:7723/oobi/EEq0AkHV-i5-aCc1JMBGsd7G85HlBzI3BfyuS5lHOGjr"
  ]
}

Now that you have the controller and agent bootstrap configuration files set up we show how to start up agents with the KLI.

KERI Agents

KERI Agent Startup – Mark I agent (deprecated)

Creating and handling a controller with the KERI Mark I agent involves a four step process:

Start the agent with kli agent start
POST to /boot to initialize the keystore
PUT to /boot to unlock the keystore
POST to /ids/${ALIAS} to perform the inception event

There are four agents to be set up for the ATHENA network, one each for Richard, Elayne, Ramiel, and Zaqiel as shown below:

KERI Controller Agent Start (Mark I Agent – deprecated)

You will notice that the --insecure option is passed to the agent start command. This is because the first draft of a KERI Agent, the Mark I agent, has a partially complete HTTP signature signing scheme that currently does not function and thus must be disabled with the --insecure option. Without disabling this option you will not be able to communicate at all with a Mark I agent. The next version of KERI Agents is KERIA, the Mark II agent, and this has a fully complete HTTP signature signing scheme intended to work with Signify, a TypeScript edge signing library for KERI.

The --path option declares a directory to be served up at the $AGENT_URL/static HTTP endpoint, though it isn’t important for this tutorial. It was originally used to host Swagger API documentation for the Agent API.

All of the other configuration options are relatively straightforward.

--config-dir is the directory the agent bootstrap file is located in. Full path: abydos-tutorial/athena/conf/keri/cf
--config-file is the configuration file to be used, agent-oobi-bootstrap.json for the Agent mode. Full path: abydos-tutorial/athena/conf/keri/cf/agent-oobi-bootstrap.json
--tcp is the TCP port to listen on.
--admin-http-port is the HTTP port to listen on. This is the port we will be submitting all of our requests to.

kli agent start --insecure \
    --admin-http-port ${EXPLORER_AGENT_HTTP_PORT} \
    --tcp ${EXPLORER_AGENT_TCP_PORT} \
    --config-dir ${CONFIG_DIR} \
    --config-file ${AGENT_CONFIG_FILENAME} \
    --path ${ATHENA_DIR}/agent_static

Start up an Agent for each controller Richard, Elayne, Ramiel, and Zaqiel. The start_agents function in workflow.sh shows one way to do this.

Switching between the KLI and the KERI Agent API (Mark I – deprecated)

For the remainder of this tutorial we switch back and forth between the KERI command line interface, the KLI, and the KERI Agent HTTP API, the Mark I version in KERIpy, in order to present a broad understanding of both the ways you can interact with the KERI specification and the work that has been completed in the space. This will provide an opportunity to understand both how the command line interaction looks as well as how a REST API addressing a similar purpose feels and looks.

Each task will be completed first with the KLI and then with the KERI Agent API. The KLI is always up to date with the latest and greatest code in KERIpy. The KERI Agent API will likely be removed from KERIpy once the Mark II agent in KERIA is finished. The KERI Agent API in the KERIpy repository is now deprecated and is referred to as the Mark I agent.

For our next task of creating keystores we start first with the KERI Command Line Interface (KLI) to initialize the keystores for all of the participants in this journey including the ATHENA officials and the two explorers. After showing how to accomplish this task with the KLI we switch to perform the same task with the Agent API.

Use one or the other for keystores and inception

Make sure to use either the KLI or the Agent API to perform keystore creation and inception, not both. You only need to create the keystores once whether you use the KLI or the Agent API.

If you don’t know which one you want or need for your scenario then start with the KLI.

KERI Keystores and Inception

KERI Controller Keystore Create – KLI INIT

There are four keystores to initialize. We will show here the commands to initialize one of the keystores. After performing this command you can initialize all of the other keystores yourself with similar commands for each keystore or use the make_keystores_and_incept_kli function in the workflow.sh script.

Salts

The cryptographic salt is one of the arguments to kli init. This salt is what all of the private keys are derived from, so it should be protected as if you were protecting your personal identity documents or your bank account. You can generate your own salt with the kli salt command:

$ kli salt
0ACPpGmPWX9LqRlUCOLH3qk2

You can use the salt in any --salt argument via the KLI or in the "salt" property of an Agent request.

Command

The other arguments are straightforward: --nopasscode, --config-dir, and --config-file. --nopasscode indicates that there is no passcode to access this keystore.

The --config-dir property indicates the directory in which to look for configuration files. Inside this configuration directory the file name from the --config-file option is used to find and select the controller bootstrap configuration file that will be used to initialize this keystore. This option does not need to include the file extension if your configuration file already ends with the “.json” extension.

kli init --name ${EXPLORER_KEYSTORE} --salt "${EXPLORER_SALT}" \
    --nopasscode \
    --config-dir "${CONFIG_DIR}" \
    --config-file "${CONTROLLER_BOOTSTRAP_FILE}"

This command sets up all of the Lightning Memory Mapped Database instances used by the keystore. These are located in the $HOME/.keri directory.

KERI Controller AID (Prefix) Create and Incept – KLI INCEPT

Now the keystore is ready to have a specific key pair created and an inception event performed. This will begin the key event log (KEL) for that key pair. Key pairs are referred to with a label called an “alias.” This alias label is intended to be more human-friendly and is used throughout both the KLI and the Agent API to refer to the key pair.

Alias vs Prefix

Aliases are used to refer to key pairs since prefixes, the internal reference to key pairs, are not human friendly. For example, an alias can be a word or set of words like “richard” and a prefix looks like the following: "EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q". As you can see you won’t be remembering many prefixes which is why the alias system is so helpful.

Steps to Inception

Performing an inception event, once the keystore is initialized, requires the following two steps:

Write an inception configuration file.
Execute the inception command, whether from the command line or the Agent REST API. For the KLI you will provide the inception configuration file as an argument. For the Agent API you will provide the inception configuration as JSON in the body of the inception HTTP request.

Four inception events need to be performed, one on each of the four keystores. Each inception event begins a key event log (KEL), as shown below:

First we write the inception configuration file.

Writing a Controller Inception Configuration File

Each attribute of a controller inception configuration file is important. The initial set of witnesses, keypair counts, key thresholds, transferability, and threshold of acceptable duplicity are all defined in this file.

The following is the inception configuration file used for each of the controllers we create for the Abydos journey:

{
  "transferable": true,
  "wits": [
    "BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha",
    "BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM",
    "BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"
  ],
  "toad": 3,
  "icount": 1,
  "ncount": 1,
  "isith": "1",
  "nsith": "1"
}

Inception configuration file attributes

Now an explanation of each attribute, which may also be seen in keripy/src/keri/app/habbing.pyL873 and in keripy/src/keri/app/cli/common/incepting.pyL9:

“transferable” – Transferability refers to whether a KERI autonomic identifier (AID) can have its keys rotated. Technically this refers to having what is called “control authority” changed, meaning control over the private keypairs linked to a KERI AID. When true the AID is a transferable AID; when false the AID is non-transferable.
“wits”: Witnesses – A list of prefixes identifying the witnesses to include signatures for in the inception event in this controller’s key event log.
“toad”: Threshold of acceptable duplicity – The number of agreeing witness receipts at or above which the controller accepts accountability for signed key events. Accountability is not accepted by the controller for key events with a number of receipts below this threshold.
“icount”: Inception key count – The number of keys used to sign the inception event.
“ncount”: Next key count – The number of keys committed to for the next key rotation event. Note that not all pre-rotated keys committed to in the inception event need to be used in the next rotation event, as described under Reserve Rotation.
“isith”: Signing threshold for the inception event – The number of current signing keys that must sign for the inception event to be valid.
“nsith”: Signing threshold for the next rotation event – The number of the next (pre-rotated) keys that must sign for the next rotation event to be valid.

Writing your configuration

For all except the “wits” section you can copy and paste the configuration file as-is. The “wits” section contains the AID prefixes that will be specific to your witnesses. If you are using the workflow.sh script with precisely the same salts, witness keystore names, and witness AID aliases, then you can use the configuration from above.

Place all of your configuration in a JSON file in a configuration directory whose location you will remember, so that you can reference it with the --file argument to the kli incept command below. In the Abydos Tutorial repo this path is “abydos-tutorial/conf/inception-config.json“.

Performing Inception – KLI INCEPT

Incepting a keypair consists of:

Creating the number of cryptographic key pairs indicated in the “icount” configuration property.
Creating the number of pre-rotated cryptographic key pairs indicated in the “ncount” configuration property.
Creating a valid, properly formatted inception event signed by all of the keys included in the “icount” set.
Adding that inception event to the controller’s key event log.
Sending the inception event to each witness in the set of witnesses configured in the inception configuration you wrote above. The set of witnesses is defined in the configuration file passed to the --file argument.
Receiving key event receipts of the inception event from each witness.
Adding the key event receipts from the witnesses to the key event receipt log (KERL) for the controller.

All of these steps are performed for you when you execute the kli incept command. The below example from workflow.sh shows the inception of Richard’s AID:

kli incept --name ${EXPLORER_KEYSTORE} --alias ${EXPLORER_ALIAS} \
    --file "${CONTROLLER_INCEPTION_CONFIG_FILE}"

Repeat this procedure for each of Elayne, Ramiel, and Zaqiel with the keystore and alias names indicated in the BASH variables at the top of the workflow.sh script in the Abydos Tutorial repository.
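Because every controller shares the same inception configuration file, the keystore creation and inception steps can also be scripted in one loop. The names below are placeholders standing in for the keystore, alias, and salt variables defined in workflow.sh, and a fresh salt is generated per keystore here rather than reusing the tutorial's fixed salts:

for name in richard elayne ramiel zaqiel; do
    kli init --name "$name" --salt "$(kli salt)" --nopasscode \
        --config-dir "${CONFIG_DIR}" \
        --config-file "${CONTROLLER_BOOTSTRAP_FILE}"
    kli incept --name "$name" --alias "$name" \
        --file "${CONTROLLER_INCEPTION_CONFIG_FILE}"
done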

Next, for those who are using agents, we show how to use the KERI Mark I Agent to perform keystore setup and inception.

KERI Controller Keystore Create – Agent (Mark I – deprecated)

Keep in mind that you only need to create the keystores once whether you create the keystores with the KLI or the Agent API. Only proceed to create the keystores and perform inception with the Agent API if you have not already done so with the KLI.

Setting up a KERI controller keystore and AID with an agent involves the following three REST requests:

POST to /boot to initialize the keystore. This is similar to the kli init command.
PUT to /boot to unlock the keystore. This does not have a direct analogue with the KLI, though the --passcode argument used with kli init, kli incept, and many other KLI commands serves a similar purpose.
POST to /ids/${ALIAS} to perform the inception event. This is similar to the kli incept command.

Data in the request bodies for each of these requests is very similar to the command line argument data for each of the corresponding commands using the KLI.

curl Common Headers, Arguments, and Pipes

Headers

The “Accept: */*” and “Content-Type: application/json” HTTP headers are common to all of the commands used in the workflow.sh scripts.

Arguments

The “-s” argument to cURL is the “silent” option and ensures that only the response body of the target HTTP request is sent to the standard output channel.

The backslash “\” escape character is often used in BASH scripts to make commands more readable by enabling splitting of the commands across multiple lines.

Pipes

jq: Heavy usage of the jq program, a lightweight and flexible command-line JSON parser, is made throughout this tutorial. Most often the results of a cURL command will be piped to the jq program and then the jq program will select an attribute from the response body using a selector expression like '.["msg"]' in the example below.

tr: Frequently the tr program is used to trim double quotes off of the JSON output returned from the jq program. As seen in the example below, tr is used with the -d flag to delete all tokens of the specified set from the input. This is needed because the value of a cURL request often needs to be placed in a BASH variable, and the extra quotes returned along with the JSON body are problematic to deal with, so it is easier to remove them altogether.

Both jq and tr are often used together in a sequence as shown below.

...(more code causing JSON to be sent to standard out)... \
    | jq '.["msg"]' | tr -d '"'

KERI Controller Keystore Create – Agent

This operation is a POST to the /boot endpoint, which creates a keystore. The body is a JSON object whose “name” property names the keystore. The “salt” value is the cryptographic salt used as the seed to derive all key pairs in the keystore. You can use the kli salt command to generate a new salt.

The ${EXPLORER_AGENT_URL} is defined on Line 127 of workflow.sh and looks like this:
http://127.0.0.1:5620

curl -s -X POST "${EXPLORER_AGENT_URL}/boot" \
    -H 'accept: */*' \
    -H 'Content-Type: application/json' \
    --data "{\"name\": \"${EXPLORER_KEYSTORE}\", \"salt\": \"${EXPLORER_SALT}\"}" \
    | jq '.["msg"]' | tr -d '"'

As you can see, the “name” and “salt” options correspond to the arguments of the same names to kli init. The difference here is that they are in the body of the POST request as JSON. The result is piped to jq and then tr to extract and print the result message to the terminal.

Next the keystore must be unlocked.

KERI Controller Keystore Unlock – Agent

Unlocking a keystore is a PUT to the /boot endpoint using the same JSON body as was POSTed to initialize the keystore. This is used to permit a particular Agent to work with a specified keystore. This is useful because an agent may have been restarted after initializing the keystore and so the Agent must establish an authenticated connection to the keystore.

curl -s -X PUT "${EXPLORER_AGENT_URL}/boot" \
    -H 'accept: */*' \
    -H 'Content-Type: application/json' \
    --data "{\"name\": \"${EXPLORER_KEYSTORE}\", \"salt\": \"${EXPLORER_SALT}\"}" \
    | jq '.["msg"]' | tr -d '"'

Next the keystore is ready to have an AID, also known as a prefix, created and incepted.

KERI Controller AID (Prefix) Create and Incept – Agent

Creating an AID requires submitting a POST request to the /ids/{alias} path of a controller agent. The body of the request is the same controller inception configuration file you wrote earlier when performing kli incept. The file is reproduced below for reference:

Inception Configuration File

{
  "transferable": true,
  "wits": [
    "BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha",
    "BLskRTInXnMxWaGqcpSyMgo0nYbalW99cGZESrz3zapM",
    "BIKKuvBwpmDVA4Ds-EpL5bt9OqPzWPja2LigFYZN2YfX"
  ],
  "toad": 3,
  "icount": 1,
  "ncount": 1,
  "isith": "1",
  "nsith": "1"
}

The contents of this file are placed in the body of the HTTP request as submitted below:

curl -s -X POST "${EXPLORER_AGENT_URL}/ids/${EXPLORER_ALIAS}" \
    -H 'accept: */*' \
    -H 'Content-Type: application/json' \
    --data @${CONTROLLER_INCEPTION_CONFIG_FILE} \
    | jq '.["d"]' | tr -d '"'

The jq and tr pipes here select the AID prefix that comes in the response body of the request. You can see on Line 471 of workflow.sh this sequence is used to capture Richard’s prefix and store it in the RICHARD_PREFIX bash variable.

Keystore and Inception Review

Once you get the hang of creating keystores and performing inception events then you will see the similarities between the KLI and the Agent API. When you have finished creating all of the keystores you need, one each for the Explorer, Librarian, Wise Man, and Gatekeeper, then you are ready to move on to the next step for connecting all of the agents together using out of band introductions, or OOBIs.

Out of Band Introductions (OOBIs) Between Controllers

Routes in a routing table or DNS entries are internet discovery system analogues that help explain the purpose of out of band introductions. ATHENA requires a number of components in the Abydos network to talk to each other, and thus an OOBI must be created or “resolved” for each connection between components.

You can think of an OOBI as an address, with a role, for one controller to contact either another controller or a KERI resource such as a credential schema. In order for one controller to be able to issue a credential to another controller it must know where that other controller is.

For example, in the diagram above, in order for Ramiel to issue the TreasureHuntingJourney credential to Richard he must have an OOBI that shows the location of Richard and includes a witness for Richard. This witness is necessary so Ramiel can verify that the KEL sent from Richard matches up to the AID prefix included in the OOBI URL. The role in the OOBI identifies the component that knows Richard; in this case that is the “wan” witness.

OOBIs described

Out of band introductions (OOBIs) are the discovery mechanism for KERI controllers and KERI objects such as ACDC Schemas or cached ACDCs. OOBIs tell one controller how to reach another controller or object and are identified both by an AID prefix as well as an alias label.

The terminology “out-of-band” here refers to the fact that general purpose internet infrastructure like IP routers, gateways, and DNS servers is not part of KERI and is thus out-of-band with respect to KERI. The primary advantage of using this general purpose infrastructure for service discovery is that KERI doesn’t have to provide its own discovery mechanism since it can piggyback on the existing internet infrastructure.

Discovery over the internet includes endpoint discovery of witnesses, watchers, and other network participants including jurors and judges (not yet implemented). According to the Out-Of-Band-Introduction (OOBI) Protocol specification:

[OOBIs provide] a discovery mechanism that associates a given URI or URL with a given AID (Autonomic IDentifier) or SAID (Self-Addressing IDentifier) [KERI_ID][KERI][SAID_ID][OOBI_ID]. The URI provided by an OOBI acts as a service endpoint for the discovery of verifiable information about the AID or SAID. As such an OOBI itself is not trusted but must be verified.

Out-Of-Band-Introduction (OOBI) Protocol IETF Draft specification – S. M. Smith

In the specification the usage of the terms “endpoint” or “service endpoint” just mean the URL to use to access some KERI resource.

OOBIs must be verified

The above quote draws attention to the fact that OOBI URLs themselves are not trustable and must be verified. What does this mean? What is being verified? Since OOBI URLs merely point to a controller’s AID (prefix), the OOBI must be used to retrieve the full key event log from the controller referenced by that prefix. This key event log is then verified using the witness identified by the witness prefix at the end of the OOBI, in the case of a witness OOBI. If the controller proves control over the keys referenced by the KEL then the verifier of the OOBI can trust they have securely discovered the controller referenced by the OOBI.

A witness OOBI looks like the following example:

"http://127.0.0.1:5642/oobi/EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q/witness/BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha"

The template for this OOBI URL is as follows:

[proto]://[wit host]:[wit port]/oobi/[controller prefix]/[role]/[wit prefix]

OOBIs may have roles

This is an OOBI that shows a witness for a given prefix. This type of OOBI includes a “witness” role and a witness prefix. This witness prefix corresponds to the witness host and port earlier in the OOBI URL. This witness will have a full key event log of all of that controller’s key events and thus can be used to verify that the controller’s KEL matches to the controller prefix included in the OOBI URL. This verification process is a part of what is known as “resolving” an OOBI.
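As a concrete illustration, the example OOBI above can be assembled from the template pieces with ordinary shell interpolation. The values below are the ones shown earlier in this post; RICHARD_WITNESS_OOBI is an illustrative variable name rather than one taken from workflow.sh:

# Assembling a witness OOBI URL from its template parts.
WAN_WITNESS_URL="http://127.0.0.1:5642"                          # [proto]://[wit host]:[wit port]
RICHARD_PREFIX="EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q"    # [controller prefix]
WAN_PREFIX="BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha"        # [wit prefix]
RICHARD_WITNESS_OOBI="${WAN_WITNESS_URL}/oobi/${RICHARD_PREFIX}/witness/${WAN_PREFIX}"
echo "${RICHARD_WITNESS_OOBI}"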

See kering.py:L23 for the full list of OOBI roles currently defined, including controller, witness, registrar, watcher, judge, juror, peer, mailbox, and agent.

Richard to Ramiel OOBIs – pairwise exchange

The following modified OOBI diagram shows greater detail of what the OOBIs exchanged between Richard and Ramiel look like. Since both Ramiel and Richard exchange OOBIs with each other this is called a pairwise OOBI exchange.

Since both witness OOBIs use the “wan” witness to resolve key state they have a secondary arrow pointing to wan in addition to their primary arrow to indicate this relationship.

In general, each controller wanting to communicate with another controller must resolve an OOBI for that target controller describing where it is located on the internet, so that the source controller can send messages to that destination controller.

In the case of Ramiel an OOBI must be resolved for both Elayne and Richard in order for Ramiel to issue the TreasureHuntingJourney credential to them. Similarly, Elayne and Richard both must resolve an OOBI that describes where Ramiel is located over the internet. All of these introductions are performed in both the make_introductions_kli and make_introductions_agent functions in the workflow.sh script.
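As a hedged sketch of the reciprocal half of one such pairwise exchange, Richard could resolve Ramiel’s witness OOBI as shown below. WISEMAN_PREFIX is an assumed variable name; the other variables follow the pattern used elsewhere in this post:

# Reciprocal OOBI resolution: Richard (the Explorer keystore) resolves the
# Wise Man's witness OOBI. WISEMAN_PREFIX is an assumed variable name.
kli oobi resolve --name ${EXPLORER_KEYSTORE} \
  --oobi-alias ${WISEMAN_ALIAS} \
  --oobi ${WAN_WITNESS_URL}/oobi/${WISEMAN_PREFIX}/witness/${WAN_PREFIX}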

Abydos OOBIs

The journey to Abydos requires that each controller be known to certain other controllers for specific reasons. These reasons include:

Credential Issuance: an issuing controller must have an OOBI for the issuee’s controller who will receive the credential. This is the case when Ramiel needs to issue the TreasureHuntingJourney credential to both Richard and Elayne. Likewise, Richard must know how to contact Ramiel in order to issue him the JourneyMarkRequest credential.

Credential Presentation: a credential-holding controller must have the OOBI for the target node it will be sending a credential presentation to so that the presentation can reach the intended destination. This is the case when Richard and Elayne want to send the JourneyCharter credential to Zaqiel in order to be permitted entrance into Abydos.

These OOBI introductions are the controller-to-controller OOBIs needed during network operation. Keep in mind that there were OOBIs resolved in network setup and configuration for each of the witnesses as well as the four credential schemas used.

Use the commands below to perform all of the needed OOBI introductions as indicated on the graphic below.

Controller Introductions – KLI

Using the KLI to resolve a witness OOBI for a controller involves

specifying the target keystore with the --name argument,
setting an OOBI alias with the --oobi-alias property, and
passing in a witness OOBI to the --oobi argument as shown below.

kli oobi resolve --name ${WISEMAN_KEYSTORE} \
  --oobi-alias ${EXPLORER_ALIAS} \
  --oobi ${WAN_WITNESS_URL}/oobi/${RICHARD_PREFIX}/witness/${WAN_PREFIX}

An OOBI resolution is a very simple operation and produces output similar to the following:

http://127.0.0.1:5642/oobi/EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q/witness/BBilc4-L3tFUnfM_wJr4S4OJanAv_VmF_dJNN6vkf2Ha resolved

Controller Introductions – Agent (Mark I – deprecated)

Using the Agent API to resolve an OOBI is very similar to the KLI. All the properties are passed in through the HTTP request body as a JSON object as shown below.

curl -s -X POST "${WISEMAN_AGENT_URL}/oobi" \
  -H "accept: */*" \
  -H "Content-Type: application/json" \
  -d "{\"oobialias\": \"${EXPLORER_ALIAS}\", \"url\":\"${WAN_WITNESS_URL}/oobi/${RICHARD_PREFIX}/witness/${WAN_PREFIX}\"}" \
  | jq

This should result in a 202 HTTP status code when successful.
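If you want a script to assert on that status code, a small check using standard curl options could look like the sketch below; this is not part of workflow.sh, just an illustration:

# Capture only the HTTP status code of the OOBI resolution request.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' -X POST "${WISEMAN_AGENT_URL}/oobi" \
  -H "accept: */*" \
  -H "Content-Type: application/json" \
  -d "{\"oobialias\": \"${EXPLORER_ALIAS}\", \"url\":\"${WAN_WITNESS_URL}/oobi/${RICHARD_PREFIX}/witness/${WAN_PREFIX}\"}")
if [ "${STATUS}" != "202" ]; then
  echo "OOBI resolution failed with HTTP ${STATUS}" >&2
fi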

Once you have performed all OOBI creations for the controllers as depicted in the diagram above then move on to creating credential registries.

Credential Registries

In order to issue credentials from one controller to the next, a credential registry must exist both in the issuing controller’s keystore, to create and issue the credential from, and in the receiving controller’s (issuee’s) keystore, to receive the issued credential. This requires adding a new concept to manage the credential registry as well as any credentials issued from the registry. Enter the transaction event log (TEL).

Transaction Event Logs (TELs)

A transaction event log is a log of all events occurring for a verifiable credential or a credential registry that manages verifiable credentials. All TEL events are anchored in a KEL in either ixn (interaction) or rot (rotation) events. This is the foundation enabling a verifiable credential protocol to be built on top of KERI. This protocol is known as the authentic chained data container, or ACDC protocol. See the ACDC specification for a complete description of this protocol.

Two further sub-protocols within the ACDC paradigm further elaborate on creating registries and handling the verifiable credential lifecycle. These are the Public Transaction Event Logs (PTEL) and the Issuance and Presentation Exchange (IPEX) protocols.

Public Transaction Event Logs (PTEL) protocol

PTEL has two major parts: the first covers the details of using transaction event logs (TELs) to connect ACDCs to key event logs (KELs), and the second covers handling events in the lifecycle of a credential, including issuance and revocation.

There are two types of TELs:

A management TEL, which manages an ACDC (credential) registry and tracks the list of Registrars that will act as Backers for the individual TELs of each verifiable credential (VC).

A Verifiable Credential TEL (VC TEL), which manages the issued and revoked state of an individual ACDC. The VC TEL also contains a reference to its corresponding management TEL. There are only two events in a VC TEL: the issuance and then a subsequent revocation, if ever.

PTEL describes the details of transaction event logs. The following introduction from the PTEL specification summarizes well the purpose of transaction event logs:

The Transaction Event Log (TEL) is a hash linked data structure of transactions that can be used to track state. A Public Verifiable Credential Registry can be represented in several TELs to establish issuance or revocation state of a Verifiable Credential (VC). The KEL is used to establish control authority over the keys used to commit to the events of the TEL and sign the VC. The events of the TEL are used to establish the issuance or revocation state of the VCs issued by the controller of the identifier represented by the KEL.

Public Transaction Event Logs (PTEL) IETF Draft Specification – P. Feairheller
Issuance and Presentation Exchange (IPEX) protocol

Building upon the PTEL specification, IPEX describes what happens between credential registries. This protocol defines the disclosure workflow events used to share information between the Discloser and the Disclosee.

The Issuance and Presentation Exchange (IPEX) Protocol provides a uniform mechanism for the issuance and presentation of ACDCs [ACDC-ID] in a securely attributable manner.

Issuance and Presentation Exchange Protocol IETF Draft specification – S. M. Smith, P. Feairheller
Registry Creation

A registry creation event occurs in a TEL and has the type vcp. See the “Verifiable Credential Registry” section of the PTEL specification for a complete description of TEL event types. To avoid overburdening the diagram below the individual ixn events for each credential issuance are not included in the KEL nor are the iss events included in the TELs. The blue credential graphic to the right is used on successive versions of the diagram to represent credential issuances.

The below diagram shows the registry creation event as an interaction event in the KEL of each controller as well as the corresponding TEL created to manage the creation of each credential issued by or to each controller.

The IN column in a TEL indicates credentials the controller has received.
The OUT column in a TEL indicates credentials the controller has issued.

Both of these columns start out empty and will be filled as credentials are issued.

Registry Creation – KLI

All that is needed to create a registry is:

a keystore name with the --name argument to select the keystore to create the registry inside of,
an alias with the --alias argument to select the AID to use to issue credentials from that registry, and
the name of the registry with the --registry-name argument.

kli vc registry incept --name ${EXPLORER_KEYSTORE} \
  --alias ${EXPLORER_ALIAS} \
  --registry-name ${EXPLORER_REGISTRY}

This creates a registry named by the --registry-name argument value with a management TEL. The registry’s management TEL will be used to keep track of the backers for each credential.

Additional arguments accepted by the command, not shown above, include the following (a combined sketch appears after the list):

--nonce: unique random value to seed the credential registry.
--no-backers: boolean to not allow setting up backers different from the anchoring KEL.
--establishment-only: boolean to only allow establishment events for the anchoring of events in this registry.
--backers: the new set of backers, different from the anchoring KEL witnesses, to be used when setting backers for credentials from this registry. Can appear multiple times.
--base, --alias, and --passcode: like the kli init and kli incept commands.
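A hedged sketch combining a few of these optional arguments is shown below. Whether the boolean options are passed as bare flags is an assumption here, and REGISTRY_NONCE is an assumed, pre-generated value rather than a variable defined in workflow.sh:

# Sketch only: combines optional registry incept flags described above.
# Assumes --no-backers and --establishment-only act as bare boolean flags and
# that REGISTRY_NONCE already holds a suitable nonce value.
kli vc registry incept --name ${EXPLORER_KEYSTORE} \
  --alias ${EXPLORER_ALIAS} \
  --registry-name ${EXPLORER_REGISTRY} \
  --nonce "${REGISTRY_NONCE}" \
  --no-backers \
  --establishment-only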

There is no TOAD (threshold of accountable duplicity) argument for the KLI since, as of March 2023, it has not been needed, nor has the external backer feature been used or tested in production. Nonetheless a default TOAD is computed based on the length of the backer list sent in, as you can see on Line 95 of eventing.py. If you need this feature then be sure to open an issue on the KERIpy GitHub repo and attend a community dev meeting (every other Tuesday at 8 AM UTC-6 DST / UTC-7 non-DST).

Registry Creation – Agent (Mark I – deprecated)

Creating a registry using the Agent API is similar to the KLI except that the values passed as arguments to kli vc registry incept are instead located in the body as a JSON object.

curl -s -X POST "${EXPLORER_AGENT_URL}/registries" \
  -H "accept: */*" \
  -H "Content-Type: application/json" \
  -d "{\"alias\":\"${EXPLORER_ALIAS}\", \"baks\": [], \"estOnly\":false, \"name\":\"${EXPLORER_REGISTRY}\", \"noBackers\":true, \"toad\":0}" \
  | jq

Once again the output is piped to jq, though this is only to make the response easier on the eyes.

Next we move to one of the most exciting parts of this journey, the issuing of credentials!

Credential Issuance

Value generation in the decentralized identity space centers on verifiable credentials. Wrap some data attributes in a container defined by a schema and you now have a verifiable bearer instrument that becomes part of your digital reputation. A credential is the fundamental unit, or currency, of reputation.

Name – Why ACDC?

Now where does the name authentic chained data container come from and how did it get attached to the verifiable credential space? Three points make this clear:

Since ACDC-style credentials are verifiable as authentic using the KERI decentralized key management infrastructure (DKMI),
since ACDC-style credentials can be easily chained as a directed, acyclic graph (DAG), and
since the schema of an ACDC creates a shape or container for data,

you can see how those concepts combined to create the name (1) authentic (2) chained (3) data container, or ACDC.

Purpose

The introductory quote from the ACDC spec sums up well the purpose of the ACDC protocol:

One primary purpose of the ACDC protocol is to provide granular provenanced proof-of-authorship (authenticity) of their contained data via a tree or chain of linked ACDCs (technically a directed acyclic graph or DAG). Similar to the concept of a chain-of-custody, ACDCs provide a verifiable chain of proof-of-authorship of the contained data. […]

These proofs of authorship and/or authority provide provenance of an ACDC itself and by association any data that is so conveyed.

Authentic Chained Data Containers IETF Draft Specification – Dr. Samuel M. Smith

Verifiable authorizations, permissions, rights, and credentials that include a full provenance chain anchored in key event logs make KERI an elegant way to capture and unlock reputation value as well as serve as a robust solution for identity and access management systems.

To learn in-depth about all the particulars for ACDCs read the Authentic Chained Data Containers (ACDC) IETF draft specification by Dr. Samuel M. Smith.

With this brief interlude on the name and purpose of ACDCs out of the way next we dive into the specific credentials we will issue for the journey to Abydos.

Credentials for Abydos

Multiple different credentials are issued during different parts of the overall ATHENA trust workflow in the journey to Abydos. The following graphic shows a small blue icon representing each issued (OUT) or received (IN) credential.

The four credentials that will be issued over the course of the journey include:

TreasureHuntingJourney: this is issued by the Wise Man to each of the explorers, Richard and Elayne, to tell them about a potential journey they could go on. It represents a commitment by ATHENA to a potential explorer as an offer of a journey they could embark upon.

JourneyMarkRequest: this is issued by an explorer to the Wise Man as a commitment to join the party for a particular TreasureHuntingJourney. Both Richard and Elayne issue this to Ramiel.

JourneyMark: this is issued by the Wise Man to an explorer as an acknowledgment that ATHENA accepted that explorer onto the requested TreasureHuntingJourney. This is issued by Ramiel to both Richard and Elayne.

JourneyCharter: once the party threshold for a journey is reached (threshold logic not yet implemented) the Wise Man issues a JourneyCharter credential to each party member who will be going on the journey. This includes Richard and Elayne.

On to our first credential!

Issue TreasureHuntingJourney Credential – KLI

The TreasureHuntingJourney must be issued to both Richard and Elayne as depicted in the graphic below.

As you can see, Ramiel has both of these credentials in the OUT column, indicating he has issued them, and both Richard and Elayne have them in the IN column, indicating they have received them.

The following command shows issuance of this credential to Richard.

kli vc issue --name ${WISEMAN_KEYSTORE} \
  --alias ${WISEMAN_ALIAS} \
  --registry-name ${WISEMAN_REGISTRY} \
  --schema "${TREASURE_HUNTING_JOURNEY_SCHEMA_SAID}" \
  --recipient "${RICHARD_PREFIX}" \
  --data @"${ATHENA_DIR}"/credential_data/osireion-treasure-hunting-journey.json

This shows issuance of a credential:

--name: from the Wise Man’s keystore
--alias: using the Ramiel AID
--schema: using the schema SAID of the TreasureHuntingJourney schema
--recipient: the recipient is Richard
--data: the data sent in is specific to the credential issued to Richard

The only arguments we haven’t used yet for this command are as follows:

--edges: This is for adding the edge links to other credentials, the “chain” part of authentic chained data containers. We will see this in use for the next credential we issue, the JourneyMarkRequest.

--rules: This is where the Ricardian Contracts come into ACDCs. The TreasureHuntingJourney credential did not use the --rules option since the rules were baked in to the schema definition. However, the JourneyMarkRequest schema is different and uses rules passed in.

Richard’s TreasureHuntingJourney Credential Data

{
  "destination": "Osireion",
  "treasureSplit": "50/50",
  "partyThreshold": 2,
  "journeyEndorser": "Ramiel"
}

This is the data located in the abydos-tutorial/athena/credential_data/osireion-treasure-hunting-journey.json file. It is used for both Richard’s and Elayne’s credential.
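The analogous issuance to Elayne follows the same pattern. As a sketch, assuming Elayne’s prefix is captured in a variable such as ELAYNE_PREFIX (the actual variable name in workflow.sh may differ):

# Sketch of issuing the same TreasureHuntingJourney credential to Elayne.
# ELAYNE_PREFIX is an assumed variable name holding Elayne's AID prefix.
kli vc issue --name ${WISEMAN_KEYSTORE} \
  --alias ${WISEMAN_ALIAS} \
  --registry-name ${WISEMAN_REGISTRY} \
  --schema "${TREASURE_HUNTING_JOURNEY_SCHEMA_SAID}" \
  --recipient "${ELAYNE_PREFIX}" \
  --data @"${ATHENA_DIR}"/credential_data/osireion-treasure-hunting-journey.json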

Listing the credentials while polling (causes messages to be delivered)

Once the credential is issued using kli vc issue it can take a few moments to arrive, so the kli vc list ... --poll command is useful for waiting until the credential arrives.

kli vc list --name ${EXPLORER_KEYSTORE} --alias ${EXPLORER_ALIAS} --poll

The output of this command looks as follows:

Checking mailboxes for any received credentials......

Current received credentials for richard (EJS0-vv_OPAQCdJLmkd5dT0EW-mOfhn_Cje4yzRjTv8q):

Credential #1: EE0MvGRafksqpzyCYXm6tfaiKYR9LpUsV8YGD8KSMRIS
    Type: Treasure Hunting Journey
    Status: Issued ✔
    Issued by EIaJ5gpHSL9nl1XIWDkfMth1uxbD-AfLkqdiZL6S7HkZ
    Issued on 2023-03-27T20:23:48.536543+00:00

As long as you see Status: Issued ✔ you can be confident the credential was issued correctly.
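Elayne’s keystore can be polled the same way. As a sketch, assuming Elayne corresponds to the Librarian keystore and that its variable names follow the same pattern as the others in workflow.sh:

# Sketch: poll Elayne's (the Librarian's) mailbox for received credentials.
# LIBRARIAN_KEYSTORE and LIBRARIAN_ALIAS are assumed variable names.
kli vc list --name ${LIBRARIAN_KEYSTORE} --alias ${LIBRARIAN_ALIAS} --poll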

Issue TreasureHuntingJourney Credential – Agent (Mark I – deprecated)

Issuing credentials with the Agent is very similar to the command line except all of the data arguments are added as JSON body attributes as shown below.

JOURNEY_DATA=$(cat "${ATHENA_DIR}"/credential_data/osireion-treasure-hunting-journey.json)

curl -s -X POST "${WISEMAN_AGENT_URL}/credentials/${WISEMAN_ALIAS}" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d "{\"credentialData\":${JOURNEY_DATA}, \"recipient\":\"${RICHARD_PREFIX}\", \"registry\":\"${WISEMAN_REGISTRY}\", \"schema\":\"${TREASURE_HUNTING_JOURNEY_SCHEMA_SAID}\"}" \
  | jq '.["d"]' | tr -d '"'

Once again I pipe the response to jq and tr to select the SAID of the credential, which is returned in the “d” field. This is useful for storing the SAID in a BASH variable for use later in the workflow.sh script.
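A sketch of that capture is shown below; JOURNEY_CRED_SAID is an illustrative variable name, not necessarily the one used in workflow.sh:

# Wrap the same POST in command substitution to capture the credential SAID.
JOURNEY_CRED_SAID=$(curl -s -X POST "${WISEMAN_AGENT_URL}/credentials/${WISEMAN_ALIAS}" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -d "{\"credentialData\":${JOURNEY_DATA}, \"recipient\":\"${RICHARD_PREFIX}\", \"registry\":\"${WISEMAN_REGISTRY}\", \"schema\":\"${TREASURE_HUNTING_JOURNEY_SCHEMA_SAID}\"}" \
  | jq '.["d"]' | tr -d '"')
echo "Issued credential SAID: ${JOURNEY_CRED_SAID}"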

Get SAID of TreasureHuntingJourney credential issued

Another way to get the SAID of the TreasureHuntingJourney credential is by using a GET request to the /credentials/${ALIAS} endpoint.

EXPLORER_JOURNEY_CRED_SAID=$(curl -s \
  -X GET "${EXPLORER_AGENT_URL}/credentials/${EXPLORER_ALIAS}?type=received&schema=${TREASURE_HUNTING_JOURNEY_SCHEMA_SAID}" \
  | jq '.[0] | .sad.d' | tr -d '"')

Use this GET request if you need to read in the SAID of a particular credential. The jq pipe selects the “d” attribute from the first credential (“.[0]”) in the returned list.

Advanced Credential Issuance – with Edges and Rules

ACDCs with Edges

The preceding issuance of TreasureHuntingJourney credentials was interesting, yet it did not include one of the critical features of ACDCs: edges. Since a graph of ACDCs is connected by edges, knowing how to use an edge to connect, or “chain,” credentials together enables you to represent your domain’s credential data as a graph data structure, specifically a directed acyclic graph (DAG), which behaves as an append-only data structure. The JourneyMarkRequest credential includes an edge that points to the TreasureHuntingJourney credential as shown in the graph below.

The edge from the JourneyMarkRequest is labeled as “journey”. This is represented in the schema for the JourneyMarkRequest as well as in the schema map JSON file that KASLCred reads in. Edges can be traversed to resolve, inspect, and validate the credential the edge points to.
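To make the “journey” edge concrete, a hedged sketch of what such an edge block could look like as JSON follows. The field names n (the SAID of the credential the edge points to) and s (that credential’s schema SAID) follow common ACDC edge conventions, but the exact file contents used in the tutorial may differ, and the file path here is illustrative:

# Sketch of a "journey" edge block written to a temporary file.
# The "d" field is left empty so it can be filled in by saidification later.
cat > /tmp/journey-edge.json <<EOF
{
  "d": "",
  "journey": {
    "n": "${EXPLORER_JOURNEY_CRED_SAID}",
    "s": "${TREASURE_HUNTING_JOURNEY_SCHEMA_SAID}"
  }
}
EOF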

ACDC Rules – Ricardian Contracts

We also show an alternate way to specify rules with the --rules option to the KLI and the “rules” property of the JSON object sent to the Agent.

Issue JourneyMarkRequest Credential – KLI

Once the JourneyMarkRequest Credentials are issued your network will be in the following state:

Both Richard and Elayne will have issued JourneyMarkRequest credentials to Ramiel and these credentials will have an edge pointing back to the TreasureHuntingJourney credential Ramiel issued to them.

A few steps are involved in issuing the JourneyMarkRequest credential due to the fact that it points to another credential, the TreasureHuntingJourney credential. These steps include:

Prepare the TreasureHuntingJourney edge
Saidify the TreasureHuntingJourney edge
Include the TreasureHuntingJourney edge inside the JourneyMarkRequest during issuance.

Prepare TreasureHuntingJourney Edge

A few steps are required to prepare the TreasureHuntingJourney edge for inclusion in the JourneyMarkRequest credential. All of these steps are used to write a JSON file with the edge data that will either be passed to the --edges argument of kli vc issue with the KLI or embedded within the “source” attribute of the JSON object POSTed to “/credentials/{alias}” with the Agent.
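As a sketch of how such an edge file could be produced with a jq filter and then saidified, consider the following; the filter, output path, and use of kli saidify here are assumptions about the approach rather than an exact copy of workflow.sh:

# Build the edge JSON with jq, then fill in its "d" field via saidification.
jq -n \
  --arg journey_said "${EXPLORER_JOURNEY_CRED_SAID}" \
  --arg journey_schema "${TREASURE_HUNTING_JOURNEY_SCHEMA_SAID}" \
  '{d: "", journey: {n: $journey_said, s: $journey_schema}}' \
  > /tmp/journey-edge.json
kli saidify --file /tmp/journey-edge.json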

JQ filters

JQ filters are useful to w