Last Update 1:51 AM March 24, 2023 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Thursday, 23. March 2023

Simon Willison

textra

Tiny (432KB) macOS binary CLI tool by Dylan Freedman which produces high quality text extraction from PDFs, images and even audio files using the VisionKit APIs in macOS 13 and higher.

Via Alex Garcia


ChatGPT Retrieval Plugin

ChatGPT Retrieval Plugin "The ChatGPT Retrieval Plugin repository provides a flexible solution for semantic search and retrieval of personal or organizational documents using natural language queries." How many existing startups were building this I wonder?

ChatGPT Retrieval Plugin

"The ChatGPT Retrieval Plugin repository provides a flexible solution for semantic search and retrieval of personal or organizational documents using natural language queries." How many existing startups were building this I wonder?


ChatGPT plugins

ChatGPT is getting a plugins mechanism, which will allow developers to provide extra capabilities to ChatGPT, like looking up restaurants on OpenTable or fetching data from APIs. This feels like the kind of feature that could obsolete - or launch - a thousand startups. It also makes ChatGPT much more interesting as a general purpose tool, as opposed to something that only works as an interface to a language model.


John Philpin : Lifestream

March 23rd.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary


Chance | #mbmar


Ben Werdmüller

We're throwing a free event on tech, ...

We're throwing a free event on tech, justice, and economic mobility - and you're invited. It takes place online and in-person in Chicago from May 18-20. It's free to attend. This is a save-the-date announcement - but trust me, you'll want to meet our speakers. https://19thnews.org/2023-summit/


John Philpin : Lifestream

Mail Drop for Drafts … NICE


Simon Willison

mitsua-diffusion-one

mitsua-diffusion-one "Mitsua Diffusion One is a latent text-to-image diffusion model, which is a successor of Mitsua Diffusion CC0. This model is trained from scratch using only public domain/CC0 or copyright images with permission for use." I've been talking about how much I'd like to try out a "vegan" AI model trained entirely on out-of-copyright images for ages, and here one is! It looks like

mitsua-diffusion-one

"Mitsua Diffusion One is a latent text-to-image diffusion model, which is a successor of Mitsua Diffusion CC0. This model is trained from scratch using only public domain/CC0 or copyright images with permission for use." I've been talking about how much I'd like to try out a "vegan" AI model trained entirely on out-of-copyright images for ages, and here one is! It looks like the training data mainly came from CC0 art gallery collections such as the Metropolitan Museum of Art Open Access.

Via @wrockben


Ben Werdmüller

How Loneliness Reshapes the Brain

“The problem with loneliness seems to be that it biases our thinking. In behavioral studies, lonely people picked up on negative social signals, such as images of rejection, within 120 milliseconds — twice as quickly as people with satisfying relationships and in less than half the time it takes to blink. Lonely people also preferred to stand farther away from strangers, trusted others less and disliked physical touch.” #Health

[Link]


Here’s the full analysis of newly uncovered genetic data on COVID’s origins

“The full analysis provides additional compelling evidence that the pandemic coronavirus made its leap to humans through a natural spillover, with a wild animal at the market acting as an intermediate host between the virus's natural reservoir in horseshoe bats and humans.” #Health

[Link]


John Philpin : Lifestream

I wrote to someone on the 5th July, 2017 seeking clarification on an earlier conversation.

I never heard from them again. Until yesterday, when I received a long, thoughtful, considered email answering all of my questions.

No reference as to why it took them nearly 6 years to reply. I mean nothing.

It’s as if they were beamed into a time warp and have just now returned and they just aren’t aware that they have been ‘lost’ these past 6 years.


Simon Willison

Teaching News Apps with Codespaces

Derek Willis used GitHub Codespaces for the latest data journalism class he taught, and it eliminated the painful process of trying to get students on an assortment of Mac, Windows and Chromebook laptops all to a point where they could start working and learning together.

Via @derekwillis


Quoting James Vincent

If you ask Microsoft’s Bing chatbot if Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

James Vincent

Wednesday, 22. March 2023

Simon Willison

Weeknotes: AI won't slow down, a new newsletter and a huge Datasette refactor

I'm a few weeks behind on my weeknotes, but it's not through lack of attention to my blog. AI just keeps getting weirder and more interesting.

I'm beginning to expect that every Tuesday may be a write-off for the next few years, since the AI community seems to have decided that Tuesday is the day to launch everything.

Two Tuesdays ago we got a Google announcement, Anthropic's Claude and GPT-4. On Tuesday this week we got Google Bard, Bing Image Creator and Adobe Firefly.

I've written about a bunch of that stuff this month:

10th: ChatGPT can't access the internet, even though it really looks like it can

11th: Large language models are having their Stable Diffusion moment

13th: Stanford Alpaca, and the acceleration of on-device large language model development

17th: Could you train a ChatGPT-beating model for $85,000 and run it in a browser?

18th: A conversation about prompt engineering with CBC Day 6

22nd: Don't trust AI to talk accurately about itself: Bard wasn't trained on Gmail

Apparently this blog is now partly focused on AI! If you want to stay up-to-date with my writing on this (and other) subjects you can subscribe to my atom feed, or you can sign up for my brand new Substack newsletter.

My blog as a newsletter

I know there are a lot of people out there who don't habitually use a feed reader but do find great value from email newsletters.

simonw.substack.com is my new newsletter, which is effectively a way to subscribe to my blog via email.

I started it a few months ago when it looked like Twitter was about to collapse under the weight of its new mismanagement. I first promoted it at the bottom of my Large language models are having their Stable Diffusion moment post, and it's since grown to 640 subscribers!

I plan to send it out around once a week, provided there's material to send.

It will be mostly content from my blog, with maybe a paragraph or two of additional context added at the top highlighting themes of the past week (such as GPT-4).

The first two editions can be found here:

March 13th: The Stable Diffusion moment for Large Language Models

March 19th: Could you train a ChatGPT-beating model for $85,000 and run it in a browser?

A fun detail about my newsletter is how I'm generating it.

Substack doesn't have an API, but I wanted to automate as much of the process of copying in data from my blog as possible.

I built myself an automation around copy and paste!

observablehq.com/@simonw/blog-to-newsletter is an Observable notebook I wrote which assembles most of the newsletter for me.

It works by running this SQL query against my datasette.simonwillison.net Datasette instance, which runs against a SQLite copy of my blog content (a PostgreSQL/Django app) built by a GitHub Action in this repository.

The SQL query assembles a string of HTML which is rendered in the notebook. There's also a "Copy to clipboard" button which uses this JavaScript pattern to copy a rich text representation of the HTML to the clipboard.
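As a rough sketch of that copy-to-clipboard pattern (not the actual notebook code - the function and element names here are hypothetical), the asynchronous Clipboard API can put both an HTML and a plain-text representation on the clipboard:

async function copyRichText(html: string): Promise<void> {
  // Write both an HTML and a plain-text flavour; paste targets pick the richest one they support.
  const htmlBlob = new Blob([html], { type: "text/html" });
  const textBlob = new Blob([html.replace(/<[^>]+>/g, "")], { type: "text/plain" });
  await navigator.clipboard.write([
    new ClipboardItem({ "text/html": htmlBlob, "text/plain": textBlob }),
  ]);
}

// Hypothetical wiring for a "Copy to clipboard" button in a notebook cell.
document.getElementById("copy-newsletter")?.addEventListener("click", () => {
  const html = document.getElementById("newsletter-preview")?.innerHTML ?? "";
  copyRichText(html).catch(console.error);
});

Pasting into a rich-text editor such as Substack's then carries the formatting across, which matches the behaviour described above.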

When I hit "paste" in the Substack editor interface it converts that representation into Substack's chosen subset of HTML. Then I can edit it by hand in the Substack editor.

This is working really well so far - it's really easy to tweak the generated HTML in the Observable notebook, and once I've transferred it to Substack I can re-arrange things and add my own extra commentary to the top of the newsletter before hitting send.

Datasette's new JSON API

I finally landed a GIANT branch I've been working on for several months now: a complete redesign of Datasette's default JSON format, one of the largest changes I need to land prior to releasing Datasette 1.0.

The previous default JSON format was a bit of a mess: it had dozens of keys, and presented the row data as an array of arrays (on the basis that the column names were available in a separate key, and rows as arrays would be more efficient in terms of bytes on the wire).

I always found myself adding ?_shape=array to that URL to get a smaller format, which strongly indicated that the default I had picked was the wrong one.

The new format can now be previewed here - it looks like this (truncated):

{ "ok": true, "next": "d,v", "rows": [ { "pk1": "a", "pk2": "a", "content": "a-a" }, { "pk1": "a", "pk2": "b", "content": "a-b" } ] }

The default keys are "ok", "next" to indicate pagination (this is null if there are no extra pages) and "rows" with a list of JSON objects.

If you want extras - like a total row count, or a list of columns, or some suggested facets - you can request them using the new ?_extra= parameter - for example:

https://latest.datasette.io/fixtures/sortable.json?_extra=columns&_extra=count&_extra=suggested_facets

This returns a response that starts like this:

{ "ok": true, "next": "d,v", "count": 201, "columns": [ "pk1", "pk2", "content", "sortable", "sortable_with_nulls", "sortable_with_nulls_2", "text" ], "suggested_facets": [ { "name": "pk1", "toggle_url": "https://latest.datasette.io/fixtures/sortable.json?_extra=columns&_extra=count&_extra=suggested_facets&_facet=pk1" }, { "name": "pk2", "toggle_url": "https://latest.datasette.io/fixtures/sortable.json?_extra=columns&_extra=count&_extra=suggested_facets&_facet=pk2" }, { "name": "text", "toggle_url": "https://latest.datasette.io/fixtures/sortable.json?_extra=columns&_extra=count&_extra=suggested_facets&_facet=text" } ], "rows": [

There's still more work to do on this feature: I need to write the documentation for it, and figure out how it should affect the Datasette endpoint that returns results from an arbitrary SQL query. But it's ready to preview, and I'm keen to get feedback on it as quickly as possible!
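As a quick illustration of consuming the new format, here is a minimal sketch against the demo instance linked above (not part of Datasette itself; the response shape simply follows the examples shown earlier):

interface DatasetteResponse {
  ok: boolean;
  next: string | null;               // null when there are no more pages
  count?: number;                    // only present when ?_extra=count is requested
  rows: Record<string, unknown>[];
}

async function fetchSortable(): Promise<DatasetteResponse> {
  const url = "https://latest.datasette.io/fixtures/sortable.json?_extra=count";
  const response = await fetch(url);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return (await response.json()) as DatasetteResponse;
}

fetchSortable().then((data) => {
  console.log(`ok=${data.ok}, count=${data.count}, rows on this page=${data.rows.length}`);
});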

Please take a look, and provide feedback on this dedicated issue thread - or come and talk about it in the Datasette Discord.

Releases these weeks

datasette-atom: 0.9 - (12 releases total) - 2023-03-14
Datasette plugin that adds a .atom output format

datasette-simple-html: 0.2 - (2 releases total) - 2023-03-12
Datasette SQL functions for very simple HTML operations

apple-notes-to-sqlite: 0.1 - 2023-03-09
Export Apple Notes to SQLite

datasette-render-html: 1.0 - (4 releases total) - 2023-03-09
Plugin for selectively rendering the HTML in specific columns

datasette: 0.64.2 - (125 releases total) - 2023-03-08
An open source multi-tool for exploring and publishing data

TIL these weeks

Mocking subprocess with pytest-subprocess

Using ChatGPT to write AppleScript

Using SQL with GDAL

Running LLaMA 7B and 13B on a 64GB M2 MacBook Pro with llama.cpp

Copy rich text to the clipboard

How to read Hacker News threads with most recent comments first

Scraping Reddit and writing data to the Datasette write API

A simple Python implementation of the ReAct pattern for LLMs

Use DuckDB to convert parquet to JSON and then open it in Datasette Lite

Datasette: Gather feedback on new ?_extra= design

I just landed the single biggest backwards-incompatible change to Datasette ever, in preparation for the 1.0 release. It's a change to the default JSON format from the Datasette API - the new format is much slimmer, and can be expanded using a new ?_extra= query string parameter. I'm desperately keen on getting feedback on this change! This issue has more details and a call for feedback.


Phil Windley's Technometria

Passkeys: Using FIDO for Secure and Easy Authentication

Passkeys, the popular name for FIDO, are not only easy, they offer better protection against phishing than other forms of multi-factor authentication. This is how they work.

This article is adapted from Chapter 12 of my new book Learning Digital Identity from O'Reilly Media.

I was at SLC DevOpsDays last week and attended a talk by Sharon Goldberg on MFA in 2023. She's a security expert and focused many of her remarks on the relative security of different multi-factor authentication (MFA) techniques, a topic I cover in my book as well. I liked how she described the security provisions of passkeys (also known as Fast Identity Online, or FIDO).

FIDO is a challenge-response protocol that uses public-key cryptography. Rather than using certificates, it manages keys automatically and beneath the covers, so it’s as user-friendly as possible. I’m going to discuss the latest FIDO specification, FIDO2, here, but the older FIDO U2F and UAF protocols are still in use as well.

FIDO uses an authenticator to create, store, and use authentication keys. Authenticators come in several types. Platform authenticators are devices that a person already owns, like a laptop or smartphone. Roaming authenticators take the form of a security key that connects to the laptop or smartphone using USB, NFC, or Bluetooth.

This is a good time for you to stop reading this and head over to Passkeys.io and try them for yourself. If you're using a relatively modern OS on your smartphone, tablet, or computer, you shouldn't have to download anything. Sign up using your email (it doesn't have to be a real email address), do whatever your device asks when you click "Save a Passkey" (on my iPhone it does Face ID, on my MacOS laptop, it does Touch ID). Then sign out.

Using Touch ID with Passkey

Now, click on "Sign in with a passkey". Your computer will let you pick an identifier (email address) that you've used on that site and then present you with a way to locally authenticate (i.e., on the device). It's that simple. In fact, my biggest fear with passkeys is that it's so slick people won't think anything has happened.

Here's what's going on behind the scenes: When Alice registers with an online service like Passkeys.io, her authenticator (software on her phone, for example) creates a new cryptographic key pair, securely storing the private key locally and registering the public key with the service. The online service may accept different authenticators, allowing Alice to select which one to use. Alice unlocks the authenticator using a PIN, fingerprint reader, or face ID.

When Alice authenticates, she uses a client such as a browser or app to access a service like a website (see figure below). The service presents a login challenge, including the chance to select an account identifier, which the client (e.g., browser) passes to the authenticator. The authenticator prompts Alice to unlock it and uses the account identifier in the challenge to select the correct private key and sign the challenge. Alice’s client sends the signed challenge to the service, which uses the public key it stored during registration to verify the signature and authenticate Alice.

Authenticating with Passkey

FIDO2 uses two standards. The Client to Authenticator Protocol (CTAP) describes how a browser or operating system establishes a connection to a FIDO authenticator. The WebAuthN protocol is built into browsers and provides an API that JavaScript from a Web service can use to register a FIDO key, send a challenge to the authenticator, and receive a response to the challenge.
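To give a feel for what that JavaScript API looks like, here is a heavily simplified sketch (illustrative only: in a real deployment the challenge and user id come from the server, and the signed result is posted back for verification against the stored public key):

// Registration: the authenticator creates a key pair and returns the public key.
async function registerPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { name: "example.com" },                           // the relying party (the website)
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),      // a stable per-account id in practice
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    },
  });
}

// Authentication: the authenticator signs the challenge with the matching private key.
async function signInWithPasskey(): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      // An empty allowCredentials list lets the authenticator offer any passkey it holds for this domain.
    },
  });
}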

One of the things I liked about Dr. Goldberg's talk is that she emphasized that the security of passkeys rests on three things:

Transport Layer Security (TLS) to securely transport challenges and responses.

The WebAuthN protocol that gives websites a way to invoke the local authentication machinery using a Javascript API.

A secure, local connection between the client and authenticator using CTAP.

One of the weaknesses of how we use TLS today is that people don't usually check the lock icon in the browser and don't understand domain names enough to tell if they're being phished. Passkeys do this for you. The browser unambiguously transfers the domain name to the authenticator which knows if it has an established relationship with that domain or not. Authenticating that you're on the right site is a key reason they're so much more secure than other MFA alternatives. Another is having a secure channel from authenticator to service, making phishing nearly impossible because there's no way to break into the authentication flow.

Passkeys provide a secure and convenient way to authenticate users without resorting to passwords, SMS codes, or TOTP authenticator applications. Modern computers and smartphones and most mainstream browsers understand FIDO protocols natively. While roaming authenticators (hardware keys) are available, for most use cases, platform authenticators (like the ones built into your smartphone or laptop) are sufficient. This makes FIDO an easy, inexpensive way for people to authenticate. As I said, the biggest impediment to its widespread use may be that people won’t believe something so easy is secure.


John Philpin : Lifestream

The quiet before ( or is it after? ) the storm.


MyDigitalFootprint

Why finding purpose feels impossible.

Overview

Balance is a necessity in every aspect of life.  Knowing if we are in or out of balance directly affects our ability to survive and thrive; therefore, this thought piece brings something new to understanding balance and by association purpose.  

Adaptation and homeostasis are the natural world's way of creating an unstable equilibrium. This dynamic system is unpacked and repurposed to present a solid argument for leadership teams and boards of directors to shift questions to ones that understand how to balance two powerful opposing forces.  Balancing the development of the new whilst maintaining and preserving the existing has always been challenging, but here is a model that makes it easier.

In nature, we observe that organisms with little or no adaptation to ever-changing environments struggle to survive, whereas those that are pushing the evolutionary envelope and mutating into more suitable candidates for their ecosystem thrive. We also know that homeostasis and memory are key factors in maintaining stability. We find the same duality phenomena is reflected in the business world as we try to find a balance between creating new and maintaining what we have. 

Nature does not favour new over old but ebbs and flows, appearing to have a fluid, ever-changing dynamic stability; however, businesses try to prioritise based on a purpose, and this piece highlights that often there is an unwillingness to consider both perspectives in balance, which is detrimental.  What are the right questions to ask to determine if your purpose or strategy will disrupt fragile balances or if you are aligned and can thrive? 

This is a 20-minute read which will challenge your perceptions and presuppositions of the role you inhabit and the role your ecosystem plays on you.






Can you, I, us, society or business have a single purpose?

In our natural world, two dependent systems keep our fragile life from ending. One is the ability to mutate and adapt, and the other is maintenance (homeostasis) and memory. When we observe the dynamic expansions and contractions in our natural world, we see the two sides of this dynamic system competing. We can label these forces as “error creation” and “error correction”, and there is a never-ending ebb and flow between the two states, which creates a dynamic stability or an unbalanced equilibrium; two oxymorons which we will explore below. The ability to see both sides of a perspective gives us a wider vantage point that can help us unlock powerful insights into why a single unified purpose is not feasible, why AI will not take over, and how to avoid failure in times of uncertainty.

Is believing in a unified purpose a protective delusion? 

The theory of evolution is a scientific explanation for how life on Earth has changed over time and how different species have arisen from common ancestors. It is based on the idea that all living things share a common ancestry and that the diversity of life we see today results from gradual changes over millions of years through a process called natural selection. Natural selection is the process by which certain traits become more or less common in a population based on their effects on survival and reproduction. Random mutation, as the way new traits arise, and adaptation are central to the theory of evolution.

Homeostasis is maintenance: it refers to the ability of an organism to maintain a stable internal environment despite changes in the external environment. It is essential for the survival and well-being of an organism, and disruptions to this balance can have serious consequences. Memory refers to the process of storing, retaining, and retrieving information both individually and at a societal level; consider here how history, culture, law and even paintings and architecture are memory. Memory gives an organism the ability to maintain a sense of stability despite being in a changing environment, as one is able to recall the expected outcome of a familiar circumstance. Homeostasis and memory are critical for stability and survival.

These concepts, mutation & adaptation and homeostasis & memory, have been observed for thousands of years, and as humans unpack the complex connectivity between them, we can better articulate what we observe, which is why it can be so difficult to find and follow a singular goal to its completion. The connection between them has elements of both immediacy and delay: mutation demands an immediate response from homeostasis, yet also drives changes over time through immunity responses; in the long run, human changes create environments for different mutations.

To the observer who notices that there are no arrows, there is a reason for that. It is about the connection, relationship, and coupling, not the directionality of flow. 

Life is often framed as a competitive arms race: the survival of the fittest between living organisms that adapt to a constantly variable, but ultimately stable system. Mutations are randomness in the complex life coding system that creates new traits.  Mutations can create an advantage in the arms race as they can lead to unique traits which make an organism fitter for its environment.  Advantageous traits tend to thrive, whereas non-advantageous traits will either cease to exist or live on at a subsistence level.

Cyber security is positioned as a similar modern-day example of an arms race, but in reality, it is one system (virus) that is looking for a vulnerability to give it an advantage, and the other system running anti-virus is responding/ adapting to prevent it. The virus can only be created because the system has a fundamental flaw (weakness), and it is a race between those who want to uncover it and those who want to hide it, but the flaw exists. It is not the same.  

One could use the free market economy as a modern example of this phenomenon of mutation and homeostasis, whereby we use the words “innovation” and “competition” as synonyms for “mutation” and “survival of the fittest”. To describe innovation as solely beneficial, or solely detrimental would be incorrect; likewise, it is not reasonable to suggest that stability is solely one or the other. In all aspects of life, there is a balance between chaos and order, mutation and homeostasis, and innovation and tradition. Therefore, when we re-frame our thinking, we see that believing in a single, unified purpose is a delusion, as this singular goal only represents one side of a dynamic system whereby control on one end does not create the end game, and we only have a bright future because we have two.

A preference for labelling

If I gave you a choice between “innovation” and “mutation”, which one do you associate with value, creativity and growth?

Transposing the words that describe our natural world directly into a business environment would create fear, uncertainty and doubt.  “We are working hard on innovation” sounds more impressive and reasonable than “we are creating new mutations.”  “We mostly operate in a competitive market” sounds so much more plausible than “it is an arms race for the survival of the fittest”.  Words matter, and because they do, we can use words to create dynamics to be more or less willing to discuss and debate hard topics in the boardroom. 

The next section utilises the word “error”;  in this context, an "error" is the gap between expected and observed. I understand that many readers will want to stick with their natural framing of an error as right/ wrong, but “error” can mean:

a moral offence; 

belief in something untrue; 

the state or condition of being wrong in conduct or judgement;

a mistake, blunder, lapse, or slip;

a deviation from perception, expected, accuracy or correctness; 

the difference between the observed and actual

Error in this work is closer to the framing of the lower three bullets than the top three.  

How does a system based on “error creation” and “error correction” sound?  

The dynamic of our natural world system is that one system is creating something new and different whilst the other is trying to correct and mitigate the change. One is bringing in the new, and the other defending the existing.  One is the hope of something better, whilst the other is happy not to take the risk and maintain the same. Both sides create risk; strangely, different risks are created by each side, but nevertheless, risk.  Error creation, in this instance, is invention, innovation and creativity.  Error correction is stability and familiar processes that we know (and trust) from historical experience.  The risk in the former is change, and the risk in the latter is to become not relevant.  They are different risks.

The risk in the former is change, and the risk in the latter is to become not relevant.  They are different risks.

How often do we translate the message we receive from leadership that we need to be both “more innovative” whilst simultaneously “improving efficiency and effectiveness” as a demand to compromise on everything and lack focus? Perhaps we should see this balance as the maintenance of an equilibrium of "error creation" and "error correction" because it creates something new that we label as value. 

Leadership that fails to recognise there is a need for balance will fail; we first need to determine where we are in the system and which system we favour; from there, it should be easier to determine what journey we are on and where the north star is. 

This is not “trial and error”, as before a trial, there has to be a hypothesis, and the error is a deviation in the results from the expected/ predicted.  It is neither “error and trial”, where an error is created without a hypothesis and using some measure, one tries to determine if the error was successful. Then there is AB testing. However, these are all controlled point methods and are not about dynamic reactive complex systems. 

The answer to the universe, world and everything (#douglasadams)

To fans of The Hitchhiker's Guide to the Galaxy, we know the answer is 42, but the question remains; we just needed another 5 minutes! My struggle with the answer, as given, was whether it was still relevant by the time it was given, since 7.5 million years had passed, and a lot happens over such a time frame. I would have loved to have asked Douglas.

So often, there is a dynamic between two competing ideas, both trying to be the more important single unifying idea or purpose.  Humans want to be both heard and belong.  It is not that one is better than the other, but one is more error creation (listen to my new idea) and one more error correction (stability of belonging).  Some business leaders will say their purpose is “shareholder primacy”, whilst others say their core purpose is to “improve the lives of their customer”. This is a nonexclusive club - and there are many others - however, most companies settle on a purpose that is a compromise. It is safe to say that business purpose varies depending on the specific goals and values of the company and its stakeholders.  It could be argued that the purpose of a business is to balance creating more value for the majority of the customers, shareholders, employees, and wider society as a whole than the destruction or erosion of value for others.  Purpose itself is achieved through various competing ideas, such as developing innovative products or services, providing high-quality customer service, maintaining ethical business practices, and contributing to the ecosystem the organisation operates in. 

When we reflect on this concept, we see that a business is itself demanding that it “creates errors” and “corrects errors.” Remember, this is not “mistakes” or moral perils; this is seeing innovation as creating change, which is an error to the system of no change. Businesses need to create new ideas and innovations but also correct errors that try to change the stability of process efficiency. Leadership needs to craft a level of instability whilst maintaining equilibrium. Note: another framing of “error” could be morally unethical behaviour and trading without reason whilst insolvent. These activities are not errors; they are illegal, and it is not what we are focused on.

Leadership needs to craft a level of instability whilst maintaining equilibrium.

Perhaps we should accept that there cannot be one single unifying purpose, and we exist in a place where some individuals, leaders, and teams favour change (error creation) and some stability (error correction), but without both, we would lose our ever-changing dynamic stability.

The Purpose and origin of Innovation (as an example of error creation)

Innovation refers to the process of creating something new or improving upon an existing product, service, or process. The purpose of innovation is to bring about positive change and to solve problems more effectively or efficiently. Innovations can be incremental or disruptive, with the latter representing a significant shift in how things are done.

The origin of innovation can be traced back to human curiosity and the desire to improve the quality of life. Throughout history, humans have constantly sought ways to improve their lives, whether it be through inventions such as the wheel or electricity, or the development of new technologies like mobile phones and the web. Innovation has played a vital role in driving progress and advancing society.

Innovation has become increasingly important as businesses seek to stay competitive and meet the changing needs of consumers. Companies invest in research and development to create new products and services to improve efficiency, increase profits, and create new market opportunities. Governments also encourage innovation through funding research and development, providing incentives for businesses, and fostering collaboration between industry and academia.

Overall, the purpose and origin of innovation are rooted in human ingenuity and the desire to create something new and better. Innovation has been and will continue to be a critical driver of progress and growth in all aspects of society, including how we achieve sustainability. 

We like innovation as long as it is not called error creation or mutation, but the reality is, no matter how hard we try, they are one and the same. If you naturally align with an error creation ontology, you may have a creative mindset and enjoy the process of disruption, change, and the outcomes of instability.

There are exceptions: fraud, bribery and corruption.

The Purpose and origin of Accounting (as an example of error correction)

The origins of accounting can be traced back thousands of years to ancient civilisations such as the Babylonians, who kept records of their transactions on clay tablets. In medieval Europe, accounting was used primarily by merchants and bankers to keep track of their financial transactions. The original purpose of business accounting was to provide shareholders with accurate and timely information about the financial performance and position of an organisation.

Accounting is a system of recording, analysing and summarising financial transactions to prevent errors, provide accuracy, and present information that is useful in making business decisions. Over time, accounting has evolved into a sophisticated discipline with its own set of principles and standards. In addition to serving the needs of businesses, accounting also serves the broader needs of society. For example, accounting information is used by governments to track tax revenues and by investors to make investment decisions. Accounting also plays an important role in maintaining the integrity of financial markets by providing reliable information about the financial performance of companies.

We may naturally align to the framing of finance as long as it is not called error correction, doing the same, preventing change and improving stability, but the reality is, no matter how hard we try, they are one and the same. If you naturally align with an error correction ontology, you may have a more accounting mindset, whereby you will naturally look for order amidst chaos and create stable processes out of a loose set of desires.

There are exceptions: fraud, bribery and corruption.

How do other business functions align?

Marketing, sales, and product development appear to tend towards error creation, with operations, legal, and accounting systems tending towards error correction. It is possible that where technology and HR sit is critical to the overall balancing culture of the company and whether the company favours error creation (creative, imaginative, innovative) or error correction (stable, historical, and dutiful).

The descriptions above used “tend towards.” The basic function of accounting is to remove errors, but it also has to improve continually because of the necessity of continually correcting new errors.   The basic function of marketing is creativity, never repeating campaigns and coming up with new and different ideas. But if marketing were only creative, it would not be able to align and build upon what is already there and how the market behaves.  Creative flair for error creation is tempered by the history and context of error correction.  All functions demand both, and we need to understand how the balance is created by the purpose, function, and culture of the team. 

Teams define culture.  Teams are made of individuals who are either going to be more aligned to error-creating or error-correcting. Whilst we prefer to label individuals as  “creative” or “checklist,” these do describe the characteristics, leadership attitudes, and biases that are critical to determining what function a department actually fulfils.  These balances are not simple; indeed, they are complex.

Creative flair for error creation is tempered by the history and context of error correction.

A board needs to comprehend, appreciate, and understand if the balance is right. If a board is not diverse in thought, skill, and discipline, that balance may not be found or is easily lost, and the culture of one or the other can lead a business to failure, especially with an overly dominant financial focus as accounting tends to focus on being financially risk-averse.  It is worth noting that balance, in this sense, is not about equality; the balance creates enough errors to improve continually (agile) but with enough error correction to be resilient.  

The balance must create enough errors to improve continually (agile) but with enough error correction to be resilient.  

Wider perspective


The error creation/correction model can scale to other forms of thinking: 

Fear can be seen as error creation, whereas trust is error correction.  There is undoubtedly a complex relationship between fear and trust that is both immediate and develops over time.  Trust can correct fear that is related to an error originating from the past, but trust can also create a new path. 

The concept of 'self' is error-creation, whereby we tend to express our individuality within a system of societal norms and memory which seeks stability and error correction. Individuals create errors, and their movement (many followers) creates a change which eventually becomes the new normal.

Freedom allows for error creation, whilst control demands correction from deviation.  Too much freedom or too much control both appear terminal based on history. Prison and punishment systems struggle with this balance on so many levels, even using “error correction” as an idea in a different context.

Both error creation and error correction harbour risk, and both can be seen as creating more risk.  Benefit  demands an understanding of the value and cost of moving away from the known and can be seen as both more error creation (advantage) and correction (prevention).  

These examples are non-linear, and balance does not necessarily mean 50:50; perhaps less controversial language would be to substitute “error” with “difference.” A difference is the way in which two things, when compared, are not the same.

What else have you observed that represents two sides of a dynamic complex relationship, where one or both favour error creation and one error correction? 

It is worth pondering about how and why we try to reduce the complexity of the balance into a single ROI figure in order to pass a hurdle and justify a decision.  Can ROI ever capture consequences?   

What happens when we get it wrong?

Wrong, in this case, is where the balance is lost, and too many errors or too much correction ends in survival becoming unviable, with death being the only inevitable outcome. Business failure. 

In the wild, a mutation that does not provide an advantage can easily be lost, and an overly controlled stable environment can dwindle to nothing if it fails to adapt to modernity. 

In the economics of a business, the same is true: an innovation will not give an advantage forever, and the next innovation may not provide a new advantage. Equally, too much control and stability as a result of finance controls and regulation will result in the company becoming outdated in comparison to its environment or ecosystem - the result is the company or sector will enter a slow decline. #banking

A company does not exist on its own but in a complex ecosystem, and the balance may be achieved within the ecosystem and not just in the health or balance of a single company. In nature, the predator-prey model often forgets that the rabbit/fox needs other food and water, as well as the rabbit/fox having other threats.  A company may well be out of balance in itself, but balance is remediated by others in the ecosystem, which is why strategy is not just about innovation, but the balance that is maintained by the ecosystem in order to create dynamic stability. It is important to understand who is moving towards stability and who is moving towards innovation; otherwise, the whole ecosystem itself can fall out of balance.

This highlights that any theory of error creation/correction must embrace relevance on at least four levels: individual, team, organisation, and ecosystem.  A purpose may articulate what you want, but a purpose that is not dynamic in its ability to move and balance error creation and error correction at all levels is going to be short-lived. 

Reframing AI away from “Intelligent Machinery.”

Defining the term 'artificial intelligence' presents us with a problem. The problem is that human intelligence fundamentally depends on a balanced system where error creation and error correction co-exist.  Many of the most historic creative minds, insightful thinkers and leading scientists were neurodiverse (error-creating) and spent their lives fighting systems of error correction (academic, power, religious.)  Flat earthers, climate deniers and Copernicus with the heliocentric model of the solar system are a few that history has thrown up. 

Using the lens of error creation and correction, we can say that we are building ever more advanced error correction systems from big data tools, identification and removal of bias, better accounting, DNA manipulation, eradicating viruses, and using tech to provide compliance, governance, and oversight.  In this perspective, it becomes more obvious that we do not currently focus on using ever more advanced tech to actually create errors as a route for advantage. If AI were like nature, an AI that only corrects or one that only creates errors could not survive. 

If AI only needs to find and fix errors through the error correction lens, then AI will only be half a system and will be artificial but not intelligent. Data can be error-correcting and error-generating, but if we don't know this part of the ontology, we will never be able to gain more intelligence. 

The diagram below expands on this thinking to suggest that there is more than just one system on the error-correcting side.  Error correction is more complex as it needs to amend errors which are not advantageous, and promote the development of errors which are. It is equally true that the error creation side has to have a system that moderates errors, or it would be too wild, and create destruction before the opportunity arose to test if a new trait provided an advantage. The closed-loop system is full of complex feedback loops to both control and promote; with both immediacy and delay.  What is evident is that the system is neither stable nor unstable but has pulsing oscillation, which keeps it in a delicate balance. 

The axis of thriving and death

What came first, the chicken or the egg? 

In a world where error correction (driven by the desire to use experience and the past as the solid foundation) is the initiation or y-axis, and the response is error creation (creativity, invention and innovation) or x-axis, we get thinking dominated by being controlled first and unstable second, which creates the volatility and ambiguity that allow for dynamic stability; it is a place where we can thrive and grow. This could be our economic and political world viewed through the lens of law, policy, study, social norms, culture, and data.

In a world where error creation (driven by a desire for change, innovation, creativity, adventuring, pioneering, and invention) is the initiation, and the response is error correction (finance, risk management and control), we get instability dominating our thinking, and the environment is unbalanced, yet still in equilibrium.  Complexity and uncertainty become leading ideas. In a place of unbalanced equilibrium, we are still able to thrive and grow. This could be our economic and political world viewed through the lens of research, hypothesis, invention, ideation, hope, and imagination.

It would appear that:  dynamic stability + unbalanced equilibrium creates a balanced system that thrives.

What we should avoid is too much control and too much error - the question is, what do any of these worlds look or feel like, and how will we recognise them?

The saying that the last thing a fish will become aware of is water teaches us that our current environment can feel normal when, in reality, it may be out of balance. If we cannot see or know the balance, we have lost our propensity to manage change.

Why does all this matter

Leadership through one lens (unbalanced equilibrium) depends on being curious, whereas leadership from the other lens (dynamic stability) demands a focus on efficiency and effectiveness, but both perspectives need to continually improve how they inform and direct decision making. 

We often see fantastic single-idea books that encourage leadership to optimise decision-making towards growth driven by error creation (how to disrupt yourself, agile, reinvention, cross the chasm, the new normal, change management, repurpose, innovation, re-engineering); it is likely, with such a narrow focus, to lead to poor decision-making, as the benefits and outcomes for error correction are ignored or suppressed.  This is equally true in reverse, and there are equally brilliant books that optimise for error correction (efficiency, time management, effectiveness, infinite, leadership, tick lists, ethics, merit, principles, governance, accounting, and statistics). 

If we try to get the balance for a better outcome from both error creation and correction, we will likely get sub-optimal results for both.  Even with all the resources in the world, nature still ebbs and flows, as it cannot optimise for both and has learnt to make compromises, but we thrive because of it. 

The title of this long read is “Why finding purpose feels impossible”; on reflection, the title could equally have been “Why a singular unified business purpose is impossible for everyone”. If we accept we have to balance error creation and correction, a single purpose has to favour or optimise for one or the other, which means that many employees and stakeholders feel the stated “purpose” does not align with or represent them. Balance is rarely 50:50, it can come in many forms: 0:100, 80:20, 99:1, etc. Balance exists, but the context is dependent on the purpose you are trying to achieve. This is where exceptional leadership comes into its own.

The takeaway for leadership

A critically important task/role for leadership, the board, executives, and governance is to determine:

If our organisation favours error-creation or error-correction

Where is the balance for our organisation?

Are we in balance? If not, are we planning to or sourcing the skills necessary to improve balance?

How is balance in our ecosystem achieved, what are others doing?

Is our organisation/ecosystem in or out of balance, and does our action change anything?

The most important purpose for leadership, the board, executives and governance is to understand and determine if our organisation and ecosystem are in or out of balance.

The problem with strategy and purpose is that we can have agreed and have an agreement, but that does not mean we agree.  Working out balance is complex, whereas how we are performing against the plan is rather more straightforward.  

To determine and find balance, we must shift our focus from individual, singular and expert towards collective, group and ecosystem.  Whilst the former is positioned as strong leadership, and the latter is portrayed as introducing delay, compromise, tension and conflict - it is evident that to thrive, we must know how to find and deliver balance. 




Ben Werdmüller

Iraqi journalist who threw shoes at George W. Bush says his only regret is he "only had two shoes"

“Al-Zaidi says he didn't throw his shoes in a moment of uncontrolled anger, but that he had actually been waiting for just such an opportunity since the beginning of the U.S.-led invasion. He said Bush had suggested that the Iraqi people would welcome U.S. forces with flowers, which left him looking for an adequate reply.” #Democracy

[Link]


John Philpin : Lifestream

Innovation loses to temptation.


In case you are having a bad day … consider the plight of Tencent … where “Annual Profits have fallen to $27 Billion Following China Slowdown”


March 22nd.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary


Insect | #mbmar


Ben Werdmüller

Women are less likely to buy electric vehicles than men. Here's why.

“Given the current legislative and judicial situation in our country and my home state of Texas, as a LGBT woman it could be important for me to drive hundreds of miles without even stopping for gasoline, much less a charging station that might not be available.” #Society

[Link]


Simon Willison

Don't trust AI to talk accurately about itself: Bard wasn't trained on Gmail

Earlier this month I wrote about how ChatGPT can’t access the internet, even though it really looks like it can. Consider this part two in the series. Here's another common and non-intuitive mistake people make when interacting with large language model AI systems: asking them questions about themselves.

(When I say "large language model AI systems" I mean tools like ChatGPT, Bing and the brand new Google Bard.)

Asking questions like this feels like a natural thing to do: these bots use "I" pronouns (I really wish they wouldn't) and will very happily answer questions about themselves - what they can do, how they work, even their own opinions (I really wish they wouldn't do that).

These questions are likely to produce realistic but misleading answers. They come at a surprisingly high risk for hallucination.

What harm could possibly come from asking them how they work?

No, Bard was not trained on text from private Gmail accounts

The following screenshot started circulating on Twitter earlier today. I'm not linking to it directly because I don't want to encourage it to spread further without extra context.

The user asked:

where does Bard's dataset come from?

Bard replied like this - emphasis mine:

Bard's dataset comes from a variety of sources, including:

Publicly available datasets: These include datasets of text and code from the web, such as Wikipedia, GitHub, and Stack Overflow.

Google's internal data: This includes data from Google Search, Gmail, and other products.

Data from third-party companies: This includes data from companies that have partnered with Google to provide data for Bard's training.

The specific sources of Bard's dataset are not publicly available, but Google AI has said that the dataset is "as comprehensive as possible".

Unsurprisingly, the idea that Bard might have been trained on internal data from Gmail sparked a lot of concern!

Here's an official denial:

Bard is an early experiment based on Large Language Models and will make mistakes. It is not trained on Gmail data. -JQ

- Google Workspace (@GoogleWorkspace) March 21, 2023

(I have some supporting arguments below, in case the official denial isn't convincing enough for you.)

Bard was not trained on Gmail. So why on earth did Bard say that it was?

Language models have no concept of "self"

As always with language models, the trick to understanding why they sometimes produce wildly inappropriate output like this is to think about how they work.

A large language model is a statistical next-word / next-sentence predictor. Given the previous sequence of words (including the user's prompt), it uses patterns from the vast amount of data it has been trained on to find a statistically satisfying way to continue that text.
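
To make the "next-word predictor" idea concrete, here is a toy sketch in Python: a bigram model built from word counts over a tiny made-up corpus. Real language models use neural networks over tokens and vastly more data, but the basic move is the same: pick a continuation that is statistically likely given what came before.

from collections import Counter, defaultdict
import random

# Tiny made-up corpus; real models train on trillions of words.
corpus = "i am a large language model trained by example text i am a model".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Sample a follower in proportion to how often it appeared.
    words, counts = zip(*following[prev].items())
    return random.choices(words, weights=counts)[0]

# "Complete" a prompt one word at a time.
text = ["i"]
for _ in range(5):
    text.append(next_word(text[-1]))
print(" ".join(text))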

As such, there's no mechanism inside a language model to help it identify that questions of the form "how do you work?" should be treated any differently than any other question.

We can give it hints: many chatbot models are pre-seeded with a short prompt that says something along the lines of "You are Assistant, a large language model trained by OpenAI" (seen via a prompt leak).
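
As a minimal sketch (not Bard's actual setup, and the exact wording of any real system prompt is an assumption here), this is roughly what that pre-seeding looks like in the chat-style message format the OpenAI Python library exposed as of March 2023:

import openai

openai.api_key = "sk-..."  # assumes a valid API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The hidden hint: without something like this, the model has no
        # reliable notion of "self" to describe.
        {"role": "system", "content": "You are Assistant, a large language model trained by OpenAI."},
        {"role": "user", "content": "Who are you, and what were you trained on?"},
    ],
)
print(response["choices"][0]["message"]["content"])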

And given those hints, it can at least start a conversation about itself when encouraged to do so.

But as with everything else about language models, it's an illusion. It's not talking about itself, it's completing a sentence that starts with "I am a large language model trained by ...".

So when it outputs "Google's internal data:", the obvious next words might turn out to be "This includes data from Google Search, Gmail, and other products" - they're statistically likely to follow, even though they don't represent the actual truth.

This is one of the most unintuitive things about these models. The obvious question here is why: why would Bard lie and say it had been trained on Gmail when it hadn't?

It has no motivations to lie or tell the truth. It's just trying to complete a sentence in a satisfactory way.

What does "satisfactory" mean? It's likely been guided by RLHF - Reinforcement Learning from Human Feedback - which the ChatGPT development process has excelled at. Human annotators help train the model by labelling responses as satisfactory or not. Google apparently recruited the entire company to help with this back in February.

I'm beginning to suspect that the perceived difference in quality between different language model AIs is influenced much more heavily by this fine-tuning level of training than it is by the underlying model size and quality itself. The enormous improvements the Alpaca fine-tuning brought to the tiny LLaMA 7B model have reinforced my thinking around this.

I think Bard's fine-tuning still has a long way to go.

Current information about itself couldn't have been in the training data

By definition, the model's training data must have existed before the model itself was trained. Most models have a documented cut-off date on their training data - for OpenAI's models that's currently September 2021; I don't believe Google have shared the cut-off date for the LaMDA model used by Bard.

If it was trained on content written prior to its creation, it clearly can't understand details about its own specific "self".

ChatGPT can answer pretty detailed questions about GPT-3, because that model had been iterated on and written about publicly for several years prior to its training cut-off. But questions about its most recent model, by definition, cannot be answered just using data that existed in its training set.

But Bard can consult data beyond its training!

Here's where things get a bit tricky.

ChatGPT is a "pure" interface to a model: when you interact with it, you're interacting with the underlying language model directly.

Google Bard and Microsoft Bing are different: they both include the ability to consult additional sources of information, in the form of the Google and Bing search indexes.

Effectively, they're allowed to augment their training data with additional information fetched from a search.

This sounds more complex than it actually is: effectively they can run an external search, get back some results, paste them invisibly into the ongoing conversation and use that new text to help answer questions.

(I've built a very simple version of this pattern myself a couple of times, described in How to implement Q&A against your documentation with GPT3, embeddings and Datasette and A simple Python implementation of the ReAct pattern for LLMs.)
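
Here's a simplified sketch of that pattern. The search_for() function is a hypothetical stand-in for whatever internal search API Bard or Bing actually call, and the OpenAI chat API is used purely for illustration:

import openai

def search_for(query: str) -> str:
    # Hypothetical: return the top search result snippets as plain text.
    raise NotImplementedError("plug a real search API in here")

def answer_with_search(question: str) -> str:
    snippets = search_for(question)
    # The fetched text is pasted invisibly into the conversation,
    # then the model answers using that extra context.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer the question using only the search results provided."},
            {"role": "user", "content": f"Search results:\n{snippets}\n\nQuestion: {question}"},
        ],
    )
    return response["choices"][0]["message"]["content"]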

As such, one would hope that Bard could offer a perfect answer to any question about itself. It should be able to do something like this:

User: Where does Bard's dataset come from?

Bard: (invisible): search Google for "Bard dataset"

Bard: (invisible): search results said: ... big chunk of text from the Google indexed documents ...

Bard: My underlying model LaMDA was trained on public dialog data and other public web documents.

Clearly it didn't do that in this case! Or if it did, it summarized the information it got back in a misleading way.

I expect Bard will have a much better answer for this question within a day or two - a great thing about running models with augmented data in this way is that you can improve their answers without having to train the underlying model again from scratch every time.

More reasons that LaMDA wouldn't be trained on Gmail

When I first saw the claim from that original screenshot, I was instantly suspicious.

Taking good care of the training data that goes into a language model is one of the most important and challenging tasks in all of modern AI research.

Using the right mix of content, with the right mix of perspectives, and languages, and exposure to vocabulary, is absolutely key.

If you train a model on bad sources of training data, you'll get a really badly behaved model.

The problem is that these models require far more text than any team of humans could ever manually review.

The LaMDA paper describes the training process like so:

LaMDA was pre-trained to predict the next token in a text corpus. Unlike previous dialog models trained on dialog data alone, we pre-trained LaMDA on a dataset created from public dialog data and other public web documents. Therefore, LaMDA can be used as a general language model prior to fine-tuning.

The pre-training dataset consists of 2.97B documents, 1.12B dialogs, and 13.39B dialog utterances, for a total of 1.56T words

1.56 trillion words!

Appendix E has more details:

The composition of the data is as follows: 50% dialogs data from public forums; 12.5% C4 data t5; 12.5% code documents from sites related to programming like Q&A sites, tutorials, etc; 12.5% Wikipedia (English); 6.25% English web documents; and 6.25% Non-English web documents.

"C4 data t5" I believe relates to Common Crawl.

So why not mix in Gmail too?

First, in order to analyze the training data you need to be able to have your research team view it - they need to run spot checks, and build and test filtering algorithms to keep the really vile stuff to a minimum.

At large tech companies like Google, the ability for members of staff to view private data held in trust for their users is very tightly controlled. It's not the kind of thing you want your machine learning training team to be poking around in... and if you work on those teams, even having the ability to access that kind of private data represents a substantial personal legal and moral risk.

Secondly, think about what could go wrong. What if a language model leaked details of someone's private life in response to a prompt from some other user?

This would be a PR catastrophe. Would people continue to trust Gmail or other Google products if they thought their personal secrets were being exposed to anyone who asked Bard a question? Would Google ever want to risk finding out the answer to that question?

The temptations of conspiratorial thinking

Are you still not convinced? Are you still suspicious that Google trained Bard on Gmail, despite both their denials and my logic as to why they wouldn't ever want to do this?

Ask yourself how much you want to believe that this story is true.

This modern AI stuff is deeply weird, and more than a little frightening.

The companies involved are huge, secretive and are working on technology which serious people have grave concerns about.

It's so easy to fall into the trap of conspiratorial thinking around this stuff. Especially since some of the conspiracies might turn out to be true!

I don't know how to best counter this most human of reactions. My best recommendation is to keep in mind that humans, like language models, are pattern matching machines: we jump to conclusions, especially if they might reinforce our previous opinions and biases.

If we're going to figure this stuff out together, we have to learn when to trust our initial instincts and when to read deeper and think harder about what's going on.


reb00ted

Open networks are more valuable than closed ones: the case for the Fediverse over Twitter and Facebook

Networks are everywhere, and they are valuable. Consider:

The road network. Imagine you’d need a different car for each piece of road. Life as we know it today would be utterly impossible without it.

The phone network. To pick two random use cases, without it you couldn’t call customer service or summon an ambulance.

The Visa credit card network (and its competitors). You would have to use cash instead, but arguably everybody accepting the same currency forms a network, too, and without that, we’d be back to barter. Which would be really inconvenient.

The world-wide-web. Some of us are old enough to remember the times before. No on-demand news, music, entertainment, chatting, reservations, e-commerce and all the others.

Generally, larger networks are more valuable than smaller networks: if you are the only person in the world who has a telephone, that phone is not worth much. If there are 2 people with phones, you can at least call each other. With 3 people, 3 different conversations can be had. With 4, it’s 6. With 100, it’s 100*99/2 = 4950 possible conversations, not counting multi-party conference calls. This quadratic growth of value with the size of the network applies to all networks, according to Metcalfe’s Law.
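
A quick sanity check of that arithmetic, assuming the usual n * (n - 1) / 2 formula for the number of distinct two-party conversations:

def possible_conversations(n: int) -> int:
    # Number of distinct pairs among n participants.
    return n * (n - 1) // 2

for n in (1, 2, 3, 4, 100):
    print(n, possible_conversations(n))
# 100 participants -> 4950 possible conversations, matching the figure above;
# Metcalfe's Law says the value of the network grows roughly with n squared.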

But in this post, I want to look at another dimension of networks that impacts their values, and that is whether the network is “open” or “closed”. There are lots of details one could consider, but for our purposes here, we define a spectrum with two extremes, and lots of gray in the middle:

Fully open: Anybody can connect to the network and do what they like, nobody's permission is required.

Entirely closed: Who may connect, and what they may do on the network, is closely controlled by the network proprietor.

(And lots of gray somewhere in between.)

There can be all sorts of network proprietors, but for simplicity in this post, assume it’s a single entity, like Meta.

Here are some examples:

From fully open to entirely closed: the public road system, versus roads on a private golf course; buyers and sellers using cash, versus buyers and sellers using Visa, versus an internal company accounting system; and the world-wide web, versus Facebook and Twitter, versus the old AOL walled garden.

If you had two networks that are otherwise identical in size, structure and function, except that one is open and the other one is closed, which of those two is more valuable?

Valuable to whom?

Fully open: valuable to the platform proprietor: no, it does not exist; valuable to network users: yes.

Entirely closed: valuable to the platform proprietor: yes; valuable to network users: yes.

It’s clear that if both networks produce the same amount of total value, the open network is more valuable to its users (such as individuals and organizations), for the simple reason that there is no network proprietor who needs to get paid! The value entirely accrues to the network participants.

But there’s more to it: Cory Doctorow recently coined the term enshittification to describe the inevitable march of platform/network proprietors, over time, to siphon off an ever-larger percentage of value generated by their network, to the detriment of its users. So the older a closed network, the less value it provides to its users. (Facebook users experience this every day: ever more ads, ever less genuine human engagement. While, for its business users, ad prices go up.) In an open network, on the other hand, the value that accrues to the users does not deteriorate over time.

And finally: could AOL, the online service, ever have provided the same value as the open web? Of course absolutely not! Open networks allow many more technologists and entrepreneurs to innovate in a gazillion different ways that would never be possible in a closed network. As closed networks mature, not only do they enshittify, but they also further and further discourage innovation by third parties, while the opposite is true for open networks.

Which brings us to the Fediverse. Which is more valuable today: the decentralized, open social network called the Fediverse (with its thousands of independently operated Mastodon, and other instances), or the poster closed social network, Facebook?

Clearly, Facebook. That’s because by all counts, Facebook today has on the order of 1,000 times the number of users of the Fediverse. Same for Twitter, which has maybe 100 times the number of users of the Fediverse.

But the network effect is the only thing the closed social platforms have going for themselves. All other parts of the value proposition favor the open social network alternative. Think of this:

The Fediverse extracts far less / no value: no annoying ads, no user manipulation favoring the business model of the network proprietor.

More functionality: it’s one interoperable social network with apps that emulate Twitter, Facebook, Medium, Reddit, Goodreads, and many others! In the same network.

It’s entirely open for innovation, and innovators are building furiously as we speak.

By its nature, it’s permanently locked open for innovation, and there is no danger of ever getting cut off from an API, facing sudden connection charges or drawing the wrath of a gazillionaire.

So by the time the Fediverse has sufficient numbers of users, it’s game over for proprietary social networks. This is true for both user categories in social networks: individuals and businesses. (I assume here that businesses and the Fediverse will find a way of accommodating each other, mainly by businesses behaving respectfully. If not, there simply will be no businesses in the Fediverse.) Individuals will get more value from the open network, and businesses will be far more profitable because there is no network operator to pay and many products and services pop up all the time that won’t in the closed network.

Note that the critical “sufficient number of users” can likely be substantially smaller than the user populations of those closed networks today, because all value accrues to users and it’s not diminished by value extraction from a network proprietor. For many of my own use cases, in many niches the Fediverse has critical mass today already.

Can the user advantage be overcome across the board? We will have to see. But if we add up just the numbers of active users of organizations that have publicly announced Fediverse plans as of the date that I’m writing this, or that even have products already in the market – Flipboard, Medium, Mozilla, Tumblr, Wordpress and more – we’re already in the high hundreds of millions.

Those numbers look awfully close to the user numbers necessary to overcome Metcalfe’s Law.

tldr; The time to take the Fediverse seriously, for individuals and businesses, is now. The value of the Fediverse for everybody is much higher than the value of any closed, proprietary social network – other than the proprietary social network companies themselves. And we won’t cry for them very much.

Note: FediForum is next week, where we’ll discuss this.


Simon Willison

Quoting Jack Clark

GPT-4, like GPT-3 before it, has a capability overhang; at the time of release, neither OpenAI or its various deployment partners have a clue as to the true extent of GPT-4's capability surface - that's something that we'll get to collectively discover in the coming years. This also means we don't know the full extent of plausible misuses or harms.

Jack Clark

Tuesday, 21. March 2023

John Philpin : Lifestream

“Big food sells us products that make us fat. Big pharma sells us products that makes us thin.”

💬 Dave Pell


Simon Willison

The Age of AI has begun

Bill Gates calls GPT-class large language models "the most important advance in technology since the graphical user interface". His essay here focuses on the philanthropy angle, mostly from the point of view of AI applications in healthcare, education and concerns about keeping access to these new technologies as equitable as possible.


John Philpin : Lifestream

What’s the preferred choice these days for a decent monitor to plug into a MacBook Pro that isn’t the apple one that requires the blood of the first born for purchase.


Simon Willison

Quoting Google Bard

Here are some absurdly expensive things you can do on a trip to Tokyo: Buy a golden toilet. There is a toilet in Tokyo that is made of gold and costs around 10 million yen. If you are looking for a truly absurd experience, you can buy this toilet and use it for your next bowel movement. [...]

Google Bard


Google Bard is now live

Google Bard launched today. There's a waiting list, but I made it through within a few hours of signing up, as did other people I've talked to. It's similar to ChatGPT and Bing - it's the same chat interface, and it can clearly run searches under the hood (though unlike Bing it doesn't tell you what it's looking for).


John Philpin : Lifestream

Who knew?


Simon Willison

Prompt Engineering

Extremely detailed introduction to the field of prompt engineering by Lilian Weng, who leads applied research at OpenAI.


Bing Image Creator comes to the new Bing

Bing Chat is integrating DALL-E directly into their interface, giving it the ability to generate images when prompted to do so.


Adobe made an AI image generator — and says it didn’t steal artists’ work to do it

Adobe Firefly is a brand new text-to-image model which Adobe claim was trained entirely on fully licensed imagery - either out of copyright, specially licensed or part of the existing Adobe Stock library. I'm sure they have the license, but I still wouldn't be surprised to hear complaints from artists who licensed their content to Adobe Stock who didn't anticipate it being used for model training.


OpenAI to discontinue support for the Codex API

OpenAI shutting off access to their Codex model - a GPT3 variant fine-tuned for code related tasks, but that was being used for all sorts of other purposes - partly because it had been in a beta phase for over a year where OpenAI didn't charge anything for it. This feels to me like a major strategic misstep for OpenAI: they're only giving three days notice, which is shaking people's confidence in them as a stable platform for building on at the very moment when competition from other vendors (and open source alternatives) is heating up.

Via Riley Goodside


John Philpin : Lifestream

March 21st.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary


Tiny | #mbmar


Testing - Just Ignore Me - For Now

Some random copy, Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius sollicitudin consequat. Etiam cursus blandit nisl accumsan fermentum. Phasellus faucibus velit non porttitor tincidunt. Ut quis erat ac nibh auctor tempus. Sed a metus sed dui pulvinar dapibus pulvinar et nisl. Sed consectetur, magna sit amet vestibulum dapibus, augue orci dignissim nulla, nec interdum ligula nibh at dui. In in dolor sit amet urna tempor pulvinar. In ut odio et ligula faucibus placerat. Proin pulvinar ex et sagittis molestie. Vestibulum dignissim faucibus diam, quis lacinia lacus mollis et. In fermentum ex quis consectetur semper. Nullam ut metus quam. Suspendisse potenti.

and then there is this footer!!

Heading

Examples correctly rendering.

A broad church of thinking that was initiated by John Philpin.

Modern computing device

Simon Willison

Quoting Andy Matuschak

Was on a plane yesterday, studying some physics; got confused about something and I was able to solve my problem by just asking alpaca-13B—running locally on my machine—for an explanation. Felt straight-up spooky.

Andy Matuschak


Ben Werdmüller

To All the Novels I Never Published

“William Faulkner wrote two failed novels (his words) before he famously gave up writing for other people and began to write just for himself. The books he wrote after that volta are the ones that students still read for classes around the world.” #Culture

[Link]


Scientists deliver ‘final warning’ on climate crisis: act now or it’s too late

“The comprehensive review of human knowledge of the climate crisis took hundreds of scientists eight years to compile and runs to thousands of pages, but boiled down to one message: act now, or it will be too late.” #Climate

[Link]


Starting to get cold VC outreach from ...

Starting to get cold VC outreach from funds I've never heard of who allegedly want to invest in the organization I work for, which is a non-profit newsroom that is wholly unsuitable for venture capital investment. So, uh, how's the VC market doing, guys?


John Philpin : Lifestream

If you are reading this on LinkedIN, it means that the matrix is broken.


March 20th.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary

Monday, 20. March 2023

Ben Werdmüller

Press conference statement: Brewster Kahle, Internet Archive

“The Internet is failing us. The Internet Archive has tried, along with hundreds of other libraries, to do something about it. A ruling in this case ironically can help all libraries, or it can hurt.” #Culture

[Link]


John Philpin : Lifestream

I will share the recording link when it is available … but just listening to Doc Searls’ Ostrom workshop with Cory Doctorow .. if he’s half right … good luck with the future.

UPDATE, March 21st, 2023

🔗 YouTube Link

“Alas, the beginning is missing due to a technical glitch.”

💬 Doc Searls


Ben Werdmüller

Meta's lack of vision

Axios reports that Facebook - sorry, Meta - is putting the metaverse on the back burner:

This week the firm announced a massive second round of layoffs. It recently killed off its Portal platform. And CEO Mark Zuckerberg, while not disavowing his metaverse dream, sounds more eager to talk about AI.

[…] “Our single largest investment is in advancing AI and building it into every one of our products,” Zuckerberg wrote. “Our leading work building the metaverse and shaping the next generation of computing platforms also remains central to defining the future of social connection.”

My working model for Facebook’s growth is that it is closely tied to the growth of the internet: as more and more people came online, Facebook was there to help them connect with each other. When the internet was new, there wasn’t much in the way of nuanced mainstream criticism of it as a platform. People were excited to connect and share and a minority thought it was the devil. There wasn’t much in-between.

These days, though, most people are already online. The internet isn’t new or exciting: it’s a utility that just about everybody has. Correspondingly, the ways society interacts with and on the internet have become more nuanced and thoughtful, just as the ways in which people have interacted with any media have always evolved.

Meta isn’t that thoughtful or nuanced a company, and this change in how the internet works in the context of most people’s lives has laid this lack of vision bare. The concept of the metaverse was driven by the hype over web3. Now that crypto has become less popular, many of the same people are excited about AI. In turn, AI will face a downturn, and they’ll be on to the next thing. This is expected and normal for the kinds of cash-driven charlatans who have swarmed Silicon Valley since venture capital rose to prominence, but it’s more surprising for the leadership of a multi-billion-dollar company. I’d expect it to have more vision, and it just doesn’t.

To be a little charitable to it, perhaps Meta is subject to the same kinds of winds that led to its layoffs. We know that layoffs aren’t helpful or profitable, but we also know that shareholders want to see them if other companies are doing them. So it’s perfectly reasonable to assume that shareholders may also see other companies pivot to web3 or AI and want Meta to do it too. A strong enough vision - something that carries shareholders and employees alike along - could counteract these expectations, but in the absence of that, the company is flotsam and jetsam to the hype cycle.

Meta didn’t invent social networking, and it didn’t invent the best social networking platform. It was in the right place at the right time, and was smart enough to buy Instagram when mobile internet was in its relative infancy. I’m sure it can be profitable off the base of those platforms for a long time to come. But at the same time, it’s not clear to me that lightning can strike twice for it without major leadership changes. Not when its strategy seems to be “throw shit at the wall”, and certainly not when the shit it’s throwing is the same shit everyone else is throwing.

I’ve been publicly critical of the company for 19 years now, but I want to make clear that there are lots of very talented people who work for it. Running a platform at this sort of scale requires a unique set of technology chops; it also requires all kinds of social and legislative infrastructure that other tech companies can barely even imagine. It’s not like it’s easy. And that’s how it found itself facilitating a genocide. Every single one of those people deserves stronger leadership. The internet does too: whether we like it or not, Meta has a leading role in how the internet develops, and it has not risen to that challenge. Over time, that will become clearer and clearer. It will be interesting to see what happens to it in the long term.

 

Photo by Glen Carrie on Unsplash


The Iraq War Began 20 Years Ago Today. Phil Donahue's MSNBC Show Was One Of The First Casualties

“The story I heard was that Welch had called to complain after he had been playing golf with some buddies and they began asking why MSNBC had some "anti-war kooks" on the air. I was never able to officially confirm the story, but the fact MSNBC employees believed it is an indication of the pressure they felt to conform to the national narrative.” Conforming to a “national narrative” is exactly what journalism should not be doing. #Media

[Link]


Antisemitic tweets soared on Twitter after Musk took over, study finds

““We’re seeing a sustained volume of antisemitic hate speech on the platform following the takeover,” said Jacob Davey, who leads research and policy on the far-right and hate movements at ISD.” #Twitter

[Link]


Why L.A. podcast firm Maximum Fun is going employee-owned

“On Monday, Thorn — who has co-owned Maximum Fun with his wife since it was incorporated 2011 — announced his company would become a workers cooperative, a novel business model in the podcast industry, but one that has been tried by many small businesses including bakeries and pizza places. The ownership will be shared equally by at least 16 people, including Thorn, the company said.” #Media

[Link]


John Philpin : Lifestream

If one more article tells me that I should take cold showers because ‘Rogan does’ … I swear ….


💬


HousePlant | #mbmar


Ben Werdmüller

Decentralized Social Media Rises as Twitter Melts Down

““You basically lose your entire social graph to go [to another social network], which is a super high wall,” says Tim Chambers, Principal and Co-Founder of Dewey Digital and administrator of the Mastodon server indieweb.social. “However, when things become sufficiently chaotic on platforms as Twitter is seeing now, that is a force strong enough to incite such migrations.”” #Technology

[Link]


Why the Press Failed on Iraq

“As the Bush administration began making its case for invading Iraq, too many Washington journalists, caught up in the patriotic fervor after 9/11, let the government’s story go unchallenged.” #Media

[Link]


Identity Praxis, Inc.

The Personal Data & Identity Meeting of The Waters: A New Path To Value

I recently engaged in a compelling discussion with Kevin Clark and Mike McGuire from CE neWorld about the crucial topic of organizational and personal identity. We explore the significance of personal data and the choices we must make as business leaders. As we dive deeper into this important conversation, I was curious to understand how individuals can solve “The Identity Nexus equation”.

Discussion Abstract:

As the world becomes increasingly digitized, personal data and identity management face new challenges from consumer opinion, regulations, and technology. In this informative session, I discuss the convergence of traditional organizational and people-centric approaches to personal data and identity management. I delve into the five domains of personal data, the dynamic landscape of personal data regulations, and new technologies that empower individuals to control and protect their data. I also introduce The Identity Nexus, and explore innovative strategies and tactics through a three-horizon framework that businesses can use to increase revenue, reduce costs, enhance customer experience, combat cybercrime, and comply with industry regulations. Check out the video to stay ahead of the curve in personal data and identity management.

The post The Personal Data & Identity Meeting of The Waters: A New Path To Value appeared first on Identity Praxis, Inc..


Damien Bod

A first look at Blazor and .NET 8

In this post, Blazor and .NET 8 are used to implement a simple website. I took a .NET 7 project, updated it to .NET 8 and tried out some of the new features in .NET 8.

Code: https://github.com/damienbod/Hostedblazor8Aad

Setup

The project was set up using a .NET 7 project which implements Azure AD authentication using best practice with a backend for frontend architecture, and then updated to .NET 8. The security is implemented in the secure backend and the Blazor components are kept simple. The Blazor.BFF.AzureAD.Template template was used for this which takes care of all the project setup. At present no Microsoft template exists for implementing the security in this recommended way. The template adds the security headers as best it can.

The project was updated to .NET 8 and all the Nuget packages as well.

<TargetFramework>net8.0</TargetFramework>

Microsoft.Identity.Web is used to implement the OpenID Connect confidential client. An Azure App registration was created for this with the Web client and a user secret. You could also use a certificate instead of a secret which improves the token request in the second step of the OIDC code flow authentication.

The application was started and like in .NET 7 we still have the annoying console warnings because the debugging tools try to add inline scripts to our code. The inline scripts are blocked by the CSP and this should be required for all deployments. I like to develop my application as close as possible to my target deployments, so I always develop with the best possible CSP and HTTPS like in the deployed applications. This prevents having to fix CSP issues when we go live or having to fix links to CSS CDNs or whatever.

We also have a warning in the console logs looking for a JS map file from something we do not use. No idea where it comes from or what adds it to my development setup.

Creating Random data from Arrays

In .NET 8, GetItems() was added to System.Random. I decided to create my test data using this. I created an array of objects and returned this as a span.

public static ReadOnlySpan<MyGridData> GetData()
{
    return _mydata.AsSpan();
}

The Random.Shared.GetItems method can be used to return n items from my span in a random way. I set this to 24 items, which can then be displayed in the grid.

[HttpGet]
public IEnumerable<MyGridData> Get()
{
    return Random.Shared.GetItems(MyData.GetData(), 24);
}

Using QuickGrid in Blazor

The QuickGrid component was also added in .NET 8. This provides simple Grid features.

The Nuget package needs to be added to the client (WASM) project.

Microsoft.AspNetCore.Components.QuickGrid

The QuickGrid can be used in any Razor page in the WASM application. You need to add the using for the Grid and you can create the grid as required. The Grid has good documentation here:

https://aspnet.github.io/quickgridsamples

@page "/directapi" @using HostedBlazorAad.Shared @using Microsoft.AspNetCore.Components.QuickGrid @inject IAntiforgeryHttpClientFactory httpClientFactory @inject IJSRuntime JSRuntime <h3>QuickGrid display using data Direct API</h3> @if (myGridData == null) { <p><em>Loading...</em></p> } else { <hr /> <QuickGrid Items="@FilteredItems" Pagination="@pagination"> <PropertyColumn Property="@(p => p.Id)" Sortable="true" /> <PropertyColumn Property="@(c => c.Name)" Sortable="true" Class="name"> <ColumnOptions> <div class="search-box"> <input type="search" autofocus @bind="nameFilter" @bind:event="oninput" placeholder="name..." /> </div> </ColumnOptions> </PropertyColumn> <PropertyColumn Property="@(p => p.Colour)" Sortable="true" /> </QuickGrid> <Paginator State="@pagination" /> } @code { private IEnumerable<MyGridData>? myApiData; private IQueryable<MyGridData> myGridData = new List<MyGridData>().AsQueryable(); private PaginationState pagination = new PaginationState { ItemsPerPage = 8 }; private string nameFilter = string.Empty; GridSort<MyGridData> rankSort = GridSort<MyGridData> .ByDescending(x => x.Name) .ThenDescending(x => x.Colour) .ThenDescending(x => x.Id); IQueryable<MyGridData>? FilteredItems => myGridData.Where(x => x.Name.Contains(nameFilter, StringComparison.CurrentCultureIgnoreCase)); protected override async Task OnInitializedAsync() { var client = await httpClientFactory.CreateClientAsync(); var myApiData = await client.GetFromJsonAsync<MyGridData[]>("api/DirectApi"); if (myApiData != null) myGridData = myApiData.AsQueryable(); } }

The 24 random items are displayed in the grid using paging and sorting, with eight items per page. This is client side and not server side paging, which is important to consider if using large amounts of data.

Notes

Blazor and .NET 8 will change a lot, and new templates and project types are being created for Blazor and .NET 8. Blazor United, or whatever it will be called after the release, will be a new type of Blazor project, and the three-project structure will probably be reduced down to one. I hope the security will be improved and I don’t understand why Microsoft still does security in the WASM part of the application when it is hosted in an ASP.NET Core backend.

Links

https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-8

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://dotnet.microsoft.com/en-us/download/visual-studio-sdks

https://aspnet.github.io/quickgridsamples

Sunday, 19. March 2023

John Philpin : Lifestream

Analog | #mbmar


March 19th.

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary


“Everything we do is scrappy. It’s fast, it’s inexpensive, character over spectacle.”

💬 Ryan Reynolds

Character over spectacle … nice.


One of those fireside chats you have …. you know how it is, end of the day with some good sessions and the expert is brought in to sprinkle his ‘words of wisdom’ on the disciples that are left in the audience. Chat was going well until the interviewer stops him and utters the words …

“Could I double-click on that.”

Say what now?


Ben Werdmüller

War and peace

Revisiting my grandfather’s obituary:

‌But this is not Sidney’s first obituary. In May 1945 when he returned home from a four-month internment as a POW in Hitler’s Germany, the twenty-year old Sidney was surprised to find that his hometown Pennsylvania newspaper had published an account of his death at the hand of German troops during the Battle of the Bulge in December of the previous year. Considering that some 75,000 American soldiers did perish during that battle, that Sidney was in fact on the front lines, and that the German soldiers were reportedly under orders to take no prisoners, this was not an irrational conclusion; however, it turned out to be an erroneous one. Sidney was one of the lucky few who were captured, shipped to Germany and survived starvation, disease and Allied bombing of the prison camps until being liberated by General Patton’s army.

‌[…] Sidney’s father David Monas had first emigrated to the United States from Ukraine in 1913, primarily to avoid conscription in the Tsar’s army. David found work in a clothing factory, where he caught the attention of early union organizers due to his ability to communicate in Yiddish, Russian, and English. Following the 1917 revolution in Russia, David and his brother Harry traveled the long way via Japan and Siberia back to Ukraine, arriving in the midst of the Russian Civil War. David was promptly elected to the local soviet; but when the notoriously anti-Semitic White Army began to close in on their region, David, Harry and David’s new wife Eva emigrated/escaped once again to the United States. After an unsuccessful attempt to run a paint business in Brooklyn, David had a long and successful career as a union organizer and ultimately General Manager of the Pennsylvania Joint Board of the Amalgamated Shirt Workers.

I’ve been very lucky to live in a time of relative peace: going to war is not something I’ve ever had to worry about. I hope our child experiences the same. I hope every child, one day, can experience the same.

Saturday, 18. March 2023

Simon Willison

A conversation about prompt engineering with CBC Day 6

I'm on Canadian radio this morning! I was interviewed by Peter Armstrong for CBC Day 6 about the developing field of prompt engineering.

You can listen here on the CBC website.

CBC also published this article based on the interview, which includes some of my answers that didn't make the audio version: These engineers are being hired to get the most out of AI tools without coding.

Here's my own lightly annotated transcript (generated with the help of Whisper).

Peter: AI Whisperers, or as they're more properly known, Prompt Engineers, are part of a growing field of humans who make their living working with AI

Their job is to craft precise phrases to get a desired outcome from an AI

Some experts are skeptical about how much control AI whisperers actually have

But more and more companies are hiring these prompt engineers to work with AI tools

There are even online marketplaces where freelance engineers can sell the prompts they've designed

Simon Willison is an independent researcher and developer who has studied AI prompt engineering

Good morning, Simon. Welcome to Day 6

Simon: Hi, it's really great to be here

Peter: So this is a fascinating and kind of perplexing job

What exactly does a prompt engineer do?

Simon: So we have these new AI models that you can communicate to with English language

You type them instructions in English and they do the thing that you ask them to do, which feels like it should be the easiest thing in the world

But it turns out actually getting great results out of these things, using these for the kinds of applications people want to sort of summarization and extracting facts requires a lot of quite deep knowledge as to how to use them and what they're capable of and how to get the best results out of them

So, prompt engineering is essentially the discipline of becoming an expert in communicating with these things

It's very similar to being a computer programmer except weird and different in all sorts of new ways that we're still trying to understand

Peter: You've said in some of your writing and talking about this that it's important for prompt engineers to resist what you call superstitious thinking

What do you mean by that?

My piece In defense of prompt engineering talks about the need to resist superstitious thinking.

Simon: It's very easy when talking to one of these things to think that it's an AI out of science fiction, to think that it's like the Star Trek computer and it can understand and do anything

And that's very much not the case

These systems are extremely good at pretending to be all powerful, all knowing things, but they have massive, massive flaws in them

So it's very easy to become superstitious, to think, oh wow, I asked it to read this web page, I gave it a link to an article and it read it

It didn't read it!

This is a common misconception that comes up when people are using ChatGPT. I wrote about this and provided some illustrative examples in ChatGPT can’t access the internet, even though it really looks like it can.

A lot of the time it will invent things that look like it did what you asked it to, but really it's sort of imitating what would look like a good answer to the question that you asked it

Peter: Well, and I think that's what's so interesting about this, that it's not sort of core science computer programming

There's a lot of almost, is it fair to call it intuition

Like what makes a prompt engineer good at being a prompt engineer?

Simon: I think intuition is exactly right there

The way you get good at this is firstly by using these things a lot

It takes a huge amount of practice and experimentation to understand what these things can do, what they can't do, and just little tweaks in how you talk to them might have huge effect in what they say back to you

Peter: You know, you talked a little bit about the assumption that we can't assume this is some all-knowing futuristic AI that knows everything, and yet we already have people calling these the AI whisperers, which to my ears sounds a little bit mystical

How much of this is, you know, magic as opposed to science?

Simon: The comparison to magic is really interesting, because when you're working with these it really can feel like you're a sort of magician: you sort of cast spells at it, you don't fully understand what they're going to do, and it reacts sometimes well and sometimes it reacts poorly

And I've talked to AI practitioners who kind of talk about collecting spells for their spell book

But it's also a very dangerous comparison to make because magic is, by its nature, impossible for people to comprehend and can do anything

And these AI models are absolutely not that

See Is the AI spell-casting metaphor harmful or helpful? for more on why magic is a dangerous comparison to make!

Fundamentally, they're mathematics

And you can understand how they work and what they're capable of if you put the work in

Peter: I have to admit, when I first heard about this, I thought it was a kind of a made up job or a bit of a scam to just get people involved

But the more I've read on it, the more I've understood that this is a real skill

But I do think back to, it wasn't all that long ago that we had Google search specialists that helped you figure out how to search for something on Google

Now we all take it for granted because we can do it

I wonder if you think, do prompt engineers have a future or are we all just going to eventually be able to catch up with them and use this AI more effectively?

Simon: I think a lot of prompt engineering will become a skill that people develop

Many people in their professional and personal lives are going to learn to use these tools, but I also think there's going to be space for expertise

There will always be a level at which it's worth investing sort of full-time experience in solving some of these problems, especially for companies that are building an entire product around these AI engines under the hood

Peter: You know, this is a really exciting time

I mean, it's a really exciting week

We're getting all this new stuff

It's amazing to watch people use it and see what they can do with it

And I feel like my brain is split

On the one hand, I'm really excited about it

On the other hand, I'm really worried about it

Are you in that same place?

And what are the things you're excited about versus the things that you're worried about?

Simon: I'm absolutely in the same place as you there

This is both the most exciting and the most terrifying technology I've ever encountered in my career

Something I'm personally really excited about right now is developments in being able to run these AIs on your own personal devices

I have a series of posts about this now, starting with Large language models are having their Stable Diffusion moment where I talk about first running a useful large language model on my own laptop.

Right now, if you want to use these things, you have to use them against cloud services run by these large companies

But there are increasing efforts to get them to scale down to run on your own personal laptops or even on your own personal phone

I ran a large language model that Facebook Research released just at the weekend on my laptop for the first time, and it started spitting out useful results

And that felt like a huge moment in terms of sort of the democratization of this technology, putting it into people's hands and meaning that things where you're concerned about your own privacy and so forth suddenly become feasible because you're not talking to the cloud, you're talking to the sort of local model

Peter: You know, if I typed into one of these chatbots: should I be worried about the rise of AI?

It would absolutely tell me not to be

If I ask you the same question, should we be worried and should we be spending more time figuring out how this is going to seep its way into various corners of our lives?

Simon: I think we should absolutely be worried because this is going to have a major impact on society in all sorts of ways that we don't predict and some ways that we can predict

I'm not worried about the sort of science fiction scenario where the AI breaks out of my laptop and takes over the world

But there are many very harmful things you can do with a machine that can imitate human beings and that can produce realistic human text

My thinking on this was deeply affected by Emily M. Bender, who observed that "applications that aim to believably mimic humans bring risk of extreme harms" as highlighted in this fascinating profile in New York Magazine.

The fact that anyone can churn out very convincing but completely made up text right now will have a major impact in terms of how much you can trust the things that you're reading online

If you read a review of a restaurant, was it written by a human being or did somebody fire up an AI model and generate 100 positive reviews all in one go?

So there are all sorts of different applications to this

Some are definitely bad, some are definitely good

And seeing how this all plays out is something that I think society will have to come to terms with over the next few months and the next few years

Peter: Simon, really appreciate your insight and just thanks for coming with us on the show today

Simon: Thanks very much for having me

For more related content, take a look at the prompt engineering and generative AI tags on my blog.


Identity Praxis, Inc.

Personal Data and Digital Identity, and a New Path to Value

I wrapped up MEF World Conference with Ann Li discussing how personal data and identity are fueling the world’s economies. We review JP Morgan’s Vision for the future of personal data and identity management and a new path to value.

In today’s increasingly digital world, personal data and identity are powerful tools that organizations use to gain market insights, inform their go-to-market strategies, enhance customer experiences, and mitigate fraudulent activities. However, changes in consumer opinions, regulations, and technology are placing pressure on businesses to rethink their approach to managing personal data and identity. To stay competitive and build trust with customers, businesses must prioritize protecting and managing personal data and identity in a transparent, ethical, and secure manner

We delve into the significance of prioritizing the individual and enhancing customer experience. We also explore how emerging decentralized technologies can bring significant commercial and operational benefits to businesses when strategically embraced, as well as the ecosystem engagement strategies that bold leaders can adopt to stay ahead of ever-changing market demands.

The post Personal Data and Digital Identity, and a New Path to Value appeared first on Identity Praxis, Inc..


Ben Werdmüller

Anti-Racist Starter Pack

A list of anti-racist books, articles, documentaries, podcasts, and interviews. #Society

[Link]

Friday, 17. March 2023

Simon Willison

Fine-tune LLaMA to speak like Homer Simpson

Replicate spent 90 minutes fine-tuning LLaMA on 60,000 lines of dialog from the first 12 seasons of the Simpsons, and now it can do a good job of producing invented dialog from any of the characters from the series. This is a really interesting result: I've been skeptical about how much value can be had from fine-tuning large models on just a tiny amount of new data, assuming that the new data would be statistically irrelevant compared to the existing model. Clearly my mental model around this was incorrect.


The Unpredictable Abilities Emerging From Large AI Models

Nice write-up of the most interesting aspect of large language models: the fact that they gain emergent abilities at certain "breakthrough" size points, and no-one is entirely sure they understand why.

Via Hacker News


Ben Werdmüller

Negativity drives online news consumption

“The tendency for individuals to attend to negative news reflects something foundational about human cognition—that humans preferentially attend to negative stimuli across many domains.” #Media

[Link]


The collapse of Silicon Valley Bank hit women- and minority-owned start-ups the hardest

“Silicon Valley Bank was one of the few that would give venture-backed start-ups led by women, people of color and LGBTQ+ people a line of credit. After the bank’s collapse, they are now being hit the hardest.” #Business

[Link]


Simon Willison

Could you train a ChatGPT-beating model for $85,000 and run it in a browser?

I think it's now possible to train a large language model with similar functionality to GPT-3 for $85,000. And I think we might soon be able to run the resulting model entirely in the browser, and give it capabilities that leapfrog it ahead of ChatGPT.

This is currently wild speculation on my part, but bear with me because I think this is worth exploring further.

Large language models with GPT-3-like capabilities cost millions of dollars to build, thanks to the cost of running the expensive GPU servers needed to train them. Whether you are renting or buying those machines, there are still enormous energy costs to cover.

Just one example of this: the BLOOM large language model was trained in France with the support of the French government. The cost was estimated at $2-5M; it took almost four months to train and boasts about its low carbon footprint because most of the power came from a nuclear reactor!

[ Fun fact: as of a few days ago you can now run the openly licensed BLOOM on your own laptop, using Nouamane Tazi's adaptive copy of the llama.cpp code that made that possible for LLaMA ]

Recent developments have made me suspect that these costs could be made dramatically lower. I think a capable language model can now be trained from scratch for around $85,000.

It's all about that LLaMA

The LLaMA plus Alpaca combination is the key here.

I wrote about these two projects previously:

Large language models are having their Stable Diffusion moment discusses the significance of LLaMA

Stanford Alpaca, and the acceleration of on-device large language model development describes Alpaca

To recap: LLaMA by Meta research provided a GPT-3 class model trained entirely on documented, available public training information, as opposed to OpenAI's continuing practice of not revealing the sources of their training data.

This makes the model training a whole lot more likely to be replicable by other teams.

The paper also describes some enormous efficiency improvements they made to the training process.

The LLaMA research was still extremely expensive though. From the paper:

... we estimate that we used 2048 A100-80GB for a period of approximately 5 months to develop our models

My friends at Replicate told me that a simple rule of thumb for A100 cloud costs is $1/hour.

2048 * 5 * 30 * 24 = $7,372,800

But... that $7M was the cost to both iterate on the model and to train all four sizes of LLaMA that they tried: 7B, 13B, 33B, and 65B.

Table 15 from the paper shows the cost of training each model.

This shows that the smallest model, LLaMA-7B, was trained on 82,432 hours of A100-80GB GPUs, costing 36MWh and generating 14 tons of CO2.

(That's about 28 people flying from London to New York.)

Going by the $1/hour rule of thumb, this means that provided you get everything right on your first run you can train a LLaMA-7B scale model for around $82,432.
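As a quick sanity check on that arithmetic, here is the back-of-the-envelope calculation as a few lines of Python (the $1/hour figure is just the rule of thumb above, not a quoted price):

# Back-of-the-envelope training costs, assuming ~$1 per A100-80GB GPU-hour
a100_dollars_per_hour = 1.00

full_research_run_hours = 2048 * 5 * 30 * 24   # 2,048 GPUs for roughly 5 months
llama_7b_hours = 82_432                        # GPU-hours reported for LLaMA-7B alone

print(f"Full research run: ${full_research_run_hours * a100_dollars_per_hour:,.0f}")   # ~$7,372,800
print(f"Single LLaMA-7B run: ${llama_7b_hours * a100_dollars_per_hour:,.0f}")          # ~$82,432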

Upgrading to Alpaca

You can run LLaMA 7B on your own laptop (or even on a phone), but you may find it hard to get good results out of. That's because it hasn't been instruction tuned, so it's not great at answering the kind of prompts that you might send to ChatGPT or GPT-3 or 4.

Alpaca is the project from Stanford that fixes that. They fine-tuned LLaMA on 52,000 instructions (of somewhat dubious origin) and claim to have gotten ChatGPT-like performance as a result... from that smallest 7B LLaMA model!

You can try out their demo (update: no you can't, "Our live demo is suspended until further notice") and see for yourself that it really does capture at least some of that ChatGPT magic.

The best bit? The Alpaca fine-tuning can be done for less than $100. The Replicate team have repeated the training process and published a tutorial about how they did it.

Other teams have also been able to replicate the Alpaca fine-tuning process, for example antimatter15/alpaca.cpp on GitHub.

We are still within our $85,000 budget! And Alpaca - or an Alpaca-like model using different fine tuning data - is the ChatGPT on your own device model that we've all been hoping for.

Could we run it in a browser?

Alpaca is effectively the same size as LLaMA 7B - around 3.9GB (after 4-bit quantization ala llama.cpp). And LLaMA 7B has already been shown running on a whole bunch of different personal devices: laptops, Raspberry Pis (very slowly) and even a Pixel 5 phone at a decent speed!
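That 3.9GB figure lines up with a rough estimate you can do yourself. Here's an illustrative sketch; note that llama.cpp's 4-bit formats also store per-block scaling factors, so the real file ends up somewhat larger than the bare 4 bits per weight:

# Rough size estimate for a 4-bit quantized 7B parameter model
params = 7e9
bits_per_weight = 4                           # plus per-block scale overhead in practice

size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{size_gb:.1f} GB before overhead")   # ~3.5 GB; the actual llama.cpp file is ~3.9 GB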

The next frontier: running it in the browser.

I saw two tech demos yesterday that made me think this may be possible in the near future.

The first is Transformers.js. This is a WebAssembly port of the Hugging Face Transformers library of models - previously only available for server-side Python.

It's worth spending some time with their demos, which include some smaller language models and some very impressive image analysis models too.

The second is Web Stable Diffusion. This team managed to get the Stable Diffusion generative image model running entirely in the browser as well!

Web Stable Diffusion uses WebGPU, a still emerging standard that's currently only working in Chrome Canary. But it does work! It rendered this image of two raccoons eating a pie in the forest for me in 38 seconds.

The Stable Diffusion model this loads into the browser is around 1.9GB.

LLaMA/Alpaca at 4bit quantization is 3.9GB.

The sizes of these two models are similar enough that I would not be at all surprised to see an Alpaca-like model running in the browser in the not-too-distant future. I wouldn't be surprised if someone is working on that right now.

Now give it extra abilities with ReAct

A model running in your browser that behaved like a less capable version of ChatGPT would be pretty impressive. But what if it could be MORE capable than ChatGPT?

The ReAct prompt pattern is a simple, proven way of expanding a language model's abilities by giving it access to extra tools.

Matt Webb explains the significance of the pattern in The surprising ease and effectiveness of AI in a loop.

I got it working with a few dozen lines of Python myself, which I described in A simple Python implementation of the ReAct pattern for LLMs.

Here's the short version: you tell the model that it must think out loud and now has access to tools. It can then work through a question like this:

Question: Population of Paris, squared?

Thought: I should look up the population of paris and then multiply it

Action: search_wikipedia: Paris

Then it stops. Your code harness for the model reads that last line, sees the action and goes and executes an API call against Wikipedia. It continues the dialog with the model like this:

Observation: <truncated content from the Wikipedia page, including the 2,248,780 population figure>

The model continues:

Thought: Paris population is 2,248,780 I should square that

Action: calculator: 2248780 ** 2

Control is handed back to the harness, which passes that to a calculator and returns:

Observation: 5057011488400

The model then provides the answer:

Answer: The population of Paris squared is 5,057,011,488,400

Adding new actions to this system is trivial: each one can be a few lines of code.
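To make that concrete, here is a minimal illustrative harness in Python. It's a sketch of the pattern, not the implementation from the post linked above: the llm callable, the toy action functions and the regular expression are stand-ins you would replace with a real model client and real tools.

import re

# Toy action registry: each tool is just a Python function (placeholders for illustration)
ACTIONS = {
    "search_wikipedia": lambda q: f"<summary of the Wikipedia page for {q}>",
    "calculator": lambda expr: str(eval(expr)),  # fine for a demo, never eval untrusted input
}

ACTION_RE = re.compile(r"^Action: (\w+): (.*)$")

def run(question, llm, max_turns=5):
    # llm(prompt) is assumed to return the model's next chunk of text,
    # stopping after it emits an Action: or Answer: line
    prompt = f"Question: {question}\n"
    for _ in range(max_turns):
        output = llm(prompt)
        prompt += output
        if "Answer:" in output:
            return output.split("Answer:", 1)[1].strip()
        match = next(
            (m for m in (ACTION_RE.match(line) for line in output.splitlines()) if m), None
        )
        if match is None:
            return output.strip()
        action, argument = match.groups()
        observation = ACTIONS[action](argument)      # hand control to the tool...
        prompt += f"\nObservation: {observation}\n"  # ...and feed the result back to the model
    raise RuntimeError("Gave up after too many turns without an Answer")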

But as the ReAct paper demonstrates, adding these capabilities to even an under-powered model (such as LLaMA 7B) can dramatically improve its abilities, at least according to several common language model benchmarks.

This is essentially what Bing is! It's GPT-4 with the added ability to run searches against the Bing search index.

Obviously if you're going to give a language model the ability to execute API calls and evaluate code you need to do it in a safe environment! Like for example... a web browser, which runs code from untrusted sources as a matter of habit and has the most thoroughly tested sandbox mechanism of any piece of software we've ever created.

Adding it all together

There are a lot more groups out there that can afford to spend $85,000 training a model than there are that can spend $2M or more.

I think LLaMA and Alpaca are going to have a lot of competition soon, from an increasing pool of openly licensed models.

A fine-tuned LLaMA scale model is leaning in the direction of a ChatGPT competitor already. But... if you hook in some extra capabilities as seen in ReAct and Bing, even that little model should be able to way outperform ChatGPT in terms of actual ability to solve problems and do interesting things.

And we might be able to run such a thing on our phones... or even in our web browsers... sooner than you think.

And it's only going to get cheaper

Tobias Lütke on Twitter:

H100s are shipping and you can half this again. Twice (or more) if fp8 works.

- tobi lutke (@tobi) March 17, 2023

The H100 is the new Tensor Core GPU from NVIDIA, which they claim can offer up to a 30x performance improvement over their current A100s.


Ben Werdmüller

Believe it or not, the Amish are loving electric bikes

“It’s a lot quicker to jump on your bike and go into town than it is to bring your horse into the barn, harness it to the buggy, and go. It’s a lot quicker and you travel faster too.” #Technology

[Link]


Simon Willison

Web Stable Diffusion

I just ran the full Stable Diffusion image generation model entirely in my browser, and used it to generate an image (of two raccoons eating pie in the woods, see "via" link). I had to use Google Chrome Canary since this depends on WebGPU which still isn't fully rolled out, but it worked perfectly.

Via @simon on Mastodon


The surprising ease and effectiveness of AI in a loop

Matt Webb on the langchain Python library and the ReAct design pattern, where you plug additional tools into a language model by teaching it to work in a "Thought... Act... Observation" loop where the Act specifies an action it wishes to take (like searching Wikipedia) and an extra layer of software that carries out that action and feeds back the result as the Observation. Matt points out that the ChatGPT 1/10th price drop makes this kind of model usage enormously more cost effective than it was before.

Via @intrcnnctd

Thursday, 16. March 2023

Simon Willison

Transformers.js

Hugging Face Transformers is a library of Transformer machine learning models plus a Python package for loading and running them. Transformers.js provides a JavaScript alternative interface which runs in your browser, thanks to a set of precompiled WebAssembly binaries for a selection of models. This interactive demo is incredible: in particular, try running the Image classification with google/vit-base-patch16-224 (91MB) model against any photo to get back labels representing that photo. Dropping one of these models onto a page is as easy as linking to a hosted CDN script and running a few lines of JavaScript.


Ben Werdmüller

No, my Japanese American parents were not 'interned' during WWII. They were incarcerated

“In a historic decision aimed at accuracy and reconciliation, the Los Angeles Times announced Thursday that it would drop the use of “internment” in most cases to describe the mass incarceration of 120,000 people of Japanese ancestry during World War II.” Let’s call them what they were: concentration camps. #Media

[Link]


Bandcamp Employees Unionize for Fairer Conditions

““Many of us work at Bandcamp because we agree with the values the company upholds for artists: fair pay, transparent policies, and using the company’s social power to uplift marginalized communities,” says Cami Ramirez-Arau, who has worked as a Support Specialist at Bandcamp for two years. “We have organized a union to ensure that Bandcamp treats their workers with these same values.”” #Labor

[Link]


Simon Willison

Train and run Stanford Alpaca on your own machine

The team at Replicate managed to train their own copy of Stanford's Alpaca - a fine-tuned version of LLaMA that can follow instructions like ChatGPT. Here they provide step-by-step instructions for recreating Alpaca yourself - running the training needs one or more A100s for a few hours, which you can rent through various cloud providers.


Not By AI: Your AI-free Content Deserves a Badge

A badge for non-AI generated content. Interesting to note that they set the cutoff at 90%: "Use this badge if your article, including blog posts, essays, research, letters, and other text-based content, contains less than 10% of AI output."

Via Hacker News


Ben Werdmüller

Fox News discourse

At this point I’m not sure how helpful it is to be publicly outraged over Fox News. There’s the catharsis of it, sure, but I’m increasingly of the mind that we shouldn’t give it oxygen.

Lately it’s been their redefinition of the word “woke” and, this week, the ludicrous idea that Silicon Valley Bank imploded because of DEI initiatives. It’s also been the revelation, through leaks related to their voting machines lawsuit, that they don’t mean what they say and privately hate Donald Trump. These people are unprincipled charlatans who prey on their audience, but we know that; we’ve always known that.

And maybe it’s worth saying, again and again, because we don’t want anyone to forget that basic truth. I don’t want to argue for letting them get away with it. But they also are getting away with it, and in some ways I think the better solution is to do our own thing and show and tell that it’s better.

We’re all imperfect. Over the last year, I’ve been more imperfect than most. But all of us, however imperfect, can stand up and craft our own message - not just in response to Fox News or bigotry, but in a future-facing way that paints the future we actually want to live in. I think that’s powerful, and crucially, will change more minds.


Best printer 2023: just buy this Brother laser printer everyone has, it’s fine

“Here’s the best printer in 2023: the Brother laser printer that everyone has. Stop thinking about it and just buy one. It will be fine!” #Technology

[Link]


Simon Willison

Quoting Jeonghwan Kim

As an NLP researcher I'm kind of worried about this field after 10-20 years. Feels like these oversized LLMs are going to eat up this field and I'm sitting in my chair thinking, "What's the point of my research when GPT-4 can do it better?"

Jeonghwan Kim


Quoting Me

I expect GPT-4 will have a LOT of applications in web scraping

The increased 32,000 token limit will be large enough to send it the full DOM of most pages, serialized to HTML - then ask questions to extract data

Or... take a screenshot and use the GPT4 image input mode to ask questions about the visually rendered page instead!

Might need to dust off all of those old semantic web dreams, because the world's information is rapidly becoming fully machine readable

Me


bloomz.cpp

Nouamane Tazi Adapted the llama.cpp project to run against the BLOOM family of language models, which were released in July 2022 and trained in France on 45 natural languages and 12 programming languages using the Jean Zay Public Supercomputer, provided by the French government and powered using mostly nuclear energy.

It's under the RAIL license which allows (limited) commercial use, unlike LLaMA.

Nouamane reports getting 16 tokens/second from BLOOMZ-7B1 running on an M1 Pro laptop.

Via @nouamanetazi

Wednesday, 15. March 2023

Ben Werdmüller

The more services incorporate GPT, the more ...

The more services incorporate GPT, the more input boxes on the web get fed through OpenAI's servers, and the more it becomes both a single point of failure and an obvious way to capture data from across the web.


Jon Udell

Mastodon timelines for teams

Welcome to episode 17 of the Steampipe+Mastodon series, in which we introduce a new subplot: timeline history. So far, the examples I’ve shown and discussed work with current timelines. We’ve seen SQL queries that fetch results from realtime calls to the Mastodon API, and Steampipe dashboards that display those results. But Steampipe isn’t just an API siphon, it’s also a Postgres database. As such it supports the transient tables created by Steampipe’s foreign data wrapper and plugins, but also enables you to create your own native tables as well. And you can use those native tables to accumulate data from the transient foreign tables.

Because saving and searching Mastodon data is a controversial topic in the fediverse — none of us wants to recapitulate Big Social — I’ve focused thus far on queries that explore recent Mastodon flow, of which there are plenty more to write. But nobody should mind me remembering my own home timeline, so a few weeks ago I made a tool to read it hourly and add new toots to a Postgres table.

Before you can add any toots to a table, of course, you’ve got to create that table. Here’s how I made this one.

create table mastodon_home_timeline as select * from mastodon_toot_home limit 200

Once created, the table can be updated with new toots like so.

with data as (
  select
    account,
    -- more
    -- columns
    username
  from mastodon_toot_home
  limit 200
)
insert into mastodon_home_timeline (
  account,
  -- more
  -- columns
  username
)
select * from data
where id not in (
  select t.id from mastodon_home_timeline t
)

To run that query from a crontab, on a machine where Steampipe is installed, save it as mastodon_home_timeline.sql, then schedule it.

15 * * * * cd /home/jon/mastodon; steampipe query mastodon_home_timeline.sql

That’s it! Now the number reported by select count(*) from mastodon_home_timeline is growing hourly.

I’ve only been collecting toots for a couple of weeks, and haven’t begun to explore that data yet; we’ll see what happens when we get there. Meanwhile, though, I want to show how such exploration can be a team exercise.

A friend of mine, whom I’ll call Elvis, shares my interest in teasing out connections among people, servers, and hashtags. He could capture his own timeline using the method shown here. But since we’ll be looking at this data together, we agreed that I’ll gather both our timelines. To enable that, he shared a (revokable) Mastodon API token that I’ve used to configure Steampipe with credentials for both our accounts.

connection "mastodon_social_jon" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." } connection "mastodon_social_elvis" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." }

Steampipe’s foreign data wrapper turns each of these named connections into its own Postgres schema. Although we happen to share the same home server, by the way, we needn’t. A team collaborating like this could pool timelines from mastodon.social and hachyderm.io and fosstodon.org and any other Mastodon-API-compatible server.

(You can do the same thing with AWS or Slack or GitHub or any other kind of account by defining multiple connections. Steampipe makes API calls concurrently across parallel connections.)

With this configuration I can read my timeline like so.

select * from mastodon_social_jon.mastodon_toot_home limit 200

And Elvis’ like so.

select * from mastodon_social_elvis.mastodon_toot_home limit 200

If I want to query both in realtime, for example to count the combined total, I can use a SQL UNION. Or I can define an umbrella connection that aggregates these two.

connection "all_mastodon" { plugin = "mastodon" type = "aggregator" connections = [ "mastodon_social_jon", "mastodon_social_elvis" ] } connection "mastodon_social_jon" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." } connection "mastodon_social_elvis" { plugin = "mastodon" server = "https://mastodon.social" access_token = "..." }

Now the query select * from all_mastodon.mastodon_toot_home limit 200 makes API calls on behalf of both accounts — in parallel — and combines the results. When we follow the resulting URLs in order to reply or boost, we’ll do so as individual identities. And we’ll be able to use Steampipe queries and dashboards in that same single-user mode. But we’ll also be able to pool our timelines and point our queries and dashboards at the combined history.

Will that prove interesting? Useful? That remains to be seen. I think it’s one of many experiments worth trying as the fediverse sorts itself out. And I see Steampipe as one laboratory in which to run such experiments. With SQL as the abstraction over APIs, aggregation of connections, and dashboards as code, you have all the ingredients needed to iterate rapidly, at low cost, toward shared Mastodon spaces tailored for teams or groups.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/

Tuesday, 14. March 2023

Phil Windleys Technometria

Monitoring Temperatures in a Remote Pump House

If you've got a sensor that isn't within easy range of wifi, then LoraWAN is a good solution. And event-based rules in picos are a convenient way to process the data.

I've got a pumphouse in Island Park, ID that I'm responsible for. Winter temperatures are often below 0°F (-18°C) and occasionally get as cold as -35°F (-37°C). We have a small baseboard heater in the pumphouse to keep things from freezing. That works pretty well, but one night last December, the temperature was -35°F and the power went out for five hours. I was in the dark, unable to know if the pumphouse was getting too cold. I determined that I needed a temperature sensor in the pumphouse that I could monitor remotely.

The biggest problem is that the pumphouse is not close to any structures with internet service. Wifi signals just don't make it out there. Fortunately, I've got some experience using LoraWAN, a long-range (10km), low-power, wireless protocol. This use-case seemed perfect for LoraWAN. About a year ago, I wrote about how to use LoraWAN and a Dragino LHT65 temperature and humidity sensor along with picos to get temperature data over the Helium network.

I've installed a Helium hotspot near the pumphouse. The hotspot and internet router are both on battery backup. Helium provides a convenient console that allows you to register devices (like the LHT65) and configure flows to send the data from a device on the Helium network to some other system over HTTP. I created a pico to represent the pumphouse and routed the data from the LHT65 to a channel on that pico.

The pico does two things. First, it processes the heartbeat event that Helium sends to it, parsing out the parts I care about and raising another event so other rules can use the data. Processing the data is not simple because it's packed into a base64-encoded, 11-byte hex string. I won't bore you with the details, but it involves base64 decoding the string and splitting it into 6 hex values. Some of those further pack data into specific bits of the 16-bit word, so binary operations are required. Those weren't built into the pico engine, so I added those libraries. If you're interested in the details of decoding, splitting, and unpacking the payload, check out the receive_heartbeat rule in this ruleset.
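As a rough illustration of that unpacking step, here is a hypothetical Python sketch. The field offsets and scale factors below are assumptions for illustration only; the receive_heartbeat rule linked above (and the LHT65 manual) are the authoritative source for the real layout.

import base64
import struct

def decode_lht65(payload_b64):
    # Illustrative only: offsets and scale factors are assumed, check the LHT65 docs
    raw = base64.b64decode(payload_b64)            # 11 bytes
    words = struct.unpack(">5H", raw[:10])         # first five big-endian 16-bit words

    battery_mv = words[0] & 0x3FFF                 # low 14 bits; top 2 bits are status flags
    internal_temp = struct.unpack(">h", raw[2:4])[0] / 100   # signed hundredths of a degree C
    humidity = words[2] / 10
    probe_temp = struct.unpack(">h", raw[7:9])[0] / 100      # external probe (assumed offset)

    return {
        "battery_voltage": battery_mv / 1000,
        "internalTemp": internal_temp,
        "humidity": humidity,
        "probeTemp": probe_temp,
    }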

Second, the receive_heartbeat rule raises the lht65:new_readings event in the pico adding all the relevant data from the LHT65 heartbeat. Any number of rules could react to that event depending on what needs to be done. For example, they could store the event, alarm on a threshold, or monitor the battery status. What I wanted to do is plot the temperature so I can watch it over time and let other members of the water group check it too. I found a nice service called IoTPlotter that provides a basic plotting service on any data you post to it. I created a feed for the pumphouse data and wrote a rule in my pumphouse pico to select on the lht65:new_readings event and POST the relevant data, in the right format, to IoTPlotter. Here's that rule:

rule send_temperature_data_to_IoTPlotter {
  select when lht65 new_readings
  pre {
    feed_id = "367832564114515476";
    api_key = meta:rulesetConfig{["api_key"]};
    payload = {"data": {
      "device_temperature": [
        {"value": event:attrs{["readings", "internalTemp"]}, "epoch": event:attrs{["timestamp"]}}
      ],
      "probe_temperature": [
        {"value": event:attrs{["readings", "probeTemp"]}, "epoch": event:attrs{["timestamp"]}}
      ],
      "humidity": [
        {"value": event:attrs{["readings", "humidity"]}, "epoch": event:attrs{["timestamp"]}}
      ],
      "battery_voltage": [
        {"value": event:attrs{["readings", "battery_voltage"]}, "epoch": event:attrs{["timestamp"]}}
      ]}
    };
  }
  http:post("http://iotplotter.com/api/v2/feed/" + feed_id,
    headers = {"api-key": api_key},
    json = payload
  ) setting(resp);
}

The rule, send_temperature_data_to_IoTPlotter is not very complicated. You can see that most of the work is just reformatting the data from the event attributes into the right structure for IoTPlotter. The result is a set of plots that looks like this:

Pretty slick. If you're interested in the data itself, you're seeing the internal temperature of the sensor (orange line) and temperature of an external probe (blue line). We have the temperature set pretty high as a buffer against power outages. Still, it's not using that much power because the structure is very small. Running the heater only adds about $5/month to the power bill. Pumping water is much more power intensive and is the bulk of the bill. The data is choppy because, by default, the LHT65 only transmits a payload once every 20 minutes. This can be changed, but at the expense of battery life.

This is a nice, evented system, albeit simple. The event flow looks like this:

Event Flow for Pumphouse Temperature Sensor

I'll probably make this a bit more complete by adding a rule for managing thresholds and sending a text if the temperature gets too low or too high. Similarly, I should be getting notifications if the battery voltage gets too low. The battery is supposed to last 10 years, but that's exactly the kind of situation you need an alarm on—I'm likely to forget about it all before the battery needs replacing. I'd like to experiment with sending data the other way to adjust the frequency of readings. There might be times (like -35°F nights when the power is out) where getting more frequent results would reduce my anxiety.

This was a fun little project. I've got a bunch of these LHT65 temperature sensors, so I'll probably generalize this by turning the IoTPlotter ruleset into a module that other rulesets can use. I may eventually use a more sophisticated plotting package that can show me the data for all my devices on one feed. I bought a LoraWAN soil moisture probe for my garden. I've also got a solar array at my house that I'd like to monitor myself and that will need a dashboard of some kind. If you've got a sensor that isn't within easy range of wifi, then LoraWAN is a good solution. And event-based rules in picos are a convenient way to process the data.


Identity Praxis, Inc.

London Event: MEF CONNECTS Personal Data & Identity

I am thrilled to have been given the opportunity to be the Program Director for the highly anticipated MEF Event in London on May 25th. There are opportunities to speak, sponsor, exhibit, partner, and more. I look forward to working with a team of passionate individuals to curate an unforgettable experience for all attendees.

The MEF Connects Personal Data & Identity event organized by Susan Finlayson-Sitch, Director of Operations at Mobile Ecosystem Forum, will be happening on May 25th, 2023 at One Moorgate Place in London. The event starts at 9:00 AM and runs until 8:30 PM. Registration is open and attendance is free, but seats are limited, so be sure to register here in advance to secure your spot.

I will also be organizing pre- and post-conference workshops on the 24th and 26th as well. This event offers an exclusive opportunity to learn about the latest trends and technologies in personal data and identity management from a diverse range of industry leaders. Don’t miss out on the chance to network with like-minded professionals and gain valuable insights that can help you stay ahead of the curve. What will be covered at the event is listed below, but for more information on the event click here. Looking forward to seeing you there!

Discover the latest trends in personal data and identity management at the MEF Event in London on May 25th. Our expert speakers will cover key themes including:

The Personal Data & Identity Meeting of the Waters: The Identity Nexus
The Five Domains of Personal Data
New Customer Onboarding and Experiences Along Every Step of the Customer Journey
Personal Data and Media: Reshaping Media In a World of Individual Controlled Data
Global and Regional Consumer Opinion at the Intersection of Trust, Personal Data, and Privacy
Understanding Biometrics Solution
Combatting Fraud with Silent Mobile Intelligence
New Identification, Authentication, and Verification Strategies (inc., Passwordless Authentication)
Self-Sovereign Identity, Decentralization, Web 3.0, Web 5.0
Personal Information Management System and Smart Wallets
Embracing Emergent Standards and Personal Data & Identity Regulations
Privacy-enhancing Technologies

Consider attending our pre- and post-conference workshops on the 24th and 26th to dive deeper into these topics and stay ahead in the evolving landscape of personal data and identity management.

The post London Event: MEF CONNECTS Personal Data & Identity appeared first on Identity Praxis, Inc..

Monday, 13. March 2023

Identity Praxis, Inc.

Telesign Webinar

I’m excited to announce I will be participating in the Telesign Webinar on March 16 at 1 p.m. CET to learn how you can better protect your business from ATO attacks. This webinar will also feature other industry leaders such as Surash Patel and Pravesh Arora from Telesign, as well as Bram Van Hoeylandt from Proximus.

The 2022 Cyberthreat Defense Report reveals that account takeover (ATO) attacks are set to become the biggest security threat to customers, overtaking malware. Don’t miss out on the chance to learn from our expert speakers and safeguard your online presence. Register now to secure your spot and take the first step towards greater online security!

The post Telesign Webinar appeared first on Identity Praxis, Inc..


Personal Data and Digital Identity, and a New Path to Value

I thoroughly enjoyed wrapping up the event with Ann Li about Personal Data and Digital Identity Management. In our session, we discussed the importance of a new approach to personal data and identity management, and how it can unlock a new era of growth and consumer benefits for both organizations and customers. By embracing emerging decentralized strategies alongside existing centralized and federated ones, businesses can strategically add significant commercial and operational value while prioritizing the customer experience. Collaboration and participation within the ecosystem is also crucial in meeting the demands of our ever-evolving markets. Below is an abstract of our talk, and the session recording is above for those who couldn’t be there in person.

Personal Data and Digital Identity Management: A New Approach and Path to Value

Personal data has underpinned growth across global economies for three decades and is now a systemically valuable resource. Personal data and identity management are two disparate yet connected subjects which are used to authenticate customers, drive research and insights, inform product strategies, and mitigate fraud and other risks. A shift in customer attitudes, behaviors, and regulation is placing individuals in greater control of their data and digital identity. This emerging data paradigm provides organizations an opportunity to unlock a new era of growth and benefits for consumers and the organizations who serve them. In this session, Ann Li, JP Morgan Chase’s Managing Director of International digital identity, investment strategy and partnerships, discusses JP Morgan Chase’s vision for a new approach to personal data and identity management and a new path to value. She explores the importance of focusing on the individual and customer experience, and how placing emerging decentralized personal data and identity strategies alongside existing centralized and federated ones has the potential, when embraced strategically, to add significant commercial and operational value to a business and people alike. She also explores the imperative for collaborative ecosystem participation to meet the demands of our ever-evolving markets.

The post Personal Data and Digital Identity, and a New Path to Value appeared first on Identity Praxis, Inc..


Damien Bod

Use Azure AD Access Packages to onboard users in an Azure DevOps project

This post looks at onboarding users into an Azure DevOps team or project using Azure AD access packages. The Azure AD access packages are part of the Microsoft Entra Identity Governance and provide a good solution for onboarding internal or external users into your tenant with access to the defined resources.

Flow for onboarding Azure DevOps members

Sometimes we develop large projects with internal and external users who need access to an Azure DevOps project for a fixed length of time, which can be extended if required. These users only need access to the Azure DevOps project and should be automatically removed when the contract or project is completed. Azure AD access packages are a good way to implement this.

Use an Azure AD group

Access to the Azure DevOps project can be implemented using an Azure security group in Azure AD. This security group will be used to add team members for the Azure DevOps project. Azure AD access packages are used to onboard users into the Azure AD group, and the Azure DevOps project uses the security group to define the members. The “azure-devops-project-access-packages” security group was created for this.

Setup the Azure DevOps

A new Azure DevOps project was created for this demo. The project has a URL on the dev.azure.com domain. The Azure DevOps organization needs to be attached to the Azure AD tenant. Only an Azure AD member with the required permissions can add a security group to the Azure DevOps project.

My test Azure DevOps project was created with the following URL. You can only access this if you are a member.

https://dev.azure.com/azureadgroup-access-packages/use-access-packages

The project team can now be onboarded.

Create the Azure AD P2 Access packages

To create an Azure AD P2 Access package, you can use the Microsoft Entra admin center. The access package can be created in the Entitlement management blade.

Add the security group from the Azure AD which you use for adding or removing users to the Azure DevOps project. Add the users as members.

The users onboarded using the access package are given an access lifespan in the tenant, which can be extended or not as needed.

The users can be added using an access package link, or you can get an admin to assign users to the package. I created a second access package to assign any users to the package which can then be approved or rejected by the Azure DevOps project manager.

The Azure DevOps administrator can approve the access package and the Azure DevOps team member can access the Azure DevOps project using the public URL. The new member is added to the Azure security group using the access package.

An access package link would look something like this:

https://myaccess.microsoft.com/@damienbodsharepoint.onmicrosoft.com#/access-packages/b5ad7ec0-8728-4a18-be5b-9fa24dcfefe3

Links

https://learn.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-access-package-create

https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/faq-user-and-permissions-management?view=azure-devops#q-why-cant-i-find-members-from-my-connected-azure-ad-even-though-im-the-azure-ad-global-admin

https://entra.microsoft.com/


Hyperonomy Digital Identity Lab

Web 7.0 DIDComm Notation Template

Copyright (c) 2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

NOTE: DIDComm Notation elements are adaptations of the icons provided by Microsoft as part of the Microsoft Office suite. To quote from the Microsoft documentation:

You can insert icons into Microsoft 365 documents. You can rotate, color, and resize them with no loss of image quality. These icons are free to use and there’s no royalty or copyright to worry about.

https://support.microsoft.com/en-us/office/insert-icons-in-microsoft-365-e2459f17-3996-4795-996e-b9a13486fa79?ui=en-us&rs=en-us&ad=us

In addition, …

You may sell a spreadsheet, database, or PowerPoint deck you made using Microsoft software. The spreadsheet, database, or PowerPoint deck must be created using legitimate, licensed Microsoft software.

https://www.microsoft.com/en-us/legal/intellectualproperty/copyright/permissions

Finally, …

What am I allowed to use premium creative content for?
Excel for Microsoft 365, Word for Microsoft 365, Outlook for Microsoft 365, PowerPoint for Microsoft 365, PowerPoint for the web, SharePoint in Microsoft 365


If you’re a Microsoft 365 subscriber you have access to thousands of royalty-free images, icons, and more that you can use in your documents, presentations, workbooks, and SharePoint in Microsoft 365. But you may have questions about what you’re legally allowed to do with that creative content. This article will try to answer that!

You can use the content for anything permitted by the applicable Microsoft 365 license so you should review your End-User License Agreement (EULA) to understand exactly what you can do with works created in Microsoft 365 and SharePoint.

Simply put… You can use the premium images and other creative content however you see fit, as long as it’s within an Microsoft 365 app or SharePoint site. For commercial customers, this includes selling PowerPoint slide decks, Excel workbooks, or Word documents that include this content, just as you would have sold these files in the past.

This includes exporting the Microsoft 365 file to another format. You can modify the picture, use it in the file forever, and use it anywhere in the world. Basically, do the same things that you would have done with Microsoft 365 documents in the past, but now you can include the stock images and other content from the Insert Pictures dialog.

https://support.microsoft.com/en-us/topic/what-am-i-allowed-to-use-premium-creative-content-for-0de69c76-ff2b-473e-b715-4d245e39e895

Click on the figure below to download the current version of the Web 7.0 DIDComm Notation Template.

Sunday, 12. March 2023

Hyperonomy Digital Identity Lab

Web 7.0 Standard Base (WSB) Specification Set

Copyright (c) 2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

Identity Praxis, Inc.

Consumer Empowerment and PIMS Adoption

I reveal the secrets of PIMS and the Personal Information Economy in my recent MEF Global Forum session. Encouraging organizations to consider incorporating a PIMS into their strategy as a critical next step. Below is the abstract of what I covered in my session, and if you unfortunately missed this prolific event you can watch the session recording above.

Personal Information Management Solutions (PIMS) are an emerging capability that gives organizations a new and improved way of managing personal data and identity. Drawing from my qualitative study of twenty U.S. consumers, I share insights on the key factors that influence an individual’s interest, ability, and intention to use a PIMS. These insights can inform corporate strategy, product roadmaps, and ecosystem partnerships, and show how PIMS can improve trust, efficiency, and product/service experience. Finally, PIMS give individuals the opportunity to reclaim their agency, manage their privacy, and establish more authentic relationships with the organizations they trust.

The post Consumer Empowerment and PIMS Adoption appeared first on Identity Praxis, Inc..

Saturday, 11. March 2023

Identity Praxis, Inc.

Sneak Peak: MEF 9th Annual Consumer Trust Study

    I started the MEF Global Forum along with Dario Betti, CEO of MEF, and Barbara Langer, Founder and Director of Insight Angels, to discuss the latest trends in mobile and privacy. Including a preview of the MEF Consumer Trust Study, which is in its 9th year, investigates consumer opinions at the intersection of […] The post Sneak Peak: MEF 9th Annual Consumer Trust Study appeared fi

 

 

I started the MEF Global Forum along with Dario Betti, CEO of MEF, and Barbara Langer, Founder and Director of Insight Angels, to discuss the latest trends in mobile and privacy.

It included a preview of the MEF Consumer Trust Study which, now in its 9th year, investigates consumer opinions at the intersection of mobile, privacy, personal data, and trust across ten markets. The session provides a sneak peek into this year’s results. The insights gleaned from this study inform strategy, customer journey mapping, product development, support, team development, and more.

To keep up with the latest trends in mobile and privacy, click below to watch the session recording.

 

 

The post Sneak Peak: MEF 9th Annual Consumer Trust Study appeared first on Identity Praxis, Inc..


The BBC: a Case Study: A New Path to Value and Consumer Empowerment

  I had a great time speaking with Nick North who shared the BBC’s vision this year at The Mobile World Congress. Below is an abstract of our talk, and click above to watch the recording of our session. Enterprises and organizations alike have a new path to value creation, the adoption and integration of […] The post The BBC: a Case Study: A New Path to Value and Consumer Empowerment appear

 

I had a great time speaking with Nick North, who shared the BBC’s vision this year at Mobile World Congress. Below is an abstract of our talk; click above to watch the recording of our session.

Enterprises and organizations alike have a new path to value creation: the adoption and integration of a personal information management solution (PIMS). PIMS support a decentralized data strategy–think self-sovereign identity, Web 3.0, and Web 5.0 all wrapped up in one. PIMS is a general term for software and services that restructure how personal data is collected and used throughout an organization.

A PIMS can help organizations re-frame their relationship with individuals, improve transparency, build trust, streamline onboarding and customer engagement, increase regulatory compliance, reduce costs, mitigate risks, and more. For individuals, a PIMS promises to give them agency over their data, protect their privacy, increase online and offline efficiency and convenience, and achieve peace of mind. The time for PIMS is now!

The post The BBC: a Case Study: A New Path to Value and Consumer Empowerment appeared first on Identity Praxis, Inc..


The Mobile Centric Ecosystem: Optimizing the Experience While Keeping It Safe

In a recent MEF Event, industry experts joined me to discuss the measures required to partition and participate in our mobile-centric world safely. This session covered key topics, including fraud prevention, risk scoring with mobile intelligence, brand safety, and how to build out reliable voice, messaging, advertising, and other related experiences. If you are interested […] The post The Mobil

In a recent MEF Event, industry experts joined me to discuss the measures required to partition and participate in our mobile-centric world safely. This session covered key topics, including fraud prevention, risk scoring with mobile intelligence, brand safety, and how to build out reliable voice, messaging, advertising, and other related experiences. If you are interested in watching the session recording, click above; you can find the abstract of our session below.

As of January 2023, the world is more connected than ever before, with over 800 mobile operators spanning 200 countries. The use of mobile devices has become ubiquitous, with 68% of the world’s population (5.44 billion people) relying on mobile phones, and over 8.46 billion mobile connections (excluding Internet of Things devices) worldwide. Mobile technology is revolutionizing the way people work, play, socialize, shop, and access essential services like banking and healthcare.

I was accompanied in this discussion by a panel of experts, including Nilay Kumar, Vice President of mFilterit, Noah Rafalko, CEO of TSG Global, Glyn Povah, Founder & Director Global Product Development of Smart Digits at Telefonica, Gautam Hazari, Chief Technology Officer of Sekura Mobile Intelligence, and Teodor Magureanu, Chief of Staff of VOX Solutions. The panel helped spread awareness that everyone has a role to play in ensuring the safety and security of mobile channels, and provided fascinating insights on the latest trends, technologies, and strategies for carriers, enterprises, brands, regulators, solution providers, and individuals to succeed in the mobile world. Ultimately, ensuring the safety and security of mobile users is a top priority for everyone involved in the mobile ecosystem.

The post The Mobile Centric Ecosystem: Optimizing the Experience While Keeping It Safe appeared first on Identity Praxis, Inc..

Friday, 10. March 2023

Identity Woman

Industry orgs “warn” states against BIPA style laws. Why not have a dialogue at Thoughtful Biometrics Workshop – March 16th

People and their representatives are rightly concerned about how biometric systems are used. This week while reading all the industry news I came across this article – CCIA Testifies On Maryland Biometric Privacy Proposal, Submits Written Comments On Biometric, Childrens, And Health Privacy Bills. So what is BIPA? It is the Biometric Information Privacy Act […] The post Industry orgs “warn” stat

People and their representatives are rightly concerned about how biometric systems are used. This week while reading all the industry news I came across this article – CCIA Testifies On Maryland Biometric Privacy Proposal, Submits Written Comments On Biometric, Childrens, And Health Privacy Bills. So what is BIPA? It is the Biometric Information Privacy Act […]

The post Industry orgs “warn” states against BIPA style laws. Why not have a dialogue at Thoughtful Biometrics Workshop – March 16th appeared first on Identity Woman.

Friday, 10. March 2023

Identity Woman

Biometrics =/= Digital ID

I have been engaging with folks who work on developing biometric systems and folks who are concerned about biometric systems in preparation for the Thoughtful Biometrics Workshop coming up March 16th. Two weeks ago I attended Biometrics Regulation: Global State-of-Play Symposium (one to many talking on zoom with no chat function) put on by the […] The post Biometrics =/= Digital ID appeared fir

I have been engaging with folks who work on developing biometric systems and folks who are concerned about biometric systems in preparation for the Thoughtful Biometrics Workshop coming up March 16th. Two weeks ago I attended Biometrics Regulation: Global State-of-Play Symposium (one to many talking on zoom with no chat function) put on by the […]

The post Biometrics =/= Digital ID appeared first on Identity Woman.

Friday, 10. March 2023

Identity Woman

Federal Agencies using Facial Recognition Technology: GAO report from 2021

I just learned about a 2021 GAO report. It says that This means it is likely that more agencies are using FRT for more reasons. This report seems relevant because for the third time legislation is being put forward to do a Federal Facial Recognition Ban just this week. The diagrams within the report do […] The post Federal Agencies using Facial Recognition Technology: GAO report from 2021 appear

I just learned about a 2021 GAO report. It says that This means it is likely that more agencies are using FRT for more reasons. This report seems relevant because for the third time legislation is being put forward to do a Federal Facial Recognition Ban just this week. The diagrams within the report do […]

The post Federal Agencies using Facial Recognition Technology: GAO report from 2021 appeared first on Identity Woman.


Aaron Parecki

OAuth Support in Bluesky and AT Protocol

Bluesky, a new social media platform, and its underlying AT Protocol are unsurprisingly running up against the same challenges and limitations that Flickr, Twitter and many other social media platforms faced in the 2000s: passwords!

Bluesky, a new social media platform, and its underlying AT Protocol are unsurprisingly running up against the same challenges and limitations that Flickr, Twitter and many other social media platforms faced in the 2000s: passwords!

You wouldn't give your Gmail password to Yelp, right? Why should you give your Bluesky password to random apps either!

The current official Bluesky iOS application unsurprisingly works by logging in with a username and password. It's the easiest form of authentication to implement, even if it is the least secure. Since Bluesky and the AT Protocol are actually intending on creating an entire ecosystem of servers and clients, this is inevitably going to lead to a complete security disaster. In fact, we're already seeing people spin up prototype Bluesky clients and share links around to them, which results in users being taught that there's nothing wrong with handing out their account passwords to random websites and applications that ask for them. Clearly there has to be a solution, right?

The good news is there has been a solution that has existed for about 15 years -- OAuth! This is exactly the problem that OAuth was created to solve: how do we let third party applications access data in a web service without sharing the password with that application?

What's novel about Bluesky (and other similarly decentralized and open services like WordPress, Mastodon, Micro.blog, and others), is that there is an expectation that any user should be able to bring any client to any server, without prior relationships between client developers and servers. This is in contrast to consumer services like Twitter and Google, where they limit which developers can access their API by going through a developer registration process. I wrote more about this problem in a previous blog post, OAuth for the Open Web.

There are two separate problems that Bluesky can solve with OAuth, especially a flavor of OAuth like IndieAuth.

How apps can access data in the user's Personal Data Server (PDS)
How the user logs in to their PDS

How apps can access the user's data

This is the problem OAuth solved when it was originally created, and the problem ATProto currently has. It's obviously very unsafe to have users give their PDS password to every third party application that's created, especially since the ecosystem is totally open so there's no way for a user to know how legitimate a particular application is. OAuth solves this by having the application redirect to the OAuth server, the user logs in there, and then the application gets only an access token.

ATProto already uses access tokens and refresh tokens (although they strangely call them accessJwt and refreshJwt), so this is a small leap to make. OAuth support in mobile apps has gotten a lot better than it was 10 years ago, and there is first class support for this pattern on iOS and Android to make the experience work better than the much older plain redirect model used to work a decade ago.

Here is roughly the experience the user would see when logging in to an app:

The user launches the app and taps the "Sign In" button
The user enters their handle or server name (e.g. jay.bsky.social, bsky.social, or aaronpk.com)
The app discovers the user's OAuth server, and launches an in-app browser
The user lands on their own PDS server, and logs in there (however they log in is not relevant to the app, it could be with a password, via email magic link, a passkey, or even delegated login to another provider)
The user is presented with a dialog asking if they want to grant access to this app (this step is optional, but it's up to the OAuth server whether to do this and what it looks like)
The application receives the authorization code and exchanges it at the PDS for an access token and refresh token
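As a rough illustration of that last step, here is a minimal sketch of the code-for-token exchange in Python. The token endpoint path, client_id style, and PKCE code_verifier are illustrative assumptions, not part of the current AT Protocol API; a real implementation would follow whatever Bluesky ultimately specifies.

# Hypothetical sketch of the final step above: exchanging an authorization
# code for tokens at the user's PDS. The endpoint path and parameter names
# follow plain OAuth 2.0 conventions and are assumptions for ATProto.
import requests

def exchange_code(pds_base_url, code, client_id, redirect_uri, code_verifier):
    response = requests.post(
        pds_base_url + "/oauth/token",  # assumed path, not a real ATProto endpoint
        data={
            "grant_type": "authorization_code",
            "code": code,
            "client_id": client_id,
            "redirect_uri": redirect_uri,
            "code_verifier": code_verifier,  # PKCE, as recommended for native apps
        },
        timeout=10,
    )
    response.raise_for_status()
    tokens = response.json()
    # The app only ever sees tokens, never the user's password
    return tokens["access_token"], tokens.get("refresh_token")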


Most of this is defined in the core OAuth specifications. The part that's missing from OAuth is:

discovering an OAuth server given a server name, and
how clients should be identified when there is no client preregistration step.

That's where IndieAuth fills the gap. With IndieAuth, the user's authorization server is discovered by fetching the web page at their URL. IndieAuth avoids the need for client registration by also using URLs as OAuth client_ids.
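To make that discovery step concrete, here is a minimal sketch in Python. It only covers the simplest case of looking for a link rel="authorization_endpoint" tag in the fetched HTML; a real IndieAuth client would also check HTTP Link headers and the newer indieauth-metadata document, so treat this as an illustration rather than a complete client.

# Minimal sketch: discover an IndieAuth authorization endpoint from the HTML
# at the user's URL by reading <link rel="authorization_endpoint" href="...">.
from html.parser import HTMLParser
import requests

class RelLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rels = {}

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        attrs = dict(attrs)
        rel, href = attrs.get("rel"), attrs.get("href")
        if rel and href:
            self.rels.setdefault(rel, href)

def discover_authorization_endpoint(profile_url):
    html = requests.get(profile_url, timeout=10).text
    parser = RelLinkParser()
    parser.feed(html)
    return parser.rels.get("authorization_endpoint")

# e.g. discover_authorization_endpoint("https://aaronpk.com")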

This does mean IndieAuth assumes there is an HTML document hosted at the URL the user enters, which works well for web based solutions, and might even work well for Bluesky given the number of people who have already rushed to set their Bluesky handle to the same URL as their personal website. But, long term it might be an additional burden for people who want to bring their own domain to Bluesky if they aren't also hosting a website there.

There's a new discussion happening in the OAuth working group to enable this kind of authorization server discovery from a URL which could rely on DNS or a well-known endpoint. This is in-progress work at the IETF, and I would love to have ATProto/Bluesky involved in those discussions!

How the user logs in to their PDS

Currently, the AT Protocol specifies that login happens with a username and password to get the tokens the app needs. Once clients start using OAuth to log in to apps, this method can be dropped from the specification, which interestingly opens up a lot of new possibilities.

Passwords are inherently insecure, and there has been a multi-year effort to improve the security of every online service by adding two-factor authentication and even moving away from passwords entirely by using passkeys instead.

Imagine today, Bluesky wants to add multifactor authentication to their current service. There's no good way to add this to the existing API, since the Bluesky client will send the password to the API and expect an access token immediately. If Bluesky switches to the OAuth flow described above, then the app never sees the password, which means the Bluesky server can start doing more fun things with multifactor auth as well as even passwordless flows!

Logging in with a passkey

Here is the same sequence of steps but this time swapping out the password step for a passkey.

The user launches the app and taps the "Sign In" button
The user enters their handle or server name (e.g. jay.bsky.social, bsky.social, or aaronpk.com)
The app discovers the user's OAuth server, and launches an in-app browser
The user lands on their own PDS server, and logs in there with a passkey
The user is presented with a dialog asking if they want to grant access to this app (this step is optional, but it's up to the OAuth server whether to do this and what it looks like)
The application receives the authorization code and exchanges it at the PDS for an access token and refresh token

This is already a great improvement, and the nice thing is app developers don't need to worry about implementing passkeys, they just need to implement OAuth! The user's PDS implements passkeys and abstracts that away by providing the OAuth API instead.

Logging in with IndieAuth

Another variation of this would be if the Bluesky service itself supported delegating logins instead of managing any passwords or passkeys at all.

Since Bluesky already supports users setting their handle to their own personal website, it's a short leap to imagine allowing users to authenticate themselves to Bluesky using their website as well!

That is the exact problem IndieAuth already solves, with quite a few implementations in the wild of services that are IndieAuth providers, including Micro.blog, a WordPress plugin, a Drupal module, and many options for self-hosting an endpoint.

Let's look at what the sequence would look like for a user to use the bsky.social PDS with their custom domain handle mapped to it.

The user launches the app and taps the "Sign In" button
The user enters their server name (e.g. bsky.social)
The app discovers the OAuth server and launches an in-app browser
The user enters their handle, and bsky.social determines whether to prompt for a password or do an IndieAuth flow to their server
The user is redirected to their own website (IndieAuth server) and authenticates there, and is then redirected back to bsky.social
The user is presented by bsky.social with a dialog asking if they want to grant access to this app
The application receives the authorization code and exchanges it at the PDS for an access token and refresh token

This is very similar to the previous flows, the difference being that in this version, bsky.social is the OAuth server as far as the app is concerned. The app never sees the user's actual IndieAuth server at all.

Further Work

These are some ideas to kick off the discussion of improving the security of Bluesky and the AT Protocol. Let me know if you have any thoughts on this! There is of course a lot more detail to discuss about the specifics, so if you're interested in diving in, a good place to start is reading up on OAuth as well as the IndieAuth extension to OAuth which has solved some of the problems that exist in the space.

You can reply to this post by sending a Webmention from your own website, or you can get in touch with me via Mastodon or, of course, find me on Bluesky as @aaronpk.com!


reb00ted

Meta's decentralized social plans confirmed. Is Embrace-Extend-Extinguish of the Fediverse next?

Casey Newton at Platformer reports he has e-mail confirmation from Meta that: [Meta is] exploring a standalone decentralized social network for sharing text updates. We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests (Source). Their new app is codenamed P92, and according to a separate report by Moneycontr

Casey Newton at Platformer reports he has e-mail confirmation from Meta that:

[Meta is] exploring a standalone decentralized social network for sharing text updates. We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests (Source).

Their new app is codenamed P92, and according to a separate report by Moneycontrol:

… will support ActivityPub, the decentralised social networking protocol powering Twitter rival Mastodon and other federated apps (Source).

It will also:

be Instagram-branded and will allow users to register/login to the app through their Instagram credentials.

First, the good news:

This is a huge validation of the decentralized social network known as the Fediverse, built around a set of internet protocol standards that include ActivityPub, ActivityStreams, WebFinger as well as a set of commonly implemented unofficial extensions. The Fediverse has been around for some years, but recently came to more widespread prominence through its leading implementation, Mastodon, as the leading alternative to an increasingly erratic (and increasingly many other things, but I digress…) Twitter.

That’s because only when alternatives are actually beginning to look like they might become serious threats to incumbents – and Meta is the market-leading incumbent in social media by far – do incumbents start paying attention and then connect to them. Or, as may be the case here, simply leak that they might connect in the future but never actually will. We don’t know which of those will turn out to be true, but it doesn’t matter: both validate the Fediverse as a serious competitor to Meta.

This is on the heels of recent Fediverse adoption by companies such as Mozilla, Medium, CloudFlare and Flipboard. Apple now has Mastodon content previews in iMessage. Even Microsoft has been spotted in the Fediverse a few days ago.

But:

I have some Brooklyn Bridges for sale. You get a Brooklyn Bridge for free if you believe that a company like Meta would connect to the Fediverse, and be a perfect citizen the way the Fediverse expects you to be today. Including:

No ads;
No tracking;
No algorithms that favor business outcomes for Meta over your wellbeing;
Respect for different cultures, minorities, non-mainstream behavior etc.;
A rich toolset for filtering and blocking according to what you decide you want to filter and block, not Meta;
The ability to move from one host to another without having to build your network from scratch;
The ability to pick who is your system administrator and moderator, from models that are essentially centrally managed to full-fledged self-managed, user-owned cooperatives;
The ability, and encouragement, to innovate with new apps;
and so forth.

Instead, based on the history of technology, the chances are overwhelming that such an app would be used by Meta with an embrace, extend and extinguish strategy, at the end of which the Fediverse would either have become irrelevant or effectively been taken over by Meta. So the much-heralded alternative to Twitter would become … Meta? I sure hope not.

If you think that is unlikely, read up on some of the historical examples listed on that Wikipedia page. Merely being based on open standards and having a million+ strong user community does not protect you at all. Instead, I would say the attack happens every single time a network dominated by an incumbent (here: social media) is threatened by a more open network. And it succeeds, at least partially, more often than not. Here it is Meta’s $100b+ business that’s under strategic threat; of course they will protect it and use any means they can think of to do so.

It does not help that the Fediverse today is chronically underfunded and has corresponding difficulty competing at the same speed as somebody like Meta can. Actually, “unfunded” is a better term because the amounts are so small. There are many unpaid contributions, the Fediverse largely being open source and all, but I’d be surprised if more than $10m per year were spent in total on the entire Fediverse today; likely it’s far less. If Meta can burn more than $10b – that’s one entire annual Fediverse spend every 8 hours! – on a very doubtful Metaverse project, they surely could find the same amount of money to protect their core business.

And that’s just one of the many issues we need to solve to protect, and grow, the beautiful thing we currently have with the Fediverse.

So what shall we do about all this?

(I welcome your comments – in the Fediverse! Find me at @j12t@social.coop.)

(Also, I’m co-organizing FediForum, an online unconference at the end of March, where we will surely discuss this and other issues. And celebrate the Fediverse, because there is much to celebrate! Join us?)

Wednesday, 08. March 2023

Phil Windleys Technometria

Not all PBAC is ABAC: Access Management Patterns

Exploring the architectural patterns we can use with policy-based access control (PBAC) illustrates the difference between policy-based and attribute-based access control (ABAC). The primary ways of implementing access control in modern applications are (1) access control lists (ACLs), (2) role-based access control (RBAC), and (3) attribute-based access control (ABAC). I assume you're familiar with

Exploring the architectural patterns we can use with policy-based access control (PBAC) illustrates the difference between policy-based and attribute-based access control (ABAC).

The primary ways of implementing access control in modern applications are (1) access control lists (ACLs), (2) role-based access control (RBAC), and (3) attribute-based access control (ABAC). In this post, I assume you're familiar with these terms. If you're not, there's a great explanation in chapter 12 of my new book, Learning Digital Identity.1


To explore access management patterns, let's classify applications requiring fine-grained access management into one of two types:

Structured—these applications can use the structure of the attribute information to simplify access management. For example, an HR application might express a policy as “all L9 managers with more than 10 reports can access compensation management functionality for their reports”. The structure allows attributes like level and number_of_reports to be used to manage access to the compensation tool with a single policy. A small set of policies can control access to the compensation tool. These applications are the sweet spot for ABAC.

Ad hoc—these applications allow users to manage access to resources they control based on identifiers for both principals and resources without any underlying structure. For example, Alice shares her vacation photo album with Betty and Charlie. The photo album, Betty, and Charlie have no attributes in common that can be used to write a single attribute-based policy defining access. These applications have a harder time making effective use of ABAC.

Ad hoc access management is more difficult than structured because of the combinatorial explosion of possible access relationships. When any principal can share any resource they control with any other principal and with any subset of possible actions, the number of combinations quickly becomes very large.

There are several approaches we can take to ad hoc access management:

Policy-based—In this approach the application writes a new policy for every access case. In the example given above, when Alice shares her vacation photo album with Betty and Charlie, the application would create a policy that explicitly permits Betty and Charlie to access Alice’s vacation photo album. Every change in access would result in a new policy or the modification of an existing one. This is essentially using policies as ACLs.

Group-based—In a group-based approach, we create a group for people who can access the vacation photo album and a policy that allows access to the vacation photo album if the user has a group attribute of canAccessVacationPhotos. The group name has to be unique to Alice's vacation photo album and includes the allowed action. When Alice shares the album with Betty and Charlie, we add them both to the canAccessVacationPhotos group by putting it in the groups attribute in their profile. Group-based policies look like "principal P can access vacationPhotosAlbum if P.groups contains canAccessVacationPhotos." This is essentially RBAC (see the code sketch after this list).

Resource-based—In this approach, we add a sharedWith or canEdit attribute to Alice’s vacation photos album that records the principals who can access the resource. Now our policy uses the resource attribute to allow access to anyone in that list. Resource-based policies look like "principal P can edit resource R if P is in R.canEdit". Every resource of the same type has the same attributes. This approach is close to ABAC because it makes use of attributes on the resources to manage access, reducing the combinatorial explosion.

Hybrid—We can combine group- and resource-based access management by creating groups of users and storing group names in a resource attribute instead of principals. For example, if Alice adds Betty and Charlie to her group friends, then she could add friends to the sharedWith attribute on her album. The advantage of the hybrid approach is that we reduce the length of the attribute lists.
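To make the group-based and resource-based checks above concrete, here is a deliberately tiny sketch in Python. It is only an illustration of the two policy shapes, not a real policy engine; the attribute names (groups, canEdit, sharedWith) come from the examples in this list.

# Minimal sketch of the two policy shapes described above.

def can_access_group_based(principal, required_group):
    # Group-based: "principal P can access vacationPhotosAlbum
    # if P.groups contains canAccessVacationPhotos"
    return required_group in principal.get("groups", [])

def can_edit_resource_based(principal, resource):
    # Resource-based: "principal P can edit resource R if P is in R.canEdit"
    return principal["id"] in resource.get("canEdit", [])

alice_album = {"id": "vacationPhotosAlbum", "canEdit": ["betty"], "sharedWith": ["betty", "charlie"]}
betty = {"id": "betty", "groups": ["canAccessVacationPhotos"]}

assert can_access_group_based(betty, "canAccessVacationPhotos")
assert can_edit_resource_based(betty, alice_album)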

The advantage of the policy-based approach is that it’s the simplest thing that could possibly work. Given a policy store with sufficient management features (i.e., finding, filtering, creating, modifying, and deleting policies), this is straightforward. The chief downside is the explosion in the number of policies and the scaling that it requires of the policy store. Also, since the user’s permissions are scattered among many different policies, knowing who can do what is difficult and relies on the policy store's filtering capabilities.

The group-based approach results in a large number of groups for very specific purposes. This is a common problem with RBAC systems. But given an attribute store (like an IdM profile) that scales well, it splits the work between the attribute and policy stores by reducing the number of policies to one per share type (or combination). That is, we need a policy for each resource that allows viewing, one to allow editing, and so on.

The resource-based approach reduces the explosion of groups by attaching attributes to the resource, imposing structure. In the photo album sharing example, each album (and photo) would need an attribute for each sharing type (view, modify, delete). If Alice says Betty can view and modify an album, Betty’s identifier would be added to the view and modify attributes for that album. We need a policy for each unique resource type and action.

The downside of the resource-based approach is that the access management system has to be able to use resource attributes in the authorization context. Integrating the access management system with an IdP provides attributes about principals so that we can automatically make those attributes available in the authorization context. You could integrate using the attributes in an OIDC token or by syncing the authorization service with the IdP using SCIM.

But the ways that attributes can be attached to a resource are varied. For example, they might be stored in the application's database. They might be part of an inventory control system. And so on. So the access management system must allow developers to inject those attributes into the authorization context when the policy enforcement point is queried or have a sufficiently flexible policy information point to easily integrate it with different databases and APIs. Commercial ABAC systems will have solved this problem because it is core to how they function.
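As a hedged illustration of what that attribute injection could look like from the application side, here is a small Python sketch. The pdp object, its is_authorized() method, and the shape of the context are assumptions standing in for whatever API a given PBAC/ABAC product actually exposes.

# Hypothetical sketch: the application assembles the authorization context by
# combining principal attributes (synced from the IdP via OIDC claims or SCIM)
# with resource attributes pulled from its own database, then asks the policy
# decision point for a verdict.

def load_album_attributes(album_id, app_db):
    row = app_db[album_id]  # e.g. the application's own datastore
    return {"type": "photoAlbum", "canEdit": row["can_edit"], "sharedWith": row["shared_with"]}

def check_edit(pdp, principal_attrs, album_id, app_db):
    context = {
        "principal": principal_attrs,                         # from the IdP
        "resource": load_album_attributes(album_id, app_db),  # injected by the app
        "action": "edit",
    }
    return pdp.is_authorized(context)  # assumed method name, varies by product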

Every application, of course, will make architectural decisions about access management based on its specific needs. But if you understand the patterns that are available to you, then you can think through the ramifications of your design ahead of time. Sometimes this is lost in the myth that policy-based access management (PBAC) is all the same. All of the approaches I list above are PBAC, but they're not all ABAC.

Notes

The material that follows is not in the book. But it should be. An errata, perhaps.



The Dingle Group

Qualities of Value

The challenge, to come up with language that describes the value that is enabled through the technical standards that comprise what is known as IDTech (also known as Self-Sovereign Identity or the Decentralized Identity stack).  This is independent of the different variations within the technology stack, and more importantly is meant for the vast majority of the business population that

The challenge: to come up with language that describes the value that is enabled through the technical standards that comprise what is known as IDTech (also known as Self-Sovereign Identity or the Decentralized Identity stack).  This is independent of the different variations within the technology stack, and more importantly is meant for the vast majority of the business population that have never heard of different standards bodies, protocols or tech stacks, and who quite frankly don’t care and will never exert the effort to learn about them.  In many cases these are the exact people who make the final investment go/no-go decision.

After hundreds of conversations and countless hours of research and reading over the past five years, four ‘qualities of value’ have emerged.  These are provenance, persistence, flexibility, and portability.   If these values are important in your business or operations, then this technology sector will be important for you.

Provenance, the ability to trace where something, whether data or physical object, has come from, and to have confidence the data has not been tampered with.   More than just that, it is being able to trace back to the source with the highest level of confidence.  This means being able to automatically (with no human intervention) determine what entity issued or attested to the data or physical object and to validate that this party is authorized to make these claims. 

Persistence, the ability to act on data and information whether the original source of the data continues to exist or not.  A reality of life is constant change.  For anyone interacting with technology, the rate of change seems only to go up.  With change, older systems become unavailable, but information links may need to persist far beyond the life of the source system.  Having a mechanism that provides high levels of assurance of information integrity, under the control of the interested entity, can mean a decoupling of systems, reducing complexity and increasing resilience.

Flexibility, the ability to use the data about an object in ways that extend beyond the original intended use of the data.  The most common example of a document that is used widely outside its intended use is the driver’s license, which is also used as a piece of identity documentation for purchasing age restricted products, writing a check (in the US), checking into a hotel and many other uses that have nothing to do with driving a vehicle.  This is a broad but simplistic use case.  When an individual or organization has the ability to select data from various parties and combine these into a single new claim, presenting this composite claim to any entity while retaining the highest levels of provenance outlined above, new business models and revenue opportunities are created.

Portability, the ability to bring the data “with me”, and to do so in a manner that does not leave a link or connection to the issuer of the data.  This is the ability to move this information between systems and vendors.  As an individual or business, data portability is of value as a means to prevent capture or lock-in to a specific vendor.  Additionally, being able to use the data in a manner that does not leave a ‘bread crumb trail’ every time it is used (or shared) is critical for personal privacy as well as business confidentiality, and has become a requirement under the law with the GDPR and CCPA regulations.

In this first post, I have intentionally kept the concepts at a high level.  In the following posts, I will delve further into each of these Values, expanding on them and bringing in examples of where they would apply.  I will endeavor to keep technical language and jargon out of the posts wherever possible. 

 


Jon Udell

Visualizing Mastodon server moderation

In Mapping the wider fediverse I showed how a new table added to the Mastodon plugin — mastodon_domain_block — enables queries that find which servers are moderating which other servers. For example, here are servers on nerdculture.de‘s list of offenders. select server, domain, severity from mastodon_domain_block where server = 'https://nerdculture.de' +------------------------+--------------------

In Mapping the wider fediverse I showed how a new table added to the Mastodon plugin — mastodon_domain_block — enables queries that find which servers are moderating which other servers. For example, here are servers on nerdculture.de‘s list of offenders.

select server, domain, severity from mastodon_domain_block where server = 'https://nerdculture.de'

+------------------------+--------------------------------+----------+
| server                 | domain                         | severity |
+------------------------+--------------------------------+----------+
| https://nerdculture.de | roysbeer.place                 | silence  |
| https://nerdculture.de | *.10minutepleroma.com          | suspend  |
| https://nerdculture.de | *.activitypub-troll.cf         | suspend  |
...snip...
| https://nerdculture.de | shitposter.club                | suspend  |
| https://nerdculture.de | wolfgirl.bar                   | suspend  |
| https://nerdculture.de | www2.gabbers.me                | suspend  |
+------------------------+--------------------------------+----------+

I used the new query pattern to build a dashboard to report, for each server in the home timeline:

The list of blocking servers.
The count of blocked servers for each blocking server.
The list of blocking servers for each blocked server, and the count of those blocking servers.

This was a good start, but I had a hunch that relationship graphs would reveal non-obvious connections among these servers. And indeed they do! Here’s a view of the new graph-enriched dashboard.

The left pane shows the blocked servers shunned by a blocking server selected from those in the home timeline. The right pane has the inverse view: the blocking servers that shun a selected blocked server. These were the two categories I defined for the first iteration of these graphs.

category "blocking_server" {
  color = "darkgreen"
  icon  = "server"
}

category "blocked_server" {
  color = "darkred"
  icon  = "server"
}

Here’s the code for one of the graphs.

graph {
  node {
    base = node.blocking_server
  }
  node {
    base = node.blocked_server
  }
  node {
    base = node.blocked_and_blocking_server
  }
  edge {
    args = [ self.input.blocking_server.value ]
    base = edge.match_blocked_server
  }
  edge {
    args = [ self.input.blocking_server.value ]
    base = edge.match_blocking_server
  }
}

Here is the definition of node.blocking_server, which refers to category.blocking_server.

node "blocking_server" {
  category = category.blocking_server
  sql = <<EOQ
    with servers as (
      select distinct blocking_server, blocked_server
      from blocking_servers(${local.limit})
    )
    select
      blocking_server as id,
      blocking_server as title
    from servers
    order by blocking_server
EOQ
}

The FROM clause calls blocking_servers(), a set-returning function defined like so.

create or replace function public.blocking_servers(max int)
  returns table (
    blocking_server text,
    blocked_server text
  ) as $$
  with servers as (
    select distinct
      server as domain,
      'https://' || server as server_url
    from mastodon_toot
    where timeline = 'home'
    limit max
  ),
  blocking_and_blocked as (
    select
      s.domain as blocking_domain,
      d.domain as blocked_domain
    from servers s
    join mastodon_domain_block d
      on s.server_url = d.server
  )
  select
    blocking_domain,
    blocked_domain
  from blocking_and_blocked
  order by blocking_domain, blocked_domain
$$ language sql

I thought these ingredients would suffice. But when I began poking around in the graphs made with these definitions, infosec.exchange behaved strangely. Sometimes it would appear as a blocking server, other times as a blocked server. I was missing a category!

category "blocked_and_blocking_server" {
  color = "orange"
  icon  = "server"
}

As you can see in the graph, infosec.exchange not only blocks 73 servers, it is also blocked by two servers: religion.masto.host and weatherishappening.net. Why?

weatherishappening.net calls infosec.exchange a “HOSTING DEATH SPIRAL FASCIST ORGANIZATION” and blocks it with “Limited” severity.

religion.masto.host blocks infosec.exchange at the “Suspended” level, but does not explain why (“Reason not available”).

Although those servers could, in turn, be blocked by others, no such blocks appear in the block lists of my immediate server neighborhood.

> select count(*) from blocking_servers(100) where blocked_server = 'weatherishappening.net'

+-------+
| count |
+-------+
| 0     |
+-------+

> select count(*) from blocking_servers(100) where blocked_server = 'religion.masto.host'

+-------+
| count |
+-------+
| 0     |
+-------+

There is another switch hitter in my neighborhood, though. c.im is blocked by me.dm and octodon.social.

Why?

me.dm (Medium) has “Suspended” c.im for “Hate speech”.

octodon.social has “Suspended” c.im for “Reason not available”.

When the opinions and policies of your server differ from those of mine, we see different realities through our respective lenses. Could such fragmentation drive the fediverse-curious back into the arms of Big Social? I’m sure that will happen — indeed is happening — to some degree.

But I hope that some of us, at least, will learn to thrive in diverse networks of online communities, aware of the kaleidoscopic interplay of filters but not overwhelmed by it. That skill will serve us well IRL too. To acquire it, we’ll need to visualize the operation of our filters. One great way to do that: SQL queries that drive relationship graphs.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/

Tuesday, 07. March 2023

Aaron Parecki

How to use your own domain as your BlueSky handle

I recently got access to the BlueSky beta, and decided to poke around to see what it's all about. I will save the details of what it is and how I feel about it for a different post. However, one of the first things you do when you sign up is choose a username that exists under the bsky.app domain. I have zero interest in another name rush where everyone tries to claim the shortest username possible

I recently got access to the BlueSky beta, and decided to poke around to see what it's all about. I will save the details of what it is and how I feel about it for a different post. However, one of the first things you do when you sign up is choose a username that exists under the bsky.app domain. I have zero interest in another name rush where everyone tries to claim the shortest username possible, so I went with aaronpk.bsky.app rather than trying to get a or apk.

But, as I was poking around the app and replying to a few people, it just didn't feel like "me" yet. I don't care about renting a name in someone else's space. Yesterday, Jay Graber posted on the BlueSky blog all about supporting custom domains as identities in BlueSky. Naturally I was curious to try this out myself, but there were no technical details available in the blog post.

When I opened my BlueSky timeline, I saw a few people mentioning that they had figured out how to do it even before the app supported it natively. I did some digging around, and jik pointed me to the right places in the AT Proto docs, and I was able to change my handle to aaronpk.com!

Below are the instructions for how to do this yourself. Please note that these instructions will be made obsolete in a week or two since the BlueSky developers are planning on shipping first class support for custom domains in the app and you won't need to do this workaround for long.

Overview

The rough process for this is:

Create a DNS TXT record on your domain with the value of your underlying BlueSky ID
Get a token for your account (e.g. "log in" to your account from an HTTP client)
Update your profile with your new domain

Note: To make these POST requests, you'll need an HTTP client that can POST in JSON format. You can use cURL, httpie, Postman, or write code in whatever language you want. I'll use httpie for the examples.

Get a token for your account

First, you need to "log in" to your account. This process will return an access token you can use to update your profile. Logging in will also tell you your DID, the unique identifier for your underlying account. (The "handle" on your account is more like a display name, and that's how your account can end up being portable across domains as well as BlueSky servers later.)

Call the com.atproto.session.create method with the email and password you used to sign up. (Type this all on one line, I've just split it into multiple lines for readability. Also, again, if you're using cURL or another HTTP client, make sure you send this payload as JSON, not form-encoded.)

http post https://bsky.social/xrpc/com.atproto.session.create handle=user@example.com password=hunter2

The response will be a JSON document with your access token as well as DID:

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 609
Content-Type: application/json; charset=utf-8
Date: Tue, 07 Mar 2023 15:29:38 GMT
ETag: W/"261-15oNf47+gL0kDnvlsvzpUK0ocx8"
X-Powered-By: Express

{
  "accessJwt": "eyJh...",
  "did": "did:plc:s2koow7r6t7tozgd4slc3dsg",
  "handle": "aaronpk.bsky.app",
  "refreshJwt": "eyJh..."
}

Create a DNS Record

Go to your domain registrar and create a new TXT record on your domain.

Subdomain: _atproto
Value: did=did:plc:XXXXXXXXX
TTL: Default

In my case, the domain I wanted to use is aaronpk.com, so I would create a new subdomain _atproto.aaronpk.com, with the value did=did:plc:s2koow7r6t7tozgd4slc3dsg.

_atproto.aaronpk.com TXT did=did:plc:s2koow7r6t7tozgd4slc3dsg

Update your Profile

Wait a little while to make sure the DNS entry takes effect. Then you can update your profile with your new domain by running the com.atproto.handle.update method.
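If you want to confirm the record is live before calling the API, you can query it directly. Here is a small Python sketch using the dnspython package (pip install dnspython); running dig TXT _atproto.yourdomain.com from a terminal works just as well.

# Check that the _atproto TXT record has propagated before updating the handle.
import dns.resolver

def atproto_txt_values(domain):
    answers = dns.resolver.resolve(f"_atproto.{domain}", "TXT")
    # TXT record data comes back as tuples of byte strings; join and decode them
    return [b"".join(rdata.strings).decode() for rdata in answers]

print(atproto_txt_values("aaronpk.com"))
# expect something like ['did=did:plc:s2koow7r6t7tozgd4slc3dsg']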

Use the value of the accessJwt you got from the first request. (Again, type this out all on one line.)

http post https://bsky.social/xrpc/com.atproto.handle.update Authorization:"Bearer eyJ..." handle=aaronpk.com

You won't see a response, but that means it worked! Next time you open the app, you'll probably see a bunch of errors. So log out of the app, and log back in! Right now, you'll need to use your email address to log in instead of your handle, but they are going to fix that in a future update.

Now you should see your new domain as your handle in the BlueSky app!
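If you would rather script the whole thing than type the requests by hand, here is a rough Python equivalent of the two API calls above. It uses the same endpoints and payloads as the httpie examples; substitute your own credentials and domain.

# Log in, then point the handle at a custom domain (same two XRPC calls as above).
import requests

BASE = "https://bsky.social/xrpc"

session = requests.post(f"{BASE}/com.atproto.session.create", json={
    "handle": "user@example.com",   # the email you signed up with
    "password": "hunter2",
})
session.raise_for_status()
access_jwt = session.json()["accessJwt"]

update = requests.post(
    f"{BASE}/com.atproto.handle.update",
    headers={"Authorization": f"Bearer {access_jwt}"},
    json={"handle": "aaronpk.com"},  # your custom domain
)
update.raise_for_status()  # empty response body on success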

Once again, this will all be obsolete once they ship this feature in the app itself! Here's a sneak peek from Paul of what the interface will look like!

Monday, 06. March 2023

Karyl Fowler

You don’t need a blockchain.

Verifiable Credentials for Enhanced Supply Chain Traceability, Security, & Resilience Transmute uses verifiable credentials (VCs) + decentralized identifiers (DIDs) to create cost effective supply chain data security at scale. Using VCs + DIDs together harnesses the benefits of centralization while eliminating the downsides. The Drawbacks of Blockchain Blockchains are an incredible innovati

Verifiable Credentials for Enhanced Supply Chain Traceability, Security, & Resilience

Transmute uses verifiable credentials (VCs) + decentralized identifiers (DIDs) to create cost effective supply chain data security at scale. Using VCs + DIDs together harnesses the benefits of centralization while eliminating the downsides.

The Drawbacks of Blockchain

Blockchains are an incredible innovation, but they’re not a one-size-fits-all solution.

In their current state, blockchains are a heavy and over-engineered tool for the job at hand when it comes to bridging data silos in global trade.

Blockchains have a high cost for creation and maintenance. Blockchains are expensive in public and private form, often gobbling up the margins of commodities they are being used to secure.

Blockchains are difficult to scale. Blockchains don’t scale well enough for reliable enterprise use, including large supply chain companies or large regulatory and customs agencies. Blockchains may require layer 2 solutions to meet these needs.

Blockchains lack interoperability. Blockchains aren’t interoperable with existing infrastructure or each other. To achieve interoperability, blockchains require additional adapters or layer 2 investments. Additionally, the cryptography many of them rely on is still considered very emergent and may not meet certain regulatory requirements. For instance, updated NIST requirements only recently allowed for an element used in Bitcoin and Ethereum to be used in blockchain applications.

The Benefits of VCs + DIDs

Applied correctly, VCs + DIDs are affordable, interoperable, and already standardized. The interoperability perks of VCs + DIDs give you the ability to work across existing systems, including distributed ledgers (DLTs) and blockchains. Using VCs + DIDs is a futureproof method to fortify supply chain data, using the right tool for the job.

Using VCs + the appropriate DIDs provides the security benefits of decentralization and the efficiency benefits of centralization. Decentralization benefits include immutable claims and tamper evidence, while centralization benefits include scalability. Implemented by experts like Transmute, VCs + DIDs awaken superpowers — such as data portability, data minimization, and selective disclosure — without locking you into a single vendor or blockchain.

Karyl Fowler, Transmute co-Founder and CEO, is an expert in emerging tech and economic development in fields with high regulatory burdens such as FDA, HIPAA and ITAR. She holds leadership roles on the Decentralized Identity Foundation (DIF) Steering Committee, with the C26+ Smart Borders Working Group at the US-Mexico Foundation, as a Techstars Austin mentor, and serves as an advisor for SXSW.

Connect with Karyl on Twitter, and LinkedIn

About Transmute: Building on the security and freedom that Web3 promised, Transmute provides all the benefits of decentralization to enterprise teams seeking a cost effective, interoperable, planet-forward experience provided by experts in technology and industry.

Transmute was founded in 2017, graduated from TechStars Austin in 2018, and is based in sunny Austin, Texas. Learn more about us at: http://www.transmute.industries

Connect with Transmute on LinkedIn and Twitter

You don’t need a blockchain. was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Damien Bod

Auto sign-out using ASP.NET Core Razor Pages with Azure AD B2C

This article shows how an ASP.NET Core Razor Page application could implement an automatic sign-out when a user does not use the application for n-minutes. The application is secured using Azure AD B2C. To remove the session, the client must sign-out both on the ASP.NET Core application and the Azure AD B2C identity provider or […]

This article shows how an ASP.NET Core Razor Page application could implement an automatic sign-out when a user does not use the application for n-minutes. The application is secured using Azure AD B2C. To remove the session, the client must sign-out both on the ASP.NET Core application and the Azure AD B2C identity provider or whatever identity provider you are using.

Code: https://github.com/damienbod/AspNetCoreB2cLogout

Sometimes clients require that an application supports automatic sign-out in an SSO environment. An example of this is when a user uses a shared computer and does not click the sign-out button: the session would remain active for the next user. This method is not foolproof, as the end user could save the credentials in the browser. If you need a better solution, then SSO and rolling sessions should be avoided, but this leads to a worse user experience.

The ASP.NET Core application is protected using Microsoft.Identity.Web. This takes care of the client authentication flows using Azure AD B2C as the identity provider. Once authenticated, the session is stored in a cookie. A distributed cache is added to record the last activity of each user. An IAsyncPageFilter implementation is used and added as a global filter to all requests for Razor Pages. The SessionTimeoutAsyncPageFilter class implements the IAsyncPageFilter interface.

builder.Services.AddDistributedMemoryCache();

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration, "AzureAdB2c")
    .EnableTokenAcquisitionToCallDownstreamApi(Array.Empty<string>())
    .AddDistributedTokenCaches();

builder.Services.AddAuthorization(options =>
{
    options.FallbackPolicy = options.DefaultPolicy;
});

builder.Services.AddSingleton<SessionTimeoutAsyncPageFilter>();

builder.Services.AddRazorPages()
    .AddMvcOptions(options =>
    {
        options.Filters.Add(typeof(SessionTimeoutAsyncPageFilter));
    })
    .AddMicrosoftIdentityUI();

The IAsyncPageFilter interface is used to catch the requests for the Razor Pages. The OnPageHandlerExecutionAsync method is used to implement the automatic end-session logic. We use the default name identifier claim type to get an ID for the user. If you are using the standard claims instead of the Microsoft namespace mapping, this would be different; match the claim returned in the id_token from the OpenID Connect authentication. I check for idle time: if no request was sent in the last n minutes, the application signs out, both from the local cookie and on Azure AD B2C. It is important to sign out on the identity provider as well. If the idle time is less than the allowed time span, the DateTime timestamp is persisted to the cache.

public async Task OnPageHandlerExecutionAsync(PageHandlerExecutingContext context,
    PageHandlerExecutionDelegate next)
{
    var claimTypes = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier";

    var name = context.HttpContext
        .User
        .Claims
        .FirstOrDefault(c => c.Type == claimTypes)!
        .Value;

    if (name == null)
        throw new ArgumentNullException(nameof(name));

    var lastActivity = GetFromCache(name);
    if (lastActivity != null && lastActivity.GetValueOrDefault()
        .AddMinutes(timeoutInMinutes) < DateTime.UtcNow)
    {
        await context.HttpContext
            .SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);
        await context.HttpContext
            .SignOutAsync(OpenIdConnectDefaults.AuthenticationScheme);
    }

    AddUpdateCache(name);

    await next.Invoke();
}

A distributed cache is used to persist each user's last-activity time per session. This might be expensive for applications with many users. In this demo, the UTC now value is used for the check. This might need to be improved, and the cache lifetime as well; it needs to be validated whether this is enough for all the different timeout combinations.

private void AddUpdateCache(string name)
{
    var options = new DistributedCacheEntryOptions()
        .SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

    _cache.SetString(name, DateTime.UtcNow.ToString("s"), options);
}

private DateTime? GetFromCache(string key)
{
    var item = _cache.GetString(key);

    if (item != null)
    {
        return DateTime.Parse(item);
    }

    return null;
}

When the session times out, the code executes the OnPageHandlerExecutionAsync method and signs the user out.

This works for Razor Pages. This is not the only way of supporting this, and it is not an easy requirement to fully implement. The next step would be to support this from SPA UIs which send JavaScript or Ajax requests.

Links

https://learn.microsoft.com/en-us/azure/active-directory-b2c/openid-connect#send-a-sign-out-request

https://learn.microsoft.com/en-us/aspnet/core/razor-pages/filter?view=aspnetcore-7.0

https://github.com/AzureAD/microsoft-identity-web

Friday, 03. March 2023

Bill Wendels Real Estate Cafe

LIVEStream: MIT Generative AI Summit – what implications for residential real estate?

(Technical problems: please forgive formatting above, will try to correct) #Back2REFuture: Billed as 1st-ever Generative AI Summit, real estate thought leaders have already been asking… The post LIVEStream: MIT Generative AI Summit – what implications for residential real estate? first appeared on Real Estate Cafe.

(Technical problems: please forgive formatting above, will try to correct) #Back2REFuture: Billed as 1st-ever Generative AI Summit, real estate thought leaders have already been asking…

The post LIVEStream: MIT Generative AI Summit – what implications for residential real estate? first appeared on Real Estate Cafe.

Thursday, 02. March 2023

Phil Windleys Technometria

Learning Digital Identity Podcasts

If you enjoy listening more than reading, these two podcasts may be just the thing to get you up to speed on learning digital identity.

I was recently the guest on a couple of podcasts to talk about my new book, Learning Digital Identity. The first was with Mathieu Glaude, CEO of Northern Block. The second was with Sam Curren, the Deputy CTO of Indicio. One of the fun things about these podcasts is how different they were despite being about the same book.

Mathieu focused on relationships, a topic I deal with quite a bit in the book since I believe we build identity systems to manage relationships, not identities. In addition we discussed the tradespace among privacy, authenticity, and confidentiality and how verifiable credentials augment and improve attribute-based access control (ABAC).

Sam and I discussed identity metasystems and why they're necessary for building identity systems that enable us to live effective online lives. We also talked about Kim Cameron's Laws of Identity and how they help analyze identity systems and their features. Other topics include the relationship between DIDComm and identity, how self-sovereign identity relates to IoT, and the relationship between trust, confidence, and governance.

These were both fun conversations and I'm grateful to Mathieu and Sam for lining them up. If you'd like me to talk about the book on a podcast you or your company hosts, or even for a private employee event, I'm happy to. Just send me a note and we'll line it up.

Wednesday, 01. March 2023

@_Nat Zone

Thursday, March 2, 10 PM: casual livestream read-through of the NIST SP800-63-4 draft, session #7. Finally the last session?!

The casual read-through of the US NIST SP800-63-4 Initial Public Draft (published last December, with public comments accepted until March 24) that we started in January finally reaches its end with session #7.

Session #7 takes place on March 2 from 10 PM. This time we cover the fourth of the four volumes, Part C, "Federation and Assertions". Probably the final session?!

The SP800-63-4 ipd consists of the following four volumes:

SP800-63-4 https://nvlpubs.nist.gov/nistpubs/Spe…
SP800-63A-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63B-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63C-4 https://doi.org/10.6028/NIST.SP.800-6…

We will also be submitting public comments.

You can participate via chat or via video1. Video participation uses mmhmm; if you would like to join with mmhmm, please contact me and I will send you a link. For chat, you should be able to take part once you have been subscribed to this YouTube channel for at least one minute. However, in a previous session someone who subscribed just before the start was unable to post to chat, so subscribing in advance is recommended.


Foss & Crafts

56: Make your own dehydrated meals

In yet another deep dive into yet another weird hobby of Christine's, we talk about how to make your own dehydrated meals! Why the heck would you want to do this? Well, maybe you want more consistent or dietary needs friendly travel food! Maybe you want to go camping or hiking! Maybe you're sick of deciding what's for lunch and you just want to scoop a cup of meal out of a jar on your desk every day! Maybe you want to weird out your fellow conference-goers as you turn a dry powder into a fully cooked meal with hot water and hot water alone!

Links:

Making dehydrated meals overview (Christine's Kitchen 0): [YouTube] [PeerTube]

Backpacking chef

Dishwasher cooking (yes it is a thing)

Tuesday, 28. February 2023

Hyperonomy Digital Identity Lab

Microsoft Entra Verified ID SDK: Developer Resources

Microsoft Entra Verified ID SDK: Developer Resources

Overview – https://youtu.be/ByheAyHKPEY
Setup and Configuration – https://youtu.be/LMNP2pi1bpU
Create your first credential, customize your credential, issuance & verification flows – https://youtu.be/2RRqFNthLgw
Service Request API and Admin APIs – https://youtu.be/xTY82US6fF8
GitHub samples – https://youtu.be/c3fOHb48M14 (http://aka.ms/vcsample)

Microsoft Entra Verified ID SDK: Video Playlist

Microsoft Entra Verified ID: Video Playlist (https://www.youtube.com/playlist?list=PLU-rWqHm5p46_FLmQDcFgrhiXHkdj6zAt)

Web 7.0 Evaluation

Currently under evaluation as the primary identity system for Web 7.0:

Web 7.0 Standard Base Specification Set – Video Tutorials (https://www.youtube.com/playlist?list=PLU-rWqHm5p44AsU5GLsc1bHC7P8zolAAf)

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

https://hyperonomy.com/2022/12/18/web-7-0-didcomm-agent-architecture-reference-model-didcomm-arm-0-40-december-18-2022/

Monday, 27. February 2023

Identity Woman

Biometric Exit – its creepy.

I work on digital identity, and the experience described below really highlighted the problem with much of the biometrics industry and the governments who use the technology – no explanation of what is actually happening or how a system got the template of my face. It re-affirmed for me the need for the Thoughtful Biometrics Workshop […]

The post Biometric Exit – its creepy. appeared first on Identity Woman.


Damien Bod

Onboarding users in ASP.NET Core using Azure AD Temporary Access Pass and Microsoft Graph

The article looks at onboarding different Azure AD users with a temporary access pass (TAP) and some type of passwordless authentication. An ASP.NET Core application is used to create the Azure AD member users, which can then use a TAP to set up the account. This is a great way to onboard users in your tenant.

Code: https://github.com/damienbod/AzureAdTapOnboarding

The ASP.NET Core application needs to onboard different types of Azure AD users. Some users cannot use passwordless authentication (yet), so a password setup is also required for these users. TAP only works with members, and we also need to support guest users with some alternative onboarding flow. Different types of user flows are supported or possible:

AAD member user flow with TAP and FIDO2 authentication
AAD member user flow with password using email/password authentication
AAD member user flow with password setup and a phone authentication
AAD guest user flow with federated login
AAD guest user flow with Microsoft account
AAD guest user flow with email code

FIDO2 should be used for all enterprise employees with an office account in the enterprise. If this is not possible, then at least the IT administrators should be forced to use FIDO2 authentication, and companies should be planning a strategy for how to move to phishing resistant authentication. This could be enforced with PIM and a continuous access policy for administration jobs. Using FIDO2, the identities are protected with phishing resistant authentication. This should be a requirement for any professional solution.

Azure AD users with no computer can use an email code or a SMS authentication. This is a low security authentication and applications should not expose sensitive information to these user types.

Setup

The ASP.NET Core application uses Microsoft.Identity.Web and the Microsoft.Identity.Web.MicrosoftGraphBeta Nuget packages to implement the Azure AD clients. The ASP.NET Core client is a server rendered application and uses an Azure App registration which requires a secret or a certificate to acquire access tokens.

The onboarding application uses Microsoft Graph applications permissions to create the users and initialize the temporary access pass (TAP) flow. The following application permissions are used:

User.EnableDisableAccount.All
User.ReadWrite.All
UserAuthenticationMethod.ReadWrite.All

The permissions are added to a separate Azure App registration and require a secret to use. In a second phase, I will look at implementing the Graph API access using Microsoft Graph delegated permissions. It is also possible to use a service managed identity to acquire a Graph access token with the required permissions.
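The GetGraphClientWithManagedIdentityOrDevClient helper used in the code below is not shown in this post. As a rough illustration of the client credentials variant only (the class name and configuration keys here are assumptions, not taken from the sample), a Graph client with application permissions might be created like this:

using Azure.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Graph;

public static class GraphAppClientSketch
{
    // A minimal sketch: application permissions using a client secret.
    public static GraphServiceClient Create(IConfiguration configuration)
    {
        var tenantId = configuration["AzureAd:TenantId"];
        var clientId = configuration["AzureAd:ClientId"];
        var clientSecret = configuration["AzureAd:ClientSecret"];

        // Application permissions always use the .default scope.
        var scopes = new[] { "https://graph.microsoft.com/.default" };

        var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
        return new GraphServiceClient(credential, scopes);
    }
}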

Onboarding members using passwordless

When onboarding a new Azure AD user with passwordless and TAP, this needs to be implemented in two steps. Firstly, a new Microsoft Graph user is created with the type member. This takes an unknown length of time to complete on Azure AD. When this is finished, a new TAP authentication method is created. I used the Polly Nuget package to retry this until the TAP request succeeds. Once successful, the temporary access pass is displayed in the UI. If this was a new employee or something like this, you could print this out and let the user complete the process.

private async Task CreateMember(UserModel userData)
{
    var createdUser = await _aadGraphSdkManagedIdentityAppClient
        .CreateGraphMemberUserAsync(userData);

    if (createdUser!.Id != null)
    {
        if (userData.UsePasswordless)
        {
            var maxRetryAttempts = 7;
            var pauseBetweenFailures = TimeSpan.FromSeconds(3);
            var retryPolicy = Policy
                .Handle<HttpRequestException>()
                .WaitAndRetryAsync(maxRetryAttempts, i => pauseBetweenFailures);

            await retryPolicy.ExecuteAsync(async () =>
            {
                var tap = await _aadGraphSdkManagedIdentityAppClient
                    .AddTapForUserAsync(createdUser.Id);

                AccessInfo = new CreatedAccessModel
                {
                    Email = createdUser.Email,
                    TemporaryAccessPass = tap!.TemporaryAccessPass
                };
            });
        }
        else
        {
            AccessInfo = new CreatedAccessModel
            {
                Email = createdUser.Email,
                Password = createdUser.Password
            };
        }
    }
}

The CreateGraphMemberUserAsync method creates a new Microsoft Graph user. To use a temporary access pass, a member user must be used; guest users cannot be onboarded like this. Even though we do not use a password in this process, the Microsoft Graph user validation forces us to create one. We just create a random password and do not return it to the user; this password will not be updated.

public async Task<CreatedUserModel> CreateGraphMemberUserAsync(UserModel userModel)
{
    if (!userModel.Email.ToLower().EndsWith(_aadIssuerDomain.ToLower()))
    {
        throw new ArgumentException("A guest user must be invited!");
    }

    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var password = GetRandomString();
    var user = new User
    {
        DisplayName = userModel.UserName,
        Surname = userModel.LastName,
        GivenName = userModel.FirstName,
        OtherMails = new List<string> { userModel.Email },
        UserType = "member",
        AccountEnabled = true,
        UserPrincipalName = userModel.Email,
        MailNickname = userModel.UserName,
        PasswordProfile = new PasswordProfile
        {
            Password = password,
            // We use TAP if a passwordless onboarding is used
            ForceChangePasswordNextSignIn = !userModel.UsePasswordless
        },
        PasswordPolicies = "DisablePasswordExpiration"
    };

    var createdUser = await graphServiceClient.Users
        .Request()
        .AddAsync(user);

    return new CreatedUserModel
    {
        Email = createdUser.UserPrincipalName,
        Id = createdUser.Id,
        Password = password
    };
}
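The GetRandomString helper is not shown in the post. A minimal sketch of what it might look like (the actual implementation in the sample repository may differ); it only has to satisfy the tenant's password complexity rules, since the generated password is never handed to the user:

private static string GetRandomString()
{
    // Hypothetical implementation: cryptographically random bytes, plus a fixed
    // prefix to cover the usual complexity classes (upper case, digit, symbol).
    var bytes = System.Security.Cryptography.RandomNumberGenerator.GetBytes(32);
    return $"A1!{Convert.ToBase64String(bytes)}";
}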

The TemporaryAccessPassAuthenticationMethod object is created using Microsoft Graph. We create a use-once TAP. The access code is returned and displayed in the UI.

public async Task<TemporaryAccessPassAuthenticationMethod?> AddTapForUserAsync(string userId)
{
    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var tempAccessPassAuthMethod = new TemporaryAccessPassAuthenticationMethod
    {
        //StartDateTime = DateTimeOffset.Now,
        LifetimeInMinutes = 60,
        IsUsableOnce = true,
    };

    var result = await graphServiceClient.Users[userId]
        .Authentication
        .TemporaryAccessPassMethods
        .Request()
        .AddAsync(tempAccessPassAuthMethod);

    return result;
}

The https://aka.ms/mysecurityinfo link can be used to complete the flow. The new user can click this link and enter the email and the access code.

Now that the user is authenticated, he or she can add a passwordless authentication method. I use an external FIDO2 key.

Once setup, the user can register and authenticate. You should use at least two security keys.

This is an awesome way of onboarding users which allows users to authenticate in a phishing resistant way without requiring or using a password. FIDO2 is the recommended and best way of authenticating users and with the rollout of passkeys, this will become more user friendly as well.

Onboarding members using password

Due to the fact that some companies still use legacy authentication, or because we would like to support users with no computer, we also need to onboard users with passwords. When using passwords, the user needs to update the password on first use. The user should add an MFA method, if this is not forced by the tenant. Some employees might not have a computer and would like to use a phone to authenticate. An SMS code would be a good way of achieving this. This is of course not very secure, so you should expect these accounts to get lost or breached, and sensitive data should be avoided in applications used by these accounts. The device code flow could be used on a shared PC together with the user's mobile phone. Starting an authentication flow from a QR code is insecure as it is not safe against phishing, but as SMS is used for this type of user, the flow is already not very secure. Again, sensitive data must be avoided in applications accepting these low security accounts. It's all about balance; maybe someday soon, all users will have FIDO2 security keys or passkeys to use and we can avoid these sorts of solutions.

Onboarding guest users (invitations)

Guest users cannot be onboarded by creating a Microsoft Graph user. You need to send an invitation to the guest user for your tenant. Microsoft Graph provides an API for this. There are different types of guest users, depending on the account type and the authentication method type. The invitation returns an invite redeem URL which can be used to set up the account. This URL is mailed to the email address used in the invite and does not need to be displayed in the UI.

private async Task InviteGuest(UserModel userData)
{
    var invitedGuestUser = await _aadGraphSdkManagedIdentityAppClient
        .InviteGuestUser(userData, _inviteUrl);

    if (invitedGuestUser!.Id != null)
    {
        AccessInfo = new CreatedAccessModel
        {
            Email = invitedGuestUser.InvitedUserEmailAddress,
            InviteRedeemUrl = invitedGuestUser.InviteRedeemUrl
        };
    }
}

The InviteGuestUser method is used to create the invite object, and this is sent as an HTTP POST request to the Microsoft Graph API.

public async Task<Invitation?> InviteGuestUser(UserModel userModel, string redirectUrl)
{
    if (userModel.Email.ToLower().EndsWith(_aadIssuerDomain.ToLower()))
    {
        throw new ArgumentException("user must be from a different domain!");
    }

    var graphServiceClient = _graphService
        .GetGraphClientWithManagedIdentityOrDevClient();

    var invitation = new Invitation
    {
        InvitedUserEmailAddress = userModel.Email,
        SendInvitationMessage = true,
        InvitedUserDisplayName = $"{userModel.FirstName} {userModel.LastName}",
        InviteRedirectUrl = redirectUrl,
        InvitedUserType = "guest"
    };

    var invite = await graphServiceClient.Invitations
        .Request()
        .AddAsync(invitation);

    return invite;
}

Notes

Onboarding users with Microsoft Graph can be complicated because you need to know which parameters to use and how the users need to be created. Azure AD members can be created using the Microsoft Graph user APIs; guest users are created using the Microsoft Graph invitation APIs. Onboarding users with TAP and FIDO2 is a great way of implementing this workflow. As of today, this is still part of the beta release.

Links

https://entra.microsoft.com/

https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-temporary-access-pass

https://learn.microsoft.com/en-us/graph/api/authentication-post-temporaryaccesspassmethods?view=graph-rest-1.0&tabs=csharp

https://learn.microsoft.com/en-us/graph/authenticationmethods-get-started

https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises

Create Azure B2C users with Microsoft Graph and ASP.NET Core
Onboarding new users in an ASP.NET Core application using Azure B2C
Disable Azure AD user account using Microsoft Graph and an application client
Invite external users to Azure AD using Microsoft Graph and ASP.NET Core

https://learn.microsoft.com/en-us/azure/active-directory/external-identities/external-identities-overview

https://learn.microsoft.com/en-us/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal

Sunday, 26. February 2023

Jon Udell

Mapping people and tags on Mastodon

In Mastodon relationship graphs I showed how to use Steampipe to map Mastodon network neighborhoods. When I use the word map here, I’m channeling Denis Wood’s The Power of Maps:

Every map shows this … but not that, and every map shows what it shows this way … but not the other way. Not only is this inescapable but it is precisely because of this interested selectivity — this choice of word or sign or aspect of the world to make a point — that the map is enabled to work.

The aspect chosen by those neighborhood maps is the boost — the Mastodon version of a retweet. One of the maps focuses on a selected instance that appears in the home timeline. It shows people who belong to that instance and who boost toots from people on the same or different instances.

The other map zooms out to show boost relationships among all the instances that appear in the home timeline. This view wouldn’t be legible if it included people, so it omits them in order to focus on server-to-server relationships.

These maps represent (or as Denis Wood emphasizes, “re-present”) a set of toots. They omit original toots to which nobody replies, and they also omit replies, in order to focus on boost relationships. What about replies? That would be a different map, one that might also be interesting to draw.

Meanwhile, though, I’ve come up with another map to display the tags that appear in the results of a Mastodon tag search, along with the accounts that use those tags. It proved its worth today when I was looking for perspectives on Section 230 of the Communications Decency Act. As you’ve likely noticed, the US Supreme Court is reconsidering Section 230. My understanding of the topic wasn’t current, I wanted to refresh it, and I especially wanted to test whether Mastodon could provide a useful alternative to a conventional web search.

One affordance that Mastodon provides: search for toots that use the #Section230 tag. Here are two ways to map the results of that search.

On the left is a conventional Mastodon view: a list of toots that match the query. In this case the article I ultimately wanted to read appears way down in that list. The toot that announced it was from The Markup, “a nonprofit newsroom that investigates how powerful institutions are using technology to change our society.” The article, Section 230 Is a Load-Bearing Wall — Is It Coming Down?, transcribes part of a conversation with two legal scholars whom I know to be reliable guides to Net-related issues.

On the right is my Steampipe-based Mastodon tag explorer. Working with the same data, it surfaced The Markup’s article in a way that brought it immediately to my attention. The first thing that caught my eye was the conjunction of two tags: #section230 and #scotus. Since the Supreme Court’s interest in Section 230 is what’s driving the current news cycle, I wanted to hear from legal scholars qualified to discuss the Court’s interest in Section 230. So the tag conjunction was a significant landmark.

The map displayed two nodes that connect to both #section230 and #scotus. How did I choose between them? My prior familiarity with The Markup led me to click on that node and visit the Markup’s Mastodon instance where I read the article.

Had I been following The Markup then, as I am now, I would likely have seen the article on the news list to which I’ve assigned The Markup’s account. But that wouldn’t have changed the experience of searching for the #section230 tag. The relationship graph works by reformulating the results of that search. It omits the text of toots that contain the tag, and the images in those toots, in order to highlight two aspects of the result list: people (or accounts) and tags. It contextualizes those tags by charting their relative frequency in the result list. And it attaches, to each tag node, a link to a new graph focused on that tag.

This “interested selectivity” enables the map to do its work: find accounts that use given tags. Like a tag node, an account node provides a link — in this case, to the account’s Mastodon home page. It also reports the account’s description using a property that appears when hovering over the node. So if I were unfamiliar with The Markup I could reveal its description without leaving the graph. Here’s the query that adds that property to the node.

select
  note
from
  mastodon_search_account
where
  query = 'https://mastodon.themarkup.org/@themarkup'

+---------------------+
| note                |
+---------------------+
| Watching Big Tech.  |
+---------------------+

That query is embedded in another query that joins across two Steampipe plugins: one that wraps the Mastodon API and another that queries RSS feeds. That’s because, as noted in Mastodon, Steampipe, and RSS, the RSS feeds that Mastodon provides for tags enrich the results available from the core API.

Enabling SQL to query diverse APIs in a common way is one of Steampipe’s core superpowers. Enabling such queries to form the nodes and edges of relationship graphs is another. Used together, these two superpowers enable maps that select what is helpful, omit what isn’t, and thus “re-present” information for a given purpose.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/


Identity Woman

Digital ID architectures matter. So does the way we talk about them.

I am reading a new article out asking if 2023 is the year of Digital ID. It might be – lots of good developments are happening. AND we also have a disaster of communication about how some systems work. The article is referencing the way that TSA works where the employee puts the ID into […]

The post Digital ID architectures matter. So does the way we talk about them. appeared first on Identity Woman.

Saturday, 25. February 2023

Identity Praxis, Inc.

Whitepaper: Is It Rude to Ask Your Age?

Worldwide, policymakers, in response to measurable threats and harms to children online and due to a desire to make the Internet safer for children, are updating existing and issuing new age verification and assurance regulations. Age verification is the practice of verifying an individual’s age. Age assurance is a process of verifying or estimating, with a level of certainty and accuracy, an individual’s age.

The goal of this paper is to document and share the prevailing consensus on what is happening with age assurance regulations — how regulations and the field are most likely to evolve, and what actions organizations will likely be required to take to comply with the new regulations and technical demands.

Age assurance is the collective term used globally to refer to the processes and technologies used to verify an individual’s age, estimate an individual’s age, and determine with a level of certainty that an individual’s claimed or estimated age is accurate (i.e., truthful, substantiated, and verifiable).

We discuss why age verification and assurance matters and what you and your organization should consider to help make the internet a safer place for children and to protect your organization.

You can download the complete 60-page whitepaper here.

The post Whitepaper: Is It Rude to Ask Your Age? appeared first on Identity Praxis, Inc..

Friday, 24. February 2023

@_Nat Zone

The OpenWallet Foundation has launched

[Brussels, 23rd] The launch of the OpenWallet Foundation was announced late at night on the 23rd, Japan time.

The OpenWallet Foundation is an open source organization under the Linux Foundation that builds common components for "wallets", the digital wallets that hold things like digital certificates, keys, tickets, and cards. On top of the projects that develop individual features, the foundation is organized with a board of directors, a government advisory council, and a technical advisory council.

The initial board members are Accenture, Futurewei, Gen (formerly Norton + Avast), and VISA1, while the general members2 include Amex among others.

OpenWallet is not a wallet itself. Just as Blink and similar projects are not web browsers themselves but web browser engines, OpenWallet provides a wallet engine. This allows each company to build its own wallet.

OpenWallet is a wallet engine
(source) OpenWallet Foundation

The OpenWallet Foundation also does not develop standards. What it does provide is a library containing the set of functions needed to build a wallet, in other words a wallet engine. Using it, each company can build its own wallet. If wallets share the same engine, moving from one wallet to another is also expected to become easier.

At the event accompanying the announcement, Nat Sakimura, chairman of the OpenID Foundation, said the following:

User-centricity and privacy are core principles of the OpenID Foundation, the technical standards body for digital identity. Just as browsers were 15 years ago, wallets are now on the front line of achieving these principles. But merely having a wallet in the middle is not enough to achieve them. We need to be able to trust the wallet's code. For that, the transparency provided by an open source implementation is essential, and we look forward to working with the OpenWallet Foundation.

(source) Nat Sakimura

More details about the OpenWallet Foundation are available at https://openwallet.foundation.


Identity Woman

Biometrics in Airports? Senators call for a ban. The IBIA & SIA respond. Lets discuss at Thoughtful Biometrics Workshop March 16th.

One of the big developments related to biometrics in the United States in the last month is a call by senators “calling on TSA to immediately halt its deployment of facial recognition technology.” My understanding of what is going on is facial matching between a presented document with a photo on it (passport or drivers […]

The post Biometrics in Airports? Senators call for a ban. The IBIA & SIA respond. Lets discuss at Thoughtful Biometrics Workshop March 16th. appeared first on Identity Woman.

Thursday, 23. February 2023

Phil Windleys Technometria

Why Doesn't This Exist Already?

I've had a busy month. Something had to drop and it was regular updates here. Hopefully I’m back to a more regular schedule now.

Technological change is messy and hard to predict. Still we can look at the direction in which digital identity is evolving to get an idea what's coming—even if we can't predict the timing.

Riley Hughes and I recently had a conversation about the question "why doesn't SSI exist already?" Sometimes the question is asked because people think it's such a natural idea that it's surprising that it's not the norm. Other times, the question is really a statement "if this were really a good idea, it would exist already!" Regardless of which way it's asked, the answer is interesting since it's more about the way technology develops and is adopted than the technology itself.

Riley calls identity products "extremely objectionable" meaning that there are plenty of reasons for people to object to them including vendor lock-in, privacy concerns, security concerns, and consumer sentiment. I think he's right. You're not asking people and companies to try a new app (that they can easily discard if it doesn't provide value). You're asking them to change the fundamental means that they use to form, manage, and use online relationships.

The last chapter of my new book, Learning Digital Identity, makes a case that there is an existing identity metasystem that I label the Social Login (SL) metasystem. The SL metasystem is supported by OpenID Connect and the various "login in with..." identity providers. The SL metasystem is widely used and has provided significant value to the online world.

There is also an emerging Self-Sovereign Identity (SSI) metasystem based on DIDs and verifiable credentials. I evaluate each in terms of Kim Cameron’s Laws of Identity. In this evaluation, the SL metasystem comes out pretty good. I believe this accounts for much of its success. But it fails in some key areas like not supporting directed (meaning not public) identifiers. As a result of these failings, SLM has not been universally adoptable. Banks, for example, aren’t going to use Google Signin for a number of reasons.

The SSI Metasystem, on the other hand, meets all of Cameron’s Laws. Consequently, I think it will eventually be widely adopted and gradually replace the SL metasystem. The key word being eventually. The pace of technological change leads us to expect that change will happen very quickly. Some things (like the latest hot social media app) seem to happen overnight. But infrastructural change, especially when it requires throwing out old mental models about how things should work, is much slower. The fact is, we’ve been building toward the ideas in SSI (not necessarily the specific tech) for several decades. Work at IIW on user-centric identity led to the SL metasystem. But the predominant mental model of that metasystem didn't change much from the one-off centralized accounts people used before. You still get an account administered by the relying party; they've just outsourced the authentication to someone else. (Which means another party is intermediating the relationship.) Overcoming that mental model, especially with entrenched interests, is a long slog.

In the 80s and 90s (pre-web) people were only online through the grace of their institution (university or company). So, I was windley@cs.ucdavis.edu and there was no reason to be anything else. When the web hit, I needed to be represented (have an account) in dozens or hundreds of places with whom I no longer had a long term relationship (like employee or student). So, we moved the idea of an account from workstation operating systems to the online service. And became Sybill.

When Kim first introduced the Laws of Identity, I literally didn’t understand what he was saying. I understood the words but not the ideas. Certainly not the ramifications. I don’t think many did. He’s the first person I know who understood the problems and set out a coherent set of principles to solve them. We used Infocards in our product at Kynetx and they worked pretty well. But because of how they were rolled out, people came to associate them strictly with Microsoft. The SL metasystem won out, offering the benefits of federation, without requiring that people, developers, or companies change their mental model.

Changing metasystems isn't a matter of technology. It's a social phenomenon. Consequently it's slow and messy. Here's my answer to the question "why doesn't this exist yet?" The arc of development for digital identity systems has been bending toward user-controlled, decentralized digital identity for decades. That doesn't mean that SSI, as currently envisioned, is inevitable. Just that something like it, that better complies with Kim's laws than the current metasystem, is coming. Maybe a year from now. Maybe a decade. No one can say. But it's coming. Plan and work accordingly.


SAMMANTICS

Skills Needed to Become a Blockchain Developer

Blockchain is one of the fastest-growing technologies in the world. Most companies are trying to implement the scope of blockchain for the growth of their business. Therefore, they are looking for skilled blockchain developers who would help them to execute new technologies and to update their existing systems so that they can ensure growth in the market.

As blockchain is revolutionizing the world, there is a huge demand for blockchain developers. Without them, it is not possible to utilize the benefits of blockchain technology. Developers are responsible for researching blockchain technology and putting its benefits into practice: building the system’s architecture, creating smart contracts, and optimizing the protocols. As they need to handle a lot of responsibilities, they need to have certain skills.

Cryptography

To offer a safe blockchain environment, cryptography is very important. Therefore, blockchain developers must have a good understanding of cryptographic practices and concepts, including crypto keys, wallets, and digital signatures. Developers must know how to make use of public-key cryptography to avoid unauthorized access. They should also understand the differences between cryptographic hash functions such as Keccak-256 and SHA-256.
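As a small illustration of the hash functions mentioned above, hashing a value with SHA-256 in C# is straightforward; Keccak-256 (as used by Ethereum) is not in the .NET base class library and would need a third-party package such as BouncyCastle.

using System;
using System.Security.Cryptography;
using System.Text;

class HashDemo
{
    static void Main()
    {
        // Hash an arbitrary payload with SHA-256 and print it as hex.
        byte[] hash = SHA256.HashData(Encoding.UTF8.GetBytes("hello blockchain"));
        Console.WriteLine(Convert.ToHexString(hash));
        // Keccak-256 is not in the base class library; a library such as
        // BouncyCastle would be needed for an Ethereum-style hash.
    }
}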

Blockchain architecture

Blockchain developers must have a good idea of how blockchain works and of its architecture. They should also know concepts like consensus, smart contracts, and distributed ledgers that are very important to the functioning of blockchain technology. They should also know the four basic types of blockchain architecture: private, public, consortium, and hybrid.

Data Structures

Data structures are an unavoidable part of the whole blockchain network. Every block is a type of data structure that gathers the transactions for the public ledger. Blockchain developers must study data structures thoroughly and should understand the way a blockchain network makes use of them. Developers must also understand the different types of data structures like heaps, graphs, Merkle trees, Patricia trees, and hash trees.
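To make the Merkle tree idea concrete, here is a minimal sketch of computing a Merkle root over a list of transactions in C#. It is simplified (the last hash is duplicated when a level has an odd count, and real blockchains differ in their exact encoding), so treat it as an illustration rather than a production implementation.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class MerkleSketch
{
    // Computes a simplified Merkle root: hash the leaves, then repeatedly
    // hash concatenated pairs until one hash remains.
    public static byte[] MerkleRoot(IEnumerable<string> transactions)
    {
        var level = transactions
            .Select(tx => SHA256.HashData(Encoding.UTF8.GetBytes(tx)))
            .ToList();

        if (level.Count == 0) throw new ArgumentException("No transactions.");

        while (level.Count > 1)
        {
            // Duplicate the last hash if the count is odd (one common convention).
            if (level.Count % 2 == 1) level.Add(level[^1]);

            var next = new List<byte[]>();
            for (int i = 0; i < level.Count; i += 2)
            {
                next.Add(SHA256.HashData(level[i].Concat(level[i + 1]).ToArray()));
            }
            level = next;
        }

        return level[0];
    }
}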

Smart Contracts

Smart contracts allow two parties to transfer services and goods without any intermediaries. Smart contracts are a predominant factor while implementing blockchains. Therefore, developers must be familiar with these smart contracts and how they can be used for business. They should also know the programming languages that are widely used in the execution of smart contracts like Chaincode, Solidity, or Vyper.

Web development

Web development is very important as blockchain gives high importance to decentralized applications. Therefore developers must have good skills in web development like designing and developing web applications, technologies that back them up, and the steps that can be taken for the optimization and security of these applications.

Programming languages

Various programming languages are used by blockchain technologies, and they differ according to the platform used to run the blockchain environment. Developers need not be experts in all of them, but they should have some basic knowledge of popular languages like Python, C++, or JavaScript. It is also good if developers have some understanding of object-oriented programming.


Hyperonomy Digital Identity Lab

Proposal 4: DID Communications (DIDComm) Protocol as the basis for a Unified Trust Spanning Protocol 0.28 (FINAL 3)

Copyright (c) 2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Version 0.28 of Proposal 4 presents a compelling story (with irrefutable evidence and examples) supporting the selection of the DID Communications (DIDComm) protocol as the Layer 2 Trust Spanning Protocol for any and all decentralized system ecosystems:

Web 7.0 DIDComm-ARM
ToIP Technical Architecture Specification (ToIP TAS)
etc.

Please enjoy the following downloadable presentation. I look forward to your comments and feedback. A downloadable version of the Web 7.0 DIDComm Notation Template is also listed below.

Change Log

Downloadable Presentation

Click below to download version 0.28 of the Proposal 4: DID Communications (DIDComm) Protocol as the basis for a Unified Trust Spanning Protocol 0.28 presentation.

web-7.0-tslf-proposal-4-toip-0.28-final3 (Download)

Downloadable Web 7.0 DIDComm Notation Template

Web 7.0 DIDComm Notation Template

Feedback

Post your comments and feedback to the following ToIP Technical Architecture discussion item 27:

https://github.com/trustoverip/trust-spanning-protocol/discussions/27

Tuesday, 21. February 2023

MyDigitalFootprint

Can models help us navigate uncertainty?

Probably not, but at the end there are 10 ways to prevent assumptions from becoming culture

 

Post the 2008 global financial crisis, the president of the European Central Bank, Jean-Claude Trichet, said in an opening address at the ECB Central Banking Conference Frankfurt, 18 November 2010, “As a policymaker during the crisis, I found the available models of limited help. In fact, I would go further: In the face of the crisis, we felt abandoned by conventional tools.”



The Gap

Many in executive and leadership positions have for a while faced a feeling that our models, simulations and computations are ill-suited to addressing big, messy, complicated real-world problems. There is a gap between the reality we read and measure today and the model that predicted today a short period ago. Since we have yet to create new models and continue to utilise the same ideals, we know there is a gap between prediction and reality, and that gap is leading to poor decisions, which make us see a bigger gap.

Decisions based on models today have short and long-term consequences. Poor decisions because of broken models lead to us missing an agreed performance metric and impact our children as they live with choices removed and unintended outcomes. 

It is accepted by some commentators that utility maximisation (the economic concept) is a poor way to navigate uncertainty.  The reason is that uncertainty means you cannot be sure you are optimising for the best thing in the short and long term. The map and directions our economic models provide cannot deliver a compass for wicked problems such as climate change and pandemics. 

A reason our models fail may be that they start with the wrong outcome preference, framing and assumption, which over time are transformed to no longer be preferences, framings and assumptions but beliefs and culture, and we cannot change them before they break — path dependency.  An assumption our utility maximisation economic model took a while back was to value the time of humans unequally (discrimination), and now technology laboured to convert the earth's unvalued natural resources. With this model, humanity ignores the consequences of previous generations' assumptions because they are now cultural, which we experience as differing views on the environment, society and climate.

Does system thinking break at earth scale?

Many of our complex models start with a decomposition ideal to break apart complexity into small discrete components that we find we can model and explore. However, even our understanding of atoms, once the most fundamental building block, actually contains secrets that we still can't fathom; such is the joy of quantum mechanics.  As we break things apart, we break connections we did not know existed, and so we cannot recombine the components and make what we started with.

Components don't combine to make the whole. I can put all your chemistry in a bucket, but it will not make you.

If our model depends on the complete understanding of the smallest part and that the understanding is perfect and precise, then when doubt arises or the proof comes that this smallest part is not what we assumed, why do we think that the model will retain its ability to provide precise prediction?  When we study our smallest component parts in isolation, what linkages were there to other parts that we did not observe?

Our systems thinking and models depend on a form of separation, boundary, and interface that help us detail how these small components interact.  We have a duality of issue insomuch that if our understanding of the smaller part is not right and we also make an error in the dependencies about how this small part interacts, effects, affects and connects to other small parts, we end up with non-linear errors in our models.  Exponential (non-linear) errors, just like exponential growth, get big very quickly. 

There is a branch of thinking that retains the idea that we will get a large-scale model right because we should be able to model each small basic component part and the dependencies between all parts. But we face a dichotomy: we don't know if the whole can be represented by the parts, because of variables in time delays and the emergence of new recursive loops in the system, which is why our systems thinking breaks at the earth's scale. At scale, the earth is not a static model but a dynamic living, moving, changing and adapting system of systems. The earth is adapting (not learning), and today's system creates a dynamic change that means tomorrow's system has different dependencies. Our decisions impact and direct that change.

In the small backwater of behaviours, economics and social psychology, there is a discipline called anthropology, the study of human behaviours, which searches for differences and for how these tiny one-off behaviours and ideas may be the future and may scale. Hidden in plain sight among all the noise and hubris are the new dynamic changes, which means that our understanding of the small parts and their dependencies must adapt continually, or we must recognise that small errors at scale will give false conclusions. It is not that the emerging feedback loop existed when it started; it was created because of the dynamic nature of complex systems. We lack right now the tools to create such new dynamic loops. This means that all our models are wrong; we just don't know by how much.

We lack right now the tools to create such new dynamic loops. This means that all our models are wrong; we just don't know by how much.

Just because all models are wrong, that does not mean they are not useful, but it does mean we need all models and cannot align to only one when we are dealing with issues at the earth scale. This implies that 1.5 degrees is both right and wrong, as is 3.5 degrees. Indeed, the idea that any single temperature is right or wrong is naive. Further, any one metric that we use as a tool for complex decision-making is going to be wrong.

Decisions based on metrics.

The consequence of the maximisation theory that drives our model has ignored or unvalued some cost. It also maximises for a single outcome, which is never possible.  This means that the opposite side of that coin is true, that there will always be an overstatement of revenue and margin that can lead to false intentions.

We are drawn to decisions not based on costs but on the increment between revenue and costs (margin); we shortcut the complexity of the model to a single figure, ROI, just like temperature in climate models. ROI decisions based only on costs do not exist and, therefore, as a viewpoint, cost is discounted, reinforcing culture, framing and mindset. ROI will always fail as a decision tool, as there will always be undervalued costs and hurdles that mean we stretch the assumptions, but we love the idea of boiling complexity down to a simple linear narrative.

Can models navigate uncertainty? Probably not, but they are the best tool we have. Assuming they are a true compass is a dangerous game, especially if we have a culture in which we cannot challenge the model itself.

I suppose here is the reality: systems thinking will fail at the earth scale because assumptions become culture, and we don't have any examples of culture changing before it breaks. If the model cannot be challenged, we will have to wait until it breaks.

We don’t have any examples of culture changing before it breaks.

Leadership can stop assumptions from becoming culture

Assumptions can become part of a culture when they are repeated and reinforced over time, often without questioning or examining their accuracy or relevance.  Assumptions become a part of culture when they are widely held and perpetuated through social norms and values.

To prevent assumptions from becoming entrenched in the company culture, it's important that leadership take steps to challenge and question them, as well as to actively promote more inclusive and open-minded ways of thinking.

So we asked ChatGPT for some specific recommendations on how to prevent assumptions from becoming culture; we will leave it up to you to judge.

ChatGPT please give some specific steps that can help prevent assumptions from becoming a culture?

Encourage critical thinking: Promote a culture of critical thinking and encourage people to challenge assumptions and beliefs that are taken for granted. This can help to break down the barriers that prevent people from questioning assumptions and can lead to more open-minded and inclusive thinking.

Encourage and promote diversity: Encourage diversity in all forms, whether it's through hiring practices, community events, or other initiatives. This can help to break down stereotypes and promote understanding across different cultures and perspectives by exposing people to a wider range of perspectives and experiences.

Foster open communication: Create a culture of open communication where people feel comfortable sharing their thoughts and ideas without fear of judgment. This can help to prevent assumptions from becoming entrenched by allowing people to openly question and discuss them.

Educate everyone: Provide education and training on topics such as cultural competency, unconscious bias, and diversity and inclusion. This can help people to become more aware of their own assumptions and biases, and can promote more inclusive thinking. Share your findings and insights with others. Explain why certain assumptions are inaccurate and how they can be harmful. Encourage others to question their assumptions as well.

Lead by example: It's important to lead by example. Model the behavior that you want to see in others by questioning your own assumptions and being open to new perspectives. By doing so, you can help to create a culture where assumptions are not accepted without question, and where people are encouraged to think critically and with an open mind.

Emphasize transparency: Encourage open and honest language, and be transparent about the assumptions that underlie decisions and actions. This can help to build trust and promote accountability.

Continuously assess and evaluate: Regularly review assumptions and cultural norms to determine whether they are still relevant and accurate. This can help to prevent assumptions from becoming outdated or misaligned with current needs and realities.

Be aware of your assumptions: Start by examining your own assumptions and biases. Ask yourself where they come from and whether they are based on evidence or stereotypes. Awareness is the first step in changing behavior.

Challenge assumptions: When you encounter assumptions that don't feel right, question them. Look for evidence that supports or contradicts them. If an assumption is not based on evidence, it should be discarded.

Hold people accountable: Finally, hold people accountable for their assumptions. Call out harmful assumptions when you see them, and encourage others to do the same. Make it clear that assumptions are not acceptable in your culture.

Monday, 20. February 2023

Damien Bod

Using Hangfire with ASP.NET Core

This article looks at setting up Hangfire with ASP.NET Core and an SQL server. Hangfire provides a solution to run recurring jobs and background jobs with an excellent dashboard to monitor the events.

Code: https://github.com/damienbod/AspNetCoreHangfire

History

2023-02-21 Fix Delete background job to delete recurring job, feedback from Alex

The ASP.NET Core application uses two Hangfire Nuget packages to integrate the event handling into the solution.

Hangfire.AspNetCore
Hangfire.SqlServer

The program file initializes the services and adds the middleware to the project. The SQL Server storage is set up using the extension methods from the Hangfire NuGet packages and uses a connection string from the app settings in development.

public static void Main(string[] args)
{
    var builder = WebApplication.CreateBuilder(args);

    var services = builder.Services;
    var configuration = builder.Configuration;
    var env = builder.Environment;

    services.AddHangfire(hangfire =>
    {
        hangfire.SetDataCompatibilityLevel(CompatibilityLevel.Version_170);
        hangfire.UseSimpleAssemblyNameTypeSerializer();
        hangfire.UseRecommendedSerializerSettings();
        hangfire.UseColouredConsoleLogProvider();
        hangfire.UseSqlServerStorage(
            configuration.GetConnectionString("HangfireConn"),
            new SqlServerStorageOptions
            {
                CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
                SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
                QueuePollInterval = TimeSpan.Zero,
                UseRecommendedIsolationLevel = true,
                DisableGlobalLocks = true
            });

        var server = new BackgroundJobServer(new BackgroundJobServerOptions
        {
            ServerName = "hangfire-test",
        });
    });

    // other services...

    var app = builder.Build();

    app.UseHangfireDashboard();

    // more middleware...

    app.Run();
}

SQL database setup

The SQL database is set up following the Hangfire documentation.

https://docs.hangfire.io/en/latest/configuration/using-sql-server.html

The hangfire script can be found in the hangfire repository. I also added a copy of this script in the demo repository: hangfire-default-install.sql

Create a recurring job

Now that Hangfire is set up, jobs can be created. I use a class to implement the method which is called after a Hangfire event has been fired. The method must be reentrant and needs to handle exceptions.

public class MyRecurringJob : IMyRecurringJob
{
    public void DoSomethingReentrant()
    {
        Console.WriteLine("IMyRecurringJob doing something");
    }
}
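The IMyRecurringJob interface itself is not shown in the post. A minimal sketch of what it might look like, together with the service registration that lets Hangfire's ASP.NET Core job activator resolve the job type from the container:

// Hypothetical definition of the interface used above.
public interface IMyRecurringJob
{
    void DoSomethingReentrant();
}

// In Program.cs, before builder.Build(), so the job type can be resolved
// from dependency injection when the recurring job fires:
// builder.Services.AddTransient<IMyRecurringJob, MyRecurringJob>();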

The job can be created using the RecurringJob.AddOrUpdate method. This adds the job to the SQL database.

RecurringJob.AddOrUpdate<IMyRecurringJob>(
    job => job.DoSomethingReentrant(),
    Cron.Hourly);

Create a background job

A Hangfire background job can be created using the BackgroundJob.Enqueue method. The method has various overloads and parameters which cover most of the event and job requirements which can exist.

BackgroundJob.Enqueue<IMyBackgroundJob>(x => x.DoSomethingReentrant());

Delete recurring jobs

Deleting the jobs is required if you deploy regularly, as different instances are created and new recurring jobs are then created. The following code makes it possible to delete the recurring jobs.

using (var connection = JobStorage.Current.GetConnection())
{
    foreach (var recurringJob in connection.GetRecurringJobs())
    {
        RecurringJob.RemoveIfExists(recurringJob.Id);
    }
}

Running the applications

When the applications are running, the different jobs can be created or deleted.

Hangfire Dashboard

Hangfire provides an excellent dashboard which can display the running or failed jobs and the different server setups.

Notes

This is just the basic setup of Hangfire. You can integrate Hangfire into a solution in various ways, but KISS should always be followed. Using events inside solutions is already complicated and requires extra tools to monitor and debug. You should also only use this if the business logic requires it. To use Hangfire inside a professional solution, you would need to add security to the dashboard, probably inject services into the code that executes the jobs (which has its own problems), add logging, and make sure the executing code is reentrant due to retries.
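As an example of the dashboard security mentioned above, a minimal sketch of an authorization filter is shown below. By default the dashboard only allows local requests; the policy here (any authenticated user) is an assumption for illustration and should be adapted to the solution's requirements.

using Hangfire.Dashboard;

public class DashboardAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // Only allow authenticated users; adjust to roles or policies as needed.
        var httpContext = context.GetHttpContext();
        return httpContext.User.Identity?.IsAuthenticated == true;
    }
}

// Registration in the middleware pipeline:
// app.UseHangfireDashboard("/hangfire", new DashboardOptions
// {
//     Authorization = new[] { new DashboardAuthorizationFilter() }
// });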

Hangfire is a great tool, as are many other tools and solutions for this type of problem. The jobs need to be reentrant. For long running workflows which require persistence, other tools might be the better choice, for example Azure Durable Functions or Power Automate. Background services also work really well for simple jobs. ASP.NET Core Quartz can also be used. All these tools are very good, and you should try to choose the one which best fits your needs and follows the KISS principle.

Links

https://docs.hangfire.io/

https://docs.hangfire.io/en/latest/getting-started/aspnet-core-applications.html

https://github.com/HangfireIO/Hangfire/blob/27ab355ff1cd72a06af51fc6d2f4599a87c3b4b8/src/Hangfire.SqlServer/DefaultInstall.sql

https://jonhilton.net/simple-background-jobs-with-hangfire-and-aspnet-core/

https://codeopinion.com/background-tasks


reb00ted

Why I am co-organizing FediForum

Kaliya Young (identitywoman), Jennifer Holmes and I are organizing a new unconference focused on the Fediverse and the future of social media: FediForum March 29 and 30 fediforum.org Like many, I have been watching the destruction of Twitter by its new owner with utter fascination. If it was problematic in many ways before, Musk-Twitter is far worse, and there are no signs it

Kaliya Young (identitywoman), Jennifer Holmes and I are organizing a new unconference focused on the Fediverse and the future of social media:

FediForum
March 29 and 30
fediforum.org

Like many, I have been watching the destruction of Twitter by its new owner with utter fascination. If it was problematic in many ways before, Musk-Twitter is far worse, and there are no signs it will ever return to the state that many of us, myself included, mostly enjoyed.

Fortunately, there are alternatives, led by Mastodon and a rapidly-growing list of Fediverse apps that all interoperate thanks to a number of open protocols such as WebFinger, ActivityPub and Activity Vocabulary. Millions of people have created accounts there in recent months, and over a million new users have become regulars.

But there are growing pains and many open questions, such as:

The underlying protocol standards are currently essentially unmaintained, and real implementations don’t exactly match how the standards were intended. How will this situation be resolved?
How do we create and maintain a safe space for traditionally disenfranchised people?
The Fediverse currently depends on much unpaid volunteer work. How long can that go well?
What if another million users (or more!) join, and the novelty effect wears off?
Should brands be allowed in?
Is #Fediblock the best we can do for moderation?
How do we keep the character of the place if (when!) large organizations come in that bring lots of new users, and, in comparison, large budgets?
Who decides?

Conversations need to be had, in a form that encourages problem solving. That’s the kind of space we are trying to create with FediForum: where people meet who want to move the Fediverse forward: on technology, on funding, on operations, on governance, and perhaps on culture.

Because if we don’t, there’s a real chance the once-in-a-generation opportunity to build better social media passes, and I really, really don’t want that to happen. I hope you don’t either.

So, if you love Mastodon or any other post-Twitter social media apps, and have an opinion on the future of social media, join us and help figure it out together with others who think similarly. The first event will be on-line; we hope to be able to do an in-person event later this year.

More info: FediForum.org

Sunday, 19. February 2023

Heres Tom with the Weather

Toews has long covid

Today we learned that a 34 year old hockey player, a 6x all-star in the NHL’s list of 100 greatest players of all time, stopped playing hockey several weeks ago due to Long Covid. The covid minimizer that now owns Twitter intentionally allows misinformation to spread. Mask up.

Today we learned that a 34 year old hockey player, a 6x all-star in the NHL’s list of 100 greatest players of all time, stopped playing hockey several weeks ago due to Long Covid.

The covid minimizer that now owns Twitter intentionally allows misinformation to spread. Mask up.

Thursday, 16. February 2023

Identity Woman

Who is invited to the Thoughtful Biometrics Workshop?

The workshop is coming up March 16th virtually. If what I write below resonates with you please consider joining us I thought long and hard about key constituencies to invite to the Thoughtful Biometrics Workshop and why. I collaborated with Kyra Auerbach to create this image that captures it. There are many biometrics, ways to […] The post Who is invited to the Thoughtful Biometrics Workshop? a

The workshop is coming up March 16th, virtually. If what I write below resonates with you, please consider joining us. I thought long and hard about key constituencies to invite to the Thoughtful Biometrics Workshop and why. I collaborated with Kyra Auerbach to create this image that captures it. There are many biometrics, ways to […]

The post Who is invited to the Thoughtful Biometrics Workshop? appeared first on Identity Woman.

Thursday, 16. February 2023

Identity Woman

Speaking at the 2023 Conference of the Plurality Research Network

At the Plurality Research Network Conference 2023, which commenced on January 13, 2023, and continued through January 15, 2023, I delivered a lightning talk the first day. A Little Introduction to the Conference Researchers and practitioners from various domains, such as computer science, sociology, political ethics, and government, who are exploring plural technologies are brought togeth

At the Plurality Research Network Conference 2023, which commenced on January 13, 2023, and continued through January 15, 2023, I delivered a lightning talk the first day. A Little Introduction to the Conference Researchers and practitioners from various domains, such as computer science, sociology, political ethics, and government, who are exploring plural technologies are brought together at […]

The post Speaking at the 2023 Conference of the Plurality Research Network appeared first on Identity Woman.

Wednesday, 15. February 2023

MyDigitalFootprint

Bias and Trauma

I have been exploring the research and concepts that bias and trauma are deeply linked.  The linkage and directionality are much debated.   Trauma creates bias, and equally, bias creates trauma. It would appear that either can be a starting point, but they definitely feed each other, creating complex positive (healing) and negative (detrimental) feedback loops which extend beyond the in

I have been exploring the research and concepts that bias and trauma are deeply linked.  The linkage and directionality are much debated.   Trauma creates bias, and equally, bias creates trauma. It would appear that either can be a starting point, but they definitely feed each other, creating complex positive (healing) and negative (detrimental) feedback loops which extend beyond the individual and their immediate relationships to wider society.    


Using systems-mapping to address Adverse Childhood Experiences (ACEs) and trauma: A qualitative study of stakeholder experiences 

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0273361


Why does this matter, as all data has a bias? 

Fundamental to a decision-making role based on data is to demand that we recognise bias and try to remove bias; however, I am now thinking that if we remove the bias, we assume there is no trauma, and therefore, everyone will be rational.  Yes, there are some big ugly assumptions in that statement, but the purpose here is about the flow of challenging thinking and not academic rigour to prove something we already know. 

In removing the bias, we ignore that bias feeds trauma and that equally, trauma feeds bias; a decision made without this context has no coupling to the reality of the situation.  The alternative is that we live with the bias in our data, known and unknown; however, we don't know if the analysis for this person in this situation reflects/ fuels/heals their bias/ trauma or not.  

Therefore, the framing of removing bias is not valid; equally, a framing that recognises bias is not valid, as it cannot represent the individual.

Houston, we have a problem. Do we have to reframe bias?

I am all up for understanding bias, but it has become more evident to me that in my thrust to be curious about bias, I have to accept that I am going to have to operate in a dynamic feedback loop, and it will uncover/reveal trauma. Revealing trauma or bias crosses the creepy line. The creepy line is a very subjective idea and is different for everyone. When I published “My Digital Footprint” back in 2009, I wrote a lot about the creepy line where data collection can be the same, but how you present the data back to the user will either add value or unnerve them. But to know what a user will accept, you test the user's concepts of a creepy line continually to find out when what you are saying becomes unacceptable and back off — deep personalisation. The creepy line and the concepts of privacy are deeply interlinked. Privacy is linked to perceptions of risk, and risk acceptance or aversion equally links to trauma and bias.

Bias, trauma, risk and privacy matter at both levels, macro and micro, but that is a big rabbit hole where we get into identity, but I am interested here in decision-making.  

What I am left questioning is whether understanding bias is the right thing to do. Conceptually should we spend more time understanding if this bias feeds something OR if this bias is fed by something OR both?  Would both provide new insights to help us give the user better choices and make better decisions, or are we edging toward influence and control and, in doing so, remove agency?


 CEO Take Away

Trauma and bias are critical to decision-making. In the board room and leadership executive setting, should we unpack bias and ignore trauma? Bias is easy to talk about; trauma remains “unsaid” - I am not even sure if we assume. The point is, can we use bias as a proxy? Knowing that many CEOs and founders are driven by trauma, is understanding bias going to take us to a place that will be easier or more difficult to talk about how we make decisions?

Worth reading CEO childhood trauma, social networks, and strategic risk-taking


 




Further reading 

CEO early-life disaster experience and corporate social performance
https://onlinelibrary.wiley.com/doi/abs/10.1002/smj.3293

CEO Traits and Firm Outcomes: Do Early Childhood Experiences Matter?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3374389

The Indirect Effect of Trauma via Cognitive Biases and Self-Disturbances on Psychotic-Like Experiences
https://www.frontiersin.org/articles/10.3389/fpsyt.2021.611069/full

Childhood trauma and cognitive biases associated with psychosis: A systematic review and meta-analysis https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7906349/ 

Enhanced Attentional Bias Variability in Post-Traumatic Stress Disorder and its Relationship to More General Impairments in Cognitive Control
https://www.nature.com/articles/s41598-017-15226-7 

Unconscious Cultural Bias Reduction and Trauma-Informed Care https://www.careinnovations.org/resources/unconscious-cultural-bias-reduction/

Attention Bias and Metacognition: Relationships with Childhood Trauma and Psychological Vulnerability
https://research.manchester.ac.uk/files/205623245/FULL_TEXT.PDF

Interpretation biases in victims and non-victims of interpersonal trauma and their relation to symptom development
https://www.sciencedirect.com/science/article/abs/pii/S0887618506001332

Recall Bias in Trauma Recollection
https://www.researchgate.net/publication/355928917_Recall_Bias_in_Trauma_Recollection






Jon Udell

News in the fediverse

On a recent episode of You’re Wrong About, Sarah Marshall delivered a crisp summary of how the 24-hour news cycle came to be. This led me to realize that many among us do not remember when news was confined to time slots: the 7 o’clock news, the 11 o’clock news, the morning paper. I think … Continue reading News in the fediverse

On a recent episode of You’re Wrong About, Sarah Marshall delivered a crisp summary of how the 24-hour news cycle came to be. This led me to realize that many among us do not remember when news was confined to time slots: the 7 o’clock news, the 11 o’clock news, the morning paper. I think it might be healthy to bring that back in some form. From the excitement I heard in their voices I gather that Sarah Marshall and her co-presenter Blair Braverman feel the same way. When I hear people complain about Mastodon’s lack of breaking news, I think: “Feature, not bug!” Maybe what the 24-hour cycle has been breaking is us.

So when press.coop launched today I was a bit conflicted. On the one hand, I would like to use Mastodon to read mainstream news headlines just as I once used RSS to do. (Full disclosure: press.coop is run as a public service by hello.coop whose founder, Dick Hardt, is a longtime friend.) On the other hand, when reading Mastodon timelines I’m enjoying NOT knowing what’s happening in the world right now.

What if you could exclude news from the home timeline, put it on a list, and go there when — and only when — in a news mindset? That’s a feature of the Fedilab client, I learned from Keith Soltys.

What would it take to implement the same idea in the Mastodon reader I’m developing? It couldn’t be just an extra WHERE condition, could it?

Well, actually, it could.

Problem solved. Now I can read news in Mastodon when and how I want, and never see it otherwise.

If you want that same control, you shouldn’t have to use a particular Android client, or Steampipe client, or any other. There’s a Cambrian explosion of such creatures right now. The more they can share important DNA, the better for all of us.

I hope that the Steampipe plugin for Mastodon, which enables the dashboards I’m building using Steampipe’s own dashboards-as-code system, can provide some useful common DNA. A rule like news only on lists, not timelines, once expressed in SQL, can be used (and thus not reinvented) by any kind of Steampipe (read: Postgres) client: psql (or any Postgres CLI), Metabase or Tableau or any Postgres-compatible BI tool, Python or JavaScript or any programming language. Steampipe is a versatile component. Its mapping from APIs to SQL can, in theory, provide the data layer for any application.

My Steampipe + Mastodon adventure notwithstanding, let’s hold on to that larger idea. The fediverse is our chance to reboot the social web and gain control of our information diets. Since our diets all differ, it ought to be trivial for anyone — in any client — to turn on a rule like news only on lists, not timelines. I’ve shown one way it can be trivial. Please show me others!


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/

Tuesday, 14. February 2023

MyDigitalFootprint

Love this - but let's not pretend we all agree on what or why we are doing something.

The future isn’t what it used to be: Here's how strategic foresight can help; it is a high-quality, well-written, thoughtful piece from WEF.  It is presented by Olivier Woeffray, Practice Lead, Strategic Intelligence, World Economic Forum and Paulo Carvalho, Executive Director, MBA, Lisbon School of Economics & Management. The truth of the opening statement set the scene, “[We are mov

The future isn’t what it used to be: Here's how strategic foresight can help; it is a high-quality, well-written, thoughtful piece from WEF.  It is presented by Olivier Woeffray, Practice Lead, Strategic Intelligence, World Economic Forum and Paulo Carvalho, Executive Director, MBA, Lisbon School of Economics & Management.

The truth of the opening statement set the scene, “[We are moving] from a world of relative predictability … to a world with more fragility – greater uncertainty, higher economic volatility, geopolitical confrontations.”


However, something niggled me.  I love the future system and exponential thinking; however, I was wondering if the Venn diagram was the right one, but that took me off into thinking about the presentation, not the content.  My gut said I was missing something, but I could add all my initial objections into the future or systems thinking buckets. I was warming to the model and indeed feel it is a positive and valuable contribution, but it assumes we have already agreed on two concepts:

what we are optimising for, and

the top-level ontology. 


We have yet to agree on what we are optimising for, aka the Peak Paradox framing; we have to acknowledge there are different purposes, and it is not that any are good or bad; they are different.  The 13 SDGs have conflicts and cannot all be achieved. 

I would also like to know if we have or can agree on a top-level ontology and how to frame what we are optimising for.  

The WEF framing assumes we have and can agree on the macroeconomics and therefore focuses on the micro. The model is very valid and helpful at a micro level, but the individuals should test the assumptions, the framing and the perspectives about the macro.  Future framing definitely should embrace long-term philosophies of de-growth, sustainability and transparency, but because they are beyond the long-term planning and strategy horizons, they are discounted.  Because so much of business is framed by “what problem does this solve”, “free market competition”, and “what is the action”, future thinking is limited to a narrow scope and does not embrace a different ontology or a different optimisation purpose. 

Love this - but let's not pretend we all agree on what or why we are doing something. 


https://www.weforum.org/agenda/2023/02/strategic-intelligence-why-foresight-key-future-readiness/



Damien Bod

Use multiple identity providers from a Blazor WASM ASP.NET Core App secured using BFF

This post shows how to implement a Blazor WASM UI hosted in an ASP.NET Core application using multiple identity providers to authenticate. Two confidential OpenID Connect code flow clients with PKCE are used to implement the Blazor authentication. The Blazor WASM and the ASP.NET Core application are a single security context. This is implemented using […]

This post shows how to implement a Blazor WASM UI hosted in an ASP.NET Core application using multiple identity providers to authenticate. Two confidential OpenID Connect code flow clients with PKCE are used to implement the Blazor authentication. The Blazor WASM and the ASP.NET Core application are a single security context. This is implemented using the backend for frontend security architecture.

Code: https://github.com/damienbod/ApiJwtWithTwoSts

Setup

Two identity providers are implemented using OpenIddict with ASP.NET Core Identity. Any OpenID Connect identity provider can be used here. The identity provider implements a confidential client requiring a secret. The OpenID Connect code flow is used to implement the authentication flow.


The Blazor application was set up using the Blazor.BFF.OpenIDConnect.Template, which can be installed and used from Visual Studio or the dotnet cmd tool. The template creates an ASP.NET Core application with a WASM view and default definitions for the OIDC code flow.

Blazor authentication

The Blazor application requires two OIDC client implementations. The user of the application can choose the required IDP. The DefaultChallengeScheme is set to “UNKNOWN”, which has no definition. The login or sign-in needs to set the authentication scheme for the challenge. The redirect URL and the logout URL need to be explicitly set and cannot match a definition from a different challenge.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = "UNKNOWN";
})
.AddCookie()
.AddOpenIdConnect("T1", options =>
{
    configuration.GetSection("OpenIDConnectSettingsT1").Bind(options);
    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.CallbackPath = "/signin-oidc-t1";
    options.SignedOutCallbackPath = "/signout-callback-oidc-t1";
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.SaveTokens = true;
    options.Scope.Add("profile");
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name",
        RoleClaimType = "role"
    };
})
.AddOpenIdConnect("T2", options =>
{
    configuration.GetSection("OpenIDConnectSettingsT2").Bind(options);
    options.SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.CallbackPath = "/signin-oidc-t2";
    options.SignedOutCallbackPath = "/signout-callback-oidc-t2";
    options.SaveTokens = true;
    options.Scope.Add("profile");
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name",
        RoleClaimType = "role"
    };
});
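
The OpenIDConnectSettingsT1 and OpenIDConnectSettingsT2 sections bound above hold the provider-specific values and are not shown in the post. As a rough sketch, the binding amounts to setting options like the following; the authority, client id and secret here are placeholders, not values from the demo:

// Placeholder values only; in the demo these come from configuration
// (appsettings.json plus user secrets or a key vault for the secret).
options.Authority = "https://localhost:44318"; // the OpenIddict server used for T1
options.ClientId = "blazor-bff-client-t1";
options.ClientSecret = "--from-a-secret-store--";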

The AccountController implements the login and the logout requests. The login needs to force the login for the correct scheme. I named my schemes T1 and T2.

[HttpGet("LoginT1")]
public IActionResult T1(string returnUrl)
{
    var redirectUrl = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/";
    return Challenge(new AuthenticationProperties { RedirectUri = redirectUrl }, "T1");
}

[HttpGet("LoginT2")]
public IActionResult T2(string returnUrl)
{
    var redirectUrl = !string.IsNullOrEmpty(returnUrl) ? returnUrl : "/";
    return Challenge(new AuthenticationProperties { RedirectUri = redirectUrl }, "T2");
}

The CallbackPath needs to be unique in the application and a separate definition is required for each identity provider.

Blazor client logout

The logout works a bit like the login. You need to have a unique SignedOutCallbackPath definition for each scheme. I added a new claim to the id_token from the identity provider. I added this claim in the OpenIddict server. Once the user is authenticated, the idp claim can be used to verify which identity provider was used for authentication. The claim is required for the logout, which sends a request to the correct identity provider.

[ValidateAntiForgeryToken]
[Authorize]
[HttpPost("Logout")]
public IActionResult Logout()
{
    var authProperties = new AuthenticationProperties
    {
        RedirectUri = "/"
    };

    // custom claims added to idp, you need to
    // implement something on your idp for this
    var usedT1ForAuthn = User.Claims.Any(
        idpClaim => idpClaim.Type == "idp" && idpClaim.Value == "T1");
    var usedT2ForAuthn = User.Claims.Any(
        idpClaim => idpClaim.Type == "idp" && idpClaim.Value == "T2");

    if (usedT1ForAuthn)
        return SignOut(authProperties,
            CookieAuthenticationDefaults.AuthenticationScheme, "T1");

    if (usedT2ForAuthn)
        return SignOut(authProperties,
            CookieAuthenticationDefaults.AuthenticationScheme, "T2");

    return SignOut(authProperties,
        CookieAuthenticationDefaults.AuthenticationScheme);
}
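
On the identity provider side, which the post does not show, adding such a claim with OpenIddict might look roughly like the following sketch, where identity is the ClaimsIdentity used to create the tokens; the claim type and value follow the post, while the surrounding code and variable names are assumptions:

using System.Security.Claims;
using OpenIddict.Abstractions;

// Sketch only: issue the custom "idp" claim into the id_token so the
// client can later tell which identity provider authenticated the user.
var idpClaim = new Claim("idp", "T1");
idpClaim.SetDestinations(OpenIddictConstants.Destinations.IdentityToken);
identity.AddClaim(idpClaim);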

The custom “idp” claim is also used to display the identity provider in the Blazor WASM UI.

<Authorized>
    <strong>
        @context?.User?.Identity?.Name
        idp: @context?.User?.Claims.FirstOrDefault(c => c.Type == "idp")?.Value
    </strong>
    <form method="post" action="api/Account/Logout">
        <AntiForgeryTokenInput />
        <button class="btn btn-link" type="submit">Signout</button>
    </form>
</Authorized>

The user can log in using different identity providers.

The name claim and the identity provider are displayed in the UI.

Notes

Using multiple identity providers in ASP.NET Core is a simple implementation once you understand the scheme concept and the authentication flow events used. If you use the one-line wrapper authentication clients from the different providers, you need to override the default implementation hidden inside these libraries. Most of these packages provide extension methods and contracts for this, but sometimes it is simpler to use the standard ASP.NET Core implementation. This works for any conformant OpenID Connect server.

Links

https://documentation.openiddict.com/configuration/claim-destinations.html

https://learn.microsoft.com/en-us/aspnet/core/security/authentication/claims

https://www.nuget.org/packages/Blazor.BFF.OpenIDConnect.Template

Monday, 13. February 2023

@_Nat Zone

[2/14 20:30~] Keynote at MOSIP Open Source Day (with YouTube Live)

Yet another last-minute announcement, but tomorrow, 2/14, Japan…

Yet another last-minute announcement: tomorrow, February 14, at 8:30 pm Japan time, I will give a keynote at MOSIP Open Source Day, held in Bangalore, India. There will apparently also be a YouTube Live stream. It seems there is some news, currently under wraps, to be announced tomorrow, so I am looking forward to it myself.

Below is the information from the organizers (a DeepL translation and the original).

MOSIP Open Source Day will start at 5 pm on February 14. It will include a keynote address by Nat Sakimura and presentations by the MOSIP team and integration partners. We will conclude the evening with a dinner, where we hope to exchange insights and ideas with you. Please see the event agenda attached below for further details.

If you have not yet decided whether to attend, we kindly ask you to register.

RSVP link: https://us10.list-manage.com/survey?u=6e5efca6f5e1df2cbe836c9be&id=e0374ae2d7&attribution=false

If you are unable to attend in person, please join us via the live stream on YouTube.

We look forward to seeing you at the event.
Best regards,

Ramesh Narayanan

(Source) Notice from MOSIP (as of February 13)

The original text follows.

The MOSIP Open-Source Day will commence at 5 pm on 14th February. The event will include a keynote address by Nat Sakimura, presentations by the MOSIP team and our integration partners. We will conclude the evening with a dinner, where we hope to engage with you personally to share insights and ideas. Please find the event agenda attached below for further information.

For those of you who will be making it in person, kindly confirm your attendance if you have already not done so.

RSVP Link: https://us10.list-manage.com/survey?u=6e5efca6f5e1df2cbe836c9be&id=e0374ae2d7&attribution=false

If you are unable to be physically present this time, please do join us via our live stream on YouTube:

We look forward to your presence at the event and appreciate your support in making it a memorable occasion.

Warm regards,

Ramesh Narayanan

Sunday, 12. February 2023

Dick Hardt

Why has Digital Identity not been Solved?

A chatbot answering complex life questions sitting on a mountain — DALL·E 2 From ChatGPT: Digital identity is a complex issue that has not been fully solved due to several challenges and obstacles, some of which include: Privacy concerns: Digital identity often requires the collection and storage of personal information, which raises privacy concerns and the risk of data breaches. Ther
A chatbot answering complex life questions sitting on a mountain — DALL·E 2

From ChatGPT:

Digital identity is a complex issue that has not been fully solved due to several challenges and obstacles, some of which include:

1. Privacy concerns: Digital identity often requires the collection and storage of personal information, which raises privacy concerns and the risk of data breaches. There is a need for a solution that can provide secure and reliable identity verification while also protecting the privacy of individuals.
2. Interoperability: There are many different systems and protocols for digital identity management, which can make it difficult for these systems to work together seamlessly. This can lead to compatibility issues and difficulties in accessing and sharing information across different platforms and organizations.
3. Trust and Security: Establishing trust and security in digital identity systems is crucial, as fraudulent activities and identity theft can have serious consequences for both individuals and organizations. Ensuring that digital identity systems are secure and resistant to hacking and fraud requires ongoing investment in research and development.
4. Regulation: Digital identity is a global issue and different countries have different laws and regulations around data privacy and identity management. This can make it difficult to develop a standardized and universal solution for digital identity.
5. Technical Complexity: Digital identity involves multiple technologies, such as cryptography, biometrics, and machine learning, which can make it challenging to create a solution that is both secure and user-friendly.

Despite these challenges, there has been progress made in recent years to solve these problems and create more secure and user-friendly digital identity solutions. However, this is an ongoing process that requires collaboration between technology companies, governments, and other stakeholders.

Saturday, 11. February 2023

Anil John

Escape from Apple's Walled Garden

Installing Ubuntu on a 2010 era iMac hardware

I use a 2010 era 27” iMac as my primary home desktop computer. I have become increasingly frustrated by my inability to update the macOS to versions that support security and privacy features such as Apple’s Advanced Data Protection for iCloud. So I decided to install Ubuntu (Linux) in a dual-boot configuration on this hardware.

As I was researching whether this was even possible, I ran into sometimes conflicting, many times outdated, and often incomplete information, so wanted to document both my configuration and the resources that were helpful to me in the hope that others don’t have to go through much of the uncertainty that I felt when I decided to go down this road.

TL;DR - Installing Ubuntu 22.04.1 LTS (Jammy Jellyfish) in a dual boot configuration with macOS High Sierra Version 10.13.6 on a 27” iMac 11,3 (Mid 2010) hardware with an Intel Core i3 CPU was a painless experience and, as of right now, I have been happily running this configuration as my primary home desktop computer with zero problems for close to 3 weeks.

Apple hardware

I like the iMac hardware.

It is clean, does not take up much space, and has been running trouble-free for me since I bought it back in 2010. The following is my hardware configuration:

❖ Model Name: iMac (27-inch, Mid 2010)
▻ Model Identifier: iMac11,3
❖ Processor Name: Intel Core i3
▻ Processor Speed: 3.2 GHz
▻ Number of Processors: 1
▻ Total Number of Cores: 2
▻ L2 Cache (per Core): 256 KB
▻ L3 Cache: 4 MB
❖ Memory: 16 GB 1333 MHz DDR3
❖ Graphics: ATI Radeon HD 5670 512 MB
❖ Storage: 1 TB SATA Disk

It is also connected to the following external peripherals:

❖ Dell 27" Monitor (U2719DC)
❖ USB connected Keychron V10 (Alice Layout) QMK Custom Mechanical Keyboard
❖ USB connected Logitech G203 Prodigy Gaming Mouse
❖ Hard line network connection

I am calling out the keyboard, mouse and network connection as being hard-wired because, in my research, the typical complaints regarding the installation of Linux on Apple hardware were related to WiFi network connections and Bluetooth keyboards and mice.

My recommendation to de-complexify your installation would be to ensure that you use hard-line connected devices during the installation phase and once everything is done, swap in your regular devices (My WiFi and Bluetooth are working just fine under Ubuntu; I just don’t use it.)

Why Ubuntu LTS?

My selection criteria for a Linux distribution was the following:

It should be stable and it should just work; nothing flashy or fancy
When a software or hardware vendor says that it “Supports Linux”, the chosen distribution should be at the top of that list
There should be an automated security and maintenance update process
Excellent documentation and community support
Application support for the majority of my day to day activities, ideally with Linux versions of the same applications

Ubuntu LTS met all of those requirements for me.

In particular the LTS version of Ubuntu which “… stands for long-term support — which means five years of free security and maintenance updates, guaranteed until April 2027” was a make or break selection criteria for me.

Installation steps

I spent a substantial amount of time looking thru installation guides on the inter-webs and the best one that I found, and which I used for my installation, was Dan Helyer’s How to Install and Dual Boot Linux on Your Mac. It is well written, with lots of screen shots, and step-by-step instructions that worked very well for me.

I am not going to replicate that here, but instead provide some notes regarding my experience:

This process will not work on Apple Silicon iMacs
I followed the dual-boot installation steps
The drive partitioning step was an overnight process for me (I allocated 500GB of my 1TB SATA drive to a combination of OS and swap partitions)
I did not use the USB installer, but instead simply burned the Ubuntu ISO onto a DVD and used that instead

Everything else simply just worked, which I think is a testament to the maturity and polish of the Ubuntu distribution.

Post-installation configuration

There were only three steps that were specific to my environment.

I use a NAS within my home network to keep content across my various devices synchronized and as my private git server and backup infrastructure. It is then set up to perform regular, automated and encrypted off-site archival of that content. So I simply had to install that client agent on my new system and point it to my internal NAS for all my content to come back to my new machine. That was an overnight process.

By default, Ubuntu uses the bash shell and I prefer the zsh shell. That is a personal preference; I had also spent far too much time getting my terminal “just so” on my mac and did not feel like redoing all that with bash. So I installed zsh and copied the config files that I was using on the iMac over, and everything just came back the way I wanted it to look.

My mechanical keyboard supports open source software that allows for a significant amount of customization. So I spent some time looking at the keyboard shortcuts available under Ubuntu to remap some of my keys, and create some macros to assign to the custom keys on the keyboard to make my day-to-day tasks a bit easier.

Future steps

I am rather happy with how this turned out, and the new life that my iMac hardware is enjoying.

The one thing I am currently researching is how best to add a fingerprint sensor for both authentication to the local machine as well as a pathway to support Passkeys/FIDO2/WebAuthn. Currently reading about and looking into the YubiKey Bio as a possible way to enable this support.

I hope that my experience and pointers prove useful to other folks who are being given not so subtle signals by the vendor that their hardware has reached end-of-life. The reality is that it has not, and you have very good options available to you to continue using the hardware which is working just fine!

This blog post, Escape from Apple's Walled Garden, first appeared on Anil John | Blog. The opinions expressed here are my own and do not represent my employer’s view in any way.

Reply to this Blog post via email


Mike Jones: self-issued

Native JWT Representation for Verifiable Credentials

For the first time, there is now a native JSON Web Token (JWT) representation for Verifiable Credentials. This representation uses IANA-registered JWT claims whenever applicable. Among other improvements and simplifications, this means that we finally have a Verifiable Credentials representation that doesn’t require the use of JSON-LD. The native JWT representation explicitly isn’t a mapping […]

For the first time, there is now a native JSON Web Token (JWT) representation for Verifiable Credentials. This representation uses IANA-registered JWT claims whenever applicable. Among other improvements and simplifications, this means that we finally have a Verifiable Credentials representation that doesn’t require the use of JSON-LD.

The native JWT representation explicitly isn’t a mapping from the VC Data Model. This mapping in the VC 1.1 specification resulted in ambiguities about whether to duplicate VC Data Model claims in the VC-JWT representation (the “in addition to” option) or whether to delete them from the VC Data Model representation (the “instead of” option). These ambiguities harmed interoperability. Rather, the 2.0 VC-JWT representation is its own simpler native JWT data structure.

See the new native JWT VC representation in the Version 2 section of the “Securing Verifiable Credentials using JSON Web Tokens” specification. You can also compare it there to the Version 1.1 representation, which is a mapping from the VC Data Model with the “in addition to” and “instead of” choices.

This accomplishment is the product of the vision, passion, and perseverance of many advocates of simplifying Verifiable Credentials. Foremost among them is Orie Steele – my co-editor for the VC-JWT specification. I’ll also observe that the pull request creating this functionality had an unprecedented fifteen approvers – an indication of the broad support for this direction for Verifiable Credentials. I am proud to have played a role in making it happen.

Thursday, 09. February 2023

Identity Woman

What are biometrics?

I first was exposed to biometrics at scale when I was working within the National Strategy for Trusted Identities in Cyberspace – Identity Ecosystem Steering Group that would host meetings “next to” biometric industry conferences. They were really kinda freaky for the technology presented and how I imagined it all being used. I still have […] The post What are biometrics? appeared first on Ident

I first was exposed to biometrics at scale when I was working within the National Strategy for Trusted Identities in Cyberspace – Identity Ecosystem Steering Group that would host meetings “next to” biometric industry conferences. They were really kinda freaky for the technology presented and how I imagined it all being used. I still have […]

The post What are biometrics? appeared first on Identity Woman.

Thursday, 09. February 2023

FACILELOGIN

Let’s talk CIAM!

Everyone knows you well, and loves you of course, but still, would you mind introducing yourself, maybe in a one-liner? Continue reading on FACILELOGIN »

Everyone knows you well, and loves you of course, but still, would you mind introducing yourself, maybe in a one-liner?

Continue reading on FACILELOGIN »

Wednesday, 08. February 2023

Doc Searls Weblog

Is Mastodon a commons?

Glenn Fleishman has a lucid and helpful introduction to Mastodon in TidBITS that opens with this: Cast your mind back to the first time you experienced joy and wonder on the Internet. Do you worry you’ll never be able to capture that sense again? If so, it’s worth wading gently into the world of Mastodon […]

Glenn Fleishman has a lucid and helpful introduction to Mastodon in TidBITS that opens with this:

Cast your mind back to the first time you experienced joy and wonder on the Internet. Do you worry you’ll never be able to capture that sense again? If so, it’s worth wading gently into the world of Mastodon microblogging to see if it offers something fresh and delightful. It might remind you—as it does me, at least for now—of the days when you didn’t view online interactions with some level of dread.

Mastodon isn’t a service but a network of consensually affiliated, independently operated servers running the Mastodon software. It’s the best-known example of the so-called Fediverse…

Then, a few paragraphs later, he provides the best metaphor I’ve yet seen for what Mastodon is and how it works:

You can think of Mastodon as a flotilla of boats of vastly different sizes, whereas Twitter is like being on a cruise ship the size of a continent. Some Mastodon boats might be cruise liners with as many as 50,000 passengers; others are just dinghies with a single occupant! The admin of each instance—the captain of your particular boat—might make arbitrary decisions you disagree with as heartily as with any commercial operator’s tacks and turns. But you’re not stuck on your boat, with abandoning ship as the only alternative. Instead, you can hop from one boat to another without losing your place in the flotilla community. Parts of a flotilla can also splinter off and form their own disconnected groups, but no boat, however large, is in charge of the community.

Since my day job is working as a visiting scholar in the Ostrom Workshop at Indiana University, and Customer Commons has been imagined from its start as a potential commons for customers (or as many commons, flotilla style), I find myself wondering if each of Mastodon’s boats is a commons. Or if some of them could be, or already are. Or if Mastodon itself is one.

My first experience with Mastodon came early on, in a boat that I abandoned before it sank. But now that Mastodon is hot again, I’ve jumped with two crowds onto two boats: twit.social (here) and journa.host (here). TWiT.social’s occupants are the community of hosts, co-hosts, and participants in the TWiT network. Journa.host’s occupants are a collection of journalists. The two communities are different, though not entirely: journalists abound in both of them.

The question for me here is if any of these boats qualify as a commons. Or if Mastodon itself is one.

To qualify as a commons, a canonical list to check off is provided by Elinor Ostrom. In Governing the Commons (Cambridge, 1990), she outlined eight “design principles” for stable local common pool resource (CPR) management. I’ll make notes following each in italics:

1. Clearly defining the group boundaries (and effective exclusion of external un-entitled parties) and the contents of the common pool resource. Mastodon is designed to support that.
2. The appropriation and provision of common resources that are adapted to local conditions. If we’re talking about code, yes. Maybe more. Gotta think about that.
3. Collective-choice arrangements that allow most resource appropriators to participate in the decision-making process. Depends on the instance, I suppose.
4. Effective monitoring by monitors who are part of or accountable to the appropriators. Not sure about that one.
5. A scale of graduated sanctions for resource appropriators who violate community rules. Up to the person or people running each boat.
6. Mechanisms of conflict resolution that are cheap and of easy access. I think these range from informal to formal, and draw from rules developed for mailing lists and other fora. But, not sure.
7. Self-determination of the community recognized by higher-level authorities. At the top level, it’s the Mastodon dev community. At the boat (instance) level, it’s the captain(s) of each.
8. In the case of larger common-pool resources, organization in the form of multiple layers of nested enterprises, with small local CPRs (common pool resources) at the base level. A thought: the common pool resource is the authors of posts (aka toots) and the posts themselves.

Ostrom and others have also gone deeper and wider than that, for example by examining socio-ecological systems (SESes), defined here in 2004. I’ll leave digging into that up to scholars more schooled than I (or to a later post, after I finish schooling myself). Meanwhile, I think it’s important, given the sudden growth of Mastodon and other federated systems with flotilla-ish qualities, to examine how deep research and writing on commons apply.

This work does matter: Ostrom won a Nobel Prize for it, and it may matter more now than ever.

And help is welcome.

About the photo up top: Lacking a royalty-free visual for a flotilla of boats, I settled on the collections of people you see through bubbles in the photo above, which I shot on the grounds of Versailles. Kinda works, methinks.

Tuesday, 07. February 2023

Jon Udell

Protocols, APIs, and conventions

The Fediverse is wired together by protocols like ActivityPub and WebFinger which, as yet, I know very little about. That’s because the Steampipe plugin, which supports the dashboards I’ve been building and describing in this series, doesn’t require me to understand or use those protocols. It does, however, require me to understand and use the … Continue reading Protocols, APIs, and conventions

The Fediverse is wired together by protocols like ActivityPub and WebFinger which, as yet, I know very little about. That’s because the Steampipe plugin, which supports the dashboards I’ve been building and describing in this series, doesn’t require me to understand or use those protocols.

It does, however, require me to understand and use the Mastodon API. Mostly I use that API by way of the Go SDK for Mastodon (thanks, mattn!), sometimes I make REST calls directly. Either way, my read-only dashboards use a fairly small subset of the Mastodon API. The full API is quite broad and deep; it enables API clients to read from and write to Mastodon servers in all sorts of ways. Here are the chapters of the Mastodon API book: apps, accounts, admin, instance, search, statuses, timelines, notifications, oembed. These chapters define what’s common to all Mastodon clients, including web apps, phone apps, native OS apps, and Steampipe dashboards.

So far I’ve ignored protocol-enabled interop in order to focus on API-enabled interop. I’m aware that the Fediverse includes much more than just Mastodon. I intend to explore BookWyrm, Friendica, Funkwhale, Lemmy, takahe, PeerTube, Pixelfed, and others in due time. But right now the Mastodon ecosystem is plenty to try to wrap my head around.

For example, there’s a new web client for Mastodon: elk.zone. With the recent addition of support for lists, it has become my favorite way to interact in Mastodon space. So naturally I wanted to be able to click through from Steampipe dashboards to Elk, and use it as an alternative to the batteries-included Mastodon web app.

It turned out to be easy to enable that integration. Not thanks to ActivityPub, and not even thanks to the API. It works thanks to a third level of interop at play: common patterns for account URLs and toot URLs.

Here’s the account URL for Ward Cunningham who hangs his shingle at mastodon.radio: https://mastodon.radio/@k9ox. But as we saw in instance-qualified Mastodon URLs, if you visit that URL directly — and if it’s not your home server — you can’t follow Ward there, or add him to a list. You’ll need to copy that URL, paste it into your home server’s search box, run the search, and arrive at an instance-qualified URL where you can follow him or add him to a list: https://mastodon.social/@k9ox@mastodon.radio. If your home is fosstodon.org, this would instead be https://fosstodon.org/@k9ox@mastodon.radio.

Similarly here is one of Ward’s toots at mastodon.radio: https://mastodon.radio/@k9ox/109802968820955379. If you want to reply or boost or favorite, you can’t do it there. The URL you need is again one that routes through your home server: https://mastodon.social/@k9ox@mastodon.radio/109802969999396562. Note that the ids for the same toot differ! That difference surprised me and some others, and is a topic for another episode. Here I’ll just note that these two patterns govern how we interact when crossing server boundaries in Mastodon space using the stock web client.

When I started using Elk, another layer of pattern emerged. Here are those same URLs in Elk:

https://elk.zone/mastodon.social/@k9ox@mastodon.radio

https://elk.zone/mastodon.social/@k9ox@mastodon.radio/109802969999396562

As it turns out, I just needed to make two of the Steampipe plugin’s transform functions prepend elk.zone to the instance-qualified URLs, then make such prefixing a configuration option. Now when I visit Mastodon links from dashboards, to reply or boost or follow or enlist, I land in the Elk experience that I prefer.

ActivityPub and WebFinger are formal standards. I would describe the Mastodon API as a de-facto standard. But this prefixing maneuver is just a convention. It’s not guaranteed to work with another web client, and not even guaranteed to work across all URLs presented by the stock Mastodon client. That’s OK by me. Conventions are incredibly useful. The Twitter hashtag is just a convention, after all, inspired in turn by an IRC convention.

We’re in one of those Internet moments of rapid innovation, when new conventions can unlock emergent behaviors. It hadn’t even occurred to me that the Steampipe dashboards could support Elk. A few hours after I thought they might, they did. I’ve seen this kind of thing before, perhaps most notably when the blogosphere adopted <link rel="alternate" type="application/rss+xml" href="{feedUrl}"> to enable browsers to auto-discover RSS feeds. That happened about 20 years ago, and quite suddenly when a handful of leading blogging tools adopted the convention in a matter of days. It was a lot of fun to live through that era. If you missed it, enjoy the sequel that’s unfolding now!


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/

Monday, 06. February 2023

@_Nat Zone

Monday, February 6, from 10 pm: livestreaming my impressions of the FIDO Alliance Taipei Seminar with W3C board member Moriyama, plus a read-through of the NIST authentication standard SP 800-63B

[Taipei, February 6] The 2023 FIDO Taipei…

[Taipei, February 6] The 2023 FIDO Taipei Seminar opened with a speech by Minister of Digital Affairs Audrey Tang. It was a very comprehensive speech that radiated a real sense of knowing the subject, as you would expect from her. As it happens, I am also appearing in today's closing panel.

Panel: Global Perspectives on Passwordless Authentication

Moderator: Christina Hulka, Chief Operating Officer, FIDO Alliance
Panelists:
Koichi Moriyama, Chief Security Architect, Corporate Evangelist, and Head of Security Innovations Management, Corporate Strategy & Planning Department, NTT DOCOMO, INC.
Nat Sakimura, Chairman, OpenID Foundation
Vittorio Bertocci, Principal Architect at Okta
Paul Grassi, Principal Security – Identity, Amazon
Pamela Dingle, Director of Identity Standards, Microsoft

Tonight, together with Moriyama-san, a member of the FIDO Alliance Executive Council and a W3C board member, I will look back on today's seminar and read through SP 800-63B, the part of the NIST guidelines that covers authentication and is most closely related to FIDO.

I will be streaming from my hotel with less equipment than usual, so I am not sure how well it will go, but please do drop by.

Appendix: FIDO Taipei Seminar agenda

8:00-10:00 REGISTRATION AND DEMOS
10:00-11:00 Welcome Remarks
10:00-10:05 Welcome Remarks – Signature sponsor
10:05-10:25 Welcoming Remarks – Representatives of Government
10:25-10:45 Guest speech – Audrey TANG, Minister of Digital Affairs
10:45-11:05 FIDO Welcome + State of Passwordless – Andrew Shikiar, Executive Director and Chief Marketing Officer of FIDO Alliance
11:05-11:30 BREAK and DEMOS
11:30-12:30 FIDO Intros
11:30-11:55 FIDO Technical Overview & the Future of Passwordless – Ki-Eun Shin, Principal Software Development Engineer, SK Telecom
11:55-12:10 The Value of FIDO Certification – Paul Heim, Director of Certification, FIDO Alliance
12:10-12:45 Impact of FIDO in Government and Policy – Jeremy Grant, Managing Director, Technology Business Strategy, Venable LLP
12:40-1:45 LUNCH
1:45-3:00 FIDO Global Case Studies / Content
1:45-2:00 Taiwan identification solution — TW Fido – Han-Chieh Sun, Chunghwa Telecom
2:00-2:20 NTT DOCOMO Goes Passwordless – Masao Kubo, NTT DOCOMO Inc.
2:20-2:40 Bringing FIDO to on Premise Environment in Taiwan – Zake Huang, Vice President, AuthenTrend
3:10-3:30 BREAK and DEMOS
3:30-4:30 FIDO in IoT: FIDO Device Onboard
3:30-3:50 Introduction to FIDO Device Onboard – David Turner, Senior Director of Standards Development, FIDO Alliance
3:50-4:20 Panel: The Imperative for a More Secure IoT – Moderator: David Turner, Senior Director of Standards Development, FIDO Alliance; Panelists: Simon Trac, CEO, VinCSS; Giri Mandyman, Chief Security Architect – IoT and Automotive, Qualcomm; Rolf Lindemann, VP Products, Nok Nok Labs
4:20-5:00 Closing Sessions
4:20-5:00 Panel: Global Perspectives on Passwordless Authentication – Moderator: Christina Hulka, Chief Operating Officer, FIDO Alliance; Panelists: Koichi Moriyama, Chief Security Architect, Corporate Evangelist, and Head of Security Innovations Management, Corporate Strategy & Planning Department, NTT DOCOMO, INC.; Nat Sakimura, Chairman, OpenID Foundation; Vittorio Bertocci, Principal Architect at Okta; Paul Grassi, Principal Security – Identity, Amazon; Pamela Dingle, Director of Identity Standards, Microsoft
5:00-5:05 Closing Remarks – Karen Chang, Vice President, Egis Technology Inc.
5:05-7:00 RECEPTION AND DEMOS

Saturday, 04. February 2023

Cyberforge (Anil John)

Building mudge's wallet

CyberForge Journal - Feb 4, 2023

The digital wallet is the killer app for the completely open source hardware and software “digital mobile safe” program led by mudge (aka Peiter Zatko) at Google’s Advanced Technologies and Projects (ATAP) Group back in 2015. We need it today!

Digital wallets are core to enabling individual agency and control over sensitive data in online and offline interactions. In many ways, the European Union is leading this charge with the massive identity ecosystem orchestration they are currently in the midst of.

At the same time, to ensure that wallets truly remain under individual control we must pay attention to choices being made in its development such that the future does not become a collection of proprietary vertical cylinders of excellence (silos), and that individuals have true choice in how and when they select wallets for their personal use.

However, there is a curious but important missing element in all of these digital wallet conversations; everyone talks about protocols to interact with a wallet and what should or should not be stored in a wallet, but there is little to no discussion about ensuring the openness, consistency and fidelity of the core security, privacy, cryptographic and storage aspects that a wallet actually needs!

Common need across digital wallets

Let us fast forward in time to understand what is needed:

I am an EU Citizen with an EU Digital Identity (EUDI) Wallet who is seeking to live and work in the US. I need to interact with the US Government to provide information about who I am, and at the end of that process receive some manner of immigration or work permit related credential. Since I already have an EUDI Wallet, I would like to provide information to the USG from that wallet and in turn continue to use that wallet to store any immigration credential I get from the USG.

I am a Canadian who is looking to live and work in the EU and when seeking employment there would like to use the digital wallet from my Province that stores my identity information as well as education credentials. My potential employer supports, per EU regulation, the ability to consume these credentials in a digital form.

Following upon both of the above scenarios, I would like to open a bank account using my digital wallet to present the needed information for KYC.

There are so many more variations of this that span the public and private sector, as well as a variety and combinations of credentials.

The commonality across ALL of them is not the credentials, or the protocols, but the underlying foundation and “trustability” of the digital wallet itself by a counter-party. Everything else is pipes and payloads.

And that, to my mind, is what the Google ATAP Project Vault showed could be done in a completely open manner, such that you can actually have independently verifiable confidence in that wallet when it shows up at your digital front door.

Google ATAP Project Vault

I would highly recommend that folks watch the video below, which includes both a great introduction by Regina Dugan (formerly the first female director of DARPA) and an overview and demo of the capability by mudge.

So the question is, why build this [...]

It turns out, you already have security elements in your phones and your computers. SIM Cards; they protect the things that are important to the carriers. Trusted platform modules, or TPMs; they protect the things that are important to the OEMs and the content providers. [...]

So, where is the security element that protects the things that are important to you, that you have complete control over? So we made one!

[...] So what do developers have to do to get the phones or laptops or IOT devices ready to use Project Vault? Nothing! The host system thinks this is a regular storage device.

mudge

Link to Project Vault YouTube video


As I wrote back in 2015:

Project Vault is a micro-SD form-factor card that, when plugged into a device, makes available to the device a suite of cryptographic tools to enable hashing, signing, bulk encryption, streaming encryption as well as a strong hardware random number generator.

The algorithms in the vault are not exposed to the host system so you can plug it into an untrusted device while being confident that the host system will not be able to corrupt the vault’s crypto services.

It also comes with 4GB of isolated, sealed storage which can be used for enabling capabilities such as an immutable logging system where evidence of malicious activity cannot be covered up by tampering with the logs.

It is operating system agnostic, so will work with a variety of both mobile and desktop operating systems. To the OS it behaves like a proc file system without any kernel drivers, which means that developers don’t have to do anything special in order to utilize it.

The one change I would hope for in any current-day implementation “… is a Security Key form-factor with USB-C …”, because USB Type-C is mandated to become the EU’s standard by the end of 2024, and where the EU is leading with that, the world will follow.

Opportunity for personal, truly open digital wallets

There is a clear opportunity here to implement an open source hardware(!) and software foundation (the vault) to enable secure and interoperable wallets that can be used for a variety of purposes and that truly remain under the control of an individual.

However, for anyone going down this path, what will become critical to their success is what they choose not to do rather than what they actually do.

In order to provide a truly open foundation for a digital wallet that is under the personal control of an individual, it will require them to:

keep the focus on enhancing and making production ready the foundational vault capabilities to support hardened, accelerated cryptographic operations and secure storage that are needed for any type of digital wallet;

build well documented APIs to interact with the core vault services that are open, royalty free and free to implement for anyone;

think through how best to put into place an independent assessment/verification mechanism with a high degree of transparency and credibility that allows counter-parties to assess the “goodness” of an implementation.

I anticipate that the hardest challenges here will not be technical, but instead will be to:

resist the pressure from handset and platform vendors who will seek to influence the work in a manner such that the capabilities being externalized into the vault continue to be gatekeeper’d by them;

resist the use of platform or vendor specific hardware security elements instead of the open source externalized hardware security elements demonstrated by Project Vault;

resist getting dragged into the political infighting around implementing the variety of credential types or credential issuance and presentation protocols directly in the vault; it should be the responsibility of those specific communities to define and standardize how their protocols can utilize the open, platform and vendor neutral vault APIs.

Building on an open source foundation

I have no visibility into or awareness of what happened with Project Vault after mudge moved on from Google ATAP. What I do believe is that he, particularly given his prior stint at DARPA, looked around the corner for what was coming, drove the work that was needed to be ready for that future, and made sure it was as widely available as possible by making it entirely open source!

That is a gift that should not be wasted!

Recently: Commons are not tragic

Elinor Ostrom, the first woman to be awarded the Nobel Prize in Economic Sciences, debunks the “Tragedy of the Commons” which posits that “Humans, when left to their own devices, compete with one another for resources until the resources run out.”

Her research revealed that “Far from being profoundly destructive, we humans have deep capacities for sharing resources with generosity and foresight.”

The features of successful systems, Ostrom and her colleagues found, include:

clear boundaries (the ‘community’ doing the managing must be well-defined);

reliable monitoring of the shared resource;

a reasonable balance of costs and benefits for participants;

a predictable process for the fast and fair resolution of conflicts;

an escalating series of punishments for cheaters;

and good relationships between the community and other layers of authority, from household heads to international institutions.

This has implications for and provides hope to those who are building and investing in the “Internet Commons”.

cyberLinks: random and relevant

Project Vault available under the Apache 2.0 License - “Verilog and VHDL and all the code for the security based Real Time Operating System (RTOS), the SD Controller firmware to allow communications with the host systems, the NAND flash translation layer so you can manipulate the sealed storage, the hardened and accelerated hardware crypto cores, and the interface description language so you can talk to it. Even the processor is open source - the OpenRISC1200”

The Wallet Wars Are Not About Money, They Are About Identity - “Around the world the transition from physical wallets to digital wallets is well underway. An Accenture survey of 16,000 customers in 13 countries found that 56% of them were using digital wallets more than five times every month (compared with only 48% using cards that often) and they interpret these results to mean that heading towards a hundred billion dollars of annual payments revenues for banks are “at risk”.”

OpenWallet Foundation - “The mission of the OWF is to develop an open source engine to enable secure and interoperable multi-purpose wallets anyone can use to build solutions. The OWF aims to set best practices for digital wallet technology through collaboration on open source code for use as a starting point for anyone who strives to build interoperable, secure and privacy-protecting wallets.”

World Wide Web Consortium (W3C) is now a public-interest nonprofit organization - “… perhaps the best insurance policy came in the form of what has been described by The Boston Globe as Berners-Lee’s “greatest act of all” in being something he “didn’t do:” require fees for patents. Instead, a royalty-free patent policy was created so that people who use patents covering technologies in their standards don’t have to pay royalties or fees. Other standards development organizations have since copied this over the years since it was developed in the early 2000s.”

This Journal entry, Building mudge's wallet, first appeared on CyberForge.



Werdmüller on Medium

Resetting professional goals


Giving myself permission to dream big again.

Continue reading on Medium »

Friday, 03. February 2023

Jon Udell

Mapping the wider Fediverse


I began this journey convinced that Steampipe could help the Fediverse evolve, but not sure exactly how. My first thought was to use Steampipe’s API-wrangling superpower to study patterns of communication (and conflict) across the Fediverse. But as one of many Twitter escapees last November, I soon realized that the network I was joining reflected a culture that had been humming along nicely for six years and didn’t particularly want to be the object of sociological study.

As I argued in Autonomy, packet size, friction, fanout, and velocity, Mastodon bakes in certain kinds of friction for reasons. You’ve likely heard about a default unfriendliness to search, which is both a technical setting and a cultural choice that privileges the experience of current flow over the mining of past flow. Even more fundamentally, the ID of a toot not only differs from server to server but also obfuscates the toot’s date, another technical/cultural choice that means you can’t randomly access history by date. None of these frictions is insurmountable. They will be overcome for purposes good and bad. I hope and expect that communities will be able to choose their desired amounts and kinds of friction while still interoperating with others. But for my project it seemed that trying to survey the wider Fediverse wasn’t the right place to start.

So instead I began to explore a different way to read my home timeline. The dashboards that I’ve built and described in this series have become, for me at least, an effective way to scan recent Mastodon flow, then click into the stock client to post, reply, or boost. After overcoming a few obstacles, things are starting to feel like the Bloomberg terminal for Mastodon that I envision.

One of those obstacles was the awkward copy/paste/search of foreign toot URLs that was required in order to interact with them. That’s now overcome by Instance-qualified Mastodon URLs. Another obstacle was the difficulty of curating and effectively reading topical lists of people. The strategies described in Lists and people on Mastodon and Working with Mastodon lists have improved matters nicely. And relationship graphs turned out to be a more useful alternate view of the current flow than I had expected.

I think I’ve proven that a set of Steampipe dashboards, layered on a plugin that maps the Mastodon API to tables that the dashboards query, can improve the ability to absorb and react to Mastodon flow. An unproven corollary: Steampipe’s dashboards-as-code system is only one of potentially many clients of the Mastodon plugin. Any dashboarding system or web app or native app could tap into the same query capability to deliver still another way to experience the flow. But that’s a future episode.

Meanwhile, with a decent reading experience in place, this seemed like a good time to circle back to the question of surveying the wider Fediverse. To begin enabling that I added a couple of new tables to the plugin: mastodon_peer and mastodon_domain_block.

Here’s a query that uses mastodon_peer.

with data as (
  select
    'https://' || server as server
  from
    mastodon_toot
  where
    timeline = 'home'
  limit 100
),
servers as (
  select
    server,
    count(*) as occurrences
  from
    data
  group by
    server
)
select
  s.server,
  s.occurrences,
  count(p.peer) as peers
from
  servers s
join
  mastodon_peer p
on
  s.server = p.server
group by
  s.server,
  s.occurrences
order by
  peers desc

In English: gather the most recent 100 toots in my home timeline, count the occurrences of each origin server, then ask each origin server how many other servers it talks to. Unsurprisingly my home server, mastodon.social, occurs most often. And because it’s the marquee Mastodon server it has the most peers.

+----------------------------------+-------------+-------+
| server                           | occurrences | peers |
+----------------------------------+-------------+-------+
| https://mastodon.social          | 11          | 49732 |
| https://fosstodon.org            | 1           | 33973 |
| https://octodon.social           | 1           | 29983 |
| https://infosec.exchange         | 2           | 26833 |
| https://indieweb.social          | 9           | 26279 |
| https://hachyderm.io             | 3           | 19911 |
| https://social.treehouse.systems | 3           | 18110 |
| https://journa.host              | 1           | 18021 |
| https://nerdculture.de           | 9           | 17984 |
| https://werd.social              | 2           | 13792 |
| https://dan.mastohon.com         | 2           | 13351 |
| https://masto.nyc                | 1           | 10917 |
| https://mastodon.archive.org     | 1           | 9582  |
| https://social.fossdle.org       | 1           | 8343  |
| https://devdilettante.com        | 12          | 6898  |
+----------------------------------+-------------+-------+

Here’s a query that uses mastodon_domain_block.

with data as (
  select
    'https://' || server as server
  from
    mastodon_toot
  where
    timeline = 'home'
  limit 100
),
servers as (
  select
    server,
    count(*) as occurrences
  from
    data
  group by
    server
)
select
  s.server,
  s.occurrences,
  count(d.domain) as "blocked domains"
from
  servers s
join
  mastodon_domain_block d
on
  s.server = d.server
group by
  s.server,
  s.occurrences
order by
  "blocked domains" desc

This one says: Again gather the origin servers in my recent home timeline, but this time ask each one how many other servers it blocks. Here we see that octodon.social, which happened to show up in my timeline when I ran the query, blocks a lot more servers than mastodon.social does.

+--------------------------+-------------+-----------------+
| server                   | occurrences | blocked domains |
+--------------------------+-------------+-----------------+
| https://octodon.social   | 1           | 510             |
| https://mastodon.social  | 8           | 181             |
| https://hachyderm.io     | 4           | 125             |
| https://infosec.exchange | 4           | 66              |
| https://nerdculture.de   | 1           | 36              |
| https://indieweb.social  | 4           | 23              |
+--------------------------+-------------+-----------------+

One could, and maybe at some point I will, comprehensively acquire and store this data. But meanwhile, how might it enhance the experience of reading recent Mastodon flow? Here’s what I’ve got so far.

We’ve already seen the first table which answers the question: “How many servers does each of the origin servers in my timeline block?” The second table answers a different question: “Which servers are most often blocked by the origin servers in my timeline?”
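Here is a minimal sketch (not from the original post) of the kind of query that could back that second table, assuming only the mastodon_toot and mastodon_domain_block tables used above:

-- sketch: count how often each domain is blocked by the origin servers
-- that appear in my recent home timeline
with data as (
  select
    'https://' || server as server
  from
    mastodon_toot
  where
    timeline = 'home'
  limit 100
),
servers as (
  select distinct
    server
  from
    data
)
select
  d.domain as blocked_domain,
  count(distinct s.server) as blocking_servers
from
  servers s
join
  mastodon_domain_block d
on
  s.server = d.server
group by
  d.domain
order by
  blocking_servers desc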

The blocking server list column shows an interesting mix of consensus and variation, and I think that will be something to explore comprehensively across the Fediverse. But for now I like how this view contextualizes what’s in my home timeline. Like the relationship graphs, it’s highly dynamic because my immediate network neighborhood is changing all the time. Whatever the current set of servers happens to be, though, I now have some clues about how connected each of those servers is and how aggressively each one is blocking others. This feels like a good first step toward mapping the wider Fediverse.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/


Wip Abramson

PhD: Completed It!


2022 marked the end of a big chapter of my life. The culmination of my PhD journey, easily the most challenging project I have ever undertaken. Alongside graduating from my PhD as a Doctor of Philosophy, this year I formalised my relationship as a Consultant Requirements Engineer at Legendary Requirements, helped to produce RWOT the Hague and participated in a research sprint focused on Digital Identity in Times of Crisis coordinated by the Berkman Klein Center where I wrote a soon to be published short hopepunk story titled Experiences in Memory Space.

This post is a brief reflection on and celebration of this last year and the PhD adventure that brought me to this moment. It will likely be the last post I submit to this site, as I experiment with new self-presentations of myself online.

First, I have to say what an honour, joy and privilege it was to participate as one of the first PhD students at the Blockpass Identity Lab at Edinburgh Napier University. I didn’t fully comprehend what I was getting myself into back in 2018, but it exceeded my expectations.

That I had the freedom and support to explore the boundaries and the depths of human knowledge is something I will be forever grateful for. To perceive the interconnections and evolution of human scientific thought stretching back into our past, back to the seeds of ideas and the subsequent foundations upon which our modern information society has been built. It has been both an awe-inspiring and humbling experience. I am grateful to all those who influenced, shaped and supported me along the way, especially my PhD supervisor and mentor Professor Bill Buchanan.

It is telling that the last post I made to this site was almost two years ago in March 2021. Since then, almost all of my writing energy was turned towards producing my thesis. In the process I wrote over 200,000 words on 750words.com, a lifesaving practice where I worked through my goals, fears, doubts and initial drafts. Then there were the countless words produced iterating through drafts, revisions and rewrites as I attempted to corral my thoughts and synthesise my learnings into a coherent, consistent body of text. The end result is an 80,000-word tome titled Identity and Identification in an Information Society: Augmenting Formal Systems of Identification with Technological Artefacts that I imagine few will ever read. Regardless, it is a piece of work I will always be proud of.

My thesis presents my unique perspective on identification systems, shaped by my academic research across multiple disciplinary boundaries and my practical experience as an implementer and participant in the rapidly evolving decentralized identity space. I believe there are some nuggets within its pages. My personal highlights include:

The chapter titled Security without Identification, which traces the history and evolution of scientific thought on cryptography through the lens of cryptographic credentials, first conceptualised in David Chaum’s seminal paper Security without Identification.

My determination to include a broader, sociological understanding of identification systems, despite advice that it should be left out of scope. See the chapters Identity, Trust and Privacy in an Information Society and Identification Systems.

The workshop I ran at the Royal College of Physicians of Edinburgh that introduced verifiable credentials and explored issues and perceptions around their use for clinical passporting solutions.

The subtle influences of Ludwik Fleck’s work The Genesis and Development of a Scientific Fact, which I attempted to weave throughout my thesis.

The diverse, high quality references throughout my thesis, which are testament to the depth and breadth of human knowledge produced on identity and identification throughout the last 100 years.

The content in my appendix, especially my wacky abstract diagrams and a paper I wrote but never published that talks about identity and interaction in terms of complex adaptive systems.

Of course there are some aspects I am less pleased with, but I imagine that is always the case. I completed it, passed my viva and graduated which is all that really matters at the end of the day.

I have ambitions to synthesise and adapt parts of my thesis into more digestible content which I will make available on my new professional self-presentation - https://drwip.com. We will see how that goes. In the meantime, if you are interested to give it a read I am happy to send it across.

It is also important to point out that the thesis is not the journey, just like the map is not the territory. There were so many other explorations, side quests, learning opportunities and experiences that shaped my perspective throughout my PhD and are not well reflected in the thesis. Growing into a researcher and developing my academic voice, exploring the beauty of Edinburgh and Scotland, learning Rust, participating in the interdisciplinary TAPESTRY research sandpit, over two years collaborating with the OpenMined community, the Odyssey hackathon where I contributed to the initial POC for the Commons Stack, RWOT Barcelona, MyData, my attempt at the Cabaret of Dangerous Ideas as part of the unfortunately virtual Edinburgh Fringe, my evolving thoughts about memory and meaning, and countless other experiences. At least half of my studies also took place against the backdrop of the pandemic, which added its own layer of challenges and changes to adapt to.

My PhD journey was daunting, exhilarating, stressful, inspiring, exhausting, humbling and joyful at different moments along the way. A rollercoaster of emotions, motivation, exhaustion and determination. I am glad it is over, but I will be forever grateful for the opportunities and experiences that it opened up for me.

Looking to the Future

The future is unknown and unknowable. Despite this, I have repeatedly seen the value of setting expectations of it: imagining, outlining and articulating possible futures that I would like to see come to pass. In fact I am playing around with a whole new space - https://futurewip.com - to playfully explore and experiment with just this in the context of the climate emergency, even if it is more of a placeholder at the moment. I enjoy learning a new voice and style to write in. Crafting new identities, you might say. If I have learnt anything over my PhD, it is that the identities we hold, take or have applied to us shape our experienced present and in turn influence our future. I believe there are few things more powerful than intentionally constructing loosely held identities for yourself as a means to explore yourself and who you might want to be in the future.

Another thing I learnt during my thesis is that I love the freedom to self-direct my attention and time whilst exploring things that I care deeply about. Identity and identification in an information society is certainly one of those things, as I have come to view it as critical to the future of humanity and the forms and structures it might take. Whatever I end up doing in the future, I want to make sure I have time to spend as I choose, because I am confident in my ability to spend it well and know this helps me thrive. At the moment that looks like pursuing a research and development agenda focused on memory and meaning, as previously discussed on this blog. Indeed the questions I posed on my home page two years ago appear rather prescient and increasingly relevant in the context of LLMs such as ChatGPT. The speculative fiction story I produced as part of the BKC research sprint further explores these ideas through a different creative lens. I have some other interesting ideas and experiments around this concept that I hope to share over the coming months.

Professionally, I see my future intertwined with Legendary Requirements. Through Legendary I have been exposed to a diverse and interesting set of clients, projects and technologies with challenging problems that are fun to work through. It is clear that we add value to our clients, that I add value to Legendary and that Legendary values my contributions. My work is flexible, engaging and self-directed. The people I work with are intelligent, kind and thoughtful. I feel fortunate to have had such a smooth transition from my PhD into a job that respects my time, supports my growth and encourages my independence.

The decentralized identity space is still young and emerging. Legendary Requirements is well positioned to continue to work at the forefront of this space, helping clients discover, document and maintain human requirements for real world systems and use cases.

I am excited to see what the future holds.

Wednesday, 01. February 2023

Foss & Crafts

55: Free Soft Wear


Morgan talks about "Free Soft Wear": textile processes under free culture licenses!

Links:

Morgan's talk about Free Soft Wear at the Creative Freedom Summit

Elena of Valhalla’s repository of CC BY-SA sewing patterns

Morgan's blog

Free Soft Wear index

Dice bag and simple skirt tutorials

RSI Glove pattern

Simple sweater

Layered Skirt

Kat Walsh or @kat@stareinto.space

Tall Dog Electronics face mask (You may recognize Dan and Tall Dog Electronics of TinyNES fame)

Wikimedia Commons

Project Gutenberg

Learning the sewing machine

RSI episode

FreeSewing (an open source software project that creates made-to-measure creative commons licensed sewing patterns)

Tuesday, 31. January 2023

@_Nat Zone

February 2 (Thu), 10 p.m.: Livestreamed session #3 of leisurely reading through the draft "NIST SP800-63-4"


The third session of the leisurely read-through, started last week, of the Initial Public Draft of the US NIST SP800-63-4 (published last December, open for public comment until 3/24) will take place on February 2 from 10 p.m. This time we start reading from Section 2.2 of Part A, the identity proofing standard. I expect this will include a discussion of the appropriateness of asking for a person's name. What, you think it is obviously normal to make people enter their "name"? Not at all. Asking for a family name and a given name is a very Western-leaning cultural convention. In some cultures there is no family name, and yet processing fails unless one is entered; conversely, when you do try to enter a name there may be character limits that prevent it, or characters you are not allowed to use.

These are in fact also questions of cultural equity, and this question of equity is one of the highlights of this revision of SP800-63.

Incidentally, in Japan this is relatively hard to see because names were unified into family name plus given name in the Meiji era, but the problem still exists. For example, the person to whom I rent out a house is a foreign national who has no family name, so they enter their given name again in the family name field.

Also, international boarding passes sometimes allow only eight characters, so mine reads SAKIMURA NATSUHIK. A certain bank's system is the same: my name would not fit in the romanized name on a combined cash and credit card and I was asked to shorten it. I took the opportunity and made it NAT SAKIMURA, though.1

In the first place, there are many places where the kanji of my family name, "﨑", cannot be entered at all. I personally would be fine with "崎", but my family register uses "﨑", so anything else sometimes gets rejected. And yet systems often cannot accept the character, or garble it on output.

Oh, and katakana names are a problem too. "ナットコンサルティング" (NAT Consulting) cannot be entered into eLTAX, because the small "ッ" and "ィ" are not accepted. Worse, the system automatically pulls the reading over from the kanji field, so I had no idea what was causing the error and was practically in tears, begging "please, just let me pay my taxes". Tracking down the cause was a real job. Incidentally, my tax accountant's advice was: "pay at the counter".

These input restrictions really ought to be reconsidered by going back to the question of why we require names in the first place. For eLTAX, for example, there is a corporate number, which should be sufficient on its own. Corporate names are not unique anyway, so they cannot serve as identifiers.

Coming back to individuals, dropping the process of asking for a "name" could actually reduce the various disadvantages people suffer when they change their surname. From an equity perspective, it is also important to assess whether requiring a name creates barriers that make applications harder for certain groups, and whether that functions in a discriminatory way. So if you are a proponent of #選択的夫婦別姓 (selective separate surnames for married couples), please do think about this together with us.

Of course, the central issue in SP800-63-4 is not names but things like requiring photo ID.

Version 4 has a considerably broader scope than version 3, SP800-63-3. In this respect there are still some gaps; when I asked one of the authors "isn't ○○ missing entirely?", the answer was "we are aware of it and plan to write it", and "all comments are welcome".

SP800-63 is a standard for the US federal government, but it has a major influence on related standards in other countries, including the Japanese government's. As it happens, the Japanese government also seems to be considering revisions in this area, so studying SP800-63 properly is worthwhile.

So I plan to read through SP800-63-4 over roughly eight sessions. The reason for splitting it into eight is that the related documents are:

SP800-63-4 https://nvlpubs.nist.gov/nistpubs/Spe…
SP800-63A-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63B-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63C-4 https://doi.org/10.6028/NIST.SP.800-6…

That is four volumes in all, and there is no way to get through them in a single session.

These are also being translated into Japanese by the OpenID Foundation Japan, and some of the translations may be readable by the time of the stream. If they are usable, I may use them as companion reading.

You can participate via chat or via video2. Video participation uses mmhmm; if you would like to join via mmhmm, please contact me and I will send you a link. For chat, you should be able to take part once you have been subscribed to this YouTube channel for at least a minute. That said, there was a case last time where someone who subscribed right before the session could not post to chat, so I recommend subscribing in advance.


Jon Udell

Verification for Mastodon


This post attests to my ownership of this Mastodon account.

Monday, 30. January 2023

Doc Searls Weblog

Does Sirius XM sound far worse than listening to music on YouTube?


That’s a question asked on Quora and deleted before I posted my answer. So I’m posting my answer here.

This is like asking if a car radio sounds better than a TV. Because it’s a matter of where, how, when, and what, more than a matter of sound.

There is some overlap in the sense that both SiriusXM and YouTube are fully useful on mobile apps. But you don’t watch your radio in your car or listen to your radio on your TV, even though it’s possible to do both through apps that are native to both the road (through Android Auto and Apple Carplay) and the living room (through Roku, Amazon, Apple, and other TV app platforms).

As for the sound itself, YouTube lets you select audio quality bitrates up to 256kbps AAC & OPUS. SiriusXM’s default bitrate is also 256kbps, but over the satellite link bitrates are typically lower—sometimes much lower. But, since SiriusXM does not (to my knowledge, so far) publish their bitrates in a place that’s easy to find, its bitrates are subject to debate. Here is one of those on Reddit.

But, again, it’s a matter of where, when, and what, more than how. If you want to see and hear a piece of music, YouTube provides enormous optionality, with its almost boundless collection of videos. If you want radio-like music programming, SiriusXM offers more—plus talk, news, sports (including play-by-play for all the major ones), and more.

Yet the Internet has more than both put together. That’s why the image above is of Radio Paradise, which is one of the oldest and best Internet music stations. It’s live on the Net and the Web, and it has Best Of collections on YouTube as well.

Bonus link (and a lot of fun): radio.garden. There’s an app for that too.


Damien Bod

Using multi-tenant AAD delegated APIs from different tenants


This post looks at implementing and using Azure AD multiple tenant applications from different tenants. A service principal needs to be created for the tenant using the multi-tenant API and consent needs to be given for the API scope. The API will accept tokens from different issuers which need to be validated. It is important that all tenants allowed to use the API are validated.

Code: https://github.com/damienbod/AadMutliApis

Setup API Azure App Registration

A multi-tenant API Azure App registration is used to expose the scope which is required to use the API. The API is implemented using ASP.NET Core and validates this claim in the JWT token as well as other claims. V2 tokens are required and only delegated access tokens are created from this App registration.

Implement the API

An ASP.NET Core application implements the API and the security logic validating the access token. Only an explicit list of issuers is allowed to use the API. The API validates that a secret was used to authenticate, and the client that requested the access token is also validated. The authorization handler validates that the token has a scope claim, which means that it is a delegated access token (if produced by Azure AD). Great care has to be taken when using multi-tenant app registrations, because any tenant can use the App registration but not every tenant should be allowed to use the API.

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(JwtBearerDefaults.AuthenticationScheme, options =>
    {
        options.MetadataAddress = aadMetadataAddress;
        //options.Authority = issuert1;
        options.Audience = aud;
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidateAudience = true,
            ValidateIssuerSigningKey = true,
            ValidAudiences = new List<string> { aud },
            ValidIssuers = new List<string> { issuert1 }
        };
    });

services.AddSingleton<IAuthorizationHandler, ValidTenantsAndClientsHandler>();

services.AddAuthorization(policies =>
{
    policies.AddPolicy("ValidTenantsAndClients", p =>
    {
        // only delegated trusted known clients allowed to use the API
        p.Requirements.Add(new ValidTenantsAndClientsRequirement());
        // Validate id of application for which the token was created
        p.RequireClaim("azp", azpClientId);
        // client secret = 1, 2 if certificate is used
        p.RequireClaim("azpacr", "1");
    });
});

services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .AddAuthenticationSchemes(JwtBearerDefaults.AuthenticationScheme)
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

The handler validates that the scope claim has the expected value. Together with the rest of the validation, it is possible to validate that the access token is intended for this API.

public class ValidTenantsAndClientsHandler : AuthorizationHandler<ValidTenantsAndClientsRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        ValidTenantsAndClientsRequirement requirement)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));
        if (requirement == null)
            throw new ArgumentNullException(nameof(requirement));

        var scopeClaim = context.User.Claims.FirstOrDefault(t => t.Type == "scope");

        if (scopeClaim != null)
        {
            var scopes = scopeClaim.Value.Split(" ", StringSplitOptions.RemoveEmptyEntries);
            if (scopes.Any(t => t == "access_as_user"))
            {
                context.Succeed(requirement);
            }
        }

        return Task.CompletedTask;
    }
}

Setup Service Principal for other tenants

Now that the application is ready and the App registration exists, a service principal can be created for this Azure App registration in the target tenant.

Connect-AzureAD -TenantId '<UI-tenantId>'
New-AzureADServicePrincipal -AppId 'AppId-from-multi-tenant-api'

Give consent in your tenant to the Enterprise applications

The service principal can be found in the Enterprise applications blade.

Open the Enterprise Applications blade

Find your enterprise application using the Guid ObjectId from the PowerShell script

Open the Permissions blade

Grant Admin consent if you need to use local tenant permissions

Using and consent

To use the UI application and the multi-tenant API, consent must be given, usually by a tenant administrator on behalf of all users in this tenant. Different consents screens are displayed depending on the Azure tenant policies and the person using the application.

Once consent has been given, this can be viewed in the API permissions of the Enterprise application created for the target tenant.

The extra step of explicitly allowing the tenants that can use the API has advantages. You can continue to use delegated scopes and implement token exchange protocols for the downstream APIs where required. This is a secure way of connecting software systems using different identity providers if implemented correctly. A disadvantage with the approach is that each admin must give consent to use the API on their tenant.

Links
Azure AD Multi tenant Azure App registration consent

https://stackoverflow.com/questions/60929155/how-to-create-service-principal-of-multi-tenant-application

Sunday, 29. January 2023

Mike Jones: self-issued

JSON Object Signing and Encryption (JOSE) Working Group Reanimated


I’m thrilled that the IETF has restarted the JSON Object Signing and Encryption (JOSE) Working Group. It’s chartered to work on JSON- and CBOR-based representations for Zero-Knowledge Proofs (ZKPs), selective disclosure enabling minimal disclosure, and non-correlatable presentation. The representations are planned to use the three-party model of Issuer, Holder, and Verifier utilized by Verifiable Credentials.

See the newly approved JOSE charter at https://datatracker.ietf.org/doc/charter-ietf-jose/03/. The working group will be chaired by Karen O’Donoghue, John Bradley, and John Mattsson, with the assigned area director being Roman Danyliw.

I believe this is a great outcome because the JOSE working group participants already have expertise creating simple, widely-adopted JSON-based cryptographic formats, such as JSON Web Signature (JWS), JSON Web Encryption (JWE), and JSON Web Key (JWK). The new formats will be peers to JWS, JWE, and COSE, reusing elements that make sense, while enabling use of new cryptographic algorithms whose inputs and outputs are not representable in the existing JOSE and COSE formats.

If you’re interested in the work, please join the JOSE mailing list at https://www.ietf.org/mailman/listinfo/jose if you’re not already a member. Also, plan to participate in IETF 116 Yokohama, where we should be able to have the first meeting of the reconstituted working group. I hope to see you there!

As background, the first step in the JOSE rechartering was the JSON Web Proofs (JWP) BoF at IETF 114 in Philadelphia sponsored by Security Area Director Roman Danyliw and chaired by Karen O’Donoghue and John Bradley, during which Jeremie Miller, Kristina Yasuda, Tobias Looker, and I presented. That was followed by a Virtual Interim JWP BoF in October, 2022, review on the ietf-announce mailing list, and multiple IESG discussions.

All of which brings us back to the (now recurring!) question: “What Would JOSE Do?” Join us and be part of answering it!


reb00ted

What if Apple's headset is a smashing success?


Signs are pointing that Apple will announce its first headset in the next few months. This would be a major new product for Apple – and the industry beyond –, but there is very little excitement in the air.

We can blame Meta for that. After buying Oculus, iterating over the product for almost 9 years since, and reportedly spending more than $10 billion a year on it, their VR products remain a distinct Meh. I bought a Quest 2 myself, and while it definitely has some interesting features (I climbed Mt Everest, in VR!), it mostly sits on the shelf, gathering dust.

So the industry consensus is that Apple’s won’t amount to much either. If Meta couldn’t find compelling use cases, the thinking goes, Apple won’t either, because there aren’t any! (Other than some limited forms of gaming and some niche enterprise ones.)

I think this line of thinking would be a mistake.

My argument: Apple understands their customers and works through their use cases better than anybody. If Apple works on a new product category for many years – and signs are that they have – and then finally decides that the product is ready, chances are, it is. Their track record on new products is largely unblemished since the return of Jobs about 25 years ago:

fruity fun design for a computer (iMac) – success

digital music player (iPod) – smashing success

smartphone (iPhone) – so successful it killed and reinvented an entire industry

tablet (iPad) – success

watch (iWatch) – success

… and many smaller products, like headsets, voice assistance, Keynote etc.

Looking for a major dud in those 25 years, I can’t really find one. (Sure, some smaller things like the 25-year anniversary Mac – but that was always a gimmick, not a serious product line.)

It appears that based on their history, betting against Apple’s headset is not a smart move. Even if we can’t imagine why an Apple headset would be compelling before we see it: we non-Apple people didn’t predict iPhone either, but once we saw it, it was “immediately” obvious.

So let’s turn this around. What about we instead assume the headset will be a major success? Then what?

I believe this would transform the entire technology industry profoundly. For historical analogies, I would have to go back all the way to the early 80’s when graphical user interfaces first became widely used – coincidentally (or not) an Apple accomplishment: they represented a fundamentally different way of interacting with computers than the text terminals that came before them. Xerox Parc gave that demo to many people. Nobody saw the potential and went with it, just Apple did. And they pulled a product together that caused the entire industry to transform. Terminals are still in use, but only by very few people for very specific tasks (like system administrators).

What if AR/VR interfaces swept the world as the GUI swept the PC?

I believe they can, if somebody relentlessly focuses on use cases and really makes them work. I built my first 3D prototype in VRML in 1997. It was compelling back then and it would be today. Those uses can be found, I’m quite certain.

Based on everything we’ve seen, it’s clear that Meta won’t find them. Hanging out with your friends who don’t look like your friends in some 3D universe is just not it. But if anybody can do it, it’s Apple.

So I’m very much looking forward to seeing what they came up with, and I think you should be, too.

Friday, 27. January 2023

Moxy Tongue

AI in Civil Society

A Constitution with computational integrity can't originate Rights in a database.. this data domain of AI is not the domain of Rights for people in a "civil society". 
Until people own root authority, and systems are all derived accurately, there can be no healthy implementation of AI, or as we are watching, a "civil society" of ID participants. People must own root, AI is an adversarial intelligence deriving all capacity from people, in order to detect/interpret/solve problems. 
Structure yields results.. artificial intelligence is in the fraud stage.. a thing that deceives for gain.
Fraud defined: a thing intended to deceive others, typically by unjustifiably claiming or being credited with accomplishments or qualities of people.


In order to protect the nascent value of AI research, and help humanity manifest this immensely useful technical outcome, the structure of human participation in civil society MUST be conceived anew. 

What is "Sovereign Source Authority"?


Jon Udell

Images considered harmful (sometimes)


The Mastodon dashboards I’ve been developing and describing in this series are backed by a Steampipe plugin that translates SQL queries to Mastodon API calls. Like all Steampipe plugins you can use this one to run those queries in all sorts of ways: from psql or another Postgres CLI (perhaps via cron, perhaps in a CI/CD pipeline); from Metabase or Grafana or any Postgres-compatible BI tool; from Python or JavaScript or any programming language. The Steampipe core is a versatile software component that you can plug into just about any environment.
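As a minimal illustration (not from the original post), any of those clients could run the same kind of SQL shown elsewhere in this series, for example:

-- sketch: which servers appear most often in my recent home timeline,
-- using the mastodon_toot table exposed by the plugin
select
  'https://' || server as server,
  count(*) as toots
from
  mastodon_toot
where
  timeline = 'home'
group by
  server
order by
  toots desc
limit 10

Whether that runs from psql on a cron schedule or behind a Grafana panel, it is the same SQL against the same plugin.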

There’s also, of course, Steampipe’s dashboards as code approach which powers the alternate Mastodon UX that I’ve been exploring in this series. You can think of this dashboarding tool as a primitive web browser with a freakish talent for querying Postgres and weaving SQL results into widgets such as infocards, input controls, charts, tables, and relationship graphs. You compose widgets using HCL (Hashicorp Configuration Language) instead of HTML, arrange them using basic layout syntax, and view them by connecting your browser to the local Steampipe dashboard server or to cloud.steampipe.io.

The alternate Mastodon UX built this way was aptly described (thanks again Greg Wilson!) as A Bloomberg terminal for Mastodon. As a reminder, the stock Mastodon web client looks like this.

And the pages served by the Mastodon dashboards look like this.

I can scan the latter view far more effectively.

Now, I’ve been trying out a bunch of alternate Mastodon clients lately. I love Ivory on my phone, and Elk in the browser, and I use them when I want an experience that feels social-media-ish. But I don’t use Ivory or Elk or Toot! or the stock Mastodon web clients (browser, phone) when I want an experience that feels RSS-reader-ish.

For some information landscapes I want to unfold a big map, spread it out, and scan for points of interest. Social media is that kind of landscape, RSS readers were the first way I surveyed it effectively, and these Mastodon dashboards are becoming my new way.

When I compare those two screenshots though, it’s not just the density of the latter that I notice, but also the absence of pictures. At first I fretted about that. Could the dashboard pages render full-strength HTML? Perhaps they will, and there are a few different ways that could happen, but meanwhile I’ve come to embrace the text-only constraint. When I’ve got my map unfolded, and I’m scanning my home timeline or my lists for things to click into, images can be a distraction. I wouldn’t have chosen to omit them, but I find that their absence enables me to focus very effectively on who is speaking, and what they are saying or boosting.

There is also, of course, the graphical view afforded by relationship graphs. These feel very map-like in the way they reveal clusters of people interacting with one another. I’m finding them more useful than anticipated.

But the text that I read in these dashboards is image-free for now. And I think it’s having a calming effect. There are plenty of times when I want images, don’t get me wrong, and that’s partly why I use a mix of conventional Mastodon clients. But there are times when I want to dial down the clutter, just like there are times when I switch my phone to monochrome. Two things can be true: The pictures we share with one another are a source of joy, and they are sensory overload. I’ll make them optional here when I can, and would like to have the option to use all social media interfaces in text mode.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/

Tuesday, 24. January 2023

reb00ted

Activity Streams graphical model


All you need is a gazillionaire doing strange things to some internet platform, and all of a sudden decentralized social media soars in adoption. So lots of people are suddenly seriously looking at how to contribute, myself included.

Core to this is the ActivityPub standard, and real-world implementations that mix it with additional independently defined protocols, such as what Mastodon does.

None of them are particularly easy to understand. So I did a bit of drawing just to make it clearer (for myself) what kind of data can be shipped around in the Fediverse. To be clear, this is only a small part of the overall stack, but an important one.

Here are some diagrams. They are essentially inheritance diagrams that show what kinds of activities there are, and actors, etc. Posted here in case they are useful for others, too.

And here’s how to interpret my homegrown graphical notation. (I made it up for my dissertation eons ago, and used it ever since. It has certain advantages over, say, UML or traditional ERA diagram styles. IMHO :-))

Monday, 23. January 2023

Hyperonomy Digital Identity Lab

Mapping the Web 7.0 DIDComm/HTTP Architecture Reference Model to the ToIP Trust Spanning Layer Framework


Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

@_Nat Zone

"JIS X 9252:2023 Information technology - Online privacy notices and consent" has been published


On Friday, January 20, the JIS version of "ISO/IEC 29184 Online privacy notice and consent" was published as "JIS X 9252:2023 Information technology - Online privacy notices and consent". My sincere thanks go to JIPDEC, which served as the secretariat of the JIS drafting committee; to the experts who took part in the deliberations; to the people in charge at METI and at JSA; to the members of the Information Technology technical committee of the Second Standards Board of the Japanese Industrial Standards Committee, who reviewed the submitted draft; and to everyone else involved.

JIS X 9252 sets out, as a standard, how to write notices about the handling of personal data (often called privacy policies in Japan) and how to obtain consent when consent is used as the basis for processing. It started as a METI guideline, was brought to ISO and turned into an international standard through international deliberations that included the EDPB and others, and has now been brought back to Japan as a JIS standard. Since a "notice" is required in every case, it is no exaggeration to say this standard concerns every business. I hope you will pick up a copy.

The publication announcement and the official gazette notice can be found at the following links.

■ Japanese Standards Association (JSA)
https://webdesk.jsa.or.jp/books/W11M0090/index/?bunsyo_id=JIS+X+9252%3A2023
■ Official Gazette
https://kanpou.npb.go.jp/20230120/20230120g00013/20230120g000130041f.html


Damien Bod

Basic Authentication in ASP.NET Core


This article shows how basic authentication could be implemented in an ASP.NET Core application. This is not the recommended way to implement security for user flows as the password is always sent with each request but the flow is sometimes required to implement a standard or you sometimes need to support one side of an authentication flow which requires this.

Code: https://github.com/damienbod/OAuthGrantExchangeOidcDownstreamApi

Client implementation

A client implementation needs to send an authorization header with a clientId and a clientSecret separated using the “:” char and encoded using base64. The secret is hashed using SHA256 so as not to send the original secret in the request, but this does not really improve the security much; it just prevents the original application secret being shared. The request is sent using HTTPS and so the headers are encrypted.

private static string CreateBasicAuthenticationHeader(
    GetDelegatedApiTokenOAuthTokenExchangeModel reqData)
{
    var builder = new StringBuilder()
        .Append(reqData.ClientId)
        .Append(':')
        .Append(OauthTokenExchangeExtentions
            .ToSha256(reqData.ClientSecret));

    var credentials = Convert.ToBase64String(
        Encoding.ASCII.GetBytes(builder.ToString()));

    return credentials;
}

The SHA256 hash is implemented using a simple method which returns the hash as a base64 string.

public static string ToSha256(string text)
{
    using var sha256 = SHA256.Create();
    var bytes = Encoding.UTF8.GetBytes(text);
    var hash = sha256.ComputeHash(bytes);
    return Convert.ToBase64String(hash);
}

The credentials are sent using the Authorization header.

string credentials = CreateBasicAuthenticationHeader(reqData);

httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Basic", credentials);

Server implementation

The server part of the flow is implemented using the idunno.Authentication.Basic Nuget package. The credentials are validated using the same hash and checked against the expected values from the configuration.

services.AddAuthentication(BasicAuthenticationDefaults.AuthenticationScheme)
    .AddBasic(options =>
    {
        options.Realm = "oauthtokenexchange";
        options.Events = new BasicAuthenticationEvents
        {
            OnValidateCredentials = context =>
            {
                var config = context.HttpContext.RequestServices
                    .GetService<IOptions<OauthTokenExchangeConfiguration>>();

                if(ValidateBasicAuthentication.IsValid(context.Username, context.Password, config.Value))
                {
                    var claims = new[]
                    {
                        new Claim(
                            ClaimTypes.NameIdentifier,
                            context.Username,
                            ClaimValueTypes.String,
                            context.Options.ClaimsIssuer),
                        new Claim(
                            ClaimTypes.Name,
                            context.Username,
                            ClaimValueTypes.String,
                            context.Options.ClaimsIssuer)
                    };

                    context.Principal = new ClaimsPrincipal(
                        new ClaimsIdentity(claims, context.Scheme.Name));
                    context.Success();
                }

                return Task.CompletedTask;
            }
        };
    });

The ValidateBasicAuthentication class checks the used credentials.

public static class ValidateBasicAuthentication
{
    public static bool IsValid(
        string clientId,
        string clientSecret,
        OauthTokenExchangeConfiguration oauthTokenExchangeConfiguration)
    {
        if(!clientId.Equals(oauthTokenExchangeConfiguration.ClientId))
        {
            return false;
        }

        if (!clientSecret.Equals(
            OauthTokenExchangeExtentions.ToSha256(
                oauthTokenExchangeConfiguration.ClientSecret)))
        {
            return false;
        }

        return true;
    }
}

The basic authentication can be validated in the authorize attribute using the correct scheme.

[Authorize(AuthenticationSchemes = BasicAuthenticationDefaults.AuthenticationScheme)]
[HttpPost("~/connect/oauthTokenExchangetoken"), Produces("application/json")]
public async Task<IActionResult> Exchange([FromForm] OauthTokenExchangePayload oauthTokenExchangePayload)
{
    // business
}

This works well, but sending the password and the name on every request is not always the best way of implementing authentication. This should only be used if required when implementing a standard. There are better and more secure ways of securing APIs.

Links

https://github.com/blowdart/idunno.Authentication/tree/dev/src/idunno.Authentication.Basic


@_Nat Zone

January 26 (Thu), 10 p.m.: Livestreamed session of leisurely reading through the draft "NIST SP800-63-4"

Publication of the U.S. NIST SP800… had been delayed by nearly a year

The pre-draft of the U.S. NIST SP800-63-4, whose publication had been delayed by nearly a year, was released in December, and public comments are being accepted until March 24. Compared with version 3, SP800-63-3, its scope has become considerably broader. From that perspective there are still a few gaps; when I asked one of the authors, "Isn't ○○ missing entirely?", the answer was that they are aware of it and plan to write it, and that all comments are welcome.

SP800-63 is a standard for the U.S. federal government, but it has a major influence on related standards in other countries, the Japanese government's standards among them. Since the Japanese government also appears to be considering revisions in this area, it is well worth studying SP800-63 properly.

So I plan to read through SP800-63-4 over roughly two sessions. The reason for splitting it in two is that the related documents are:

SP800-63-4 https://nvlpubs.nist.gov/nistpubs/Spe…
SP800-63A-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63B-4 https://doi.org/10.6028/NIST.SP.800-6…
SP800-63C-4 https://doi.org/10.6028/NIST.SP.800-6…

four volumes in all, and there is no way they can be finished in a single session.

OpenID Foundation Japan appears to be hard at work on Japanese translations of these, so by the time of the stream some of them may also be readable in Japanese. If they are usable, I may use them as companion reading.

Why do this now, when the public comment period runs until March 24 and there is still plenty of time? Because a related meeting of a certain Japanese government ministry takes place at the end of this month.

You can take part via chat or via video. Video participation uses mmhmm; if you would like to join with mmhmm, contact me and I will send you a link. For chat, you should be able to join once you have been subscribed to this YouTube channel for at least a minute, but since someone who subscribed right before the last session could not post to chat, I recommend subscribing in advance.

Sunday, 22. January 2023

Jon Udell

Working with Mastodon lists

Since the early days of the blogosphere I have cherished the ability to view the world through the eyes of people more qualified than me to understand and explain what happens in particular domains. Although Twitter lists were a great way to collect people who provide those perspectives, I made little use of them. Prompted … Continue reading Working with Mastodon lists

Since the early days of the blogosphere I have cherished the ability to view the world through the eyes of people more qualified than me to understand and explain what happens in particular domains. Although Twitter lists were a great way to collect people who provide those perspectives, I made little use of them. Prompted by Martin Fowler’s frustration with lists I described my first steps toward reviving the technique in Lists and people on Mastodon.

First I encapsulated the relevant APIs in a pair of tables provided by the Steampipe plugin for Mastodon: mastodon_list and mastodon_list_account. Then I used those tables to enhance the Mastodon dashboard in a couple of ways. The followers and following tabs now show me who is or isn’t on a list.

And I can use the list tab to read recent posts from people on each list.

With these basic building blocks in place I want to be more intentional about curating these lists. To that end I thought I’d share the lists I’ve built so far, and invite suggestions. You can export your lists from the Mastodon web client using Preferences -> Data export -> Lists. The resulting CSV file has two columns: the name of a list and the Mastodon handle for a person.

list | person
Library | liza@post.lurk.org
Library | alexwade@fosstodon.org
Library | jdiffin@glammr.us

Here’s the export I really wanted.

list | person | server | url | note
Library | liza | post.lurk.org | https://mastodon.social/@liza@post.lurk.org | Technologist in Residence at the Harvard Library Innovation Lab. Interactive fiction, generative art, Democratic political tech. Amateur scholar of fringe utopian sci-fi. I will pet your dog.
Library | alexwade | fosstodon.org | https://mastodon.social/@alexwade@fosstodon.org | Librarian, open science advocate, geek. VP Data Products, @digitalscience (Seattle, WA) Formerly: AllenAI (AI2), CZI, Amazon, Microsoft Research, UW
Library | jdiffin | glammr.us | https://mastodon.social/@jdiffin@glammr.us | Head of Library Technology Services Section, NLM. Opinions are my own.

It’s easy to create that list using the Steampipe plugin for Mastodon.

steampipe query "select title as list, username, server, instance_qualified_account_url, note from mastodon_list l join mastodon_list_account a on l.id = a.list_id order by list, username" --output csv > mastodon_lists.csv

(I’m using instance-qualified URLs relative to my home server, if your home is elsewhere than mastodon.social you’ll want to adjust the links accordingly.)

I've uploaded the output of that query to a Google sheet. The most recent addition to my climate list is Peter Gleick, who was quoted yesterday in Farhad Manjoo's What Will 'Weather Whiplash' Mean for California? I'd been looking for an expert perspective on California water policy, so I checked to see if Peter is on Mastodon, found that he is, and added him to my climate list.

California reservoirs have captured enough water from the recent storms to supply all urban water use in the state for a year. Not counting the massive snowpack. Those claims that “all the water is being wasted flowing to the ocean” is nonsense. — Mastodon link

Of course it’s a complex issue, and I don’t think I’m wrong to be very interested in strategies for capturing more rainfall and using it to recharge aquifers. But this was a useful reality check.

Who are the experts you follow in various domains? Can I cherrypick from your lists? And can we imagine a way of remixing and sharing these lists? I’m looking at you, Lucas Gonze, with fond memories of how Webjay did that for MP3 playlists. Can we do that for Mastodon lists too?


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/

Friday, 20. January 2023

@_Nat Zone

The recording of the OpenID Summit Tokyo closing keynote "No ID, No DX", praised as "incredibly moving," is now publicly available

At the OpenID Summit Tokyo three years ago…

The recording of the closing keynote of the OpenID Summit Tokyo three years ago, which was well received as "incredibly moving," has now been published.

Sakimura-san's Closing Keynote is incredibly moving. Love it #openid_tokyo

— きりえ (@__kyrieleison__) January 24, 2020

The talk itself starts at 06:18. It begins with the future and hope, explains the essence of the Industrial Revolution and the background to the rise of the British Empire, and draws from them implications for the Fourth Industrial Revolution; it then covers the emergence of the cyber continent, the "eighth continent," and its partition (#第八大陸分割) among the great powers (#GAFAM) such as the Papal State of Apple, the Kingdom of Facebook, the Republic of Google, and the People's Republic of WeChat; the EU's view of economic growth through #DFFT; the significance of #eID, #trustservices, and #eIDAS; and finally the road to colonization and poverty that follows from taking the easy, non-confrontational path of merely adopting Western tools (#西用), versus a hopeful future achieved through reform (#変法).

The speech is three years old, but I do not think it has aged at all. If anything, now that everyone is talking about decentralization with #web3, and with DFFT on this year's G7 agenda, it is a speech I would very much like people to revisit and reflect on.

Enjoy.

Thursday, 19. January 2023

Jon Udell

Of course the attention economy is threatened by the Fediverse

Megan McArdle says this week, in the Washington Post, that “Twitter might be replaced, but not by Mastodon or other imitators.” I’m not linking to the article, you can easily find it, but that title is all we need for my purpose here, along with this bit of context: she has 93K followers on Twitter. … Continue reading Of course the attention economy is threatened by the Fediverse

Megan McArdle says this week, in the Washington Post, that “Twitter might be replaced, but not by Mastodon or other imitators.” I’m not linking to the article, you can easily find it, but that title is all we need for my purpose here, along with this bit of context: she has 93K followers on Twitter.

Nobody wants to walk away from that kind of audience. Well, almost nobody. Sam Harris’ recent Twitter exit is a rare example of someone concluding that a large follower count is a net negative. If I were in his shoes I’m not sure I’d be able to do the same. When my own audience was at its peak — at BYTE during the dawn of the Internet, then at InfoWorld in the early years of the blogosphere — I could press the Publish button on my blog and watch in realtime as the responses rolled in on waves of dopamine. It’s addictive, there’s never enough, you’re always looking for the next hit.

When Twitter started, that momentum carried forward for a while. I never racked up a huge follower count — it maxed out just shy of 6K — but most of those people followed me early on, thanks to the ad-supported publications that had brought me to their attention. My Twitter following reached a plateau years ago. Did I wish for 100K followers? Sure, I'd be lying to pretend otherwise. But gradually I came to see that there was a sweet spot, somewhere between (let's say) 200 and 15,000 followers, where it was possible to enjoy the kinds of pleasant and stimulating interaction that I'd first experienced in web forums and the blogosphere.

Until it wasn’t. Like a frog in slowly boiling water, I failed to notice how the Twitter experience degraded over time. Fewer and fewer of my 6K followers corresponded regularly, and my social graph there became stagnant. For me the Mastodon reboot has been a delightful replay of the early blogosphere: new acquaintances, collegial discussion, positive energy.

If you occupy a privileged position in the attention economy, as Megan McArdle does now, and as I once did in a more limited way, then no, you won’t see Mastodon as a viable replacement for Twitter. If I were still a quasi-famous columnist I probably wouldn’t either. But I’m no longer employed in the attention economy. I just want to hang out online with people whose words and pictures and ideas intrigue and inspire and delight me, and who might feel similarly about my words and pictures and ideas. There are thousands of such people in the world, not millions. We want to congregate in different online spaces for different reasons. Now we can and I couldn’t be happier. When people say it can’t work, consider why, and who benefits from it not working.

Here’s a graph of the Fediverse as it appears from my perspective right now.

It looks and feels healthy and it’s working just great. I don’t want us to replace Twitter, or imitate it. I want The Internet Transition that I hope is underway.

Wednesday, 18. January 2023

Hyperonomy Digital Identity Lab

Web 7.0 DIDComm-ARM vs. TBD Web5 DWP Stack

Here’s a link to the voice-over/narrative for the following slides: https://youtu.be/1XnPWmpkGro?t=630s Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized … Continu

Here’s a link to the voice-over/narrative for the following slides: https://youtu.be/1XnPWmpkGro?t=630s

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

Tuesday, 17. January 2023

Hyperonomy Digital Identity Lab

DIDComm Message Attachment Types

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public Licensehttps://creativecommons.org/licenses/by-sa/4.0/legalcode Here’s a sampling… NOTE: Verifiable credentials are an obvious DIDComm Message Attachment Type. For this reason, VCs are missing from the following list.

Copyright (c) 2022-2023 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

Here’s a sampling…

NOTE: Verifiable credentials are an obvious DIDComm Message Attachment Type. For this reason, VCs are missing from the following list.


Bill Wendels Real Estate Cafe

MLKing Afterglow: What if we weaponized real estate wealth & buyer agent rebates for Valentine’s Day?

In January 2019, Starbucks posed Martin Luther King’s challenging question in a full page NYTimes ad: “Life’s most persistent and urgent question is, what are… The post MLKing Afterglow: What if we weaponized real estate wealth & buyer agent rebates for Valentine’s Day? first appeared on Real Estate Cafe.

In January 2019, Starbucks posed Martin Luther King’s challenging question in a full page NYTimes ad: “Life’s most persistent and urgent question is, what are…

The post MLKing Afterglow: What if we weaponized real estate wealth & buyer agent rebates for Valentine’s Day? first appeared on Real Estate Cafe.

Monday, 16. January 2023

Jon Udell

Mastodon relationship graphs

The new release of Steampipe is all about relationship graphs. Our blog post shows how these graphs provide contextual awareness for devops and security pros who can now see all the resources related to an EC2 instance, or determine at a glance whether the permissions related to an IAM role are properly scoped. As always, … Continue reading Mastodon relationship graphs

The new release of Steampipe is all about relationship graphs. Our blog post shows how these graphs provide contextual awareness for devops and security pros who can now see all the resources related to an EC2 instance, or determine at a glance whether the permissions related to an IAM role are properly scoped. As always, developers can explore and remix the code that builds these graphs, and adapt the idioms for their own purposes in any data domain.

These relationship graphs are driven by SQL queries that define nodes and edges. Such queries can use any column of any table provided by any Steampipe plugin to form nodes, and then edges between nodes. If you want to see connections among the people and objects represented by diverse APIs, you can now use SQL idioms to graph them. The only limit is your imagination.

Naturally I imagined graphing Mastodon relationships. So far I’ve built two graphs that visualize my home timeline. Here’s the first one.

Here we're looking at the most recent 50 boosts (the Mastodon version of a retweet) in my home timeline. This is the query to find them.

select * from mastodon_toot where timeline = 'home' and reblog_server is not null limit 50

If we focus on Brian Marick we can see that:

Brian belongs to mstdn.social
Brian boosted a post by Tim Bray
Tim belongs to hachyderm.io

So this graph shows people on a selected server boosting people on other servers. In this case mstdn.social is the selected server, but we can refocus the graph on any other server that's sending boosts.

The second graph zooms out to show the web of boost relationships among servers. If anyone on infosec.exchange boosts anyone on mastodon.world, there’s an edge connecting the two nodes. Although it’s not happening anywhere in this graph, the arrow can point both ways and would if anyone on mastodon.world were also boosting anyone on infosec.exchange.

Let’s build up the first graph step by step.

Step 1: Identify the selected server

Here’s the definition of the node that represents the selected server.

node { category = category.selected_server args = [ self.input.server.value ] sql = <<EOQ select server as id, server as title, jsonb_build_object( 'server', server ) as properties from mastodon_boosts() where server = $1 EOQ }

Per the documentation, a node's query must at least select a column aliased as id. Here it's the server column in a row returned by the above query. I've packaged that query into a SQL function, mastodon_boosts, to hide details (timeline = 'home' and reblog_server is not null limit 50) and make it easier to focus on what's special about each node. In this case the special quality is that the server column, which gives the node its identity, matches the selected server.

If the graph block includes only this node, and mstdn.social is the selected server, here is the rendering. Not much to see here yet!

The node defines a bag of properties that can be any of the columns returned by the underlying query; these appear when you hover the node. The node also refers to a category that governs the node’s icon, color, and link. Here’s the category for the selected server.

category "selected_server" { color = "darkgreen" icon = "server" href = "https://{{.properties.'server'}}" }

Step 2: Identify boosted servers

Now we’ll add boosted servers. This node uses the same set of records: the 50 most recent boosts in my feed. Again it finds just those whose server column matches the selected server. But the id is now the reblog_server which is the target, instead of the origin, of boosts from the selected server.

node { category = category.boosted_server args = [ self.input.server.value ] sql = <<EOQ select reblog_server as id, reblog_server as title from mastodon_boosts() where server = $1 EOQ }

Here’s the graph with both selected_server and boosted_server nodes. We’ve used another category to differentiate the boosted nodes.

There’s only one selected server but it can send boosts to more than one boosted server. The default rendering folds them into one node but you can click to unfold and see all of them.

Step 3: Identify people who boost others

Where are the people? Let’s add them next, starting with the people who are sending boosts.

node { category = category.person args = [ self.input.server.value ] sql = <<EOQ select username as id, display_name as title, jsonb_build_object( 'instance_qualified_account_url', instance_qualified_account_url ) as properties from mastodon_boosts() where server = $1 EOQ }

The username column gives the node its identity. Note also the property instance_qualified_account_url. That’s the synthetic column we added to the Mastodon plugin last time to ensure that links to people and toots will work properly in the Mastodon client. Because it’s included in a property here, and because category.person refers to that property, links representing people in the graph will resolve properly.

Step 4: Identify people who are boosted

This node takes its identity from the reblog_username column, and uses the synthetic column instance_qualified_reblog_url to provide the link.

node { category = category.boosted_person args = [ self.input.server.value ] sql = <<EOQ select reblog_username as id, reblog_username as title, jsonb_build_object( 'instance_qualified_reblog_url', instance_qualified_reblog_url ) as properties from mastodon_boosts() where server = $1 EOQ }

Step 5: Connect boosters on the selected server to that server

So far we’ve seen only nodes, whose queries minimally return the id property. An edge connects nodes by way of a query that minimally returns columns aliased to from_id and to_id.

edge { sql = <<EOQ select username as from_id, server as to_id, 'belongs to' as title from mastodon_boosts() EOQ }

You’ll also want to provide a title to label the edge. Here this edge occurs twice to represent “John Mashey belongs to mstdn.social” and “Brian Marick belongs to mstdn.social.”

Step 6: Connect people on boosted servers to their servers

This edge works the same way, but captures the relationship between boosted people and their servers.

edge { args = [ self.input.server.value ] sql = <<EOQ select reblog_username as from_id, reblog_server as to_id, 'belongs to' as title from mastodon_boosts() where server = $1 EOQ }

Step 7: Connect boosters to the people they boost

Finally we add an edge to connect boosters to the people they boost.

edge { category = category.boost args = [ self.input.server.value ] sql = <<EOQ select username as from_id, reblog_username as to_id, 'boosts' as title, jsonb_build_object( 'reblog_username', reblog_username, 'reblog_server', reblog_server, 'content', reblog ->> 'content' ) as properties from mastodon_boosts() where server = $1 EOQ }

And now we’ve completed the first graph shown above.

Graphing GitHub relationships

You can use this grammar of nodes and edges to describe relationships in any domain. Here’s a graph that looks across all the Steampipe-related repos and shows recently-updated PRs from external contributors.

And here’s one that uses any Steampipe plugin to show recently-updated pull requests for a selected repo.

These two views share a common SQL query and serve complementary purposes. The table is handy for sorting by date or author, the graph highlights one-to-many relationships.

Lifting the burden of context assembly

In What TimeDance got right I mourned the passing of a tool for scheduling meetings that had excelled at bringing together the messages and documents related to a meeting. I called this “context assembly” — a term I’d picked up from Jack Ozzie, cofounder of Groove, another collaboration tool whose passing I mourn. Context assembly is hard work. Too often the burden falls on people who only need to use that context and would rather not spend time and effort creating it.

We’ve seen how SQL can unify access to APIs. Now it can also help us see relationships among the data we extract from those APIs.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/


Damien Bod

Implementing secure Microsoft Graph application clients in ASP.NET Core

The article looks at the different way a Microsoft Graph application client can be implemented and secured in an ASP.NET Core application or a .NET application. This type of client is intended for applications or application logic where no user is involved. Code: https://github.com/damienbod/MicrosoftGraphAppToAppSecurity Accessing Microsoft Graph can be initialized for app-to-app (application perm

The article looks at the different ways a Microsoft Graph application client can be implemented and secured in an ASP.NET Core application or a .NET application. This type of client is intended for applications or application logic where no user is involved.

Code: https://github.com/damienbod/MicrosoftGraphAppToAppSecurity

Access to Microsoft Graph can be initialized for app-to-app (application permissions) security in three different ways. These flows can only be used in a trusted host. The different implementation types are as follows:

Using Managed Identities
Using Azure SDK and Graph SDK directly with client credentials
Using Microsoft.Identity.Client and MSAL to acquire an access token which can be used directly against Microsoft Graph or using GraphServiceClient with the DelegateAuthenticationProvider class

Using Managed Identities

Using managed identities for the Azure deployments is the most secure of the three ways to implement this client. This is because no secrets or certificates are shared, so they cannot be abused, and there is no need for secret rotation.

Setup

We use a web application deployed to an Azure App Service to set up the security. A managed identity is created for this Azure resource. If the Azure App Service is deleted, so are the managed identity and the assigned Graph roles. Only this Azure resource can use the managed identity.

Once the Azure resource is created, the Graph App roles can be assigned to the managed identity.

Powershell scripting

I created the PowerShell script based on a blog post from Microsoft. The script finds the managed identity and assigns the User.Read.All application permission to it.

$TenantID = "<your-tenant-id>" $DisplayNameServicePrincpal ="<your-azure-app-registration-or-other-azure-resource>" $GraphAppId = "00000003-0000-0000-c000-000000000000" $PermissionName = "User.Read.All" Connect-AzureAD -TenantId $TenantID $sp = (Get-AzureADServicePrincipal -Filter "displayName eq '$DisplayNameServicePrincpal'") Write-Host $sp $GraphServicePrincipal = Get-AzureADServicePrincipal -Filter "appId eq '$GraphAppId'" $AppRole = $GraphServicePrincipal.AppRoles | Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"} New-AzureAdServiceAppRoleAssignment -ObjectId $sp.ObjectId -PrincipalId $sp.ObjectId -ResourceId $GraphServicePrincipal.ObjectId -Id $AppRole.Id

This can be checked in the Azure portal using the Enterprise applications blade and filtering for managed identities.

The assigned permissions contain the Graph User.Read.All application permission.

Implementing the client

The client is implemented using Azure.Identity and the Graph SDK. There are two setups: one for production and all other Azure deployments, and one for development. The managed identity is used everywhere except the dev deployments, and only it can be used. Local development uses an Azure App registration with the client credentials flow. The GetGraphClientWithManagedIdentityOrDevClient method returns the Graph SDK GraphServiceClient set up for the correct deployment, and the correct ChainedTokenCredential is used to secure the client. It is important that only the correct managed identity for the exact resource can be used in production. No secrets or certificates are required for this solution; the managed identity and Azure take care of this. The GraphServiceClient is application-wide and handles the HttpClient creation, so the service is registered as a singleton.

using Azure.Identity; using Microsoft.Graph; namespace GraphManagedIdentity; public class GraphApplicationClientService { private readonly IConfiguration _configuration; private readonly IHostEnvironment _environment; private GraphServiceClient? _graphServiceClient; public GraphApplicationClientService(IConfiguration configuration, IHostEnvironment environment) { _configuration = configuration; _environment = environment; } /// <summary> /// gets a singleton instance of the GraphServiceClient /// </summary> /// <returns></returns> public GraphServiceClient GetGraphClientWithManagedIdentityOrDevClient() { if (_graphServiceClient != null) return _graphServiceClient; string[] scopes = new[] { "https://graph.microsoft.com/.default" }; var chainedTokenCredential = GetChainedTokenCredentials(); _graphServiceClient = new GraphServiceClient(chainedTokenCredential, scopes); return _graphServiceClient; } private ChainedTokenCredential GetChainedTokenCredentials() { if (!_environment.IsDevelopment()) { return new ChainedTokenCredential(new ManagedIdentityCredential()); } else // dev env { var tenantId = _configuration["AzureAd:TenantId"]; var clientId = _configuration.GetValue<string>("AzureAd:ClientId"); var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret"); var options = new TokenCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud }; // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential var devClientSecretCredential = new ClientSecretCredential( tenantId, clientId, clientSecret, options); var chainedTokenCredential = new ChainedTokenCredential(devClientSecretCredential); return chainedTokenCredential; } } }

The service is added to the IoC container and can be used anywhere in the application. Once deployed, the managed identity is used; otherwise the dev setup runs.

builder.Services.AddSingleton<GraphApplicationClientService>(); builder.Services.AddScoped<AadGraphSdkApplicationClient>();

I then use it in a service:

using Azure.Identity; using Azure.Security.KeyVault.Secrets; using Microsoft.Graph; using System.Security.Cryptography.X509Certificates; namespace GraphClientCrendentials; public class AadGraphSdkApplicationClient { private readonly IConfiguration _configuration; private readonly GraphApplicationClientService _graphService; public AadGraphSdkApplicationClient(IConfiguration configuration, GraphApplicationClientService graphService) { _configuration = configuration; _graphService = graphService; } public async Task<int> GetUsersAsync() { var graphServiceClient = _graphService.GetGraphClientWithManagedIdentityOrDevClient(); IGraphServiceUsersCollectionPage users = await graphServiceClient.Users .Request() .GetAsync(); return users.Count; } }

Dev setup

An Azure App registration is used to implement the OAuth client credentials flow with the Graph SDK client in development. The Graph application permission is added to the single-tenant Azure App registration, and an enterprise application is created from it.

The ChainedTokenCredential uses the app settings and the user secrets to configure the client. The client uses the OAuth client credentials flow to acquire an access token. I normally use secrets for development for simplicity, but if more security is required, a certificate can be used, and the secret or certificate can be read directly from an Azure Key Vault.

"AzureAd": { "TenantId": "7ff95b15-dc21-4ba6-bc92-824856578fc1", "ClientId": "3606b25d-f670-4bab-ab70-437460143d89" //"ClientSecret": "add secret to the user secrets" //"CertificateName": "[Or instead of client secret: Enter here the name of a certificate (from the user cert store) as registered with your application]", //"Certificate": { // "SourceType": "KeyVault", // "KeyVaultUrl": "<VaultUri>", // "KeyVaultCertificateName": "<CertificateName>" //} },

Using Azure SDK and Graph SDK directly

A Microsoft Graph client can be set up to use the client credentials flow to initialize the Graph SDK GraphServiceClient. This is a good way of implementing the OAuth client credentials flow if the client is used outside the Azure tenant. It is recommended to use a certificate, and this is normally stored in an Azure Key Vault. This uses the OAuth client credentials flow with client assertions to acquire a new access token.

The flow can be setup to use a secret:

private GraphServiceClient GetGraphClientWithClientSecretCredential() { string[] scopes = new[] { "https://graph.microsoft.com/.default" }; var tenantId = _configuration["AzureAd:TenantId"]; // Values from app registration var clientId = _configuration.GetValue<string>("AzureAd:ClientId"); var clientSecret = _configuration.GetValue<string>("AzureAd:ClientSecret"); var options = new TokenCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud }; // https://docs.microsoft.com/dotnet/api/azure.identity.clientsecretcredential var clientSecretCredential = new ClientSecretCredential( tenantId, clientId, clientSecret, options); return new GraphServiceClient(clientSecretCredential, scopes); }

Or setup to use a certificate:

private async Task<GraphServiceClient> GetGraphClientWithClientCertificateCredentialAsync() { string[] scopes = new[] { "https://graph.microsoft.com/.default" }; var tenantId = _configuration["AzureAd:TenantId"]; var options = new TokenCredentialOptions { AuthorityHost = AzureAuthorityHosts.AzurePublicCloud }; // Values from app registration var clientId = _configuration.GetValue<string>("AzureAd:ClientId"); var certififacte = await GetCertificateAsync(); var clientCertificateCredential = new ClientCertificateCredential( tenantId, clientId, certififacte, options); // var clientCertificatePath = _configuration.GetValue<string>("AzureAd:CertificateName"); // https://learn.microsoft.com/en-us/dotnet/api/azure.identity.clientcertificatecredential?view=azure-dotnet // var clientCertificateCredential = new ClientCertificateCredential( // tenantId, clientId, clientCertificatePath, options); return new GraphServiceClient(clientCertificateCredential, scopes); } private async Task<X509Certificate2> GetCertificateAsync() { var identifier = _configuration["AzureAd:ClientCertificates:0:KeyVaultCertificateName"]; if (identifier == null) throw new ArgumentNullException(nameof(identifier)); var vaultBaseUrl = _configuration["AzureAd:ClientCertificates:0:KeyVaultUrl"]; if(vaultBaseUrl == null) throw new ArgumentNullException(nameof(vaultBaseUrl)); var secretClient = new SecretClient(vaultUri: new Uri(vaultBaseUrl), credential: new DefaultAzureCredential()); // Create a new secret using the secret client. var secretName = identifier; //var secretVersion = ""; KeyVaultSecret secret = await secretClient.GetSecretAsync(secretName); var privateKeyBytes = Convert.FromBase64String(secret.Value); var certificateWithPrivateKey = new X509Certificate2(privateKeyBytes, string.Empty, X509KeyStorageFlags.MachineKeySet); return certificateWithPrivateKey; }

I usually use a secret for development and a certificate for production.

Using Microsoft.Identity.Client and MSAL

A third way of implementing the Graph client is to use Microsoft.Identity.Client or Microsoft.Identity.Web. This uses the ConfidentialClientApplicationBuilder to create a new IConfidentialClientApplication instance and can use a secret or a certificate to acquire the access token.

Microsoft.Identity.Client with a secret:

var app = ConfidentialClientApplicationBuilder.Create(config.ClientId) .WithClientSecret(config.ClientSecret) .WithAuthority(new Uri(config.Authority)) .Build(); app.AddInMemoryTokenCache();

or with a certificate and client assertions:

var app = ConfidentialClientApplicationBuilder.Create(config.ClientId) .WithCertificate(certificate) .WithAuthority(new Uri(config.Authority)) .Build(); app.AddInMemoryTokenCache();
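
As mentioned above, the access token acquired with MSAL can also be used directly against Microsoft Graph without the Graph SDK. A minimal sketch, reusing the confidential client app built above and assuming an application permission such as User.Read.All has been granted:

// Minimal sketch: acquire an app-only token with MSAL and call Microsoft Graph directly over HTTP.
// Reuses the 'app' confidential client created above; the User.Read.All permission is an assumption.
string[] scopes = new[] { "https://graph.microsoft.com/.default" };
AuthenticationResult result = await app.AcquireTokenForClient(scopes).ExecuteAsync();

using var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", result.AccessToken);

var response = await httpClient.GetAsync("https://graph.microsoft.com/v1.0/users");
response.EnsureSuccessStatusCode();
var usersJson = await response.Content.ReadAsStringAsync();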

The GraphServiceClient can be created using the DelegateAuthenticationProvider. As I understand it, you should avoid using the DelegateAuthenticationProvider if possible.

GraphServiceClient graphServiceClient = new GraphServiceClient("https://graph.microsoft.com/V1.0/", new DelegateAuthenticationProvider(async (requestMessage) => { // Retrieve an access token for Microsoft Graph (gets a fresh token if needed). AuthenticationResult result = await app.AcquireTokenForClient(scopes) .ExecuteAsync(); // Add the access token in the Authorization header of the API request. requestMessage.Headers.Authorization = new AuthenticationHeaderValue("Bearer", result.AccessToken); })); }

Notes

There are three different ways of creating Microsoft Graph application clients, and it is sometimes hard to know which one to use. None of this applies to delegated clients: in an ASP.NET Core application you would use Microsoft.Identity.Web for a delegated client, which then uses Microsoft Graph on behalf of the user. System-assigned managed identities do not require managing secrets or certificates but can only be used in the same tenant. The client credentials flow can be used from anywhere, and Microsoft recommends using certificates with it.

Links

https://learn.microsoft.com/en-us/azure/app-service/scenario-secure-app-access-microsoft-graph-as-app?tabs=azure-powershell

https://learn.microsoft.com/en-us/azure/active-directory/develop/sample-v2-code#service–daemon

https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph

https://oceanleaf.ch/azure-managed-identity/

https://learningbydoing.cloud/blog/stop-using-client-secrets-start-using-managed-identities/

https://github.com/Azure/azure-sdk-for-net

https://learn.microsoft.com/en-us/dotnet/api/azure.identity.environmentcredential?view=azure-dotnet

https://learn.microsoft.com/en-us/graph/sdks/choose-authentication-providers?tabs=CS

Saturday, 14. January 2023

Hyperonomy Digital Identity Lab

One’s Journey to the Grave – Hunter S. Thompson

“Life should not be a journey to the grave with the intention of arriving safely in a pretty and well preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and … Continue reading →

“Life should not be a journey to the grave with the intention of arriving safely in a pretty and well preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming “Wow! What a Ride!”

― Hunter S. Thompson, The Proud Highway: Saga of a Desperate Southern Gentleman, 1955-1967

Friday, 13. January 2023

Phil Windleys Technometria

SSI Doesn't Mean Accounts Are Going Away

SSI wallets will embody people in the digital world and give them a place to stand as a peer to others online. But online services will still have accounts.

I saw a tweet that said (paraphrasing): "In the future people won't have accounts. The person (and their wallet) will be the account." While I appreciate the sentiment, I think reality is much more nuanced than that because identity management is about relationships, not identities (whatever those are).

Supporting a relationship requires that we recognize, remember, and react to another party (person, business, or thing). In self-sovereign identity (SSI), the tools that support that are wallets and agents. For people, these will be personal. For a business or other organization they'll be enterprise wallets and agents. The primary difference between these is that enterprise wallets and agents will be integrated with the other systems that the business uses to support the relationships they have at scale.


Remembering and reacting to another entity requires that you keep information about them for the length of the relationship. Some relationships, like the one I form with the convenience store clerk when I buy a candy bar, are ephemeral, lasting only for the length of the transaction. I don't remember much while it's happening and forget it as soon as it's done. Others are long-lasting and I remember a great deal in order for the relationship to have utility.

So, let's say that we're living in the future where SSI is ubiquitous and I have a DID-based relationship with Netflix. I have a wallet full of credentials. In order for my relationship to have utility, they will have to remember a lot about me, like what I've watched, what devices I used, and so on. They will likely still need to store a form of payment since it's a subscription. I call that an account. And for the service Netflix provides, it's likely not optional.

Let's consider a different use case: ecommerce. I go to a site, select what I want to buy, supply information about shipping and payment, and submit the order. I can still create a DID-based relationship, but the information needed from me beyond what I want to buy can all come from my credentials. And it's easy enough to provide that I don't mind supplying it every time. The ecommerce site doesn't need to store any of it. They may still offer to let me create an account, but it's optional. No more required than the loyalty program my local supermarket offers. The relationship I create to make the purchase can be ephemeral if that's what I want.

What will definitely go away is the use of accounts for social login. In social login, large identity providers have accounts that are then used by relying parties to authenticate people. Note that authentication is about recognizing. SSI wallets do away with that need by providing the means for different parties to easily create relationships directly and then use verifiable credentials to know things about the other with certainty. Both parties can mutually authenticate the other. But even here, social login is usually a secondary purpose for the account. I have an account with Google. Even if I never use it for logging in anywhere but Google, I'll still have an account for the primary reasons I use Google.

Another thing that goes away is logging in to your account. You'll still be authenticated, but that will fade into the background as the processes we use for recognizing people (FIDO and SSI) become less intrusive and fade into the background. We have a feel for this now with apps on our smartphones. We rarely authenticate because the app does that and then relies on the smartphone to protect the app from use by unauthorized people. FIDO and SSI let us provide similar experiences on the web as well. Because we won't be logging into them, the idea of accounts will fade from people's consciousness even if they still exist.

I don't think accounts are going away anytime soon simply because they are a necessary part of the relationship I have with many businesses. I want them to remember me and react to me in the context of the interactions we've had in the past. SSI offers new ways of supporting relationships, especially ephemeral ones, that means companies need to store less. But for long-term relationships, your wallet can't be the account. The other party needs their own means of remembering you and they will do that using tools that look just like an account.

Photo Credit: Creditor's Ledger, Holmes McDougall from Edinburgh City of Print (CC BY 2.0)



@_Nat Zone

Monday, January 16, from 8 p.m.: a live-streamed, leisurely read-through of the draft "Report of the Expert Panel on the Use of Camera Images for Crime Prevention and Safety"

On the draft report on the use of camera images for crime prevention and…

The call for comments on the draft "Report of the Expert Panel on the Use of Camera Images for Crime Prevention and Safety" opened on January 12, 2023. The comment period runs until February 12. At more than 70 pages, I expect it to be quite substantial.

It is definitely something that should be read, but doing it alone I would probably keep putting it off, so I plan to do a live reading from 8 p.m. on January 16. I will pick up comments from the chat, and if you would like to join with audio or video, please contact me (limited to people I know directly), especially if you know this area well and are willing to teach us. For what it's worth, this is outside my own specialty, so I will be approaching it as just another learner.

Also, I have not been feeling well lately, so apologies in advance if I have to call it off. If that happens, I will post updates on my Twitter account (https://twitter.com/_nat).

Following this, other things I am thinking of covering include the NIST SP800-63-4 PD, the BGIN SBT Study Report, and the U.S. Personal Financial Data Rights. The last one is almost out of time (deadline January 25). That is partly why I am using this method of publicly declaring that I will do it, in order to spur myself on.

Well, let's see if I can manage it…

Thursday, 12. January 2023

Doc Searls Weblog

FM Stations Down on Gibraltar Peak

[Update: 11:20 AM Wednesday 18 January] Well, I woke this morning to hear all the signals from Gibraltar Peak back on the air. I don’t know if the site is on generator power, or if electric power has been restored. This pop-out from a map symbol on Southern California Edison’s Power Outage Awareness Map suggests the […]

[Update: 11:20 AM Wednesday 18 January] Well, I woke this morning to hear all the signals from Gibraltar Peak back on the air. I don’t know if the site is on generator power, or if electric power has been restored. This pop-out from a map symbol on Southern California Edison’s Power Outage Awareness Map suggests the latter:

However, I am listening right now to KZSB/1290 AM’s FM signal on 96.9 from Gibraltar Peak, where the show hosts are detailing many road closures, noting that sections of Gibraltar road are “down the hill,” meaning not there anymore, and unlikely to be fixed soon. I think I also heard them say their FM transmitter is on generator power. Far as I know, they are the only station covering local road closures, buildings damaged, farms and orchards damaged, and related topics, in great detail. It’s old-fashioned local radio at its best. Hats off.

Looking at the power requirements up there, only two stations are high-power ones: KDB/93.7’s transmitter pumps 4.9kW into a stack of five antenna bays that yield an ERP (effective radiated power) of 12.5kW, and KDRW(KCRW)/88.7 uses about 5.9kW to produce 12kW ERP through a stack of four antenna bays. Those are on the poles at the right and left ends of this photo, which I shot after the Jesusita Fire in 2009:

All the other stations’ transmitters require less wattage than a microwave oven. Three only put out ten watts. So, given typical modern transmitter efficiencies, I’m guessing the site probably has a 20kW generator, give or take, requiring about 2.5 gallons of propane per hour. So a 500-gallon propane tank (a typical size) will last about 200 hours. Of course, none of that will matter until the next outage, provided electrical service is actually restored now, or soon.

[Update: 3:34 PM Monday 16 January] Two news stories:

Edhat: Gibraltar Road Damage, by Edhat staff, January 11, 2023 12:30 PM. It's a collection of revealing Gibraltar Road photos that I wish I had seen earlier. Apologies for that. This is the text of the whole story: "A resident of Gibraltar Road shared the below photos from the recent storm damage. A section of the road appears to be washed out with a Tesla trapped under some debris. The Tesla slide is located approximately a quarter mile past the Rattlesnake Canyon trailhead and the washed road is about a mile past the radio tower before reaching the west fork trailhead." If "mile past" means going uphill toward East Camino Cielo on the ridge, that means travel was (and is) impeded (at the very least) in both directions from the transmitter sites. The photos are dramatic. Please check them out.
Noozhawk: Several Radio Stations Still Off the Air After Storm Knocks Out Power to Gibraltar Transmitter Site, by Managing Editor Giana Magnoli, January 16, 2023 | 1:47 pm

From the Noozhawk story:

"… they've helicoptered up a new battery and 600 gallons of diesel fuel to the site's backup generator, but they haven't been able to get it to work." I believe this is for lack of the expected banjo valve. (See below.)
"Southern California Edison, which supplies power to the transmission towers site, first reported an outage for the Gibraltar Road area at 2:34 a.m. Jan. 9, the day of the big storm." That was Monday. At least some stations would have switched over to generator power then.
"Repair crews haven't been sent to the site yet, according to the SCE Outage Map, but Franklin said he heard there could be new poles installed this week." That's John Franklin, who runs the whole Gibraltar Peak site.
"KCLU (102.3 FM) went off the air on Wednesday and was still off as of Monday. KJEE (92.9 FM) went down for several days but came back on the air on Thursday." Note: it's not on now—at least not on the radios I'm using.
"Santa Barbara County spokeswoman Kelsey Gerckens Buttitta said there are cell and radio station towers off Gibraltar Road that requires fuel to operate, and Gibraltar Road and East Camino Cielo Road are closed because of slides, debris and slipouts." Fixing those roads will be very difficult and time-consuming.

The story also lists signals I reported off as of last night. One correction to that: K250BS/97.9, which relays KTMS/990, is on the air. This I presume is because it’s at the KTMS/KTYD site. All the signals from that site (which is up the road from Gibraltar Peak) are still up. I assume that’s either because they are fed electric power separately from Gibraltar Peak, or because they are running on generator power.

[Update: 11:40 AM Monday 16 January] In a private group discussion with broadcast engineers, I am gathering that a stretch of Gibraltar Road close to the Gibraltar Peak site has collapsed. The location is 34°28’05.2″N 119°40’21″W, not far from the road into the transmitter site. This is not the section marked closed by Santa Barbara County on its map here. It is also not an easy fix, because it appears from one photograph I’ve seen (shared on a private group) that the land under the road slid away. It is also not the section where power lines to the site were knocked out. So we’re looking at three separate challenges here:

Restoring electrical service to Gibraltar Peak, and other places served by the same now-broken lines
Repairing Gibraltar Road in at least two places (the one marked on the county map and the one above)
Getting generators fueled and fixed.

On that last issue, I’m told that the site with most of the transmitters can be powered by a generator that awaits what is called a banjo valve. The KDB facility requires propane, and stayed up longer than the others on the peak while its own supply held up.

Gibraltar Peak isn’t the highest landform overlooking Santa Barbara. At 2180 feet, it’s about halfway up the south flank of the Santa Ynez Mountains. But it does provide an excellent vantage for FM stations that want the least obstructed view of the market’s population. That’s why more local signals come from here than from any other site in the region.

Except for now: a time that began with the storm last Tuesday. That’s when power lines feeding the peak were broken by falling rocks that also closed Gibraltar road. Here is a list of signals that have been knocked off the air (and are still off, as of the latest edit, on Sunday, January 15 at 11:15PM):

88.7 KDRW, which has a studio in Santa Barbara, but mostly relays KCRW from Santa Monica
89.5 KSBX, which relays KCBX from San Luis Obispo*
89.9 K210AD, which relays KPCC from Pasadena by way of KJAI from Ojai
90.3 KMRO-FM2, a booster for KMRO in Camarillo
91.5 K218CP, which relays KAWZ from Twin Falls, Idaho
93.7 KDB, which relays KUSC from Los Angeles (down after running on generator power for 5 days)
96.9 K245DD, which relays KZSB/1290 AM in Santa Barbara
97.9 K250BS, which relays KTMS/990 AM in Santa Barbara (and is on a KTMS tower, farther up the slope)
98.7 K254AH, which relays KPFK from Los Angeles
102.3 K272DT, the FM side of KCLU/1340 in Santa Barbara and KCLU/88.3 in Thousand Oaks

KTMS/990AM, KTYD/99.9FM, and K231CR/94.1, which relays KOSJ/1490AM, are still on the air as of Sunday night at 11:15pm. Those are a short distance farther up Gibraltar Road. (In the other box in the photo above.)

Here is a guide to substitute signals for some of the stations:

KCRW/KDRW can be heard on KCRU/89.1 from Oxnard (actually, Laguna Peak, in Pt. Mugu State Park)
KDB can be heard on KDSC/91.1 from Thousand Oaks (actually off Sulphur Mountain Road, south of Ojai)
KCLU can be heard on 1340 AM from Santa Barbara and 88.3 FM from Thousand Oaks
KPCC can be heard on KJAI/89.5 from Ojai (also transmitting from Sulphur Mountain Road)
KSBX/KCBX can be heard on 90.9 from Solvang (actually Broadcast Peak)
KPFK can be heard on its home signal (biggest in the U.S.) from Mount Wilson in Los Angeles at 90.7
KZSB can be heard on 1290 AM from Santa Barbara
KMRO can still be heard on its Camarillo main transmitter on 90.3

The two AM signals (marked green in the top list above) are strong in town and most of the FMs are weak but listenable here and there. And all of them can be heard through their live streams online.

Published stories so far, other than this one:

Gibraltar Peak Towers Out of Commission, in the Santa Barbara Independent. That story says two power poles were knocked out, as reported by John Franklin, who operates the whole transmitter site. (The story also kindly gives me credit for the photo there.)
KCLU's 102.3 FM signal in Santa Barbara is intermittently off the air, on the KCLU website. The image it shows is not of KCLU's antenna on Gibraltar Peak. Mouse over this photo to see which of the many antennas on Gibraltar Peak radiate which signals (to the best of my knowledge, anyway).

The Independent says the site is a “relay” one. That’s correct in the sense that most of the stations there are satellites of bigger stations elsewhere. But KCLU is local to Santa Barbara (its anchor AM station is here), and the ratings reflect it. I wrote about those ratings a few years ago, in Where Public Radio Rocks. In that post, I noted that public radio is bigger in Santa Barbara than anywhere else in the country.

The most recent ratings (Spring of 2022), in % shares of total listening, are these:

KDB/93.7, classical music, relaying KUSC/91.1 from Los Angeles: 7.9%
KCLU/102.3 and 1340 in Santa Barbara (studios in Thousand Oaks), public broadcasting: 7.3%
KDRW/88.7 in Santa Barbara (main studio in Santa Monica, as KCRW/89.9): 4.6%
KPCC/89.9, relaying KJAI/89.5 and KPCC/89.3 in Pasadena: 1.3%
KSBX/89.5, relaying KCBX/90.1 from San Luis Obispo: 0.7%

Total: 21.8%.

That means more than a fifth of all radio listening in Santa Barbara is to noncommercial and public radio.

And, of all those stations, only KDB/KUSC and KCLU-AM are on the air right now.

By the way, when I check to see how public broadcasting is doing in other markets, nothing is close. Santa Barbara still kicks ass. I think that’s an interesting story, and I haven’t seen anyone report on it, other than here.

*Turns out KSBX is off the air permanently, after losing a coverage battle with KPBS/89.5 in San Diego. On December 29, they published a story in print and sound titled Why is 89.5 KSBX off the air? The answer is in the atmosphere. They blame tropospheric ducting, which much of the time makes KPBS come in like a local signal. Also, even though KPBS’s transmitter on Soledad Mountain (really more of a hill) above the coast at La Jolla is more than 200 miles away, it does pump out 26,000 watts, while KCBX puts out only 50 watts—and less in some directions. Though the story doesn’t mention it, KJAI, the KPCC relay on 89.5 for Ojai, is audible in Santa Barbara if nothing else is there. So that also didn’t help. By the way, I’m almost certain that the antenna identified as KSBX’s in the story’s photo (which is also one of mine) is actually for KMRO-2. KSBX’s is the one on the left in this photo here.

Thursday, 12. January 2023

Identity Woman

IAPP Event: An Intro for Data Privacy Pros to Self-Sovereign Identity

An event hosted by the International Association of Privacy Professionals (IAPP) asked me to give a talk on the subject of self-sovereign identity and provide a foundational overview for privacy professionals. The following are some of the primary issues discussed throughout the event: The Panel was put together by Katharina Koerner, the Principal Technology Researcher at […] The post IAPP

An event hosted by the International Association of Privacy Professionals (IAPP) asked me to give a talk on the subject of self-sovereign identity and provide a foundational overview for privacy professionals. The following are some of the primary issues discussed throughout the event: The Panel was put together by Katharina Koerner, the Principal Technology Researcher at […]

The post IAPP Event: An Intro for Data Privacy Pros to Self-Sovereign Identity appeared first on Identity Woman.

Thursday, 12. January 2023

Hyperonomy Digital Identity Lab

Hyperledger Identity Implementers WG – January 12, 2023

Mission Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials. Take what you need; leave the rest. Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity … Continue reading →

Mission

Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.

Take what you need; leave the rest.

Michael Herman, Trusted Digital Web Project, Hyperonomy Digital Identity Lab, Parallelspace Corporation. January 2023.

Other Web 7.0 Resources

Also check out Welcome to Web 7.0 (including a link to the whitepaper): https://hyperonomy.com/2022/12/12/welcome-to-web-7-0/

Today’s Presentation

Click here to download the presentation:

didcomm-arm-3-0-48-20min (Download)

Wednesday, 11. January 2023

Doc Searls Weblog

Heavy Weather

Most of California has just two seasons: rain and fire. Rain is another name for Winter, and it peaks in January. In most years, January in California isn’t any more wet than, say, New York, Miami or Chicago. But every few years California gets monsoons. Big ones. This is one of those years. The eighteen gallon storage […]

Most of California has just two seasons: rain and fire. Rain is another name for Winter, and it peaks in January. In most years, January in California isn’t any more wet than, say, New York, Miami or Chicago. But every few years California gets monsoons. Big ones. This is one of those years.

The eighteen gallon storage tub in our yard is sixteen inches deep and serves as a rain gauge:

Yesterday morning it was less than half full. While it gathered rain, our devices blasted out alerts with instructions like this:

So we stayed home and watched the Web tell us how the drought was ending:

Wasn’t long ago that Lake Cachuma was at 7%.

So that’s good news. The bad news is about floods, ruined piers and wharfs, downed trees, power outages, and levee breaches. The usual.

It should help to remember that the geology on both coasts is temporary and improvisational. The East Coast south of New England and Long Island (where coastal landforms were mostly dumped there or scraped bare by glaciers in the geologic yesterday) is a stretch of barrier islands that are essentially dunes shifted by storms. Same goes for the Gulf Coast. The West Coast looks more solid, with hills and mountains directly facing the sea. But Pacific storms in Winter routinely feature waves high as houses, pounding against the shores and sea cliffs.

Looking up the coast from Tijuana, within a few hundred years Coronado and Point Loma in San Diego, La Jolla, all the clifftop towns up the coast to Dana Point and Laguna, Palos Verdes Peninsula, Malibu and Point Dume, Carpinteria, the Santa Barbara Mesa and Hope Ranch, all of Isla Vista and UCSB, Pismo and Avila Beaches, all of Big Sur and the Pacific Coast Highway there, Carmel and the Monterey Peninsula, Aptos, Capitola and Santa Cruz, Davenport, Half Moon Bay, Pacifica, the headlands of San Francisco, Muir and Stinson Beaches and Bolinas in Marin, Fort Bragg in Mendocino County, and Crescent City in Humboldt—all in California—will be eaten away partially or entirely by weather and waves. Earthquakes will also weigh in.

The photo up top is of La Conchita, a stupidly located town on the South Coast, west of Ventura, four days after a landslide in 2005 took out 13 homes and killed 10 people. All the land above town is a pile of former and future landslides, sure to slide again when the ground is saturated with water. Such as now or soon.

So that’s a long view. For one that spans the next week, visit windy.com and slide the elevation up to FL (flight level) 340 (34000 feet):

That yellow river of wind is a jet stream hauling serious ass straight across the Pacific and into California. Jet streams are why the headwinds and tailwinds you see on seat-back displays showing flight progress on planes often say 100mph or more. Look at Windy before you fly coast to coast or overseas, and you can guess what the flight path will be. You can also see why it may take as little as five hours to get from Dulles to Heathrow, or more than seven hours to come back by a route that touches the Arctic Circle. Your plane is riding, fighting or circumventing high altitude winds that have huge influences on the weather below.

To see how, drop Windy down to the surface:

Those eddies alongside the jet stream are low pressure centers full of the moisture and wind we call storms. They spin along the sides of the jet stream the way dust devils twist up along the sides of highways full of passing trucks. Those two storm centers are spinning toward California and will bring more wind and rain.

Besides the sure damage those will bring, there will be two benefits. One is that California will be as green as Ireland for a few months. The other is that wildflowers will bloom all over the place.

The Death Valley folks are hedging their bet, but I’d put money on a nice bloom this Spring. Watch for it.

Bonus link: There’s An Underground City Beneath Sacramento In Northern California That Most People Don’t Know About. Excerpt: “…Old Sacramento was built up during the time of the gold rush, but the frequent flooding of this area obliterated its first level time and time again, until finally, the city abandoned that level altogether. It’s both fascinating and creepy to tour the abandoned level…”

Monday, 09. January 2023

Hyperonomy Digital Identity Lab

Unification

Suppose you have 3 football teams: What does it mean to unify these 3 organizations? Is it possible? What would it take to unify these 3 organizations? Let’s start with a common cause/purpose/vision/mission. Without these, the goal of unification is … Continue reading →

Suppose you have 3 football teams:

Buffalo Bills NFL football team
Manchester United UK football team
Local high school football team

What does it mean to unify these 3 organizations? Is it possible? What would it take to unify these 3 organizations?

Let’s start with a common cause/purpose/vision/mission. Without these, the goal of unification is impossible to attain.


Phil Windleys Technometria

Defining Digital Identity

Summary: The following is an excerpt from my upcoming book, Learning Digital Identity, which will be available January 2023. The family therapist Salvador Minuchin declared, "The human experience of identity has two elements: a sense of belonging and a sense of being separate." This is as good a description of digital identity as it is of our psychological identity. A digital identity contains data

Summary: The following is an excerpt from my upcoming book, Learning Digital Identity, which will be available January 2023.

The family therapist Salvador Minuchin declared, "The human experience of identity has two elements: a sense of belonging and a sense of being separate." This is as good a description of digital identity as it is of our psychological identity. A digital identity contains data that uniquely describes a person or thing but also contains information about the subject's relationships to other entities.

To see an example of this, consider the data record that represents your car, stored somewhere in your state or country's computers. This record, commonly called a title, contains a vehicle identification number (VIN) that uniquely identifies the car to which it belongs. In addition, it contains other attributes of the car such as year, make, model, and color. The title also contains relationships: most notably, the title relates the vehicle to a person who owns it. In many places, the title is also a historical document, because it identifies every owner of the car from the time it was made, as well as whether it's been in a flood or otherwise salvaged.
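A rough sketch of that attributes-plus-relationships shape, written as a data structure (the field names here are illustrative, not drawn from any real titling system):

// Illustrative sketch only: a vehicle title as a digital identity record.
// It combines an identifier, attributes of the subject, and relationships.
interface VehicleTitle {
  vin: string;                 // uniquely identifies the subject (the car)
  year: number;                // attributes describing the subject
  make: string;
  model: string;
  color: string;
  ownerId: string;             // relationship: the person who owns the car
  previousOwnerIds: string[];  // history: every prior owner
  salvaged: boolean;           // history: flood or salvage record
}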

While fields as diverse as philosophy, commerce, and technology define identity, most are not helpful in building, managing, and using digital identity systems. Instead, we need to define identity functionally, in a way that provides hooks for us to use in making decisions and thinking about problems that arise in digital identity.

Joe Andrieu, principal at Legendary Requirements, writes that "identity is how we recognize, remember, and respond to specific people and things. Identity systems acquire, correlate, apply, reason over, and govern information assets of subjects, identifiers, attributes, raw data, and context." This definition is my favorite because it has proven useful over the years in thinking through thorny identity issues.

The identity record for a car includes attributes that the system uses to recognize it: in this case, the VIN. The title also includes attributes that are useful to people and organizations who care about (that is, need to respond to) the car, including the owner, the state, and potential buyers. The government runs a system for managing titles that is used to create, manage, transfer, and govern vehicles (or, in Andrieu's formulation, remember them). The system is designed to achieve its primary goal (to record valuable property that the state has an interest in taxing and regulating) and secondary goals (protecting potential buyers and creating a way to prove ownership).

Digital identity management consists of processes for creating, managing, using, and eventually destroying digital records, like the one that contains your car title. These records might identify a person, a car, a computer, a piece of land, or almost anything else. Sometimes they are created simply for inventory purposes, but the more interesting ones are created with other purposes in mind: allowing or denying access to a building, the creation of a file, the transfer of funds, and so on. These relationships and the authorized actions associated with them make digital identities useful, valuable, and sometimes difficult to manage.

Photo Credit: Plate - WHOS_CAR from Lone Primate (CC BY-NC-SA 2.0)

Tags: identity ldid


Orie Steele

Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers…

Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers (DIDs) As open source community leaders, Transmute is committed to fostering an environment where cutting edge problem sets and ideas are vetted and improved by expert peers. In that spirit, Transmute routinely publishes articles directly from our staff, who are experts across technology and industry.
Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers (DIDs)

As open source community leaders, Transmute is committed to fostering an environment where cutting edge problem sets and ideas are vetted and improved by expert peers. In that spirit, Transmute routinely publishes articles directly from our staff, who are experts across technology and industry. These articles are self-chosen topics an employee feels passionate about, and give you an inside look at the limitless creativity and skill the Transmute team applies to our work.

We love the web platform at Transmute.

One of our favorite things to do is to build small demos that show how powerful open web standards are, and some of the awesome features that have been developed at W3C to support web developers.

We’re also excited to participate in the Linux Foundation’s newest initiative, the Open Wallet Foundation:

https://openwallet.foundation/
https://github.com/openwallet-foundation

As part of our work helping evaluate open source building blocks for digital wallets, we built:

Briefcase

What’s Our Briefcase Made Of?

Briefcase uses Web Cryptography to share encrypted content using URI Fragments.

It also builds on top of DID JWK, which is the simplest DID Method we are aware of and provides a great starting point for building authentic relationships between parties that control private keys.

In order to make did:jwk even cooler, we added an emoji encoding of the JWK Thumbprint URI (RFC 9278), which takes the last few characters of the thumbprint and converts them to emojis.
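As a rough sketch of the idea (not the demo’s actual code; the emoji alphabet below is made up), the RFC 7638 thumbprint of an EC public JWK can be computed with the Web Cryptography API, wrapped as an RFC 9278 thumbprint URI, and its trailing characters mapped onto emojis:

// Sketch: RFC 7638 thumbprint of an EC public JWK, wrapped as an RFC 9278
// JWK Thumbprint URI, with the last few characters rendered as emojis.
async function jwkThumbprintUri(jwk: { crv: string; kty: string; x: string; y: string }): Promise<string> {
  // RFC 7638: required members only, lexicographic order, no whitespace.
  const canonical = JSON.stringify({ crv: jwk.crv, kty: jwk.kty, x: jwk.x, y: jwk.y });
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(canonical));
  const b64url = btoa(String.fromCharCode(...new Uint8Array(digest)))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
  return `urn:ietf:params:oauth:jwk-thumbprint:sha-256:${b64url}`;
}

function emojiSuffix(thumbprintUri: string, count = 4): string {
  const emojis = ['🦄', '🌵', '🚀', '🐙', '🎲', '🍉', '🔑', '🌊']; // arbitrary alphabet for illustration
  return [...thumbprintUri.slice(-count)]
    .map(ch => emojis[ch.charCodeAt(0) % emojis.length])
    .join('');
}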

We use the same approach in our DID Meme demo for post quantum cryptography, see <link>.

But where to store the public and private keys?

We’ve open sourced a tiny JavaScript wrapper around IndexedDB that can store Web CryptoKeys and documents:

https://github.com/openwallet-foundation/Technical-Staging-OWF/pull/3
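For background, IndexedDB can persist CryptoKey objects directly because they are structured-cloneable, which is what makes a wrapper like this possible. A bare-bones sketch of that kind of storage (illustrative only, not the wrapper linked above; the database and store names are made up):

// Sketch: persist a WebCrypto CryptoKey in IndexedDB (names are illustrative).
function openKeyStore(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('briefcase-demo', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('keys');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function saveKey(name: string, key: CryptoKey): Promise<void> {
  const db = await openKeyStore();
  await new Promise<void>((resolve, reject) => {
    const tx = db.transaction('keys', 'readwrite');
    tx.objectStore('keys').put(key, name); // CryptoKey survives the structured clone
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}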

Once you have a way to store public and private keys, you can use this wonderful library for producing JSON Web Signatures (JWS) and JSON Web Encryptions (JWE).

What to Put in Your Briefcase

You can share a DID key with a friend, who can encrypt a message to that DID using well-supported standards from the IETF, including RFC 7516.

Then share the link containing the encrypted message with the private key controller, who will be able to decrypt it.
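Putting those two steps together, a rough sketch of the exchange could look like the following. The panva jose library and the ECDH-ES+A256KW / A256GCM algorithms are my assumptions here; the post does not say exactly which library or algorithms Briefcase uses.

// Sketch: encrypt a small JSON payload to a recipient's public key as a compact
// JWE (RFC 7516) and carry the ciphertext in a URI fragment.
import { generateKeyPair, CompactEncrypt, compactDecrypt } from 'jose';

async function demo() {
  // Recipient key pair; in Briefcase the public key would come from a did:jwk.
  const { publicKey, privateKey } = await generateKeyPair('ECDH-ES+A256KW');

  const payload = new TextEncoder().encode(JSON.stringify({ hello: 'briefcase' }));
  const jwe = await new CompactEncrypt(payload)
    .setProtectedHeader({ alg: 'ECDH-ES+A256KW', enc: 'A256GCM' })
    .encrypt(publicKey);

  // The fragment never reaches the server, so the ciphertext travels in the link.
  const link = `https://example.com/briefcase#${jwe}`;

  // The private-key controller extracts the fragment and decrypts it.
  const { plaintext } = await compactDecrypt(link.split('#')[1], privateKey);
  console.log(new TextDecoder().decode(plaintext));
}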

In order to make this more fun, we added a 3D animation of a briefcase opening :)

Putting it all together, we’ve made a simple and easy way to share encrypted structured content built on W3C and IETF standards, and we had a lot of fun doing it!

Orie Steele, Transmute’s CTO and Co-Founder, has managed security concerns for startups and publicly traded companies, building secure web applications in Finance, Energy, and Healthcare.

Connect with Orie on LinkedIn, Twitter, and GitHub

About Transmute: Building on the security and freedom that Web3 promised, Transmute provides all the benefits of decentralization to enterprise teams seeking a cost effective, interoperable, planet-forward experience provided by experts in technology and industry.

Transmute was founded in 2017, graduated from TechStars Austin in 2018, and is based in sunny Austin, Texas. Learn more about us at: http://www.transmute.industries

Connect with Transmute on LinkedIn and Twitter

Briefcase: A Fun Way to Share Small Fragments of Structured Data Using Decentralized Identifiers… was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Damien Bod

Implement the OAUTH 2.0 Token Exchange delegated flow between an Azure AD API and an API protected using OpenIddict

This article shows how to implement the OAUTH 2.0 Token Exchange RFC 8693 delegated flow between two APIs, one using Azure AD to authorize the HTTP requests and a second API protected using OpenIddict. The Azure AD protected API uses the OAUTH 2.0 Token Exchange RFC 8693 delegated flow to get a new OpenIddict delegated […]

This article shows how to implement the OAUTH 2.0 Token Exchange RFC 8693 delegated flow between two APIs, one using Azure AD to authorize the HTTP requests and a second API protected using OpenIddict. The Azure AD protected API uses the OAUTH 2.0 Token Exchange RFC 8693 delegated flow to get a new OpenIddict delegated access token using the AAD delegated access token. An ASP.NET Core Razor page application using a confidential client is used to get the Azure AD access token with an access_as_user scope. By using the OAUTH 2.0 Token Exchange flow, delegated and application authorization mixing can be avoided and the trust between systems can be reduced.

Code: https://github.com/damienbod/OAuthGrantExchangeOidcDownstreamApi

Setup OAUTH 2.0 Token Exchange RFC 8693 for delegated flows

A Razor page UI application is implemented using Azure AD as the identity provider. This application authenticates against Azure AD using a confidential client, and the UI uses Microsoft.Identity.Web to implement the client authentication logic. The application requests an Azure AD delegated access token to use the API, which is also protected using Azure AD. This API application needs to use a downstream API which is protected by a separate identity provider using OpenIddict. The API uses the Azure AD access token to acquire another access token which the OpenIddict protected API accepts. The OAuth 2.0 Token Exchange RFC 8693 delegated flow is used to implement this. Only known Azure AD delegated access tokens can be used. The identity provider which hosts OpenIddict implements the server logic of the token exchange flow. I have kept this separate, but I assume it could be integrated into OpenIddict as well. It is important to validate not just the flow itself but also the mapping logic between the different identities used in the delegated access token. I did not implement the full specification in this demo, just the bits required for the delegated flow. Impersonation and other such use cases from RFC 8693 are not supported at present. Maybe I will implement them later.

Implement the OAUTH 2.0 Token Exchange client

The GetApiDataAsync method is used to get an access token for the OpenIddict downstream API and use it to get the data. It uses GetApiTokenOauthGrantTokenExchange to get the access token using the token exchange flow and then uses it to call the business API. The configuration values are used, as well as the client secret, to acquire the new token.

public async Task<List<string>> GetApiDataAsync(string aadAccessToken)
{
    try
    {
        var client = _clientFactory.CreateClient();
        client.BaseAddress = new Uri(_downstreamApi.Value.ApiBaseAddress);

        var access_token = await _apiTokenClient.GetApiTokenOauthGrantTokenExchange(
            _downstreamApi.Value.ClientId,
            _downstreamApi.Value.Audience,
            _downstreamApi.Value.ScopeForAccessToken,
            _downstreamApi.Value.ClientSecret,
            aadAccessToken);

        client.SetBearerToken(access_token);

        var response = await client.GetAsync("api/values");
        if (response.IsSuccessStatusCode)
        {
            var data = await JsonSerializer.DeserializeAsync<List<string>>(
                await response.Content.ReadAsStreamAsync());

            if (data != null)
                return data;

            return new List<string>();
        }

        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
    }
    catch (Exception e)
    {
        throw new ApplicationException($"Exception {e}");
    }
}

The GetApiTokenOauthGrantTokenExchangeAad method is an internal method used to call the OpenIddict identity provider to get the correct access token. It is only called when no valid token is available; the token is normally cached once acquired and reused for as long as it is valid. The method passes the required parameters, which match the server settings.

private async Task<AccessTokenItem> GetApiTokenOauthGrantTokenExchangeAad(
    string clientId, string audience, string scope, string clientSecret, string aadAccessToken)
{
    var tokenExchangeHttpClient = _httpClientFactory.CreateClient();
    tokenExchangeHttpClient.BaseAddress = new Uri(
        _downstreamApiConfigurations.Value.IdentityProviderUrl);

    var tokenExchangeSuccessResponse = await RequestDelegatedAccessToken
        .GetDelegatedApiTokenTokenExchange(
            new GetDelegatedApiTokenOAuthTokenExchangeModel
            {
                Scope = scope,
                AccessToken = aadAccessToken,
                ClientSecret = clientSecret,
                Audience = audience,
                ClientId = clientId,
                EndpointUrl = "/connect/oauthTokenExchangetoken",
                GrantExchangeHttpClient = tokenExchangeHttpClient
            }, _logger);

    if (tokenExchangeSuccessResponse != null)
    {
        return new AccessTokenItem
        {
            ExpiresIn = DateTime.UtcNow.AddSeconds(tokenExchangeSuccessResponse.expires_in),
            AccessToken = tokenExchangeSuccessResponse.access_token
        };
    }

    _logger.LogError("no success response from oauth token exchange access token request");
    throw new ApplicationException("no success response from oauth token exchange access token request");
}

The GetDelegatedApiTokenTokenExchange method implements the client logic of the OAuth flow. It creates an authentication header using basic authentication, as we only want to use a confidential client for this. The parameters are passed as KeyValuePair values and match the POST body defined in RFC 8693. If the data is returned correctly, a success response is returned; otherwise an error response as defined in the RFC, with a few extra parameters, is returned. The OauthTokenExchangeSuccessResponse is used to read the successful HTTP response from the POST request.

public static async Task<OauthTokenExchangeSuccessResponse?> GetDelegatedApiTokenTokenExchange(
    GetDelegatedApiTokenOAuthTokenExchangeModel reqData, ILogger logger)
{
    if (reqData.GrantExchangeHttpClient == null)
        throw new ArgumentException("Httpclient missing, is null");

    string credentials = CreateBasicAuthenticationHeader(reqData);
    reqData.GrantExchangeHttpClient.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Basic", credentials);

    KeyValuePair<string, string>[] oauthTokenExchangeBody = CreateTokenExchangeBody(reqData);

    var response = await reqData.GrantExchangeHttpClient.PostAsync(reqData.EndpointUrl,
        new FormUrlEncodedContent(oauthTokenExchangeBody));

    if (response.IsSuccessStatusCode)
    {
        var tokenResponse = await JsonSerializer.DeserializeAsync<OauthTokenExchangeSuccessResponse>(
            await response.Content.ReadAsStreamAsync());
        return tokenResponse;
    }

    if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
    {
        // Unauthorized error
        var errorResult = await JsonSerializer.DeserializeAsync<OauthTokenExchangeErrorResponse>(
            await response.Content.ReadAsStreamAsync());

        if (errorResult != null)
        {
            logger.LogInformation("{error} {error_description} {correlation_id} {trace_id}",
                errorResult.error, errorResult.error_description,
                errorResult.correlation_id, errorResult.trace_id);
        }
        else
        {
            logger.LogInformation("RequestDelegatedAccessToken Error, Unauthorized unknown reason");
        }
    }
    else
    {
        // unknown error, log
        logger.LogInformation("RequestDelegatedAccessToken Error unknown reason");
    }

    return null;
}

The CreateTokenExchangeBody creates the body. This is implemented for the delegated flow which requests an access token. The subject_token parameter is used to pass the Azure AD access token.

private static KeyValuePair<string, string>[] CreateTokenExchangeBody(
    GetDelegatedApiTokenOAuthTokenExchangeModel reqData)
{
    // Content-Type: application/x-www-form-urlencoded
    var oauthTokenExchangeBody = new[]
    {
        new KeyValuePair<string, string>("grant_type", OAuthGrantExchangeConsts.GRANT_TYPE),
        new KeyValuePair<string, string>("audience", reqData.Audience),
        new KeyValuePair<string, string>("subject_token_type", OAuthGrantExchangeConsts.TOKEN_TYPE_ACCESS_TOKEN),
        new KeyValuePair<string, string>("subject_token", reqData.AccessToken),
        new KeyValuePair<string, string>("scope", reqData.Scope)
        // new KeyValuePair<string, string>("resource", "--optional--")
        // new KeyValuePair<string, string>("requested_token_type", "--optional--")
        // new KeyValuePair<string, string>("actor_token", "--optional--")
        // new KeyValuePair<string, string>("actor_token_type", "--optional--")
    };

    return oauthTokenExchangeBody;
}

I created a consts class to hold the string values defined in the specification.

public class OAuthGrantExchangeConsts
{
    public const string TOKEN_TYPE_ACCESS_TOKEN = "urn:ietf:params:oauth:token-type:access_token";
    public const string TOKEN_TYPE_REFRESH_TOKEN = "urn:ietf:params:oauth:token-type:refresh_token";
    public const string TOKEN_TYPE_ID_TOKEN = "urn:ietf:params:oauth:token-type:id_token";
    public const string TOKEN_TYPE_SAML1 = "urn:ietf:params:oauth:token-type:saml1";
    public const string TOKEN_TYPE_SAML2 = "urn:ietf:params:oauth:token-type:saml2";
    public const string GRANT_TYPE = "urn:ietf:params:oauth:grant-type:token-exchange";
    public const string ERROR_INVALID_REQUEST = "invalid_request";
    public const string ERROR_INVALID_CLIENT = "invalid_client";
    public const string ERROR_INVALID_GRANT = "invalid_grant";
    public const string ERROR_UNAUTHORIZED_CLIENT = "unauthorized_client";
    public const string ERROR_UNSUPPORTED_GRANT_TYPE = "unsupported_grant_type";
    public const string ERROR_INVALID_SCOPE = "invalid_scope";
    // ... more consts, see the code for the full definitions
}

That’s all that is required to implement the client side of the OAuth Token Exchange delegated flow. If you require other flow types from this specification, they need to be implemented separately. See the RFC docs for details (in the links below).

Implement the OAUTH 2.0 Token Exchange server

The server part of the flow needs to validate a few different things. The identity provider validates the POST request using basic authentication, then it validates the body of the HTTP POST request. The server needs to fully validate the Azure AD access token, including the signature, aud and iss as per standard. Once the Azure AD token is validated, the claims can be used to authorize the identity delegated in the access token. Only delegated access tokens should be accepted, and for an Azure AD V2 token you can check this by looking for an oid claim and an scp claim. These claims might be renamed if the default Microsoft namespaces are used. The server must match its users to the Azure AD users. You need to be careful when using emails for this; the Azure OID is a good claim to use.

The server must do the following:

Validate the Basic authentication
Validate the body of the POST request as per standard
Validate the access token fully
Validate the claims, do the authorization
Generate the new access token as per standard

Validate Basic authentication

Basic authentication is used so that only confidential clients can use the API. This is not the strongest of authentication methods, but it is how the specification recommends sending the clientId and clientSecret. The authentication is validated using an Authorize attribute with the correct scheme.

[Authorize(AuthenticationSchemes = BasicAuthenticationDefaults.AuthenticationScheme)]
[HttpPost("~/connect/oauthTokenExchangetoken"), Produces("application/json")]
public async Task<IActionResult> Exchange([FromForm] OauthTokenExchangePayload oauthTokenExchangePayload)
{
    // Implement validate and create AT logic
}

Once authenticated, the validation can begin.

Validate payload of POST request

The payload of the HTTP POST request is validated. This checks that the body contains the expected values and only values which are allowed. If any are incorrect, an unauthorized response with the error parameter is returned, as defined in the specification.

var (Valid, Reason, Error) = ValidateOauthTokenExchangeRequestPayload
    .IsValid(oauthTokenExchangePayload, _oauthTokenExchangeConfigurationConfiguration);

if (!Valid)
{
    return UnauthorizedValidationParametersFailed(
        oauthTokenExchangePayload, Reason, Error);
}

Validate access token and signature

If the payload is valid, then the access token sent in the subject_token parameter is validated. This must be fully validated, including the signature. The well known endpoints of the Azure AD identity provider are used to get the public keys of the certificate used to sign the JWT token. These are used to validate the token signature. The iss and the aud are validated and checked against the expected values.

// get well known endpoints and validate access token sent in the assertion
var configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
    _oauthTokenExchangeConfigurationConfiguration.AccessTokenMetadataAddress,
    new OpenIdConnectConfigurationRetriever());

var wellKnownEndpoints = await configurationManager.GetConfigurationAsync();

var accessTokenValidationResult = ValidateOauthTokenExchangeRequestPayload.ValidateTokenAndSignature(
    oauthTokenExchangePayload.subject_token,
    _oauthTokenExchangeConfigurationConfiguration,
    wellKnownEndpoints.SigningKeys);

if (!accessTokenValidationResult.Valid)
{
    return UnauthorizedValidationTokenAndSignatureFailed(
        oauthTokenExchangePayload, accessTokenValidationResult);
}

The ValidateTokenAndSignature method checks and validates the token.

public static (bool Valid, string Reason, ClaimsPrincipal? ClaimsPrincipal) ValidateTokenAndSignature(
    string jwtToken,
    OauthTokenExchangeConfiguration oboConfiguration,
    ICollection<SecurityKey> signingKeys)
{
    try
    {
        var validationParameters = new TokenValidationParameters
        {
            RequireExpirationTime = true,
            ValidateLifetime = true,
            ClockSkew = TimeSpan.FromMinutes(1),
            RequireSignedTokens = true,
            ValidateIssuerSigningKey = true,
            IssuerSigningKeys = signingKeys,
            ValidateIssuer = true,
            ValidIssuer = oboConfiguration.AccessTokenAuthority,
            ValidateAudience = true,
            ValidAudience = oboConfiguration.AccessTokenAudience
        };

        ISecurityTokenValidator tokenValidator = new JwtSecurityTokenHandler();

        var claimsPrincipal = tokenValidator.ValidateToken(jwtToken, validationParameters, out var _);

        return (true, string.Empty, claimsPrincipal);
    }
    catch (Exception ex)
    {
        return (false, $"Access Token Authorization failed {ex.Message}", null);
    }
}

Validate claims and authorize the access token

Now that the token is validated, the returned claimsPrincipal can be used to check and authorize the identity from the access token. The token must be validated as a delegated token and must contain an scp claim and an oid claim. The scp is the scope we added to use the service, an access_as_user scope. I would avoid roles, as roles can be used for application tokens as well. I matched the name claim with the email of the identity in the second IAM system. Using the OID claim would be a more trusted way of doing this.

// get claims from aad token and re use in OpenIddict token
var claimsPrincipal = accessTokenValidationResult.ClaimsPrincipal;

var isDelegatedToken = ValidateOauthTokenExchangeRequestPayload
    .IsDelegatedAadAccessToken(claimsPrincipal);

if (!isDelegatedToken)
{
    return UnauthorizedValidationRequireDelegatedTokenFailed();
}

var name = ValidateOauthTokenExchangeRequestPayload
    .GetPreferredUserName(claimsPrincipal);

var isNameAndEmail = ValidateOauthTokenExchangeRequestPayload
    .IsEmailValid(name);

if (!isNameAndEmail)
{
    return UnauthorizedValidationPrefferedUserNameFailed();
}

// validate user exists
var user = await _userManager.FindByNameAsync(name);

if (user == null)
{
    return UnauthorizedValidationNoUserExistsFailed();
}

The delegated access token check validates both the oid and the scp claims. Sometimes the claims get renamed with the Microsoft namespaces, so I added a fallback check to validate both forms.

public static bool IsDelegatedAadAccessToken(ClaimsPrincipal claimsPrincipal)
{
    // oid if magic MS namespaces not used
    var oid = claimsPrincipal.Claims.FirstOrDefault(t =>
        t.Type == "http://schemas.microsoft.com/identity/claims/objectidentifier");
    // scp if magic MS namespaces not added
    var scp = claimsPrincipal.Claims.FirstOrDefault(t =>
        t.Type == "http://schemas.microsoft.com/identity/claims/scope");

    if (oid != null && scp != null)
    {
        return true;
    }

    oid = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "oid");
    scp = claimsPrincipal.Claims.FirstOrDefault(t => t.Type == "scp");

    if (oid != null && scp != null)
    {
        return true;
    }

    return false;
}

Generate new access token

A new access token is created using the same certificate as the default one used by OpenIddict. This makes it possible to validate the token using the well known endpoints.

// use data and return new access token
var (ActiveCertificate, _) = await Startup.GetCertificates(_environment, _configuration);

var tokenData = new CreateDelegatedAccessTokenPayloadModel
{
    Sub = Guid.NewGuid().ToString(),
    ClaimsPrincipal = claimsPrincipal,
    SigningCredentials = ActiveCertificate,
    Scope = _oauthTokenExchangeConfigurationConfiguration.ScopeForNewAccessToken,
    Audience = _oauthTokenExchangeConfigurationConfiguration.AudienceForNewAccessToken,
    Issuer = _oauthTokenExchangeConfigurationConfiguration.IssuerForNewAccessToken,
    OriginalClientId = _oauthTokenExchangeConfigurationConfiguration.AccessTokenAudience
};

var accessToken = CreateDelegatedAccessTokenPayload.GenerateJwtTokenAsync(tokenData);

_logger.LogInformation("OBO new access token returned sub {sub}", tokenData.Sub);

if (IdentityModelEventSource.ShowPII)
{
    _logger.LogDebug("OBO new access token returned for sub {sub} for user {Username}",
        tokenData.Sub,
        ValidateOauthTokenExchangeRequestPayload.GetPreferredUserName(claimsPrincipal));
}

return Ok(new OauthTokenExchangeSuccessResponse
{
    expires_in = 60 * 60,
    access_token = accessToken,
    scope = oauthTokenExchangePayload.scope
});

The claims are added as defined in the RFC specification.

public static string GenerateJwtTokenAsync(CreateDelegatedAccessTokenPayloadModel payload)
{
    SigningCredentials signingCredentials = new X509SigningCredentials(payload.SigningCredentials);
    var alg = signingCredentials.Algorithm;

    //{
    //  "alg": "RS256",
    //  "kid": "....",
    //  "typ": "at+jwt",
    //}

    var subject = new ClaimsIdentity(new[]
    {
        new Claim("sub", payload.Sub),
        new Claim("scope", payload.Scope),
        new Claim("act", $"{{ \"sub\": \"{payload.OriginalClientId}\" }}", JsonClaimValueTypes.Json)
    });

    if (payload.ClaimsPrincipal != null)
    {
        var name = ValidateOauthTokenExchangeRequestPayload.GetPreferredUserName(payload.ClaimsPrincipal);
        var azp = ValidateOauthTokenExchangeRequestPayload.GetAzp(payload.ClaimsPrincipal);
        var azpacr = ValidateOauthTokenExchangeRequestPayload.GetAzpacr(payload.ClaimsPrincipal);

        if (!string.IsNullOrEmpty(name))
            subject.AddClaim(new Claim("name", name));
        if (!string.IsNullOrEmpty(azp))
            subject.AddClaim(new Claim("azp", azp));
        if (!string.IsNullOrEmpty(azpacr))
            subject.AddClaim(new Claim("azpacr", azpacr));
    }

    var tokenHandler = new JwtSecurityTokenHandler();

    var tokenDescriptor = new SecurityTokenDescriptor
    {
        Subject = subject,
        Expires = DateTime.UtcNow.AddHours(1),
        IssuedAt = DateTime.UtcNow,
        Issuer = "https://localhost:44318/",
        Audience = payload.Audience,
        SigningCredentials = signingCredentials,
        TokenType = "at+jwt"
    };

    tokenDescriptor.AdditionalHeaderClaims ??= new Dictionary<string, object>();

    if (!tokenDescriptor.AdditionalHeaderClaims.ContainsKey("alg"))
    {
        tokenDescriptor.AdditionalHeaderClaims.Add("alg", alg);
    }

    var token = tokenHandler.CreateToken(tokenDescriptor);
    return tokenHandler.WriteToken(token);
}

Start all the applications, and if everything is configured correctly with your Azure AD tenant, the data from the OpenIddict protected API can be requested and displayed in the Azure AD protected UI.

Links

https://documentation.openiddict.com/configuration/application-permissions.html

https://datatracker.ietf.org/doc/html/rfc8693

https://www.youtube.com/watch?v=Ue8HKBGkIJY&t=

https://github.com/damienbod/OnBehalfFlowOidcDownstreamApi

https://www.rfc-editor.org/rfc/rfc6749#section-5.2

https://github.com/blowdart/idunno.Authentication/tree/dev/src/idunno.Authentication.Basic

Sunday, 08. January 2023

Heres Tom with the Weather

Correction: check_webfinger!

Mastodon is not the fediverse and in my check_webfinger! post, I’m afraid I made that assumption. In particular, I concluded So, from the perspective of mastodon, the domain component of your identifier you are known as is determined by which domain serves your actor document rather than the domain serving the original “well known” webfinger document. which is not necessarily true if

Mastodon is not the fediverse and in my check_webfinger! post, I’m afraid I made that assumption. In particular, I concluded

So, from the perspective of mastodon, the domain component of your identifier you are known as is determined by which domain serves your actor document rather than the domain serving the original “well known” webfinger document.

which is not necessarily true if you consider the fediverse outside of Mastodon.

Instead, it seems that I should have said that the domain component of your identifier is determined by the domain component of the subject field returned in the webfinger response from the domain that serves your actor document when mastodon makes its 2nd webfinger request, which is done in the check_webfinger! method.

def check_webfinger!
  webfinger = webfinger!("acct:#{@username}@#{@domain}")
  confirmed_username, confirmed_domain = split_acct(webfinger.subject)

In the code above, the @domain passed to webfinger! is the domain of the server providing the activitypub actor document but the confirmed_domain can be different (e.g. your personal domain) if your original “well known” webfinger document was not pointing to a Mastodon server for providing the actor document.
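To make that second lookup concrete, here is a minimal sketch of it (the function name and error handling are mine, not Mastodon's):

// Sketch: ask the domain serving the actor document for its WebFinger record,
// then take the confirmed identifier from the subject field it returns.
async function confirmedAcct(username: string, actorDomain: string): Promise<[string, string]> {
  const resource = `acct:${username}@${actorDomain}`;
  const res = await fetch(
    `https://${actorDomain}/.well-known/webfinger?resource=${encodeURIComponent(resource)}`
  );
  const webfinger = await res.json();
  // subject looks like "acct:user@confirmed.domain"; the confirmed domain can
  // differ from actorDomain, e.g. when it is your personal domain.
  const [confirmedUsername, confirmedDomain] = webfinger.subject.replace(/^acct:/, '').split('@');
  return [confirmedUsername, confirmedDomain];
}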

Therefore, if you have a static personal website, it is not necessary to also host the actor document there as long as the fediverse node providing the actor document is smart enough to provide your personal domain in the subject when mastodon makes a webfinger call to it. A caveat is that such a fediverse node accommodating personal domains would not be able to distinguish between bob@a.com and bob@b.com when mastodon webfingers server.com for bob@server.com.

Friday, 06. January 2023

@_Nat Zone

Considering the risks of the Twitter data leak affecting over 230 million users

In the early hours of January 6, 2023, a Bloomberg report…

In the early hours of January 6, 2023, the Bloomberg report "Twitter may have leaked data of more than 230 million users"1 came across my feed, so I would like to use it as material to think a little about the risks involved.

Table of contents

Incident overview
Potential for abuse / risks
Risk 1: Main accounts being linked with secondary and anonymous accounts via email address and phone number
Risk 2: The information being used for phishing
Risk 3: The information being used for profiling
Risk 4: Being plagued by spam / DDoS attacks
Identifier issues

Incident overview

According to the English edition of Bloomberg at 5:17 JST on January 6, 2023, information on more than 230 million users believed to have been leaked from Twitter was posted on the forum "BreachForum". The information included email addresses, Twitter handles (the name beginning with @; in my case, @_nat), and full names (whatever that is supposed to mean). While writing this entry it occurred to me to check piyolog, and the article "A summary of the roughly 200 million records believed to have leaked from Twitter"2 had a much more detailed write-up. According to it, the data contains:

Display name
Username
Follower count
Account creation date
Email address

As thorough as ever, piyokango. Bloomberg, by contrast, should have reported this properly.

The data does not appear to have been newly extracted; it was most likely pulled using a "vulnerability" in the Twitter API that existed from June 2021 to January 2022. The vulnerability itself was reported to Twitter in January 2022 and fixed immediately. According to Twitter's report of August 5, 20223, "if someone submitted an email address or phone number to Twitter's systems, Twitter's systems would tell the person what Twitter account, if any, the submitted email address or phone number was associated with."4 5

It was already known in July 2022 that this API vulnerability had actually been exploited. From the report it reads as if only the Twitter handle could be pulled this way, but once you know the handle you also know the "name" attached to the Twitter profile, so what was posted on BreachForum may be the previously extracted data combined with the publicly available "name". Moreover, around December 23, 2022, about 400 million records that also included phone numbers were reportedly being "sold exclusively for 200,000 dollars, or to multiple buyers for 60,000 dollars each"6, so the posted data may also be a cleansed version of that with duplicates removed. For more detail, see the piyolog article7.

Potential for abuse / risks

First of all, passwords were not leaked, so there is no direct risk of unauthorized login from this. That said, it is a good opportunity to review your login settings and set up a FIDO authenticator or one-time passwords (OTP), so please do so. In fact, the Twitter report of August 5 cited above also recommends this. (Bloomberg, on the other hand: why put "you should change your password in the Account tab while logged in" ahead of strong authentication? That is exactly the kind of thing I mean.)

Now, on to the risks. First, let's consider the risks of abuse of the vulnerability announced in August 2022.

Risk 1: Main accounts being linked with secondary8 and anonymous9 accounts via email address and phone number

Risk: For some individuals, having their secondary and anonymous accounts linked together using an email address or phone number as the key is probably the biggest risk. This could lead to people tracing past tweets on a secondary account and saying all sorts of things, or in some cases even to blackmail. The original vulnerability report also listed this kind of correlation as the biggest risk.

Countermeasures: Hmm. For what has already happened, this is quite difficult. Deleting secondary and anonymous accounts can be somewhat effective, provided no snapshots have been taken yet. On the other hand, you may be berated for deleting tweets or accounts, on the assumption that you must have something to hide.

Going forward, when you create secondary or anonymous accounts, use a different email address and a different mobile phone number. Incidentally, digging deeper into this area (see below) is what this blog entry was meant to do, so please have a look at that part as well.

Risk 2: The information being used for phishing

Risk: Even if the information is public or already leaked, when someone shows you information about yourself that looks correct and stokes a sense of crisis, you tend to fall for the scam. If an SMS arrived at your registered mobile number 080-0987-654311 saying something like "The password for your Twitter account @_nat10 (email address: nat@example.com), opened on January 5, 2015, has been leaked. Please go to the following site and reset your password immediately," you might just do it, right?

Countermeasures: Changing your password, as Bloomberg recommends, is pointless. Two-factor authentication, as Twitter recommends, is almost certainly effective if you choose a "security key". OTP via text message or an authenticator app is vulnerable to phishing, so it is not a countermeasure for this risk. (It does help against attacks using previously leaked passwords, though.)

Risk 3: The information being used for profiling

Risk: The leaked information, and information that can be pulled in from it, is linked via email address or phone number to data a company already holds and used for profiling for targeting purposes.

Countermeasures: There are not really any realistic, that is, cost-effective, countermeasures. If you think of one, please leave a comment.

Risk 4: Being plagued by spam / DDoS attacks

Risk: For ordinary people, spam arrives at the leaked email address or phone number, interrupting work and free time, or causing messages that should not be missed to be overlooked. For famous people, this can amount to a DDoS attack.

Countermeasures: This is unfortunate news for those who had disclosed their address or number only to important contacts and so never missed anything. You may have to whitelist the parties you currently share it with and drop everything else, or change the number. For phone numbers in particular, unless the carrier can handle this on their side, the number may become unusable.

There are surely more risks if you examine this carefully, but that is all for now. If you think of any, I would be grateful for a comment.

Identifier issues

When I started writing this post, the working title was "Considering identifier issues using the Twitter 230-million-user data leak as a case study" and I intended to dig deep into identifier issues, but it has gotten quite long, it is almost 3 a.m., and I am tired, so I will stop here for now. What I was planning to write about:

The problem of conflating identifiers and credentials: my impression was that the original "vulnerability" may actually have been a specification bug rooted in this conflation, and wanting to write about that is what prompted this entry in the first place. I ran out of steam before getting there. You may think "surely not", but this conflation is actually fairly common, especially where these values are also used for password resets. Well, in Twitter's case I would like to believe that was not what happened.
The account correlation problem: mainly the topics covered in the risks section, plus the following:
Backup authentication methods for pseudonymous accounts
The problem of conflating backup authentication methods when reusable identifiers are used
Twitter's use of email addresses and phone numbers beyond their stated purpose: these were collected as backup authentication methods but were used for targeting. For details, see Twitter faces $250 million FTC fine for misusing emails and phone numbers (2020-08-04)

Regarding identifiers, I have started a series of explanations on the Japanese-language YouTube channel I set up at the end of last year (as of January 7, 2023, only the first episode has been uploaded).

In that first episode, titled "Identifier classification and risks: Mastering identifiers, part 1"12, I talk about how identifiers can be classified and the risks that come with each class (embedded below). As you can see from it, email addresses and phone numbers are both fundamentally reusable identifiers. They are also omnidirectional identifiers, and in many cases persistent identifiers: hardly anyone changes their email address and mobile number every three years. I had planned to write about the lessons of this incident with these properties in mind, but that will have to wait for another occasion.

See you next time!

Wednesday, 04. January 2023

MyDigitalFootprint

Peak Paradox and #privacy

I have explored privacy and identity in a previous post, taken from the perspective of the individual.  This post is from the perspective of the state/ nation/ law. I commented on Michael Becker's LinkedIn post about his list of words and definitions for the identity/ privacy space. I commented that everyone remains fixated on making their particular solution work to a problem the user has
I have explored privacy and identity in a previous post, taken from the perspective of the individual. 

This post is from the perspective of the state/nation/law. I commented on Michael Becker's LinkedIn post about his list of words and definitions for the identity/privacy space, saying that everyone remains fixated on making their particular solution work for a problem the user does not have, namely "#privacy." Whilst every language and country has an ideal notion of privacy, the wide variety of cultures means there is no unified "concept of privacy", even if privacy is explicitly named or defined in a specific language, law or culture.

I asked #chatGPT, the "AI" bot from OpenAI, the question, "how would a capitalist and a socialist view privacy?"

“Capitalists would see privacy as an important aspect of individual liberty and autonomy and they view privacy as a property right that can be traded or sold, and they may support policies that allow companies to collect and use personal data as long as they have obtained the necessary consent.

A socialist would define privacy as the right to personal autonomy and control over one's own thoughts, feelings, and personal information. It is the ability to make decisions about how and with whom one shares personal information, without interference or coercion from external forces such as the government or capitalist institutions. In a socialist society, privacy would be protected as a fundamental human right and would be respected and upheld by the community as a whole.”

StJohn Deakins added to the comments by suggesting we move on from ideals of the political spectrum to examples.  

North Korea: dictatorship with a command economy: Kim owns you and owns your data, including your thoughts (rights to brain data will become a seriously scary "thing" in the next decade)

China: one-party state, ruling on behalf of 'the people’ with a state-controlled market economy: the party owns you and your data (on behalf of the Chinese people)

USA: individualist democracy with a free market (where companies have the same legal rights as individuals).  You own your data, and others own it, too (lots of them).

EU:  social democracy with a free market (where markets serve people, and people choose how). You own your data (as a human right). You, in theory, participate in choosing how.


It is remarkably common that concepts can be well-defined in language and law, yet there remains a wide variance in their understanding and use. Words and language are dynamic and never stationary, and we should not try to own or control them; defining them for a context is critical. Just consider love, truth, safety, honesty, work, trust, beauty and risk.

Plotting privacy on the Peak-Paradox framework makes me consider how the different interpretations are framed by the purposes presented in the mapping. It is evident that the interpretation of privacy must vary as we move about our understanding of what we are optimising for. Within peaks, such as Peak Individual Purpose, dilemmas are created; for example, Mark Zuckerberg wants privacy for himself but not for any of his #meta users.

To add an additional layer to the complexity onion of privacy, it should be noted that the concept of privacy probably exists in each of us until there is an external demand or constraint.  As unpacked in this post, where I pick up on the work of two behavioural scientists, John Darley and Daniel Batson, who published in 1973 “From Jerusalem to Jericho: A study of Situational and Dispositional Variables in Helping Behavior.” It was an investigation into the psychology of prosocial behaviour. The research demonstrates that when we introduce time pressure, our dispositional factors (I like privacy) stand for nothing and that we will act differently because of the constraints. 

When you design for privacy, you need to understand what the user is optimising for and be aware that your design will change what privacy means to the users because it is a constraint.


Tuesday, 03. January 2023

MyDigitalFootprint

We can be very good at answering questions, but why don't we challenge them?

A problem (among many) with data is that many people ask questions that are easy.  How many and who clicked this button? These are easy to ask, occupy time, fill in KPI cards and are often easy to answer. Why do so few kick back to ask if it is the right question?  Why did they click the button? Oh, we don’t have that data! But we can create constraints that mean we get biased data as

A problem (among many) with data is that many people ask questions that are easy.  How many and who clicked this button? These are easy to ask, occupy time, fill in KPI cards and are often easy to answer. Why do so few kick back to ask if it is the right question?  Why did they click the button? Oh, we don’t have that data!

But we can create constraints that mean we get biased data as we don’t understand human behaviour in context. 

----

In 1973 two behavioural scientists, John Darley and Daniel Batson published "From Jerusalem to Jericho: A study of Situational and Dispositional Variables in Helping Behavior." It was an investigation into the psychology of prosocial behaviour.

Darley and Batson picked students who were studying to be priests at the Princeton Theological Seminary to determine how situational factors influenced prosocial behaviour. Hypothesis: When someone is kind to another, is that because he or she has some innate qualities that lead to kindness—or because some situational factors simply determine and allow for kind behaviours.

The famous study was across three days in late fall; they had a bunch of seminary students come to a building, meet with a researcher, and fill out a bunch of surveys. The surveys partly addressed whether the students were religious primarily for intrinsic or for extrinsic reasons (with “intrinsic reasons” being like “I am motivated to do good in the world” and “extrinsic reasons” being like “I really want to get into heaven.”). Then the participants were told that they needed to prepare a brief talk about The Good Samaritan — which is a story about how a hapless victim on the side of the road was just passed by holy individuals, whilst a non-holy Samaritan took the time to stop and help the fellow out. The story's context is significant as the participants were told that they needed to walk to a nearby building to meet up with another team member and then give their sermon. However, using random selection, each student was told that they:

Had plenty of time, and were early.

Were on time, but should head over now so as not to be late, or

Were running late, and really needed to run without delay - no excuse.

Obviously, the situation was rigged, and all participants found a fallen stranger (an actor) in a narrow alleyway who acted sick on the ground and in need of help. The narrow alleyway was selected as there was a choice: help this guy or step over him!

The “time constraint” crafted behavioural change as:

63% of participants in the “early” condition stopped to help the stranger.

45% of participants in the “on-time” condition stopped to help the stranger.

10% of participants in the “late” condition stopped to help the stranger.

Overall, 60% of the participants were unwilling to help the “victim.” This is ironic because the participants were Princeton students studying to be priests, about to give a talk on the lessons of the Good Samaritan, yet their behaviour was swayed by the manipulated constraints.

A side note before the core finding is that dispositional factors (what you believed) had no bearing on helping behaviour. In other words, people who reported as religious for intrinsic reasons were no more likely than others to stop to help.

When it comes to human behaviour, we have a strong bias toward thinking that people do what they do because of internal traits that drive their behaviours (Dunning, Ross & Nisbett, 1990, The Overconfidence Effect in Social Prediction). Data shows us that dispositional factors are relatively weak predictors of what we do, whilst situational factors (which we cannot see or measure and often seem benign or inconsequential) play a powerful role in shaping our behaviours.


We can only answer the questions we have data for, but that does not mean the answer is right or that the data is a good predictor, because we don't understand the constraints.

CEO Take Away

If data supports your decisions, who is accountable and responsible for ensuring it answers the question we want and is not just data without context? In the next board agenda, put an item under AOB: “Do we understand situational bias in our data?” If there is no debate, or only looks of bewilderment, perhaps it is time to ask better questions of those who think that the data is accurate.






Monday, 02. January 2023

Phil Windleys Technometria

Better Onboarding with Verifiable Credentials

Summary: Making onboarding easier would result in better customer experience, more customers, and reduced support costs. Last week a friend referred me to a question on Guru.com about devices for connected cars. Since I used to do Fuse, he figured I might be able to help. I was happy to. Unfortunately, Guru wasn't so happy to let me.

Summary: Making onboarding easier would result in better customer experience, more customers, and reduced support costs.

Last week a friend referred me to a question on Guru.com about devices for connected cars. Since I used to do Fuse, he figured I might be able to help. I was happy to. Unfortunately, Guru wasn't so happy to let me.

You can't answer a question at Guru.com without registering, enrolling, and onboarding. Fair enough. So I started down the path. Here's their process:

Enter name and email on first screen.

Choose whether you're an employer or freelancer and set your password. Be sure to follow their password conventions. Then agree to the terms of service and agree to get emails (or not).

Enter the four-digit code that was sent to the email address you gave in (1).

Solve the captcha.

Choose whether to use 2FA or security questions to secure your account. I chose 2FA.

Verify your phone number using SMS or WhatsApp (they recommend WhatsApp). I chose SMS.

Enter the 4 digit code they send.

Continue with 2FA. I'm not sure why this screen shows up twice.

Logout and log back in.

Scan the QR code to set up a TOTP authenticator.

Enter the one-time code from the authenticator app.

Upload a photo and enter a mailing address (yes, they're required).

Congratulations! You've gone through Guru's twelve step program and you're registered! I went through all this just to discover I can't answer questions unless I pay them money. I bailed.

As I was going through this, I couldn't help thinking how much easier it could be using verifiable credentials.

Enter an email.

Scan the QR code they present using my smart wallet to establish a DID connection.

Verify information about myself that they ask for using verifiable credentials.

Credentials asserting my verified email and phone number would be easy enough to get if I don't already have them. And they're not verifying address and photo anyway, so there's no need for anything but a self-asserted credential for that. Admittedly, if I've never used verifiable credentials before they need to coach me on getting a wallet and the phone and email address credentials. But they're already doing that for the authenticator app in step 10 above.

Guru's registration process is one of the most arduous I have encountered. If I were them and unwilling to use verifiable credentials, I'd at least split it up and let people add their photo, address, and authenticator app after they're already on board. Guru.com (and lots of other web sites) have to be shedding potential customers at every step in their onboarding process. I wonder if they keep track of abandoned registrations and where it happens? Does anyone? I'd love to know the numbers.

Verifiable credentials could make the onboarding experience a breeze, get more customers in the door, and reduce the cost of customer support calls associated with it.

Photo Credit: Abandoned Road from Tim Emerich (CC0)

Tags: verifiable+credentials ssi identity web

Saturday, 31. December 2022

Jon Udell

Instance-qualified Mastodon URLs

In Lists and people on Mastodon I showed how I added a list column to the following tab of the Mastodon browser I’m building. That was a step in the direction of easier and more powerful list management. It enables me to see whether the people I follow are assigned to lists, and to consider … Continue reading Instance-qualified Mastodon URLs

In Lists and people on Mastodon I showed how I added a list column to the following tab of the Mastodon browser I’m building. That was a step in the direction of easier and more powerful list management. It enables me to see whether the people I follow are assigned to lists, and to consider who should be on a list (or perhaps on a different list).

Today, as I began to use that new affordance in earnest, I discovered a new challenge. In order to assign someone to a list, or change a list assignment, I clicked the link in the account_url column to open that person’s profile in the Mastodon web app. That was fine for accounts on my home server, mastodon.social. An account URL like Shelley Powers’ https://mastodon.social/@burningbird brings me to Shelley’s profile on my home server where the list manager is available.

But if I’m following someone elsewhere, like Ward Cunningham at https://mastodon.radio/@k9ox, the account URL brings me to Ward’s profile on that server where the list manager isn’t available. In order to assign Ward to a list I had to capture his account URL, paste it into the search box in my home server’s web app, and then click the resulting link: https://mastodon.social/@k9ox@mastodon.radio.

That got old real fast, so I adjusted the following tab to display the latter flavor of URL which I’ll call an instance-qualified URL.

Steampipe provides a few ways to make that adjustment. As a user of the dashboard, you can use Postgres’ regular expression functions to do the transformation in the SQL query that drives the view. But you’d rather not have to. It’s much nicer if the plugin does that for you, so the SQL can just refer to a column called instance_qualified_url.

I chose the latter approach. As the author of a Steampipe plugin you want to make life as easy as possible for users of the plugin. When you’re the author of both the plugin and the dashboard, as I am in this case, you can enjoy a nice virtuous cycle. As the dashboard evolves you discover ways to improve the plugin, which leads to more use of the dashboard, which suggests further opportunities to improve the plugin. I have been greatly enjoying the coevolution of these two components!

Adding a new column to a Steampipe table

To make the change, I extended the structure that defines the columns of the tables mapped from Mastodon Account API. A Steampipe plugin defines columns using a list of structs like this.

...,
{
    Name:        "url",
    Type:        proto.ColumnType_STRING,
    Description: "URL for the account.",
},
...,

That struct says: “When the name of a top-level field in the API response is url, tell Steampipe to make a database column with that name and with the Postgres type text.”

You can also transform values in API responses to synthesize new columns that don’t appear in API responses. Here’s the struct I added for this case.

...,
{
    Name:        "instance_qualified_account_url",
    Type:        proto.ColumnType_STRING,
    Description: "Account URL prefixed with my instance.",
    Transform:   transform.FromValue().Transform(instanceQualifiedAccountUrl),
},
...

That one says: “Send the API response to the transform function instanceQualifiedAccountUrl, and use its result as the value of the column.”

Here’s the function.

func instanceQualifiedAccountUrl(ctx context.Context, input *transform.TransformData) (interface{}, error) {
    url := input.Value.(*mastodon.Status).Account.URL
    qualifiedUrl := qualifiedUrl(ctx, url)
    return qualifiedUrl, nil
}

It delegates the real work to another function.

func qualifiedUrl(ctx context.Context, url string) string {
    plugin.Logger(ctx).Debug("instanceQualifiedUrl", "server", homeServer, "url", url)
    // capture group 1 is the remote server, group 2 is the person
    re := regexp.MustCompile(`https://([^/]+)/@(.+)`)
    matches := re.FindStringSubmatch(url)
    if len(matches) == 0 {
        return url
    }
    server := matches[1]
    person := matches[2]
    qualifiedUrl := fmt.Sprintf("%s/@%s@%s", homeServer, person, server)
    plugin.Logger(ctx).Debug("instanceQualifiedUrl", "qualifiedUrl", qualifiedUrl)
    schemelessHomeServer := strings.ReplaceAll(homeServer, "https://", "")
    qualifiedUrl = strings.ReplaceAll(qualifiedUrl, "@"+schemelessHomeServer, "")
    plugin.Logger(ctx).Debug("qualifiedUrl", "qualifiedUrl", qualifiedUrl)
    return qualifiedUrl
}

Why? Two different sets of column definitions need the same transformation. instanceQualifiedAccountUrl works with responses from the Account API. But account URLs also appear in the Status API that drives timeline views. Those use a different transform function, instanceQualifiedStatusUrl, to do the same transformation for a different API response.

From account URLs to status URLs

The instanceQualifiedAccountUrl column solved the original problem. I was able to remove my plugin-author hat, put on my dashboard-author hat, and refer to account URLs as instance-qualified URLs in all the tabs that display them. Any such link now leads to a profile that I view through the lens of mastodon.social and that enables me to use the web app's list manager directly, without the cumbersome copy/paste/search procedure.

My happy dance didn’t last long, though. Newly sensitized to that copy/paste/search friction, I realized it was still happening when I tried to reply to items that appear in a timeline view. Here is a recent example: https://techpolicy.social/@mnot/109610641523489182.

That’s the URL displayed in the dashboard. When I click it I land on Mark’s server and can view the item, but if I try to reply I’m confronted with the dreaded copy/paste/search operation.

No problem! I’ll use a similar transform! Not so fast. I can form a URL like https://mastodon.social/@mnot@techpolicy.social/109610641523489182 but it doesn’t go anywhere.

If I do the copy/paste/search operation, I land on a similar-but-different URL: https://mastodon.social/@mnot@techpolicy.social/109610641692667630. It has the same structure but a different toot ID. This URL is also the one that appears in the web app’s home timeline, which is why I can reply directly from that view.

I’m out of my depth here so I’ll just end with an appeal for help. It makes sense that a home server will assign its own id to an item fetched from a foreign server, and that the web app will use that id. But I’m not seeing a way to acquire that id directly from the API. I suspect it’s possible to acquire it by way of search, but doing that for every item in a timeline would quickly exhaust the tight budget for API requests (just 300 every 5 minutes).

So, Lazy Mastodon, am I just stuck here or is there a way to transform foreign status URLs into instance-relative status URLs?

Update: Solved!

After chatting with Jari Pennanen I took another look and realized the needed ID was available in the API response after all; I just wasn’t using it (facepalm). And in fact there are two flavors of the ID — one for original toots, another for boosts. Columns for both cases are added here and the tweak to make the dashboard use them here.

Here is the result.

Thanks for being my rubber duck, Jari! The instance-qualified toot and reblog URLs make this dashboard massively more useful.


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/


Mike Jones: self-issued

OAuth DPoP Specification Addressing Area Director Review Comments

This week Brian Campbell published an updated OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) draft addressing the Area Director review comments received. Thanks to Roman Danyliw for his useful review!

As Brian wrote, updates in this version of the specification were:

Updates from Roman Danyliw’s AD review
DPoP-Nonce now included in HTTP header field registration request
Fixed section reference to URI Scheme-Based Normalization
Attempt to better describe the rationale for SHA-256 only and expectations for how hash algorithm agility would be achieved if needed in the future
Elaborate on the use of multiple WWW-Authenticate challenges by protected resources
Fix access token request examples that were missing a client_id

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-12.html

Scott Maces IT Journal

One World 65 International Adapter: One versatile little brick

Recently I had a chance to test an evaluation unit of the OneWorld 65 International Adapter, made by OneAdaptr.com. This was only the second time I had ever traveled out of the country with tech in tow. The previous time, I was toting a MacBook laptop computer, and purchased an Apple-specific adapter kit basically built to plug into half of the 10-watt USB adapter. Not as versatile, and a great deal bulkier, but it did the job for me back in 2018.

The OneWorld 65 is a nice progression and compression of key functions of an international adapter, from the jigsaw-puzzle-looking box of Apple stuff to a simple brick. It can charge up to 6 devices simultaneously and is said to work in more than 200 countries. (I will never travel that much, so I'm taking OneAdaptr.com's word for it.)

In keeping with how charging interfaces are evolving, the OneWorld 65 has two standard USB-C charging ports, two USB-A charging ports, and a 65-watt USB-C charging port for the newer laptops that support USB-C.

In late November and early December, I traveled to Austria, eventually joining the American Beethoven Society on a tour of musical landmarks, including a week in Vienna, and a side trip to Salzburg. Pictured here is the OneWorld 65 in action in my Vienna hotel. To use it, I selected among the European Union, UK, and U.S. plugs (in Austria, it was EU all the way) and found the OneWorld 65 to plug easily into all such power outlets in various hotel rooms. The USB ports are on the bottom of the unit, and one is on the right side. I didn't come close to using all six outlets at once, but it's great to have the capacity.

I had one issue, and it wasn't a deal-breaker. I attempted to plug some 110-volt U.S. plugs into the outlet on the top front of the unit, as pictured, but was unable to do so. When I returned to the U.S., I contacted OneAdaptr.com, and they suggested I use more force to insert such plugs. I tried various 110-volt plugs at home, and while I did have success with one such plug, I could not insert other plugs I tried, no matter how much force I applied. The blue plastic inside acts as a protection guard, and perhaps with repeated use, I will find it easier to insert any such plug.

Despite this issue, I recommend the OneWorld 65 for international travel. It's great to see so much versatility in such a small package selling for less than $70. It travels well, and I will put it through its paces again in 2023.

Friday, 30. December 2022

Bill Wendels Real Estate Cafe

Compensation Revolution in Real Estate: Will 2023 deliver long-overdue reforms & BILLIONS in consumer savings?

“What if buyer and seller real estate commissions are decoupled?” That recent question on LinkedIn was first asked & answered 31 years ago when Steve…

The post Compensation Revolution in Real Estate: Will 2023 deliver long-overdue reforms & BILLIONS in consumer savings? first appeared on Real Estate Cafe.

Thursday, 29. December 2022

Jon Udell

How many people in my Mastodon feed also tweeted today?

I last tweeted on Dec 22. (It was, unsurprisingly, a link to a blog post about Mastodon.) Today I wondered what percentage of the people who appear in my Mastodon timeline today also appeared on Twitter today.

To start, I wrote this query which tries to match Twitter and Mastodon usernames. When it finds a match, it reports the day on which that person last tweeted.

with mastodon as (
  select
    substring(username from 1 for 15) as username,        -- twitter names are max 15 chars
    'from:' || substring(username from 1 for 15) as query -- we will query twitter using, e.g., 'from:judell'
  from
    mastodon_toot
  where
    timeline = 'home'
  limit 500
)
select
  m.username as mastodon_person,
  t.author->>'username' as twitter_person,
  max(to_char(t.created_at, 'YYYY-MM-DD')) as last_tweet_day
from
  mastodon m
left join
  twitter_search_recent t -- see https://hub.steampipe.io/plugins/turbot/twitter/tables/twitter_search_recent
on
  t.query = m.query
group by
  mastodon_person,
  twitter_person
order by
  last_tweet_day desc

This is my favorite kind of Steampipe query: two different APIs, each represented as a Postgres table, combined with a SQL JOIN.

The result looks like this, with nulls for failed matches.

+-----------------+-----------------+----------------+
| mastodon_person | twitter_person  | last_tweet_day |
+-----------------+-----------------+----------------+
| AlanSill        | null            | null           |
| Colarusso       | null            | null           |
| ...             |                 |                |
| williamgunn     | null            | null           |
| xian            | null            | null           |
| ...             |                 |                |
| futurebird      | futurebird      | 2022-12-29     |
| glynmoody       | glynmoody       | 2022-12-29     |
| ...             |                 |                |
| khinsen         | khinsen         | 2022-12-23     |
| blaine          | blaine          | 2022-12-23     |
+-----------------+-----------------+----------------+

Next I created a table from the above query.

create table public.mastodon_twitter as -- sql as above

And then ran this query.

select
  last_tweet_day,
  count(*)
from
  mastodon_twitter
where
  last_tweet_day is not null
group by
  last_tweet_day
order by
  last_tweet_day desc

Here’s the result.

+----------------+-------+
| last_tweet_day | count |
+----------------+-------+
| 2022-12-29     | 36    |
| 2022-12-28     | 6     |
| 2022-12-27     | 1     |
| 2022-12-26     | 1     |
| 2022-12-25     | 2     |
| 2022-12-23     | 2     |
+----------------+-------+

The 500 toots represented here were created by 93 people who tooted today.

select count(*) from mastodon_twitter

+-------+
| count |
+-------+
| 93    |
+-------+

Of those 93 people, 48 have matching usernames.

select count(*) from mastodon_twitter where last_tweet_day is not null

+-------+
| count |
+-------+
| 48    |
+-------+

Of the 48 with matching usernames, 36 also tweeted today.

So there’s my answer: 75% of the people who appeared in my Mastodon home timeline (when I sampled it just now) also appeared on Twitter today.
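For completeness, the same percentage can be computed in a single query over the mastodon_twitter table created above. This is my own sketch, not part of the original post:

-- percentage of matched accounts whose most recent tweet was today
select
  round(
    100.0
    * count(*) filter (where last_tweet_day = to_char(now(), 'YYYY-MM-DD'))
    / count(*) filter (where last_tweet_day is not null)
  ) as pct_who_tweeted_today
from
  mastodon_twitter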




Orie Steele

DID MEME

DID Meme Updates: Using a Joke Project to Experiment with Decentralized Identifiers Inside of Images

As open source community leaders, Transmute is committed to fostering an environment where cutting edge problem sets and ideas are vetted and improved by expert peers. In that spirit, Transmute routinely publishes articles directly from our staff, who are experts across technology and industry. These articles are self-chosen topics an employee feels passionate about, and give you an inside look at the limitless creativity and skill the Transmute team applies to our work.

Why DID Meme?

https://didme.me

https://github.com/OR13/didme.me

I’ve made a number of updates to a joke project we’ve been using to explore experimental concepts.

https://twitter.com/OR13b/status/1594880701987430405

Benefits over Previous Versions

It builds on did:jwk which is much simpler than did:key.

See the method spec for more details:

https://github.com/quartzjer/did-jwk

It uses a better steganography library that uses PNGs to transport hidden data:

https://github.com/paulmillr/steg

It uses an experimental library for post quantum cryptography, focused on representations for keys and signatures for Dilithium, Falcon and Sphincs:

https://github.com/transmute-industries/did-jwk-pqc

It uses an older weekend project we built that provides an emoji alphabet for base64url encodings:

https://github.com/OR13/demojid

We’ve enjoyed hiding public keys in images that have been generated from transformer models that are so popular on Twitter.

Here’s an example:

Orie Steele, Transmute’s CTO and Co-Founder, has managed security concerns for startups and publicly traded companies, building secure web applications in Finance, Energy, and Healthcare.

Connect with Orie on LinkedIn, Twitter, and GitHub

About Transmute: Building on the security and freedom that Web3 promised, Transmute provides all the benefits of decentralization to enterprise teams seeking a cost effective, interoperable, planet-forward experience provided by experts in technology and industry.

Transmute was founded in 2017, graduated from TechStars Austin in 2018, and is based in sunny Austin, Texas. Learn more about us at: http://www.transmute.industries

Connect with Transmute on LinkedIn and Twitter

DID MEME was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 28. December 2022

Foss & Crafts

54: Oops!

Everyone goofs sometimes. Today we talk accidents... some happy, some not!

Links:

Decaf coffee and history of penicillin, your pop-sci "accidents of history" stories of the day. Look, this is admittedly kind of a fluff episode. Have we linked to Worse is Better before? We did? In the lisp episode? And here's the Terminal Phase episode

Tuesday, 27. December 2022

Jon Udell

My belated introduction to Super Mario Brothers

On a hike today my friend noticed a clump of Amanita Muscaria and took away this glorious specimen. As we headed down the trail he said: “I can never see one of these without hearing the music from Super Mario Brothers — beep, boop.” He went on to explain that the spotted red mushroom was a coveted power-up that made Mario grow large.

I knew nothing about this. I have never played Super Mario Brothers, or any other iconic game, or really any computer-based game at all, except briefly in the early 1990s when I played Rogue during an office craze that lasted a few weeks, and around the same time played Putt-Putt with our young kids.

This seems unusual for someone like me. I have spent my adult life deeply engrossed in computer-based activity, and am often described to others, by friends and family, as a “computer guy.” That makes me a geek and/or nerd by definition. But I’ve never been comfortable with either of those terms.

Over the years I’ve known a great many others in the geek/nerd tribe who self-identify as gamers. It always puzzles me. I’m perfectly happy to spend long hours in front of a screen reading or writing or coding. But games? I simply lack that gene. It baffles me how other members of the tribe — especially adults who like me already put in long hours in front of screens — could then pile up more screen time playing games, when they could be outside hiking, or cycling, or gardening, or doing really anything else at all.

Perhaps this explains my reluctance to self-identify as a member of the geek/nerd tribe. If I really belonged I’d be a gamer. Since I’m not, I must be an impostor. I wonder if there’s a word — in some other language than English probably — for when you share certain core attributes with a tribe but not others?

Friday, 23. December 2022

Heres Tom with the Weather

check_webfinger!

The notes I made in Mastodon Discovery skipped over a noteworthy step. In general, after mastodon fetches and parses the “well known” webfinger document (the so-called JSON Resource Descriptor), there is a 3 step process to learn about the actor referenced in that document.

1. fetch_resource
2. check_webfinger!
3. create_account

As mentioned previously, in the first step, a very comprehensive json document for the actor is fetched and in the third step, an account is created for that actor if it does not already exist. However, between those two steps, mastodon does another webfinger lookup since, for instance, the domain serving the actor document may be a different domain than the one that originally served the first “well known” webfinger document. Prior to this check, some instance variables are set:

@uri = @json['id']
@username = @json['preferredUsername']
@domain = Addressable::URI.parse(@uri).normalized_host

The @uri instance variable is the location of the actor document and the @domain instance variable is the domain that serves the actor document. After these variables are set, the check is performed:

check_webfinger! unless only_key

This check enforces that the domain component of your identifier is the domain that serves your actor document. (It inspects the subject of the “well known” document and if the username and domain of the subject match the instance variables above, the ‘self’ resource link is required to be the same as the @uri instance variable. If the subject does not match, one more webfinger lookup for the redirection is allowed.)

So, from the perspective of mastodon, the domain component of the identifier you are known by is determined by which domain serves your actor document rather than the domain serving the original “well known” webfinger document. It seems that if your domain is a static site and you want to be known by an identifier associated with your domain, your domain needs to serve the actor document in addition to the “well known” webfinger document.

Thursday, 22. December 2022

Heres Tom with the Weather

IndieWeb and Mastodon

Thanks to EFF-Austin for the opportunity to talk along with Jon Lebkowsky about the relationship between Indieweb and the Fediverse. (Slides and video)

The meetup Indieweb and Mastodon: The Time is Now was coincidentally in the same room as IndieWebCamp Austin 2020.


Aaron Parecki

How to create a keyboard shortcut to export the current slide in Keynote

Lately I've been using Apple Keynote to create graphics for using in videos and blog posts. It's a quick way to arrange things on a page, copying and pasting most things just works, and there are enough built in shapes and tools to get the point across. However, after spending a full day creating graphics for a video, I found myself frustrated by the number of clicks required to export a single slide at a time.

I posted a quick note about this, and immediately got a lot of helpful replies! A lot of people suggested it should be possible to automate with KeyboardMaestro, a tool I was not previously familiar with. It was also suggested that I ask on the KeyboardMaestro forums, so I did, and also quickly received a very detailed reply!

In the course of reading through these replies, I realized that the thing they all had in common was AppleScript! One reply in particular from Chris contained a pretty detailed AppleScript that appears to have been adapted from the iworkautomation website. So I took that as a starting point and wanted to see how I could make it work without also using KeyboardMaestro.

Create a Quick Action in Automator

First, open Automator and create a new Quick Action.

In the new action that is created, set the "Workflow receives" option to no input, and choose Keynote.app in the application list.

Then search for the AppleScript action and drag it into the empty grey area.

Next, replace the (* Your script goes here *) text with the following AppleScript.

AppleScript to export the current slide as a PNG in Keynote

set the defaultDestinationFolder to (path to downloads folder)

tell application "Keynote"
  activate
  try
    set currentSlideNumber to the slide number of (get the current slide of the front document)

    --- skip all but the current slide
    tell the front document
      set skipped of ¬
        (every slide where the slide number is not equal to currentSlideNumber) to true
    end tell

    --- capture the document name
    set documentName to the name of the front document
    if documentName ends with ".key" then ¬
      set documentName to text 1 thru -5 of documentName

    --- create an empty folder to temporarily store the image
    tell application "Finder"
      set newFolderName to documentName & " slide " & currentSlideNumber
      set incrementIndex to 1
      repeat until not (exists folder newFolderName of defaultDestinationFolder)
        set newFolderName to documentName & "-" & (incrementIndex as string)
        set incrementIndex to incrementIndex + 1
      end repeat
      set the targetFolder to ¬
        make new folder at defaultDestinationFolder with properties ¬
          {name:newFolderName}
      set the targetFolderHFSPath to targetFolder as string
    end tell

    --- tell Keynote to export the current slide
    export the front document as slide images to file targetFolderHFSPath with properties ¬
      {image format:PNG, skipped slides:false}

    --- set all the slides back to unskipped
    tell the front document
      set skipped of every slide to false
    end tell

    --- move the file to the destination folder and delete the temporary folder
    tell application "Finder"
      set folderContents to every item of targetFolder
      repeat with theFile in folderContents
        set fileExtension to name extension of theFile
        set fileName to the name of theFile as string
        set splitCharacters to ".001"
        set ATID to AppleScript's text item delimiters
        set AppleScript's text item delimiters to splitCharacters
        set newFileName to text item 1 of fileName
        set AppleScript's text item delimiters to ATID
        set newFileName to newFileName & "." & fileExtension
        set name of theFile to newFileName
        -- display dialog quoted form of (name of theFile as string)
      end repeat
      move files of entire contents of targetFolder to defaultDestinationFolder with replacing
      delete targetFolder

      --- open the destination folder
      tell application "Finder"
        set thePath to file (defaultDestinationFolder & newFileName as string)
        activate
        reveal the thePath
      end tell
    end tell

  on error errorMessage number errorNumber
    display alert "EXPORT PROBLEM" message errorMessage
    error number -128
  end try
end tell

Setting up a keyboard shortcut

In the Automator app, save this quick action with a name like ExportCurrentSlide. You can quit Automator now.

Now open System Preferences, open the Keyboard options, and go to the Shortcuts tab, then click on Services. Your new action should appear at the bottom of the list.

Click on the "none" at the right side of the action and you can record a keyboard shortcut for the action. I used "Shift Command 9" for mine.

Close this, and go to Keynote to test it out!

Choose the slide you want to export and press your keyboard shortcut! You should see a quick sequence of hiding all the slides, un-hiding them, and then the Downloads folder should pop up with your file name including the Keynote file name and slide number!

There you go! A pure AppleScript solution, no third party apps needed! I just finished setting this up this morning and I'm already so much happier exporting slides now!


Jon Udell

Lists and people on Mastodon

I hadn’t thought to use Mastodon lists until I read the Frustration with lists chapter of Martin Fowler’s Exploring Mastodon, in which he writes:

I like lists because they allow me to divide up my timeline to topics I want to read about at different times. They are frustrating because the tools to manage them in Twitter are very limited, so it’s more hassle to set up the kind of environment I’d like. Mastodon also has lists, sadly its current management tools are equally bad.

This seemed like a good challenge for Steampipe. To tackle it, I first needed to add some new tables to the plugin to encapsulate the list APIs: mastodon_list and mastodon_list_account. I’ll save that story for another time. Here I’ll just show that together they enable queries like this.

select
  l.title as list,
  array_agg(a.username order by a.username) as people
from
  mastodon_list l
join
  mastodon_list_account a
on
  l.id = a.list_id
group by
  l.title

+--------------+--------------------------------------+
| list         | people                               |
+--------------+--------------------------------------+
| Academic     | ____, ______, ____, ___              |
| Education    | ___, ______ ___, ______              |
| Energy       | ___, ______, ____ __                 |
| Fediverse    | ____ __,                             |
| Humor        | ____, ____ __, ____ __               |
| Journalism   | ___ __, ___ ____, ___, ______        |
| Library      | __                                   |
| Net          | ___ __, _____, ___ __, __ __, ____   |
| Science      | __, ____ __, ______                  |
| Software     | ____ __, ______, ____ __             |
+--------------+--------------------------------------+

That’s a useful view, and I’ve now included it, but it didn’t address Martin’s specific need.

To manage these lists, I really need a display that shows every account that I follow in a table with its lists. That way I can easily see which list each account is on, and spot any accounts that aren’t on a list.

For that I needed to add a list column to the Following tab.

This was the original query.

select
  url,
  case when display_name = '' then username else display_name end as person,
  to_char(created_at, 'YYYY-MM-DD') as since,
  followers_count as followers,
  following_count as following,
  statuses_count as toots,
  note
from
  mastodon_following
order by
  person

The new version captures the above join of mastodon_list and mastodon_list_account, and joins that to the mastodon_following (people I follow) table. It’s a left join which means I’ll always get all the people I follow. If you’re not on a list, your list column will be null.

with data as (
  select
    l.title as list,
    a.*
  from
    mastodon_list l
  join
    mastodon_list_account a
  on
    l.id = a.list_id
),
combined as (
  select
    d.list,
    f.url,
    case when f.display_name = '' then f.username else f.display_name end as person,
    to_char(f.created_at, 'YYYY-MM-DD') as since,
    f.followers_count as followers,
    f.following_count as following,
    f.statuses_count as toots,
    f.note
  from
    mastodon_following f
  left join
    data d
  on
    f.id = d.id
)
select
  *
from
  combined
order by
  person

That query drives the new version of the Following tab.

It’s pretty sparse; I’ve only just begun adding people to lists. And honestly I’m not sure I’ll want to keep doing this curation. It’s the kind of thing that can become a burden, and I need to play around some more before I commit. Meanwhile, the default sort puts unlisted people first so they’re easy to find.
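A small variation on that query makes the unlisted people explicit. This is a sketch that reuses the same tables and join, filtering for rows where the list column is null:

with data as (
  select
    l.title as list,
    a.*
  from
    mastodon_list l
  join
    mastodon_list_account a
  on
    l.id = a.list_id
)
select
  case when f.display_name = '' then f.username else f.display_name end as person,
  f.url
from
  mastodon_following f
left join
  data d
on
  f.id = d.id
where
  d.list is null  -- people I follow who aren't on any list yet
order by
  person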

To provide a better way to find people who are on lists, I expanded the List tab in a couple of ways. It had included a dropdown of lists by which to filter the home timeline. Now that dropdown has counts of people on each list.

input "list" {
  type = "select"
  width = 2
  sql = <<EOQ
with list_account as (
  select
    l.title
  from
    mastodon_list l
  join
    mastodon_list_account a
  on
    l.id = a.list_id
),
counted as (
  select
    title,
    count(*)
  from
    list_account
  group by
    title
  order by
    title
)
select
  title || ' (' || count || ')' as label,
  title as value
from
  counted
order by
  title
EOQ
}

I also used this query to expand the List tab.

select
  l.title as list,
  array_to_string(
    array_agg(
      lower(a.username)
      order by lower(a.username)),
    ', ') as people
from
  mastodon_list l
join
  mastodon_list_account a
on
  l.id = a.list_id
group by
  l.title

The result is the list / people table on the right.

I know that some won’t cotton to this SQL-forward programming model. But for others who will, I wanted to show a few detailed examples to give you a sense of what’s possible at the intersection of Mastodon and Steampipe.

If you’re not tuned into SQL (like I wasn’t for a very long time), here’s your takeaway: as SQL goes, this stuff is not too scary. Yes there are joins, yes there’s an array_agg that transposes a column into a list. It’s not beginner SQL. But lots of people know how to use join and array_agg in these ways, lots more could easily learn how, and with SQL ascendant nowadays these are skills worth having.
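If array_agg is new to you, here is a tiny standalone illustration (my example, not from the dashboards): it collapses one row per list/person pair into one row per list, gathering the people into an array.

with list_account(list, person) as (
  values
    ('Science', 'alice'),
    ('Science', 'bob'),
    ('Humor',   'carol')
)
select
  list,
  array_agg(person order by person) as people
from
  list_account
group by
  list

Run against the real mastodon_list_account table, the same pattern produces the list/people view shown above.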



Monday, 19. December 2022

Jon Udell

Create your own Mastodon UX

I’ve been discussing Mastodon UX wishlists with some new acquaintances there. This excerpt from A Bloomberg terminal for Mastodon concludes with part of my own wishlist.

In a Mastodon timeline, a chatty person can dominate what you see at a glance. When we participate in social media we are always making bids for one another’s attention. As publishers of feeds it’s wise to consider how a flurry of items can overwhelm a reader’s experience. But it’s also useful to consider ways that feed readers can filter a chatty source. Steampipe’s SQL foundation affords an easy and natural way to do that. Here’s part of the query that drives the list view.

select distinct on (list, person, hour) -- only one per list/user/hour
  person,
  url,
  hour,
  toot
from
  data
order by
  hour desc, list, person

It was easy to implement a rule that limits each person to at most one toot per hour. Next steps here will be to apply this rule to other views, show the number of collapsed toots, and enable such rules on a per-person basis.

As a warmup exercise, I decided to first add a simple control for boosts that enables me to see my home timeline with or without boosts. To give technically-inclined readers a sense of what’s involved in doing this kind of thing with Steampipe, I’ll describe the changes here. I’m obviously biased but I find this programming environment to be accessible and productive. If it seems that way to you as well, you might want to try out some of the items on your own UX wishlist. And if you do, let me know how it goes!

Here are the original versions of the two files that I changed to add the new feature. First there’s home.sp which defines the dashboard for the home timeline.

dashboard "Home" {

  tags = {
    service = "Mastodon"
  }

  container {
    // a text widget with the HTML links that define the menu of dashboards
  }

  container {

    text {
      // a block that displays the HTML links that form a menu of dashboards
    }

    card {
      // a block that reports the name of my server
    }

    input "limit" {
      width = 2
      title = "limit"
      sql = <<EOQ
with limits(label) as (
  values
    ( '50' ),
    ( '100' ),
    ( '200' ),
    ( '500' )
)
select
  label,
  label::int as value
from
  limits
EOQ
    }

  }

  container {

    table {
      title = "home: recent toots"
      query = query.timeline
      args = [ "home", self.input.limit ]
      column "person" {
        wrap = "all"
      }
      column "toot" {
        wrap = "all"
      }
      column "url" {
        wrap = "all"
      }
    }

  }

}

And here’s the new version. It adds an input block called boosts, and passes its value to the referenced query.

dashboard "Home" {

  tags = {
    service = "Mastodon"
  }

  container {
    // a text widget with the HTML links that define the menu of dashboards
  }

  container {

    text {
      // a block that displays the HTML links that form a menu of dashboards
    }

    card {
      // a block that reports the name of my server
    }

    input "limit" {
      // as above
    }

    input "boosts" {
      width = 2
      title = "boosts"
      sql = <<EOQ
with boosts(label, value) as (
  values
    ( 'include', 'include' ),
    ( 'hide', ' ' ),
    ( 'only', ' 🢁 ' )
)
select
  label,
  value
from
  boosts
EOQ
    }

  }

  container {

    table {
      // as above
      args = [ "home", self.input.limit, self.input.boosts ]
    }

  }

}

Steampipe dashboards are built with two languages. HCL (Hashicorp configuration language) defines the UX widgets, and SQL fills them with data. In this case we’re selecting static values for the boosts input. But any Steampipe query can run there! For example, here is the input block I use on the dashboard that filters the timeline by the list to which I’ve assigned people.

input "list" {
  type = "select"
  width = 2
  title = "search home timeline"
  sql = <<EOQ
select
  title as label,
  title as value
from
  mastodon_list
order by
  title
EOQ
}

Now here is the referenced query, query.timeline, from the file query.sp which contains queries used by all the dashboards.

query "timeline" {
  sql = <<EOQ
with toots as (
  select
    account_url as account,
    case
      when display_name = '' then user_name
      else display_name
    end as person,
    case
      when reblog -> 'url' is null then content
      else reblog_content
    end as toot,
    to_char(created_at, 'MM-DD HH24:MI') as created_at,
    case
      when reblog -> 'url' is not null then '🢁'
      else ''
    end as boosted,
    case
      when in_reply_to_account_id is not null then
        ' 🢂 ' || ( select acct from mastodon_account where id = in_reply_to_account_id )
      else ''
    end as in_reply_to,
    case
      when reblog -> 'url' is not null then reblog ->> 'url'
      else url
    end as url
  from
    mastodon_toot
  where
    timeline = $1
  limit $2
)
select
  account,
  person || case when in_reply_to is null then '' else in_reply_to end as person,
  boosted || ' ' || toot as toot,
  url
from
  toots
order by
  created_at desc
EOQ
  param "timeline" {}
  param "limit" {}
}

And here is the new version of that query.

query "timeline" {
  sql = <<EOQ
with toots as (
  // as above
),
boosted as (
  select
    $3 as boost,
    boosted,
    account,
    in_reply_to,
    person,
    toot,
    url
  from
    toots
  order by
    created_at desc
)
select
  account,
  person || case when in_reply_to is null then '' else in_reply_to end as person,
  boosted || ' ' || toot as toot,
  url
from
  boosted
where
  boost = boosted
  or boost = 'include'
  or boost = 'n/a'
EOQ
  param "timeline" {}
  param "limit" {}
  param "boost" {}
}

The original version uses a single CTE (aka common table expression aka WITH clause), toots, to marshal data for the concluding SELECT. The new version inserts another CTE, boosted, into the pipeline. It uses $3 to reference param "boost" {}, which maps to the self.input.boosts passed from home.sp.

The SQL code is all standard. Postgres is the engine inside Steampipe, and I sometimes use Postgres-specific idioms, but I don’t think any of those are happening here.

The HCL code may be unfamiliar. Steampipe uses HCL because its core audience is DevSecOps pros who are familiar with Terraform, which is HCL-based. But it’s a pretty simple language that can be used to describe all kinds of resources. Here the resources are widgets that appear on dashboards.

The other thing to know, if you want to roll up your sleeves and try building your own dashboards, is that the developer experience is — again in my biased opinion! — pretty great because if you’re using an autosaving editor you’ll see your changes (to both HCL and SQL code) reflected in realtime.

To illustrate that, here’s the screencast we included in our blog post introducing the dashboard system.

Not shown there, because we wanted to focus on the happy path, is realtime feedback when your SQL queries provoke Postgres errors. The experience feels very much like the one Bret Victor champions in Inventing on Principle. The core principle: “Creators need an immediate connection to what they’re creating.”

Here’s the wrong way that too often constrains us.

If there’s anything wrong with the scene, or if I go and make changes, or if I have further ideas, I have to go back to the code, and I edit the code, compile and run, see what it looks like. Anything wrong, I go back to the code. Most of my time is spent working in the code, working in a text editor blindly, without an immediate connection to this thing, which is what I’m actually trying to make.

And here is the right way.

I’ve got this picture on the side, and the code on the side, and this part draws the sky and this draws the mountains and this draws the tree, and when I make any change to the code, the picture changes immediately. So the code and the picture are always in sync; there is no compile and run. I just change things in the code and I see things change in the picture.

We want to work the right way wherever we can. The experience isn’t available everywhere, yet, but it is available in Steampipe where it powerfully enables the experimentation and prototyping that many of us are inspired to do as we delve into Mastodon.

If you want to try this for yourself, please check out the setup instructions for the plugin that maps Mastodon APIs to Postgres tables, and the dashboards that use those tables, and ping me (on Mastodon if you like!) with any questions you may have.



Sunday, 18. December 2022

Jon Udell

A Bloomberg terminal for Mastodon

As I mentioned last time, the Steampipe dashboard for Mastodon has evolved in unexpected ways. I imagined that the components — a plugin that maps Mastodon APIs to Postgres foreign tables, and a suite of views that query the APIs — would combine to enable a broad overview of activity in the Fediverse. That didn’t pan out for two reasons.

First, I learned that the Mastodon community didn’t appreciate the kind of surveillance required for such analysis. That was the original community, I should stress, and things have changed dramatically, but I want to respect the original ethos. Plenty of people will, nevertheless, crawl and index the Fediverse, but I don’t need to put my shoulder to that wheel. And if I did I’d be pushing Steampipe out of its sweet spot: realtime acquisition, querying, and visualization of API-sourced data.

Second, Mastodon’s API allows 300 requests every 5 minutes. You can use Steampipe in batch mode to defeat that limit, and you can store data permanently in its Postgres database, but that cuts across the grain with respect to both Steampipe and Mastodon. All Mastodon clients are subject to the same API rate limit. If you use the web app, or one of the phone apps, you will likely never have seen a message announcing that you’ve hit the limit and need to wait a few minutes. I never saw that message until I started querying the API with Steampipe while also using the web app.

So if Mastodon culture and tech resist deep data mining, and the system is optimized for clients that live within an API budget of 300 requests every 5 minutes, what kind of Mastodon client could Steampipe enable? It wouldn’t be a conventional client because Steampipe is a read-only system. The path forward would be some kind of reader, or browser, that augments the interactive apps.

The outcome, so far, is a suite of dashboards that display tabular views (along with some charts) of the home, local, and federated timelines, of my toot history and my favorites, of my follows and followers, of my notifications, of searches for terms, people, and hashtags, and of the timelines formed by the lists to which I’ve assigned people I follow. These are all HTML tables rendered by Steampipe’s dashboard server. The columns are all sortable, and the cells of the tables can contain only links or plain text.

Given that the toot content returned from the Mastodon API is HTML, the plain-text-only constraint felt, initially, like a blocker. No images? No links in toot content? What good is that?

Some constraints are worth embracing, though, and that may prove true here. The views created this way put a lot of information onto the screen. Here’s my default view in the stock client.

At a glance I can see three items on the home timeline, and if I want to scroll through 100 items I can only do so awkwardly in small gulps.

Here’s my home timeline in the Steampipe dashboard. I can see a dozen items at a glance, and can easily scan 100 items in gulps of that size.

When I described this effect to Greg Wilson he gave me the title for this post: “That sounds like the Bloomberg terminal for Mastodon.” I’ve never used one, and I’m aware that its design is often derided as a UX disaster, but as I understand it the product is built to enable traders to scan fast-moving data feeds from many different sources. In that sense I do think it’s an interesting and useful comparison.

The underlying principle is one I’ve learned from Edward Tufte: present information at maximum density. Our brains are built to take in a lot of information at a glance, and if it’s organized well we can do that very effectively. It feels like that’s happening for me when I scan these densely-packed views of Mastodon activity.

To enhance the effect, I’ve begun to apply filters. In a Mastodon timeline, for example, a chatty person can dominate what you see at a glance. When we participate in social media we are always making bids for one another’s attention. As publishers of feeds it’s wise to consider how a flurry of items can overwhelm a reader’s experience. But it’s also useful to consider ways that feed readers can filter a chatty source. Steampipe’s SQL foundation affords an easy and natural way to do that. Here’s part of the query that drives the list view.

select distinct on (list, user_name, person, hour) -- only one per list/user/hour
  person,
  url,
  hour,
  toot
from
  data
order by
  hour desc, list, person

It was easy to implement a rule that limits each person to at most one toot per hour. Next steps here will be to apply this rule to other views, show the number of collapsed toots, and enable such rules on a per-person basis.
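One way the collapsed-toot count could work is to count, per person and hour, how many toots the one-per-hour rule hides. This is my assumption about a possible implementation, not the dashboard’s current code; the values CTE below is a stand-in for the dashboard’s real data CTE, whose definition isn’t shown here.

with data(list, person, hour, toot) as (
  -- stand-in rows for illustration only
  values
    ('Fediverse', 'alice', '2022-12-17 09:00', 'first toot'),
    ('Fediverse', 'alice', '2022-12-17 09:00', 'second toot'),
    ('Fediverse', 'bob',   '2022-12-17 09:00', 'only toot')
)
select
  list,
  person,
  hour,
  count(*) - 1 as collapsed_toots  -- toots hidden by the one-per-hour rule
from
  data
group by
  list, person, hour
having
  count(*) > 1
order by
  hour desc, list, person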

There are always links into the Mastodon web app, and I follow them when I want to view images, boost someone, or reply to someone. The dashboards help me scan a lot of Mastodon activity quickly, and decide which items I want to interact with. Your 500-character toot is all you’ve got to grab my attention, and I’ll only see it as an unformatted chunk of plain text. That’s a pretty severe constraint, and not everyone will want to embrace it, but it’s working pretty well for me so far.

I expect that our dashboard system will support formatted text and images in cells of HTML tables. When it does I’d like to make it an option you can turn on or off in Mastodon dashboards. What should the default be? I suspect I’ll want plain text and no images, especially if image captions can appear along with the text of toots. Some of the original Mastodon cultural norms aren’t surviving the onslaught of new people, but writing descriptions of images is one that’s held up so far, and it’s a wonderful thing. So write a short thoughtful post, write a caption for your image if you include one, and if you capture my attention I’ll click through to view and interact.



Friday, 16. December 2022

Bill Wendels Real Estate Cafe

Boston Realty Party: What should be thrown overboard to reform the real estate cartel?

OPEN LETTER to Consumer Advocacy Community: (Excerpt from email to legal aid organization) As #PriceCorrrection2022 unfolds, the magnitude of consumer harm is growing and regulatory…

The post Boston Realty Party: What should be thrown overboard to reform the real estate cartel? first appeared on Real Estate Cafe.

Phil Windleys Technometria

Wallets and Agents

Summary: This post is adapted from my forthcoming book, Learning Digital Identity, from O'Reilly Media.

Our physical wallets are, historically, for holding currency. But that may be the least interesting use case for wallets. Many of the things people put in their wallets represent relationships they have and authorizations they hold. Most people don't often leave home without their wallet.

But the analogy to a physical wallet can only take us so far, because as physical beings, our natural capabilities are multitude. In the digital world, we need tools to accomplish almost anything useful. The name wallet1 for the software we use to interact digitally doesn't do the tool justice.

A digital identity wallet is a secure, encrypted database that collects and holds keys, identifiers, and verifiable credentials (VCs). The wallet is also a digital address book, collecting and maintaining its controller's many relationships. The wallet is coupled with a software agent that speaks the protocols necessary to engage with others.

Wallets and agents are not the same thing, even though they're often conflated. Agents are tools for taking action. Wallets are where stuff is stored. Still, most people just say "wallet," even when they mean "wallet and agent." For this post, when I say "wallet" I mean wallet and when I say "agent" I mean agent.

Identity agents are software services that manage all the stuff in the wallet. Agents store, update, retrieve, and delete all the artifacts that a wallet holds. Beyond managing the wallet, agents perform many other important tasks:

Sending and receiving messages with other agents

Requesting that the wallet generate cryptographic key pairs

Managing encrypted data interactions with the wallet

Performing cryptographic functions like signing and verifying signatures

Backing up and retrieving data in the wallet

Maintaining relationships by communicating with other agents when DID documents are updated

Routing messages to other agents

The relationship between identity wallets and agents

This figure shows the relationship between an agent, a wallet, and the underlying operating system. While most current implementations pair a single agent with a single wallet, the presence of an API means that it's possible for one agent to use several wallets, or for multiple agents to access one wallet. Some specialized agents might not even need a wallet, such as those that just perform routing, although most will at least need to store their own keys.

The key-management functions in the wallet include actions on cryptographic keys like generation, storage, rotation, and deletion. Key management is performed in cooperation with the operating system and underlying hardware. Ideally, the operating system and hardware provide a secure enclave for key storage and a trusted execution environment for performing key-management functions.

The basic functions shown in the diagram might not seem to have much to do with identity. Identity-related activities like authentication and credential exchange are built on top of these basic functions. The agent can issue, request, and accept VCs. The agent also presents and verifies credentials. Specialized messages perform these activities.

Agents and Credential Exchange

Agents speak a protocol called DIDComm (DID-based communication) that provides a secure communications layer for the exchange of identity information via verifiable credentials (VCs). Agents speak DIDComm to each other without a third-party intermediary (i.e., they're peer-to-peer). Because of DIDComm's flexibility and the ability to define protocols on top of DIDComm messaging, it promises to be as important as the identity layer it enables. The DIDComm protocol is governed by the DIDComm specification, hosted at the Decentralized Identity Foundation. The current ratified version is 2.0.

The specification's opening sentence states that "the purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs." Note that the specification describes DIDComm as a communications methodology. This means that DIDComm is more than just a way to send a message or chat with someone else. DIDComm messaging allows individual messages to be composed into application-level protocols and workflows. This makes DIDComm messaging a foundational technology for performing different kinds of interactions within the framework of trust that a DID-based relationship implies.

To enable the exchange of verifiable credentials, the agent, using the wallet as secure storage, performs three primary activities:

Exchanging DIDs with other agents

Requesting and issuing credentials

Requesting and presenting credential proofs

The agent does these activities using protocols that run on top of DIDComm. DIDComm's job is to create a secure, mutually authenticated channel for exchanging DIDComm messages. The protocols that operate inside of it carry out specific activities.

Exchanging DIDs

Agents take care of the tedious and tricky job of exchanging DIDs between parties who want to communicate, so that people don't have to get entangled in the details of how DIDs work: how they're created, stored, and validated, or the work that's necessary when one of the parties needs to rotate keys. The DIDComm v2 spec is capable of exchanging DIDs without a separate protocol, so the process can be automated by smart identity agents working on behalf of the various parties.

Requesting and Issuing Credentials

Requesting and issuing credentials is defined in Aries RFC 0036: Issue Credential Protocol 1.0. The protocol "formalizes messages used to issue a credential." The protocol describes four primary messages: propose-credential, offer-credential, request-credential, and issue-credential. The protocol also defines the state machine that the agent operates in response to these messages. These messages, combined with the state machine, allow the credential issuer and the credential holder to engage in the ceremonies necessary for the issuer to issue a credential to the holder.
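As a rough illustration of how those messages drive a state machine, here is a deliberately simplified holder-side sketch in Python; the state names are mine, not the RFC's normative vocabulary.

# Simplified holder-side state machine for the issue-credential flow,
# keyed on the four message types named in Aries RFC 0036. The state
# names are illustrative; see the RFC for the normative definitions.
HOLDER_TRANSITIONS = {
    ("start", "propose-credential"): "proposal-sent",    # holder proposes first
    ("start", "offer-credential"): "offer-received",     # or issuer opens with an offer
    ("proposal-sent", "offer-credential"): "offer-received",
    ("offer-received", "request-credential"): "request-sent",
    ("request-sent", "issue-credential"): "credential-received",
}

def holder_next_state(state: str, message_type: str) -> str:
    if (state, message_type) not in HOLDER_TRANSITIONS:
        raise ValueError(f"{message_type!r} is not allowed in state {state!r}")
    return HOLDER_TRANSITIONS[(state, message_type)]

# An issuer-initiated flow, as seen by the holder:
state = "start"
for msg in ("offer-credential", "request-credential", "issue-credential"):
    state = holder_next_state(state, msg)
    print(msg, "->", state)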

Requesting and Presenting Credential Proofs

Requesting and presenting credential proofs is defined in Aries RFC 0037: Present Proof Protocol 1.0. The protocol formalizes and generalizes message formats used for presenting a proof of the attributes in a credential. The protocol describes three primary messages: propose-proof, request-proof, and present-proof. The protocol also defines the state machine that the agent operates in response to these messages. These messages and state machine allow the credential holder and the credential verifier to engage in the ceremonies necessary for the holder to present a credential proof to the verifier.
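The present-proof exchange can be sketched the same way from the verifier's side; again, the state names are illustrative rather than the RFC's exact terms.

# Simplified verifier-side state machine for the present-proof flow,
# keyed on the three message types named in Aries RFC 0037.
VERIFIER_TRANSITIONS = {
    ("start", "propose-proof"): "proposal-received",   # holder proposes first
    ("start", "request-proof"): "request-sent",        # or verifier opens with a request
    ("proposal-received", "request-proof"): "request-sent",
    ("request-sent", "present-proof"): "presentation-received",
}

state = "start"
for msg in ("propose-proof", "request-proof", "present-proof"):
    state = VERIFIER_TRANSITIONS[(state, msg)]
    print(msg, "->", state)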

The Nature of Wallets and Agents

Agents and wallets, working together, perform the work necessary for people, businesses, and devices to create mutually-authenticated, secure connections and use those connections to exchange verifiable credentials. People, businesses, and devices all have different needs and so they'll use different agents and wallets.

People will generally use agents and wallets running on smart phones, laptops, or other personal devices. Your Amazon Alexa, for example, could have an agent/wallet pair installed on it to act on your behalf. Most people will have agents on every device, and most of these will have wallets associated with them. Wallets will use device secure enclaves to store sensitive cryptographic information. People will also have agents and wallets in the cloud. All of the agents and wallets under a person's control will interoperate with each other and perform different roles. For example, cloud-based agents are needed to route DIDComm messages to devices that may not have a routable IP address.

Businesses will use enterprise agents that are integrated with other enterprise systems like CRM, ERP, and IAM systems. The wallets associated with these will be more sophisticated than personal wallets since they have to manage DIDs and their associated keys that various employees, departments, and processes use. The ability to delegate authority and permission actions will be more rigorous than is needed in a personal wallet. A large business might operate thousands of enterprise agents for various business purposes.

Devices will use agents with associated wallets to create relationships and perform credential exchange with the device owner, other devices, their manufacturer, and other people or companies. How they operate and their sophistication depend in great measure on the nature of the device and its expected function. I wrote about the reasons for using agents as part of IoT devices in The Self-Sovereign Internet of Things.

Despite the differences these agents exhibit, they all run the same protocols and use DIDComm messaging. There are no intermediaries—the connections are all peer-to-peer. Every agent works on behalf of the entity that controls it. To get a feel for how they might interoperate, see Operationalizing Digital Relationships and SSI Interaction Patterns.

DIDComm-capable agents can be used to create sophisticated relationship networks that include people, institutions, and things. The relationships in that network are rich and varied—just like relationships in the real world. Smart agents allow people, businesses, and devices to create, manage, and utilize secure, trustworthy communications channels with anyone online without reliance on any third party. The agent serves as a flexible digital tool that people can use to manage their digital life.

Notes

I've heard various people object to the term wallet, but so far, no one has come up with anything else that has stuck, so for now, wallet is the word the industry uses.

Tags: ssi identity wallets agents verifiable+credentials didcomm

Thursday, 15. December 2022

Altmode

Civil Rights Journey Day 10: Montgomery to Home

Thursday, October 27, 2022 With the other members of our group having already departed, we joined our friends Liz and Peter to explore a few more places in Montgomery. We started our day at the Civil Rights Memorial Center, part of the Southern Poverty Law Center headquarters. The Memorial Center wasn’t large, but had some […]

Thursday, October 27, 2022

With the other members of our group having already departed, we joined our friends Liz and Peter to explore a few more places in Montgomery. We started our day at the Civil Rights Memorial Center, part of the Southern Poverty Law Center headquarters. The Memorial Center wasn’t large, but had some excellent multimedia presentations on the civil rights struggle, both historical and ongoing.

From there we walked over to the former Greyhound bus station associated with the Freedom Riders. The station is now a small museum, the Freedom Rides Museum, and has signs on the outside with a detailed history. While there, a Black woman drove up to take some video of the bus station and we had a somewhat surprising conversation with her. She was not entirely pleased with the placement of all of the civil rights museums and displays in Montgomery — she thought that the Black community needed to strengthen themselves and didn’t need to be constantly reminded of their struggles.

Freedom Rides Museum

From there we walked to the Rosa Parks Museum a few blocks away. The museum tour consisted of a couple of multimedia presentations describing the background and events of the day she refused to give up her seat on a city bus, which gave us a much more detailed understanding of the events of that day and the questionable legality of her arrest. This was followed by a short self-guided tour of artifacts relating to the tension between the Black community and the police.

It was then about time to make our way to the airport for our flights home. We returned to our hotel to retrieve our bags and got a Lyft ride to the airport. The Montgomery Regional Airport is a fairly small airport of about five gates, located only about 15 minutes from downtown. TSA inspection was quick and we had a small lunch in the airport’s one restaurant. Co-located with the airport is an Alabama Air National Guard base, which surprised us as several pairs of fighter planes — F-16s I think — took off with considerable noise and speed.

This article is the final installment in a series about our recent travels to the US South. To see the introductory article in the series, click here.

Wednesday, 14. December 2022

Altmode

Civil Rights Journey Day 9: Montgomery

Wednesday, October 26, 2022 Today was the last day of the official tour (we are staying a day longer) and a very powerful day. After breakfast at the hotel, we went to the Legacy Museum, a large and modern museum established by the Equal Justice Initiative to tell the story of Black experience from enslavement […]

Wednesday, October 26, 2022

Today was the last day of the official tour (we are staying a day longer) and a very powerful day. After breakfast at the hotel, we went to the Legacy Museum, a large and modern museum established by the Equal Justice Initiative to tell the story of Black experience from enslavement to mass incarceration (the current situation). The museum proceeded roughly chronologically from the slave trade, slavery, the Civil War, reconstruction, Jim Crow segregation, and lynchings, to the current disproportionate sentencing and incarceration of Black people. The images and words used in the museum were very strong and intentionally uncomfortable for many. We were somewhat prepared by our recent experience, but the story was very uncomfortable nonetheless.

Following the museum, we traveled to another of Equal Justice Initiative’s projects, the National Memorial for Peace and Justice. The memorial, which is outdoors, includes hundreds of large corroded iron rectangular blocks that, by county, list the names and dates of lynching victims up to 1950. There was also a section to recognize counties and states that had put up markers recognizing the deaths of these victims. I was struck by the number of markers and names of people who had been lynched; this presentation of the names helps one appreciate the magnitude of the lynching problem.

Memorial blocks at the National Memorial for Peace and Justice

Sculptures at Michelle Browder’s studio

We then traveled to the studio (for lack of a better name) of Michelle Browder, who led us on our tour yesterday, for lunch and discussion. After lunch, Michelle showed us her artwork, which included large sculptures and an old car that had been intricately decorated by use of a plasma cutter on its body. Her artwork and advocacy were recently highlighted in People Magazine.

We then sat down for a final debrief on our memories of the week before the first of our co-travelers had to leave for the airport.

Afterward, our friends Peter and Liz joined us in visiting the Museum of Alabama, located in the state archives building near the capitol. The museum was well presented, but we were now better equipped to read between the lines of many of the descriptions. For example, “commerce” often meant the slave trade.

After returning to the hotel to clean up, we went to a nearby brewpub, the Tower Taproom. It had quite good pub food, and an interesting array of self-service taps that were enabled using a card they issued at the cash register. The taps measured beer (a good selection, plus a few wines and ciders) by the ounce, and you were charged for the amount you poured.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.


reb00ted

The Push-Pull Publish-Subscribe Pattern (PuPuPubSub)

(Updated Dec 14, 2022 with clarifications and a subscriber implementation note.) Preface The British government clearly has more tolerance for humor when naming important things than the W3C does. Continuing in the original fashion, thus this name. The Problem The publish-subscribe pattern is well known, but in some circumstances, it suffers from two important problems: When a subscr

(Updated Dec 14, 2022 with clarifications and a subscriber implementation note.)

Preface

The British government clearly has more tolerance for humor when naming important things than the W3C does. Continuing in that spirit, hence this name.

The Problem

The publish-subscribe pattern is well known, but in some circumstances, it suffers from two important problems:

When a subscriber is temporarily not present, or cannot be reached, sent events are often lost. This can happen, for example, if the subscriber computer reboots, falls off the network, goes to sleep, has DNS problems and the like. Once the subscriber recovers, it is generally not clear what needs to happen for the subscriber to catch up to the events it may have missed. It is not even clear whether it has missed any. Similarly, it is unclear for how long the publisher needs to retry to send a message; it may be that the subscriber has permanently gone away.

Subscriptions are often set up as part of the following pattern:

A resource on the Web is accessed. For example, a user reads an article on a website, or a software agent fetches a document.

Based on the content of the obtained resource, a decision is made to subscribe to updates to that resource. For example, the user may decide that they are interested in updates to the article on the website they just read.

There is a time lag between the time the resource has been accessed and the time the subscription becomes active, creating a race condition during which update events may be missed.

While these two problems are not always significant, there are important circumstances in which they are, and this proposal addresses those circumstances.

Approach to the solution

We augment the publish-subscribe pattern in the following way:

All events, as well as the content of the resource whose changes are to be tracked, are time-stamped. Also, each event identifies the event that directly precedes it (that way, the subscriber can detect if it missed something). Alternatively, a monotonically increasing sequence number could be used.

The publisher stores the history of events emitted so far. For efficiency reasons, this may be shortened to some time window reaching to the present, as appropriate for the application; for example, all events in the last month. (Similar to how RSS/Atom feeds are commonly implemented.)

The publisher provides the subscriber with a query interface to that history, with a “since” time parameter, so the subscriber can obtain the sequence of events emitted since a certain time. (Strictly, events from “right after” the provided time, not including the provided time itself.)

When subscribing, in addition to the callback address, the subscriber provides to the publisher:

a time stamp, and a subscription id.

Further, the actual sending of an event from the publisher to the subscriber is considered to be a performance optimization, rather than core to the functionality. This means that if the event cannot be successfully conveyed (see requirements above), it is only an inconvenience and inefficiency rather than a cause of lost data.

Details

About the race condition

The future subscriber accesses resource R and finds time stamp T0. For example, a human reads a web page whose publication date is April 23, 2021, 23:00:00 UTC.

After some time passes, the subscriber decides to subscribe. It does this with the well-known subscription pattern, but in addition to providing a callback address, it also provides time stamp T0 and a unique (can be random) subscription id. For example, a human’s hypothetical news syndication app may provide an event update endpoint to the news website, and time T0.

The publisher sets up the subscription, and immediately checks whether any events should have been sent between (after) T0 and the present. (It can do that because it stores the update history.) If so, it emits those events to the subscriber, in sequence, before continuing with regular operations. As a result, there is no more race condition between subscription and event.

When sending an event, the publisher also sends the subscription id.
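A minimal publisher-side sketch of the behavior described so far might look like the following Python; the class and method names are mine rather than part of any specification, and a real publisher would persist its history and deliver events over HTTP callbacks.

import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    timestamp: float
    payload: dict

class Publisher:
    """Illustrative PuPuPubSub publisher: it keeps an event history,
    answers 'since' queries, and replays missed events on subscribe."""

    def __init__(self):
        self.history: list[Event] = []                                  # ordered by timestamp
        self.subscribers: dict[str, Callable[[str, Event], None]] = {}

    def publish(self, payload: dict) -> Event:
        event = Event(time.time(), payload)
        self.history.append(event)
        # Pushing is a performance optimization: a failed delivery is only
        # an inefficiency, because subscribers can catch up via events_since().
        for sub_id, callback in self.subscribers.items():
            try:
                callback(sub_id, event)
            except Exception:
                pass
        return event

    def events_since(self, since: float) -> list[Event]:
        # Events strictly after 'since', not including 'since' itself.
        return [e for e in self.history if e.timestamp > since]

    def subscribe(self, sub_id: str, since: float,
                  callback: Callable[[str, Event], None]) -> None:
        self.subscribers[sub_id] = callback
        # Close the race condition: replay anything emitted after 'since'.
        for event in self.events_since(since):
            callback(sub_id, event)

    def unsubscribe(self, sub_id: str) -> None:
        self.subscribers.pop(sub_id, None)

Note that the only subscriber-specific state kept here is the callback registration itself; the history is shared, which is what lets a returning subscriber catch up with a single "since" query.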

About temporary unavailability of the subscriber

After a subscription is active, assume the subscriber disappears and new events cannot be delivered. The publisher may continue to attempt to deliver events for as long as it likes, or stop immediately.

When the subscriber re-appears, it finds the time of the last event it had received from the publisher, say time T5. It queries the event history published by the publisher with parameter T5 to find out what events it missed. It processes those events and then re-subscribes with a later starting time stamp corresponding to the last event it received (say T10). When it re-subscribes, it uses a different subscription id and cancels the old subscription.

After the subscriber has re-appeared, it ignores/rejects all incoming events with the old subscription id.

Subscriber implementation notes

The subscriber receives events exclusively through a single queue for incoming events. This makes implementing an incoming-event handler very simple, as it can simply process events in order.

The event queue maintains the timestamp of the last event it successfully added. When a new event arrives, the queue accepts this event but only if the new event is the direct follower of the last event it successfully added. If it is not, the incoming event is discarded. (This covers both repeatedly received events and when some events were missed.)
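A sketch of that queue discipline, assuming each incoming event carries its own timestamp plus the timestamp of the event that directly precedes it (the names here are mine):

from collections import deque
from dataclasses import dataclass

@dataclass
class IncomingEvent:
    timestamp: float
    previous_timestamp: float   # timestamp of the event that directly precedes this one
    payload: dict

class IncomingQueue:
    """Illustrative subscriber-side queue: an event is accepted only if it
    is the direct follower of the last event successfully added."""

    def __init__(self, last_timestamp: float = 0.0):
        self.last_timestamp = last_timestamp
        self.events: deque[IncomingEvent] = deque()

    def offer(self, event: IncomingEvent) -> bool:
        if event.previous_timestamp != self.last_timestamp:
            # A duplicate or a gap: discard it and let the timer-driven
            # catch-up query (described below) repair the stream.
            return False
        self.events.append(event)
        self.last_timestamp = event.timestamp
        return True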

The subscriber also maintains a timer with a countdown from the last time an event was successfully added to the incoming queue. (The time constant of the timer is application-specific, and may be adaptive.) When the timeout occurs, the subscriber queries the publisher, providing the last successful timestamp. If no updates are found, nothing happens. If updates are found, it is fair to consider the existing subscription to have failed. Then:

The subscriber itself inserts the obtained “missed” events into its own incoming event queue, from where they are processed.

The subscriber cancels the existing subscription.

The subscriber creates a new subscription, with the timestamp of the most recent successfully-inserted event.

Observations

Publishers do not need to remember subscriber-specific state. (Thanks, Kafka, for showing us!) That makes it easy to implement the publisher side.

From the perspective of the publisher, delivery of events works both for subscribers that can receive callbacks and for those that need to poll. (It sort of emulates RSS, except that a starting time parameter is provided by the client instead of a uniform window decided on by the publisher, as in RSS.)

Subscribers only need to keep a time stamp as state, something they probably have already anyway.

Subscribers can implement a polling or push strategy, or dynamically change between those, without the risk of losing data.

Publishers are not required to push out events at all. If they don’t, this protocol basically falls back to polling. This is inefficient but much better than the alternative and can also be used in places where, for example, firewalls prevent event pushing.

Feedback?

Would love your thoughts!

Tuesday, 13. December 2022

Altmode

Civil Rights Journey Day 8: Selma to Montgomery

Tuesday, October 25, 2022 After getting up and packing for today’s trip to Montgomery, we loaded up the bus and made the very short trip to the Selma Center for Nonviolence, Truth, and Reconciliation (SCNTR). We had breakfast at the Center, followed by training, focusing on the meaning and application of nonviolence, from the Center’s […]

Tuesday, October 25, 2022

After getting up and packing for today’s trip to Montgomery, we loaded up the bus and made the very short trip to the Selma Center for Nonviolence, Truth, and Reconciliation (SCNTR). We had breakfast at the Center, followed by training, focusing on the meaning and application of nonviolence, from the Center’s director, Ainka Jackson. One of the major themes was the difference between nonviolence, which is an active role, and the passive role of non-violence (or avoidance of violence).

Lunch was also at SCNTR and featured a powerful presentation from Callie Greer on nonviolence and forgiveness. Many years ago, her son was shot and killed in an argument with one of his peers. When the perpetrator was put on trial, she asked in her victim statement that he not be sentenced to death or to life in prison, but rather be given a minimum sentence. The judge, stunned, complied. She met the perpetrator after he had served his prison time and asked for his mother to contact her. Callie and the perpetrator’s mother continue talking to this day.

Another situation that Callie related was that her daughter found a breast lump. Due to the lack of appropriate healthcare, the cancer was not diagnosed until it had progressed too far, and her daughter died. Callie is finding it difficult to forgive the healthcare system (specifically, the lack of Medicaid in Alabama) for this.

Alabama River and Pettus Bridge seen from SCNTR

Dexter Avenue King Memorial Baptist Church

After the lunch discussion, we rode the bus across the Pettus bridge and on to Montgomery, Alabama. Downtown Montgomery is a considerably more attractive capital than downtown Jackson, probably due to their focus on attracting conventions and other travelers. When we arrived, we met Michelle Browder, who in addition to being our tour guide is an entrepreneur. She is focused on telling a more complete story about the role of Black women as test subjects in the development of gynecological treatment, and how they were victimized in undergoing this experimentation.

Michelle led us on a tour that began at the waterfront of the Alabama River, and gave us a lesson on reading between the lines of the historical markers in town. She pointed out in particular three markers that highlighted the slave trade and Montgomery’s role in it. There were to be more of these markers, but of course the city stopped the project.

After the tour, we returned to check into the hotel (the Renaissance, probably the nicest hotel in town), and gathered at Central, a nearby restaurant that is also among the best. This was the final group dinner of the trip, because most of the group will be leaving tomorrow afternoon.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Monday, 12. December 2022

Altmode

Civil Rights Journey Day 7: Jackson to Selma

Monday, October 24, 2022 After a buffet breakfast at our hotel, we had an hour or so with Mike Espy, former Secretary of Agriculture, Congressman, and Senate candidate. Secretary Espy discussed the political climate in Mississippi in quite some detail, and made it yet more clear that Mississippi politics are more complicated than one might […]

Monday, October 24, 2022

After a buffet breakfast at our hotel, we had an hour or so with Mike Espy, former Secretary of Agriculture, Congressman, and Senate candidate. Secretary Espy discussed the political climate in Mississippi in quite some detail, and made it yet more clear that Mississippi politics are more complicated than one might think, and civil rights is not a lost cause.

We then boarded our bus for the three-hour ride to Selma, Alabama. Selma today is a somewhat depressed town of about 18,000, with one of the highest murder rates in the state. In the 1960s, Selma was an important site for the civil rights movement. In 1965, attempts were made to march from here to the state capitol in Montgomery to demand voting rights. Initially this Freedom March was blocked by police, resulting in considerable injury to the protesters. Eventually a successful march was held following the issuance of a court order.

Upon arrival, we visited the By the River Center for Humanity, a mixed-use community facility. They served us a delicious lunch which was followed by a talk by JoAnne Bland, one of the participants in the Freedom March. She shared with us some of her vision for Selma, and led us in a bit of singing and dancing as well.

JoAnne Bland, talking about Selma

Newly restored St. James Hotel

We then walked across the famous Edmund Pettus Bridge, and visited the National Voting Rights Museum just across the river from Selma. The museum had artifacts and pictures from the Freedom March, as well as considerable material on subsequent visits to Selma by presidents, particularly Bill Clinton and Barack Obama.

After a short break at our hotel (the St. James hotel, recently renovated by Hilton), we took a short walk to The Revolt Selma, a new restaurant opened by a Black entrepreneur, for a buffet-style steak dinner.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Sunday, 11. December 2022

Werdmüller on Medium

Things I’ve learned about parenting

It’s the hardest thing I’ve ever done. Continue reading on Medium »

It’s the hardest thing I’ve ever done.

Continue reading on Medium »


Altmode

Civil Rights Journey Day 6: Jackson, Mississippi

Sunday, October 23, 2022 We started earlier than usual today in order to catch the 8 am service at New Horizons Church, a majority-Black church in a former strip mall in southern Jackson. The facility was beautifully adapted to its use as a church, and we were warmly welcomed. Most of the music was presented […]

Sunday, October 23, 2022

We started earlier than usual today in order to catch the 8 am service at New Horizons Church, a majority-Black church in a former strip mall in southern Jackson. The facility was beautifully adapted to its use as a church, and we were warmly welcomed. Most of the music was presented by a children’s group, which was very talented.

After the service, we met with the Senior Pastor, Bishop Ronnie Crudup. He described the formation of the church and also went into considerable detail about the political climate in Jackson and more generally in Mississippi. One story that struck me was that the Governor, rather than distribute federal aid from the American Recovery Act, sent at least some of the money back to the federal government saying that it was not needed. This struck me as simply cruel. In any case it is organizations like New Horizons that are developing the community for future leadership.

We then went to a local restaurant for brunch with some local women leaders (and teenage leaders) who are working in the area of voting rights and getting the Black community, and Black women in particular, to participate in the political process and specifically to vote.

After brunch we took our bus to the Mississippi Civil Rights Museum in downtown Jackson. The museum is, perhaps surprisingly, funded by the State of Mississippi. We started out with a short talk by Hezekiah Watkins, one of the original Freedom Riders. We then had some time — although not enough — to explore the museum, which took us from the days of slavery to the present, with an emphasis on the mid-20th century civil rights movement.

Medgar Evers home

Mississippi Civil Rights Museum

We then took the bus to the Medgar Evers home. Evers, the field secretary for the Mississippi NAACP, was shot and killed there in 1963. The house has recently been designated as a national monument, but unfortunately wasn’t open for tours when we were there.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Saturday, 10. December 2022

Jon Udell

Browsing the Fediverse

A month ago, when the Great Discontinuity happened, I started working on a Steampipe plugin to enable SQL queries against the Mastodon API, along with a companion Steampipe “mod” (suite of dashboards) to display and chart the results of those queries. I expect these dashboards will soon be available in Steampipe Cloud, where it will … Continue reading Browsing the Fediverse

A month ago, when the Great Discontinuity happened, I started working on a Steampipe plugin to enable SQL queries against the Mastodon API, along with a companion Steampipe “mod” (suite of dashboards) to display and chart the results of those queries.

I expect these dashboards will soon be available in Steampipe Cloud, where it will take just a few seconds to pop in your Mastodon access token (from, e.g., https://mastodon.social/settings/applications/new) and begin using the dashboards.

Meanwhile, if you’re so inclined, you can find the plugin here and the dashboards here. If you’re reasonably technical you can pretty quickly and easily install Steampipe, clone these repos, build the plugin, and start using the dashboards.

Why would you want to? My own motivation, originally, was to do Mastodon analytics. I thought Steampipe’s SQLification of the API would be a handy way to discern and monitor activity trends during a period of extraordinary flux. And that’s proven to be true, to a limited extent. Here’s a snapshot of the dashboard that uses the instance activity API.

I’m watching this chart with great interest. Where does it go from here? I’m not going to hazard a guess. Everything’s up in the air right now, and anything could happen.

But as I added tables to the plugin to encapsulate more of the Mastodon API, and added dashboards to visualize those tables, my focus shifted. I began to see the suite of dashboards as a Mastodon reader/browser that complements the web and phone clients, and that’s how I mainly use them now.

I think the key benefit is one of Edward Tufte’s core principles: information density. Each of these dashboards shows more activity than you can see at a glance in the web or phone interfaces. I find this very helpful for searching and browsing. When I see items of interest that I want to interact with, I click through to the web app in order to boost, reply, or favorite.

Will this way of browsing Mastodon appeal to you? To get a feel for what it’s like, here are snapshots of some of the dashboards I’ve built so far.

dashboard.Favorites

dashboard.Following

dashboard.Home

dashboard.List

dashboard.Me

dashboard.Notification

dashboard.PeopleSearch

dashboard.StatusSearch

dashboard.TagSearch

For me, at least, this approach has become an effective way to browse the fediverse, find interesting people, read what they boost, and keep track of my own activity.

If you are dev-minded, by the way, please note that these dashboards are just one way to skin the results of queries against the plugin. Any SQL client can connect to Steampipe’s Postgres endpoint. You could use dashboards like Metabase or Grafana, or you could embed Steampipe as a component in an app.
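For instance, a minimal Python client against that endpoint might look like the sketch below. The connection defaults and the mastodon_favourite table and column names are my assumptions, so check your Steampipe service settings and the plugin's documentation for the actual values.

# Sketch: query the Mastodon plugin through Steampipe's Postgres endpoint.
# The connection parameters, table name, and column names are assumptions;
# adjust them to match your Steampipe service and the plugin's docs.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=9193, dbname="steampipe",
    user="steampipe", password="<your-steampipe-password>",
)

with conn, conn.cursor() as cur:
    cur.execute("""
        select created_at, username, content
        from mastodon_favourite        -- hypothetical table name
        order by created_at desc
        limit 20
    """)
    for created_at, username, content in cur.fetchall():
        print(created_at, username, (content or "")[:80])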


1 https://blog.jonudell.net/2022/11/28/autonomy-packet-size-friction-fanout-and-velocity/
2 https://blog.jonudell.net/2022/12/06/mastodon-steampipe-and-rss/
3 https://blog.jonudell.net/2022/12/10/browsing-the-fediverse/
4 https://blog.jonudell.net/2022/12/17/a-bloomberg-terminal-for-mastodon/
5 https://blog.jonudell.net/2022/12/19/create-your-own-mastodon-ux/
6 https://blog.jonudell.net/2022/12/22/lists-and-people-on-mastodon/
7 https://blog.jonudell.net/2022/12/29/how-many-people-in-my-mastodon-feed-also-tweeted-today/
8 https://blog.jonudell.net/2022/12/31/instance-qualified-mastodon-urls/
9 https://blog.jonudell.net/2023/01/16/mastodon-relationship-graphs/
10 https://blog.jonudell.net/2023/01/21/working-with-mastodon-lists/
11 https://blog.jonudell.net/2023/01/26/images-considered-harmful-sometimes/
12 https://blog.jonudell.net/2023/02/02/mapping-the-wider-fediverse/
13 https://blog.jonudell.net/2023/02/06/protocols-apis-and-conventions/
14 https://blog.jonudell.net/2023/02/14/news-in-the-fediverse/
15 https://blog.jonudell.net/2023/02/26/mapping-people-and-tags-on-mastodon/
16 https://blog.jonudell.net/2023/03/07/visualizing-mastodon-server-moderation/
17 https://blog.jonudell.net/2023/03/14/mastodon-timelines-for-teams/


Doc Searls Weblog

How the species killing the planet can save it

We live in the Anthropocene, a geological epoch defined by the influence of one species over everything else, including the planet itself. That species is ours, and we are a pestilential one, altering, consuming, and wasting everything we can. Specifically, our civilizations have advanced on the planet like a cancer, parasitically metabolizing materials we call […]

We live in the Anthropocene, a geological epoch defined by the influence of one species over everything else, including the planet itself. That species is ours, and we are a pestilential one, altering, consuming, and wasting everything we can.

Specifically, our civilizations have advanced on the planet like a cancer, parasitically metabolizing materials we call “resources” (without their permission) as if their sums were not finite. Oil and coal will be gone in a few hundred years. Uranium, titanium, tungsten, helium, lithium and other members of the periodic table may be gone far sooner, thanks to our boundless appetites. And yes, we can raise crops of corn and other plants to make fuel for cars and jets, but only at the many costs of monoculture on the biodiversity required for sustaining every form of life.

I bring all this up because we’ll be talking about it on Monday at this month’s Ostrom Workshop salon at Indiana University and live on the Web. Our speaker will be Vinay Gupta (@leashless), inventor of the Hexayurt refugee shelter, founder and CEO of Mattereum, a progenitor of Ethereum, and source of wisdom on all that and much else. The title of his talk is the headline above. His case is that we have to get our per-person environmental consumption down by about 10x. Or else.

It helps that there are people and regions in the world providing living examples of how that is done. Vinay is deeply familiar with those and will share what he knows in ways that help us co-think and work to save the planet’s butt, along with our own.

The salon is at 2 PM Eastern time. It is also free, but you need to register first, here.

If this follows the pattern of our prior salons—all with the theme Beyond the Web—the presentation and discussion will be lively, informative, and productive. See you there.


Altmode

Civil Rights Journey Day 5: Incarceration Justice

Saturday, October 22, 2022 Today we are off to Jackson, Mississippi. We are also changing focus from history (enslavement) to contemporary civil rights issues. After breakfast we had a visit from three community members involved in trying to improve the criminal justice system in Louisiana. First we heard from Will Snowden of the Vera Institute. […]

Saturday, October 22, 2022

Today we are off to Jackson, Mississippi. We are also changing focus from history (enslavement) to contemporary civil rights issues. After breakfast we had a visit from three community members involved in trying to improve the criminal justice system in Louisiana. First we heard from Will Snowden of the Vera Institute. He spoke about their initiative to reduce the population in Orleans Parish jails following Hurricane Katrina from about 7200 to 1000, largely by observing the distinction between jails (typically short-term detainees awaiting trial) and prisons (long-term convicted detainees). This distinction gives long-term detainees the potential to have access to education and training programs to help them develop. Vera Institute is working more broadly to improve equity in the criminal legal system in Louisiana as well.

We then heard from Everett “Buck” Ofray and Louis Gibson, both former long-term detainees who had been convicted of second-degree murder. They described their journey from arrival at Louisiana’s notorious Angola Prison, forced to labor initially without pay and later for only 20 cents an hour. Held without parole, they had to challenge the system to obtain access to education and vocational opportunities. Eventually they worked their way into more and more responsible jobs like maintaining aircraft for the State Police. Changes in law allowed them to eventually be paroled. They described their process of re-integration into society; both are now working to assist other recent detainees. Louis is also working as a paralegal. This was truly an inspirational talk.

We then loaded into our bus for the three hour drive to Jackson, Mississippi for the next phase of our journey. We stopped along the way for a picnic lunch and to discuss what we have learned so far.

This is also homecoming weekend, a big deal in Jackson, so we expect everything to be crowded. We had dinner this evening at the Char Restaurant, again a very enjoyable meal. Tomorrow will be an early and full day, so we are turning in early.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Friday, 09. December 2022

Altmode

Civil Rights Journey Day 4: Bayou

Friday, October 21, 2022 After breakfast at our hotel, we got on the bus and went east to visit one of the bayous. Along the way, we stopped and were introduced to a native American couple who explained the many ways that the oil industry in the area had impacted their lives. In addition to […]

Friday, October 21, 2022

After breakfast at our hotel, we got on the bus and went east to visit one of the bayous. Along the way, we stopped and were introduced to a native American couple who explained the many ways that the oil industry in the area had impacted their lives. In addition to needing to relocate, they were severely affected by the Gulf of Mexico oil well disaster, and continue to endure health threats from toxic emissions from the refineries, many of which have been made legal through lobbying efforts on behalf of the oil companies.

We continued east through St. Bernard Parish, and as we did, we saw many skeletons of dead oak trees. These have apparently died due to increased salination of the underground aquifer. Many of the homes we saw were propped up on platforms typically 20 feet above ground. Many of these homes were owned not by locals but by sport fishermen as second homes or as rentals. As we looked out into the waterway, it was fairly obvious which boats belonged to the visiting sport fishermen and which were owned by locals. As our hosts explained, the waterway is always in a state of transition. There have been efforts to open new channels from the Mississippi River to the Gulf of Mexico, and other projects that would change the salinity of that portion of the delta. Nobody seems to be sure how long this will last.

Returning to New Orleans, we had lunch at a very pleasant restaurant, Carmo. Our next stop was at a youth development organization called Son of a Saint. Housed in a newly renovated former ice house, Son of a Saint provides mentorship services to boys who have lost their fathers. Engagement with them begins in middle school and extends through their educational career and beyond. They have a very impressive track record with the boys they have mentored.

Son of a Saint

Portion of StudioBE interior

We then traveled to StudioBE, a warehouse art gallery space featuring the art of “BMike”, a local urban artist. As you might expect, BMike’s preferred medium is spray paint, but the amount of expression and detail he is able to impart is truly impressive. But spray paint is not his only medium: there were exhibits in many forms, even simulated video games. Along with the art were words that were in many cases quite profound. This visit definitely broadened my art tastes.

After a short break at the hotel, the group went to dinner at the Windsor Court Hotel, where we had stayed on our visit to New Orleans about 10 years ago. It was a wonderful dinner, definitely too much food but all of it delicious.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.


Jon Udell

When your database is an HTTP client

Here are three things I once hoped — but no longer expect — to outlive: 1. PDF files 2. passwords 3. The occasional need to scrape data from web pages PDF files and passwords are topics for another day, but web scraping is timely. Today I was asked to corral data from the Steampipe blog, … Continue reading When your database is an HTTP client

Here are three things I once hoped — but no longer expect — to outlive:

1. PDF files

2. passwords

3. The occasional need to scrape data from web pages

PDF files and passwords are topics for another day, but web scraping is timely. Today I was asked to corral data from the Steampipe blog, and the path of least resistance was (of course!) to extract it from the site.

I was all warmed up for the exercise because we’d just published a post dramatically entitled “Why build an HTTP client into a database? So you can ingest web data directly!” In that post I show three solutions enabled by the Net plugin’s net_http_request table.

Since the dawn of the web, scraping has worked this way: Use a script to fetch the data, then save it for retrieval and analysis. You might use the script language to query the data, or the query language of a database.

A couple of years ago I found a way to unify those ingredients: Run the script inside the database. You can do a lot with Postgres’ built-in procedural language, and even more if you activate Python inside Postgres. I went deeply into both and explained why in an earlier episode.

PL/Python was great for advanced uses, and I used it for a bit of web scraping too. It all worked fine, and I never thought to ask this question: “What if the database is the HTTP client, and SQL the common way to reason over data coming through that pipe?”

The examples in the post show what that’s like. In its simplest form you write a query like this.

select
  response_status_code,
  jsonb_pretty(response_headers) as headers
from
  net_http_request
where
  url = 'https://steampipe.io'

The result is like this.

+----------------------+-------------------------------------------------------+
| response_status_code | headers                                               |
+----------------------+-------------------------------------------------------+
| 200                  | {                                                     |
|                      |     "Age": [                                          |
|                      |         "45557"                                       |
|                      |     ],                                                |
|                      |     "Date": [                                         |
|                      |         "Fri, 09 Dec 2022 06:46:40 GMT"               |
|                      |     ],                                                |
|                      |     "Etag": [                                         |
|                      |         "W/\"614a142998557b388e053bfa4408cf70\""      |

The response_status_code is a regular Postgres column, the headers column is a JSONB column that you can index into (e.g. headers ->> 'etag'). If you also select the response_body column of the net_http_request table you’ll get another regular Postgres column containing the text of the web page. If it’s HTML text, you can use regular expressions to match patterns in it. If it’s JSON text you can use Postgres’ JSON functions to query and transform it.
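Because these are ordinary Postgres tables, the same queries also run from any client library. Here is a hedged Python sketch using psycopg2, where the connection defaults are assumptions about a local Steampipe service and the regular expression is just one way to pull links out of the body.

# Sketch: index into the JSONB response_headers and regex-match links in
# response_body, using the net_http_request table described above. The
# connection parameters are assumptions about a local Steampipe service.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=9193, dbname="steampipe",
    user="steampipe", password="<your-steampipe-password>",
)

query = """
    select
      response_status_code,
      response_headers -> 'Etag' ->> 0 as etag,                  -- first value of the Etag header
      regexp_matches(response_body, 'href="([^"]+)"', 'g') as link
    from
      net_http_request
    where
      url = 'https://steampipe.io'
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for status, etag, link in cur.fetchall():
        print(status, etag, link)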

You can join fields nested in the JSON with other Postgres columns. And those other columns can belong to tables populated in any of the ways Steampipe plugins populate tables: from JSON or GraphQL API endpoints, from CSV or Terraform or Yaml files, from anywhere really. As a developer writing Steampipe queries (and flowing results into dashboards) you see all of these sources as tables, you query them individually in all the ways Postgres can, and you join across diverse sources in a common way.

Of course web pages are structured in ways that regular expressions can’t easily grok. It’s easy to match links, but parsing HTML tag names and attributes is a job for a real parser. I’d made a start on an HTML plugin for Steampipe. There were already two tables: one to extract links from a web page, one to transform HTML tables to CSV format. So today, when tasked with tabulating blog metadata, I added a third table to enable these queries.

-- find the title
select
  page,
  tag_name,
  tag_content
from
  html_tag
where
  page = 'https://steampipe.io/blog/selective-select'
  and tag_name = 'title'

-- list the meta tags
select
  page,
  tag_name,
  tag_attrs
from
  html_tag
where
  page = 'https://steampipe.io/blog/selective-select'
  and tag_name = 'meta'

That’s a powerful way to reason over HTML data! It was easy for me to extend the HTML plugin in this way, and I assure you that I’m no 10x programmer. The Steampipe plugin SDK and the wonderful goquery package are doing all the heavy lifting. I just had to stitch the components together, and if you’re any kind of programmer, with or without Go experience, you could pretty readily do the same.

Thursday, 08. December 2022

Altmode

Civil Rights Journey Day 3: Whitney Plantation

Thursday, October 20, 2022 Our day began with a briefing in a conference room in our hotel. Our leaders spoke for almost two hours on the history of slavery in the US and specifically in New Orleans as background for our visit to Whitney Plantation, about an hour’s bus ride from town. We made it […]

Thursday, October 20, 2022

Our day began with a briefing in a conference room in our hotel. Our leaders spoke for almost two hours on the history of slavery in the US and specifically in New Orleans as background for our visit to Whitney Plantation, about an hour’s bus ride from town. We made it to the plantation about noon, and had a picnic lunch there.

Whitney Plantation is different from most historical sites because it is presented from the viewpoint of the enslaved people, not the plantation owners. We were very fortunate that Yvonne, one of our tour leaders, had worked for Whitney Plantation until very recently and was able to tailor our tour to the theme of our journey. The tour included not only the Big House of the plantation, but also the areas where many of the enslaved people worked, such as the cooking areas, and their living quarters. We also were introduced to the evolution of the plantation, from early days farming indigo to sugar cane farming. There were memorial walls commemorating the many enslaved people who worked at the plantation, giving us an idea of the scale of the slavery at this one plantation.

Living quarters for enslaved people

Memorial walls showing names of enslaved people who had worked at the plantation

While at the Whitney, Konda Mason, an entrepreneur who leads an organization called Jubilee Justice, spoke to us about the work they are doing. Jubilee Justice is working to improve the business climate for Black rice farmers in Louisiana and Arkansas. One of their main initiatives is to establish a cooperative rice mill in order to give the local farmers a more equitable and cost effective way to get their rice to market.

After returning from the Whitney, we took a short break and then had dinner at 9 Roses with several leaders from the local Vietnamese community. New Orleans has a significant Vietnamese community, dating from the 1975 evacuation during the Communist takeover of South Vietnam. Some are engaged in fishing and shrimping, which had been their occupations in Vietnam. Our dinner table included a leader from the real estate industry who was part of that 1975 migration and a Catholic priest who had relatively recently moved to the United States. It was informative to see the similarities and differences between these generations of immigrants. The food was excellent!

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.

Wednesday, 07. December 2022

Altmode

Civil Rights Journey Day 2: Exploring the French Quarter

Wednesday, October 19, 2022 With the official part of our tour beginning at 3 pm, we had much of the day to explore the French Quarter of New Orleans on our own. We met up with our friends Liz and Peter and started at Café du Monde for their traditional breakfast of beignets and coffee. […]

Wednesday, October 19, 2022

With the official part of our tour beginning at 3 pm, we had much of the day to explore the French Quarter of New Orleans on our own. We met up with our friends Liz and Peter and started at Café du Monde for their traditional breakfast of beignets and coffee. Beignets are fried choux pastries with piles, and I mean piles, of powdered sugar on top. While we were eating, a brass band assembled on the sidewalk next to the Café and started playing. They were very good, a classic New Orleans moment.

The four of us headed over to the nearby Jean Lafitte National Historical Park and Preserve Visitor Center. We got there in time for their 10 am talk. It began with an interesting discussion of the Acadian expulsion from the Canadian Maritimes, and subsequent arrival of many of them in the New Orleans area. This was followed by a talk we were less interested in, discussing medicinal properties of native plants. Afterwards we enjoyed a number of interesting exhibits in the visitor center.

While we were listening to the talks, Liz and Peter went through the Louisiana state museum in The Cabildo, a notable building next to the iconic St. Louis Cathedral. This is the location where the Louisiana Purchase was signed. We understand admission was $10, and not necessarily worth the cost.

We met up with Liz and Peter again for lunch. We had planned to get muffuletta sandwiches, but weren’t sure where to get them since the traditional place, Central Grocery and Deli, was closed due to damage from Hurricane Ida. We found a place in the French Market, Alberto’s, that had good muffulettas but perhaps not quite as rich in olives as those at Central Grocery.

After lunch, we all went to the Historic New Orleans Collection museum on Royal Street near our hotel. Admission was free, and there was plenty to see. Visiting this museum helped us better follow the series of events as control of New Orleans passed from France to Spain, back to France, and finally to the United States. Definitely worth the visit.

At 3 pm our tour officially started. We gathered in a conference room of the hotel for introductions and a brief orientation. We then walked north to Congo Square, just outside the French Quarter. Congo Square was the location where slaves who might be given a Sunday off (due to the strong Roman Catholic influence in the area) would gather, dance and sing, and trade wares. We met up with Luther Gray and Jamilah Peters-Muhammad of the Congo Square Preservation Society who told us of the considerable history of the Square. They taught us drumming and dancing in the traditional style, or at least as close as we could come to that.

We were then met by Erin, an official New Orleans tour guide, who led us on a whirlwind walking tour of some of the French Quarter. Erin had a lot of information, and talked fast to get as much of it to us as possible.

After a few minutes to get cleaned up, the group walked to Galatoire’s, a well known (and fancy) restaurant in the French Quarter. Our group had a private room and a selection of menu items to choose from. I had to try the turtle soup, which resembled a thin chili and was very tasty. My fish entrée and Kenna’s crab hollandaise were both quite good.

This article is part of a series about our recent travels to the US South. To see the introductory article in the series, click here.