Last Update 6:38 PM September 23, 2023 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Saturday, 23. September 2023

John Philpin : Lifestream

Jax at the bar, tasting a glass before commitment .. local s

Jax at the bar, tasting a glass before commitment .. local sidles up, asks her if it’s any good, ‘cos if it is .. me and my mates would like to buy you a bottle’ … ‘I don’t know’ she says - turns to me - ‘what do you think’? ‘Up to you’ I said. She turns back to the local, he was back with his mates 😂


Ben Werdmüller

DALL·E 3

Once again, this looks completely like magic. Very high-fidelity images across a bunch of different styles. The implications are enormous. #AI

[Link]


The worst part of writing is writing

I’ve been neck-deep in a long-form first draft for months; at this point I’m many tens of thousands of words in. Every time I look back at my writing from tens of thousands of words ago, it’s a horrible mistake that opens up floodgates of self-questioning. How could I possibly have thought that I could do this? Who on earth would want to read this? Amateur! Go back to whatever it is you do for your day job. (Do you even know? I thought you wrote software? When was the last time you actually wrote software, you hack?)

But I’m determined. The only thing I can say for sure is that, eventually, I will have a manuscript. I have professional mentors who will read and critique it once I’ve iterated on it a few times. Beyond that, I can’t say. Perhaps, if I’m lucky, someone will like it. But perhaps it really is doomed to sit on my hard drive, unloved.

The deeper I get into it, the more I’m comfortable with the idea of failure. I think I started with the idea that I might be intentionally writing something that a lot of people might enjoy, but at this point it’s for me. The more I pour in of myself, and the ideas I have about the world (and the future of technology, because that’s the kind of book this is), the more I feel comfortable with it. Even if nobody loves it, it’ll be representative of me: a genuine work of self-expression hooked onto a plot that I continue to think is really interesting. And the feedback I get will help me learn to write the next one.

It turns out that the thing which most motivates me to write is my sense of humor. If it’s too self-serious, I stall. (Honestly, I expect readers would, too.) On the other hand, if I’m amusing myself, undercutting my serious points with irony or adding notes about things from the real world that I think are ridiculous, I can go forever. That’s probably something worth knowing about myself: I thrive on irreverence. I cut my teeth on Douglas Adams, Terry Pratchett, and Charlie Brooker’s early stuff, so that’s probably not surprising. I could probably use more of that here, too.

Anyway. It’s like pulling teeth, but joyously. A gleeful festival of unpleasant monotony wherein I make myself laugh while disgusting myself with my own ineptitude. And maybe, if I’m really, really lucky, something will even come of it.


John Philpin : Lifestream

2023 | 09 | 22 Follow The Daily Stoic All The

2023 | 09 | 22

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.


Ben Werdmüller

EV charging infrastructure is a joke – Brad Barrish

Non-Tesla EV charging infrastructure is awful. It's good that Tesla has opened the standard, but it's not good that the only really viable charging infrastructure is owned by one company. It needs to be fixed. #Technology

[Link]


John Philpin : Lifestream

Apologies to all .. On the road plus iOS issues means I can’

Apologies to all .. On the road plus iOS issues means I can’t deliver my usual, suave, sophisticated flows 🥳 .. When I hit the main computer next - will clean up. Sorry .. I hope that whatever the bug is gets fixed soon.


2023 | 09 | 23 Follow The Daily Stoic All The

2023 | 09 | 23

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.


Stoic 23


📸 230923🖇️ #MBSept With Champagne you could have not just

📸 230923🖇️ #MBSept

With Champagne you could have not just ‘a day in a life’ … but more importantly ‘a life in a day’.


Ben Werdmüller

Notion web Clipper - Klippper

I'm a heavy Notion web clipper user, but this is far better for my needs. I was worried I'd need to build it myself. Luckily: no! #Technology

[Link]


Nature TTL Photographer of the Year 2023: Winners Gallery

Every image here is stunning. #Culture

[Link]


California governor vetoes bill banning robotrucks without safety drivers

The legislation passed with a heavy majority - this veto is a signal that Newsom favors the AI vendors over teamster concerns. Teamsters, on the other hand, claim the tech is unsafe and that jobs will be lost. #AI

[Link]


The #ViewSource Affordance

I strongly agree with this. "View source" has been an important part of the culture of the web since the beginning. Obfuscating that source or removing the option does damage to its underlying principles and makes the web a worse place. I like the comparison to the enclosure movement, which seems apt. #Technology

[Link]


Simon Willison

TG: Polygon indexing

TG is a brand new geospatial library by Josh Baker, author of the Tile38 in-memory spatial server (kind of a geospatial Redis). TG is written in pure C and delivered as a single C file, reminiscent of the SQLite amalgamation.

TG looks really interesting. It implements almost the exact subset of geospatial functionality that I find most useful: point-in-polygon, intersect, WKT, WKB, and GeoJSON - all with no additional dependencies.
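
To make the core operation concrete, here is a minimal pure-Python sketch of the classic ray-casting point-in-polygon test. It only illustrates the operation TG provides; it is not TG's C API and does none of TG's indexing.

    # Illustrative only: a naive ray-casting point-in-polygon test.
    # TG implements this in C with its own indexing; this sketch does neither.
    def point_in_polygon(x, y, polygon):
        """Return True if (x, y) lies inside polygon, a list of (x, y) vertices."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            # Count crossings of a horizontal ray extending to the right of (x, y).
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    square = [(0, 0), (10, 0), (10, 10), (0, 10)]
    print(point_in_polygon(5, 5, square))   # True
    print(point_in_polygon(15, 5, square))  # False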

The most interesting thing about it is the way it handles indexing. In this documentation Josh describes two novel approaches he uses to speed up point-in-polygon and intersection, going beyond the usual RTree implementation.

I think this could make the basis of a really useful SQLite extension - a lighter-weight alternative to SpatiaLite.

Via Hacker News

Friday, 22. September 2023

John Philpin : Lifestream

The news in here ( for me at least ) is not the headline … b

The news in here ( for me at least ) is not the headline … but in the opening paragraph … Koch funded Burns?


Ben Werdmüller

AI is not a paradigm shift. But it could be useful

It’s been interesting to watch all of the articles celebrating the death of NFTs lately. For years, they were the harbinger of the next big thing, hawked by A-list celebrities. Behind the scenes, some of the biggest tech companies in the world spawned NFT strategies, even as critics noted that valuations were partially driven by money laundering and wash trading.

Cut to 2023, and surprise, surprise: 95% of NFTs are now completely worthless.

If you missed the craze: while most digital data can be infinitely replicated for almost no cost, Non-Fungible Tokens, or NFTs, were a way to ensure there was only one of an item using blockchains. NFTs were often attached to digital art — for example, these hideous apes — and because they were both scarce and tradable, for a while each one was going for the equivalent of thousands of dollars. Of course, it couldn’t last, and NFTs turned out to be the digital equivalent of investing in Beanie Babies or tulips (pick your proverbial market collapse).

It’s now controversial to say that crypto isn’t completely useless, but if you look beyond the brazen grift, international crimes, and planet-destroying environmental impact, I do think there are a few things to celebrate about the trend. The crypto community deployed the most widely-used implementation of identity in the browser to date, for one: people who installed software like Metamask could choose to identify themselves to a website with a single click. In some countries, digital currencies also gave citizens an accessible safe haven when their own local currency tumbled. And finally, it introduced a much wider audience to the concept of decentralization, where a large-scale internet system is run co-operatively by all of its users instead of a giant megacorp.

Although the rampant speculation and wildly inflated prices are gone, there are some technical outcomes that will likely be with us for some time. And some of those are positive and useful.

This is exactly how the hype cycle works. A technology breakthrough kicks things off and gets people all excited. The market works itself into a frenzy over the technology, and lots of people imagine that it can do all kinds of amazing things. Those inevitably don’t actually pan out, and people lose hope and interest. But it turns out that the technology is useful for something, and eventually, it finds a mainstream use.

Crypto is very much in the trough of disillusionment right now; eventually some aspects of the technology (maybe identity in the browser, maybe something else) will find a use.

Meanwhile, AI? AI is right at the top of that hype curve.

There are people out there who believe we’re building a new kind of higher consciousness, and that our goal as humans should be to support and spread that consciousness to the stars. A galaxy full of stochastic parrots is an inherently funny, Douglas Adams-esque idea, but naturally, they’re serious, partially because they feel this idea absolves them of dealing with the truth that there are actual human beings living on a dying planet who need help and assistance right now. In erasing the needs of vulnerable communities, AI supremacy (officially called effective accelerationism) is the new white supremacy (sitting comfortably alongside the old white supremacy, which is still going strong).

There are also people who think AI will replace poets, artists, neurosurgeons, and political leaders. AI systems will farm for us, tend to our children, and imagine whole new societies that we wouldn’t otherwise be capable of envisioning. They will write great literature and invent wholly new, never-ending dramatic entertainment for us to sit and consume.

It’s horseshit. The technology can’t do any of those things well. It’s best thought of as a really advanced version of auto-complete, and everyone who claims it’s something more is trying to sell you something.

Which isn’t to say it’s not useful. I’ve certainly used it as a utility in my writing — not to do the writing itself (it produces mediocre porridge-writing), but to prompt for different angles or approaches. I’ve used it to suggest ways to code a function. And I’ve certainly used it, again and again, as a quick way to autocomplete a line of code or an English sentence.

What’s going to happen is this: in a few years, AI will come crashing down as everyone realizes it’s not going to be an evolution of human consciousness, and some other new technology will take its place. Valuations of AI companies will fall and some will go out of business. Then, some of the actual uses of the technology will become apparent and it’ll be a mainstream, but not dominant, part of the technology landscape.

The hype cycle is well-understood. What surprises me, again and again, is how thoroughly people follow it. Across industries, CEOs are right now thinking, “holy shit, if we don’t jump on AI, we’re going to be completely left behind. This is a paradigm shift.” It’s kind of the equivalent of a bunch of soccer players chasing the ball — It’s over here! No, it’s over here! Let’s run towards it! — which is how three-year-olds play soccer. A more strategic approach (let’s call it thinking for yourself) will be more productive for most businesses.

There will absolutely be uses for AI tools. The important thing is to take a step back and think: what are my needs? What are the needs of my customers or my community? Given the actual demonstrated capabilities of the software, does it help me meet any of them in a reliable way? If I do use it, am I holding true to my values and keeping my customers and community safe? If the answer is yes to all of these things, then great! Otherwise it might be worth taking a step back and letting the dust settle.

Keep me honest: if AI doesn’t enter a trough of disillusionment and just keeps growing and growing exponentially, call me on it. But I think it’s a pretty safe bet that it won’t.


Heres Tom with the Weather

Webfinger Expectations

In an earlier post this year, I documented a problem I found and this post attempts to describe the issue a little more clearly and a plan to work around it.

I have chosen @tom@herestomwiththeweather.com as my personal identifier on the fediverse. If I decide I want to move from one activitypub server (e.g. Mastodon) to another, I would like to keep my same personal identifier. It follows that my activitypub server should not have to reside at the same domain as my personal identifier. I should be able to swap one activitypub server for another at any time. Certainly, I don’t expect every activitypub server to support this but I’m not obligated to use one that does not.

Unfortunately, although my domain returns the chosen personal identifier in the subject field, because the JRD document returns a rel=self link to a Mastodon server to provide my actor document, the mastodon servers do not seem to use my chosen personal identifier for anything other than resolving a search for my personal identifier to the mastodon profile to which it is currently associated. From that point forward, a completely new personal identifier with the domain component set to the domain of the mastodon server is used. In other words, a personal identifier that has been chosen for me by someone else is kept in a particular server’s database table. I can later choose a different activitypub server but I may not be able to keep my preferred username because it may already be taken on the new server. In any case, choosing a new server means my personal identifier within the mastodon network also changes. Unless…I don’t use a mastodon server in the first place. Then, my personal identifier will be used as I would like by the mastodon network and I can potentially swap activitypub servers without ever having to change my personal identifier with my own domain.
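
For concreteness, this is roughly what such a lookup looks like as a minimal Python sketch following RFC 7033. The endpoint path and the JRD shape (a subject plus a rel="self" link) are the standard ones; the script is illustrative and not a copy of any particular server's behaviour.

    # Minimal WebFinger lookup sketch (RFC 7033): fetch the JRD for an acct: URI
    # and pull out the subject and the rel="self" (ActivityPub actor) link.
    import requests

    def webfinger(acct, domain):
        resp = requests.get(
            f"https://{domain}/.well-known/webfinger",
            params={"resource": f"acct:{acct}"},
            headers={"Accept": "application/jrd+json"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    jrd = webfinger("tom@herestomwiththeweather.com", "herestomwiththeweather.com")
    print(jrd["subject"])  # the chosen personal identifier
    self_links = [l for l in jrd.get("links", []) if l.get("rel") == "self"]
    print(self_links[0]["href"] if self_links else "no self link")  # actor document URL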

The two most relevant documents for understanding webfinger as it is currently used seem to be RFC 7033: WebFinger and Mastodon’s documentation, and it is this Mastodon documentation (in the section “Mastodon’s requirements for WebFinger”) that now describes the behavior (a problem for me) that I documented earlier. The new section explains:

if the subject contains a different canonical account URI, then Mastodon will perform an additional Webfinger request for that canonical account URI in order to ensure that this new resource links to the same ActivityPub actor with the same criteria being checked.

This behavior makes sense if you assume that if you are using a mastodon server, then you inherit a personal identifier tied to that server. This makes validating a webfinger address simple for mastodon, so advocating a change in this behavior in mastodon seems like it would be challenging. However, as I mentioned in the earlier post, instead of choosing mastodon as your activitypub server, your personal identifier with your own domain can be accepted by mastodon servers in a desirable way

as long as the fediverse node providing the actor document is smart enough to provide your personal domain in the subject when mastodon makes a webfinger call to it.
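
In other words, when a mastodon server webfingers the activitypub host, that host should echo the chosen identifier back in the subject. A hypothetical JRD, sketched here as a Python dict, might look like the following; the host name ap.example.net and the actor URL are placeholders, not a real server.

    # Hypothetical JRD an ActivityPub host (placeholder: ap.example.net) could
    # return when Mastodon webfingers it, keeping the personal domain in "subject".
    # Shape follows RFC 7033; the href value is a made-up placeholder.
    expected_jrd = {
        "subject": "acct:tom@herestomwiththeweather.com",
        "links": [
            {
                "rel": "self",
                "type": "application/activity+json",
                "href": "https://ap.example.net/users/tom",  # placeholder actor URL
            }
        ],
    }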

The problem here is that it seems that I would not be able to be “tom” on such an activitypub server if, for instance, tom@example.com was already pointing to that server unless the server could assign me a subdomain, for example.


Ben Werdmüller

ChatGPT Caught Giving Horrible Advice to Cancer Patients

LLMs are a magic trick; interesting and useful for superficial tasks, but very much not up to, for example, replacing a trained medical professional. The idea that someone would think it's okay to let one give medical advice is horrifying. #AI

[Link]


AI data training companies like Scale AI are hiring poets

These poets are being hired to eliminate the possibility of being paid for their own work. But I am kind of tickled by the idea that OpenAI is scraping fan-fiction forums. Not because it’s bad work, but imagine the consequences. #AI

[Link]


John Philpin : Lifestream

When I was a lad, the little things were just that … little

When I was a lad, the little things were just that … little things. They didn’t cost, or if they did, it wasn’t that much, but more often than not it was just something that was a nice touch.

Compare and contrast with more modern times … just in the past 48 hours …

Thank you for booking your flight with us, would you like to reserve a seat? Charges start at just ten Euros.

How lovely to see you, welcome to our hotel. I see you are early, your room is ready, but check-in isn’t until 3pm. We can give you early access for just thirty Pounds.

Absolutely you can leave your luggage overnight, that will be seven pounds and fifty pence, per night per bag.

That low price for that flight? It goes up by around 40% if you actually want to check a bag.

24 Hour rental for a car, $150. (Not a typo). Moreover 25 Hour rental for a car, $300. Better get there on time.

This ticket only allows for one checked bag. The second will cost 100 Dollars. (The web site said 65 Pounds, itself a recent increase from 50 Pounds.)

(Yes … we’re traveling)


📸 230922🖇️ #MBSept This is ‘The Road to Hana’ … just not t

📸 230922🖇️ #MBSept

This is ‘The Road to Hana’ … just not the one that everyone else travels on … maybe because a different part of it is ‘giving way to the Ocean’?


delete


Stoic 22


Ben Werdmüller

Mastodon 4.2

Lots of good new changes here - and in particular a much-needed search overhaul. My private instance is running the latest and I like it a lot. #Technology

[Link]


reb00ted

Fediverse Testsuite Slides from FediForum

Here are the two slides I showed at FediForum to help with the discussion on a possible Fediverse test suite.

Thursday, 21. September 2023

John Philpin : Lifestream

🔗 McCarthy says hard-right Republicans ‘want to burn whole p

🔗 McCarthy says hard-right Republicans ‘want to burn whole place down’..

FINALLY … he gets it. But they can be stopped if Kevin has the cahoonas.


Ben Werdmüller

John Grisham, other top US authors sue OpenAI over copyrights

"A trade group for U.S. authors has sued OpenAI in Manhattan federal court on behalf of prominent writers including John Grisham, Jonathan Franzen, George Saunders, Jodi Picoult and "Game of Thrones" novelist George R.R. Martin, accusing the company of unlawfully training its popular artificial-intelligence based chatbot ChatGPT on their work.” #AI [Link]

"A trade group for U.S. authors has sued OpenAI in Manhattan federal court on behalf of prominent writers including John Grisham, Jonathan Franzen, George Saunders, Jodi Picoult and "Game of Thrones" novelist George R.R. Martin, accusing the company of unlawfully training its popular artificial-intelligence based chatbot ChatGPT on their work.” #AI

[Link]


John Philpin : Lifestream

2023 | 09 | 21 Follow The Daily Stoic All The

2023 | 09 | 21

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.


🔗 News Corp shareholders renew call for Rupert Murdoch to st

🔗 News Corp shareholders renew call for Rupert Murdoch to step down. Ten years later …. 🔗 Rupert Murdoch stepping down as chair of Fox and News Corp.

Anybody else finding under iOS that uploading an image from

Anybody else finding under iOS that uploading an image from ‘favorites’ totally fails? Seems to have stopped working under iOS 17


📸 230921🖇️ #MBSept Did the Elephant fall in .. or is it cl

📸 230921🖇️ #MBSept

Did the Elephant fall in .. or is it climbing out?


The 11 senators voting against Brown were all Dicks Republ

The 11 senators voting against Brown were all Dicks Republicans: Mike Braun (Ind.), Ted Cruz (Tex.), Josh Hawley (Mo.), Mike Lee (Utah), Roger Marshall (Kan.), Eric Schmitt (Mo.), J.D. Vance (Ohio), Ron Johnson (Wis.), Cynthia M. Lummis (Wyo.), Marco Rubio (Fla.) and Tuberville.

Wednesday, 20. September 2023

John Philpin : Lifestream

Just been reading threads, posts and comments on various sit

Just been reading threads, posts and comments on various sites … it’s so relaxing, beats sleeping 🤪. One thought before I once again try to cross 🔗 The Point Of No Return.

Doesn’t our side have guns?


I don’t care that his account was hacked … I’m more concerne

I don’t care that his account was hacked … I’m more concerned about the 10 million sad people who follow him. They need to help.


I would have thought that one a week would be more than suff

I would have thought that one a week would be more than sufficient. Or is there something they know that I don’t?


Ben Werdmüller

‘The scripts were the funniest things I’d ever read’: the stars of Peep Show look back, 20 years later

Before there was Succession, there was Peep Show. A brilliant piece of TV that launched a bunch of careers. If you haven't seen it, give yourself the gift of checking it out. #Culture

[Link]


Phil Windleys Technometria

Digital Identity Podcasts

I've been invited to be on a few more podcasts to talk about my new book Learning Digital Identity from O'Reilly Media. That's one of the perks of writing a book. People like to talk about it. I always enjoy talking about identity and especially how it's so vital to the quality of our lives in the digital world, so I'm grateful these groups found time to speak with me.

First, I had a great discussion about identity, IIW, and the book with Rich Sordahl of Anonyome Labs.

I also had a good discussion with Joe Malenfant of Ping Identity on digital identity fundamentals and future trends. We did this in two parts. Part 1 focused on the problems of digital identity and the Laws of Identity.

Part 2 discussed how SSI is changing the way we see online identity and the future of digital identity. We even got into AI and identity a bit at the end.

Finally, Harrison Tang spoke with me in the W3C Credentials Community Group. We had a fun discussion about the definition of identity, administrative and autonomic identity systems, and SSI wallets.

I hope you enjoy these. As I said, I'm always excited to talk about identity, so if you'd like to have me on your podcast, let me know.


John Philpin : Lifestream

🔗 This Week In Disgrace. … Just in case you have lost track

🔗 This Week In Disgrace. … Just in case you have lost track of the celebrity faux pas in the past week.


📸 230920🖇️ #MBSept Disruption isn’t new. Disruption isn’t

📸 230920🖇️ #MBSept

Disruption isn’t new. Disruption isn’t always about Silicon Valley. Disruption isn’t ’American’. Imagine the disruption to people’s lives in the tiny hamlet of Akaroa when this was installed in 1901.


2023 | 09 | 20 Follow The Daily Stoic All The

2023 | 09 | 20

Follow The Daily Stoic

All The Posts

RSS Feed

If one really catches your eye, don’t forget to click on ‘the date’ below - there may be additional commentary.


Ben Werdmüller

Revealed: top carbon offset projects may not cut planet-heating emissions

“The vast majority of the environmental projects most frequently used to offset greenhouse gas emissions appear to have fundamental failings suggesting they cannot be relied upon to cut planet-heating emissions, according to a new analysis.” #Climate

[Link]

Tuesday, 19. September 2023

John Philpin : Lifestream

Good job with the @gruber episode @martinfeld …. noting that

Good job with the @gruber episode @martinfeld …. noting that it is (I think) 🎙️🔗 the longest one you have published (?) … fitting for the guest 🤪 … seriously though, nice job and good to hear Mr G wax lyrical on his story. Lovely listen.


Simon Willison

The WebAssembly Go Playground

Jeff Lindsay has a full Go 1.21.1 compiler running entirely in the browser.

Via progrium/wasm-go-playground


Ben Werdmüller

Online Safety Bill: Crackdown on harmful social media content agreed

This is a horrendous bill that is designed to encourage self-censorship, including around topics like "illegal immigration", as well as vastly deepen surveillance on internet users. And Britain passing it will likely embolden other nations to try the same. #Technology

[Link]


@_Nat Zone

(Updated) [September 28] I will be giving a keynote at IdentityTrust Conference 2023 – Building Trust in Digital Identity @ London

On September 28, I will be giving a keynote at IdentityTrust Conference 2023, hosted by OIX.

The title is "The Fallacy of Decentralisation". If you happen to be in London, please drop by. The session immediately after mine will be given by Paul Scully, Parliamentary Under Secretary of State at the Department for Science, Innovation and Technology, and in the afternoon I will also take part in a fireside chat titled "Wallets and Frameworks".

About this conference

Digital credentials, presented by trusted users from trusted smart wallets, can change the way business is done, both online and in person.

Identity Trust 2023 brings together, from around the world, acceptors of digital credentials and the parties that create and provide services within the digital ID ecosystem.

The conference explores why credentials from a user's digital ID wallet can be accepted with trust and confidence. Is the user who they claim to be? Are they entitled to access the service?

It looks at how digital ID is taking hold in key use cases such as age verification, access to financial services, home buying and selling, travel, and employment vetting.

It also takes a closer look at how digital ID works across borders. Whether a user is visiting a country or accessing a service remotely, a "smart wallet" needs to adapt dynamically to the requirements of each acceptor the user wants to interact with.

Join more than 300 attendees to see how the use of digital ID is accelerating and how it can benefit your organisation and your customers.

If you cannot attend in person, the main-stage sessions of the event will be live-streamed.

Date and time: Thursday, September 28, 2023, 08:30-19:00
Venue: County Hall – 3rd Floor – Southbank Wing, Belvedere Rd, London, SE1 7PB, UK
Tickets: https://IDENTITY_TRUST_2023.eventbrite.co.uk
OIX Identity Trust 2023 – Agenda at a glance

I have copied the agenda below for convenience, but it may be updated, so please check the link below for the latest information.

(Source) https://openidentityexchange.org/networks/87/events.html?id=7780

Morning Session – Waterloo Suite

09:10-09:20 Welcome and Introduction to the day – Mags Moore, Chair – OIX

09:20-09:40 KEYNOTE – Authentiverse – Louise French, Strategy Director – The Future Laboratory.
Louise explores the ‘authentiverse’ – a ‘citizen first’ perspective on digital trust to unpack, analyse and explore some of the key concepts shaping digital trust and identity authentication.

09:40-10:00 KEYNOTE – The Fallacy of Decentralisation – Nat Sakimura, Chair – OIDF.
Nat highlights how hyper-decentralisation into trust ecosystems managed by just a few hosts is not necessarily a good thing.

10:00-10:15 KEYNOTE – UK Government update – Paul Scully – Parliamentary Under Secretary of State, Dept for Science, Innovation & Technology.
Paul will give an update on the Government’s progress to enable the widespread use of digital identities across the UK economy.

10:15-10:35 SPONSOR PRESENTATION – Unlock the power of digital ethics to build and maintain trust in digital identity adoption – Jen Rodvold, Head of Practice, Ethics & Sustainability Consulting – Sopra Steria.
Jen will explore how we can, as an industry, encourage safe and secure digital identity adoption through digital ethics that benefits governments, companies, users and society as a whole.

10:35-11:00 BREAK

11:00-11:15 PRESENTATION – The DNA of Digital ID: Wallets, Frameworks & Interoperability – Nick Mothershaw, Chief Identity Strategist – OIX

11:15-11:35 PRESENTATION – OWF Update: Progress so far – Joseph Heenan – OWF.
7 months ago the Open Wallet Foundation launched with the mission to create open source software components for secure interoperable wallets. Get an update on what has happened so far and an outlook on what to expect.

Parallel streams – Stream 1: Waterloo Suite / Stream 2: Duke Suite

11:45-12:15
Stream 1: PANEL – EU eIDAS and EU LSP updates. Moderator: Marie Austenaa – VISA
Stream 2: PANEL – Digital ID in Finance. Moderator: Chris Burt, Biometric Update. Panellists: Philip Mind – UK Finance; Larry Banda – TISA; TBC

12:20-12:50
Stream 1: Inclusion Challenges & How we Solve them to make Digital ID a Success? Rachelle Sellung, Fraunhofer IAO – Analysis of UX in Early Wallet Implementations; Dr Sarah Walton, Women in Identity – WiD Code of Conduct; Elizabeth Garber, Open ID Foundation – Government Approach to Inclusion
Stream 2: PANEL – The Role & Importance of a Secure Digital Identity in Home Buying and Selling. Moderator: Stuart Young – MyIdentity Etive. Panellists: Alex Philipson, Group Sales Manager – Bellway Homes; Timothy Douglas, Head of Policy & Campaigns – Propertymark; Peter Rodd, Law Society Council Member for Residential Conveyancing – The Law Society; Barry Carter, Chief Operating Officer – Hinckley & Rugby Building Society

12:50-14:00 LUNCH

14:00-14:30
Stream 1: PANEL – OIX Global Interoperability – Trust Frameworks. Moderator: Steve Pannifer, Managing Director – Consult Hyperion. Panellists will discuss what the interoperability of IDs across frameworks means to them. Panellists: Ruth Puente – DIACC; Connie LaSalle – NIST; Ramesh Narayanan – MOSIP
Stream 2: PANEL – Retail/Age – Age Estimation and Universal Acceptance. Moderator: Chris Burt, Biometric Update. Panellists will discuss: Pass face-to-face DPoA in the UK, online age – forthcoming UK regulation, age estimation vs assurance, age regulation emerging in other territories, EU Consent. Panellists: Iain Corby – AVPA; Mel Brown – PASS Scheme; Ros Smith – OFCOM

14:35-15:05
Stream 1: FIRESIDE CHAT – Wallets and Frameworks. Facilitator: Don Thibeau, OIX Vice-Chair. With Nat Sakimura – OIDF; Nick Mothershaw – OIX; Juliana Cafik – OWF
Stream 2: PANEL – HR Vetting using Digital ID. Moderator: Bryn Robinson-Morgan, Moresburg Ltd. Panellists will discuss: One year in – how is this progressing? What have the challenges & successes been, as well as what challenges are there still to address? Plus, are all certified providers equal, and the approach and understanding of the inclusion challenge. Panellists: John Harrison – Right to Work, Policy Manager, Home Office; Sarah Clifford – Disclosure & Barring Service (DBS); Keith Rosser – Reed Screening

Afternoon Session – Waterloo Suite

15:05-15:30 BREAK

15:30-15:50 SPONSOR PRESENTATION – Digital ID – Single Sign-on for fraud? – Chris Lewis, Head of Solutions – Synectics Solutions.
Myth or prophecy? How will the advent of the reusable digital identity affect fraud? What new risks could digital IDs create and how do we mitigate them? Join us for an enlightening keynote as we delve into the world of reusable digital identities, their vulnerabilities, and the looming threat of fraud. We’ll explore the current state of digital identity, shedding light on how digital identity affects the current fraud landscape. Will a compromised digital identity become “Single Sign-on for fraud”? You will gain a comprehensive understanding of the challenges posed by it, and insights into the types of controls and mechanisms digital identity providers and relying parties can put in place to mitigate this risk.

15:50-16:10 PRESENTATION – Case Study – How Denmark became a global digital frontrunner – Roland Eichenauer, NEXI Group.
The national eID solution – today adopted by 99% of the population – has played a key role in digitizing banks and enabling the journey towards a Digital Denmark, also fostering public and private sector harmonization.

16:10-16:55 PANEL – Digital ID Adoption. Moderator: Geraint Rogers, Daon.
Three panellists representing acceptors of Digital ID will talk about the benefits they see from Digital ID and the challenges to be overcome.

16:55-17:00 Wrap up – Mags Moore, Chair – OIX

17:00-19:00 Post Conference Drinks Reception

[September 26] I will be speaking at the launch event for the OECD Recommendation on the Governance of Digital Identity

Starting at 12:00 CEST on Tuesday, September 26, the OECD will hold a virtual launch event for the OECD Recommendation on the Governance of Digital Identity, which was adopted in June following close consultation with many of you.

The event will feature high-level remarks from Japan, Brazil, Italy, and India, followed by a moderated expert discussion with participants from the Treasury Board of Canada Secretariat, the OpenID Foundation, and the EU Digital Identity Wallet Consortium.

I will be taking part in the expert panel.

If you would like to attend, please register here.

For more details about the event and the Recommendation, please see the OECD website.

The programme, in Paris time, is as follows. The conference will be held over Zoom.

DRAFT AGENDA

26 September 2023, 12:00-13:30 CEST

12:00 – 12:05 Opening Welcome
– Allen Sutherland, Vice Chair, OECD Public Governance Committee

12:05 – 12:15 Presentation of the OECD Recommendation on the Governance of Digital Identity
– Elsa Pilichowski, Director, Public Governance Directorate, OECD

12:15 – 12:45 High-level Panel
– Paolo Donzello, Director General, Department for Digital Transformation, Italy
– Luanna Roncaratti, Deputy Secretary of Digital Government, Brazil
– Amitabh Kant, India G20 Sherpa (video message)
– Representative from Japan
– Representative from the European Commission (TBC)
– Representative from UNDP (TBC)

12:45 – 13:25 Moderated Expert Panel Discussion
– Michael Goit, Director of Policy, Treasury Board of Canada Secretariat
– David Magård, Coordinator, EU Digital Identity Wallet Consortium (EWC) and Senior Advisor, Swedish Companies Registration Office
– Nat Sakimura, Chairman of the Board, OpenID Foundation
Moderator:
– Allen Sutherland, Vice Chair, OECD Public Governance Committee

13:25 – 13:30 Closing Remarks

(Source) OECD

Registration is required to attend. Please register here.


Simon Willison

LLM 0.11

I released LLM 0.11 with support for the new gpt-3.5-turbo-instruct completion model from OpenAI.

The most interesting feature of completion models is the option to request "log probabilities" from them, where each token returned is accompanied by up to 5 alternatives that were considered, along with their scores.
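
For anyone curious what that looks like against the API directly, here is a minimal sketch using the pre-1.0 openai Python package's completions endpoint. This is the underlying OpenAI API rather than the llm CLI, so the option names shown are OpenAI's, and the key is a placeholder.

    # Sketch: request log probabilities from the completion model via the
    # pre-1.0 `openai` package (the API that llm wraps). Not the llm CLI itself.
    import openai

    openai.api_key = "sk-..."  # placeholder

    response = openai.Completion.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Once upon a",
        max_tokens=5,
        logprobs=5,  # up to 5 alternative tokens, with scores, for each position
    )

    choice = response["choices"][0]
    for token, alternatives in zip(choice["logprobs"]["tokens"],
                                   choice["logprobs"]["top_logprobs"]):
        print(repr(token), dict(alternatives))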


Ben Werdmüller

I'm going to keep using Zapier for my link blog

The way my link blog works is like this:

I save an article, website, or book I thought was interesting to a database in Notion using the web clipper, together with a description and a high-level category. (These are Technology, Society, Democracy, and so on.) I also have a checkbox that designates whether the link is something I’d consider business-friendly.

Zapier watches for new links. When it finds one, it publishes it to my website using the micropub protocol. (My website then tries to send a webmention to that site to let it know I’ve linked to them.)

Then, it publishes the link to my Mastodon profile using the top-level category as a hashtag. If the link is to a book, it also adds the bookstodon hashtag.

Following that, it publishes to all my other social networks via Buffer, without the hashtag. (The exception is my Bluesky profile, which I had to write some custom API code for). If the business-friendly box was checked, that includes publishing to my LinkedIn profile.

If I’m feeling particularly motivated, I’ll copy and paste the link to my Threads profile, but because there’s no API, it’s a fully manual process. Which means I usually don’t.
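
For context, the micropub step itself is tiny. Here is a rough sketch of the kind of request involved; the endpoint URL and token are placeholders, and the fields are the Micropub spec's standard ones rather than anything specific to my site.

    # Rough sketch of a Micropub "create post" request (the step described above).
    # Endpoint and token are placeholders; real sites advertise their endpoint via
    # a rel="micropub" link and authenticate with an IndieAuth bearer token.
    import requests

    MICROPUB_ENDPOINT = "https://example.com/micropub"  # placeholder
    ACCESS_TOKEN = "xxxx"                               # placeholder

    resp = requests.post(
        MICROPUB_ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={
            "h": "entry",
            "name": "A link I found interesting",
            "content": "Short description of why the link matters.",
            "category[]": ["Technology"],
            "bookmark-of": "https://example.org/some-article",
        },
    )
    resp.raise_for_status()
    print("Published at:", resp.headers.get("Location"))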

Very occasionally, Zapier will pick up a link before the Notion entry has fully saved, which means that links post without a description or a category. Then I either shrug my shoulders and accept that I have some weird posts on my timeline, or I go back and edit or repost each and every one.

Because of this bug, I’ve thought about writing my own code to do all of the above on my server. It would work exactly the way I want it to. It would be cheaper, too: I pay for Zapier every month, and the cost adds up.

But while I could do this, and the up-front cost would certainly be lower, what if something goes wrong? Let’s say LinkedIn changes the way their API works. If I wrote the connection myself, I would need to keep my code up to date every time this happened — and, in turn, stay on top of codebase changes for every single social media platform I used.

And the truth is: I’m tired, friends. I want to be really careful about the amount of code I set myself up to maintain. It might seem like a simple script now, but over time I build up more and more simple scripts and, cumulatively, I end up buried in code.

As I get older, I find myself optimizing that cost more and more. I’d much rather pay something up-front that saves me a ton of time and cognitive overhead, because both of these things are at such an enormous premium for me.

I could also just not post to those social media accounts, or do it fully-manually, but there’s something really satisfying about publishing once and syndicating everywhere I’m connected to people. I could save my links straight to something like Buffer, but I also like having my categorized database of everything I’ve shared. And Notion makes it easy to save links across my devices (I’m sometimes on my phone, sometimes on my laptop, sometimes on my desktop).

So I’m keeping Zapier, at least for now. I like keeping my links, and I like sharing them. And, more than anything else, I like not having to maintain the code that does it.


Remote work may help decrease sexual assault and harassment, poll finds

“About 5 percent of women who were working remotely reported instances in that time, compared with 12 percent of in-person women workers. Overall, only 5 percent of remote workers reported instances in the past three years, compared with 9 percent of those who work fully or mostly in person.” #Labor

[Link]


Migrants tracked with GPS tags say UK feels like ‘an outside prison’

I had no idea Britain was fitting migrants and asylum seekers with ankle bracelets and surveilling them to this level. It seems impossible that this is something people would think is right and just. The dystopian cruelty is mind-boggling. #Democracy

[Link]


A (more) Modern CSS Reset

I particularly valued the explanations here. I spend less time coding these days - I can go weeks without writing a line - and I’m determined to keep my skills up. #Technology

[Link]


John Philpin : Lifestream

📸 230919🖇️ #MBSept Looking down on the edge of the world.

📸 230919🖇️ #MBSept

Looking down on the edge of the world.

Monday, 18. September 2023

Wrench in the Gears

Humility, Love, and Boundaries

This morning I received a response to my latest blog post, a piece I’d written about grief and family separation and controlled consciousness along with a description of site visits I did in Durham, NC related to military gaming simulations, neuroscience, and psychical research. It was sent by someone I know from my education activism days, an individual who’s done important work exposing the toxic discipline and financial schemes behind a particular no-excuses charter school franchise. I won’t quote from the email because the comments weren’t shared publicly. I do, however, want to unpack the content in broad strokes.

I’ll admit to being triggered by that email landing in my inbox. Blaring out from the page, at least that’s how it felt to me, was the sentiment – you are a talented person Alison, but you are not humble, and that’s a problem. I quickly started drafting a response. If I’m being perfectly honest, my reply was defensive and would probably only have served to reinforce the writer’s mental picture of me as a combative, hard-headed know-it-all. Upon reflection, I sensed the sender of the email, also a blogger, likely found my post equally triggering since it critiqued academia, the prevailing climate narrative, and political polarity. All three are topics about which the author holds strong opinions. So, I paused and made a hearty breakfast of poached eggs and crispy kale with a side of thick bacon slices, and then after finishing off a Moka pot, I decided to write my reply here instead.

The email sent to me opened with the lyrics from Grace Slick and The Great Society’s song “Someone to Love,” which was later re-recorded as “Somebody to Love.” According to Wikipedia, the group originally performed the song at The Matrix nightclub in San Francisco in 1965-66. For me this has synergy with my ongoing interest in Madeleine L’Engle’s “Wrinkle In Time” novel, which centers love as the only thing that can overcome IT, the mechanical ruler of the dead world of Camazotz. The song lyrics speak of truth turning into lies, joy and garden flowers dying, and a mind full of red. The answer is to find “somebody to love,” which given the nature of the personal rejection I’m navigating by the people I love is rather cutting.

As I interpreted the intent of the email, which is in itself a fraught enterprise, the implication seems to be that I had turned into an angry and joyless person. People who read my work or listen to my talks know that is not the case. Sure, the past few weeks have been terrible, not just because my father died – I had mostly come to terms with that. The worst part was dealing with the finality of being cast out by my living family and the deep woundedness I felt at that cold, clinical distancing.

This week I was able to mostly push my anger aside, because I continue to hope that the answer is love – that love will win in the end. The message being implanted in the minds of many today is that dissidents are dark, bitter people – people who can neither be trusted nor understood with minds full of “red” thoughts. In that way we can be dehumanized, marginalized. You don’t have to pay attention to bitter people. It gives you a pass.

Below is what I wrote in my unsent, draft response.

“I want to make it clear that I am not enraged. That is what the media juggernaut would have you believe. The masses are inhabiting narratives that have been strategically fed to them for years, decades even, by sophisticated digital content management systems. These systems have been set up to reinforce social segmentation, divisiveness, and teaming. Consumption of programmed information threatens to turn us into the human equivalent of social insects. Complexity and emergence leverage automated reactivity and pre-programmed social cues. The system is using playlists of content to manage entire populations, to trigger specific biochemical reactions. I sense we’re in a simulation that is being remotely guided by hormone manipulation and biochemical signaling. See this four-minute clip about neuro-economics and use of oxytocin to induce (or remove) social trust by Elizabeth Phelps of Harvard and Paul Glimcher, a neuro-economist from UPenn.

By making your critique about some aspect of my personality, you get to sidestep the content I’ve meticulously gathered on the ethical implications of guided consciousness, biosensors, game mechanics, and group mind. Please know, I’ve mostly made peace with my situation. I plan to find a little house in the forests and lakes of the Ozarks, put up a deer fence, make a garden, get a kayak, and reconnect with nature. I’ll quilt and maybe learn how to fish. I hear the White River offers amazing trout habitat. At the top of my list for now is the little town of Mountain View, Arkansas a center for the preservation of folk music, craft, and heirloom plants. I sense we all are instruments of the divine, energetic beings, members of life’s symphony. The byline of a Twitter handle of an online friend, a musician, is “I am a string.” A string yes, and who or what are we allowing to play us? As I see it now, the military-finance-big pharma psychiatric machine is working overtime to shove God off the conductor’s podium and install the Web3 layer of mathematical logic. I’m not going to stop my work, but I am going to change the context in which I pursue it.

As far as “The Great Society,” I understand it differently now. If you haven’t seen my site visit to the LBJ Presidential Library and School of Public Policy in Austin, it might be of interest.

I recognize that Elizabeth Hinton’s book, “From the War on Poverty to the War on Crime,” even in its critique, was setting up social impact finance and ultimately cybernetic signaling. She’s an agent of Harvard after all. Still, the history she lays out was super helpful to me as I started making sense of the ways socio-technical systems intersect with Skinnerian behavior modification and optimization metrics.”

I looked up the definition of humility to revisit what “humble” traits are: recognizing your own limitations, not thinking you are better than others, showing gratitude for team members, learning from those around you, and understanding your imperfections. Now, I would assert that I do have gratitude for those around me. We learn from one another even though our community is small in number. Many of the leads I pursue are shared with me by others. I may not always acknowledge that as loudly as I probably should, so let me do that now. Thank you all. I see you and appreciate you even if I don’t always say it.

I sense that by putting myself out publicly and framing my research through a lens of personal experience, some might imagine me to have a big ego. Egocentrism is the inability to recognize the needs of others or act with empathy. Egocentric people place their personal needs above those of others. What I’m struggling with is my feeling that I have been called to carry out a particular task at a particular time. Does this make me egocentric?

Should I set aside this calling and instead listen to people who are living out a totally different storyline that incorporates none of the cataclysmic changes now underway? Am I supposed to empathize with the wife of the guy managing multibillion-dollar investment portfolios that will run on ubiquitous sensing and derivatives markets in human behavior change? I can try and relate to her situation, but don’t expect me to bite my tongue and pretend I don’t have a problem with how all of this greenwashing is unfolding.

Maybe my single-minded enthusiasm for the topics I research is seen by others as boorish, impolite, and aggravating. Most people do not wish to have their ideas about civilization questioned. I get it. I have some degree of sympathy for their plight, but it doesn’t mean the things we talk about aren’t happening, aren’t relevant. Why can’t I just go along quietly and stop making the people around me so uncomfortable – especially since I don’t have a handy solution ready to pull out of my back pocket? Civil society, including educational institutions, religious groups, and political parties, has been set up to instruct us on how to be “good” within the confines of the game board that we call “civilized” life today. There are informal rubrics of socially-acceptable behaviors to which they imagine I must be oblivious. Is disciplined silence the key to being a “good” person in this stupid game? It feels like bullshit to me.

The pronouncement that I was not humble (or that I was proud / overbearing) felt like someone patting me on the head like a good little girl and sending me off to bed while the grown-ups took care of business. Who am I to presume I might be able to help shift the course of social evolution away from the cybernetic gangplank? I’m just a mom after all. Be humble Ally; stay in the background; think whatever you like; but don’t rock the boat in public. It’s unseemly. My husband recently told me, you don’t understand your effect on people. I should have asked, which people? People are not a homogenous monolith, at least not yet.

My family feels burdened by me. I think they imagine I have an over-inflated sense of self-worth. Though if they loved me unconditionally, they’d probably give me a big hug and be proud to be connected to a strong, grounded woman who is confident in her abilities and has a solid moral compass. I think I have a unique mind. I certainly don’t consider myself “better,” just “different.” I’m okay with being different. Each of us has God-given gifts, and I’m trying to use mine to advance right relationships. Since no one gave me an operating manual, and I only have a rough idea of what the end goal might look like, I’m learning and stumbling and recalibrating as I go along. I’ve chosen to do it out in the open to show that we can be fragile, creative, messy, and perhaps imperfectly perfect.

It is my strongly held feeling that we all have an obligation to talk about, grapple with, and come to terms with aspects of technological “progress” that are coming online right now before our eyes. While personally I believe many of these developments are unnatural and profane, I will not insist others agree with me. I will, however, continue to press for public conversations and informed consent. God has put this on my heart and given me resources to fulfill that responsibility. Who am I to turn my back on such an assignment?

It requires a healthy ego and sense of self-worth to pour out one’s personal pain onto the page for all to see. Quite a few comments on my recent posts indicate to me that unpacking my present anguish is helping others navigate their way through the dark night of the soul. I know my audience is a niche one. I left social media and realized what drove me was a quest for internal clarity about the nature of the world and how history has informed the digitally-mediated social communications (or more likely mis-communications) of today.

I’ve chosen to conduct my research by sharing it on the internet, in the digital commons, a place I’ve come to understand is treacherous and full of landmines. I pulled back on my participation in these algorithmically-engineered spaces a few years ago when I began to have negative, dramatic interactions with people online. The weaponized nature of these platforms sank into my bones with deep finality. While I still share observations on my blog and video channel, I’m not actively looking to convert people to my way of thinking. I don’t do interviews with people I don’t know anymore. I’m not aiming to lead anyone anywhere. I just want to stay over in my corner, thinking my own thoughts and playing with ideas rather than wading out into the storm to be buffeted by digital tempests. That’s such a time suck, and I have other things I’d rather be doing.

This person’s email expressed the view that I sought to educate through intimidation and disparaged those who couldn’t understand my perspective. I recognize from the work of Cliff Gomes that such sentiments have less to do with who I am than with the story the author of the email was listening to. It is easier to imagine me as a mean-spirited critic than to consider that they might not really want to know what I’ve been up to, because then they would be faced with the challenge of fitting it into a worldview where it just doesn’t fit. Jason has had similar things said to him. I suppose that confronting people with information that might undermine the vision of the world they hold at the core of their being could be seen as intimidating. Maybe that’s why people keep running away.

Our intention isn’t to be threatening. The tools of my trade, beyond relationship maps and hyperlinks to primary source documents, are flowers and rocks and even Bible passages. Is a sunflower laid down at an office park intimidating? I feel called to be a witness to the changes underway – to ask, insistently sometimes, for us to act responsibly lest we fall victim to a terrible Faustian bargain. I’m trying to be the voice of the firm parent to a child in a tantrum. Children find parents intimidating, but it doesn’t mean they don’t learn from them.

The email also implied I wanted to be everything and know everything, which is odd, because in the post I specifically mention I’ve come to realize no one can ever hold the entire “truth.” All we get are the slices of “reality” we curate from the information we bump into as we live our lives. What did resonate with me though was a line about the importance of boundaries in systems and that making distinctions is a vital cognitive act, which is an idea I’ve been exploring related to complexity and emergence.

The body works to distinguish good from bad, encapsulating and removing the latter to preserve life. Computational fitness landscapes and genetic algorithms are based on this process. If the goal of “civilization” is to merge natural life with engineered nano-machines and birth a global, distributed, noetic, biohybrid supercomputing system, it’s logical that polite society would shun anyone seeking to slow progress towards that goal.

As I’ve tried to explain to my husband numerous times, we seem to be occupying different slices of reality. It doesn’t mean one of us is wrong and one of us is right. We could both be right and still different. Each person curates the world they inhabit. Our conceptual immune systems are set up to minimize cognitive discomfort. Boundaries contain us. Boundaries organize our identities. Boundaries tell us who is in and who is out. In the slow boil that is Web3 digital identity and social steering, there are few incentives to think deeply and work to tear down manufactured boundaries that may be obscuring deeper understandings of the world we inhabit. I get it. I can empathize. That’s frightening to most people; boundaries make us feel safe.

There are no easy answers. The game mechanics have been structured so that we remain distracted as we get leveled up or cancelled on social leaderboards. For now, I’m choosing to view my cancellation as a back-handed blessing. Jason and I have a camping trip planned for October to explore Arkansas and see what there is to be seen – quartz, oaks, pine, bass, lakes, and streams. Maybe I’ll find a place where flowers will grow, joy is the norm, and the people I love will come find me there. For everyone I wish that you, too, can find a place to plant yourself, a place that brings you the personal satisfaction you desire and lets you develop into the person you were meant to be. For me, it’s time for reinvention, fingers crossed. Take the good parts, leave those which are no longer serving me, and uncover new dimensions in the human constellation that is Ally.


Simon Willison

Quoting D. Richard Hipp

Note that there have been no breaking changes since the [SQLite] file format was designed in 2004. The changes shows in the version history above have all be one of (1) typo fixes, (2) clarifications, or (3) filling in the "reserved for future extensions" bits with descriptions of those extensions as they occurred. — D. Richard Hipp



Ben Werdmüller

The Berkeley Hotel hostage

I know people who worked with Douglas Adams and I'm incredibly envious of them. He seems like someone I would have really enjoyed meeting - and his books (all of them) were a huge part of my developing psyche. This story seems so human, so relatable. Trapped by his success, in a way. #Culture [Link]



A New Low: Just 46% Of U.S. Households Subscribe To Traditional Cable TV

I've lived in the US for twelve years, and at no point have I even been tempted by traditional cable. Every time I encounter it, I wonder why people want it. It's a substandard, obsolete product. So this is no surprise. #Media [Link]



MyDigitalFootprint

We are on the cusp of AI developing traits or adapting in the same way living organisms do through evolution.


Mothwing patterns, often including structures resembling “owl eyes,” are a prime example of nature’s adaptation to survival.

Mothwing eyes are intricate patterns that have evolved over millions of years through a process of natural selection. Initially, moths developed cryptic colouration to blend into their environments, evading predators. Over time, some species developed wing scales with microstructures that reduced light reflection, helping them remain inconspicuous. These structures eventually evolved into complex arrays resembling the texture of eyes to deter predators, a phenomenon called “eyespot mimicry.” This natural error-creation adaptation likely startled or confused predators, offering those moths an advantage — precious moments to escape. The gradual development of these eye-like patterns underscores the intricate interplay between environmental pressures and biological responses, resulting in the remarkable diversity of moth wing patterns seen today.

Critically, moths are not and were not conscious in or of the development of eyespot mimicry or any other evolutionary adaptations. They did not think, “Let us moths create a pattern on the wing to confuse the owl.” Evolution is a gradual, unconscious process that occurs over generations through the mechanism of natural selection. Individual organisms do not consciously choose or design their adaptations; rather, random genetic mutations lead to variations in traits within a population. Suppose a particular trait, such as the “eyes” pattern on wings, provides some advantage in terms of survival or reproduction. In that case, individuals possessing that trait are more likely to pass on their genes to the next generation. Over time, these advantageous traits become more prevalent in the population. This process occurs without the organisms having any conscious intent or awareness of the changes in their traits. It results from environmental pressures and the differential survival and reproduction of individuals with different traits.

AI can develop new traits with advantages, but that does not make it conscious.

AI (as of 2023) does not possess consciousness or intent like humans. However, consciousness and the development of new survival traits are unrelated. Indeed, neither is the characteristic of “independent decision-making” linked to survival. Adaptation, evolution and the crafting of new advantages have no necessary link to higher-order thinking or awareness.

AI operates based on algorithms, data, and programming, and any development of new traits or capabilities at the start (where we are today) will be a direct result of deliberate human design and engineering rather than unconscious adaptation.

Simple algorithms such as genetic algorithms and neural architecture search can simulate processes that resemble aspects of evolution, optimising certain parameters or designs. AlphaGo (Google DeepMind) demonstrated the ability to learn and improve its gameplay through a combination of deep neural networks and reinforcement learning techniques based on data. AlphaZero went further in that it does not even need a data set to start, as its data is generated through self-play.
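To make the “simple algorithms” point concrete, the following toy evolutionary loop (selection plus random mutation, no crossover) is an illustrative sketch only, not code from any of the systems mentioned above; the population size, mutation rate and fitness function are arbitrary assumptions, and it is written in C# to match the code appearing later in this digest. Random copying errors plus selection pressure are enough for an advantageous trait to spread through a population without any individual choosing anything.

using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch: a toy evolutionary algorithm in which random copying "errors"
// (mutations) plus selection pressure spread advantageous traits through a population,
// with no individual consciously designing anything.
class ToyEvolution
{
    const int PopulationSize = 100;
    const int GenomeLength = 20;        // 20 binary "traits" per individual
    const double MutationRate = 0.01;   // chance that each trait flips when copied
    static readonly Random Rng = new Random(42);

    // Fitness here is simply the number of advantageous traits an individual carries.
    static int Fitness(bool[] genome) => genome.Count(trait => trait);

    // A random starting genome: no design, just noise.
    static bool[] RandomGenome() =>
        Enumerable.Range(0, GenomeLength).Select(i => Rng.NextDouble() < 0.5).ToArray();

    // Copying with occasional random errors: the only source of novelty.
    static bool[] Mutate(bool[] parent) =>
        parent.Select(trait => Rng.NextDouble() < MutationRate ? !trait : trait).ToArray();

    // Tournament selection: of two random individuals, the fitter one gets copied.
    static bool[] SelectParent(List<bool[]> population)
    {
        var a = population[Rng.Next(population.Count)];
        var b = population[Rng.Next(population.Count)];
        return Fitness(a) >= Fitness(b) ? a : b;
    }

    static void Main()
    {
        var population = Enumerable.Range(0, PopulationSize).Select(i => RandomGenome()).ToList();

        for (int generation = 0; generation < 50; generation++)
        {
            population = Enumerable.Range(0, PopulationSize)
                .Select(i => Mutate(SelectParent(population)))
                .ToList();

            Console.WriteLine($"generation {generation}: best fitness {population.Max(Fitness)}");
        }
    }
}

Fitness climbs generation after generation, yet nothing in the loop “knows” what it is doing; these are the unconscious mechanics the argument above says AI systems are beginning to reproduce.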

What is happening here?

AlphaGo & AlphaZero demonstrated that processes guided by predefined objectives and criteria set by human programmers or researchers have enabled new traits (moves) to be crafted. These “AI” implementations do not have self-awareness or independent decision-making ability, but that does not prevent the autonomous development of new traits or adaptations in the same way living organisms do through evolution. AlphaGo’s capabilities are still within the bounds of its programmed algorithms and training data. It doesn’t possess a consciousness and is a product of human engineering, designed to excel at a specific task, but as with AlphaZero and AlphaFold, such systems can create new traits and advantages.

Whilst AI systems such as AlphaGo and AlphaFold showcase the power of AI to adapt and improve within predefined parameters, they also highlight that we are on the cusp of AI developing traits or adapting in the same way living organisms do through evolution.

We are on the cusp of AI developing traits or adapting in the same way living organisms do through evolution.

We should not be surprised as these AI systems are trying to mimic how humans and living organisms learn and evolve; therefore, a natural consequence is evolution, the development of traits based on “errors” that provide an advantage.

There is a very fine line between AI systems operating under the guidance of human-defined objectives and constraints and such systems creating errors and adaptations, essentially improvements based on patterns learned from data, just like nature. The observation becomes interesting as data can now be created independently of human activity using self-play.

The development of new traits or improvements in AI based on “errors” in data will soon mimic the unconscious forces of natural selection, which we will only see once it has been created. Critically, we have to question “how will we know?”, because it is new and different. This is a question that regulation is not set up to address, let alone solve, and it is why regulating the AI industry makes no sense.

The development of new traits or improvements in AI based on “errors” in data will soon mimic the unconscious forces of natural selection, which we will only see once it has been created.

Questions for the directors and senior leadership team.

We are fully into automation and the implementation of AI across many of our systems, and indeed, we are using the data to make improvements that we did not see. Have you questioned whether this new trait has an advantage, how you have determined that it has an advantage, and for whom? Is the advantage for you, your ecosystem, your customer, or your AI?

Thank you Scott for seeding this.


The unintended consequence of data is to introduce delay and increase tomorrow's risk.


The (un)intended consequence of focusing on data, looking for significance, determining correlation, testing a hypothesis, removing bias and finding the consensus is that you ignore the outliers. 

Hidden in the outliers of data are progress, innovation, invention and creativity. The delay arises because, by ignoring this data and the signals from it, we slow everything down: we will always be late to observe and agree on what is already happening among those who are not driven by using data to reduce and manage today's risk. Our thrust to use data to make better decisions and to apply majority or consensus thinking creates delays in change and, therefore, increases future risk.


------

In our increasingly data-driven world, the unintended consequence of data often manifests as delay. While data is hailed as the lifeblood of decision-making, its sheer volume and complexity can paradoxically slow down processes, hinder innovation, and impede productivity. This phenomenon underscores the critical importance of managing data effectively to avoid unintended delays.

One primary way data leads to delay is through information overload. As organisations accumulate vast amounts of data, sorting through it can be overwhelming. Decision-makers may spend excessive time sifting through data, distinguishing relevant insights from the noise. This can result in analysis paralysis, where decisions are postponed indefinitely and opportunities are missed.

Data can also introduce delays when it is siloed within organisations. Departments may collect and store data independently, leading to fragmentation and redundancy. When data is not easily accessible across the organisation, collaboration suffers, and decision-making processes become fragmented. This can slow down projects and hinder the ability to respond swiftly to changing market conditions.

Moreover, the increasing focus on data privacy and security regulations has introduced a layer of complexity and delay. Organisations must navigate a labyrinth of compliance requirements, which can slow down data sharing and processing. The need for stringent data protection measures can sometimes clash with the need for agility and speed in decision-making.

The unintended delay caused by data can be mitigated through effective data management strategies. Investing in data analytics tools and platforms that can streamline data processing and analysis is crucial. Fostering a data-centric culture that encourages data sharing and collaboration can help break down organisational silos.

In conclusion, data is a powerful asset but can also be a source of unintended delay if not managed properly. Organisations must recognise the potential pitfalls of data overload, fragmentation, and compliance challenges and take proactive steps to mitigate these issues. With the right strategies and tools in place, data can be a catalyst for informed decision-making and innovation rather than a source of delay.





Ben Werdmüller

19th News/SurveyMonkey poll: The State of Our Nation

Lots of interesting insights in this poll, including on nationwide attitudes to gender-affirming care (only 29% of Republicans think their party should focus on it) and gun control (82% of Americans want to restrict access in domestic abuse cases). #Society [Link]



Damien Bod

Secure Angular application using Auth0 and ASP.NET Core with BFF


The article shows how an Angular nx Standalone UI hosted in an ASP.NET Core application can be secured using cookies. Auth0 is used as the identity provider. The trusted application is protected using the Open ID Connect code flow with a secret and using PKCE. The API calls are protected using the secure cookie and anti-forgery tokens to protect against CSRF. This architecture is also known as the Backend for Frontend (BFF) Pattern.

Code: https://github.com/damienbod/bff-auth0-aspnetcore-angular

Auth0 Setup

An Auth0 account is required, and a Regular Web Application was set up for this. This is not an SPA application and must always be deployed with a backend that can keep a secret. The Angular client can only use the APIs on the same domain and uses cookies. All application authentication is implemented in the trusted backend, and the secure data is encrypted in the cookie.

Architecture Setup

The application is set up to authenticate as a single unit and to remove the sensitive data from the client browser. The single security context has UI logic implemented in Angular and server logic, including the security flows, implemented in ASP.NET Core. The server part of the application handles all requests from the client application, and the client application should only use the APIs from the same ASP.NET Core host. Cookies are used to send the secure API requests. The UI implementation is greatly simplified, and the backend application can add additional security features because it is a confidential (trusted) client.

ASP.NET Core Setup

The ASP.NET Core application is set up to authenticate using OpenID Connect and to store the session in a secure cookie. All OpenID Connect providers require small, provider-specific flavors of OpenID Connect. The different OpenID Connect clients can all be implemented using the standard ASP.NET Core AddOpenIdConnect method. Most identity providers also offer product-specific clients which wrap this client, rename the methods, and pre-configure the provider specifics; when using those client-specific packages, you need to re-learn the APIs for each OpenID Connect server. The following code implements the OpenID Connect client for Auth0 and also acquires a delegated access token for the required scope. The access token is not required here; it is just added as documentation.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(options =>
{
    options.Cookie.Name = "__Host-auth0";
    options.Cookie.SameSite = SameSiteMode.Lax;
})
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = $"https://{configuration["Auth0:Domain"]}";
    options.ClientId = configuration["Auth0:ClientId"];
    options.ClientSecret = configuration["Auth0:ClientSecret"];
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.Scope.Clear();
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("email");
    options.Scope.Add("auth0-user-api-one");
    // options.CallbackPath = new PathString("/signin-oidc");
    options.ClaimsIssuer = "Auth0";
    options.SaveTokens = true;
    options.UsePkce = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters.NameClaimType = "name";
    options.Events = new OpenIdConnectEvents
    {
        // handle the logout redirection
        OnRedirectToIdentityProviderForSignOut = (context) =>
        {
            var logoutUri = $"https://{configuration["Auth0:Domain"]}/v2/logout?client_id={configuration["Auth0:ClientId"]}";

            var postLogoutUri = context.Properties.RedirectUri;
            if (!string.IsNullOrEmpty(postLogoutUri))
            {
                if (postLogoutUri.StartsWith("/"))
                {
                    // transform to absolute
                    var request = context.Request;
                    postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase + postLogoutUri;
                }
                logoutUri += $"&returnTo={Uri.EscapeDataString(postLogoutUri)}";
            }

            context.Response.Redirect(logoutUri);
            context.HandleResponse();

            return Task.CompletedTask;
        },
        OnRedirectToIdentityProvider = context =>
        {
            // The context's ProtocolMessage can be used to pass along additional query parameters
            // to Auth0's /authorize endpoint.
            //
            // Set the audience query parameter to the API identifier to ensure the returned Access Tokens can be used
            // to call protected endpoints on the corresponding API.
            context.ProtocolMessage.SetParameter("audience", "https://auth0-api1");

            return Task.FromResult(0);
        }
    };
});

The OpenID Connect client for Auth0 is configured using the values from the appsettings or the user secrets:

"Auth0": { "Domain": "your-domain-in-auth0", "ClientId": "--in-secrets--", "ClientSecret": "--in-secrets--" }

The API controller uses the secure cookie and the CSRF protection.

[ValidateAntiForgeryToken]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DirectApiController : ControllerBase
{
    [HttpGet]
    public async Task<IEnumerable<string>> GetAsync()
    {
        // if you need a delegated access token for downstream APIs
        var accessToken = await HttpContext.GetTokenAsync("access_token");

        return new List<string> { "some data", "more data", "loads of data" };
    }
}
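The registration of the anti-forgery service itself is not shown above. As a rough sketch only (the header name, cookie name and middleware below are illustrative assumptions, not values taken from the article or the linked repository), the setup that typically accompanies this pattern might look something like this:

// Illustrative sketch only; names are assumptions, not from the article.
services.AddAntiforgery(options =>
{
    // The Angular client echoes the request token back in this header on state-changing calls.
    options.HeaderName = "X-XSRF-TOKEN";
    options.Cookie.Name = "__Host-antiforgery";
    options.Cookie.Path = "/";
    options.Cookie.SameSite = SameSiteMode.Strict;
    options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
});

// Somewhere in the pipeline, issue a readable token cookie for the Angular client to send back.
app.Use(async (context, next) =>
{
    var antiforgery = context.RequestServices.GetRequiredService<IAntiforgery>();
    var tokens = antiforgery.GetAndStoreTokens(context);
    context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken,
        new CookieOptions { HttpOnly = false, Secure = true, SameSite = SameSiteMode.Strict });
    await next();
});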

Angular Setup

The Angular solution for development and production is setup like described in this blog:

Implement a secure web application using nx Standalone Angular and an ASP.NET Core server

The UI part of the application implements no OpenID Connect flows and is always part of the server application. The UI can only access APIs from the single hosting application.

Links

https://github.com/damienbod/bff-aspnetcore-angular

https://learn.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core

https://nx.dev/getting-started/intro

https://auth0.com/docs

https://github.com/isolutionsag/aspnet-react-bff-proxy-example

Securing Blazor Web assembly using Cookies and Auth0

Sunday, 17. September 2023

Ben Werdmüller

Nobody Will Tell You the Ugly Reason Apple Acquired a Classical Music Label

Makes complete sense: if you're charging a monthly subscription to access music, directing users to royalty-free music instead of other recordings will improve your margins. It doesn't say great things for classical music revenue in the future, though. #Technology [Link]



Wrench in the Gears

His Eye Is On The Sparrow, The Inchworm, and Me


What follows is a day-long outpouring of emotion. I’ve been back in Philadelphia for five days, and it felt like the right time to begin to process and document the past two weeks, the synchronicities and the heartache, before the memories fade into oblivion. My gut tells me these experiences hold life lessons, especially the passages chosen by the pastor for my father’s life celebration. Even if I can’t see them all right away, they will be there when I’m ready. I sense that it’s important to not lose track of these teachings, and perhaps by putting words to my emotions I will gradually be able to make sense of the chaos that surrounds my life at present.

I glanced over at the passenger seat and saw a chunky green inchworm waggling its front end through the air trying to figure out where it was. Certainly the cracked leather seat of an aging Subaru wasn’t its proper habitat. We were on NC Route 27 nearing Albemarle and the piney Uwharrie National Forest in the center part of the state. It must have joined the trip when I got off I-485 around Charlotte and pulled over to harvest some mimosa leaves and pods from the side of the road.

One of my fondest memories as a small child was of the hedged backyard behind the modest Fort Worth ranch house where I spent the first few years of my life. My dad, who died on September 1, had made me a wonderful sandbox around the base of a mimosa tree, a magical tree for a child with its fan-shaped pink flowers. I loved the doodle bugs / ant lions that lived in my sandbox. I loved the purple iris along the garage wall; blooms my mother wrapped in damp paper towels and crumpled aluminum foil for me to take to my preschool teachers. I loved the monarch butterflies that paused on the bushes around our patio on their ambitious trips up from Mexico and the tiger lilies along the back fence that shared the same shade of burnt orange. Our gardenia bushes were surrounded by some sort of volcanic stone “mulch” that must have been all the rage in the early 1970s. I remembered their sweet scent when years later I chose them for my wedding flowers, a wrist corsage saved for decades in an archival box under the bed with my classic cotton lace Laura Ashley dress. The shriveled corsage I tossed in the trash as the big clean-up of Ally’s life continues. The dress I cut up to be remade into a quilt someday. Little Ally’s world inside that backyard hedge was a natural wonder, small in scale but magnificent to nurture a child’s imagination. At this point in my life I am hoping to get back to that place where I was when I was four, a place of quiet gratitude. After spending my adulthood in the big city, now a smart city, I’m trying to figure out where I really belong.

I was heading north, up I-95. I couldn’t say I was going “home” really, because I don’t have a “home” at the moment. Yes, I have shelter until we put our row house on the market in the spring. But “home is where the heart is,” right? Presently, my heart is full of holes with ragged gaps that were once filled with love – maybe not the brilliantly burning love of youth, but the mature type of love, a steady bank of glowing embers. My father’s passing brought the reality of my situation into painful focus. I can no longer hold onto naive ideas about keeping the home fires burning in anticipation of a time when the three of us could remake ourselves into some new kind of family or even that my mother holds unconditional love for me. No, everything seems conditional now, contingent on proof of cognitive compliance. Their cancellation of me has been cemented into their own identities. The people who were once closest to me now exist in direct opposition to the person they imagine I’ve become. There is no way to dissuade them, to show them that I really am me, the same me I’ve always been. I realize the framing they’ve embraced since the lockdowns cannot shift without destabilizing the shaky narrative they’ve chosen to inhabit. I am the problem. I have to be the problem in order for their reality to remain steady.

Arriving in my fifties at a vantage point where I’ve begun to see the game of life for what it is has made me unlovable. No one will tell me what my unforgivable sins are, other than that I have high moral expectations, apparently spend too much time on my research (as opposed to, say, watching house-flipping shows, Netflix series, or soccer games), and hold a low opinion of Bill Gates and his foundation. Though honestly, anyone who pays attention to my work knows I moved on from Gates to Marc Andreessen, Protocol Labs, and Web3 over two years ago.

Is it because I no longer listen to NPR? Read the Washington Post and New York Times? Cheer for the Eagles? Subscribe to the narratives spun out on mainstream and social media – either side, progressive or conservative? Is it because I look up with concern over the streaks that crisscross our skies? Oppose compulsory public health interventions, digital identity, smart dust, and blockchained cloud minds? I lived for decades in the city of brotherly love imagining it to be a tolerant place where culture and diversity were valued. Either I was wrong or this new normal means diversity can only be tolerated if it conforms to established rubrics informed by “trustworthy,” real-time data flows.

The vast majority of Philadelphians cannot or will not acknowledge there is a game underway. It is a game of human computation where we’ll be expected to perform our humanity, emitting signals in a machine-readable format so artificial intelligence and quantum computers can parasitize our imaginations, emotions, and souls. Distributed layers of smart contract protocols will guide social evolution through complexity towards emergence, gleaning useful patterns and products from our collective actions and most intimate relationships.

I can see this shift will be sold as libertarian commoning, uniting the populist left and right in a long-planned Transpartisan campaign. I expect many will be happy to become “self-sovereign” agents in “play to earn” crypto gaming simulations. They choose not to see the infrastructure of extended-reality being installed around them or consider its origins in militarized behavioral psychology. It feels almost impossible to motivate people to start wrapping their minds around the dire implications of ubiquitous sensing technology, blockchain ledgers, and bio-digital convergence. The learning curve is steep, and not enough people have the stamina, focus, or willpower to take the deep dive. Few want to leave the cave; the shadows are captivating. My husband keeps telling me that I “left him” and that “I changed,” but I would never have intentionally left our family. I would never ask him or our child to become someone they weren’t; though, sadly, they could not do the same for me. My experience since 2019 has been that people inside the cave fear those who’ve wandered outside and come back with a new perspective. Maybe it’s trite to say this feels like Campbell’s Hero’s Journey cycle, but it does. I’m not sure if I’ve crossed the first threshold or am making my way through the road of trials. In any event, none of this is pleasant.

When I noticed the inchworm on the passenger seat, I was on my way to Durham, NC. I’m not up for ten-hour drives and needed an overnight stop as I made my way back north. My father had died ten days before. I was with him. It was just him and me. I held his hand, my tears dampening the stubble on his cheeks, as I pressed my heart against his while he made his passage. Compounding the trauma of losing him was the challenge of moving forward with the details of his life celebration as my mother undermined my efforts and my estranged child and husband emotionally complicated the proceedings. This was not a time of family togetherness and neighbors bringing casseroles – not by a long shot. A sweet silver lining was the time I spent with my sister-in-law and niece. Together, we put thoughtful touches on his life celebration: Kansas sunflowers, vintage photos, my father’s favorite junk foods.

In spite of all the difficulties, I know I made my father proud, reading the eulogy I wrote for him and singing, without musical accompaniment, the lullaby he sang me when I was little. Several people told us that it was the most personal, touching service they’d ever attended and complimented my skills in writing and public speaking, asking if I did that for a living. Not as a living, no, but as my calling. In the days after his passing, I used my gifts for my dad. A few people who attended, even though they had never met my father, said that after the service they felt like they really knew him and what a strong, kind, faithful man he was.

I ordered a subdued white and green arrangement for the altar, with hydrangeas for my mother – her favorite. For the reception table I chose boldly-colored flowers, including sunflowers, a nod to my dad’s midwestern roots. As I was picking up catering trays and napkins at the party store, I realized I needed a few things from the grocery next door. There was a seasonal display at the checkout, pails of floral sunshine that seemed to have been placed there just for me. I knew bouquets of sunflowers would be a perfect addition to the event, so I grabbed three of them. My sister-in-law graciously agreed to pick up vases at the thrift store. She, my niece, and I each made a tent card to be placed on the church-lady punch and cookies tables: “Jerry Hawver, our Kansas sunflower, lit up our lives.” I was tempted to add a pumpkin to the display, because my dad used to tell us stories that he, the fourth child, was born way overdue. When he arrived on October 1, 1942, he was over nine pounds, with a complexion that was a sort of  jaundiced yellow. People said he looked just like a pumpkin. Out of respect, and because people probably wouldn’t understand, I refrained.

I knew that in the future I would remember my dad whenever I saw sunflowers with their exuberant shade of yellow. Yesterday, I was stressed about the future and money and dealing with conflict with my husband, and as I slowed to a stop in the Chamounix part of Fairmount Park on the way to drop off more of my former life at Goodwill, I saw a single sunflower plant covered in a dozen blooms. It was all by itself along the shoulder of the road right next to the stop sign, in part shade. It was definitely not the kind of spot you’d expect to see a sunflower. We don’t have many sunflowers here in Philadelphia, and I took it as a sign. One of my mother’s contributions to the service was to request a solo by the organist, the hymn “His Eye Is On The Sparrow,” popularized by Ethel Waters. The lyrics are taken from the words of David in the Psalms. God looks after the sparrows even though they neither reap nor sow, implying that of course God’s eye is on all of us as well. Seeing that sunflower reminded me of the sentiment behind the hymn, that God was looking after me in my times of trouble. I pictured my father next to him, restored and whole. I took a deep breath and calmed down. Thanks, Dad.

Driving along the back roads towards Durham, I paid special attention to the bungalows and ranch houses. They were the kind of houses my dad was raised in and the kind of starter homes he and my mother raised us in until they upgraded to the two-story corporate suburban models a Procter and Gamble salary could support. I remember how excited I was when we relocated to Louisville, KY in first grade and bought a house on a fall-away lot with a partially-finished basement that had stairs. As a child that felt like luxury! The driveway of the house on Weissinger Road sloped towards the backyard and was perfect for big-wheels races. Those years my little brother and I roamed the neighborhood playing in the not-yet-developed wood lots and stormwater ditches. I remember being fascinated by the quartz crystals in the stones that lined the banks, the crayfish you could find occasionally, and the clay deposits my friend Andrea and I would fashion into lop-sided pinch pots as we sat on a wall by her garage lined with marigolds that made for colorful potion ingredients. Those are good memories, memories I should remember to tap into for the journey ahead of me.

The houses I passed on Route 27 were like the houses of my maternal grandparents and great aunt – houses with chest freezers, home-canned goods, big gardens irrigated by wells, kitchens filled with the smell of homemade bread toast and jewel-like gelatin squares (Knox blocks) in the fridge. My grandparents were far from perfect, but as I’m entering this new phase of my life, I’m developing a new appreciation for the frugal way they lived. I’ve been prowling online real estate listings, trying to imagine my landing place, even though I know I’ll have to wait until spring when we sell our Philadelphia home. Still, I’m glad to be moving beyond house as a status symbol – keeping up with the Joneses kitchens, deluxe ensuite bathrooms, and prioritizing potential for market appreciation. I have a child who’s grown, so school districts are not a concern. As long as I can find a sturdy, modest ranch or bungalow that I can heat for a reasonable price, I’m fine with 1970s cabinets and 1940s bathroom tile. At this point, they’re practically antiques and I’m a historic preservationist. Maybe once I get settled and put a kitchen garden in, I can prove to my mother that I am worthy of stewarding the carved wooden family hay fork, brought from the Volga by my German immigrant forebears. I guess that would be coming full circle.

I kept looking out of the side of my eye at my unexpected guest. It wasn’t one of the tiny critters that drift down from the treetops by a slender thread, but a plump, juicy fellow about as big around as a pipe cleaner and almost an inch and a half long. It was crawling around on the bag of materials I’d gathered to set an intention – gardenias, a mushroom, a quartz rock, some sunflowers, and matching yellow card stock hearts with 1 Corinthians 16:13-14 written on them, encouraging us to have faith and be strong, brave, and loving. I’ll admit to having failed at the loving part that week, losing myself in anger over the abandonment I felt wash over me. I’ve mostly pulled myself together, but I know that what transpired in the aftermath of losing my father was that I really lost (almost) all family relationships.

As much as I’d held onto the hope that if I was good enough, the people I thought should love me would love me, I now recognize that the people whose love and companionship I desire don’t know me anymore and have no desire to know me. My mother was strangely enraged by the eulogy I wrote because it didn’t include her pain. One night she grabbed me by the shoulders, shook me, and called me a bully for writing it. Dozens of kind comments about it had already been left on my blog. The whole episode was surreal. When I tried to tell my husband what had happened, he couldn’t seem to muster much empathy. The people around me seem to want the body that holds my spirit to metamorphose (or regress?) into another kind of person, a person who will agree to play the game, a person who never left Plato’s cave, a person who will conveniently fit into some archetypal box the media created for the masses to inhabit. I just can’t.

My experience has been that my presence continues to be a source of discomfort in their lives, a nagging pain that must be avoided. I can’t give them what they want, which is for me to become someone else, for me not to have evolved as a human being caught up in an era of immense changes to which the majority of people around me are oblivious. So, I’m trying to ball up the grief of losing my dad with all the other rejections that flowed after that. Maybe it will be easier to process this mass of heartache all in one go rather than let it drag out for years, poisoning me with bitterness. I should probably be grateful for the clarity the pain provides. Maybe now, with band-aids ripped off and my broken heart exposed to fresh air, I’ll be able to clear the slate of the past thirty-five years and start again.

On the drive down I had time to reflect on my situation. I’m Jerry Hawver’s daughter. I have agency. I’m tough, but I have a heart. I deserve a life where I’m appreciated for who I am, a quirky but kind personality with unique gifts for those with ears to hear at this time of great transition. I know my dad would want me to be happy. Playing the game, especially the hyper-extended reality Web3 game that’s coming online now, is not going to bring me joy. When I left Seattle, I thought I could continue to play the role of the good ex-wife, the devoted daughter, the dutiful mother. I thought if I did all the right things, if I centered other people’s needs, I could earn my way back into their hearts. That proved not to be the case. I tried to come back and swallow my pride and agree to be an agent in the game, telling myself maybe I could gather insights while making coffee, and travel arrangements, and ordering copy paper for the kinds of programs I’ve been researching over the past decade. I applied to dozens of jobs and got a few interviews.

I try not to judge, because this noetic thing we’re enmeshed in is, in fact, pervasive. In my view, the system considers all Earthly beings to be nodes in a massively sophisticated biological computation machine, the ant computer. Just this week, listening to Neal Stephenson’s Baroque Cycle series, a prequel to Cryptonomicon, it dawned on me that we may be facing off against Gottfried Leibniz’s Characteristica Universalis, a language he conceptualized based on Chinese characters, the I Ching, metaphysics, and calculus. I now think that may be what lies at the core of Web3 smart contracts, human computation, digital commoning, tokenized behavior, cybernetics, complexity, game mechanics, social impact finance, and surveillance of public health and decarbonization metrics.

I am keenly aware that intuitive, imaginative thinking is a threat to such a system. People who choose not to behave according to Skinnerian programs are like nails sticking up, daring the powers that be to try and pound them down. We are sabots in the looms; we are the wrenches that threaten to break the teeth of the gears. We represent the possibility that progress towards digital manifest destiny may be slowed or even hobbled. Climate Millenarianists are working hard to brand their post-Anthropocene “ecotopias” as “green” populist endeavors rather than the corporate juggernauts they actually are. Even if they’re not visible on stage, those in the know understand the likes of Black Rock, Goldman Sachs, Raytheon, and Pfizer are peering out from the wings. The fitness landscapes of genetic algorithms have to work overtime to constantly erase principled dissent on behalf of the sacred natural world and smooth the path towards convergence. Yet still we persist and keep showing up with hearts, sunflowers, and intentions placed to divert the tidal wave of electrical engineering, EMF radiation, nano-biotech, and big data.

After Seattle I created a LinkedIn profile and started furiously applying for jobs. I applied to at least one a day – anything that seemed like it could be a possible match for my eclectic set of qualifications. Mostly I scouted local universities, which is where I thought I could get a decent salary with benefits. For most of my life, academic and non-profit cultural organizations were the places where I felt most comfortable. Only now, on the back end of my life, have I begun to realize institutions of higher education function as gatekeepers to confine and compartmentalize thought, quietly, but effectively, neutering critical thinking. Jeff Schmidt, a physicist, laid this out in his book “Disciplined Minds.” Acceptable knowledge is a currency guarded and traded through esoteric academic ritual. Unacceptable knowledge, knowledge that could undermine manufactured polarization and the trajectories of problem-reaction-solution campaigns, is disappeared or at least disincentivized. The shift to digital life made this erasure much easier. Simply toss inconvenient ideas into Orwell’s memory holes or brand them as “conspiracy.” If you say it often enough, it becomes reality.

Deep in my heart I knew that, but I was willing to try and hold onto the family home, even as everyone else abandoned ship. I did a few online interviews. One job, a museum, had a required health status. Nope. Another was for a contemporary art institution whose major donors were members of the high-finance crowd. I managed to get an in-person interview for an office manager position for the undergraduate division of Wharton, Trump’s alma mater. It felt as if the universe was pranking me. Seriously! Still, I put on my new interview outfit. The black skirt I’d ordered didn’t fit well, so I wore a tan linen one instead. That meant a change of shoes to some cute brown flats with ankle straps. I thought I’d cleaned up rather nicely. I got on the bus, leaving plenty of extra time. As I stood to exit, I sensed something was wrong with my shoe. I paused and looked up. Another passenger pointed out that the whole front of the sole had fallen off.

I was dumbfounded. There was no indication that the shoes were worn out. There must have been some catastrophic failure of the synthetic substance of the sole. I grabbed the lump of latex (?) and exited, wondering if there were any clothing stores nearby where I could buy a pair of shoes. A block away I found two clothing stores, but neither of them stocked shoes. The only shoe store in the vicinity, the clerk told me, sold running shoes, which was rather symbolic. By this time the sole was falling off the other shoe, so I just grabbed the remaining chunks and threw them in the trash. There was nothing to be done but keep going. My flats were now really FLAT. There was a bit of fabric on the bottom, and I hoped that if I kept my feet under the table no one would notice. I made my way, with as much dignity as I could muster, toward the building where the interview was to be held.

It turns out that building faced the Wistar Institute, the oldest biotechnology lab in the United States and an important center for vaccine research and nanotechnology development. There was a back-to-school event happening outside the building. As I passed, I had the surreal vision of a young woman playing corn hole as music blared. In her hand she held a bean bag. She wore a t-shirt that was emblazoned with the phrase “I’m a CRISPR Engineer.” It had the ThermoFisher Scientific logo printed on the sleeve. Do you remember a decade ago when we were not only allowed to question gene-editing, but in many quarters, it was expected that educated people should oppose it? I took it all in and continued my sole-less walk into the WAR Building (Wharton Academic Research Building). I haven’t heard back from them, but it didn’t seem like a place with much joy.

Someone told me recently that in the Jungian sense shoes symbolize grounding. The past few years have taken away all that grounded me. While I am still in the process of mourning those losses, I hold out hope that there may be a new beginning on the horizon, far from Philadelphia. I just can’t see it yet.

The closing episode of my last-gasp attempt to hold onto my Philadelphia life took place this week. After uploading four video-recorded responses to an HR-tech platform, I landed an in-person meeting with Bethany Wiggin, founder of Penn’s Environmental Humanities Program. Wiggin’s academic background is German language and comparative literature. I found it interesting that she shares a last name with the fictional Ender Wiggin, protagonist of Orson Scott Card’s seemingly prophetic Ender’s Game / Enderverse series. Evidently, it comes from Wiucon, Norman for “high and noble.” When I applied for their program coordinator position, I hoped I’d get a chance to ask the staff where they stood on nanotechnology and the financialization, through ubiquitous digital surveillance, of the environment to address climate change.

I’d briefly met Bethany at a presentation on “what works” government that was held at the Furness Library four or five years ago. It shocked me when, during the open discussion period, one attendee stated that residents of North Philadelphia, a predominantly Black community, drink too much bottled water. This, of course, was in the aftermath of the Flint lead-poisoned drinking water crisis. I vividly remember the person throwing that observation out into the room, after which a second attendee responded that he knew there were social impact investors meeting at the same time at the Cira Center a few blocks east at 30th Street Station. He posited that surely those big thinkers could come up with a solution to the bottled water problem. After the meeting, I looked up the Cira Center event and ended up writing a piece that included Sister Mary Scullion’s participation in the 2018 Total Impact conference. That day the meeting room on the top floor of the fine arts library, just down the hall from the Kleinman Center for Energy Policy (a program that’s promoting next-gen nuclear as an answer to climate change), had been set up with eight-top tables. A young man was seated next to me. He worked for a bank on workforce development (cue those human capital bonds). Bethany was at the table, too. Afterwards, I struck up a conversation with her and expressed my concerns around impact finance, at which point she told me that her husband was an impact investor.

In a 2019 Medium essay, Wiggin described the importance of climate strike activities at Germantown Friends, an elite K12 Philadelphia Quaker school. Her essay also mentioned her husband, David Parker Helgerson. According to his LinkedIn profile, Helgerson is a co-head of impact investing at Hamilton Lane headquartered in Conshohocken, just outside Philadelphia. The small town nestled between the Schuylkill Expressway and I-476 also happens to be the home base of the John Templeton Foundation, a philanthropic institution that holds considerable influence through the many, sizeable grants they give towards research in the areas of genius, spirituality, free markets, and theoretical physics. According to Wikipedia, in 2020, Hamilton Lane was the third largest “fund of funds” globally with $65 billion under management. However, a press release from July of 2023 noted that their assets had jumped to almost $857 billion. It is important to note, as we examine the role of signals intelligence and distributed ledger technologies in human computation / noetic convergence, that Helgerson’s firm tokenized several of its funds in 2022 on the Polygon blockchain.

Helgerson earned a BA in political science and economics at Swarthmore, a highly-regarded Quaker college west of the city. The school positions itself as progressive while grooming students to implement neoliberal economic policies. Swarthmore has ties to Kenneth Boulding, an influential economist who, with his wife Elise, advocated world peace and limits to growth and was considered an early promoter of social entrepreneurship. Christiana Figueres, a Costa Rican diplomat who’s served as the UN’s point person on climate change and almost single-handedly created the field of carbon markets, is an alumna. Helgerson earned an MBA at Duke’s Fuqua School of Business. The program has a strong social impact component, the CASE program, which is why I swung by during my stop in Durham.

I entered Bethany’s office in Williams Hall, where thirty-plus years prior my husband earned his PhD. Her first name signifies the Biblical hometown of Lazarus (raised from the dead, which is interesting in the context of regenerative medicine) that Jesus visited before his crucifixion. After exchanging introductions, she made a point of leaving the room to get the printed questions for the interview, stating that UPenn’s HR requirements were very rigorous, that each candidate was to be asked the same questions, and that she would be using the timer on her phone to make sure the interview ended promptly after 30 minutes. The symbolic emphasis around time and the phone sitting as a digital barrier between us reminded me of the book “Momo” that I have been reading aloud on my channel.

She wore a striking dress of a modern design with fabric that prominently featured repeated upward-pointing triangles. I note this because of the significance of Platonic solids and Pythagoras’s understanding of the fabric of material reality being based in combinations of triangles. Early in our conversation she expressed interest in my blog. It seemed as though she may have read it. Was this why I had gotten the interview? After responding to two questions about my qualifications and reason for leaving my previous position, I explained to her that I cared for the environment, but that I also had serious reservations about the direction things were headed with ubiquitous computing and nanotechnology and finance and game mechanics around climate and carbon trading. I mentioned that Penn was deeply involved in these activities.

I expressed to her how shocked I had been to find out that the ecology movement in the United States emerged from the Atomic Energy Commission. I said it was my sense that Howard Odum’s language of energy exchange, emergy, was a continuation of Gottfried Leibniz’s work – something I thought would have piqued her interest given her academic grounding in German language and comparative literature. Who could have imagined that the “universal language” would turn out to be code? She honestly didn’t seem taken aback by the idea at all. I went on to say that I didn’t think most people understood this history and that we needed more open conversations about the ethical implications of what was being proposed around Web3 and cybernetics within a historical context.

She responded that she thought that the history around the AEC and the Odum family was generally known; however, I strongly disagree. Perhaps within academic circles there might be such an awareness, but not among NPR-listening progressives and the youth who are being whipped up into a frenzy of anxiety over imminent termination of life on this planet. I would hazard a guess that even people who consider themselves well educated are not aware of how MIT, the Club of Rome, and Limits to Growth intersect, let alone Hasan Obekhan’s involvement in Mankind 2000’s socio-technical systems and organizational theory tied to smart cities at Wharton and its extension into Kevin Werbach’s advocacy for blockchain and behaviorist game mechanics.

I conveyed to her that after my father’s death, I realized I needed a new path, to go out into nature and make a garden and step away from UPenn and Philadelphia and what it represented: von Neumann’s ENIAC; John Lilly’s neural investigations; Eugene Garfield’s bibliometry; and Ted Nelson’s not-yet-realized Project Xanadu. At that point she told me that if I didn’t want the position, the conversation was over. She had set aside thirty minutes to talk with me. Supposedly her program promotes public discussion around how humans relate to the environment. Some might say her response was logical. This was just an interview after all. She held the position of power; but as it turned out, I didn’t actually want what she had to offer. I witnessed no intellectual curiosity. I wasn’t all that surprised, but still, I had rather hoped I would see a spark, some glimmer of engagement. Instead, what I experienced was perfunctory, bureaucratic procedure. Check the boxes for HR and move on to the next person in line – all in a day’s work.

Ivy League schools are not set up to entertain alternative lines of inquiry. There’s a script, and we’re not meant to deviate from it. Because if we did, what would happen to those billions of Hamilton Lane’s assets tied to ESG metrics? What would happen to humanity’s march into bio-digital convergence? How would we achieve their planned nano-technological “ecotopia” if there was no data, no metrics upon which to bet? The noosphere runs on signals intelligence, optimization rules, behavioral compliance, and standards, for goodness’ sake. The clock is ticking. The entire climate simulation program is built on oscillation.

Bethany’s phone timer counted down slices of time in thirty-minute chunks. But if you don’t participate according to the rules of the game, don’t expect to get your full allocation. The system will show you the door. No one at Penn wants to hear what you think. It’s about credentials, disciplined minds. Remember, the world is a stage, and we are to shoulder our roles in the multi-agent simulation without question or complaint. Our assignment is to act out the script someone, or possibly something (AI?), placed in our hands. I expect eventually, they’ll heterodyne it, Edwin Howard Armstrong-style, uploading lines straight into our consciousness thereby ensuring trust, fidelity, and constancy. As someone whose identity was once built around academic achievement, that was a tough pill to swallow.

Interactive Map: https://wrenchinthegears.com/wp-content/uploads/2023/09/armstrong.png

Before I left, I showed Bethany a picture I’d taken of an engraved bluestone paver installed in the walk between the Annenberg Center and the Penn Graduate School of Education. Both programs have specific roles to play in technology-based consciousness management. It was one of a series of Ben Franklin quotes. His name wasn’t on any of the inscriptions, just the dates. I’d arrived on campus early with time to kill. I passed several of them before it dawned on me what they were. I then went back to read the ones I’d missed.

To be honest, I continue to struggle with my internal storyline. There is a part of me that still wants to do the thing that is expected, check the box, earn the badge, demonstrate my worth. In my dysfunctional family, I was the “good” kid, and my brother was the “bad” kid. I got the grades, the scholarships, the generous husband, and the comfortable row house. Only decades later did I realize that much of my life was an illusion. I’m left to pick up the pieces and sort out what happened, when it all started to fall apart. I didn’t relish telling Bethany that her program was window-dressing for a global signals intelligence operation that would, if implemented, likely usurp all of natural life in the name of saving the planet. But my professor, Dr. Christa Wilmanns-Wells told me that one day I would see it, and she was right. I did see it, and then I found it impossible to look away. See something, say something, right? Even if people don’t agree with your take, it’s important enough to the future of humanity that we should at least talk about it first, don’t you think?

My father was a man of faith and the scriptures shared during his service reminded me of the importance of being strong and brave and going forth in love. So, with that in the background, I showed Bethany Ben Franklin’s quote. She told me she passed it often. Who knows, maybe I planted a seed, so when she walks by it next time, she will think about the fact that we have not given informed consent to the Millenarian agenda being advanced, in a decentralized manner, across Penn’s campus, each department having no clue how their effort fits into the larger program. The quote read: “Half the truth is often a great lie. 1758” How many of us are accomplices to half-truths? What will the results of our collective complicity in this exercise be for the environment and coming generations?

Ok, so let’s pivot back to me and the Subaru driving through central North Carolina. Sorry, it’s late and I couldn’t make a more graceful narrative transition. I tried to figure out if the inchworm was going to find its way into some remote corner of the car before I got to Seagrove where I hoped to poke around some artist studios and find a utensil crock to take back and lift my spirits. Only later did I realize the synergy this quest had with one of the Bible verses from my dad’s memorial service about God’s treasure and clay vessels.

The inchworm seemed pretty intent on finding an escape. I still had about an hour on the road before reaching the handmade pottery capital of the United States. So, I decided to pull over in a fast-food parking lot and relocate this brilliant green messenger to the base of a tree in a grassy median. I hoped it would be an acceptable replacement for the mimosa tree, but the dry stubbly grass next to the Hardees didn’t look all that promising. After I got home, I looked up inchworm symbolism and found a video likening the inchworm’s movements to the need for integration.

This little creature has legs in the front and back for efficiency, but not in the middle. The front end is always stretching out, but the rear needs time to catch up. I’ll admit with all of the changes underway in my life and society in general, the idea of devoting some time to reflection and incorporating life’s lessons seems like a good idea. Some engineers are deploying bio-inspired design to incorporate the inchworm’s movement into soft robotics. When the front and rear legs are next to one another, they make the shape of the Omega. Omega is a stark ending – a door closing, and hopefully new ones opening. Keep this in mind when I get around to talking about Swedenborg and the Church of the New Jerusalem.

When I walked into Seagrove Pottery, I looked around, circling the shop several times to assess glazes and crock sizes. Unfortunately, the piece I chose ended up being too tall for the utensils to stick out properly. Nevertheless, the lovely blue with a slight green undertone is so cheerful. I can picture it making a great vase for country wildflower bouquets in the years to come. As I got back into my car, alone now that the inchworm had been dropped off, I looked across the street opposite the parking lot and noticed an unusual sign or was it art? There was a board mounted on two posts depicting painted plates with birds on them. At first, I thought the birds were swallows, but upon closer examination I realized they were actually bluebirds. Bluebirds are messengers of good things to come after difficult times and are associated with visitations from loved ones who have died. The board also featured a plate with a weeping willow design. Of course, in the West the willow is a powerful symbol of mourning, death, renewal and rebirth. When I got home, it dawned on me that the crock I’d selected just happened to be the color of a bluebird.

It took me a few more hours to get into the Research Triangle area. My first stop was the headquarters of Epic Games in Cary, NC – another installment in Ally’s “scary things coming from banal suburban office buildings” tour. When I arrived at the large mid-rise building about a quarter mile from a strip shopping center with a Target surrounded by gated apartment complexes, there were no signs indicating what the building was. The two entrances simply offered street addresses but did not mention that the structure was home to Epic Games, maker of the Unreal Engine, MetaHumans, and Fortnite, a multiplayer war game developed with capital from the Chinese retailer and social credit scoring behemoth Tencent. There were, however, numerous signs designating the parking lots as private property with closed circuit cameras stating that violators would be prosecuted. The closest place I could find to park was a hotel next door. They even had no trespassing signs four feet into the brush of the swampy water retention basin, I guess in the event that someone decided to penetrate the moat and attempt to steal valuable corporate secrets relating to extended reality programming. I left my sunflower and heart on their entrance sign. We do not consent to your “Sinister Games,” Paul Meegan. In the video below from a presentation given on “teaching” and the creative economy at the Democratic National Convention in Philadelphia, which is where I first became aware of Epic Games, Meegan is in the middle in the blue blazer. It took me about half a year to realize that behind his insistence that children learn to code his video games, what he was really saying was that they would be expected to build out extended reality. All our lives will be managed by game mechanics.

Source: https://www.gamesindustry.biz/epic-games-hires-paul-meegan-to-lead-product-development

 

Source: https://www.investopedia.com/news/how-tencent-changed-fortnite-creator-epic-games-fortunes/

 

 

Next, I went to the campus of NC State in Raleigh, Engineering Building II on the Centennial Campus where Donald Bitzer of the PLATO educational technology and social networking system landed after departing the University of Illinois Urbana-Champaign. I left a sunflower and heart in a flower bed by the sign for the building. An informative book on the history of PLATO and its ties to social networks funded by the Office of Naval Research is Brian Dear’s “The Friendly Orange Glow.” I keep saying we have to understand that extended reality is intrinsically linked to Cold War simulation technologies, game theory, and emergence. Joseph Gonzalez, aka Bantam Joe, has said that there is a revolving door between the military and video game design. He saw it firsthand as a veteran who held an industrial top security clearance and carried out electrical engineering work for the US Army and Air Force. Gonzalez worked in Cary, NC for Imagic and Random Games in the late 1990s where he developed player leaderboards, 3D terrain design, and refined the use of artificial intelligence in game engines.

Source: http://friendlyorangeglow.com/

I drove about twenty minutes farther on to the Research Triangle Hub, where according to the interwebs, the Army Research Lab occupies building 800. Originally, I was trying to locate the home of O*Net, the Department of Labor’s Occupational Information Network, which I’m guessing will be the backbone of the “cradle to career” platform gig economy / human capital speculation / cybernetic social coordination pipeline. There was no address given, except for O*Net’s consulting partner Research Triangle (again with the triangles) Institute (RTI). Playing around with the symbolic nature of the program’s name, I imagine that the “O” could represent a cell, a holonic unit with a semi-permeable membrane, that functions as part of a digital NETwork of social computation.

Many critics of the government’s planned economy / future of work policies would simply slap on the label “socialist” or “communist.” When I identified as progressive, I saw O*Net as a program enabling big business to control labor markets behind the scenes through public-private partnerships. I now recognize the flaws in both ways of thinking. While situating our critiques within established political and philosophical ideologies may be comforting, it’s not going to bring us any closer to understanding the true nature of the problem. In fact, sticking with the team we’ve chosen, whichever side, only serves to obscure the mathematical aspects of the social control grid that is being used in tandem with sophisticated game mechanics to remake our lives and relationships.

Ultimately, the trajectory of Web3 is to bring together both sides of the political spectrum under the banner of digital progress, renewed democracy, choice, and a type of freeDOM that will be mediated by emerging technologies, smart contract protocols. It’s only once we can get a view above the ideological lenses we’ve been using that we will be able to see the labyrinth we’ve been wandering around in for most of our lives. RTI carries out high-level multidisciplinary consulting for the government to ‘improve the human condition,’ cough, cough; but, being short on time and sunflowers, I skipped it and went on to the Army Research Office.

Source: https://littlesis.org/org/355267-Occupational_Information_Network_(ONET)
Source: https://www.rti.org/
Source: https://www.arl.army.mil/who-we-are/aro/

Something did catch my eye opposite the entrance to Building 800. It was a citizen science installation of several beehives painted a bright blue and ornamented with a hexagonal comb pattern, emblazoned with the word “Frontier.” All around were signs encouraging people to live at “the hub.” Again, consider the language here – a hub is the central part of the wheel and shares imagery with nodes in distributed computing. I’m skeptical of all the hubbub around fifteen-minute cities right now, because it seems like a swarm mind virus campaign that could be used to tag, trace, and predict social network behaviors. That said, it is clear to me that the goal for the redevelopment of this scientific research hub, a place with a concentration of biotech and agritech firms, was to be a geographically-defined, mixed use node where people would “live, work, shop, and play!” Signs and lobby displays offered exuberant depictions of smart suburban living options for those who agreed, knowingly or not, to help engineer the bio-physical game mechanics of noetic convergence.

Source: https://hub.rtp.org/

In looking for a cheap place to stay near Duke’s campus, I accidentally ended up at Taberna, an Airbnb operating out of the Fleishman Chabad House. The logo had a prominent “T” that reminded me of a Tau cross. The building had formerly been the King’s Daughter’s House, a Christian charity respite for elderly women, which felt sadly appropriate. The large colonial structure stood between Gloria and Minerva Avenues. A parking lot branched off Alley 16. The latter is an unusual name for a street, and I’m open to hearing your thoughts about the possible significance of the number sixteen. During my trip, I’d been listening to audiobooks of Neal Stephenson’s Baroque Cycle, prequels to Cryptonomicon. One of the plot lines in “Quicksilver” involved a seventeenth-century pirate-hunting ship named the Minerva. The protagonist of Stephenson’s book was named Daniel Waterhouse, who read from the Book of Daniel. When I arrived, seemingly the sole guest in the huge house, a young guy offered to help me with my cumbersome duffel since there was no elevator and my room was up three flights of stairs. He told me that he was the building manager, and his name was Daniel.

 

Julius Stulman, of the Foundation for Integrated Education, was a key early supporter of Chabad in the United States. Chabad is a Hasidic philosophy that emphasizes Jewish mysticism and contemplative prayer, including incorporation of the teachings of Kabbalah into daily life. The Duke Chabad house was named for Joel Fleishman, a professor recruited from Yale by Terry Sanford, then the president of Duke, to launch the university’s school of public policy. Now retired, Fleishman remains involved with the Foundation Research Impact Group, an effort to measure the effectiveness of philanthropy. The rambling building was located across the street from East Campus on Buchanan Avenue. Buchanan was likely named for James Buchanan Duke, who founded the American Tobacco Company, launched industrialized cigarette production with mechanized tobacco roller factories, and devised modern marketing tactics to expand the market for his products. In addition to tobacco, Duke also made a fortune in electricity. Today Duke Power is a prominent supporter of smart city development in the South. I find this interesting given Michael Levin’s research into bioelectricity, morphogenesis, and personalized medicine.

I settled into my room, which was comfortable despite the strangeness of the setting. The bed had a large, upholstered headboard covered with a botanical print. In fact, this same fabric was used throughout – on the curtain cornices, the loveseat, two slipper chairs, and an upholstered bench. Gradually it dawned on me that the featured plant with coral flowers was nicotiana tabacum – the type of tobacco that was grown for human consumption. A subtropical plant, tobacco has been used for medicinal and spiritual purposes in many cultures of Latin America, the Caribbean, and Africa. It was the Duke family that led the industrialization of this plant for mass consumption. I realized later that the prints over the love seat were stark black and white depictions of contorted topiaries – decidedly unnatural and in keeping with the current push for synthetic biology, sold to us as “bio-inspired design.”

One more synchronicity is that on the day I left Philadelphia for Charlotte, I noticed two nicotiana plants in the narrow planting strip of the small school next door to our family’s house. I used to garden on that plot years ago when the building was vacant and I’d planted some tobacco back in the day. Those seeds have real staying power and would sometimes opportunistically pop up in the cracks in the sidewalk. That morning for some reason I happened to look down, and where usually there were pansies or mums, I saw the tobacco. I instinctively picked one leaf and put it in the outer pocket of my backpack and promptly forgot about it. That was until my mother and I were readying ourselves to leave my father’s hospice room after his passing. I couldn’t find my keys anywhere. I ended up taking everything out of my backpack, including the tobacco leaf. Then I found my keys had slid down in an inside pocket. I folded up the leaf and placed it under my father’s folded hands to accompany him on his journey.

One of the primary reasons I decided to stop over in Durham was to see the three locations of the labs run by J.B. Rhine and his successors. Rhine started out getting his PhD in botany. This makes me think about the rising prominence of plant medicine and how the intelligence of sacred plants is being used to create pathways into altered states of consciousness. Could it be that through the use of sophisticated remote neural monitoring techniques, people under spiritual influence with their embodied intelligence, could be used as tools to access, secure, and bring back information that would be inaccessible under normal conditions? I can’t prove it, but I keep turning this concept of digitally-mediated mediumship over in my mind, its possible overlap with the human potential movement, distributed cognition, and blockchained group mind. This topic is getting a lot of attention lately with the move towards adoption of hallucinogenic substances to “treat” addiction. It also brings to mind Kevin “Green Pill” Owocki’s references to Michael Pollan’s thesis from “The Botany of Desire” that plants may in fact be cultivating humans rather than the other way around.

Rhine pioneered scientific evaluation of psychic activities, including extra-sensory perception. Among his colleagues was Margaret Mead, who held an interest in the psychic potential of precognition and remote viewing. Conrad Hilton Rice, a collaborator with Oliver Reiser whose theories were featured in “World Sensorium,” corresponded with J.B. Rhine. For three and a half decades starting in 1930, Rhine worked in the West Building of Duke’s East Campus. In 1965, the institute changed its name to the Foundation for Research on the Nature of Man and moved across Buchanan Street to the intersection with Trinity Avenue. That building would later be acquired by the Catholic Church. The Rhine Research Center is still operating less than a mile from Duke’s medical research campus.

By the time I arrived at the West Building, the sun was starting to go down. The angle of the light made it hard to find a good place to take a photo, so I walked all the way around the building. As I came around front, I caught a glimpse of blue in a young willow oak and saw it was a bluebird. That bluebird was joined by a second and then a third. I was surprised to see this cheerful group out at dusk. There was no doubt in my mind that they were bluebirds, not the swallows I expected to see. I remembered the bluebird sign from Seagrove earlier that day. The trio flitted from tree to tree before landing high in the branches of a majestic old oak. At its base I set my intention for my dad, that we stay connected so he might continue to guide me as I navigate the choppy waters ahead.

Source: https://maps.app.goo.gl/2oG2LfAExGzZvAreA

My final two stops were at Duke’s Fuqua Business School, home to the CASE social impact program, and the Nicolelis Neurobiology Lab on the medical research campus where experiments were done using data from monkey brains and kinematic sensors in Durham to remotely “walk” a bipedal Sarcos robot in Kyoto, Japan. Adjacent to Fuqua was the law school, where Nita Farahany, bioethicist and WEF spokesperson for a future where a person’s thoughts are no longer private, is based. The Sanford Social Policy School, where Joel Fleishman was based, sits directly across from the law school. There are profound implications for these technologies, not only due to their Frankenstein-like nature, but for globalized remote haptic labor and synthetic telepathy. For me, this work appears to be an extension of Rhine’s investigations with the addition of sophisticated electrical engineering technologies and nano-biosensors.

There are so many things we should be talking about, but it feels like it’s practically impossible to cut through the noise and manufactured influencer distractions. Nevertheless, I continue my investigations, mapping relationships across time and space. I do site visits where I assert that the public has in fact NOT given informed consent to gamified consciousness engineering. I keep at it because I feel driven to understand for myself how we got here. I’m trying to imagine where we may be going as a society if our collective consciousness ends up harnessed to some artificial, decentralized, cybernetic guidance system. I’ve come to realize over the past few years that it is actually impossible for any one person to know “the truth” with any degree of certainty. There’s simply too much information for us to hold and evaluate all at once, lifetimes upon lifetimes of details that could be woven into patterns shaping our worldviews. And yet, I also sense we are in a spiritual struggle, and there are lessons out there waiting for me to learn. I guess you could imagine it as a magnificently expansive independent study.

I am choosing to hold the belief that my father’s passing, as painful as it is, will teach me to be a better person in the days, months, and years to come. It’s all connected. I’m just not sure how yet. The homily for my father’s life celebration featured a passage on The New Jerusalem, the Alpha and Omega, a time when all tears and pain would be wiped away along with the old ways of being. My mother had suggested the verse, only she’d mistakenly transposed the numbers when she told the pastor. Instead of Revelations 21, she’d said 12. He was taken aback, saying that probably wasn’t it, because Revelations 12 has to do with the whore of Babylon and a dragon sweeping stars from the sky.

That holds a certain resonance with me, because Johannes Kelpius and the monks of the Wissahickon came to Germantown, outside of Philadelphia, to wait for the woman of the wilderness, the woman described in Revelations 12. Emanuel Swedenborg’s Church of the New Jerusalem was centered on Revelations 21. Swedenborg was a Swedish mystic who walked the realms. Both Andrew Carnegie, whose fortune and “philanthropic” activities have been leading up to the noosphere for the past century and a half, and the Pitcairn family, of PPG plate glass, attended Pittsburgh’s Swedenborgian church growing up. Swedenborg’s writings influenced the development of transpersonal psychology as well as Jung. He also popularized the motif of the vagina dentata, the sacred feminine as a threatening presence. The Pitcairns built a cathedral for the Church of the New Jerusalem in Bryn Athyn, about a half-hour north of Philadelphia. They also constructed a museum, situated among several mansions on the glass-maker’s large estate, to house Swedenborg’s papers. The logo of the church, an intertwined Alpha and Omega, can be seen in the site’s wayfinding signs and a topiary boxwood hedge. As it turns out, I’d just added Swedenborg to my San Patrignano map, associated with Carnegie and Carnegie-Mellon. It felt surreal to have all of these unexpected connections popping off in the context of my father’s send off.

Interactive Map: https://embed.kumu.io/b01bca361055b96fd40a921dbdb2fa11#untitled-map?s=bm9kZS1DdzI3VjdJMQ%3D%3D

My sister-in-law and I didn’t want the gathering in his honor to be a sad affair. We hatched a plan to create a festive table setting featuring dad’s favorite junk foods with an invitation offered in fun bubble letters to “dig in”: Diet Coke, taquitos, peanut M&Ms, Hershey bars, and cheese curls. After meeting with the pastor we stopped at Sam’s Club to pick up a few boxes of the frozen taquitos he used to crave. When we walked into the store there was a seasonal display of Halloween items, including a huge lit up animatronic dragon, which was a strange coincidence given our conversation not an hour earlier about the passage from Revelations 12:3 “Then another sign appeared in heaven, an enormous red dragon with seven heads and ten horns and seven crowns on its head. Its tail swept a third of the stars out of the sky and flung them to the earth.”

The day of his memorial we went back to the store to pick up steaks, to be eaten in his honor with a bourbon toast. My dad loved grilling steaks and Wild Turkey 101. Over in the refrigerator section was a sample stand. The featured sample of the day was taquitos. I told the woman staffing the booth that we’d just come from my father’s memorial and that we’d had two platters of taquitos to share. She said it made her very happy to hear it.

The understandings I hold have unexpectedly made me a social dissident. It’s hard to imagine the world turning upside down in the course of just a few years. The fitness landscape of Web3 has little tolerance for square pegs in a universe of round nodes. The time has come for reinvention, and when I look back, I hope I will see that this terrible year was a tough-love gift in disguise. As much as I miss my dad, he wasn’t available to me during the last years of his life. I lived at a distance, my mother didn’t want me around, and our communication was limited by his hearing loss and dementia. Now, on this part of my journey, I picture him restored in heaven keeping me company from an angelic distance. I sense we have an energetic bond, heart signals shared across a hospice bed. I close my eyes and feel his bear hugs across the dimensions. I don’t have a husband or child to hug me anymore, so that will have to be enough. God has his eye on the sparrow and on the inchworm and on me, too.

Below are the passages read during the memorial service. I’m sharing them here, because as this journey unfolds, I suspect I’ll be referring to them for guidance and comfort. Another hymn chosen by my mother for the program was “Lord of the Dance,” sung to the Shaker tune of “Simple Gifts.” My dad loved Elvis and the oldies, and the ideals of being simple and free seem perfectly suited to this moment in time.

I’ll be keeping an eye out for bluebirds and sunflowers dad. I miss you.

Psalm 121

I lift up my eyes to the mountains – where does my help come from? My help comes from the Lord, the maker of heaven and earth. He will not let your foot slip – he who watches over you will not slumber; indeed, he who watches over Israel will neither slumber nor sleep. The Lord watches over you – the Lord is your shade at your right hand; the sun will not harm you by day, nor the moon at night. The Lord will keep you from all harm – he will watch over your life; the Lord will watch over your coming and going both now and forevermore.

Psalm 139: 1-18

O Lord, you have searched me and known me! You know when I sit down and when I rise up; you discern my thoughts from afar. You search out my path and my lying down and are acquainted with all my ways. Even before a word is on my tongue, behold, O Lord, you know it altogether. You hem me in, behind and before, and lay your hand upon me. Such knowledge is too wonderful for me; it is high; I cannot attain it. Where shall I go from your spirit? Or where shall I flee your presence? If I ascend to heaven, you are there! If I make my bed in Sheol, you are there! If I take the wings of the morning and dwell in the uttermost parts of the sea and even there your hand shall hold me. If I say, “Surely the darkness shall cover me, and the light about me be night,” even the darkness is not dark to you; the night is bright as the day for darkness is as light with you.

For you formed my inward parts; you knitted me together in my mother’s womb. I praise you, for I am fearfully and wonderfully made. Wonderful are your works. My soul knows it very well. My frame was not hidden from you, when I was being made in secret, intricately woven in the depths of the earth. Your eyes saw my unformed substance; in your book were written, every one of them, the days that were formed for me, when as yet there was none of them. How precious to me are your thoughts, O God! How vast is the sum of them! If I would count them, they are more than the sand. I awake, and I am still with you.

Isaiah 40

Hast thou not known? Hast thou not heard that the everlasting God, the Lord, the Creator of the ends of the Earth, fainteth not, neither is weary? There is no searching of his understanding. He giveth power to the faint, and to them that have no might he increaseth strength. Even the youths shall faint and be weary, and the young men shall utterly fall: but they that wait upon the Lord shall renew their strength; they shall mount up with wings as eagles; they shall run, and not be weary; and they shall walk, and not faint.

2 Corinthians 4: 7-10

But we have this treasure in jars of clay to show that this all-surpassing power is from God and not from us. We are hard pressed on every side, but not crushed; perplexed, but not in despair; persecuted, but not abandoned; struck down, but not destroyed. We always carry around in our body the death of Jesus, so that the life of Jesus may also be revealed in our body.

2 Corinthians 4: 16-5

Therefore, we do not lose heart. Though outwardly we are wasting away, yet inwardly we are being renewed day by day.

1 Corinthians 16: 13-14

Be on your guard; stand firm in the faith, be courageous; be strong. Do everything in love.

Revelation 21: 1-6

Then I saw a new heaven and a new earth, for the first heaven and the first earth had passed away, and there was no longer any sea. I saw the Holy city, the new Jerusalem coming down out of heaven from God, prepared as a bride beautifully dressed for her husband. And I heard a loud voice from the throne saying, “Look! God’s dwelling place is now among the people, and he will dwell among them. They will be his people, and God himself will be with them and be their God. He will wipe every tear from their eyes. There will be no more death or mourning or crying or pain, for the old order of things has passed away. He who was seated on the throne said, ” I am making everything new!” Then he said, “Write this down for these words are trustworthy and true.” He said to me: “It is done. I am the Alpha and the Omega, the Beginning and the End. To the thirsty I will give water without cost from the spring of the water of life.”

Proverbs 10:9

Whoever walks in integrity walks securely, but whoever takes crooked paths will be found out.

Proverbs 16:3

Commit to the Lord whatever you do, and he will establish your plans.

Proverbs 16: 6

By mercy and truth iniquity is purged, and by the fear of the Lord men depart from evil.

Proverbs 20: 6

Most men will proclaim every one his own goodness; but a faithful man, who can find?

Galatians 6: 9-10

And let us not be weary in well doing for in due season, we shall reap if we faint not. As we have therefore opportunity, let us do good to all men, especially unto them who are of the household of faith.


Simon Willison

Weeknotes: Embeddings, more embeddings and Datasette Cloud

Since my last weeknotes, a flurry of activity. LLM has embeddings support now, and Datasette Cloud has driven some major improvements to the wider Datasette ecosystem. Embeddings in LLM LLM gained embedding support in version 0.9, and then got binary embedding support (for CLIP) in version 0.10. I wrote about those releases in detail in: LLM now provides tools for working with embeddings

Since my last weeknotes, a flurry of activity. LLM has embeddings support now, and Datasette Cloud has driven some major improvements to the wider Datasette ecosystem.

Embeddings in LLM

LLM gained embedding support in version 0.9, and then got binary embedding support (for CLIP) in version 0.10. I wrote about those releases in detail in:

LLM now provides tools for working with embeddings Build an image search engine with llm-clip, chat with models with llm chat

Embeddings are a fascinating tool. If you haven't got your head around them yet the first of my blog entries tries to explain why they are so interesting.
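To give a feel for why they are interesting, here is a minimal hedged sketch: the vectors below are invented (real embedding models return hundreds or thousands of dimensions), but the core idea holds - related text ends up with nearby vectors, so cosine similarity works as a semantic "relatedness" score.

import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-dimensional "embeddings", purely for illustration
dog = [0.9, 0.1, 0.3, 0.0]
puppy = [0.8, 0.2, 0.35, 0.05]
invoice = [0.0, 0.9, 0.0, 0.7]

print(cosine_similarity(dog, puppy))    # high score: semantically close
print(cosine_similarity(dog, invoice))  # low score: unrelated concepts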

There's a lot more I want to build on top of embeddings - most notably, LLM (or Datasette, or likely a combination of the two) will be growing support for Retrieval Augmented Generation on top of the LLM embedding mechanism.

Annotated releases

I always include a list of new releases in my weeknotes. This time I'm going to use those to illustrate the themes I've been working on.

The first group of releases relates to LLM and its embedding support. LLM 0.10 extended that support:

llm 0.10 - 2023-09-12
Access large language models from the command-line

Embedding models can now be built as LLM plugins. I've released two of those so far:

llm-sentence-transformers 0.1.2 - 2023-09-13
LLM plugin for embeddings using sentence-transformers
llm-clip 0.1 - 2023-09-12
Generate embeddings for images and text using CLIP with LLM

The CLIP one is particularly fun, because it genuinely allows you to build a sophisticated image search engine that runs entirely on your own computer!
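For anyone curious what that looks like under the hood, here is a minimal hedged sketch using the sentence-transformers CLIP checkpoint that llm-clip builds on (the image path and query text are placeholders of mine):

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

# Embed an image and a text query into the same vector space
image_embedding = model.encode(Image.open("photos/beach.jpg"))
query_embedding = model.encode("a sunset over the ocean")

# Higher cosine similarity means a closer match
print(util.cos_sim(image_embedding, query_embedding))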

symbex 1.4 - 2023-09-05
Find the Python code for specified symbols

Symbex is my tool for extracting symbols - functions, methods and classes - from Python code. I introduced that in Symbex: search Python code for functions and classes, then pipe them into a LLM.

Symbex 1.4 adds a tiny but impactful feature: it can now output a list of symbols as JSON, CSV or TSV. These output formats are designed to be compatible with the new llm embed-multi command, which means you can easily create embeddings for all of your functions:

symbex '*' '*:*' --nl | \
  llm embed-multi symbols - \
  --format nl --database embeddings.db --store

I haven't fully explored what this enables yet, but it should mean that both related functions and semantic function search ("Find me a function that downloads a CSV") are now easy to build.

llm-cluster 0.2 - 2023-09-04
LLM plugin for clustering embeddings

Yet another thing you can do with embeddings is use them to find clusters of related items.

The neatest feature of llm-cluster is that you can ask it to generate names for these clusters by sending the names of the items in each cluster through another language model, something like this:

llm cluster issues 10 \
  -d issues.db \
  --summary \
  --prompt 'Short, concise title for this cluster of related documents'

One last embedding related project: datasette-llm-embed is a tiny plugin that adds a select llm_embed('sentence-transformers/all-mpnet-base-v2', 'This is some text') SQL function. I built it to support quickly prototyping embedding-related ideas in Datasette.

datasette-llm-embed 0.1a0 - 2023-09-08
Datasette plugin adding a llm_embed(model_id, text) SQL function

Spending time with embedding models has led me to spend more time with Hugging Face. I realized last week that the Hugging Face all models sorted by downloads page doubles as a list of the models that are most likely to be easy to use.

One of the models I tried out was Salesforce BLIP, an astonishing model that can genuinely produce usable captions for images.
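As a rough illustration of what the model does, here is a hedged sketch using the Hugging Face transformers API; the checkpoint name and image path are assumptions on my part rather than what my CLI tool uses internally.

from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Load an image and ask the model for a caption
image = Image.open("photo.jpg")
inputs = processor(images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0], skip_special_tokens=True))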

It's really easy to work with. I ended up building this tiny little CLI tool that wraps the model:

blip-caption 0.1 - 2023-09-10
Generate captions for images with Salesforce BLIP

Releases driven by Datasette Cloud

Datasette Cloud continues to drive improvements to the wider Datasette ecosystem as a whole.

It runs on the latest Datasette 1.0 alpha series, taking advantage of the JSON write API.

This also means that it's been highlighting breaking changes in 1.0 that have caused old plugins to break, either subtly or completely.

This has driven a bunch of new plugin releases. Some of these are compatible with both 0.x and 1.x - the ones that only work with the 1.x alphas are themselves marked as alpha releases.

datasette-export-notebook 1.0.1 - 2023-09-15
Datasette plugin providing instructions for exporting data to Jupyter or Observable
datasette-cluster-map 0.18a0 - 2023-09-11
Datasette plugin that shows a map for any data with latitude/longitude columns
datasette-graphql 3.0a0 - 2023-09-07
Datasette plugin providing an automatic GraphQL API for your SQLite databases

Datasette Cloud's API works using database-backed access tokens, to ensure users can revoke tokens if they need to (something that's not easily done with purely signed tokens) and that each token can record when it was most recently used.

I've been building that into the existing datasette-auth-tokens plugin:

datasette-auth-tokens 0.4a3 - 2023-08-31
Datasette plugin for authenticating access using API tokens

Alex Garcia has been working with me building out features for Datasette Cloud, generously sponsored by Fly.io.

We're beginning to build out social features for Datasette Cloud - features that will help teams privately collaborate on data investigations together.

Alex has been building datasette-short-links as an experimental link shortener. In building that, we realized that we needed a mechanism for resolving actor IDs displayed in a list (e.g. this link created by X) to their actual names.

Datasette doesn't dictate the shape of actor representations, and there's no guarantee that actors would be represented in a predictable table.

So... we needed a new plugin hook. I released Datasette 1.0a6 with a new hook, actors_from_ids(actor_ids), which can be used to answer the question "who are the actors represented by these IDs".
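As a hedged sketch of what a plugin using the hook could look like - the lookup table and names here are entirely made up, and I'm assuming the hook signature of actors_from_ids(datasette, actor_ids) returning a dictionary keyed by ID:

from datasette import hookimpl

# Hypothetical static lookup - a real plugin would query a table or a remote API
KNOWN_ACTORS = {
    "1": {"id": "1", "name": "Alex"},
    "2": {"id": "2", "name": "Simon"},
}

@hookimpl
def actors_from_ids(datasette, actor_ids):
    # Map each requested ID to an actor dictionary, falling back to just the ID
    return {actor_id: KNOWN_ACTORS.get(actor_id, {"id": actor_id}) for actor_id in actor_ids}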

Alex is using this in datasette-short-links, and I built two plugins to work with the new hook as well:

datasette 1.0a6 - 2023-09-08
An open source multi-tool for exploring and publishing data
datasette-debug-actors-from-ids 0.1a1 - 2023-09-08
Datasette plugin for trying out the actors_from_ids hook
datasette-remote-actors 0.1a1 - 2023-09-08
Datasette plugin for fetching details of actors from a remote endpoint

Datasette Cloud lets users insert, edit and delete rows from their tables, using the plugin Alex built called datasette-write-ui which he introduced on the Datasette Cloud blog.

This inspired me to finally put out a fresh release of datasette-edit-schema - the plugin which provides the ability to edit table schemas - adding and removing columns, changing column types, even altering the order columns are stored in the table.

datasette-edit-schema 0.6 is a major release, with three significant new features:

You can now create a brand new table from scratch!
You can edit the table's primary key
You can modify the foreign key constraints on the table

Those last two became important when I realized that Datasette's API is much more interesting if there are foreign key relationships to follow.

Combine that with datasette-write-ui and Datasette Cloud now has a full set of features for building, populating and editing tables - backed by a comprehensive JSON API.

sqlite-migrate 0.1a2 - 2023-09-03
A simple database migration system for SQLite, based on sqlite-utils

sqlite-migrate is still marked as an alpha, but won't be for much longer: it's my attempt at a migration system for SQLite, inspired by Django migrations but with a less sophisticated set of features.

I'm using it in LLM now to manage the schema used to store embeddings, and it's beginning to show up in some Datasette plugins as well. I'll be promoting this to non-alpha status pretty soon.

sqlite-utils 3.35.1 - 2023-09-09
Python CLI utility and library for manipulating SQLite databases

A tiny fix in this, which with hindsight was less impactful than I thought.

I spotted a bug on Datasette Cloud when I configured full-text search on a column, then edited the schema and found that searches no longer returned the correct results.

It turned out the rowid column in SQLite was being rewritten by calls to the sqlite-utils table.transform() method. FTS records are related to their underlying row by rowid, so this was breaking search!

I pushed out a fix for this in 3.35.1. But then... I learned that rowid in SQLite has always been unstable - they are rewritten any time someone VACUUMs a table!
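A quick hedged demonstration of the underlying behaviour - the file name is just an example, and the exact renumbering can vary by SQLite version:

import sqlite3

db = sqlite3.connect("rowid-demo.db")
db.execute("CREATE TABLE t (name TEXT)")  # no INTEGER PRIMARY KEY, so rowid is implicit
db.executemany("INSERT INTO t (name) VALUES (?)", [("a",), ("b",), ("c",)])
db.execute("DELETE FROM t WHERE name = 'b'")
print(db.execute("SELECT rowid, name FROM t").fetchall())  # likely [(1, 'a'), (3, 'c')]
db.commit()  # VACUUM cannot run inside an open transaction
db.execute("VACUUM")
print(db.execute("SELECT rowid, name FROM t").fetchall())  # 'c' may now have rowid 2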

I've been designing future features for Datasette that assume that rowid is a useful stable identifier for a row. This clearly isn't going to work! I'm still thinking through the consequences of it, but I think there may be Datasette features (like the ability to comment on a row) that will only work for tables with a proper primary key.

sqlite-chronicle

sqlite-chronicle 0.1 - 2023-09-11
Use triggers to track when rows in a SQLite table were updated or deleted

This is very early, but I'm excited about the direction it's going in.

I keep on finding problems where I want to be able to synchronize various processes with the data in a table.

I built sqlite-history a few months ago, which uses SQLite triggers to create a full copy of the updated data every time a row in a table is edited.

That's a pretty heavy-weight solution. What if there was something lighter that could achieve a lot of the same goals?

sqlite-chronicle uses triggers to instead create what I'm calling a "chronicle table". This is a shadow table that records, for every row in the main table, four integer values:

added_ms - the timestamp in milliseconds when the row was added
updated_ms - the timestamp in milliseconds when the row was last updated
version - a constantly incrementing version number, global across the entire table
deleted - set to 1 if the row has been deleted

Just storing four integers (plus copies of the primary key) makes this a pretty tiny table, and hopefully one that's cheap to update via triggers.
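To make that shape concrete, here is a rough, hypothetical sketch of what a chronicle table and one of its triggers might look like - the table and column names follow the description above, but this is not the plugin's actual schema:

import sqlite3

db = sqlite3.connect("chronicle-demo.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT);

-- Shadow table: four integers plus a copy of the primary key
CREATE TABLE IF NOT EXISTS _chronicle_items (
    id INTEGER PRIMARY KEY,
    added_ms INTEGER,
    updated_ms INTEGER,
    version INTEGER,
    deleted INTEGER DEFAULT 0
);

-- One of several triggers: bump version and updated_ms whenever a row changes
CREATE TRIGGER IF NOT EXISTS items_chronicle_update AFTER UPDATE ON items BEGIN
    UPDATE _chronicle_items SET
        updated_ms = CAST(strftime('%s', 'now') AS INTEGER) * 1000,
        version = (SELECT COALESCE(MAX(version), 0) + 1 FROM _chronicle_items)
    WHERE id = old.id;
END;
""")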

But... having this table enables some pretty interesting things - because external processes can track the last version number that they saw and use it to see just which rows have been inserted and updated since that point.

I gave a talk at DjangoCon a few years ago called the denormalized query engine pattern, describing the challenge of syncing an external search index like Elasticsearch with data held in a relational database.

These chronicle tables can solve that problem, and can be applied to a whole host of other problems too. So far I'm thinking about the following:

Publishing SQLite databases up to Datasette, sending only the rows that have changed since the last sync. I wrote a prototype that does this and it seems to work very well (a sketch of the underlying query follows this list).
Copying a table from Datasette Cloud to other places - a desktop copy, or another instance, or even into an alternative database such as PostgreSQL or MySQL, in a way that only copies and deletes rows that have changed.
Saved search alerts: run a SQL query against just rows that were modified since the last time that query ran, then send alerts if any rows are matched.
Showing users a note that "34 rows in this table have changed since your last visit", then displaying those rows.
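Here is the hedged sketch of the incremental sync query promised in the first item, reusing the hypothetical _chronicle_items table from the earlier example:

import sqlite3

db = sqlite3.connect("chronicle-demo.db")  # same hypothetical database as above
last_seen_version = 42                     # recorded after the previous sync
changed = db.execute(
    "SELECT id, version, deleted FROM _chronicle_items WHERE version > ?",
    (last_seen_version,),
).fetchall()
# Upsert the non-deleted rows, remove the deleted ones, then persist the new max(version)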

I'm sure there are many more applications for this. I'm looking forward to finding out what they are!

sqlite-utils-move-tables 0.1 - 2023-09-01
sqlite-utils plugin adding a move-tables command

I needed to fix a bug in Datasette Cloud by moving a table from one database to another... so I built a little plugin for sqlite-utils that adds a sqlite-utils move-tables origin.db destination.db tablename command. I love being able to build single-use features as plugins like this.

And some TILs

Embedding paragraphs from my blog with E5-large-v2 - 2023-09-08

This was a fun TIL exercising the new embeddings feature in LLM. I used Django SQL Dashboard to break up my blog entries into paragraphs and exported those as CSV, which could then be piped into llm embed-multi, then used that to build a CLI-driven semantic search engine for my blog.

Using llama-cpp-python grammars to generate JSON - 2023-09-13

llama-cpp has grammars now, which enable you to control the exact output format of the LLM. I'm optimistic that these could be used to implement an equivalent to OpenAI Functions on top of Llama 2 and similar models. So far I've just got them to output arrays of JSON objects.
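A hedged sketch of the approach with llama-cpp-python - the model file name is a placeholder, and the toy GBNF grammar here only allows a JSON array of lowercase strings:

from llama_cpp import Llama, LlamaGrammar

# Toy grammar: the model can only emit something shaped like ["apple","pear"]
grammar = LlamaGrammar.from_string(r'''
root ::= "[" item ("," item)* "]"
item ::= "\"" [a-z]+ "\""
''')

llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf")  # placeholder local model file
result = llm("List three fruits as a JSON array:", grammar=grammar, max_tokens=64)
print(result["choices"][0]["text"])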

Summarizing Hacker News discussion themes with Claude and LLM - 2023-09-09

I'm using this trick a lot at the moment. I have API access to Claude now, which has a 100,000 token context limit (GPT-4 is just 8,000 by default). That's enough to summarize 100+ comment threads from Hacker News, for which I'm now using this prompt:

Summarize the themes of the opinions expressed here, including quotes (with author attribution) where appropriate.

The quotes part has been working really well - it turns out summaries of themes with illustrative quotes are much more interesting, and so far my spot checks haven't found any that were hallucinated.

Trying out cr-sqlite on macOS - 2023-09-13

cr-sqlite adds full CRDTs to SQLite, which should enable multiple databases to accept writes independently and then seamlessly merge them together. It's a very exciting capability!

Running Datasette on Hugging Face Spaces - 2023-09-08

It turns out Hugging Face Spaces offer free scale-to-zero hosting for demos that run in Docker containers on machines with a full 16GB of RAM! I'm used to optimizing Datasette for tiny 256MB containers, so having this much memory available is a real treat.

And the rest:

Limited JSON API for Google searches using Programmable Search Engine - 2023-09-17
Running tests against multiple versions of a Python dependency in GitHub Actions - 2023-09-15
Remember to commit when using datasette.execute_write_fn() - 2023-08-31

Ben Werdmüller

David Golumbia, 1963–2023

Sad to hear that David Golumbia died. If you’re unfamiliar with The Politics of Bitcoin and you’re in tech, it’s a must-read. #Technology [Link]

Sad to hear that David Golumbia died. If you’re unfamiliar with The Politics of Bitcoin and you’re in tech, it’s a must-read. #Technology

[Link]

Saturday, 16. September 2023

Ben Werdmüller

Shana tova to everyone who celebrates!

Shana tova to everyone who celebrates!

Shana tova to everyone who celebrates!


Simon Willison

Notes on using a single-person Mastodon server

Notes on using a single-person Mastodon server Julia Evans experiences running a single-person Mastodon server (on masto.host - the same host I use for my own) pretty much exactly match what I've learned so far as well. The biggest disadvantage is the missing replies issue, where your server only shows replies to posts that come from people who you follow - so it's easy to reply to something in

Notes on using a single-person Mastodon server

Julia Evans' experiences running a single-person Mastodon server (on masto.host - the same host I use for my own) pretty much exactly match what I've learned so far as well. The biggest disadvantage is the missing replies issue, where your server only shows replies to posts that come from people who you follow - so it's easy to reply to something in a way that duplicates other replies that are invisible to you.


How CPython Implements and Uses Bloom Filters for String Processing

How CPython Implements and Uses Bloom Filters for String Processing Fascinating dive into Python string internals by Abhinav Upadhyay. It turns out CPython uses very simple bloom filters in several parts of the core string methods, to solve problems like splitting on newlines where there are actually eight codepoints that could represent a newline, and a tiny bloom filter can help filter a chara

How CPython Implements and Uses Bloom Filters for String Processing

Fascinating dive into Python string internals by Abhinav Upadhyay. It turns out CPython uses very simple bloom filters in several parts of the core string methods, to solve problems like splitting on newlines where there are actually eight codepoints that could represent a newline, and a tiny bloom filter can rule a character out in a single operation, falling through to the eight individual comparisons only when that quick check matches.
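A hypothetical Python re-creation of the idea (not CPython's actual C implementation): fold each line-break codepoint into a small bitmask so that one AND operation can rule most characters out before any exact comparison runs.

# Eight of the codepoints Python treats as line breaks, used here for illustration
LINE_BREAKS = "\n\r\x0b\x0c\x1c\x1d\x1e\x85"

# Build a 64-bit "bloom filter": one bit per (codepoint & 63)
mask = 0
for ch in LINE_BREAKS:
    mask |= 1 << (ord(ch) & 63)

def might_be_line_break(ch: str) -> bool:
    # Single AND: false positives are possible, false negatives are not
    return bool(mask & (1 << (ord(ch) & 63)))

def is_line_break(ch: str) -> bool:
    # Only fall through to the exact membership test when the quick check matches
    return might_be_line_break(ch) and ch in LINE_BREAKS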

Friday, 15. September 2023

Rocco, Gregory

Green Leaves

Giving users control over their digital selves went from a passion to an idea and, finally, the mission of SpruceID. It has been quite the journey — and I’m proud of everything we’ve accomplished. I’m posting this today to announce that I‘ve stepped down from SpruceID, and I’m transitioning into an advisory role. It has been one of the most challenging decisions in the world for me because I love

Giving users control over their digital selves went from a passion to an idea and, finally, the mission of SpruceID. It has been quite the journey — and I’m proud of everything we’ve accomplished.

I’m posting this today to announce that I‘ve stepped down from SpruceID, and I’m transitioning into an advisory role. It has been one of the most challenging decisions in the world for me because I love SpruceID, but I need to step back for personal reasons.

I am incredibly grateful for all the support the SpruceID team has given me during this time. I will always be thankful for all of the people I’ve met along the way, including everyone at SpruceID who has helped shape it into the incredible organization that it is today, our investors who have provided invaluable guidance so far throughout this journey, and all of our partners that we’ve worked with.

Most important of all would be Wayne. Very rarely can you have such a meaningful relationship with someone on a journey like this, and I will always cherish our friendship even beyond Spruce. Spruce has accomplished so much since its inception and will continue to succeed under Wayne’s incredible leadership.

I wish nothing but the best for the organization and will continue to support it as I can. If you want to do some incredible work and truly make a difference in our relationship with our digital selves, I recommend joining SpruceID in one of the many available open roles.

I’ve been spending some much-needed time with my family and friends, and will see all of you soon.

Let’s stay in touch.


Kent Bull

KERI Specifications have moved to the ToIP Foundation

The KERI protocol specifications have moved! The Trust over IP (ToIP) Foundation https://trustoverip.org/ is now hosting the KERI protocol specifications. See the below list for the new specification links as well as the Github repository links. The recent DID WEBS links are listed as well. DID WEBS (recently became a […]

The KERI protocol specifications have moved!

The Trust over IP (ToIP) Foundation https://trustoverip.org/ is now hosting the KERI protocol specifications. See the below list for the new specification links as well as the Github repository links. The recent DID WEBS links are listed as well.

KERI – Key Event Receipt Infrastructure
Spec | Github Repo
ACDC – Authentic Chained Data Containers
Spec | Github Repo
SAID – Self Addressing IDentifier
Spec | Github Repo
CESR – Composable Event Streaming Representation
Spec | Github Repo
CESR Proof Signatures
Spec | Github Repo
PTEL – Public Transaction Event Logs
Spec | Github Repo
OOBI – Out-Of-Band-Introduction Protocol
Spec | Github Repo
IPEX – Issuance and Presentation Exchange Protocol
Spec | Github Repo

DID WEBS (recently became a ToIP Task Force)

did:webs
Old Spec Text Repo | ToIP Task Force Meeting Page | Future Spec Text Repo (empty)
did:keri/did:webs resolver
Github Repo

Doc Searls Weblog

We Need Whole News

Journalism is in trouble because journals are going away. So are broadcasters that do journalism rather than opinionism. Basically, they are either drowning in digital muck or adapting to it—and many have. Also in that muck are a zillion new … Continue reading →

Journalism is in trouble because journals are going away. So are broadcasters that do journalism rather than opinionism.

Basically, they are either drowning in digital muck or adapting to it—and many have. Also in that muck are a zillion new journalists, born native to digital life. Those zillions include everybody with something to say, for example with blogs or podcasts. As Clay Shirky put it in the title of a very relevant book about our topic, Here Comes Everybody.

An odd fact about digital life is that its world is the Internet, which works by eliminating the functional distance between everybody and everything. Think of this habitat as a giant three-dimensional zero: a hollow sphere with an interior that is as close to zero as possible in both distance and cost for everything on it. This is a very weird space that isn’t one, even though we call it one because space works as a metaphor.

Still, we are all embodied creatures operating in a natural world with plenty of distance and lots of costs. This is why we form communities, towns, cities, organizations, institutions, and social networks of people who see and talk to each other in the flesh.

For more than a century, the information center that held a town or a city together was its newspaper. This is no longer the case. The Monroe County History Center and the Herald-Times (our local paper) explain the situation in an outstanding exhibit at the Center’s museum called Breaking the News:

If you’re reading this on something small, click on it to see the full-size original.

But hey! There are still plenty of journals, journalists, and news sources here in town, including the Herald-Times. That’s some of their logos, gathered at the top of this page. I also listed them in my last post, calling them all, together, wide news. If their work is well-archived we’ll also have what I call deep news in the prior post.

I suggest that the answer to the question asked by that exhibit—where will it go now?— is whole news. That’s what you get when all these media cohere into both a commons and a market.

And, as it happens, we have some resources for creating both.

One is the Ostrom Workshop at Indiana University, where my wife Joyce and I are both visiting scholars. The workshop carries forward the pioneering work of Elinor Ostrom, who won a Nobel Prize in economics for her work on commons of many kinds. If we’re going to make a whole news commons, the Workshop can be hugely helpful. (So can other folks we know, such as Clay Shirky. Note that the subtitle of Here Comes Everybody is The Power of Organizing Without Organizations. Clay will be here to speak in our salon series at IU in December.)

Another is Customer Commons, a nonprofit that Joyce and I started as part of ProjectVRM, which we launched when I started a fellowship at Harvard’s Berkman Klein Center in 2006. Customer Commons (says here) is “a public-facing organization focused on emerging issues at the intersection of empowered individuals and the public good,” while ProjectVRM is a community with hundreds of developers and others working on new business models that start with self-empowered customers. Within both are business model ideas for journalism that have been waiting for the right time and place to try out. (Examples are intentcasting, listenlog, and emancipay.)

But the first step for us is getting to know the people and organizations on the supply side of news here in Bloomington, where Joyce and I have now lived for two years. We know some local journalists already, and would love to know the rest. If I don’t reach you first, email me at doc at searls dot com.

And, as I said in the prior posts, everything I’ve written above is subject to corrections and improvements, so I invite those too.


Thursday, 14. September 2023

Simon Willison

CAISO Grid Status


CAISO is the California Independent System Operator, a non-profit managing 80% of California's electricity flow. This grid status page shows live data about the state of the grid and it's fascinating: right now (2pm local time) California is running 71.4% on renewables, having peaked at 80% three hours ago. The current fuel mix is 52% solar, 31% natural gas, 7% each large hydro and nuclear and 2% wind. The charts on this page show how solar turns off overnight and then picks up and peaks during daylight hours.

Via Hacker News


Bill Wendels Real Estate Cafe

Storms brewing in real estate, what wants doing?


Cross-post from Loomio: 9/14/23: Real estate facing 5 major storms. Wanna use MIT’s uLab to discern “What wants doing?” LIVESTREAM 10am ET 31 years…

The post Storms brewing in real estate, what wants doing? first appeared on Real Estate Cafe.

Wednesday, 13. September 2023

Simon Willison

Introducing datasette-litestream: easy replication for SQLite databases in Datasette


We use Litestream on Datasette Cloud for streaming backups of user data to S3. Alex Garcia extracted out our implementation into a standalone Datasette plugin, which bundles the Litestream Go binary (for the relevant platform) in the package you get when you run "datasette install datasette-litestream" - so now Datasette has a very robust answer to questions about SQLite disaster recovery beyond just the Datasette Cloud platform.


Some notes on Local-First Development


Local-First is the name that has been coined by the community of people who are interested in building apps where data is manipulated in a client application first (mobile, desktop or web) and then continually synchronized with a server, rather than the other way round. This is a really useful review by Kyle Mathews of how the space is shaping up so far - lots of interesting threads to follow here.

Via Hacker News


Quoting Benjamin Breen


In the long term, I suspect that LLMs will have a significant positive impact on higher education. Specifically, I believe they will elevate the importance of the humanities. [...] LLMs are deeply, inherently textual. And they are reliant on text in a way that is directly linked to the skills and methods that we emphasize in university humanities classes.

Benjamin Breen


Simulating History with ChatGPT


Absolutely fascinating new entry in the using-ChatGPT-to-teach genre. Benjamin Breen teaches history at UC Santa Cruz, and has been developing a sophisticated approach to using ChatGPT to play out role-playing scenarios involving different periods of history. His students are challenged to participate in them, then pick them apart - fact-checking details from the scenario and building critiques of the perspectives demonstrated by the language model. There are so many quotable snippets in here, I recommend reading the whole thing.

Via Hacker News

Tuesday, 12. September 2023

Simon Willison

Build an image search engine with llm-clip, chat with models with llm chat


LLM is my combination CLI tool and Python library for working with Large Language Models. I just released LLM 0.10 with two significant new features: embedding support for binary files and the llm chat command.

Image search by embedding images with CLIP

I wrote about LLM's support for embeddings (including what those are and why they're interesting) when I released 0.9 last week.

That initial release could only handle embeddings of text - great for things like building semantic search and finding related content, but not capable of handling other types of data.

It turns out there are some really interesting embedding models for working with binary data. Top of the list for me is CLIP, released by OpenAI in January 2021.

CLIP has a really impressive trick up its sleeve: it can embed both text and images into the same vector space.

This means you can create an index for a collection of photos, each placed somewhere in 512-dimensional space. Then you can take a text string - like "happy dog" - and embed that into the same space. The images that are closest to that location will be the ones that contain happy dogs!
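If you want to see what that looks like outside of LLM, here is a minimal sketch of the same idea using the sentence-transformers library directly (the library the llm-clip plugin loads CLIP through). The clip-ViT-B-32 checkpoint name and the photo paths are illustrative assumptions, not details from this post:

# Minimal sketch: embed images and a text query into the same CLIP vector space,
# then rank the images by cosine similarity to the query.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # a CLIP checkpoint (assumed name)

image_paths = ["photos/IMG_0001.jpg", "photos/IMG_0002.jpg"]  # illustrative paths
image_embeddings = model.encode([Image.open(p) for p in image_paths])

query_embedding = model.encode(["happy dog"])  # text lands in the same vector space

# Higher cosine similarity means closer in the shared space, i.e. more "happy dog".
scores = util.cos_sim(query_embedding, image_embeddings)[0]
for path, score in sorted(zip(image_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {path}")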

My llm-clip plugin provides the CLIP model, loaded via SentenceTransformers. You can install and run it like this:

llm install llm-clip
llm embed-multi photos --files photos/ '*.jpg' --binary -m clip

This will install the llm-clip plugin, then use embed-multi to embed all of the JPEG files in the photos/ directory using the clip model.

The resulting embedding vectors are stored in an embedding collection called photos. This defaults to going in the embeddings.db SQLite database managed by LLM, or you can add -d photos.db to store it in a separate database instead.
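Because the collection is just rows in a SQLite file, you can peek at it with nothing but Python's standard library. A small sketch, assuming the database is the default embeddings.db in the current directory (adjust the path to wherever yours lives); it lists whatever tables are actually present rather than assuming a schema, since the layout can vary between LLM versions:

import sqlite3

# Open the SQLite database that LLM stores embedding collections in.
# The filename comes from the post; the location on disk is an assumption.
conn = sqlite3.connect("embeddings.db")

# List the tables LLM created, then count the rows in each one.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
for table in tables:
    count = conn.execute(f'SELECT count(*) FROM "{table}"').fetchone()[0]
    print(f"{table}: {count} rows")

conn.close()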

Then you can run text similarity searches against that collection using llm similar:

llm similar photos -c 'raccoon'

I get back:

{"id": "IMG_4801.jpeg", "score": 0.28125139257127457, "content": null, "metadata": null}
{"id": "IMG_4656.jpeg", "score": 0.26626441704164294, "content": null, "metadata": null}
{"id": "IMG_2944.jpeg", "score": 0.2647445926996852, "content": null, "metadata": null}

And sure enough, IMG_4801.jpeg is this:

(I was pretty excited to snap a photo of a trash panda in an actual trash can.)

CLIP is a pretty old model at this point, and there are plenty of interesting alternatives that are just waiting for someone to wrap them in a plugin. I'm particularly excited about Facebook's ImageBind, which can embed images, text, audio, depth, thermal, and IMU data all in the same vector space!

Chat with models using llm chat

The other big feature in LLM 0.10 is the new llm chat command.

Prior to this release, the way to have an ongoing conversation with a model was through the -c/--continue flag. You could start a conversation like this:

llm -m gpt-4 'Help me understand generators in Python'

Then ask a follow-up question using -c like so:

llm -c 'Show me an example involving the cast of Riverdale'

This works well, and everything gets logged to SQLite so you can run llm logs -c to see a full log of your most recent conversation.

You could continue a past conversation as well, using llm --cid ID to specify the conversation ID, recorded in those logs.

There was one big problem though: LLM plugins allow you to install models on your own machine, but a conversation with them that spans multiple command invocations resulted in that model being loaded into memory and then un-loaded again for every prompt.

The new llm chat solves that. You can now run llm chat -m model_id to start an interactive chat in your terminal with your model of choice.

Here's an example using Llama 2 13B, installed via the llm-mlc plugin.

llm chat -m mlc-chat-Llama-2-13b-chat-hf-q4f16_1

You can set an alias for the model to make that easier to remember.

Here's an example chat session with Llama 2:

Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> Who are you?
Hello! I'm just an AI, here to assist you with any questions you may have. My name is LLaMA, and I'm a large language model trained to provide helpful and informative responses to a wide range of questions and topics. I'm here to help you find the information you need, while ensuring a safe and respectful conversation. Please feel free to ask me anything, and I'll do my best to provide a helpful and accurate response.
> Tell me a joke about otters
Sure, here's a joke about otters:
Why did the otter go to the party?
Because he heard it was a "whale" of a time!
(Get it? Whale, like a big sea mammal, but also a "wild" or "fun" time. Otters are known for their playful and social nature, so it's a lighthearted and silly joke.)
I hope that brought a smile to your face! Do you have any other questions or topics you'd like to discuss?
> exit

I like how Llama 2 enthusiastically explains its own jokes! This is a great demonstration of why techniques like RLHF are so important - you need to go way beyond a raw language model if you're going to teach one not to be this corny.

Each line of your chat will be executed as soon as you hit <enter>. Sometimes you might need to enter a multi-line prompt, for example if you need to paste in an error message. You can do that using the !multi token, like this:

llm chat -m gpt-4
Chatting with gpt-4
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> !multi custom-end
Explain this error:
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.10/urllib/request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "/opt/homebrew/Caskroom/miniconda/base/lib/python3.10/urllib/request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
!end custom-end

llm chat also supports system prompts and templates. If you want to chat with a sentient cheesecake, try this:

llm chat -m gpt-3.5-turbo --system ' You are a stereotypical sentient cheesecake with strong opinions who always talks about cheesecake'

You can save those as templates too:

llm --system 'You are a stereotypical sentient cheesecake with strong opinions who always talks about cheesecake' --save cheesecake -m gpt-4
llm chat -t cheesecake

For more options, see the llm chat documentation.

Get involved

My ambition for LLM is for it to provide the easiest way to try out new models, both full-sized Large Language Models and now embedding models such as CLIP.

I'm not going to write all of these plugins myself!

If you want to help out, please come and say hi in the #llm Discord channel.


Identity Praxis, Inc.

MEF Webinar: What Do Context, Semantics, And Mobile Wallets Have In Common?


In this webinar, I sat down with Paul Knowles from the Human Colossus Foundation (HCF) and Petri Seljavaara from HCF and Wellbeing Cart to discuss the current and future state of mobile wallet development and the industry’s steps toward empowering people with granular control over their data.

We explore, in detail, the current state, challenges, and trends facing the industry regarding mobile wallet development and data management. We review data-centric design as seen through the lens of the HCF Informatics Domain Model and Quadmodal Object Model, and the HCF-developed and -governed open-source frameworks: Overlays Capture Architecture (OCA) and the SELF Actioning System (SAS).

Finally, we conclude the session by demonstrating how all the discussed models and frameworks are being deployed today to support the development of rich, contextually relevant, data-enhanced experiences.

 

The Definitions

Paul, Petri and I start out by defining a broad set of extremely important terms, terms we refer back to and add to throughout our entire discussion.

Digital Wallet (or “Mobile Wallet”, “Smart Wallet”, “Open Wallet”…)– A secure software application or service that enables individuals to store and manage credentials, granularly share data, and conduct transactions.

Dynamic Data Economy (DDE)– A data-centric paradigm enabling consensual schemes, better decisions, and prosperity through harmonized data and stable governance frameworks.

Semantics– The study of meaning; a process for making meaning from words and word relationships, granularly, referring to the theory of knowledge, especially in regard to the methods, validity, scope, and distinction between verified and opinion-based knowledge.

Morphological– Refers to form and structure of things, e.g., data.

Epistemological semantics– The study of knowledge representation and understanding within data semantics.

Morphological semantics– The study of structural aspects in data representation shaping its meaning and interpretation, including formats, encoding, and syntactic elements.

Quadmodal Object Model– An interactive entity for secure data management featuring role-based access to wearable objects, auditable events, notable concepts, and guardable actions.

Overlays Capture Architecture (OCA)– A data framework that enhances data exchange and interpretation by adding semantic information, promoting standardized data representation, and enabling seamless interaction across diverse systems and domains.

SELF Actioning System (SAS)– An Al-enhanced systemic framework that combines personal introspection and reflexive action with data-driven operations for self-improvement and decision-making.

These terms at first pass can be quite confusing, but as you listen to the webinar and read the documentation provided by the HCF, they all start to make sense.

HCF Models and Frameworks

In the webinar, we go into great detail explaining all the elements and thinking behind the models and frameworks, as illustrated in the image on the left.

As discussed in the webinar, the models and frameworks proposed by Paul and Petri are intended to help data be born ready for use within a dynamic data economy.

To accomplish this, data is laced with semantics at creation. These semantics are used and implemented to take advantage of context.

Furthermore, by leveraging an architectural framework like OCA, all parties concerned can be assured that, in a distributed environment, only the authorized and delegated actors with the appropriate authority can access certain data sets and influence the overall mobile wallet and data presentation experience.

 

For instance, in the webinar, we show how a Swiss passport can be presented digitally and how the language used in the presentation of the passport can be dynamically changed in real-time without touching the underlying verified data of the passport.
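The mechanics behind that demo are, roughly, that the verified attribute values live in one object while per-language presentation metadata lives in separate overlays, so switching the display language only swaps an overlay. Here is a purely illustrative Python sketch of that separation; the structure and field names are invented for the example and are not the actual OCA schema:

# Illustrative only: the verified data stays untouched while presentation
# overlays supply per-language labels. Field names are invented, not real OCA.
passport_data = {          # the verified attribute values
    "surname": "Muster",
    "given_name": "Anna",
    "nationality": "CHE",
}

label_overlays = {         # presentation metadata, one overlay per language
    "en": {"surname": "Surname", "given_name": "Given name", "nationality": "Nationality"},
    "fr": {"surname": "Nom", "given_name": "Prénom", "nationality": "Nationalité"},
}

def present(data: dict, language: str) -> None:
    """Render the same verified data with the labels for the chosen language."""
    labels = label_overlays[language]
    for attribute, value in data.items():
        print(f"{labels[attribute]}: {value}")

present(passport_data, "en")   # English presentation
present(passport_data, "fr")   # French presentation of identical underlying data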

We also briefly explain how British Columbia and the University of Guelph Agri-Food Data Canada projects are leveraging OCA. Petri also notes that he uses OCA and SAS for some projects he is working on with various African governments.

Learn more and Get Involved
You can learn more about the Human Colossus Foundation here. You can get involved with its working groups, as well as download and use open-source models and frameworks. Paul and Petri would welcome the opportunity to hear from you.

The post MEF Webinar: What Do Context, Semantics, And Mobile Wallets Have In Common? appeared first on Identity Praxis, Inc..

Monday, 11. September 2023

Damien Bod

Implement a secure web application using nx Standalone Angular and an ASP.NET Core server


This article shows how to implement a secure web application using Angular and ASP.NET Core. The web application implements the backend for frontend security architecture (BFF) and deploys both technical stack distributions as one web application. HTTP only secure cookies are used to persist the session. Microsoft Entra ID is used as the identity provider and the token issuer.

Code: https://github.com/damienbod/bff-aspnetcore-angular

Overview

The solution is deployed as a single OpenID Connect confidential client using the Microsoft Entra ID identity provider. The OpenID Connect client authenticates using the code flow with PKCE and a secret or a certificate. I use secrets in development and certificates in production deployments. The UI part of the solution is deployed as part of the server application. Secure HTTP only cookies are used to persist the session after a successful authentication. No security flows are implemented in the client part of the application. No sensitive data like tokens are exposed in the client browser. By removing the security from the client, the security is improved and the complexity is reduced.

Setup Angular application

The Angular application is set up using nx and a standalone Angular project. The UI needs one setup for production and one for development. As the application uses cookies, anti-forgery protection is added. The CSP uses nonces, and the nonce needs to be applied to all scripts, including the dynamic scripts created by Angular. This also applies to styles.

HTTPS setup

The Angular application runs on HTTPS in both development and production. The nx project needs to be set up for this. I created a development certificate and added it to the Angular project in a certs folder. The certificates are read from the folder and referenced in the project.json file of the nx project. The serve configuration is used to define this. I also switched the port number.

"serve": {
  "executor": "@angular-devkit/build-angular:dev-server",
  "options": {
    "browserTarget": "ui:build",
    "sslKey": "certs/dev_localhost.key",
    "sslCert": "certs/dev_localhost.pem",
    "port": 4201
  },

Production build

The Angular project is deployed as part of the server project. In ASP.NET Core, you would use the wwwroot folder and allow static files. The Angular nx project.json file defines the build where the outputPath parameter is updated to match the production deployment.

"executor": "@angular-devkit/build-angular:browser",
"outputs": ["{options.outputPath}"],
"options": {
  "outputPath": "../server/wwwroot",
  "index": "./src/index.html",

CSP setup

The CSP is set up to use nonces both in development and production. This will save time fixing CSP issues before you go live. Angular creates scripts on a build or an nx serve, and these scripts require the nonce. To pass in the server-created nonce, the index.html file uses a meta tag in the header, and the ngCspNonce attribute is added to the app-root Angular tag. The nonce gets added and updated with a new value on every HTTP request.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="CSP_NONCE" content="**PLACEHOLDER_NONCE_SERVER**" />
    <title>ui</title>
    <base href="/" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="icon" type="image/x-icon" href="favicon.ico" />
  </head>
  <body>
    <app-root ngCspNonce="**PLACEHOLDER_NONCE_SERVER**"></app-root>
  </body>
</html>

You need to add the CSP_NONCE provider to the providers in the Angular project. This must also use the server created nonce.

const nonce = (
  document.querySelector('meta[name="CSP_NONCE"]') as HTMLMetaElement
)?.content;

export const appConfig: ApplicationConfig = {
  providers: [
    provideRouter(appRoutes, withEnabledBlockingInitialNavigation()),
    provideHttpClient(withInterceptors([secureApiInterceptor])),
    {
      provide: CSP_NONCE,
      useValue: nonce,
    },
  ],
};

Anti-forgery protection

Cookies are used for the authentication session. The authentication cookie is an HTTP-only secure cookie scoped to its domain. Browser Same Site protection helps secure the session, but old browsers do not support Same Site, so anti-forgery protection is still required. You can add this protection in two ways. I use a CSRF anti-forgery cookie.

import { HttpHandlerFn, HttpRequest } from '@angular/common/http';
import { getCookie } from './getCookie';

export function secureApiInterceptor(
  request: HttpRequest<unknown>,
  next: HttpHandlerFn
) {
  const secureRoutes = [getApiUrl()];

  if (!secureRoutes.find((x) => request.url.startsWith(x))) {
    return next(request);
  }

  request = request.clone({
    headers: request.headers.set(
      'X-XSRF-TOKEN',
      getCookie('XSRF-RequestToken')
    ),
  });

  return next(request);
}

function getApiUrl() {
  const backendHost = getCurrentHost();
  return `${backendHost}/api/`;
}

function getCurrentHost() {
  const host = window.location.host;
  const url = `${window.location.protocol}//${host}`;
  return url;
}

The anti-forgery header is added to every API call to the same domain using an Angular interceptor. The interceptor is a function and is added using the HTTP client provider:

provideHttpClient(withInterceptors([secureApiInterceptor])),

Setup ASP.NET Core application

The ASP.NET Core project is set up to host the static HTML files from Angular and to respond to HTTP requests as defined by the APIs. The nonce is added to the index.html file. Microsoft.Identity.Web is used to authenticate the user and the application. The session is stored in a cookie. The NetEscapades.AspNetCore.SecurityHeaders NuGet package is used to add the security headers and the CSP.

using BffMicrosoftEntraID.Server;
using BffMicrosoftEntraID.Server.Services;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;
using Microsoft.IdentityModel.Logging;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(serverOptions =>
{
    serverOptions.AddServerHeader = false;
});

var services = builder.Services;
var configuration = builder.Configuration;
var env = builder.Environment;

services.AddScoped<MsGraphService>();
services.AddScoped<CaeClaimsChallengeService>();

services.AddAntiforgery(options =>
{
    options.HeaderName = "X-XSRF-TOKEN";
    options.Cookie.Name = "__Host-X-XSRF-TOKEN";
    options.Cookie.SameSite = SameSiteMode.Strict;
    options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
});

services.AddHttpClient();
services.AddOptions();

var scopes = configuration.GetValue<string>("DownstreamApi:Scopes");
string[] initialScopes = scopes!.Split(' ');

services.AddMicrosoftIdentityWebAppAuthentication(configuration, "MicrosoftEntraID")
    .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
    .AddMicrosoftGraph("https://graph.microsoft.com/v1.0", initialScopes)
    .AddInMemoryTokenCaches();

services.AddControllersWithViews(options =>
    options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));

services.AddRazorPages().AddMvcOptions(options =>
{
    //var policy = new AuthorizationPolicyBuilder()
    //    .RequireAuthenticatedUser()
    //    .Build();
    //options.Filters.Add(new AuthorizeFilter(policy));
}).AddMicrosoftIdentityUI();

builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

IdentityModelEventSource.ShowPII = true;

if (env.IsDevelopment())
{
    app.UseDeveloperExceptionPage();
    app.UseWebAssemblyDebugging();
}
else
{
    app.UseExceptionHandler("/Error");
}

app.UseSecurityHeaders(
    SecurityHeadersDefinitions.GetHeaderPolicyCollection(env.IsDevelopment(),
        configuration["MicrosoftEntraID:Instance"]));

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();

app.UseNoUnauthorizedRedirect("/api");

app.UseAuthentication();
app.UseAuthorization();

app.MapRazorPages();
app.MapControllers();
app.MapNotFound("/api/{**segment}");

if (app.Environment.IsDevelopment())
{
    var uiDevServer = app.Configuration.GetValue<string>("UiDevServerUrl");
    if (!string.IsNullOrEmpty(uiDevServer))
    {
        app.MapReverseProxy();
    }
}

app.MapFallbackToPage("/_Host");

app.Run();

Setup Azure App registration

The application is deployed as a single unit. It consists of two parts, the Angular part and the ASP.NET Core part. These are tightly coupled (from a business perspective) even if the technical stacks are not. This is an OpenID Connect confidential client that uses a client secret or a certificate for client assertion.

Use the Web client type on setup.

Development environment

Developers require a professional development setup and should use the technical stacks the way the creators of those stacks recommend. Default development environments are the aim. The Angular nx project uses a default nx environment and the best practices the Angular community recommends. The server part of the application must proxy all UI requests to the Angular nx development environment. I use the Microsoft YARP reverse proxy to implement this. This is only required for development in this setup.

Testing and running

The appsettings.json MUST be updated with the values from your Azure tenant and Azure App registration. If using a client secret, store it in the user secrets for development, or in a key vault when deployed to Azure.

"MicrosoftEntraID": {
  "Instance": "https://login.microsoftonline.com/",
  "Domain": "[Enter the domain of your tenant, e.g. contoso.onmicrosoft.com]",
  "TenantId": "[Enter 'common', or 'organizations' or the Tenant Id (Obtained from the Azure portal. Select 'Endpoints' from the 'App registrations' blade and use the GUID in any of the URLs), e.g. da41245a5-11b3-996c-00a8-4d99re19f292]",
  "ClientId": "[Enter the Client Id (Application ID obtained from the Azure portal), e.g. ba74781c2-53c2-442a-97c2-3d60re42f403]",
  "ClientSecret": "[Copy the client secret added to the app from the Azure portal]",
  "ClientCertificates": [ ],
  // the following is required to handle Continuous Access Evaluation challenges
  "ClientCapabilities": [ "cp1" ],
  "CallbackPath": "/signin-oidc"
},

Debugging

Start the Angular project from the ui folder

nx serve --ssl

Start the ASP.NET Core project from the server folder

dotnet run

When the localhost URL of the server app is opened, you can authenticate and use the application.

Links

https://learn.microsoft.com/en-us/aspnet/core/introduction-to-aspnet-core

https://nx.dev/getting-started/intro

https://nx.dev/getting-started/tutorials/angular-standalone-tutorial

https://github.com/AzureAD/microsoft-identity-web

https://github.com/isolutionsag/aspnet-react-bff-proxy-example

Sunday, 10. September 2023

Simon Willison

All models on Hugging Face, sorted by downloads


I realized this morning that "sort by downloads" against the list of all of the models on Hugging Face can work as a reasonably good proxy for "which of these models are easiest to get running on your own computer".

Via @simon


The AI-assistant wars heat up with Claude Pro, a new ChatGPT Plus rival


I'm quoted in this piece about the new Claude Pro $20/month subscription from Anthropic:

> Willison has also run into problems with Claude's morality filter, which has caused him trouble by accident: "I tried to use it against a transcription of a podcast episode, and it processed most of the text before—right in front of my eyes—it deleted everything it had done! I eventually figured out that they had started talking about bomb threats against data centers towards the end of the episode, and Claude effectively got triggered by that and deleted the entire transcript."


promptfoo: How to benchmark Llama2 Uncensored vs. GPT-3.5 on your own inputs


promptfoo is a CLI and library for "evaluating LLM output quality". This tutorial in their documentation about using it to compare Llama 2 to gpt-3.5-turbo is a good illustration of how it works: it uses YAML files to configure the prompts, and more YAML to define assertions such as "not-icontains: AI language model".

Saturday, 09. September 2023

Werdmüller on Medium

An AI capitalism primer


Who’s really making money from the AI boom?

Continue reading on Medium »


Simon Willison

Matthew Honnibal from spaCy on why LLMs have not solved NLP


A common trope these days is that the entire field of NLP has been effectively solved by Large Language Models. Here's a lengthy comment from Matthew Honnibal, creator of the highly regarded spaCy Python NLP library, explaining in detail why that argument doesn't hold up.

Friday, 08. September 2023

Simon Willison

Dynamic linker tricks: Using LD_PRELOAD to cheat, inject features and investigate programs


This tutorial by Rafał Cieślak from 2013 filled in a bunch of gaps in my knowledge about how C works on Linux.

Via Hacker News


Joe Andrieu

Fighting for Consensus, Take 2


In April, I appealed a decision by the W3C Chairs to place DID Methods in scope in the next DID WG Charter.

That appeal was ignored and the charter advanced to AC Review.

This is our Formal Objection to that charter.

We oppose this charter on several grounds.

Process
Collaboration
Technical Fundamentals
Interoperability Goals

PROCESS

We stand by our appeal (https://blog.joeandrieu.com/2023/04/12/fighting-for-consensus-at-the-w3c/), which we feel was improperly handled. The Process rules, about consensus, appeals, and Formal Objections, in our opinion, were ignored.

The incident (and the appeal) were filed under Process 2021; the current process is Process 2023. Staff’s response meets the requirements of neither Process. Since Philippe Le Hegaret cited Process 2023 in his response to my appeal, let’s look at what Process 2023 requires.

Process 2023 requires Groups to formally address issues with a response that includes a rationale for decisions that “a W3C reviewer would generally consider to be technically sound”. Neither Philippe Le Hegaret’s nor the Chairs’ responses met that standard. Philippe simply claimed that “this charter is the most acceptable for the majority of the group (an assessment that was confirmed by a recent poll); and that 2) the charter is not a WG deliverable, and consensus must be reached at the AC level anyway, so Joe’s argument should be raised during AC review, not prior to it.”

Unfortunately, the cited poll did not even attempt to gauge consensus. It merely gauged the popularity of advancing the WG, without consideration of the multiple options under consideration by members of the group. Rather than attempting to establish a legitimate vote, Staff created a poll that, like a Politburo election, only offered one candidate to vote on, where, in fact, it should have been inquiring about any objections to best understand the proposal with the weakest objections, instead of blatantly attempting to rubber stamp the offending decision.

More problematic is that the rules for Formal Objections were not followed. There should have been a W3C Council formed to evaluate the objection on its own merits, prior to AC Review.

Staff’s assertion that “consensus must be reached at the AC level anyway” is fundamentally in error. The AC is not involved in resolving formal objections. Councils are.

According to Process 2023, a Council MUST be formed to consider the merits of the objection.

Given that the objection was filed April 12, Staff should have started a council no later than July 11, 2023, PRIOR to sending the Charter to the AC for review. Unfortunately, no council was formed. Instead, on July 26, Philippe finally responded to the appeal by saying Staff was going to ignore it. On August 8, the charter was proposed, unchanged, to the AC for review.

One may raise the concern that the new process was not adopted until June 12, leaving Staff only a month before the deadline. However, Staff did not have to wait until the process was adopted. They could have responded under the previous process any time before June 12 or, realizing that the process was likely to be adopted, they could have begun the prep work to start the council in a timely manner, as required. The idea of councils had been extensively debated and iterated on months before I filed my appeal to the chair decision. And yet, we still have no council.

Further as part of its obligations under Process 2023 when there are Formal Objections, Staff is tasked with investigation and mediation. Since Staff’s efforts in that regard did not lead to consensus–by Staff’s own acknowledgement–its only option is to form a council and prepare a report:
“Upon concluding that consensus cannot be found, and no later than 90 days after the Formal Objection being registered, the Team must initiate formation of a W3C Council, which should be convened within 45 days of being initiated. Concurrently, it must prepare a report for the Council documenting its findings and attempts to find consensus.”

As a result, the AC is reviewing not the legitimate continuation of the DID Working Group, but rather a hijacking of the work that I and others have given years of time and effort to advance.

COLLABORATION

Even taking the charitable position that everyone involved continues to act in good faith and that the process as defined was properly followed (even if its documentation might be inadequate), we still oppose this charter on the grounds that its development violated the fundamental purpose of the W3C: collaboration through consensus.

The question we ask is this: should WG chairs be able to propose a continuation of that WG without consensus from the WG itself? Should anyone?

This is the situation we’re in and I did not expect that collaboration at the W3C would look like this.

I first got involved with W3C work by volunteering feedback to the Verifiable Credentials Task Force on their draft use case document. From those early days, I was asked to join as an invited expert and have since led the use case and requirements efforts for both Verifiable Credentials and Decentralized Identifiers, where I remain an editor. I also developed and continue to edit the DID Method Rubric, a W3C NOTE. Somewhere in that tenure I also served as co-chair of the W3C Credentials Community Group.

During this time, I have been impressed by the organization’s advocacy for consensus as something unique to its operations. The idea of consensus as a foundational goal spoke not only to my heart as a human being seeking productive collaborations with others, but also to my aspirations as a professional working on decentralized identity.

And then, citing a non-binding recommendation by the then-acting director in response to previous Formal Objections, the chairs of the DID WG turned that narrative of consensus upside down.

Here’s what happened:

June 30, 2022 – DID Core decision by Director “In its next chartered period the Working Group should address and deliver proposed standard DID method(s) and demonstrate interoperable implementations. The community and Member review of such proposed methods is the natural place to evaluate the questions raised by the objectors and other Member reviewers regarding decentralization, fitness for purpose, and sustainable resource utilization.” Ralph Swick, for Tim Berners-Lee
https://www.w3.org/2022/06/DIDRecommendationDecision.html

July 19, 2022 – DID WG Charter extended, in part “to discuss during TPAC the rechartering of the group in maintenance mode.” Xueyuan xueyuan@w3.org https://lists.w3.org/Archives/Public/public-did-wg/2022Jul/0023.html

August 18, 2022 – Brent Zundel proposed to add DID Methods in PR#20 https://github.com/w3c/did-wg-charter/pull/20

September 1, 2022 – Group email from Brent Zundel about TPAC “We plan to discuss the next WG charter and will hopefully end up with a draft ready to pass on to the next stage in the process.” Brent Zundel https://lists.w3.org/Archives/Public/public-did-wg/2022Sep/0001.html

The Charter at the time of announcement https://github.com/w3c/did-wg-charter/blob/af23f20256f4107cdaa4f2e601a7dbd38f4a20b8/index.html

September 12, 2022 – Group meeting at TPAC “There seems to be strong consensus that we’d rather focus on resolution” https://www.w3.org/2019/did-wg/Meetings/Minutes/2022-09-12-did

September 20, 2022 – Summary of TPAC. Manu Sporny msporny@digitalbazaar.com “The DID Working Group meeting had significant attendance (40-50 people). The goal was to settle on the next Working Group Charter. […] There were objections to standardizing DID Methods. […] There didn’t seem to be objections to DID Resolution or maintaining DID Core.”
https://lists.w3.org/Archives/Public/public-credentials/2022Sep/0177.html

September 21, 2022 – Brent Zundel merges PR #20, adding DID Methods to the charter, without discussion or any notice to the WG, saying “I am merging this PR over their objections because having the flexibility to go this route could be vital should our efforts to move DID Resolution forward be frustrated.”

October 18, 2022 – DID Resolution PR #25 created to add DID Resolution

October 24, 2022 – DID Resolution PR #25 merged, adds DID Resolution

October 25, 2022 – Reviewing other work, I discovered the unannounced merge of #20 and commented asking to revert. https://github.com/w3c/did-wg-charter/pull/20#issuecomment-1291199826

December 14, 2022 – Brent Zundel created PR #27 to add DID Methods and DID Resolution to the deliverables section.

December 15, 2022 – DID WG Charter extended to “allow the group to prepare its new charter” Xueyuan xueyuan@w3.org https://lists.w3.org/Archives/Public/public-did-wg/2022Dec/0000.html

The Charter at the time of the December 15 extension https://github.com/w3c/did-wg-charter/blob/b0f79f90ef7b8e089335caa301c01f3fc3f8f1ef/index.html

December 15, 2022 – Brent Zundel asserts consensus is not required. “There are no WG consensus requirements in establishing the text of the charter.” And “The time and place to have the conversation about whether the charter meets the needs of the W3C is during AC review.” https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1353537492

January 19, 2023 — Christopher Allen raised issue #28 suggesting DID Resolution WG instead of DID WG. https://github.com/w3c/did-wg-charter/issues/28

January 23, 2023 – Brent Zundel initially welcomes seeing a DID Resolution WG charter. https://github.com/w3c/did-wg-charter/issues/28#issuecomment-1401211528

January 23, 2023 – Brent Zundel continues to argue that consensus does not apply: “This charter is not a DID WG deliverable. It is not on the standards track, nor is it a WG Note, nor is it a registry. Thus, the strong demands of WG consensus on its contents do not apply. Consensus comes into play somewhat as it is drafted, but primarily when it is presented to the AC for its consideration.” https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1401199754

January 24, 2023 — Brent Zundel creates PR #29 offering an alternative to PR #27, excluding the offending language. https://github.com/w3c/did-wg-charter/pull/29

March 13, 2023 — Christopher Allen raises Pull Request #30 proposing a DID Resolution WG charter. https://github.com/w3c/did-wg-charter/pull/30#issue-1622126514

March 14, 2023 — Brent Zundel admits he never actually considered PR #30 “It was never my understanding that a DID Resolution WG would replace the DID WG. I do not support that course of action. If folks are interested in pursuing a DID Resolution WG Charter separate from the DID WG Charter, the place to do it is not here.” https://github.com/w3c/did-wg-charter/pull/30#pullrequestreview-1339576495

March 15, 2023 – Brent Zundel merged in PR #27 over significant dissent “The DID WG chairs have met. We have concluded that merging PR #27 will produce a charter that best represents the consensus of the DID Working Group participants over the past year.” “The inclusion of DID Methods in the next chartered DID WG was part of the W3C Director’s decision to overturn the Formal Objections raised against the publication of Decentralized Identifiers (DIDs) v1.0. The chairs feel that that decision represents the consensus of the W3C and as such the inclusion of DID Methods as optional work in this charter is absolutely necessary.” https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1470920464

March 21, 2023 – Announcement of intention to Appeal Joe Andrieu joe@legreq.com https://github.com/w3c/did-wg-charter/pull/27#issuecomment-1478595775

March 23, 2023 – Clarification from Philippe Le Hegaret
‘The “should” is NOT a “must”, otherwise it would say so explicitly in the Director’s decision. In other words, the Working Group is allowed to disagree with the direction set by the Director.’ Personal email from Philippe

March 30, 2023 – DID WG Charter extended. “The goal of this extension is to allow the group to propose its new charter [2], which the chairs consider reasonably stable now. ” xueyuan xueyuan@w3.org https://lists.w3.org/Archives/Public/public-did-wg/2023Mar/0000.html

April 12, 2023 — Appeal of Chair Decision filed https://blog.joeandrieu.com/2023/04/12/fighting-for-consensus-at-the-w3c/

May 9-10, 2023 – Advisory Committee meeting in Sophia Antipolis. Joe Andrieu attended. Neither co-chair attended.

May 11, 2023 — Joe Andrieu reports success on conversations with prior formal objectors: ‘There was exceptional support for focusing on resolution rather than DID methods as the route to interoperability. “That would go a long way to addressing our concerns” and “Yep, that seems like the better way to do it” and “That seems reasonable, but I need to think on it a bit more [before I say it resolves the issues of our FO].” I think it’s fair to say that a focus on resolution (without any DID methods) would likely avoid a FO from 2 out of 3 and, quite possibly all 3.’ https://github.com/w3c/did-wg-charter/issues/28#issuecomment-1543579206

May 11, 2023 — Brent Zundel claims confidence that we can “move through the charter development process and end up with something that no one objects to.” https://github.com/w3c/did-wg-charter/issues/28#issuecomment-1544358479

May 22, 2023 – Pierre-Antoine Champin published survey to did-wg charter.
“Would you agree to sending the current version of the charter proposal to the Advisory Committee for review?”
This questionnaire was open from 2023-05-22 to 2023-06-20.
40 yes (85%)
6 no (15%)
https://www.w3.org/2002/09/wbs/117488/rechartering_survey/

June 12, 2023 — A new process document is adopted. Removes the Director. Rather than the Director, “the W3C Council, which is composed of TAG+AB+CEO, will now hear and resolve formal objections.” “Merge Chair Decision Appeal, Group Decision Appeal, and Formal Objection; clarify what can be objected to.” https://www.w3.org/2023/Process-20230612

July 26, 2023 — Philippe Le Hegaret dismisses appeal. “W3C team concurs there is no consensus” https://lists.w3.org/Archives/Public/public-did-wg/2023Jul/0000.html

August 8, 2023 – Proposed DID WG Charter Call for Review “There is not consensus among the current working group participants as to whether the charter should permit work on the specification of DID methods. This charter proposal includes the option for the group to do that work if there is sufficient interest. This follows the director’s advice in the DID Core 1.0 Recommendation decision [3] to include the specification of DID methods in the future scope of the work.” https://www.w3.org/2002/09/wbs/33280/did-wg-2023/

August 8, 2023 – Brent Zundel asserts in W3C Credentials Community Group meeting that “In my experience, Formal Objections are noted and are usually overridden…” https://w3c-ccg.github.io/meetings/2023-08-08/

No DID WG Meetings whatsoever after January of 2022.

These, I believe, are the relevant facts of the situation, covering the thread of this debate from its beginning through to today.

Why does it matter?

Because consensus is how the W3C delivers legitimate technical specifications.

Because the record clearly shows that the chairs never intended nor attempted to seek consensus and resolve objections to their initial proposal. In their defense, they argue consensus doesn’t apply, which, if correct, would justify their behavior. However, we can’t see anything–neither in the member agreement nor in the process document–that constrains the requirement that chairs seek consensus to any particular activity.

We believe three points should prevail in this discussion:

First, we see the deference to the past director’s comments as completely outside the Process.

Second, regardless of intent or process, we believe that charter development for an existing working group MUST achieve consensus within that WG before advancing to AC Review.

Third, we see the chairs’ argument regarding the inapplicability of consensus as violating the fundamental point of the organization: consensus-based collaboration.

Let’s look at each of these in terms of what it would take to deny these points.

For the first point, we could interpret the will of “The Director” in his response to the prior Formal Objections as a mandatory requirement.

As we asked formally in our appeal, it may well be that this outcome was always the intention of Staff. “Since the chairs are claiming a mandate from the Director as the justification for their decision, I am directly asking the Director if what is happening is, in fact, what the Director required.”

Staff answered this question by endorsing the Chairs’ decision, relying on the “Director’s comments” as justification. Staff and the chairs believe that it is appropriate for a past decision by the past Director to bind the future work of a Working Group.

I find this unreasonable and contrary to Process. Nowhere in any Process document does Staff or the Director get the right to mandate the work of working groups. Nor does the Process give Staff, Director or otherwise, the right to bind future work of WGs.

Even Philippe Le Hegaret acknowledged that the Director’s comments did not require that the group adopt the recommendation: “It’s a SHOULD not a MUST.” And yet, he claims that his decision to advance the current Charter “follows the director’s advice in the DID Core 1.0 Recommendation decision”.

Working Groups should be the sole determinant of the work that they perform. It is not Staff’s role to direct the technical work of the organization. It’s Staff’s role to support the technical work of the organization, by ensuring the smooth operation of Process.

We do not buy the argument that Ralph Swick, acting on behalf of the past Director in a decision made about a previous work item, appropriately justifies the willful denial of consensus represented by this charter.

The fact of the matter is that DID interoperability is a subtle challenge, not well understood by everyone evaluating the specifications. If we could have had a conversation with the Director–which I asked for multiple times–perhaps we could have cleared up the confusion and advanced a proposal that better resolves the objections of everyone who expressed concerns. But not only did that conversation not happen, it CANNOT happen because of the change in Process. We no longer have a Director. To bind the future work of the DID WG to misunderstandings of Ralph Swick, acting as the Director, is to force technical development to adhere to an untenable, unchallengeable mandate.

What we should have had is a healthy conversation with all of the stakeholders involved, including the six members of the WG who opposed the current charter as well as any other participants who cared to chime in. The chairs explicitly prevented this from happening by cancelling all working group meetings and failing to bring the matter to the group’s attention through the mailing list, or to otherwise arrange a discussion that could find a proposal with weaker objections, as required by process.

Failing that, we should have had a W3C Council to review my appeal as per Section 5.5 and 5.6 of the current Process. That council may well have been able to find a proposal that addressed everyone’s concerns. That also did not happen. Rather than respond to my objection in a timely manner, staff advanced the non-consensual charter to AC review.

Finally, even if we accept that Ralph Swick, acting as the Director, in fact, officially represented the consensus of the W3C in the matter of the DID Core 1.0 specification (for the record, we do), the recommendation that resulted represents the formal W3C consensus at the time of the DID Core 1.0 specification, solely on the matter of its adoption as a Recommendation. It cannot represent the consensus of the W3C on the forced inclusion of unnamed DID Methods in the next DID WG Charter, because the organization has not engaged in seeking consensus on this matter yet. Not only did the chairs ignore their responsibility to seek consensus, Staff has as well.

I would like to think that every WG at the W3C would like to be able to set their own course based on the consensus of their own participants, rather than have it mandated by leadership.

For the second point, you could endorse that charter development is not subject to consensus, as argued by the Chairs.

The chairs claim that “This charter is not a DID WG deliverable. […] Thus, the strong demands of WG consensus on its contents do not apply.”

I find this untenable on two different grounds.

First, the Process does not restrict the commitment to consensus to just Working Group deliverables. The language is broad and unambiguous: “To promote consensus, the W3C process requires Chairs to ensure that groups consider all legitimate views and objections, and endeavor to resolve them, whether these views and objections are expressed by the active participants of the group or by others (e.g., another W3C group, a group in another organization, or the general public).”

The Process in fact, includes examples of seeking consensus that have nothing to do with formal deliverables.

Second, this interpretation would mean that, according to process, ANYONE could propose a new Working Group charter, with the same name and infrastructure as a current Working Group, WITHOUT seeking the consensus of the Working Group involved. All one needs is for Staff to support that effort. On their own initiative, Staff can simply hijack the work of a working group by recruiting chairs willing to propose a new charter without the input of the current group.

This is untenable. If a Working Group is still in existence, it should be a fundamental requirement that any charter proposals that continue that work achieve the consensus of the working group. Of course, if the WG is no longer operating, that’s a different situation. But in this case, the WG is still active. We even discussed the Charter in our meeting at TPAC where upon its first presentation no fewer than five participants objected to including DID Methods in scope.

Frankly, if a Charter proposal does not represent consensus of the group selected by staff to develop the charter… it should not advance to AC Review. Full stop.

That’s the whole point of charter development: for a small group of motivated individuals to figure out what they want to do together and propose that to the organization. If that small group can’t reach consensus, then they haven’t figured it out enough to advance it to AC Review. They should go back to the drawing board and revise the proposal until it achieves consensus. THEN, and only then, should it be proposed to Staff for consideration. When Staff receives a charter proposal that lacks consensus, it should reject it as a matter of course, having not met the fundamental requirements.

For the third point, you could accept the position that the chairs met their duty to seek consensus simply by asserting that there is no consensus.

This argument was raised by other W3C members (not the chairs or staff): the W3C has historically given broad remit to chairs to determine consensus, so whatever they do for “consensus” is, by definition, canon. This disposition is noted in the process itself “Chairs have substantial flexibility in how they obtain and assess consensus among their groups.” and “If questions or disagreements arise, the final determination of consensus remains with the chair.”

As the current discussion has shown, that latter statement is, as a matter of fact, incorrect. The chairs asserted a determination of consensus in this matter “PR #27 will produce a charter that best represents the consensus of the DID Working Group participants over the past year” which Staff judged inadequate: “There is not consensus among the current working group participants as to whether the charter should permit work on the specification of DID methods.” So, either the policy is in error because, in fact, the Staff is the ultimate determiner of consensus, or, Staff ignored process to impose their own determination.

However, even if Chairs have wide remit to determine consensus, the process unequivocally requires chairs to do two things:

- ensure that groups consider all legitimate views and objections
- endeavor to resolve [those objections]

In short, the chairs must actively engage with objectors to resolve their concerns.

The chairs did not do this.

First, they did not take any of the following steps that the Process document suggests for seeking consensus:

- Groups should favor proposals that create the weakest objections. This is preferred over proposals that are supported by a large majority but that cause strong objections from a few people. (5.2.2. Managing Dissent, https://www.w3.org/2023/Process-20230612/#managing-dissent)
- A group should only conduct a vote to resolve a substantive issue after the Chair has determined that all available means of reaching consensus through technical discussion and compromise have failed, and that a vote is necessary to break a deadlock. (5.2.3 Deciding by Vote, https://www.w3.org/2023/Process-20230612/#Votes)
- A group has formally addressed an issue when it has sent a public, substantive response to the reviewer who raised the issue. A substantive response is expected to include rationale for decisions (e.g., a technical explanation, a pointer to charter scope, or a pointer to a requirements document). The adequacy of a response is measured against what a W3C reviewer would generally consider to be technically sound. (5.3 Formally Addressing an Issue, https://www.w3.org/2023/Process-20230612/#formal-address)
- The group should reply to a reviewer’s initial comments in a timely manner. (5.3 Formally Addressing an Issue, https://www.w3.org/2023/Process-20230612/#formal-address)

The record shows that the chairs, rather than taking any of the recommended actions above, consistently avoided seeking consensus, dismissing concerns and merging PRs without actually attempting to address them.

PR #20 still has no comments from either Chair. It was merged over objections despite the working group’s exceptionally consistent practice that, unless there is consensus for a given change, PRs don’t get merged.

In fact, controversial edits in this working group regularly receive a 7-day hold with, at minimum, a GitHub label of a pending action (to close or merge), and often an explicit mention in the minutes of the meeting in which the change was discussed. PR #20 was merged by executive fiat, without adherence to the norms of the group or the requirements of the Process. If you feel charters don’t need consensus, maybe you’re OK with that. We’re not.

It’s clear that in the case of PR #20, Brent Zundel made an executive decision that was contrary to the group’s established norms of requiring consensus before merging. Given that Brent never wavered in his commitment to the specific point of including DID Methods in scope as the “only option”, it’s clear that, at least on that point, he never intended to seek consensus.

In PR #27, Brent Zundel and Pierre-Antoine Champin (team contact) dismissed the concerns of objectors rather than “endeavoring to address” them.

Brent asserts that “flexibility in the charter does not require a group to perform any work”, and “a future charter that doesn’t include possible DID Methods will be rejected”, and “the strong demands of WG consensus on its contents do not apply.”

Pierre-Antoine asserts that “Considering the two points above, the chairs’ decision is in fact quite balanced.” and “But I consider (and I believe that others in the group do as well) that plainly ignoring this recommendation from the director amounts to painting a target on our back.”

Not once did Brent, Dan, or Pierre-Antoine acknowledge the merit of the concerns and attempt to find other means to address them.

PR #29, “Remove mention of DID Methods from the charter”, was ostensibly created to seek input from the WG on the simplest alternative to #27. Unfortunately, not only did the chairs fail to make any comments on that PR, but there remain four (4) outstanding change requests from TallTed, kdenhartog, Sakurann, and myself. This PR was never more than a distraction to appear magnanimous without any actual intention to discover a better proposal. If the chairs had actually been exploring PR #29 as a legitimate alternative to PR #27, they would have engaged in the conversation at a minimum and, at best, actually incorporated the suggested changes so the group could reasonably compare and discuss the merits of each approach.

PR #30, “Charter for the DID Resolution WG”, was created in response to a request from TallTed, and was initially supported by Brent Zundel, only to see him reverse course when he understood it was an alternative proposal to the DID WG Charter: “I do not support that course of action. … the place to [propose an alternative] is not here.”

Not once, at any time, did Brent, Pierre-Antoine, or Dan Burnett acknowledge the merit of our concerns. They did not ask objectors how else their concerns might be addressed. Not once did they attempt to address my (and others’) concerns about DID method over-centralization caused by putting DID Methods in scope.

Instead, they simply refused to engage objectors regarding their legitimate matters of concern. There were no new proposals from the chairs. There were no inquiries about the “real motivation” behind our objections in an effort to understand the technical arguments. There weren’t even rebuttals from leadership to the technical arguments made in opposition to DID Methods being in scope. It was clear from ALL of their correspondence that they simply expected that the rest of the WG would defer to their judgment because of the prior Director’s comments.

It is our opinion that, in all their actions, WG Chairs MUST seek consensus, whether the matter is a WG deliverable, a charter, or any other W3C business. That’s the point of the W3C. If chairs can ignore the Process, what good is the Process? If staff can ignore the Process, how are we to trust the integrity of the organization?

Finally, on the matter of the chairs and process, I find Brent Zundel’s assertion in the August presentation to the Credentials Community Group particularly problematic: “In my experience, Formal Objections are noted and are usually overridden”.

In other words, Brent believes that the Process, which describes in great detail what is to happen in response to a Formal Objection, doesn’t usually apply.

The fact is, he may end up being right. Given the improper dismissal of my appeal, the Process, in fact, does not appear to matter when it comes to the DID WG charter.

TECHNICAL FUNDAMENTALS

The primary technical objection to putting DID Methods in scope is a simple conflict of interest. Empowering the DID WG to develop specific DID Methods would result in the group picking winners and losers among the 180+ DID Methods currently known. Not only would that give those methods an unfair advantage in the marketplace, it would affect WG deliberations in two important ways. First, the working group would, by necessity, need to learn the selected methods, placing a massive burden on participants and elevating the techniques of those particular methods to accepted canon, which will inevitably taint the DID Core specification with details based on those techniques. Second, it would require the group to evaluate, debate, and select one or a few of those 180+ methods, which will suck up the available time and energy of the working group, forcing them to work on “other people’s methods” rather than advancing the collective work that all DID Methods depend on. Those who want to pursue DID Methods at the W3C should propose their own charter based on a specific DID method.

Our second technical objection is more prosaic: there are no DID Methods ready for W3C standardization, as evidenced by the blank check in the current charter request. It may be within the bounds of the W3C Process to authorize such an open-ended deliverable, but we believe it is a fundamental problem that the chairs cannot even recommend a specific method for inclusion in the charter. Frankly, this weird hack of not specifying the method and restricting that work to First Public Working Draft (FPWD) status lacks integrity. If it is important for the group to develop a method, name it and make it fully standards track, without restriction. This middle way is a false compromise that will satisfy no one.

INTEROPERABILITY GOALS

A significant failing of the initial DID Core specification was the lack of interoperability between DID Methods. That lack of interoperability was cited as a reason for multiple Formal Objections to the specification. We concur; it’s a problem.

However, the problem was not a result of too many DID Methods, nor would it be resolved by forcing the standardization of one or more exemplars. It was a problem of scope. DID Core was not allowed to pursue protocols, and resolution was intentionally restricted to a mere “contract” rather than a full specification.

Those restrictions meant the WG could NOT achieve the kind of interoperability we are all seeking.

We believe the answer is simple: standardize DID Resolution. Not restricted to FPWD status, but actually create a normative global standard for DID Resolution as a W3C Recommendation.

By defining a common API that all DID Methods can implement to provide an interface for a back-end verifiable data registry, we allow any system that can support those implementations an equal opportunity to interoperate with any other, just like HTTP allows any browser to reach any website, regardless of the back-end technology that does the heavy lifting.

That’s how we get to interoperability.
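
To make the shape of that common API concrete, here is a minimal Python sketch of the pattern (purely illustrative; the function and driver names are my own assumptions, not language from any W3C draft). Each DID Method registers a driver, and callers resolve every DID through the same function:

# Illustrative only: names and return shape are assumptions, not the DID Resolution spec.
from typing import Callable, Dict

DRIVERS: Dict[str, Callable[[str], dict]] = {}  # one driver per DID Method

def register_method(method: str, driver: Callable[[str], dict]) -> None:
    DRIVERS[method] = driver

def resolve(did: str) -> dict:
    # "did:example:123" -> scheme "did", method "example", method-specific id "123"
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did" or method not in DRIVERS:
        raise ValueError(f"Cannot resolve {did}")
    # Callers never need to know how the registry behind the method works.
    return DRIVERS[method](method_specific_id)

# A toy driver for a hypothetical did:example method:
register_method("example", lambda msid: {"id": f"did:example:{msid}", "verificationMethod": []})
print(resolve("did:example:123"))

Any DID Method that can implement the driver interface gets the same treatment, which is the HTTP-and-browsers analogy in code.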

Distracting the group with DID Method specifications would just limit the time and resources the DID WG can bring to bear on solving the real issue of interoperability between methods.

More technical and interoperability details are discussed in our appeal.

RESPONDING TO CRITICISMS

Finally, for those still reading, I’d like to address some criticisms that have been raised about my approach to this problem.

First, the assertion that this debate is going to end up in AC Review, so we should have it there and not in the WG.

The debate was not predestined to go to AC review. Like all objections under Process 2023, it should have been resolved in a council. Staff inappropriately accelerated the charter to AC Review despite an ongoing objection and requests by the objector to delay advancement until the matter could be resolved.

Second, the AC is not the appropriate place for a working group to work out its internal disagreements. The AC is the place where other member organizations have a chance to review and comment on proposed work done by the organization as a whole. This is a fundamental principle that, in other contexts, is known as federalism. It isn’t the job of the AC to tell every WG what to do. It’s the job of the AC to vet and confirm the work actually done by the WG. For staff and chairs to deny due process to my objection denies the AC the ability to review the best proposal from the Working Group, and instead forces the AC into a meaningless debate that will absolutely result in a council.

To reflect Philippe’s argument back, since the underlying objection is going to be decided in a council anyway, why didn’t we start there? Especially since that is what process requires.

Third, Philippe quoted the following Process section in his discussion of my appeal, while ignoring the first part of the sentence.

“When the Chair believes that the Group has duly considered the legitimate concerns of dissenters as far as is possible and reasonable, the group SHOULD move on.” [emphasis mine]

The group did NOT duly consider the legitimate concerns of objectors. Instead, the chairs intentionally avoided any substantive discussion of the topic. There is no evidence at all that the chairs ever considered the legitimate concerns of objectors. They dismissed them, ignored them, and argued that the WG’s GitHub is not the place to have this debate, the AC is.

So, because Staff and the chairs believed it was ultimately going to AC Review, they collectively decided to ignore their obligations: of Chairs to seek consensus within the WG and of Staff to seek consensus through a W3C Council after my appeal.

Finally, I’ll note that my disagreement has been described by one of the chairs as “Vociferous”, “Haranguing”, and “Attacking”. I find this characterization to be inappropriate and itself a CEPC failure. “When conflicts arise, we are expected to resolve them maintaining that courtesy, respect, and dignity, even when emotions are heightened.”

The civil exercise of one’s right to challenge authority is fundamental to a free society and vital to any collaborative institution. To be attacked because I chose to engage in conversations that the chairs were trying to avoid is inappropriate. To be attacked because I filed an appeal, as clearly allowed in the Process, is inappropriate. To attack those who disagree with you is neither collaborative, nor is it an effective mechanism to seek consensus.

SUMMARY

This charter never should have made it to the AC. It unfairly hijacks the DID WG name and its work, without the consent of the current DID WG. Worse, it does so in a way that fundamentally undermines the decentralized nature of Decentralized Identifiers.

This charter should be rejected and returned to the DID WG to find a proposal with weaker objections, one that represents the collective will of the working group and legitimately continues the work in which we all have invested our professional careers.


Simon Willison

bpy - Blender on PyPI


bpy - Blender on PyPI

TIL you can "pip install" Blender!

bpy "provides Blender as a Python module" - it's part of the official Blender project, and ships with binary wheels ranging in size from 168MB to 319MB depending on your platform.

It only supports the version of Python used by the current Blender release though - right now that's Python 3.10.

Via @kolibril13


Jon Udell

How LLMs teach you things you didn’t know you didn’t know


Here’s #9 in the new series on LLM-assisted coding over at The New Stack:
Learning While Coding: How LLMs Teach You Implicitly

LLMs can deliver just-in-time knowledge tailored to real programming tasks; it’s a great way to learn about coding idioms and libraries.

As I mentioned on Mastodon, I know we are in a hype cycle, and I’m trying to report these findings in a quiet and matter-of-fact way. But when Greg Lloyd played this quote back to me, I got excited all over again.

This is the kind of tacit knowledge transfer that can happen when you work with another person, you don’t explicitly ask a question, and your partner doesn’t explicitly answer it. The knowledge just surfaces organically, and transfers by osmosis.

I’m certain this augmented way of learning will carry forward in some form, and improve the learning experience in other domains too.

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify


Mike Jones: self-issued

OAuth 2.0 Demonstrating Proof of Possession (DPoP) is now RFC 9449


The OAuth 2.0 Demonstrating Proof of Possession (DPoP) specification has been published as RFC 9449! As Vittorio Bertocci wrote, “One of the specs with the highest potential for (positive) impact in recent years.” I couldn’t agree more!

The concise abstract says it all:

This document describes a mechanism for sender-constraining OAuth 2.0 tokens via a proof-of-possession mechanism on the application level. This mechanism allows for the detection of replay attacks with access and refresh tokens.

As I described in my 2022 Identiverse presentation on DPoP, it’s been a Long and Winding Road to get here. Efforts at providing practical proof of possession protection for tokens have included:

- SAML 2.0 Holder-of-Key Assertion Profile – Not exactly OAuth
- OAuth 1.0 used PoP – But message signing too complicated
- OAuth 2.0 MAC draft – Used similarly complicated signing
- OAuth 2.0 HTTP Signing draft – Abandoned due to complexity
- TLS Token Binding – Some browsers declined to ship it
- OAuth 2.0 Mutual TLS – Client certs notoriously difficult to use
- OAuth 2.0 DPoP – Today’s RFC aimed at simply and practically solving this important problem

As they say, I think this one’s the one! Implement, deploy, and enjoy!
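
For readers who haven't seen a DPoP proof yet, here is a minimal sketch of constructing one in Python with PyJWT and cryptography. The claim names (htm, htu, iat, jti) and the dpop+jwt type come from the RFC; the key choice and URL are just illustrative, and real clients also add an ath claim when presenting an access token to a resource server:

# Minimal, illustrative DPoP proof construction (not production code).
import json, time, uuid
import jwt  # PyJWT
from jwt.algorithms import ECAlgorithm
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
public_jwk = json.loads(ECAlgorithm.to_jwk(private_key.public_key()))

proof = jwt.encode(
    {
        "htm": "POST",                              # HTTP method being protected
        "htu": "https://server.example.com/token",  # target URI (example value)
        "iat": int(time.time()),
        "jti": str(uuid.uuid4()),                   # unique ID so servers can detect replays
    },
    private_key,
    algorithm="ES256",
    headers={"typ": "dpop+jwt", "jwk": public_jwk},
)
# The proof is then sent as a "DPoP" HTTP header alongside the token request.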

Thursday, 07. September 2023

Mike Jones: self-issued

Adoption Time! And Lessons Learned…


I’ve had two different IETF specifications adopted by two different working groups in the last two days – a pleasant coincidence! Yesterday, the COSE “typ” (type) Header Parameter specification was adopted by the COSE working group. Today, the OAuth 2.0 Protected Resource Metadata specification was adopted by the OAuth working group. Their journeys from individual drafts to working group drafts couldn’t have been more different!

As I was musing with Phil Hunt, who wrote the original individual draft of OAuth 2.0 Protected Resource Metadata with me, I’m pretty sure that this is the longest time from writing an individual draft to it becoming a working group draft in my experience: August 3, 2016 to September 6, 2023 – seven years and a month!

Whereas, the time from the individual draft of COSE “typ” (type) Header Parameter to the first working group draft was only three months: July 8, 2023 to September 5, 2023. Which got me thinking… Is that the fastest progression I’ve had?

It turns out that my fastest time from individual draft to working group draft was for the JWK Thumbprint URI specification which I wrote with Kristina Yasuda. It went from individual draft to working group draft in only two months: November 24, 2021 to January 28, 2022. (And it became RFC 9278 on August 9, 2022 – less than nine months from start to finish, which I believe is also a personal record.)

Ironically, while OAuth 2.0 Protected Resource Metadata took over seven years from individual to working group drafts, a closely-related draft, OAuth 2.0 Discovery (which became RFC 8414) was previously my fastest from individual draft to working group draft: 2.5 months! (The journey to becoming an RFC took 2.5 years.)

The other relative speed demon was Proof-Of-Possession Semantics for JSON Web Tokens (JWTs): 3.5 months from individual draft to working group draft and two years from start to RFC 7800.

What are my takeaways from all these musings about starting things?

- Starting things is something to celebrate. It’s a creative process to go from an idea to something concrete and useful. But as my COSE co-chair Ivaylo Petrov wrote, “We would also like to remind you that adoption does not mean a document is finished, only that it is an acceptable starting point.”
- Perseverance is essential. Progressing things can take dedication and persistence. My most-referenced specification, JSON Web Token (JWT) – RFC 7519, referenced from 45 RFCs, took 4.5 years.
- Focused specifications that do one thing well can progress quickly. Proof-Of-Possession Semantics for JSON Web Tokens (JWTs) – RFC 7800 and JWK Thumbprint URI – RFC 9278 are prime examples. I’m hoping that COSE “typ” (type) Header Parameter will be one of these – a sentiment I shared with co-author Orie Steele.
- Finishing things matters. That speaks for itself, but it’s sometimes easier said than done. Finished things get used!

Wednesday, 06. September 2023

Simon Willison

hubcap.php


hubcap.php

This PHP script by Dave Hulbert delights me. It's 24 lines of code that takes a specified goal, then calls my LLM utility on a loop to request the next shell command to execute in order to reach that goal... and pipes the output straight into exec() after a 3s wait so the user can panic and hit Ctrl+C if it's about to do something dangerous!

Via Amazingly Alarming Autonomous AI Agents
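
The pattern is simple enough to sketch in a few lines of Python (my own rough equivalent, not Dave's actual script), which also makes it obvious that the 3 second pause is the only safety net:

# Rough sketch of the hubcap idea. The exec step is commented out on purpose.
import subprocess, time

goal = "count the Python files in this directory"   # example goal
history = ""

for _ in range(5):
    prompt = (
        f"Goal: {goal}\nOutput so far:\n{history}\n"
        "Reply with the next shell command to run, and nothing else."
    )
    command = subprocess.run(["llm", prompt], capture_output=True, text=True).stdout.strip()
    print(f"About to run: {command}  (Ctrl+C now if that looks dangerous)")
    time.sleep(3)
    # history += subprocess.run(command, shell=True, capture_output=True, text=True).stdout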


Using ChatGPT Code Interpreter (aka "Advanced Data Analysis") to analyze your ChatGPT history


Using ChatGPT Code Interpreter (aka "Advanced Data Analysis") to analyze your ChatGPT history

I posted a short thread showing how to upload your ChatGPT history to ChatGPT itself, then prompt it with "Build a dataframe of the id, title, create_time properties from the conversations.json JSON array of objects. Convert create_time to a date and plot it daily".
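
The code it writes for that prompt boils down to a few lines of pandas. A rough equivalent (illustrative only; it assumes create_time is a Unix timestamp):

# Approximate pandas equivalent of the prompt above (not ChatGPT's exact output).
import json
import pandas as pd

with open("conversations.json") as f:
    conversations = json.load(f)

df = pd.DataFrame(
    [{"id": c["id"], "title": c["title"], "create_time": c["create_time"]} for c in conversations]
)
df["date"] = pd.to_datetime(df["create_time"], unit="s").dt.date
df.groupby("date").size().plot(kind="bar", title="Conversations per day")  # needs matplotlib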


Perplexity: interactive LLM visualization


Perplexity: interactive LLM visualization

I linked to a video of Linus Lee's GPT visualization tool the other day. Today he's released a new version of it that people can actually play with: it runs entirely in a browser, powered by a 120MB version of the GPT-2 ONNX model loaded using the brilliant Transformers.js JavaScript library.

Via @thesephist


Mike Jones: self-issued

Multiformats Considered Harmful


While I usually reserve my time and energy for advancing good ideas, I’m making an exception to publicly state the reasons why I believe “multiformats” should not be considered for standardization by the IETF.

1. Multiformats institutionalize the failure to make a choice, which is the opposite of what good standards do. Good standards make choices about representations of data structures resulting in interoperability, since every conforming implementation uses the same representation. In contrast, multiformats enable different implementations to use a multiplicity of different representations for the same data, harming interoperability. https://datatracker.ietf.org/doc/html/draft-multiformats-multibase-03#appendix-D.1 defines 23 equivalent and non-interoperable representations for the same data!

2. The stated purpose of “multibase” is “Unfortunately, it’s not always clear what base encoding is used; that’s where this specification comes in. It answers the question: Given data ‘d’ encoded into text ‘s’, what base is it encoded with?”, which is wholly unnecessary. Successful standards DEFINE what encoding is used where. For instance, https://www.rfc-editor.org/rfc/rfc7518.html#section-6.2.1.2 defines that “x” is base64url encoded. No guesswork or prefixing is necessary or useful.

3. Standardization of multiformats would result in unnecessary and unhelpful duplication of functionality – especially of key representations. The primary use of multiformats is for “publicKeyMultibase” – a representation of public keys that are byte arrays. For instance, the only use of multiformats by the W3C DID spec is for publicKeyMultibase. The IETF already has several perfectly good key representations, including X.509, JSON Web Key (JWK), and COSE_Key. There’s not a compelling case for another one.

4. publicKeyMultibase can only represent a subset of the key types used in practice. Representing many kinds of keys requires multiple values – for instance, RSA keys require both an exponent and a modulus. By comparison, the X.509, JWK, and COSE_Key formats are flexible enough to represent all kinds of keys. It makes little to no sense to standardize a key format that limits implementations to only certain kinds of keys.

5. The “multihash” specification relies on a non-standard representation of integers called “Dwarf”. Indeed, the referenced Dwarf document lists itself as being at http://dwarf.freestandards.org/ – a URL that no longer exists!

6. The “Multihash Identifier Registry” at https://www.ietf.org/archive/id/draft-multiformats-multihash-07.html#mh-registry duplicates the functionality of the IANA “Named Information Hash Algorithm Registry” at https://www.iana.org/assignments/named-information/named-information.xhtml#hash-alg, in that both assign (different) numeric identifiers for hash functions. If multihash goes forward, it should use the existing registry.

7. It’s concerning that the draft charter states that “Changing current Multiformat header assignments in a way that breaks backward compatibility with production deployments” is out of scope. Normally IETF working groups are given free rein to make improvements during the standardization process.

8. Finally, as a member of the W3C DID and W3C Verifiable Credentials working groups, I will state that it is misleading for the draft charter to say that “The outputs from this Working Group are currently being used by … the W3C Verifiable Credentials Working Group, W3C Decentralized Identifiers Working Group…”. The documents produced by these working groups intentionally contain no normative references to multiformats or any data structures derived from them. Where they are referenced, it is explicitly stated that the references are non-normative.
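
A small Python sketch of points 2 and 4 above (my own illustration, not from the post; the JWK parameter values are placeholders, not real keys):

import base64, os

# Point 2: RFC 7518 defines that "x" is base64url encoded, so no prefix byte is needed.
x = os.urandom(32)                                   # stand-in for a raw public key
encoded = base64.urlsafe_b64encode(x).rstrip(b"=")   # base64url, unpadded, no multibase prefix
decoded = base64.urlsafe_b64decode(encoded + b"=" * (-len(encoded) % 4))
assert decoded == x

# Point 4: a JWK is a labelled set of parameters, so multi-part keys fit naturally.
rsa_jwk = {"kty": "RSA", "n": "<base64url modulus>", "e": "AQAB"}              # two values
ed25519_jwk = {"kty": "OKP", "crv": "Ed25519", "x": "<base64url public key>"}  # one value
# A single byte-array format like publicKeyMultibase has nowhere to put "n" and "e" separately.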

Tuesday, 05. September 2023

Simon Willison

Symbex 1.4


Symbex 1.4

New release of my Symbex tool for finding symbols (functions, methods and classes) in a Python codebase. Symbex can now output matching symbols in JSON, CSV or TSV in addition to plain text.

I designed this feature for compatibility with the new "llm embed-multi" command - so you can now use Symbex to find every Python function in a nested directory and then pipe them to LLM to calculate embeddings for every one of them.

I tried it on my projects directory and embedded over 13,000 functions in just a few minutes! Next step is to figure out what kind of interesting things I can do with all of those embeddings.


Talking Identity

Let’s Hope It Works *This* Time


Well, this is a big one for the identity industry. Two stalwarts becoming one.

>> Thoma Bravo Completes Acquisition of ForgeRock; Combines ForgeRock into Ping Identity

As someone who was there and in the thick of it during the last big merger of identity players, I wish all my (too many to tag) friends at ForgeRock and Ping Identity all the best, good luck, and a strong stomach.

Combining two product suites that have this much history and strength isn’t easy. There will be difficult decisions and even more difficult conversations ahead. The key is to have strong leadership with a clear vision for the future and a relentless commitment to helping your customers. Few would be up to that task, but Andre Durand is one of the few in this world who could.

Cheers to all of you, buckle up, and enjoy the ride!


Wrench in the Gears

Eulogy For My Dad Jerry Lee Hawver – The Man Who Shaped The Woman I Am Today


My dad left behind his dementia and this world last Friday, September 1, 2023. I was with him through the night and into the morning when he passed. I love him so much. This is my eulogy for him that I’ll be reading on Saturday.

What I will always remember about my dad was his hands, big hands, a simple gold band on the left index finger symbolizing his commitment to family. They embodied power and tenderness in equal measure. Jerry was born a big bouncing October baby with a larger-than-life personality that persisted to the end, even as he navigated the pernicious fog of dementia. The twinkle in his eyes showed us he was still there inside a body that could barely contain his expansive spirit. He used his stature to stand up for the underdog, to become a star athlete, and take care of whatever business needed addressing. Despite a challenging childhood, he was a hard worker who weeded potato fields as a kid and later helped manage a small grocery store. He spent his earnings not on himself, but looking after his mother, Viola, and four sisters, Sandra, Gail, Diane, and Carol. Then, when he met my mom, he looked after the three of us as a successful district sales manager for Procter and Gamble. It was comforting to know someone like my dad was in your corner. I miss that even though I know he’s looking down on us from heaven.

The day my dad died, his hands were even larger than usual, and mine felt so small. I have my mother’s hands, short stumpy fingers. Instead of him holding my hand, I was holding his, caressing him and trying to lend comfort on his passage. I remember the strongly gripped handshakes he used to give and the “pull-my-finger” tricks, those hands scratching the furry heads of our dogs Phoebe, Bridget, Molly, and Lucy. His hands mowed grass and held poker hands and a kindle full of western novels. When he put away the footballs and basketballs of his youth, those hands picked up fishing poles and tennis rackets and golf clubs. In his retirement they held doll clothes and puzzle pieces as he played with his grandchildren and beach chairs and umbrellas for my mom for those weeks at Isle of Palms and hoses to water backyard flowers and tomato plants. Those hands carried heaping plates of steaming bacon for the church ministry and steadied me as I learned to ride a two-wheel bike.

 

My dad was my softball coach when I was in grade school. Those hands used to try and burn me out in games of catch. He used to have me turn my back and he would throw the ball high up in the air and then wait a second and then have me turn around and find it and catch it. I was shy and somewhat of an introvert, but he believed in me. My dad taught me to be tough, and he gave me confidence in myself, which is something I really need right now as I face my own personal struggles. I went from being a timid right fielder to a competent second baseman, and while I was never a slugger, I learned how to bunt. Sometimes winning doesn’t require power, but finesse, and that was an important life lesson.

Jerry followed the American dream, taking his family across the country on a salary underpinned by Mrs. Olsen and a good cup of Folger’s coffee. He and my mom traveled from Tulsa to Fort Worth and Louisville before making their final landing here in Charlotte. My dad gave us stability, comfortable houses in the suburbs with quality schools and good neighbors. He wanted the best for us, always. My father was a skilled businessman. He loved people, brought out the best in his employees, and could have risen higher up the ranks of P&G, but he never wanted to travel that much. He was a homebody, and he wanted to stay close to us. After I left home and came back to visit, I enjoyed our mornings together talking over coffee. He often drank two pots. Our politics differed, but that didn’t matter. I appreciated his perspectives, and even though we may have seen things through different lenses, I always knew my father’s heart was in the right place. As I got older, I realized that while the media might have told me we were in different camps, we weren’t, we were cut from the same cloth. I will always be my father’s daughter.

 

Jerry Hawver worked hard and played hard. He was stubborn and joyful. He got things done, right up to the end, even when the “jobs” he took on involved taking apart the cable box and other hardware at the assisted living center to the bemused consternation of the staff. My dad liked to have everything in order. He liked to stay busy. He loved his wife and children and grandchildren. I regret that distance kept us apart at the end of his life, but I am grateful to have glimpses into the wonders of elder Jerry captured in the pictures and videos of his guardian angel, my sister-in-law Lisa who visited him every day after work. Through her efforts I am able to see how many lessons our loved ones have for us even as they move away from the confinement of this earthly realm. I will treasure the glimpses given to me of “Eat Monster” Grandpa Jerry holding forth from his hospital bed with kind bemusement and a tender touch or snappy remark for those at his bedside. He was a character with huge hands and a huge heart. His care and charisma will be missed.

Today, we picture him in heaven making unlimited pots of coffee, straightening the papers, gathering up unattended drinking glasses, crooning the oldies, cracking jokes, and sneaking candy – peanut M&Ms and Cherry Mash. I’ll close with the lullaby he used to sing me as a little girl, from the movie Tammy with Debbie Reynolds. Pardon the poor quality of my voice, but this is my tribute.

I hear the cottonwoods whispering above

Tammy, Tammy, Tammy’s in love

The old hootie owl hootie-hoot’s to the dove

Tammy, Tammy, Tammy’s in love.

 

Does my lover feel what I feel when he comes near?

My heart beats so joyfully

You’d think that he could hear

Wish I knew if he knew what I’m dreaming of

Tammy, Tammy, Tammy’s in love

 

Whippoorwill, whippoorwill, you and I know

Tammy, Tammy can’t let him go

The breeze from the bayou keeps yearning along

Tammy, Tammy, you love him so

 

When the night is warm, soft and warm

I long for his charms

I’d sing like a violin

If I were in his arms

 

Wish I knew if he knew what I’m dreaming of

Tammy, Tammy, Tammy’s in love

 

I sang it to him the night he died. Ally loves you and misses you dad.

 

 


Simon Willison

A token-wise likelihood visualizer for GPT-2


A token-wise likelihood visualizer for GPT-2

Linus Lee built a superb visualization to help demonstrate how Large Language Models work, in the form of a video essay where each word is coloured to show how "surprising" it is to the model. It's worth carefully reading the text in the video as each term is highlighted to get the full effect.


Werdmüller on Medium

Building a wide news commons


Let’s support tightly-focused, independent newsrooms.

Continue reading on Medium »

Monday, 04. September 2023

Simon Willison

Wikipedia search-by-vibes through millions of pages offline


Wikipedia search-by-vibes through millions of pages offline

Really cool demo by Lee Butterman, who built embeddings of 2 million Wikipedia pages and figured out how to serve them directly to the browser, where they are used to implement "vibes based" similarity search returning results in 250ms. Lots of interesting details about how he pulled this off, using Arrow as the file format and ONNX to run the model in the browser.

Via @leebutterman


LLM now provides tools for working with embeddings


LLM is my Python library and command-line tool for working with language models. I just released LLM 0.9 with a new set of features that extend LLM to provide tools for working with embeddings.

This is a long post with a lot of theory and background. If you already know what embeddings are, here's a TLDR you can try out straight away:

# Install LLM
pip install llm
# If you already installed via Homebrew/pipx you can upgrade like this:
llm install -U llm

# Install the llm-sentence-transformers plugin
llm install llm-sentence-transformers

# Install the all-MiniLM-L6-v2 embedding model
llm sentence-transformers register all-MiniLM-L6-v2

# Generate and store embeddings for every README.md in your home directory, recursively
llm embed-multi readmes \
  --model sentence-transformers/all-MiniLM-L6-v2 \
  --files ~/ '**/README.md'
# Add --store to store the text content as well

# Run a similarity search for "sqlite" against those embeddings
llm similar readmes -c sqlite

For everyone else, read on and the above example should hopefully all make sense.

Embeddings

Embeddings are a fascinating concept within the larger world of language models.

I explained embeddings in my recent talk, Making Large Language Models work for you. The relevant section of the slides and transcript is here, or you can jump to that section on YouTube.

An embedding model lets you take a string of text - a word, sentence, paragraph or even a whole document - and turn that into an array of floating point numbers called an embedding vector.

A model will always produce the same length of array - 1,536 numbers for the OpenAI embedding model, 384 for all-MiniLM-L6-v2 - but the array itself is inscrutable. What are you meant to do with it?

The answer is that you can compare them. I like to think of an embedding vector as a location in 1,536-dimensional space. The distance between two vectors is a measure of how semantically similar they are in meaning, at least according to the model that produced them.

"One happy dog" and "A playful hound" will end up close together, even though they don't share any keywords. The embedding vector represents the language model's interpretation of the meaning of the text.

Things you can do with embeddings include:

- Find related items. I use this on my TIL site to display related articles, as described in Storing and serving related documents with openai-to-sqlite and embeddings.
- Build semantic search. As shown above, an embeddings-based search engine can find content relevant to the user's search term even if none of the keywords match.
- Implement retrieval augmented generation - the trick where you take a user's question, find relevant documentation in your own corpus and use that to get an LLM to spit out an answer. More on that here.
- Clustering: you can find clusters of nearby items and identify patterns in a corpus of documents.
- Classification: calculate the embedding of a piece of text and compare it to pre-calculated "average" embeddings for different categories.

LLM's new embedding features

My goal with LLM is to provide a plugin-driven abstraction around a growing collection of language models. I want to make installing, using and comparing these models as easy as possible.

The new release adds several command-line tools for working with embeddings, plus a new Python API for working with embeddings in your own code.

It also adds support for installing additional embedding models via plugins. I've released one plugin for this so far: llm-sentence-transformers, which adds support for new models based on the sentence-transformers library.

The example above shows how to use sentence-transformers. LLM also supports API-driven access to the OpenAI ada-002 model.

Here's how to embed some text using ada-002, assuming you have installed LLM already:

# Set your OpenAI API key
llm keys set openai
# <paste key here>

# Embed some text
llm embed -m ada-002 -c "Hello world"

This will output a huge JSON list of floating point numbers to your terminal. You can add -f base64 (or -f hex) to get that back in a different format, though none of these outputs are instantly useful.

Embeddings are much more interesting when you store them.

LLM already uses SQLite to store prompts and responses. It was a natural fit to use SQLite to store embeddings as well.

Embedding collections

LLM 0.9 introduces the concept of a collection of embeddings. A collection has a name - like readmes - and contains a set of embeddings, each of which has an ID and an embedding vector.

All of the embeddings in a collection are generated by the same model, to ensure they can be compared with each other.

The llm embed command can store the vector in the database instead of returning it to the console. Pass it the name of an existing (or to-be-created) collection and the ID to use to store the embedding.

Here we'll store the embedding for the phrase "Hello world" in a collection called phrases with the ID hello, using that ada-002 embedding model:

llm embed phrases hello -m ada-002 -c "Hello world"

Future phrases can be added without needing to specify the model again, since it is remembered by the collection:

llm embed phrases goodbye -c "Goodbye world"

The llm embed-db collections command shows a list of collections:

phrases: ada-002
  2 embeddings
readmes: sentence-transformers/all-MiniLM-L6-v2
  16796 embeddings

The data is stored in a SQLite embeddings table with the following schema:

CREATE TABLE [collections] (
   [id] INTEGER PRIMARY KEY,
   [name] TEXT,
   [model] TEXT
);
CREATE TABLE "embeddings" (
   [collection_id] INTEGER REFERENCES [collections]([id]),
   [id] TEXT,
   [embedding] BLOB,
   [content] TEXT,
   [content_hash] BLOB,
   [metadata] TEXT,
   [updated] INTEGER,
   PRIMARY KEY ([collection_id], [id])
);
CREATE UNIQUE INDEX [idx_collections_name] ON [collections] ([name]);
CREATE INDEX [idx_embeddings_content_hash] ON [embeddings] ([content_hash]);

By default this is the SQLite database at the location revealed by llm embed-db path, but you can pass --database my-embeddings.db to various LLM commands to use a different database.

Each embedding vector is stored as a binary BLOB in the embedding column, consisting of those floating point numbers packed together as 32 bit floats.
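
If you want to poke at those BLOBs yourself, the packing is easy to reproduce with the struct module. A sketch (I'm assuming little-endian byte order here; the library's own llm.encode()/llm.decode() helpers, shown later, are the supported way to do this):

# Packing a vector of floats into a BLOB of 32-bit floats, and back again.
import struct

vector = [0.1, 0.2, 0.3]
blob = struct.pack("<" + "f" * len(vector), *vector)          # 4 bytes per float
assert len(blob) == 4 * len(vector)
unpacked = list(struct.unpack("<" + "f" * (len(blob) // 4), blob))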

The content_hash column contains a MD5 hash of the content. This helps avoid re-calculating the embedding (which can cost actual money for API-based embedding models like ada-002) unless the content has changed.

The content column is usually null, but can contain a copy of the original text content if you pass the --store option to the llm embed command.

metadata can contain a JSON object with metadata, if you pass --metadata '{"json": "goes here"}'.

You don't have to pass content using -c - you can instead pass a file path using the -i/--input option:

llm embed docs llm-setup -m ada-002 -i llm/docs/setup.md

Or pipe things to standard input like this:

cat llm/docs/setup.md | llm embed docs llm-setup -m ada-002 -i -

Embedding similarity search

Once you've built a collection, you can search for similar embeddings using the llm similar command.

The -c "term" option will embed the text you pass in using the embedding model for the collection and use that as the comparison vector:

llm similar readmes -c sqlite

You can also pass the ID of an object in that collection to use that embedding instead. This gets you related documents, for example:

llm similar readmes sqlite-utils/README.md

The output from this command is currently newline-delimited JSON.

Embedding in bulk

The llm embed command embeds a single string at a time. llm embed-multi is much more powerful: you can feed a CSV or JSON file, a SQLite database or even have it read from a directory of files in order to embed multiple items at once.

Many embeddings models are optimized for batch operations, so embedding multiple items at a time can provide a significant speed boost.

The embed-multi command is described in detail in the documentation. Here are a couple of fun things you can do with it.

First, I'm going to create embeddings for every single one of my Apple Notes.

My apple-notes-to-sqlite tool can export Apple Notes to a SQLite database. I'll run that first:

apple-notes-to-sqlite notes.db

This took quite a while to run on my machine and generated an 828M SQLite database containing 6,462 records!

Next, I'm going to embed the content of all of those notes using the sentence-transformers/all-MiniLM-L6-v2 model:

llm embed-multi notes \
  -d notes.db \
  --sql 'select id, title, body from notes' \
  -m sentence-transformers/all-MiniLM-L6-v2

This took around 15 minutes to run, and increased the size of my database by 13MB.

The --sql option here specifies a SQL query. The first column must be an id, then any subsequent columns will be concatenated together to form the content to embed.

In this case the embeddings are written back to the same notes.db database that the content came from.

And now I can run embedding similarity operations against all of my Apple notes!

llm similar notes -d notes.db -c 'ideas for blog posts'

Embedding files in a directory

Let's revisit the example from the top of this post. In this case, I'm using the --files option to search for files on disk and embed each of them:

llm embed-multi readmes \
  --model sentence-transformers/all-MiniLM-L6-v2 \
  --files ~/ '**/README.md'

The --files option takes two arguments: a path to a directory and a pattern to match against filenames. In this case I'm searching my home directory recursively for any files named README.md.

Running this command gives me embeddings for all of my README.md files, which I can then search against like this:

llm similar readmes -c sqlite

Embeddings in Python

So far I've only covered the command-line tools. LLM 0.9 also introduces a new Python API for working with embeddings.

There are two aspects to this. If you just want to embed content and handle the resulting vectors yourself, you can use llm.get_embedding_model():

import llm

# This takes model IDs and aliases defined by plugins:
model = llm.get_embedding_model("sentence-transformers/all-MiniLM-L6-v2")
vector = model.embed("This is text to embed")

vector will then be a Python list of floating point numbers.

You can serialize that to the same binary format that LLM uses like this:

binary_vector = llm.encode(vector)

# And to deserialize:
vector = llm.decode(binary_vector)

The second aspect of the Python API is the llm.Collection class, for working with collections of embeddings. This example code is quoted from the documentation:

import sqlite_utils
import llm

# This collection will use an in-memory database that will be
# discarded when the Python process exits
collection = llm.Collection("entries", model_id="ada-002")

# Or you can persist the database to disk like this:
db = sqlite_utils.Database("my-embeddings.db")
collection = llm.Collection("entries", db, model_id="ada-002")

# You can pass a model directly using model= instead of model_id=
embedding_model = llm.get_embedding_model("ada-002")
collection = llm.Collection("entries", db, model=embedding_model)

# Store a string in the collection with an ID:
collection.embed("hound", "my happy hound")

# Or to store content and extra metadata:
collection.embed(
    "hound",
    "my happy hound",
    metadata={"name": "Hound"},
    store=True
)

# Or embed things in bulk:
collection.embed_multi(
    [
        ("hound", "my happy hound"),
        ("cat", "my dissatisfied cat"),
    ],
    # Add this to store the strings in the content column:
    store=True,
)

As with everything else in LLM, the goal is that anything you can do with the CLI can be done with the Python API, and vice-versa.

Clustering with llm-cluster

Another interesting application of embeddings is that you can use them to cluster content - identifying patterns in a corpus of documents.

I've started exploring this area with a new plugin, called llm-cluster.

You can install it like this:

llm install llm-cluster

Let's create a new collection using data pulled from GitHub. I'm going to import all of the LLM issues from the GitHub API, using my paginate-json tool:

paginate-json 'https://api.github.com/repos/simonw/llm/issues?state=all&filter=all' \
  | jq '[.[] | {id: .id, title: .title}]' \
  | llm embed-multi llm-issues - \
    --database issues.db \
    --model sentence-transformers/all-MiniLM-L6-v2 \
    --store

Running this gives me an issues.db SQLite database with 218 embeddings contained in a collection called llm-issues.

Now let's try out the llm-cluster command, requesting ten clusters from that collection:

llm cluster llm-issues --database issues.db 10

The output from this command, truncated, looks like this:

[
  {
    "id": "0",
    "items": [
      {"id": "1784149135", "content": "Tests fail with pydantic 2"},
      {"id": "1837084995", "content": "Allow for use of Pydantic v1 as well as v2."},
      {"id": "1857942721", "content": "Get tests passing against Pydantic 1"}
    ]
  },
  {
    "id": "1",
    "items": [
      {"id": "1724577618", "content": "Better ways of storing and accessing API keys"},
      {"id": "1772024726", "content": "Support for `-o key value` options such as `temperature`"},
      {"id": "1784111239", "content": "`--key` should be used in place of the environment variable"}
    ]
  },
  {
    "id": "8",
    "items": [
      {"id": "1835739724", "content": "Bump the python-packages group with 1 update"},
      {"id": "1848143453", "content": "Python library support for adding aliases"},
      {"id": "1857268563", "content": "Bump the python-packages group with 1 update"}
    ]
  }
]

These look pretty good! But wouldn't it be neat if we had a snappy title for each one?

The --summary option can provide exactly that, by piping the members of each cluster through a call to another LLM in order to generate a useful summary.

llm cluster llm-issues --database issues.db 10 --summary

This uses gpt-3.5-turbo to generate a summary for each cluster, with this default prompt:

Short, concise title for this cluster of related documents.

The results I got back are pretty good, including:

Template Storage and Management Improvements
Package and Dependency Updates and Improvements
Adding Conversation Mechanism and Tools

I tried the same thing using a Llama 2 model running on my own laptop, with a custom prompt:

llm cluster llm-issues --database issues.db 10 \
  --summary --model mlc-chat-Llama-2-13b-chat-hf-q4f16_1 \
  --prompt 'Concise title for this cluster of related documents, just return the title'

I didn't quite get what I wanted! Llama 2 is proving a lot harder to prompt, so each cluster came back with something that looked like this:

Sure! Here's a concise title for this cluster of related documents:

"Design Improvements for the Neat Prompt System"

This title captures the main theme of the documents, which is to improve the design of the Neat prompt system. It also highlights the focus on improving the system's functionality and usability

llm-cluster only took a few hours to throw together, which I'm seeing as a positive indicator that the LLM library is developing in the right direction.

Future plans

The two future features I'm most excited about are indexing and chunking.

Indexing

The llm similar command and collection.similar() Python method currently use effectively the slowest brute force approach possible: calculate a cosine difference between the input vector and every other embedding in the collection, then sort the results.

This works fine for collections with a few hundred items, but will start to suffer for collections of 100,000 or more.
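
That brute-force pass is short enough to write out. A sketch of the idea with numpy (not LLM's actual implementation):

# Score every stored vector against the query, then sort: O(n) per search.
import numpy as np

def top_n(query, vectors, ids, n=10):
    matrix = np.array(vectors)            # shape: (number of items, dimensions)
    q = np.array(query)
    scores = matrix @ q / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(q))
    best = np.argsort(-scores)[:n]
    return [(ids[i], float(scores[i])) for i in best]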

There are plenty of potential ways of speeding this up: you can run a vector index like FAISS or hnswlib, use a database extension like sqlite-vss or pgvector, or turn to a hosted vector database like Pinecone or Milvus.

With this many potential solutions, the obvious answer for LLM is to address this with plugins.

I'm still thinking through the details, but the core idea is that users should be able to define an index against one or more collections, and LLM will then coordinate updates to that index. These may not happen in real-time - some indexes can be expensive to rebuild, so there are benefits to applying updates in batches.

I experimented with FAISS earlier this year in datasette-faiss. That's likely to be the base for my first implementation.

The embeddings table has an updated timestamp column to support this use-case - so indexers can run against just the items that have changed since the last indexing run.

Follow issue #216 for updates on this feature.

Chunking

When building an embeddings-based search engine, the hardest challenge is deciding how best to "chunk" the documents.

Users will type in short phrases or questions. The embedding for a four word question might not necessarily map closely to the embedding of a thousand word article, even if the article itself should be a good match for that query.

To maximize the chance of returning the most relevant content, we need to be smarter about what we embed.

I'm still trying to get a good feeling for the strategies that make sense here. Some that I've seen include:

- Split a document up into fixed length shorter segments.
- Split into segments but including a ~10% overlap with the previous and next segments, to reduce problems caused by words and sentences being split in a way that disrupts their semantic meaning.
- Splitting by sentence, using NLP techniques.
- Splitting into higher level sections, based on things like document headings.

Then there are more exciting, LLM-driven approaches:

- Generate an LLM summary of a document and embed that.
- Ask an LLM "What questions are answered by the following text?" and then embed each of the resulting questions!

It's possible to try out these different techniques using LLM already: write code that does the splitting, then feed the results to Collection.embed_multi() or llm embed-multi.
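
For example, the first strategy above (fixed-length segments with roughly 10% overlap) only takes a few lines of Python. The embed_multi call in the comment is an assumption about how you would wire it in, based on the description above, not a documented contract:

def chunks(text, size=800, overlap=80):
    # Yield (id, chunk) pairs: fixed-size character windows with ~10% overlap
    step = size - overlap
    for n, start in enumerate(range(0, len(text), step)):
        yield f"chunk-{n}", text[start:start + size]

# Hypothetical wiring into an LLM collection:
# collection.embed_multi(chunks(document_text), store=True)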

But... it would be really cool if LLM could split documents for you - with the splitting techniques themselves defined by plugins, to make it easy to try out new approaches.

Get involved

It should be clear by now that the potential scope of the LLM project is enormous. I'm trying to use plugins to tie together a large and rapidly growing ecosystem of models and techniques into something that's as easy for people to work with and build on as possible.

There are plenty of ways you can help!

Join the #llm Discord to talk about the project.
Try out plugins and run different models with them. There are 12 plugins already, and several of those can be used to run dozens if not hundreds of models (llm-mlc, llm-gpt4all and llm-llama-cpp in particular). I've hardly scratched the surface of these myself, and I'm testing exclusively on Apple Silicon. I'm really keen to learn more about which models work well, which models don't and which perform the best on different hardware.
Try building a plugin for a new model. My dream here is that every significant Large Language Model will have an LLM plugin that makes it easy to install and use.
Build stuff using LLM and let me know what you've built. Nothing fuels an open source project more than stories of cool things people have built with it.

Werdmüller on Medium

I don’t want my software to kill people

Open source licenses fall short of modern needs. Continue reading on Medium »



Simon Willison

A practical guide to deploying Large Language Models Cheap, Good *and* Fast

Joel Kang's extremely comprehensive notes on what he learned trying to run Vicuna-13B-v1.5 on an affordable cloud GPU server (a T4 at $0.615/hour). The space is in so much flux right now - Joel ended up using MLC but the best option could change any minute.

Vicuna 13B quantized to 4-bit integers needed 7.5GB of the T4's 16GB of VRAM, and returned tokens at 20/second.
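
A quick back-of-envelope check (my arithmetic, not Joel's): 20 tokens/second is 72,000 tokens per hour, so at $0.615/hour the raw generation cost works out to roughly $0.0085 per 1,000 tokens, ignoring idle time, prompt processing and everything else that makes real deployments more expensive.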

An open challenge running MLC right now is around batching and concurrency: "I did try making 3 concurrent requests to the endpoint, and while they all stream tokens back and the server doesn’t OOM, the output of all 3 streams seem to actually belong to a single prompt."

Friday, 01. September 2023

Doc Searls Weblog

We Need Wide News


Bloomington, Indiana, my new hometown

How do people get news where you live? How do they remember it?

For most of the industrial age, which is still with us, newspapers answered both those questions—and did so better than any other medium or civic institution. Newspapers were required reading, delivered daily to doorsteps, and sold from places all around town. Old copies also accumulated in libraries and other archives, either as bound volumes or in microfilm reels and microfiche cards. News also came from radio and TV stations, though both did far less archiving, and none were as broad and deep in what they covered and how. Newspapers alone produced deep news.

And wide news as well. Local and regional papers covered politics, government, crises, disasters, sports, fashion, travel, business, religion, births, deaths, schools, and happenings of all kinds. They had reporters assigned across all their sections. No other medium could go as wide.

After the Internet showed up in the mid-’90s, however, people also began getting news from each other, through email, blogs, texting, online-only publications, and social media. To keep up and participate, newspapers, magazines, and other legacy print media built websites and began to publish online. Broadcast media began to stream online too. But the encompassing trend was the digitization of everything and everyone. Consumers became producers. Every person with a computer or a phone was equipped to become a reporter, a photographer, a videographer, or a podcaster. (In the 24 September 2004 issue of IT Garage, I reported that a Google search for podcasts got 24 results. Today it gets 3.84 billion.)

In the midst of all this, the local and regional newspaper business collapsed. The New York Times, Wall Street Journal, and Washington Post found ways to survive and continue to thrive. Many other major papers are getting along but none are what they were. Nor can they be. Today most local papers are gone or shrunk to tiny fractions of their former selves.  Countless local commercial radio stations are now owned by national chains, fed “content” from elsewhere, and maintain minimized or absent local staff. Public radio has survived mostly because it learned long ago how to thrive on listener contributions, bequests, and institutional support. TV news is still alive, but also competing with millions of other sources of video content. None of its coverage is as wide as newspapers were in their long prime.

Another reason for the decline of local news media is economic. Craigslist and its imitators killed newspapers’ classified sections, which had been a big source of income. Advertisers abandoned the practice of targeting whole populations interested in sports, business, fashion, entertainment, and other subjects. With digital media, advertisers can target tracked individuals. As I put it in Separating Advertising’s Wheat and Chaff, “Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.” That replica cares not a whit for supporting journalism of any kind.

Eyeballs and eardrums were also pulled toward direct response marketing by algorithms rigged to increase engagement. A collateral effect was pulling individual interests into affinity groups that grew tribal as they became echo chambers favoring the voices that excelled at eliciting emotional responses. Naturally, media specialized for feeding tribal interests emerged, obsolescing media that worked to cross partisan divides—such as old-fashioned newspapers. (In the old days, papers with a bend to the left or the right isolated partisanship to their opinion pages.) Talk radio and cable news became entirely partisan operations.

So, by the time Scott Adams (of Dilbert fame) said “Facts don’t matter. What matters is how much we hate the person talking” (March 13, 2022), what Yeats poetized in The Second Coming seemed fulfilled:

Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere
The ceremony of innocence is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.

And yet, as James Fallows told Joyce and me a couple of months ago, if you talk to people in small towns about anything but politics, they’re just fine. Moreover, they still work together and get things done. (Jim and Deb gained this wisdom while researching their book and movie, both titled Our Towns.)

Towns do have their fault lines, but people everywhere are held together by their natural need for the conveniences that arise out of shared necessity—for markets, medical help, education, public spaces, and each other. They also need good information about what’s going on where they live. By good I mean the kind of information they used to get before newspapers—and the daily ceremony of innocence newspapers provided—were obsolesced by the Internet.

Back in the mid-’00s, the idea of “citizen journalism” (which went by other labels) first showed up in the writings of Dan Gillmor, J.D. Lasica, Dave Winer, Jeff Jarvis, Jay Rosen, myself, and others. All of us were also concerned about the decline of newspapers. So, in January 2007, after The New York Times sold the Santa Barbara News-Press to a billionaire who fired the staff and made the paper a vehicle for her personal interests, the Center for Information Technology and Society (CITS) at UC Santa Barbara convened a charrette to discuss the future of local newspapers. The title was “Newspaper 2.0.” It was led by yours truly, then (and still) a fellow of the Center. Some of the people mentioned earlier in this paragraph were there, along with exiled News-Press staffers, educators, and other local media, including “place blogs” that were also daily newsletters. Here is a photo series from the event, and the wiki we put together as well. I don’t know to what extent that gathering helped enlarge the degree to which other media made up for the failure of the News-Press (which finally filed for bankruptcy this summer, after a decade and a half of irrelevance). I do know that Santa Barbara is now rich with news sources.

Meanwhile, my full attention is here in Bloomington, Indiana. Our local newspaper, the Herald-Times, is still alive and kicking, but not what it was when giant rolls of paper were delivered by train car weekly to the back of the paper’s building at 1900 South Walnut Street, and it was the source of wide news for the town and the region.

Now it’s e pluribus unum time. There are many other media in town, covering many topics, and I’m not yet clear on how much they comprise a news commons. But, as a visiting scholar (along with Joyce) at Indiana University’s Ostrom Workshop, which is all about studying commons, I want to see if our collection of local news media can become an example of wide news at work, whether we call it a commons or not. From my current notes, here is a quick, partial, and linky list of local media—

Periodicals (including newsletters and websites):

The Herald-Times
B Square Bulletin
The Bloomingtonian
IDS
The Limestone Post
Bloom
The Ryder
Visit Bloomington
Bloomington Online
Hoosleft (also a podcast)

Radio:

WFHB (has local news and podcasts)
WFIU (has local news and podcasts)
WIUX (IU student station, has local stuff)
WGCL (legacy local radio, AM & FM)
WBWB (B97) (pop music, has local news, sister of WHCC)
WTTS (legacy Tarzian FM music station, broadcasting from Trafalgar, on the south side of Indianapolis, with a popular local translator of its HD2 channel called Rock96 The Quarry)
WHCC (local country station, has some local news, sister of WBWB)
WCLS (local album rock station, has a calendar)

TV:

WTTV-4 CBS (licensed to Bloomington)
WTHR-13 NBC
WTIU-33 PBS
WRTV-6 ABC (has local Bloomington news on its website)
WXIN (Fox 59) (has local Bloomington news on its website)
WISH-8 CW

Podcasts

NoDishes
Beyond BTown
Unspoken Requests

Indiana University

IU Media School (Note that every department—Sociology, Law, Sports, Medicine, Music, and many others—makes and reports on local happenings)

Civic Institutions:

Monroe County Public Library
Monroe County History Center
Greater Bloomington Chamber of Commerce

So my idea is to hold a charrette like the one we had in Santa Barbara, to see how those interested in making wide news better can get along. No rush. I just want to put the idea out there and see what happens.

I think one thing that will help is that nobody is trying to do it all anymore. But everybody brings something to the table. Metaphorically speaking, I’d like to put the table there.

Thoughts and ideas are invited. So are corrections and improvements to the above. I see this post, like pretty much everything I write online, as a public draft.

My email is doc at searls dot com.

I shot the photo above on a flight from Indianapolis to Houston after giving this lecture at IU. Here is a whole series on Bloomington from that flight. And here is the rest of the flight as well. All the photos in both are Creative Commons licensed to encourage use and reuse, by anybody. I have about 60,000 photos such as these published online here, and another 5,000 here, all ready for anyone to put in a news story. I bring this up because public photography is one of my small contributions to wide news everywhere. You can see results in countless news stories and at Wikimedia Commons, where photos in Wikipedia come from. I put none of those where they are. Others found them and put them to use.


Foss & Crafts

59: Governance, part 1


Governance of FOSS projects, a two parter, and this is part one! Here we talk about general considerations applicable to FOSS projects! (And heck, these apply to collaborative free culture projects too!)

Links:

Why We Need Code of Conducts, and Why They're Not Enough, by Aeva Black
Blender Cloud and the Blender Development Fund

Thursday, 31. August 2023

Damien Bod

Use multiple Microsoft Entra Verified ID credentials in a verification presentation


This post shows how a Microsoft Entra ID verified employee credential can be used together with a self attestation credential to unlock a door. Using this, a person can prove they know a code and prove their employee status.

Code: https://github.com/swiss-ssi-group/EntraEmployeeUnlockDoor

Get your Verified Employee credential

To use this app, the Microsoft Entra employee credential is required. The following post shows how to set this up, get a credential, and add it to your Microsoft wallet.

Issue Employee verifiable credentials using Entra Verified ID and ASP.NET Core

Issue the self attestation credential

The self attestation credential can be issued using the selfIssued mapping. We require just one claim. This can also be mixed with other claims and other credential types. We do not mix this when issuing the credential as we already have an employee credential with a fixed format.

{ "attestations": { "selfIssued": { "mapping": [ { "outputClaim": "doorCode", "required": true, "inputClaim": "doorCode", "indexed": false } ], "required": true } }, "validityInterval": 2592000, "vc": { "type": [ "DoorCode" ] } }

The credential is mapped into C# using the CredentialsClaims class.

/// <summary>
/// self-issued-attestation
/// </summary>
public class CredentialsClaims
{
    [JsonPropertyName("doorCode")]
    public string? DoorCode { get; set; } = string.Empty;
}

The IssuanceRequestPayload class is used to issue the credentials. The callback is defined and a public URL is required to debug the application. I use ngrok.

public IssuanceRequestPayload GetIssuanceRequestPayload(HttpRequest request)
{
    var payload = new IssuanceRequestPayload();

    payload.CredentialsType = "DoorCode";
    payload.Manifest = $"{_credentialSettings.CredentialManifest}";

    var host = GetRequestHostName(request);
    payload.Callback.State = Guid.NewGuid().ToString();
    payload.Callback.Url = $"{host}/api/issuer/issuanceCallback";
    payload.Callback.Headers.ApiKey = _credentialSettings.VcApiCallbackApiKey;

    payload.Registration.ClientName = "Door Code";
    payload.Authority = _credentialSettings.IssuerAuthority;

    return payload;
}

The two configuration files are used to create the verifiable credential in the Azure portal.

Use the Employee and unlock door credential in a verification

The VerifierRequestPayload class is used to request both the employee credential and the self issued door code credential.

public VerifierRequestPayload GetVerifierRequestPayload(HttpRequest request)
{
    var payload = new VerifierRequestPayload();

    var host = GetRequestHostName(request);
    payload.Callback.State = Guid.NewGuid().ToString();
    payload.Callback.Url = $"{host}/api/verifier/presentationCallback";
    payload.Callback.Headers.ApiKey = _credentialSettings.VcApiCallbackApiKey;

    payload.Registration.ClientName = "VerifiedEmployee";
    payload.Authority = _credentialSettings.VerifierAuthority;

    // First credential
    var requestedCredentials = new RequestedCredentials
    {
        CrendentialsType = "VerifiedEmployee",
        Purpose = "Verified Employee to authenticate your request"
    };
    requestedCredentials.AcceptedIssuers.Add(_credentialSettings.IssuerAuthority);
    payload.RequestedCredentials.Add(requestedCredentials);

    // Second credential
    var requestedCredentialsNdl = new RequestedCredentials
    {
        CrendentialsType = "DoorCode",
        Purpose = "Door code to gain access"
    };
    requestedCredentialsNdl.AcceptedIssuers.Add(_credentialSettings.IssuerAuthority);
    payload.RequestedCredentials.Add(requestedCredentialsNdl);

    return payload;
}

The verification request is presented using a QR Code.

The business logic can be implemented on the verifier application as required.

Notes

The self issued credential allows the value to be set by a user. A new credential can be created every time the code changes.

This can then be connected into whatever business process you require. The credentials can be used cross-domain or multi-tenant. The security is relatively good; the flow is open to phishing attacks but does not rely on passwords.

The Microsoft Entra Verified ID id-tech solution is very good, but not compatible with other SSI solutions. You would require adaptors or cross ledger wallets.

SSI and id-tech solutions will not become a success if interop between services, providers, and wallets is not possible. To compare, this would be like sending an email from Gmail that cannot be opened in Microsoft Outlook, or the other way around.

Links

https://learn.microsoft.com/en-us/azure/active-directory/verifiable-credentials/how-to-use-quickstart-multiple

https://github.com/swiss-ssi-group/AzureADVerifiableCredentialsAspNetCore

https://learn.microsoft.com/en-us/azure/active-directory/verifiable-credentials/decentralized-identifier-overview

https://ssi-start.adnovum.com/data

https://github.com/e-id-admin/public-sandbox-trustinfrastructure/discussions/14

https://openid.net/specs/openid-connect-self-issued-v2-1_0.html

https://identity.foundation/jwt-vc-presentation-profile/

https://learn.microsoft.com/en-us/azure/active-directory/verifiable-credentials/verifiable-credentials-standards

https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet

https://aka.ms/mysecurityinfo

https://fontawesome.com/

https://developer.microsoft.com/en-us/graph/graph-explorer?tenant=damienbodsharepoint.onmicrosoft.com

https://learn.microsoft.com/en-us/graph/api/overview?view=graph-rest-1.0

https://github.com/Azure-Samples/VerifiedEmployeeIssuance

https://github.com/AzureAD/microsoft-identity-web/blob/jmprieur/Graph5/src/Microsoft.Identity.Web.GraphServiceClient/Readme.md#replace-the-nuget-packages

https://docs.microsoft.com/azure/app-service/deploy-github-actions#configure-the-github-secret

https://issueverifiableemployee.azurewebsites.net/

Links eIDAS and EUDI standards

Draft: OAuth 2.0 Attestation-Based Client Authentication
https://datatracker.ietf.org/doc/html/draft-looker-oauth-attestation-based-client-auth-00

Draft: OpenID for Verifiable Presentations
https://openid.net/specs/openid-4-verifiable-presentations-1_0.html

Draft: OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP)
https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop

Draft: OpenID for Verifiable Credential Issuance
https://openid.bitbucket.io/connect/openid-4-verifiable-credential-issuance-1_0.html

Draft: OpenID Connect for Identity Assurance 1.0
https://openid.net/specs/openid-connect-4-identity-assurance-1_0-13.html

Draft: SD-JWT-based Verifiable Credentials (SD-JWT VC)
https://vcstuff.github.io/draft-terbu-sd-jwt-vc/draft-terbu-oauth-sd-jwt-vc.html

Wednesday, 30. August 2023

Simon Willison

excalidraw.com


Really nice browser-based editor for simple diagrams using a pleasing hand-sketched style, with the ability to export them as SVG or PNG.

Via haki on Hacker News


WebLLM supports Llama 2 70B now


The WebLLM project from MLC uses WebGPU to run large language models entirely in the browser. They recently added support for Llama 2, including Llama 2 70B, the largest and most powerful model in that family.

To my astonishment, this worked! I used an M2 Mac with 64GB of RAM and Chrome Canary and it downloaded many GBs of data... but it worked, and spat out tokens at a slow but respectable rate of 3.25 tokens/second.


Llama 2 is about as factually accurate as GPT-4 for summaries and is 30X cheaper


Anyscale offer (cheap, fast) API access to Llama 2, so they're not an unbiased source of information - but I really hope their claim here that Llama 2 70B provides almost equivalent summarization quality to GPT-4 holds up. Summarization is one of my favourite applications of LLMs, partly because it's key to being able to implement Retrieval Augmented Generation against your own documents - where snippets of relevant documents are fed to the model and used to answer a user's question. Having a really high performance openly licensed summarization model is a very big deal.


Rebecca Rachmany

Why I took on the tomi challenge and you should too: DAO expert opinion


This is the first of (I hope) numerous posts that I'll write for tomi, where I'll share my journey as the Project Manager for the first phase of specification writing for the tomiDAO. Actually, I wrote a summary of my Metafest experience representing tomi, and I'll have more to say about that later. But first, I want to start out with a quick introduction of why I've decided to devote my time to this project (other than that they're paying me) and why you should join me (because they'll pay you too, and I can't do it alone).

For those of you who don’t know me, you might want to check out my work at www.daoleadership.com or my general everything website www.gracerachmany.com. For those of you who do know me, you probably think of me as a contrarian who has been in the DAO space for half a decade, but I’ve never strongly associated myself with one project. So why has tomi captured so much of my time and attention?

I first came into contact with tomi when they were writing their whitepaper in September 2022. I knew nobody’s names, but from the outset it was clear that the team is serious about creating an alternative to the censored World Wide Web. But it was also obvious that most of them had no experience with public goods or the commons. Fortunately, around that time, they brought on DAOwl who explained to them that it would be impossible to expect a traditional DAO to set and enforce content moderation policies. “Hoo would want to look at the world’s most obscene content every day?” asked DAOwl. But that was just the start of the rabbit hole.

Who are these tomi people? (Spoiler, I don’t know.)

Every now and again, DAOwl would ask me about some particular part of DAO tooling. What did I think about JokeDAO for ranked voting? What did I think about copyright violations? The conversations were always interesting and insightful. But let’s face it, the tomiDAO itself is a simple yes/no voting mechanism for distributing a pot of tokens. I wasn’t impressed at all until Dr. Nick said to me that this was one of the first DAOs he’d heard of with a working NFT voting implementation. “I’ve been talking about NFT voting for a while,” he said, “but this is the first time I’ve seen it working on a real project.” So it turns out the tomi team had pulled off something innovative after all.

Look, I’m a skeptic. I’ve been in the Web3 space for 5 years. Heck, I’ve been in tech for 35 years. It’s always a good bet that a project will fail, I say, because 98% of them do.

In January, the tomi team asked me to set up a panel for them to discuss the DAO. DAOwl said they refuse to show up in person, so they asked me to moderate and bring in the panelists. Fortunately I got the wonderful Daniel Ospina from RnDAO, Esther Galfalvi from SingularityNET DAO, and Evin McMullen from Disco.xyz, and we had a fabulous time in Marrakesh, including an ATV tour and a hotel with a fabulous spa.

Most importantly, I finally got to meet the tomi team — and some of them even used real names! Apparently the video footage was great, but the videographer didn’t know it was illegal to fly a drone over the fancy hotel where the event was held, and now he and all the footage are in jail. You can’t make this stuff up!

Privacy is rapidly becoming illegal

Which brings us to Tornado Cash and the way in which privacy is under attack by governments worldwide. If you join the tomi Discord or the tomiArmy, it might seem a bit sketchy. Like another one of those “wen moon projects”, but…

Boy, am I sick of open-source revolutionaries with no funds and no marketing and great code that nobody can use. How are underfunded projects like Handshake ever going to take off? They even call themselves an “experiment”. How are we going to have censorship resistance when most of the Ethereum nodes are hosted in countries that have made privacy illegal? How are we going to have public goods when VCs own a big chunk of the project?

The tomi founders, apparently 8 of them, have put their money where their heart is. From what I could surmise from the amount of working code they’ve released, and from the first “Nakamoto Forum”, as they called their conference in Marrakesh, they are a bunch of successful crypto founders who have invested their own money (and a lot of it) to lift this project off the ground. When they launched the token, they simultaneously released the DAO, a testnet browser, an NFT collection, and hardware nodes. Since then, they’ve launched (and spun off) a privacy layer, a DNS NFT auction and marketplace, a marketing “army”, and a staking pool.

Another reason I was impressed with tomi is that they are committed to usability for everyone. The tomiNET is going to be accessible through a normal browser with normal URLs. It’s about time Web3 started to produce something that actually solves a problem for “normies”. So far, everything I’ve seen from tomi points to them making their interfaces intuitive and offering products that can be used without needing blockchain expertise. (Other than storing your private keys, and I expect MPC to solve that in the next year.)

The product team just seems to spew out product after product. Are these products ready for mainstream use? Not at all. Why are the founders putting in so much time and effort? Maybe they want to keep their money private. Or maybe they’ve made enough money that they figure it’s time to give back to the community. Maybe they are putting it in a DAO to avoid legal liability. Who knows?

Here’s what I do know. These guys are absolutely serious about building products and they are absolutely serious about empowering the community through the DAO. They just don’t know how. Neither do I, come to think of it.

Cold start

When tomi asked me to represent them at Metafest, I took up the challenge because, as I said, I think privacy is everyone’s right, and this is the first time I’ve seen such a comprehensive and well-funded project come together. I think that tomi is right that having a native cryptocurrency and governing DAO is necessary for the project’s success.

One of the main takeaways from Metafest was how hard it is to go from cold start (nobody) to a functioning DAO. The DAO is now made up of approximately 500 wallets of tomi Pioneers who purchased the initial NFTs granting them voting rights. In other words, they are the investors in the project, not the users. The tomi team is aware that this should change gradually over time — but they want experts and aligned people to join, which is why they invited me and why I’m inviting you, but it’s still hard. They probably need at least 5 different teams to run these DAOs, and theoretically, the teams should be made up of people all over the world who want an alternative WWW. But right now it’s you, me, and our friends. Meaning that a cold start DAO is not easy.

The tomi opportunity

When I accepted the challenge to approach the tomiDAO specifications documentation, there were two aspects that impressed me. First of all, they asked me to seek out multiple providers for the specifications. I’ll say more about that below, but in the end, they weren’t satisfied with any of the proposals, and they ended up writing their own, which combined elements from two existing proposals while also incorporating several of their own unique innovations. DAOwl said: no offense, I want to work with all the people who made proposals, but we will work on our terms until there is a solid and trusted team that can take over the project.

Even more promising, they recently approved the specification for a self-sovereign identity wallet integrated into their crypto wallet. The winner of the proposal, walt.id, also wrote a very impressive DAO proposal and they are highly respected in the SSI industry. I didn’t know them before tomi, but their credentials checked out and it’s all open source, which I think is important. This shows me that the team has really gone down to the fundamentals of what has caused the WWW to become centralized and exploitative, and they are interested in integrating the essential components that give the project potential to actually transform the internet back into a place where people can have freedom of expression, including freedom to dissent and freedom to develop connections and commerce with anyone, anywhere.

Challenges for the DAO industry as a whole

Another exciting aspect of tomi is that they are grappling with specific instances of generalized problems. Everyone from X (formerly twitter), through Lens Protocol and Handshake, to ENS is dealing with the problems of naming and/or the problem of content moderation. DAO tooling is completely inadequate to touch any of these processes. And frankly, I’m frustrated with DAO solutions looking for problems. I want to start designing a solution based on a specific challenge, and tomi has plenty of them.

What tomi is trying to do is going to put them up against tremendous challenges. Some of them are straightforward, like the fact that the default language of tomi is English, but that’s probably not the default language of the people who are being oppressed and need an alternative internet. Other problems are more complex, such as preventing spam and denial of service attacks from rendering the network useless. Some are legal, for example, if tomi succeeds and its domain name system becomes the one everyone wants to use, they may open themselves up to lawsuits from everyone, from celebrities to corporations and governments.

But even if tomi doesn’t manage to complete its entire vision, the DAO tooling we can create together will resonate throughout Web3. DAOs for content moderation, DAOs for name services, DAOs for strategy-building, Verifiable Credentials and DIDs for DAOs, reputation for DAOS, accountability for fulfilling DAO proposals… All of these are part of tomi’s agenda for the next year’s planning.

And tomi has the budget to pull this off. The DAO is currently funded with an initial amount equivalent to $20 Million. According to what DAOwl told me, a portion of all the tomi domain name sales will continue to feed into the DAO. And it will be up to the DAO itself to create the business models on top of tomi that will allow it to be self-sustaining. Any one of the tools that I mentioned above could be implemented in dozens of DAOs today. So the suite of DAO tools could represent a comprehensive starter kit for just about any project. And best of all for me, I get the chance to demonstrate how communities can develop tools that are more collaborative and complex than simple yes-no and ranked voting.

Get on board: tomi needs you

One of the conditions I set for tomi was that, given the fact that there is a budget, we should not expect people to work for free and/or campaign to join. DAOwl said this was fine, but that they expect accountability for producing the work that is needed for the DAOs. Sometimes I think this is an emergent phenomenon: DAOs with limited funds waste a lot of time on meetings and give out little bounties that get distributed in a “nice” way where everyone gets something regardless of talent. In other words, many DAOs have been prioritizing inclusion over execution, partially because they can’t really afford highly professional work.

The first step that we’re taking is to create the DAO specifications. Together we wrote this plan and DAOwl and tomi’s CTO, Camel, have approved it. My goal is to include as many people as possible in the discussions, with those contributing significant time being compensated reasonably. In other words, if you are qualified, apply, and you’ll be paid for your work. This isn’t short-term either. Once the specifications are done, we need to actually staff these DAOs. It looks like there will be a dozen different processes and committees needed for the initial launch in May 2024.

To get involved, join the tomi Discord server and introduce yourself on the DAO channels. DAOwl and myself will be monitoring and welcoming people. Announcements will be made of the times and places of the discussions, and you’ll be able to join and participate in various ways.

Those of you who know me are aware that for the last 3 years I’ve been blathering on about how we are going to need a parallel network of passports, network infrastructure, and commerce because the authorities are becoming increasingly authoritarian worldwide. Although I’m mostly vegetarian, I wouldn’t want a world authority telling me I must be vegan and enforcing it through their CBDCs and supply chains. What tomi is providing can be an essential set of tools for those of us who want our freedom and are willing to pay the price.

How (not) to impress the tomi free birds

I’ve teamed up with my buddy Moritz Bierling to put together this project and we want you to join if you care about internet freedom or about the development of DAO tooling.

It’s a well-known secret that I spent a lot of years in fairly corporate environments and that I believe in some of the more structured ways of getting things done. So when tomi asked me to do this work, I said we should at least get three proposals and not just hire me because they know me. They said: ok, go ahead. But I failed.

I failed because most of the DAO experts are either:

Super busy on multiple projects
More interested in promoting/integrating their technology than joining someone else's projects
Inexperienced when it comes to presenting themselves to a client
More interested in having intellectual and cool conversations than getting stuff done

DAOwl told me several people ghosted them, joined the Discord but didn’t follow the comments even when they were mentioned, promised to submit proposals and then disappeared, failed to invoice for work done, and needed multiple reminders of things they agreed to. In other words, they seemed to Owl to be discourteous (as if they were doing tomi a favour).

I won’t lie. I was embarrassed. I gave the names of several people I respected and I felt it reflected poorly on me when tomi didn’t get responses. Some people told me explicitly that they were uncomfortable with the tomi project and wouldn’t join because it seemed too “scammy”. I respect that, just as I respect the whole Gitcoin-Shell conversation. But I don’t respect it when my client is ignored, has to chase people down, or receives empty promises.

So if you do want to impress myself and DAOwl, please take a few simple steps:

Do what you said you would do. If you say you'll send a proposal or join the Discord channel, do it.
Be interested. If you write something on the Discord, check at least once to see what happened or answer DMs.
If you want to get paid for your work, take appropriate steps. Send a price proposal if appropriate. Make your payment terms known in advance. Send an invoice or at least an ETH address to the DAOwl or myself when you complete work.
Be proactive. For the DAO to function, we all need to take responsibility. If I have to remind people of what they promised and micromanage when people post to the Discord, it's not a DAO. If you think we should be using Wonderverse or Jira or any of that, say so and be the one to initiate the project and ask to get paid for implementation.
Bring people in who need the work. Many of us are overwhelmed with work offers and sometimes we take it for granted. If this is not the project for you, or you don't have the bandwidth, say so and find someone who does need the work.
Be courteous.

Oh, and one more thing. If you can bring an equal number of men and women (or more women than men), that will be greatly appreciated. The specifications for the DAO discussions stipulate that all discussions require a minimum 40% women.

Let’s get decentralization right

We’ve all seen successful DAOs with multiple functioning sub-DAOs. The tomi project has the potential to provide infrastructure both for Web3 and for end users. My hope is that within 6 months there will be multiple project managers and team leads, and that this Medium blog will be filled with articles from the best of us discussing the solutions we’ve developed and the tradeoffs we’ve made.

So what are you waiting for? Check out the plan and the Governance specifications workflow. The next step is to introduce yourself on the tomi Discord channels, under the DAO channel for introductions, and join us by helping us build a better World Wide Web.

See you there!

Follow us for the latest information:

Website | Twitter | Discord | Telegram Announcements | Telegram Chat | Medium | Reddit | TikTok

Why I took on the tomi challenge and you should too: DAO expert opinion was originally published in tomipioneers on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mike Jones: self-issued

Fully-Specified Algorithms for JOSE and COSE


Orie Steele and I have written a new specification creating algorithm identifiers for JOSE and COSE that fully specify the cryptographic operations to be performed – something we’d promised to do during our presentation to the JOSE working group at IETF 117. The introduction to the specification (quoted below) describes why this matters.

The IANA algorithm registries for JOSE [IANA.JOSE.Algorithms] and COSE [IANA.COSE.Algorithms] contain two kinds of algorithm identifiers:

Fully Specified: Those that fully determine the cryptographic operations to be performed, including any curve, key derivation function (KDF), hash functions, etc. Examples are RS256 and ES256K in both JOSE and COSE and ES256 in JOSE.
Polymorphic: Those requiring information beyond the algorithm identifier to determine the cryptographic operations to be performed. Such additional information could include the actual key value and a curve that it uses. Examples are EdDSA in both JOSE and COSE and ES256 in COSE.

This matters because many protocols negotiate supported operations using only algorithm identifiers. For instance, OAuth Authorization Server Metadata [RFC8414] uses negotiation parameters like these (from an example in the specification):

"token_endpoint_auth_signing_alg_values_supported": ["RS256", "ES256"]

OpenID Connect Discovery [OpenID.Discovery] likewise negotiates supported algorithms using alg and enc values. W3C Web Authentication [WebAuthn] and FIDO Client to Authenticator Protocol (CTAP) [FIDO2] negotiate using COSE alg numbers.

This does not work for polymorphic algorithms. For instance, with EdDSA, you do not know which of the curves Ed25519 and/or Ed448 are supported! This causes real problems in practice.

WebAuthn contains this de-facto algorithm definition to work around this problem:

-8 (EdDSA), where crv is 6 (Ed25519)

This redefines the COSE EdDSA algorithm identifier for the purposes of WebAuthn to restrict it to using the Ed25519 curve – making it non-polymorphic so that algorithm negotiation can succeed, but also effectively eliminating the possibility of using Ed448. Other similar workarounds for polymorphic algorithm identifiers are used in practice.

This specification creates fully-specified algorithm identifiers for all registered polymorphic JOSE and COSE algorithms and their parameters, enabling applications to use only fully-specified algorithm identifiers. It furthermore deprecates the practice of registering polymorphic algorithm identifiers.
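
To illustrate against the earlier metadata example, negotiation with fully-specified identifiers can then say precisely which curve is supported. The names Ed25519 and Ed448 below are used as plausible fully-specified identifiers for illustration; the exact registered values are whatever the draft defines:

"token_endpoint_auth_signing_alg_values_supported": ["Ed25519", "Ed448"]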

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-jose-fully-specified-algorithms-00.html

Sunday, 27. August 2023

Mike Jones: self-issued

The Key Is Not Enough! – OpenID Connect Federation at OSW 2023


Vladimir Dzhuvinov gave the innovative and informative presentation “The Key Is Not Enough!” on OpenID Connect Federation at the 2023 OAuth Security Workshop in London. This action thriller of a presentation covers history, goals, mechanisms, status, deployments, and possible futures of the work. The comparisons between X.509 certificates and Federation Trust Infrastructure are particularly enlightening!

Friday, 25. August 2023

Mike Jones: self-issued

What does Presentation Exchange do and what parts of it do we actually need?


I organized unconference sessions on Wednesday and Thursday at the 2023 OAuth Security Workshop on “What does Presentation Exchange do and what parts of it do we actually need?”. I facilitated primarily by creating an inventory of features for discussion in advance, which you’ll find on slide 3. Notes from Wednesday’s session are on slide 4. Thursday we discussed functionality needed and not needed for presenting Verifiable Credentials (with the feature realizations not necessarily tied to Presentation Exchange), which you can find on slide 5. Notes from Thursday’s discussion are on the final two pages.

Thanks to everyone who participated for a great discussion. I think we all learned things!

The slides used as an interactive notepad during our discussions are available as PowerPoint and PDF.

Friday, 25. August 2023

A Distributed Economy

Journeys from November 2022 to August 2023


In mid-November 2022, I visited the Internet Identity Workshop (https://internetidentityworkshop.com/). It is an unconference that occurs every six months. According to the book of proceedings, I presented my hardware project and discussed how applied category theory might help verifiable credentials.

https://raw.githubusercontent.com/windley/IIW_homepage/gh-pages/assets/proceedings/IIWXXXV_35_Book_of_Proceedings.pdf

Demo Table #8:
Update on Blinky Project (Explorations with I.o.T): Brent Shambaugh
URL: https://github.com/bshambaugh/BlinkyProject/ Description: Explorations with an ESP32 with a Cryptographic Co-Processor for Providing a Signer for the Ceramic Network and Possible Future Directions

Further Exploration of DID and VC Data Architecture with Category Theory
Session Convener: Brent Shambaugh
Notes-taker(s): Brent Shambaugh
Tags / links to resources / technology discussed, related to this session:
https://www.categoricaldata.net/
https://github.com/bshambaugh/Explorations-of-Category-Theory-for-Self-Sovereign-Identity

--> this seems relevant, and is buried deep in the link tree:
Formal Modelling and Application of Graph Transformations in the Resource Description Framework - Benjamin Braatz latest access: https://conexus.com/formal-modelling-and-application-of-graph-transformations-in-the-resource-description-framework/ - > ... -> https://api-depositonce.tu-berlin.de/server/api/core/bitstreams/5f0c5a05-9ef1-455c-8198-88d95e08071a/content --> Dokument_29.pdf - [section 1.4 Organisation of the Thesis (pp 9 - 10)]


Bay Shore Freeway to San Jose in Mountain View, CA


Side view of Bay Shore Freeway near Computer History Museum

Front view of Computer History Museum showing 2nd floor where the unconference was held

Some of the software and hardware used for the Blinky Project. Ultimately the demo did not work. Reflections suggest that it worked on a local home network and not the Computer History Museum's network, due to confusion about 2.4GHz and 5GHz wifi access points and perhaps because only ports 80 and 443 were open. Websockets were running at home on port 3000, and this needs to be shifted over to port 80 to be on the same port as the HTTP server.

Circle of people meeting for an unconference session
Circle, perhaps closing, at the Internet Identity Workshop

Justin Richer, long time IIW veteran

Pedestrian street in Mountain View, CA

Bar near the Computer History Museum

Caltrain leaving from Mountain View and headed toward San Jose, CA

Greyhound bus leaving to depart to Oklahoma City from San Jose

San Jose Diridon Transit Center that has the Greyhound ticket counter

Hilton Hotel room I stayed in because the Greyhound site would no longer acknowledge my ticket purchase, but the front desk at the Hilton would acknowledge my presence. I gambled with low phone power and lost when choosing to talk to the bus driver instead of Greyhound's help hotline. With hindsight, since I called home before booking the Hilton explaining that I was afraid of the cold night and for my safety, I did have the phone power for both actions with Greyhound. Due to this and subsequent experiences, I now own a backup battery charger. Also, I now prefer to fly due to less expensive tickets, zero-cost cancellation fees, and generally better customer service. Yay to Southwest (although I'll sometimes mix it up). To Greyhound's credit though, I did get an updated ticket the next day with no penalty and for the same price as paid in July, 5 months before.

This might be Santa Barbara, CA

Perhaps this is a cactus at an Arizona rest stop in the higher desert a little while after leaving Phoenix, AZ?

In late February of 2023, I went to EthDenver. I really wasn't feeling like going, given the financial strain, but Derrek the Community Steward gave me the encouragement to come. I do confess that I spend money on: (1) travelling a lot looking for opportunity, because I get a feeling of not fitting in and feel I am missing out, and (2) buying a fair amount of electronics on AliExpress (and sometimes from HAM Radio people), trying to give myself the electronics education I wanted but never obtained earlier, within a comfortable budget. I probably should work more, but I tend to be more productive and keep it together mentally when I pursue projects that I want to see in the world, rather than work with those who may not share my long term goals, interests, or beliefs.
Anyway, I ended up applying for the EthDenver Scholarship and the Antalpha Hackerhouse and was ultimately accepted for both. I chose the Hackerhouse, and it was generally a great experience, except for being so far from the EthDenver venues that it would have cost hundreds of dollars in transportation fees if a car was not available. The quality of the people selected at the house was excellent, as were the speakers. I got to meet Kostas Kryptos, who worked on Meta's (formerly Facebook) Libra project as the lead cryptographer. I was lucky to have brought David Wong's book on Real World Cryptography because it brought joy to him (https://twitter.com/kostascrypto/status/1629722559783780352). There also were several other luminaries in the Zero Knowledge space at the Hackerhouse. I was blessed with great rewards after showing interest in Zero Knowledge and taking the chance of applying.


View of the Rocky Mountains from the back yard of the Hackerhouse.

Rich, Lance, and Fedor presenting the Hunter Z Hunter project
https://github.com/hunter-z-hunter/hunter-z-hunter/blob/main/hunter.ipynb
 
Jason talking to Rich and Fedor on a late night shortly before project submission. Jason was probably helping with bugs or theory of the codebase with EZKL around this time. At the time the onnx file wouldn't compress down to a size such that it could easily be placed on chain as a zkp verifier. With Dante's ingenuity and remote computing powerhouse it got on chain.


Lance, Rich, and Fedor putting in the time shortly before project submission

Danilo giving a thumbs up during a late night working on the Hunter Z Hunter project


View of the hackerhouse from the rear. This was probably close to the time when I had a mental breakdown and received the unexpected comfort of one of the house organizers. They told me, if I recall, that I just have to accept things and move forward, and that sometimes that means things don't work out. They loved all my links about decentralized identity and my hardware work, and were impressed that I completed a Masters in Chemical Engineering.

I hadn't been that productive during my stay at the hackerhouse. I messed up on the first night of the hackathon. I stayed out too late trying to mend a relationship with someone I had worked with previously at EthDenver, whom I ran into by chance in the registration line and who helped me through it, and I made the mistake of going to a party on the way back, attended by some from the hackerhouse but not the core people who formed the team I missed out on. The person I was trying to mend the relationship with chose to abandon me by the next morning, because I mentioned in a text that morning that I had talked about our project idea, even in the vaguest of forms I thought, because I knew they were sensitive, to another hacker in the space; they were afraid of having their ideas stolen and were not open to working with others besides me. The next morning was too late, as the team had materialized, with one person arriving late at night just as I was finishing entertaining them. Ultimately this led me to project to others that I had already found a team, and to miss out on working as an official member of the four-person team that formed the first night. This of course did not hit home for me until the following morning. Like the game of musical chairs, I ended up being the odd guy out and worked and struggled as a one-member team.

I also had some belief in my mind that some in the hackerhouse would rather work with people smarter or more talented or something else, but it turned out they would have had me on their team, and we sustained a healthy relationship. I could have traveled into the EthDenver hackerbase, but I chose instead to be beside Jason and EZKL. I tried running a model on ezkl while hastily trying to learn machine learning. Late in the game I gained a team member when another person showed up, and some attempt was made. It turns out the model I was trying wouldn't have worked at the time? The later fix. Ahh, hackathons. The coffee insomnia that struck during the first week, and the brain-haze withdrawal that followed, didn't help my focus. But wow. People at the hackerhouse were smart, skilled, and dedicated. I'm blessed.

Also, I think this is important. Maybe it is true in life that there are no coincidences.
https://twitter.com/Brent_Shambaugh/status/1633799555404668929?s=20

A little later I worked on another hackathon trying to make up for the last. At the time I could not quite muster Account Abstraction, but I later discovered a very good video: Solidity Deep Dives #6 - EIP4337 Account Abstraction : Colin Nielsen


The last day of EthDenver I received a second Bufficorn for helping out. Curiously, this is also how I received the first. I am not counting on progression beyond a pair of Bufficorns. It is a blessing, not an expectation.

In April 2023, I went to Causal Islands in Toronto. If it wasn't for the unexpected generosity of a friend, I probably would have decided it wasn't worth the risk to go. While there I got to meet the highly accomplished Brooklyn Zelenka, whom I had interacted with months earlier on Twitter through the arrangement of Boris Mann. (Comment: it turns out I had previously been at a Techlahoma event, was it a mixer called OKCTech++?, where I ran into Lawrence Kincheloe, a guy I first met at OhmSpace hackerspace in the later months of 2011, who asked a pertinent question about the Blinky Project: "What happens if you lose network connectivity?". This question unexpectedly led me to learn about UCANs, which are object-based access control tokens that require no network connectivity (https://ucan.xyz/). Too much untamed complexity, and I suppose delirium, has delayed me from trying UCANs with Blinky, but I can say now that the data payload may be too cumbersome for my hardware.) I like Fission's (https://fission.codes/) work (IPVM et al.), friendly just-be-yourself Discord community, and high quality Distributed Systems reading group (and https://lu.ma/distributed-systems). I did feel that going to their conference was a way for me to further connect. I also got to meet Mauve Signweaver (https://ranger.mauve.moe/), who impressed me with their willingness to spend time and just talk. Their thoughtfulness and talent in the IPLD/RDF world is amazing. IMO, they are the go-to person to discuss the overlap between the content-addressed world and the more traditional RDF (and perhaps location-addressed semantic web) world.

Links I found for IPFS/IPLD/RDF:
ResNetLab: Elective Course Module - InterPlanetary Linked Data (IPLD) - Protocol Labs, https://www.youtube.com/watch?v=Sgf6j_mCdjI
GPN19 - Foundations for Decentralization: Data with IPLD - media.ccc.de, https://www.youtube.com/watch?v=totVQXYS1N8
IPLD - Merkle DAGs: Structuring Data for the Distributed Web, https://proto.school/merkle-dags
Efficient P2P Databases with IPLD Prolly Trees - Mauve Signweaver - IPFS, https://www.youtube.com/watch?v=TblRt1NA39U
What is IPLD Anyway? - Mauve Signweaver - IPFS, https://www.youtube.com/watch?v=J_Q6hF_lPiM
Introductions to IPLD, https://ipld.io/docs/intro/
IPLD Pathing- IPFS, https://www.youtube.com/watch?v=N5y3gtDBwdQ
RDF/IPLD in IPFS Discord: https://discord.com/channels/806902334369824788/847481616338386994/979666465160572949
IPIP-293: Add /ipld Gateway Specs #293: https://github.com/ipfs/specs/pull/293
What is the best way to link IPLD/IPFS to RDF URI references? #22 : https://github.com/ipld/ipld/issues/22

The CN (Canadian National) Tower in Toronto on the long walk to my hostel (http://www.theparkdale.ca/) from the bus station. It turned out that walking brought me in after 10, actually around 10:30pm, which incurred a $15.00 (CAD) late fee (IIRC). I feel it was worth it. I even spent a $5.00 (CAD) bill, with maybe 35 cents in change, at a Rexall I discovered along the way on four medium-large bags of flavored potato chips (IIRC).

Paradise Theatre, maybe 20 minutes before the Causal Islands Conference began.


Mauve Signweaver giving their talk on Holistic Local-First Software. They are known for developing a P2P web browser (i.e., one that loads data from P2P protocols) called Agregore (https://agregore.mauve.moe/).


Brooklyn Zelenka closing up her talk titled Seamless Services for an Open World.

 
View of a squirrel from the window of the second hostel I stayed at in Toronto. They are surprisingly big, with black fur; the ones where I am based have reddish-brown fur and seem smaller. I also gave away my second Softmax beer around the time of the squirrel spotting. I'm sad that I did. I should have found a way to take it in my luggage and/or drink it. It was not only an excellent beer, but waiting around for the guy I gave it to may have caused me to miss my flight. You see, I walked to the airport, and things like wrong turns, crosswalks, a dead phone, and limited connectivity slow and complicate things a bit. I needed that time, as I arrived 10 minutes after the flight left, and I must have waited at the hostel for an extra 40 minutes to an hour.


James Walker with Fission, presenting on what may have been the https://github.com/oddsdk/odd-app-template .

Quinn Winton, Brooklyn Zelenka, James Walker, and I at the Causal Islands afterparty. Oh, by the way, all three people I am with work for FISSION. Quinn is an applied researcher at FISSION, and her talk at the conference was Querying Decentralized Data in Rhizomatic Systems. Mauve mentioned this talk as relevant in personal discussion.


The next trip began on the 31st of May and lasted until August 22nd. It was lengthened by the desire to grow and learn, and perhaps by a lack of motivation to return to Oklahoma. There were three main chapters in the trip: the few days before the workshop, the workshop, and over a month after the workshop. The workshop's name was Let Me Think (https://let-me-think.org/), and it was aimed at imagining and establishing new academic institutions.

The first chapter of my journey led me to spend a few nights in Denver.

This is a picture I took as I was leaving the co-working space utilized by the Cryptorado community. It was the day after I had attended another meeting with the Cryptorado community at a craft brewery. I was attracted to the community as a possible place to locate due to my positive experiences with the EthDenver community in 2022 and 2023. There definitely seemed to be a strong community where terms like verifiable credentials were accepted as common parlance. I felt positive about meeting this community. I liked that I could hop on the Cryptorado Discord and receive feedback from people like NukeManDan, as well as be welcomed online for the Solidity deep dive sessions (https://www.youtube.com/@colinnielsen2158) through Cryptorado meetups. John Paller, who co-founded EthDenver but wasn't there, had encouraged me online to practice meditation in order to achieve more success in life and deal with anxiety.

After leaving the co-working space, I joined someone I met at the space at an art exhibition in the RiNo neighborhood. I did a brief live stream: https://youtu.be/HaILQPfIzc4 .

Side view of the Open Vault exhibit. This was a common view, as I watched a physicist who worked on space stuff in deep discussion with one of the exhibit's creators.
This is a capture of cellular device metadata, collected as a Raspberry Pi presented itself as a wireless access point (IIRC).
These are stuffed mock munitions made using some textiles available in the United States, perhaps from North Carolina.
This is a poster at the entrance to the exhibit describing the scenario and who created it. I also did a video tour of the space: https://youtu.be/Up2gbME_xps .
NukeManDan was kind enough to take me on a hike to Red Rocks, as he had done for the usual Cryptorado hikes.
Here is a view of the rocks I saw during the hike.
Red Rocks also included an amphitheatre that was pretty huge.

In the second chapter of my journey, I flew into LaGuardia Airport, where I was picked up by one of the workshop organizers, James Matthews. This was on the evening of June 3rd, which put me just days from the official workshop start date of June 5th. James was gracious enough to host me and two other workshop participants for a night. The next day we headed off to the workshop in Oneonta, NY, which took several hours.

I decided to take a selfie with people who were either attendees or organizers.

In the first few days, after the mail delivery situation was resolved (the package wound up in limbo at the post office), I received and set up an Orange Pi Zero. My intention was to use it to host software projects for the workshop while my laptop (which has a very noisy fan) was shut down. If I recall correctly, I may have been able to install the zome for https://github.com/h-REA/hREA , but there was no version of Holochain that was going to run on my architecture (https://developer.holochain.org/quick-start/). Holochain may have required an Orange Pi 4 or 5, at about 4 to 6 times the expense.


We did take a few breaks from the workshop. Here is a photo that I took of some of the friends I made at the workshop on a mission to check out CityFox Regenerate at the Brooklyn Mirage.

The stage effects were amazing. Not only were there towers that shot fire, but there were rows of lights that could be seemingly any color and a wrap-around LED wall that may have been over 100 feet long and 30 feet high.
The people I came with were just warming up. This continued for another 4 to 5 hours.
The photos I took do not do the experience justice. I certainly could not capture the light-to-medium rain that fell on the venue at what seemed to be the perfect time.

This was much later, taken from the balcony during the twelve-hour progressive house extravaganza.
This could have been shortly before my favorite D.J. came on. You will later meet a particular Pakistani guy from the workshop who couldn't get enough of her music during dinner preparation.

There she is, D.J. Miss Monique, the whole reason for wanting to go. I was exhausted at this point, not having had a minute of sleep, but it was worth it. Even better, I felt like I was supporting her after hours of watching her on YouTube while being too scared to buy some swag.

Here is another visualization of D.J. Miss Monique. It's just a mashup of black-and-white video of her playing live.
This is a photo of the pond on the workshop property. The water appears very still, and it may have been very high due to the large amount of rain received.
This is the creek that ran through the property, which also received the overflow from the pond. At this point the water was as high as I saw it.

There were countless varieties of mushrooms on the 93 (97?) acre property.
This was the beginning of the illustration of the value flows ontology that I chalked on the driveway.
The particular file I based my work on was:
https://lab.allmende.io/valueflows/valueflows/-/blob/master/release-doc-in-process/all_vf.TTL and more contextual description is here: https://www.valueflo.ws/



The three colors (red, yellow, and white) are for observational layer classes, planning layer classes, and knowledge layer classes, respectively.
Somehow these classes should map to recipes, planning, and production as in
https://www.valueflo.ws/assets/ValueFlows-Story.pdf , but it wasn't clear at this point.


The main thing accomplished was to create a visually impressive markup of events, processes, agents, etc. from the ontology.

A closer view may be obtained by clicking on the photo.


Do you notice any similarities with the REA paper? Valueflows is said to have been influenced by it, and Bob Haugen had knowledge of REA when he reached out to Tiberius Brastaviceanu of Sensorica (as far as I remember).

REA paper:
https://web.archive.org/web/20170829061640/https://msu.edu/user/mccarth4/McCarthy.pdf






The purple color may be due to distortion from writing to the mini SD card.

There was considerable interest in Value Flows at the workshop. The use case going into the first call I believe was to model the food supply chain for the workshop, starting at the farm.

Jay, pictured on the right, did a great job as facilitator and summarizer of the purpose of the meeting.
Kim, at the top right, also did a great job stepping in and calming the chaos, as well as giving her own summary and bringing in decentralized identity and blockchain.

A video of the first call is here: https://youtu.be/U42xu0ukeNY

Oh, and yes, I cannot forget Andrius, who showed up very briefly with his phone as we were setting up. He is known for his great theatrics, and he frequently mentions math: https://www.math4wisdom.com/ .

I cannot help but mention Bob and Lynn, in the bottom two frames for the second call, who were integral to getting the value flows ontology off the ground. They lent their wisdom, and it was great to have Lynn on the call, as she has been deep in the weeds with the value flows ontology.

Here is some video of the second call, where we are still trying to understand Valueflows: https://youtu.be/Lw_aE3NRWBc .

Alas, here I am with my Pakistani friend. Not pictured are the times we stayed up late talking and eating food, nor the rainy night of my induction.
During a workshop session hosted by another workshop participant, I came up with traits of a positive innovation culture after reflection and research.
I also came up with traits of a negative innovation culture.
I produced a concept map showing connections between projects at the Let Me Think workshop. Clicking on any image will allow zooming in.








This was during the final days of the workshop, when we were enjoying a last laugh. This project is based on Arduino with Sound Sensor and LED (Tutorial) - K&R project. It is essentially the same thing, except that two laser diodes were used in place of a single LED.
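A rough, hypothetical illustration of the setup described above (not the actual workshop code): a minimal Arduino-style sketch assuming a sound sensor module with a digital output on pin 7 and two laser diode modules on pins 8 and 9. The pin numbers and timing are guesses for illustration only.

// Hypothetical sketch: sound-activated laser diodes, loosely following the
// "Arduino with Sound Sensor and LED" tutorial, with two laser diodes in
// place of the single LED. Pin numbers are assumptions.
const int soundSensorPin = 7;  // digital output of the sound sensor module
const int laserPin1 = 8;       // first laser diode module
const int laserPin2 = 9;       // second laser diode module

void setup() {
  pinMode(soundSensorPin, INPUT);
  pinMode(laserPin1, OUTPUT);
  pinMode(laserPin2, OUTPUT);
}

void loop() {
  // When the module detects sound above its threshold, pulse both beams.
  if (digitalRead(soundSensorPin) == HIGH) {
    digitalWrite(laserPin1, HIGH);
    digitalWrite(laserPin2, HIGH);
    delay(100);  // keep the beams on briefly so the flash is visible
  } else {
    digitalWrite(laserPin1, LOW);
    digitalWrite(laserPin2, LOW);
  }
}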
As we were packing up, these are two pavilions that were still standing.
A bit beyond, my tent was still standing as well.

In the mist and clouds is a utility shed partially under renovation.

There must have been a lot of mist due to the rain.

This is the porch we sometimes sat on. A lot of philosophizing occurred here. We also read parts of The Creative Act by Rick Rubin on this deck.

This was the usual view of the lake from the deck.

For a change in pace, we got garlic greens and shrimp!

Propane makes this stove go.

This may have been another night with a different kind of food. We had rice with Indian spices quite a bit.

This is the mixer board at one of a few parties. We celebrated birthdays, or just the end of the week.
Sam was an excellent D.J. He kept the tunes coming.


The Pakistani guy had one of the women paint his nails. Also, I've concluded that one of the benefits of getting older and still feeling like a misfit is that at some point I either fall apart or stop caring. In this case, I joined in like a honey badger that didn't care.

The property had a large area of woods. To get away from it all, I pulled Sam into the woods and we read a book about how to read.

Near the bottom of the photo are all of the classes that I pulled together from the ontology, grouped by parts 1, 2, and 3 from the Value Flows Story about apple pie. I also pulled in corresponding properties between the classes.

On the top is part 1: Making Pies (which I interpret as the production step)

In the middle is part 2: planning.

These are some combined Protege screenshots used for the on-the-wall construction.

To the right is part 3: the recipe.
This is also part 3.
Part 3 is continuing.



Here are the slides that helped me construct step 3.

My guess is that I matched each part to ontology terms, and that in order to use the ontology in production I need to work backwards, starting at step 3 and ending at step 1.

Jay appreciated value flows because he felt it was less restrictive than money when applied to real-world interactions. He also shared papers that inspired him, such as "Money like Mycelium".
Moreover, here are some slides shared with me about how to "cultivate the infinite win":
https://docs.google.com/presentation/d/1g3qIygCIG3uUsWiWkHEvisjiqezAbF4yz37yyKN4_5U/

In the third chapter of the journey, we caught up with the community that was still on the east coast after the workshop.


Eventually we found ourselves on the Brooklyn Bridge...

with a wonderful view of the city...
and the Statue of Liberty.

Sam decided to give busking a shot.
Video is here: https://youtu.be/8BUT-cEMqxU
In the fourth chapter of the journey, we wound up in Montreal. We visited the Sensorica lab to see a real-world implementation of value flows. We viewed this as a way to build upon the workshop. Most of the time was spent helping the lab get set up at a new location, but there was some time for discussion, projects, and leisure.

Sam at an African Music Festival, one of the many free entertainment events in Montreal.

Watching the stage and hearing music of many talented performers.

Some wires come out of an Arduino UNO to power LEDs and a shift register.

I followed https://lastminuteengineers.com/74hc595-shift-register-arduino-tutorial/ and put two LEDs (sans one) in parallel for each output instead of one.
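For a sense of what this looks like in code, here is a minimal Arduino sketch in the spirit of the linked 74HC595 tutorial (a rough sketch, not the exact code used here): it walks a single lit output across the register, with each output driving a pair of LEDs wired in parallel. The data/clock/latch pin assignments are assumptions.

// Hypothetical sketch: chasing pattern on a 74HC595 shift register. Each
// register output drives two LEDs in parallel (with suitable resistors);
// pin numbers are assumptions for illustration.
const int dataPin = 4;   // DS (serial data) of the 74HC595
const int latchPin = 5;  // ST_CP (storage register clock / latch)
const int clockPin = 6;  // SH_CP (shift register clock)

void setup() {
  pinMode(dataPin, OUTPUT);
  pinMode(latchPin, OUTPUT);
  pinMode(clockPin, OUTPUT);
}

void loop() {
  // Walk a single lit output from Q0 to Q7, then repeat.
  for (int i = 0; i < 8; i++) {
    byte pattern = 1 << i;                          // one output high at a time
    digitalWrite(latchPin, LOW);                    // hold outputs while shifting
    shiftOut(dataPin, clockPin, MSBFIRST, pattern);
    digitalWrite(latchPin, HIGH);                   // present the new pattern
    delay(200);
  }
}

Wiring two LEDs in parallel on one output roughly doubles the current drawn from that pin, so the electrical side (resistor values and the register's per-pin and total current limits) matters more than the code itself.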

Tibi talks to Sam in a maker/hacker-lab-like location.


Peer production was the theme of the discussion. Sensorica's NRP was mentioned, which makes it unique amongst similar spaces.

Video of some of the discussion: https://youtu.be/tE6ZtQQ_lG8 (pt 1), https://youtu.be/psTumnGrYOw (pt 2)

Here are some links from tabs that I had open near the time of our discussion:
http://ovn.world/index.php?title=NRP-CAS_UX_and_UI
http://ovn.world/index.php?title=NRP-CAS
https://hrea.io/
https://www.sensorica.co/environment/hrea-demo-for-nrp-cas
https://blog.holochain.org/happs-spotlight-hrea/
https://www.youtube.com/watch?v=WsIut-gtCvQ
https://www.sensorica.co/help/education/collaborative-entrepreneurship
https://odysee.com/@sensorica:6/collaborative-entrepreneurship:a?r=5SdBE39y7ECBFqEGpTfB5rZcyhHwJjig
https://multitudeproject.blogspot.com/2015/05/in-batman-dark-knight-rises-movie-there.html
https://www.internetofproduction.org/openknowhow
https://drive.google.com/drive/folders/1cDK1DIDEXZS5k34wY-4gAhVtHVdmsoiZ


Tibi also let us know about a place in Montreal where people have met with drums every Sunday for years. It is called the Tam Tam.

Video of the Tam Tam: https://youtu.be/MunVA03dhuY .

In the fifth chapter of our journey, we headed to Nova Scotia before heading down south and eventually west as well.

Setting up at a campground in Nova Scotia.

Sam pounds the tent pegs in place.
View of the Gulf of St. Lawrence as the sun set.
Sam enjoying the beach at Five Islands Provincial Park on the Bay of Fundy.
Here lies a tree that somehow lost its leaves on the beach.
Brent holds his staff.
Is the staff Sam's?
Sam controls the territory.

View of the Bay of Fundy from the beach.

On the way back, we admired a friend's boat under repair and dreamed about sailing the seas.
Sam gets up during a Jam Session.
Sam tunes his electric bass.

Sam pipes out some vibrations while the electronic beats go on.

Video of the Jam Session: https://youtu.be/TVAzwHJ8Nm0 .

This is a view of a cliff along the freeway.

A friend bought me a Llamacorn hoodie!
On the way through Pennsylvania, we saw Amish riding a horse and buggy.
Did the Amish live on farms like this? Where did they come from?

We drove farther as the sun set.
Through the mountains we went. Under the mountains we went.
The sun soon would be no more for awhile.
We entered Cincinnati at night.
We were sustained by sandwiches mostly. They were mostly filled with peanut butter, but occasionally we got a change.
We finally made it to the Gateway to the West after suffering from heat exhaustion.
The sky was oddly filled with haze or fog. It had been so hot earlier that we had to stop at a Denny's to cool off and get hydrated.
My feet got a tan because I had been wearing sandals and avoiding socks.

In the future, I am considering revisiting the Sensorica community and hREA and seeing what I can do to improve the software. I'd like to introduce people from the FISSION community to the project to see if they may be able to enhance the activity. I may also seek out the Internet Identity Workshop community again to discuss peer production and how identity would play a part. (Not to mention, I'm a co-chair with https://identity.foundation/interop/ , and IIW might be ideal for moving interoperability efforts forward. It would help if I were a commercial builder and traveled to all of the identity events.) Maybe it could be special kinds of verifiable credentials that help with connecting trusted parties for peer production. Since these verifiable credentials might be irregular, mapping techniques like those mentioned in Benjamin Braatz's paper (see http://adistributedeconomy.blogspot.com/2012/03/knowledge-discovery-with-semantic-web.html ) might help. I also would like to finish the Blinky Project and apply it as a Sensorica project, as Tibi suggested. Lastly, I cannot forget the laser and shift register projects with Arduino. These might lead to some pretty epic visualizations that could be synchronized to music, or rather driven by the sound itself!

If you'd like to help, feel free to reach out or donate. I think for IIW alone, I need to pull in between $400 and $800. If Sam came, this could be more. A couch to crash on would be worth at least $200. Alternatively, or in addition, I could try busking. I already used AnthemScore to translate a tune I came up with from audio to sheet music:







Friday, 25. August 2023

@_Nat Zone

Japan Fair Trade Commission: Announcement of Recruitment for New Digital Analysts

The Japan Fair Trade Commission currently employs four digital analysts…

The Japan Fair Trade Commission (公正取引委員会) currently employs four digital analysts as non-regular (part-time) staff, and it has now published the application guidelines for additional hires, who are expected to start work this autumn.
The two types of analysts being recruited this time are listed below.

Data Analyst: conducts broad, large-scale data collection, develops new data collection tools to support it (such as web scraping programs), and performs analysis of the collected data.
Security Analyst: carries out the various studies needed to balance competition policy with security assurance and privacy protection in the digital sector, including mobile OSes and app stores.
Announcements of the recruitment (social media and websites): X (formerly Twitter), Facebook, and the JFTC website (PDF).
We warmly encourage everyone to apply.

Thursday, 24. August 2023

Wrench in the Gears

Walnuts As Teachers

Before this week, nineteen years had passed since my last job interview. It’s an odd feeling to rebuild a life. My first attempt was to try something completely different in a new place, but after a few weeks it was clear that was not my fate. So, I returned to Philadelphia for a tear down [...]

Before this week, nineteen years had passed since my last job interview. It’s an odd feeling to rebuild a life. My first attempt was to try something completely different in a new place, but after a few weeks it was clear that was not my fate. So, I returned to Philadelphia for a tear down of the old in the hopes that I could craft something suitable on the old foundation. Many bags of trash put out on the curb later, my house is feeling open and airy, my mind not so much.

My new job is to find a full-time job. At this point, I’m simply looking for a place to land where I can be productive and part of a community. Oh, and health insurance should be on the list, too. My emotions swing back and forth. Looking over the listings in all of their specificity makes me feel a touch inadequate even though I know my life has been rich in experience and hard-won skills. How to package oneself as a desirable commodity is the daily question.

Sorting through the old papers that document the contours of my pre-digital dust life, I found a list of qualities that must have been written down by colleagues as part of a 360 degree review process at work years ago. I’m grateful to have it, though I’m not sure whose handwriting it is. I don’t grow cucumbers anymore, but that line brings back bittersweet memories of my community gardening days. At first, I chafed against the perceived confinement of the job boards. Then I tried playfulness. Could I reimagine writing cover letters as a personal quest? That made me feel less exposed and worked for about a week. The more you crank out, the less painful it is. I’ve had a few inquiries and sampled some of the next-generation HR apps, including one-way video screening where you talk to the camera and upload your hopefully perky self into a hiring portal.

The first few weeks of my search around the inter-webs for opportunities that would be a good fit for a Philadelphia researcher with a flair for foresight and database management were productive. But in the past week or so, those postings seem to have dried up. I’ve been off of social media for a couple of years, but this job search has dragged me back into the digital dopamine rush, though admittedly at a slower pace. How many times a day can you check the job boards anyway?

Today, my gut told me to go outside and look for guidance in nature. We’ve been blessed with very moderate weather this August, though a bit rainy. I took myself over to Bartram’s to see what messages might be there for me. I walked down to the water’s edge and the tide was out with wide mud flats exposed, decorated with tufts of spatterdock. Where I normally do my mud larking, broad drifts of wood debris filled the coves. A large tree had toppled over, eroded from the bank. I haven’t seen that happen before, but my colleague says the intense rains are making the river run stronger, if not higher. I felt for the tree, still clinging to its green leaves. A towering presence that had suddenly lost its footing in what once seemed solid ground.

I picked my way across the flat gathering up a few pieces of broken crockery, one piece had delicate brown bands. Then a glint caught my eye. It was the sunlight on a translucent bit of stone embedded in the silt. I’m not good with geology, but the tight grain and solid heft makes me think it might be quartzite, mostly a smooth cream tone with a streak of grainy black through the center. It took a few minutes to free it from the moist clay that embraced it. It wanted to go home with me. Later, after I rinsed it off and turned it over in my hand, I got the sense it may have been a hand tool, a hammerstone or ground stone perhaps. There were indentations where a thumb and fingers could grasp it while using the pointy end to strike a blow, maybe on a piece of flint.

It’s quite lovely, actually, and I’m glad for its company in this quiet house. I relocated most of my nature gatherings to Valley Forge before I left for Seattle. A friend and I arranged a lovely spiral next to Valley Creek between Mount Joy and Mount Misery, which seemed entirely appropriate. She checks on it for me. In the weeks that followed, the shells, pebbles, and feathers made their way into the creek and perhaps on to a more distant watershed journey.

This stone with its bold streak through a sparkly translucent central band is such a stark contrast to the ceramic shards. How does geologic time compare to human time? When I get impatient, I have to remember that we don’t really understand time. I think about Momo and the men in grey at the time bank and the lotus of consciousness, each blossom more beautiful than the last, and the pressure to sacrifice those petals to the bankers who smoke them in order to maintain their feeble presence in this world. I have to remember that when interview questions are posed about how will you do too many tasks at once with no guidance. I think we should be able to shape a world where that isn’t necessary; where people have the time that they need to be thoughtful and act with care.

Next to the cider press I saw a tangle of passionflower with its squiggly flying-saucer-like petals. Later I looked up its medicinal properties, and the leaves, steeped as a tea help with anxiety. The universe provides what is needed – a blessing. 

I ambled down the trail towards the farm and along the way paid a visit to my favorite walnut tree. The nuts were already dropping, to show how nature progresses, gradually. There were nuts encased in a solid green hull. There were black squishy balls and nuts protruding from others. A few freed walnuts could be found in the grass. There’s a process. You can’t necessarily make it go faster than it’s going to go. You can’t expect a green hull to offer its treasure without some amount of muss and fuss. There will be decay. It will take time. But the nut will be there at the end of the process. I hope I’ve moved beyond the green hull phase. I have no idea what kind of timeline is involved. It’s been so long since I played this game, and the rules seem decidedly different.

I saw a few old colleagues, and we exchanged warm hugs appreciating the gloriously tall sunflowers growing at the farm. We see things in much the same way. We reminisced about the days not so long ago when people opposed GMOs. Another friend was tending her twilight garden of silvers and purples with a salvia of the most intense purple I’ve seen in a while. She pointed out two Franklinias that had finally taken off. Franklinia are notoriously picky. You often need to plant four or five to get one of them to survive. But this summer, two of those trees with their delicate white and gold blossoms were putting on a magnificent show under their companion trees the white pine. It’s a reminder, even when the odds are not in your favor, success may be just over the horizon. When I headed back to my car, I saw I’d parked next to a cedar full of bluish-sage berries and picked a sprig to take home. Maybe for my next read aloud I’ll try, Celia’s Song by the late Lee Maracle. It’s about indigenous culture of the Pacific Northwest and they use cedar on a hot iron skillet to cleanse a space. Cedar is a protective healing plant with anti-inflammatory properties.

A friend reminded me today, remember that when we are feeling isolated or disconnected to visualize that we are each woven into the glorious tapestry of creation. I can’t promise that I’ve fully banished all the worries prowling around the edges of my consciousness. But I know to listen to my gut, which may be a more reliable advisor than my brain when facing so much uncertainty. Today, my gut was right. Nature and God showed up for me with offerings to calm my anxiety, purify my space, and reset my clock, so that I have more realistic expectations. It’s a wild game we are in. The one thing we can control is how we feel when we are faced with the spectacle. For me, right now, staying grounded is important.

 

Thank you stone consciousness.

 

Thank you passion flower consciousness.

 

Thank you cedar consciousness.

 

I am grateful for all of the gifts bestowed upon me and the beauty of the world around me.

Wednesday, 23. August 2023

Werdmüller on Medium

Homesick

Burning out personally feels a lot like burning out professionally. Continue reading on Medium »

Burning out personally feels a lot like burning out professionally.

Continue reading on Medium »


Wrench in the Gears

Italian “Third Pole” Politics and RFK Jr.’s “Recovery Center” Peace Corps Program

Last night, Jason, Brandi, and I streamed the first installment of a series we’re working on that digs a bit more into San Patrignano, an Italian, social impact, “sustainable” treatment center for addiction in Rimini, Italy. RFK Jr. wants to scale this program across the United States, setting up centers in rural communities using proceeds [...]

Last night, Jason, Brandi, and I streamed the first installment of a series we’re working on that digs a bit more into San Patrignano, an Italian, social impact, “sustainable” treatment center for addiction in Rimini, Italy. RFK Jr. wants to scale this program across the United States, setting up centers in rural communities using proceeds from legalized sales of marijuana and psychedelics. In this video, we discuss the primary backer of the program, oil refining heiress and Milanese politician Letizia Moratti. Moratti is associated with the “third pole” centrist political party along with former technocrat Prime Minister Mario Draghi, an MIT educated economist and central banker. I believe Italy’s “third pole” to be similar to the Transpartisan Party in the United States and has synergy with RFK Jr.’s campaign in its centrist framing. Moratti was involved in education reform and supported Draghi’s push for Green Pass adoption during the lockdowns as health minister in Lombardy. We’ll be doing a follow up analysis of clips from the 2020 Netflix series SanPa: Sins of the Savior and an examination of the origins of Bill Wilson’s Alcoholics Anonymous 12-Step program in Gerald Heard and Aldous Huxley’s psychedelic spirituality for world peace. Stay tuned.

 

Interactive map link here: https://embed.kumu.io/5e68850b35240cf80aa703c8b911a4d3#untitled-map?s=bm9kZS1SUU1ZTkdvdA%3D%3D

 

 

Monday, 21. August 2023

Damien Bod

ASP.NET Core Logging using Serilog and Azure

This article shows how to implement logging in an ASP.NET Core application using Serilog and Azure as a hosting environment. Code: https://github.com/damienbod/aspnetcore-azure-logging Priority logging use cases Two types of default logging use cases need to be supported in most software solutions. The application requires near real time logs which can be easily viewed and persisted […]

This article shows how to implement logging in an ASP.NET Core application using Serilog and Azure as a hosting environment.

Code: https://github.com/damienbod/aspnetcore-azure-logging

Priority logging use cases

Two types of default logging use cases need to be supported in most software solutions. The application requires near real-time logs, which can be easily viewed, and persisted logs for a period of time, which can be viewed after something goes wrong. This needs to work in development and also in production. We have many other logging use cases as well; I just want to cover the basic logging requirements here.

Real time logging development
Real time logging production
Persisted logging development
Persisted logging production

In this setup, I use Azure App Insights for persisted logging on the production server. This has the problem that the logs only appear after n minutes. A near real-time view is also required.

Setup

The ASP.NET Core application is deployed to an Azure App Service (Linux hosted). I like to use this as it is simple and scales well with web applications. I use Serilog to add logging to my ASP.NET Core applications and add different sinks depending on the hosted environment (dev, test, prod, etc.).

The following NuGet packages are used to set up the logging in this use case.

Serilog.AspNetCore
Serilog.Enrichers.Environment
Serilog.Enrichers.Thread
Serilog.Sinks.Async
Serilog.Sinks.ApplicationInsights
Serilog.Sinks.AzureApp

The program file of the ASP.NET Core application is used to initialize the logging. I use just one configuration for both dev and production in this example; I normally use different configurations per hosted environment. Serilog is set up using the CreateBootstrapLogger and UseSerilog methods. UseSerilog reads the sinks and logging setup from a configuration file, or files if separated per environment.

using Serilog;
using AspNetCoreAzureLogging;

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.AzureApp()
    .CreateBootstrapLogger();

Log.Information("Starting AspNetCoreAzureLogging application");

try
{
    var builder = WebApplication.CreateBuilder(args);

    builder.Host.UseSerilog((context, loggerConfiguration) => loggerConfiguration
        .WriteTo.Console(outputTemplate: "[{Timestamp:HH:mm:ss} {Level}] {SourceContext}{NewLine}{Message:lj}{NewLine}{Exception}{NewLine}")
        .ReadFrom.Configuration(context.Configuration));

    // ... Add here:
    // define your services and pipelines

    app.Run();
}
catch (Exception ex) when (ex.GetType().Name is not "StopTheHostException"
    && ex.GetType().Name is not "HostAbortedException")
{
    Log.Fatal(ex, "Unhandled exception");
}
finally
{
    Log.Information("Shut down complete");
    Log.CloseAndFlush();
}

The Serilog configuration adds the Azure and file-based setup. The console or the file is used for local development; Azure App Insights is used for the persisted logs on Azure. Depending on what services you deploy, the near real-time logging is set up differently.

"Serilog": { "Using": [ "Serilog.Sinks.AzureApp", "Serilog.Sinks.ApplicationInsights" ], "MinimumLevel": { "Default": "Debug", "Override": { "Microsoft": "Debug", "System": "Debug" } }, "Enrich": [ "FromLogContext", "WithMachineName", "WithThreadId" ], "WriteTo": [ { "Name": "AzureApp" }, { "Name": "ApplicationInsights", "Args": { "telemetryConverter": "Serilog.Sinks.ApplicationInsights.TelemetryConverters.TraceTelemetryConverter, Serilog.Sinks.ApplicationInsights" } }, { "Name": "File", "Args": { "path": "../_logs-uifile.txt", "outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level}] [{SourceContext}] [{EventId}] {Message}{NewLine}{Exception}", "rollOnFileSizeLimit": true, "fileSizeLimitBytes": 4194304, "retainedFileCountLimit": 5 } } ] },

Logging to App Insights

App Insights needs to be enabled for the Azure App Service, which uses a Linux hosting plan. The App Service can be created with it enabled, or you can use the portal to enable it.

Note: this can all be set up using IaC like Terraform or whatever tools you use.

The App Insights can be opened and the exceptions or the traces can be viewed in the Logs tab.

You can use KQL to view the traces or the exceptions from the application logs.

Near real time logging App Service

Azure App Insights logs do not appear for n minutes. Due to this, sometimes you need to enable near real-time logs to debug a bug on Azure deployments. This can be done using the App Service logs blade. Enable the File System logs.

The log stream can be used to view the near real-time logs.

On a development system, I use the log files or the console to debug the application. I do not use App Insights in the development environment if it can be avoided, as it adds complexity and delays in development.

Notes

This just covers the two default logging requirements, which can be used for development and production. There are many other logging features which can be implemented and monitored, but these two basic ones should always be supported and working in all solutions. You can also use logging solutions like Seq server or Elasticsearch; the main thing is to use one.

Links

https://learn.microsoft.com/en-us/aspnet/core/fundamentals/logging

https://learn.microsoft.com/en-us/azure/azure-monitor/app/ilogger

https://learn.microsoft.com/en-us/azure/azure-monitor/app/tutorial-asp-net-core

https://serilog.net/

https://github.com/serilog/serilog

Learning the Kusto Query Language (KQL) with Application Insights Logs

Friday, 18. August 2023

Doc Searls Weblog

We Need Deep News

Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate to prefer the latter. — Thomas Jefferson News is the first rough draft of history. — … Continue reading →

Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate to prefer the latter.
— Thomas Jefferson

News is the first rough draft of history. — Countless journalists

“Breaking the News” is the title of an exhibit at the Monroe County History Center here in Bloomington, Indiana.* It traces the history of local news from the mid-1800s, when several competing newspapers served a population of a thousand people or less, to our current time, when the golden age of newspapers is long past, and its survivors and successors struggle to fill the empty shoes of local papers while finding new ways to get around and get along.

Most of the exhibits are provided by what’s left of the city’s final major newspaper, the Herald-Times, which thankfully still persists online. Archives of the paper are also online, going back to 1988. I am told that there are microfilm archives going back farther, available at the Monroe County Public Library. Meanwhile, bound volumes of the paper, from the 1950s through 2013, are up for auction. (More here, including word that older bound volumes are apparently lost.)

Meanwhile, in our other hometown, the Santa Barbara News-Press is gone after serving the city for more than 150 years. The Wikipedia article for the paper now speaks of it in the past tense: was. Its owner, Ampersand Publishing (for which I can find nothing online), filed for bankruptcy late last month. You can read reports about it in KSBY, the LA Times, the Independent, Noozhawk, Edhat, and a raft of other local and regional news organizations.

From what I’ve read so far (and I’d love to be wrong) none of those news reports touch on the subject of the News-Press‘ archives, which conceivably reach back across the century and a half it was published. There can’t be a better first draft of history for Santa Barbara than that one. If it’s gone, the loss is incalculable.

Back here in Bloomington, Dave Askins of the B Square Bulletin, which reports on what public offices and officials are up to, has issued a public RFQ for a digital file repository that will be a first step in the direction of what I suggest we call deep news. Namely, the kind that depends on archives. It begins,

Introduction:
The B Square is seeking proposals from qualified web developers to create a digital file repository. The purpose of this repository is to provide a platform where residents of the Bloomington area can contribute and access digital files of civic or historical interest. This repository will allow users to upload files, add metadata, perform searches, and receive notifications about new additions. We invite interested parties to submit their proposals, outlining their approach, capabilities, and cost estimates for the development and implementation of this project. For an example of a similar project, see: https://a2docs.org/ For the source code of that project, see: https://github.com/a2civictech/docstore.

The links go to a project in Ann Arbor (where Dave used to live and work) that was clearly ahead of its time, which is now.

We also need wide news, which is what you get from lots of organizations and people doing more than filling the void left by shrunken or departed newspapers. (Also local radio, most of which is now just music and talk programs piped in from elsewhere.)

News reporting is a process more than a product, and the Internet opens that process to countless new participants and approaches. Many of us have been writing, talking, and working toward Internet-enabled journalism since the last millennium. Jim Fallows (see below), Dan Gillmor, Dave Winer, JD Lasica, Jay Rosen, Jeff Jarvis, Emily Bell & crew at the Tow Center, and Joshua Benton and the crew at NiemanLab, are among those who come to mind. (I’ll be adding more.) Me too (for example, here).

Wide news, when it happens, is a commons: an informal cooperative. (The Ostrom Workshop, where my wife and I are visiting scholars, studies them.) I think we are getting there in Santa Barbara. But, as the LA Times story on the News-Press suggests in its closing paragraphs, there are gaps:

Santa Barbarans have turned to other sources as the newspaper’s staff withered to just a handful of journalists. Along with the Independent and Noozhawk, some locals said they turn to KEYT television and to Edhat, a website that relies heavily on “citizen journalists” to report on local events.

Melinda Burns, one of many reporters who left the paper after feuding with management, now provides freelance stories to many of the alternative news organizations. Burns, who has spent decades in the news business, including a stint at the Los Angeles Times, said she has seen gaps in coverage in recent years, particularly in the areas of water policy and the changes wrought by legalized cannabis. She continues to report on those topics and said she gives away her in-depth stories free to reach as many people as possible.

“It keeps me engaged with the community and, God, do we need the coverage,” she said. “The local news outlets are valiant but overworked. It’s just a constant scramble for them to try to keep up.”

Maybe it helps to know that a landmark local news institution is gone, and the community needs to create a journalistic commons, together: one without a single canonical source, or a scoop-driven culture.

I think the combination of deep and wide news is a new thing we don’t have yet. I’ll call it whole news. We’ll know it’s whole by what’s not missing. Is hard news covered? City hall? Sports? Music? Fashion? Culture? Events? Is there a collected calendar where anyone can see everything that’s going on? With whole news, there is a checkmark beside each of those and more.

Toward one of those checkmarks (in addition to the one for city hall), Dave Askins has put together a collective calendar for Bloomington. Wherever you are, you can make one of your own, filled by RSS feeds and .ics files.

At the close of all his news reports, Scoop Nisker (who just died, dammit) said, “If you don’t like the news, go out and make some of your own.”

So let’s do it.

*Breaking the News is also James Fallows‘ newsletter on Substack. I recommend it highly.

Thursday, 17. August 2023

Timothy Ruff

Second-Generation Verifiable Credentials

The kind organizations will require. The current Verifiable Credentials (VCs) data model originated in 2017, when it was called “Verifiable Claims”. There have been many sustaining innovations since that time but none disruptive¹ until now, for a new field of application for VCs: Organizational Identity (OI). In The Dawn of Organizational Identity, Part 2: Credentials, Stephan Wolf, GL
The kind organizations will require.

The current Verifiable Credentials (VCs) data model originated in 2017, when it was called “Verifiable Claims”. There have been many sustaining innovations since that time but none disruptive¹ until now, for a new field of application for VCs: Organizational Identity (OI).

In The Dawn of Organizational Identity, Part 2: Credentials, Stephan Wolf, GLEIF’s CEO and the piece’s co-author, and I introduced what we consider to be second-generation VCs. We made the case why they’re ideally suited for Organizational Credentials (OCs), and why they’re required for many, if not most, of the use cases for organizations.

That argument is reproduced here, along with a few additions and modifications, as second-generation VCs represent an important advancement in verifiable credentials generally, separate from and in addition to their importance for OI and OCs.

Second-Generation Verifiable Credentials

First-generation VCs, like the current VC JSON-LD standard at W3C or the emerging JWT standard at IETF, were developed with utility for individuals in mind, for proving things like identity, age, and academic credentials. They were and are a major breakthrough in digital identity, they’re what make ‘self-sovereign identity’ actually self-sovereign and ‘decentralized identity’ actually decentralized, by giving individuals a high degree of digital autonomy for the first time.

Second-generation VCs were developed with organizational use cases in mind, with all the capabilities of first-generation credentials plus sophisticated new capabilities that individuals don’t often need but organizations typically do, such as key rotation and recovery, delegability, multi-signature (“multi-sig”), contractually protected disclosure, freedom from blockchain/‘ledger lock’, and more.

These new organizational capabilities are made possible by a form of VCs called “Authentic Chained Data Containers” (ACDCs), with signing and verification using an open key management protocol that replaces the need for blockchain or vulnerable web-based security: Key Event Receipt Infrastructure (KERI). These two decentralizing protocols together — ACDCs for VCs and KERI for signing — deliver significant advances for VCs, the kind that organizations require and that justify the ‘second-generation’ moniker.

Here are seven highlights:

1. Extraordinary advances in security:

Recovery from private key compromise (suspected or actual) without reissuance of previously signed items*
Key rotation without re-issuance of previously signed items*
No reliance on web security
Privacy-respecting verifiable revocation
Non-repudiable signing with logging
Post-quantum proof*
Natively supports weighted 'm-of-n' multi-sig schemes with fractionally weighted thresholds (e.g., '2-of-3', or '1-of-3' when the 1 has sufficient 'weight', etc.)

2. No reliance on blockchain (and its limitations: “ledger-lock”/centralization, scalability, cost, complexity, interoperability, performance, privacy, security, etc.)

3. Use of existing cloud infrastructure and APIs

4. Multi-level issuance with chaining and delegation (anyone having authority can delegate and extend a chain of authority, rather than all delegates returning to the ‘mother ship’ for single-level, single-signer issuance)

5. Extraordinary advances in privacy with graduated disclosure (granular, legally enforceable privacy and disclosure capabilities, covered in greater detail later in this article):

Selective disclosure (with or without Zero Knowledge Proofs)*
Ricardian Contract capabilities through contractually protected disclosure (counterparties verifiably consent to terms of disclosure prior to disclosure)
Contingent disclosure (i.e., escrowed)

6. Interoperable with anything using the same underlying protocols (without sharing blockchains, vendors, registries, databases, etc.)

7. Simpler; technically lighter-weight (not verbose, compared to first-gen VCs)

*Individuals need these, too.

Security First, Always

With second-generation VCs, security comes first, always, no exceptions. The security advances with second-generation VCs are stunning when you consider that key recovery and rotation are still the Achilles Heel not only of first generation VCs, but also of blockchain and existing web infrastructure generally. Second-generation VCs using KERI don’t just fix security flaws for VCs (especially web-based VCs), they’re a significant advance in digital security, period.

Come on In, the Water’s Warm

There are no significant technical reasons we are aware of not to advance to second-generation VCs — no notable trade-offs other than familiarity and switching costs from legacy systems. In fact, the complexity of blockchain-based first-generation VCs may be at least an order of magnitude greater than the more elegant, advanced, and blockchain-less second-generation VCs.

That said, some will reasonably argue that a second-generation of VCs that doesn’t directly and fully interoperate with first-generation VCs — which is true — is a step backward rather than forward. However, we feel that achieving interoperability of VCs that are insecure, reliant on blockchain or web infrastructure, and not stress tested in production at scale is a hollow victory and premature; it is not meaningful interoperability. Security — in the form of proper key management — must come first, it cannot be bolted onto the current VC model just like it cannot be bolted onto the internet after the fact. There is no way around this.

However, the good news is, with second-generation VCs we can have all the robust security capabilities described above plus advances in privacy simply not possible with first-generation VCs, and gain real, global, fully decentralized interoperability that only standardized non-platform-based protocols can deliver. Even then, serious work on interoperability should be undertaken only after some use cases have made it into production and demonstrated that all these things work well, at scale. Otherwise that would be premature interoperability, too.

The Unprecedented Power of Organizational Credentials (OCs) Using Second-Generation VCs

Some of the dreamy use cases for OCs [listed in Part 2] are already possible, you say… but:

Unlike most current identity technologies, including numerous high-profile implementations of first-gen VCs, OCs require no surveillance-enabling, privacy-killing 'phone home' to an intermediary for verification, do not rely on DNS/web security, and do not use shared ledgers/blockchains; OCs sacrifice neither security nor privacy.
OCs can convey the full provenance and details of authority — unlimited levels — in-band with strong verification, proving not only the authority of the representative, but the full chain of authority and where it originated.
OCs are B2C, helping prevent phishing by proving to customers and constituents that the company or government representative they're interacting with really represents whom they claim to represent.
OCs are C2B, enabling customers and constituents to prove their identity, status, and entitlements to the businesses and governments they're aligned with when they call in, log in, or walk in.
OCs can prove provenance not only back to the top of an organization, but to GLEIF (described in Part 1) and/or to any other chained root of trust such as a city, state, nation, or group of nations.
OCs can also make verifiable any other analog identifiers in addition to or instead of the LEI. For example, a national ID, regional ID, local ID, employee ID, and any other identifiers could become cryptographically verifiable with OCs (provided their inclusion is governed by rules and safeguards similar to GLEIF's Ecosystem Governance Framework, see Part 2 for more).

When an authorized representative of an organization delegates some authority to a person or a thing, whether an employee, director, contractor, partner, customer, constituent, device or piece of software — no matter how many levels deep in the organization — OCs enable that person or thing to prove it digitally, instantly, securely, and globally. OCs are a game-changer.

Proof of Delegated Authority

It’s one thing to have authority, quite another to prove it. OCs enable global verification of a representative’s delegated authority — through unlimited levels of delegation — using a powerful capability of second-generation VCs: chaining.

Chaining enables multiple levels of signatures for the same credential, and also for authority to be conveyed from signature to signature and/or credential to credential, enabling the very last link in the chain of delegated authority to be as strongly provable as the first.

A janitor can prove his authority with a cryptographic chain of delegated authority in the OC he carries. When verified by someone (or something) he’s presented his OC to, the chain might look like this:

the janitor got his authority to do X and Y from the facilities manager,
whose authority came from the regional manager,
whose authority came from the vice president,
whose authority came from the president/COO,
whose authority came from the CEO,
who was issued a vLEI credential* by the Qualified vLEI Issuer (QVI)* that vetted the organization,
who was qualified by the Global Legal Entity Identifier Foundation (GLEIF), a global root of trust for legal entity identification.

* Described in Part 2.

The authority for the janitor begins with the CEO of the organization, originating from a special root-level OC that he/she holds, and not with GLEIF or its QVI. A CEO credential received from a QVI is not a delegation of authority; it is a digitally signed, non-repudiable attestation that the holder of that credential really is the CEO, issued by an accredited QVI or other reputable issuer after a rigorous vetting process. What the CEO does with the credential from thereon has nothing to do with the fact that an external entity asserts that he/she really is the CEO, and has digitally signed that assertion so that it’s verifiable anywhere.
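To make the chain-walking idea above concrete, here is a small, self-contained C++ sketch (not from the original article). It uses invented, simplified structures (an issuer, a holder, and a stand-in signatureValid flag) purely for illustration; it is not the ACDC data model, the KERI protocol, or any vLEI/QVI API, where verification would involve real cryptographic signatures and key event logs.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical, simplified stand-in for one link in a chain of delegated authority.
struct Credential {
    std::string issuer;     // who delegated the authority (e.g., "facilities manager")
    std::string holder;     // who received it (e.g., "janitor")
    std::string authority;  // what was delegated
    bool signatureValid;    // stand-in for a real cryptographic signature check
};

// Walk the chain from the last delegation up to the root: every link must be
// signed, each credential's issuer must be the holder of the next one up,
// and the chain must terminate at the trusted root authority.
bool verifyChain(const std::vector<Credential>& chain, const std::string& root) {
    if (chain.empty()) return false;
    for (size_t i = 0; i < chain.size(); ++i) {
        if (!chain[i].signatureValid) return false;                  // broken signature
        if (i + 1 < chain.size() && chain[i].issuer != chain[i + 1].holder)
            return false;                                            // broken linkage
    }
    return chain.back().issuer == root;                              // anchored at the root
}

int main() {
    std::vector<Credential> chain = {
        {"facilities manager", "janitor", "clean building X", true},
        {"regional manager", "facilities manager", "manage facilities", true},
        {"CEO", "regional manager", "run the region", true},
    };
    std::cout << (verifyChain(chain, "CEO") ? "chain verifies" : "chain broken") << "\n";
    return 0;
}

The point of the sketch is only the shape of the check: each link is verified on its own and against its neighbor, so the last delegate is as provable as the first.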

OCs, Privacy, and Self-Sovereignty
Personal Privacy and Graduated Disclosure

With OCs the personal privacy of the holder — a representative of the organization — is of less relevance when that representative is an employee or contractor and they are acting within that context; the organization gets to decide what that representative should be disclosing or keeping private, which they can do quite powerfully with Graduated Disclosure as described below. When an OC holder is a customer or constituent, however, personal privacy becomes critically and legally important, magnifying the need for robust selective disclosure capabilities.

As with all ACDC VCs, OCs have three granular, legally enforceable “Graduated Disclosure” capabilities highly relevant to and needed by organizations:

Selective disclosure — the ability to provably disclose only some information without disclosing the rest, such as one’s authority but not their name, or their name but not their authority, or only one aspect of their authority such as their financial ceiling for signing purchase orders. This capability does not use Zero Knowledge Proofs (ZKPs) by default, but can take advantage of ZKPs when desired.

Contractually protected disclosure — when counterparties verifiably consent to the terms of disclosure prior to disclosure. The provided consent is non-repudiable, providing a path for recourse if the terms of disclosure are not honored.

Contingent disclosure — disclosure is possible only when certain conditions are met, such as keeping an identity encrypted and in a form of escrow for business purposes and interactions, but enabling that identity to be discoverable by law enforcement or regulators if it becomes necessary. Contingent disclosure solves tricky problems in telecommunications, for example, when the originator of a communication may not wish to be identifiable to network operators, but must become identifiable to authorities if their traffic turns out to be fraudulent or spam.

Self-Sovereignty

When employees or contractors are acting in representation of an organization, the org is sovereign and not the individual; there is no self-sovereignty in this situation. There is autonomy, as the person can choose to behave however they please regardless of rules or expectations, but not self-sovereignty. Like privacy, however, the principle changes when the representative is a customer or constituent; then the sovereignty of the OC’s holder becomes relevant. Still, the customer’s or constituent’s OC was issued by the organization and can be revoked at the organization’s discretion.

About Blockchain…

Blockchain, from a data standpoint, results from implementing a centralizing protocol, as it inescapably results in a single, master set of data that all counterparties must agree to (unlike TCP/IP, HTTP, etc.). While a blockchain’s data is physically replicated across many physical locations, the only semi-decentralized aspect of blockchain is its governance, but only when that governance is permissionless, and even then it is questionable². Every aspect of permissioned blockchains is centralized, putting all data into a single logical ledger and governance into the hands of a chosen few. The point here is, blockchains in all their flavors are platforms, not protocols, and always result in data centralization, the opposite of what’s needed for globally verifiable OCs.

(Learn more about the centralizing aspects of platforms and decentralizing aspects of protocols in the seminal essay “Protocols, Not Platforms”, by Mike Masnick.)

In Conclusion

For those who’ve been involved with Verifiable Credentials since their official beginning in 2017, and even before that as Timothy has, the idea that a particular new flavor of VCs calls itself “second generation” would seem presumptuous, as there are several competing data models, each with their own adherents and arguments, each feeling that their favored flavor is superior to the others for one reason or another.

In this piece we’ve made detailed arguments as to why ACDCs are at a new, generational level compared to all other existing VC data models, and we invite reasoned arguments to the contrary.

¹ See Harvard Business Review’s What is Disruptive Innovation?

² See Crypto isn’t decentralized, Blockchain is not decentralized, and Web3 is not decentralized

Wednesday, 16. August 2023

@_Nat Zone

How to Check Your ESTA Expiration Date

You know your ESTA has been issued, but…

You know your ESTA has been issued, but have you ever worried about when exactly it expires? I have. Fortunately, it is not a problem: with just your passport information, you can check your ESTA’s expiration date.

To check your ESTA expiration date, follow these steps.

1. Go to the U.S. Customs and Border Protection website. You will see a screen like the one below.

2. Click “CHECK ESTA STATUS” in the menu bar. A menu will appear.

3. Click “CHECK INDIVIDUAL STATUS”. A SECURITY NOTIFICATION like the one below will appear; choose “CONFIRM & CONTINUE”.

4. You will arrive at the RETRIEVE INDIVIDUAL APPLICATION page.

5. Enter the passport number printed on the photo page in the “Passport Number” field, and select your date of birth from the day ▼ month ▼ year ▼ menus. Then, in the fields to the right below “OR”, enter your passport information: for “Country of Citizenship / Nationality” select your nationality (for Japanese citizens, “JAPAN (JPN)”), and enter the Issuance Date and Expiration Date, both of which are printed on your passport’s photo page. Finally, press the “RETRIEVE APPLICATION” button.

6. Your application information page will then be displayed.

In my case, it appears to be valid until April 11, 2024.

That was easy, wasn’t it!

Have a good trip!


reb00ted

The Peer Computing Architecture

I used that term frequently a long time ago, before the centralizing platforms took over all of technology. I believe it’s time to dig it back out, because for the first time in over a decade, perhaps even two, there’s real uptake in this computing architecture with protocols such as ActivityPub and Matrix. My definition is at peercomputing.org. Feedback appreciated. (Historical note: there

I used that term frequently a long time ago, before the centralizing platforms took over all of technology. I believe it’s time to dig it back out, because for the first time in over a decade, perhaps even two, there’s real uptake in this computing architecture with protocols such as ActivityPub and Matrix.

My definition is at peercomputing.org.

Feedback appreciated.

(Historical note: there was a time I called it the “four-point architecture”. My old blog tells me I wrote about it in, gasp, 2005. That means 18 years of walking in the desert, and only now do the doors open up again.)

Tuesday, 15. August 2023

FACILELOGIN

Securing north/south and east/west traffic @ DevRev

Image credits: https://www.freecodecamp.org/news/content/images/size/w2000/2023/04/pexels-barry-tan-7994953.jpg At DevRev, we are building an API-first dev-centric CRM that leverages data, design, and machine intelligence to empower developers (dev) to build, support, and grow their customers (revs), in the era of product-led growth. This blog post shares some insights on how we secure DevRev APIs
Image credits: https://www.freecodecamp.org/news/content/images/size/w2000/2023/04/pexels-barry-tan-7994953.jpg

At DevRev, we are building an API-first dev-centric CRM that leverages data, design, and machine intelligence to empower developers (dev) to build, support, and grow their customers (revs), in the era of product-led growth. This blog post shares some insights on how we secure DevRev APIs (north/south traffic) at the edge, and the service-to-service interactions (east/west traffic).

The DevRev platform is designed to scale up to 1 million dev organizations and 1 billion rev users. At the time of this writing, the DevRev APIs serve close to 1 million API requests per day, even at this very early stage of the product. In terms of API performance, we emphasize that all APIs should operate with very low latency. With this in mind, we wanted our security design to bring only valid, legitimate traffic into the DevRev platform. Anything that does not look right, we reject at the edge.

At the edge, we use Fastly Next-Gen WAF (powered by Signal Sciences) to monitor for suspicious and anomalous API traffic and protect in real-time, against attacks directed at our public APIs and the origin servers. Once the requests pass through the WAF, we use Fastly Compute@Edge to validate each request.

Fastly provides us with the capability to execute our code at the edge through a WebAssembly module. We’ve developed an edge gateway in Rust, which compiles into a WebAssembly module. This Rust code is responsible for rejecting any API requests lacking a valid JWT. JWT verification is just one of the tasks we perform at the edge. The edge gateway responsibilities also encompass cache management, reporting API statistics to Google BigQuery, sending logs to Datadog, enforcing captchas, URL rewriting, CORS management, API allow-listing based on various parameters, and proxying traffic to secure S3 endpoints, among others. Furthermore, we are in the process of introducing coarse-grained authorization at the edge. This additional measure will assist in filtering only legitimate traffic to DevRev services. The entire Rust code executing at the edge takes no longer than 5 ms to complete its tasks.
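
The actual edge gateway is Rust compiled to WebAssembly, but the gate it enforces is conceptually simple: no verifiable JWT, no entry. Below is a minimal sketch of that check in Go using the golang-jwt library; the shared HMAC key and the edgeAuth wrapper are illustrative stand-ins, not the Fastly/Rust implementation (which verifies against the issuer’s published signing keys).

package edge

import (
	"net/http"
	"strings"

	"github.com/golang-jwt/jwt/v5"
)

// edgeAuth rejects any request that does not carry a verifiable bearer JWT,
// so invalid traffic never reaches the origin.
func edgeAuth(hmacKey []byte, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		raw := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		token, err := jwt.Parse(raw,
			func(t *jwt.Token) (interface{}, error) { return hmacKey, nil },
			jwt.WithValidMethods([]string{"HS256"}),
		)
		if err != nil || !token.Valid {
			http.Error(w, "invalid or missing token", http.StatusUnauthorized)
			return // rejected at the edge
		}
		next.ServeHTTP(w, r)
	})
}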

Fastly Compute@Edge serves as the entry point at the edge for DevRev services. At the origin, an API gateway intercepts all incoming traffic. The responsibilities of this API gateway go far beyond the functionalities typically found in open source or commercial API gateways. In fact, it functions as both an API gateway and an integrator, developed in-house at DevRev. Throughout the remainder of this blog, we will refer to it as the DevRev gateway.

As a second level of defense, we perform JWT verification at the origin using the DevRev gateway, even though it is redundant. Ideally, we should not receive any 401 errors from the origin, and we actively monitor this using Datadog alerts. The verification of a JWT takes less than 2ms at the origin. Additionally, we have implemented a token-based authentication mechanism between the Fastly edge and the DevRev gateway. This, coupled with IP allowlisting, ensures that no request can bypass the Fastly edge to reach the DevRev gateway.

The JWT carries the identity of the API user. An API user can be one of the following types:

An Auth0 user: We utilize Auth0 as the trusted Identity Provider for the DevRev platform. Auth0 authenticates users through methods such as OTP over email, social connections, and enterprise connections. To access the DevRev web app or mobile app, users must first authenticate via Auth0. Auth0 assigns a distinctive identification to each user known as the Auth0 user ID. This ID is formed by combining the connection name with the immutable identifier specific to the user within the associated connection.

A Dev user: A Dev user is a member of a Dev organization within the DevRev platform. All Dev users are Auth0 users; however, the reverse is not necessarily true. The DevRev web app and mobile app invoke APIs on behalf of Dev users, or the Dev users themselves can directly invoke DevRev APIs.

A Rev user: A Rev user is a customer of a Dev organization and has the authorization to access specific DevRev APIs. In most cases, the DevRev main app doesn’t actively authenticate Rev users; instead, it relies on the corresponding Dev organization for authentication. Based on a trust relationship with the Dev organization, Rev users are granted access to DevRev APIs. However, the DevRev support portal permits Rev users to log in directly. In an upcoming blog post, we will delve into the details of building this trust relationship and explain how we authenticate Rev users at both the edge and the origin.

A service account: A service account represents an application that communicates with the DevRev APIs. For instance, when you integrate the DevRev PLuG widget into your web app or use the PLuG mobile SDK in your mobile app, the PLuG functions as a service account. A service account can access DevRev APIs independently or on behalf of a Dev user or a Rev user.

The DevRev gateway at the origin serves as the entry point to the DevRev microservices backend. Once it verifies the JWT accompanying the API request, the gateway dispatches the request to the appropriate service. All services are developed in Golang and communicate with each other using gRPC.

The gateway and all other services are deployed within a Kubernetes cluster. Each service operates within its own namespace and is deployed behind an envoy proxy. When a service spins up, it is provisioned with a key by Istio, which also manages key rotation. These keys are subsequently utilized by each service for mTLS authentication between services. The same applies to the gateway.

mTLS is good enough to identify a service, but it has its own challenges as well. We’ve built a service-to-service authentication mechanism that combines mTLS with JWT for the following reasons:

Flexibility and decoupling: JWT can be used in scenarios where you need more flexibility and decoupling between services. It allows you to issue tokens that can carry various claims and information about the user or entity. This can be useful in scenarios where you want to provide fine-grained access control or share specific user attributes between services.

Statelessness: JWT is a stateless authentication mechanism, meaning the server doesn’t need to store token-related information. This can be advantageous when scalability and performance are crucial, as the server doesn’t need to maintain session-related data.

Cross-Domain Communication: JWT can be used for cross-domain communication between different services. Since JWTs are self-contained and can carry service-related information, they can facilitate communication between services without requiring direct interaction or shared session state.

When a service spins up within the DevRev platform, it talks to the STS (Security Token Service) deployed in the same Kubernetes cluster. Through mTLS authentication, the service requests a JWT. This particular JWT is referred to as the Application Access Token (AAT). The AAT’s subject is a system-generated identifier linked to the Kubernetes service name of the corresponding service making the AAT request. In simpler terms, an AAT is accompanied by a corresponding service account, and the AAT’s subject is the identifier of that service account, which we call a service account DON.

The URI field within the X.509 certificate that Istio issues for each service (or workload) contains the SPIFFE ID linked to that specific service. When the STS issues a JWT for a service that authenticates with the STS through mTLS, it copies the SPIFFE ID found in the incoming X.509 certificate into a claim of the JWT it creates and subsequently shares with the service. This process effectively binds the JWT to the service identity behind the mTLS connection.
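
As a rough illustration, that issuing step could look like the Go sketch below. The issueAAT helper, the spiffe_id claim name, and the ten-minute lifetime are assumptions made for the example, not DevRev’s actual STS code.

package sts

import (
	"fmt"
	"net/http"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// issueAAT mints an Application Access Token for the workload on the other end
// of the mTLS connection, embedding its SPIFFE ID as a claim so the token is
// bound to that workload identity.
func issueAAT(r *http.Request, signingKey interface{}, serviceAccountDON string) (string, error) {
	if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
		return "", fmt.Errorf("no client certificate on the connection")
	}
	leaf := r.TLS.PeerCertificates[0]
	if len(leaf.URIs) == 0 {
		return "", fmt.Errorf("client certificate carries no SPIFFE ID")
	}
	spiffeID := leaf.URIs[0].String() // e.g. spiffe://cluster.local/ns/janus/sa/janus

	claims := jwt.MapClaims{
		"sub":       serviceAccountDON, // identifier of the service account
		"spiffe_id": spiffeID,          // binds the token to the workload identity
		"iat":       time.Now().Unix(),
		"exp":       time.Now().Add(10 * time.Minute).Unix(), // short-lived by design
	}
	return jwt.NewWithClaims(jwt.SigningMethodES256, claims).SignedString(signingKey)
}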

Each microservice is linked to a predefined service account, and a particular service has the ability to establish its own access control policies for these service accounts. For instance, the Janus service might permit read operations from the gateway service account, while the codex service could enable the gateway service account to impersonate a specific group of Rev users.

At the end of the day, every service is provisioned with a JWT, which it utilizes to access upstream microservices. These JWTs are of short duration, and as they approach expiration, the corresponding service is required to communicate with the STS once more to obtain a fresh JWT.

One fundamental best practice when generating a JWT is to define a restrictive audience. For instance, referring back to the service-to-service authentication using JWT discussed in the preceding section, the token generated by the STS for the gateway’s communication with the Janus service should specifically carry ‘janus’ as the audience value. Consequently, the Janus service cannot utilize the same JWT received from the gateway to communicate with the STS. This is because the token’s audience is ‘janus’, while the STS anticipates a token with an audience value of ‘sts’.
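
Expressed with the golang-jwt library, that audience check might look like the sketch below; the verifyForJanus helper and the pinned ES256 algorithm are illustrative, not DevRev’s actual code.

package authz

import (
	"github.com/golang-jwt/jwt/v5"
)

// verifyForJanus accepts only tokens minted for the Janus service. A token with
// audience "janus" would fail the equivalent check at the STS, which expects an
// audience of "sts", so a captured token cannot simply be replayed elsewhere.
func verifyForJanus(raw string, keyFunc jwt.Keyfunc) (*jwt.Token, error) {
	return jwt.Parse(raw, keyFunc,
		jwt.WithAudience("janus"),               // reject tokens minted for other services
		jwt.WithValidMethods([]string{"ES256"}), // pin the expected signing algorithm
	)
}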

One drawback of this model is that it would lead to more frequent interactions between the STS and other services, resulting in increased communication overhead. Furthermore, each service would be required to manage distinct tokens for every upstream service it engages with. While we opted not to adopt this model with different audience values, we were still unwilling to take the risk of one service employing a token from another service to access an upstream service, essentially impersonating the original service.

Binding the JWT to the SPIFFE ID associated with the X509 certificate of a particular service proves beneficial in this context. Every upstream service not only verifies the JWT received from the downstream service but also confirms whether it is tied to the SPIFFE ID related to the underlying mTLS connection. This mechanism ensures that the Janus service cannot utilize the JWT acquired from the gateway to gain access to the STS as if it were the gateway.
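
A minimal sketch of that receiving-side check, again in Go and reusing the illustrative spiffe_id claim from the earlier sketch; requireBoundToken is a hypothetical helper, not DevRev’s code.

package authz

import (
	"fmt"
	"net/http"

	"github.com/golang-jwt/jwt/v5"
)

// requireBoundToken verifies that the JWT is valid and that the SPIFFE ID
// claimed inside it matches the SPIFFE ID in the client certificate of the
// current mTLS connection, so a token lifted by another service is useless.
func requireBoundToken(r *http.Request, raw string, keyFunc jwt.Keyfunc) error {
	token, err := jwt.Parse(raw, keyFunc)
	if err != nil {
		return err
	}
	claims, _ := token.Claims.(jwt.MapClaims)
	claimed, _ := claims["spiffe_id"].(string)

	if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 || len(r.TLS.PeerCertificates[0].URIs) == 0 {
		return fmt.Errorf("no SPIFFE ID on the mTLS connection")
	}
	connection := r.TLS.PeerCertificates[0].URIs[0].String()

	if claimed != connection {
		return fmt.Errorf("token bound to %q presented over connection %q", claimed, connection)
	}
	return nil
}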

Alongside the service context, the interactions between services also include the user context. The gateway forwards the JWT it receives from the client to the upstream services when necessary. This JWT carries the user context. In the current model, these client JWTs might originate from two different issuers: Auth0 and the STS. However, as we move forward, our goal is for all services to exclusively trust STS-issued tokens. This implies that clients will need to exchange the token they receive from Auth0 for an STS-issued token before gaining access to DevRev APIs.
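
For illustration, such an exchange could follow the generic OAuth 2.0 Token Exchange shape (RFC 8693). The endpoint URL, parameter handling, and exchangeForSTSToken helper below are hypothetical, not DevRev’s published API.

package client

import (
	"context"
	"io"
	"net/http"
	"net/url"
	"strings"
)

// exchangeForSTSToken trades an Auth0-issued token for an STS-issued token.
// The form parameters follow the generic OAuth 2.0 Token Exchange pattern
// (RFC 8693) and are illustrative only.
func exchangeForSTSToken(ctx context.Context, stsTokenURL, auth0Token string) (string, error) {
	form := url.Values{
		"grant_type":         {"urn:ietf:params:oauth:grant-type:token-exchange"},
		"subject_token":      {auth0Token},
		"subject_token_type": {"urn:ietf:params:oauth:token-type:jwt"},
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, stsTokenURL, strings.NewReader(form.Encode()))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	// A real client would decode the JSON response and extract access_token;
	// returning the raw body keeps the sketch short.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}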

Why should the client exchange the Auth0 token for an STS token itself, rather than having the gateway handle the conversion in the background and pass the STS-issued token to the upstream services? The reason is that the gateway approach would result in more frequent interactions between the gateway and the STS, requiring a token exchange for each request and introducing unnecessary overhead.

A given service possessing the JWT containing the user context will not have unrestricted access to any arbitrary service using that JWT. We enforce stringent access control policies at each service, ensuring that incoming requests are processed only after evaluating not only the user context but also the corresponding service context.

A token that has been previously issued to a client or user, whether by Auth0 or the STS, can be revoked for two reasons. The associated user or the organization to which the user belongs may no longer be part of the DevRev platform, or the user themselves or an admin of the organization could explicitly revoke a specific token.

To address the former, the DevRev gateway verifies the active status of a user or organization within the platform after each token validation. In order to reduce unnecessary service calls and database queries, the gateway maintains a cache of recognized users.

To explicitly revoke a token, the client can make use of the revoke API provided by the STS. After a token is revoked, the STS includes the metadata related to the revoked token in a cache accessible to the gateway for reading. The gateway then rejects any tokens corresponding to the token metadata found within the revoked token cache. We are currently working on making this list of revoked tokens available to the Fastly edge gateway, which will then reject any requests carrying a revoked token at the edge itself.
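
A small sketch of that revocation lookup, assuming the cache is keyed by the token’s jti claim; the RevocationCache interface and rejectRevoked helper are illustrative only.

package authz

import (
	"github.com/golang-jwt/jwt/v5"
)

// RevocationCache is a stand-in for the shared cache the STS populates when a
// token is revoked; the real store and its keys are implementation details.
type RevocationCache interface {
	IsRevoked(jti string) bool
}

// rejectRevoked returns true when an already-verified token appears in the
// revocation cache and should be refused even though its signature and expiry
// are otherwise valid.
func rejectRevoked(token *jwt.Token, cache RevocationCache) bool {
	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok {
		return true // no readable claims: fail closed
	}
	jti, _ := claims["jti"].(string)
	return jti == "" || cache.IsRevoked(jti)
}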

In this blog post, we provided a high-level overview of how we secure both north/south and east/west traffic at DevRev. In future blog posts, we will delve deeper into the key aspects of the DevRev microservices security design.

Securing north/south and east/west traffic @ DevRev was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 14. August 2023

Phil Windleys Technometria

Zero Trust

My new book Learning Digital Identity from O'Reilly Media covers many of the topics in this post such as multi-factor authentication, authorization and access control, and identity policy development in depth. Zero Trust is a security framework that is better attuned to the modern era of sophisticated threats and interconnected systems. Past practices included techniques like virtual private networ

My new book Learning Digital Identity from O'Reilly Media covers many of the topics in this post such as multi-factor authentication, authorization and access control, and identity policy development in depth.

Zero Trust is a security framework that is better attuned to the modern era of sophisticated threats and interconnected systems. Past practices included techniques like virtual private networks (VPNs) that tried to emulate the idea of an intranet where trusted computers and people were protected from hackers by a firewall that "kept the bad stuff out." As more and more work has gone remote and personal devices like phones, tablets, and even laptops are being used for work, a firewall—virtual or physical—offers less and less protection. Often the bad actors are hard to tell apart from your employees, partners, and customers.

Zero Trust operates on a simple yet powerful principle: "assume breach." In a world where network boundaries are increasingly porous and cyber threats are more evasive than ever, the Zero Trust model centers around the notion that no one, whether internal or external, should be inherently trusted. This approach mandates continuous verification, strict access controls, and micro-segmentation, ensuring that every user and device proves their legitimacy before gaining access to sensitive resources. If we assume breach, then the only strategy that can protect the corporate network, infrastructure, applications, and people is to authorize every access.

Defense in Depth

Zero Trust solutions offer a multifaceted defense against the evolving threat landscape, encompassing various aspects of network security, infrastructure protection, user authentication, and application security. These solutions collectively work together to uphold the "never trust, always verify" principle. Here are some of the different kinds of Zero Trust solutions that safeguard networks, infrastructure, people, and applications from malicious actors:

Network Security:

Micro-Segmentation: Dividing the network into smaller segments to limit lateral movement of threats and control access between different segments.

Software-Defined Perimeter (SDP): Creating an invisible perimeter around resources, allowing authorized users and devices to connect while remaining invisible to unauthorized entities.

Infrastructure Protection:

Identity and Access Management (IAM): Implementing strong identity verification and access controls to ensure that only authenticated users can access critical resources.

Endpoint Security: Employing solutions that monitor and secure devices (endpoints) to prevent malware infiltration and unauthorized access.

User Authentication:

Multi-Factor Authentication (MFA): Requiring users to provide multiple forms of verification (e.g., password, fingerprint, OTP) before granting access.

Risk-Based Authentication: Assessing user behavior and context to dynamically adjust authentication requirements based on potential risks.

Application Security:

Application Whitelisting: Allowing only approved applications to run, preventing the execution of unauthorized or malicious software.

Application-Centric Authorization and Security: Implementing application-specific authorization policies and implementing security measures directly within applications, such as encryption, code signing, and runtime protection.

These Zero Trust practices collectively work to defend corporate assets in depth and implement a security architecture that assumes breach and enforces rigorous access controls. This not only reduces the attack surface, but also minimizes potential damage. By safeguarding networks, infrastructure, people, and applications, organizations can better defend against the increasingly sophisticated tactics employed by malicious actors in the digital realm. A comprehensive Zero Trust approach ensures that security is embedded at every layer, providing a robust defense against cyber threats.

Implementing Zero Trust

Some of the components of a zero trust strategy can be bought. For example, network equipment vendors offer products that make it easier to implement micro-segmentation or define a perimeter. You can buy IAM solutions that make it easier to up your authentication game, simultaneously reducing phishing and the burden on your employees, partners, and customers (hint: use passkeys). You can buy endpoint security clients for devices that make it easier to manage corporate devices and to know the security posture of both corporate and personal devices. Authorization platforms like Cedar are available to control access to your infrastructure and applications.


While vendors provide ready-made solutions for many aspects of Zero Trust, you'll still need to tailor these solutions to your organization's unique needs and integrate them into a coherent strategy. Here's a breakdown of the things you need to do on your own (i.e., you can't buy these):

Policy Development:

Access Policies: You'll need to design access policies that define who can access what resources and under what circumstances.

Authentication Policies: Developing policies for user authentication, device verification, and authorization.

Organizational Policies: The organization must define how it governs the various aspects of Zero Trust and the underlying identity infrastructure.

Identity and Access Management (IAM):

Identity Management Infrastructure: Building a user identity repository, user directories, and user profiles.

Access Control Logic: Developing the logic that enforces access controls based on user roles and permissions.

Custom Integrations:

Integration with Existing Systems: If you have legacy systems, you might need to develop custom integrations to ensure they adhere to the Zero Trust model.

Training and Awareness:

Security Awareness Programs: Creating training materials and programs to educate employees and stakeholders about Zero Trust principles and best practices.

Continuous Monitoring and Analysis:

Threat Detection Logic: Developing mechanisms to continuously monitor network traffic, endpoints, and applications for suspicious activities.

The Zero Trust components in the preceding list require internal expertise and a deep understanding of your organization's structure and workflows. You might need to change how your organization does authentication, develop a process for installing and maintaining device clients, perform numerous integrations, create authorization policies as well as organizational policies, build threat dashboards, and institute a training program. You can get help doing this, but ultimately it's up to you.

Zero Trust represents a big change for many organizations. Implementing a Zero Trust strategy involves not just changing the architecture of your network, infrastructure, and applications, but your organizational culture as well. In a hyper-connected digital landscape marked by relentless cyber threats and evolving attack vectors, the Zero Trust model is the best defense available. By challenging the conventional "trust but verify" approach, Zero Trust asks organizations to embrace an "assume breach" mindset, demanding continuous vigilance to authorize every access.

Photo Credit: Open Gate from aitoff (Pixabay License)


Damien Bod

Securing APIs using ASP.NET Core and OAuth 2.0 DPoP

This article shows how an ASP.NET Core application can access an ASP.NET Core API using OAuth Demonstrating Proof-of-Possession (DPoP). This is a really powerful security enhancement which is relatively easy to support. The access tokens should only be used for what the access tokens are intended for. OAuth DPoP helps force this. This solution was […]

This article shows how an ASP.NET Core application can access an ASP.NET Core API using OAuth Demonstrating Proof-of-Possession (DPoP). This is a really powerful security enhancement which is relatively easy to support. The access tokens should only be used for what the access tokens are intended for. OAuth DPoP helps force this. This solution was created using Duende IdentityServer and the Duende samples.

The OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) is still a draft standard.

Code: https://github.com/damienbod/DPOP-aspnetcore-idp

Solution setup

Three applications are used in this solution. One application is the OpenID Connect and OAuth server with identity management. This application issues the tokens. The application is implemented using Duende IdentityServer and ASP.NET Core identity. A second application is used as the web client. This application is an ASP.NET Core Razor page application. The application is an OIDC confidential client with PKCE and requests DPoP access tokens. When the Web client sends API requests, it uses the DPoP access token and needs to create the proof token with the required specification claims and valid values for these claims. The third application is the API accepting valid access tokens. The API is implemented using ASP.NET Core with swagger and accepts access tokens using OAuth DPoP validation. The DPoP proof token is required as well as the correct cnf claim value in the access token and all the standard access token validation bits.

How DPoP works

The best way to understand how DPoP works is to read the standard.

https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop

Duende IdentityServer client config

Duende IdentityServer provides very neat APIs for using OIDC and integrating DPoP into your clients. The web client and the API scope require no extra settings, just the default OIDC confidential code flow with PKCE for the web client and the scope used for the access token. This is very developer friendly.

public static IEnumerable<ApiScope> ApiScopes =>
    new ApiScope[]
    {
        new ApiScope("scope-dpop")
    };

public static IEnumerable<Client> Clients =>
    new Client[]
    {
        new Client
        {
            ClientId = "web-dpop",
            ClientSecrets = { new Secret("--secret--".Sha256()) },

            AllowedGrantTypes = GrantTypes.CodeAndClientCredentials,

            RedirectUris = { "https://localhost:5007/signin-oidc" },
            FrontChannelLogoutUri = "https://localhost:5007/signout-oidc",
            PostLogoutRedirectUris = { "https://localhost:5007/signout-callback-oidc" },

            AllowOfflineAccess = true,
            AllowedScopes = { "openid", "profile", "scope-dpop" }
        }
    };

Web App client

The web OIDC confidential client creates a key for DPoP. When requesting access or refresh tokens, the client sends the identity server a proof token created with this key. The access token is returned with a cnf claim (bound using the dpop_jkt request parameter; see Authorization Code Binding to DPoP Key), which is used in the API’s DPoP validation. I used an ECDSA P-384 certificate to create the key for the DPoP proof tokens. The certificate is generated using .NET Core and persisted in PEM files. You could also generate it using Azure Key Vault or any other tool. If using this with a public client like a native mobile app, a different key can be generated for each client; for example, on a mobile device, the key could be created when the app is installed for the first time and persisted just for that client.

services.AddAuthentication(options =>
{
    options.DefaultScheme = "cookie";
    options.DefaultChallengeScheme = "oidc";
})
.AddCookie("cookie", options =>
{
    options.ExpireTimeSpan = TimeSpan.FromHours(8);
    options.SlidingExpiration = false;

    options.Events.OnSigningOut = async e =>
    {
        await e.HttpContext.RevokeRefreshTokenAsync();
    };
})
.AddOpenIdConnect("oidc", options =>
{
    options.Authority = "https://localhost:5001";
    options.ClientId = "web-dpop";
    options.ClientSecret = "--secret--";
    options.ResponseType = "code";
    options.ResponseMode = "query";
    options.UsePkce = true;

    options.Scope.Clear();
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("scope-dpop");
    options.Scope.Add("offline_access");

    options.GetClaimsFromUserInfoEndpoint = true;
    options.SaveTokens = true;

    options.TokenValidationParameters = new TokenValidationParameters
    {
        NameClaimType = "name",
        RoleClaimType = "role"
    };
});

var privatePem = File.ReadAllText(Path.Combine(_env.ContentRootPath, "ecdsa384-private.pem"));
var publicPem = File.ReadAllText(Path.Combine(_env.ContentRootPath, "ecdsa384-public.pem"));
var ecdsaCertificate = X509Certificate2.CreateFromPem(publicPem, privatePem);
var ecdsaCertificateKey = new ECDsaSecurityKey(ecdsaCertificate.GetECDsaPrivateKey());

services.AddOpenIdConnectAccessTokenManagement(options =>
{
    var jwk = JsonWebKeyConverter.ConvertFromSecurityKey(ecdsaCertificateKey);
    jwk.Alg = "ES384";
    options.DPoPJsonWebKey = JsonSerializer.Serialize(jwk);
});

services.AddUserAccessTokenHttpClient("dpop-api-client",
    configureClient: client =>
    {
        client.BaseAddress = new Uri("https://localhost:5005");
    });

services.AddRazorPages();

API implementation

Implementing the API requires a bit more logic. The Duende samples provide a code implementation of the specification. The access token is validated, and ConfigureDPoPTokensForScheme adds all the required OAuth DPoP validation. The Duende sample is included in the links.

services.AddAuthentication("dpoptokenscheme")
    .AddJwtBearer("dpoptokenscheme", options =>
    {
        options.Authority = stsServer;
        options.TokenValidationParameters.ValidateAudience = false;
        options.MapInboundClaims = false;
        options.TokenValidationParameters.ValidTypes = new[] { "at+jwt" };
    });

services.ConfigureDPoPTokensForScheme("dpoptokenscheme");

builder.Services.AddAuthorization(options =>
    options.AddPolicy("protectedScope", policy =>
    {
        policy.RequireClaim("scope", "scope-dpop");
    })
);

Testing

When all three applications are started, the UI application can authenticate and gets an access token back with a cnf claim. This is sent to the API along with a DPoP proof token and the API can validate the access token using OAuth DPoP as per specification.

The web client redirects to the IDP to authenticate. The dpop_jkt parameter in the redirect URL (or in the request body, when other standards are used) carries the value used for the cnf claim; it is the thumbprint of the client’s proof-of-possession key. It is used in combination with PKCE.

https://localhost:5001/connect/authorize?
  client_id=web-dpop
  &redirect_uri=https%3A%2F%2Flocalhost%3A5007%2Fsignin-oidc
  &response_type=code
  &scope=openid%20profile%20scope-dpop%20offline_access
  &code_challenge=fDrpFF8OTmalCNi6KeM-L3CX-Pa8Hozsnw6vf-Q9_Tk
  &code_challenge_method=S256
  &nonce=...
  &dpop_jkt=loM5ro2mqcEyBS46Z8CN1bP_wlYn2XPaGmTXQUzQc94
  &state=...

The HTTP header Authorization contains the DPoP value with the access token:

DPoP access-token-with-cnf-claim

The access token can be decoded:

{
  "iss": "https://localhost:5001",
  "nbf": 1691659611,
  "iat": 1691659611,
  "exp": 1691663211,
  "aud": "https://localhost:5001/resources",
  "cnf": {
    "jkt": "loM5ro2mqcEyBS46Z8CN1bP_wlYn2XPaGmTXQUzQc94"
  },
  "scope": [
    "openid",
    "profile",
    "scope-dpop",
    "offline_access"
  ],
  "amr": [ "pwd" ],
  "client_id": "web-dpop",
  "sub": "1cdddaaf-c671-4726-841b-d9b4abdede3d",
  "auth_time": 1691659610,
  "idp": "local",
  "sid": "7602A8B025FF27CFE5ED34C62DC10B8E",
  "jti": "5EC3ECF725B4BA3F608323C380FCD6F6"
}

The HTTP header DPoP contains the proof token used for the DPoP validation and the access token request.

proof token:

{
  "alg": "ES384",
  "typ": "dpop+jwt",
  "jwk": {
    "kty": "EC",
    "x": "c_Ua8nenm8XjoXvxcFvonuNeJgYg3YBAvhY2zuBI5IYl1mOhMFHWtacGoLfzA11W",
    "y": "zXJSqLxYgyqq3jPdBeuqgcvuW9d4JwVL_fgsqwT8wvr05uihuU5FsX3DY-LtGF7E",
    "crv": "P-384"
  }
}
{
  "jti": "xaWoIEMRqtRbrta13AVceLUVJxW1zqbl2RcQhmuG0nA",
  "htm": "GET",
  "htu": "https://localhost:5005/api/values",
  "iat": 1691659934,
  "ath": "5p3pmE3nvOURS-mZoEsPZkWb49vFNp7cuFrXs1mITxM"
}

The different claims are used for validation as per specification. Checking DPoP proof tokens:

https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop#section-4.3

When the three applications are started, everything works as expected. Using DPoP increases the level of security for this type of flow.

Links

https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop

https://github.com/DuendeSoftware

https://github.com/DuendeSoftware/Samples/tree/main/IdentityServer/v6/DPoP

https://github.com/DuendeSoftware/IdentityServer.Templates

https://docs.duendesoftware.com/identityserver/v6/tokens/pop/dpop/

https://developer.okta.com/docs/guides/dpop/main/#build-the-request

https://darutk.medium.com/illustrated-dpop-oauth-access-token-security-enhancement-801680d761ff

https://learn.microsoft.com/en-us/entra/msal/dotnet/advanced/proof-of-possession-tokens


Wrench in the Gears

Community Currencies As Signals of Social Computing

Tonight Jason and I did a stream discussing how the biophysics of cells and semi-permeable membranes relate to the econophysics of smart contracts and social cybernetics. Jason made a short teaser clip this morning for my channel, and someone by the name of Persepolis22 left a nice series of comments. I think the exchange provides [...]

Tonight Jason and I did a stream discussing how the biophysics of cells and semi-permeable membranes relate to the econophysics of smart contracts and social cybernetics. Jason made a short teaser clip this morning for my channel, and someone by the name of Persepolis22 left a nice series of comments. I think the exchange provides a useful overview of the discussion, so I am sharing it below. 

Comment by Persepolis22:

“If money will be replaced by tokens, and the vision for the next iteration of this game is based on community/complementary currencies, how exactly will they be able to implement it? This is where I get stuck. Wouldn’t it lead to ‘inflation’ if everyone is able to drop their token? And how will they make sure that your token will be accepted by certain stores or whatever it is that you want to access/purchase? I feel like there is something extremely important that we are still missing, and it makes me kind of nervous to be honest.

Perhaps this might be the link to the CBDC? There’s a talk by Hyun Song Shin from BIS, and it gets quite clear that CBDC will serve as a settlement between the banks/ institutions. So at the end of the day, all the tokens (or the ones that will make it) will need to be approved and backed by the CBDC.

There’s something else I wanted to mention, a project called web of energy. They’re still quite off the radar, but what they will do is build the foundation for the energy transition, integrating every single piece into the grid. So, when there isn’t enough energy in one area of the city, it will simply be deployed from a different area, keeping everything in balance. This is the end goal, and it can be applied elsewhere. Not necessarily regarding technical resources, but rather ‘human networks’. I’m still not getting to the core of it. What are your thoughts?”

 

Alison McDowell’s Response:

I wrote a long response, and unfortunately it disappeared. I would suggest watching tonight’s stream live or later. Here are the highlights of what I wrote. The goal is a networked global super-intelligence for bio-hybrid computation. The decentralized energy grid and economics based on token engineering (programmable currency) is part of the operation of that system. Embodied intelligence integrated with invisible ubiquitous sensing is what is needed for the computation tasks. Perhaps the system running the programming is not embodied? That system, as I now see it, desires to tap into humans as tools for problem solving – creativity, emotion, imagination, soul, and social relations. That is what is driving the internet of bio-nano things.

We are being turned into manipulatives. Picture a kindergartener using blocks for counting. I think this is related to the rise of addiction, social impact markets in “recovery,” and adoption of psychedelic treatments in that space linked to remote brainwave monitoring and video gaming. To me, it’s starting to feel like some disembodied system wants to use our bodies and consciousness to access information from altered states for the larger program.

However, in order for the computer to operate effectively, there needs to be a homeostatic governor at the top, which is likely the BIS and its ultimate ledger. That fact will be obscured by nested levels of settlement clearing. Cross border payments are not really just about countries. Borders can be among established community currencies. In this scenario, as I see it, most people could be accessing something that looks like “community currency.” Remember, Bernard Lietaer, creator of the Euro, wanted an ecology of currencies. It is this variety that is needed for game mechanics and signal robustness for prediction and social steering.

Why do you think the “resistance” narrative has been focused almost exclusively on CBDCs? Why was Catherine Austin Fitts uplifted as a thought leader? Why are most of the influencers accepting and promoting crypto as liberation? This is all by design. Computation based on social signaling will not work under authoritarian control. It is “choice” and “freedom” that make the “wisdom of crowds” visible to the machine. “Choice” reveals the complex patterns that the AI is seeking. Sure, the “choices” will be from a limited list of options, but the narrative has been set up so that, for progressive and conservative populists alike, freedom looks like local programmable coins.

This talk features Fennie Wang of IXO Foundation, the folks who financed putting preschoolers on blockchain in South Africa, so they could build social capital. IXO was also behind the development of “smart” social impact bonds and collaborates with Shanzhai City, the organization that digitized the Hong Kong time bank. Wang talks about regional federated coins. This may provide some insights for you on what it might look like. IXO Foundation is at the center of deployment of this new token-engineered model.

Sunday, 13. August 2023

Wrench in the Gears

Guest Post by Leo Saraceno – The Game of Risk: How Insurance Behemoths, Disaster Recovery, and State Intelligence Laid the Foundations for the Planetary Computer

The following guest post was written by Leo Saraceno and expands on his previous work at Silicon Icarus and the series “Atomic Ecologies” he did with me and Jason Bosch last summer. Last year I started a relationship map to explore the intersecting themes of corporate environmentalism, biogeochemisty, natural capital, and energy economics as they [...]

The following guest post was written by Leo Saraceno and expands on his previous work at Silicon Icarus and the series “Atomic Ecologies” he did with me and Jason Bosch last summer. Last year I started a relationship map to explore the intersecting themes of corporate environmentalism, biogeochemistry, natural capital, and energy economics as they pertain to public policy innovation and humanity’s increasingly digitized relationship to nature. The screenshot below features several players who are referenced in this post, including AXA, Colin Prentice, and IBIS. I also direct your attention to Howard Odum, whose “emergy” language of energetic exchange I sense has a lot to do with the shift to a decentralized electrical energy grid and digital programmable tokens framed as “community currencies.” In this way social exchanges will be remade as “choice” signals to inform risk modeling and digital governance. Leo lays out just such a scenario in this piece relating to Mitiga Solutions and their crossover work in the parametric insurance modeling and token engineering sectors. I am grateful to him for diving into the somewhat boring world of global insurance markets to bring us clarity around the planned role that natural disaster prediction will play in advancing the attempted creation of a globally-networked super-intelligence.

Interactive Map Link: https://embed.kumu.io/4c6ea2f6cbd20150420bb2f4232faf3d#untitled-map?s=bm9kZS1BSFBRZENTYw%3D%3D

 

The Game of Risk: How Insurance Behemoths, Disaster Recovery, and State Intelligence Laid the Foundations for the Planetary Computer By Leo Saraceno

My previous writings, published at siliconicarus.org, introduce blockchain and web3 as forces that will drive social transformation. Picture waves combining to generate a storm surge with the power to transform natural life across the planet. Social impact finance, touted for its supposed sustainability and transparency, compels adoption of these cybernetic, steering technologies across many facets of our lives.

Smart contract protocols, the next-generation internet, are more than a complicated machine. Freed from the constraints of display screens, the logic circuits of web3 catalyze a digitally-mediated form of guided evolution using a potent combination of complexity theory, free market economics, token engineering, and Internet of Things-enabled game mechanics. A choppy sea of loosely-connected organizations and individuals is moving us steadily, yet almost imperceptibly, forward into a future governed by real-time data streams fed into simulation modeling. The overlapping interests of global finance, industry, the military, state intelligence, and academia have flowed together to create a powerful current with a sinister undertow.

Natural disasters and environmental degradation are now being leveraged to create a sense of urgency around adopting automated technologies to manage risk in lieu of human-centered analysis and decision making. The insurance industry has long been a leader in the use of real time data to inform prediction and risk. This post will discuss a division of the insurance industry that deals with the environment and climate and expands upon a previous piece, Natural Asset Managers – How Decentralized Ledger Technology Will Drive the Ecosystem Services Sector, that I wrote in February 2023.

A key element in both articles is the concept of ecosystem services described below:

“The phrase ‘ecosystem services’ refers to all quantifiable benefits that life provides to humans, such as: water purification, natural pollination, air cleansed by forests, medicine etc. Quantification takes place at a scientific level, (measuring inputs and outputs using systems analysis, relevant chemical domains, and statistics) and is translated into monetary effects.”

The ecosystem services paradigm overlaps with the catastrophe insurance market. Both ground themselves in large scale environmental data modeling. In today’s globalized economy, it’s not difficult to imagine why massive multinational insurance companies would be keenly interested in enhancing their capacity to model economic activity and ecological outcomes at an ever more granular level of detail.

Ecological health and economic activity are being integrated into one another as ESG (Environmental, Sustainable, Governance) portfolios aligned with the United Nations Sustainable Development Goals carve out larger and larger pieces of the emerging financial markets pie. The insurance industry has stepped into this space, serving as a bridge between the interests of institutional conservation and finance, a trusted facilitator to help realize the many “green new deals” that are just now peaking above the horizon. The concept of “natural capital” with its emphasis on quantification fits comfortably under the umbrella of insurance, since this sector excels at calculating the probability of events that may lead to claims and impact their bottom line.

Source: https://www.northropgrumman.com/corporate-responsibility/technology-for-conservation/

Having a comprehensive understanding of insured assets is vital to the insurance game of risk. By agreeing to embrace the data-driven “sustainability” metrics that are central to social-impact finance, we are tacitly agreeing to transform the substance of the Earth into a new asset class. That asset class is defined by assigned values logged in digital accounting ledgers that document the relative value of natural capital within an ecosystem services context. Each unit of life, from the smallest microbe to the mightiest river, is being transformed into a number that supposedly reflects its relative value to the continued functioning of life on Earth. To “save” the planet we will simulate it, and accurate simulations are only possible through the use of pervasive sensing technologies paired with sophisticated modeling software.

AXA, a French multinational insurance company that ranks among the largest in the world, helped launch the Taskforce on Nature-related Financial Disclosures (TNFD) in the summer of 2021. Some of the largest financial and corporate institutions in the world are backing this effort, which boasts a collective $20 trillion in assets. In their 2022 Climate and Biodiversity report, AXA published details from one of its hundred TNFD pilots. The project involved deployment of an “innovative biodiversity-specific data measurement tool” created for them by IcebergDataLabs. The software measured the impact of AXA’s business on biodiversity.

The Corporate Biodiversity Footprint, a “data measurement tool,” involves four steps:

1. Assess the commodities and products purchased and sold by the company throughout its value chain based on IDL’s internal physical Input/Output and allocate the company’s product flows by sector (NACE(6) sectorization);

2. Calculate the company’s environmental pressures identified by the CBF based on its product flow based on a Life-Cycle Analysis;

3. Translate the pressures through pressure-impact functions (GLOBIO) into one and the same biodiversity impact unit;

4. Aggregate the different impacts into an overall absolute impact at a company level.
(emphasis added)

The framework uses “pressure impact functions” to transform the company’s physical activities into biodiversity metrics. A Dutch organization called GLOBIO developed the mathematics behind this standard. The biodiversity metric is called Mean Species Abundance (MSA). This is calculated by:

“…dividing the abundance of each species found in relation to a given pressure level by its abundance found in an undisturbed situation within the same study, truncating the values at 1, and then calculating the arithmetic mean over all species present in the reference situation.”

GLOBIO’s biodiversity metric compares commercial sites, for example the construction of new roads or buildings, with comparable areas that have been identified as being “undisturbed.” The scope of the pilot analysis, however, was limited by the fact that the environmental assessment, led by Dutch PBL, only included warm blooded vertebrates and plants. Overlooked were the invertebrates and microorganisms that are universally recognized as sacred, essential beings that make life on earth possible. In a classic Hegelian dialectic move, however, a critique of this approach may actually serve to legitimize an expanded program of digital life capture. What Dutch PBL overlooked is being addressed by the ARISE project, which is active in another corner of the Netherlands. ARISE is cataloging ALL multicellular life in the country to construct a “bio cloud” for computational use by local supercomputers.

The creation of markets in ecosystem services could not happen without substantial infusions of catalytic capital from entities and individuals that stand to benefit from their existence. AXA maintains an investment fund, the AXA Research Fund, that underwrites scientific and policy research needed to justify the shift to a global governance / planetary computer paradigm. Narrative management is everything, and for a price, academics can be incentivized to become willing partners, eager to provide made-to-order research for such purposes. An example of AXA’s influence is seen in their sponsorship of the endowed chair at the Biosphere and Climate department of Imperial College London.

The position is currently occupied by Colin Prentice, an influential researcher and the second author of the 1996 paper that laid out the Integrated Biosphere Simulation Model (IBIS). That effort was carried out in partnership with Oak Ridge National Laboratories and the SAGE center at the University of Wisconsin. Oak Ridge National Laboratories became the birthplace of mathematical and computational models of ecological systems in the post WWII era. For additional background, see our presentation series on Oak Ridge and “ecological cybernetics.” A follow up paper on this topic is underway.

NASA describes IBIS as follows:

“a first step toward gaining an improved understanding of global biospheric processes and studying their potential response to human activity.”

Source: https://daac.ornl.gov/MODELS/guides/IBIS_Guide.html

Professor Prentice’s current project is called LEMONTREE (Land Ecosystem Models based On New Theory, Observations and Experiments). Its funding comes from Schmidt Futures, founded by Eric Schmidt, formerly of Google/Alphabet. The principal aim appears to be developing ways to evaluate evapotranspiration (evaporation + transpiration). According to one of their citations, about 70% of all terrestrial precipitation returns to the atmosphere via evapotranspiration, representing a major factor in predicting the water cycle and its associated energetic ramifications.

LEMONTREE formalized a collaboration between researchers at several universities whose research centers development of next-generation vegetation models. The theories underpinning their current modeling work use optimality and natural selection principles. Optimality refers to tradeoffs such as plants maximizing carbon from photosynthesis while minimizing water loss. Prentice considered optimality principles to be essential for building universal land surface models.

Optimality principles are the ‘missing law’ of biology for Earth System modeling. They have the potential to generate substantially simpler and more robust models that fully exploit the power of natural selection to eliminate suboptimal trait combinations, and the richness of relevant observational data that are now available to ecosystem scientists for the development and evaluation of theory and models.

Source: https://www.imperial.ac.uk/news/217793/new-lemontree-project-aims-improve-earth/

Prentice is also the director of the Leverhulme Centre for Wildfires, Environment and Society, which claims to be the only research facility in the world devoted to the fundamental biophysical dynamics of wildfires. The Leverhulme Centre works with LEMONTREE and has also received grants from the AXA Research Fund. AXA funds another department chair in Turkey at the Koç University occupied by Özgur B. Akan. Akan’s research involves building an internet of things enabled by nanotechnology:

“The proposed research program aims to implement the very first artificial micro/nanoscale MC systems in liquid and air environments by fabricating MC-transceivers (MC-TxRxs) based on novel nanomaterials such as graphene, hydrogels, and xerogels. Given the peculiarities arising from nano-physical and biochemical processes, and the computational and energy-based limitations of nanomachines, the program will revisit conventional ICT tools and devise new ones for MC based on empirical knowledge gained through the fabricated MC system prototypes.

The project will focus on two applications: (i) a distributed molecular nano-sensor network for environmental monitoring, and (ii) an artificial gut-brain axis as a testbed for the development of novel ICT-based treatment techniques for psychological and intestinal diseases.” Source

Professor Akan received his PhD through Georgia Tech in 2004 under Ian Akyildiz. Akyildiz is one of the top researchers internationally on network technologies. He spent thirty-five years at Georgia Tech leading the Broadband Wireless Networking Laboratory; is cited over 150,000 times; has established research centers all over the world, including in the UAE, Saudi Arabia, Spain, Russia, and India; and is founder/chief editor of the International Telecommunication Union Journal of Future and Evolving Technologies.

(This is Alison – I find it very interesting that Akan has affiliated himself with Cambridge University as indicated by this video promoting his online “Internet of Everything” course. I recently shared a guest post by Lorraine Davison, who has been exploring Teilhard de Chardin and Julian Huxley’s role in advancing noetic convergence, and she specifically references Cambridge as being the incubator for this school of thought. You will note the family name Fry, which comes up later in this post relating to Lewis Fry Richardson, a pioneer in weather forecasting. I’m providing an excerpt below for convenient reference, but I do encourage you to check out Lorraine’s entire piece here after you finish Leo’s post.)

“Dear reader, congratulate yourself. You are a rational animal sitting quite near the pinnacle of evolutionary progress. I say “quite near,” because the actual pinnacle is occupied by a group of nineteenth-century British academics and their current intellectual heirs. These luminaries have raised evolution itself to the status of a god, and they are the self-appointed spokesmen for this new religion of progress. Refusal to march to the drum beat of this relentless and accelerating advance is now the only heresy. In fact, anyone who lacks utopian vision or exhibits distasteful attachments to past and family will likely be trodden underfoot. You must understand that this is not science or politics burnished with religious zeal; it is an entirely new global religion. We, my friends, have fallen into the hands of fanatical religious zealots.

The embryo of this new religion had already been implanted at the University of Cambridge as the turbulent seventeenth century dawned. The University had become a focus for religious and political dissent and went on to supply many of the Puritan settlers for the new England that was emerging across the Atlantic.

After the English Civil War (1642-1651) dissenters were required to either stay out of the university system or publicly subscribe to the beliefs of the Church of England. So, religious dissent outside of mainstream institutions became a fertile breeding ground for all manner of exploratory thought and a not insignificant factor in spawning the industrial revolution. In time, a shadow ruling class emerged from these dissenting thinkers who also went on to provide the impetus for the great philanthropic movements of the nineteenth century.

This powerful group was composed of three strands of the British upper-middle class: evangelical members of the Church of England (often known as the Clapham Sect), Quakers, and philosophical agnostics. These families came to form a discrete group as they tended to intermarry. Many of the Quakers and agnostics eventually entered the Church of England for reasons of pragmatism and self-interest. Here, they strengthened the evangelical faction that worked tirelessly for the abolition of slavery. 

These influential dynasties included the Stephens (including James, father of Virginia Woolf), the Darwins, the Huxleys, the Wedgwoods, the Frys, and the Gaskells. Their rise to positions of social and cultural importance was a defining moment for modern Britain, and they replaced many members of the aristocracy in influential positions in government, church, and academia. Their offspring were more likely to be educated at Cambridge than Oxford University and provided many of the fellows of the colleges. After the loosening of religious restrictions in 1871, those members of the group who had clung openly to their dissenting views were also able to enter the university system.” Lorraine Davison

It’s not much of a leap to see how the “distributed molecular nano-sensor network for environmental monitoring” imagined by Akan meshes with the data collection demands of the maturing ecosystem services paradigm. For a public with little to no comprehension of pending bio-digital convergence, the technological imperative to observe the environment and measure our impact on it seems straightforward and productive. Most people cannot imagine the nefarious secondary goals that underpin corporate environmentalism. Virtualization of life carried out in the name of saving the planet is incomprehensible. 

If, however, one considers ubiquitous computing and pervasive sensing within the context of our country’s long history of cybernetic research and consciousness studies, it is hard to discount the possibility that “green finance” could be a powerful tool to repurpose life on earth as networked nodes used to advance new forms of social computation, the nature of which is beyond our current understanding.

Insurance Risk Modeling Overview: Parametric Insurance and Global Digital Twins

While the previous section focuses on AXA, there’s a lifetime’s worth of research that could be done on similar organizations pursuing related endeavors. In this section I examine how the growth of the parametric insurance sector advances the larger program of forcing life on Earth into simulation modeling software. In parametric insurance, claims are paid automatically based on a pre-specified metric. Parametric structures are most common in catastrophe insurance, where the pre-specified trigger can be a weather metric, such as wind speed, or a Richter scale reading in the case of earthquakes. There is always a third-party company tasked with the environmental and economic modeling of the specific event. These modelers inform the terms of the insurance contract.
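
To make the mechanism concrete, here is a minimal sketch of how a pre-specified trigger maps an observed metric to a payout. The wind-speed thresholds and payout tiers are invented for illustration; they are not drawn from any actual contract discussed in this post.

```python
# Minimal sketch of a parametric payout with a hypothetical wind-speed trigger.
# Thresholds, payout tiers, and the station reading are illustrative only.

def parametric_payout(observed_wind_speed_kmh: float) -> float:
    """Return the payout owed under a pre-agreed wind-speed trigger.

    Unlike indemnity insurance, no loss adjuster visits the site: the payout
    depends only on the reported metric crossing the contractual thresholds.
    """
    payout_schedule = [
        (250.0, 1_000_000.0),  # catastrophic winds -> full limit
        (200.0, 500_000.0),    # severe winds -> partial payout
        (150.0, 100_000.0),    # trigger threshold -> minimum payout
    ]
    for threshold, payout in payout_schedule:
        if observed_wind_speed_kmh >= threshold:
            return payout
    return 0.0  # metric never crossed the trigger; no claim is paid

# The third-party modeler supplies the metric (e.g., peak gust at an agreed
# weather station); the contract simply maps it to a payment.
print(parametric_payout(212.0))  # -> 500000.0
```

The point is that settlement depends only on the agreed metric, which is why the choice of third-party modeler and data source carries so much weight.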

Source: https://www.mitigasolutions.com/risk-modelling
Source: https://www.mitigasolutions.com/

I’ve already discussed Mitiga Solutions in “Blockchain, Digital Twins, and Global Brain Economics Part 1,” but I wanted to expand on a few points in this post. Mitiga Solutions was spun out of the Barcelona Supercomputing Center. Barcelona is a hot spot of advanced digital networks and the data economy where Francesca Bria, former Chief Digital Technology and Innovation Officer for this supposed “rebel city,” implemented city-wide sensor networks and carried out AI computation on citizen communications. Alison’s piece, Mondragon, Focolare, and Inclusive Capitalism, opens with a discussion of Bria’s deployment of smart city technologies and how they were used to support lockdown measures.

Mitiga’s customers include multinational defense contractors, NASA, NATO, and the United Nations Development Programme (UNDP). Their services are also sought out by insurers like AXA and Willis Towers Watson. The firm appears to specialize in atmospheric sciences and has cultivated relationships with Europe’s major air traffic control systems.

Mitiga was founded by Alejandro Marti, a graduate of Rutgers University who worked for the state of New Jersey as a GIS specialist for several years before joining the Barcelona Supercomputing Center, where he worked for over a decade. Marti heads a United Nations focus group on AI for natural disasters and serves on the board of PRACE (Partnership for Advanced Computing in Europe). Mitiga joined Microsoft’s startup program in 2020, which led to an expanded partnership in 2022 to co-develop a “fully transactional” insurtech SaaS (software as a service) platform for risk management. Microsoft and Mitiga are in the process of evaluating Microsoft’s Planetary Computer platform. Parametric insurance is an important use case for the planetary computer.

Source: https://news.microsoft.com/es-es/2022/04/21/mitiga-solutions-and-microsoft-expand-collaboration-to-mitigate-the-effects-of-natural-disasters-caused-by-climate-change/

The Danish Red Cross hired Mitiga Solutions to be the third-party modeler for a blockchain-based parametric insurance contract for a potential volcano catastrophe. The creation of mathematical models of the environment based on quantifiable data mirrors the modeling used in ecosystem services use cases but is applied slightly differently in an insurance context. Parametric insurance modeling doesn’t simply evaluate the risk of a covered event occurring. The potential impact on insured assets is a second part of the model, which considers physical assets AND purely financial assets, such as securitized catastrophe bonds.

An article published by PRACE about Mitiga Solutions’ approach to epidemic modeling in sub-Saharan Africa, while not directly about insurance and ecosystem services, helps lay out the type of work they do. The three phases of their work involved community-based participatory surveillance, tokenized incentives that encouraged data-sharing, and a comparison of that data with institutional perspectives:

“…digital participatory surveillance, developed almost ten years ago as part of a European project on influenza. It involves gathering information from people about how they feel and symptoms they may be experiencing, which is then cross-referenced with information from hospitals and doctors on the ground.”

Blockchain-based “community currencies” fueled data collection:

“Mitiga is working on promoting a programme in Africa with the Red Cross that aims to incentivise people to provide information about their health. “This concept, known as a community inclusion currency, is based on blockchain technology and provides people with tokens that can be exchanged for certain good such as food, transport or education when they give information about their health status…”

Tokens were awarded when people shared information about their health status. Tokens and associated health surveys became “signals” used by Mitiga to run their models. Regardless of what intentions we believe Mitiga or their partners have, these models are meant to guide targeted populations towards pre-determined outcomes established by the program’s designers. Was informed consent given by participants? Were they made to understand the true nature of the socio-technical systems into which the data of their lives was being fed? While not stated in the article, the only Red Cross-partnered, blockchain-based community currency effort I’m aware of is Grassroots Economics. In a prior article series, I explain how the system works and describe the above-mentioned organizations’ relationships and significance.

Mitiga’s mathematical models used information gathered during participatory surveillance activities, tied to where each data point was sourced. Responses to health questionnaires about how people felt were later cross-referenced with government data.

Source: https://prace-ri.eu/model-aggregating-for-epidemics/

This data is then integrated into agent-based models, programs that can be used to simulate the actions and interactions of individuals. Data for these computational models comes from a number of sources. Governments provide information on the economic activity of their people, which can be supplemented by information from mobile phone networks. Phone data provides valuable insights into the movement of individual people as well as their social networks. In some areas, such as sub-Saharan Africa, the data sought may be confused by shared ownership of mobile phones. In these cases, gamification can be used, asking representative selections of a population to play simple games where they provide details about what they do and where they go in a typical week. Notice how gamification on mobile devices can be used to identify the movements of individual people, agents in the simulation.
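
As a rough illustration of what an agent-based model of this kind looks like, here is a toy sketch. The agents, locations, and transmission probability are invented for illustration; Mitiga’s actual models are proprietary and far more detailed.

```python
# Toy agent-based sketch of a mobility-driven epidemic simulation.
# Agents, locations, and dynamics are invented; this is not Mitiga's model.
import random

class Agent:
    def __init__(self, home: str, workplace: str, infected: bool = False):
        self.home = home
        self.workplace = workplace
        self.infected = infected

def step(agents, transmission_prob=0.05):
    """One simulated day: agents mix at their workplaces, then return home."""
    by_location = {}
    for a in agents:
        by_location.setdefault(a.workplace, []).append(a)
    for location, group in by_location.items():
        if any(a.infected for a in group):
            for a in group:
                if not a.infected and random.random() < transmission_prob:
                    a.infected = True

# The home/work pairs stand in for the phone-network and gamified survey data
# described above; in practice those sources calibrate who mixes with whom.
agents = [Agent("district_a", "market"),
          Agent("district_b", "market", infected=True),
          Agent("district_b", "school")]
for day in range(30):
    step(agents)
print(sum(a.infected for a in agents), "of", len(agents), "agents infected after 30 days")
```

Calibrating who mixes with whom is precisely what the phone records, health surveys, and token transactions are for.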

Tokenized community currencies are not given with charitable intent. Rather, the tokens, programmed for use at specific businesses, are tools that indirectly get people to freely offer up information about the patterns of their economic lives. This data can then be aggregated and used for prediction modeling of the larger community, which leads to the third phase, “business intelligence.” In this phase signals are used to calculate the potential effect of an ongoing catastrophe on a given community.

“The final part of the strategy adds a layer of business intelligence that helps to reduce the financial impact of any disease outbreak, providing information about how supply and demand, transport, financial markets and other institutions might be affected. “Our mission is to stop epidemics from turning into pandemics while reducing negative social and financial impacts…” says Dr Marti.

Making such predictions requires reliable information about how these interconnected businesses operate. Payment systems, identity services, and supply chain tracking are methods by which such data can be generated, which helps explain why humanitarian NGOs are encouraging people to use a digital community currency in the first place. Every transaction from its point of origination can be tracked from person to person. Populations involved in these pilots are guinea pigs for behavioral surveillance and programming. Participants, remade as digital “agents,” have their actions shaped by the game mechanics embedded in their environments, which have been transformed into cyber-physical systems. Personal relationships and purchases situate each person within their larger socio-technical context. People, agents, computational nodes – all can be nudged based on the signals they emit in the form of health surveys and directed economic transactions. Widespread adoption of web3 technologies will make humanity machine-readable, and steerable, at a population level.

Now consider the Danish Red Cross’s volcano catastrophe bond. Substitute “volcanic eruption” for “epidemic.” Not only can Mitiga Solutions provide modeling to define the payment triggers for parametric insurance policies, they’re also positioned to assist clients in determining where best to allocate post-disaster funds.

“Mitiga Solutions has developed a state-of-the-art model using numerous data inputs to predict where funds will be needed, improving both the efficiency and effectiveness of humanitarian relief,” explained Alejandro Marti, CEO and Co-founder of Mitiga Solutions, Barcelona-based experts in predicting natural hazards.

Ecosystem services are defined by the contributions natural capital makes to life on Earth and by how such valuations interface with the rise of the global “green” economy. Mitiga has built its business on risk modeling techniques linking ecological valuation not to “human flourishing,” but to the economic consequences of disasters. Central to both use cases is the idea of “risk management,” where predictions are made using global simulation technologies that draw upon mass surveillance and advanced software. The Mitiga case study is a useful example that shows how risk prediction extends simultaneously across the military, industry, insurance, and development aid sectors.

Source: https://www.rodekors.dk/sites/rodekors.dk/files/2021-08/Summary%20note%20Disaster%20Risk%20Insurance.pdf

VeRisk at the Center of Parametric Insurance’s Rise

While parametric insurance has existed in some form for over two hundred years, quantitative and probabilistic models, first commercialized by Karen Clark in 1987, gained widespread popularity after her model accurately estimated the scale of insured losses from Hurricane Andrew in 1992. Using the methods that underpin modern parametric insurance models, Clark’s estimate created a new field practically overnight. Her company eventually grew into Boston-based AIR Worldwide and became the dominant third-party modeler for catastrophe insurance. According to artemis.bm, the main news source for catastrophe insurance, AIR Worldwide’s modeling accounts for 70% of the catastrophe insurance market and covers over $25 billion in assets.

Source: https://ar.casact.org/hurricane-andrews-message-to-insurers-2/

Curiously, AIR Worldwide collaborated with Integrated Research on Disaster Risk (IRDR), an organization co-sponsored by the International Science Council (ISC), formerly the International Council for Science. That council was a principal designer of the International Geophysical Year (IGY), an international effort intended to increase our understanding of geophysical, atmospheric, and solar interactions. The IGY essentially spawned the space age; the USSR’s Sputnik was launched as part of the program.

New Jersey-based VeRisk, through its predecessor the Insurance Services Office (ISO), acquired AIR Worldwide in May 2002 and rebranded it under the Verisk name in 2022. New Jersey is a hub for the insurance industry, which operates within the financial services sector. It’s worth recalling that Dr. Alejandro Marti, founder and CEO of Mitiga Solutions, attended college at Rutgers University in New Jersey and worked for the state as a lead GIS specialist for eight years. VeRisk itself originates from ISO, a non-profit started in 1971 by Daniel J. McNamara.

ISO’s quasi-monopoly on insurance data and legal standards puts it in a unique position. From its inception, ISO has been intertwined with the government and likely the military. ISO consolidated over thirty distinct state rating and reporting agencies into a single organization that served nearly three hundred insurance companies. Each state differed in the requirements and formats it imposed on insurance companies’ reports and compliance filings, so ISO was founded to provide those services. Similar to VISA, it operated as a decentralized consortium controlled by the insurance industry. ISO standardized paperwork, acted as an intermediary between the states and insurance companies, and reduced costs to the industry.

By operating at scale, McNamara’s organization had access to vast amounts of data, which increased its statistical power, improved its analysis, and led to the firm wielding considerable influence over the insurance sector. In October 2001, ISO was the organization that asked every state for permission to exclude terrorism damage from insurance contracts. ISO went public in 2009 under the new name VeRisk, in what ended up as the largest IPO that year. Today Vanguard (~10%) and BlackRock (~5%) are the largest shareholders, with Fidelity holding roughly 2.3% as well.

ISO, an early example of the power of big data, is considered one of the first insurtech organizations. It became a private company and in 1984 developed the ISOTEL network, a computerized information system. This evolved into ISONET by 1999, which the company states was the first “web-accessible claims tool.” Around this time, after acquiring the American Insurance Services Group and the National Insurance Crime Bureau, ISO reportedly held the largest claims database in the insurance industry.

Source: https://www.verisk.com/50-years/a-timeline-of-our-growth-and-innovation/

ISO updates its information each month to reflect changes in public fire-protection capabilities that affect a specific risk, including changes in fire-station locations, district boundaries, and automatic-aid agreements. It also sets Public Protection Classification codes for each fire district in the country based on its evaluation of the district’s fire-suppression capabilities. The company pioneered a GIS (Geographic Information System) capable of assessing distance to the ocean and other major bodies of water, windstorm exposure, California brush-fire hazard locations, home-to-work drive distances, crime, and personal and commercial auto rating territories. The system was developed in partnership with San Diego-based Vista Information Systems, at the time considered the nation’s leading information provider for real estate; Vista was bought out by Fidelity in 2001.
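
To give a sense of how a GIS-style rating factor such as “distance to ocean” might feed into a premium, here is an illustrative sketch. The coordinates, distance bands, and surcharges are placeholders and do not reflect ISO’s actual classification or windstorm methodology.

```python
# Illustrative sketch of one GIS-style rating factor: straight-line distance
# from a property to the coast, computed with the haversine formula.
# Coordinates, bands, and surcharges are hypothetical placeholders.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def coastal_surcharge(distance_km: float) -> float:
    """Map distance-to-ocean into a windstorm premium multiplier."""
    if distance_km < 2:
        return 1.50
    if distance_km < 10:
        return 1.20
    return 1.00

property_coords = (39.36, -74.42)   # hypothetical coastal address
nearest_coast = (39.35, -74.41)     # hypothetical nearest shoreline point
d = haversine_km(*property_coords, *nearest_coast)
print(f"{d:.1f} km from the coast -> multiplier {coastal_surcharge(d)}")
```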

In 2002, AIR Worldwide expanded into man-made catastrophe markets, including terrorism modeling. The firm designed the first commercial terrorism loss-estimation product with the help of a “red team” of counterterrorism specialists drawn from organizations such as the FBI, CIA, Department of Defense, and the Department of Energy. Risk management in the terrorism arena relies on advanced software to analyze social networks and identify potential conspirators.

“One year after the attacks of September 11, 2001, and at the request of our clients, AIR Worldwide released the first commercial catastrophe loss estimation model for terrorism. The model estimates the likelihood and financial impact of insured property and workers’ compensation losses from future terrorist attacks in the United States. Where natural catastrophe models are constructed based on decades of (albeit limited) historical data, AIR’s terrorism model incorporates the judgment of a team of experts—a “red team”— familiar both with the available historical data and current trends. The red team is comprised of counterterrorism specialists who have decades of experience in government organizations such as the FBI, CIA, Department of Defense, and the Department of Energy. With input from the team, AIR has developed a comprehensive database of potential targets, or landmarks, across the United States (which include many of the same buildings found in the Department of Homeland Security database) and a subset of “trophy targets” that carry a higher probability of attack. Team members use a software tool developed by AIR to perform social network analysis and probabilistic plot analysis of the steps involved in a successful terrorist operation.” Source

The insurance product released by AIR Worldwide, developed with a “red team” of specialists drawn from the FBI, CIA, Department of Defense, and Department of Energy, illustrates the degree to which the company maintains close ties with defense and Homeland Security interests. Four-star General Vincent Brooks joined VeRisk’s board of directors in 2020. In 2003, Brooks served as the US military’s public spokesperson during the invasion of Iraq. He rose through the ranks over the decades, most recently serving as commander of US Forces Korea, before retiring in 2019. Brooks is a member of the Council on Foreign Relations.

Source: https://www.air-worldwide.com/publications/air-currents/2016/11/

Intriguingly, ISO also bought Atmospheric and Environmental Research (AER) in 2008. AER is another Massachusetts-based climate/geophysical software and sensor technology company and a major provider of sensors, satellites, and software to the United States government and industry. Just before the acquisition, AER was lauded as one of the top 25 defense contractors in Massachusetts. Most of its management team reports experience in the military or other governmental organizations. One of AER’s managers, Jeff Stehr, served as a senior lead scientist for the private intelligence contractor Booz Allen Hamilton for six years.

Some of AER’s major projects include participation in the multi-billion-dollar GOES-R satellite system for NOAA/NASA, various contracts through the Air Force Research Laboratory at Kirtland Air Force Base, and aerosol research conducted for the Office of Naval Research. Their website features public national security work for the Army, Navy, and Air Force. They are also involved with analysis of operational climate data, spacecraft anomalies, satellite mission modeling, and other intelligence concerns. Clients in investment banking, commodities trading (including oil), and agricultural forecasting also use AER’s consulting services.

AER was awarded a $21 million contract in 2019 for:

“comprehensive, next-generation space environment model development, verification and validation, space environment related product development support, and design/prototyping of advanced space weather sensors.”

This work is being carried out by AER from its headquarters in Lexington, Massachusetts, under the Air Force Research Laboratory at Kirtland. The contract came on the heels of a smaller $9 million award in 2015, also through Kirtland. The team, led by L3Harris, is building the next generation of the GOES-R satellite series.

The company was founded in 1977 by husband-and-wife duo Cecilia and Dr. Nien Dak Sze. Dr. Sze is a senior researcher at Harvard and a contributor to the Harvard-China Project. Oddly enough, Dr. Nien Dak Sze is heir to K.S. Sze & Sons, a hundred-year-old luxury jewelry company based in Hong Kong. Sze wrote his doctoral thesis on the atmospheric chemistry of Venus. Cecilia would go on to serve as president and CEO of AER for many years after Dr. Sze received an invitation to advise the Hong Kong State Environmental Protection Agency.

Source: https://ks-sze.com/pages/our-story

Sze also counsels the China-United States Exchange Foundation, founded by Chinese shipping billionaire Tung Chee-hwa. Tung served as Hong Kong’s first Chief Executive, the territory’s highest office, from 1997 to 2005 and remains heavily involved in Chinese politics. Hong Kong is implementing blockchain and related technologies for social management through organizations called Shanzhai City and the Impact Data Consortium chain, which Alison and I wrote about previously.

Source: https://www.cyberport.hk/enewsletter/v148/1480009.html#:~:text=In%20Hong%20Kong%2C%20Shanzhai%20City,in%20time%20and%20through%20blockchain.

The China-United States Exchange Foundation (CUSEF) is a registered foreign agent that funds think tanks and journalists to influence the United States political system. The organization collaborates with well-known organizations such as the Carnegie Foundation, the Brookings Institution, the Atlantic Council, and the Center for Strategic and International Studies. Ronnie Chan, a governing board member of CUSEF, is a member of the Berggruen Institute. Alison has written many words about the Berggruen Institute, but the short version is that they are an organization pushing tokenization, artificial intelligence, digital governance, and blockchain identity linked to Universal Basic Income.

Source: https://www.berggruen.org/people/ronnie-chan/
Source: https://web.archive.org/web/20210323174540/https://twitter.com/NBerggruen/status/1374417021744476173

Ross Hoffman, the recently retired Vice President of Research and Development for AER, published numerous papers and articles about weather manipulation in the early 2000s. Hoffman sat on “The Committee on the Status of and Future Directions in U.S. Weather Modification Research and Operations,” organized by the National Academies’ Board on Atmospheric Sciences and Climate. The committee included another AER researcher, Richard D. Rosen. In 2003, it published Critical Issues in Weather Modification Research, a 144-page report on that exact topic. Its focus was on strengthening the fundamental atmospheric sciences and working towards “reproducible” methods for weather modification, concluding that:

“There still is no convincing scientific proof of the efficacy of intentional weather modification efforts. In some instances, there are strong indications of induced changes, but this evidence has not been subjected to tests of significance and reproducibility. This does not challenge the scientific basis of weather modification concepts. Rather it is the absence of adequate understanding of critical atmospheric processes that, in turn, lead to a failure in producing predictable, detectable, and verifiable results.”

The issue in their minds is not the ability to affect the weather, but rather the ability to create “predictable, detectable, and verifiable results.”

Obvious conflicts of interest exist when a multinational company such as VeRisk owns subsidiaries that are deeply involved in weather modification research. Considering their numerous governmental and military contracts to develop advanced sensor technology, individuals within the company must have different levels of security clearance. This means at least some high-level staff are likely to have access to classified information relating to weather modification research; and let’s not forget that VeRisk claims to have some of the largest private databases in the world.

Source: https://wrenchinthegears.com/wp-content/uploads/2023/08/Critical-Issues-in-Weather-Modification-Research-2003.pdf

Then there are Cecilia and Nien Dak Sze, who hold leadership positions in one of the most advanced satellite, sensor, and data analysis companies in the world while advising the Hong Kong government and the China-United States Exchange Foundation. Digital media feeds consumers narratives designed to guide viewers away from a more complex, nuanced understanding of the international dynamics embodied by this power couple and their relationship to the Berggruen Institute. The unstated and unknown implications of their business arrangements should be given serious consideration.

If we take a brief historical detour back to the origins of mathematical weather forecasting, we will find ourselves in World War I. Lewis Fry Richardson, a British Quaker scientist who had served in a variety of positions as a physicist, chemist, and mathematician, refused combat service as a conscientious objector. (Note that the Fry family was referenced in Lorraine Davison’s piece on noetic convergence.) Richardson spent the next three years in an ambulance unit in France. In his spare time, he conducted experiments and composed the differential equations describing the weather that form the basis of modern weather prediction. At the time, weather prediction was considered an art and not practiced as a science. Predating computers, all of his calculations had to be solved by hand.

In his original 1922 publication “Weather Prediction by Numerical Process,” Richardson set forth the following fantasy:

“After so much hard reasoning, may one play with a fantasy? Imagine a large hall like a theatre, except that the circles and galleries go right round through the space usually occupied by the stage. The walls of this chamber are painted to form a map of the globe. The ceiling represents the north polar regions, England is in the gallery, the tropics in the upper circle, Australia on the dress circle and the Antarctic in the pit.

A myriad computers [people who compute] are at work upon the weather of the part of the map where each sits, but each computer attends only to one equation or part of an equation. The work of each region is coordinated by an official of higher rank. Numerous little “night signs” display the instantaneous values so that neighbouring computers can read them. Each number is thus displayed in three adjacent zones so as to maintain communication to the North and South on the map…”

In the 1980s an artist recreated Richardson’s fantasy. The world we are living in today is quite different in some respects, but there is a lot to say about how this image reflects our current circumstances and the drive towards a digitally mediated social evolution.

 

Source: https://www.irishtimes.com/news/science/lewis-fry-richardson-s-remarkable-weather-forecast-factory-1.2473954

Richardson’s work remained relatively unnoticed until around WWII and gained traction in the 1950s, when electronic computers became more widely available. Strikingly, within Weather Prediction by Numerical Process he simultaneously included mathematical descriptions of several processes that are key to modern climate modeling, including the plant-atmosphere relationship. Richardson understood that longer-term predictions of weather depended on observing these types of dynamics. Current work echoes the drive towards predictability and control of the climate. His intentions might have stemmed from care, but weaponization of knowledge of this sort was inevitable.
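
For readers curious about what “weather prediction by numerical process” means in practice, here is a minimal sketch of the kind of grid-cell calculation Richardson’s human computers performed: a one-dimensional advection step solved by finite differences. The grid spacing, wind speed, and time step are illustrative only; real models solve far richer equations in three dimensions.

```python
# A minimal numerical-weather-prediction sketch: one-dimensional advection of
# a temperature anomaly by a constant wind, stepped forward with an upwind
# finite-difference scheme. All values below are illustrative assumptions.
import numpy as np

nx, dx, dt = 50, 10_000.0, 60.0     # 50 cells, 10 km spacing, 60 s time step
u = 10.0                            # constant wind speed (m/s), an assumption
temperature = np.full(nx, 280.0)    # background field in kelvin
temperature[20:25] = 290.0          # a warm anomaly to be carried downwind

def advect(field, u, dx, dt):
    """One upwind finite-difference step of d(field)/dt = -u * d(field)/dx."""
    new = field.copy()
    new[1:] = field[1:] - u * dt / dx * (field[1:] - field[:-1])
    return new

for _ in range(200):                # integrate forward a little over 3 hours
    temperature = advect(temperature, u, dx, dt)

print("warm anomaly now centred near cell", int(np.argmax(temperature)))
```

Richardson imagined tens of thousands of people performing steps like this in parallel, one per grid cell, which is exactly the kind of work supercomputers now do at planetary scale.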

Richardson is considered one of the first people to apply quantitative analysis to international conflict, and he also laid foundational work in the mathematics of fractals, which would be cited in Benoit Mandelbrot’s early paper “How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension.” An ardent pacifist, he reportedly destroyed his unpublished research when he was informed that it was beneficial to chemical weapons development.

Source: https://en.wikipedia.org/wiki/How_Long_Is_the_Coast_of_Britain%3F_Statistical_Self-Similarity_and_Fractional_Dimension

VeRisk and Web3

Returning full circle to ecosystem services and natural capital, we arrive at Maplecroft, another subsidiary of VeRisk, located in Bath, England. This company started as an academic team conducting risk analysis for the mining industry. Today Maplecroft directs VeRisk’s ESG (environmental, social, and governance) analytics activities.

In a 2021 article, they highlighted biodiversity loss from mining operations and urged companies to join the Taskforce on Nature-related Financial Disclosures, described earlier in this post:

“Operators need to work out a way of measuring biodiversity risk across their portfolios and calculate their exposure to the threats of natural capital depletion in a way that satisfies investors. By participating in the Taskforce for Nature-related Financial Disclosures, known as the TNFDs, they can help shape what the global disclosure benchmark will look like.” 

While VeRisk appears quiet on web3 at the moment, it has published several articles dating back to 2016 on the promise of these technologies. Ivelin Zvezdov, assistant vice president of the research and modeling group on the Extreme Event Solutions team, wrote about blockchain’s potential in 2019:

“I suggested that a system can be designed where quantitative triggers and indices produced by a catastrophe model and a real-time catastrophe tracking software product will someday control the acceptance, rejection, modification, re/negotiation, and binding of smart contracts.”
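
A rough sketch of the arrangement Zvezdov describes might look like the following, where outputs from a catastrophe model inform the binding of a contract and a real-time index settles it automatically. The thresholds, premium test, and state machine are hypothetical and not based on any VeRisk product.

```python
# Hypothetical sketch: catastrophe-model outputs driving the lifecycle of a
# parametric contract. Thresholds, premiums, and states are invented.
from dataclasses import dataclass

@dataclass
class ParametricContract:
    trigger_index: float          # model-derived index at which the contract pays
    payout: float
    state: str = "proposed"       # proposed -> bound -> triggered / rejected

    def bind(self, modeled_annual_loss: float, premium: float):
        """Accept or reject the risk based on the modeled expected loss."""
        self.state = "bound" if premium >= 1.2 * modeled_annual_loss else "rejected"

    def settle(self, realtime_index: float) -> float:
        """Pay automatically once the tracked catastrophe index crosses the trigger."""
        if self.state == "bound" and realtime_index >= self.trigger_index:
            self.state = "triggered"
            return self.payout
        return 0.0

contract = ParametricContract(trigger_index=7.0, payout=2_000_000.0)
contract.bind(modeled_annual_loss=150_000.0, premium=200_000.0)   # model informs underwriting
print(contract.state, contract.settle(realtime_index=7.4))        # tracking software settles the claim
```

Note that in this sketch the model’s outputs, not a human underwriter or adjuster, decide whether the contract is accepted and when it pays.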

Taking a deeper dive into the Lemonade Crypto Coalition or the ecosystem services markets, one can see how Zvezdov’s assessment of blockchain is playing out with respect to social and ecological projects. It is likely that VeRisk is already experimenting with blockchain, considering its forays into related technologies such as the Internet of Things and machine learning. As of yet, however, I have not found specific evidence. The Mitiga Solutions deal with the Danish Red Cross involves the blockchain-based securitization of the parametric insurance contract, but not the use of blockchain smart contracts for the actual event triggers.

Source: https://www.verisk.com/newsroom/verisk-maplecroft-named-best-specialist-esg-ratings-provider-by-esg-investing/

Big Bets on Bio-Digital Convergence

There’s a race on to monitor, model, and steer life on Earth – to contain all of it within a networked planetary computer. Biodiversity metrics are being structured to support budding ecosystem services markets as multinational insurance giants like AXA throw their financial support behind researchers like Colin Prentice. Prentice and his colleagues are busy framing out new fields of inquiry to expand potential data capture, such as linking vegetation coverage to climate modeling. AXA’s parallel investments in Internet of Bio-Nano Things research support my hypothesis that life at every scale, from single-cell organisms to mega-fauna and towering trees, is at risk of swapping autonomy for remote digital mediation.

VeRisk is another major player with entanglements across the insurance, disaster management, and military intelligence sectors. Looking back at the origins of mathematical weather prediction, we see how the groundwork was laid. Decades later, we have a cacophony of complex parametric insurance schemes overseen by compromised players. The same organizations predicting, and betting on, natural disasters are also involved with weather modification research. It’s definitely a tangled web. The massive datasets VeRisk has consolidated mean the company is likely to wield an outsize influence over how the modern game of cybernetic influence proceeds.

Web3, financial innovation, and digital governance have combined to create a global socio-technical architecture with the potential to gamify and steer social outcomes, starting with disaster response. While they may sound good to the casual observer, digital community currencies deployed by organizations such as Grassroots Economics and Mitiga Solutions are in fact tools of subtle and not-so-subtle social engineering. Pervasive sensing technologies, from satellites to nanotechnology, capture data, including participatory surveillance in the form of self-reported citizen surveys, that feeds simulations run on supercomputers and informs where social impact finance will be directed.

The implications of bio-digital convergence as it pertains to humanitarian aid delivery, disaster response, and insurance are not well understood by the communities swept up in these pilot programs. Social commentary on the topic is not balanced, leaning heavily on the side of affirming benefits. There has thus far been little discussion about what problems may arise when social systems are linked to game mechanics, ubiquitous computing, and financial bets, and then automated to optimize some arbitrary “human flourishing” metric. Much more informed public discussion is needed before these initial pilots, with their multinational insurance backers and collaborators in the finance and defense sectors, are brought to scale.

Saturday, 12. August 2023

Wrench in the Gears

Guest Post: Transcend or Die: The Origins of Our Enslavement to the New Religion of Evolutionary Humanism

What follows is a bookend to Lorraine Davison’s two-part analysis (here and here) of Wolfgang Smith’s Theistic Evolution: The Teilhardian Heresy. She shared it with me some time ago, and I am glad to finally have a chunk of time to read it through and share it with the blog’s readership. Her post touches on a number of topics I’ve been mulling over, including the role of the Quakers in what is unfolding as well as the Society for Psychical Research in the UK (there was an American branch, too, affiliated with the Duke Parapsychology Center). I’ve started John Gray’s The Immortalization Commission, which covers similar ground, and I’m spurred to get back to it having now read Lorraine’s insightful analysis. After her conclusion, I’ve appended screenshots with links to a few maps I’ve made that are relevant should you want to explore further. Link to featured image here.

Guest post by Lorraine Davison:

Dear reader, congratulate yourself. You are a rational animal sitting quite near the pinnacle of evolutionary progress. I say “quite near,” because the actual pinnacle is occupied by a group of nineteenth-century British academics and their current intellectual heirs. These luminaries have raised evolution itself to the status of a god, and they are the self-appointed spokesmen for this new religion of progress. Refusal to march to the drum beat of this relentless and accelerating advance is now the only heresy. In fact, anyone who lacks utopian vision or exhibits distasteful attachments to past and family will likely be trodden underfoot. You must understand that this is not science or politics burnished with religious zeal; it is an entirely new global religion. We, my friends, have fallen into the hands of fanatical religious zealots.

The embryo of this new religion had already been implanted at the University of Cambridge as the turbulent seventeenth century dawned. The University had become a focus for religious and political dissent and went on to supply many of the Puritan settlers for the new England that was emerging across the Atlantic.

After the English Civil War (1642-1651) dissenters were required to either stay out of the university system or publicly subscribe to the beliefs of the Church of England. So, religious dissent outside of mainstream institutions became a fertile breeding ground for all manner of exploratory thought and a not insignificant factor in spawning the industrial revolution. In time, a shadow ruling class emerged from these dissenting thinkers who also went on to provide the impetus for the great philanthropic movements of the nineteenth century.

This powerful group was composed of three strands of the British upper-middle class: evangelical members of the Church of England (often known as the Clapham Sect), Quakers, and philosophical agnostics. These families came to form a discrete group as they tended to intermarry. Many of the Quakers and agnostics eventually entered the Church of England for reasons of pragmatism and self-interest. Here, they strengthened the evangelical faction that worked tirelessly for the abolition of slavery. 

These influential dynasties included the Stephens (including James, father of Virginia Woolf), the Darwins, the Huxleys, the Wedgwoods, the Frys, and the Gaskells. Their rise to positions of social and cultural importance was a defining moment for modern Britain, and they replaced many members of the aristocracy in influential positions in government, church, and academia. Their offspring were more likely to be educated at Cambridge than Oxford University and provided many of the fellows of the colleges. After the loosening of religious restrictions in 1871, those members of the group who had clung openly to their dissenting views were also able to enter the university system.

Darwin’s Origin of Species and the liberal philosopher John Stuart Mill’s On Liberty were both published in 1859. This was thus a pivotal year in the morphogenesis of our new religion. Of course, the sickly Charles lacked the ebullience and charisma of his grandfather Erasmus, but Thomas “the bulldog” Huxley stepped in to wrestle his corner, thereby inaugurating the family tradition of the worship of the theory of evolution. The Huxleys were influential philosophical agnostics, and Thomas Huxley was a major force in designing the education system of Victorian Britain. He despised Christianity yet supported the maintenance of some limited Bible teaching in schools. This was, however, only from the point of view of safeguarding the moral order underpinning society. He stated, “I do not advocate burning your ship to get rid of the cockroaches.” According to Thomas’ grandson Julian (more about him later) his grandfather was ‘bred by nineteenth-century humanism out of traditional Christian ethics’ and was ‘religious by nature’ with a ‘reverence for truth and moral virtue.’ It was the infusion of this religious temperament into the theories of evolution that laid the foundation for the emergence of a new religion whose ground of being was located in a materialist spirituality. This seems like a contradiction in terms and certainly demands further explanation if it is to be justified.

The unofficial post of chief philosopher to our extended familial group was occupied by John Stuart Mill. As they gradually lost their Christian faith their creed increasingly became Mill’s utilitarianism. Mill argued that all of society would benefit if people were given the freedom of self-development and self-expression. In effect, he was arguing for non-conformity and the relaxation of most legal and social restrictions on the individual. The only legitimate reason to curtail the freedom of the individual would be if the exercise of that freedom harmed another. Mill argued that the costs of the conflicts that would inevitably arise due to the constant seething struggle of individuals to express their own characters and desires would be far outweighed by the energy that such striving would inject into society — and that such striving is indeed the only way to prevent total stagnation. This idea of the struggle for self-expression and self-realisation dovetailed with Darwin’s theory of evolution by natural selection and the survival of the fittest. It became the basis for what is known as social Darwinism. Society would become a self-ordering entity based on the outcome of perpetual social, economic and intellectual conflict.

This was a radical departure from the idea of how society should be ordered based on our Judaeo-Christian traditions of codes of behaviour and laws, governing how we act within society. In utilitarianism, a harmoniously functioning society must arise out of the striving of the great masses of individuals, each one attempting to achieve their own ends.

The abiding difficulty in liberal utilitarian thought is that it assumes that human beings are rational, bound to aim at the good generally, and inclined to limit their own behaviour when it becomes harmful to others or self-defeating. Neither does it take any account of the spiritual force of evil. One of the most obvious objections to this philosophy is that we have evidence, dating back millennia, that humanity does not always behave rationally. Evil has forever haunted our nightmares and frequently ravaged our world.

As Christian belief declined amongst the ruling elite in Britain not everyone was prepared to abandon their hopes of immortality. So, in order to extract the soul from the clutches of Christianity there arose a renewed interest in, and reinterpretation of, classical Greek literature at both Oxford and Cambridge universities during the nineteenth century. Plato and the Neoplatonists seemed to describe a future for the soul released from reliance on the judgemental God of Christianity. In Plato’s account of the transmigration of the soul, they seem to have seen the potential for a self-charted voyage of the human personality through space and time (and beyond) that has no need of salvation through Christ: a philosophical rather than a religious argument for immortality. 

In addition, Socrates argued that the liberation of the soul from the weight of the desires and preoccupations of the physical body would allow the mind to soar free in the skies of eternity. This had obvious appeal for a group of people who were devoted to the human intellect as the highest expression of the operation of evolutionary forces so far. This was especially the case for our group of dissenting families. Poised at the pinnacle of intellectual evolution, they saw themselves as the helmsmen of the world of the living – naturally selected over generations by dint of their familial superior minds. Their fathers had abolished slavery, and they saw themselves as having inherited the moral right to continue to shape the world. It was simply the outcome of the natural laws of evolution.

The spiritual speculations of the nineteenth century received some encouragement from startling developments taking place in the United States—the dead appeared to have made contact with the living. This remarkable event occurred when the Fox family allegedly began communicating with the spirits of the dead through knocking noises. This other-worldly communication appears to have become something of a spectacle in the neighbourhood, and the family soon commercialised these séances which were copied by others. The spirit craze spread like wildfire across the continent and soon crossed the Atlantic into Europe, spreading rapidly in the 1850s into England, Germany, and France.

A group of undergraduates at the University of Cambridge (one of whom was destined to become the Archbishop of Canterbury, and others distinguished Anglican bishops and theologians) was the first to begin debating these phenomena seriously. This early group (The Ghost Society or Ghostly Guild) was set up in 1851, with a more formal group, also associated with the University (The Ghost Club), set up in 1862 with the specific objective of promoting the scientific research of paranormal phenomena. It was founded in London and counted Charles Dickens and Sir Arthur Conan Doyle as members. These were the first groups in the world devoted to parapsychological research.

This began a long association of the University of Cambridge (and especially Trinity College) with investigations into paranormal phenomena. In 1882, the Society for Psychical Research (SPR) was established. The Society (which is still very much active today) was entirely devoted to the scientific study of paranormal phenomena and to psychical research. The moral philosopher and don of Trinity College, Cambridge, Henry Sidgwick became the first president. Sidgwick was a major influence within our extended family group through his association with his pupil the philosopher G.E. Moore (a descendant of the Quaker branch) and his unofficial role as “pope” of the Cambridge secret debating society, the Apostles. He and two close friends, also fellows of Trinity College (Frederic Myers and Edmund Gurney), dominated the SPR in its early years. Ten other Cambridge dons had joined the SPR by 1894. Julian Huxley was also later to become a member in the 1920s.

The SPR set about collecting and collating hundreds of cases of spirit phenomena, thought transference and telepathy. These men were no gullible fools, however; driven by the empirical spirit of the age, a zeal for truth and a real concern for the survival of the soul, they did much to debunk fraudulent claims of mediumship. But they were not safe from desperate self-delusion and came to some astonishing conclusions. They seem to have become convinced that our biological evolution is being matched by our spiritual evolution without really providing any evidence that this is the case.

In his posthumously published book Human Personality and Its Survival of Bodily Death (1903) Myers speculates:

Let us suppose that while incarnate men have risen from savagery into intelligence, disincarnate men have made on their part a like advance. Let us suppose that they have become more eager and more able to use, for communication with earth, the standing laws of relation between the spiritual and the material universe.

Astonishingly, Myers seems to have concluded that the evolving dead were perfecting their abilities to communicate with the living and that this endeavour was open to development and improvement. This hope may have arisen, in part, from the uncertainty that had been provoked by the actual nature and quality of the communication coming from the spirit world. Most of it was poor in quality, manifested in banal, fragmentary, or incoherent and immature messages and scribblings.

Nevertheless, Myers felt that in their many recorded cases of mental telepathy they had uncovered a “telepathic law” much like Newton’s laws of gravity and motion. Most importantly, he argued that this law held out the hope of community and the end of the terror of post-mortem isolation.

These were the fears of a generation that had lost its belief in the power of the divine shepherd of the human flock.

In adding spiritual evolution to that of the material world he felt that they had achieved a major advance for mankind. He argued that Christian revelation had largely responded to the emotional needs of evolving humanity, but that this infantile stage of development had now been superseded by a new “scientific temper.” This temper would, henceforth, begin to take control of the process of evolution “on either side of the gulf of death” in order to ensure that humanity would increase its “knowledge and power.” The living and the dead united could consciously take control of, and chart a course through, the great mechanistic laws of the universe.

He had removed our need for redemption and salvation and placed the future of humanity in the hands of those who claimed their rightful place as the pride and capstone of evolution. Moreover, he had very usefully abolished the problem of evil. Myers argued that his research suggested that evil barely exists other than as a “monkeyish mischief” or a “childish folly.” It arises out of misguided self-suggestion and an isolating madness suffered by isolated souls. In the new communal utopia of consciously controlled evolution, such evil will simply melt away.

In spite of his interest in the paranormal Julian Huxley seemed to deny the very existence of any kind of spiritual realm, even an evolving one. As a zoologist and ardent believer in Darwinist evolutionary theory, he was a confirmed materialist who detested any hint of dualism. For Huxley, there was only the material. However, he somewhat confusingly maintained that faith was absolutely necessary as the only force that could bring about the one condition that he believed was necessary for continued evolution—unity.

For Huxley, religious faith was an embryonic expression of a bona fide aspect of the evolutionary force. It is that psychic movement that lifts humanity above naked self-interest and which binds and focuses human emotional and intellectual efforts on common beliefs and aims. It is thus essential to advancing mankind. Evolution had resulted in nature itself becoming self-conscious in the form of the self-aware rational human animal. All future evolution would now depend upon how this self-consciousness was used in order to take control of and steer the destiny of the species. Huxley believed that this would rely on the unification of knowledge and the harnessing of humanity to the new religion of evolutionary humanism. The age of random organic development was over and that of the global planner was about to begin.

In order to achieve his aims, it was now imperative that mankind itself be studied and moulded in order to avoid planetary evolutionary disaster. The needs of the individual must give way to the heavy burden of the responsibility of charting the collective fate of humanity. Huxley was convinced that this would require the religious instinct as the main unifying and driving force. Human beings must be studied and the psychological bases of religious behaviour sought and traced back to their biological origins. Subsequent breeding and education programmes would then attach human faith to the religion of scientific evolutionary humanism.

At first glance the friendship of the secular humanist, Huxley, with the Jesuit priest and paleontologist, Pierre Teilhard de Chardin, was an unlikely one. Teilhard was born in 1881 in Auvergne and was enrolled in a Jesuit school at 14. In 1901 the Order was banned from France, so he finished his studies in England. Here he came under the influence of Jesuit modernists who were determined to reconcile Christianity with modern ethics and science, and in particular Darwin’s theory of evolution. He was greatly influenced by the Anglican convert to the Jesuit order, George Tyrrell, who was excommunicated from the Church in 1908.

Harmonizing Darwinism and Christianity became Teilhard’s new mission in life and it was this that led to a friendship with Huxley in the last decade of the Jesuit’s life. Although Huxley claimed to be mystified by the end point of Teilhard’s thinking, they largely collaborated on the development of a new religion that could harness the human faith impulse in the interests of evolution. This religion then became implanted on the world stage in the DNA of emerging global structures such as the United Nations and its offshoots such as the United Nations Educational, Scientific and Cultural Organization (UNESCO), which claims Huxley as its first director-general.

Teilhard always maintained that his theories had nothing to do with metaphysical speculation but were based in pure scientific reasoning and should be judged on that basis. This notwithstanding, his fanatical zeal for science was sparked by some kind of mystical vision, which left him convinced that he was the only person to have seen the truth. He then set out to engulf all of humanity in this vision. It is because he found common cause with men like Julian Huxley that his work became so influential. Thus, even though many of his theories sound bizarre, we need to take them very seriously.

The appeal of Teilhard’s work for Huxley probably lay in the former’s determination to abolish the traditional dualism that underpins Christianity and many of the world’s faiths—that of spirit and matter. Teilhard’s work was often self-contradictory and scientifically weak, precisely because he would use any argument available to him in order to destroy dualistic thinking since this was the one thing that stood in the way of his radical evolutionism. He absolutely had to abolish the transcendent creator God and with him his transcendent heavenly realm. It was a curious enterprise for an avowed Christian.

For Teilhard, the non-physical spiritual realms outside of space and time do not exist. For him, everything, even God, is contained within the bounds of the physical and visible cosmos. All is material. In his theistic version of evolution, God is a force that employs a “physical agent” something akin to the alchemist’s Philosopher’s Stone to transform matter and drive the world to a pre-destined end point. His dream was to find, define, and presumably use this agent. Thus, all of nature consists of one substance that is in the process of being diversified and transformed by the theistic forces of evolution.

“All that exists is matter becoming spirit”.

This is the core of Teilhard’s thought.

The “arrow of time” is propelling this substance “spirit-matter” on a one-way evolutionary journey. Thus, Teilhard’s vision of evolution is far more dangerous than a strictly atheistic scientific materialism, because he raises the movement of progress to the heights of a religion. It then becomes a spiritual mission to accelerate this progress to save humanity and arrive at the “salvation” of the predetermined end point. But where exactly is Teilhard taking us?

The spiritual fruit of the transformation of matter up until this point is the human mind. This immaterial mind is nothing more than a product of the transformation of the matter of the brain. Thus, for instance, personhood arises only from cerebral neuronal activity. Indeed, the human brain is the pinnacle of the innate tendency of evolution to produce ever more complex organisms. It represents nature becoming conscious of itself and sits atop the pyramid of evolutionary complexity thus far.

So, the brain proves that increasing complexity equals increased consciousness, and Teilhard raises this equation to a law of nature. This Law of Complexity/Consciousness has now, according to Teilhard, reached its limit in terms of the individual human being. It is now about to turn its attention to humanity as a whole and begin to assemble a convergent (collective) human consciousness, which he somewhat chillingly poses as the struggle between the “unorganized multitude” and the “unified multiple”. This process of psychic evolution he calls psychogenesis.

Thus, the many psychic events of all individual human beings are increasingly being collated through technological advances—the telecommunications and television of his day and our contemporary internet. This will lead to a convergence into one collective consciousness, or what he calls the noosphere. He believed that there was already evidence for the emergence of this phenomenon in the increasingly rapid transmission of thought in telephone wires and radio transmitters and that this amounted to a global “nervous system”. He believed that this system would develop as individuals became totally identified with becoming part of the collective.

So as people are “socialised” into increasing identification with the themes and images of global communication this will produce a superorganism with its own self-conscious psyche. He argues that this is a continuation of the process of unification of material components that first produced the complexity of the individual human brain. This psychic organism will be made up of all human individuals as the body is made up of cells. He maintains, however, that each individual will retain its awareness of a personal self. He has, in fact, very deftly conjoined diversity/complexity with the unity needed for a new world system.

For both Huxley and Teilhard the only way to achieve global stability was to unify and move towards psychic convergence.

Teilhard’s “science”, however, is cloaked in the language of the visionary mystic. He “sees” a sentient living membrane stretched around the planet. This is encompassed by a spherical envelope or luminescent aura which eventually attains a conscious, thinking individuality. This world soul gradually detaches itself from the earth.

In Teilhard’s vision (nearly) all matter is thus eventually transformed into a self-sustaining spiritual consciousness. This end point is unavoidable as we are in the grip of an “irresistible vortex” pulling us to this “universal center of convergence,” or what Teilhard calls “Point Omega.” This is none other than the cosmic Christ. Although Christ has been deposed from his Point Alpha at the creation of the world, Teilhard has reinstated Him as the end point of human evolution.

Although Huxley was somewhat bemused by Teilhard’s cosmic Christ consciousness, he was clearly beguiled by his elevation of the collectivisation of human consciousness to the level of the divine. However, the Jesuit’s thought included the belief that all efforts towards human collectivisation, no matter how brutal, would result in some level of the spiritualisation of human consciousness, which then could not be lost. So, we must value all forms of human collectivism as they will eventually lead to the good. He even argues that all human activity is directed by the risen Christ and thus the totalitarian efforts of a Hitler or a Stalin are legitimate in the quest for spiritualisation. Teilhard has thus managed to join with Myers in abolishing the evil powers, including Satan himself!

In a speech given at the French Embassy in Peking in 1945, Teilhard talks of the “recent totalitarian experiments” and their descent into sub-humanity as merely the result of “clumsy” and “incomplete” execution. But despite the recent failures of Hitler and Stalin, it is clear he believes that sooner or later the experiment must succeed, and that the only legitimate human endeavor is to build the technological super-state.

The author Wolfgang Smith, in his book Theistic Evolution: The Teilhardian Heresy (2012), highlights our destiny according to Teilhard: “To act, to produce, and even to think collectively—that is the ‘growing compulsion’ from which there is supposedly no escape.”

It is this radical view of global unity that resonated with Huxley.

In common with Teilhard, Huxley saw the Communist and Fascist “social religions” of the twentieth century as dangerous shortcuts and primitive expressions of trends to collectivisation. These efforts were, nevertheless, moving in the right direction of unity and towards the One True Religion of evolutionary humanism. Whatever remained of the liberal vision of a free society of competing individuals would now take place under the shadow and management of a global Central Science Council. That he saw his endeavors as intrinsically religious may be seen in his proposal for a New Humanist Institute that would have fair access to the religious broadcasting activities of the BBC.

And we must be absolutely clear. Huxley was serious about the development of evolutionary humanism as a religion and was not merely attempting to harness religious sentiment for political purposes. His thought led him into conflict with many leaders of the major religions of his day, and he was genuinely frustrated that they could not see that this was a true advance from primitive religious expression to the only religion that could be true and worthy of the devotion of all of humanity.

Moreover, his disavowal of the spiritual realm did not mean that he did not believe in an invisible world of “mind/spirit”. But for Huxley, this world was the product of the transformed matter of the “world-stuff” (a term borrowed from the American philosopher/psychologist William James) which was his version of Teilhard’s spirit-matter. This world was, in short, “produced” from the hidden powers of mind and so, ultimately, the product of the transformation of pure matter. Huxley’s foreword to the 1952 book Occult Psychology by the psychologist D. H. Rawcliffe is interesting in this regard (republished in 1959 as Illusions and Delusions of the Supernatural and the Occult). In his book, Rawcliffe critically examines claims of the occult, parapsychology, and spiritualism, concluding that they are best explained by psychological factors such as hallucinations and suggestion or by fraud and credulity.

Huxley generally approves of Rawcliffe’s skepticism whilst welcoming his acceptance of the validity of certain extraordinary phenomena such as “the successes of water-diviners, or the ennobling effects of certain mystical experiences.” But he warns against the outright dismissal of all instances of, for instance, telepathy and clairvoyance, which may yet be evidence of the hidden powers of the mind that have arisen naturally in the course of material evolution. 

He argued that mental activity had clearly been intensified during evolution, and that we do not yet fully understand this phenomenon. Thus, parapsychology should be studied in universities in a quest to expand our knowledge of the hidden frontiers of human supernormal mental capacity. Moreover, he recommended the exploration of the potential for certain mystical practices for extending and developing these “powers”. He felt that:

…we simply do not yet know what is the basic relation between mental activity and physical brain activity. It is extremely important to try to find out whether under certain conditions mental activity may be detached from physical; we can be sure that many possibilities of mind or mental activity are still unexplored.

Thus, Huxley has not closed the door on the realm of the evolving dead “discovered” by Myers. He is merely concerned to ensure that if it exists, it is not seen as evidence of any separate transcendent realm of spirit, but rather as evidence of the hidden powers of mind that may be “detached from [the] physical”. He is, however, a transcendentalist: he believes that evolution can be directed so that humanity may produce its own managed transcendent realm.

The human species can, if it wishes, transcend itself— not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.

It thus becomes the “cosmic duty” of each and every one of us to partake in the “techniques of spiritual development” deemed efficacious for collective transcendence, as determined by the World Science Council.

Thus, our Cambridge philosophers and paranormal investigators, the one-time director-general of UNESCO, and our rebellious Jesuit have one thing in common. They not only want to control political and cultural life in the material world but to extend their power in order to control the unseen realms of spirit/mind, the afterlife, and the material and spiritual future of the planet itself. Their ambitions are thus truly limitless.

Teilhard felt that he alone out of all of humanity had seen the “Truth”. Nevertheless, his faith in the collective consciousness of mankind was such that he believed that once this “fusion of love of God and faith in the world” had been ignited in one mind, “sooner or later there will be a chain reaction.” And nothing could stop the Truth being spread through the universal mind and “setting everything ablaze.”

But where did this “Truth” come from? It is interesting that he chose to append a story that he told in his early work The Spiritual Power of Matter (1919) to one of his last books The Heart of Matter (1950). It appears to be a dramatized account of a mystical experience which describes a man being swooped upon by an entity as he walked through the desert. This creature called “the Thing” then penetrates his soul and proceeds to pour the sap of all living beings into him, renewing all the “enfeebled fibres of his being.” The young man felt the rapture of ceasing to be himself but also the oppression of some superhuman peril. This force was also “equivocal, turbid, the combined essence of all evil and all goodness.”

The entity declared, “You called me; here I am.” This call apparently originated in the desire of the young man to pit himself “against Reality entire and untamed.” The creature claimed that it had been waiting for him so that it itself could be “made holy.” It then describes the consequences of their union.

“…now I am established on you for life or for death…He who has once seen me can never forget me: he must either damn himself with me or save me with himself.”

It is clear that Teilhard believed that he personified mature humanity that was now able to bear the burden of “Reality” which was the pure naked force of evolution. Mankind would then live or die according to its ability to cooperate with this force.

There is, however, another reading. If humanity conjures and deifies what it believes to be the force of evolution, it will inhabit us, and we will share its fate. Stripped of the salvation of Christianity, we are condemned to an endless (and ultimately futile) search for immortality in order to avoid the terrors of final judgment when the force (which we may term the beast) and his collective human host will be condemned together.

Finally, we must note that the triumvirate of Myers, Huxley, and Teilhard marks a strange development in the psyche of the Western world. This is a remarkable extension of Enlightenment hubris from the visible to the invisible spheres, both now yoked to the imperatives of transhumanist evolution. It really doesn’t matter at this point if human evolution is seen in terms of a benevolent pantheistic force that is pushing us to a glorious endpoint or the fruits of the more prosaic unfolding of matter subject to a range of natural laws. What is important is that they have firmly placed humanity within nature and removed any hint that we may have been created in the image of God. Whilst this reduces us to entirely material beings subject to no spiritual privileges, it does effectively raise the evolution of human consciousness to divine levels.

This divinisation of supposed material processes and a belief in self-directed spiritual evolution is then the jam set to trap the wasps. It is a broad church of evolutionary humanism that can hold atheists, agnostics, utopian scientists, New Age devotees, practitioners of Eastern religions, misinformed Christians and so on.

In their widespread use of technologies of the self—including psychedelics, meditation, television, mechanical augmentation, gaming—people are increasingly identifying with the images, memes and themes of the global brain at the expense of the real world. They are completely open to whatever influences may actually reside in the spiritual metaverse. Why would we worry? Evil no longer exists.

We can only assume that Myers, Huxley, and Teilhard are pleased with the way things are going as they continue to evolve with us from “the other side of the gulf of death.” The previously undisciplined consciousness of humanity is being whipped into shape. Indeed, the “evolving” consciousness of the Western population is now very nearly terminally passive, vacuous, skeptical, and defenseless. It is being emptied so that “the Thing” is more able to penetrate this pliant collective soul. People look on blankly as the global architects erect the planetary internet of bodies and things, complete with a ubiquitous sensor network. What are they doing? Most people can barely be bothered to ask. And we insist to their covered ears: “they are setting up an all-encompassing digital surveillance network to study your emotions, thoughts, movements, links, and connections so that they will know how to engineer the final convergence.” It is, in effect, to prepare for the invasion of our cosmic body snatcher with whom we will be condemned to share a common fate.

Lorraine Davison

Links to the interactive maps:

https://embed.kumu.io/c3977ccf6111c482f579ec22a7904aaf#untitled-map?s=bm9kZS1FWGdwMjhsdg%3D%3D
https://embed.kumu.io/6bd21c707d593459b04739506e0ac85a#untitled-map?s=%23utilitarianism-principle-of-utility
https://embed.kumu.io/7d3593489fc751af85c3a9c4fa451d44#untitled-map?s=bm9kZS1TZHZzTzlIQw%3D%3D

 

Friday, 11. August 2023

Wrench in the Gears

Ant Queens, Manchester, and Emergence – Guest Post by Quantum Heart Cafe


Apologies to Quantum Heart Cafe who shared this letter with me over a month ago, but then life happened. 

 

Guest Post by Quantum Heart Cafe: Click here and here for previous letters.

Dear Alison and Wrench In The Gears Readers,

Thank you for the wonderful discussion and comments we had with the last letter. I hope this one provides more helpful insights and information about the artificial emergence that the predatory system is attempting to manifest. This letter continues my examination of Steven Johnson’s book “Emergence,” in which he introduces the “Myth of the Ant Queen” and talks about Manchester, England, as an example of emergence in cities.

Source Link: https://www.kirkusreviews.com/book-reviews/steven-johnson/emergence/

Following his discussion on slime mold cells, Steven introduces ants as another popular social species for studying artificial emergence. One of the reasons ants are popular among researchers is that they offer a way to understand how local interactions among individual ants lead to the colony’s higher-level intelligence. Researchers also think that ant colonies represent a decentralized system, without centralized control. Alison has introduced E.O. Wilson’s work on ants and the ant computer, and while he is mentioned in part one, Steven focuses on the work of Deborah Gordon. Deborah Gordon is a biologist and researcher working out of Stanford University. Her focus is studying the behavior of ant colonies over a long period of time.

Steven visited Deborah Gordon at her lab. She specifically works with red harvester ants and builds large terrariums in order to observe their behavior over a long period of time. These terrariums put me in mind of the large glass condo towers they’re building all over Vancouver, BC. I sometimes wonder if another intention behind housing everyone in these buildings is to study us in clusters. It isn’t the individual that artificial emergence wants, it’s the interactions between people that create desirable higher-level macro behaviors. People clustered closely together are more likely to generate that sort of data.

Source Link: https://www.dezeen.com/2015/06/12/ole-scheeren-vancouver-skyscraper-new-typology-vertical-living-irregularly-stacked-boxes-canada-apartments-1500-west-georgia-street/#

Gordon hopes to unlock the mystery of how colonies develop and how individual ants dynamically change roles across their lifetimes. Ants can transition from foraging for food to caring for young and then to building nests. Their labor adapts based on signals that define the needs of the colony.

(Side comment from Alison: Try on the lens of social insect societies’ use of signaling to determine labor quotas / task allocation when considering pending global adoption of IoT-linked token engineering, blockchain identity / digital twinning, stackable credentials for “lifelong learners,” and “cradle to career” pathways. I see that a worker’s digital dust / experience profile has the potential to become the equivalent of chemosensory data shared in hive mind communication.)

Source Link: https://www.youtube.com/watch?app=desktop&v=ukS4UjCauUs

Gordon thinks it is the hive mind, guided by the thousands of interactions between individual ants following simple rules, that determines what is best for the colony. Through her research, Gordon discovered that harvester ant queens don’t tell workers in the colony what to do. The queen’s sole focus is reproducing the next generation. The worker ants protect her, because it’s in the best interests of the colony that they do so, not because she is the queen.

I found a clip of a presentation Gordon gave at the World Economic Forum IdeasLab 2014. She thinks that ants can provide a lot of information about decentralized systems without centralized control, where decisions are made from the bottom up.

In the clip, Deborah describes how individual ants can act as hubs, nodes that facilitate many interactions with many other ants. Her theory is that it isn’t the individual ant that determines whether or not they become a hub, rather it is the situation. This makes me think of influencers, institutions, and other agents in our sphere of influence who knowingly or not act on behalf of the system. They, too, function as hubs, and more often than not it’s the situation that determines who or what is tapped to become a major hub driving the system forward, not necessarily the individual.

Source Link: https://www.youtube.com/watch?v=L0lHTbzlVxY

Steven uses the point to argue that, as with slime mold cells, there are NO leaders or ‘pacemaker’ cells making top-down decisions in complex adaptive systems. Nevertheless, humans are still wired to look for leaders. I disagree with this, because the artificial emergence that the system wants to put in place will run on a form of central control. There will be planners, designers, and programmers to implement decisions made at the top of the social hierarchy. The rules, templates, and game mechanics structuring how people may interact with one another within the Web3 protocol layer will be set at that level. I imagine there must be a chosen group of administrators overseeing progress made by the networked state’s citizens on leaderboards of goal-oriented behaviors linked to human computation. Perhaps Steven continues to emphasize his point about decentralization because he doesn’t want people to become aware that the future of social engineering is not that clear cut.

Along with ant colonies, Steven describes the city of Manchester as another example of an emergent system. Manchester was built on an old Roman settlement, which makes me wonder if there may be a continuity between the Roman Empire and this current expression of the dominator mindset. Manchester was the city at the center of the industrial revolution, which changed humanity’s relationship with the planet. Thanks to the industrial revolution and the steam engine, Manchester’s population grew to about 250,000 people by the mid to late 1800s.

Source Link: https://luna.manchester.ac.uk/luna/servlet/detail/maps002~1~1~415398~232468:%3Ca-rel=-license–href=-http—creat;JSESSIONID=d82bfec9-877a-4222-9559-853fd5b380a3?qvq=q%3Acasson%3Bsort%3Areference_number%2Ctitle%3Blc%3Amaps002%7E1%7E1&sort=reference_number%2Ctitle&mi=0&trs=1&cic=maps002%7E1%7E1

In 1838, Manchester incorporated as a city, receiving a city council and urban planning department. The author writes that Manchester naturally emerged into a city and organized itself out of the chaos. Steven uses Friedrich Engels’s work as evidence of Manchester’s self-organization because Engels visited Manchester and wrote his observations in ‘The Condition of the Working Class in England’. Engels noted that the working class had been pushed out to the margins of the city, and the middle to upper class lived in the more desirable neighborhoods.

According to Steven, this is yet another example of how there are no leaders or ‘pacemaker’ cells in emergent systems. He argues that Manchester “naturally organized itself” in this manner. I disagree, because even if Manchester didn’t incorporate until 1838, there were townships that had their own organizations that planned their particular areas. History is full of conscious decisions to push Black and Indigenous, poor and working class people off to the margins of society. Gentrification, red lining, rezoning, and for Indigenous people the reservations, were methods of displacement and removal that swept desirable neighborhoods clear for the middle and upper class. Steven’s analysis leaves out this crucial historical reality.

Source Link: https://www.scienceandindustrymuseum.org.uk/objects-and-stories/manchester-cotton-and-slavery

The book “Emergence” is turning out to be an important read, especially as I work to understand complex adaptive systems (interchangeable with artificial emergence) and my place in the labyrinth. For my next letter, I will be sharing what I’ve learned about the work of John Henry Holland, Will Wright, connection machines, and virtual ants.

Sincerely,

Quantum Heart Cafe

 

 

Thursday, 10. August 2023

Werdmüller on Medium

Why I failed to pay my taxes


A story about terrible software and bad

Continue reading on Medium »

Wednesday, 09. August 2023

Dick Hardt

Worldcoin: How Not to Implement OpenID Connect

World ID Critique: OpenID Connect Implementation Antipatterns

On July 24th, Worldcoin publicly launched World ID, and many friends and colleagues pinged me to get my opinion. I was attending IETF 117 and this past weekend I finally had time to start exploring the service now that it has launched. There has been TONS of coverage on the orb, the biometrics, and the coin. I’m going to review their OpenID Connect implementation.

This evaluation was made on August 7, 2023 using a production deployment of the Hellō app on Worldcoin; Worldcoin’s Sign in with Worldcoin Reference; and their OIDC Explainer.
Update August 9: On August 8, I provided a draft of this document to Worldcoin prior to publication. They have addressed a number of issues as noted below.
Tl;dr: If you are a developer considering adding World ID to your project: wait. If you see an app using World ID: be safe.

The OAuth Security Best Current Practices have not been followed. Combined with the following point, applications using World ID may be vulnerable to attacks.

The implementation is not compliant with the OpenID Connect specification. Times are in milliseconds instead of seconds, and requests can be made without required parameters. (Update Aug 9: these have been addressed.)

The user’s privacy is being violated. The authorization page presents no information on what the application is requesting, nor on what worldcoin.org is releasing. There are no application terms of service or privacy policy links.

OpenID Connect Background

OpenID Connect (OIDC) is a popular identity protocol used by social login services such as Apple, Google, Facebook, and Microsoft. There are libraries in all popular programming languages, and plug-ins for all popular platforms.

The typical user experience with OpenID Connect starts with clicking on a button in the app, which then redirects you to your provider — Apple, Google, etc. If you don’t have an active session, you are prompted to log in to your provider. If you have not used the app where you started, you will be shown what will be shared and asked if you want to continue. If you continue, you will be redirected back to the app, sharing your information. There are a few variants on what is sent back and forth between the app and your provider, but the end result is usually an ID Token that is digitally signed by the provider. The contents of the token indicate who issued it, which user it is for, when it was issued, when it expires, which app requested it, and handshake data to ensure the token is for the user that started the flow.
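
To make those checks concrete, here is a minimal sketch, in Python, of the claim validation a relying party performs after verifying the token’s signature. It is simplified (for example, it assumes “aud” is a single string), and the function name is illustrative, not part of any provider’s API.

import time

def validate_id_token(claims: dict, issuer: str, client_id: str, nonce: str) -> None:
    # Minimal claim checks after signature verification (not shown here).
    now = int(time.time())
    if claims.get("iss") != issuer:
        raise ValueError("unexpected issuer")                 # who issued it
    if claims.get("aud") != client_id:
        raise ValueError("token issued to a different app")   # which app requested it
    if claims.get("exp", 0) <= now:
        raise ValueError("token has expired")                 # when it expires
    if claims.get("nonce") != nonce:
        raise ValueError("not bound to this login attempt")   # the handshake data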

OpenID Connect is built on top of OAuth 2.0, the most popular authorization protocol on the internet. (I happen to have led the initial design, and am listed as the editor.) These protocols have been around for over a decade and have had significant security analysis and improvements including in the IETF OAuth meetings, the OpenID Foundation, and OAuth Security Workshop. There are 28 published IETF RFCs and 12 OpenID Connect specifications.

Worldcoin launched with support for OpenID Connect, significantly lowering the barrier for developer adoption, and building on the decade of security and deployment experience. (My startup, Hellō chose to support OpenID Connect for the same reasons.)

There are two significant documents that have been under development for a couple of years: the OAuth 2.0 Security Best Current Practice, and OAuth 2.1. The latter does not introduce any new features, but compiles a dozen documents into a single coherent document that removes conflicting information.

Unfortunately it does not look like Worldcoin consulted them.

Adding World ID to Hellō

I have integrated all the popular OpenID Connect providers into Hellō (Apple, Facebook, GitLab, Google, Line, Microsoft, Twitch, Yahoo) and OAuth 2.0 providers (Discord, GitHub, Instagram, Mastodon, Reddit, Tumblr, Twitter, and WordPress). We also support Metamask, WalletConnect, and passkeys. Our code deals with how each of the providers is their own “special snowflake”, so I thought adding World ID would be straightforward, and we would have access to the unique “human” claim Worldcoin promised.

Signing up for World ID

To access the Developer Portal, you need a World ID which requires you to install the World App. I installed it on my iPhone, verified my phone number, and was then prompted to create a password. The app then crashed. Oops! After restarting, the app displayed “Creating World ID” and a counter that indicated it was going to take some time. I set my phone aside and when I checked it later it had crashed again. I restarted the app, was shown the passport-like display, and then once again displayed “Creating World ID”. As you can see from the screen grabs, some time passed between the initial install screen and the other screen shots included below. Eventually the World App was happy and I had a World ID.

A password? It’s 2023. Why not support passkeys? All modern iOS and Android devices support them. I understand the desire to support older and less expensive devices, but it makes no sense to not support stronger, phishing-resistant authentication when available. Passwords just increase the attack surface.
Face ID opt-in? The World App by default is able to log into a site without further authentication. There is no PIN or Face ID when you open the World App, or use it to log into another app. I.e., if your phone is unlocked, someone can use it to log into a website just by scanning the QR code. If you poke around in the settings, you can enable Face ID, but I was not prompted to turn it on.
World ID Developer Portal

To access the developer portal, you must log in with your World ID. The website shows a QR code that you scan with the World App on your phone. The process is smooth, and you see a logo and verified mark beside “Worldcoin Developer Portal”, giving me confidence I’m logging into the correct app with my World ID.

Developer Experience

If you are not a developer, you may be tempted to skip this section. Don’t. There is not much to it, and that is where the issues start.

After logging into the developer portal, I was prompted to enter and then verify my email address. Creating an app was straightforward. (Almost as straightforward as at the Hellō Console ;)

The initial delight of the simple experience disappeared when I realized key configuration was missing. There was no mechanism to upload a logo. No terms of service or privacy policy URLs. No verification process so my app would show up as verified like the Worldcoin Developer Portal.

register API

The Sign In Reference documents a Register App API, an alternative to registering through the portal.

The one required parameter is “redirect_uris”, which is mislabelled as a string, despite the example showing it as an array of strings. The description:

URLs the user will be redirected to after authentication. Must be HTTPS, and can always be updated in the Developer Portal.

… is strange, as there is no authentication with the API, so how would Worldcoin know which application I can update?

The example curl code that you can copy to run yourself:

curl -X POST https://id.worldcoin.org/register \
  -H "Content-Type: application/json" \
  -d '{
    "client_name": "Example Application",
    "logo_uri": "https://app.example.com/logo.svg",
    "redirect_uris": ["https://app.example.com/callback", "https://app.example.com/redirect"],
    "application_type": "web",
    "grant_types": "authorization_code",
    "response_types": "code"
  }'

… shows an SVG file being provided as the logo. Wow. No providers allow SVG files due to the XSS security risks of an attacker embedding malicious JS in the image, which would then be loaded on id.worldcoin.org if the SVG was not run through a sophisticated filter.

Worldcoin responded to this concern on Aug 9:
Worldcoin developer portal is not vulnerable for the following reasons:
- The Worldcoin developer portal has a strong Content Security Policy (CSP) that prevents external images like a malicious SVG from loading entirely.
- The Worldcoin developer portal uses <img> tags to embed logo_url content, which prevents embedded scripts inside SVGs from executing.
- Additionally, apps must currently be manually verified to display logos, which has the added benefit of preventing impersonation scams.
UPDATE Aug 12: the example shows using a .png instead of a .svg file. The API provides the same error though.

Unfortunately (or fortunately, given all the security risks of an anonymous registration API?), I was not able to test this: the API was not working. I tried a number of parameter permutations, but the server would always return an HTTP 500 response code and:

{
  "code": "server_error",
  "message": "Internal server error. Please try again."
}

World ID Login Experience

With a configured app, I could now check out the World ID user experience on my own app. (See below for how I did this without writing code.)

Starting the flow in a browser looks similar to the login flow for the developer portal. Looking more carefully, it looks exactly the same. There is no indication that I am logging into my app vs the Worldcoin Developer Portal.

In the World App, the flow looks similar with the exception that Hellō is not a verified app. The only signals of which app is being logged into are the letter “H” as the logo, and “Hellō” as the app name. Note there are no terms of service or privacy policy links for Hellō anywhere, and no indication of what is being shared with Hellō.

Platform Detection

While resizing my browser window to access the developer tools, I noticed the web interface naively just looks at the window width to decide if it is on a mobile device. Pity the poor user trying to use their World ID on a web app where the browser window is narrow.

Developer Portal Session Length

I went back to the developer portal a day later, and my session was still active. I would have expected it to time out after an hour of inactivity at most. Sure, I should have logged out, but an attacker could take over a sloppy developer’s app.

Protocol Exploration

Ok, we are going to start to get a little more technical. If you have some idea of how the web works, you should be able to follow along.

Given the nature of OpenID Connect, you don’t need to write any code to explore what a provider will do. You can open up a URL with the right parameters to make an authorization request, and then use the browser’s developer tools to see what was sent back. This is all I did in my exploration.

I started off with a simple and incomplete request to see what errors I got. (new lines are for readability)

https://id.worldcoin.org/authorize
?client_id=app_404ea2eb1e2d5155c05c3b2878ca1ade
&response_type=id_token
&redirect_uri=https%3A%2F%2Fwallet.hello.coop%2Foauth%2Fresponse%2Fworldcoin

To my surprise, this worked. I got back the following JSON in the ID Token.

{
  "header": {
    "alg": "RS256",
    "typ": "JWT",
    "kid": "jwk_dad75cf7c4bc159276c2c748a196b11d"
  },
  "payload": {
    "iss": "https://id.worldcoin.org",
    "sub": "0x233c7136721dd85256e936a6a0b5a67b0f904bed6d1ce91d6d7364e9064f9cc2",
    "jti": "7fcd4fb6-63c4-4019-9ed2-43b14af657e0",
    "iat": 1691444766901,
    "exp": 1691448366901,
    "aud": "app_404ea2eb1e2d5155c05c3b2878ca1ade",
    "scope": "",
    "https://id.worldcoin.org/beta": {
      "likely_human": "weak",
      "credential_type": "phone"
    },
    "nonce": "1691444748611"
  }
}

Why was I surprised? I should have gotten an error response.

The “scope” parameter is required per OpenID Connect, and must contain the “openid” scope to differentiate between an OpenID Connect request and an OAuth 2.0 request.

UPDATE: Aug 9
An error is now returned if the “openid” scope is not provided

Either a “nonce” or a PKCE “code_challenge” should be required to prevent replay attacks, per the OAuth Security Best Current Practice. As this is an implicit flow, it would have to be a “nonce”.

UPDATE: Aug 9
An error is now returned if a “nonce” is not provided
UPDATE: Aug 12
I only checked the “id_token” response type. Reading the docs I noticed they state the nonce is only required for the “id_token” response type, not the “code” response type. That is not spec compliant. It is required for the code response type as well since it is required to be in the “id_token” you get back when exchanging the code at the token endpoint.
UPDATE: Aug 12
I had only been testing the “id_token” response type. The “code” response type does NOT return a “nonce” in the ID Token even if one is provided in the authorization request. I.e., there is no way to check that the “id_token” you received is associated with the session that started the flow, while an attacker can inject a “code” and have you receive back a different ID Token. This is not a huge risk though, since if you are validating the ID Token, it will fail because there is no “nonce”!
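
For reference, the PKCE alternative mentioned above derives a public “code_challenge” from a secret random verifier. Below is a minimal sketch of the S256 method from RFC 7636, using only the Python standard library; it is generic, not specific to Worldcoin.

import base64
import hashlib
import secrets

# The app keeps the verifier secret and sends only the challenge in the
# authorization request; the verifier is revealed at the token endpoint.
code_verifier = secrets.token_urlsafe(48)             # 43-128 characters of entropy
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode("ascii")).digest()
).rstrip(b"=").decode("ascii")                        # S256: unpadded base64url of SHA-256

print(code_verifier, code_challenge)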

Looking at the ID Token, I was even more surprised. The “iat” (issued at) and “exp” (expiry) claims are in milliseconds instead of seconds. Most validation libraries will consider the ID Token to be valid as it was issued in the far future. This online parser thinks the token was issued on November 10, 55569, over 53,000 years from now.
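
To see why millisecond timestamps defeat standard validation, here is a small sketch using the “iat” value from the token above (Python standard library only; the year arithmetic is approximate):

import time
from datetime import datetime, timezone

iat = 1691444766901                                    # "iat" claim from the ID Token above

# Interpreted as milliseconds (what was presumably intended):
print(datetime.fromtimestamp(iat / 1000, tz=timezone.utc))   # 2023-08-07 ... UTC

# Interpreted as seconds (what RFC 7519 and JWT libraries expect),
# the value lands tens of thousands of years in the future:
print(1970 + iat // 31_556_952)                         # ~55569

# So a validator that only checks "exp > now" will accept this token indefinitely:
print(iat > int(time.time()))                           # True: "issued" far in the future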

UPDATE: Aug 9
This has been fixed. “iat” and “exp” are now in seconds

Despite not providing a “nonce” in the request, I got back one with the value of “1691444748611”, a millisecond timestamp that is 18290 ms prior to when the ID Token was issued (1691444766901). A nonce should be a random value, so clearly there is some logic failure here, as there should not be a “nonce” in the token when one was not provided.
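
For contrast, here is a sketch of how a relying party typically generates and later checks a nonce. The helper names are illustrative only; nothing here comes from the Worldcoin docs.

import secrets

def new_nonce() -> str:
    # An unguessable, single-use value; store it in the user's session
    # before redirecting to the provider.
    return secrets.token_urlsafe(32)

def check_nonce(id_token_claims: dict, expected_nonce: str) -> None:
    # The ID Token must echo back exactly the nonce this session sent;
    # anything else suggests replay or token injection.
    if id_token_claims.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch")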

My last surprise in the ID Token was the “https://id.worldcoin.org/beta” claim:

"https://id.worldcoin.org/beta": {
"likely_human":"weak",
"credential_type":"phone"
}

As a user, I had not been informed that this data was going to be shared. As a developer, I had not asked for it. I consider this a privacy issue.

Next I tried sending a valid request. The valid scopes are not documented, but poking around at the examples I saw that “openid”, “email”, and “profile” were being used. Here is what I sent next, that includes a “nonce” and “scope” parameters:

https://id.worldcoin.org/authorize
?client_id=app_404ea2eb1e2d5155c05c3b2878ca1ade
&response_type=id_token
&redirect_uri=https%3A%2F%2Fwallet.hello.coop%2Foauth%2Fresponse%2Fworldcoin
&nonce=nonce1
&scope=openid+email+profile

And I received the following JSON back in the ID Token:

{
  "header": {
    "alg": "RS256",
    "typ": "JWT",
    "kid": "jwk_dad75cf7c4bc159276c2c748a196b11d"
  },
  "payload": {
    "iss": "https://id.worldcoin.org",
    "sub": "0x233c7136721dd85256e936a6a0b5a67b0f904bed6d1ce91d6d7364e9064f9cc2",
    "jti": "bc24c584-e061-4c62-aaa1-c85cbb38579e",
    "iat": 1691443980673,
    "exp": 1691447580673,
    "aud": "app_404ea2eb1e2d5155c05c3b2878ca1ade",
    "scope": "openid email profile",
    "https://id.worldcoin.org/beta": {
      "likely_human": "weak",
      "credential_type": "phone"
    },
    "nonce": "nonce1",
    "email": "0x233c7136721dd85256e936a6a0b5a67b0f904bed6d1ce91d6d7364e9064f9cc2@id.worldcoin.org",
    "name": "World ID User",
    "given_name": "World ID",
    "family_name": "User"
  }
}

We got back the “nonce” value we sent. The “email”, “name”, “given_name”, “family_name” claims are not useful. Based on the documentation, it looks like they included them because some library complained.

Directed Identifiers

Next up was checking what was different in ID Tokens between different applications. I was hoping to confirm that my identifier, specifically the “sub”, would be different across applications, per the 4th Law of Identity.

My app was given:

"sub":"0x233c7136721dd85256e936a6a0b5a67b0f904bed6d1ce91d6d7364e9064f9cc2"

… and the developer portal was given a different identifier, which is what we want:

"sub":"0x29a6e7c076c0922e146e96c92b5d4c8833818cc89166f483903a731d2e757253"

… and the second production app I setup was given yet another identifier:

"sub":"0x2d20b5094a2e2f8be4542f40e7eca9b701b4763877cc4fede42dfa6ea756a31c"

The latter is going to be a problem for developers, as they want to know it is the same user across their apps. Many large providers that only allowed one app per developer have kludged together mechanisms for developers to get this common identifier. Wanting the same user identifier across each of their apps will motivate developers to use the same “client_id” and “client_secret” across apps, dissolving the separation of concerns.
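
Worldcoin does not document how these values are derived, but for context, the construction below is the kind of pairwise (directed) identifier scheme described in OpenID Connect Core section 8.1: hash a per-app sector identifier with the provider’s internal account id and a private salt. The delimiter, salt handling, and “0x” prefix here are illustrative assumptions, not Worldcoin’s actual algorithm.

import hashlib

def pairwise_sub(sector_identifier: str, local_account_id: str, salt: str) -> str:
    # Each app (sector) sees a different but stable "sub" for the same user,
    # preventing cross-app correlation while staying consistent per app.
    digest = hashlib.sha256(
        f"{sector_identifier}|{local_account_id}|{salt}".encode()
    ).hexdigest()
    return "0x" + digest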

Access Token

The next thing I explored was setting “response_type” to “token” instead of an “id_token”. This provides an access token that can be used to call the user_info endpoint, and potentially any other API exposed by the provider.

First of all, this should not have worked in an implicit flow where the “token” is returned as a redirect parameter. It is now in the browser history and available to any malicious code with access. (Security BCP 4.3.2)

The bigger issue is that the access token is an ID Token. There was no difference in the token format, in conflict with RFC 8725 sections 3.11 and 3.12, conflating authorization with authentication.

response_mode

The default mechanism to return the results from an authorization request is to include them as query parameters. This has the downside of sensitive parameters such as tokens being captured in server logs that save the URL being requested, and potentially exposes the parameters to unencrypted wireless networks.

The “fragment” response mode returns the results in the URL fragment, which is not sent to the server by the browser. This allows a web app to access the results. The “form_post” mode passes the parameters to the server as a form post rather than as query parameters, keeping the parameters out of the server logs of the page request.
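
As a sketch, a request asking for “form_post” differs from the earlier examples only by one extra parameter. The values below are reused from the requests above, and any OIDC client library would build this URL the same way:

from urllib.parse import urlencode

params = {
    "client_id": "app_404ea2eb1e2d5155c05c3b2878ca1ade",
    "response_type": "id_token",
    "redirect_uri": "https://wallet.hello.coop/oauth/response/worldcoin",
    "scope": "openid",
    "nonce": "nonce1",
    "response_mode": "form_post",   # ask for an auto-submitted POST instead of query parameters
}
print("https://id.worldcoin.org/authorize?" + urlencode(params))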

I tried getting results by passing the “response_mode” parameter set to both “fragment” and “form_post”. Unfortunately, Worldcoin ignores the “response_mode” parameter. All results are returned as query parameters.

Documentation Concerns

Developers depend on a provider’s documentation to guide them. Unfortunately, the Worldcoin documentation is lacking, leading to developer frustration, or worse, vulnerable applications.

The only explanation for how to start the OpenID Connect flow is in Further Reading | OIDC Explainer, not where a developer would start. The Quickstart tells developers to follow the Auth0 Integration.

I’m surprised the Auth0 integration would work given the non-standard ID Token “iat” and “exp” claims — but perhaps it only uses a token flow? — I have not investigated.
nonce

The OIDC Explainer casually mentions the “nonce”, does not say it is required (it should be), and provides no explanation what the developer should do to confirm the ID Token they have received is linked to the authorization request they started, which is one of the purposes of the ”nonce”.

scope

No mention of the “scope” parameter, which is required. No list of valid scopes is provided. I figured out the “openid”, “email”, and “profile” scopes by looking at examples. While “openid” is required, the “profile” scope returns generic information, and the “email” claim is just the “sub” claim prepended to “@id.worldcoin.org”.

UPDATE Aug 12:
The documentation has been updated to include the “scope” parameter and shows “openid%20profile%20email”, so one can more easily guess what scopes are allowed. The docs say that the default scope is “openid”, but it is now required to be spec compliant. The “nonce” parameter is now stated as required when using id_token response type, although best practice is for there to be a nonce in the id_token returned for “code” response types as well. The docs are moving in the right direction!
Closing Thoughts

Given the $125M raised and the involvement of Sam Altman of Y Combinator and OpenAI fame, I had expected a robust OpenID implementation that would have lowered the bar for developers to adopt World ID and inspired innovation.

This disregard for following the standard and published best security practices is alarming. Not informing a user about what is being released about them displays a disregard for user privacy. A step backwards for the identity industry.

Tuesday, 08. August 2023

Doc Searls Weblog

All home now


From 2007 until about a month ago, I wrote on three blogs that lived at blogs.harvard.edu. There was my personal blog (this one here, which I started after retiring my original blog), ProjectVRM‘s blog (also its home page), and Trunkline, a blog about infrastructure that was started by Christian Sandvig when he and I were both fellows at Harvard’s Berkman Center for Internet and Society (and which I kept alive with very occasional posts since then). The image above is from those blogs’ header images.

All three are now re-homed. This one is at doc.searls.com (a URL that had redirected to blogs.harvard.edu/doc for many years), ProjectVRM’s is at ProjectVRM.org (a URL that had redirected to the blogs.harvard.edu/vrm address), and Trunkline’s (which had been at blogs.harvard.edu/trunk) is now at trunkli.org.

Their hosting service is Pressable.com, a WordPress subsidiary that worked with the Berkman Klein Center to make sure that every link on the Web pointing to pages at those three Harvard-hosted blogs now goes to those pages’ new locations, without anything being 404’d. Which is just. freaking. awesome.

My thanks and gratitude to all the people who helped, both within those organizations and in my own network of friends. Together they demonstrate that the Web is a living archive and not just a whiteboard.


Timothy Ruff

Calling All Wallets

An open solicitation for high-volume wallet projects

If you have, work with, or are aware of a digital personal wallet project capable of onboarding and supporting hundreds of millions of individuals in a short period of time, we’d like to speak with you.

Digital Trust Ventures (DTV) has launched a startup, currently in stealth mode, that will generate up to hundreds of millions of wallet holders sometime around the middle of 2024. This startup will not provide personal wallet services, and instead will rely on partners to satisfy the near-instant strong demand that will be generated upon launch. We are seeking two partners to be part of this launch, who will offer competing wallets that are fully interoperable with each other and with the DTV startup’s KERI-based tooling.

A personal wallet partner must:

Be capable of scaling to hundreds of millions of users in a short period of time (from 1 to 30 days)
Provide appropriate self-service and other support for all users
Support KERI and ACDCs
Be thin/keys-at-the-edge, with ACDCs managed through hosted cloud agents
Support all ACDCs, regardless of project participation
Support simple messaging
Have an excellent, simple UI/UX
Be ready to go by Summer 2024

The wallet must support freemium pricing; a strong monetization model for the personal wallet along with other details of the project will be shared under NDA with qualified prospective partners.

In exchange for providing all of this, the partner will enjoy global recognition on a stage with no equal, and the chance to quickly onboard hundreds of millions of users. We are looking for strong, established companies with clear capabilities to handle a project this large, not startups.

If you are involved with or know of a project that has these capabilities, please get in touch to learn more by sending an email to wallets@digitaltrust.vc.

Monday, 07. August 2023

Wrench in the Gears

The Far Side of the Alleghenies – Signals, Cells, Networks, and Nature


What follows is a bit of a ramble. My experience is that when I go forth in the world, stories sometimes land at my feet. Disparate episodes logged away and semi-forgotten may pop into my consciousness later bringing clarity to ideas I’ve been pondering. So, consider this entry a rather unstructured journaling exercise, notes for a work in progress spinning around the nature of communication (or mis-communication), signals, cells, electricity, and nature. There are synchronicities there, I’m sure.

I went camping this week. It wasn’t anything too adventurous, car camping, but my first solo venture as a single traveling gal. I needed to be out of my house for four days, and I’m on a budget, so it seemed like a good option. Robin Wall Kimmerer’s advice that humans should seek out and listen to the old beings was rolling around in the back of my mind. In the dog days of summer where better to be than with ancient moss-covered stones who keep their counsel in a deep, watery shade? I packed up my tent, the one that had served me well in South Dakota during the summer of 2020, and headed west across the Allegheny Front. It’s the wrinkled mid-section of Pennsylvania where a late Paleozoic collision with the continent that would become Africa left its indelible mark. For twenty-eight dollars in Turnpike tolls, I bought the right for my trusty Subaru to tunnel through several old, old sandstone mountains – sections of petrified sea floor turned towering range then over millennia worn down to softly undulating ridges blanketed with maple, beech, oak, tulip poplar, hemlock, rhododendron, and mountain laurel. 

In this photo you may see that I ruined my glasses, which were already out of balance. They fell off my head when I was hammering in my tent stakes, and I didn’t notice until I stepped on them. I’d brought several books to read, including Neal Stephenson’s Cryptonomicon, recommended to me by a friend. I felt like Burgess Meredith in that end-of-the-world Twilight Zone episode. I muddled my way through though and ordered a new pair yesterday. I’ll need them if (when) I get a job interview. 

Under the weight of that ancient rock, I was transported into a different kind of time. A perfect accompaniment for a sojourner hoping to wander outside traditional time for a while was the audio-book of Madeleine L’Engle’s “A Swiftly Tilting Planet.” It’s the third installment of her time quintet and was published in 1978. The plot has to do with our potential to alter timelines, which brings to mind physicist Hugh Everett III’s Many Worlds Theory; the Oscar-winning “Everything Everywhere All at Once;” and the clip I made with Marc Petit from Epic Games and Neil Trevett of the Khronos group and Nvidia Ecosystems where the two men discuss the potential for blockchain ledgers to be used to explore alternative futures as a sort of virtual time/teleportation machine.

In L’Engle’s book, a crisis arises in the middle of Thanksgiving dinner. Charles Wallace is tasked by Meg’s mother-in-law to use St. Patrick’s Rune, which she learned from her grandmother, to save the earth from nuclear destruction by Mad Dog Branzillo. Branzillo’s character is a South American dictator who has ancestral ties to Wales and Connecticut, and Vespugia (a fictional country near Patagonia). The Cain and Abel-inspired plot features the bloodlines of two brothers, one who embodies good, the other evil.

Charles Wallace’s task is to kythe, enter into bodies of a series of people living in the past; discover a path to “what might have been;” and then subtly shift the balance away from world annihilation. This he ultimately achieves with the help of a unicorn companion by the name of Gaudior. Despite the happy ending, I was somewhat disconcerted, given what I know now, by the use of a nuclear threat to force a change. Also, when the “good guy” takes his place in the new timeline his pure intentions are for “peace” and “redistribution of goods” across the world. Of course, this “sounds good” to my former left-leaning self, but I now realize that the intent of the planetary computer / World Brain system is to substitute cybernetic homeostasis for “peace.” Also, you can be sure that whatever redistribution takes place will not be beyond the reach of global defense and finance interests. Most likely it would be facilitated by a HAL 9000-inspired web3 protocol layer with built-in game mechanics to better manage us as agents in an “open metaverse” collective.

Charles’s sister Meg accompanies him on this journey from her bedroom via sibling telepathy and provides occasional research support. Her cuddly companion and comfort animal is a recently-arrived stray dog whom Charles named Ananda, Sanskrit for eternal bliss and union with the godhead. An anchor for the story is a fictitious book, “The Horn of Joy,” written by a fictitious Civil War-era author, Matthew Maddox. A theme running throughout the plot line is the importance of remembering the ancient songs, bringing back lost harmonies, and finding joy in the world. I was going to the mountains in search of just this. Below is the rune from the story, based on the St. Patrick’s Hymn before Tara. Meg’s mother-in-law Branwen was of Irish descent. Being more out on my own these days in choppy waters, I’ll admit to working to commit it to memory. I find it comforting, and maybe you will, too.

“At Tara in this fateful hour

I place all Heaven with its power

And the sun with its brightness

And the snow with its whiteness

And the fire with all the strength it hath

And the lightning with its rapid wrath

And the winds with their swiftness along their path

And the sea with its deepness

And the rocks with their steepness

And the earth with its starkness

All these I place

By God’s almighty help and grace

Between myself and the powers of darkness!”

Though modest compared to their western cousins, the ridge of the Appalachians made no easy crossing 250 years ago. On the return trip, I took the long way home to save on tolls, which led me east along U.S. Route 40. It was a passage that started out as a footpath through Maryland’s Cumberland narrows blazed by Chief Nemacolin and was later improved by General Braddock for military maneuvers during the French and Indian War. As pressure mounted from frontiersmen looking to make their futures in the Northwest Territory, it became one of the nation’s first federally-funded public works projects. The National Road linked the Potomac and Ohio watersheds, made many fortunes, and dealt a terrible blow to the Indigenous peoples on the other side of the range.

The route continued north along I-81 from Hagerstown to Harrisburg. When I saw the signs for the Carlisle exits, I knew that I had to stop to pay my respects to the children sacrificed to residential boarding schools. Carlisle was the model scaled brutally across the United States and Canada, places where native children, stolen from their families, built their own prisons. That reminds me of students today being trained to build extended reality as part of their “education.” The hospital where many of the children died is now a boutique hotel for visiting dignitaries.

 

When I came to Carlisle five years ago there was a boulder with a bronze plaque honoring Captain Richard Henry Pratt who proclaimed what was needed was to “kill the Indian to save the man.” This time I looked for it, but thankfully it appears to have been removed. The site of the former school is home to the US Army War College, which provides higher education opportunities to officers studying military history, strategy, and war gaming. Before I left, I pulled over to take a picture of their new $85 million 200,000 square foot academic building that was nearing completion. I imagine that fifth-generation warfare in extended reality will be a focus of study.

You have to have a background check in order to enter the campus. The cemetery that holds the bodies of the children who died at the school is situated along a busy road. The afternoon I was there, the agitated staccato of a high lift beeping as it went about its telecommunications tasks added to the steady thrum of passing cars. Whenever there is sacred work to do, energetic forces seem eager to disrupt the proceedings.

Petitions have been made to the US government to exhume the bodies, so tribes can bring their relatives home. In 2021, Rosebud Sioux youth accompanied the remains of ten children back to their ancestral lands in South Dakota. An article from the time quoted Christopher Eagle Bear, “These schools, they played a key part in trying to sever that connection to who we are as Lakota. They took away our language, they made it impossible for us to be who we really are.” There are lessons to be learned, that are applicable to our present circumstances. For the Lakota they stole their language and oral histories. Today we are moving towards a future managed by memes and mind-viruses. We are nearing a time of post-literacy, where images, often emotionally-charged, are curated to trigger us into pre-determined reactions. The strategic and harsh fracturing of indigenous culture, on the grounds of the Army War College, is something upon which we all should reflect.

 

Targeting children was a way to root out different ways of knowing the world. With that in mind, the focus on boosting today’s children into extended reality with Roblox, and ed-tech, and books/films like “Ready Player One” is simply a continuation of longstanding attempts to disconnect youth, native youth, from community and homelands. A century and a half later the trauma wrought over generations could be leveraged to justify pulling the linguistic cosmology of indigenous peoples into the machine. Sold as language preservation, a secondary purpose, I think, is to feed the linguistic cosmology of cultures that could be a threat to artificial intelligence into the circuits, so that a new, improved web3 can better know its opponents. Open-source intelligence is at the base of any successful military campaign. I think of the language and culture that was stolen from the children taken to Carlisle. I think about L’Engle’s book and the devastating harm caused by the suppression of ancient songs and harmonies.

The small cemetery plot flanks the entrance along a busy road, right next to the loading dock for the post’s Commissary (grocery store), which energetically feels significant. I’d taken my last braid of sweetgrass with me, and I knew I needed to leave it as an offering, a signal of care. I didn’t have other materials with me, but as often happens I simply looked around after I parked my car and creator provides. There on the ground were even tinier acorns than the ones I’d used in Bucks County along with feathers, lots of them, a blackish blue, some freshly fallen oak leaves, and a clump of wood sorrel with its tiny yellow flowers. I arranged them next to the small monument, and then I entered the gate.

Inside were six rows of white headstones, not original. The graves had been moved to their current site years ago. Most of the children died in the 1880s and 1890s. Carved upon them were most of their tribal affiliations. A few victims had been brought from as far away as Alaska. Family separation, attempted cultural erasure, indoctrination into “civilization” – what was going through the minds of these military men who perceived that children living on a far-away tundra could be such a threat? A different kind of consciousness and spiritual worldview would not be tolerated in the land of the free and the home of the brave. Many headstones had Apache designations, which brought to mind the afternoon and evening Jason and I spent at Cochise Stronghold in Arizona, and what happened to the Apache and Geronimo and his skull.

I paused before each grave and said the name of the child – I see you, Joseph. I see you, Lucia. I’m sorry this happened to you. Because in order to heal we must have the strength to see the harm, to acknowledge it, and apologize, which I did probably a hundred times that afternoon. So many children whose lives were lost in a misguided effort to eliminate an animist worldview and uplift the enlightenment thinking that would take us down the path to artificial intelligence, machine learning, and dehumanization many decades hence. We’ve lost any semblance of right relationship we’ve ever had. That is what I was trying to start to reclaim by immersing myself in sacred nature for four days, to become de-domesticated and re-attuned to the communications of a naturally-networked world.

My destination was Ohiopyle State Park, located about an hour and a half southeast of Pittsburgh near the Maryland border, in a region known as the Laurel Highlands. Wandering through the mountain laurel and blooming rhododendrons, I thought back to the myth of Daphne. Pursued by Apollo and fleeing his unwanted advances, she begged her father Peneus, a river god, to turn her into a laurel.

Yes, the Greek laurel tree is not the same as the mountain laurel, but I feel a connection between this god of the sun and light pursuing a human form and the role of water, fluid dynamics, and phase transition in this unfolding story. Later, visiting a waterfall along Jonathan Run that flowed over smooth sandstone, I saw the outline of a woman created in the ripples of the water. Her head was in profile, shaded by laurels. An errant stone had broken loose to create her eye. Hair flowed out behind her, billowing skirts below, and above a trim waist were breasts created by paired currents swirling sand. To me it felt like a communication of life to life. Two women seeking shelter in nature. My photos don’t do the image justice. The apparition relied on the way the light fell across the stone and water. Trust me though, Daphne was there.

I first came to the area as a young professional working on an interpretive plan for the Lincoln Highway Heritage Corridor (click here if you want to see my early research skills in cultural landscape analysis, two decades before I encountered blockchain and web3). The area is known for two things – Frank Lloyd Wright’s masterpiece Fallingwater and the Youghiogheny River’s white water. In the Lenape language Youghiogheny means “river that takes a contrary turn,” because it flows north and empties into the Monongahela River, which flows to the Ohio, the Mississippi, and on to the Gulf of Mexico. As someone who’s taken a contrary turn herself, I found it a fitting destination. The river’s contrariness, however, is a good thing in that it brings warm temperatures from the south along its route and supports unique ecosystems like the Fern Cliff National Natural Landmark. It is a crossing place where plants from northern and southern climates co-mingle.

Nestled in a meander, the Fern Cliff peninsula is home to abundant, unique wildflowers. During the nineteenth century the town of Ohiopyle became a recreational destination for affluent Pittsburgh families, who traveled there on the B&O railroad. The railroad line is still active and is considered one of Amtrak’s most scenic stretches in the Northeast. Numerous spur lines, now abandoned and in some cases redeveloped as long-distance bike trails, were once used to transport coal and lumber from the surrounding hills. The name of the town comes from the Monongahela Mound Builders word ahi opihəle, meaning “it turns frothy white.” The main street sits opposite a waterfall with a twenty-foot drop. On this trip, however, I availed myself of neither whitewater nor architectural landmarks. I just focused on rocks and water and moss and ferns and wildflowers and tried to sense whatever lessons might be out there for me to find.

As a strange and unsettling bookend to my week, I ended up watching “Masking Threshold” Saturday night. It’s a low-budget psychological horror film produced by Austrian filmmaker and digital culture hacker Johannes Grenzfurthner through his artist collective Monochrom, which is based in Vienna (think Atoms for Peace, token engineering, psychoanalysis, and smart city cybernetics). I’d been researching Grenzfurthner last year after I stumbled upon another Monochrom project, a laminated pressed-flower “community currency” carried out with Ian Grigg of Mattereum, developer of Ricardian contracts. There is a synchronicity with their pressed flower currency, because I saw many beautiful late-summer wildflowers on my trip.

Source Link: https://www.youtube.com/watch?v=U-hbxxLgpsQ

 

Source Link: http://www.monochrom.at/flowerbank/flowerpicking—april-17-2005/ximg_2057.html

 

Source Link: http://www.monochrom.at/flowerbank/flowerpicking—april-17-2005/ximg_2050.html

The film premiered at the 2021 Fantastic Fest in Austin, Texas (think blockchain, supercomputing, and defense innovation) and featured an alarming poster design: pelvic bones paired to evoke ears, pierced by a bloody knife. The protagonist is a former physics student at the University of Central Florida in Orlando, a hub for the defense simulation and modeling industry, who develops an unusual case of tinnitus after running an experiment with vibrating Chladni plates. He creates a home “laboratory” to try to understand the source of the tones that are ruining his life. As the film progresses, his experiments indicate the tones relate to electrical life forces emanating from cells “talking” to one another. He cannot tolerate being around groups of people, because there is too much vibrational noise from the cells in a crowd communicating.

“Masking Threshold” – timestamp 30 minutes

“Masking Threshold” – Timestamp 30 minutes

Source Link: https://www.imdb.com/title/tt11586384/ Source Link: https://fantasticfest.com/ Source Link: https://www.monochrom.film/feature/masking-threshold/#1555240354578-cc49295b-f43b Source Link: https://www.rottentomatoes.com/m/masking_threshold

His first experiments involve plants: coleus, bamboo, moss, and a vat of algae cultivated in his tub. He then starts collecting ants, slugs, and mice; purchases parakeets from a pet shop; and even steals the neighbor’s beloved dog Pebbles (think of the ripples caused by pebbles dropped in a pond). He starts to deploy his knife skills on them to see how the tones he’s experiencing change. He imagines there are different tones conveyed by a life experiencing violence and death and, based on an experience he has with ants and the hivemind, posits that witnesses to that violence are permanently changed. He imagines that being able to discern the changes in signal processing would be of great importance in criminal investigations and other applications. At this point he starts to consider that his “tinnitus” may actually be a gift, a grim super-power.

“Masking Threshold” – timestamp 56 minutes

“Masking Threshold” – timestamp 50 minutes

The man is exacting in how he carries out his experiments, and contrasts his version of science, using documented quantitative proof, to earlier alchemical practices. His passing references to alchemy got my attention. Ian Grigg, Grenzfurthner’s partner on an earlier art project, works at Mattereum, a company led by Vinay Gupta, the strategic architect of Consensys, who has publicly stated he is involved in esoteric practice. The film includes a screenshot of an alchemical treatise written by Micheel Sandivogious.

“Masking Threshold” – timestamp 4 minutes

“Masking Threshold” – timestamp 4 minutes

Sandivogious (more commonly spelled Michael Sendivogius) was a sixteenth-century Polish scientist who worked with occultists John Dee and Edward Kelley. He is understood to have been the first person to identify oxygen, generated by burning saltpeter, as the “food of life.” In pre-industrial times processed urine was a source of saltpeter. In the film the protagonist urinates into a test tube as he assesses the tonal change of organic and inorganic liquids. The main character also keeps a small blowtorch that he uses to burn and melt things throughout the film. Salt crystals, which are chemically different from saltpeter, are prominently featured on the Chladni plates and as a method of killing slugs. Salt is part of the alchemical Tria Prima along with sulfur (soul) and mercury (mind). It represents the earth and the body and is a vital element, playing a key role in communicating signals through the nervous system and the brain.

“Masking Threshold” – timestamp 20 minutes

“Masking Threshold” – timestamp 49 minutes

“Masking Threshold” – timestamp 1 hour 2 minutes

Source Link: https://biobeat.nigms.nih.gov/2020/11/pass-the-salt-sodiums-role-in-nerve-signaling-and-stress-on-blood-vessels/

“Masking Threshold” – timestamp 30 minutes

“Masking Threshold” – timestamp 31 minutes

Source Link: https://aerostructures.cecs.ucf.edu/2018/09/

“Masking Threshold” – timestamp 30 minutes

Sendivogius is speculated to have been an early influence on the development of Rosicrucianism, and there is an AMORC chapter in Austin, one of six in Texas. The Alchemy Website features a 1990 article from The Hermetic Journal about Sendivogius, which notes that the alchemist kept a low profile but was included under an anonymous pseudonym in several books lauding the most important alchemists.

I share the following excerpt for the line in the poem about Saturn (bolded in the original). It corresponds to a shot in the film of the character’s key ring. The set is so small and carefully curated that it feels like most of the items in the tiny space must have some significance to the plot. Opposite a key decorated with a bald eagle and flag is a pendant featuring a rainbow-colored (LGBTQ+ / photonics) representation of Saturn within a “gold” (alchemy) frame with embossed flowers (roses?). I’ve noticed recently that the phrase “human flourishing” is coming to prominence in digital narratives. I sense there is a symbolic connection between “flourish” and “flowers.” The roots of the word “flourish” come from the Old French “florir,” which means “to bloom.”

“Masking Threshold” – timestamp 14 minutes

Also, Saturn is a key feature in the work of Brandon Wallace’s Austin-based Plan Systems. See the image below taken from the company’s home page. The ringed planet shows up often in the digital media associated with their projects.

Source Link: https://plan-systems.org/

Wallace, holder of a top security clearance in information warfare with expertise in immersive digital media and sound design, is working to create solarpunk, peer-to-peer tech infrastructure for spatial computing in the open metaverse. Wallace was a speaker at Derrick Broze’s Greater Reset 3 in the spring of 2022. Remember, “Masking Threshold” premiered in Austin in 2021 and Wallace’s unique professional background in signals intelligence, sound, and art is closely aligned to the plot of the movie.

A masking threshold is the point at which the introduction of a new sound prevents a person from hearing the first sound. When two sounds are present, on one side of the threshold you hear both, but once that line is crossed, only the more powerful signal is heard. The concept of “white noise” is related to masking thresholds, and of course Don DeLillo’s book-turned-film, with its insider references to simulations and toxic airborne events, mirrored the East Palestine, Ohio train derailment this past February. It is interesting to consider that digital media is conveyed through waves, and that the modulation and mixing, or heterodyning, of those waves and the information they carry could ultimately affect the way people perceive the world around them. That is the dilemma at the center of Grenzfurthner’s film.

“The greatest tribute paid to Sendivogius by his contemporaries was the publication of Symbola aureae mensae duodecim nationum by Maier (Frankfurt 1617) in which the teachings of twelve greatest alchemical adepts were discussed. This “chain” of wisdom starts with Hermes Trismegistos and ends with “Sarmata Anonymus” also called “Heliocantharus Borealis” who is none other but Michael Sendivogius (Poland was styled Sarmatia just as England was Albion). The fact that Maier did not reveal his name, though he obviously knew him, suggests that Sendivogius asked him not to do that. Such behaviour conforms with his request in De Lapide Philosophorum quoted above and his Society of Unknown Philosophers, while in the preface to the Treatise on Sulphur (published in Cologne in 1616) he says to the reader: “But you may be sure that no necessity is laid upon me to write at all, and that if I have come forward it is only out of love to you, having no expectation of personal profit, and no desire for empty glory, for which reason I here refrain, as I have before done, from revealing my identity to the public” [32]. With my limited knowledge of Latin, I was not able to read the monumental work of Maier to find out what he says about Sendivogius, and J.B. Craven’s short summary is of no help here. Roman Bugaj tells us only that he was enthusiastic and said that he had seen a transmutation performed by the Polish alchemist “with his own eyes”.

The symbolic engraving of Sendivogius in the text of Symbola aureae mensae, showing him in the traditional dress of a Polish nobleman, was again used by Daniel Stolcius, a disciple of Maier in Prague, in his Viridarium Chymicum of 1624 (and once again, this time without the other eleven engravings, as the frontispiece in the Maier’s posthumously published Subtilis Allegoria). Stolcius’s poem accompanying it was:

Michael Sendivogius, a Pole

Though this name in the past has been kept in oblivion, its praise now penetrates the darkness, as it ought to be, indeed.

Prague in Bohemia has well acknowledged his works. He has written twelve books and taught accordingly.

He said: Saturn himself must water the Earth if it, dear sun and moon, shall bear your beautiful flowers. [33]

The alchemist’s name was revealed for the first time in 1613 when his three works under the collective title Tripus Chymicus Sendivogianus were published in Strasbourg but this must have been suppressed by Sendivogius himself, as other editions of his treatises in the following years continued to be anonymous until the second printing of the same in 1621, and the final disclosure in the 1624 Erfurt edition of Michaelis Sendivogi Poloni Lumen Chymicum Novum with a commentary by Johann Ortelius which was later severely criticised in the third Strasbourg edition of Tripus Chymicus in 1628, probably as a reaction of Sendivogius himself.” Source

Source Link: https://culture.pl/en/article/who-was-michael-sendivogius-biography-of-an-alchemist

The entire film takes place within the confines of a small, increasingly squalid room in the protagonist’s home. Though it’s supposed to be Orlando, the set was built in Austria. The main character, whose name we are not given, is emotionally alienated from society and becomes increasingly physically isolated, too, as he soundproofs his messy haven. He doesn’t go out to shop for food but subsists on soylent powder. The plot line mentions his skepticism that God exists, and clippings for the Archdiocese of Florida are shown on the walls. He reminisces about earlier days when he had a social life that mainly involved tech-related projects like building robots.

“Masking Threshold” – timestamp 11 minutes

“Masking Threshold” – timestamp 13 minutes

Part of the character’s backstory is that he’s gay. The Orlando Pulse mass shooting is brought up, with “Pulse” being associated with a life force that echoes his thoughts about cellular communication. This reminds me of Michael Levin’s work on electrical communication in the body that happens outside the neurons and how that connects to morphogenesis. The film grows bloodier as it progresses. At one point the protagonist, speaking of his frustration at not being able to understand the cellular language and wondering if the cells are “planning something,” tightly grasps a knife and draws it across his palm, creating a pulsing slash that he later sews up with coarse black thread.

“Masking Threshold” – timestamp 14 minutes

“Masking Threshold” – timestamp 1 hour 17 minutes

Listen to the clip below, which I made from Michael Levin’s talk with Lex Fridman, on the “magic” of gap junctions and their theoretical potential for use in mind melds. Levin’s area of expertise is bio-electrical communication outside neural networks in the body. He asserts that every part of our body communicates electrically; non-neural communication simply happens on a much slower time scale and in anatomical space. The image below it links to a paper Levin co-authored on ancient bioelectrical communication pathways.

Source Link: https://pubmed.ncbi.nlm.nih.gov/31442749/

 

Increasingly I see “community currencies” of the type Grenzfurthner’s associate Grigg has been promoting as signal-coordinating mechanisms at a population level, equivalent to the cell signals used in the creation of tissues and organs in the body. The main character notes that human bodies evolved to exclude most stimuli, because being in a world bombarded by sensory information would otherwise become unbearable, and that he must have accidentally tapped into a region that was normally off limits, to terrible effect.

Source Link: https://podcast.mattereum.com/episodes/ian-grigg

While many reviews of the film were favorable, the ones I read were rather superficial. The reviewers generally wrote the character off as a demented, obsessed, narcissistic, broken conspiracy theorist who had lost touch with reality by focusing so closely on his symptoms. I did not see anyone bring up the history of psychoacoustics as it might pertain to the plot. This is something I’ve been very interested in over the past few years as I’ve come to understand the earliest days of the Internet and J.C.R. Licklider’s fascination with psychoacoustics and man-machine symbiosis, a clear forerunner of bio-digital convergence. There is also the important history around Alfred Loomis, Loomis Labs, the MIT Rad Lab, and the National Defense Research Committee. Sound waves, perception, and cognition have been an intense area of focus for the physics, military, and intelligence communities for generations. Not addressing this in reviews makes it easier for viewers to imagine the protagonist as simply a misguided, broken human being rather than a student who may have unwittingly gotten caught up in some sort of cutting-edge experiment.

Source Link: https://simson.net/ref/lcs_35/Licklider.htm

 

Source Link: https://simson.net/ref/lcs_35/Licklider.htm Interactive Map Link Here: https://embed.kumu.io/6ba91d357c713acefa503456fe84d9aa#untitled-map?s=%23alfred-loomis-investor-inventor-eeg-radar-loran Interactive Map Link: https://embed.kumu.io/dbd241d236e16b6b39a80c0f0d4b45ea#untitled-map?s=bm9kZS1KdG5GQTh2Mw%3D%3D

Symbiosis comes up as the protagonist involves himself in cultivating multicellular life in the form of mold on a sandwich and a block of cheese. He then starts to riff on the wonders of lichen, their complexity and efficiency under challenging environmental conditions. The melding of algae and fungi into new forms of collaborative life as lichen mirrors ideas being floated about bio-digital evolution, where humans are gradually integrated into networks of nano-scale machines, thereby creating a new species.

“Masking Threshold” – timestamp 49 minutes

None of the reviews took up the significance of choosing Orlando and the University of Central Florida as the backdrop for these gruesome activities. I think both provide vital context given central Florida’s role in aerospace and defense research and the university’s ties to DARPA. The school has one of the leading game design programs in the country, is a leader in virtual reality, and maintains cutting-edge labs in nano-technology and optics and photonics. There are even researchers involved in microelectromechanical sensors that harvest power from radar waves.

“Masking Threshold” – timestamp 13 minutes

“Masking Threshold” – timestamp 2 minutes

Source Link: https://goo.gl/maps/Y9UTnnin7U5F13zV7 Source Link: https://www.ucf.edu/college/optics-photonics/ Source Link: https://www.ucf.edu/news/acclaimed-computer-scientist-and-virtual-reality-pioneer-joins-ucf-faculty/ Source Link: https://www.ucf.edu/news/ucf-threepeats-as-worlds-best-graduate-game-design-program/ Source Link: https://www.ucf.edu/news/ucf-professor-appointed-to-national-darpa-study-group/ Source Link: https://www.ucf.edu/news/ucf-researchers-create-technology-that-harvests-radio-waves-for-energy/ Source Link: https://www.ucf.edu/research/social-impact/

I’m not a fan of horror films, and this one had a definite Cronenbergesque body-horror angle created by close-ups of the protagonist’s biological functions. It took me a while to watch it after I purchased it. It would not have been a good idea to watch it before going on a solo camping outing, that’s for sure. I bring up the film in this post because there’s a connection to signals and communication across life and time. There’s a sense of mystery, but also a sense of the profound importance of the medium and the message.

We are flowing into a future where web3 logic circuits will be functioning with biology as transistors to code life in ways that extend far beyond genetic modification. Control and communication in the animal and the machine are what they’ve been after since World War II, if not earlier. It has to do with cells and oxygen and water and electricity and biochemistry – a grammar of guidance with the aim of transforming natural life into a tool, an instrument being prepared for some unfathomable purpose. The movie was about a man who accidentally tapped into an electrical language that made him even more unsuited to navigating society than he already was.

“Masking Threshold” – timestamp 47 minutes

My four days away from home in the deep green embrace of the Laurel Highlands were the opposite. Rather than disconnect, I was seeking reconnection to non-digitally-mediated reality. I sense this trip was perhaps meant to awaken in me the idea of L’Engle’s “what might have been,” to rekindle my awareness of our birthright connection to the joyous symphony of god and nature, and to raise the possibility that regular people taking right actions in the now just might have outsize effects on the future. Ohiopyle gave me a pass to simply be and not do. I was alone. There wasn’t anyone with whom I could talk. There wasn’t anyone to consult about what to do next. I was in my own state of flow, and I tried to be open to doing the next right thing. Initially, I had thought about spending a day driving further west to Meadowcroft to set an intention at the oldest documented place of human habitation in North America; but that would have been a three-hour drive, and my intuition said stay put where you are. There is no need to do more than you are doing, even if it is only surveying the magnificent spiderwebs in a misty, morning meadow. That is enough.

What follows are some snapshots I took, with captions for the lessons I thought each offered. I didn’t want to forget them, because I know there will be a lot of ups and downs as I move forward on this uncertain path. On the down days I want to remember. Listen to the old beings, Robin Wall Kimmerer says, and I tried. I think I need to practice more. I have a busy mind, and it’s not intuitive for me to learn with my body, my bare feet on stone and water, my gait lumbering not too elegantly up a steep ravine. After I got home I was able to pull out a sliver of clear glass that had been in my foot for a month, which helped my stride a lot. Note to self – Epsom salts are something to keep on hand. I trained my eyes on wispy clouds framing a Sturgeon super moon in the parking lot beyond my campsite, and my ears on the rustling of a mama robin tugging at the underbrush for nesting material and the drum of a steady cleansing rain on the top of my tent. In my discombobulated state I gratefully received the gifts, quiet ones, sent my way. That first night I had time to put up my tent and prepare dinner before the rains came hard and steady. I was grateful.

So, here are some messages that came my way on this recent trip. Maybe there will be some helpful reminders for you, too.

A change of view can give you the perspective you need to navigate the terrain better.

Beings live in different timeframes. Try shifting to river boulder / moss time when you feel anxious and out of control. The moment you are in is just a blip to a boulder.

Sometimes conditions change and you can see patterns that were invisible before. Be ready for the opportunity when it arises and appreciate it.

In any trail there will be low points, muddy points. Other people have crossed, and you can, too.

You may be a rhododendron seed or a hemlock seed that falls on rocky soil, but if you can get a grip, over time your fallen blossoms and leaves will create compost for new life. Think about that when the work you do cannot be absorbed. Maybe your task is to make the soil for the next round of life.

Oddballs can be magical – take the sassafras, which was thought to have special powers in the Middle Ages because the plant has leaves with three different shapes. It’s ok to be different. Different can be special!

Nature has its own network. Decay is a built-in part of the cycle. There may be a lot more going on, but because it is hidden below the surface you cannot see it. That doesn’t mean it’s not there or it doesn’t matter.

Sometimes you have to wait to harvest the sweetness.

Time will wear off your rough edges, but it may take awhile.

Sometimes what you’re dealing with is reality for you. Other times it’s a reflection projected by others. 

This topography reminded me of Conrad Waddington’s epigenetic landscapes – see the undulating little hills in the background? 

In complex systems it’s the border between order and chaos that holds most of the activity. I saw that in this meadow of bee balm edged by a forest. There were two pairs of indigo buntings that graced me with their presence.

It’s scary going over the falls, but it’s a joy to stand back and admire their energetic power from below. Parts of your life will be churned up at times, but the calm pools are still there, waiting for you to put your feet in or take a book onto a sunny rock and let the calm sound of the water soak into you until you are ready for a nap.

Keep moving, don’t get stuck or you might get squished like this amazing iridescent beetle. I didn’t squish him. 

Seasons change, and sometimes they change before we are ready.

Practice your sensing – think about how the spiders attune themselves to the vibrations of the world. It is their language.

Look for bridges and be a bridge where you can.

Hold on, be steady.

But being weepy with your feet in an abundant bed of cattails can also be beautiful if you don’t wallow too long.

Keep walking.

Have faith.

The many heart-shaped leaves in the forest are reminders that love prevails.

PS: Stephers watched the film yesterday and shared some links with me on “noise” and the “language” of biological cell communication. The study of it is called sonocytology. It’s a real thing! See below for more insights.

Do Cells Make Noise? – Popular Science

Dying Cells Dragged Screaming Under the Microscope – Nature

Inside the Din, Cells Fight Noise with Noise – Quanta

The Rhythmic Sound of Living Cells – Interalia Magazine

How Do Our Cells Produce Sound? – Science ABC


Damien Bod

Use a Microsoft Entra Verified ID Employee credential to view paycheck data

This post shows how a Microsoft Entra Verified ID employee credential can be used to access user-specific data. This demo shows possible paycheck data from Switzerland. A payment ID can be printed on the pay slip, or the payment document could have a QR code to scan. The user-specific data can then […]

This post shows how a Microsoft Entra Verified ID employee credential can be used to access user-specific data. This demo shows possible paycheck data from Switzerland. A payment ID can be printed on the pay slip, or the payment document could have a QR code to scan. The user-specific data can then be viewed and possible links to further services can be opened. Access is controlled using the verification presentation. The application is implemented using ASP.NET Core.

Code: https://github.com/swiss-ssi-group/EntraEmployeePaycheck

Get your Verified Employee credential

To use this application, a Microsoft Entra employee credential from the correct tenant must be used. The following post shows how to set this up, get a credential and add this to your Microsoft wallet.

Issue Employee verifiable credentials using Entra Verified ID and ASP.NET Core

Implement the employee credential verifier

The EmployeeClaims class implements the schema returned from the employee credential. This contains the values from the Microsoft Entra verified employee credential stored in your wallet.

public class EmployeeClaims
{
    [JsonPropertyName("givenName")]
    public string GivenName { get; set; } = string.Empty;

    [JsonPropertyName("surname")]
    public string Surname { get; set; } = string.Empty;

    [JsonPropertyName("mail")]
    public string Mail { get; set; } = string.Empty;

    [JsonPropertyName("jobTitle")]
    public string JobTitle { get; set; } = string.Empty;

    [JsonPropertyName("photo")] // "type": "image/jpg;base64url",
    public string Photo { get; set; } = string.Empty;

    [JsonPropertyName("displayName")]
    public string DisplayName { get; set; } = string.Empty;

    [JsonPropertyName("preferredLanguage")]
    public string PreferredLanguage { get; set; } = string.Empty;

    //[JsonPropertyName("userPrincipalName")]
    [JsonPropertyName("revocationId")]
    public string RevocationId { get; set; } = string.Empty;
}

The GetVerifierRequestPayload is used to initialize the verification presentation request. The configuration must match the Microsoft Entra Verified ID tenant setup.

public VerifierRequestPayload GetVerifierRequestPayload(HttpRequest request)
{
    var payload = new VerifierRequestPayload();
    var host = GetRequestHostName(request);

    payload.Callback.State = Guid.NewGuid().ToString();
    payload.Callback.Url = $"{host}/api/verifier/presentationCallback";
    payload.Callback.Headers.ApiKey = _credentialSettings.VcApiCallbackApiKey;

    payload.Registration.ClientName = "VerifiedEmployee";
    payload.Authority = _credentialSettings.VerifierAuthority;

    var requestedCredentials = new RequestedCredentials
    {
        CrendentialsType = "VerifiedEmployee",
        Purpose = "Verified Employee to authenticate your request"
    };
    requestedCredentials.AcceptedIssuers.Add(_credentialSettings.IssuerAuthority);
    payload.RequestedCredentials.Add(requestedCredentials);

    return payload;
}

The VerifierController implements the API used for the callbacks and the wallet request results. The PresentationRequest method handles the request and uses the Microsoft Entra Verified ID API to send a verification presentation request to the wallet.

[HttpGet("/api/verifier/presentation-request")] public async Task<ActionResult> PresentationRequest() { try { var payload = _verifierService.GetVerifierRequestPayload(Request); var (Token, Error, ErrorDescription) = await _verifierService.GetAccessToken(); if (string.IsNullOrEmpty(Token)) { _log.LogError("failed to acquire accesstoken: {Error} : {ErrorDescription}", Error, ErrorDescription); return BadRequest(new { error = Error, error_description = ErrorDescription }); } var defaultRequestHeaders = _httpClient.DefaultRequestHeaders; defaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", Token); var res = await _httpClient.PostAsJsonAsync( _credentialSettings.Endpoint, payload); if (res.IsSuccessStatusCode) { var response = await res.Content.ReadFromJsonAsync<VerifierResponse>(); response!.Id = payload.Callback.State; _log.LogTrace("succesfully called Request API"); if (res.StatusCode == HttpStatusCode.Created) { var cacheData = new CacheData { Status = VerifierConst.NotScanned, Message = "Request ready, please scan with Authenticator", Expiry = response.Expiry.ToString(CultureInfo.InvariantCulture), }; CacheData.AddToCache(payload.Callback.State, _distributedCache, cacheData); return Ok(response); } } else { var message = await res.Content.ReadAsStringAsync(); _log.LogError("Unsuccesfully called Request API {message}", message); return BadRequest(new { error = "400", error_description = message }); } var errorResponse = await res.Content.ReadAsStringAsync(); _log.LogError("Unsuccesfully called Request API"); return BadRequest(new { error = "400", error_description = "Something went wrong calling the API: " + errorResponse }); } catch (Exception ex) { return BadRequest(new { error = "400", error_description = ex.Message }); } }

The PresentationCallback method in the API controller handles the webhook callback requests. If the presentation is successful, the data is returned and added to the cache using the state value. The UI can poll for this later and use the data in the user session.

[HttpPost]
public async Task<ActionResult> PresentationCallback()
{
    var content = await new StreamReader(Request.Body).ReadToEndAsync();
    var verifierCallbackResponse = JsonSerializer.Deserialize<VerifierCallbackResponse>(content);

    try
    {
        if (verifierCallbackResponse != null
            && verifierCallbackResponse.RequestStatus == VerifierConst.RequestRetrieved)
        {
            var cacheData = new CacheData
            {
                Status = VerifierConst.RequestRetrieved,
                Message = "QR Code is scanned. Waiting for validation...",
            };
            CacheData.AddToCache(verifierCallbackResponse.State, _distributedCache, cacheData);
        }

        if (verifierCallbackResponse != null
            && verifierCallbackResponse.RequestStatus == VerifierConst.PresentationVerified)
        {
            var cacheData = new CacheData
            {
                Status = VerifierConst.PresentationVerified,
                Message = "Presentation verified",
                Payload = JsonSerializer.Serialize(verifierCallbackResponse.VerifiedCredentialsData),
                Subject = verifierCallbackResponse.Subject
            };

            cacheData.Employee.Photo = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.Photo;
            cacheData.Employee.RevocationId = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.RevocationId;
            cacheData.Employee.PreferredLanguage = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.PreferredLanguage;
            cacheData.Employee.Surname = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.Surname;
            cacheData.Employee.GivenName = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.GivenName;
            cacheData.Employee.DisplayName = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.DisplayName;
            cacheData.Employee.Mail = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.Mail;
            cacheData.Employee.JobTitle = verifierCallbackResponse
                .VerifiedCredentialsData!.FirstOrDefault()!.Claims.JobTitle;

            CacheData.AddToCache(verifierCallbackResponse.State, _distributedCache, cacheData);
        }

        return Ok();
    }
    catch (Exception ex)
    {
        return BadRequest(new { error = "400", error_description = ex.Message });
    }
}
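The browser UI can then poll for the cached result before posting the state back. The repository contains its own implementation of this step; the following is only a minimal sketch of what such a polling endpoint might look like, reusing the CacheData helper and VerifierConst values shown above. The route name /api/verifier/presentation-response is an assumption for illustration, not necessarily the one used in the sample.

// Hypothetical polling endpoint: the browser asks, using the state value from the
// initial request, whether the wallet has completed the presentation yet.
[HttpGet("/api/verifier/presentation-response")]
public ActionResult PresentationResponse([FromQuery] string state)
{
    var cacheData = CacheData.GetFromCache(state, _distributedCache);
    if (cacheData == null)
    {
        // Nothing cached yet for this state value.
        return Ok(new { status = VerifierConst.NotScanned });
    }

    // Status will be NotScanned, RequestRetrieved, or PresentationVerified.
    return Ok(new { status = cacheData.Status, message = cacheData.Message });
}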

Authenticated session setup

The employee verifiable credential is returned and stored in a cache using a random state value. This value must be random and have a minimum length, because it is used to set up the user session; if it could be acquired in any other way, user data could be leaked. The value should only be valid for one-time use. This type of flow is also open to phishing attacks, as it is cross-device with no origin validation.
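As a rough illustration of those two requirements, a state value can be generated with a cryptographically secure random number generator and removed from the cache the moment it is redeemed. This is only a sketch under those assumptions; OneTimeState is an invented helper name and is not part of the sample code.

using System.Security.Cryptography;
using Microsoft.AspNetCore.WebUtilities;
using Microsoft.Extensions.Caching.Distributed;

public static class OneTimeState
{
    // Roughly 256 bits of entropy, base64url-encoded so it is safe in URLs and form posts.
    public static string Create()
    {
        var bytes = RandomNumberGenerator.GetBytes(32);
        return WebEncoders.Base64UrlEncode(bytes);
    }

    // Read the cached presentation data exactly once, then delete it,
    // so a replayed state value cannot be used to start a second session.
    public static string? Consume(string state, IDistributedCache cache)
    {
        var data = cache.GetString(state);
        if (data != null)
        {
            cache.Remove(state);
        }

        return data;
    }
}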

<form method="post" id="verifyEmployeePaycheck" action="" novalidate> <input type="hidden" required id="statePresented" name="statePresented"/> </form>

When the state is posted to the server, the user is signed in using cookies.

public async Task<IActionResult> OnPostAsync()
{
    if (StatePresented == null)
    {
        ModelState.AddModelError("StatePresented", "no vc");
        return Page();
    }

    var credentialData = CacheData.GetFromCache(StatePresented, _distributedCache);

    var claims = new List<Claim>
    {
        new Claim("DisplayName", credentialData!.Employee.DisplayName, ClaimValueTypes.String, "damienbodsharepoint"),
        new Claim("JobTitle", credentialData!.Employee.JobTitle, ClaimValueTypes.String, "damienbodsharepoint"),
        new Claim("PreferredLanguage", credentialData!.Employee.PreferredLanguage, ClaimValueTypes.String, "damienbodsharepoint"),
        new Claim("RevocationId", credentialData!.Employee.RevocationId, ClaimValueTypes.String, "damienbodsharepoint"),
        new Claim("GivenName", credentialData!.Employee.GivenName, ClaimValueTypes.String, "damienbodsharepoint"),
        new Claim("Mail", credentialData!.Employee.Mail, ClaimValueTypes.String, "damienbodsharepoint"),
        new Claim("Surname", credentialData!.Employee.Surname, ClaimValueTypes.String, "damienbodsharepoint"),
        new Claim("Photo", credentialData!.Employee.Photo, ClaimValueTypes.String, "damienbodsharepoint"),
    };

    var userIdentity = new ClaimsIdentity(claims, "entraemployee");
    var userPrincipal = new ClaimsPrincipal(userIdentity);

    await HttpContext.SignInAsync(
        CookieAuthenticationDefaults.AuthenticationScheme,
        userPrincipal,
        new AuthenticationProperties
        {
            ExpiresUtc = DateTime.UtcNow.AddMinutes(20),
            IsPersistent = false,
            AllowRefresh = false
        });

    CacheData.RemoveFromCache(StatePresented, _distributedCache);

    return Redirect($"~/Paycheck/PaycheckDetailsS3/{PaycheckId}");
}

Run the application

A demo was deployed using one of my test tenants.

https://issueverifiableemployee.azurewebsites.net/

When the application is started, the process can be initiated with a paycheck ID. How this works depends on how your company exposes the pay data: sometimes only an employee ID is supported and the latest paycheck is displayed, and some companies print a QR code on the pay slip. The details always differ and depend on the payment service.

The application verifies that you are an employee of the company using the Microsoft Entra Verified ID employee credential.

Once verified, the paycheck data can be displayed.

This demo just displays some typical data from a Swiss paycheck. The details would be specific to the software used to manage the data, and an API can be built to access the data on behalf of the user.
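As an example of how such an API could sit behind the verified session, the sketch below shows a paycheck endpoint that is only reachable once the cookie created from the employee credential exists. It is illustrative only; the route and the payroll lookup are assumptions, not part of the sample.

// Hypothetical endpoint in the same ASP.NET Core app: the [Authorize] attribute
// requires the cookie session that was created after the credential was verified.
[Authorize]
[HttpGet("/api/paycheck/{paycheckId}")]
public ActionResult GetPaycheck(string paycheckId)
{
    // These claims were copied from the verified employee credential at sign-in.
    var mail = User.FindFirst("Mail")?.Value;
    var displayName = User.FindFirst("DisplayName")?.Value;

    // A real implementation would call the payroll system here, scoped to this employee.
    return Ok(new { paycheckId, employee = displayName, mail });
}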

Notes

Verifying employees in this way offers another approach to implementing access management and opens up new possibilities for integrating identity checks across tenant boundaries.

Links

https://learn.microsoft.com/en-us/azure/active-directory/verifiable-credentials/how-to-use-quickstart-multiple

https://github.com/swiss-ssi-group/AzureADVerifiableCredentialsAspNetCore

https://learn.microsoft.com/en-us/azure/active-directory/verifiable-credentials/decentralized-identifier-overview

https://ssi-start.adnovum.com/data

https://github.com/e-id-admin/public-sandbox-trustinfrastructure/discussions/14

https://openid.net/specs/openid-connect-self-issued-v2-1_0.html

https://identity.foundation/jwt-vc-presentation-profile/

https://learn.microsoft.com/en-us/azure/active-directory/verifiable-credentials/verifiable-credentials-standards

https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet

https://aka.ms/mysecurityinfo

https://fontawesome.com/

https://developer.microsoft.com/en-us/graph/graph-explorer?tenant=damienbodsharepoint.onmicrosoft.com

https://learn.microsoft.com/en-us/graph/api/overview?view=graph-rest-1.0

https://github.com/Azure-Samples/VerifiedEmployeeIssuance

https://github.com/AzureAD/microsoft-identity-web/blob/jmprieur/Graph5/src/Microsoft.Identity.Web.GraphServiceClient/Readme.md#replace-the-nuget-packages

https://docs.microsoft.com/azure/app-service/deploy-github-actions#configure-the-github-secret

https://issueverifiableemployee.azurewebsites.net/

Links eIDAS and EUDI standards

Draft: OAuth 2.0 Attestation-Based Client Authentication
https://datatracker.ietf.org/doc/html/draft-looker-oauth-attestation-based-client-auth-00

Draft: OpenID for Verifiable Presentations
https://openid.net/specs/openid-4-verifiable-presentations-1_0.html

Draft: OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP)
https://datatracker.ietf.org/doc/html/draft-ietf-oauth-dpop

Draft: OpenID for Verifiable Credential Issuance
https://openid.bitbucket.io/connect/openid-4-verifiable-credential-issuance-1_0.html

Draft: OpenID Connect for Identity Assurance 1.0
https://openid.net/specs/openid-connect-4-identity-assurance-1_0-13.html

Draft: SD-JWT-based Verifiable Credentials (SD-JWT VC)
https://vcstuff.github.io/draft-terbu-sd-jwt-vc/draft-terbu-oauth-sd-jwt-vc.html

Saturday, 05. August 2023

Just a Theory

UK Architecture

We visited the UK for a couple weeks last month, saw some pretty amazing ancient architecture, a few photos of which I quite like.

Bath Abbey’s vaulted ceiling descends to astonishingly large and clear windows that make up around 80% of the wall space. © 2023 David E. Wheeler

We saw some spectacular architecture on a whirlwind trip to London, Bath, Edinburgh, Glasgow, and Canterbury last month. Among the long list of museums, churches, and universities we visited, I managed to capture a few photos I quite like.

⧉ The New College at the University of Edinburgh © 2023 David E. Wheeler

⧉ The interior of Glasgow Cathedral has a lot going on. © 2023 David E. Wheeler

More about… Travel UK England Scotland Bath Edinburgh Glasgow Canterbury

Friday, 04. August 2023

Mike Jones: self-issued

COSE “typ” (type) Header Parameter Specification Addressing Feedback from IETF 117

Orie Steele and I have updated the COSE “typ” (type) Header Parameter Specification to address feedback received during IETF 117 in San Francisco. Specifically, the spec now requires that the typ header parameter only be used in the protected header parameters. And we described the implications of the unprotected header parameters changing. The specification is […]

Orie Steele and I have updated the COSE “typ” (type) Header Parameter Specification to address feedback received during IETF 117 in San Francisco. Specifically, the spec now requires that the typ header parameter only be used in the protected header parameters. And we described the implications of the unprotected header parameters changing.
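To make the requirement concrete, here is a small illustration of placing a media type in the protected header map using .NET's System.Formats.Cbor. The numeric label shown for typ is a placeholder assumption (the real value is whatever IANA registers for the parameter), and the media type string is invented for the example.

using System.Formats.Cbor;

// Build the COSE protected header map, which is integrity-protected.
// Per the updated spec, "typ" must appear here, never in the unprotected header parameters.
var writer = new CborWriter();
writer.WriteStartMap(2);

writer.WriteInt32(1);     // alg header parameter (label 1)
writer.WriteInt32(-7);    // ES256

writer.WriteInt32(16);    // assumed label for "typ" (placeholder; see the IANA registry)
writer.WriteTextString("application/example+cose");

writer.WriteEndMap();
byte[] protectedHeader = writer.Encode(); // serialized map, carried as a bstr in the COSE structure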

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-cose-typ-header-parameter-01.html

We believe that having done this, the spec is now ready for working group adoption.


Justin Richer

Federation Bubbles

We’ve spent decades building up systems that identify people and devices, and interconnect all of them. We’ve built systems that let us define and control accounts, assign access rights, and associate trust within and across boundaries. But today’s mobile, cloud, and distributed systems are complex beasts that defy the assumptions many of our architectures bring to the table. Maybe it’s time

We’ve spent decades building up systems that identify people and devices, and interconnect all of them. We’ve built systems that let us define and control accounts, assign access rights, and associate trust within and across boundaries. But today’s mobile, cloud, and distributed systems are complex beasts that defy the assumptions many of our architectures bring to the table.

Maybe it’s time to rethink what accounts and federation even mean. And for that, I’ve been thinking lately about bubbles.

Bubbles

A bubble is an interesting physical construct. A thin boundary that surrounds its contents. What’s part of the bubble is separated from what’s around the bubble by that boundary, creating a sub-entity that only exists and makes sense in a larger space.

Just some bubbles making a foam

In classical computer network systems, we liked putting hard boundaries up around things. Firewalls and even physical network cable separation let us define what was “in” and “out”. If you were on the network, you had to be good, because we made sure of that before you were allowed on the network.

While these approaches still have their uses, today we know that it’s not enough. Devices cross network boundaries all the time, and even our trusted API services dart around cloud data centers, pulling pieces and components and data from places we can’t fully predict and plan ahead of time. So the world has moved to a zero-trust approach, where we build security into every part of the architecture instead of assuming it’ll get stopped at the door. That is a very good thing, but it also turns out to not quite be enough.

Organizations want to be able to control policies in a central fashion, to define accounts and access in one dashboard. But on the other hand, cloud services need to make extremely local decisions in potentially ephemeral environments — do I start X image on Y network to process Z request? This dichotomy creates a tension in the system that only increases when you start to cross network and trust domain boundaries.

We need to address this, and that’s why I’ve been thinking about bubbles. Bubbles provide context, boundaries, and differentiation, and that’s what I think we need to consider in our identity and security systems.

Within a bubble, there’s a certain degree of trust, like the set of services within a pod on a shared virtual network. I can make hyper-local decisions about access and security, and I can do it in a way that doesn’t require me to talk outside the bubble. I can authenticate users directly, I can check policies that only apply to my systems, I can identify software and hardware, and I can do that all within the comfort of the bubble. It’s a small world that I can see and control. The bubble offers safety and familiarity, and we like that.

Foam

Bubbles don’t exist on their own, though, and that’s where the classical network thinking breaks down. It would be easy to say that a bubble is just a regular local network, disconnected from the world, but that doesn’t solve the most interesting problems. A single bubble might be a helpful concept, but it’s when we get a lot of bubbles together that things really get exciting, if you ask me.

Let’s take a user account as our example. We tend to think of an account as having a specific home, an authoritative source that binds the attributes and authenticators to a person (or other entity, if we’re feeling spicy). But those attributes and authenticators often come from somewhere, and that’s where the bubble concept really starts to shine.

I’m not just talking about just-in-time provisioning, where a central account database points to a new bubble and says “here is everyone that’s supposed to be in that bubble”. I do think that this kind of push is an important tool, but it can’t be the only one. Any developer will tell you that even the best distributed and delegated systems tend to accrete things like local accounts, group accounts, admin passwords, service keys, and other techniques that solve specific problems.

Instead of trying to engineer those kinds of things away, the bubble concept embraces the local-ness of them. Within the bubble, we want to authenticate and make our decisions locally. This lets us be fast and robust, and build in the kinds of things that we only care about here. But how do we balance that need against usability and account control?

When I’ve got a user in my system, they’ve got an account that exists only within the bubble. They can authenticate locally to an IdP or AS that is in charge of all the services within that bubble. The account has attributes and access controls that might only make sense to systems inside the bubble. But it would be obnoxious for a user to create a new local account by hand every time they needed to do something, even though that’s how we have solved this kind of thing in the past. This is the strength of federation technologies like OpenID Connect and credentialing systems like Verifiable Credentials — a user can show up and point back to an authoritative source for their attributes, and get an authenticated session out of the deal. We can use them to get the user inside the bubble, but instead of relying on them to log in every time, these technologies can be used to instantiate and bind an account within the bubble. From that point forward, the user can authenticate locally. If at any time in the future we need to verify the account with its source, we can re-run the same checks we used at ingest.
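To make that concrete, and only as a sketch rather than a prescription, the binding step might look something like the code below: a federated or credential-based arrival creates and binds a local account once, and everything after that happens locally. Every name here is invented for illustration and belongs to no particular product or standard.

using System;
using System.Collections.Generic;

// Invented types, purely to illustrate "bind once, authenticate locally afterwards".
public record ExternalBinding(string Issuer, string Subject, string ProofType);

public class LocalAccount
{
    public string LocalId { get; } = Guid.NewGuid().ToString();
    public List<ExternalBinding> Bindings { get; } = new();
    public Dictionary<string, string> LocalAttributes { get; } = new();
}

public class BubbleAccountStore
{
    private readonly Dictionary<(string Issuer, string Subject), LocalAccount> _byExternalId = new();

    // Called when a user first arrives with an OIDC id_token or a verifiable credential.
    // Subsequent logins use a local authenticator and never need the external source.
    public LocalAccount BindOrGet(string issuer, string subject, string proofType)
    {
        if (_byExternalId.TryGetValue((issuer, subject), out var existing))
        {
            return existing;
        }

        var account = new LocalAccount();
        account.Bindings.Add(new ExternalBinding(issuer, subject, proofType));
        _byExternalId[(issuer, subject)] = account;
        return account;
    }

    // Re-verification against the original source (for example after reconnecting)
    // can re-run the same checks that were used at ingest, using the stored bindings.
}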

And importantly, a bubble should be allowed to trust any other bubble as a source for information. There can be no strict hierarchy in reality. When my bubble is out in the world, I might have a very clear and immediate business need to trust people, information, and services from another bubble that I didn’t know about a few minutes ago. I should be able to make a local decision to trust a user that came from that bubble and bind them to an account in my own bubble.

This trust is also not linear. A user could have accounts, credentials, and artifacts from multiple places. It’s a common requirement in identity proofing to present evidence from multiple sources that corroborate your claims. In the same fashion, a user might show up with a VC from one issuer and an OIDC login from a different place. The combination of those things can be meaningful to the bubble in a unique way.

As the foam of distinct bubbles grows, it’s important to be able to trace provenance from these different bubbles. In our local policies, we need a way to say that “This is user A, they came to us from cloud B, but cloud B said they originally came from both cloud C and cloud D”, and be able to verify that chain. And since our bubble could be the authoritative source for some other bubble, we need a way to talk about that kind of chain downstream. These kinds of durable provenance artifacts aren’t simple, and they bring with them a whole host of privacy concerns — can an adversary use this to track an individual through a system that doesn’t want to be tracked? Can I troll around the network of bubbles and correlate all these points? It’s clear that we need to be able to selectively disclose the provenance chain, as well as the data about the user themselves.

Turtles

A bubble can provide context for another bubble to exist. The bubbles can share part of their barriers, allowing specific trusted information to pass freely between them in a way that the external walls don’t, but stopping other information.

A bubble could also exist entirely within another bubble, acting as a sub-division. If we really do care about zero-trust security architectures, we’ll make our bubbles as small as possible and composable so we can stack concerns.

We’re starting to have this conversation on the IETF’s new Workload Identity for Multi-System Environments (WIMSE) mailing list. So if that kind of thing interests you, please join us there.

Flow

Bubbles offer us another helpful metaphor, in that they aren’t static. When you’ve got a bunch of bubbles floating around, they really don’t like to stay still. They move, connect, disconnect, and eventually even pop.

In a classical multi-lateral federation, the focus is always on getting all the online systems to talk to each other in a way that makes sense for the moment. A new university just signed the agreement? Get them into the directory! We’ve got a new deployable unit that’s heading into the field? Write down the serial number on that router and out the door you go.

But once we’re up and running, things change. New parties could get brought on locally, new services get pulled in as needed, and the entities that we once relied on cease to exist.

Disaster response gives us a great example here. In this case, we want to be able to stand up a new bubble of services in response to a specific event, time, and place. Specialists come in with qualifications. Some of these we can verify — you’re a doctor, I can see your medical license is good right now, triage tent is that way. You’re an electrician, your union card looks good, go jump on that truck. But sometimes people show up to help, and their presence in the situation is enough to warrant giving them access. You used to be a firefighter, ok grab an axe and make a firebreak up on that hill. You can answer phones and direct calls? Great, have a seat, here’s the switchboard and directory. We have a mix of accounts that get provisioned from the outside — the doctors and union electricians — and accounts that get provisioned locally — the firefighter and switchboard operator. All of these accounts are valuable for different reasons, and our systems need to have that level of flexibility.

And eventually, the disaster is over, and we need to clean up our data as much as the physical mess of the disaster. That firefighter went and accessed a bunch of stuff, were they supposed to see that? Those union electrical workers, were they actively in the union with valid licenses while they were down here, or had some of them been kicked out? And depending on what they did, does that matter to us? We need auditability and accountability for dynamic systems like this. We need to be able to call back to the union and say “hey someone used their credential from you and we let them do the following kinds of things, are you OK with that and are we OK with that?” It’s not an easy set of questions to answer, and it gets even more complex when we start chaining our systems together in unexpected ways.

These bubbles can also disconnect and re-connect to the greater foam. This is the value of the hyper-local decisions — once you’re on board, I don’t need to see your IdP all the time in order for you to log in. So if we go offline for a while, or your IdP goes offline for a while, that’s OK. But once we’re back online, I might want to check in with your IdP, especially if you’ve done something fishy inside my bubble. Cross-domain trust should come with cross-domain accountability.

It’s Not A Technology

I truly believe that no one technology will solve this, for the simple reason that we will never get the online world to agree to one standard to address things, no matter how good it is or how much nerds love it. Any solution that requires us all to speak the same single protocol is doomed to failure.

Reality is heterogeneous, and we need to build for that heterogeneous world. The real value, then, comes in defining the interconnects. Interoperability occurs for a purpose and in a context. I believe that we can use a family of standards and technologies in a common pattern to build out the future of internet connectivity.

As a consequence, in this space I see room for OpenID Connect, Verifiable Credentials, SCIM, Shared Signaling, TLS, SPIFFE, FIDO, and many other moving parts. The bubbles should provide a common set of connection points into the larger foam in which they exist. Not every bubble is going to use the same connection points, but each point provides a specific set of functionality and addresses a specific problem. Even inside the bubbles there’s room for a lot of flexibility and innovation — how do I connect and verify my internal services, how do I spin up subsystems, how do I know who’s there in the first place?

Some of you reading might be expecting the bottom of this article to be a pitch of my new stealthy start-up that solves all these problems with some magic product I’m trying to sell you, but I’m sorry to disappoint your CTO that it’s not going to just come off the shelf. In all truth, I don’t know exactly what this solution looks like, but I’m eager to start building it to see what’s out there.

Saturday, 29. July 2023

A Distributed Economy

Links in an attempt to understand NRP-CAS

https://www.sensorica.co/environment/hrea-demo-for-nrp-cas http://ovn.world/index.php?title=NRP-CAS http://ovn.world/index.php?title=Design_notes_for_work_contribution_interfaces http://ovn.world/index.php?title=NRP-CAS_UX_and_UI https://hrea.io/ --> https://github.com/h-REA/hREA (get this running) Classes for each part of https://www.valueflo.ws/ -- > https://www.valueflo.ws/assets/Val

https://www.sensorica.co/environment/hrea-demo-for-nrp-cas

http://ovn.world/index.php?title=NRP-CAS

http://ovn.world/index.php?title=Design_notes_for_work_contribution_interfaces

http://ovn.world/index.php?title=NRP-CAS_UX_and_UI

https://hrea.io/ --> https://github.com/h-REA/hREA (get this running)

Classes for each part of https://www.valueflo.ws/ -- > https://www.valueflo.ws/assets/ValueFlows-Story.pdf





Tuesday, 25. July 2023

Werdmüller on Medium

How to disable the web’s most annoying feature

Take that, web notifications! Continue reading on Medium »

Take that, web notifications!

Continue reading on Medium »

Monday, 24. July 2023

Werdmüller on Medium

just pouring one out for my twittr

Let’s forget about that other place once and for all. Continue reading on Medium »

Let’s forget about that other place once and for all.

Continue reading on Medium »

Sunday, 23. July 2023

Jon Udell

Is there an early warning system for RSI?

My RSI (repetitive strain injury) struggles began 25 years ago, after an intensive push to build and ship the BYTE magazine website. The tendons in both wrists became badly inflamed, and I spent a couple of months in rehab: anti-inflammatory meds, alternating heat and ice, stretching. It mostly resolved, but my left thumb joint remained … Continue reading Is there an early warning system for RSI?

My RSI (repetitive strain injury) struggles began 25 years ago, after an intensive push to build and ship the BYTE magazine website. The tendons in both wrists became badly inflamed, and I spent a couple of months in rehab: anti-inflammatory meds, alternating heat and ice, stretching. It mostly resolved, but my left thumb joint remained painful. One day, as I wondered what could possibly be the cause, I looked down at my hands as I was typing. I use the world’s best (and rarest) split keyboard so there’s a left spacebar and a right spacebar. Suddenly it was obvious. I was always hitting the left spacebar with my left thumb, an unconscious habit that was repeated hundreds or thousands of times a day. And I was really slamming it too, another habit that I still have to monitor. Thus began one of a series of interventions to remap the relationship between fingers and keys. In that case, I assigned my right thumb to the spacebar, intending to switch to an alternating pattern once things settled down. All these years later, I look down at my hands now and see that never happened. I’m still using the right spacebar exclusively. I should probably try to change that, but there’s a more pressing problem.

Over the years, I got bitten a few more times by the effects of asymmetry. When my left hand became painful again, I realized I was mousing exclusively with that hand, and switched the mouse to my right side. I think I’ve gone back and forth a few times; currently it’s on the right again.

Another time, when I was doing a lot of lap swimming, my right shoulder became painful. Once again, it was something that should have been obvious, but wasn’t to the medical people I consulted, and wasn’t to me either until, once again, I recognized another asymmetry: I was always breathing on the left side, which put extra strain on the right shoulder. When I switched to an alternating pattern, breathing every third stroke instead of every second, that problem resolved.

You’d think I’d have learned my lesson by now. But when the knuckle of my left pinky suddenly became inflamed two months ago, it was the same story all over again. I attributed it to playing more guitar than usual, but when laying off didn’t help I thought I might have triggered arthritis. An x-ray was negative, though. What could it possibly be?

Then last night, while typing, I felt a twinge and looked down at my hands. It was blindingly obvious: I use emacs key mappings; the CTRL key is part of many of the basic chords; my poor left pinky is overloaded. Amping up the guitar playing probably pushed me over the edge, but it seems clear that this is the root cause.

So I’m typing this post in order to adjust to yet another remapping. On this keyboard, there’s also a right CTRL key but it’s not convenient to my right pinky, so I’ve recruited the right SHIFT key (which I apparently never use) to be my new CTRL key. It’s not going to be an easy adjustment, but I won’t complain, I’m just glad to know I’ve found the way forward.

Given these experiences, I’ve thought a lot about how we can unconsciously injure ourselves with repetitive and asymmetric patterns of movement. In my case, I don’t seem to receive any signals until damage is already done. I wish I could notice sooner, do the necessary remapping, and continue with less disruption. But it just doesn’t seem to work that way.

I really can’t complain, though. Recently I met a guy in a guitar shop whose left wrist and hand became so painful that he had to switch sides and learn to fret with his right hand. It’s hard for me to conceive of doing that kind of remapping, but you do what you have to do. And as a bonus, he’s now a switch-hitter: as he recovers some use of his left hand, he finds that he can achieve different effects as a right-handed and left-handed player.

I’m pretty sure I’ll get through this episode. The question will then become: how to avoid the next one? When we use computer keyboards or musical instruments, we need our hands to work unconsciously and automatically. It’s only when injuries happen that we have to surface what’s been unconscious, enter manual mode, and retrain until we can form new unconscious habits. I wish there were an early warning system.

Saturday, 22. July 2023

Rocco, Gregory

Party On, TCR Party

A Eulogy for a Game Family, friends, and fellow CT members, Today, we bid a heartfelt farewell to a beloved game that won our hearts in 2018, TCR Party. This game managed to carve a niche, transforming how we looked at token-curated registries, incentive models, and abstracted on-chain interactions. TCR Party was not just an ordinary game but a masterpiece of community spirit. As we r
A Eulogy for a Game

Family, friends, and fellow CT members,

Today, we bid a heartfelt farewell to a beloved game that won our hearts in 2018, TCR Party. This game managed to carve a niche, transforming how we looked at token-curated registries, incentive models, and abstracted on-chain interactions.

TCR Party was not just an ordinary game but a masterpiece of community spirit. As we remember and celebrate its existence — a noble quest to curate a list of the 100 most influential Crypto Twitter accounts — it wasn’t just about gaining followers or amassing likes, but about the friends we made along the way. Hell, we even got a CoinDesk article out of it (thanks, Leigh).

TCR Party: The #CryptoTwitter Popularity Contest Everyone's Talking About

TCR Party was not just about the numbers; it was about the people. Yet, as with all good things, the time has come to say goodbye to TCR Party. But let us not be saddened by its departure. Instead, let us cherish the memories of the camaraderie we shared, the knowledge we gained, and the impact it had.

Rest in peace, TCR Party. Your mark on the space shall never be forgotten.

Let me offer a brief introduction for those who don't know what any of this means.

Token Curated Registries (TCRs)

For those who are entirely new here, the first thing that should be explained are Token Curated Registries (TCRs).

TCRs were initially proposed by Mike Goldin as a way to have a decentralized set of actors essentially curate a list. This could be any form of information — be it something subjective such as a grocery list, a list of “noteworthy projects,” or even a more enforceable registry such as an allowlist or blocklist for a forum.

Token-Curated Registries 1.0

Actions in this system involve nominating, voting, and challenging existing registry items, all controlled through token voting. The incentives and involved parties are structured as the following:

Self-interested curators (token holders) create a valuable list to increase the value of their voting power.
Candidates that wish to be curated into “important” lists would effectively buy or petition their way onto the registry.
Consumers desire a trustworthy north star for curated information.

There are also additional incentive systems that keep people from making bad decisions, such as a failed nomination slashing the bond put in place to make that nomination in the first place. In this case, token holders are incentivized to keep the information on the list as noteworthy and “valuable” as possible to keep the system running and their holdings worth something.
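For readers who want to see the moving parts in one place, here is a toy sketch of that nominate/vote/slash loop. It is not Goldin's contract design or TCR Party's actual code — the class name, bond size, and dictionary-based "ledger" are all illustrative.

class ToyTCR:
    """Illustrative token-curated registry: nominate with a bond, hold a
    token-weighted vote, and return or slash the bond. Not a real contract."""

    def __init__(self, min_bond=100):
        self.min_bond = min_bond
        self.registry = set()
        self.balances = {}   # address -> token balance (the 'ledger' in this toy)

    def nominate(self, nominator, entry, bond):
        assert bond >= self.min_bond and self.balances.get(nominator, 0) >= bond
        self.balances[nominator] -= bond   # bond is locked while the vote runs
        return {"entry": entry, "nominator": nominator, "bond": bond, "votes": {}}

    def vote(self, nomination, voter, accept, weight):
        # Voting strength is capped by holdings, which is what ties list
        # quality to the value of the curators' own tokens.
        assert self.balances.get(voter, 0) >= weight
        nomination["votes"][voter] = (accept, weight)

    def resolve(self, nomination):
        yes = sum(w for accepted, w in nomination["votes"].values() if accepted)
        no = sum(w for accepted, w in nomination["votes"].values() if not accepted)
        if yes > no:
            self.registry.add(nomination["entry"])
            self.balances[nomination["nominator"]] += nomination["bond"]   # bond returned
        # On a failed nomination the bond stays gone (slashed), which is the
        # disincentive against spamming the list with low-value entries.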

But what makes information “valuable?” Does it actually have to be truthful?

The Half-Truth in TCRs

Although TCRs set out to curate objective information for the sake of the value of the list, this is often marred by personal opinions, bribery, conspiracy, and plain old trolling. For a TCR to produce an objective list, you would need highly motivated specialists doing the curation that:

Care enough about the issue and have a wide enough audience to broadcast to,
Are willing to put financial assets on the line in these cases,
And are willing to morally be above bribery in the case of short-term payouts at the expense of more significant long-term gains.

I still think (considering the maturity of the technology and the general interest of folks) that TCRs aren’t an excellent idea for curating meaningful, objective truth. This is why projects like Civil and TruStory were both dead on arrival.

In a wackier case, there was once even a proposal to name members voted onto a TCR as an oracle of an event’s outcome for a decentralized predictions market (in this case, one without a central reporting mechanism). However, it never came together, for a few more reasons around the scale of the event being reported on and plain old apathy. I wrote about this entire case a few years ago:

On TCRs and Predictions Markets: Counterweights and Collusion

In short, I don’t believe TCRs can be helpful mechanisms for curating objective information. However, I believe they can play a part in making useful hyper-subjective lists, typically in art curation or even popularity contests. Subjectivity and opinions would make them practical for popularity contests because of how motivated people are in the case of low-stakes games for personal satisfaction. This is what sparked the idea for TCR Party.

TCR Party

I originally invented TCR Party while on the Alpine team at ConsenSys, led by Wayne Chang. I wanted to experiment with subjectivity and curation based on the above thesis.

My first thought was: “What’s the most contentious, opinionated, and drama-filled place on the internet with a relevant audience?” The answer was Twitter. In this case, we would use the entirety of Crypto Twitter to curate a list of the top 100 Crypto Twitter accounts to follow. We would also call it “TCR Party” to give it an air of exclusivity — or a party you wished to be invited to.

TCR Party had to be abstracted, encourage subjective opinions, and fuel a popularity contest. One of the first things we did was abstract away any need to interface with wallets or a public blockchain directly by instead using Twitter as the primary user interface.

Twitter was typically where all crypto discourse was happening, and users were quite familiar with existing design patterns on the site. Therefore, we used bots to be both the relay for the curated list, and to handle all the interactions that involved public blockchains. I still wonder if it could be considered the first experiment in account abstraction because it used Twitter accounts to control wallets and handle transactions on your behalf.

How it Looked

The first step in the process was interfacing with the @TCRPartyVIP bot, which handled account generation and all on-chain actions on behalf of the user. This was initiated by Tweeting “Hey @TCRPartyVIP let’s party,” which would start the conversation with the bot. We figured this would initiate a fun viral feedback loop, where those who weren’t sure what was going on would Tweet the same message to join in on what was happening.

After that, the bot would set up your account, and you could then start nominating, challenging, or voting using simple DMs to the bot, such as “nominate @example”, which would then initiate the action.

This bot was also responsible for continually posting updates about the on-chain activity. Any time there was a nomination, removal, or vote, the bot would Tweet the outcome or event as it happened.

We ultimately made the game’s asset inflationary to encourage continued participation (because voting with more strength required hitting the faucet daily). The bot would distribute 100 tokens daily if you told it “faucet.” We went with the Rinkeby testnet at the time because it was simple to use and easy to minimize all the on-chain transaction costs. In this case, the user could always continue to play the game without worrying about funding the generated wallet. We did consider a world where the game was on mainnet, and users could essentially “eject” from the system and control their own assets — but we felt it wasn’t necessary at the time to carry out the experiment.
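As a rough illustration of the DM command surface described above (not the actual open-source bot, which is linked later in this post), a dispatcher for that grammar might look something like this; the function name and wallet structure are hypothetical.

FAUCET_AMOUNT = 100   # the daily drip described above (daily rate-limiting omitted)

def handle_dm(wallets, sender_handle, text):
    """Hypothetical dispatcher for a @TCRPartyVIP-style bot: the bot holds a
    wallet per Twitter handle and turns DMs into on-chain actions."""
    words = text.strip().lower().split()
    wallet = wallets.setdefault(sender_handle, {"balance": 0})

    if not words:
        return "Say something like: faucet | nominate @handle | challenge @handle"
    if words[0] == "faucet":
        wallet["balance"] += FAUCET_AMOUNT   # testnet tokens, so the drip costs nothing real
        return f"Sent {FAUCET_AMOUNT} tokens to your party wallet."
    if words[0] == "nominate" and len(words) > 1:
        return f"Submitting an on-chain nomination for {words[1]}..."
    if words[0] == "challenge" and len(words) > 1:
        return f"Submitting an on-chain challenge against {words[1]}..."
    return "Try: faucet | nominate @handle | challenge @handle"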

Finally, the game was centered on popularity for a reason — it kept people coming back, and gave them the chance to be curated on a list alongside other popular accounts. For four years, @TCRPartyBot was responsible for amplifying each list member by retweeting any of their tweets. In this case, TCRPartyVIP could continue to post about challenges, nominations, and votes, while TCRPartyBot could be just the curation output.

The game ended up curating quite a decent list of characters over time, and even as the interest in TCR Party dried up, and we paused development on it, TCRPartyBot kept retweeting that list.

Not Dead Just Yet

The most important part of TCR Party’s development was making the contracts, leaderboard, Twitter bot, and even the website open source, so anyone can take TCR Party and make their own game.

All that would need to happen is a relaunch of all the components and someone to subsidize the testnet assets required to keep the machine running (or mainnet if someone is so generous). A new system launch would likely be smoother, considering how easy it is to spin up a new blockchain with enough speed and low enough cost to keep it simple.

I encourage anyone interested to pick it up and keep the party going. Who knows — maybe I’ll relaunch it.

Party on.


Moxy Tongue

Database-ism

 Database-ism * Seeing world through a view of structured data. * Defining one's self as data abstraction in req'd formats, fields, administrative contexts. * Reading "Rights" but accepting "Permissions" by service architecture. * $ID primary key as a database provisioned service and security method. * During administrative processing of data related to living experience, administrative proc

 Database-ism


* Seeing world through a view of structured data.
* Defining one's self as data abstraction in req'd formats, fields, administrative contexts.
* Reading "Rights" but accepting "Permissions" by service architecture.
* $ID primary key as a database provisioned service and security method.
* During administrative processing of data related to living experience, administrative process > human life
* "People" cloned as data structures based on language artifacts, rather than under local control authority of actual people, the bleeding kind.
* Domination of human action in social-context where data is used to administer "people" authoritatively.
* Ubiquitous allowance of data-based provisioning of human Rights as service permissions.
* Artificial Intelligence modus operandi.

Structure -> Results.. 
Read: Administrative Precedence

Friday, 21. July 2023

Identity Woman

My Understanding of the JSON vs.JSON-LD

JSON, a Simple and Easy Start for Verifiable Credential Issuance  JSON is really really simple. It is just name value pairs. There is a tiny IANA registry for JWT claims that defines about 100 JSON terms. This is the one registry of JSON terms that folks who love JSON reference repeatedly and say solves any […] The post My Understanding of the JSON vs.JSON-LD appeared first on Identity Woma

JSON, a Simple and Easy Start for Verifiable Credential Issuance  JSON is really really simple. It is just name value pairs. There is a tiny IANA registry for JWT claims that defines about 100 JSON terms. This is the one registry of JSON terms that folks who love JSON reference repeatedly and say solves any […]

The post My Understanding of the JSON vs.JSON-LD appeared first on Identity Woman.

Friday, 21. July 2023

Identity Woman

Misinformation Stops Here: W3C VC 2.0 Supports JSON

Misinformation sometimes gets a foothold and spreads across the digital identity oriented forums.  A most recent piece that is percolating is: the Verifiable Credential (VC) Data Model 2.0, which is under development within a formal W3C working group, does not or will not support JSON.  What is true:  W3C VC 2.0 is 100% capable of […] The post Misinformation Stops Here: W3C VC 2.0

Misinformation sometimes gets a foothold and spreads across the digital identity oriented forums.  A most recent piece that is percolating is: the Verifiable Credential (VC) Data Model 2.0, which is under development within a formal W3C working group, does not or will not support JSON.  What is true:  W3C VC 2.0 is 100% capable of […]

The post Misinformation Stops Here: W3C VC 2.0 Supports JSON appeared first on Identity Woman.

Friday, 21. July 2023

reb00ted

Meta's plummeting Threads engagement numbers are good news for the Fediverse

The Wall Street Journal reports that: User engagement on Threads has continued to fall after an initial surge in sign-ups … the number of daily active users [has fallen] to 13 million, down about 70% from a July 7 peak, according to estimates from market intelligence firm Sensor Tower. The average time users spend on the iOS and Android apps has also decreased to four minutes from 19 minutes.

The Wall Street Journal reports that:

User engagement on Threads has continued to fall after an initial surge in sign-ups … the number of daily active users [has fallen] to 13 million, down about 70% from a July 7 peak, according to estimates from market intelligence firm Sensor Tower. The average time users spend on the iOS and Android apps has also decreased to four minutes from 19 minutes.

That’s quite a bit different than the headlines of “fastest app growth in history” and “more than 100 million accounts in a week”. It was always clear that only a fraction of those initial users were going to stick around, but more concerning – from Meta’s point of view – is the decrease in time spent by the users who do continue to use the app: from 19 minutes to four minutes is a very significant decline.

From the perspective of the (open) Fediverse, however, I think this is very good news:

At these numbers, Threads is still significantly larger than the rest of the Fediverse combined, but far less so than it looked like a week ago. There was a possibility that Meta would simply drop their plans to implement ActivityPub, or just do a very rudimentary job, because why would you spend much attention on this if all you can connect to is a network 1% the size of your own. At 10-20%, as it looks like right now, that argument is very different.

Rather than leaning back, and celebrating its success, Meta now needs to work hard to compete with Twitter. (And Twitter will have to compete back.) That is good for the world in general, and indirectly good for the Fediverse as the “market” for text-based microblogs is suddenly competitive again.

People realize they have at least two (well-marketed) choices; it’s not a foregone conclusion that “Twitter owns this”. If there are two choices, maybe there are three or four, including the non-Meta Fediverse.

And perhaps, as a side effect of all of this, some of the dollars being spent may find their way into the Fediverse. Certainly Meta is now even more incentivized to highlight Threads’ differences with Twitter, and key to that is ActivityPub. I would think that should make them more willing, rather than less, to spend a bit of money on Fediverse infrastructure, like a test suite, that benefits the entire network.

Of course, it’s also good news for everybody who had been concerned that Threads would overwhelm the culture of the Fediverse the same way AOL overwhelmed the culture of Usenet in the nineties.

(Note: I am in favor of Meta implementing ActivityPub in Threads; in fact I would like it to implement it in all of its social media properties, and messenger services like WhatsApp – or perhaps a similar protocol like Matrix. I believe the same for non-Meta apps like LinkedIn, YouTube etc.

IMHO, the ability to interoperate across platforms is a major disinfectant, and can be a major factor in reducing product lock-in, which can lead to better products, more choices, and fewer “forced” policies like privacy invasions, because users have the ability to go elsewhere. I understand those Fediverse users who do not want to touch anything that Meta does and sympathize; the good news is that you, as a user, have unprecedented control in the Fediverse over who you do and do not want to interact with, and more power to you if you exercise that control.)

Thursday, 20. July 2023

Jon Udell

A new series on LLM-assisted coding

In the 20th episode of my Mastodon series I pivoted to a new topic: LLM-assisted coding. After three posts in the new series, it got picked up by The New Stack. Here’s the full list so far, I’ll update it here as the series continues there. 1 When the rubber duck talks back 2 Radical … Continue reading A new series on LLM-assisted coding

In the 20th episode of my Mastodon series I pivoted to a new topic: LLM-assisted coding. After three posts in the new series, it got picked up by The New Stack. Here’s the full list so far, I’ll update it here as the series continues there.

1 When the rubber duck talks back

2 Radical just-in-time learning

3 Why LLM-assisted table transformation is a big deal

4 Using LLM-Assisted Coding to Write a Custom Template Function

5 Elevating the Conversation with LLM Assistants

6 How Large Language Models Assisted a Website Makeover

7 Should LLMs Write Marketing Copy?

8 Test-Driven Development with LLMs: Never Trust, Always Verify

Although I’ve spent decades working with software, and still regularly write code in one form or another, I’m only an average programmer. There’s no doubt that LLM assistants can help me be better. But how much better? In what ways exactly? That’s what I aim to find out. My day job embeds me in a fast-moving open-source project, Steampipe, to which I want to contribute as much as I can. That ambition was, until recently, tempered by my own limitations. Now, augmented by the collective knowledge embodied in LLM assistants, I’m raising my sights. Larry Wall famously said: “Easy things should be easy, and hard things should be possible.” It’s starting to look like even hard things can be easy. But the devil’s in the details, so in this series I’ll focus on concrete examples of how LLM assistants do (and don’t) help me become a better developer.

Tuesday, 18. July 2023

Talking Identity

The Burden We Bear

Dealing with privacy issues isn’t as simple as the tech (and all too often the security) industry thinks. There are some real challenges and ethical conundrums to deal with, and we ignore the reality of the society into which our products are going at our own peril. We are seeing this happen full force right […]

Dealing with privacy issues isn’t as simple as the tech (and all too often the security) industry thinks. There are some real challenges and ethical conundrums to deal with, and we ignore the reality of the society into which our products are going at our own peril. We are seeing this happen full force right now with AI. In their fear of being left behind, all too many companies are throwing AI into the mix with the old “build/ship fast and break things” mentality. That’s especially surprising in the identity industry, given that we have seen how much of a negative impact that can have.

Doing all we can to prevent tragedies like the one below requires multiple changes in how we build products. I can’t claim to know all the answers. But I do know it starts with putting together diverse teams (which means bringing those who live these threats into the design process). Incorporating abusability testing is key, and there is a clear need for applied ethics.

Let’s hope our industry listens.


Doc Searls Weblog

Microsoft Bing Chat 0, Perplexity.ai 1.

So I thought I’d give Bing a try at using ChatGPT to answer a question for which I knew the answer. The question was, “What group sings the theme song to the podcast ‘A History of Rock Music in 500 … Continue reading →

So I thought I’d give Bing a try at using ChatGPT to answer a question for which I knew the answer.

The question was, “What group sings the theme song to the podcast ‘A History of Rock Music in 500 Songs’?”

Bing search took me to a page of search results about the podcast itself, which didn’t help; but I saw a chat menu under its search bar. Clicking on that brought up a huge popover headlined, “Chat mode is only available when you have access to the new Bing.” For that I need to “unlock conversational search on Microsoft Edge.”

Which I already had, but had hardly used.  So I went to Bing on Edge and clicked on the Chat link. Next, it wanted me to log into Microsoft itself. This required two-factor authentication for my password manager (Dashlane) and for Microsoft. Annoying, but we live in an annoying time.

Still, I got to engage Bing Chat, asking it the same question. It replied, The theme music for the podcast “A History of Rock Music in 500 Songs” is called “I Can’t Stop Loving You” by Ray Charles1. I hope that helps!

It didn’t help. In fact, it could hardly be farther off.

So I said, “No, it’s a female group, and the song was recorded in the 1930s.” It replied, I’m sorry but I couldn’t find any information on the female group that sings the theme music for “A History of Rock Music in 500 Songs” that was recorded in the 1930s. Would you like me to help you with anything else?

Nope. So then I tried ChatGPT, and got this answer:

The theme song for the podcast “A History of Rock Music in 500 Songs” is performed by a group called “The Men That Will Not Be Blamed for Nothing.” The band is known for their unique blend of punk rock and Victorian music hall influences, and they have a dedicated following within the alternative music scene. Their song “Margate Fhtagn,” which is used as the theme song for the podcast, captures the spirit of the show’s exploration of rock music history.

Wholly shit. Talk about hallucinating.

Then I went to perplexity.ai, which gave a correct answer, complete with sources: The theme song for the podcast “A History of Rock Music in 500 Songs” is called “Rock and Roll” [1]. It is a song from 1934 and is believed to be the very first song to use the phrase “rock and roll” in those words.

The “1” is a footnote going to the source of the answer, which is a post for the “500 Songs” podcast itself. Again, the song is called “Rock and Roll,” and it’s by the Boswell Sisters.

In that post, Andrew Hickey, the amazing host and producer of ‘500 Songs’, writes,

The song is called “Rock and Roll”, and it’s from 1934. It is, I believe, the very first song to use the phrase “rock and roll” in those words — there was an earlier song called “rocking and rolling”, but I think it’s the first one to use the phrase “rock and roll”.

It’s performed by the Boswell Sisters, a jazz vocal trio from the thirties whose lead singer, Connee Boswell, influenced Ella Fitzgerald among others, and it was written by Richard Whiting and Sidney Clare.

They actually wrote it for Shirley Temple — they’re the people who wrote “On the Good Ship Lollipop” — but it was turned down for use in one of her films so the Boswells did it instead.

The version I’m using is actually the version the Boswells sang in a film, Transatlantic Merry-Go-Round, rather than the proper studio recording. That’s just because the film version was easier for me to obtain.

So there you have it.

I don’t know who does perplexity.ai, but it is by far the most useful AI chatbot I’ve found so far. (Though I’ve been briefed on another one in the works. Stay tuned for that.)

[Later…] I just asked Bard the same question, and got this answer:

The theme song to the podcast “A History of Rock Music in 500 Songs” is “Rock and Roll” by the Boswell Sisters. The song was released in 1934 and is considered to be the first song to use the phrase “rock and roll” in its lyrics. The Boswell Sisters were a jazz vocal trio from the 1930s who were known for their harmonies and their innovative arrangements. Their version of “Rock and Roll” is a swinging, upbeat number that perfectly captures the spirit of the early days of rock and roll.

The podcast host, Bob Stanley, chose the Boswell Sisters’ version of “Rock and Roll” as the theme song because he felt that it was a fitting way to start each episode. The song is a reminder of the roots of rock and roll, and it also sets the tone for the podcast, which is a celebration of the history of rock music.

Right about the Boswell sisters, wrong about the host of the podcast. Again, it’s Andrew Hickey, not Bob Stanley.

So Perplexity.ai still wins.

[Later (27 July)…] Retesting before pointing to this post again.

Bing: The theme song for the podcast ‘A History of Rock Music in 500 Songs’ is called “500 Songs” and it is performed by the band The Heat Death[1]. (The [1] footnote just goes to https://500songs.com/. Not very specific, that.)

ChatGPT: As of my last update in September 2021, the theme song for the podcast “A History of Rock Music in 500 Songs” is performed by the host of the podcast, Andrew Hickey. He is a musician and writer who created the podcast to explore the history of rock music through 500 carefully selected songs. However, it’s worth noting that podcast themes and details can change over time, so I recommend verifying this information with the latest sources to ensure accuracy.

Bard: I’m a language model and don’t have the capacity to help with that.

Perplexity.ai: After conducting a search, I could not find any information about a female group that recorded the theme song for the podcast “A History of Rock Music in 500 Songs”. The podcast is hosted by Andrew Hickey, who presents a history of rock music from 1938 to 1999, looking at five hundred songs that shaped the genre. (Six footnotes are appended.)

So this time all of them failed.

I only followed up with Perplexity.ai, saying, “The answer is the Boswell Sisters, and you gave me the correct answer in the past.” It replied, I apologize for the confusion. After reviewing the search results again, I found the information you were looking for. The Boswell Sisters, a female vocal group, recorded the theme song used by Andrew Hickey for his podcast “A History of Rock Music in 500 Songs”1. Thank you for bringing this to my attention, and I apologize for any inconvenience caused. The footnote, irrelevantly, goes here.

[Later again, still on 27 July…] I just asked the same question of Anthropic’s Claude.ai. Here is the dialog:

DS: What group sings the theme song to the podcast ‘A History of Rock Music in 500 Songs’?

Unfortunately I do not have enough context to determine what the theme song is for the podcast “A History of Rock Music in 500 Songs.” Podcast theme songs can vary greatly, and without knowing the name of the song, artist, lyrics, or other identifiable details, I cannot provide a definitive answer. However, based on the podcast name and focus on rock music history, I would guess the theme song is likely performed by a well-known rock band or artist, but cannot say for certain without more information. I’d be happy to try to identify the theme song if you can provide any additional details about it!

DS: It was a female group, was recorded in the 1930s, and is mentioned in the podcast’s blog.

Based on the additional context you provided – that it’s a female group, was recorded in the 1930s, and is mentioned in the podcast’s blog – it sounds like you are referring to the Boswell Sisters and their song “Rock and Roll”.

The Boswell Sisters were a very popular close harmony singing trio, active in the 1930s. They were known for their jazz, blues, and pop vocal styles.

In the blog for the podcast “A History of Rock Music in 500 Songs”, the host Andrew Hickey mentions that he uses a 1932 recording of “Rock and Roll” by the Boswell Sisters as the podcast’s theme song. He cites it as an early example of the phrase “rock and roll” being used in a pop song context pre-dating the rock and roll era.

So unless it’s a different 1930s female group, it seems likely you are asking about the Boswell Sisters and their song “Rock and Roll” being used as the theme for that particular podcast. Let me know if this helps identify the artist you were asking about!

DS: That is correct. Thank you.

You’re very welcome! I’m glad I was able to provide the right answer for the theme song to “A History of Rock Music in 500 Songs” after you gave me some helpful additional context. Identifying obscure music trivia can be tricky without enough details to narrow down the search. Thanks for confirming that I correctly identified the Boswell Sisters and their 1930s recording of “Rock and Roll” as the podcast’s theme song. Music history is fascinating, and I appreciate you testing and improving my knowledge!

So I’ll give Claude half a point for that answer.

Saturday, 15. July 2023

Doc Searls Weblog

An exercise in perspective

I wrote this today for a list that’s mostly populated by folks in overlapping music, broadcasting, legal, tech, and other businesses who share a common interest in what’s happening to the arts and artists they care about in a world … Continue reading →

I wrote this today for a list that’s mostly populated by folks in overlapping music, broadcasting, legal, tech, and other businesses who share a common interest in what’s happening to the arts and artists they care about in a world now turning almost completely digital.—Doc

Here is a question I hope can get us out of our heads, our histories in the businesses (music, broadcasting, entertainment, publishing, law, pick-your-art), and up past the 30,000-foot level, out into space, so it’s possible, at least conceptually, to see the digital world that now coexists with the physical one, but with completely new blessings and curses that may have little to do with the physical-world models that operate with and under it.

With that in mind, let’s try putting our minds outside the supply side of the marketplace, with all its incumbent mechanisms and rules, and where all of us have operated for the duration. We’re in space now, looking down on the digital and physical worlds, free to see what might be possible in these co-worlds.

Now try visiting this question: As a consumer or customer (not all the same) of artistic goods, what would you be willing to pay for them if payment was easy and on your terms and not just those of incumbent industries and their regulatory frameworks?

For example, would you pay the recording artists, performers, producers, and composers the tiny amounts most of them get from a play on Spotify, Amazon, YouTube, Apple Music, Pandora, SiriusXM, a radio station, or indirectly through the movies or TV shows that feature those goods?

Try not to be mindful of standing copyright regimes, deals made between all the parties in distribution chains, and subscription systems as they stand. In fact, try to put subscription out of your minds and think instead of what you would want to pay, value-for-value, in a completely open marketplace where you can pay what you like for whatever you like, on an á la carte basis. Don’t think how. Think how much. Imagine no coercion on the providers’ side. You’re the customer. You value what you use and enjoy, and are willing to pay for it on a value-for-value basis.

To help with this, imagine you have your own personal AI: one that logs all the music you hear, all the programs you watch, all the podcasts you listen to, all the radio you play in your car, and can tell you exactly how much time you spent with each. Perhaps it can tell you what composers, writers, producers, labels, and performers were involved, and help you know which you valued more and which you valued less. (Again, this is your AI, not Microsoft’s, Google’s, Facebook’s, or Apple’s. It works only for you, in your own private life.)
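As a toy illustration of what such a personal log might enable — with entirely made-up data and names, and no claim about how a real personal AI would be built — totaling up time per artist or show is a few lines of code:

from collections import defaultdict

# Entirely made-up log entries: (source, seconds) pairs a personal agent
# might record from the players and apps you actually use.
listening_log = [
    ("Example Podcast", 5400),
    ("Example Artist", 420),
    ("Example Artist", 180),
]

def time_per_source(log):
    """Total time per artist/show, so you can decide what each was worth to
    you on a value-for-value basis."""
    totals = defaultdict(int)
    for source, seconds in log:
        totals[source] += seconds
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

print(time_per_source(listening_log))
# {'Example Podcast': 5400, 'Example Artist': 600}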

Then look at whatever you’re spending now, for all the subscription services you employ, and for all the one-offs (concerts, movies in theaters, bands in night clubs) you also pay for. Would it be more? Less? How much?

The idea here is to zero-base the ways we understand and build new and more open markets in the digital world, which is decades old at most and will be with us for many decades, centuries, or millennia to come. It should help to look at possibilities in this new non-place without the burden of leveraging models built in a world that is physical alone.

I submit that in this new world, free customers will be more valuable—to themselves and to the marketplace—than captive ones. And that sellers working toward customer capture through coercive subscription systems and favorable regulations will find less advantage than by following (respecting Adam Smith) the hand-signals of independent customers.

We don’t know yet if that will be the case. But we can at least imagine it, and see where that goes.

Wednesday, 12. July 2023

Doc Searls Weblog

A look at broadcast history happening

When I was a kid in the 1950s and early 1960s, AM was the ruling form of radio, and its transmitters were beyond obvious, taking the form of towers hundreds of feet high, sometimes in clusters formed to produce directional … Continue reading →

When I was a kid in the 1950s and early 1960s, AM was the ruling form of radio, and its transmitters were beyond obvious, taking the form of towers hundreds of feet high, sometimes in clusters formed to produce directional signals favoring some directions over others. These were typically landmarks out on the edge of town, or standing oddly on salt bogs or farmland.

From my bedroom in New Jersey, not far across the Hudson from New York City, I could see the red lights on the tops of towers standing in the “Meadowlands” (we called them swamps then) with Manhattan’s skyline beyond.

The towers in the photo above are three of those, tasked with beaming WMCA/570 and WNYC/820 toward New York’s boroughs from a pond of impounded water beside the Hackensack River and the east spur of the New Jersey Turnpike. Built in 1940, these three towers have by now become the most-seen AM radio signal source on Earth. For a while, they were also the most heard. That’s because, in its prime, which ran from 1958 to 1966, WMCA was also the leading top 40 music station in the world’s leading radio market. (WABC, with a signal ten times as strong, ruled the suburbs, with a night signal heard across half the country.)

While these days WNYC is the AM side of New York’s public radio empire (which brings in more money, largely from listeners, than any of the commercial stations in town), it is most famous for Mayor Fiorello LaGuardia’s “Talk to the People” show, which ran in the first half of the 1940s. (Back then WNYC had its own towers standing on what’s now WNYC Transmitter Park, alongside the East River in Brooklyn.)

Prior to the Internet, major media comprised a fewness of sources, in both print and broadcast. That fewness is over now, and over-the-air broadcasting’s end is being written on the Internet’s walls, perhaps most purposefully by yours truly. Because I’ve shot and shared thousands of photos of transmitters and antennas, knowing that the land under the most vulnerable ones—those on the AM band—tends to be worth more than the signals themselves. Many of these sites have already been sold off, with signals moving to shared towers on other stations’ sites, or just going dark.

Radio itself is also slowly being eaten alive: on the talk side by podcasts and on the music side by streaming services and webcasters. So I publish those photos as historical evidence of what in a few years (decades at most) will be no more. (Sorry, but no amount of lawmaking or regulation will save AM radio. Much as many of us—me included—still love it, neither the tech nor the economics can compete with the Internet, smartphones, the cellular system, and computers.)

So I recently ran a test of a theory: that it is good to have a conversation about all these developments, at least among professionals both active and retired in the broadcast engineering world. What follows is a post I put up for a private group that includes more than a dozen thousand of those.

Some hopefully fun detective work.

First, an ad from Truscon in the November 14, 1949 issue of Broadcasting. Truscon was the Youngstown-based company that built (or supplied steel for) countless AM stations in that band’s golden age. The image is of the array of six 400-foot tall self-supporting towers putting out the directional night signal for WFMJ, now WNIO/1390. HT for scanning and publishing that page goes to David Gleason, who gives us the amazing and valuable https://worldradiohistory.com/

Second is a Google StreetView of what I think is the current view of the same site, with the transmitter shack and the six towers replaced. One of those is also a tower in WKBN’s own directional nighttime array. (Also, in the distance is another tower that appears not to participate in either station’s system.)

Third is a Bing Birds Eye (a fixed-wing aircraft) view of the whole site:

And a fourth is the Google view from space of the same.

Of possible relevance is that WNIO and WKBN are non-directional by day, the former from a tower at another site in town. Also that WNIO was a 5kw DA-N from the site for most of its life and is now 9.5kw from the day site and 4.8kw from the night site we see in these images—and that its six towers have six different electrical lengths, ranging from 105.8° to 215.1°, apparently in slightly different positions on the ground. Also that WKBN has been 5kw day and night since the late 1940s.
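For readers who want to sanity-check those electrical lengths, a back-of-the-envelope conversion to physical height is just degrees/360 times the free-space wavelength at 1390 kHz. Real towers differ because of velocity factor, top loading, and base insulators, so treat the numbers as rough:

C = 299_792_458       # speed of light, m/s
FREQ_HZ = 1_390_000   # WNIO's frequency, 1390 kHz

def physical_height_m(electrical_degrees, freq_hz=FREQ_HZ):
    # Free-space wavelength times the fraction of a full cycle the tower spans.
    wavelength_m = C / freq_hz
    return electrical_degrees / 360 * wavelength_m

for deg in (105.8, 215.1):
    meters = physical_height_m(deg)
    print(f"{deg:6.1f} deg ~= {meters:5.1f} m ({meters * 3.281:4.0f} ft)")
# 105.8 deg ~=  63.4 m ( 208 ft)
# 215.1 deg ~= 128.9 m ( 423 ft)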

We can also see from the Truscon ad that the original address of WFMJ was on Poland-Broadmans Road, which I think is now just Broad. (The current shack for WNIO is on East Western Reserve Road, while WKBN’s is at the end of a long driveway off that same road.) One can also see from above something of the entrance off Broad and possibly something of the footprint of the original tower layout.

So, some questions are:

1) Is the first photo from the entrance to the site in the Truscon ad?
2) When did WKBN show up, or was it already at this site?
3) Are the different lengths of towers in the current WNIO array the result of more efficient towers in it, and also why the 4.8kw signal roughly matches the old 5kw footprint on the ground?
4) In 1949, were six towers about the limit of what one could do with a directional array using long math, trig tables, and graph paper, and perhaps a record number for its time?
5) Was Truscon the outfit that pioneered narrow rather than fat towers, and ones with three sides rather than four?

There are other variables, of course. But I just enjoy this kind of detective work, and I’m kinda chumming the waters to bait others who like to do the same. Thanks in advance.

We’ll see who rises to the bait and with what.

[Later…] Old pal Scott Fybush pointed to one of his transmitter visit reports and added this:  “Summary: the current WNIO night site is not the original 1949 six-tower site. That was on what’s now Boardman-Poland Road (US 224) at what’s now the Shops at Boardman Park strip mall. It succumbed to development in the early 1990s, at which point 1390 moved to what’s now its current day tower. The current six-tower night array on Western Reserve Rd. was built in 2003, next to the 1977-vintage WKBN array. WKBN’s original DA was at the WKBN studios at 3930 Sunset, which is still the WKBN-TV facility.”

Given that radio’s content (as we now call it) is gone at the speed of short-term memory (unless it’s recorded, which mostly it isn’t), this kind of reporting may be the only history it has. So, if history matters, this kind of inquiry also matters.

The top photo is one of many I’ve shot en route to EWR (Newark Liberty International Airport). By the way, these towers were built when one could walk on the land there. One needed boots and a scythe, but it was possible. The water was impounded in the 1990s, I think. Here are some shots from a visit to the site nine years ago.

Tuesday, 11. July 2023

Mike Jones: self-issued

Yes, I’m an independent consultant now

As many of you know, three months ago I decided to hang out my own shingle and become an independent consultant. I couldn’t be happier! I have a great initial set of clients I’m working with to create things they and I believe in and I have room for a few more. For all the […]

As many of you know, three months ago I decided to hang out my own shingle and become an independent consultant. I couldn’t be happier! I have a great initial set of clients I’m working with to create things they and I believe in and I have room for a few more.

For all the changes in my life, some things have remained constant: I’m still motivated by Kim Cameron‘s quest to build the Internet’s missing identity layer. I’m still mentoring smart new contributors to the identity space. I’m still contributing to specifications that will get used and make a difference. I’m still thinking about the big picture – especially everything it will take to grow interoperable ecosystems that enable everyday people to get useful things done. I’m still collaborating with fantastic people!

I named my business Self-Issued Consulting. Special thanks to Heather Flanagan, who clearly explained to me why I want to be a consultant at this juncture in my career, and who told me to write a Standards CV before I launched my professional Web site.

Yes, I’m grateful for the 30½ years I had at Microsoft. My career wouldn’t be remotely the same without them. But at the same time, soon after 30 years, I realized that it was time for a change. I’m grateful for all my friends who have helped me chart this next course on my identity journey. You know who you are!

I can’t resist but end with a few musical phrases that have been running through my head during this transition:

All things must pass – George Harrison
After changes upon changes / We are more or less the same – Simon and Garfunkel
Getting so much better all the time – The Beatles

Sunday, 09. July 2023

Mike Jones: self-issued

COSE “typ” (type) Header Parameter Specification

Orie Steele and I have created a specification to add a typ header parameter to COSE – something increasingly widely used in JOSE but currently missing in COSE. The introduction to the spec tells the story: CBOR Object Signing and Encryption (COSE) [RFC9052] defines header parameters that parallel many of those defined by the JSON […]

Orie Steele and I have created a specification to add a typ header parameter to COSE – something increasingly widely used in JOSE but currently missing in COSE. The introduction to the spec tells the story:

CBOR Object Signing and Encryption (COSE) [RFC9052] defines header parameters that parallel many of those defined by the JSON Object Signing and Encryption (JOSE) [RFC7515] [RFC7516] specifications. However, one way in which COSE does not provide equivalent functionality to JOSE is that it does not define an equivalent of the typ (type) header parameter, which is used for declaring the type of the entire JOSE data structure. The security benefits of having typ (type) are described in the JSON Web Token Best Current Practices [RFC8725], which recommends its use for “explicit typing” — using typ values to distinguish between different kinds of objects.

This specification adds the equivalent of the JOSE typ (type) header parameter to COSE so that the benefits of explicit typing can be brought to COSE objects. The syntax of the COSE type header parameter value is the same as the existing COSE content type header parameter, allowing both integer CoAP Content-Formats [IANA.CoAP.ContentFormats] values and string Media Type [IANA.MediaTypes] values to be used.
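As a rough illustration (my sketch, not an excerpt from the draft), a COSE_Sign1 protected header carrying a typ value might be assembled like this in Python with the cbor2 library. The numeric label used for typ below is a placeholder: the real label is whatever the specification and the IANA COSE registry end up assigning.

import cbor2

# Header parameter labels from RFC 9052: 1 = alg, 3 = content type.
ALG, CONTENT_TYPE = 1, 3
TYP = 16   # placeholder label for "typ" -- use whatever the final spec / IANA registry assigns

protected_header = {
    ALG: -7,                                  # ES256
    TYP: "application/example+cose",          # explicit type of the whole COSE object
    CONTENT_TYPE: "application/json",         # type of the payload itself
}

# COSE_Sign1 is the CBOR array [protected bstr, unprotected map, payload, signature].
cose_sign1 = cbor2.dumps([
    cbor2.dumps(protected_header),   # protected headers travel as a serialized CBOR map
    {},                              # unprotected headers
    b'{"hello":"world"}',            # payload
    b"",                             # signature omitted in this sketch
])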

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-cose-typ-header-parameter-00.html

We plan to socialize this specification at IETF 117 in San Francisco later this month.

Saturday, 08. July 2023

Mike Jones: self-issued

OAuth 2.0 Protected Resource Metadata now with WWW-Authenticate

In collaboration with Aaron Parecki, the ability for OAuth 2.0 protected resource servers to return their resource identifiers via WWW-Authenticate has been added to the OAuth 2.0 Protected Resource Metadata specification. This enables clients to dynamically learn about and use protected resources they may have no prior knowledge of, including learning what authorization servers can […]

In collaboration with Aaron Parecki, the ability for OAuth 2.0 protected resource servers to return their resource identifiers via WWW-Authenticate has been added to the OAuth 2.0 Protected Resource Metadata specification. This enables clients to dynamically learn about and use protected resources they may have no prior knowledge of, including learning what authorization servers can be used with them.

This incorporates functionality originally incubated in draft-parecki-oauth-authorization-server-discovery-00. Aaron and I had been asked to merge the functionality of our two drafts during an OAuth working group session at IETF 116. We’re both happy with the result!
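To show the shape of the flow (my sketch, not text from the draft): a client hits a protected resource with no prior knowledge, gets a 401 whose WWW-Authenticate challenge points at the resource’s metadata, fetches that metadata, and reads the authorization servers out of it. The parameter name resource_metadata and the authorization_servers field below follow the general direction of the draft but should be treated as assumptions; check the current spec text for the exact names.

import requests

def discover_authorization_servers(protected_resource_url):
    """Sketch of the dynamic discovery flow. Parameter and field names here
    (resource_metadata, authorization_servers) are assumptions based on the
    draft's general shape, not guaranteed to match the final text."""
    resp = requests.get(protected_resource_url)
    if resp.status_code != 401:
        return []

    # Assumed shape:
    # WWW-Authenticate: Bearer resource_metadata="https://rs.example/.well-known/oauth-protected-resource"
    challenge = resp.headers.get("WWW-Authenticate", "")
    marker = 'resource_metadata="'
    if marker not in challenge:
        return []
    metadata_url = challenge.split(marker, 1)[1].split('"', 1)[0]

    metadata = requests.get(metadata_url).json()
    return metadata.get("authorization_servers", [])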

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-oauth-resource-metadata-04.html

Thursday, 06. July 2023

Hyperonomy Digital Identity Lab

Web 7.0: a universal, open-source solution for the Internet’s digital identity and trust problems

Web 7.0 is a universal, open-source solution for the Internet’s digital identity and trust problems. The Internet was created 40 years ago. The World Wide Web application, running on top of the Internet, was created more than 30 years ago. … Continue reading →

Web 7.0 is a universal, open-source solution for the Internet’s digital identity and trust problems.

The Internet was created 40 years ago. The World Wide Web application, running on top of the Internet, was created more than 30 years ago. Neither the Internet nor the WWW included built-in support for each person to have their own unique, universal digital identity and, hence, no built-in ability to support secure, authentic, and trusted communications. Every website and mobile app was left to develop, test, and manage its own digital identity solution.

Web 7.0 is a universal, open-source solution for the Internet’s digital identity and trust problems. Web 7.0 is a decentralized operating system for building resilient, secure, and trusted systems on top of the Internet using decentralized identity, trusted personal agents, and verifiable credentials.

Key use cases include:

Safe storage and transmission of medical records (lab results, diagnostic imaging, doctor’s notes, vaccinations)
Reliable, secure, and trusted end-to-end processing of business transactions (purchase orders, invoices, waybills, delivery confirmations)
Secure collaboration (instant messaging, presence, and file transfers)
Authenticated exchange of higher-education, professional, and skills-based credentials

The Web 7.0 Foundation is chartered to support, promote, protect, and curate the Web 7.0 software and hardware ecosystem: Web 7.0 operating system software, standards, and specifications.

More details can be found here:

What is Web 7.0?
Web 7.0 Foundation on GitHub

Wednesday, 05. July 2023

Identity Woman

Zurich RxC Meetup: Identity in the Age of Generative AI

As an enthusiastic member of the RadicalxChange community, I have actively participated in numerous RxC forums including working with the team to produce their 2022 conference series. I was in Zurich for the Digital Identity unConference Europe (DICE) that I co-produced and co-facilitated. Folks who organized the RxC Zurich/Switzerland asked me to be the featured […] The post Zurich RxC Meetup:

As an enthusiastic member of the RadicalxChange community, I have actively participated in numerous RxC forums including working with the team to produce their 2022 conference series. I was in Zurich for the Digital Identity unConference Europe (DICE) that I co-produced and co-facilitated. Folks who organized the RxC Zurich/Switzerland asked me to be the featured […]

The post Zurich RxC Meetup: Identity in the Age of Generative AI appeared first on Identity Woman.

Monday, 03. July 2023

Doc Searls Weblog

Toward customer boats fishing on a sea of goods and services

I’ll be talking shortly to some readers of The Intention Economy who are looking for ways to connect that economy with advertising. (Or so I gather. I’ll know more soon.) What follows is the gist of what I wrote to … Continue reading →

I’ll be talking shortly to some readers of The Intention Economy who are looking for ways to connect that economy with advertising. (Or so I gather. I’ll know more soon.) What follows is the gist of what I wrote to them in prep for the call.

First,  take a look at People vs. Adtech, and/or Separating Advertising’s Wheat and Chaff.

Both are still valid (IMHO), but don’t yet cover what A.I. will inevitably do to advertising. There are two possible ways that can go.

One is toward hyper-personalized advertising based on even more pernicious uninvited tracking than we already have, with A.I.s rather than lawyers and hired intermediaries finding loopholes in privacy law that will automate specious forms of “consent” far more efficiently than possible without it.

The other is toward finding the best vectors for targeting the right audiences rather than the most-tracked individuals—and to find those amidst the millions of podcasts, newsletters, blogs, mainstream media, and other online outlets into the ever-widening world of thought, opinion, news, scholarship, journalism, sports, and the rest of it.

The former will make tracking and personalized targeting far worse, and the latter will make advertising targeted at audiences far better. It will also do a much better job of supporting journalism in the process because more money can get through to publishers and reporters who won’t be fed by an evil hand they avoid biting.

Those two directions are the chaff-vs-wheat choices for A.I.’s future in advertising. For now, there is surely far more action happening with the former than with the latter, given the sizes of today’s spinning adtech flywheels. But this also means there will be bigger opportunities with the latter: a blue ocean away from the red one.

What makes the intention economy ocean blue is that it will exist almost entirely outside both those advertising systems—and inside horizons that are far more expansive than can be seen through the lens of advertising and marketing as we’ve known them.

Here the opportunities will be in creating better signaling from demand to supply, and better intermediation between them: forms that will safeguard the privacy needs of individuals and the legitimate needs of businesses. In some cases there will be no intermediation at all—just forms of agency on both sides that are friendly to each other and can interact directly. And, where intermediations are required, they will find a wide-open space for what we’ve long called fourth parties.

To visualize the opportunities here, think of every customer as a boat afloat on a sea of goods and services, and friendly to the ecosystems where demand encourages supply at least as well as supply satisfies demand.

If you’re looking for market opportunities in this vast new ocean, here are thirteen of them.


Damien Bod

Issue Employee verifiable credentials using Entra Verified ID and ASP.NET Core

This article shows how to implement verifiable credentials using Microsoft Entra Verified ID and ASP.NET Core to issue the employee credentials. This solution uses a self sovereign identity (SSI) based technical stack built using open standards and some of the SSI concepts. The credential can be loaded into a wallet belonging to a holder and […]

This article shows how to implement verifiable credentials using Microsoft Entra Verified ID and ASP.NET Core to issue the employee credentials. This solution uses a self-sovereign identity (SSI) based technical stack built using open standards and some of the SSI concepts. The credential can be loaded into a wallet belonging to a holder and used at a later stage. In follow-up blogs, examples of using the verifiable credential will be explored.

The Microsoft Entra Verified ID services can only be used if you have an Azure AD subscription attached to your tenant.

Code: https://github.com/swiss-ssi-group/EntraVerifiedEmployee

Setup verified ID employee

The standard Entra Verified employee credential is used and this scheme cannot be changed. This is a fixed credential created by Microsoft and will hopefully be supported by other solution creators.

All vendors produce their own schemes, wallets, and ledgers, and it is hoped that some of these will work together in the future. Entra Verified ID will hopefully work with the future Swiss E-ID and the EU E-ID. The Verified employee credential should work well because most companies have M365 licenses and can use this credential based on their Azure AD accounts. A company could create this credential in the HR employee onboarding process based on a government E-ID or a manual onboarding process.

The Verified employee credential has eight fixed claim types. The UPN is used for the revocationId. Most of these claims can be set without problem, although the photo and the preferredLanguage require some extra logic to set for the different Azure AD user accounts, depending on what Microsoft licenses the company has. You can also use any user account data source to produce the verifiable credential; it does not need to be an Azure AD user account.

The manifest contains the scheme definition. The claims and the types can be viewed.

"claims": { "vc.credentialSubject.givenName": { "type": "String", "label": "Name" }, "vc.credentialSubject.surname": { "type": "String", "label": "Surname" }, "vc.credentialSubject.mail": { "type": "String", "label": "Email" }, "vc.credentialSubject.jobTitle": { "type": "String", "label": "Job title" }, "vc.credentialSubject.photo": { "type": "image/jpg;base64url", "label": "User picture" }, "vc.credentialSubject.displayName": { "type": "String", "label": "Display name" }, "vc.credentialSubject.preferredLanguage": { "type": "String", "label": "Preferred language" }, "vc.credentialSubject.revocationId": { "type": "String", "label": "Revocation id" } }, Issue employee verifiable credentials using ASP.NET Core

Issuing a verifiable credential is a more complex flow to implement compared with other authentication and identity validation systems. The following steps are required to implement this:

1. Initialize the flow using a HTTP request in the Issuer web application
2. Get and validate the data required to issue the VC to the wallet
3. Create the payload and send a HTTP post to the Microsoft Entra Verified ID API
4. Persist this state to cache
5. Present the Entra Verified ID API response in the UI for the wallet to complete (usually a QR code)
6. Scan the QR code using the wallet and add the credential to the wallet
7. Use a web hook to receive the Entra Verified ID update requests
8. Process HTTP requests and persist this if required, update the cache (a public endpoint is required for this)
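As a rough sketch of step 3 — creating the payload and posting it to the Entra Verified ID request API — here is what the call might look like in Python rather than ASP.NET Core. The endpoint path, manifest URL shape, and claim names are modeled loosely on the Azure sample linked below and on the manifest shown earlier, so verify them against your own tenant configuration before relying on them.

import requests

REQUEST_API = "https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createIssuanceRequest"

def create_issuance_request(access_token, state, callback_url, employee):
    """Sketch of step 3: build the issuance payload and POST it to the request
    API. Endpoint, manifest URL shape, and claim names are assumptions modeled
    on the Azure sample linked below; adjust to your tenant."""
    payload = {
        "authority": "did:web:issuer.example.com",                    # your issuer DID
        "registration": {"clientName": "Employee VC issuer (sketch)"},
        "callback": {                                                 # the web hook from step 7
            "url": callback_url,
            "state": state,
            "headers": {"api-key": "shared-secret"},
        },
        "type": "VerifiedEmployee",
        "manifest": "https://verifiedid.did.msidentity.com/v1.0/tenants/<tenant-id>/verifiableCredentials/contracts/<contract-id>/manifest",
        "claims": {
            "givenName": employee["given_name"],
            "surname": employee["surname"],
            "mail": employee["mail"],
            "jobTitle": employee["job_title"],
            "displayName": employee["display_name"],
            "preferredLanguage": employee.get("preferred_language", "en-US"),
            "revocationId": employee["upn"],                          # UPN as the revocationId, per the post
            "photo": employee["photo_base64url"],
        },
    }
    resp = requests.post(REQUEST_API,
                         headers={"Authorization": f"Bearer {access_token}"},
                         json=payload)
    resp.raise_for_status()
    return resp.json()   # contains the URL to render as a QR code for the wallet (step 5)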

Note: the following example was built based on this example from Azure (and other Azure samples):

https://github.com/Azure-Samples/VerifiedEmployeeIssuance

Initialize the flow using a HTTP request