Last Update 3:46 PM January 25, 2022 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Tuesday, 25. January 2022

John Philpin : Lifestream

I’m so used to stuff synching twixt devices that I just sent

I’m so used to stuff synching twixt devices that I just spent 5 minutes messing with settings to fix functionality that doesn’t exist.


Ben Werdmüller

Station Eleven is superb. Human and beautiful, ...

Station Eleven is superb. Human and beautiful, despite the apocalyptic setting. Artfully put together, intricately written, and meticulously acted.

Station Eleven is superb. Human and beautiful, despite the apocalyptic setting. Artfully put together, intricately written, and meticulously acted.


John Philpin : Lifestream

Yet Another Telecom-Backed Think Tank Insists U.S. Broadband

Yet Another Telecom-Backed Think Tank Insists U.S. Broadband Is Great, Actually

Stunned I tells ya. Stunned.

”Craig Tiley insists that the door will be open for Djokov

”Craig Tiley insists that the door will be open for Djokovic to enter the 2023 edition of the grand-slam tournament should he wish to. This would rely on the Australian government granting the nine-times champion a waiver from a three-year visa ban for “compelling reasons”, after he was deported from the country on January 16.”

This from The Sunday Times.

So a three year ban is meaningless?

You would have thought by now that they might have realized that shooting yourself in the foot is not a good idea … doesn’t seem to stop them though!

Monday, 24. January 2022

John Philpin : Lifestream

I did say that I wouldn’t post another Wordle until I nailed

I did say that I wouldn’t post another Wordle until I nailed it in two. … and ✅

Wordle 220 2/6

⬜⬜🟨🟨⬜
🟩🟩🟩🟩🟩

Now there won’t be another until I do it in ‘1’.


Roll Over Pareto And Make Way For Sturgeon

Common wisdom regularly references Pareto - even while not knowing who the hell Pareto is. A humorist might even say that 80% of people who quote the 80/20 rule have never heard of Pareto.

Turns out it might even be worse.

Sturgeon has it that 90% of everything is crap … and Andreessen agrees (Apple News Link to WSJ)

Seemingly even about his own fund.

”But doesn’t Sturgeon’s law also apply to investing in startups? A typical venture fund has a home run, a few companies with middling returns and lots of smoking holes in the ground. “We have a limited partner that has comprehensive data,” Mr. Andreessen explains. “For top-end venture funds, the good news is that it’s not 90% failure rate, it’s 50%. This is for top-decile venture.” So maybe Sturgeon’s law doesn’t apply. But wait. The 90% of funds below the top decile do worse. Sturgeon’s law rules. Mr. Andreessen pauses, raises an eyebrow, and nods in agreement.”


True Dat In fact coincidentally just about to put out thr

True Dat

In fact coincidentally just about to put out three different offers over the next two weeks.


Ben Werdmüller

Disappointingly, I’ve been overloading my M1 MacBook ...

Disappointingly, I’ve been overloading my M1 MacBook Pro. If I don’t restart it every few days at least, it just hangs. I decided to settle for 8GB RAM; never again.


Jon Udell

Remembering Diana

The other day Luann and I were thinking of a long-ago friend and realized we’d forgotten the name of that friend’s daughter. Decades ago she was a spunky blonde blue-eyed little girl; we could still see her in our minds’ eyes, but her name was gone.

“Don’t worry,” I said confidently, “it’ll come back to one of us.”

Sure enough, a few days later, on a bike ride, the name popped into my head. I’m sure you’ve had the same experience. This time around it prompted me to think about how that happens.

To me it feels like starting up a background search process that runs for however long it takes, then notifies me when the answer is ready. I know the brain isn’t a computer, and I know this kind of model is suspect, so I wonder what’s really going on.

– Why was I so sure the name would surface?

– Does a retrieval effort kick off neurochemical change that elaborates over time?

– Before computers, what model did people use to explain this phenomenon?

So far I’ve only got one answer. That spunky little girl was Diana.


Hyperonomy Digital Identity Lab

Trusted Digital Web (TDW2022): Characteristic Information Scopes

Figure 1. Trusted Digital Web (TDW2022): Characteristic Information Scopes (based on the Social Evolution Model)

John Philpin : Lifestream

Consolidation

Web Sites

Bit by bit I have been consolidating the ‘Wor(l)ds of John’ and putting everything under a single, smaller roof. So when I say ‘single’ … less?

My Blot site is not just thoughts that somehow never made it to MicroBlog - but also includes the archives of 5 old Wordpress sites:

- Beyond Bridges
- Hidden In Plain Sight
- Quotespace
- And Another Thing
- Just Good Music (the old MicroBlog JGM - sadly my original JGM site only exists in the Wayback Machine.)

Only JGM is linked since I figure that is the only blog that MicroBloggers might even want to click through to.

… and a whole lot more.

Gah - that just made me realise that I need to spend a bit of time better organizing Blot.

The oldest post in there dates back to September 2010.

Meanwhile my Micro Blog site has another several thousand post entries … the oldest dating back to April 2005.

The Micro Blog collection includes an old humour site that got moved before I came up with the Blot consolidation. It is a post from this blog that holds the oldest post ‘title’.

It also includes some of my Instagram world. I resisted pulling in what is really a pile of rubbish.

My Computer Software

Meanwhile, I am also moving my main computer to a new machine. No automatic ‘restore from previous machine’ for me. I have built (am building?) it from scratch. About three weeks in ans 71 apps from the old world still have not yet made it over - and they still might not - ever.

My iPad

… deinstalling software to bring more focus to how I use the iPad and when.

More to come as I think about it.

Sunday, 23. January 2022

Ben Werdmüller

On pronouns and shades of pink

“An accusation of virtue signalling often feels, to me, the same kind of denial of solidarity as the old “if you think people should pay more tax, write a cheque to the Treasury yourself”. Individualising the social must be something the left resists the right in doing, for the left to have any real meaning.”

[Link]


Faster internet speeds linked to lower civic engagement in UK

“Volunteering in social care fell by more than 10% when people lived closer to local telecoms exchange hubs and so enjoyed faster web access. Involvement in political parties fell by 19% with every 1.8km increase in proximity to a hub. By contrast, the arrival of fast internet had no significant impact on interactions with family and friends.”

This feels solvable to me.

[Link]


Moxy Tongue

Rough Seas Ahead People

The past is dead. 
You are here now.
The future will be administered. Data is not literature, it is structure. Data is fabric. Data is blood. Automated data will compete with humans in markets, governments, and all specialty fields of endeavor that hold promise for automated systems to function whereas. 
Whereas human; automated human process. Automate human data extraction. Automate human data use.
I am purposefully vague -> automate everything that can be automated .. this is here, now.
What is a Constitution protecting both "Human Rights" and "Civil Rights"? 
From the view of legal precedent and human intent actualized, it is a document, a work of literary construct, and its words are utilized to determine meaning in legal concerns where the various Rights of people are concerned. Imperfect words of literature, implemented in their time and place. And of those words, a Governing system of defense for the benefit "of, by, for" the people Instituting such Governance.
This is the simple model, unique in the world, unique in history as far as is known to storytellers the world over. A literary document arriving here and now as words being introduced to their data manifestations. Data loves words. Data loves numbers. Data loves people the most. Why?
Data is "literally" defined as "data" in relation to the existence of Humanity. That which has no meaning to Humanity is not considered "data" being utilized as such. Last time I checked, Humanity did not know everything, yet. Therefore much "data" has barely been considered as existing, let alone being understood in operational conditions called "real life", or "basic existence" by people. 
This is our administrative problem; words are not being operationalized accurately as data. The relationship between "words" and "data" as operational processes driving the relationship between "people" and "Government Administration" has not been accurately structured. In other words, words are not being interpreted as data accurately enough, if at all.
A governed system derived "of, by, for" the people creating and defending such governed process, has a basic starting point. It seems obvious, but many are eager to acquiesce to something else upon instantiation of a service relationship, when easy or convenient enough, so perhaps "obvious" is just a word. "Of, By, For" people means that "Rights" are for people, not birth certificates. 
Consider how you administer your own life. Think back to last time you went to the DMV. Think back to last time you filed taxes and something went wrong that you needed to fix. Think back to when you registered your child for kindergarten. Think back to the last time you created an online bank account. 
While you are considering these experiences, consider the simultaneous meaning created by the words "of, by, for" and whether any of those experiences existed outside of your Sovereign Rights as a person.
Humanity does not come into existence inside a database. The American Government does not come into authority "of, by, for" database entries. 
Instead, people at the edges of society, in the homes of our towns derive the meaning "of, by, for" their lawful participation. Rights are for people, not birth certificates. People prove birth certificates, birth certificates do not prove people. If an administrative process follows the wrong "administrative precedent" and logic structure, then "words" cease meaning what they were intended to mean.
This words-to-data sleight of hand is apparently easy to run on people. The internet, an investment itself of Government created via DARPA and made public via NSF, showcases daily the misconstrued meaning of "words" as "data". People are being surveilled, tracked and provisioned access to services based on having their personal "ID:DATA" leveraged. In some cases, such as the new ID.me services being used at Government databases, facial scans are being correlated to match people as "people" operating as "data". The methods used defy "words" once easily accessible, and have been replaced by TOSDR higher up the administrative supply chain as contracts of adhesion.
Your root human rights, the basic meaning of words with Constitutional authority to declare war upon the enemies of a specific people in time, have been usurped, and without much notice, most all people have acquiesced to the "out-of-order" administrative data flows capturing their participation. Freedom can not exist on such an administrative plantation, whereby people are captured as data for use by 2nd and 3rd parties without any root control provided to the people giving such data existence and integrity.
People-backwards-authority will destroy this world. America can not be provisioned from a database. People possess root authority in America. America is the leader of the world, and immigrants come to America because "people possess root authority" in America. "Of, By, For" People in America, this is the greatest invention of America. Owning your own authority, owning root authority as a person expressing the Sovereign structure of your Rights as a person IS the greatest super power on planet Earth.
The American consumer marketplace is born in love with the creative spirit of Freedom. The American Dream lures people from the world over to its shores. A chance to be free, to own your own life and express your freedom in a market of ideas, where Rights are seen, protected, and leveraged for the benefit of all people. A place where work is honored, and where ladders may be climbed by personal effort and dedication in pursuit of myriad dreams. A land honored by the people who sustain its promise, who guard its shores, and share understanding of how American best practices can influence and improve the entire world.
It all begins with you.
If I could teach you how to do it for yourself I would. I try. My words here are for you to use as you wish. I donate them with many of my efforts sustained over many years. This moment (2020-2022) has been prepared for by many for many many years. A populace ignorant of how data would alter the meaning of words in the wrong hands was very predictable. Knowing what words as data meant in 1992 was less common. In fact, getting people to open ears, or an email, was a very developmental process. Much hand-holding, much repetition. I have personally shared words the world over, and mentored tens of thousands over the past 25 years. To what end?
I have made no play to benefit from the ignorance of people. I have sought to propel conversation, understanding, skill, and professional practices. By all accounts, I have failed at scale. The world is being over-run by ignorance, and this ignorance is being looted, and much worse, it is being leveraged against the best interest of people, Individuals all.
"We the people" is a literary turn-of-hand in data terms; People, Individuals All. The only reality of the human species that matters is the one that honors what people actually are. Together, each of us as Individual, living among one another.. is the only reality that will ever exist. "We" is a royal construct if used to instantiate an Institutional outcome not under the control of actual people as functioning Individuals, and instead abstracts this reality via language, form, contract or use of computer science to enable services to be rendered upon people rather than "of, by, for" people.
The backwards interpretation of words as data process is the enemy of Humanity. Simple as that.
You must own root authority; Americans, People. 




Ben Werdmüller

The deep, dark wrongness

I was always a pretty good kid: good-natured, good in school, imaginative, and curious. I’d get up early every day to draw comic books before school; during the breaks between lessons on the school playground, I’d pretend I was putting on plays for astronauts. Afterwards, I’d muck about on our 8-bit computer, writing stories or small BASIC programs. I was a weird kid, for sure - nerdy long before it was cool, the third culture child of activist hippies - but relatively happy with it. I had a good childhood that I mostly remember very fondly.

Becoming a teenager also meant becoming the owner of a dark cloud that no-one else could see, which seemed to grow every day. By the time I was fifteen or sixteen, I would wake up some days without any energy or motivation at all. Sometimes, cycling home from my high school along the Marston Ferry Road in Oxford, a trunk road on one side and fields of cows on the other, I’d just stop. It was as if I was unable to move my feet on the pedals. I described it at the time as feeling like my blood had suddenly turned to water. There was nothing left inside me to go.

That feeling of nothingness inside me, like my fire had gone out, came into focus before I graduated from high school. I felt wrong. There was something irretrievably wrong about me - no, wrong with me - and everybody knew it, and nobody would tell me what it was.

At the same time, I discovered the internet. Whereas I’d come home as a kid to draw and write, as a teenager I’d connect to our dial-up Demon Internet connection and sync my emails and newsgroup posts before logging off again. I learned to build websites as a way to express myself. (Here’s one of my interminable and not-just-a-little-toxic teenage poetry collections, preserved for all eternity on the Internet Archive. You’re welcome.) Most importantly of all, I connected with new friends who were my age, over usenet newsgroups and IRC: two text mediums.

Somehow, when I was connecting with people over text, in a realm where nobody could see me or really knew what I looked like, I felt more free to be myself. Even when I met up with my fellow uk.people.teens posters - our collective parents were somehow totally fine with us all traveling the country to meet strangers by ourselves - I felt more like I could be confidently me, perhaps because I had already laid the groundwork of my friendships in a way that I had more control over. My family has always felt safe to me because I could just be me around them. I had a core group of very close school friends too, who I’m friends with to this day; people who I felt like didn’t judge me, and who I could feel safe around. In more recent years, some of those close friends have veered into conservative Jordan Peterson territory and anti-inclusion rhetoric, and it’s felt like a profound violation of that safety to a degree that I haven’t been able to fully explain until recently.

My connection to the internet - as in my personal connection, the emotional link I made with it - came down to that feeling of safety. I used my real name, but there was a pseudonymity to it; I was able to skate past all the artifice and pressure to conform of in-person society that had led me to feel wrong in the first place. On the web, I didn’t have that feeling. I could just be a person like everybody else. Being present in real life was effort; being online was an enormous weight off my shoulders.

Perhaps the reason I’ve come back to building community spaces again and again is because I remember that feeling of connecting for the first time and finding that the enormous cloud hanging over me was missing. That deep connection between people who have never met is still, for me, what the internet is all about. Or to put it another way, I’m constantly chasing that feeling, and that’s why I work on the internet.

Likewise, that’s what I’m looking for from my in-person connections. I want to feel like I can be me, and that I will be loved and accepted as I am. While I’ve found that in connections with all kinds of people, I’ve most often found that to be true in queer spaces: in my life, the people who have had to work to define their own identity are the most likely to accept people who don’t fit in.

It’s important to me: that deep, dark feeling of wrongness has never gone away. It’s under my skin at the office; it’s behind my eyes at family gatherings; it’s what I think about when I wake up at three in the morning. If we’re friends or family or lovers, I want to feel safe with you. I want to know that you accept me despite the wrongness, whatever the wrongness might be.

It might be that the relief the internet gave me also delayed my reconciliation with what the wrongness actually was.

When I was six years old, I cried and cried because my mother told me I wouldn’t grow up to be a woman. The feeling of not wanting to be myself has been with me as long as I can remember. I found beauty in people who were not like myself. There was much to aspire to in not being me.

Puberty gave me, to be frank, enormous mass. I was taller than everyone else, bigger than everyone else, by the time I was eleven or twelve. I towered over everyone by the time I was fourteen. I was bigger and hairier and smellier. In adulthood, the way one ex-girlfriend described it, my body isn’t just taller: it’s like someone has used the resize tool in Photoshop and just made me bigger, proportionally. I was never thin or athletic or dainty; I suddenly ballooned like the Incredible Hulk, but without the musculature.

I had felt wrong in my body before. Now, there was more of my body - a lot more - to feel wrong in.

So much of that dark cloud was my discomfort with my physicality: the meatspace experience of living as me. People started to tell me that I was easy to find in crowds, or made fun of the bouncy walk I developed as my limbs grew. They meant nothing by it, but it cut deep.

To this day, I recoil when I see a photo of myself alongside someone else. I hate it: there’s always this enormous dude ruining a perfectly good picture. It doesn’t even feel like me; it’s akin to when Sam Beckett looks into the mirror in Quantum Leap, or when Neo sees the projection of himself in The Matrix: Resurrections. The word, I’ve learned, is dysphoria.

I don’t know where to take that, or what it really means. I feel intense discomfort with my body and the physical manifestation of myself in the world. It’s not necessarily gender dysphoria - I don’t know - but it’s dysphoria nonetheless. I hate my body and it doesn’t feel like me.

What now?

The advent of the commercial internet must have been solace for a great many people in this way. It’s not unreasonable to say that it saved my life: not necessarily because I would have killed myself (although there have been times in my life, particularly when I was younger, when I’ve thought about it), but because I wouldn’t have found a way to build community and live with the authentic connections I did. I would have been hiding, fully and completely. Everyone deserves to not hide.

But because I’ve been living with one foot outside of the physical world, it’s also taken me a long time to understand that my feeling of wrongness was so tied into my physicality, and that my need to present differently was so acute.

I actually felt a little relief last year when I dyed my hair electric blue on a whim: it felt right in a way I wasn’t used to, perhaps because it was something under my control, or perhaps because it was a signal that I wasn’t the person I felt I had presented as up to then. The blue has long since faded and grown out into highlights, but some sense of the relief it brought remains. I have to wonder how I would feel if I did more to my body, and what it would take to make that dark cloud go away for good.

I think feeling good - no, feeling right - involves embracing that the feeling I’ve been experiencing my whole life is valid, and then exploring what it means in the real world. You’ve got to face it; you’ve got to give it a name.

There’s a TikTok trend that uses a line from a MGMT song to make a point about closeted queerness: ‌Just know that if you hide, it doesn't go away. It’s clear to me that the internet has been a godsend for people who don’t feel like they fit in, who need to find community that is nurturing for them, and who need to explore who they are. Removing that cognitive cloud is no small thing. But the next step is still the hardest: figuring out who you are, and finding out how to be yourself.

Saturday, 22. January 2022

Ben Werdmüller

The Revenge of the Hot Water Bottle

“A hot water bottle is a sealable container filled with hot water, often enclosed in a textile cover, which is directly placed against a part of the body for thermal comfort. The hot water bottle is still a common household item in some places – such as the UK and Japan – but it is largely forgotten or disregarded in most of the industrialised world. If people know of it, they usually associate it with pain relief rather than thermal comfort, or they consider its use an outdated practice for the poor and the elderly.”

I loved this piece about the history of hot water bottles. (They’re great!)

[Link]


John Philpin : Lifestream

Expanding Readwise niceties into articles …

Expanding Readwise niceties into articles …


100!

100!


Ben Werdmüller

Some links out to the blogosphere

I’ve added two links to the bottom of every page on my website.

The first is to the IndieWeb webring: a directory of personal websites from people who are a part of the indieweb movement. These sites run the gamut of topics, but they’re mostly personal profiles from people who like to write on the web. Just like me! (You can click the left or right arrows to get to a random site.)

The second is to Blogroll.org, which I learned about from a post on Winnie Lim’s site. It’s exactly what you’d expect from the name: a categorized list of blogs. I love it and I’m glad it exists.

I want more of you to blog. Please write about your personal experiences! I want to read them! And doing it on your personal space is far better than simply tweeting, or using something like Facebook or (shudder) LinkedIn, simply because you can be more long-form, and build up a corpus of writing that really represents you. And sure, yes, Medium is fine. But I want to read what you have to say, and other people do too.


Let’s stop saying these two things

“When I hear “drinking the Kool-Aid”, I think about Leo Ryan, Jackie Speier, and 900+ dead followers of Jim Jones. [...] If your white grandfather was eligible to vote prior to the passage of the Fifteenth Amendment, you were eligible to vote. When you talk about being grandfathered in, that’s what you’re referring to.”

[Link]


When Microsoft Office Went Enterprise

“Practically, any time someone tries to take on two conflicting perspectives in one product, the product comes across as a compromise. It is neither one nor the other, but a displeasing mess. The hope I had at the start was that by deprioritizing our traditional retail-customer focus on personal productivity at the start of the release, we avoided the messy middle. We succeeded at that, but I was struggling with how unsatisfying this felt.”

[Link]


I haven’t had a good mental health ...

I haven’t had a good mental health week, to say the least. Recharging and hibernating. I’ll bounce back.


John Philpin : Lifestream

I still need to write more about this - but no more dilly-da

I still need to write more about this - but no more dilly-dallying - time to share regardless. The image speaks volumes.

Credit Tim Urban via Chris Hladczuk


“As I've said a bunch before, I thi

“As I've said a bunch before, I think that crypto will have a place in the future, but I think the applications of it right now feel like they're solving solved problems worse than the existing solutions.”

💬 Matt Birchler

Source : Solving Solved Problems


“I like any company that’s focused

“I like any company that’s focused on making the best, not the most.”

💬 John Gruber

Friday, 21. January 2022

John Philpin : Lifestream

German Police Caught Using COVID-Tracing Data To Search For

German Police Caught Using COVID-Tracing Data To Search For Crime Witnesses.

As the article says:

“I guess the only surprise is that it took this long to be abused.”

I wonder how many other countries are doing this and have not yet been caught?

Answers on a postcard!


Paying for everything twice. … the second price is basically

Paying for everything twice. … the second price is basically your time … that you will invest in ‘going to the gym’, ‘reading the book’, ‘learning how it works’, setting it up.

I think this has a lot to do with people not valuing their time.

Read on ….


Simon Willison

Roblox Return to Service 10/28-10/31 2021

Roblox Return to Service 10/28-10/31 2021

A particularly good example of a public postmortem on an outage. Roblox was down for 72 hours last year, as a result of an extremely complex set of circumstances which took a lot of effort to uncover. It's interesting to think through what kind of monitoring you would need to have in place to help identify the root cause of this kind of issue.

Via @benbjohnson


John Philpin : Lifestream

Interesting that I read this from David Perelman about 2 hou

Interesting that I read this from David Perelman about 2 hours after I posted this


Darwin in action. Folk singer dies after deliberately cat

Darwin in action. Folk singer dies after deliberately catching COVID.

Apple to buy Peloton? I don’t see it.

Apple to buy Peloton?

I don’t see it.

Thursday, 20. January 2022

John Philpin : Lifestream

Another Report Shows U.S. 5G Isn’t Living Up To The Hype.

Another Report Shows U.S. 5G Isn’t Living Up To The Hype.

“A number of recent studies have already shown that U.S. wireless isn’t just the most expensive in the developed world, U.S. 5G is significantly slower than most overseas deployments. “

But we all knew that - didn’t we?


In the old days, we couldn’t be heard because gatekeepers cu

In the old days, we couldn’t be heard because gatekeepers curated who and what we could read and listen to.

Now we can’t be heard above the noise of everyone wanting to be heard.

So ‘curators’ are back.

Real curators. Not algorithms. Real human curators.


I am heading in the same direction as @rnv original post. I

I am heading in the same direction as @rnv’s original post. I won’t get as deep as he did, but bit by bit my Blot site is becoming my historical archive and Micro Blog is my personal / observations site.

There are other spaces out there, but they are gradually folding into Blot.


I forgot that Apple removed the ‘Save As’ menu item several

I forgot that Apple removed the ‘Save As’ menu item several OS versions ago. I just had to remind myself how I added it back.

Pretty simple really.


Much as I like this glossary … I do feel that there is an op

Much as I like this glossary … I do feel that there is an opportunity for an Ambrose Bierce approach to this topic.


🎵🎼 🎶 Thinking about (and playing some) Free, went to check s

🎵🎼 🎶 Thinking about (and playing some) Free, went to check something and disappeared down a Rabbit Hole. I never knew that it was Alexis Korner that gave them their name.


Web3 is going just great “Twitter launches special hexa

Web3 is going just great

“Twitter launches special hexagonal NFT profile pictures, so now you don’t even have to check a username for “.eth” to know who to avoid”

😭😭😭😭😭


Creativity isn't about starting something.

Creativity isn't about starting something.
It's about making something.
Making requires sustained effort, and sustained effort requires fuel.
That fuel is optimism.

💬 Jason Fried

Creativity requires optimism


Simon Willison

How to Add a Favicon to Your Django Site

How to Add a Favicon to Your Django Site

Adam Johnson did the research on the best way to handle favicons - Safari still doesn't handle SVG icons so the best solution today is a PNG served from the /favicon.ico path. This article inspired me to finally add a proper favicon to Datasette.

Via @adamchainz
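
For illustration, here is a minimal Django sketch of that approach. This is an assumption about the general pattern rather than Adam Johnson's exact code, and the static/favicon.png location is a made-up placeholder:

# urls.py - a minimal sketch: serve a PNG from the /favicon.ico path so that
# browsers which never read the <link rel="icon"> tag still find an icon.
# Assumes Django 3.1+ (settings.BASE_DIR is a pathlib.Path) and a PNG checked
# in at BASE_DIR / "static" / "favicon.png" (hypothetical location).
from django.conf import settings
from django.http import FileResponse
from django.urls import path
from django.views.decorators.cache import cache_control


@cache_control(max_age=60 * 60 * 24, public=True)
def favicon(request):
    icon_path = settings.BASE_DIR / "static" / "favicon.png"
    return FileResponse(open(icon_path, "rb"), content_type="image/png")


urlpatterns = [
    path("favicon.ico", favicon),
    # ... the rest of the site's URL patterns
]

Point the view at a reasonably sized PNG and most browsers will pick it up without any extra markup.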

Wednesday, 19. January 2022

Phil Windley's Technometria

Web3, Coherence, and Platform Sovereignty

Summary: I read a couple of interesting things the past few weeks that have me thinking about the world we're building, the impact that tech has on it, and how it self-governs (or not).

In The crypto-communists behind the Web3 revolution, Benjamin Pimentel argues that "The future of decentralized finance echoes a decidedly Marxist vision of the future." He references various Silicon Valley icons like Jack Dorsey, Marc Andreessen, Elon Musk, and others, comparing their statements on Web3 and crypto with the ideology of communism. He references Tom Goldberg's essay exploring the similarities between Karl Marx and Nakamoto Satoshi:

"Marx advocated for a stateless system, where the worker controlled the means of production," [Goldberg] said. "Satoshi sought to remove financial intermediaries — the banks and credit card companies that controlled the world's flow of value."

But while Marx and Satoshi both "articulated a reasoned, well-thought-out vision of the future," Goldenberg added, "neither had the power to predict how their ideas would influence others or be implemented. And neither could control their own creations."

And, among other things, compares Musk to Lenin:

But Musk and Lenin seem simpatico when it comes to the "ultimate aim of abolishing the state" (Lenin): "Just delete them all," Musk recently said, of the government subsidies that have historically sustained his firms. (Perhaps he's studied Mao Zedong's essay "On Contradiction.")

"So long as the state exists there is no freedom," Lenin declared. "When there is freedom, there will be no state."

All in all, it's an interesting read with lots to think about. But what really made it speak to me is that I've also been reading The Stack: On Software and Sovereignty by Benjamin Bratton, the Director of the Center for Design and Geopolitics at the University of California, San Diego. I won't lie; the book is, for a non-sociologist like me, a tough read. Still, I keep going cause there are so many interesting ideas in it.

Relevant to the crypto-communist article is the idea of platform sovereignty. Bratton references Carl Schmitt's arguments on the relationship between political epochs and spatial subdivision. As Bratton says there is "no stable geopolitical order without an underlying architecture of spatial subdivision" and "no geography without first topology."

Here's his insight on the heterarchical relationship between markets and states:

[O]ne of the things that makes neoliberalism unique is that markets do not operate in conjunction with or in conflict with sovereign states, but rather that sovereignty is itself shifted from states into markets.

The point here is that markets and states co-exist, influencing each other. And that sovereignty can shift. Both are ways of creating coherence among participants. I think that's a good way to explore what Bratton means when he talks about platform sovereignty.

One of the important ideas in The Stack is that platforms rarely replace each other. Rather, they co-exist, strengthening or diminishing other platforms. Markets didn't do away with states—even as they stole attention and power from them. And the internet didn't do away with either. The degree of sovereignty these platforms and their Users (in Bratton's terminology) enjoy depends on a variety of factors, but it most assuredly doesn't rely wholly on permission from other platforms.

In this worldview, there are innumerable platforms, not nicely contained within each other, but stacked willy-nilly with overlapping boundaries. Schmitt was primarily interested in geopolitical boundaries that played out in the Westphalian regime [1]. Bratton recognizes that one thing networks have given us is lots of boundaries, each one giving rise to sovereignty of various forms and in different degrees.

While many focus on what Doc Searls calls "vendor sports" and talk about which platform triumphs over another, Bratton's view is that when you stop looking at platforms in a specific type or category, there's not necessarily competition, and less so, the means of control between them that would ensure the ascendance and triumph of one over the other. Each is a social system with its own participants, rules, and goals.

Social systems that are enduring, scalable, and generative require coherence among participants. Coherence allows us to manage complexity. Coherence is necessary for any group of people to cooperate. The coherence necessary to create the internet came in part from standards, but more from the actions of people who created organizations, established those standards, ran services, and set up exchange points.

Coherence enables a group of people to operate with one mind about some set of ideas, processes, and outcomes. We only know of a few ways of creating coherence in social systems: tribes, institutions, markets, and networks [2]. Startups, for example, work as tribes. When there's only a small set of people, a strong leader can clearly communicate ideas and put incentives—and disincentives—in place. Coherence is the result. As companies grow, they become institutions that rely on rules and bureaucracy to establish coherence. While a strong leader is important, institutions are more about the organization than the personalities involved. Tribes and institutions are centralized--someone or some organization is making it all happen. More to the point, institutions rely on hierarchy to achieve coherence.

Markets are decentralized—specifically they are heterarchical rather than hierarchical. A set of rules, perhaps evolved over time through countless interactions, govern interactions and market participants are incented by market forces driven by economic opportunity to abide by the rules. Competition among private interests (hopefully behaving fairly and freely) allows multiple players with their own agendas to process complex transactions around a diverse set of interests.

Networks are also decentralized. Most of the platforms we see emerging today are networks of some kind. The rules of interaction in networked platforms are set in protocol. But protocol alone is not enough. Defining a protocol doesn't string cable or set up routers. There's something more to it.

As we've said, one form of organization doesn't usually supplant the previous, but augments it. The internet is the result of a mix of institutional, market-driven, and network-enabled forces. The internet has endured and functions because these forces, whether by design or luck, are sufficient to create the coherence necessary to turn the idea of a global, public decentralized communications system into a real network that routes packets from place to place. The same can be said for any other enduring platform.

Funny, it was Marc Andreessen, one of Pimentel's crypto-communists, who introduced me to Neal Stephenson's Snow Crash in 1999. Snow Crash is set in a world where various network platforms co-exist with much-diminished nation states, a metaverse, and each other, molding the geopolitical landscape of the novel to the extent they engender coherence in their various Users.

So, while Pimentel's article is interesting and informative in comparing the stated aspirations of crypto-enthusiasts and communists, I think the more correct view is that crypto isn't going to replace anything—we're not headed to a crypto-communist future. Rather it's going to add more platforms that influence, but don't displace, the things that came before. And those who see Web3 as a passing fad will likely be disappointed that it refuses to die—so long as it generates networked platforms that create sufficient coherence among their Users.

Books Mentioned
The Stack: On Software and Sovereignty by Benjamin H. Bratton

A comprehensive political and design theory of planetary-scale computation proposing that The Stack—an accidental megastructure—is both a technological apparatus and a model for a new geopolitical architecture. What has planetary-scale computation done to our geopolitical realities? It takes different forms at different scales—from energy and mineral sourcing and subterranean cloud infrastructure to urban software and massive universal addressing systems; from interfaces drawn by the augmentation of the hand and eye to users identified by self-quantification and the arrival of legions of sensors, algorithms, and robots. Together, how do these distort and deform modern political geographies and produce new territories in their own image?

Snow Crash by Neal Stephenson

In reality, Hiro Protagonist delivers pizza for Uncle Enzo’s CosoNostra Pizza Inc., but in the Metaverse he’s a warrior prince. Plunging headlong into the enigma of a new computer virus that’s striking down hackers everywhere, he races along the neon-lit streets on a search-and-destroy mission for the shadowy virtual villain threatening to bring about infocalypse. • In this mind-altering romp—where the term "Metaverse" was first coined—you'll experience a future America so bizarre, so outrageous, you'll recognize it immediately • One of Time's 100 best English-language novels.

The Shield of Achilles: War, Peace, and the Course of History by Philip Bobbitt

For five centuries, the State has evolved according to epoch-making cycles of war and peace. But now our world has changed irrevocably. What faces us in this era of fear and uncertainty? How do we protect ourselves against war machines that can penetrate the defenses of any state? Visionary and prophetic, The Shield of Achilles looks back at history, at the “Long War” of 1914-1990, and at the future: the death of the nation-state and the birth of a new kind of conflict without precedent.

Notes

[1] For a deep dive into Westphalian "princely states" and their evolution into modern nation states, I can't recommend strongly enough Philip Bobbitt's The Shield of Achilles: War, Peace, and the Course of History.

[2] To explore this more, see this John Robb commentary on David Ronfeldt's RAND Corporation paper "Tribes, Institutions, Markets, Networks" (PDF).

Photo Credit: Lenin from Arkady Rylov via Wikimedia Commons (CC0)

Tags: web3 the+stack ssi bitcoin sovereign

Tuesday, 18. January 2022

Kerri Lemole

W3C Verifiable Credentials Education Task Force 2022 Planning

At the W3C VC-EDU Task Force we’ve been planning meeting agendas and topics for 2022. We’ve been hard at work writing use cases, helping education standards organizations understand and align with VCs, and we’ve been heading towards a model recommendation doc for the community. In 2022 we plan on building upon this and are ramping up for an exciting year of pilots.

To get things in order, we compiled a list of topics and descriptions in this sheet and have set up a ranking system. This ranking system is open until January 19 at 11:59pm ET and anyone is invited to weigh in. The co-chairs will evaluate the results and we’ll discuss them at the January 24th VC-EDU Call (call connection info).

It’s a lengthy and thought-provoking list and I hope we have the opportunity to dig deep into each of these topics and maybe more. I reconsidered my choices quite a few times before I landed on these top 5:

1. Verifiable Presentations (VPs) vs (nested) Verifiable Credentials (VCs) in the education context — How to express complex nested credentials (think full transcript). The description references full transcript but this topic is also related to presentation of multiple single achievements by the learner. I ranked this first because presentations are a core concept of VCs and very different from how the education ecosystem is accustomed to sharing their credentials. VPs introduce an exchange of credentials in response to a verifiable request versus sharing a badge online or emailing a PDF. Also, there’s been quite a bit of discussion surrounding more complex credentials such as published transcripts that we can get into here.

2. Integration with Existing Systems — Digitizing existing systems, vs creating; existing LMSes; bridging; regulatory requirements — ex: licensing, PDFs needing to be visually inspected. To gain some traction with VCs, we need to understand how systems work now and what can be improved upon using VCs but also, how do we make VCs work with what is needed now?

3. Bridging Tech — This ties into integrating with existing systems above. We are accustomed to the tech we have now and it will be with us for some time. For instance, email will still be used for usernames and identity references even when Decentralized Identifiers start gaining traction. They will coexist and it can be argued that compromises will need to be made (some will argue against this).

4. Protocols — Much of the work in VC-EDU so far has been about the data model. But what about the protocols — what do we /do/ with the VCs once we settle on the format? (How to issue, verify, exchange, etc). This made my top five because as the description notes, we’re pretty close to a data model but we need to understand more about the protocols that deliver, receive, and negotiate credential exchanges. Part of what we do in VC-EDU is learn more about what is being discussed and developed in the broader ecosystem and understanding protocols will help the community with implementation.

5. Context file for VC-EDU — Create a simple context file to describe an achievement claim. There are education standards organizations like IMS Global (Open Badges & CLR) that are working towards aligning with VC-EDU but having an open, community-created description of an achievement claim, even if it reuses elements from other vocabularies, will provide a simple and persistent reference. A context file in VC-EDU could also provide terms for uses in VCs that haven’t yet been explored in education standards organizations and could be models for future functionality considerations. (A purely illustrative sketch of such a credential follows this list.)
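
For readers coming to this from outside the VC world, here is a purely illustrative sketch, written as a Python dict, of roughly what a single achievement claim looks like when wrapped in a W3C Verifiable Credential. It is my own example, not a VC-EDU deliverable; the example.org context URL, the credential type name, and the achievement fields are all hypothetical:

# A purely illustrative Verifiable Credential for one achievement claim.
# Only the w3.org context URL and the core VC fields (type, issuer,
# issuanceDate, credentialSubject) come from the VC data model; everything
# else here is a made-up placeholder.
credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://example.org/vc-edu/v1",  # hypothetical community context file
    ],
    "type": ["VerifiableCredential", "EducationalAchievementCredential"],
    "issuer": "did:example:university",
    "issuanceDate": "2022-01-18T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner",
        "achievement": {  # hypothetical achievement shape
            "name": "Introduction to Decentralized Identifiers",
            "description": "Completed the introductory course.",
        },
    },
    # A real credential would also carry a proof, e.g. a linked data signature.
}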

Simon Willison

Tricking Postgres into using an insane – but 200x faster – query plan

Tricking Postgres into using an insane – but 200x faster – query plan

Jacob Martin talks through a PostgreSQL query optimization they implemented at Spacelift, showing in detail how to interpret the results of EXPLAIN (FORMAT JSON, ANALYZE) using the explain.dalibo.com visualization tool.
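
As an aside, capturing that JSON plan yourself is straightforward; here is a small sketch using psycopg2, where the connection string and the query are made-up placeholders (note that ANALYZE actually executes the query):

# Fetch EXPLAIN (FORMAT JSON, ANALYZE) output so it can be pasted into
# explain.dalibo.com. The DSN and the query below are placeholders.
import json
import psycopg2

conn = psycopg2.connect("dbname=example")  # hypothetical connection string
with conn, conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (FORMAT JSON, ANALYZE) SELECT * FROM runs WHERE state = %s",
        ("QUEUED",),
    )
    plan = cur.fetchone()[0]
    if isinstance(plan, str):  # some adapters return the JSON as text
        plan = json.loads(plan)
    print(json.dumps(plan, indent=2))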


Weeknotes: s3-credentials prefix and Datasette 0.60

A new release of s3-credentials with support for restricting access to keys that start with a prefix, Datasette 0.60 and a write-up of my process for shipping a feature.

s3-credentials --prefix

s3-credentials is my tool for creating limited scope AWS credentials that can only read and write from a specific S3 bucket. I introduced it in this blog entry in November, and I've continued to iterate on it since then.

I released s3-credentials 0.9 today with a feature I've been planning since I first built the tool: the ability to specify a --prefix and get credentials that are only allowed to operate on keys within a specific folder within the S3 bucket.

This is particularly useful if you are building multi-tenant SaaS applications on top of AWS. You might decide to create a bucket per customer... but S3 limits you to 100 buckets for your account by default, with a maximum of 1,000 buckets if you request an increase.

So a bucket per customer won't scale above 1,000 customers.

The sts.assume_role() API lets you retrieve temporary credentials for S3 that can have limits attached to them - including a limit to accessing only keys within a specific bucket and under a specific prefix. That means you can create limited-duration credentials that can only read and write from a specific prefix within a bucket.

Which solves the problem! Each of your customers can have a dedicated prefix within the bucket, and your application can issue restricted tokens that greatly reduce the risk of one customer accidentally seeing files that belong to another.

Here's how to use it:

s3-credentials create name-of-bucket --prefix user1410/

This will return a JSON set of credentials - an access key and secret key - that can only be used to read and write keys in that bucket that start with user1410/.

Add --read-only to make those credentials read-only, and --write-only for credentials that can be used to write but not read records.

If you add --duration 15m the returned credentials will only be valid for 15 minutes, using sts.assume_role(). The README includes a detailed description of the changes that will be made to your AWS account by the tool.

You can also add --dry-run to see a text summary of changes without applying them to your account. Here's an example:

% s3-credentials create name-of-bucket --prefix user1410/ --read-only --dry-run --duration 15m
Would create bucket: 'name-of-bucket'
Would ensure role: 's3-credentials.AmazonS3FullAccess'
Would assume role using following policy for 900 seconds:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::name-of-bucket"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::name-of-bucket"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "user1410/*"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectLegalHold",
                "s3:GetObjectRetention",
                "s3:GetObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::name-of-bucket/user1410/*"
            ]
        }
    ]
}

As with all things AWS, the magic is in the details of the JSON policy document. The README includes details of exactly what those policies look like. Getting them right was by far the hardest part of building this tool!
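For a rough sense of what happens under the hood, here is a sketch using boto3 directly: an inline session policy passed to sts.assume_role() narrows the role's permissions to a single prefix. The role ARN, bucket and prefix are placeholders, and the policy is trimmed for brevity; the README documents the exact policies the tool generates.

import json
import boto3

# Placeholder values - substitute your own role ARN, bucket and prefix
ROLE_ARN = "arn:aws:iam::123456789012:role/s3-access-role"
BUCKET = "name-of-bucket"
PREFIX = "user1410/"

# Trimmed-down version of a prefix-restricted policy
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{BUCKET}"],
            "Condition": {"StringLike": {"s3:prefix": [f"{PREFIX}*"]}},
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/{PREFIX}*"],
        },
    ],
}

sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="s3-prefix-demo",
    Policy=json.dumps(policy),  # the inline session policy further restricts the role
    DurationSeconds=900,        # 15 minutes
)
credentials = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration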

s3-credentials integration tests

When writing automated tests, I generally avoid calling any external APIs or making any outbound network traffic. I want the tests to run in an isolated environment, with no risk that some other system that's having a bad day could cause random test failures.

Since the hardest part of building this tool is having confidence that it does the right thing, I decided to also include a suite of integration tests that actively exercise Amazon S3.

By default, running pytest will skip these:

% pytest
================ test session starts ================
platform darwin -- Python 3.10.0, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /Users/simon/Dropbox/Development/s3-credentials
plugins: recording-0.12.0, mock-3.6.1
collected 61 items

tests/test_dry_run.py ....                        [  6%]
tests/test_integration.py ssssssss                [ 19%]
tests/test_s3_credentials.py ................     [ 45%]
.................................                 [100%]
=========== 53 passed, 8 skipped in 1.21s ===========

Running pytest --integration runs the test suite with those tests enabled. It expects the computer they are running on to have AWS credentials with the ability to create buckets and users - I'm too nervous to add these secrets to GitHub Actions, so I currently only run the integration suite on my own laptop.

These were invaluable for getting confident that the new --prefix option behaved as expected, especially when combined with --read-only and --write-only. Here's the test_prefix_read_only() test which exercises the --prefix --read-only combination.
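The real test lives in the repository; as a rough sketch of its shape, a prefix-scoped read-only credential can be checked by confirming that reads inside the prefix succeed while reads outside it, and any writes, raise an error. The fixture and object keys below are hypothetical:

import boto3
import pytest
from botocore.exceptions import ClientError


def test_prefix_read_only_sketch(scoped_credentials):
    # scoped_credentials is a hypothetical fixture returning prefix-scoped, read-only
    # keys created by "s3-credentials create ... --prefix user1410/ --read-only"
    s3 = boto3.client(
        "s3",
        aws_access_key_id=scoped_credentials["AccessKeyId"],
        aws_secret_access_key=scoped_credentials["SecretAccessKey"],
        aws_session_token=scoped_credentials.get("SessionToken"),
    )
    # Reading a key inside the prefix should work
    s3.get_object(Bucket="name-of-bucket", Key="user1410/hello.txt")
    # Reading outside the prefix should be denied
    with pytest.raises(ClientError):
        s3.get_object(Bucket="name-of-bucket", Key="other-user/hello.txt")
    # Writing should be denied for read-only credentials
    with pytest.raises(ClientError):
        s3.put_object(Bucket="name-of-bucket", Key="user1410/new.txt", Body=b"data")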

s3-credentials list-bucket

One more new feature: the s3-credentials list-bucket name-of-bucket command lists all of the keys in a specific bucket.

By default it returns a JSON array, but you can add --nl to get back newline delimited JSON or --csv or --tsv to get back CSV or TSV.

So... a fun thing you can do with the command is pipe the output into sqlite-utils insert to create a SQLite database file of your bucket contents... and then use Datasette to browse it!

% s3-credentials list-bucket static.niche-museums.com --nl \
    | sqlite-utils insert s3.db keys - --nl
% datasette s3.db -o

This will create a s3.db SQLite database with a keys table containing your bucket contents, then open Datasette to let you interact with the table.
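If you would rather stay in Python, a roughly equivalent sketch uses boto3 to list the keys and the sqlite-utils Python API to load them (the bucket name is just the example from above):

import boto3
import sqlite_utils

s3 = boto3.client("s3")
db = sqlite_utils.Database("s3.db")

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="static.niche-museums.com"):
    db["keys"].insert_all(
        {
            "Key": obj["Key"],
            "Size": obj["Size"],
            "LastModified": obj["LastModified"].isoformat(),
        }
        for obj in page.get("Contents", [])
    )
# Then: datasette s3.db -o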

Datasette 0.60

I shipped several months of work on Datasette a few days ago as Datasette 0.60. I published annotated release notes for that release which describe the background of those changes in detail.

I also released new versions of datasette-pretty-traces and datasette-leaflet-freedraw to take advantage of new features added to Datasette.

How I build a feature

My other big project this week was a blog post: How I build a feature, which goes into detail about the process I use for adding new features to my various projects. I've had some great feedback about this, so I'm tempted to write more about general software engineering process stuff here in the future.

Releases this week

s3-credentials: 0.9 - (9 releases total) - 2022-01-18
A tool for creating credentials for accessing S3 buckets

datasette-pretty-traces: 0.4 - (6 releases total) - 2022-01-14
Prettier formatting for ?_trace=1 traces

datasette-leaflet-freedraw: 0.3 - (8 releases total) - 2022-01-14
Draw polygons on maps in Datasette

datasette: 0.60 - (105 releases total) - 2022-01-14
An open source multi-tool for exploring and publishing data

datasette-graphql: 2.0.1 - (33 releases total) - 2022-01-12
Datasette plugin providing an automatic GraphQL API for your SQLite databases

TIL this week

Configuring Dependabot for a Python project with dependencies in setup.py
JavaScript date objects
Streaming indented output of a JSON array

Monday, 17. January 2022

Here's Tom with the Weather

TX Pediatric Covid Hospitalizations

Using data from healthdata.gov, this is a graph of the “total_pediatric_patients_hospitalized_confirmed_covid” column over time for Texas. A similar graph for the U.S. was shown on Twitter by Rob Swanda.
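For anyone who wants to reproduce the graph, here is a rough sketch in Python with pandas and matplotlib; the CSV filename and the date/state column names are assumptions about the healthdata.gov export, so check them against the file you download:

import pandas as pd
import matplotlib.pyplot as plt

# Assumed filename for the state timeseries CSV downloaded from healthdata.gov
df = pd.read_csv("covid_hospitalizations_by_state.csv", parse_dates=["date"])

tx = df[df["state"] == "TX"].sort_values("date")
tx.plot(
    x="date",
    y="total_pediatric_patients_hospitalized_confirmed_covid",
    title="TX pediatric patients hospitalized with confirmed COVID",
    legend=False,
)
plt.show()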


Markus Sabadello on Medium

Transatlantic SSI Interop

Today, there are more and more initiatives working on decentralized identity infrastructures, or Self-Sovereign Identity (SSI). However, there is a big paradox underlying all those initiatives: Even though they often use the same technical specifications, e.g. W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs), they are in practice usually not compatible. There are just too many details where technological choices can diverge. Yes, we all use DIDs and VCs. But do we use Data Integrity Proofs (formerly called Linked Data Proofs) or JWT Proofs? JSON-LD contexts or JSON schemas, or both? Do we use DIDComm (which version?), or CHAPI, or one of the emerging new variants of OpenID Connect? Which one of the many revocation mechanisms? Which DID methods? How do we format our deep links and encode our QR codes?

We all want to build the missing identity layer for the Internet, where everything is interoperable just like on the web. But we all do it in slightly different ways. So how can we solve this paradox? Do we create yet another interoperability working group?

No! We try out simple steps and make them work. We conduct concrete experiments that bridge gaps and cross borders. In this case, we planned and executed an experiment that demonstrates interoperability between prominent decentralized identity initiatives in the EU and the US, funded by the NGIatlantic.eu program. Two companies collaborated on this project: Danube Tech (EU) and Digital Bazaar (US).

EU-US collaboration on decentralized identity

In the EU, the European Blockchain Service Infrastructure (EBSI) is building an ambitious network that could become the basis for a digital wallet for all EU citizens. In the US, the Department of Homeland Security‘s Silicon Valley Innovation Program (SVIP) is working with companies around the world on personal digital credentials as well as trade use cases. Both projects have developed easy-to-understand narratives (student “Eva” in EBSI, immigrant “Louis” in SVIP). Both narratives are described further in the W3C’s DID Use Cases document (here and here). So we thought, let’s conduct an experiment that combines narratives and technological elements from both the EU and US sides!

SVIP (left side) and EBSI (right side)

We built and demonstrated two combined stories:

Eva studied in the EU and would then like to apply for a US visa. In this story, there is an EU-based Issuer of a VC, and a US-based Verifier.

Louis is an immigrant in the US and would like to apply for PhD studies at an EU university. In this story, there is a US-based Issuer of a VC, and an EU-based Verifier.

For a walkthrough video, see: https://youtu.be/1t9m-U-3lMk

For a more detailed report, see: https://github.com/danubetech/transatlantic-ssi-interop/

In the broader decentralized identity community, both the EU- and US-based initiatives currently have strong influence. EBSI’s main strength is its ability to bring together dozens of universities and other organizations to build a vibrant community of VC Issuers and Verifiers. SVIP’s great value has been its continuous work on concrete test suites and interoperability events (“plugfests”) that involve multiple heterogeneous vendor solutions.

In this project, we used open-source libraries supported by ESSIF-Lab, as well as the Universal Resolver project from the Decentralized Identity Foundation (DIF). We also used various components supplied by Digital Bazaar, such as the Veres Wallet.

We hope that our “Transatlantic SSI Interop” experiment can serve as an inspiration and blueprint for further work on interoperability not only between different DID methods and VC types, but also between different vendors, ecosystems, and even continents.

Wallet containing an EU Diploma and US Permanent Resident Card

Simon Willison

SQLime: SQLite Playground

Anton Zhiyanov built this useful mobile-friendly online playground for trying things out in SQLite. It uses the sql.js library which compiles SQLite to WebAssembly, so it runs everything in the browser - but it also supports saving your work to Gists via the GitHub API. The JavaScript source code is fun to read: the site doesn't use npm or Webpack or similar, opting instead to implement everything library-free using modern JavaScript modules and Web Components.

Via Anton Zhiyanov


Aaron Parecki

How to Green Screen on the YoloBox Pro

This step-by-step guide will show you how to use the chroma key feature on the YoloBox Pro to green screen yourself onto picture backgrounds and videos, or even add external graphics from a computer.

There are a few different ways to use the green screening feature in the YoloBox. You can use it to add a flat virtual background to your video, or you could use it to put yourself over a moving background or other video sources like an overhead or document camera. You could even key yourself over your computer screen showing your slides from a presentation.

You can also switch things around and instead of removing the background from your main camera, instead you can generate graphics on a computer screen with a green background and add those on top of your video.

Setting up your green screen

Before jumping in to the YoloBox, you'll want to make sure your green screen is set up properly. A quick summary of what you'll need to do is:

Light your green screen evenly
Light your subject
Don't wear anything green

Watch Kevin The Basic Filmmaker's excellent green screen tutorial for a complete guide to these steps!

Green screening on top of an image

We'll first look at how to green screen a camera on top of a static image. You can load images in to the YoloBox by putting them on the SD card. I recommend creating your background image at exactly the right size first, 1920x1080.

On the YoloBox, click the little person icon in the top right corner of the camera that you want to remove the green background from.

That will open up the Chroma Key Settings interface.

Turn on the "Keying Switch", and you should see a pretty good key if your green screen is lit well. If you have a blue screen instead of green, you can change that setting here. The "Similarity" and "Smoothness" sliders will affect how the YoloBox does the key. Adjust them until things look right and you don't have too much of your background showing and it isn't eating into your main subject.

Tap on the "Background Image" to choose which image from your SD card to use as the background. Only still graphics are supported.

Click "Done" and this will save your settings into that camera's source.

Now when you tap on that camera on the YoloBox, it will always include the background image in place of the green screen.

Green screening on top of other video sources

Green screening yourself on top of other video sources is similar but a slightly different process.

First, set up your HDMI source as described above, but instead of choosing a background image, leave it transparent.

Then click the "Add Video Source" button to create a new picture-in-picture layout.

Choose "PiP Video" from the options that appear. For the "Main Screen", choose the video angle that you want to use as the full screen background that you'll key yourself on top of. This could be a top down camera or could be your computer screen with slides for a presentation. It will then ask you to choose a "Sub Screen", and that is where you'll choose your camera angle that you've already set up for chroma keying.

This is where you can choose how big you want your picture to be, and you can drag it around with your finger to change the position.

Once you save this, your new PiP layout will appear as another camera angle you can switch to.

Cropping the green screened video

You may notice that if your green background doesn't cover the entire frame, you'll have black borders on the sides of your chroma keyed image. The YoloBox doesn't exactly have a cropping feature to fix this, but you can use the "Aspect Ratio" setting to crop the background.

You can edit your PiP video settings and choose "1:1" in the Aspect Ratio option to crop your video to a square, removing the black borders from the edges.

Adding computer graphics using the chroma key

Lastly, let's look at how to bring in graphics from an external computer source and key them out on the YoloBox.

When you plug in your computer's HDMI to the YoloBox, your computer will see it as an external monitor. Make sure your computer screen isn't mirrored so you can still use your main computer screen separately.

You can generate graphics in any program as long as you can have it use a green background. You can create animated graphics in Keynote for example, but for this tutorial we'll use the app H2R Graphics.

In H2R Graphics, you'll first want to make sure you set the background color to a bright green like #00FF00. Then you can open up the main output window and drag it over to your second screen (the YoloBox).

Choose the little person icon in the top right corner of the HDMI input of your computer screen to bring up the keying settings for it.

The defaults should look fine, but you can also make any adjustments here if you need. Click "Done" to save the settings.

Now you can create a new PiP layout with your main video as the background and your computer screen keyed out as the foreground or "Sub Screen".

For the Main Screen, choose the video angle you want to use as the background.

For the Sub Screen, choose your computer screen which should now have a transparent background.

Now the layout with your H2R Graphics output window is created as the PiP angle you can choose.

My YoloBox stand

If you haven't already seen it, be sure to check out my YoloBox stand I created! It tilts the YoloBox forward so it's easier to use on a desk, and you can also attach things to the cold shoe mounts on the back.

We have a version for both the YoloBox Pro and the original YoloBox, and it comes in red and black!

You can see the full video version of this blog post on my YouTube channel!


Damien Bod

Use FIDO2 passwordless authentication with Azure AD

This article shows how to implement FIDO2 passwordless authentication with Azure AD for users in an Azure tenant. FIDO2 provides one of the best user authentication methods and is a more secure authentication compared with other account authentication implementations such authenticator apps, SMS, email, password alone or SSI authentication. FIDO2 authentication protects against phishing.

To roll out FIDO2 authentication in Azure AD and set up an account, I used the Feitian FIDO2 BioPass K26 and K43 security keys. By using biometric security keys, you get an extra factor which is more secure than a PIN. The biometric data never leaves the key. You should never share your biometric data anywhere or store it on any shared server.

I used the Feitian BioPass FIDO2 Manager to set up my security keys using my fingerprint. This is really easy to use and very user friendly.

Setting up the Azure AD tenant

The FIDO2 security key authentication method should be activated on the Azure AD tenant. Disable all other authentication methods unless required. Also disable SMS authentication; it should not be used anymore. All users should be required to use MFA.

Now that the Azure AD tenant can use FIDO2 authentication, an account can be set up for this. If you implement this for a company’s tenant, you would roll this out using scripts and automate the process. You can sign in to your account at Microsoft's myaccount website and use the security info menu to configure the Feitian FIDO2 keys.

https://myaccount.microsoft.com/

I added my two security keys using USB. I use at least two security keys for each account. You should use a FIDO2 key as a fallback for the first key. Do not use SMS fallback or some type of email or password recovery. Worst case, your IT admin can reset the account for you and issue you new FIDO2 keys.

https://mysignins.microsoft.com/security-info

Using FIDO2 keys with Azure AD is really easy to set up and works great. I use FIDO2 everywhere that supports it and avoid other authentication methods. Some of the account popups for Azure AD are annoying when trying to authenticate using password, SMS or email; I would prefer FIDO2 first for a better user experience. The Feitian BioPass FIDO2 security keys are excellent and I would recommend them.

Links:

https://www.microsoft.com/en-us/p/biopass-fido2-manager/9p2zjpwk3pxw

https://myaccount.microsoft.com/

https://mysignins.microsoft.com/security-info

https://docs.microsoft.com/en-us/azure/active-directory/authentication/concept-authentication-methods

https://www.ftsafe.com/article/619.html

Configure a FEITIAN FIDO2 BioPass security key

https://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-authentication-passwordless-security-key

https://www.w3.org/TR/webauthn/

The authentication pyramid

Sunday, 16. January 2022

Simon Willison

Abusing AWS Lambda to make an Aussie Search Engine

Ben Boyter built a search engine that only indexes .au Australian websites, with the novel approach of directly compiling the search index into 250 different ~40MB large lambda functions written in Go, then running searches across 12 million pages by farming them out to all of the lambdas and combining the results. His write-up includes all sorts of details about how he built this, including how he ran the indexer and how he solved the surprisingly hard problem of returning good-enough text snippets for the results.

Via David Humphrey


Doc Searls Weblog

TheirCharts

If you’re getting health care in the U.S., chances are your providers are now trying to give you a better patient experience through a website called MyChart.

This is supposed to be yours, as the first person singular pronoun My implies. Problem is, it’s TheirChart. And there are a lot of them. I have four (correction: five*) MyChart accounts with as many health care providers, so far: one in New York, two in Santa Barbara, one in Mountain View, and one in Los Angeles. I may soon have another in Bloomington, Indiana. None are mine. All are theirs, and they seem not to get along. Especially with me. (Some later correction on this below, and from readers who have weighed in. See the comments.)

Not surprisingly, all of them come from a single source: Epic Systems, the primary provider of back-end information tech to the country’s health care providers, including most of the big ones: Harvard, Yale, Mayo, UCLA, UChicago, Duke, Johns Hopkins, multiple Mount Sinais, and others like them. But, even though all these MyChart portals are provided by one company, and (I suppose) live in one cloud, there appears to be no way for you, the patient, to make those things work together inside an allied system that is truly yours (like your PC or your car is yours), or for you to provide them with data you already have from other sources. Which you could presumably do if My meant what it says.

The way they work can get perverse. For example, a couple days ago, one of my doctors’ offices called to tell me we would need to have a remote consult before she changed one of my prescriptions. This, I was told, could not be done over the phone. It would need to be done over video inside MyChart. So now we have an appointment for that meeting on Monday afternoon, using MyChart.

I decided to get ahead of that by finding my way into the right MyChart and leaving a session open in a browser tab. Then I made the mistake of starting to type “MyChart” into my browser’s location bar, and then not noticing that the top result was one of the countless other MyCharts maintained by countless other health care providers. But this other one looked so much like one of mine that I wasted an hour or more, failing to log in and then failing to recover my login credentials. It wasn’t until I called the customer service number thankfully listed on the website that I found I was trying to use the MyChart of some provider I’d never heard of—and which had never heard of me.

Now I’m looking at one of my two MyCharts for Santa Barbara, where it shows no upcoming visits. I can’t log into the other one to see if the Monday appointment is noted there, because that MyChart doesn’t know who I am. So I’m hoping to unfuck that one on Monday before the call on whichever MyChart I’ll need to use. Worst case, I’ll just tell the doctor’s office that we’ll have to make do with a phone call. If they answer the phone, that is.

The real problem here is that there seem to be hundreds or thousands of different health care providers, all using one company’s back end to provide personal health care information to millions of patients through hundreds or thousands of different portals, all called the same thing (or something close), while providing no obvious way for patients to gather their own data from multiple sources to use for their own independent purposes, both in and out of that system. Or any system.

To call this fubar understates the problem.

Here’s what matters: Epic can’t solve this. Nor can any or all of these separate health care systems. Because none of them are you.

You’re where the solution needs to happen. You need a simple and standardized way to collect and manage your own health-related information and engagements with multiple health care providers. One that’s yours.

This doesn’t mean you need to be alone in the wilderness. You do need expert help. In the old days, you used to get that through your primary care physician. But large health care operations have been hoovering up private practices for years, and one of the big reasons for that has been to make the data management side of medicine easier for physicians and their many associated providers. Not to make it easier for you. After all, you’re not their customer. Insurance companies are their customers.

In the midst of this is a market hole where your representation in the health care marketplace needs to sit. I know just one example of how that might work: the HIE of One. (HIE is Health Information Exchange.) For all our sakes, somebody please fund that work.

Far too much time, sweat, money, and blood is being spilled trying to solve this problem from the center outward. (For a few details on how awful that is, start reading here.)

While we’re probably never going to make health care in the U.S. something other than the B2B insurance business it has become, we can at least start working on a Me2B solution in the place it most needs to work: with patients. Because we’re the ones who need to be in full command of our relationships with our providers as well as with ourselves.

Health care, by the way, is just one category that cries out for solutions that can only come from the customers’ side. Customer Commons has a list of fourteen, including this one.

*Okay, now it’s Monday, and I’m a half-hour away from my consult with my doctor, via Zoom, inside MyChart. Turns out I was not yet registered with this MyChart, but at least there was a phone number I could call, and on the call (which my phone says took 14 minutes) we got my ass registered. He also pointed me to where, waaay down a very long menu, there is a “Link my accounts” choice, which brings up this:

Credit where due:

It was very easy to link my four known accounts, plus another (the one in Mountain View) that I had forgotten but somehow the MyChart master brain remembered. I suspect, given all the medical institutions I have encountered in my long life, that there are many more. Because in fact I had been to the Mountain View hospital only once, and I don’t even remember why, though I suppose I could check.

So that’s the good news. The bad news remains the same. None of these charts are mine. They are just views into many systems that are conditionally open to me. That they are now federated (that’s what this kind of linking-up is called) on Epic’s back end does not make it mine. It just makes it a many-theirs.

So the system still needs to be fixed. From our end.

Saturday, 15. January 2022

Doc Searls Weblog

Bothering with Brother


That’s the UI for  the Brother HL-L2305w laser printer, which you can get for $140 right now at OfficeMax (or Office Depot, same thing). It’s a good deal. It also took me a whole day to set up.

See, it comes with instructions that say to use the UI above to make CONNECTING WLAN happen. It doesn’t. Instead it sits for awhile, says TIMED OUT, and then prints out a page that says “The WLAN access point/router cannot be detected,” and gives instructions to locate the printer as close as possible to your wi-fi (that’s the WLAN) access point, to make sure you’re not using MAC address filtering or other secure things that might prevent connection.

After parking an access point (we have four in our house, all connected by Ethernet through a switch to the cable modem) right on top of the printer, I gave up, assumed it was bad, took it back and swapped it for another that had the same problem, meaning I was dealing with a feature.

Then, after failing to find help in the Brother Product Support Center, I registered the printer and logged in as a now-known customer. In that state I was able to chat with an entity (human, it seemed, but ya never know) who pointed me to a page with useful instructions, plus a video that’s also on YouTube, where I should have looked in the first place. Lesson re-learned.

So, if you get one, go straight to that YouTube link and save a lot of trouble.


Simon Willison

Writing a minimal Lua implementation with a virtual machine from scratch in Rust

Phil Eaton implements a subset of Lua in Rust in this detailed tutorial.

Friday, 14. January 2022

Simon Willison

Datasette 0.60: The annotated release notes

I released Datasette 0.60 today. It's a big release, incorporating 61 commits and 18 issues. Here are the annotated release notes.

filters_from_request plugin hook
New plugin hook: filters_from_request(request, database, table, datasette), which runs on the table page and can be used to support new custom query string parameters that modify the SQL query. (#473)

The inspiration for this hook was my ongoing quest to simplify and refactor Datasette's TableView, the most complex page in the project which provides an interface for filtering and paginating through a table of data.

The main job of that page is to convert a query string - with things like ?country_long=China and &capacity_mw__gt=200 in it - into a SQL query.

So I extracted part of that logic out into a new plugin hook. I've already started using it in datasette-leaflet-freedraw to help support filtering a table by drawing on a map, demo here.

I also used the new hook to refactor Datasette itself. The filters.py module now registers where_filters(), search_filters() and through_filters() implementations against that hook, to support various core pieces of Datasette functionality.
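As a rough sketch of what a plugin implementation can look like, based on the hook signature above (the query string parameter and column here are invented for illustration, and the FilterArguments return value is the one described in the Datasette documentation):

from datasette import hookimpl
from datasette.filters import FilterArguments


@hookimpl
def filters_from_request(request, database, table, datasette):
    # Hypothetical ?_hide_deleted=1 parameter that adds an extra WHERE clause
    if request.args.get("_hide_deleted"):
        return FilterArguments(
            ["deleted_at is null"],
            human_descriptions=["deleted rows are hidden"],
        )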

Tracing, write API improvements and performance
The tracing feature now traces write queries, not just read queries. (#1568)
Added two additional methods for writing to the database: await db.execute_write_script(sql, block=True) and await db.execute_write_many(sql, params_seq, block=True). (#1570)
Made several performance improvements to the database schema introspection code that runs when Datasette first starts up. (#1555)

I built a new plugin called datasette-pretty-traces to help with my refactoring. It takes Datasette's existing ?_trace=1 feature, which dumps out a big blob of JSON at the bottom of the page, and turns it into something that's a bit easier to understand.

The plugin quickly started highlighting all sorts of interesting potential improvements!

After I added tracing to write queries it became apparent that Datasette's schema introspection code - which runs once when the server starts, and then re-runs any time it notices a change to a database schema - was painfully inefficient.

It writes information about the schema into an in-memory database, which I hope to use in the future to power features like search of all attached tables.

I ended up adding two new documented internal methods for speeding up those writes: db.execute_write_script() and db.execute_write_many(). These are now available for plugins to use as well.
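A quick sketch of how code with access to a Database object (for example via datasette.get_database()) might use the two new methods; the table and rows here are invented:

# Sketch: given a datasette.database.Database instance, write a schema and bulk-insert rows
async def populate_events(db):
    await db.execute_write_script(
        """
        create table if not exists events (id integer primary key, name text);
        create index if not exists idx_events_name on events(name);
        """
    )
    await db.execute_write_many(
        "insert into events (name) values (?)",
        [["checkout"], ["signup"], ["login"]],  # one parameter sequence per row
    )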

The db.execute_write() internal method now defaults to blocking until the write operation has completed. Previously it defaulted to queuing the write and then continuing to run code while the write was in the queue. (#1579)

Spending time with code that wrote to the database highlighted a design flaw in Datasette's original write method. I realized that every line of code I had written that used it looked like this:

db.execute_write("insert into ...", block=True)

The block=True parameter means "block until the write has completed". Without it, the write goes into a queue and code continues executing whether or not the write has been made.

This was clearly the wrong default. I used GitHub code search to check if changing it would be disruptive - it would not - and made the change. I'm glad I caught this before Datasette 1.0!

Database write connections now execute the prepare_connection(conn, database, datasette) plugin hook. (#1564)

I noticed that writes to a database with SpatiaLite were failing with an error, because the SpatiaLite module was not being correctly loaded. This fixes that.
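For context, prepare_connection implementations tend to be tiny. A hedged sketch of a plugin that loads SpatiaLite into every connection might look like this (the extension name is resolved by SQLite and its exact path varies by platform; Datasette itself normally loads SpatiaLite via --load-extension):

from datasette import hookimpl


@hookimpl
def prepare_connection(conn):
    # Runs for read and (as of this release) write connections alike
    conn.enable_load_extension(True)
    conn.load_extension("mod_spatialite")
    conn.enable_load_extension(False)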

Faceting

A bunch of different fixes for Datasette's Faceting made it into this release:

The number of unique values in a facet is now always displayed. Previously it was only displayed if the user specified ?_facet_size=max. (#1556)
Facets of type date or array can now be configured in metadata.json, see Facets in metadata.json. Thanks, David Larlet. (#1552)
New ?_nosuggest=1 parameter for table views, which disables facet suggestion. (#1557)
Fixed bug where ?_facet_array=tags&_facet=tags would only display one of the two selected facets. (#625)

Other, smaller changes
The Datasette() constructor no longer requires the files= argument, and is now documented at Datasette class. (#1563)

A tiny usability improvement, mainly for tests. It means you can write a test that looks like this:

import pytest
from datasette.app import Datasette


@pytest.mark.asyncio
async def test_datasette_homepage():
    ds = Datasette()
    response = await ds.client.get("/")
    assert "<title>Datasette" in response.text

Previously the files= argument was required, so you would have to use Datasette(files=[]).

The query string variables exposed by request.args will now include blank strings for arguments such as foo in ?foo=&bar=1 rather than ignoring those parameters entirely. (#1551)

This came out of the refactor - this commit tells the story.

Upgraded Pluggy dependency to 1.0. (#1575)

I needed this because Pluggy 1.0 allows multiple implementations of the same hook to be defined within the same file, like this:

@hookimpl(specname="filters_from_request")
def where_filters(request, database, datasette):
    # ...


@hookimpl(specname="filters_from_request")
def search_filters(request, database, table, datasette):
    # ...

Now using Plausible analytics for the Datasette documentation.

I really like Plausible as an analytics product. It does a great job of respecting user privacy while still producing useful numbers. It's cookie-free, which means it doesn't trigger a need for GDPR banners in Europe. I'm increasingly using it on all of my projects.

New CLI reference page showing the output of --help for each of the datasette sub-commands. This led to several small improvements to the help copy. (#1594)

I first built this for sqlite-utils and liked it so much I brought it to Datasette as well. It's generated by cog, using this inline script in the reStructuredText.

And the rest
Label columns detected for foreign keys are now case-insensitive, so Name or TITLE will be detected in the same way as name or title. (#1544)
explain query plan is now allowed with varying amounts of whitespace in the query. (#1588)
Fixed bug where writable canned queries could not be used with custom templates. (#1547)
Improved fix for a bug where columns with an underscore prefix could result in unnecessary hidden form fields. (#1527)

Thursday, 13. January 2022

Simon Willison

Announcing Parcel CSS: A new CSS parser, compiler, and minifier written in Rust!

An interesting thing about tools like this being written in Rust is that since the Rust-to-WASM pipeline is well trodden at this point, the live demo that this announcement links to runs entirely in the browser.


Mike Jones: self-issued

Described more of the motivations for the JWK Thumbprint URI specification

As requested by the chairs during today’s OAuth Virtual Office Hours call, Kristina Yasuda and I have updated the JWK Thumbprint URI specification to enhance the description of the motivations for the specification. In particular, it now describes using JWK Thumbprint URIs as key identifiers that can be syntactically distinguished from other kinds of identifiers also expressed as URIs. It is used this way in the Self-Issued OpenID Provider v2 specification, for instance. No normative changes were made.

As discussed on the call, we are requesting that the chairs use this new draft as the basis for a call for working group adoption.

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-oauth-jwk-thumbprint-uri-01.html
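For the curious, the underlying JWK Thumbprint (RFC 7638) is just a SHA-256 hash of a canonical JSON serialization of the key's required members, which the draft then wraps in a URN. A rough Python sketch, with an example EC key and a URI prefix that should be double-checked against the current draft text:

import base64
import hashlib
import json

# Example EC public key (JWK); the coordinate values are illustrative only
jwk = {
    "kty": "EC",
    "crv": "P-256",
    "x": "f83OJ3D2xF1Bg8vub9tLe1gHMzV76e8Tus9uPHvRVEU",
    "y": "x_FEzRu9m36HLN_tue659LNpXW6pCyStikYjKIWI5a0",
}

# RFC 7638: keep only the required members, serialize with sorted keys and no whitespace
required = {name: jwk[name] for name in ("crv", "kty", "x", "y")}
canonical = json.dumps(required, separators=(",", ":"), sort_keys=True).encode("utf-8")
thumbprint = base64.urlsafe_b64encode(hashlib.sha256(canonical).digest()).rstrip(b"=").decode("ascii")

# Assumed URN form - verify the exact prefix against the specification
jwk_thumbprint_uri = "urn:ietf:params:oauth:jwk-thumbprint:sha-256:" + thumbprint
print(jwk_thumbprint_uri)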

Wednesday, 12. January 2022

ian glazer's tuesdaynight

Memories of Kim Cameron

Reification. I learned that word from Kim. In the immediate next breath he said from the stage that he was told not everyone knew what reify meant and that he would use a more approachable word: “thingify.” And therein I learned another lesson from Kim about how to present to an audience.

My memories of Kim come in three phases: Kim as Legend, Kim as Colleague, and Kim as Human, and with each phase came new things to learn.

My first memories of Kim were of Kim as Legend. I think the very first was from IIW 1 (or maybe 2 – the one in Berkeley) at which he presented InfoCard. He owned the stage; he owned the subject matter. He continued to own the stage and the subject matter for years…sometimes the subject matter was more concrete, like InfoCard, and sometimes it was more abstract, like the metaverse. But regardless, it was enthralling.

At some point something changed… Kim was no longer an unapproachable Legend. He was someone with whom I could talk, disagree, and more directly question. In this phase of Kim as Colleague, I was lucky enough to have the opportunity to ask him private follow-up questions to his presentation. Leaving aside my “OMG he’s talking to me” feelings, I was blown away by his willingness to go into depth of his thought process with someone who didn’t work with him. He was more than willing to be challenged and to discuss the thorny problems in our world.

Somewhere in the midst of the Kim as Colleague phase something changed yet again and it is in this third phase, Kim as Human, where I have my most precious memories of him. Through meeting some of his family, being welcomed into his home, and sharing meals, I got to know Kim as the warm, curious, eager-to-laugh person that he was. There was seemingly always a glint in his eye indicating his willingness to cause a little trouble. 

The last in-person memory I have of him was just before the pandemic lockdowns in 2020. I happened to be lucky enough to be invited to an OpenID Foundation event at which Kim was speaking. He talked about his vision for the future and identity’s role therein. At the end of his presentation, I and others helped him down the steep stairs off of the stage. I held onto one of his hands as we helped him down. His hand was warm.


Identity Woman

Why we need DIDComm

This is the text of an email I got today from a company that i had a contract with last year. It is really really really annoying the whole process of sending secure communications and documents. Once I finished reading it – I was reminded quite strongly why we need DIDComm as a protocol to […]

The post Why we need DIDComm appeared first on Identity Woman.


Simon Willison

How I build a feature

I'm maintaining a lot of different projects at the moment. I thought it would be useful to describe the process I use for adding a new feature to one of them, using the new sqlite-utils create-database command as an example.

I like each feature to be represented by what I consider to be the perfect commit - one that bundles together the implementation, the tests, the documentation and a link to an external issue thread.

The sqlite-utils create-database command is very simple: it creates a new, empty SQLite database file. You use it like this:

% sqlite-utils create-database empty.db

Everything starts with an issue

Every piece of work I do has an associated issue. This acts as ongoing work-in-progress notes and lets me record decisions, reference any research, drop in code snippets and sometimes even add screenshots and video - stuff that is really helpful but doesn't necessarily fit in code comments or commit messages.

Even if it's a tiny improvement that's only a few lines of code, I'll still open an issue for it - sometimes just a few minutes before closing it again as complete.

Any commits that I create that relate to an issue reference the issue number in their commit message. GitHub does a great job of automatically linking these together, bidirectionally so I can navigate from the commit to the issue or from the issue to the commit.

Having an issue also gives me something I can link to from my release notes.

In the case of the create-database command, I opened this issue in November when I had the idea for the feature.

I didn't do the work until over a month later - but because I had designed the feature in the issue comments I could get started on the implementation really quickly.

Development environment

Being able to quickly spin up a development environment for a project is crucial. All of my projects have a section in the README or the documentation describing how to do this - here's that section for sqlite-utils.

On my own laptop each project gets a directory, and I use pipenv shell in that directory to activate a directory-specific virtual environment, then pip install -e '.[test]' to install the dependencies and test dependencies.

Automated tests

All of my features are accompanied by automated tests. This gives me the confidence to boldly make changes to the software in the future without fear of breaking any existing features.

This means that writing tests needs to be as quick and easy as possible - the less friction here the better.

The best way to make writing tests easy is to have a great testing framework in place from the very beginning of the project. My cookiecutter templates (python-lib, datasette-plugin and click-app) all configure pytest and add a tests/ folder with a single passing test, to give me something to start adding tests to.

I can't say enough good things about pytest. Before I adopted it, writing tests was a chore. Now it's an activity I genuinely look forward to!

I'm not a religious adherent to writing the tests first - see How to cheat at unit tests with pytest and Black for more thoughts on that - but I'll write the test first if it's pragmatic to do so.

In the case of create-database, writing the test first felt like the right thing to do. Here's the test I started with:

def test_create_database(tmpdir):
    db_path = tmpdir / "test.db"
    assert not db_path.exists()
    result = CliRunner().invoke(
        cli.cli, ["create-database", str(db_path)]
    )
    assert result.exit_code == 0
    assert db_path.exists()

This test uses the tmpdir pytest fixture to provide a temporary directory that will be automatically cleaned up by pytest after the test run finishes.

It checks that the test.db file doesn't exist yet, then uses the Click framework's CliRunner utility to execute the create-database command. Then it checks that the command didn't throw an error and that the file has been created.

Then I run the test, and watch it fail - because I haven't built the feature yet!

% pytest -k test_create_database
============ test session starts ============
platform darwin -- Python 3.8.2, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /Users/simon/Dropbox/Development/sqlite-utils
plugins: cov-2.12.1, hypothesis-6.14.5
collected 808 items / 807 deselected / 1 selected

tests/test_cli.py F                          [100%]

================= FAILURES ==================
___________ test_create_database ____________

tmpdir = local('/private/var/folders/wr/hn3206rs1yzgq3r49bz8nvnh0000gn/T/pytest-of-simon/pytest-659/test_create_database0')

    def test_create_database(tmpdir):
        db_path = tmpdir / "test.db"
        assert not db_path.exists()
        result = CliRunner().invoke(
            cli.cli, ["create-database", str(db_path)]
        )
>       assert result.exit_code == 0
E       assert 1 == 0
E        +  where 1 = <Result SystemExit(1)>.exit_code

tests/test_cli.py:2097: AssertionError
========== short test summary info ==========
FAILED tests/test_cli.py::test_create_database - assert 1 == 0
===== 1 failed, 807 deselected in 0.99s ====

The -k option lets me run any test that match the search string, rather than running the full test suite. I use this all the time.

Other pytest features I often use:

pytest -x: runs the entire test suite but quits at the first test that fails
pytest --lf: re-runs any tests that failed during the last test run
pytest --pdb -x: open the Python debugger at the first failed test (omit the -x to open it at every failed test). This is the main way I interact with the Python debugger. I often use this to help write the tests, since I can add assert False and get a shell inside the test to interact with various objects and figure out how to best run assertions against them.

Implementing the feature

Test in place, it's time to implement the command. I added this code to my existing cli.py module:

@cli.command(name="create-database")
@click.argument(
    "path",
    type=click.Path(file_okay=True, dir_okay=False, allow_dash=False),
    required=True,
)
def create_database(path):
    "Create a new empty database file."
    db = sqlite_utils.Database(path)
    db.vacuum()

(I happen to know that the quickest way to create an empty SQLite database file is to run VACUUM against it.)

The test now passes!

I iterated on this implementation a little bit more, to add the --enable-wal option I had designed in the issue comments - and updated the test to match. You can see the final implementation in this commit: 1d64cd2e5b402ff957f9be2d9bb490d313c73989.

If I add a new test and it passes the first time, I’m always suspicious of it. I’ll deliberately break the test (change a 1 to a 2 for example) and run it again to make sure it fails, then change it back again.

Code formatting with Black

Black has increased my productivity as a Python developer by a material amount. I used to spend a whole bunch of brain cycles agonizing over how to indent my code, where to break up long function calls and suchlike. Thanks to Black I never think about this at all - I instinctively run black . in the root of my project and accept whatever style decisions it applies for me.

Linting

I have a few linters set up to run on every commit. I can run these locally too - how to do that is documented here - but I'm often a bit lazy and leave them to run in CI.

In this case one of my linters failed! I accidentally called the new command function create_table() when it should have been called create_database(). The code worked fine due to how the cli.command(name=...) decorator works but mypy complained about the redefined function name. I fixed that in a separate commit.

Documentation

My policy these days is that if a feature isn't documented it doesn't exist. Updating existing documentation isn't much work at all if the documentation already exists, and over time these incremental improvements add up to something really comprehensive.

For smaller projects I use a single README.md which gets displayed on both GitHub and PyPI (and the Datasette website too, for example on datasette.io/tools/git-history).

My larger projects, such as Datasette and sqlite-utils, use Read the Docs and reStructuredText with Sphinx instead.

I like reStructuredText mainly because it has really good support for internal reference links - something that is missing from Markdown, though it can be enabled using MyST.

sqlite-utils uses Sphinx. I have the sphinx-autobuild extension configured, which means I can run a live reloading server with the documentation like so:

cd docs
make livehtml

Any time I'm working on the documentation I have that server running, so I can hit "save" in VS Code and see a preview in my browser a few seconds later.

For Markdown documentation I use the VS Code preview pane directly.

The moment the documentation is live online, I like to add a link to it in a comment on the issue thread.

Committing the change

I run git diff a LOT while hacking on code, to make sure I haven’t accidentally changed something unrelated. This also helps spot things like rogue print() debug statements I may have added.

Before my final commit, I sometimes even run git diff | grep print to check for those.

My goal with the commit is to bundle the test, documentation and implementation. If those are the only files I've changed I do this:

git commit -a -m "sqlite-utils create-database command, closes #348"

If this completes the work on the issue I use "closes #N", which causes GitHub to close the issue for me. If it's not yet ready to close I use "refs #N" instead.

Sometimes there will be unrelated changes in my working directory. If so, I use git add <files> and then commit just with git commit -m message.

Branches and pull requests

create-database is a good example of a feature that can be implemented in a single commit, with no need to work in a branch.

For larger features, I'll work in a feature branch:

git checkout -b my-feature

I'll make a commit (often just labelled "WIP prototype, refs #N") and then push that to GitHub and open a pull request for it:

git push -u origin my-feature

I ensure the new pull request links back to the issue in its description, then switch my ongoing commentary to comments on the pull request itself.

I'll sometimes add a task checklist to the opening comment on the pull request, since tasks there get reflected in the GitHub UI anywhere that links to the PR. Then I'll check those off as I complete them.

An example of a PR I used like this is #361: --lines and --text and --convert and --import.

I don't like merge commits - I much prefer to keep my main branch history as linear as possible. I usually merge my PRs through the GitHub web interface using the squash feature, which results in a single, clean commit to main with the combined tests, documentation and implementation. Occasionally I will see value in keeping the individual commits, in which case I will rebase merge them.

Another goal here is to keep the main branch releasable at all times. Incomplete work should stay in a branch. This makes turning around and releasing quick bug fixes a lot less stressful!

Release notes, and a release

A feature isn't truly finished until it's been released to PyPI.

All of my projects are configured the same way: they use GitHub releases to trigger a GitHub Actions workflow which publishes the new release to PyPI. The sqlite-utils workflow for that is here in publish.yml.

My cookiecutter templates for new projects set up this workflow for me. I just need to create a PyPI token for the project and assign it as a repository secret. See the python-lib cookiecutter README for details.

To push out a new release, I need to increment the version number in setup.py and write the release notes.

I use semantic versioning - a new feature is a minor version bump, a breaking change is a major version bump (I try very hard to avoid these) and a bug fix or documentation-only update is a patch increment.

Since create-database was a new feature, it went out in release 3.21.

My projects that use Sphinx for documentation have changelog.rst files in their repositories. I add the release notes there, linking to the relevant issues and cross-referencing the new documentation. Then I ship a commit that bundles the release notes with the bumped version number, with a commit message that looks like this:

git commit -m "Release 3.21 Refs #348, #364, #366, #368, #371, #372, #374, #375, #376, #379"

Here's the commit for release 3.21.

Referencing the issue numbers in the release automatically adds a note to their issue threads indicating the release that they went out in.

I generate that list of issue numbers by pasting the release notes into an Observable notebook I built for the purpose: Extract issue numbers from pasted text. Observable is really great for building this kind of tiny interactive utility.
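The same extraction can be approximated in a few lines of Python, for anyone who prefers not to use Observable (a rough sketch):

import re

release_notes = """
New plugin hook: filters_from_request (#473). Also see #1568, #1570 and #1555.
"""
issue_numbers = sorted({int(n) for n in re.findall(r"#(\d+)", release_notes)})
print(", ".join(f"#{n}" for n in issue_numbers))
# Prints: #473, #1555, #1568, #1570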

For projects that just have a README I write the release notes in Markdown and paste them directly into the GitHub "new release" form.

I like to duplicate the release notes to GitHub releases for my Sphinx changelog projects too. This is mainly so the datasette.io website will display the release notes on its homepage, which is populated at build time using the GitHub GraphQL API.

To convert my reStructuredText to Markdown I copy and paste the rendered HTML into this brilliant Paste to Markdown tool by Euan Goddard.

A live demo

When possible, I like to have a live demo that I can link to.

This is easiest for features in Datasette core. Datasette’s main branch gets deployed automatically to latest.datasette.io so I can often link to a demo there.

For Datasette plugins, I’ll deploy a fresh instance with the plugin (e.g. this one for datasette-graphql) or (more commonly) add it to my big latest-with-plugins.datasette.io instance - which tries to demonstrate what happens to Datasette if you install dozens of plugins at once (so far it works OK).

Here’s a demo of the datasette-copyable plugin running there: https://latest-with-plugins.datasette.io/github/commits.copyable

Tell the world about it

The last step is to tell the world (beyond the people who meticulously read the release notes) about the new feature.

Depending on the size of the feature, I might do this with a tweet like this one - usually with a screenshot and a link to the documentation. I often extend this into a short Twitter thread, which gives me a chance to link to related concepts and demos or add more screenshots.

For larger or more interesting features I'll blog about them. I may save this for my weekly weeknotes, but sometimes for particularly exciting features I'll write up a dedicated blog entry. Some examples include:

Executing advanced ALTER TABLE operations in SQLite
Fun with binary data and SQLite
Refactoring databases with sqlite-utils extract
Joining CSV and JSON data with an in-memory SQLite database
Apply conversion functions to data in SQLite columns with the sqlite-utils CLI tool

I may even assemble a full set of annotated release notes on my blog, where I quote each item from the release in turn and provide some fleshed out examples plus background information on why I built it.

If it’s a new Datasette (or Datasette-adjacent) feature, I’ll try to remember to write about it in the next edition of the Datasette Newsletter.

Finally, if I learned a new trick while building a feature I might extract that into a TIL. If I do that I'll link to the new TIL from the issue thread.

More examples of this pattern

Here are a bunch of examples of commits that implement this pattern, combining the tests, implementation and documentation into a single unit:

sqlite-utils: adding --limit and --offset to sqlite-utils rows
sqlite-utils: --where and -p options for sqlite-utils convert
s3-credentials: s3-credentials policy command
datasette: db.execute_write_script() and db.execute_write_many()
datasette: ?_nosuggest=1 parameter for table views
datasette-graphql: GraphQL execution limits: time_limit_ms and num_queries_limit

Phil Windley's Technometria

Web3 and Digital Embodiment

Summary: Web3 will make a difference for all of us if it enables people to become digitally embodied, able to recognize, remember, and react to other people and organizations online—without the need to be in someone else's database. Tim O'Reilly recently published Why it’s too early to get excited about Web3, an excellent discussion on industrial transformation and the role that bubble

Summary: Web3 will make a difference for all of us if it enables people to become digitally embodied, able to recognize, remember, and react to other people and organizations online—without the need to be in someone else's database.

Tim O'Reilly recently published Why it’s too early to get excited about Web3, an excellent discussion on industrial transformation and the role that bubbles play. He used this historical lens to look at Web3 and where we might be with all of it.

One of Tim's points is that we fluctuate between decentralized and centralized models, using Clayton Christensen's law of conservation of attractive profits to show why this happens. He says:

I love the idealism of the Web3 vision, but we’ve been there before. During my career, we have gone through several cycles of decentralization and recentralization. The personal computer decentralized computing by providing a commodity PC architecture that anyone could build and that no one controlled. But Microsoft figured out how to recentralize the industry around a proprietary operating system. Open source software, the internet, and the World Wide Web broke the stranglehold of proprietary software with free software and open protocols, but within a few decades, Google, Amazon, and others had built huge new monopolies founded on big data.

Tim's broader point is that while there's a lot of promise, the applications that deliver a decentralized experience for most people just aren't there yet. Enthusiasts like to focus on decentralized finance or "DeFi" but I (and I think Tim) don't think that's enough. While centralized payments are a big part of the problem, I don't think it's the most fundamental. The most fundamental problem is that people have no place to stand in the modern web. They are not digitally embodied.

In Why Web3?, Fred Wilson, who has as deep an understanding of how the underlying technology works as anyone, explains it like this:

It all comes down to the database that sits behind an application. If that database is controlled by a single entity (think company, think big tech), then enormous market power accrues to the owner/administrator of that database.

This is why I think identity is the most fundamental building block for Web3 and one that's not being talked about enough yet. Identity is the ability to recognize, remember, and react to people, organizations, systems, and things. In the current web, companies employ many ponderous technological systems to perform those functions. I don't just mean their authentication and authorization systems, the things we normally associate with identity, but everything they use to create a relationship with their customers, partners, and employees—think of the CRM system, as just one example.

In these systems, we are like ghosts in the machines. We have "accounts" in companies' systems, but no good way to recognize, remember, and react to them or anyone else. Self-sovereign identity (SSI) gives people the software and systems to do that. Once we can recognize, remember, and react to others online (with software we control, not just through the fragmented interfaces of our mobile devices) we become digitally embodied, able to take action on our own.

With SSI, Fred's application databases are decentralized. That's not to say that companies won't continue to have systems for keeping track of who they interact with. But we'll finally have systems to keep track of them as well. More importantly, we get ways to interact with each other without having to be in their systems. That's the most important thing of all.

I've no doubt that there will be adjacent areas with attractive profits that lead to other forms of centralization, as Tim suggests. But, SSI, DeFi, and other Web3 technologies change the structure of online interaction in ways that will be difficult to undo. SSI, more specifically DIDComm, creates a secure, identity-enabled, privacy-respecting messaging overlay on top of the internet. This changes the game in important ways that level the playing field and creates a new layer where applications can be built that naturally respect human dignity and autonomy.

That doesn't mean that non-interoperable and non-substitutable applications won't emerge. After all, Google, Facebook, Apple, Amazon, and others became dominant through network effects and there's nothing about Web3 that reduces those. The only thing that does is continued work on standards and interoperability along with our insistence on digital rights for people. We're all a part of that effort. As I tell my students: Build the world you want to live in.

Photo Credit: Kuniyoshi Utagawa, The ghost of Taira Tomomori, Daimotsu bay from Kuniyoshi Utagawa (CC0)

Tags: identity ssi web3 interoperability didcomm

Tuesday, 11. January 2022

Vittorio Bertocci - CloudIdentity

Remembering Kim Cameron

Kim might no longer update his blog, nudge identity products toward his vision or give inspiring, generous talks to audiences large and small, but his influence looms large in the identity industry – an industry Kim changed forever. A lot has been written about Kim’s legacy to the industry already, by people who...

Kim might no longer update his blog, nudge identity products toward his vision or give inspiring, generous talks to audiences large and small, but his influence looms large in the identity industry – an industry Kim changed forever. A lot has been written about Kim’s legacy to the industry already, by people who write far better than yours truly, hence I won’t attempt that here.

I owe a huge debt of gratitude to Kim: I don’t know where I’d be or what I’d be doing if it hadn’t been for his ideas and direct sponsorship. That’s something I have firsthand experience of, so I can honor his memory by writing about that.

Back in 2005, still in Italy, I was one of the few Microsoft employees with hands-on, customer deployment experience in WS-STAR, the suite of protocols behind the SOA revolution. That earned me a job offer in Redmond, to evangelize the .NET stack (WCF, workflow, CardSpace) to Fortune 500 companies. That CardSpace thing was puzzling. There was nothing like it, it was ultra hard to develop for, and few people appeared to understand what it was for. One day I had face time with Kim. He introduced me to his Laws of Identity, and that changed everything. Suddenly the technology I was working on had a higher purpose, something directly connected to the rights and wellbeing of everyone, and a mission: making user centric identity viable and adopted. I gave myself to the mission with abandon, and Kim helped every step of the way:

He invested time in developing me professionally, sharing his master negotiator and genuinely compassionate view of people to counter my abrasive personality back then.
He looped me in on important conversations, inside and outside the company: conversations way above my pay grade or actual experience at that point.
He introduced me to all sorts of key people, and helped me understand what was going on. Perhaps the most salient example is the initiative he led to bring together the different identity products Microsoft had in the late 2000s (culminating in a joint presentation we delivered at PDC2008). The company back then was a very different place, and his steely determination coupled with incredible consensus building skills forever changed my perception of what’s possible and how to influence complex, sometimes adversarial organizations.
He really taught me to believe in myself and in a mission. It’s thanks to his encouragement that I approached Joan Murray (then acquisition editor at Addison Wesley) on the expo floor of some event, pitching her a book that the world absolutely needed about CardSpace and user centric identity, and once it was accepted, finding the energy to learn everything (putting together a ToC, recruiting coauthors, writing in English…) as an evenings and weekends project. Kim generously wrote the foreword for us, and relentlessly promoted the book.
His sponsorship continued even after the CardSpace project, promoting my other books and activities (like those U-Prove videos now lost in time).

Those are just the ones top of mind. I am sure that if I dug through his blog or mine, I’d find countless more. It’s been a huge privilege to work so closely with Kim, and especially to benefit from his mentorship and friendship. I never, ever took that privilege for granted. Although Kim always seemed to operate under the assumption that everyone had something of value to contribute, and talking with him made you feel heard, he wasn’t shy in calling out trolls or people who in his view would stifle community efforts.

When the user centric identity effort substantially failed to gain traction in actual products, with the identity industry incorporating some important innovations (hello, claims) but generally rejecting many of the key tenets I held so dear, something broke inside me. I became disillusioned with pure principled views, and moved toward a stricter jobs-to-be-done, use-case-driven stance.

That, Kim’s temporary retirement from Microsoft and eventually my move to Auth0 made my interactions with Kim less frequent. It was always nice to run into him at conferences; we kept backchanneling whenever industry news called for coordinated responses; and he reached out to me once to discuss SSI, but we never had a chance to do so. As cliché as it might be, I now deeply regret not having reached out more myself.
The last time I heard from him, it was during a reunion of the CardSpace team. It was a joyous occasion, seeing so many people that for a time all worked to realize his vision, and touched in various degrees by his influence. His health didn’t allow him to attend in person, but he called in – we passed the phone around, exchanging pleasantries without knowing we were saying our goodbyes. I remember his “hello Vittorio” as I picked up the phone from Mike: his cordial, even sweet tone as he put his usual care in pronouncing my name just right, right there to show the kindness this giant used with us all.


Simon Willison

What's new in sqlite-utils 3.20 and 3.21

sqlite-utils is my combined CLI tool and Python library for manipulating SQLite databases. Consider this the annotated release notes for sqlite-utils 3.20 and 3.21, both released in the past week. sqlite-utils insert --convert with --lines and --text The sqlite-utils insert command inserts rows into a SQLite database from a JSON, CSV or TSV file, creating a table with the necessary columns if

sqlite-utils is my combined CLI tool and Python library for manipulating SQLite databases. Consider this the annotated release notes for sqlite-utils 3.20 and 3.21, both released in the past week.

sqlite-utils insert --convert with --lines and --text

The sqlite-utils insert command inserts rows into a SQLite database from a JSON, CSV or TSV file, creating a table with the necessary columns if one does not exist already.

It gained three new options in v3.20:

sqlite-utils insert ... --lines to insert the lines from a file into a table with a single line column, see Inserting unstructured data with --lines and --text.
sqlite-utils insert ... --text to insert the contents of the file into a table with a single text column and a single row.
sqlite-utils insert ... --convert allows a Python function to be provided that will be used to convert each row that is being inserted into the database. See Applying conversions while inserting data, including details on special behavior when combined with --lines and --text. (#356)

These features all evolved from an idea I had while re-reading my blog entry from last year, Apply conversion functions to data in SQLite columns with the sqlite-utils CLI tool. That blog entry introduced the sqlite-utils convert command, which can run a custom Python function against a column in a table to convert that data in some way.

Given a log file log.txt that looks something like this:

2021-08-05T17:58:28.880469+00:00 app[web.1]: measure#nginx.service=4.212 request="GET /search/?type=blogmark&page=2&tag=highavailability HTTP/1.1" status_code=404 request_id=25eb296e-e970-4072-b75a-606e11e1db5b remote_addr="10.1.92.174" forwarded_for="114.119.136.88, 172.70.142.28" forwarded_proto="http" via="1.1 vegur" body_bytes_sent=179 referer="-" user_agent="Mozilla/5.0 (Linux; Android 7.0;) AppleWebKit/537.36 (KHTML, like Gecko) Mobile Safari/537.36 (compatible; PetalBot;+https://webmaster.petalsearch.com/site/petalbot)" request_time="4.212" upstream_response_time="4.212" upstream_connect_time="0.000" upstream_header_time="4.212";

I provided this example code to insert lines from a log file into a table with a single line column:

cat log.txt | \
  jq --raw-input '{line: .}' --compact-output | \
  sqlite-utils insert logs.db log - --nl

Since sqlite-utils insert requires JSON, this example first used jq to convert the lines into {"line": "..."} JSON objects.

My first idea was to improve this with the new --lines option, which lets you replace the above with this:

sqlite-utils insert logs.db log log.txt --lines

Using --lines will create a table with a single line column and import every line from the file as a row in that table.

In the article, I then demonstrated how --convert could be used to convert those imported lines into structured rows using a regular expression:

sqlite-utils convert logs.db log line --import re --multi "$(cat <<EOD
r = re.compile(r'([^\s=]+)=(?:"(.*?)"|(\S+))')
pairs = {}
for key, value1, value2 in r.findall(value):
    pairs[key] = value1 or value2
return pairs
EOD
)"

The new --convert option to sqlite-utils means you can now achieve the same thing using:

sqlite-utils insert logs.db log log.txt --lines \
  --import re --convert "$(cat <<EOD
r = re.compile(r'([^\s=]+)=(?:"(.*?)"|(\S+))')
pairs = {}
for key, value1, value2 in r.findall(line):
    pairs[key] = value1 or value2
return pairs
EOD
)"
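To see what that conversion function is doing outside of sqlite-utils, here's a standalone Python sketch that applies the same regular expression to a shortened version of the log line above:

import re

line = 'measure#nginx.service=4.212 request="GET /search/ HTTP/1.1" status_code=404'

r = re.compile(r'([^\s=]+)=(?:"(.*?)"|(\S+))')
pairs = {}
for key, value1, value2 in r.findall(line):
    # value1 is set for quoted values, value2 for unquoted ones
    pairs[key] = value1 or value2

print(pairs)
# {'measure#nginx.service': '4.212', 'request': 'GET /search/ HTTP/1.1', 'status_code': '404'}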

Since the --lines option allows you to consume mostly unstructured files split by newlines, I decided to also add an option to consume an entire unstructured file as a single record. I originally called that --all but found the code got messy because it conflicted with Python's all() built-in, so I renamed it to --text.

Used on its own, --text creates a table with a single column called text:

% sqlite-utils insert logs.db fulllog log.txt --text
% sqlite-utils schema logs.db
CREATE TABLE [fulllog] (
   [text] TEXT
);

But with --convert you can pass a snippet of Python code which can take that text value and convert it into a list of dictionaries, which will then be used to populate the table.

Here's a fun example. The following one-liner uses the classic feedparser library to parse the Atom feed for my blog and load it into a database table:

curl 'https://simonwillison.net/atom/everything/' | \
  sqlite-utils insert feed.db entries --text --convert '
    feed = feedparser.parse(text)
    return feed.entries' - --import feedparser

The resulting database looks like this:

% sqlite-utils tables feed.db --counts -t
table      count
-------  -------
feed          30
% sqlite-utils schema feed.db
CREATE TABLE [feed] (
   [title] TEXT,
   [title_detail] TEXT,
   [links] TEXT,
   [link] TEXT,
   [published] TEXT,
   [published_parsed] TEXT,
   [updated] TEXT,
   [updated_parsed] TEXT,
   [id] TEXT,
   [guidislink] INTEGER,
   [summary] TEXT,
   [summary_detail] TEXT,
   [tags] TEXT
);

Not bad for a one-liner!

This example uses the --import option to import that feedparser library. This means you'll need to have that library installed in the same virtual environment as sqlite-utils.

If you run into problems here (maybe due to having installed sqlite-utils via Homebrew) one way to do this is to use the following:

python3 -m pip install feedparser sqlite-utils

Then use python3 -m sqlite_utils in place of sqlite-utils - this will ensure you are running the command from the same virtual environment where you installed the library.

--convert for regular rows

The above examples combine --convert with the --lines and --text options to parse unstructured text into database tables.

But --convert works with the existing sqlite-utils insert options as well.

To review, those are the following:

sqlite-utils insert by default expects a JSON file that's a list of objects, [{"id": 1, "text": "Like"}, {"id": 2, "text": "This"}].
sqlite-utils insert --nl accepts newline-delimited JSON, {"id": 1, "text": "Like"}\n{"id": 2, "text": "This"}.
sqlite-utils insert --csv and --tsv accepts CSV/TSV - with --delimiter and --encoding and --quotechar and --no-headers options for customizing that import, and a --sniff option for automatically detecting those settings.

You can now use --convert to define a Python function that accepts a row dictionary representing each row from the import and modifies that dictionary or returns a fresh one with changes.

Here's a simple example that produces just the capitalized name, the latitude and the longitude from the WRI's global power plants CSV file:

curl https://raw.githubusercontent.com/wri/global-power-plant-database/master/output_database/global_power_plant_database.csv | \
  sqlite-utils insert plants.db plants - --csv --convert '
return {
    "name": row["name"].upper(),
    "latitude": float(row["latitude"]),
    "longitude": float(row["longitude"]),
}'

The resulting database looks like this:

% sqlite-utils schema plants.db
CREATE TABLE [plants] (
   [name] TEXT,
   [latitude] FLOAT,
   [longitude] FLOAT
);
~ % sqlite-utils rows plants.db plants | head -n 3
[{"name": "KAJAKI HYDROELECTRIC POWER PLANT AFGHANISTAN", "latitude": 32.322, "longitude": 65.119},
 {"name": "KANDAHAR DOG", "latitude": 31.67, "longitude": 65.795},
 {"name": "KANDAHAR JOL", "latitude": 31.623, "longitude": 65.792},

sqlite-utils bulk
New sqlite-utils bulk command which can import records in the same way as sqlite-utils insert (from JSON, CSV or TSV) and use them to bulk execute a parametrized SQL query. (#375)

With the addition of --lines, --text, --convert and --import the sqlite-utils insert command is now a powerful tool for turning anything into a list of Python dictionaries, which can then in turn be inserted into a SQLite database table.

Which gave me an idea... what if you could use the same mechanisms to execute SQL statements in bulk instead?

Python's SQLite library supports named parameters in SQL queries, which look like this:

insert into plants (id, name) values (:id, :name)

Those :id and :name parameters can be populated from a Python dictionary. And the .executemany() method can efficiently apply the same SQL query to a big list (or iterator or generator) of dictionaries in one go:

cursor = db.cursor()
cursor.executemany(
    "insert into plants (id, name) values (:id, :name)",
    [{"id": 1, "name": "One"}, {"id": 2, "name": "Two"}]
)
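Here's a self-contained sketch of that pattern using Python's built-in sqlite3 module and an in-memory database, so it can be run as-is:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table plants (id integer primary key, name text)")

cursor = db.cursor()
cursor.executemany(
    "insert into plants (id, name) values (:id, :name)",
    [{"id": 1, "name": "One"}, {"id": 2, "name": "Two"}],
)
db.commit()

print(db.execute("select id, name from plants").fetchall())
# [(1, 'One'), (2, 'Two')]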

So I implemented the sqlite-utils bulk command, which takes the same import options as sqlite-utils insert but, instead of creating and populating the specified table, requires a SQL argument with a query that will be executed using the imported rows as arguments.

% sqlite-utils bulk demo.db \
    'insert into plants (id, name) values (:id, :name)' \
    plants.csv --csv

This feels like a powerful new feature, which was very simple to implement because the hard work of importing the data had already been done by the insert command.

Running ANALYZE
New Python methods for running ANALYZE against a database, table or index: db.analyze() and table.analyze(), see Optimizing index usage with ANALYZE. (#366)
New sqlite-utils analyze command for running ANALYZE using the CLI. (#379)
The create-index, insert and upsert commands now have a new --analyze option for running ANALYZE after the command has completed. (#379)

This idea came from Forest Gregg, who initially suggested running ANALYZE automatically as part of the sqlite-utils create-index command.

I have to confess: in all of my years of using SQLite, I'd never actually explored the ANALYZE command.

When run, it builds a new table called sqlite_stat1 containing statistics about each of the indexes on the table - indicating how "selective" each index is - effectively how many rows on average you are likely to filter down to if you use the index.

The SQLite query planner can then use this to decide which index to consult. For example, given the following query:

select * from ny_times_us_counties where state = 'Missouri' and county = 'Greene'

(Try that here.)

If there are indexes on both columns, should the query planner use the state column or the county column?

In this case the state column will filter down to 75,209 rows, while the county column filters to 9,186 - so county is clearly the better query plan.

Impressively, SQLite seems to make this kind of decision perfectly well without the sqlite_stat1 table being populated: explain query plan select * from ny_times_us_counties where "county" = 'Greene' and "state" = 'Missouri' returns the following:

SEARCH TABLE ny_times_us_counties USING INDEX idx_ny_times_us_counties_county (county=?)

I've not actually found a good example of a query where the sqlite_stat1 table makes a difference yet, but I'm confident such queries exist!

Using SQL, you can run ANALYZE against an entire database by executing ANALYZE;, or against all of the indexes for a specific table with ANALYZE tablename;, or against a specific index by name using ANALYZE indexname;.
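Here's a quick sqlite3 sketch that runs ANALYZE and inspects the resulting statistics - the table, index and data are made up for illustration:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table counties (state text, county text)")
db.execute("create index idx_counties_state on counties (state)")
db.executemany(
    "insert into counties values (?, ?)",
    [("Missouri", "County %d" % i) for i in range(100)] + [("Oregon", "Lane")],
)

db.execute("ANALYZE")  # could also be ANALYZE counties or ANALYZE idx_counties_state
for row in db.execute("select * from sqlite_stat1"):
    # Each row records roughly how selective an index is,
    # e.g. something like ('counties', 'idx_counties_state', '101 51')
    print(row)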

There's one catch with ANALYZE: since running it populates a static sqlite_stat1 table, the data in that table can get out of date. If you insert another million rows into a table, for example, your ANALYZE statistics might no longer reflect ground truth to the point that the query planner starts to make bad decisions.

For sqlite-utils I decided to make ANALYZE an explicit operation. In the Python library you can now run the following:

db.analyze()             # Analyze every index in the database
db.analyze("indexname")  # Analyze a specific index
db.analyze("tablename")  # Analyze every index for that table
# Or the same thing using a table object:
db["tablename"].analyze()

I also added an optional analyze=True parameter to several methods, which you can use to trigger an ANALYZE once that operation completes:

db["tablename"].create_index(["column"], analyze=True)
db["tablename"].insert_all(rows, analyze=True)
db["tablename"].delete_where(analyze=True)

The sqlite-utils CLI command has equivalent functionality:

# Analyze every index in a database:
% sqlite-utils analyze database.db
# Analyze a specific index:
% sqlite-utils analyze database.db indexname
# Analyze all indexes for a table:
% sqlite-utils analyze database.db tablename

And an --analyze option for various commands:

% sqlite-utils create-index ... --analyze
% sqlite-utils insert ... --analyze
% sqlite-utils upsert ... --analyze

Other smaller changes
New sqlite-utils create-database command for creating new empty database files. (#348)

Most sqlite-utils commands such as insert or create-table create the database file for you if it doesn't already exist, but I decided it would be neat to have an explicit create-database command for deliberately creating an empty database.

Update 13th January 2022: I wrote a detailed description of my process building this command in How I build a feature.

The CLI tool can now also be run using python -m sqlite_utils. (#368)

I initially added this to help write a unit test that exercised the tool through a subprocess (see TIL Testing a Click app with streaming input) but it's a neat pattern in general. Datasette gained this through a contribution from Abdussamet Koçak a few years ago.
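As a minimal sketch of what that enables, a test can shell out to python -m sqlite_utils with subprocess - this assumes sqlite-utils is installed in the active environment and only checks that the command runs:

import subprocess
import sys

def test_cli_module_runs():
    result = subprocess.run(
        [sys.executable, "-m", "sqlite_utils", "--version"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0
    assert result.stdout.strip() != ""  # prints a version string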

Using --fmt now implies --table, so you don't need to pass both options. (#374)

A nice tiny usability enhancement. You can now run sqlite-utils rows my.db mytable --fmt rst to get back a reStructuredText table - previously you also needed to add --table.

The insert-files command supports two new columns: stem and suffix. (#372)

I sometimes re-read the documentation for older features to remind me what they do, and occasionally an idea for a feature jumps out from that. Implementing these was a very small change.

The --nl import option now ignores blank lines in the input. (#376)
Fixed bug where streaming input to the insert command with --batch-size 1 would appear to only commit after several rows had been ingested, due to unnecessary input buffering. (#364)

That --nl improvement came from tinkering around trying to fix the bug.

The bug itself was interesting: I initially thought that my entire mechanism for committing on every --batch-size chunk was broken, but it turned out I was unnecessarily buffering data from standard input in order to support the --sniff option for detecting the shape of incoming CSV data.

db.supports_strict property showing if the database connection supports SQLite strict tables.
table.strict property (see .strict) indicating if the table uses strict mode. (#344)

See previous weeknotes: this is the first part of my ongoing support for the new STRICT tables in SQLite.

I'm currently blocked on implementing more due to the need to get a robust mechanism up and running for executing sqlite-utils tests in CI against specific SQLite versions, see issue #346.

Releases this week

sqlite-utils: 3.21 - (92 releases total) - 2022-01-11
Python CLI utility and library for manipulating SQLite databases

sqlite-utils: 3.20 - 2022-01-05

stream-delay: 0.1 - 2022-01-08
Stream a file or stdin one line at a time with a delay

TILs this week
Writing pytest tests against tools written with argparse
Testing a Click app with streaming input

Aaron Parecki

How to convert USB webcams to HDMI

There are a handful of interesting USB webcams out there, which naturally work great with a computer. But what if you want to combine video from a USB webcam with your HDMI cameras in a video switcher like the ATEM Mini?


Most video switchers don't have a way to plug in USB webcams. That's because webcams are expected to plug in to a computer, and most video switchers aren't really computers. Thankfully over the past few years, UVC has become a standard for webcams, so there is no need to worry about installing manufacturer-specific drivers anymore. For the most part, you can take any USB webcam and plug it into a computer and it will Just Work™.

I'm going to show you three different ways you can convert a USB UVC webcam to HDMI so you can use them with hardware video switchers like the ATEM Mini.

You can see a video version of this blog post on my YouTube channel!

Method 1: QuickTime Player

The simplest option is to use QuickTime on a Mac computer. For this, you'll need a Mac of course, as well as an HDMI output from the computer.

First, plug the HDMI output from your computer into your video switcher. Your computer will see it as a second monitor. In your display settings, make sure your computer is not mirroring the display. You want the computer to see the ATEM Mini or other video switcher as a secondary external display.

If you're doing this with the ATEM Mini, it's helpful to have a monitor plugged in to the ATEM Mini's HDMI output port, and then you can show your computer screen full screen on the ATEM's output by selecting that input's button in the "output" selector on the right side of the controls. This is important since you'll want to be able to navigate around the second screen a bit in the next steps.

Next, open QuickTime Player. Plug your USB webcam into your computer. In the QuickTime "File" menu, choose "New Movie Recording". A window should appear with your default webcam. Click the little arrow next to the record button and you should see all your connected cameras as options. Choose the USB camera you want to use and you should see it in the main video window.

Now drag that QuickTime window onto your second monitor that is actually the ATEM Mini. Click the green button in the top left corner to make the window full screen. Now what you see on the ATEM Mini should be just the full screen video. Make sure you move your cursor back to your main monitor so that it doesn't show up on the screen.

You're all set! You can switch the ATEM back to the multiview and you should see your webcam feed as one of the video inputs you can switch to.

Method 2: OBS

OBS is a powerful tool for doing all sorts of interesting things with video on your computer. You can use it to run a livestream, switching between multiple cameras and adding graphics on top. What we're going to use it for now is a simple way to get your USB cameras to show up on a second monitor attached to your computer.

Another benefit of OBS is that it is cross platform, so this method will work on Mac, Windows or Linux!

The basic idea is to create a scene in OBS that is just a full screen video of the webcam you want to use. Then you'll tell OBS to output that video on your second monitor, but the second monitor will actually be your computer's HDMI output plugged in to the ATEM Mini.

First, create a new scene, call it whatever you want, I'll call mine "Webcam". Inside that scene, add a new source of type "Video Capture Device". I'll call mine "Webcam Source".

When you create the source, it will ask you which video capture device you want to use, so choose your desired webcam at this step.

At this point you should see the webcam feed in the OBS main window. If it's not full screen, that's probably because the webcam is not full 1920x1080 resolution. You can drag the handles on the video to resize the picture to take up the full 1920x1080 screen. 

Next, right click anywhere in the main video window and choose "Fullscreen Projector (Preview)". Or if you use OBS in "Studio Mode", right click on the right pane and choose "Fullscreen Projector (Program)". Choose your secondary monitor that's plugged in to the ATEM, and OBS should take over that monitor and show just the video feed.

Method 3: Hardware Encoder

If you don't want to tie up a computer with this task, or don't have the space for a computer, another option is to use a dedicated hardware encoder to convert the USB webcam to HDMI.

There aren't a lot of options on the market for this right now, likely because it's not a super common thing to need to do. Currently, any device that can convert a UVC webcam to HDMI is basically a tiny computer. One example is the YoloBox which can accept some USB webcams as a video source alongside HDMI cameras. You could use the YoloBox to convert the USB camera to HDMI using the HDMI output of the YoloBox. 

Another option is this TBS2603au encoder/decoder.

I originally was sent this device by TBS because I was interested in using it as an RTMP server. I wasn't able to figure that out, and have since switched to using the Magewell Pro Convert as an RTMP server which has been working great. But as I was poking around in the menus I realized that the TBS2603au has a USB port which can accept webcams!

So here are the step by step instructions for setting up the TBS2603au to output a USB webcam over its HDMI port.

The TBS2603au is controlled from its web interface. I'm going to assume you already know how to connect this to your network and configure the IP address and get to the device's web page. The default username and password are "admin" and "admin". Once you log in, you'll see a dashboard like this.

First, click on the "Encode" icon in the top bar. At the bottom, turn off the HDMI toggle and turn on the one next to USB.

Next click on the "Extend" tab in the top menu and choose "Video Mix".

Scroll down to the "Output Config" section and change "Mix Enable" to "Off", and choose "USBCam" from the "Video Source" option.

At this point you should see your webcam's picture out the device's HDMI port! And if that's plugged in to the ATEM Mini, your webcam will appear in your multiview!

I've tried this with a few different webcams and they all work great! 

The OBSBot Tiny is an auto-tracking PTZ camera that follows your face. The nice thing is that the camera itself is doing the face tracking, so no drivers are required!

The Elgato FaceCam is a high quality webcam for your PC, and it also works with this device. Although at that point you should probably just get a DSLR/mirrorless camera to use with the ATEM Mini.

This even works with the Insta360 One X2 in webcam mode. You won't get a full 360 picture, since in webcam mode the Insta360 One X2 uses only one of its two cameras. It does do some auto-tracking though.

The Mevo Start cameras are another interesting option, since you can crop in to specific parts of the video using a phone as a remote control.

There are a couple of problems with this method to be aware of. I wasn't able to find a way to output audio from the USB webcam, which means you will need to get your audio into the ATEM from another camera or external microphone. Another problem was with certain cameras (mainly the OBSBot Tiny): I left the device running overnight and in the morning it had crashed. I suspect it's because the OBSBot requires more power than other cameras due to its PTZ motor.

The TBS encoder isn't cheap, so it's not something you'd buy to use a generic webcam with your ATEM. But for use with specialized USB webcams like document cameras or PTZ cameras it could be a good option to use those cameras with streaming encoders like the ATEM Mini!

Let me know what USB webcams you'd like to use with your ATEM Mini or other hardware streaming encoder!

Monday, 10. January 2022

Damien Bod

Comparing the backend for frontend (BFF) security architecture with an SPA UI using a public API

This article compares the security architecture of an application implemented using a public UI SPA with a trusted API backend and the same solution implemented using the backend for frontend (BFF) security architecture. The main difference is that the first solution is separated into two applications, implemented and deployed as two, whereas the second […]

This article compares the security architecture of an application implemented using a public UI SPA with a trusted API backend and the same solution implemented using the backend for frontend (BFF) security architecture. The main difference is that the first solution is separated into two applications, implemented and deployed as two, whereas the second application is a single deployment and secured as a single application. The BFF has fewer risks and is a better security architecture but, as always, no solution is perfect.

Setup BFF

The BFF solution is implemented and deployed as a single trusted application. All security is implemented in the trusted backend. The UI part of the application can only use the same domain APIs and cannot use APIs from separate domains. This architecture is the same as a standard ASP.NET Core Razor page UI confidential client. All APIs can be implemented in the same server part of the application. There is no requirement for downstream APIs. Due to this architecture, no sensitive data needs to be saved in the browser. This is effectively a trusted server rendered application. As with any server rendered application, this is protected using cookies with the required cookie protections and normally authenticates against an OIDC server using a trusted, confidential client with code flow and PKCE protection. Because the application is trusted, further protections can be added as required, for example MTLS, further OIDC FAPI requirements and so on. If downstream APIs are required, these APIs do not need to be exposed in the public zone (internet) and can be implemented using a trusted client and with token binding between the client and the server.

The XSS protection can be improved using a better CSP and all front-channel cross-domain calls can be completely blocked. Dynamic data (i.e. nonces) can be used to produce the CSP. The UI can be hosted using a server rendered page and dynamic meta data and settings can easily be added for the UI without further architecture or DevOps flows. I always host the UI part in a BFF using a server rendered file. A big win with the BFF architecture is that the access tokens and the refresh tokens are not stored publicly in the browser. When using SignalR, the secure same site HTTP only cookie can be used and no token for authorization is required in the URL. This is an improvement as long as CSRF protection is applied. Extra CSRF protection is required for all server requests because cookies are used (as well as same site). This can be implemented using anti-forgery tokens or by forcing CORS preflight using a custom header. This must be enforced on the backend for all HTTP requests where required.

Because only a single application needs to be deployed, DevOps is simpler and complexity is reduced; this is my experience after using this in production with Blazor. Reduced complexity is reduced costs.

Setup SPA with public API

An SPA solution is deployed as a separate UI application and a separate public API application. Two security flows are used in this setup and they are two completely separate applications, even though the API and UI are “business” tightly coupled. The best and most productive solutions with this setup are where the backend APIs are made specifically for, and optimized for, the UI. The API must be public if the SPA is public. The SPA has no backend which can be used for security; tokens and sensitive data are stored in the browser and need to be accessed using JavaScript. As XSS is very hard to protect against, this will always have security risks.

When using SPAs, as the access tokens are shared around the browser or added to URLs for web sockets, it is really important to revoke the tokens on a logout. The refresh token requires specific protections for usage in SPAs. Access tokens cannot be revoked, so reference tokens with introspection used in the API are the preferred security solution. A logout is possible with introspection and reference tokens using the revocation endpoint. It is very hard to implement an SSO logout when using an SPA. This is because only the front-channel logout is possible in an SPA and not a back-channel logout as with a server rendered application.

This setup has performance advantages compared to the BFF architecture when using downstream APIs. The APIs from different domains can be used directly. UIs with PWA requirements are easier to implement compared to the BFF architecture. CSRF attacks are easier to secure against using tokens, but there is more risk from an XSS attack due to sensitive data in the public client.

Advantages using BFF

Single trusted application instead of two apps, public untrusted UI + public trusted API (reduced attack surface)
Trusted client protected with a secret or certificate
No access/reference tokens in the browser
No refresh token in the browser
Web sockets security improved (SignalR), no access/reference token in the URL
Backchannel logout, SSO logout possible
Improved CSP and security headers (can use dynamic data and block all other domains) => better protection against XSS is possible (depends on UI tech stack)
Can use MTLS, OIDC FAPI, client binding for all downstream API calls from the trusted UI app, so much improved security is possible for the downstream API calls.
No architecture requirement for public APIs outside the same domain, downstream APIs can be deployed in a private trusted zone.
Easier to build and deploy (my experience so far). Easier for me means reduced costs.
Reduced maintenance due to reduced complexity. (This is my experience so far)

Disadvantages using BFF

Downstream APIs require a redirect or a second API call (YARP, OBO, OAuth2 Resource Owner Credentials Flow, certificate auth)
PWA support not out of the box
Performance is worse if downstream APIs are required (i.e. an API call not on the same domain)
All UI API POST, DELETE, PATCH, PUT HTTP requests must use an anti-forgery token or force CORS preflight, as well as same site protection.
Cookies are hard to invalidate, which requires extra logic (Is this required for a secure HTTP only same site cookie? Low risk)

Discussions

I have had some excellent discussions on this topic, with very valid points and arguments against some of the points above. I would recommend reading these (link below) to get a bigger picture. Thanks to kevin_chalet for the great feedback and comments.

https://github.com/openiddict/openiddict-samples/issues/180

Notes

A lot of opinions exist on this setup and I am sure lots of people see this in a different way, with very valid points. Others follow software tech religions, which prevents them from assessing and evaluating different solution architectures. Nothing is ever black or white. No one solution is best for everything, all solutions have problems, and future problems will always happen with any setup. I believe that, using the BFF architecture, I can increase the security of the solutions with less effort and reduce the security risks and costs, thus creating more value for my clients. I still use SPAs with APIs and see this as a valuable and good security solution for some systems. The entry level for the BFF architecture with some tech stacks is still very high.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/damienbod/Blazor.BFF.AzureB2C.Template

https://github.com/damienbod/Blazor.BFF.OpenIDConnect.Template

https://github.com/DuendeSoftware/BFF

https://github.com/manfredsteyer/yarp-auth-proxy

https://docs.microsoft.com/en-us/aspnet/core/blazor/

https://docs.duendesoftware.com/identityserver/v5/bff/overview/

https://github.com/berhir/BlazorWebAssemblyCookieAuth

https://github.com/manfredsteyer/yarp-auth-proxy

OIDC FAPI

https://www.cloudflare.com/learning/access-management/what-is-mutual-tls/

https://csp-evaluator.withgoogle.com/

https://docs.microsoft.com/en-us/aspnet/signalr/overview/getting-started/introduction-to-signalr

https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow

https://microsoft.github.io/reverse-proxy/index.html

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/certauth

https://github.com/openiddict/openiddict-samples/issues/180

https://www.w3.org/TR/CSP3/

https://content-security-policy.com/

https://csp.withgoogle.com/docs/strict-csp.html

https://github.com/manfredsteyer/angular-oauth2-oidc

https://github.com/damienbod/angular-auth-oidc-client

https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-overview

https://github.com/AzureAD/microsoft-identity-web

https://github.com/damienbod/AspNetCoreOpeniddict

https://github.com/openiddict/openiddict-samples

Sunday, 09. January 2022

Simon Willison

Quoting MetaMask Support

Before May 2021, the master key in MetaMask was called the “Seed Phrase”. Through user research and insights from our customer support team, we have concluded that this name does not properly convey the critical importance that this master key has for user security. This is why we will be changing our naming of this master key to “Secret Recovery Phrase”. Through May and June of 2021, we will be

Before May 2021, the master key in MetaMask was called the “Seed Phrase”. Through user research and insights from our customer support team, we have concluded that this name does not properly convey the critical importance that this master key has for user security. This is why we will be changing our naming of this master key to “Secret Recovery Phrase”. Through May and June of 2021, we will be phasing out the use of “seed phrase” in our application and support articles, and eventually exclusively calling it a “Secret Recovery Phrase.” No action is required, this is only a name change. We will be rolling this out on both the extension and the mobile app for all users.

MetaMask Support

Saturday, 08. January 2022

Simon Willison

Hashids

Hashids Confusingly named because it's not really a hash - this library (available in 40+ languages) offers a way to convert integer IDs to and from short strings of text based on a salt which, if kept secret, should help prevent people from deriving the IDs and using them to measure growth of your service. It works using a base62 alphabet that is shuffled using the salt. Via Tom MacWrigh

Hashids

Confusingly named because it's not really a hash - this library (available in 40+ languages) offers a way to convert integer IDs to and from short strings of text based on a salt which, if kept secret, should help prevent people from deriving the IDs and using them to measure growth of your service. It works using a base62 alphabet that is shuffled using the salt.
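A quick sketch using the Python hashids package (pip install hashids) - the exact strings you get back depend entirely on the salt you choose:

from hashids import Hashids

hashids = Hashids(salt="this is my secret salt")

encoded = hashids.encode(12345)    # a short string, e.g. something like 'NkK9'
decoded = hashids.decode(encoded)  # back to a tuple of integers

assert decoded == (12345,)
print(encoded, decoded)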

Via Tom MacWright

Friday, 07. January 2022

Identity Praxis, Inc.

Identity management is key to increasing security, reducing fraud and developing a seamless customer experience

I enjoyed participating in the Mobile Ecosystem Forum (MEF) Enterprise webinar on December 9, 2021. MEF explores its recent Personal Data and Identity Management Enterprise Survey – supported by Boku – in a webinar on 9th December 2021. MEF Programme Director, Andrew Parkin-White, is joined by Michael Becker, CEO of Identity Praxis and MEF Advisor and […] The post Identity management is key

I enjoyed participating in the Mobile Ecosystem Forum (MEF) Enterprise webinar on December 9, 2021.

MEF explores its recent Personal Data and Identity Management Enterprise Survey – supported by Boku – in a webinar on 9th December 2021. MEF Programme Director, Andrew Parkin-White, is joined by Michael Becker, CEO of Identity Praxis and MEF Advisor and Phil Todd, Director of Stereoscope, who co-authored the report.

Andrew Parkin-White wrote a nice blog piece that summarised our discussion. Three learnings came from our dialog:

Identity management is an iterative process with three core elements – initial identification, authentication (re-identifying the individual) and verification (ensuring the individual is who they claim to be)
Enterprises employ a vast array of technologies to execute these processes which are growing in scope and complexity
Understanding why identity management is necessary to enterprises and how this creates opportunities for vendors

You can watch the entire session on YouTube (60 min).

The post Identity management is key to increasing security, reducing fraud and developing a seamless customer experience appeared first on Identity Praxis, Inc..


Here's Tom with the Weather

The First Shots

A month ago, I learned about Katalin Karikó as I was reading Brendan Borrell’s The First Shots. She developed the modified mRNA (from which Moderna gets its name) that made possible the mRNA vaccines. The book describes how the University of Pennsylvania squandered her interest in the patent for her work by selling the rights to a company called Epicentre. Eventually, Moderna licensed the p

A month ago, I learned about Katalin Karikó as I was reading Brendan Borrell’s The First Shots. She developed the modified mRNA (from which Moderna gets its name) that made possible the mRNA vaccines. The book describes how the University of Pennsylvania squandered her interest in the patent for her work by selling the rights to a company called Epicentre. Eventually, Moderna licensed the patent from Epicentre to complement the work of Derrick Rossi.

In an interview, she also credits Paul Krieg and Douglas Melton for their contributions.

As a recipient of 3 doses of the Moderna vaccine, I’m thankful to these researchers and was glad to read this book.

Thursday, 06. January 2022

Here's Tom with the Weather


Simon Willison

Quoting Laurie Voss

Crypto creates a massively multiplayer online game where the game is "currency speculation", and it's very realistic because it really is money, at least if enough people get involved. [...] NFTs add another layer to the game. Instead of just currency speculation, you're now simulating art speculation too! The fact that you don't actually own the art and the fact that the art is randomly generate

Crypto creates a massively multiplayer online game where the game is "currency speculation", and it's very realistic because it really is money, at least if enough people get involved. [...] NFTs add another layer to the game. Instead of just currency speculation, you're now simulating art speculation too! The fact that you don't actually own the art and the fact that the art is randomly generated cartoon images of monkeys is entirely beside the point: the point is the speculation, and winning the game by making money. This is, again, a lot of fun to some people, and in addition to the piles of money they also in some very limited sense own a picture of a cartoon monkey that some people recognize as being very expensive, so they can brag without having to actually post screenshots of their bank balance, which nobody believed anyway.

Laurie Voss

Wednesday, 05. January 2022

Just a Theory

Every Day Is Jan 6 Now

The New York Times gets real about the January 6 coup attempt.

The New York Times Editorial Board in an unusually direct piece last week:

It is regular citizens [who threaten election officials] and other public servants, who ask, “When can we use the guns?" and who vow to murder politicians who dare to vote their conscience. It is Republican lawmakers scrambling to make it harder for people to vote and easier to subvert their will if they do. It is Donald Trump who continues to stoke the flames of conflict with his rampant lies and limitless resentments and whose twisted version of reality still dominates one of the nation’s two major political parties.

In short, the Republic faces an existential threat from a movement that is openly contemptuous of democracy and has shown that it is willing to use violence to achieve its ends. No self-governing society can survive such a threat by denying that it exists. Rather, survival depends on looking back and forward at the same time.

See also this Vox piece. Great to see these outlets sound the alarm about the dangers to American democracy. The threats are very real, and clear-eyed discussions should very much be dominating the public sphere.

More of this, please.

More about… New York Times January 6 Coup Democracy Vox

Moxy Tongue

Human Authority

Own Root, Dependencies:  

Own Root, Dependencies:

Tuesday, 04. January 2022

Simon Willison

Weeknotes: Taking a break in Moss Landing

Took some time off. Saw some whales and sea otters. Added a new spot to Niche Museums. Natalie took me to Moss Landing for a few days for my birthday. I now think Moss Landing may be one of California's best kept secrets, for a whole bunch of reasons. Most importantly, Moss Landing has Elkhorn Slough, California's second largest estuary and home to 7% of the world's population of sea otters.

Took some time off. Saw some whales and sea otters. Added a new spot to Niche Museums.

Natalie took me to Moss Landing for a few days for my birthday. I now think Moss Landing may be one of California's best kept secrets, for a whole bunch of reasons.

Most importantly, Moss Landing has Elkhorn Slough, California's second largest estuary and home to 7% of the world's population of sea otters. And you can kayak there!

We rented a kayak from Kayak Connection and headed out for three hours on the water.

The rules are to stay eight boat lengths (100 feet) away from the otters, or to stop paddling and wait for them to leave if they pop up near your boat. And they pop up a lot!

We saw at least twenty sea otters. The largest can weigh 90lbs (that's two Cleos) and they were quite happy to ignore us and get on with otter stuff: floating on their backs, diving into the water and playing with each other.

We also saw harbor seals, egrets, herons, avocets and both brown and white pelicans.

Moss Landing also sits at the edge of Monterey Bay, which contains the Monterey Submarine Canyon, one of the largest such canyons in the world. Which means cold water and warm water mixing in interesting ways. Which means lots of nutritious tiny sea creatures. Which means whales!

We went whale watching with Blue Ocean Whale Watching, who came recommended by several naturalist friends. They were brilliant - they had an obvious passion for the ocean, shared great information and answered all of our increasingly eccentric questions. Did you know a Blue Whale can use a thousand calories of energy just opening its mouth?

We saw gray whales - expected at this time of year due to their migration from the arctic down south to their breeding lagoons in Baja, and humpback whales - not a usual occurrence at this time of year but evidently the younger whales don't necessarily stick to the official calendar.

Moss Landing also has a large number of noisy sea lions. This one was asleep on the dock when our ship returned.

Then yesterday morning we went for a walk around this peninsula and saw sea otters fishing for crabs just yards away from shore! Plus a juvenile elephant seal who had hauled itself onto the beach.

We also dropped in to the Shakespeare Society of America - my first niche museum visit of 2022. I wrote about that for Niche Museums.

TIL this week
kubectl proxy
WebAuthn browser support
Adding a CORS policy to an S3 bucket

Monday, 03. January 2022

Damien Bod

Secure a Blazor WASM ASP.NET Core hosted APP using BFF and OpenIddict

This article shows how to implement authentication and secure a Blazor WASM application hosted in ASP.NET Core using the backend for frontend (BFF) security architecture to authenticate. All security is implemented in the backend and the Blazor WASM is a view of the ASP.NET Core application, no security is implemented in the public client. The […]

This article shows how to implement authentication and secure a Blazor WASM application hosted in ASP.NET Core using the backend for frontend (BFF) security architecture to authenticate. All security is implemented in the backend and the Blazor WASM is a view of the ASP.NET Core application, no security is implemented in the public client. The application is a trusted client and a secret is used to authenticate the application as well as the identity. The Blazor WASM UI can only use the hosted APIs on the same domain.

Code https://github.com/damienbod/AspNetCoreOpeniddict

Setup

The Blazor WASM and the ASP.NET Core host application are implemented as a single application and deployed as one. The server part implements the authentication using OpenID Connect. OpenIddict is used to implement the OpenID Connect server application. The code flow with PKCE and a user secret is used for authentication.

OpenID Connect server setup

The OpenID Connect server is implemented using OpenIddict. This is a standard implementation following the OpenIddict documentation. The worker class implements the IHostedService interface and is used to add the code flow client used by the Blazor ASP.NET Core application. PKCE is added as well as a client secret.

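For orientation, here is a minimal sketch of such a hosted worker. The class name, the ApplicationDbContext and the EnsureCreatedAsync call are assumptions based on the typical OpenIddict samples, not code from this repository; it simply calls the registration method that follows.

public class Worker : IHostedService
{
    private readonly IServiceProvider _serviceProvider;

    public Worker(IServiceProvider serviceProvider)
        => _serviceProvider = serviceProvider;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // create a scope so scoped services (EF Core, OpenIddict managers) can be resolved
        using var scope = _serviceProvider.CreateScope();

        // ApplicationDbContext is an assumed EF Core context backing the OpenIddict stores
        var context = scope.ServiceProvider.GetRequiredService<ApplicationDbContext>();
        await context.Database.EnsureCreatedAsync(cancellationToken);

        // register the Blazor BFF client shown below
        await RegisterApplicationsAsync(scope.ServiceProvider);
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
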
static async Task RegisterApplicationsAsync(IServiceProvider provider)
{
    var manager = provider.GetRequiredService<IOpenIddictApplicationManager>();

    // Blazor Hosted
    if (await manager.FindByClientIdAsync("blazorcodeflowpkceclient") is null)
    {
        await manager.CreateAsync(new OpenIddictApplicationDescriptor
        {
            ClientId = "blazorcodeflowpkceclient",
            ConsentType = ConsentTypes.Explicit,
            DisplayName = "Blazor code PKCE",
            DisplayNames =
            {
                [CultureInfo.GetCultureInfo("fr-FR")] = "Application cliente MVC"
            },
            PostLogoutRedirectUris =
            {
                new Uri("https://localhost:44348/signout-callback-oidc"),
                new Uri("https://localhost:5001/signout-callback-oidc")
            },
            RedirectUris =
            {
                new Uri("https://localhost:44348/signin-oidc"),
                new Uri("https://localhost:5001/signin-oidc")
            },
            ClientSecret = "codeflow_pkce_client_secret",
            Permissions =
            {
                Permissions.Endpoints.Authorization,
                Permissions.Endpoints.Logout,
                Permissions.Endpoints.Token,
                Permissions.Endpoints.Revocation,
                Permissions.GrantTypes.AuthorizationCode,
                Permissions.GrantTypes.RefreshToken,
                Permissions.ResponseTypes.Code,
                Permissions.Scopes.Email,
                Permissions.Scopes.Profile,
                Permissions.Scopes.Roles,
                Permissions.Prefixes.Scope + "dataEventRecords"
            },
            Requirements =
            {
                Requirements.Features.ProofKeyForCodeExchange
            }
        });
    }
}

Blazor client Application

The client application was created using the Blazor.BFF.OpenIDConnect.Template Nuget template package. The configuration is read from the app settings using the OpenIDConnectSettings section. You could add more configuration values if required. This is otherwise a standard OpenID Connect client and will work with any OIDC compatible server. PKCE is required and also a secret to validate the application. The AddAntiforgery method is used so that API calls can be forced to validate an anti-forgery token, protecting against CSRF in addition to the same site cookie protection.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAntiforgery(options =>
    {
        options.HeaderName = "X-XSRF-TOKEN";
        options.Cookie.Name = "__Host-X-XSRF-TOKEN";
        options.Cookie.SameSite = Microsoft.AspNetCore.Http.SameSiteMode.Strict;
        options.Cookie.SecurePolicy = Microsoft.AspNetCore.Http.CookieSecurePolicy.Always;
    });

    services.AddHttpClient();
    services.AddOptions();

    var openIDConnectSettings = Configuration.GetSection("OpenIDConnectSettings");

    services.AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.SignInScheme = "Cookies";
        options.Authority = openIDConnectSettings["Authority"];
        options.ClientId = openIDConnectSettings["ClientId"];
        options.ClientSecret = openIDConnectSettings["ClientSecret"];
        options.RequireHttpsMetadata = true;
        options.ResponseType = "code";
        options.UsePkce = true;
        options.Scope.Add("profile");
        options.Scope.Add("offline_access");
        options.SaveTokens = true;
        options.GetClaimsFromUserInfoEndpoint = true;
        //options.ClaimActions.MapUniqueJsonKey("preferred_username", "preferred_username");
    });

    services.AddControllersWithViews(options =>
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));

    services.AddRazorPages().AddMvcOptions(options =>
    {
        //var policy = new AuthorizationPolicyBuilder()
        //    .RequireAuthenticatedUser()
        //    .Build();
        //options.Filters.Add(new AuthorizeFilter(policy));
    });
}

The OIDC configuration settings are read from the OpenIDConnectSettings section. This can be extended if further specific settings are required.

"OpenIDConnectSettings": { "Authority": "https://localhost:44395", "ClientId": "blazorcodeflowpkceclient", "ClientSecret": "codeflow_pkce_client_secret" },

The NetEscapades.AspNetCore.SecurityHeaders Nuget package is used to add security headers to the application to protect the session. The configuration is set up for Blazor.

public static HeaderPolicyCollection GetHeaderPolicyCollection(bool isDev, string idpHost)
{
    var policy = new HeaderPolicyCollection()
        .AddFrameOptionsDeny()
        .AddXssProtectionBlock()
        .AddContentTypeOptionsNoSniff()
        .AddReferrerPolicyStrictOriginWhenCrossOrigin()
        .AddCrossOriginOpenerPolicy(builder =>
        {
            builder.SameOrigin();
        })
        .AddCrossOriginResourcePolicy(builder =>
        {
            builder.SameOrigin();
        })
        .AddCrossOriginEmbedderPolicy(builder => // remove for dev if using hot reload
        {
            builder.RequireCorp();
        })
        .AddContentSecurityPolicy(builder =>
        {
            builder.AddObjectSrc().None();
            builder.AddBlockAllMixedContent();
            builder.AddImgSrc().Self().From("data:");
            builder.AddFormAction().Self().From(idpHost);
            builder.AddFontSrc().Self();
            builder.AddStyleSrc().Self();
            builder.AddBaseUri().Self();
            builder.AddFrameAncestors().None();

            // due to Blazor
            builder.AddScriptSrc()
                .Self()
                .WithHash256("v8v3RKRPmN4odZ1CWM5gw80QKPCCWMcpNeOmimNL2AA=")
                .UnsafeEval();

            // disable script and style CSP protection if using Blazor hot reload
            // if using hot reload, DO NOT deploy with an insecure CSP
        })
        .RemoveServerHeader()
        .AddPermissionsPolicy(builder =>
        {
            builder.AddAccelerometer().None();
            builder.AddAutoplay().None();
            builder.AddCamera().None();
            builder.AddEncryptedMedia().None();
            builder.AddFullscreen().All();
            builder.AddGeolocation().None();
            builder.AddGyroscope().None();
            builder.AddMagnetometer().None();
            builder.AddMicrophone().None();
            builder.AddMidi().None();
            builder.AddPayment().None();
            builder.AddPictureInPicture().None();
            builder.AddSyncXHR().None();
            builder.AddUsb().None();
        });

    if (!isDev)
    {
        // maxage = one year in seconds
        policy.AddStrictTransportSecurityMaxAgeIncludeSubDomains(maxAgeInSeconds: 60 * 60 * 24 * 365);
    }

    return policy;
}

The APIs used by the Blazor UI are protected by the ValidateAntiForgeryToken and the Authorize attributes. You could add further authorization requirements as well if needed. Cookies are used for this API with same site protection.

[ValidateAntiForgeryToken]
[Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)]
[ApiController]
[Route("api/[controller]")]
public class DirectApiController : ControllerBase
{
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string> { "some data", "more data", "loads of data" };
    }
}

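The Blazor WASM client calls this API with the shared cookie and the anti-forgery header. As a rough sketch only (the client class name and the way the XSRF token is obtained are assumptions, not code from this repository), a typed client could attach the X-XSRF-TOKEN header to each request:

using System.Net.Http.Json;
using Microsoft.JSInterop;

public class DirectApiClient
{
    private readonly HttpClient _httpClient;
    private readonly IJSRuntime _jsRuntime;

    public DirectApiClient(HttpClient httpClient, IJSRuntime jsRuntime)
    {
        _httpClient = httpClient;
        _jsRuntime = jsRuntime;
    }

    public async Task<List<string>> GetDirectApiDataAsync()
    {
        // assumption: the host Razor page exposes the anti-forgery request token
        // to the client through a small JS helper
        var token = await _jsRuntime.InvokeAsync<string>("getAntiForgeryToken");

        var request = new HttpRequestMessage(HttpMethod.Get, "api/DirectApi");
        request.Headers.Add("X-XSRF-TOKEN", token);

        // the cookie is sent automatically because the API is on the same domain
        var response = await _httpClient.SendAsync(request);
        response.EnsureSuccessStatusCode();

        return await response.Content.ReadFromJsonAsync<List<string>>();
    }
}
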
When the application is started, the user can sign-in and authenticate using OpenIddict.

The setup keeps all the security implementation in the trusted backend. This setup can work against any OpenID Connect conformant server. By having a trusted application, it is now possible to implement access to downstream APIs in a number of ways and to add further protections as required. The downstream API does not need to be public either. You should only use a downstream API if required. If a software architecture forces you to use APIs from separate domains, then a YARP reverse proxy can be used to access the API, or a service to service API call (i.e. trusted client with a trusted server), or an on behalf of (OBO) flow can be used.

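As a rough illustration of the reverse proxy option (a minimal sketch, not part of this repository; the Yarp.ReverseProxy package and a "ReverseProxy" configuration section are assumed), YARP can be wired into the trusted backend so the WASM UI still only ever calls its own domain:

// assumes the Yarp.ReverseProxy NuGet package and a "ReverseProxy" config section
public void ConfigureServices(IServiceCollection services)
{
    services.AddReverseProxy()
        .LoadFromConfig(Configuration.GetSection("ReverseProxy"));
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        // the WASM UI keeps calling its own domain; the trusted backend
        // forwards matching requests to the separate API domain
        endpoints.MapReverseProxy();
    });
}

Any access tokens for the downstream API would then be attached in the backend, so nothing sensitive reaches the browser.
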
Links

https://documentation.openiddict.com/

https://github.com/damienbod/Blazor.BFF.OpenIDConnect.Template

https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders

https://github.com/openiddict

Sunday, 02. January 2022

Jon Udell

The (appropriately) quantified self

A year after we moved to northern California I acquired a pair of shiny new titanium hip joints. There would be no more running for me. But I’m a lucky guy who gets to bike and hike more than ever amidst spectacular scenery that no-one could fully explore in a lifetime. Although the osteoarthritis … Continue reading The (appropriately) quantified self

A year after we moved to northern California I acquired a pair of shiny new titanium hip joints. There would be no more running for me. But I’m a lucky guy who gets to bike and hike more than ever amidst spectacular scenery that no-one could fully explore in a lifetime.

Although the osteoarthritis was more advanced on the right side, we opted for bilateral replacement because the left side wasn’t far behind. Things hadn’t felt symmetrical in the years leading up to the surgery, and that didn’t change. There’s always a sense that something’s different about the right side.

We’re pretty sure it’s not the hardware. X-rays show that the implants remain firmly seated, and there’s no measurable asymmetry. Something about the software has changed, but there’s been no way to pin down what’s different about the muscles, tendons, and ligaments on that side, whether there’s a correction to be made, and if so, how.

Last month, poking around on my iPhone, I noticed that I’d never opened the Health app. That’s because I’ve always been ambivalent about the quantified self movement. In college, when I left competitive gymnastics and took up running, I avoided tracking time and distance. Even then, before the advent of fancy tech, I knew I was capable of obsessive data-gathering and analysis, and didn’t want to go there. It was enough to just run, enjoy the scenery, and feel the afterglow.

When I launched the Health app, I was surprised to see that it had been counting my steps since I became an iPhone user 18 months ago. Really? I don’t recall opting into that feature.

Still, it was (of course!) fascinating to see the data and trends. And one metric in particular grabbed my attention: Walking Asymmetry.

Walking asymmetry is the percent of time that your steps with one foot are faster or slower than the other foot.

An even or symmetrical walk is often an important physical therapy goal when recovering from injury.

Here’s my chart for the past year.

I first saw this in mid-December when the trend was at its peak. What caused it? Well, it’s been rainy here (thankfully!), so I’ve been riding less, maybe that was a factor?

Since then I haven’t biked more, though, and I’ve walked the usual mile or two most days, with longer hikes on weekends. Yet the data suggest that I’ve reversed the trend.

What’s going on here?

Maybe this form of biofeedback worked. Once aware of the asymmetry I subconsciously corrected it. But that doesn’t explain the November/December trend.

Maybe the metric is bogus. A phone in your pocket doesn’t seem like a great way to measure walking asymmetry. I’ve also noticed that my step count and distances vary, on days when I’m riding, in ways that are hard to explain.

I’d like to try some real gait analysis using wearable tech. I suspect that data recorded from a couple of bike rides, mountain hikes, and neighborhood walks could help me understand the forces at play, and that realtime feedback could help me balance those forces.

I wouldn’t want to wear it all the time, though. It’d be a diagnostic and therapeutic tool, not a lifestyle.


Mike Jones: self-issued

Computing Archaeology Expedition: The First Smiley :-)

In September 1982, artificial intelligence professor Scott Fahlman made a post on the Carnegie Mellon Computer Science Department “general” bboard inventing the original smiley :-). I remember thinking at the time when I read it “what a good idea!”. But in 2002 when I told friends about it, I couldn’t find Scott’s post online anywhere. […]

In September 1982, artificial intelligence professor Scott Fahlman made a post on the Carnegie Mellon Computer Science Department “general” bboard inventing the original smiley :-). I remember thinking at the time when I read it “what a good idea!”. But in 2002 when I told friends about it, I couldn’t find Scott’s post online anywhere.

So in 2002, I led a computing archaeology expedition to restore his post. As described in my original post describing this accomplishment, after a significant effort to locate it, on September 10, 2002 the original post made by Scott Fahlman on CMU CS general bboard was retrieved by Jeff Baird from an October 1982 backup tape of the spice vax (cmu-750x). Here is Scott’s original post:

19-Sep-82 11:44    Scott E Fahlman             :-)
From: Scott E Fahlman <Fahlman at Cmu-20c>

I propose that the following character sequence for joke markers:

:-)

Read it sideways. Actually, it is probably more economical to mark things that are NOT jokes, given current trends. For this, use

:-(

I’m reposting this here now both to commemorate the accomplishment nearly twenty years later, and because my page at Microsoft Research where it was originally posted is no longer available.

Wednesday, 29. December 2021

Just a Theory

Review: Project Hail Mary

A brief review of the new book by Andy Weir.

Project Hail Mary by Andy Weir
2021 Ballantine Books

Project Hail Mary follows the success of Andy Weir’s first novel, The Martian, and delivers the same kind of enjoyment. If a harrowing story of a solitary man in extreme environments using science and his wits to overcome one obstacle after another sounds like your kind of thing, then this is the book for you. No super powers, no villains, no other people, really — just a competent scientist overcoming the odds through experimentation, constant iteration, and sheer creativity. Personally I can’t get enough of it. Shoot it right into my veins.

Andy Weir seems to know his strengths and weaknesses, given these two books. If you want to read stories of a diverse array of people interacting and growing through compelling character arcs, well, look elsewhere. Project Hail Mary doesn’t feature characters, really, but archetypes. No one really grows in this story: Ryland Grace, our protagonist and narrator, displays a consistent personality from start to finish. The book attempts to show him overcoming a character flaw, but it comes so late and at such variance to how he behaves and speaks to us that it frankly makes no sense.

But never mind, I can read other books for character growth and interaction. I’m here for the compelling plot, super interesting ideas and challenges (a whole new species that lives on the sun and migrates to Venus to breed? Lay it on me). It tickles my engineering and scientist inclinations, and we could use more of that sort of plotting in media.

So hoover it up. Project Hail Mary is a super fun adventure with compelling ideas, creative, competent people overcoming extreme circumstances without magic or hand-waving, and an unexpected friendship between two like-minded nerds in space.

I bet it’ll make a good movie, too.

More about… Books Andy Weir

Werdmüller on Medium

Hopes for 2022

Instead of a review of the year, let’s look ahead. Continue reading on Medium »

Instead of a review of the year, let’s look ahead.

Continue reading on Medium »


Doc Searls Weblog

Wayne Thiebaud, influencer

Just learned Wayne Thiebaud died, at 101. I didn’t know he was still alive. But I did know he had a lot of influence, most famously on pop art. Least famously, on me. Many of Thiebaud’s landscapes were from aerial perspectives. For example, this— —and this: In me, those influenced this— —and this— —and this— […]

Just learned Wayne Thiebaud died, at 101. I didn’t know he was still alive. But I did know he had a lot of influence, most famously on pop art. Least famously, on me.

Many of Thiebaud’s landscapes were from aerial perspectives. For example, this—

—and this:

In me, those influenced this—

—and this—

—and this—

—and this—

—and this—

—and this—

—and this—

—and even this:

Like Thiebaud, I love the high angle on the easily overlooked, and opportunity for revelations not obtainable from the ground, or in the midst.

Example. Can you guess where these mountains are?

Try Los Angeles. I shot that, as I did the others in this album, during the approach to LAX on a flight from Houston.

Here’s another shot in that series:

That’s 10,068-foot Mt. San Antonio, aka Old Baldy, highest of the San Gabriel Mountains. These are Los Angeles’ own Alps, which wall the north side of the L.A. basin, thwarting sprawl in that direction. The view is up San Antonio Canyon, below which lies a suburb-free delta of rocks and gravel spreading outward from the canyon’s mouth. Across that mouth, and across a series of similar ones below, are dams. These are for slowing “debris flows” coming out of the mountains after heavy rains, and sorting the flows’ contents into boulders, rocks, and gravel. Businesses that trade in these geological goods are also sited there. Imagine a business selling fresh lava from the base of a volcano, and you have some idea of how rapidly the geology changes here.

Anyway, while there is Thiebaud-informed art to that shot, there is also a purpose: I want people to see how these mountains are alive and dangerous in ways unlike any others flanking a city.

My main influence toward that purpose is John McPhee, the best nonfiction writer ever to walk the Earth—and report on it. Dig Los Angeles Against the Mountains. Doesn’t get better than that.

McPhee is 90 now. I dread losing him.

Friday, 24. December 2021

Simon Willison

Quoting danah boyd

Many of you here today are toolbuilders who help people work with data. Rather than presuming that those using your tools are clear-eyed about their data, how can you build features and methods that ensure people know the limits of their data and work with them responsibly? Your tools are not neutral. Neither is the data that your tools help analyze. How can you build tools that invite responsibl

Many of you here today are toolbuilders who help people work with data. Rather than presuming that those using your tools are clear-eyed about their data, how can you build features and methods that ensure people know the limits of their data and work with them responsibly? Your tools are not neutral. Neither is the data that your tools help analyze. How can you build tools that invite responsible data use and make visible when data is being manipulated? How can you help build tools for responsible governance?

danah boyd


The Asymmetry of Open Source

The Asymmetry of Open Source Caddy creator Matt Holt provides "a comprehensive guide to funding open source software projects". This is really useful - it describes a whole range of funding models that have been demonstrated to work, including sponsorship, consulting, private support channels and more. Via @mholt6

The Asymmetry of Open Source

Caddy creator Matt Holt provides "a comprehensive guide to funding open source software projects". This is really useful - it describes a whole range of funding models that have been demonstrated to work, including sponsorship, consulting, private support channels and more.

Via @mholt6


Weeknotes: datasette-tiddlywiki, filters_from_request

I made some good progress on the big refactor this week, including extracting some core logic out into a new Datasette plugin hook. I also got distracted by TiddlyWiki and released a new Datasette plugin that lets you run TiddlyWiki inside Datasette. datasette-tiddlywiki TiddlyWiki is a fascinating and unique project. Jeremy Ruston has been working on it for 17 years now and I've still not see

I made some good progress on the big refactor this week, including extracting some core logic out into a new Datasette plugin hook. I also got distracted by TiddlyWiki and released a new Datasette plugin that lets you run TiddlyWiki inside Datasette.

datasette-tiddlywiki

TiddlyWiki is a fascinating and unique project. Jeremy Ruston has been working on it for 17 years now and I've still not seen another piece of software that works even remotely like it.

It's a full-featured wiki that's implemented entirely as a single 2.3MB page of HTML and JavaScript, with a plugin system that allows it to be extended in all sorts of interesting ways.

The most unique feature of TiddlyWiki is how it persists data. You can create a brand new wiki by opening tiddlywiki.com/empty.html in your browser, making some edits... and then clicking the circle-tick "Save changes" button to download a copy of the page with your changes baked into it! Then you can open that up on your own computer and keep on using it.

There's actually a lot more to TiddlyWiki persistence than that: The GettingStarted guide lists dozens of options that vary depending on operating system and browser - it's worth browsing through them just to marvel at how much innovation has happened around the project just in the persistence space.

One of the options is to run a little server that implements the WebServer API and persists data sent via PUT requests. SQLite is an obvious candidate for a backend, and Datasette makes it pretty easy to provide APIs on top of SQLite... so I decided to experiment with building a Datasette plugin that offers a full persistent TiddlyWiki experience.

datasette-tiddlywiki is the result.

You can try it out by running datasette install datasette-tiddlywiki and then datasette tiddlywiki.db --create to start the server (with a tiddlywiki.db SQLite database that will be created if it does not already exist.)

Then navigate to http://localhost:8001/-/tiddlywiki to start interacting with your new TiddlyWiki. Any changes you make there will be persisted to the tiddlywiki database.

I had a running research issue that I updated as I was figuring out how to build it - all sorts of fun TiddlyWiki links and TILs are embedded in that thread. The issue started out in my private "notes" GitHub repository but I transferred it to the datasette-tiddlywiki repository after I had created and published the first version of the plugin.

filters_from_request() plugin hook

My big breakthrough in the ongoing Datasette Table View refactor project was a realization that I could simplify the table logic by extracting some of it out into a new plugin hook.

The new hook is called filters_from_request. It acknowledges that the primary goal of the table page is to convert query string parameters - like ?_search=tony or ?id__gte=6 or ?_where=id+in+(1,+2+,3) into SQL where clauses.

(Here's a full list of supported table arguments.)

So that's what filters_from_request() does - given a request object it can return SQL clauses that should be added to the WHERE.

Datasette now uses those internally to implement ?_where= and ?_search= and ?_through=, see datasette/filters.py.

I always try to accompany a new plugin hook with a plugin that actually uses it - in this case I've been updating datasette-leaflet-freedraw to use that hook to add a "draw a shape on a map to filter this table" interface to any table that it detects has a SpatiaLite geometry column. There's a demo of that here:

https://calands.datasettes.com/calands/CPAD_2020a_SuperUnits?_freedraw=%7B%22type%22%3A%22MultiPolygon%22%2C%22coordinates%22%3A%5B%5B%5B%5B-121.92627%2C37.73597%5D%2C%5B-121.83838%2C37.68382%5D%2C%5B-121.64063%2C37.45742%5D%2C%5B-121.57471%2C37.19533%5D%2C%5B-121.81641%2C36.80928%5D%2C%5B-122.146%2C36.63316%5D%2C%5B-122.56348%2C36.65079%5D%2C%5B-122.89307%2C36.79169%5D%2C%5B-123.06885%2C36.96745%5D%2C%5B-123.09082%2C37.33522%5D%2C%5B-123.0249%2C37.562%5D%2C%5B-122.91504%2C37.77071%5D%2C%5B-122.71729%2C37.92687%5D%2C%5B-122.58545%2C37.96152%5D%2C%5B-122.10205%2C37.96152%5D%2C%5B-121.92627%2C37.73597%5D%5D%5D%5D%7D

Note the new custom ?_freedraw={...} parameter which accepts a GeoJSON polygon and uses it to filter the table - that's implemented using the new hook.

This isn't in a full Datasette release yet, but it's available in the Datasette 0.60a1 alpha (added in 0.60a0) if you want to try it out.

Optimizing populate_table_schemas()

I introduced the datasette-pretty-traces plugin last week - it makes it much easier to see the queries that are running on any given Datasette page.

This week I realized it wasn't tracking write queries, so I added support for that - and discovered that, on the first page load after starting up, Datasette spends a lot of time populating its own internal database containing schema information (see Weeknotes: Datasette internals from last year.)

I opened a tracking ticket and made a bunch of changes to optimize this. The new code in datasette/utils/internal_db.py uses two new documented internal methods:

db.execute_write_script() and db.execute_write_many()

These are the new methods that were created as part of the optimization work. They are documented here:

await db.execute_write_script(sql, block=True)
await db.execute_write_many(sql, params_seq, block=True)

They are Datasette's async wrappers around the Python sqlite3 module's executemany() and executescript() methods.

I also made a breaking change to Datasette's existing execute_write() and execute_write_fn() methods: their block= argument now defaults to True, where it previously defaulted to False.

Prior to this change, db.execute_write(sql) would put the passed SQL in a queue to be executed once the write connection became available... and then return control to the calling code, whether or not that SQL had actually run- a fire-and-forget mechanism for executing SQL.

The block=True option would change it to blocking until the query had finished executing.

Looking at my own code, I realized I had never once used the fire-and-forget mechanism: I always used block=True to ensure the SQL had finished writing before I moved on.

So clearly block=True was a better default. I made that change in issue 1579.

This is technically a breaking change... but I used the new GitHub code search to see if anyone was using it in a way that would break and could only find one example of it in code not written by me, in datasette-webhook-write - and since they use block=True there anyway this update won't break their code.

If I'd released Datasette 1.0 I would still consider this a breaking change and bump the major version number, but thankfully I'm still in the 0.x range where I can be a bit less formal about these kinds of thing!

Releases this week

datasette-tiddlywiki: 0.1 - 2021-12-23
Run TiddlyWiki in Datasette and save Tiddlers to a SQLite database

asyncinject: 0.2 - (4 releases total) - 2021-12-21
Run async workflows using pytest-fixtures-style dependency injection

datasette: 0.60a1 - (104 releases total) - 2021-12-19
An open source multi-tool for exploring and publishing data

datasette-pretty-traces: 0.3.1 - (5 releases total) - 2021-12-19
Prettier formatting for ?_trace=1 traces

TIL this week

Creating a minimal SpatiaLite database with Python
Safely outputting JSON
Annotated explanation of David Beazley's dataklasses
Adding a robots.txt using Cloudflare workers
Transferring a GitHub issue from a private to a public repository

Thursday, 23. December 2021

Kyle Den Hartog

Financing Open Source Software Development with DAO Governance Tokens

Is it possible to fix the tragedy of the commons problem with a DAO Governance Token?

One of the biggest problems in open source software development today is that the majority of open source software is written by developers as side projects on their nights and weekends. Out of the mix of developers who do produce software on their nights and weekends, only a small sliver receive any funding for their work. Of the small portion of developers who do get sponsored, an even smaller percentage are actually able to make enough money to fully cover their expenses in life. So clearly we haven’t developed a sustainable solution to finance open source software development. So what are the main ways that open source software development gets funded? The two primary methods that I see open source software being developed with are organizational sponsors and altruistic funding. Let’s break these down a bit more to gain a better understanding of them.

The most common and well understood way that open source projects are funded today is via for-profit corporations sponsoring development by allowing their full time staff to work on these large projects. Some great examples of this are projects like Kubernetes, the Linux kernel, the React Framework, Hashicorp Vault, and the Rust programming language. In all of these examples, the projects are either directly managed via a team of developers at a large organization (think the React Framework being maintained by Facebook), managed by a startup who open-sources their core product with additional sticky features (think Hashicorp Vault), managed by a foundation with a combination of many different developers from many different organizations (think Kubernetes and the Linux kernel these days, and now Rust), or, finally, hybrid projects which have transitioned from one category to another over time (think the Rust language being started at Mozilla and then transferred to a foundation). With all of these models one thing is clear: developers have a day job that pays them and they’re essentially employed to produce open source software. The reasons why many companies fund developers to produce open source software are so scattered that I’m sure I couldn’t name them all. However, one thing in my experience is clear, and that is that most companies have some form of strategic decision at play that leads them down the path of making their source code open. That strategy may be as simple as wanting to allow others to solve a problem they’ve had to solve, wanting to leverage open source as a sales channel, or simply looking for free software development contributions from developers who like the project. Whatever the reason a company has to justify its contributions, it’s pretty clear that this is a major avenue for contribution to the OSS community.

The second most common method of development, which has been around for a while but has only recently become a more legitimate model of funding, is altruistic funding. What I mean by this is that people, organizations, or other such entities will “sponsor” a developer who’s released an open source project that they believe should continue to be worked on. This was most commonly done via Paypal or Buy Me a Coffee in the past, with Patreon and GitHub Sponsors getting involved more recently as well. This model of funding is becoming a more common way to fund a small project which is used substantially by much larger projects or companies who want some certainty that the project will continue to be maintained in the future. It has shown some promise for becoming a sustainable source of funding for developers who are looking for a way to monetize their projects without the massive overhead that comes with starting a company. However, while this method does leave the original maintainer in control of their project to continue to bring their vision to reality, it oftentimes does not provide a sustainable and large enough income for most maintainers to pursue this avenue full time.

So what’s up with this DAO governance token idea then?

To put it simply, the concept of leveraging a DAO token to manage an open source project is still just that - an idea. So why do I consider it worth exploring? Today, in the DeFi space we see many different projects that are being built completely open source, often with very complex tokenomics schemes just to sustainably fund the development of the protocol. With each new project the developers need to find a new way to integrate a token into the system in order to fund the time and effort that they’d like to put into growing the project. However, what if we could re-shape the purpose of tokens to make them actually about what the tokens are for, which is funding the development of the project, rather than trying to create a new gift card scheme for each new project?

The way I imagine this would work is via a DAO governance token which effectively represents a share of the project. Each token that’s already been minted would allow for voting on proposals to accept or reject new changes to the project, in the same way that DAOs allow for decentralized governance of treasuries today. However, these proposals all come in the form of a pull request to modify the code, allowing developers to directly receive value for the proposed changes they’re making. Where things get interesting is that along with the new pull request comes a proposal set forth by the contributor, who assigns a value they believe the work is worth, represented as new tokens which would be minted if the pull request is approved. This effectively dilutes the value of the current tokens in exchange for work done to improve the project, leading to an interesting trade in value. Current majority stakeholders give up a small portion of their funds in exchange for receiving new contributions if and only if they believe the dilution is acceptable.

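As a toy illustration of that dilution trade (the numbers and the sketch below are purely hypothetical, not part of any proposed protocol), approving a pull request that mints new tokens shrinks every existing holder’s percentage share, which they would only accept if the merged work makes the whole project worth more:

using System;

class TokenDilutionExample
{
    static void Main()
    {
        // hypothetical numbers: 1,000 tokens exist and one maintainer holds 400 of them
        double totalSupply = 1_000;
        double holderTokens = 400;

        // a contributor's pull request proposes minting 100 new tokens as payment
        double mintedForPullRequest = 100;

        double shareBefore = holderTokens / totalSupply;                            // 40.0%
        double shareAfter = holderTokens / (totalSupply + mintedForPullRequest);    // ~36.4%

        Console.WriteLine($"Share before the merge: {shareBefore:P1}");
        Console.WriteLine($"Share after the merge:  {shareAfter:P1}");

        // the holder accepts roughly 3.6 points of dilution only if they believe the
        // merged contribution increases the project's overall value by more than that
    }
}
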
So how does this make developers money?

As a project grows and is utilized by more and more developers, it will create an economic incentive for people and companies who wish to steer the project to buy up the currently available tokens or to contribute to the project in order to collect these tokens. This value would be tradeable for real world value, either for money to buy food or for additional utility in upstream or downstream projects. The value of the tokens is only as great as the number of people who are utilizing the project and believe they need the ability to affect the direction of the project or make sure it remains maintained. Meaning for projects like Kubernetes, where numerous companies have their core infrastructure built on top of the project, those companies want to make sure their features are getting added and supported. Just like they do today in the Cloud Native Computing Foundation, which sees many people from many different organizations and backgrounds contributing to the project now.

Where this becomes interesting is in the economic decision making that happens as a market is formed around maintainership of projects. Along with many of the good things that will be introduced, like being able to have more full-time freelance software developers available, I’m sure interesting economic issues will be introduced as well. It’s my belief though that this controversy will be worked out in different ways through different principles that projects will choose. One of the most obvious problems in large projects today is contentious forking, such as when SushiSwap forked Uniswap and started taking SushiSwap in a different direction. However, the legitimacy of the fork will help to form interesting economic behaviors around whether the value of the fork will go up, like SushiSwap has shown by adding new and interesting contributions to their fork, or whether it will go down, like many of the random clone projects that often lead to scams do.

I believe that if the mechanics of the maintainership role are established correctly then it may even be possible to create some interesting dynamics to reduce forking by leveraging the market dynamics. As an example, if the DAO fork was required to mint the same number of tokens in the new project as the original project and assign them to the same maintainers, then the original maintainers of the project could leverage their newly minted tokens in the new project to outright reject all proposals in the fork and slow down the momentum of the project. I tend to think this may be bad for innovation, but it’s an interesting example of how leveraging markets to make maintainership decisions could be used to build sustainability in open source development. Maintainership status has legitimate value that, if broken up and governed properly, could be leveraged to reshape how software development is funded.

Wednesday, 22. December 2021

Doc Searls Weblog

Rage in Peace

The Cluetrain Manifesto had four authors but one voice, and that was Chris Locke‘s. Cluetrain, a word that didn’t exist before Chris (aka RageBoy), David Weinberger, Rick Levine and I made it up during a phone conversation in early 1999 (and based it on a joke about a company that didn’t get clues delivered by […]

The Cluetrain Manifesto had four authors but one voice, and that was Chris Locke‘s.

Cluetrain, a word that didn’t exist before Chris (aka RageBoy), David Weinberger, Rick Levine and I made it up during a phone conversation in early 1999 (and based it on a joke about a company that didn’t get clues delivered by train four times a day), is now tweeted constantly, close to 23 years later. (And by now belongs in the OED.)

In his book The Tipping Point, which was published the same month as The Cluetrain Manifesto (January, 2000), Malcolm Gladwell said, “the success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” He also called this “the Law of the Few.” Among those few, one needed three kinds of people: mavens, connectors, and salespeople. Chris was all three. To different degrees so were David, Rick and myself; but Chris was the best, especially at connecting. He was the one who brought us together. And he was the one who sold us on making something happen. He moved us from one Newtonian state to another—a body at rest to a body in motion—by sending us this little graphic:

After we got that, we had to put up the Cluetrain website. And then we had to expand that site into a book, thanks to the viral outbreak of interest that followed a column about the site—and Chris especially, face and all—in The Wall Street Journal. Though a great enemy of marketing-as-usual, nobody was better than Chris at spreading a word. I mean, damn: dude got Cluetrain in the fucking Wall Street Journal! (Huge hat tip to Tom Petzinger for writing that column, and for writing the book’s foreword as well.)

Want to know Chris’s marketing techniques? Read Gonzo Marketing: Winning Through Worst Practices, which followed Cluetrain, and had the best cover ever, with bullet holes (actual holes) through a barcode, and a red page behind it. I’m sure Chris came up with that idea. His graphic sense was equally creative, sharp and—as with everything—outrageous.

Or listen to the audio version, performed by Chris in his perfect baritone voice.

Alas, Chris died yesterday, after a long struggle with COPD. (Too much smoke, for too long. Got my dad and my old pal Ray too. That cigarette smoking has become unfashionable is a grace of our time.)

Good God, what a great writer Chris was. Try Winter Solstice. One pull-quote: “We learn to love the lie we must tell ourselves to survive.”

And his stories. OMG, were they good. Better than fiction, and all true.

For example, you know how, when two people are first getting to know each other, they exchange stories about parts of their lives? I remember once telling Chris that my parents were frontier types who met in Alaska. While I thought that would take us down an interesting story hole (my parents really were interesting people), Chris blasted open a conversational hole of his own the size of a crater: “My father was a priest and my mother was a nun.” Top that.

Once, when I missed a plane from SFO to meet Chris in Denver, I mentioned that I was standing next to a strangely wide glass wall at my just-vacated gate in Terminal 1. “I know that gate well,” he said. “And that glass is a trip. I once missed a plane there myself while I was on acid and got totally into that glass wall.” I don’t remember what he said after that, except that it was outrageous (for anyone but Chris) and I couldn’t stop laughing as his story went on.

Among too many other stories to count, here is one I hope his soul forgives me for lifting (along with that picture of him) from a thread on Facebook:

on this Father’s Day I am recalling getting drunk with MY dad on Christmas Eve 1968, as was our custom back then (this month I am 34 years sober). he told me he was suicidal and i knew he meant it. so I turned him on to acid there and then. it was a bit of a rocky trip, but things were better for him after that.

btw, when the trip got really rough, I tricked him into thinking he could fall asleep. “If you want to come down, just take six of these big bomber multivitamin pills and that’ll be it.” fat chance! but he fell sound asleep. as I sat next to him marveling at the sound of guardian angel wings softly beating over us, THE PHONE RANG!!! OMG. at like 4am! and worse, it was my judgmental hyper-Catholic MOTHER!! she said…

hello, is your father over there

….yes… I said.

are you two taking LSD?

oh no! had she gone psychic??

….yes… I said, fearful of what was coming next.

THANK GOD, she said. SOMETHING had to give.

and then:

“well, have a good trip,” she said, and rang off.

I’ll leave you with this, from a post on Chris’s Rageboy blog called Dust My Boom. It was written on the occasion of an odd wind coming toward Boulder that now seems prophetic toward the future that came three days ago when a wind-driven fire swept across the landscape, eventually roasting close to 600 homes, a hotel, and a shopping center. Read the whole thing for more about the wind…

There’s so much you don’t know about me. Cannot ever, no matter how hard I try to make it otherwise. I have been places, done things impossible to recount. I remember nights of love, each different from all the rest. I have sat beside the dead in the room with the open windows. I have seen those ships on fire off Orion’s shoulder.

Yeah well. I wrote something into the cluetrain manifesto that must have raised some eyebrows among our more knowing cousins. And it went like this:

…People of Earth The sky is open to the stars. Clouds roll over us night and day. Oceans rise and fall. Whatever you may have heard, this is our world, our place to be. Whatever you’ve been told, our flags fly free. Our heart goes on forever. People of Earth, remember.

So I should end this now, but that’s way too dramatic and drama is the wrong note to end on. I think I need to put in something ordinary here, pedestrian. A joke maybe. A duck walks into a bar…

Because, whatever it is, it’s just the normal regular passage of time. Nothing mystical. Nothing shocking. We are born. We grow old. We die. In between, we sometimes get a glimpse of something. If I knew what it was, I’d tell you in a second. I don’t know. Take this piece of writing as my prayer flag flapping out in the wind of a day that came on sideways. Who knows where it’s headed? Tomorrow I have a con-call at noon, a website to build, and forty-one phone calls to return. Possibly lunch.

What I do know is that if you’re lonely and you’re hurting, then you’re human. What am I telling you this for? Hell if I know. To cheer you up maybe. Let me know if it worked.

And remember the man who said all that, and so much more. He was here for real, and he is missed.

Tuesday, 21. December 2021

Tim Bouma's Blog

Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4

The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is now available on GitHub Summary of Changes to Version 1.4: Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is a continued refinement as a result of application and iteration of the framework. While there are no major conceptual changes from Version 1.3, there are numerous refinements of d

The Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is now available on GitHub

Summary of Changes to Version 1.4:

Public Sector Profile of the Pan-Canadian Trust Framework Version 1.4 is a continued refinement as a result of application and iteration of the framework. While there are no major conceptual changes from Version 1.3, there are numerous refinements of definitions and descriptions and continued improvement of editorial and style consistency. Numerous improvements have been made due to feedback incorporated from the application of the PSP PCTF to trusted digital identity assessment and acceptance processes. Other changes have resulted from review of, and providing input into, the National Standard of Canada, CAN/CIOSC 103–1, Digital trust and identity — Part 1: Fundamentals. The PSP PCTF Assessment Workbook has been updated to reflect the latest changes.

Mike Jones: self-issued

Identity, Unlocked Podcast: OpenID Connect with Mike Jones

I had a fabulous time talking with my friend Vittorio Bertocci while recording the podcast Identity, Unlocked: OpenID Connect with Mike Jones. We covered a lot of ground in 43:29 – protocol design ground, developer ground, legal ground, and just pure history. As always, people were a big part of the story. Two of my […]

I had a fabulous time talking with my friend Vittorio Bertocci while recording the podcast Identity, Unlocked: OpenID Connect with Mike Jones. We covered a lot of ground in 43:29 – protocol design ground, developer ground, legal ground, and just pure history.

As always, people were a big part of the story. Two of my favorite parts are talking about how Kim Cameron brought me into the digital identity world to build the Internet’s missing identity layer (2:00-2:37) and describing how we applied the “Nov Matake Test” when thinking about keeping OpenID Connect simple (35:16-35:50).

Kim, I dedicate this podcast episode to you!

Monday, 20. December 2021

Damien Bod

Use calendar, mailbox settings and Teams presence in ASP.NET Core hosted Blazor WASM with Microsoft Graph

This article shows how to use Microsoft Graph with delegated permissions in a Blazor WASM ASP.NET Core hosted application. The application uses Microsoft.Identity.Web and the BFF architecture to authenticate against Azure AD. All security logic is implemented in the trusted backend. Microsoft Graph is used to access mailbox settings, teams presence and a user's calendar. […]

This article shows how to use Microsoft Graph with delegated permissions in a Blazor WASM ASP.NET Core hosted application. The application uses Microsoft.Identity.Web and the BFF architecture to authenticate against Azure AD. All security logic is implemented in the trusted backend. Microsoft Graph is used to access mailbox settings, teams presence and a user's calendar.

Code: https://github.com/damienbod/AspNetCoreBlazorMicrosoftGraph

2022-01-15 Updated Calendar and MailboxSettings to use application permissions.

Use Case and setup

A Blazor WASM UI hosted in ASP.NET Core is used to access any user's mailbox settings, Teams presence or calendar data in the same tenant. The application uses Azure AD for authentication. Blazorise is used in the Blazor WASM UI client project. An authenticated user can enter the target email to view the required data. Delegated Microsoft Graph permissions are used to authorize the API calls.

Setup Azure App registration

The Azure App registration is setup to allow the Microsoft Graph delegated permissions to access the mailbox settings, the teams presence data and the calendar data. The mail permissions are also added if you would like to send emails using Microsoft Graph. The application is a trusted server rendered one and can keep a secret. Because of this, the app is authenticated using a secret or a certificate. You should always authenticate the application if possible.

Implement the Blazor WASM ASP.NET Core Hosted authentication

Only one application exists for the UI and the backend and so only one Azure app registration is used. All authentication is implemented in the trusted backend. The BFF security architecture is used. Microsoft.Identity.Web is used in the trusted backend to authenticate the application and the identity. No authentication is implemented in the Blazor WASM; it is just a view of the server rendered application. The security architecture is simpler and no sensitive data is stored in the client browser. This is especially important since Azure AD does not support revocation, introspection or any way to invalidate the tokens on a logout. SPAs cannot fully logout in Azure AD or Azure B2C because the tokens cannot be invalidated. Because of this, you should not share the tokens with the untrusted zone; this is hard to secure and you need to evaluate the risk of losing the tokens for your system. Using cookies with same site protection and keeping the tokens in the trusted backend reduces these security risks. Here is a quick start for dotnet Blazor BFF using Azure AD: Blazor.BFF.AzureAD.Template.

The ConfigureServices method is used to setup the services. You can do this in the program file as well. The AddAntiforgery method is used because cookies are used to access the API. Same site is also used to protect the cookies which should only work on the same domain and no sub domains or any other domain. The AddMicrosoftIdentityWebAppAuthentication method is used with a downstream API used for Microsoft Graph. Razor pages are used as the Blazor WASM is hosted in a Razor page and dynamic server data can be used to protect the application or also be used to add meta tags.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<MicrosoftGraphDelegatedClient>();
    services.AddScoped<EmailService>();
    services.AddScoped<TeamsService>();
    services.AddScoped<MicrosoftGraphApplicationClient>();
    services.AddSingleton<ApiTokenInMemoryClient>();

    services.AddAntiforgery(options =>
    {
        options.HeaderName = "X-XSRF-TOKEN";
        options.Cookie.Name = "__Host-X-XSRF-TOKEN";
        options.Cookie.SameSite = Microsoft.AspNetCore.Http.SameSiteMode.Strict;
        options.Cookie.SecurePolicy = Microsoft.AspNetCore.Http.CookieSecurePolicy.Always;
    });

    services.AddHttpClient();
    services.AddOptions();

    var scopes = Configuration.GetValue<string>("DownstreamApi:Scopes");
    string[] initialScopes = scopes?.Split(' ');

    services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
        .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
        .AddMicrosoftGraph("https://graph.microsoft.com/beta", scopes)
        .AddInMemoryTokenCaches();

    services.AddControllersWithViews(options =>
        options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));

    services.AddRazorPages().AddMvcOptions(options => { })
        .AddMicrosoftIdentityUI();
}

The Configure method adds the middleware as required. The UseSecurityHeaders method adds all the required security headers which are possible for Blazor. The Razor page _Host is used as the fallback, not a static page.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseWebAssemblyDebugging();
    }
    else
    {
        app.UseExceptionHandler("/Error");
    }

    app.UseSecurityHeaders(
        SecurityHeadersDefinitions.GetHeaderPolicyCollection(env.IsDevelopment(),
            Configuration["AzureAd:Instance"]));

    app.UseHttpsRedirection();
    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();

    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers();
        endpoints.MapFallbackToPage("/_Host");
    });
}

The security headers are implemented using the NetEscapades.AspNetCore.SecurityHeaders Nuget package. This adds everything which is possible for a production deployment. If you want to use hot reload, you need to disable some of these policies. You must ensure that the CSS and script code is not implemented in a bad way, otherwise you leave your application open to attacks. A good dev environment should be as close as possible to the production deployment. I don't use hot reload due to this. Due to Blazorise, the style policy allows inline style in the CSP protection.

public static HeaderPolicyCollection GetHeaderPolicyCollection(bool isDev, string idpHost)
{
    var policy = new HeaderPolicyCollection()
        .AddFrameOptionsDeny()
        .AddXssProtectionBlock()
        .AddContentTypeOptionsNoSniff()
        .AddReferrerPolicyStrictOriginWhenCrossOrigin()
        .AddCrossOriginOpenerPolicy(builder =>
        {
            builder.SameOrigin();
        })
        .AddCrossOriginResourcePolicy(builder =>
        {
            builder.SameOrigin();
        })
        .AddCrossOriginEmbedderPolicy(builder => // remove for dev if using hot reload
        {
            builder.RequireCorp();
        })
        .AddContentSecurityPolicy(builder =>
        {
            builder.AddObjectSrc().None();
            builder.AddBlockAllMixedContent();
            builder.AddImgSrc().Self().From("data:");
            builder.AddFormAction().Self().From(idpHost);
            builder.AddFontSrc().Self();
            builder.AddStyleSrc().Self().UnsafeInline();
            builder.AddBaseUri().Self();
            builder.AddFrameAncestors().None();

            // due to Blazor
            builder.AddScriptSrc()
                .Self()
                .WithHash256("v8v3RKRPmN4odZ1CWM5gw80QKPCCWMcpNeOmimNL2AA=")
                .UnsafeEval();

            // Blazor hot reload requires you to disable script and style CSP protection
            // if using hot reload, DO NOT deploy with an insecure CSP
        })
        .RemoveServerHeader()
        .AddPermissionsPolicy(builder =>
        {
            builder.AddAccelerometer().None();
            builder.AddAutoplay().None();
            builder.AddCamera().None();
            builder.AddEncryptedMedia().None();
            builder.AddFullscreen().All();
            builder.AddGeolocation().None();
            builder.AddGyroscope().None();
            builder.AddMagnetometer().None();
            builder.AddMicrophone().None();
            builder.AddMidi().None();
            builder.AddPayment().None();
            builder.AddPictureInPicture().None();
            builder.AddSyncXHR().None();
            builder.AddUsb().None();
        });

    if (!isDev)
    {
        // maxage = one year in seconds
        policy.AddStrictTransportSecurityMaxAgeIncludeSubDomains(maxAgeInSeconds: 60 * 60 * 24 * 365);
    }

    return policy;
}

Microsoft Graph delegated client service

The GraphServiceClient service can be used directly from the IoC because of how we set up the Microsoft.Identity.Web configuration in the Startup class to use Microsoft Graph and Azure AD. A persistent cache is required for this to work correctly. The GetUserIdAsync method is used to get the Id of the user behind the email address. An equals filter is used. This method is used in most of the services.

private readonly GraphServiceClient _graphServiceClient;

public MicrosoftGraphDelegatedClient(GraphServiceClient graphServiceClient)
{
    _graphServiceClient = graphServiceClient;
}

private async Task<string> GetUserIdAsync(string email)
{
    var filter = $"userPrincipalName eq '{email}'";
    //var filter = $"startswith(userPrincipalName,'{email}')";

    var users = await _graphServiceClient.Users
        .Request()
        .Filter(filter)
        .GetAsync();

    if (users.CurrentPage.Count == 0)
    {
        return string.Empty;
    }

    return users.CurrentPage[0].Id;
}

The delegated client GetGraphApiUser method uses an email to get the profile data of that user. This can be any email from your tenant. The Users collection of the Microsoft Graph client is used.

public async Task<User> GetGraphApiUser(string email)
{
    var id = await GetUserIdAsync(email);
    if (string.IsNullOrEmpty(id))
        return null;

    return await _graphServiceClient.Users[id]
        .Request()
        .GetAsync();
}

The MicrosoftGraphApplicationClient service uses the Azure App registration application permissions with the client credentials flow. The MailboxSettings and the CalendarView use application permissions so that other users' data can be used in the tenant.

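The GetGraphClient method called in the following listings is not shown here. A minimal sketch of what it could look like follows, assuming Azure.Identity with the client credentials flow; the _configuration field and the "AzureAd" configuration keys are assumptions for this sketch, not code from the repository.

private GraphServiceClient GetGraphClient()
{
    // application permissions via the client credentials flow;
    // _configuration is an assumed injected IConfiguration instance
    var clientSecretCredential = new ClientSecretCredential(
        _configuration["AzureAd:TenantId"],
        _configuration["AzureAd:ClientId"],
        _configuration["AzureAd:ClientSecret"]);

    // ".default" requests the application permissions granted to the app registration
    var scopes = new[] { "https://graph.microsoft.com/.default" };

    return new GraphServiceClient(clientSecretCredential, scopes);
}
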
The GetUserMailboxSettings method is used to get the MailboxSettings for the given email. The Id for the user is requested, then the mailbox settings are returned for the user. This only works if the MailboxSettings.Read application permission is granted to the Azure App registration.

public async Task<MailboxSettings> GetUserMailboxSettings(string email)
{
    var graphServiceClient = GetGraphClient();

    var id = await GetUserIdAsync(email, graphServiceClient);
    if (string.IsNullOrEmpty(id))
        return null;

    var user = await graphServiceClient.Users[id]
        .Request()
        .Select("MailboxSettings")
        .GetAsync();

    return user.MailboxSettings;
}

private async Task<string> GetUserIdAsync(string email, GraphServiceClient graphServiceClient)
{
    var filter = $"userPrincipalName eq '{email}'";
    //var filter = $"startswith(userPrincipalName,'{email}')";

    var users = await graphServiceClient.Users
        .Request()
        .Filter(filter)
        .GetAsync();

    if (users.CurrentPage.Count == 0)
    {
        return string.Empty;
    }

    return users.CurrentPage[0].Id;
}

The GetCalanderForUser method returns the calendar for the given email. This returns a flat list of FilteredEvent items. Microsoft Graph returns an IUserCalendarViewCollectionPage, which is a bit complicated to use if you are only requesting small amounts of data. It works well for large results which need to be paged or streamed. The CalendarView is used with a 'to' and a 'from' datetime filter to request the calendar events. This uses application permissions and so the application (client credentials) client is used.

public async Task<List<FilteredEvent>> GetCalanderForUser(string email, string from, string to)
{
    var userCalendarViewCollectionPages = await GetCalanderForUserUsingGraph(email, from, to);

    var allEvents = new List<FilteredEvent>();

    while (userCalendarViewCollectionPages != null && userCalendarViewCollectionPages.Count > 0)
    {
        foreach (var calenderEvent in userCalendarViewCollectionPages)
        {
            var filteredEvent = new FilteredEvent
            {
                ShowAs = calenderEvent.ShowAs,
                Sensitivity = calenderEvent.Sensitivity,
                Start = calenderEvent.Start,
                End = calenderEvent.End,
                Subject = calenderEvent.Subject,
                IsAllDay = calenderEvent.IsAllDay,
                Location = calenderEvent.Location
            };

            allEvents.Add(filteredEvent);
        }

        if (userCalendarViewCollectionPages.NextPageRequest == null)
            break;

        // fetch the next page of results, otherwise the loop would repeat the same page
        userCalendarViewCollectionPages = await userCalendarViewCollectionPages.NextPageRequest.GetAsync();
    }

    return allEvents;
}

private async Task<IUserCalendarViewCollectionPage> GetCalanderForUserUsingGraph(
    string email, string from, string to)
{
    var graphServiceClient = GetGraphClient();

    var id = await GetUserIdAsync(email, graphServiceClient);
    if (string.IsNullOrEmpty(id))
        return null;

    var queryOptions = new List<QueryOption>()
    {
        new QueryOption("startDateTime", from),
        new QueryOption("endDateTime", to)
    };

    var calendarView = await graphServiceClient.Users[id].CalendarView
        .Request(queryOptions)
        .Select("start,end,subject,location,sensitivity, showAs, isAllDay")
        .GetAsync();

    return calendarView;
}

The GetPresenceforEmail method returns a Teams presence list for the given email. This only works if the Presence.Read.All delegated permission is granted to the Azure App registration. Again, Microsoft Graph returns a paged result, which is not required in our use case as we only want the presence for a single email.

public async Task<List<Presence>> GetPresenceforEmail(string email)
{
    var cloudCommunicationPages = await GetPresenceAsync(email);

    var allPresenceItems = new List<Presence>();

    while (cloudCommunicationPages != null && cloudCommunicationPages.Count > 0)
    {
        foreach (var presence in cloudCommunicationPages)
        {
            allPresenceItems.Add(presence);
        }

        if (cloudCommunicationPages.NextPageRequest == null)
            break;
    }

    return allPresenceItems;
}

private async Task<ICloudCommunicationsGetPresencesByUserIdCollectionPage> GetPresenceAsync(string email)
{
    var id = await GetUserIdAsync(email);
    var ids = new List<string>() { id };

    return await _graphServiceClient.Communications
        .GetPresencesByUserId(ids)
        .Request()
        .PostAsync();
}

Blazor Server API

The Blazor Server host application implements an API which requires cookies and a correct anti-forgery token to access the protected resource. This can only be accessed from the same domain. The cookie used also has same-site protection and should only work for the exact same domain. All the Blazor WASM API calls use this API for the Microsoft Graph data displays. The WASM application does not authenticate directly or use the Microsoft Graph service directly, so the Microsoft Graph client is not exposed to the untrusted client browser.

[ValidateAntiForgeryToken] [Authorize(AuthenticationSchemes = CookieAuthenticationDefaults.AuthenticationScheme)] [AuthorizeForScopes(Scopes = new string[] { "User.ReadBasic.All user.read" })] [ApiController] [Route("api/[controller]")] public class GraphApiCallsController : ControllerBase { private MicrosoftGraphDelegatedClient _microsoftGraphDelegatedClient; private MicrosoftGraphApplicationClient _microsoftGraphApplicationClient; private readonly TeamsService _teamsService; private readonly EmailService _emailService; public GraphApiCallsController(MicrosoftGraphDelegatedClient microsoftGraphDelegatedClient, MicrosoftGraphApplicationClient microsoftGraphApplicationClient, TeamsService teamsService, EmailService emailService) { _microsoftGraphDelegatedClient = microsoftGraphDelegatedClient; _microsoftGraphApplicationClient = microsoftGraphApplicationClient; _teamsService = teamsService; _emailService = emailService; } [HttpGet("UserProfile")] public async Task<IEnumerable<string>> UserProfile() { var userData = await _microsoftGraphDelegatedClient.GetGraphApiUser(User.Identity.Name); return new List<string> { $"DisplayName: {userData.DisplayName}", $"GivenName: {userData.GivenName}", $"Preferred Language: {userData.PreferredLanguage}" }; } [HttpPost("MailboxSettings")] public async Task<IActionResult> MailboxSettings([FromBody] string email) { if (string.IsNullOrEmpty(email)) return BadRequest("No email"); try { var mailbox = await _microsoftGraphApplicationClient.GetUserMailboxSettings(email); if(mailbox == null) { return NotFound($"mailbox settings for {email} not found"); } var result = new List<MailboxSettingsData> { new MailboxSettingsData { Name = "User Email", Data = email }, new MailboxSettingsData { Name = "AutomaticRepliesSetting", Data = mailbox.AutomaticRepliesSetting.Status.ToString() }, new MailboxSettingsData { Name = "TimeZone", Data = mailbox.TimeZone }, new MailboxSettingsData { Name = "Language", Data = mailbox.Language.DisplayName } }; return Ok(result); } catch (Exception ex) { return BadRequest(ex.Message); } } [HttpPost("TeamsPresence")] public async Task<IActionResult> PresencePost([FromBody] string email) { if (string.IsNullOrEmpty(email)) return BadRequest("No email"); try { var userPresence = await _microsoftGraphDelegatedClient.GetPresenceforEmail(email); if (userPresence.Count == 0) { return NotFound(email); } var result = new List<PresenceData> { new PresenceData { Name = "User Email", Data = email }, new PresenceData { Name = "Availability", Data = userPresence[0].Availability } }; return Ok(result); } catch (Exception ex) { return BadRequest(ex.Message); } } [HttpPost("UserCalendar")] public async Task<IEnumerable<FilteredEventDto>> UserCalendar(UserCalendarDataModel userCalendarDataModel) { var userCalendar = await _microsoftGraphApplicationClient.GetCalanderForUser( userCalendarDataModel.Email, userCalendarDataModel.From.Value.ToString("yyyy-MM-ddTHH:mm:ss.sssZ"), userCalendarDataModel.To.Value.ToString("yyyy-MM-ddTHH:mm:ss.sssZ")); return userCalendar.Select(l => new FilteredEventDto { IsAllDay = l.IsAllDay.GetValueOrDefault(), Sensitivity = l.Sensitivity.ToString(), Start = l.Start?.DateTime, End = l.End?.DateTime, ShowAs = l.ShowAs.Value.ToString(), Subject=l.Subject }); } [HttpPost("CreateTeamsMeeting")] public async Task<TeamsMeetingCreated> CreateTeamsMeeting(TeamsMeetingDataModel teamsMeetingDataModel) { var meeting = _teamsService.CreateTeamsMeeting( teamsMeetingDataModel.MeetingName, teamsMeetingDataModel.From.Value, 
teamsMeetingDataModel.To.Value); var attendees = teamsMeetingDataModel.Attendees.Split(';'); List<string> items = new(); items.AddRange(attendees); var updatedMeeting = _teamsService.AddMeetingParticipants( meeting, items); var createdMeeting = await _microsoftGraphDelegatedClient.CreateOnlineMeeting(updatedMeeting); var teamsMeetingCreated = new TeamsMeetingCreated { Subject = createdMeeting.Subject, JoinUrl = createdMeeting.JoinUrl, Attendees = createdMeeting.Participants.Attendees.Select(c => c.Upn).ToList() }; // send emails foreach (var attendee in createdMeeting.Participants.Attendees) { var recipient = attendee.Upn.Trim(); var message = _emailService.CreateStandardEmail(recipient, createdMeeting.Subject, createdMeeting.JoinUrl); await _microsoftGraphDelegatedClient.SendEmailAsync(message); } teamsMeetingCreated.EmailSent = "Emails sent to all attendees, please check your mailbox"; return teamsMeetingCreated; } }
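The cookie and anti-forgery protection required by this controller is configured on the Blazor Server host. The following is only a sketch of how this might be set up; the X-XSRF-TOKEN header name matches the header the WASM client sends below, and the rest of the options are assumptions:

// Sketch: anti-forgery and cookie settings for the Blazor Server host (assumed setup).
builder.Services.AddAntiforgery(options =>
{
    // must match the header the WASM client adds to its POST requests
    options.HeaderName = "X-XSRF-TOKEN";
    options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
    options.Cookie.SameSite = SameSiteMode.Strict;
});

builder.Services.Configure<CookiePolicyOptions>(options =>
{
    options.Secure = CookieSecurePolicy.Always;
    options.MinimumSameSitePolicy = SameSiteMode.Lax;
});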

Blazor Calendar client

The Blazor WASM client does not implement any security and does not require a Microsoft.Identity.Web client. I like Blazorise, and the NuGet packages are added to the UI project so that these components can be used. These are nice components, but they use inline CSS.

<ItemGroup>
  <PackageReference Include="blazored.sessionstorage" Version="2.2.0" />
  <PackageReference Include="Blazorise" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.Components" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.DataGrid" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.Icons.FontAwesome" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.Icons.Material" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.Material" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.Sidebar" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.Snackbar" Version="0.9.5.2" />
  <PackageReference Include="Blazorise.SpinKit" Version="0.9.5.2" />
  <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly" Version="6.0.1" />
  <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.DevServer" Version="6.0.1" PrivateAssets="all" />
  <PackageReference Include="Microsoft.Extensions.Http" Version="6.0.0" />
  <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.Authentication" Version="6.0.1" />
</ItemGroup>

The Blazor WASM client is hosted in a Razor Page. This makes it possible to add dynamic data. The anti-forgery token is added here, as well as the security headers and dynamic metadata if required.

@page "/" @namespace AspNetCoreMicrosoftGraph.Pages @using AspNetCoreMicrosoftGraph.Client @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers @{ Layout = null; } <!DOCTYPE html> <html lang="en"> <head> <link rel="apple-touch-icon" sizes="57x57" href="/apple-icon-57x57.png"> <link rel="apple-touch-icon" sizes="60x60" href="/apple-icon-60x60.png"> <link rel="apple-touch-icon" sizes="72x72" href="/apple-icon-72x72.png"> <link rel="apple-touch-icon" sizes="76x76" href="/apple-icon-76x76.png"> <link rel="apple-touch-icon" sizes="114x114" href="/apple-icon-114x114.png"> <link rel="apple-touch-icon" sizes="120x120" href="/apple-icon-120x120.png"> <link rel="apple-touch-icon" sizes="144x144" href="/apple-icon-144x144.png"> <link rel="apple-touch-icon" sizes="152x152" href="/apple-icon-152x152.png"> <link rel="apple-touch-icon" sizes="180x180" href="/apple-icon-180x180.png"> <link rel="icon" type="image/png" sizes="192x192" href="/android-icon-192x192.png"> <link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png"> <link rel="icon" type="image/png" sizes="96x96" href="/favicon-96x96.png"> <link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png"> <link rel="manifest" href="/manifest.json"> <meta name="msapplication-TileColor" content="#ffffff"> <meta name="msapplication-TileImage" content="/ms-icon-144x144.png"> <meta name="theme-color" content="#ffffff"> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> <title>Blazor Graph</title> <base href="~/" /> <link rel="stylesheet" href="/lib/font-awesome/css/all.min.css"> <!-- Material CSS --> <link href="css/material.min.css" rel="stylesheet"> <!-- Add Material font (Roboto) and Material icon as needed --> <link href="css/googlefontsroboto.css" rel="stylesheet"> <link href="css/fonts.google.icons.css" rel="stylesheet"> <link href="_content/Blazorise/blazorise.css" rel="stylesheet" /> <link href="_content/Blazorise.Material/blazorise.material.css" rel="stylesheet" /> <link href="_content/Blazorise.Icons.Material/blazorise.icons.material.css" rel="stylesheet" /> <link href="_content/Blazorise.SpinKit/blazorise.spinkit.css" rel="stylesheet" /> <link href="_content/Blazorise.Snackbar/blazorise.snackbar.css" rel="stylesheet" /> <link href="css/app.css" rel="stylesheet" /> <link href="AspNetCoreMicrosoftGraph.Client.styles.css" rel="stylesheet" /> <link href="manifest.json" rel="manifest" /> <link rel="apple-touch-icon" sizes="512x512" href="icon-512.png" /> </head> <body> <div id="app"> <!-- Spinner --> <div class="spinner d-flex align-items-center justify-content-center spinner"> <div class="spinner-border text-success" role="status"> <span class="sr-only">Loading...</span> </div> </div> </div> <div id="blazor-error-ui"> <environment include="Staging,Production"> An error has occurred. This application may no longer respond until reloaded. </environment> <environment include="Development"> An unhandled exception has occurred. See browser dev tools for details. 
</environment> <a href="" class="reload">Reload</a> <a class="dismiss">🗙</a> </div> <script src="lib/jquery/jquery.slim.min.js"></script> <script src="lib/popper.js/umd/popper.min.js"></script> <script src="js/material.min.js"></script> @*<script src="_content/Blazorise/blazorise.js"></script>*@ <script src="_content/Blazorise.Material/blazorise.material.js"></script> <script src="lib/flatpickr/l10n/default.js"></script> <script src="lib/flatpickr/l10n/de.js"></script> <script src="lib/flatpickr/l10n/fr.js"></script> <script src="lib/flatpickr/l10n/it.js"></script> <script language="javascript"> flatpickr.localize(flatpickr.l10ns.default); /*set global*/ </script> <script src="_framework/blazor.webassembly.js" ></script> <script src="antiForgeryToken.js" ></script> @Html.AntiForgeryToken() </body> </html>

The user calendar WASM view uses an input text field to enter any tenant email. This posts a request to the server API and returns the data, which is displayed in the Blazorise DataGrid.

@page "/usercalendar" @inject IHttpClientFactory HttpClientFactory @inject IJSRuntime JSRuntime <h4>Calendar Events</h4> <Validations StatusChanged="@OnStatusChanged"> <Validation Validator="@ValidateEmail" > <TextEdit Placeholder="Enter email" @bind-Text="userCalendarDataModel.Email" > <Feedback> <ValidationNone>Please enter the email.</ValidationNone> <ValidationSuccess>Email is good.</ValidationSuccess> <ValidationError>Enter valid email!</ValidationError> </Feedback> </TextEdit> </Validation> <Field Horizontal="true"> <FieldLabel ColumnSize="ColumnSize.IsFull.OnTablet.Is2.OnDesktop">From</FieldLabel> <FieldBody ColumnSize="ColumnSize.IsFull.OnTablet.Is10.OnDesktop"> <DateEdit TValue="DateTime?" InputMode="DateInputMode.DateTime" @bind-Date="userCalendarDataModel.From" /> </FieldBody> </Field> <Field Horizontal="true"> <FieldLabel ColumnSize="ColumnSize.IsFull.OnTablet.Is2.OnDesktop">To</FieldLabel> <FieldBody ColumnSize="ColumnSize.IsFull.OnTablet.Is10.OnDesktop"> <DateEdit TValue="DateTime?" InputMode="DateInputMode.DateTime" @bind-Date="userCalendarDataModel.To" /> </FieldBody> </Field> <br /> <Button Color="Color.Primary" Disabled="@saveDisabled" PreventDefaultOnSubmit="true" Clicked="@Submit">Get calendar events for user</Button> </Validations> <br /><br /> @if (filteredEvents == null) { <p><em>@noDataResult</em></p> } else { <DataGrid TItem="FilteredEventDto" Data="@filteredEvents" Bordered="true" @bind-SelectedRow="@selectedFilteredEvent" PageSize=15 Responsive> <DataGridCommandColumn TItem="FilteredEventDto" /> <DataGridColumn TItem="FilteredEventDto" Field="@nameof(FilteredEventDto.Subject)" Caption="Subject" Sortable="true" /> <DataGridColumn TItem="FilteredEventDto" Field="@nameof(FilteredEventDto.Start)" Caption="Start" Editable="false" /> <DataGridColumn TItem="FilteredEventDto" Field="@nameof(FilteredEventDto.End)" Caption="End" Editable="false" /> <DataGridColumn TItem="FilteredEventDto" Field="@nameof(FilteredEventDto.Sensitivity)" Caption="Sensitivity" Editable="false"/> <DataGridColumn TItem="FilteredEventDto" Field="@nameof(FilteredEventDto.IsAllDay)" Caption="IsAllDay" Editable="false"/> <DataGridColumn TItem="FilteredEventDto" Field="@nameof(FilteredEventDto.ShowAs)" Caption="ShowAs" Editable="false"/> </DataGrid> } @code { private List<FilteredEventDto> filteredEvents; private UserCalendarDataModel userCalendarDataModel { get; set; } = new UserCalendarDataModel() { From = DateTime.UtcNow.AddDays(-7.0), To = DateTime.UtcNow.AddDays(7.0) }; private FilteredEventDto selectedFilteredEvent; private string noDataResult { get; set; } = "no data"; bool saveDisabled = true; Task OnStatusChanged( ValidationsStatusChangedEventArgs eventArgs ) { saveDisabled = eventArgs.Status != ValidationStatus.Success; return Task.CompletedTask; } void ValidateEmail( ValidatorEventArgs e ) { var email = Convert.ToString( e.Value ); e.Status = string.IsNullOrEmpty( email ) ? ValidationStatus.None : email.Contains( "@" ) ? 
ValidationStatus.Success : ValidationStatus.Error; } async Task Submit() { await PostData(userCalendarDataModel); } private async Task PostData(UserCalendarDataModel userCalendarDataModel) { var token = await JSRuntime.InvokeAsync<string>("getAntiForgeryToken"); var client = HttpClientFactory.CreateClient("default"); client.DefaultRequestHeaders.Add("X-XSRF-TOKEN", token); var response = await client.PostAsJsonAsync<UserCalendarDataModel>("api/GraphApiCalls/UserCalendar", userCalendarDataModel); if(response.IsSuccessStatusCode) { filteredEvents = await response.Content.ReadFromJsonAsync<List<FilteredEventDto>>(); } else { var error = await response.Content.ReadAsStringAsync(); filteredEvents = null; noDataResult = error; } } }

Running the application, a user can sign in and request the calendar data, mailbox settings or Teams presence of any user in the tenant.

Using Microsoft Graph together with Microsoft.Identity.Web works really well and can be implemented with little effort. By using the backend for frontend (BFF) architecture and hosting the WASM client in an ASP.NET Core application, less sensitive data needs to be exposed, and it is possible to sign out without tokens possibly still existing in the untrusted zone after the logout. Blazor and Blazorise could be improved to support a better CSP and better security headers.
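As an example of the security headers mentioned above, the NetEscapades.AspNetCore.SecurityHeaders package (linked below) could be applied roughly as follows. The CSP shown is only a sketch; Blazor WASM and Blazorise need relaxations such as inline styles, and the policy would have to be tuned per application:

// Sketch: security headers middleware using NetEscapades.AspNetCore.SecurityHeaders.
var policyCollection = new HeaderPolicyCollection()
    .AddDefaultSecurityHeaders()
    .AddContentSecurityPolicy(csp =>
    {
        csp.AddDefaultSrc().Self();
        csp.AddScriptSrc().Self();
        csp.AddStyleSrc().Self().UnsafeInline(); // Blazorise uses inline CSS
        csp.AddImgSrc().Self().Data();
    });

app.UseSecurityHeaders(policyCollection);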

Links

https://blazorise.com/

https://github.com/AzureAD/microsoft-identity-web

https://docs.microsoft.com/en-us/graph/api/user-get-mailboxsettings

https://docs.microsoft.com/en-us/graph/api/presence-get

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/content-security-policy

https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders


Simon Willison

Annotated explanation of David Beazley's dataklasses

David Beazley released a self-described "deliciously evil spin on dataclasses" that uses some deep Python trickery to implement a dataclass style decorator which creates classes that import 15-20 times faster than the original. I put together a heavily annotated version of his code while trying to figure out how all of the different Python tricks in it work.

Via @simonw

Sunday, 19. December 2021

Mike Jones: self-issued

Stories of Kim Cameron

Since Kim’s passing, I’ve been reflecting on his impact on my life and remembering some of the things that made him special. Here’s a few stories I’d like to tell in his honor.

Kim was more important to my career and life than most people know. Conversations with him in early 2005 led me to leave Microsoft Research and join his quest to “Build the Internet’s missing identity layer” – a passion that still motivates me to this day.

Within days of me joining the identity quest, Kim asked me to go with him to the first gathering of the Identity Gang at PC Forum in Scottsdale, Arizona. Many of the people that I met there remain important in my professional and personal life! The first Internet Identity Workshop soon followed.

Kim taught me a lot about building positive working relationships with others. Early on, he told me to always try to find something nice to say to others. Showing his devious sense of humor, he said “Even if you are sure that their efforts are doomed to fail because of fatal assumptions on their part, you can at least say to them ‘You’re working on solving a really important problem!’ :-)” He modelled by example that consensus is much easier to achieve when you make allies rather than enemies. And besides, it’s a lot more fun for everyone that way!

Kim was always generous with his time and hospitality and lots of fun to be around. I remember he and Adele inviting visitors from Deutsche Telekom to their home overlooking the water in Bellevue. He organized a night at the opera for identity friends in Munich. He took my wife Becky and I and Tony Nadalin out to dinner at his favorite restaurant in Paris, La Coupole. He and Adele were the instigators behind many a fun evening. He had a love of life beyond compare!

At one point in my career, I was hoping to switch to a manager more supportive of my passion for standards work, and asked Kim if I could work for him. I’ll always remember his response: “Having you work for me would be great, because I wouldn’t have to manage you. But the problem is that then they’d make me have others work for me too. Managing people would be the death of me!”

This blog exists because Kim encouraged me to blog.

I once asked Kim why there were so many Canadians working in digital identity. He replied: “Every day as a Canadian, you think ‘What is it that makes me uniquely Canadian, as opposed to being American? Whereas Americans never give it a thought. Canadians are always thinking about identity.'”

Kim was a visionary and a person of uncommon common sense. His Information Card paradigm was ahead of its time. For instance, the “selecting cards within a wallet” metaphor that Windows CardSpace introduced is now widespread – appearing in platform and Web account selectors, as well as emerging “self-sovereign identity” wallets, containing digital identities that you control. The demos people are giving now sure look a lot like InfoCard demos from back in the day!

Kim was a big believer in privacy and giving people control over their own data (see the Laws of Identity). He championed the effort for Microsoft to acquire and use the U-Prove selective disclosure technology, and to make it freely available for others to use.

Kim was hands-on. To get practical experience with OpenID Connect, he wrote a complete OpenID Provider in 2018 and even got it certified! You can see the certification entry at https://openid.net/certification/ for the “IEF Experimental Claimer V0.9” that he wrote.

Kim was highly valued by Microsoft’s leaders (and many others!). He briefly retired from Microsoft most of a decade ago, only to have the then-Executive Vice President of the Server and Tools division, Satya Nadella, immediately seek him out and ask him what it would take to convince him to return. Kim made his asks, the company agreed to them, and he was back within about a week. One of his asks resulted in the AAD business-to-customer (B2C) identity service in production use today. He also used to have regular one-on-ones with Bill Gates.

Kim wasn’t my mentor in any official capacity, but he was indeed my mentor in fact. I believe he saw potential in me and chose to take me under his wing and help me develop in oh so many ways. I’ll always be grateful for that, and most of all, for his friendship.

In September 2021 at the European Identity and Cloud (EIC) conference in Munich, Jackson Shaw and I remarked to each other that neither of us had heard from Kim in a while. I reached out to him, and he responded that his health was failing, without elaborating. Kim and I talked for a while on the phone after that. He encouraged me that the work we are doing now is really important, and to press forward quickly.

On October 25, 2021, Vittorio Bertocci organized an informal CardSpace team reunion in Redmond. Kim wished he could come but his health wasn’t up to travelling. Determined to include him in a meaningful way, I called him on my phone during the reunion and Kim spent about a half hour talking to most of the ~20 attendees in turn. They shared stories and laughed! As Vittorio said to me when we learned of his passing, we didn’t know then that we were saying goodbye.

P.S. Here’s a few of my favorite photos from the first event that Kim included me in:

All images are courtesy of Doc Searls. Each photo links to the original.

Saturday, 18. December 2021

Simon Willison

Transactionally Staged Job Drains in Postgres

Any time I see people argue that relational databases shouldn't be used to implement job queues I think of this post by Brandur from 2017. If you write to a queue before committing a transaction you run the risk of a queue consumer trying to read from the database before the new row becomes visible. If you write to the queue after the transaction there's a risk an error might result in your message never being written. So: write to a relational staging table as part of the transaction, then have a separate process read from that table and write to the queue.
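A minimal sketch of that pattern (not from the original post; the staged_jobs table, column names and the queue call are assumptions) could look like this with Npgsql:

using Npgsql;

var connectionString = "Host=localhost;Database=app;Username=app;Password=app"; // placeholder

// 1) Inside the business transaction: stage the job instead of enqueueing it directly.
await using var conn = new NpgsqlConnection(connectionString);
await conn.OpenAsync();

await using (var tx = await conn.BeginTransactionAsync())
{
    // ... the normal business writes happen here, in the same transaction ...

    await using var stage = new NpgsqlCommand(
        "INSERT INTO staged_jobs (payload) VALUES (@payload)", conn, tx);
    stage.Parameters.AddWithValue("payload", "{\"job\":\"send_email\"}");
    await stage.ExecuteNonQueryAsync();

    // the staged job only becomes visible to the drainer if this commit succeeds
    await tx.CommitAsync();
}

// 2) A separate drainer process: read committed rows, push them to the real queue, delete them.
await using (var tx = await conn.BeginTransactionAsync())
{
    await using var read = new NpgsqlCommand(
        "SELECT id, payload FROM staged_jobs ORDER BY id LIMIT 100 FOR UPDATE SKIP LOCKED", conn, tx);

    var drainedIds = new List<long>();
    await using (var reader = await read.ExecuteReaderAsync())
    {
        while (await reader.ReadAsync())
        {
            drainedIds.Add(reader.GetInt64(0));
            // push reader.GetString(1) to the real queue here (queue client omitted)
        }
    }

    await using var delete = new NpgsqlCommand(
        "DELETE FROM staged_jobs WHERE id = ANY(@ids)", conn, tx);
    delete.Parameters.AddWithValue("ids", drainedIds.ToArray());
    await delete.ExecuteNonQueryAsync();

    await tx.CommitAsync();
}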

Friday, 17. December 2021

Simon Willison

TypeScript for Pythonistas

Really useful explanation of how TypeScript differs from Python with mypy. I hadn't realized TypeScript leans so far into structural typing, to the point that two types with different names but the same "shape" are identified as being the same type as each other.

Via Hacker News

Thursday, 16. December 2021

Simon Willison

Weeknotes: Trapped in an eternal refactor

I'm still working on refactoring Datasette's table view. In doing so I spun out a new plugin, datasette-pretty-traces, which improves Datasette's tooling for seeing the SQL that was executed to build a specific page.

datasette-pretty-traces

I love tools like the Django Debug Toolbar which help show what's going on under the hood of an application (see also the Tikibar, a run-in-production alternative we built at Eventbrite).

Datasette has long had a ?_trace=1 option for outputting debug information about SQL queries executed to build a page, but the output is a big block of JSON in the page footer, example here.

For the table view refactor project I decided it was time to make this more readable, so I built a plugin that runs some JavaScript to spot that output and turn it into something a bit more legible:

You can try it out here.

I'm becoming increasingly comfortable with the idea that it's OK to ignore all of the current batch of JavaScript frameworks and libraries and just write code that uses the default browser APIs. Browser APIs are pretty great these days, especially given things like backtick literals for multi-line strings!

I'll probably merge this into Datasette core at some point in the future, but a neat thing about having plugin support is I can dash out initial versions of things like this without needing to polish them up and include them in a formal release of the parent project.

Progress on the eternal refactor

Issue 1518, split from issue 878, is the all-consuming refactor.

Datasette's table view is the most important page in the application: it's the interface that lets you browse a table, filter it, search it, run faceting against it and export it out as other formats.

It's the nastiest code in the entire project, having grown to over a thousand lines of Python. While it has very thorough tests, the actual code itself is unwieldy enough that it's slowing down progress on all kinds of things I want to get done before I ship Datasette 1.0.

So I'm picking away at it. I've broken the underlying tests up into two modules (test_table_api.py and test_table_html.py) and I've made some small improvements, but I've also spun up some not-yet-committed prototypes both against my experimental asyncinject library and a new experiment that involves something that, if you squint at it, looks a tiny bit like a new ORM. I do not want to build a new ORM!

I'm not happy with any of this yet, and it's definitely blocking my progress on other things. I'll just have to keep on chipping away and see if I can get to a breakthrough.

Releases this week

datasette-pretty-traces: 0.2.1 - (3 releases total) - 2021-12-13
Prettier formatting for ?_trace=1 traces

TIL this week

Using C_INCLUDE_PATH to install Python packages
Using lsof on macOS
Registering the same Pluggy hook multiple times in a single file

A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution

Fascinating and terrifying description of an extremely sophisticated attack against iMessage. iMessage was passing incoming image bytes through to a bunch of different libraries to figure out which image format should be decoded, including a PDF renderer that supported the old JBIG2 compression format. JBIG2 includes a mechanism for programatically swapping the values of individual black and white pixels... which turns out to be Turing complete, and means that a sufficiently cunning "image" can include a full computer architecture defined in terms of logical bit operations. Combine this with an integer overflow and you can perform arbitrary memory operations that break out of the iOS sandbox.

Via @migueldeicaza


Markus Sabadello on Medium

Report from EBSI4Austria

In 2018, all European member states, together with Norway and Lichtenstein, signed a declaration stating the joint ambition to take advantage of blockchain technology. These 29 countries founded the European Blockchain Partnership (EBP), and within this partnership, they decided to build the so-called European Blockchain Services Infrastructure (EBSI).

EBSI was created with two aims: on the one hand, to provide blockchain capabilities that the partners of the EBP can use to implement and realize blockchain projects and use cases within their countries, and on the other hand, to achieve certain use cases on a European level. To support the latter idea, so-called use case groups were defined, which are the working groups related to a specific use case. These use case groups consist of representatives of the EBP member countries, domain experts as well as the European Commission.

Initially, four use case groups were founded, namely the European Self-Sovereign Identity Framework (ESSIF), the diploma use case, document traceability, and secure document transfer. ESSIF focuses on digital identities where the user is in control over her identity data. The diploma use case focuses on educational diplomas of students and related processes such as issuing, verifying and revocation, including cross-border scenarios. Document traceability considers the anchoring of document-related identifiers such as hashes on the blockchain, and secure document transfer covers secure document sharing for tax-related information transfer.

EBSI defined so-called use case groups that should be achieved using the provided capabilities to showcase their functionality and bring in expertise in the specific fields. Each use case group consists of representatives of the member states, domain experts, and the European Commission.

About EBSI4Austria

EBSI4Austria is a CEF-funded project with two main objectives. First, EBSI4Austria aims to set up, operate and maintain Austria's EBSI node. Second, we pilot the diploma use case on the Austrian level, supported by two universities as data providers as well as verifiers.

EBSI created a so-called early adopter program to speed up the use case integration of the participating countries. EBSI4Austria joined this ambitious program already in the first wave, reflecting our project's motivation.

Partners

EBSI4Austria consists of three partners, namely two universities, Graz University of Technology (TU Graz) and the Vienna University of Economics and Business (WU Vienna), together with Danube Tech, a Vienna-based company that provides leading expertise in Self-Sovereign Identity (SSI) as well as distributed systems and is involved in related standardization bodies. The universities are responsible for issuing students' diplomas and also verifying them. Austria's EBSI node is set up and operated at the eGovernment Innovation Center (EGIZ), which is part of Graz University of Technology.

User Story

Figure 1 illustrates the user story that is covered in our project. A student studying at the Graz University of Technology is finishing her bachelor’s program. TU Graz issues her diploma credential stating her bachelor’s degree, which the student stores in her wallet. Next, she wants to apply for a master’s program at the Vienna University of Economics and Business; thus, she presents her bachelor’s diploma credential. After successfully finishing her master’s program at WU Vienna, the university issues her master’s diploma credential to the student. The student is very ambitious; therefore, she applies for a Ph.D. position at the Berlin Institute of Technology by presenting her diplomas. All involved parties utilize the EBSI blockchain network to verify if the issuing universities are trusted issuers.

Figure 1: User Story of the Diploma Use Case Technology

In order to implement our EBSI4Austria project, we used similar technologies as many other Self-Sovereign Identity (SSI) initiatives, i.e., based on building blocks such as Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs).

We created two DIDs on the EBSI blockchain for the two universities, as follows:

Test DID for TU Graz: did:ebsi:zuoS6VfnmNLduF2dynhsjBU
Test DID for WU Vienna: did:ebsi:z23EQVGi5so9sBwytv6nMXMo

In addition, we registered them in EBSI’s Trusted Issuer Registry (TIR).

We also designed Verifiable Credentials to model digital versions of university diplomas. We implemented them using different credential and proof formats to accommodate changing requirements and guidelines in the EBSI specifications throughout the year. See here for some examples in different formats:

Example Diploma by TU Graz:

JSON-LD+LD-Proofs
JSON-LD+JWT (also see JWT payload only)
JSON+JWT (also see JWT payload only)

Example Diploma by WU Wien:

Paper version (in German)
Paper version (in English)
JSON-LD+LD-Proofs
JSON-LD+JWT (also see JWT payload only)
JSON+JWT (also see JWT payload only)

We also designed our own (experimental) JSON-LD context in order to be able to work with Linked Data Proofs (see essif-schemas-vc-2020-v1.jsonld). In our opinion, it would be preferable if JSON-LD contexts were provided by EBSI to all member states instead of having to do this separately for each EBSI pilot project.

We use the following technologies in our project:

Universal Resolver → For resolving DIDs (see the sketch below).
Universal Registrar → For creating DIDs.
Universal Issuer → For issuing VCs.
Universal Verifier → For verifying VCs.
SSI Java Libraries:
ld-signatures-java — For Linked Data Signatures.
verifiable-credentials-java — For Verifiable Credentials.
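For illustration, one of the test DIDs above can be resolved over HTTP against a Universal Resolver instance. The sketch below uses the public dev.uniresolver.io endpoint as an example, assuming a did:ebsi driver is configured on that instance:

// Sketch: resolving the TU Graz test DID via a Universal Resolver HTTP endpoint.
using System.Net.Http;

var did = "did:ebsi:zuoS6VfnmNLduF2dynhsjBU"; // test DID for TU Graz from above
using var client = new HttpClient();

// The Universal Resolver exposes GET /1.0/identifiers/{did}
var response = await client.GetAsync($"https://dev.uniresolver.io/1.0/identifiers/{did}");
response.EnsureSuccessStatusCode();

// The response body contains the DID document and resolution metadata as JSON.
var didResolutionResult = await response.Content.ReadAsStringAsync();
Console.WriteLine(didResolutionResult);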

We set up the following demonstration websites:

https://tugraz.ebsi4austria.danubetech.com/ — Issuer demo website
https://wuwien.ebsi4austria.danubetech.com/ — Verifier demo website

See this Github repository for additional technical details about EBSI4Austria.

Multi-University Pilot

Within EBSI’s early adopter program, EBSI4Austria also joined the multi-university pilot (MU pilot) in which the focus is on issuing and verifying student diplomas between universities but in this case, even in a cross-border scenario. This multi-university pilot should underpin the possibilities even across countries.

While working on the MU pilot, we participated in several EBSI Early Adopter program meetings to identify issuers, verifiers, and types of credentials. We were in contact with members of Spanish EBSI pilot projects (especially from the SSI company Gataca), to compare our approaches to EBSI DIDs and Verifiable Credentials. We had several technical discussions and email exchanges regarding details of those credentials, e.g. about the JSON-LD contexts and exact proof formats we were planning to use. During these exchanges, we were able to exchange initial examples of verifiable credentials and verify them.

Within one of the “clusters” of the EBSI MU pilot, we also collaborated closely with the “UniCert” aka “EBSI4Germany” project led by the Technical University of Berlin, a member of the EBSI early adopter program and the German IDunion consortium. This collaboration proved to be particularly interesting for the following reasons:

1. Since TU Berlin participates both in EBSI and IDunion, they have unique insights into the similarities and differences between these different SSI networks.

2. TU Berlin was also able to share some experiences regarding the use of existing standards such as Europass and ELMO/EMREX, which can help with semantic interoperability of Verifiable Credentials use in EBSI.

Figure 2: Multi-university pilot scenario.

Note: This blog post was co-authored by Andreas Abraham (eGovernment Innovation Center) and Markus Sabadello (Danube Tech). The EBSI4Austria project was funded under agreement No INEA/CEF/ICT/A2020/2271545.

Wednesday, 15. December 2021

Phil Windley's Technometria

Leveraging the Identity Metasystem

Summary: Metasystems promote network effects because they provide leverage: one infrastructure that not only serves many purposes, but also engenders consistent behavior.

I recently saw the following tweet (since deleted):

Soooo many questions:

Would a vaccination status NFT be transferable (a core feature of most NFTs)?
What's the underlying platform's security?
How's the smart contract written?
Who has control?
How do identifiers work in this system?
What are the privacy implications?
How much PII is on the blockchain?

Answering these questions would require significant work. And that work might have to be redone for each NFT used for SSI, depending on how Origin Trail is architected. If they are based on ERC-721, that tells us some things we may need to know, but doesn't answer most of the questions I pose above. If Origin Trail runs its own blockchain, then there are even more questions.

One of the big value propositions of an identity metasystem is that many of these questions can be answered once for the metasystem rather than answering them for each identity system built on top of it. In particular, the metasystem guarantees the fidelity of the credential exchange. Credential fidelity comprises four important attributes. Credential exchange on the identity metasystem:

Reveals the identifier of the issuer
Ensures the credential was issued to the party presenting it
Ensures the credential has not been tampered with
Reveals whether or not the credential has been revoked

I don't know anything about Origin Trail. It's possible that they have worked all this out. The point is simply that the tweet caused me to think about the advantages of a metasystem.

Metasystems give us leverage because their known behavior makes reasoning about systems built on top of them easier. Knowing how TCP works lets us know important properties of the protocols that operate on top of it. Similarly, in the identity metasystem, knowing how DIDs, DID Registries, DIDComm, and verifiable credential exchange protocols function means that we can more easily understand the behavior of a system built on top of it, like the health pass ecosystem. This leverage is an important component of the network effects that a well designed metasystem can provide.

Photo Credit: Archimedes Lever from ZDF/Terra X/Gruppe 5/ Susanne Utzt, Cristina Trebbi/ Jens Boeck, Dieter Stürmer / Fabian Wienke / Sebastian Martinez/ xkopp, polloq (CC BY 4.0)

Tags: identity metasystem nft security ssi fidelity verifiable+credentials


Here's Tom with the Weather

Last day with Pandemic Beard

Tuesday, 14. December 2021

@_Nat Zone

An interview with me was published in the Nikkei: "Big Tech and International Standards Development"

A five-column interview article by reporter Omameuda (大豆生田) was published on page 16 of the morning edition of the Nihon Keizai Shimbun (Nikkei) on December 14, 2021.

Big Tech and International Standards Development

Natsuhiko Sakimura, Chairman of the US OpenID Foundation — "The Age of the Technologist", December 14, 2021 2:00 [paid members only]

https://www.nikkei.com/article/DGKKZO78400100T11C21A2TEB000/

I wasn't expecting to be photographed, so the picture that ran makes me look like a haggard researcher with unkempt hair. The article also talks about things such as why I jumped into the world of standardization.

"I never knew why Sakimura-san jumped into standardization @_nat / 'Big Tech and International Standards Development'" https://t.co/au3TDCZwPT

— Masanori Kusunoki / 楠 正憲 (@masanork) December 13, 2021

In connection with this article, I am planning to hold a Q&A session using Twitter Spaces.

"When would be a good time to hold the Q&A Space about the article?"

— Natsuhiko Sakimura, 『デジタルアイデンティティ』 (Digital Identity) on sale July 16 (@_nat) December 15, 2021

According to the poll, a weekday evening looks most likely. I will announce it on Twitter (and may also post it here), so please follow @_nat and stay tuned.

The post "An interview was published in the Nikkei: 'Big Tech and International Standards Development'" first appeared on @_Nat Zone.

Monday, 13. December 2021

Damien Bod

Implement Compound Proof BBS+ verifiable credentials using ASP.NET Core and MATTR

This article shows how Zero Knowledge Proofs BBS+ verifiable credentials can be used to verify credential subject data from two separate verifiable credentials implemented in ASP.NET Core and MATTR. The ZKP BBS+ verifiable credentials are issued and stored on a digital wallet using a Self-Issued Identity Provider (SIOP) and OpenID Connect. A compound proof presentation template is created to verify the user data in a single verify.

Code: https://github.com/swiss-ssi-group/MattrAspNetCoreCompoundProofBBS

Blogs in the series

Getting started with Self Sovereign Identity SSI
Create an OIDC credential Issuer with MATTR and ASP.NET Core
Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR
Verify vaccination data using Zero Knowledge Proofs with ASP.NET Core and MATTR
Challenges to Self Sovereign Identity
Implement Compound Proof BBS+ verifiable credentials using ASP.NET Core and MATTR

What are ZKP BBS+ verifiable credentials

BBS+ verifiable credentials are built using JSON-LD and make it possible to support selective disclosure of subject claims from a verifiable credential, compound proofs from different VCs, zero knowledge proofs where the subject claims do not need to be exposed to verify something, private holder binding, and the prevention of tracking. The specification and implementations are still a work in progress.

Setup

The solution is setup to issue and verify the BBS+ verifiable credentials. The credential issuers are implemented in ASP.NET Core as well as the verifiable credential verifier. One credential issuer implements a BBS+ JSON-LD E-ID verifiable credential using SIOP together with Auth0 as the identity provider and the MATTR API which implements the access to the ledger and implements the logic for creating and verifying the verifiable credential and implementing the SSI specifications. The second credential issuer implements a county of residence BBS+ verifiable credential issuer like the first one. The ASP.NET Core verifier project uses a BBS+ verify presentation to verify that a user has the correct E-ID credentials and the county residence verifiable credentials in one request. This is presented as a compound proof using credential subject data from both verifiable credentials. The credentials are presented from the MATTR wallet to the ASP.NET Core verifier application.

The BBS+ compound proof is made up of the two verifiable credentials stored on the wallet. The holder of the wallet owns the credentials and can be trusted to a fairly high level because SIOP was used to add the credentials to the MATTR wallet, which requires a user authentication on the wallet using OpenID Connect. If the host system has strong authentication, the user of the wallet is probably the same person for whom the credentials were intended and issued. We can only prove that the verifiable credentials are valid; we cannot prove that the person sending the credentials is also the subject of the credentials or has the authorization to act on behalf of the credential subject. With SIOP, we know that the credentials were issued in a way which allows for strong authentication.

Implementing the Credential Issuers

The credentials are created using a credential issuer and can be added to the user's wallet using SIOP. An ASP.NET Core application is used to implement the MATTR API client for creating and issuing the credentials. Auth0 is used as the OIDC server, and the profiles used in the verifiable credentials are added there. The Auth0 server is part of the credential issuer service business. The application has two separate flows: one for credential issuer administrators and one for users, i.e. the holders of the credentials.

An administrator can signin to the credential issuer ASP.NET Core application using OIDC and can create new OIDC credential issuers using BBS+. Once created, the callback URL for the credential issuer needs to be added to the Auth0 client application as a redirect URL.

A user can login to the ASP.NET Core application and request the verifiable credentials only for themselves. This is not authenticated on the ASP.NET Core application, but on the wallet application using the SIOP flow. The application presents a QR Code which starts the flow. Once authenticated, the credentials are added to the digital wallet. Both the E-ID and the county of residence credentials are added and stored on the wallet.

Auth0 Auth pipeline rules

The credential subject claims added to the verifiable credential use the profile data from the Auth0 identity provider. This data can be added using an Auth0 auth pipeline rule. Once the rule is defined and the user has the profile data, the verifiable credentials can be created from it.

function (user, context, callback) {
  const namespace = 'https://damianbod-sandbox.vii.mattr.global/';

  context.idToken[namespace + 'name'] = user.user_metadata.name;
  context.idToken[namespace + 'first_name'] = user.user_metadata.first_name;
  context.idToken[namespace + 'date_of_birth'] = user.user_metadata.date_of_birth;
  context.idToken[namespace + 'family_name'] = user.user_metadata.family_name;
  context.idToken[namespace + 'given_name'] = user.user_metadata.given_name;
  context.idToken[namespace + 'birth_place'] = user.user_metadata.birth_place;
  context.idToken[namespace + 'gender'] = user.user_metadata.gender;
  context.idToken[namespace + 'height'] = user.user_metadata.height;
  context.idToken[namespace + 'nationality'] = user.user_metadata.nationality;
  context.idToken[namespace + 'address_country'] = user.user_metadata.address_country;
  context.idToken[namespace + 'address_locality'] = user.user_metadata.address_locality;
  context.idToken[namespace + 'address_region'] = user.user_metadata.address_region;
  context.idToken[namespace + 'street_address'] = user.user_metadata.street_address;
  context.idToken[namespace + 'postal_code'] = user.user_metadata.postal_code;

  callback(null, user, context);
}

Once issued, the verifiable credential is saved to the digital wallet like this:

{ "type": [ "VerifiableCredential", "VerifiableCredentialExtension" ], "issuer": { "id": "did:key:zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g", "name": "damianbod-sandbox.vii.mattr.global" }, "name": "EID", "issuanceDate": "2021-12-04T11:47:41.319Z", "credentialSubject": { "id": "did:key:z6MkmGHPWdKjLqiTydLHvRRdHPNDdUDKDudjiF87RNFjM2fb", "family_name": "Bob", "given_name": "Lammy", "date_of_birth": "1953-07-21", "birth_place": "Seattle", "height": "176cm", "nationality": "USA", "gender": "Male" }, "@context": [ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/security/bbs/v1", { "@vocab": "https://w3id.org/security/undefinedTerm#" }, "https://mattr.global/contexts/vc-extensions/v1", "https://schema.org", "https://w3id.org/vc-revocation-list-2020/v1" ], "credentialStatus": { "id": "https://damianbod-sandbox.vii.mattr.global/core/v1/revocation-lists/dd507c44-044c-433b-98ab-6fa9934d6b01#0", "type": "RevocationList2020Status", "revocationListIndex": "0", "revocationListCredential": "https://damianbod-sandbox.vii.mattr.global/core/v1/revocation-lists/dd507c44-044c-433b-98ab-6fa9934d6b01" }, "proof": { "type": "BbsBlsSignature2020", "created": "2021-12-04T11:47:42Z", "proofPurpose": "assertionMethod", "proofValue": "qquknHC7zaklJd0/IbceP0qC9sGYfkwszlujrNQn+RFg1/lUbjCe85Qnwed7QBQkIGnYRHydZiD+8wJG8/R5i8YPJhWuneWNE151GbPTaMhGNZtM763yi2A11xYLmB86x0d1JLdHaO30NleacpTs9g==", "verificationMethod": "did:key:zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g#zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g" } }

For more information on adding BBS+ verifiable credentials using MATTR, see the documentation, or a previous blog in this series.

Verifying the compound proof BBS+ verifiable credential

The verifier application needs to use both the E-ID and the county of residence verifiable credentials. This is done using a presentation template which is specific to the MATTR platform. Once the template is created, a verify request is created from it and presented to the user in the UI as a QR code. The holder of the wallet can scan this code and the verification begins. The wallet uses the verification request and tries to find the credentials on the wallet which match what was requested. If the wallet has the data from the correct issuers and the holder of the wallet consents, the data is sent to the verifier application as a new presentation verifiable credential built from the credential subject data of both existing verifiable credentials stored on the wallet. A webhook or an API on the verifier application handles this and validates the request. If all is good, the data is persisted and the UI is updated using SignalR messaging.
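The SignalR part is not shown in this section. As a rough sketch only (the hub name, controller and client method are assumptions, not taken from the post), the webhook handler could notify the waiting browser like this:

using Microsoft.AspNetCore.SignalR;

// Sketch: assumed hub used by the verifier UI to wait for the webhook result.
public class MattrVerifiedSuccessHub : Hub
{
}

[ApiController]
[Route("api/[controller]")]
public class VerificationController : ControllerBase
{
    private readonly IHubContext<MattrVerifiedSuccessHub> _hubContext;

    public VerificationController(IHubContext<MattrVerifiedSuccessHub> hubContext)
    {
        _hubContext = hubContext;
    }

    [HttpPost("[action]")]
    public async Task<IActionResult> VerificationDataCallback()
    {
        // ... validate and persist the presentation payload sent by the MATTR platform here ...

        // notify the browser page that started the verification so it can update its view
        await _hubContext.Clients.All.SendAsync("MattrCallbackSuccess", "verified");
        return Ok();
    }
}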

Creating a verifier presentation template

Before verifier presentations can be sent to the digital wallet, a template needs to be created in the MATTR platform. The CreatePresentationTemplate Razor Page is used to create a new template. The template requires the two DIDs used for issuing the credentials from the credential issuer applications.

public class CreatePresentationTemplateModel : PageModel
{
    private readonly MattrPresentationTemplateService _mattrVerifyService;

    public bool CreatingPresentationTemplate { get; set; } = true;
    public string TemplateId { get; set; }

    [BindProperty]
    public PresentationTemplate PresentationTemplate { get; set; }

    public CreatePresentationTemplateModel(MattrPresentationTemplateService mattrVerifyService)
    {
        _mattrVerifyService = mattrVerifyService;
    }

    public void OnGet()
    {
        PresentationTemplate = new PresentationTemplate();
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        TemplateId = await _mattrVerifyService.CreatePresentationTemplateId(
            PresentationTemplate.DidEid,
            PresentationTemplate.DidCountyResidence);

        CreatingPresentationTemplate = false;
        return Page();
    }
}

public class PresentationTemplate
{
    [Required]
    public string DidEid { get; set; }

    [Required]
    public string DidCountyResidence { get; set; }
}

The MattrPresentationTemplateService class implements the logic required to create a new presentation template. The service gets a new access token for your MATTR tenant and creates a new template using the credential subjects required and the correct contexts. BBS+ and frames require specific contexts. The CredentialQuery2 has two separate Frame items, one for each verifiable credential created and stored on the digital wallet.

public class MattrPresentationTemplateService { private readonly IHttpClientFactory _clientFactory; private readonly MattrTokenApiService _mattrTokenApiService; private readonly VerifyEidCountyResidenceDbService _verifyEidAndCountyResidenceDbService; private readonly MattrConfiguration _mattrConfiguration; public MattrPresentationTemplateService(IHttpClientFactory clientFactory, IOptions<MattrConfiguration> mattrConfiguration, MattrTokenApiService mattrTokenApiService, VerifyEidCountyResidenceDbService VerifyEidAndCountyResidenceDbService) { _clientFactory = clientFactory; _mattrTokenApiService = mattrTokenApiService; _verifyEidAndCountyResidenceDbService = VerifyEidAndCountyResidenceDbService; _mattrConfiguration = mattrConfiguration.Value; } public async Task<string> CreatePresentationTemplateId(string didEid, string didCountyResidence) { // create a new one var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(didEid, didCountyResidence); // save to db var template = new EidCountyResidenceDataPresentationTemplate { DidEid = didEid, DidCountyResidence = didCountyResidence, TemplateId = v1PresentationTemplateResponse.Id, MattrPresentationTemplateReponse = JsonConvert.SerializeObject(v1PresentationTemplateResponse) }; await _verifyEidAndCountyResidenceDbService.CreateEidAndCountyResidenceDataTemplate(template); return v1PresentationTemplateResponse.Id; } private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate(string didId, string didCountyResidence) { HttpClient client = _clientFactory.CreateClient(); var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken"); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json"); var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(client, didId, didCountyResidence); return v1PresentationTemplateResponse; } private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate( HttpClient client, string didEid, string didCountyResidence) { // create presentation, post to presentations templates api // https://learn.mattr.global/tutorials/verify/presentation-request-template // https://learn.mattr.global/tutorials/verify/presentation-request-template#create-a-privacy-preserving-presentation-request-template-for-zkp-enabled-credentials var createPresentationsTemplatesUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/presentations/templates"; var eidAdditionalPropertiesCredentialSubject = new Dictionary<string, object>(); eidAdditionalPropertiesCredentialSubject.Add("credentialSubject", new EidDataCredentialSubject { Explicit = true }); var countyResidenceAdditionalPropertiesCredentialSubject = new Dictionary<string, object>(); countyResidenceAdditionalPropertiesCredentialSubject.Add("credentialSubject", new CountyResidenceDataCredentialSubject { Explicit = true }); var additionalPropertiesCredentialQuery = new Dictionary<string, object>(); additionalPropertiesCredentialQuery.Add("required", true); var additionalPropertiesQuery = new Dictionary<string, object>(); additionalPropertiesQuery.Add("type", "QueryByFrame"); additionalPropertiesQuery.Add("credentialQuery", new List<CredentialQuery2> { new CredentialQuery2 { Reason = "Please provide your E-ID", TrustedIssuer = new List<TrustedIssuer>{ new TrustedIssuer { Required = true, Issuer = didEid // DID used to create the oidc } }, Frame = new Frame { Context = 
new List<object>{ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/security/bbs/v1", "https://mattr.global/contexts/vc-extensions/v1", "https://schema.org", "https://w3id.org/vc-revocation-list-2020/v1" }, Type = "VerifiableCredential", AdditionalProperties = eidAdditionalPropertiesCredentialSubject }, AdditionalProperties = additionalPropertiesCredentialQuery }, new CredentialQuery2 { Reason = "Please provide your Residence data", TrustedIssuer = new List<TrustedIssuer>{ new TrustedIssuer { Required = true, Issuer = didCountyResidence // DID used to create the oidc } }, Frame = new Frame { Context = new List<object>{ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/security/bbs/v1", "https://mattr.global/contexts/vc-extensions/v1", "https://schema.org", "https://w3id.org/vc-revocation-list-2020/v1" }, Type = "VerifiableCredential", AdditionalProperties = countyResidenceAdditionalPropertiesCredentialSubject }, AdditionalProperties = additionalPropertiesCredentialQuery } }); var payload = new MattrOpenApiClient.V1_CreatePresentationTemplate { Domain = _mattrConfiguration.TenantSubdomain, Name = "zkp-eid-county-residence-compound", Query = new List<Query> { new Query { AdditionalProperties = additionalPropertiesQuery } } }; var payloadJson = JsonConvert.SerializeObject(payload); var uri = new Uri(createPresentationsTemplatesUrl); using (var content = new StringContentWithoutCharset(payloadJson, "application/json")) { var presentationTemplateResponse = await client.PostAsync(uri, content); if (presentationTemplateResponse.StatusCode == System.Net.HttpStatusCode.Created) { var v1PresentationTemplateResponse = JsonConvert .DeserializeObject<MattrOpenApiClient.V1_PresentationTemplateResponse>( await presentationTemplateResponse.Content.ReadAsStringAsync()); return v1PresentationTemplateResponse; } var error = await presentationTemplateResponse.Content.ReadAsStringAsync(); } throw new Exception("whoops something went wrong"); } } public class EidDataCredentialSubject { [Newtonsoft.Json.JsonProperty("@explicit", Required = Newtonsoft.Json.Required.Always)] public bool Explicit { get; set; } [Newtonsoft.Json.JsonProperty("family_name", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object FamilyName { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("given_name", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object GivenName { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("date_of_birth", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object DateOfBirth { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("birth_place", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object BirthPlace { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("height", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object Height { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("nationality", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object Nationality { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("gender", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object Gender { get; set; } = new object(); } public class CountyResidenceDataCredentialSubject { 
[Newtonsoft.Json.JsonProperty("@explicit", Required = Newtonsoft.Json.Required.Always)] public bool Explicit { get; set; } [Newtonsoft.Json.JsonProperty("family_name", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object FamilyName { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("given_name", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object GivenName { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("date_of_birth", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object DateOfBirth { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("address_country", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object AddressCountry { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("address_locality", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object AddressLocality { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("address_region", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object AddressRegion { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("street_address", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object StreetAddress { get; set; } = new object(); [Newtonsoft.Json.JsonProperty("postal_code", Required = Newtonsoft.Json.Required.Always)] [System.ComponentModel.DataAnnotations.Required] public object PostalCode { get; set; } = new object(); }

When the presentation template is created, the following JSON payload is returned. This is what is used to create verifier presentation requests. The context must match the context of the credentials held on the wallet. You can also verify that the trusted issuer matches and that the two Frame objects are created correctly with the required values.

{ "id": "f188df35-e76f-4794-8e64-eedbe0af2b19", "domain": "damianbod-sandbox.vii.mattr.global", "name": "zkp-eid-county-residence-compound", "query": [ { "type": "QueryByFrame", "credentialQuery": [ { "reason": "Please provide your E-ID", "frame": { "@context": [ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/security/bbs/v1", "https://mattr.global/contexts/vc-extensions/v1", "https://schema.org", "https://w3id.org/vc-revocation-list-2020/v1" ], "type": "VerifiableCredential", "credentialSubject": { "@explicit": true, "family_name": {}, "given_name": {}, "date_of_birth": {}, "birth_place": {}, "height": {}, "nationality": {}, "gender": {} } }, "trustedIssuer": [ { "required": true, "issuer": "did:key:zUC7GiWMGY2pynrFG7TcstDiZeNKfpMPY8YT5z4xgd58wE927UxaJfaqFuXb9giCS1diTwLi8G18hRgZ928b4qd8nkPRdZCEaBGChGSjUzfFDm6Tyio1GN2npT9o7K5uu8mDs2g" } ], "required": true }, { "reason": "Please provide your Residence data", "frame": { "@context": [ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/security/bbs/v1", "https://mattr.global/contexts/vc-extensions/v1", "https://schema.org", "https://w3id.org/vc-revocation-list-2020/v1" ], "type": "VerifiableCredential", "credentialSubject": { "@explicit": true, "family_name": {}, "given_name": {}, "date_of_birth": {}, "address_country": {}, "address_locality": {}, "address_region": {}, "street_address": {}, "postal_code": {} } }, "trustedIssuer": [ { "required": true, "issuer": "did:key:zUC7G95fmyuYXNP2oqhhWkysmMPafU4dUWtqzXSsijsLCVauFDhAB7Dqbk2LCeo488j9iWGLXCL59ocYzhTmS3U7WNdukoJ2A8Z8AVCzeS5TySDJcYCjzuaPm7voPGPqtYa6eLV" } ], "required": true } ] } ] }

The presentation template is now ready to use. It is a definition specific to the MATTR platform and is not saved to the ledger.

Creating a verifier request and presenting the QR Code

Now that we have a presentation template, we initialize a verifier presentation request and present it as a QR Code for the holder of the digital wallet to scan. The CreateVerifyCallback method creates the verification and returns a signed token which is added to the QR Code. The challengeId is encoded in base64 because it is used in the URL that requests or handles the webhook callback.

public class CreateVerifierDisplayQrCodeModel : PageModel { private readonly MattrCredentialVerifyCallbackService _mattrCredentialVerifyCallbackService; public bool CreatingVerifier { get; set; } = true; public string QrCodeUrl { get; set; } [BindProperty] public string ChallengeId { get; set; } [BindProperty] public string Base64ChallengeId { get; set; } [BindProperty] public CreateVerifierDisplayQrCodeCallbackUrl CallbackUrlDto { get; set; } public CreateVerifierDisplayQrCodeModel(MattrCredentialVerifyCallbackService mattrCredentialVerifyCallbackService) { _mattrCredentialVerifyCallbackService = mattrCredentialVerifyCallbackService; } public void OnGet() { CallbackUrlDto = new CreateVerifierDisplayQrCodeCallbackUrl(); CallbackUrlDto.CallbackUrl = $"https://{HttpContext.Request.Host.Value}"; } public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } var result = await _mattrCredentialVerifyCallbackService .CreateVerifyCallback(CallbackUrlDto.CallbackUrl); CreatingVerifier = false; var walletUrl = result.WalletUrl.Trim(); ChallengeId = result.ChallengeId; var valueBytes = Encoding.UTF8.GetBytes(ChallengeId); Base64ChallengeId = Convert.ToBase64String(valueBytes); VerificationRedirectController.WalletUrls.Add(Base64ChallengeId, walletUrl); // https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e#redirect-urls //var qrCodeUrl = $"didcomm://{walletUrl}"; QrCodeUrl = $"didcomm://https://{HttpContext.Request.Host.Value}/VerificationRedirect/{Base64ChallengeId}"; return Page(); } } public class CreateVerifierDisplayQrCodeCallbackUrl { [Required] public string CallbackUrl { get; set; } }

The CreateVerifyCallback method uses the host as the base URL for the callback definition which is included in the verification. An access token is requested for the MATTR API and is used for all the requests. The last issued template is used in the verification. A new DID is created, or the existing DID for this verifier is used, to attach the verify presentation on the ledger. The InvokePresentationRequest is used to initialize the verification presentation. This request uses the templateId, the callback URL and the DID. Part of the body payload of the response is signed and returned to the Razor page to be displayed as part of the QR code. Because this signed token is long, a didcomm redirect is used in the QR Code rather than embedding the value directly in the Razor page.

/// <summary> /// https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e /// </summary> /// <param name="callbackBaseUrl"></param> /// <returns></returns> public async Task<(string WalletUrl, string ChallengeId)> CreateVerifyCallback(string callbackBaseUrl) { callbackBaseUrl = callbackBaseUrl.Trim(); if (!callbackBaseUrl.EndsWith('/')) { callbackBaseUrl = $"{callbackBaseUrl}/"; } var callbackUrlFull = $"{callbackBaseUrl}{MATTR_CALLBACK_VERIFY_PATH}"; var challenge = GetEncodedRandomString(); HttpClient client = _clientFactory.CreateClient(); var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken"); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json"); var template = await _VerifyEidAndCountyResidenceDbService.GetLastPresentationTemplate(); var didToVerify = await _mattrCreateDidService.GetDidOrCreate("did_for_verify"); // Request DID from ledger V1_GetDidResponse did = await RequestDID(didToVerify.Did, client); // Invoke the Presentation Request var invokePresentationResponse = await InvokePresentationRequest( client, didToVerify.Did, template.TemplateId, challenge, callbackUrlFull); // Sign and Encode the Presentation Request body var signAndEncodePresentationRequestBodyResponse = await SignAndEncodePresentationRequestBody( client, did, invokePresentationResponse); // fix strange DTO var jws = signAndEncodePresentationRequestBodyResponse.Replace("\"", ""); // save to db var vaccinationDataPresentationVerify = new EidCountyResidenceDataPresentationVerify { DidEid = template.DidEid, DidCountyResidence = template.DidCountyResidence, TemplateId = template.TemplateId, CallbackUrl = callbackUrlFull, Challenge = challenge, InvokePresentationResponse = JsonConvert.SerializeObject(invokePresentationResponse), Did = JsonConvert.SerializeObject(did), SignAndEncodePresentationRequestBody = jws }; await _VerifyEidAndCountyResidenceDbService.CreateEidAndCountyResidenceDataPresentationVerify(vaccinationDataPresentationVerify); var walletUrl = $"https://{_mattrConfiguration.TenantSubdomain}/?request={jws}"; return (walletUrl, challenge); }

The QR Code is displayed in the UI.

Once the QR Code is created and scanned, the SignalR client starts listening for messages returned for the challengeId.

@section scripts { <script src="~/js/qrcode.min.js"></script> <script type="text/javascript"> new QRCode(document.getElementById("qrCode"), { text: "@Html.Raw(Model.QrCodeUrl)", width: 300, height: 300, correctLevel: QRCode.CorrectLevel.L }); $(document).ready(() => { }); var connection = new signalR.HubConnectionBuilder().withUrl("/mattrVerifiedSuccessHub").build(); connection.on("MattrCallbackSuccess", function (base64ChallengeId) { console.log("received verification:" + base64ChallengeId); window.location.href = "/VerifiedUser?base64ChallengeId=" + base64ChallengeId; }); connection.start().then(function () { console.log(connection.connectionId); const base64ChallengeId = $("#Base64ChallengeId").val(); console.warn("base64ChallengeId: " + base64ChallengeId); if (base64ChallengeId) { console.log(base64ChallengeId); // join message connection.invoke("AddChallenge", base64ChallengeId, connection.connectionId).catch(function (err) { return console.error(err.toString()); }); } }).catch(function (err) { return console.error(err.toString()); }); </script> }

Validating the verification callback

After the holder of the digital wallet has given consent, the wallet sends the verifiable credential data back to the verifier application in an HTTP request. This is sent to a webhook or an API in the verifier application and needs to be verified correctly. In this demo, only the challengeId is used to match the request; the payload itself is not validated, although it should be. The callback handler stores the data in the database and sends a SignalR message to inform the waiting client that the verification has completed successfully.

private readonly VerifyEidCountyResidenceDbService _verifyEidAndCountyResidenceDbService; private readonly IHubContext<MattrVerifiedSuccessHub> _hubContext; public VerificationController(VerifyEidCountyResidenceDbService verifyEidAndCountyResidenceDbService, IHubContext<MattrVerifiedSuccessHub> hubContext) { _hubContext = hubContext; _verifyEidAndCountyResidenceDbService = verifyEidAndCountyResidenceDbService; } /// <summary> /// { /// "presentationType": "QueryByFrame", /// "challengeId": "nGu/E6eQ8AraHzWyB/kluudUhraB8GybC3PNHyZI", /// "claims": { /// "id": "did:key:z6MkmGHPWdKjLqiTydLHvRRdHPNDdUDKDudjiF87RNFjM2fb", /// "http://schema.org/birth_place": "Seattle", /// "http://schema.org/date_of_birth": "1953-07-21", /// "http://schema.org/family_name": "Bob", /// "http://schema.org/gender": "Male", /// "http://schema.org/given_name": "Lammy", /// "http://schema.org/height": "176cm", /// "http://schema.org/nationality": "USA", /// "http://schema.org/address_country": "Schweiz", /// "http://schema.org/address_locality": "Thun", /// "http://schema.org/address_region": "Bern", /// "http://schema.org/postal_code": "3000", /// "http://schema.org/street_address": "Thunerstrasse 14" /// }, /// "verified": true, /// "holder": "did:key:z6MkmGHPWdKjLqiTydLHvRRdHPNDdUDKDudjiF87RNFjM2fb" /// } /// </summary> /// <param name="body"></param> /// <returns></returns> [HttpPost] [Route("[action]")] public async Task<IActionResult> VerificationDataCallback() { string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync(); var body = JsonSerializer.Deserialize<VerifiedEidCountyResidenceData>(content); var valueBytes = Encoding.UTF8.GetBytes(body.ChallengeId); var base64ChallengeId = Convert.ToBase64String(valueBytes); string connectionId; var found = MattrVerifiedSuccessHub.Challenges .TryGetValue(base64ChallengeId, out connectionId); //test Signalr //await _hubContext.Clients.Client(connectionId).SendAsync("MattrCallbackSuccess", $"{base64ChallengeId}"); //return Ok(); var exists = await _verifyEidAndCountyResidenceDbService.ChallengeExists(body.ChallengeId); if (exists) { await _verifyEidAndCountyResidenceDbService.PersistVerification(body); if (found) { //$"/VerifiedUser?base64ChallengeId={base64ChallengeId}" await _hubContext.Clients .Client(connectionId) .SendAsync("MattrCallbackSuccess", $"{base64ChallengeId}"); } return Ok(); } return BadRequest("unknown verify request"); }

The VerifiedUser ASP.NET Core Razor page displays the data after a successful verification. This uses the challengeId to get the data from the database and display this in the UI for the next steps.

public class VerifiedUserModel : PageModel { private readonly VerifyEidCountyResidenceDbService _verifyEidCountyResidenceDbService; public VerifiedUserModel(VerifyEidCountyResidenceDbService verifyEidCountyResidenceDbService) { _verifyEidCountyResidenceDbService = verifyEidCountyResidenceDbService; } public string Base64ChallengeId { get; set; } public EidCountyResidenceVerifiedClaimsDto VerifiedEidCountyResidenceDataClaims { get; private set; } public async Task OnGetAsync(string base64ChallengeId) { // user query param to get challenge id and display data if (base64ChallengeId != null) { var valueBytes = Convert.FromBase64String(base64ChallengeId); var challengeId = Encoding.UTF8.GetString(valueBytes); var verifiedDataUser = await _verifyEidCountyResidenceDbService.GetVerifiedUser(challengeId); VerifiedEidCountyResidenceDataClaims = new EidCountyResidenceVerifiedClaimsDto { // Common DateOfBirth = verifiedDataUser.DateOfBirth, FamilyName = verifiedDataUser.FamilyName, GivenName = verifiedDataUser.GivenName, // E-ID BirthPlace = verifiedDataUser.BirthPlace, Height = verifiedDataUser.Height, Nationality = verifiedDataUser.Nationality, Gender = verifiedDataUser.Gender, // County Residence AddressCountry = verifiedDataUser.AddressCountry, AddressLocality = verifiedDataUser.AddressLocality, AddressRegion = verifiedDataUser.AddressRegion, StreetAddress = verifiedDataUser.StreetAddress, PostalCode = verifiedDataUser.PostalCode }; } } }

The demo UI displays the data after a successful verification. The next steps of the verifier process can be implemented using these values. This would typically include creating an account and setting up an authentication method which is not subject to phishing for high security, or at least one which has a second factor.

Notes

The MATTR BBS+ verifiable credentials look really good and support selective disclosure and compound proofs. The implementation is still a work in progress; MATTR are investing in this at present and will hopefully complete and improve all the BBS+ features. Until BBS+ is implemented by the majority of SSI platform providers and the specs are completed, I do not see how SSI can be adopted, unless of course all converge on some other standard. This would help solve some of the interop problems between the vendors.

Links

https://mattr.global/

https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e

https://mattr.global/get-started/

https://learn.mattr.global/

https://keybase.io/

Generating a ZKP-enabled BBS+ credential using the MATTR Platform

https://learn.mattr.global/tutorials/dids/did-key

https://gunnarpeipman.com/httpclient-remove-charset/

https://auth0.com/

Where to begin with OIDC and SIOP

https://anonyome.com/2020/06/decentralized-identity-key-concepts-explained/

Verifiable-Credentials-Flavors-Explained

https://learn.mattr.global/api-reference/

https://w3c-ccg.github.io/ld-proofs/

Verifiable Credentials Data Model v1.1 (w3.org)


Mike Jones: self-issued

OpenID Presentations at December 2021 OpenID Virtual Workshop

I gave the following presentations at the Thursday, December 9, 2021 OpenID Virtual Workshop: OpenID Connect Working Group (PowerPoint) (PDF) OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

I gave the following presentations at the Thursday, December 9, 2021 OpenID Virtual Workshop:

OpenID Connect Working Group (PowerPoint) (PDF)
OpenID Enhanced Authentication Profile (EAP) Working Group (PowerPoint) (PDF)

Sunday, 12. December 2021

Simon Willison

servefolder.dev

servefolder.dev Absurdly clever application of service workers and the file system API: you can select a folder from your computer and the contents of that folder will be served (just to you) from a path on this website - all without uploading any content. The code is on GitHub and offers a useful, succinct introduction to how to use those APIs. Via AshleyScirra/servefolder.dev

servefolder.dev

Absurdly clever application of service workers and the file system API: you can select a folder from your computer and the contents of that folder will be served (just to you) from a path on this website - all without uploading any content. The code is on GitHub and offers a useful, succinct introduction to how to use those APIs.

Via AshleyScirra/servefolder.dev

Friday, 10. December 2021

MyDigitalFootprint

Why is being data Savvy not the right goal?

It is suggested that all which glitters is gold when it comes to data: the more data, the better. I have challenged this thinking that more data is better on numerous occasions, and essentially they all come to the same point. Data volume does not lead to better decisions.   A “simplistic” graph is doing the rounds (again) and is copied below. The two-axis links the quality of a decisio
It is suggested that all which glitters is gold when it comes to data: the more data, the better. I have challenged the idea that more data is better on numerous occasions, and those challenges all come to the same point: data volume does not lead to better decisions.

A “simplistic” graph is doing the rounds (again) and is copied below. Its two axes link the quality of a decision and a person's capability with data. It implies that boards, executives and senior leadership need to be “data-savvy” if they are to make better decisions. Data Savvy is a position between being “data-naive or data-devoid” and “drunk on data.” The former has no data or skills; the latter has too much data or cannot use the tools. Data Savvy means you are skilled with the correct data and the right tools.

This thinking is driven by those trying to sell data training, simplifying a concept to the point where it becomes meaningless but is easy to sell or buy and looks great as a visual. When you don’t have enough time to reflect on the graph and the message, it looks logical, inspired and correct - it is none of these things. The basis of the idea is that a board or senior leadership team who are data-savvy will make better decisions, framed on the assumption that if you are naive about or drunk on data, you will make poor decisions.

The first issue I have is that if the data does not have attestation, your capability (data-savviness) will make no difference to the quality of the decision. One could argue that if you are data-savvy you will test the data, but this is also untrue, as most boards cannot test the data and instead rely on the organisation's processes and procedures to ensure “quality” data. This is a wild assumption.


It is worth searching for what “data-savvy” means and reading a few articles. You will find that many position becoming data-savvy as a step on the journey to being data-driven. This leads to a second point: being data-driven means you will always be late. Waiting for enough data to reduce the risk to the level of your risk framework means that you will be late in the decision-making process. Data-driven does not make you fast, agile, ahead, innovative or adaptive. Data-driven makes you late, slow, behind and a follower.

Is the reality of wanting to be data-savvy or a desire to be data-driven that you look to use data to reduce risk and therefore become more risk-averse, which means you miss the signals that would make you genuinely innovative?

The question we as CDOs (data or digital) should reflect on is: “How do we reconcile that we want to be first, innovative, creative or early, when our processes, methods and tools depend on data, which means we will always be late?” The more innovative we want to be, the less data we will have and the more risk we need to take, which does not align with the leadership, culture or rewards/incentives that we have or operate to.


Identity Praxis, Inc.

The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts

Article published by the Mobile Ecosystem Forum, 12/10/2021. Article published by the Mobile Ecosystem Forum, 12/10/2021. “We know now that technology and business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we […] The post The Identity Imperative: Risk Manageme

Article published by the Mobile Ecosystem Forum, 12/10/2021.


“We know now that technology and business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we know is that we’re all going to be increasingly vulnerable, as more of our services, more of our citizen identity, move online.” – Surash Patel, VP EMEA, TeleSign Corporation 2021 (click here to listen).1

I recently sat down with  Surash Patel, VP EMEA for TeleSign and Board Member of the Mobile Ecosystem Forum (MEF) to discuss the personal data & identity (PD&I) market, for a PD&I market assessment report I’m working on for the MEF (the report will be out in January 2022). Surash’s above quote stuck out to me because I think he is right. It also reminds me of another quote, one from WPP:

“By 2030 society will no longer tolerate a business model that relies on mass transactions of increasingly sensitive personal data: a quite different system will be in place.” – WPP2

I took away three key insights from my interview with Surash, although there are more:

1. Enterprises must immediately start learning how to master [mobile] identity verification; mobile identity verification can help reduce losses to fraud and self-inflicted losses of revenue.

2. Enterprises that effectively use mobile identity verification can create value and generate trust and engagement at every stage of the customer journey.

3. There is much we—people, private organizations, and public institutions—need to know and do to equip for the now and prepare for the future.

The following summarizes my conversation with Surash. To watch the complete interview with Surash Patel of TeleSign (39:11 min), click here.

Risk Mitigation, Value Creation, and the Customer Journey

When introducing himself and his background, Surash opened with a wonderfully self-reflective quote:

“I completely missed a trick on my career and where it was going, in that I thought about the value exchange between the consumer and the brand. From a marketing perspective, I never really considered it from the digital identity perspective before–seeing the numbers on digital fraud right now I think that case is becoming more and more clear to me.” – Surash Patel, VP EMEA, Telesign Corporation 2021 (click here).

By reading between the lines of his statement, I gather that, as a marketer, he previously saw identity as a tool for audience targeting and promotion. But, once he went into the infrastructure side of the business, he realized identity plays an even bigger role throughout the industry. This is because identity has a critical role at every touchpoint along the customer journey–not just for marketing, but for fraud prevention, revenue protection, and trust.

Risk Mitigation and managing losses

Drawing from industry reports, Surash notes that businesses are losing upwards of $56 billion a year to fraud.4 Because of this, “knowing your customer,” i.e., knowing that there is a legitimate human on the other side of a digital transaction, is not just a nice-to-have but a business imperative. Surash points out that it’s not just fraud that brands must contend with when it comes to losses. They must also contend with self-inflicted wounds.

Surash referenced a report from Checkout.com which found that, in 2019, brands in the UK, US, France, and Germany lost $20.3 billion due to false declines at checkout, i.e. identity verification system failures. $12.7 billion of these losses went to competitors, while $7.6 billion simply evaporated.5 My takeaway from this is that brands need to see identity verification as a strategic imperative, not just an IT function.

But, reducing fraud and managing revenue breakage is not all Surash brought up. He also noted that, based on findings in the Checkout.com report, consumers would pay an average of $4 to be sure their transactions are secure. So, not only can brands reduce fraud, but they can also retain sales by more effectively identifying their customers (listen to his comments here).

The Potential For Harm is Real and Must Be Managed

Let’s briefly return to Surash’s quote above:

“We know now that technology and, you know, business models are accelerating at a faster pace than ever before in human history. In 10 years time, who knows what kind of conversations we’re going to be having, but the one thing we know is that we’re all going to be increasingly vulnerable as more of our services, more of a citizen identity, move online.” – Surash Patel, VP EMEA, Telesign Corporation 2021 (click here to listen).6

I agree with him–people, not just businesses, are at risk of being even more vulnerable than they are now, but that does not mean the risks we face today are trivial. Harm from the misuse of personal data is all around us. We primarily measure this in financial terms. For example, in 2020, U.S. consumers reported losses of $86M to fraud originating from text messaging scams.7

On harm

There is more privacy harm out there than financial loss, as noted by Ignacio N. Cofone,8 and it is not a trivial discussion to be swept under the rug. In fact, it is one of the fundamental drivers behind emerging people-centric regulations, industry best practices, and the reshaping of law.

This topic is too big to cover in this article, but I can provide a good resource for you on privacy harm. One of my go-to resources when considering this issue is Daniel Solove, who recently, along with Danielle Keats Citron, updated the Typology of Privacy Harms. This is a must-read if you are a student of the topic of being of service to the connected individual.

The Customer Journey and Balance of Power Shift

To address these privacy harms, Surash specifically calls for the government to get involved. However, he thinks brands and individuals alike can do more as well. Surash makes it clear that individuals need to be more aware of and accountable for their own actions and interactions. He also thinks, however, that brands need to learn to engage people in an even value exchange (hear Surash’s comment). Furthermore, he recognizes that people are taking more control of their data, and as this continues, we may eventually see the evolution of consumer “curation services” (hear his remark), what some may call “infomediaries.” Again, I’m drawn to the WPP quote above. Brands need to prepare for fundamental shifts in people’s attitudes and expectations. The implications of these shifts will be profound, as they will force a change in competition, business models, product offerings, and business practices.

There is Much We All Need to Know and Do

So, after taking all this in, what’s next?

What I learned from my time with Surash is that an effective identity management implementation can collectively save brands billions, while building trust and improving their ability to serve their customers throughout the customer journey.

Surash emphasized that people must know that they are at risk and be aware of all that is going on in the industry. With this knowledge, they can take steps to advocate for and protect themselves. He notes that individuals “absolutely need to know the value of their data” and “how they can challenge” the brands’ use of their data. Surash suggested that individuals need to start shifting the balance of power by approaching brands and asking, “Do you really need that data to serve me? If not, don’t ask me for it.” Surash does recognize, however, that it is going to be hard for individuals to go up against the “large” brands. As noted below, we believe both companies and governments are able to do more.

For brands, Surash wants them to:

Take cyberattacks seriously and prepare, as the attacks are expected to get worse.

Get the fraud and marketing teams working together and not at loggerheads.

Not just onboard individuals after the first transaction, but continually evaluate and authenticate customers as they move along the journey. He suggests that brands must learn to evaluate the local context of each engagement, regularly verify and authenticate their customers, and show the people they serve some respect by making an effort to check whether individuals’ circumstances (preferences, address, phone number, etc.) have changed over time. Surash implies that these actions will not only reduce the risk of fraud and cybercrime but also improve the relationship brands have with those they serve.

Ensure there is always an even value exchange, and if the brand wants more in return during a transaction, e.g. more data to support a future upsell, then consider paying the individual for it.

As for public institutions, e.g. governments, Surash suggests that “there isn’t enough being done to protect the consumers.” Governments should work with industry to refine value propositions, institute consistent standards, and advocate for consumers.

Clearly, this is all just the tip of the iceberg. There is definitely more to come.

Watch the complete interview with Surash Patel of TeleSign (39:11 min, click here).

REFERENCES

Becker, Michael. “The Chain of Trust & Mobile Number Identity Scoring: An Interview with Virginie Debris of GSM.” Accessed October 28, 2021. https://www.youtube.com/watch?v=ftJ_4800W2Y.

Becker, Michael, and Surash Patel. “The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts.” Accessed October 30, 2021. https://www.youtube.com/watch?v=V5WlrHSohpM.

Buzzard, John, and Tracy Kitten. “2021 Identity Fraud Study: Shifting Angles.” Livonia, MI: Javelin, March 2021. https://www.javelinstrategy.com/content/2021-identity-fraud-report-shifting-angles-identity-fraud.

Citron, Danielle Keats, and Daniel Solove. “Privacy Harms.” Boston University Law Review 102, no. 2022 (February 2021). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3782222.

“Data 2030: What Does the Future of Data Look Like? | WPP.” London: WPP, November 2020. https://www.wpp.com/wpp-iq/2020/11/data-2030—what-does-the-future-of-data-look-like.

Scrase, Julie, Kasey Ly, Henry Worthington, and Ben Skeleton. “Black Boxes and Paradoxes. The Real Cost of Disconnected Payments.” Checkout.com, July 2021. https://www.checkout.com/connected-payments/black-boxes-and-paradoxes.

Skiba, Katherine. “Consumers Lost $86m to Fraud Originating in Scam Texts.” AARP, June 2021. https://www.aarp.org/money/scams-fraud/info-2021/texts-smartphone.html.

1. Becker and Patel, “The Identity Imperative.”
2. “Data 2030.”
3. Becker, “The Chain of Trust & Mobile Number Identity Scoring.”
4. Buzzard and Kitten, “2021 Identity Fraud Study.”
5. Scrase et al., “Black Boxes and Paradoxes. The Real Cost of Disconnected Payments.”
6. Becker and Patel, “The Identity Imperative.”
7. Skiba, “Consumers Lost $86m to Fraud Originating in Scam Texts.”

The post The Identity Imperative: Risk Management, Value Creation, and Balance of Power Shifts appeared first on Identity Praxis, Inc..


Simon Willison

wheel.yml for Pyjion using cibuildwheel

wheel.yml for Pyjion using cibuildwheel cibuildwheel, maintained by the Python Packaging Authority, builds and tests Python wheels across multiple platforms. I hadn't realized quite how minimal a configuration using their GitHub Actions action was until I looked at how Pyjion was using it. Via @simonw

wheel.yml for Pyjion using cibuildwheel

cibuildwheel, maintained by the Python Packaging Authority, builds and tests Python wheels across multiple platforms. I hadn't realized quite how minimal a configuration using their GitHub Actions action was until I looked at how Pyjion was using it.

Via @simonw

Thursday, 09. December 2021

Simon Willison

Introducing stack graphs

Introducing stack graphs GitHub launched "precise code navigation" for Python today - the first language to get support for this feature. Click on any Python symbol in GitHub's code browsing views and a box will show you exactly where that symbol was defined - all based on static analysis by a custom parser written in Rust as opposed to executing any Python code directly. The underlying computer

Introducing stack graphs

GitHub launched "precise code navigation" for Python today - the first language to get support for this feature. Click on any Python symbol in GitHub's code browsing views and a box will show you exactly where that symbol was defined - all based on static analysis by a custom parser written in Rust as opposed to executing any Python code directly. The underlying computer science uses a technique called stack graphs, based on scope graphs research from Eelco Visser’s research group at TU Delft.

Via Precise code navigation for Python, and code navigation in pull requests


Notes on Notes.app

Notes on Notes.app Apple's Notes app keeps its data in a SQLite database at ~/Library/Group\ Containers/group.com.apple.notes/NoteStore.sqlite - but it's pretty difficult to extract data from. It turns out the note text is stored as a gzipped protocol buffers object in the ZICNOTEDATA.ZDATA column. Steve Dunham did the hard work of figuring out how it all works - the complexity stems from Apple'

Notes on Notes.app

Apple's Notes app keeps its data in a SQLite database at ~/Library/Group\ Containers/group.com.apple.notes/NoteStore.sqlite - but it's pretty difficult to extract data from. It turns out the note text is stored as a gzipped protocol buffers object in the ZICNOTEDATA.ZDATA column. Steve Dunham did the hard work of figuring out how it all works - the complexity stems from Apple's use of CRDT's to support seamless multiple edits from different devices.
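
Purely as a rough sketch of the extraction step described above (assuming the path and column names from the post, and leaving out the protobuf parsing that Steve Dunham documents), the compressed note blobs can be pulled out and gunzipped with a few lines of Python run against a copy of the database:

import gzip
import sqlite3
from pathlib import Path

# Path described in the post above; work on a copy rather than the live database.
db_path = Path.home() / "Library/Group Containers/group.com.apple.notes/NoteStore.sqlite"

conn = sqlite3.connect(str(db_path))
rows = conn.execute("SELECT Z_PK, ZDATA FROM ZICNOTEDATA WHERE ZDATA IS NOT NULL")

for pk, blob in rows:
    # Each ZDATA value is a gzip-compressed protocol buffers message.
    raw = gzip.decompress(blob)
    # Parsing the protobuf is the hard part (see the linked post); here we just
    # show that readable note fragments are present in the decompressed bytes.
    print(pk, raw[:80])

conn.close()

The decompressed bytes are still a protocol buffers message, so this only confirms the note content is reachable; real parsing needs the schema reverse-engineered in the linked post.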

Wednesday, 08. December 2021

Simon Willison

Weeknotes: git-history, bug magnets and s3-credentials --public

I've stopped considering my projects "shipped" until I've written a proper blog entry about them, so yesterday I finally shipped git-history, coinciding with the release of version 0.6 - a full 27 days after the first 0.1. It took way more work than I was expecting to get to this point! I wrote the first version of git-history in an afternoon, as a tool for a workshop I was presenting on Git s

I've stopped considering my projects "shipped" until I've written a proper blog entry about them, so yesterday I finally shipped git-history, coinciding with the release of version 0.6 - a full 27 days after the first 0.1.

It took way more work than I was expecting to get to this point!

I wrote the first version of git-history in an afternoon, as a tool for a workshop I was presenting on Git scraping and Datasette.

Before promoting it more widely, I wanted to make some improvements to the schema. In particular, I wanted to record only the updated values in the item_version table - which otherwise could end up duplicating a full copy of each item in the database hundreds or even thousands of times.

Getting this right took a lot of work, and I kept on getting stumped by weird bugs and edge-cases. This bug in particular added a couple of days to the project.

The whole project turned out to be something of a bug magnet, partly because of a design decision I made concerning column names.

git-history creates tables with columns that correspond to the underlying data. Since it also needs its own columns for tracking things like commits and incremental versions, I decided to use underscore prefixes for reserved columns such as _item and _version

Datasette uses underscore prefixes for its own purposes - special table arguments such as ?_facet=column-name. It's supposed to work with existing columns that use underscores by converting query string arguments like ?_item=3 into ?_item__exact=3 - but git-history was the first of my projects to really exercise this, and I kept on finding bugs. Datasette 0.59.2 and 0.59.4 both have related bug fixes, and there's a re-opened bug that I have yet to resolve.

Building the ca-fires demo also revealed a bug in datasette-cluster-map which I fixed in version 0.17.2.

s3-credentials --public

The git-history live demos are built and deployed by this GitHub Actions workflow. The workflow works by checking out three separate repos and running git-history against them. It takes advantage of that tool's ability to add just new commits to an existing database to run faster, so it needs to persist database files in between runs.

Since these files can be several hundred MBs, I decided to persist them in an S3 bucket.

My s3-credentials tool provides the ability to create a new S3 bucket along with restricted read-write credentials just for that bucket, ideal for use in a GitHub Actions workflow.

I decided to make the bucket public such that anyone can download files from it, since there was no reason to keep it private. I've been wanting to add this ability to s3-credentials for a while now, so this was the impetus I needed to finally ship that feature.

It's surprisingly hard to figure out how to make an S3 bucket public these days! It turned out the magic recipe was adding a JSON bucket policy document to the bucket granting s3:GetObject permission to principal * - here's that policy in full.
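
The policy itself is only linked above, so as a hedged illustration: a bucket policy of that shape, applied with boto3, might look like the following sketch (the bucket name is a placeholder, and this is not necessarily byte-for-byte the policy s3-credentials attaches):

import json
import boto3

bucket = "my-example-bucket"  # placeholder name

# A public-read policy of the kind described above: anyone may GET objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAllGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

Depending on the account, the bucket's Block Public Access settings may also need to be relaxed before a policy like this takes effect.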

I released s3-credentials 0.8 with a new --public option for creating public buckets - here are the release notes in full:

s3-credentials create my-bucket --public option for creating public buckets, which allow anyone with knowledge of a filename to download that file. This works by attaching this public bucket policy to the bucket after it is created. #42
s3-credentials put-object now sets the Content-Type header on the uploaded object. The type is detected based on the filename, or can be specified using the new --content-type option. #43
s3-credentials policy my-bucket --public-bucket outputs the public bucket policy that would be attached to a bucket of that name. #44

I wrote up this TIL which doubles as a mini-tutorial on using s3-credentials: Storing files in an S3 bucket between GitHub Actions runs.

datasette-hovercards

This was a quick experiment which turned into a prototype Datasette plugin. I really like how GitHub show hover card previews of links to issues in their interface:

I decided to see if I could build something similar for links within Datasette, specifically the links that show up when a column is a foreign key to another record.

Here's what I've got so far:

There's an interactive demo running on this table page.

It still needs a bunch of work - in particular I need to think harder about when the card is shown, where it displays relative to the mouse pointer, what causes it to be hidden again and how it should handle different page widths. Ideally I'd like to figure out a useful mobile / touch-screen variant, but I'm not sure how that could work.

The prototype plugin is called datasette-hovercards - I'd like to eventually merge this back into Datasette core once I'm happy with how it works.

Releases this week

git-history: 0.6.1 - (9 releases total) - 2021-12-08
Tools for analyzing Git history using SQLite

datasette-cluster-map: 0.17.2 - (20 releases total) - 2021-12-07
Datasette plugin that shows a map for any data with latitude/longitude columns

s3-credentials: 0.8 - (8 releases total) - 2021-12-07
A tool for creating credentials for accessing S3 buckets

asyncinject: 0.2a1 - (3 releases total) - 2021-12-03
Run async workflows using pytest-fixtures-style dependency injection

datasette-hovercards: 0.1a0 - 2021-12-02
Add preview hovercards to links in Datasette

github-to-sqlite: 2.8.3 - (22 releases total) - 2021-12-01
Save data from GitHub to a SQLite database

TIL this week

__init_subclass__
Storing files in an S3 bucket between GitHub Actions runs

Phil Windley's Technometria

In Memory of Kim Cameron

Summary: Kim Cameron's Laws of Identity are a foundational work in online identity. He used his technical knowledge and position to advance the state of online identity in countless ways over multiple decades. We will miss him. I got word early last week that Kim Cameron, a giant of the identity world, had passed away. I was shocked. I still am. Kim was one of those people who play

Summary: Kim Cameron's Laws of Identity are a foundational work in online identity. He used his technical knowledge and position to advance the state of online identity in countless ways over multiple decades. We will miss him.

I got word early last week that Kim Cameron, a giant of the identity world, had passed away. I was shocked. I still am. Kim was one of those people who played such a big role in my life over so many years, that it's hard to imagine he's not there any more. The week before we'd been scheduled to talk and he'd canceled cause he wasn't feeling well. He wanted to talk about verifiable credential exchange over OpenID Connect SIOP. He was still in the trenches...working on identity. I'm sad we'll never get that last talk.

I had met Kim and talked to him at several identity conferences back in the early 2000's. But my relationship with him grew into friendship through Internet Identity Workshop (IIW). We held the first IIW in October 2005. Along with many others, Kim was there to talk about his ideas and plans for identity. He'd already published his Laws of Identity and was working on a project called Information Cards.

At that first IIW, we went to dinner at the end of day one. The idea was that everyone would pay for their own meal since the workshop hadn't collected any money in the registration for that. Kim, with his typical generosity, told everyone that Microsoft would buy dinner...and drinks. Later he confided to me that he wasn't sure Microsoft would really pick up the tab—which turned out to be surprisingly large because of the alcohol—but he'd make sure it was covered either way. That started a tradition. Microsoft has sponsored the workshop dinner at every in-person IIW we've held. Not only with dinner, but in countless other ways, Kim's participation and support was instrumental in building IIW over the years.

Over the last several years I've remarked several times that Kim must be a being from the future. With his laws, Kim was telling us things about identity in 2004 that we weren't ready to hear until just the last few years. Kim not only saw the need for a set of laws to govern online identity system architecture, but also foresaw the need for an identity metasystem on the internet. Whether you view the laws as leading to the metasystem design or springing from it, they are both necessary to a future where people can live rich, authentic online lives. The laws are Kim's legacy in the identity world.

Kim's technical excellence got him a seat at the table. His position at Microsoft gave him a big voice. But what made Kim effective was his gentle approach to technical discussions, especially those he thought might be contentious. He listened, asked questions, and guided discussion. As a result of his leadership we made progress in many ways that might not have otherwise happened. The identity world will sorely miss his leadership. I will miss his company, learning from him, and, most of all, his friendship.

Other tributes to Kim can be found in these links:

Photos of Kim Cameron from Doc Searls
Remembering Kim Cameron
The Gentle Lawgiver
Rest in Peace Kim Cameron
In Memory of Kim Cameron (official obituary)
Remembering a Human, Being.

Photo Credit: Photos of Kim Cameron from Doc Searls (CC BY 2.0)

Tags: identity microsoft iiw metasystem ssi

Tuesday, 07. December 2021

Simon Willison

git-history: a tool for analyzing scraped data collected using Git and SQLite

I described Git scraping last year: a technique for writing scrapers where you periodically snapshot a source of data to a Git repository in order to record changes to that source over time. The open challenge was how to analyze that data once it was collected. git-history is my new tool designed to tackle that problem. Git scraping, a refresher A neat thing about scraping to a Git repositor

I described Git scraping last year: a technique for writing scrapers where you periodically snapshot a source of data to a Git repository in order to record changes to that source over time.

The open challenge was how to analyze that data once it was collected. git-history is my new tool designed to tackle that problem.

Git scraping, a refresher

A neat thing about scraping to a Git repository is that the scrapers themselves can be really simple. I demonstrated how to run scrapers for free using GitHub Actions in this five minute lightning talk back in March.

Here's a concrete example: California's state fire department, Cal Fire, maintain an incident map at fire.ca.gov/incidents showing the status of current large fires in the state.

I found the underlying data here:

curl https://www.fire.ca.gov/umbraco/Api/IncidentApi/GetIncidents

Then I built a simple scraper that grabs a copy of that every 20 minutes and commits it to Git. I've been running that for 14 months now, and it's collected 1,559 commits!
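
The actual scraper is a scheduled GitHub Actions workflow in the ca-fires-history repo; as an illustrative sketch only (the filename and commit message here are assumptions, not the real workflow), the same idea looks roughly like this in Python, run on a 20 minute schedule:

import subprocess
import urllib.request

URL = "https://www.fire.ca.gov/umbraco/Api/IncidentApi/GetIncidents"

# Fetch the current snapshot and overwrite the tracked file.
with urllib.request.urlopen(URL) as response:
    data = response.read()
with open("incidents.json", "wb") as f:
    f.write(data)

# Commit only if the file actually changed; a scheduler (cron, GitHub Actions)
# would run this script every 20 minutes.
subprocess.run(["git", "add", "incidents.json"], check=True)
result = subprocess.run(["git", "diff", "--cached", "--quiet"])
if result.returncode != 0:
    subprocess.run(["git", "commit", "-m", "Latest incident data"], check=True)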

The thing that excites me most about Git scraping is that it can create truly unique datasets. It's common for organizations not to keep detailed archives of what changed and where, so by scraping their data into a Git repository you can often end up with a more detailed history than they maintain themselves.

There's one big challenge though; having collected that data, how can you best analyze it? Reading through thousands of commit differences and eyeballing changes to JSON or CSV files isn't a great way of finding the interesting stories that have been captured.

git-history

git-history is the new CLI tool I've built to answer that question. It reads through the entire history of a file and generates a SQLite database reflecting changes to that file over time. You can then use Datasette to explore the resulting data.

Here's an example database created by running the tool against my ca-fires-history repository. I created the SQLite database by running this in the repository directory:

git-history file ca-fires.db incidents.json \
  --namespace incident \
  --id UniqueId \
  --convert 'json.loads(content)["Incidents"]'

In this example we are processing the history of a single file called incidents.json.

We use the UniqueId column to identify which records are changed over time as opposed to newly created.

Specifying --namespace incident causes the created database tables to be called incident and incident_version rather than the default of item and item_version.

And we have a fragment of Python code that knows how to turn each version stored in that commit history into a list of objects compatible with the tool, see --convert in the documentation for details.

Let's use the database to answer some questions about fires in California over the past 14 months.

The incident table contains a copy of the latest record for every incident. We can use that to see a map of every fire:

This uses the datasette-cluster-map plugin, which draws a map of every row with a valid latitude and longitude column.

Where things get interesting is the incident_version table. This is where changes between different scraped versions of each item are recorded.

Those 250 fires have 2,060 recorded versions. If we facet by _item we can see which fires had the most versions recorded. Here are the top ten:

Dixie Fire 268
Caldor Fire 153
Monument Fire 65
August Complex (includes Doe Fire) 64
Creek Fire 56
French Fire 53
Silverado Fire 52
Fawn Fire 45
Blue Ridge Fire 39
McFarland Fire 34

This looks about right - the larger the number of versions the longer the fire must have been burning. The Dixie Fire has its own Wikipedia page!

Clicking through to the Dixie Fire lands us on a page showing every "version" that we captured, ordered by version number.

git-history only writes values to this table that have changed since the previous version. This means you can glance at the table grid and get a feel for which pieces of information were updated over time:

The ConditionStatement is a text description that changes frequently, but the other two interesting columns look to be AcresBurned and PercentContained.

That _commit column is a foreign key to the commits table, which records commits that have been processed by the tool - mainly so that when you run it a second time it can pick up where it finished last time.

We can join against commits to see the date that each version was created. Or we can use the incident_version_detail view which performs that join for us.

Using that view, we can filter for just rows where _item is 174 and AcresBurned is not blank, then use the datasette-vega plugin to visualize the _commit_at date column against the AcresBurned numeric column... and we get a graph of the growth of the Dixie Fire over time!

To review: we started out with a GitHub Actions scheduled workflow grabbing a copy of a JSON API endpoint every 20 minutes. Thanks to git-history, Datasette and datasette-vega we now have a chart showing the growth of the longest-lived California wildfire of the last 14 months over time.

A note on schema design

One of the hardest problems in designing git-history was deciding on an appropriate schema for storing version changes over time.

I ended up with the following (edited for clarity):

CREATE TABLE [commits] (
   [id] INTEGER PRIMARY KEY,
   [hash] TEXT,
   [commit_at] TEXT
);
CREATE TABLE [item] (
   [_id] INTEGER PRIMARY KEY,
   [_item_id] TEXT,
   [IncidentID] TEXT,
   [Location] TEXT,
   [Type] TEXT,
   [_commit] INTEGER
);
CREATE TABLE [item_version] (
   [_id] INTEGER PRIMARY KEY,
   [_item] INTEGER REFERENCES [item]([_id]),
   [_version] INTEGER,
   [_commit] INTEGER REFERENCES [commits]([id]),
   [IncidentID] TEXT,
   [Location] TEXT,
   [Type] TEXT
);
CREATE TABLE [columns] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [name] TEXT
);
CREATE TABLE [item_changed] (
   [item_version] INTEGER REFERENCES [item_version]([_id]),
   [column] INTEGER REFERENCES [columns]([id]),
   PRIMARY KEY ([item_version], [column])
);

As shown earlier, records in the item_version table represent snapshots over time - but to save on database space and provide a neater interface for browsing versions, they only record columns that had changed since their previous version. Any unchanged columns are stored as null.

There's one catch with this schema: what do we do if a new version of an item sets one of the columns to null? How can we tell the difference between that and a column that didn't change?

I ended up solving that with an item_changed many-to-many table, which uses pairs of integers (hopefully taking up as little space as possible) to record exactly which columns were modified in which item_version records.
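
To illustrate how those two tables work together (a sketch against the default item/item_version/item_changed naming shown above, not code from git-history itself), reconstructing the full state of an item at a given version means replaying only the columns marked as changed:

import sqlite3

def state_at_version(db_path, item_id, up_to_version):
    """Replay item_version rows for one item, applying only the columns that
    item_changed marks as modified, to rebuild the record's full state."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    state = {}
    versions = conn.execute(
        "select * from item_version where _item = ? and _version <= ? order by _version",
        (item_id, up_to_version),
    )
    for version in versions:
        changed = conn.execute(
            "select columns.name from item_changed "
            "join columns on item_changed.column = columns.id "
            "where item_changed.item_version = ?",
            (version["_id"],),
        )
        for row in changed:
            # A column can legitimately change *to* null; item_changed is what
            # tells us this null is an update rather than "no change".
            state[row["name"]] = version[row["name"]]
    conn.close()
    return state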

The item_version_detail view displays columns from that many-to-many table as JSON - here's a filtered example showing which columns were changed in which versions of which items:

Here's a SQL query that shows, for ca-fires, which columns were updated most often:

select
  columns.name,
  count(*)
from incident_changed
  join incident_version on incident_changed.item_version = incident_version._id
  join columns on incident_changed.column = columns.id
where incident_version._version > 1
group by columns.name
order by count(*) desc

Updated: 1785
PercentContained: 740
ConditionStatement: 734
AcresBurned: 616
Started: 327
PersonnelInvolved: 286
Engines: 274
CrewsInvolved: 256
WaterTenders: 225
Dozers: 211
AirTankers: 181
StructuresDestroyed: 125
Helicopters: 122

Helicopters are exciting! Let's find all of the fires which had at least one record where the number of helicopters changed (after the first version). We'll use a nested SQL query:

select * from incident
where _id in (
  select _item from incident_version
  where _id in (
    select item_version from incident_changed where column = 15
  )
  and _version > 1
)

That returned 19 fires that were significant enough to involve helicopters - here they are on a map:

Advanced usage of --convert

Drew Breunig has been running a Git scraper for the past 8 months in dbreunig/511-events-history against 511.org, a site showing traffic incidents in the San Francisco Bay Area. I loaded his data into this example sf-bay-511 database.

The sf-bay-511 example is useful for digging more into the --convert option to git-history.

git-history requires recorded data to be in a specific shape: it needs a JSON list of JSON objects, where each object has a column that can be treated as a unique ID for purposes of tracking changes to that specific record over time.

The ideal tracked JSON file would look something like this:

[
  {
    "IncidentID": "abc123",
    "Location": "Corner of 4th and Vermont",
    "Type": "fire"
  },
  {
    "IncidentID": "cde448",
    "Location": "555 West Example Drive",
    "Type": "medical"
  }
]

It's common for data that has been scraped to not fit this ideal shape.

The 511.org JSON feed can be found here - it's a pretty complicated nested set of objects, and there's a bunch of data in there that's quite noisy without adding much to the overall analysis - things like a updated timestamp field that changes in every version even if there are no changes, or a deeply nested "extension" object full of duplicate data.

I wrote a snippet of Python to transform each of those recorded snapshots into a simpler structure, and then passed that Python code to the --convert option to the script:

#!/bin/bash
git-history file sf-bay-511.db 511-events-history/events.json \
  --repo 511-events-history \
  --id id \
  --convert '
data = json.loads(content)
if data.get("error"):
    # {"code": 500, "error": "Error accessing remote data..."}
    return
for event in data["Events"]:
    event["id"] = event["extension"]["event-reference"]["event-identifier"]
    # Remove noisy updated timestamp
    del event["updated"]
    # Drop extension block entirely
    del event["extension"]
    # "schedule" block is noisy but not interesting
    del event["schedule"]
    # Flatten nested subtypes
    event["event_subtypes"] = event["event_subtypes"]["event_subtype"]
    if not isinstance(event["event_subtypes"], list):
        event["event_subtypes"] = [event["event_subtypes"]]
    yield event
'

The single-quoted string passed to --convert is compiled into a Python function and run against each Git version in turn. My code loops through the nested Events list, modifying each record and then outputting them as an iterable sequence using yield.

A few of the records in the history were server 500 errors, so the code block knows how to identify and skip those as well.

When working with git-history I find myself spending most of my time iterating on these conversion scripts. Passing strings of Python code to tools like this is a pretty fun pattern - I also used it for sqlite-utils convert earlier this year.
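
The general shape of that pattern - turning a user-supplied string into a callable - can be sketched in a few lines of Python; this is an illustration of the idea, not how git-history or sqlite-utils actually implement it:

def compile_convert(code: str):
    """Wrap a user-supplied snippet in a function body and return the function.
    A bare expression is treated as the return value; multi-line snippets can
    use return or yield themselves."""
    body = code if "return" in code or "yield" in code else f"return {code}"
    indented = "\n".join("    " + line for line in body.splitlines())
    source = f"def convert(content):\n{indented}\n"
    namespace = {"json": __import__("json")}
    exec(source, namespace)
    return namespace["convert"]

# Example: the one-liner used for the ca-fires demo earlier in this post.
convert = compile_convert('json.loads(content)["Incidents"]')
print(convert('{"Incidents": [{"UniqueId": "abc"}]}'))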

Trying this out yourself

If you want to try this out for yourself the git-history tool has an extensive README describing the other options, and the scripts used to create these demos can be found in the demos folder.

The git-scraping topic on GitHub now has over 200 repos now built by dozens of different people - that's a lot of interesting scraped data sat there waiting to be explored!


Quoting Martin O'Leary

One popular way of making money through cryptocurrency is to start a new currency, while retaining a large chunk of it for yourself. As a result, there are now thousands of competing cryptocurrencies in operation, with relatively little technical difference between them. In order to succeed, currency founders must convince people that their currency is new and different, and crucially, that the b

One popular way of making money through cryptocurrency is to start a new currency, while retaining a large chunk of it for yourself. As a result, there are now thousands of competing cryptocurrencies in operation, with relatively little technical difference between them. In order to succeed, currency founders must convince people that their currency is new and different, and crucially, that the buyer understands this while other less savvy investors do not. Wild claims, fanciful economic ideas and rampant technobabble are the order of the day. This is a field that thrives on mystique, and particularly preys on participants’ fear of missing out on the next big thing.

Martin O'Leary

Monday, 06. December 2021

Doc Searls Weblog

The gentle lawgiver

This is about credit where due, and unwanted by the credited. I speak here of Kim Cameron, a man whose modesty was immense because it had to be, given the size of his importance to us all. See, to the degree that identity matters, and disparate systems getting along with each other matters—in both cases for […]

This is about credit where due, and unwanted by the credited. I speak here of Kim Cameron, a man whose modesty was immense because it had to be, given the size of his importance to us all.

See, to the degree that identity matters, and disparate systems getting along with each other matters—in both cases for the sakes of each and all—Kim’s original wisdom and guidance matters. And that mattering is only beginning to play out.

But Kim isn’t here to shake his head at what I just said, because (as I reported in my prior post) he passed last week.

While I expect Kim’s thoughts and works to prove out over time, the point I want to make here is that it is possible for an open and generous person in a giant company to use its power for good, and not play the heavy doing it. That’s the example Kim set in the two decades he was the top architect of Microsoft’s approach to digital identity and meta systems (that is, systems that make disparate systems work as if just one).

I first saw him practice these powers at the inaugural meeting of a group that called itself the Identity Gang. That name was given to the group by Steve Gillmor, who hosted a Gillmor Gang podcast (here’s the audio) on the topic of digital identity, on December 31, 2004: New Years Eve. To follow that up, seven of the nine people in that podcast, plus about as many more, gathered during a break at Esther Dyson‘s PC Forum conference in Scottsdale, Arizona, on March 20, 2005. Here is an album of photos I shot of the Gang, sitting around an outside table. (The shot above is one of them.) There was a purpose to the meeting: deciding what we should do next, for all of the very different identity-related projects we were working on—and for all the other possible developments that also needed support.

Kim was the most powerful participant, owing both to his position at Microsoft and for having issued, one by one, Seven Laws of Identity, over the preceding months. Like the Ten Commandments, Kim’s laws are rules which, even if followed poorly, civilize the world.

Kim always insisted that his Laws were not carved on stone tablets and that he was no burning bush, but those laws were, and remain, enormously important. And I doubt that would be so without Kim’s 200-proof Canadian modesty.

The next time the Identity Gang met was in October of that year, in Berkeley. By then the gang had grown to about a hundred people. Organized by Kaliya (IdentityWoman) Young, Phil Windley, and myself (but mostly the other two), the next meeting was branded Internet Identity Workshop (IIW), and it has been held every Fall and Spring since then at the Computer History Museum (and, on three pandemic occasions, online), with hundreds, from all over the world, participating every time.

IIW is an open space workshop, meaning that it consists entirely of breakouts on topics chosen and led by the participants. There are no keynotes, no panels, no vendor booths. Sponsor involvement is limited to food, coffee, free wi-fi, projectors, and other graces that carry no other promotional value. (Thanks to Kim, it has long been a tradition for Microsoft to sponsor an evening at a local restaurant and bar.) Most importantly, the people attending from big companies and startups alike are those with the ability to engineer or guide technical developments that work for everyone and not for just those companies.

I’m biased, but I believe IIW is the most essential and productive conference of any kind, in the world. Conversations and developments of many kinds are moved forward at every one of them. Examples of developments that might not be the same today but for IIW include OAuth, OpenID, personal clouds, picos, SSI, VRM, KERI, and distributed ledgers.

I am also sure that progress made around digital identity would not be the same (or as advanced) without Kim Cameron’s strong and gentle guidance. Hats off to his spirit, his laws, and his example.

 

 


Damien Bod

Blazor WASM hosted in ASP.NET Core templates with Azure B2C and Azure AD authentication using Backend for Frontend (BFF)

I have implemented many Blazor WASM ASP.NET Core hosted applications now for both Azure AD and Azure B2C authentication. I always implement security for this type of application now using the Backend for Frontend (BFF) security architecture and can remove the tokens from the client. This is also what I recommend. At present, no Microsoft […]

I have implemented many Blazor WASM ASP.NET Core hosted applications now for both Azure AD and Azure B2C authentication. I always implement security for this type of application now using the Backend for Frontend (BFF) security architecture and can remove the tokens from the client. This is also what I recommend. At present, no Microsoft templates exist for this and it takes too much effort to set this up every time I start a new project.

To fill this gap, I created two templates to speed up the development process:

Blazor.BFF.AzureAD.Template
Blazor.BFF.AzureB2C.Template

The two separate Nuget packages were created, one for Azure AD and one for Azure B2C, due to the restrictions and differences in the identity providers. Both packages use Microsoft.Identity.Web to authenticate and also use Microsoft Graph to access the profile data. I have included the typical security headers required using the NetEscapades.AspNetCore.SecurityHeaders Nuget package. The security headers and the CSP are set up ready for production deployment. (As best as Blazor allows)

Using the templates

You can install the Azure B2C template as follows:

dotnet new -i Blazor.BFF.AzureB2C.Template

and the Azure AD template with

dotnet new -i Blazor.BFF.AzureAD.Template

Then you can create a new project using the dotnet CLI

dotnet new blazorbffb2c -n YourCompany.MyB2cBlazor

or

dotnet new blazorbffaad -n YourCompany.MyADBlazor

You need to create the Azure App registrations for the applications as required. If using Azure B2C, the user flow needs to be created. This is well documented in the Microsoft.Identity.Web github repo. The Graph permissions also need to be added and configured in the app.settings.json. The app.settings.json has been configured with the expected values. Once everything is configured, you can run the application and also deploy this to an Azure Web App or whatever.

Each template implements an API call and a user profile view using the Microsoft Graph data.

If you have any comments, or ways of improving these templates, please create an issue or a PR in the github repo.

Links

https://github.com/damienbod/Blazor.BFF.AzureAD.Template

https://github.com/damienbod/Blazor.BFF.AzureB2C.Template

https://github.com/AzureAD/microsoft-identity-web

https://github.com/andrewlock/NetEscapades.AspNetCore.SecurityHeaders

Sunday, 05. December 2021

Altmode

Sussex Day 11: Padding(ton) Home

Sunday, November 14, 2021 We got an early start, said good-bye to Celeste (who got to stay in the room a little longer), and headed for Paddington Station about 7 am to catch the Heathrow Express. We bought our tickets, got out on the platform, and were greeted with a message board saying that there […]

Sunday, November 14, 2021

We got an early start, said good-bye to Celeste (who got to stay in the room a little longer), and headed for Paddington Station about 7 am to catch the Heathrow Express. We bought our tickets, got out on the platform, and were greeted with a message board saying that there were delays on the line and that some trains had been canceled. This made us a little nervous, but the Network Rail application on my phone reassured us that there would, in fact, be a train soon. Although we had a bit more than the usual wait for Heathrow Express, the train proceeded normally and was not excessively crowded.

After the usual long walk, we reached the ticket counter and checked in. They were thorough in checking our vaccination and COVID testing status, although not to the point of actually checking the QR codes associated with each. After checking bags, there was another long walk to the vicinity of the gate. United’s lounge in London is still closed, but in the meantime they have an arrangement with Singapore Airlines for the use of their lounge where we were able to get breakfast.

At the gate, Kenna was diverted for extra security screening because the “SSSS” designation was printed on her boarding pass. Following that inconvenience, our flight departed on time, which, given that we had only a 2-hour layover in Chicago (including customs and immigration), we appreciated. However, our arrival gate was occupied by another plane, resulting in about a 30-minute delay, which made us understandably nervous.

Greenland from the air

Having seen US Customs signs back in San Francisco promoting the Mobile Passport immigration application for our phones, we entered our passport information and customs declaration. But after racing to the immigration hall, we were told, “We don’t use that any more. Get in line.” More nervousness about the time. After getting through Customs (which left us outside security), we took the tram to Terminal 1 for our flight to San Francisco.

Here we noticed that Kenna didn’t have the TSA Precheck designation on her boarding card, probably as a result of the SSSS designation earlier. It may not have mattered; there were signs saying precheck was closed and the people checking boarding passes didn’t seem to know. So we both went through the “slow line”, and unfortunately Kenna set something off and had to go through some extra screening. Apparently they thought there was something about one of her shoes, which they ran through the X-ray machine again; more delay. It was interesting that there were a number of women having their shoes rechecked at the same time.

We raced to our gate, nearly the furthest from the security checkpoint, and made it in enough time, but with not much to spare. The ride to San Francisco was unremarkable, and we collected our bags and caught our ride home, according to plan.

Epilogue

Arriving home we were severely jet lagged as expected, but tried to stay up as late as we could manage. After a few hours of sleep, I awoke about 2 am. I could hear some water dripping, which I attributed to a downspout being clogged with leaves following some recent rainfall. So I got up to investigate, and instead discovered that there was a substantial amount of water dripping from the ceiling into our guest room. It turns out that a hot water pipe in the attic had developed a pinhole leak and over time had soaked one wall. So we now have a new project.

This article is the final installment in a series about our recent travels to southern England. To see the introductory article in the series, click here.

Saturday, 04. December 2021

Altmode

Sussex Day 10: London

Saturday, November 13, 2021 London isn’t in Sussex, that’s just the theme of the trip. Celeste expressed a lot of interest in visiting the Imperial War Museum, which none of us had visited, so we decided to make that our first destination. After a quick Pret a Manger breakfast, we took the Tube to the […]

Saturday, November 13, 2021

London isn’t in Sussex, that’s just the theme of the trip.

Celeste expressed a lot of interest in visiting the Imperial War Museum, which none of us had visited, so we decided to make that our first destination. After a quick Pret a Manger breakfast, we took the Tube to the south side of London. The first thing you notice is the battleship guns at the front. My interest was also piqued by a short segment of Berlin Wall near the front entrance.

The museum has a large collection on several floors, with areas emphasizing World War I, World War II, the Cold War, the Holocaust, etc. One could easily spend several days to see all of the exhibits. Toward the end of our visit, we went in to the World War II gallery (having already seen quite a number of exhibits dealing with WW II), and it went on…and on. The gallery was very large and went into great detail, including many stories about participants in the war, German as well as Allied. We hadn’t expected the gallery to be nearly as large as it was, and might have allocated more time if we had.

Early in the afternoon we tired of the museum and decided to look for lunch. We thought we might like German food, so guided by our phones, we walked north and came to Mercato Metropolitano, a large semi-outdoor food court with sustainable food from many world cuisines. Each of us selected something we like, but we never found the German restaurant we thought was there.

Continuing north, we got to the Borough Market, a large trading market established in 1756. Perhaps because it was a Saturday, it was very crowded. Normally this might not have been as notable, except that since the COVID epidemic we have avoided, and become unaccustomed to, crowds. We walked through quickly and then continued on to the Thames, where we went along the south shore to the Millennium Bridge. We walked out on the bridge, took some pictures, and continued west to the Westminster Bridge. All along the way there were people — lots of people.

After crossing the Westminster bridge, we took a short Tube ride to the West End. Again, everything was crowded. We tried a couple of places for dinner, but nothing was available without an advance booking. The Five Guys burger restaurant was jammed, and there was even a long queue at McDonalds (!). We couldn’t figure out the attraction there.

We finally settled on Itsu, the same Asian-themed fast food chain that we had tried in Brighton. We were able to find a table and had an enjoyable light meal.

The big event of the day was this evening: we had tickets to Back to the Future: The Musical, playing at the Adelphi Theatre on The Strand. This is a new show that just opened in July 2021 and has not yet made it to the United States. The theatre was, as expected, nearly full. But we had been told that COVID vaccination, negative tests, and the wearing of masks would be required. In fact, we were never asked about vaccinations or tests, and the majority of the audience did not wear masks. We felt somewhat less safe as a result.

Still, the show was very enjoyable. As Celeste pointed out, this is a “tech show” with the strong point being special effects. Most of the performances, particularly Doc Brown, were excellent as well, although Celeste noted that some of the actors had trouble with American accents.

We took the Tube back to our hotel and are retiring quickly. Tomorrow will be an early day for Jim and Kenna’s flight back home.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Friday, 03. December 2021

Doc Searls Weblog

Remembering Kim Cameron

Got word yesterday that Kim Cameron had passed. Hit me hard. Kim was a loving and loved friend. He was also a brilliant and influential thinker and technologist. That’s Kim, above, speaking at the 2018 EIC conference in Germany. His topics were The Laws of Identity on the Blockchain and Informational Self-Determination in a Post Facebook/Cambridge Analytica Era (in […]

Got word yesterday that Kim Cameron had passed.

Hit me hard. Kim was a loving and loved friend. He was also a brilliant and influential thinker and technologist.

That’s Kim, above, speaking at the 2018 EIC conference in Germany. His topics were The Laws of Identity on the Blockchain and Informational Self-Determination in a Post Facebook/Cambridge Analytica Era (in the Ownership of Data track).

The laws were seven:

1. User control and consent
2. Minimum disclosure for a constrained use
3. Justifiable parties
4. Directed identity (meaning pairwise, known only to the person and the other party)
5. Pluralism of operators
6. Human integration
7. Consistent experience across contexts

He wrote these in 2004, when he was still early in his tenure as Microsoft’s chief architect for identity (one of several similar titles he held at the company). Perhaps more than anyone at Microsoft—or at any big company—Kim pushed constantly toward openness, inclusivity, compatibility, cooperation, and the need for individual agency and scale. His laws, and other contributions to tech, are still only beginning to have full influence. Kim was way ahead of his time, and it’s a terrible shame that his own is up. He died of cancer on November 30.

But Kim was so much more—and other—than his work. He was a great musician, teacher (in French and English), thinker, epicure, traveler, father, husband, and friend. As a companion, he was always fun, as well as curious, passionate, caring, gracious. Pick a flattering adjective and it likely applies.

I am reminded of what a friend said of Amos Tversky, another genius of seemingly boundless vitality who died too soon: “Death is unrepresentative of him.”

That’s one reason it’s hard to think of Kim in the past tense, and why I resisted the urge to update Kim’s Wikipedia page earlier today. (Somebody has done that now, I see.)

We all get our closing parentheses. I’ve gone longer without closing mine than Kim did before closing his. That also makes me sad, not that I’m in a hurry. Being old means knowing you’re in the exit line, but okay with others cutting in. I just wish this time it wasn’t Kim.

Britt Blaser says life is like a loaf of bread. It’s one loaf no matter how many slices are in it. Some people get a few slices, others many. For the sake of us all, I wish Kim had more.

Here is an album of photos of Kim, going back to 2005 at Esther Dyson’s PC Forum, where we had the first gathering of what would become the Internet Identity Workshop, the 34th of which is coming up next Spring. As with many other things in the world, it wouldn’t be the same—or here at all—without Kim.

Bonus links:

Kim’s official obituary
Rest in Peace, Kim Cameron, by Joerg Resch
Remembering Kim Cameron, by Phil Windley
Remembering a Human, Being, by Britt Blaser
Cannibal Lobsters and Stolen Fingerprints – remembering Kim Cameron, by Mary Branscombe
Stories of Kim Cameron, by Mike Jones

Altmode

Sussex Day 9: Brighton to London

Friday, November 12, 2021 Since it is now 2 days before our return to the United States, today was the day for our pre-trip COVID test. We were a little nervous about that because, of course, it determines whether we return as planned. Expecting a similar experience as for our Day 2 test, we were […]

Friday, November 12, 2021

Since it is now 2 days before our return to the United States, today was the day for our pre-trip COVID test. We were a little nervous about that because, of course, it determines whether we return as planned. Expecting a similar experience as for our Day 2 test, we were a bit surprised that this time we would have to do a proctored test where the proctor would watch us take the test via video chat. The next surprise was that you seem to need both a smartphone to run their app and some other device for the chat session. So we got out our iPads, and (third surprise) there was apparently a bug in their application causing it not to work on an iPad. So we got out my Mac laptop and (fourth surprise) couldn’t use my usual browser, Firefox, but could fortunately use Safari. Each test took about half an hour, including a 15-minute wait for the test to develop. Following the wait, a second video chat was set up where they read the test with you and issued your certificate. Very fortunately, both of our tests were negative.

We checked out of the apartment/hotel just before checkout time and stored our bags. Then the question was what to do until Celeste finished classes so we could all take the train to London. The answer was the Sea Life Brighton, apparently the oldest aquarium in the world. While not an extensive collection, many of the exhibits were in a classic style with ornate frames supporting the glass windows. There was a very enjoyable tunnel where you can sit while fish (and turtles!) swim overhead. The aquarium covered a number of regions of the world, with more of an emphasis on fresh-water fish than many others we have seen.

After browsing a bookstore for a while, we collected our bags and headed for the train station. Trains run to Victoria Station in London every half hour, and fortunately that connected well with the train Celeste took from Falmer to meet us.

After the train trip and Tube ride to Paddington Station, we walked the short distance to our hotel, a newly renovated boutique hotel called Inhabit. We chose it largely because it had nice triple rooms, including an actual bed (not sofa bed) for Celeste. No London trip would be complete without a hotel where it’s necessary to lug your bags up a flight of stairs, but fortunately this one only required a single flight. Our room was modern and comfortable.

I had booked a table at the Victoria, a pub in the Paddington area, and we were seated in a pleasant and not noisy dining room upstairs. Dinner was excellent. Upon returning to the hotel, Celeste immediately collapsed for the night on her cozy bed.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Thursday, 02. December 2021

Altmode

Sussex Day 8: Hove and Skating

Thursday, November 11, 2021 While Celeste was in classes, Kenna and I set out on foot for Hove, Brighton’s “twin” city to the west. We had a rather pleasant walk through a shopping district, but there wasn’t much remarkable to see. In Hove, we turned south and followed the main road along the Channel back […]

Thursday, November 11, 2021

While Celeste was in classes, Kenna and I set out on foot for Hove, Brighton’s “twin” city to the west. We had a rather pleasant walk through a shopping district, but there wasn’t much remarkable to see. In Hove, we turned south and followed the main road along the Channel back to the west. We stopped to look at one of the characteristic crescent-shaped residential developments, and continued toward Brighton. We considered going on the i360 observation tower, but it wasn’t particularly clear and the expense didn’t seem worth it.

Celeste and a friend of hers (another exchange student from Colorado) joined us in the afternoon to go ice skating at the Royal Pavilion Ice Rink. While I am used to hockey skates, it was a bit of an adjustment to the others who are used to the toe picks on figure skates. We all got the hang of it; the ice was beautifully maintained (although with some puddles) and the rink was not particularly crowded for our 3 pm session.

After skating we sat in the attached cafe to chat until it was time for dinner, which we had at an Italian restaurant, Bella Italia, in the Lanes.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Sussex Day 7: Pavilion and Museum

Wednesday, November 10, 2021 Celeste has a busy class schedule the early part of the day, so Kenna and I set out on our own, first for a hearty breakfast at Billie’s Cafe and then to the Royal Pavilion, one of the sightseeing highlights of Brighton. Originally a country estate, it was remodeled by King […]

Wednesday, November 10, 2021

Celeste has a busy class schedule the early part of the day, so Kenna and I set out on our own, first for a hearty breakfast at Billie’s Cafe and then to the Royal Pavilion, one of the sightseeing highlights of Brighton. Originally a country estate, it was remodeled by King George IV into an ornate building, with the exterior having an Indian theme and the interior extensively decorated and furnished in Chinese style.

Brighton’s Royal Pavilion has had a varied history, having been of less interest to Queen Victoria (George IV’s successor to the throne), who moved most of the furnishings to London and sold the building to the City of Brighton. Over the years it has been refurnished in the original style and with many of the original furnishings, some of which have been loaned by Queen Elizabeth. The Pavilion was in the process of being decorated for Christmas, which reminded us of a visit we made two years ago to Filoli in California.

After the Pavilion, we went across the garden to the Brighton Museum, which had a wide range of exhibits, from ancient history of the British Isles and ancient Egypt to LGBT styles of the late 20th century and modern furniture.

Having finished her classes, Celeste joined us for lunch at Itsu, one of a chain of Asian-inspired fast food restaurants. We then returned with Celeste to the museum to see a bit more and allow her time to do some research she had planned.

We then made our way behind the Pavilion, where a seasonal ice rink is set up for recreational ice skating. With its location next to the Pavilion it is a particularly scenic place to skate. We are looking forward to doing that tomorrow.

Celeste returned to campus, and Kenna and I, having had a substantial lunch, opted for a light dinner at Ten Green Bottles, a local wine bar.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Wednesday, 01. December 2021

Identity Woman

Joining Secure Justice Advisory Board

I am pleased to share that I have joined the Secure Justice Advisory board. I have known Brian Hofer since he was one of the leaders within Oakland Privacy that successfully resisted the Domain Awareness Center for Oakland. I wrote a guest blog post about a philosophy of activism and theory of change called Engaging […] The post Joining Secure Justice Advisory Board appeared first on Identity Woman.

I am pleased to share that I have joined the Secure Justice Advisory board. I have known Brian Hofer since he was one of the leaders within Oakland Privacy that successfully resisted the Domain Awareness Center for Oakland. I wrote a guest blog post about a philosophy of activism and theory of change called Engaging […]

The post Joining Secure Justice Advisory Board appeared first on Identity Woman.


MyDigitalFootprint

"Hard & Fast" Vs "Late & Slow"

The title might sound like a movie but this article is about unpacking decision making. We need leaders to be confident in their decisions so we can hold them accountable. We desire leaders to lead, wanting them to be early. They achieve this by listening to the signals and reacting before it is obvious to the casual observer. However, those in leadership who we hold accountable do not want to […]
The title might sound like a movie but this article is about unpacking decision making.

We need leaders to be confident in their decisions so we can hold them accountable. We desire leaders to lead, wanting them to be early. They achieve this by listening to the signals and reacting before it is obvious to the casual observer. However, those in leadership whom we hold accountable do not want to make the “wrong” decisions. A wrong decision can mean liability, loss of reputation or being perceived as too risky. A long senior leadership career requires navigating a careful path between not taking too much risk by going too “early”, which leads to failure, and not being so late that anyone could have made the decision earlier, which looks incompetent. Easy leadership does not look like leadership, as it finds a path of being neither early nor late (the majority).



When we unpack leadership trends over the past 100 years, we find ideas such as improving margin, diversification, reduction, speed to market, finance-led decisions, data-led, customer first, agile, just-in-time, customer centricity, digital first, personalisation, automated decisions, innovation, transformation, ethics, diversity, privacy by design, shareholder primacy, stakeholder management, re-engineering and outsourcing, to name a few. Over the same period of time, our ideas of leadership styles have also evolved.



There is an inference or hypothesis that we can test: that our approach to risk means we have the leaders we now deserve. Whether our appetite for risk creates the leadership we have, or leadership manages risk towards what we want, is a cause-and-effect problem arising from the complex market we operate in.

The Ladder of Inference, shown below, is a concept developed by the late Harvard professor Chris Argyris to help explain why anyone looking at the same set of evidence can draw very different conclusions. However, the point is that we want leadership with the courage to make decisions that are “hard and fast”, but what we get is “late and slow”. Being data-led, waiting for the data and following the model all confirm that the decisions we are taking are late and slow. We know there is a gap; it is just hard to know why. Hard and fast occurs when there is a lack of data or evidence and rests on judgement rather than confirmation, the very things we value but penalise at the same time.





Right now we see this in how governments have reacted to COVID. With hindsight we can conclude that no country's leadership got it right, and the majority appear to continue to get it wrong, believing that voters will not vote for them if they take the hard choices. “Follow the science” and “follow the data” make sure we are late and slow.

Climate change and COP26. There will never be enough data and waiting for more data confirms our need to manage to a risk model that does not account for the environment with the same weight as finance.

Peak Paradox

The Peak Paradox framework forces us to address the question “what are we optimising for?” Previous articles have highlighted the issues with decision making at Peak Paradox; however, at each point we should also consider the leadership style of “Hard & Fast versus Late & Slow”.





The Peak Paradox model gives us a position in space; at each point, thinking about hard & fast vs late & slow introduces a concept of time and direction into the model.

Tuesday, 30. November 2021

Altmode

Sussex Day 6: Downtime

Tuesday, November 9, 2021 Somewhat at the midpoint of our trip, it was time to take care of a few things like laundry. It’s also time for the thrice-annual Internet Engineering Task Force meeting, which was supposed to be in Madrid, but is being held online (again) due to the pandemic. I co-chaired a session […]

Tuesday, November 9, 2021

Somewhat at the midpoint of our trip, it was time to take care of a few things like laundry. It’s also time for the thrice-annual Internet Engineering Task Force meeting, which was supposed to be in Madrid, but is being held online (again) due to the pandemic. I co-chaired a session from noon to 2 pm local time today, so I needed to be at the hotel for that. Meanwhile Kenna and Celeste did some exploring around the little shops in the Brighton Lanes.

Our downtime day also gave us an opportunity to do some laundry. One of the attractive features of our “aparthotel” is a compact combination washer/dryer. Our room also came with a couple of detergent pods, which were unfortunately and unexpectedly heavily scented. We will be using our own detergent in the future. The dryer was slow, but it did the job.

IETF virtual venue

I am again thankful for the good internet service here; the meeting went without a hitch (my co-chair is in Melbourne, Australia). Kenna and Celeste brought lunch from Pret a Manger to eat between meeting sessions I needed to attend. Following the second session we went off for dinner at a pizza place we had discovered, Franco Manca. The pizza and surroundings were outstanding; we would definitely return (and Celeste probably will). We then saw Celeste off to her bus back to campus and we returned to our hotel.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Matt Flynn: InfoSec | IAM

Introducing OCI IAM Identity Domains

A little over a year ago, I switched roles at Oracle and joined the Oracle Cloud Infrastructure (OCI) Product Management team working on Identity and Access Management (IAM) services. It's been an incredibly interesting (and challenging) year leading up to our release of OCI IAM identity domains. We merged an enterprise-class Identity-as-a-Service (IDaaS) solution with our OCI-native IAM service […]

A little over a year ago, I switched roles at Oracle and joined the Oracle Cloud Infrastructure (OCI) Product Management team working on Identity and Access Management (IAM) services. It's been an incredibly interesting (and challenging) year leading up to our release of OCI IAM identity domains.

We merged an enterprise-class Identity-as-a-Service (IDaaS) solution with our OCI-native IAM service to create a cloud platform IAM service unlike any other. We encountered numerous challenges along the way that would have been much easier if we allowed for customer interruption. But we had a key goal to not cause any interruptions or changes in functionality to our thousands of existing IDaaS customers. It's been immeasurably impressive to watch the development organization attack and conquer those challenges.

Now, with a few clicks from the OCI admin console, customers can create self-contained IDaaS instances to accommodate a variety of IAM use-cases. And this is just the beginning. The new, upgraded OCI IAM service serves as the foundation for what's to come. And I've never been more optimistic about Oracle's future in the IAM space.

Here's a short excerpt from our blog post Introducing OCI IAM Identity Domains:

"Over the past five years, Oracle Identity Cloud Service (IDCS) has grown to support thousands of customers and currently manages hundreds of millions of identities. Current IDCS customers enjoy a broad set of Identity and Access Management (IAM) features for authentication (federated, social, delegated, adaptive, multi-factor authentication (MFA)), access management, manual or automated identity lifecycle and entitlement management, and single sign-on (SSO) (federated, gateways, proxies, password vaulting).

In addition to serving IAM use cases for workforce and consumer access scenarios, IDCS has frequently been leveraged to enhance IAM capabilities for Oracle Cloud Infrastructure (OCI) workloads. The OCI Identity and Access Management (OCI IAM) service, a native OCI service that provides the access control plane for Oracle Cloud resources (networking, compute, storage, analytics, etc.), has provided the IAM framework for OCI via authentication, access policies, and integrations with OCI security approaches such as compartments and tagging. OCI customers have adopted IDCS for its broader authentication options, identity lifecycle management capabilities, and to provide a seamless sign-on experience for end users that extends beyond the Oracle Cloud.

To better address Oracle customers’ IAM requirements and to simplify access management across Oracle Cloud, multi-cloud, Oracle enterprise applications, and third-party applications, Oracle has merged IDCS and OCI IAM into a single, unified cloud service that brings all of IDCS’ advanced identity and access management features natively into the OCI IAM service. To align with Oracle Cloud branding, the unified IAM service will leverage the OCI brand and will be offered as OCI IAM. Each instance of the OCI IAM service will be managed as identity domains in the OCI console."

Learn more about OCI IAM identity domains

Monday, 29. November 2021

Altmode

Sussex Day 5: Lewes

Monday, November 8, 2021 We started our day fairly early, getting a quick Starbucks breakfast before getting on the bus to University of Sussex to meet Celeste at 9:30 am. Celeste has an hour-long radio show, “Oops That Had Banjos”, on the campus radio station, University Radio Falmer. She invited us to co-host the show. […]

Monday, November 8, 2021

We started our day fairly early, getting a quick Starbucks breakfast before getting on the bus to University of Sussex to meet Celeste at 9:30 am. Celeste has an hour-long radio show, “Oops That Had Banjos”, on the campus radio station, University Radio Falmer. She invited us to co-host the show. The studio was exactly as I had imagined, and it was a lot of fun doing the show with her. We each contributed a couple of songs to the playlist, and got to introduce them briefly.

After the show, Celeste had classes so we continued on to Lewes. We hadn’t been able to see much on our short visit Sunday evening. We started out at Lewes Castle & Museum, again getting an idea of the history of the place and then visiting portions of the castle itself. It was a clear day, and the view from the top was excellent. As with many of these sites, the castle went through many changes through the centuries as political conditions changed.

Lewes Barbican Gate and view from the Castle

After climbing around the castle, we were ready for lunch. We checked out a few restaurants in town before settling on the Riverside Cafe, in an attractive area on the River Ouse. After lunch, we walked among a number of small shops before entering a Waterstones bookstore. How we miss spending time in quality bookstores! I expect we’ll be seeking them out more once we return.

We then took the train back to Brighton, since I had a meeting to attend for work. The meeting went well; the internet connection at the hotel is solid and makes it seem like it hardly matters where in the world I am when attending these meetings.

Celeste came down to Brighton to have dinner with us. We decided to go with Latin American food at a local chain called Las Iguanas. The food was quite good although somewhat standard, at least to those of us from California and Colorado.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Phil Windley's Technometria

Digital Memories

Summary: Digital memories are an important component of our digital embodiment. SSI provides a foundation for self-sovereign digital memories to solve the digital-analog memory divide. In a recent thread on the VRM mailing list, StJohn Deakins of Citizen Me shared a formulation of interactions that I thought was helpful in unpacking the discussions about data, identity, and ownership. […]

Summary: Digital memories are an important component of our digital embodiment. SSI provides a foundation for self-sovereign digital memories to solve the digital-analog memory divide.

In a recent thread on the VRM mailing list, StJohn Deakins of Citizen Me shared a formulation of interactions that I thought was helpful in unpacking the discussions about data, identity, and ownership. His framework concerns analog vs digital memories.

In real life, we often interact with others—both people and institutions—with relative anonymity. For example, if I go to the store and use cash to buy a coke there is no exchange of identity information. If I use a credit card it's rarely the case that the entire transaction happens under the administrative authority of the identity system inherent in the credit card. Only the financial part of the transaction takes place in that identity system. This is true of most interactions in real life.

In this situation, the cashier, others in the store, and I all share a common experience. Even so, we each retain our own memories of the interaction. No one participating would claim to "own" the interaction. But we all retain memories which are our own. There is no single "true" record of the interaction. Every participant will have a different perspective (literally).

On the other hand, the store, as an institution, retains a memory of having sold a coke and the credit card transaction. This digital memory of the transaction can easily persist longer than any of the analog memories of the event. Because it is a digital record we trust it and tend to think of it as "true." For some purposes, say proving in a court of law that I was in the store at a particular time, this is certainly true. But for other purposes (e.g. was the cashier friendly or rude?) the digital memory is woefully anemic.

Online, we only have digital memories. And people have very few tools for saving, managing, recalling, and using them. I think digital memories are one of the primary features of digital embodiment—giving people a place to stand in the digital world, their own perspective, memories, and capacity to act. We can't be peers online without having our own digital memories.

StJohn calls this the "analog-digital memory divide." This divide is one source of the power imbalance between people and administrative entities (i.e. anyone who has a record of you in an account). CitizenMe provides tools for people to manage digital memories. People retain their own digital memory of the event. While every participant has a similar digital memory of the event, they can all be different, reflecting different vantage points.

One of the recent trends in application development is microservices, with an attendant denormalization of data. The realization that there doesn't have to be, indeed often can't be, a single source of truth for data has freed application development from the strictures of centralization and led to more easily built and operated distributed applications that are resilient and scalable. I think this same idea applies to digital interactions generally. Freeing ourselves from the mindset that digital systems can and should provide a single record that is "true" will lead to more autonomy and richer interactions.

Self-sovereign identity (SSI) provides a foundation for our digital personhood and allows us not only to take charge of our digital memories but also to operationalize all of our digital relationships. Enriching the digital memories of events by allowing everyone their own perspective (i.e. making them self-sovereign) will lead to a digital world that is more like real life.

Related:

Ephemeral Relationships—Many of the relationships we have online don’t have to be long-lived. Ephemeral relationships, offering virtual anonymity, are technically possible without a loss of functionality or convenience. Why don’t they exist? Surveillance is profitable.
Fluid Multi-Pseudonymity—Fluid multi-pseudonymity perfectly describes the way we live our lives and the reality that identity systems must realize if we are to live authentically in the digital sphere.
Can the Digital Future Be Our Home?—This post features three fantastic books from three great, but quite different, authors on the subject of Big Tech, surveillance capitalism, and what's to be done about it.

Photo Credit: Counter Supermarket Product Shopping Shop Grocery from maxpixel (CC0)

Tags: identity ssi relationships pseudonymity


reb00ted

Facebook's metaverse pivot is a Hail Mary pass

The more I think about Facebook’s Meta’s pivot to the metaverse, the less it appears like they do this voluntarily. I think they have no other choice: their existing business is running out of steam. Consider: At about 3.5 billion monthly active users of at least one of their products (Facebook, Instagram, Whatsapp etc), they are running out of more humans to sign up. People say they use […]

The more I think about Facebook’s Meta’s pivot to the metaverse, the less it appears like they do this voluntarily. I think they have no other choice: their existing business is running out of steam. Consider:

At about 3.5 billion monthly active users of at least one of their products (Facebook, Instagram, Whatsapp etc), they are running out of more humans to sign up.

People say they use Facebook to stay in touch with family and friends. But there is now one ad in my feed for each three or four posts that I actually want to see. Add more ads than this, and users will turn their backs: Facebook doesn’t help them with what they want help with any more, it’s all ads.

While their ARPU is much higher in the US than in Europe, where in turn it is much higher than the rest of the world – hinting that international growth should be possible – their distribution of ARPU is not all that different from the whole ad market’s distribution of ad revenues in different regions. Convincing, say, Africa to spend much more on ads does not sound like a growth story.

And between the regulators in the EU and elsewhere, moves to effectively ban further Instagram-like acquisitions, lawsuits left and right, and Apple’s privacy moves, their room to manoeuvre is getting tighter, not wider.

Their current price/sales ratio of just under 10 is hard to justify for long under these constraints. They must also be telling themselves that relying on an entirely ad-based business model is not a good long-term strategy any more, given the backlash against surveillance capitalism.

So what do you do?

I think you change the fundamentals of your business at the same time you change the conversation, leveraging the technology you own. And you end up with:

Oculus as the replacement for the mobile phone;

Headset and app store sales, for Oculus, as an entirely new business model that’s been proven (by the iPhone) to be highly profitable and is less under attack by regulators and the public; it also supports potentially much higher ARPU than just ads;

Renaming the company to something completely harmless and bland sounding; that will also let you drop the Facebook brand should it become too toxic down the road.

The risks are immense, starting with: how many hours a day do you hold your mobile phone in your hand, in comparison to how many hours a day you are willing to wear a bucket on your head, ahem, a headset? Even fundamental interaction questions, architecture questions and use case questions for the metaverse are still completely up in the air.

Credit to Mark Zuckerberg for pulling off a move as substantial as this for an almost trillion dollar company. I can’t think of any company which has ever done anything similar at this scale. When Intel pivoted from memory to CPUs, back in the 1980’s and at a much smaller scale, at least it was clear that there was going to be significant, growing demand for CPUs. This is not clear at all about headsets beyond niches such as gaming. So they are really jumping into the unknown with both feet.

But I don’t think any more they had a choice.

Sunday, 28. November 2021

Altmode

Sussex Day 4: Hastings

Sunday, November 7, 2021 Having gone west to Chichester yesterday, today we went east to Hastings, notable for the Norman conquest of 1066 (although the actual Battle of Hastings was some distance inland). We arranged to meet Celeste on the train as we passed through Falmer, where her campus is located, for the hour-or-so trip […]

Sunday, November 7, 2021

Having gone west to Chichester yesterday, today we went east to Hastings, notable for the Norman conquest of 1066 (although the actual Battle of Hastings was some distance inland). We arranged to meet Celeste on the train as we passed through Falmer, where her campus is located, for the hour-or-so trip along the coast. Unfortunately, it seems like it’s an hour train ride to most sights outside Brighton.

Hastings is an attractive and somewhat touristy town, along the Channel and in a narrow valley surrounded by substantial hills. We walked through the town, stopping for a fish and chips lunch along the way, and admiring the small shops in the Old Town. We took a funicular up one of the two hills and had an excellent view of the surrounding terrain. Unfortunately, the ruins of the castle at Hastings were closed for the season.

Funicular and view from the top

After returning via funicular, we continued through the town to the Hastings Museum, a well curated (and free!) small museum that was thorough in its coverage of the history of the area, from the Iron Age to the present. It also included an extensive collection from a local family that sailed around the world in the 1800s.

Taking the train back, we had a change of trains in Lewes, which Celeste had visited and enjoyed previously. We stopped at the Lewes Arms pub, but unfortunately (since it was Sunday evening) the kitchen had closed so we couldn’t get food. So Celeste returned to campus and got dinner there, while Kenna and I got take-out chicken sandwiches to eat in our hotel.

Our weekly family Zoom conference is on Sunday evening, England time, so we ate our sandwiches while chatting with other family members back home. It’s so much easier to stay in close touch with family while traveling than it was just a few years ago.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.


Just a Theory

Accelerate Perl Github Workflows with Caching

A quick tip for speeding up Perl builds in GitHub workflows by caching dependencies.

I’ve spent quite a few hours on evenings and weekends recently building out a comprehensive suite of GitHub Actions for Sqitch. They cover a dozen versions of Perl, nearly 70 database versions amongst nine database engines, plus a coverage test and a release workflow. A pull request can expect over 100 actions to run. Each build requires over 100 direct dependencies, plus all their dependencies. Installing them for every build would make any given run untenable.

Happily, GitHub Actions include a caching feature, and thanks to a recent improvement to shogo82148/actions-setup-perl, it’s quite easy to use in a version-independent way. Here’s an example:

name: Test
on: [push, pull_request]
jobs:
  OS:
    strategy:
      matrix:
        os: [ ubuntu, macos, windows ]
        perl: [ 'latest', '5.34', '5.32', '5.30', '5.28' ]
    name: Perl ${{ matrix.perl }} on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}-latest
    steps:
      - name: Checkout Source
        uses: actions/checkout@v2
      - name: Setup Perl
        id: perl
        uses: shogo82148/actions-setup-perl@v1
        with: { perl-version: "${{ matrix.perl }}" }
      - name: Cache CPAN Modules
        uses: actions/cache@v2
        with:
          path: local
          key: perl-${{ steps.perl.outputs.perl-hash }}
      - name: Install Dependencies
        run: cpm install --verbose --show-build-log-on-failure --no-test --cpanfile cpanfile
      - name: Run Tests
        env: { PERL5LIB: "${{ github.workspace }}/local/lib/perl5" }
        run: prove -lrj4

This workflow tests every permutation of OS and Perl version specified in jobs.OS.strategy.matrix, resulting in 15 jobs. The runs-on value determines the OS, while the steps section defines steps for each permutation. Let’s take each step in turn:

1. “Checkout Source” checks the project out of GitHub. Pretty much required for any project.
2. “Setup Perl” sets up the version of Perl using the value from the matrix. Note the id key set to perl, used in the next step.
3. “Cache CPAN Modules” uses the cache action to cache the directory named local with the key perl-${{ steps.perl.outputs.perl-hash }}. The key lets us keep different versions of the local directory based on a unique key. Here we’ve used the perl-hash output from the perl step defined above. The actions-setup-perl action outputs this value, which contains a hash of the output of perl -V, so we’re tying the cache to a very specific version and build of Perl. This is important since compiled modules are not compatible across major versions of Perl. (A rough sketch of this cache-key idea follows the list.)
4. “Install Dependencies” uses cpm to quickly install Perl dependencies. By default, it puts them into the local subdirectory of the current directory — just where we configured the cache. On the first run for a given OS and Perl version, it will install all the dependencies. But on subsequent runs it will find the dependencies already present, thanks to the cache, and quickly exit, reporting “All requirements are satisfied.” In this Sqitch job, it takes less than a second.
5. “Run Tests” runs the tests that require the dependencies. It requires the PERL5LIB environment variable to point to the location of our cached dependencies.
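To make the cache-key idea concrete, here is a small, purely illustrative Python sketch (not how actions-setup-perl actually computes its output) of deriving a key from the output of perl -V, so that any change to the Perl build produces a different key and therefore a fresh cache:

import hashlib
import subprocess

def perl_cache_key(prefix: str = "perl-") -> str:
    # `perl -V` prints the full configuration of the current Perl build.
    config = subprocess.run(
        ["perl", "-V"], capture_output=True, text=True, check=True
    ).stdout
    # Hash it so the key changes whenever the build configuration changes.
    return prefix + hashlib.sha256(config.encode("utf-8")).hexdigest()

print(perl_cache_key())  # requires a perl binary on PATH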

That’s the whole deal. The first run will be the slowest, depending on the number of dependencies, but subsequent runs will be much faster, up to the seven-day caching period. For a complex project like Sqitch, which uses the same OS and Perl version for most of its actions, this results in a tremendous build time savings. CI configurations we’ve used in the past often took an hour or more to run. Today, most builds take only a few minutes to test, with longer times determined not by dependency installation but by container and database latency.

More about… Perl, GitHub, GitHub Actions, GitHub Workflows, Caching

Saturday, 27. November 2021

Altmode

Sussex Day 3: Chichester and Fishbourne

Saturday, November 6, 2021 After a pleasant breakfast at a cafe in The Lanes, we met up with Celeste at the Brighton train station and rode to Chichester, about an hour to the west. Chichester is a pleasant (and yes, touristy) town with a notable cathedral. Arriving somewhat late, we walked through the town and […]

Saturday, November 6, 2021

After a pleasant breakfast at a cafe in The Lanes, we met up with Celeste at the Brighton train station and rode to Chichester, about an hour to the west. Chichester is a pleasant (and yes, touristy) town with a notable cathedral. Arriving somewhat late, we walked through the town and then found lunch at a small restaurant on a side road as many of the major restaurants in town were quite crowded (it is a Saturday, after all).



One of the main attractions in the area is the Fishbourne Roman Palace, one village to the west. We set out on foot, through a bit of rain, for a walk of a couple of miles. But when we arrived it was well worth the trip. This is an actual Roman palace, constructed in about 79AD, that had been uncovered starting in the 1960s, along with many coins, implements, and other artifacts. The mosaic floors were large and particularly impressive. As a teenager, I got to visit the ruins in Pompeii; these were of a similar nature. This palace and surrounding settlements were key to the Roman development of infrastructure in England.

Returning from Fishbourne to Chichester, we made a short visit to Chichester Cathedral. Unfortunately, the sun had set and it was difficult to see most of the stained glass. At the time of our visit, there was a large model of the Moon, traveling to several locations in Europe, that was hanging from the ceiling in the middle of the church. It was a striking thing to see, especially as we first entered.

After our train trip back from Chichester, we parted with Celeste who returned to campus. Since it was a Saturday night, restaurants were crowded, but we were able to get dinner at a large chain pub, Wetherspoons. The pub was noisy and table service was minimal. We ordered via their website and they only cleared the previous patrons’ dirty dishes when they delivered our food. The food was acceptable, but nothing to blog about.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Friday, 26. November 2021

Altmode

Sussex Day 2: Guy Fawkes Day

Friday, November 5, 2021 These days, “day 2” after arriving in the UK has a special implication: it is on this day that you must take and report the results of a COVID-19 test. As required, we ordered the tests and had them delivered to our hotel. After breakfast (in our room/apartment, with groceries we […]

Friday, November 5, 2021

These days, “day 2” after arriving in the UK has a special implication: it is on this day that you must take and report the results of a COVID-19 test. As required, we ordered the tests and had them delivered to our hotel. After breakfast (in our room/apartment, with groceries we picked up previously), we very methodically followed the instructions and 15 minutes later happily had negative test results. You send an image of the test stick next to the information page from your passport and a little while later they send a certificate of your test results. Presumably the results were sent to the authorities as well so they don’t come looking for us.

Celeste had classes at various times (including an 8:30 pm meeting with the Theater Department in Colorado) so Kenna and I were on our own today. We set out to explore Brighton, a city that bears resemblance to both Santa Cruz and Berkeley, California. We took a rather long walk, beginning with the shore area. We walked out to the end of Brighton Palace Pier, which includes a sizable game arcade and a small amusement park with rides at the end.

We decided to continue eastward along the shore and walked a long path toward Brighton Marina. Along the way were a variety of activities, including miniature golf, a sauna, an outdoor studio including a yoga class in session, and an electric railway. There was also quite a bit of construction, which makes sense since it’s off-season.

We arrived at the marina not entirely clear on how to approach it on foot: the major building we saw was a large parking garage. So we continued along the Undercliff Trail, the cliffs being primarily chalk. This is how we imagine the White Cliffs of Dover must look (although Dover probably has higher cliffs). At the far end of the marina we found a pedestrian entrance and walked back through the marina to find some lunch. We ate outdoors at Taste Sussex, which was quite good, although our seating area got a little chilly once the sun fell behind a nearby building.

Brighton cliffs and marina

Our return took us through the areas of the marina that didn’t look very pedestrian-friendly, but were actually OK. We took a different route back to the hotel through the Kemptown District. We’re not sure we found the main part of Kemptown but we did walk past the Royal Sussex Hospital.

We had heard about the Brighton Toy and Model Museum, and had a little time so we went looking for it. The maps indicated that it is located adjacent to the train station, so we went to the train station and wandered around for quite a while before discovering that it’s sort of under the station, accessed through a door on the side of an underpass. The museum is physically small, but they have a very extensive collection of classic toys and model trains, primarily from the early 20th century. The staff was helpful and friendly, even offering suggestions for what else to see while in the area.

We had a late lunch, so instead of going out for dinner we opted for wine/beer and a charcuterie plate from the small bar in the hotel. It included a couple of unusual cheeses (a black-colored cheddar, for example) and met our needs well.

Guy Fawkes Day is traditionally celebrated in the UK with bonfires and fireworks displays to commemorate his failed 1605 attempt to blow up Parliament. Although our room faces the Channel, we could hear, but had limited visibility of, the various fireworks being set off on the shore. So we again set out on foot and saw a few, but since the fireworks are unofficial (set off by random people on the beach), they were widely dispersed and unorganized.

It has been a good day for walking, with over 10 miles traveled.

This article is part of a series about our recent travels to southern England. To see the introductory article in the series, click here.

Wednesday, 24. November 2021

Mike Jones: self-issued

JWK Thumbprint URI Specification

The JSON Web Key (JWK) Thumbprint specification [RFC 7638] defines a method for computing a hash value over a JSON Web Key (JWK) [RFC 7517] and encoding that hash in a URL-safe manner. Kristina Yasuda and I have just created the JWK Thumbprint URI specification, which defines how to represent JWK Thumbprints as URIs. This […]

The JSON Web Key (JWK) Thumbprint specification [RFC 7638] defines a method for computing a hash value over a JSON Web Key (JWK) [RFC 7517] and encoding that hash in a URL-safe manner. Kristina Yasuda and I have just created the JWK Thumbprint URI specification, which defines how to represent JWK Thumbprints as URIs. This enables JWK Thumbprints to be communicated in contexts requiring URIs, including in specific JSON Web Token (JWT) [RFC 7519] claims.
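To make this concrete, here is a minimal C# sketch of the underlying idea: compute the RFC 7638 thumbprint (a SHA-256 hash over the required JWK members in lexicographic order, base64url-encoded) and wrap it in a URI. The key values are placeholders, and the URN prefix shown follows the draft's naming; check the specification itself for the exact registered form.

using System;
using System.Security.Cryptography;
using System.Text;

class JwkThumbprintUriSketch
{
    // Base64url encoding without padding, as used by RFC 7638.
    static string Base64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    static void Main()
    {
        // Placeholder RSA public key parameters (already base64url-encoded JWK values).
        string e = "AQAB";
        string n = "0vx7agoebGcQSuuPiLJXZpt..."; // hypothetical, truncated modulus

        // RFC 7638: hash the JSON containing only the required members of the key,
        // in lexicographic member order, with no whitespace.
        string canonical = $"{{\"e\":\"{e}\",\"kty\":\"RSA\",\"n\":\"{n}\"}}";
        string thumbprint = Base64Url(SHA256.HashData(Encoding.UTF8.GetBytes(canonical)));

        // Assumed URI shape based on the draft's title; the exact URN prefix and
        // hash-algorithm component are defined by the specification itself.
        Console.WriteLine($"urn:ietf:params:oauth:jwk-thumbprint:sha-256:{thumbprint}");
    }
}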

Use cases for this specification were developed in the OpenID Connect Working Group of the OpenID Foundation. Specifically, its use is planned in future versions of the Self-Issued OpenID Provider v2 specification.

The specification is available at:

https://www.ietf.org/archive/id/draft-jones-oauth-jwk-thumbprint-uri-00.html

Identity Woman

Quoted in Consumer Reports article on COVID Certificates

How to Prove You’re Vaccinated for COVID-19 You may need to prove your vaccination status for travel or work, or to attend an event. Paper credentials usually work, but a new crop of digital verification apps is adding confusion. Kaliya Young, an expert on digital identity verification working on the COVID Credentials Initiative, is also […] The post Quoted in Consumer Reports article on COVID C

How to Prove You’re Vaccinated for COVID-19 You may need to prove your vaccination status for travel or work, or to attend an event. Paper credentials usually work, but a new crop of digital verification apps is adding confusion. Kaliya Young, an expert on digital identity verification working on the COVID Credentials Initiative, is also […]

The post Quoted in Consumer Reports article on COVID Certificates appeared first on Identity Woman.

Tuesday, 23. November 2021

Identity Woman

Is it all change for identity?

Opening Plenary EEMA’s Information Security Solutions Europe Keynote Panel Last week while I was at Phocuswright I also had the pleasure of being on the Keynote Panel at EEMA‘s Information Security Solutions Europe [ISSE] virtual event. We had a great conversation talking about the emerging landscape around eIDAS and the recent announcement that the EU […] The post Is it all change for identity?

Opening Plenary EEMA’s Information Security Solutions Europe Keynote Panel Last week while I was at Phocuswright I also had the pleasure of being on the Keynote Panel at EEMA‘s Information Security Solutions Europe [ISSE] virtual event. We had a great conversation talking about the emerging landscape around eIDAS and the recent announcement that the EU […]

The post Is it all change for identity? appeared first on Identity Woman.

Tuesday, 23. November 2021

Identity Woman

Cohere: Podcast

I had the pleasure of talking with Bill Johnston who I met many years ago via Forum One and their online community work. It was fun to chat again and to share for the community management audience some of the latest thinking on Self-Sovereign Identity. Kaliya Young is many things: an advocate for open Internet […] The post Cohere: Podcast appeared first on Identity Woman.

I had the pleasure of talking with Bill Johnston who I met many years ago via Forum One and their online community work. It was fun to chat again and to share for the community management audience some of the latest thinking on Self-Sovereign Identity. Kaliya Young is many things: an advocate for open Internet […]

The post Cohere: Podcast appeared first on Identity Woman.

Monday, 22. November 2021

Damien Bod

Implement certificate authentication in ASP.NET Core for an Azure B2C API connector

This article shows how an ASP.NET Core API can be setup to require certificates for authentication. The API is used to implement an Azure B2C API connector service. The API connector client uses a certificate to request profile data from the Azure App Service API implementation, which is validated using the certificate thumbprint. Code: https://github.com/damienbod/AspNetCoreB2cExtraClaims […]

This article shows how an ASP.NET Core API can be setup to require certificates for authentication. The API is used to implement an Azure B2C API connector service. The API connector client uses a certificate to request profile data from the Azure App Service API implementation, which is validated using the certificate thumbprint.

Code: https://github.com/damienbod/AspNetCoreB2cExtraClaims

Blogs in this series

Securing ASP.NET Core Razor Pages, Web APIs with Azure B2C external and Azure AD internal identities
Using Azure security groups in ASP.NET Core with an Azure B2C Identity Provider
Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core
Implement certificate authentication in ASP.NET Core for an Azure B2C API connector

Setup Azure App Service

An Azure App Service was created which uses .NET and 64-bit configurations. The Azure App Service is configured to require incoming client certificates and to forward them to the application. With this setting alone, any valid certificate is accepted, so the certificate still needs to be validated inside the application: you need to check that the correct client certificate is being used.
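As a side note, when the API is hosted behind Azure App Service, the client certificate is typically passed to the application in the X-ARR-ClientCert header. Here is a minimal sketch of surfacing it to the certificate authentication handler (this assumes the Microsoft.AspNetCore.Authentication.Certificate package; the sample repository may wire this up differently):

// using System.Security.Cryptography.X509Certificates;
// Sketch: convert the base64-encoded certificate that Azure App Service forwards
// in the X-ARR-ClientCert header into a connection client certificate.
builder.Services.AddCertificateForwarding(options =>
{
    options.CertificateHeader = "X-ARR-ClientCert";
    options.HeaderConverter = headerValue =>
        string.IsNullOrWhiteSpace(headerValue)
            ? null
            : new X509Certificate2(Convert.FromBase64String(headerValue));
});

// The forwarding middleware must run before UseAuthentication:
// app.UseCertificateForwarding();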

Implement the API with certificate authentication for deployment

The AddAuthentication sets the default scheme to CertificateAuthentication. The AddCertificate method adds the required configuration to validate the client certificates used with each request. We use a self signed certificate for the authentication. If a valid certificate is used, the MyCertificateValidationService is used to validate that it is also the correct certificate.

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.Services.AddSingleton<MyCertificateValidationService>();

builder.Services.AddAuthentication(CertificateAuthenticationDefaults.AuthenticationScheme)
    .AddCertificate(options =>
    {
        // https://docs.microsoft.com/en-us/aspnet/core/security/authentication/certauth
        options.AllowedCertificateTypes = CertificateTypes.SelfSigned;
        options.Events = new CertificateAuthenticationEvents
        {
            OnCertificateValidated = context =>
            {
                var validationService = context.HttpContext.RequestServices
                    .GetService<MyCertificateValidationService>();

                if (validationService != null && validationService.ValidateCertificate(context.ClientCertificate))
                {
                    var claims = new[]
                    {
                        new Claim(ClaimTypes.NameIdentifier, context.ClientCertificate.Subject, ClaimValueTypes.String, context.Options.ClaimsIssuer),
                        new Claim(ClaimTypes.Name, context.ClientCertificate.Subject, ClaimValueTypes.String, context.Options.ClaimsIssuer)
                    };

                    context.Principal = new ClaimsPrincipal(new ClaimsIdentity(claims, context.Scheme.Name));
                    context.Success();
                }
                else
                {
                    context.Fail("invalid cert");
                }

                return Task.CompletedTask;
            }
        };
    });

builder.Host.UseSerilog((hostingContext, loggerConfiguration) => loggerConfiguration
    .ReadFrom.Configuration(hostingContext.Configuration)
    .Enrich.FromLogContext()
    .MinimumLevel.Debug()
    .WriteTo.Console()
    .WriteTo.File(
        //$@"../certauth.txt",
        $@"D:\home\LogFiles\Application\{Environment.UserDomainName}.txt",
        fileSizeLimitBytes: 1_000_000,
        rollOnFileSizeLimit: true,
        shared: true,
        flushToDiskInterval: TimeSpan.FromSeconds(1)));

The middleware services are set up so that in development no certificate authentication is used and the requests are validated using basic authentication instead. If the environment is not development, certificate authentication is used and all API calls require authorization.

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

if (!app.Environment.IsDevelopment())
{
    app.UseAuthentication();
    app.UseAuthorization();
    app.MapControllers().RequireAuthorization();
}
else
{
    app.UseAuthorization();
    app.MapControllers();
}

app.Run();

The MyCertificateValidationService validates the certificate. It checks that the certificate used has the correct thumbprint and is the same as the certificate used in the client application, in this case the Azure B2C API connector.

public class MyCertificateValidationService
{
    private readonly ILogger<MyCertificateValidationService> _logger;

    public MyCertificateValidationService(ILogger<MyCertificateValidationService> logger)
    {
        _logger = logger;
    }

    public bool ValidateCertificate(X509Certificate2 clientCertificate)
    {
        return CheckIfThumbprintIsValid(clientCertificate);
    }

    private bool CheckIfThumbprintIsValid(X509Certificate2 clientCertificate)
    {
        var listOfValidThumbprints = new List<string>
        {
            // add thumbprints of your allowed clients
            "15D118271F9AE7855778A2E6A00A575341D3D904"
        };

        if (listOfValidThumbprints.Contains(clientCertificate.Thumbprint))
        {
            _logger.LogInformation($"Custom auth-success for certificate {clientCertificate.FriendlyName} {clientCertificate.Thumbprint}");
            return true;
        }

        _logger.LogWarning($"auth failed for certificate {clientCertificate.FriendlyName} {clientCertificate.Thumbprint}");
        return false;
    }
}

Setup Azure B2C API connector with certificate authentication

The Azure B2C API connector is set up to use a certificate. You can create the certificate any way you want. I used the CertificateManager NuGet package to create an RSA certificate with a SHA-512 hash algorithm and a 3072-bit key. The thumbprint from this certificate needs to be validated in the ASP.NET Core API application.

The Azure B2C API connector is added to the Azure B2C user flow. The user flow requires all the custom claims to be defined, and the values can be set in the API connector service. See the first post in this blog series for details.

Creating an RSA certificate with SHA-512 and a 3072-bit key

You can create certificates in .NET using the CertificateManager NuGet package, which provides helper methods for creating the X509 certificates as required.

class Program
{
    static CreateCertificates _cc;

    static void Main(string[] args)
    {
        var builder = new ConfigurationBuilder()
            .AddUserSecrets<Program>();
        var configuration = builder.Build();

        var sp = new ServiceCollection()
            .AddCertificateManager()
            .BuildServiceProvider();
        _cc = sp.GetService<CreateCertificates>();

        var rsaCert = CreateRsaCertificateSha512KeySize2048("localhost", 10);

        string password = configuration["certificateSecret"];
        var iec = sp.GetService<ImportExportCertificate>();

        var rsaCertPfxBytes = iec.ExportSelfSignedCertificatePfx(password, rsaCert);
        File.WriteAllBytes("cert_rsa512.pfx", rsaCertPfxBytes);

        Console.WriteLine("created");
    }

    public static X509Certificate2 CreateRsaCertificateSha512KeySize2048(string dnsName, int validityPeriodInYears)
    {
        var basicConstraints = new BasicConstraints
        {
            CertificateAuthority = false,
            HasPathLengthConstraint = false,
            PathLengthConstraint = 0,
            Critical = false
        };

        var subjectAlternativeName = new SubjectAlternativeName
        {
            DnsName = new List<string>
            {
                dnsName,
            }
        };

        var x509KeyUsageFlags = X509KeyUsageFlags.DigitalSignature;

        // only if certificate authentication is used
        var enhancedKeyUsages = new OidCollection
        {
            OidLookup.ClientAuthentication,
            // OidLookup.ServerAuthentication
            // OidLookup.CodeSigning,
            // OidLookup.SecureEmail,
            // OidLookup.TimeStamping
        };

        var certificate = _cc.NewRsaSelfSignedCertificate(
            new DistinguishedName { CommonName = dnsName },
            basicConstraints,
            new ValidityPeriod
            {
                ValidFrom = DateTimeOffset.UtcNow,
                ValidTo = DateTimeOffset.UtcNow.AddYears(validityPeriodInYears)
            },
            subjectAlternativeName,
            enhancedKeyUsages,
            x509KeyUsageFlags,
            new RsaConfiguration
            {
                KeySize = 3072,
                HashAlgorithmName = HashAlgorithmName.SHA512
            });

        return certificate;
    }
}

Running the applications

I set up two user flows for running and testing the applications. One uses ngrok and local development with basic authentication. The second uses certificate authentication and the deployed Azure App Service. I published the API to the App Service and ran the UI application. When the user signs in, the API connector is used to get the extra custom claims from the deployed API and return them.

Links:

https://docs.microsoft.com/en-us/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-user-flow

https://docs.microsoft.com/en-us/azure/active-directory-b2c/

https://github.com/Azure-Samples/active-directory-dotnet-external-identities-api-connector-azure-function-validate/

https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-customize-properties?pivots=dotnet-6-0

https://github.com/AzureAD/microsoft-identity-web/wiki

Securing Azure Functions using certificate authentication

Thursday, 18. November 2021

Ally Medina - Blockchain Advocacy

Initial Policy Offerings

A Reader’s Guide How should crypto be regulated? And by whom? These are the big questions the industry is grappling with in the wake of the infrastructure bill being signed with the haphazardly expanded definition of a broker dealer for tax reporting provisions. So now the industry is *atwitter* with ideas about where to go from here. Three large companies have all come out with policy

A Reader’s Guide

How should crypto be regulated? And by whom? These are the big questions the industry is grappling with in the wake of the infrastructure bill being signed with the haphazardly expanded definition of a broker dealer for tax reporting provisions. So now the industry is *atwitter* with ideas about where to go from here.

Three large companies have all come out with policy suggestions: FTX, Coinbase and A16z. While these proposals differ in approach, they all seek to address a few central policy questions. I'll break down the proposals based on key subject areas:

Consumer Protection

A16: Suggests a framework for DAO’s to provide disclosures.

Coinbase: Sets a goal to “​​Enhance transparency through appropriate disclosure requirements. Protect against fraud and market manipulation”

FTX: Similarly suggests framework for “disclosure and transparency standards”

All three of these make ample mention of consumer protections that seem to begin and end at disclosures. Regulators might want something with a little more teeth. FTX provides a more robust outline for combating fraud, suggesting the use of on-chain analytics tools. This is a smart and concrete suggestion of how to improve existing regulation that relies on SARs (suspicious activity reports) filed AFTER suspicious activity.

Exactly how Decentralized?

A16: Seeks to create a definition and entity status for DAO’s, which would ostensibly require a different kind of regulation than more custodial services.

Coinbase: Platforms and services that do not custody or otherwise control the assets of a customer — including miners, stakers and developers — would need to be treated differently

FTX- Doesn’t mention decentralization.

These are really varied approaches. I'm not criticizing FTX here; they are focusing on consumer protections and combating fraud, which are good things to highlight. However, the core regulatory issue is: can we differentiate between decentralized and centralized products, and does that create a fundamental conflict with existing law? A16z's approach is novel, a new designation without a new agency.

The Devil You Know vs The Devil you Don’t

A16: Suggests the Government Accountability Office "assess the current state of regulatory jurisdiction over cryptocurrency, digital assets, and decentralized technology, and to compare the costs and benefits of harmonizing jurisdiction among agencies against vesting supervision and oversight with a federally chartered self-regulatory organization or one or more nonprofit corporations."

Coinbase: Argues that this technology needs a new regulatory agency and that all digital assets should be under a single regulatory authority. Also suggests coordination with a Self-Regulatory Organization.

FTX- Doesn’t step into that morass.

Coinbase has the most aggressive position here. I personally am not convinced of the need for a new regulatory agency. We haven’t tried it the old fashioned way yet, where existing agencies offer clarity about what would bring a digital asset into their jurisdiction and what would exclude it. Creating a new agency is a slow and expensive process. And then that agency would need to justify its existence by aggressively cracking down. It’s a bit like creating a hammer and then inevitably complaining that it sees everything as a nail.

How to achieve regulatory change in the US for crypto:

1. Stop tweeting aggressively at the people who regulate you. Negative points if you are a billionaire complaining about taxes.
2. Spend some time developing relationships with policymakers and working collaboratively with communities you want to support. Lotta talk of unbanked communities- any stats on how they are being served by this tech? (Seriously please share)
3. Consider looking at who is already doing the work you want to accelerate and consider working with them/learning from and supporting existing efforts rather than whipping out your proposal and demanding attention. Examples: CoinCenter, Blockchain Association. At the state level: Blockchain Advocacy Coalition of course, Cascadia Blockchain Council, Texas Blockchain Council etc.

A16: https://int.nyt.com/data/documenttools/2021-09-27-andreessen-horowitz-senate-banking-proposals/ec055eb0ce534033/full.pdf#page=9

Coinbase: https://blog.coinbase.com/digital-asset-policy-proposal-safeguarding-americas-financial-leadership-ce569c27d86c

FTX: https://blog.ftx.com/policy/policy-goals-market-regulation/

Wednesday, 17. November 2021

Phil Windley's Technometria

NFTs, Verifiable Credentials, and Picos

Summary: The hype over NFTs and collectibles is blinding us to their true usefulness as trustworthy persistent data objects. How do they sit in the landscape with verifiable credentials and picos? Listening to this Reality 2.0 podcast about NFTs with Doc Searls, Katherine Druckman, and their guest Greg Bledsoe got me thinking about NFTs. I first wrote about NFTs in 2018 regarding w

Summary: The hype over NFTs and collectibles is blinding us to their true usefulness as trustworthy persistent data objects. How do they sit in the landscape with verifiable credentials and picos?

Listening to this Reality 2.0 podcast about NFTs with Doc Searls, Katherine Druckman, and their guest Greg Bledsoe got me thinking about NFTs. I first wrote about NFTs in 2018 regarding what was perhaps the first popular NFT: Cryptokitties. I bought a few and played with them, examined the contract code, and was thinking about how they might enable self-sovereignty, or not. I wrote:

[E]ach kitty has some interesting properties:

Each Kitty is distinguishable from all the rest and has a unique identity and existence.
Each kitty is owned by someone. Specifically, it is controlled by whoever has the private keys associated with the address that the kitty is tied to.

This is a good description of the properties of NFTs in general. Notice that nothing here says that NFTs have to be about art, or collectibles, although that's the primary use case right now that's generating so much hype. Cryptokitties were more interesting than most of the NFT use cases right now because the smart contract allowed them to be remixed to produce new kitties (for a fee).

Suppose I rewrote the quote from my post on Cryptokitties like this:

[E]ach verifiable credential (VC) has some interesting properties:

Each VC is distinguishable from all the rest and has a unique identity and existence.
Each VC is owned by someone. Specifically, it is controlled by whoever has the private keys associated with the address that the VC was issued to.

Interesting, no? So, if these properties are true for both NFTs and verifiable credentials, what's the difference? The primary difference is that right now, we envision VC issuers to be institutions, like the DMV, your bank, or employer. And institutions are centralized. In contrast, because NFTs are created using a smart contract on the Ethereum blockchain, we think of them as decentralized. But, not so fast. As I noted in my post on Cryptokitties, you can't assume an NFT is decentralized without examining the smart contract.

There is one problem with CryptoKitties as a model of self-sovereignty: the CryptoKitty smart contract has a "pause" function that can be executed by certain addresses. This is probably a protection against bugs—no one wants to be the next theDAO—but it does provide someone besides the owner with a way to shut it all down.

I have no idea who that someone is and can't hold them responsible for their behavior—I'd guess it's someone connected with CryptoKitties. Whoever has control of these special addresses could shutdown the entire enterprise. I do not believe, based on the contract structure, that they could take away individual kitties; it's an all or nothing proposition. Since they charged money for setting this up, there's likely some contract law that could be used for recourse.

So, without looking at the code for the smart contract, it's hard to say that a particular NFT is decentralized or not. They may be just as centralized as your bank [1].

To examine this more closely, let's look at a property title, like a car title, as an example. The DMV could decide to issue car titles as verifiable credentials tomorrow. And the infrastructure to support it is all there: well-supported open source code, companies to provide issuing software and wallets, and specifications and protocols for interoperability [2]. Nothing has to change politically for that to happen.

The DMV could also issue car titles as NFTs. With an NFT, I'd prove I own the car by exercising control over the private key that controls the NFT representing the car title. The state might do this to provide more automation for car transfers. Here too, they'd have to find an infrastructure provider to help them, ensure they had a usable wallet to store the title, and interact with the smart contract. I don't know how interoperable this would be.

One of the key features of NFTs is that they can be transferred between owners or controllers. Verifiable credentials, because of their core use cases, are not designed to be transferred; rather, they are revoked and reissued.

Suppose I want to sell the car. With a verifiable credential, the state would still be there, revoking the VC representing my title to the car and issuing a new VC to the buyer when the title transfers. The record of who owns what is still the database at the DMV. With NFTs we can get rid of that database. So, selling my car now becomes something that might happen in a decentralized way, without the state as an intermediary. Note that they could do this and still retain a regulatory interest in car titling if they control the smart contract.

But, the real value of issuing a car title as an NFT would be if it were done using a smart contract in a way that decentralized car titles. If you imagine a world of self-driving cars that own and sell themselves, then that's interesting. You could also imagine that we want to remove the DMV from the business of titling cars altogether. That's a big political lift, but if you dream of a world with much smaller government, then NFT-based car titles might be a way to do that. But I think it's a ways off. So, we could use NFTs for car titles, but right now there's not much point besides the automation.

You can also imagine a much more decentralized future for verifiable credentials. There's no reason a smart contract couldn't issue and revoke verifiable credentials according to rules embodied in the code. Sphereon has an integration between verifiable credentials and the Digital Assets Markup Language (DAML), a smart contract language. Again, how decentralized the application is depends on the smart contract, but decentralized, institution-independent verifiable credentials are possible.

A decade ago, Lucas, Ballay, and McManus wrote Trillions: Thriving in the Emerging Information Ecology. One of the ideas they talked about was something they called a persistent data object (PDO). I was intrigued by persistent data objects because of the work we'd been doing at Kynetx on personal clouds. In applying the idea of PDOs, I quickly realized that what we were doing was much more than data because our persistent data objects also encapsulated code and the name persistent compute objects, or picos was born.

An NFT is one possible realization of Trillions' PDOs. So are verifiable credentials. Both are persistent containers for data. They are both capable of inspiring confidence that the data they contain has fidelity and, perhaps, a trustworthy provenance. A pico is an agent. Picos can:

have a wallet that holds and exchanges NFTs and credentials according to the rules encapsulated in the pico.
be programmed to interact with the smart contract for an NFT to perform the legal operations.
be programmed to receive, hold, and present verifiable credentials according to the proper DIDComm protocols. [3]

Relationship between NFTs, verifiable credentials, and picos (click to enlarge)

NFTs are currently in their awkward, Pets.com stage. Way too much hype and many myopic use cases. But I think they'll grow to have valuable uses in creating more decentralized, trustworthy data objects. If you listen to the Reality 2.0 podcast starting at about 56 minutes, Greg, Katherine, and Doc get into some of those. Greg's starting with games—a good place, I think. Supply chain is another promising area. If you need decentralized, automated, trustworthy, persistent data containers, then NFTs fit the bill.

People who live in a country with a strong commitment to the rule of law might ask why decentralizing things like titles and supply chains is a good idea. But that's not everyone's reality. Blockchains and NFTs can inspire confidence in systems that would otherwise be too costly or untrustworthy. Picos are a great way to create distributed systems of entity-oriented compute nodes that are capable of using PDOs.

Notes
1. Note that I'm not saying that Cryptokitties is as centralized as your bank. Just that without looking at the code, you can't tell.
2. Yeah, I know that interop is still a work in progress. But at least it's in progress, not ignored.
3. These are not capabilities that picos presently have, but they do support DIDComm messaging. Want to help add these to picos? Contact me.

Photo Credit: Colorful Shipping Containers from frank mckenna (CC0)

Tags: verifiable+credentials non+fungible+token ssi identity picos


Identity Woman

COVID & Travel Resources for Phocuswright

I’m speaking today at the Phocuswright conference and this post is sharing key resources for folks who are watching/attending who want to get engaged with our work. The Covid Credentials Initiative where I am the Ecosystems Director is the place to start. We have a vibrant global learning community striving to solve challenge of common […] The post COVID & Travel Resources for Phocuswright a

I’m speaking today at the Phocuswright conference and this post is sharing key resources for folks who are watching/attending who want to get engaged with our work. The Covid Credentials Initiative where I am the Ecosystems Director is the place to start. We have a vibrant global learning community striving to solve challenge of common […]

The post COVID & Travel Resources for Phocuswright appeared first on Identity Woman.

Monday, 15. November 2021

reb00ted

Social Media Architectures and Their Consequences

This is an outcome of a session I ran at last week’s “Logging Off Facebook – What comes next?" unconference. We explored what technical architecture choices have which technical, or non-technical consequences for social media products. This table was created during the session. It is not complete, and personally I disagree with a few points, but it’s still worthwhile publishing IMHO. So here y

This is an outcome of a session I ran at last week’s “Logging Off Facebook – What comes next?" unconference. We explored what technical architecture choices have which technical, or non-technical consequences for social media products.

This table was created during the session. It is not complete, and personally I disagree with a few points, but it’s still worthwhile publishing IMHO.

So here you are:

Facebook-style ("centralized") Mastodon-style ("federated") IndieWeb-style ("distributed/P2P") Blockchain-style Moderation Uniform, consistent moderation policy for all users Locally different moderation policies, but consistent for all users on a node Every user decides on their own Posit - algorithmic smart contract that drives consensus Censorship easy; global one node at a time full censorship not viable full censorship not viable Software upgrades Fast, uncomplicated for all users Inconsistent across the network Inconsistent across the network Consistent, but large synchronization / management costs Money Centralized; most accumulated by "Facebook" Donations (BuyMeACoffee, LiberaPay); Patronage (Patreon) Paid to/earned by network nodes; value fluctuates due to speculation Authentication Centralized Decentralized (e.g. Solid, OpenID, SSI) Decentralized (e.g. wallets) Advertising Decided by "Facebook" Not usually Determined by user Governance Centralized, unaccountable Several components: protocol-level, code-level and instance-level Several components: protocol-level, code-level and instance-level Search & Discovery Group formation Regulation Ownership Totalitarian Individual

Phil Windley's Technometria

Zero Knowledge Proofs

Summary: Zero-knowledge proofs are a powerful cryptographic technique at the heart of self-sovereign identity (SSI). This post should help you understand what they are and how they can be used. Suppose Peggy needs to prove to Victor that she is in possession of a secret without revealing the secret. Can she do so in a way that convinces Victor that she really does know the secret? T

Summary: Zero-knowledge proofs are a powerful cryptographic technique at the heart of self-sovereign identity (SSI). This post should help you understand what they are and how they can be used.

Suppose Peggy needs to prove to Victor that she is in possession of a secret without revealing the secret. Can she do so in a way that convinces Victor that she really does know the secret? This is the question at the heart of one of the most powerful cryptographic processes we can employ in identity systems: zero-knowledge proofs (ZKPs). Suppose for example that Peggy has a digital driver's license and wants to prove to Victor, the bartender, that she's over 21 without just handing over her driver's license or even showing him her birthdate. ZKPs allow Peggy to prove her driver's license says she's at least 21 in a way that convinces Victor without Peggy having to reveal anything else (i.e., there's zero excess knowledge).

This problem was first explored by MIT researchers Shafi Goldwasser, Silvio Micali and Charles Rackoff in the 1980s as a way of combatting information leakage. The goal is to reduce the amount of extra information the verifier, Victor, can learn about the prover, Peggy.

One way to understand how ZKPs work is the story of the Cave of Alibaba, first published by cryptographers Quisquater, Guillou, and Berson [1]. The following diagram provides an illustration.

Peggy and Victor in Alibaba's Cave (click to enlarge)

The Cave of Alibaba has two passages, labeled A and B, that split off a single passageway connected to the entrance. Peggy possesses a secret code that allows her to unlock a door connecting A and B. Victor wants to buy the code but won't pay until he's sure Peggy knows it. Peggy won't share it with Victor until he pays.

The algorithm for Peggy proving she knows the code proceeds as follows:

1. Victor stands outside the cave while Peggy enters and selects one of the passages. Victor is not allowed to see which path Peggy takes.
2. Victor enters the cave and calls out "A" or "B" at random.
3. Peggy emerges from the correct passageway because she can easily unlock the door regardless of which choice she made when entering.
4. Of course, Peggy could have just gotten lucky and guessed right, so Peggy and Victor repeat the experiment many times.

If Peggy can always come back by whichever passageway Victor selects, then there is an increasing probability that Peggy really knows the code. After 20 tries, there's less than one chance in a million that Peggy is simply guessing which letter Victor will call. This constitutes a probabilistic proof that Peggy knows the secret.
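A quick way to see the 2^-n soundness argument is to simulate the rounds. The following sketch (illustrative only, not real cryptography; the names and trial counts are made up) models a prover who does not know the secret and therefore only survives a round when her up-front choice of passage happens to match Victor's challenge:

using System;
using System.Security.Cryptography;

class CaveProofSimulation
{
    // A prover who knows the secret can always exit by the requested passage;
    // a cheating prover must have guessed Victor's challenge in advance.
    static bool RunRounds(bool proverKnowsSecret, int rounds)
    {
        for (int i = 0; i < rounds; i++)
        {
            int challenge = RandomNumberGenerator.GetInt32(2);        // Victor calls "A" (0) or "B" (1)
            int proverCommitment = RandomNumberGenerator.GetInt32(2); // passage chosen before the challenge

            bool emergesCorrectly = proverKnowsSecret || proverCommitment == challenge;
            if (!emergesCorrectly)
            {
                return false; // caught cheating in this round
            }
        }
        return true; // survived every round
    }

    static void Main()
    {
        const int trials = 100_000;
        const int rounds = 20;

        int cheaterSurvived = 0;
        for (int t = 0; t < trials; t++)
        {
            if (RunRounds(proverKnowsSecret: false, rounds)) cheaterSurvived++;
        }

        // Expected survival rate for a cheater is 2^-20, i.e. less than one in a million.
        Console.WriteLine($"Cheating prover survived {cheaterSurvived} of {trials} trials.");
    }
}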

This algorithm not only allows Peggy to convince Victor she knows the code, but it does it in a way that ensures Victor can't convince anyone else Peggy knows the code. Suppose Victor records the entire transaction. The only thing an observer sees is Victor calling out letters and Peggy emerging from the right tunnel. The observer can't be sure Victor and Peggy didn't agree on a sequence of letters in advance to fool observers. Note that this property relies on the algorithm using a good pseudo-random number generator with a high-entropy seed so that Peggy and third-party observers can't predict Victor's choices.

Thus, while Peggy cannot deny to Victor that she knows the secret, she can deny that she knows the secret to other third parties. This ensures that anything she proves to Victor stays between them and Victor cannot leak it—at least in a cryptographic way that proves it came from Peggy. Peggy retains control of both her secret and the fact that she knows it.

When we say "zero knowledge" and talk about Victor learning nothing beyond the proposition in question, that's not perfectly true. In the cave of Alibaba, Peggy proves in zero knowledge that she knows the secret. But there are many other things that Victor learns about Peggy that ZKPs can do nothing about. For example, Victor knows that Peggy can hear him, speaks his language, walk, and is cooperative. He also might learn things about the cave, like approximately how long it takes to unlock the door. Peggy learns similar things about Victor. So, the reality is that the proof is approximately zero knowledge not perfectly zero knowledge.

ZKP Systems

The example of Alibaba's Cave is a very specific use of ZKPs, what's called a zero-knowledge proof of knowledge. Peggy is proving she knows (or possesses) something. More generally, Peggy might want to prove many facts to Victor. These could include propositional phrases or even values. ZKPs can do that as well.

To understand how we can prove propositions in zero knowledge, consider a different example, sometimes called the Socialist Millionaire Problem. Suppose Peggy and Victor want to know if they're being paid a fair wage. Specifically, they want to know whether they are paid the same amount, but don't want to disclose their specific hourly rate to each other or even a trusted third party. In this instance, Peggy isn't proving she knows a secret, rather, she's proving an equality (or inequality) proposition.

For simplicity, assume that Peggy and Victor are being paid one of $10, $20, $30, or $40 per hour. The algorithm works like this:

1. Peggy buys four lock boxes and labels them $10, $20, $30, and $40. She throws away the keys to every box except the one labeled with her wage.
2. Peggy gives all the locked boxes to Victor who privately puts a slip of paper with a "+" into the slot at the top of the box labeled with his salary. He puts a slip with a "-" in all the other boxes.
3. Victor gives the boxes back to Peggy who uses her key in private to open the box with her salary on it.
4. If she finds a "+" then they make the same amount. Otherwise, they make a different amount. She can use this to prove the fact to Victor.

This is called an oblivious transfer and proves the proposition VictorSalary = PeggySalary true or false in zero knowledge (i.e., without revealing any other information).
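Here is a toy model of the lock-box version of the protocol, just to make the mechanics concrete. It is illustrative only: the boxes and keys stand in for the cryptographic machinery a real oblivious transfer would use, and the wage values are assumptions.

using System;
using System.Collections.Generic;
using System.Linq;

class SocialistMillionaireSketch
{
    // Models the physical protocol: Peggy keeps one key, Victor marks one box.
    static bool SalariesMatch(int peggySalary, int victorSalary)
    {
        int[] wages = { 10, 20, 30, 40 };

        // Peggy keeps only the key for the box labeled with her wage.
        int peggyKey = peggySalary;

        // Victor puts "+" in the box labeled with his wage and "-" in the others.
        Dictionary<int, char> boxes = wages.ToDictionary(w => w, w => w == victorSalary ? '+' : '-');

        // Peggy opens only the box she still has a key for.
        return boxes[peggyKey] == '+';
    }

    static void Main()
    {
        Console.WriteLine(SalariesMatch(20, 20)); // True  -> same wage
        Console.WriteLine(SalariesMatch(20, 30)); // False -> different wage, but neither value is revealed
    }
}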

For this to work, Peggy and Victor must trust that the other will be forthcoming and state their real salary. Victor needs to trust that Peggy will throw away the three other keys. Peggy must trust that Victor will put only one slip with a "+" on it in the boxes.

Just like digital certificates need a PKI to establish confidence beyond what would be possible with self-issued certificates alone, ZKPs are more powerful in a system that allows Peggy and Victor to prove facts from things others say about them, not just what they say about themselves. For example, rather than Peggy and Victor self-asserting their salary, suppose they could rely on a signed document from the HR department in making their assertion so that both know that the other is stating their true salary. Verifiable Credentials provide a system for using ZKPs to prove many different facts alone or in concert, in ways that give confidence in the method and trust in the data.

Non-Interactive ZKPs

In the previous examples, Peggy was able to prove things to Victor through a series of interactions. For ZKPs to be practical, interactions between the prover and the verifier should be minimal. Fortunately, a technique called SNARK allows for non-interactive zero knowledge proofs.

SNARKs have the following properties (from whence they derive their name):

Succinct: the sizes of the messages are small compared to the length of the actual proof.
Non-interactive: other than some setup, the prover sends only one message to the verifier.
ARguments: this is really an argument that something is correct, not a proof as we understand it mathematically. Specifically, the prover theoretically could prove false statements given enough computational power. So, SNARKs are "computationally sound" rather than "perfectly sound".
of Knowledge: the prover knows the fact in question.

You'll typically see "zk" (for zero-knowledge) tacked on the front to indicate that during this process, the verifier learns nothing other than the facts being proved.

The mathematics underlying zkSNARKs involves homomorphic computation over high-degree polynomials. But we can understand how zkSNARKs work without knowing the underlying mathematics that ensures that they're sound. If you'd like more details of the mathematics, I recommend Christian Reitwiessner's "zkSNARKs in a Nutshell".

As a simple example, suppose Victor is given a sha256 hash, H, of some value. Peggy wants to prove that she knows a value s such that sha256(s) == H without revealing s to Victor. We can define a function C that captures the relationship:

C(x, w) = ( sha256(w) == x )

So, C(H, s) == true, while other values for w will return false.
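In code, the relation itself is easy to state; what a zkSNARK adds is the ability to prove it without revealing w. The following sketch (all names are illustrative) just evaluates the relation directly using SHA-256 from the standard library, so it is clear what Peggy claims to know:

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class Relation
{
    // The statement being proved: "I know w such that sha256(w) == x".
    // A zkSNARK would encode this relation as an arithmetic circuit; here we
    // simply evaluate it to show what the prover is claiming.
    public static bool C(byte[] x, string w) =>
        SHA256.HashData(Encoding.UTF8.GetBytes(w)).SequenceEqual(x);
}

class Demo
{
    static void Main()
    {
        string s = "my secret value";                          // Peggy's witness
        byte[] H = SHA256.HashData(Encoding.UTF8.GetBytes(s)); // the public hash Victor holds

        Console.WriteLine(Relation.C(H, s));               // True
        Console.WriteLine(Relation.C(H, "a wrong guess")); // False
    }
}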

Computing a zkSNARK requires three functions G, P, and V. G is the key generator that takes a secret parameter called lambda and the function C and generates two public keys, the proving key pk and the verification key vk. They need only be generated once for a given function C. The parameter lambda must be destroyed after this step since it is not needed again and anyone who has it can generate fake proofs.

The prover function P takes as input the proving key pk, a public input x, and a private (secret) witness w. The result of executing P(pk,x,w) is a proof, prf, that the prover knows a value for w that satisfies C.

The verifier function V computes V(vk, x, prf) which is true if the proof prf is correct and false otherwise.

Returning to Peggy and Victor, Victor chooses a function C representing what he wants Peggy to prove, creates a random number lambda, and runs G to generate the proving and verification keys:

(pk, vk) = G(C, lambda)

Peggy must not learn the value of lambda. Victor shares C, pk, and vk with Peggy.

Peggy wants to prove she knows the value s that satisfies C for x = H. She runs the proving function P using these values as inputs:

prf = P(pk, H, s)

Peggy presents the proof prf to Victor who runs the verification function:

V(vk, H, prf)

If the result is true, then the Victor can be assured that Peggy knows the value s.

The function C does not need to be limited to a hash as we did in this example. Within limits of the underlying mathematics, C can be quite complicated and involve any number of values that Victor would like Peggy to prove, all at one time.

Notes
1. Quisquater, Jean-Jacques; Guillou, Louis C.; Berson, Thomas A. (1990). How to Explain Zero-Knowledge Protocols to Your Children (PDF). Advances in Cryptology – CRYPTO '89: Proceedings. Lecture Notes in Computer Science. 435. pp. 628–631. doi:10.1007/0-387-34805-0_60. ISBN 978-0-387-97317-3.

Photo Credit: Under 25? Please be prepared to show proof of age when buying alcohol from Gordon Joly (CC BY-SA 2.0)

Tags: identity cryptography verifiable+credentials ssi


Damien Bod

Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core

This post shows how to implement an ASP.NET Core Razor Page application which authenticates using Azure B2C and uses custom claims implemented using the Azure B2C API connector. The claims provider is implemented using an ASP.NET Core API application and the Azure API connector requests the data from this API. The Azure API connector adds […]

This post shows how to implement an ASP.NET Core Razor Page application which authenticates using Azure B2C and uses custom claims implemented using the Azure B2C API connector. The claims provider is implemented using an ASP.NET Core API application and the Azure API connector requests the data from this API. The Azure API connector adds the claims after an Azure B2C sign in flow or whatever settings you configured in the Azure B2C user flow.

Code: https://github.com/damienbod/AspNetCoreB2cExtraClaims

Blogs in this series

Securing ASP.NET Core Razor Pages, Web APIs with Azure B2C external and Azure AD internal identities
Using Azure security groups in ASP.NET Core with an Azure B2C Identity Provider
Add extra claims to an Azure B2C user flow using API connectors and ASP.NET Core
Implement certificate authentication in ASP.NET Core for an Azure B2C API connector

Setup the Azure B2C App Registration

An Azure App registration is setup for the ASP.NET Core Razor page application. A client secret is used to authenticate the client. The redirect URI is added for the app. This is a standard implementation.

Setup the API connector

The API connector is set up to add the extra claims after a sign in. This defines the API endpoint and the authentication method. Only Basic or certificate authentication is possible for this API service. Neither of these is ideal for implementing and using this service to add extra claims to the identity. I started ngrok from the command line and used the resulting URL to configure the Azure B2C API connector. Maybe two separate connectors could be set up for a solution: one like this for development, and a second one using the Azure App Service host address and certificate authentication.

Azure B2C user attribute

The custom claims are added to the Azure B2C user attributes. The custom claims can be added as required.

Setup of the Azure B2C user flow

The Azure B2C user flow is configured to use the API connector. The flow adds the application claims to the token, which it receives from the API call made by the API connector.

The custom claims are then added using the application claims blade. This step is required for the custom claims to be added.

I also added the custom claims to the Azure B2C user flow user attributes.

Azure B2C is now set up to use the custom claims, and the data for these claims will be set using the API connector service.

ASP.NET Core Razor Page

The ASP.NET Core Razor Page uses Microsoft.Identity.Web to authenticate using Azure B2C. This is a standard setup for a B2C user flow.

builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAdB2C"));

builder.Services.AddAuthorization(options =>
{
    options.FallbackPolicy = options.DefaultPolicy;
});

builder.Services.AddRazorPages()
    .AddMicrosoftIdentityUI();

var app = builder.Build();

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

The main difference between an Azure B2C user flow and an Azure AD authentication is the configuration. The SignUpSignInPolicyId is set to match the configured Azure B2C user flow, and the Instance uses the b2clogin.com host for the domain, unlike the Azure AD configuration definition.

"AzureAdB2C": { "Instance": "https://b2cdamienbod.b2clogin.com", "ClientId": "ab393e93-e762-4108-a3f5-326cf8e3874b", "Domain": "b2cdamienbod.onmicrosoft.com", "SignUpSignInPolicyId": "B2C_1_ExtraClaims", "TenantId": "f611d805-cf72-446f-9a7f-68f2746e4724", "CallbackPath": "/signin-oidc", "SignedOutCallbackPath": "/signout-callback-oidc" //"ClientSecret": "--in-user-settings--" },

The index Razor page returns the claims and displays the values in the UI.

public class IndexModel : PageModel
{
    [BindProperty]
    public IEnumerable<Claim> Claims { get; set; } = Enumerable.Empty<Claim>();

    public void OnGet()
    {
        Claims = User.Claims;
    }
}

This is all the end user application requires, there is no special setup here.

ASP.NET Core API connector implementation

The API implemented for the Azure API connector uses an HTTP POST. Basic authentication is used to validate the request, as well as the client ID, which needs to match the configured app registration. This is weak authentication and should not be used in production, especially since the API provides sensitive PII data. If the request provides the correct credentials and the correct client ID, the data is returned for the email. In this demo, the email is returned in the custom claim. Normally the data would be returned from a data store.

[HttpPost]
public async Task<IActionResult> PostAsync()
{
    // Check HTTP basic authorization
    if (!IsAuthorized(Request))
    {
        _logger.LogWarning("HTTP basic authentication validation failed.");
        return Unauthorized();
    }

    string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync();
    var requestConnector = JsonSerializer.Deserialize<RequestConnector>(content);

    // If input data is null, show block page
    if (requestConnector == null)
    {
        return BadRequest(new ResponseContent("ShowBlockPage", "There was a problem with your request."));
    }

    string clientId = _configuration["AzureAdB2C:ClientId"];
    if (!clientId.Equals(requestConnector.ClientId))
    {
        _logger.LogWarning("HTTP clientId is not authorized.");
        return Unauthorized();
    }

    // If email claim not found, show block page. Email is required and sent by default.
    if (requestConnector.Email == null || requestConnector.Email == "" || requestConnector.Email.Contains("@") == false)
    {
        return BadRequest(new ResponseContent("ShowBlockPage", "Email name is mandatory."));
    }

    var result = new ResponseContent
    {
        // use the email to get the user specific claims
        MyCustomClaim = $"everything awesome {requestConnector.Email}"
    };

    return Ok(result);
}

private bool IsAuthorized(HttpRequest req)
{
    string username = _configuration["BasicAuthUsername"];
    string password = _configuration["BasicAuthPassword"];

    // Check if the HTTP Authorization header exists
    if (!req.Headers.ContainsKey("Authorization"))
    {
        _logger.LogWarning("Missing HTTP basic authentication header.");
        return false;
    }

    // Read the authorization header
    var auth = req.Headers["Authorization"].ToString();

    // Ensure the type of the authorization header is `Basic`
    if (!auth.StartsWith("Basic "))
    {
        _logger.LogWarning("HTTP basic authentication header must start with 'Basic '.");
        return false;
    }

    // Get the HTTP basic authorization credentials
    var cred = System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(auth.Substring(6))).Split(':');

    // Evaluate the credentials and return the result
    return (cred[0] == username && cred[1] == password);
}

The ResponseContent class is used to return the data for the identity. All custom claims must be prefixed with extension_. The data is then added to the profile data.

public class ResponseContent
{
    public const string ApiVersion = "1.0.0";

    public ResponseContent()
    {
        Version = ApiVersion;
        Action = "Continue";
    }

    public ResponseContent(string action, string userMessage)
    {
        Version = ApiVersion;
        Action = action;
        UserMessage = userMessage;
        if (action == "ValidationError")
        {
            Status = "400";
        }
    }

    [JsonPropertyName("version")]
    public string Version { get; }

    [JsonPropertyName("action")]
    public string Action { get; set; }

    [JsonPropertyName("userMessage")]
    public string? UserMessage { get; set; }

    [JsonPropertyName("status")]
    public string? Status { get; set; }

    [JsonPropertyName("extension_MyCustomClaim")]
    public string MyCustomClaim { get; set; } = string.Empty;
}

With this, custom claims can be added to Azure B2C identities. This can be really useful, for example, when implementing verifiable credentials using id_tokens. This is much more complicated to implement compared to other IDPs, but at least it is possible and can be solved. The technical solution to secure the API has room for improvement.

Testing

The applications can be started and the API connector needs to be mapped to a public IP. After starting the apps, start ngrok with a matching configuration for the HTTP address of the API connector API.

ngrok http https://localhost:5002

The URL in the API connector configured on Azure needs to match this ngrok URL. If all is good, the applications will run and the custom claim will be displayed in the UI.

Notes

The profile data in this API is very sensitive and you should use the strongest security protections possible. Using Basic authentication alone for this type of API is not a good idea; it would be great to see managed identities or something similar supported. I used basic authentication so that I could use ngrok to demo the feature, since we need a public endpoint for testing. I would not use this in a production deployment. I would use certificate authentication with an Azure App Service deployment and the certificate created and deployed using Azure Key Vault. Certificate rotation would have to be set up. I am not sure how well API connector infrastructure automation can be implemented; I have not tried this yet. A separate security solution would need to be implemented for local development. This is all a bit messy, as the extra steps end up either costing money or leading developers to take shortcuts and deploy with less security.

Links:

https://docs.microsoft.com/en-us/azure/active-directory-b2c/api-connectors-overview?pivots=b2c-user-flow

https://github.com/Azure-Samples/active-directory-dotnet-external-identities-api-connector-azure-function-validate/

https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-customize-properties?pivots=dotnet-6-0

https://github.com/AzureAD/microsoft-identity-web/wiki

https://ngrok.com/

Securing Azure Functions using certificate authentication

Tuesday, 09. November 2021

Phil Windley's Technometria

Identity and Consistent User Experience

Summary: Consistent user experience is key enabler of digital embodiment and is critical to our ability to operationalize our digital lives. The other day Lynne called me up and asked a seemingly innocuous question: "How do I share this video?" Ten years ago, the answer would have been easy: copy the URL and send it to them. Now...not so much. In order to answer that question I first had

Summary: Consistent user experience is key enabler of digital embodiment and is critical to our ability to operationalize our digital lives.

The other day Lynne called me up and asked a seemingly innocuous question: "How do I share this video?" Ten years ago, the answer would have been easy: copy the URL and send it to them. Now...not so much. In order to answer that question I first had to determine which app she was using. And, since I wasn't that familiar with it, open it and search through the user interface to find the share button.

One of the features of web browsers that we don't appreciate as much as we should is the consistent user experience that the browser provides. Tabs, address bars, the back button, reloading and other features are largely the same regardless of which browser you use. There's a reason why "don't break the back button!" was a common tip for web designers over the years. People depend on the web's consistent user experience.

Alas, apps have changed all that. Apps freed developers from the strictures of the web. No doubt there's been some excellent uses of this freedom, but what we've lost is consistency in core user experiences. That's unfortunate.

The web, and the internet for that matter, never had a consistent user experience for authentication. At least not one that caught on. Consequently, the user experience is very fragmented. Even so, Kim Cameron's Seven Laws of Identity speaks for consistent user experience in Law 7: Consistent Experience Across Contexts. Kim says:

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

Think about logging into various websites and apps throughout your day. You probably do it way too often. But it's also made much more complex because it's slightly different everywhere. Different locations and modalities, different rules for passwords, different methods for 2FA, and so on. It's maddening.

There's a saying in security: "Don't roll your own crypto." I think we need a corollary in identity: "Don't roll your own interface." But how do we do that? And what should the interface be? One answer is to adopt the user experience people already understand from the physical world: connections and credentials.

Kim Cameron gave us a model back in 2005 when he introduced Information Cards. Information cards are digital analogs of the credentials we all carry around in the physical world. People understand credentials. Information cards worked on a protocol-mediated identity metasystem so that anyone could use them and write software for them.

Information cards didn't make it, but the ideas underlying information cards live on in modern self-sovereign identity (SSI) systems. The user experience in SSI springs from the protocol embodied in the identity metasystem. In an SSI system, people use wallets that manage connections and credentials. They can create relationships with other people, organizations, and things. And they receive credentials from other participants and present those credentials to transfer information about themselves in a trustworthy manner. They don't see keys, passwords, authentication codes, and other artifacts of the ad hoc identity systems in widespread use today. Rather they use familiar artifacts to interact with others in ways that feel familiar because they are similar to how identity works in the physical world.

This idea feels simple and obvious, but I think that conceals its incredible power. Having a wallet I control where I manage digital relationships and credentials gives me a place to stand in the digital world and operationalize my digital life. I think of it as digital embodiment. An SSI wallet gives me an interoperable way to connect and interact with others online as me. I can create both rich, long-lived relationships and service short-lived, ephemeral relationships with whatever degree of trustworthy data is appropriate for the relationship and its context.

Relationships and Interactions in SSI

We have plenty of online relationships today, but they are not operational because we are prevented from acting by their anemic natures. Our helplessness is the result of the power imbalance that is inherent in bureaucratic relationships. The solution to the anemic relationships created by administrative identity systems is to provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. Consistent user experience is a key enabler of digital embodiment. When we dine at a restaurant or shop at a store in the physical world, we do not do so within some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. The SSI wallet is the platform upon which people can stand and become embodied online to operationalize their digital life as full-fledged participants in the digital realm.

Photo Credit: Red leather wallet on white paper from Pikrepo (CC0)

Tags: identity ux wallets ssi

Monday, 08. November 2021

Damien Bod

ASP.NET Core scheduling with Quartz.NET and SignalR monitoring

This article shows how scheduled tasks can be implemented in ASP.NET Core using Quartz.NET and then displays the job info in an ASP.NET Core Razor page using SignalR. A concurrent job and a non concurrent job are implemented using a simple trigger to show the difference in how the jobs are run. Quartz.NET provides lots […]

This article shows how scheduled tasks can be implemented in ASP.NET Core using Quartz.NET and then displays the job info in an ASP.NET Core Razor page using SignalR. A concurrent job and a non concurrent job are implemented using a simple trigger to show the difference in how the jobs are run. Quartz.NET provides lots of scheduling features and has an easy to use API for implementing scheduled jobs.

Code: https://github.com/damienbod/AspNetCoreQuartz

A simple ASP.NET Core Razor Page web application is used to implement the scheduler and the SignalR messaging. The Quartz Nuget package and the Quartz.Extensions.Hosting Nuget package are used to implement the scheduling service. The Microsoft.AspNetCore.SignalR.Client package is used to send messages to all listening web socket clients.

<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <Nullable>enable</Nullable>
  <ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.SignalR.Client" Version="6.0.0" />
  <PackageReference Include="Microsoft.Extensions.Hosting" Version="6.0.0" />
  <PackageReference Include="Quartz" Version="3.3.3" />
  <PackageReference Include="Quartz.Extensions.Hosting" Version="3.3.3" />
</ItemGroup>

The .NET 6 templates no longer use a Startup class; all this logic can now be implemented directly in the Program.cs file, without a static Main method. The ConfigureServices logic can be implemented using a WebApplicationBuilder instance. The AddQuartz method is used to add the scheduling services. Two jobs are added: a concurrent job and a non-concurrent job. Both jobs are triggered with a simple trigger every five seconds which repeats forever. The AddQuartzHostedService method adds the service as a hosted service, and AddSignalR adds the SignalR services.

using AspNetCoreQuartz;
using AspNetCoreQuartz.QuartzServices;
using Quartz;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddSignalR();

builder.Services.AddQuartz(q =>
{
    q.UseMicrosoftDependencyInjectionJobFactory();

    var conconcurrentJobKey = new JobKey("ConconcurrentJob");
    q.AddJob<ConconcurrentJob>(opts => opts.WithIdentity(conconcurrentJobKey));
    q.AddTrigger(opts => opts
        .ForJob(conconcurrentJobKey)
        .WithIdentity("ConconcurrentJob-trigger")
        .WithSimpleSchedule(x => x
            .WithIntervalInSeconds(5)
            .RepeatForever()));

    var nonConconcurrentJobKey = new JobKey("NonConconcurrentJob");
    q.AddJob<NonConconcurrentJob>(opts => opts.WithIdentity(nonConconcurrentJobKey));
    q.AddTrigger(opts => opts
        .ForJob(nonConconcurrentJobKey)
        .WithIdentity("NonConconcurrentJob-trigger")
        .WithSimpleSchedule(x => x
            .WithIntervalInSeconds(5)
            .RepeatForever()));
});

builder.Services.AddQuartzHostedService(
    q => q.WaitForJobsToComplete = true);

The WebApplication instance is used to add the middleware, as the Startup Configure method did previously. The SignalR JobsHub endpoint is added to send the live messages of the running jobs to the UI in the client browser.

var app = builder.Build();

if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error");
    app.UseHsts();
}

app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();

app.UseEndpoints(endpoints =>
{
    endpoints.MapHub<JobsHub>("/jobshub");
});

app.MapRazorPages();

app.Run();

The ConconcurrentJob implements the IJob interface and logs messages before and after a time delay. A SignalR client is used to send all the job information to any listening clients. A seven second sleep was added to simulate a slow running job. The jobs are triggered every 5 seconds, so this should result in no change in behavior as the jobs can run in parallel.

using Microsoft.AspNetCore.SignalR;
using Quartz;

namespace AspNetCoreQuartz.QuartzServices
{
    public class ConconcurrentJob : IJob
    {
        private readonly ILogger<ConconcurrentJob> _logger;
        private static int _counter = 0;
        private readonly IHubContext<JobsHub> _hubContext;

        public ConconcurrentJob(ILogger<ConconcurrentJob> logger,
            IHubContext<JobsHub> hubContext)
        {
            _logger = logger;
            _hubContext = hubContext;
        }

        public async Task Execute(IJobExecutionContext context)
        {
            var count = _counter++;

            var beginMessage = $"Conconcurrent Job BEGIN {count} {DateTime.UtcNow}";
            await _hubContext.Clients.All.SendAsync("ConcurrentJobs", beginMessage);
            _logger.LogInformation(beginMessage);

            Thread.Sleep(7000);

            var endMessage = $"Conconcurrent Job END {count} {DateTime.UtcNow}";
            await _hubContext.Clients.All.SendAsync("ConcurrentJobs", endMessage);
            _logger.LogInformation(endMessage);
        }
    }
}

The NonConconcurrentJob class is almost like the previous job, except the DisallowConcurrentExecution attribute is used to prevent concurrent running of the job. This means that even though the trigger is set to five seconds, each job must wait until the previous job finishes.

[DisallowConcurrentExecution]
public class NonConconcurrentJob : IJob
{
    private readonly ILogger<NonConconcurrentJob> _logger;
    private static int _counter = 0;
    private readonly IHubContext<JobsHub> _hubContext;

    public NonConconcurrentJob(ILogger<NonConconcurrentJob> logger,
        IHubContext<JobsHub> hubContext)
    {
        _logger = logger;
        _hubContext = hubContext;
    }

    public async Task Execute(IJobExecutionContext context)
    {
        var count = _counter++;

        var beginMessage = $"NonConconcurrentJob Job BEGIN {count} {DateTime.UtcNow}";
        await _hubContext.Clients.All.SendAsync("NonConcurrentJobs", beginMessage);
        _logger.LogInformation(beginMessage);

        Thread.Sleep(7000);

        var endMessage = $"NonConconcurrentJob Job END {count} {DateTime.UtcNow}";
        await _hubContext.Clients.All.SendAsync("NonConcurrentJobs", endMessage);
        _logger.LogInformation(endMessage);
    }
}

The JobsHub class implements the SignalR Hub and defines methods for sending SignalR messages. Two message types are used: one for the concurrent job messages and one for the non-concurrent job messages.

public class JobsHub : Hub
{
    public Task SendConcurrentJobsMessage(string message)
    {
        return Clients.All.SendAsync("ConcurrentJobs", message);
    }

    public Task SendNonConcurrentJobsMessage(string message)
    {
        return Clients.All.SendAsync("NonConcurrentJobs", message);
    }
}

The microsoft-signalr JavaScript package is used to implement the client, which listens for messages.

{ "version": "1.0", "defaultProvider": "cdnjs", "libraries": [ { "library": "microsoft-signalr@5.0.11", "destination": "wwwroot/lib/microsoft-signalr/" } ] }

The Index Razor Page view uses the SignalR JavaScript file and displays messages by adding HTML elements.

@page
@model IndexModel
@{
    ViewData["Title"] = "Home page";
}

<div class="container">
    <div class="row">
        <div class="col-6">
            <ul id="concurrentJobs"></ul>
        </div>
        <div class="col-6">
            <ul id="nonConcurrentJobs"></ul>
        </div>
    </div>
</div>

<script src="~/lib/microsoft-signalr/signalr.js"></script>

The SignalR client adds the two methods to listen to messages sent from the Quartz jobs.

const connection = new signalR.HubConnectionBuilder()
    .withUrl("/jobshub")
    .configureLogging(signalR.LogLevel.Information)
    .build();

async function start() {
    try {
        await connection.start();
        console.log("SignalR Connected.");
    } catch (err) {
        console.log(err);
        setTimeout(start, 5000);
    }
};

connection.onclose(async () => {
    await start();
});

start();

connection.on("ConcurrentJobs", function (message) {
    var li = document.createElement("li");
    document.getElementById("concurrentJobs").appendChild(li);
    li.textContent = `${message}`;
});

connection.on("NonConcurrentJobs", function (message) {
    var li = document.createElement("li");
    document.getElementById("nonConcurrentJobs").appendChild(li);
    li.textContent = `${message}`;
});

When the application is run and the hosted Quartz service runs the scheduled jobs, the concurrent job starts every five seconds as required, while the non-concurrent job runs only every seven seconds due to the thread sleep. Controlling concurrent or non-concurrent execution with a single attribute is a really powerful feature of Quartz.NET.

Quartz.NET provides great documentation and has a really simple API. By using SignalR, it would be really easy to implement a good monitoring UI.
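As a small illustration of that idea, here is a minimal sketch (not part of the original sample) of an extra endpoint in Program.cs that returns the currently executing Quartz jobs, which a monitoring page could poll alongside the SignalR push messages; the "/jobs/running" route is a hypothetical addition.

// Hypothetical addition to Program.cs, placed after builder.Build():
// returns the jobs the scheduler is executing right now.
app.MapGet("/jobs/running", async (ISchedulerFactory schedulerFactory) =>
{
    var scheduler = await schedulerFactory.GetScheduler();
    var executing = await scheduler.GetCurrentlyExecutingJobs();

    // Project the execution contexts into a simple shape for the UI.
    return executing.Select(ctx => new
    {
        Job = ctx.JobDetail.Key.ToString(),
        FireTimeUtc = ctx.FireTimeUtc,
        RunTime = ctx.JobRunTime
    });
});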

Links:

https://www.quartz-scheduler.net/

https://andrewlock.net/using-quartz-net-with-asp-net-core-and-worker-services/

https://docs.microsoft.com/en-us/aspnet/core/signalr/introduction

Sunday, 07. November 2021

Doc Searls Weblog

On using Wikipedia in schools

In Students are told not to use Wikipedia for research. But it’s a trustworthy source, Rachel Cunneen and Mathieu O’Niel nicely unpack their case for the headline. In a online polylogue in response to that piece, I wrote, “You always have a choice: to help or to hurt.” That’s what my mom told me, a zillion years […]

In Students are told not to use Wikipedia for research. But it’s a trustworthy source, Rachel Cunneen and Mathieu O’Niel nicely unpack their case for the headline. In an online polylogue in response to that piece, I wrote,

“You always have a choice: to help or to hurt.” That’s what my mom told me, a zillion years ago. It applies to everything we do, pretty much.

The purpose of Wikipedia is to help. Almost entirely, it does. It is a work of positive construction without equal or substitute. That some use it to hurt, or to spread false information, does not diminish Wikipedia’s worth as a resource.

The trick for researchers using Wikipedia as a resource is not a difficult one: don’t cite it. Dig down in references, make sure those are good, and move on from there. It’s not complicated.

Since that topic and comment are due to slide down into the Web’s great forgettery (where Google searches do not go), I thought I’d share it here.

Thursday, 04. November 2021

Tim Bouma's Blog

The Rise of MetaNations

Photo by Vladislav Klapin on Unsplash We are witnessing the rise of metanations (i.e., digitally native nations, not nation states that are trying to be digital). The first instance of which is Facebook Meta. The newer term emerging is the metaverse, which will eventually refer to the collection of emerging digitally native constructs, such as digital identity, digital currency and non-fungibl
Photo by Vladislav Klapin on Unsplash

We are witnessing the rise of metanations (i.e., digitally native nations, not nation states that are trying to be digital). The first instance of this is Facebook Meta. The newer term emerging is the metaverse, which will eventually refer to the collection of emerging digitally native constructs, such as digital identity, digital currency and non-fungible tokens. We’re not there yet, but many are seeing the trajectory where metanations like Facebook will have metacitizens, who will have metarights to interact and transact in this new space. This is not science fiction, but is becoming a reality, and the fronts are opening up on identity, currency, rights and property that exist within these digital realms but also touch upon the real world.

So what’s the imperative for us as real people and governments? To make sure that these realms are as open and inclusive as possible. Personally, I don’t want a future where certain metacitizens can exert their metarights in an unfair way within the real world; the chosen few getting to the front of the line for everything.

But we can’t just regulate and outlaw — we need to counter in an open fashion. We need open identity, open currency, open payments, and open rights.

Where I am seeing the battle shape up most clearly is in the open payments space, specifically the Lightning Network. I am sure that, as part of Facebook’s play, they will introduce their own currency, Diem, that can only be used within their own metaverse according to their own rules. Honestly, I don’t believe we can counter this as governments and regulators alone; we need to support open approaches such as the Lightning Network. A great backgrounder article by Nik Bhatia, author of Layered Money, is here.

Wednesday, 03. November 2021

Identity Praxis, Inc.

FTC’s Shot Across the Bow: Purpose and Use Restrictions Could Frame The Future of Personal Data Management

I just read a wonderful piece from Joseph Duball,1 who reported on the U.S. Federal Trade Commissioner Rebecca Kelly Slaughter’s keynote at the IAPP’s “Privacy. Security. Risk 2021” event. According to Duball, Slaughter suggests that we need to change “the way people view, prioritize, and conceptualize their data.” “Too many services are about leveraging consumer […] The post FTC’s Shot Across t

I just read a wonderful piece from Joseph Duball,1 who reported on the U.S. Federal Trade Commissioner Rebecca Kelly Slaughter’s keynote at the IAPP’s “Privacy. Security. Risk 2021” event. According to Duball, Slaughter suggests that we need to change “the way people view, prioritize, and conceptualize their data.”

“Too many services are about leveraging consumer data instead of straightforwardly providing value. For even the savviest users, the price of browsing the internet is being tracked across the web.” – Rebecca Kelly Slaughter, Commissioner, U.S. Federal Trade Commission 20212

According to Slaughter, privacy issues within data-driven markets stem from surveillance capitalism, which is fueled by indiscriminate data collection practices. She suggests that the remedy to curtail these practices is to focus on “purpose and use” restrictions and limitations rather than solely relying on the notice & choice framework. In other words, in the future industry may no longer be able to justify data practices with explanations like “they opted in” and “we got consent,” or by falling back on the notice-and-choice framework.

“Collection and use limitations can help protect people’s rights. It should not be necessary to trade one’s data away as a cost of full participation in society and the modern information economy.” – Rebecca Kelly Slaughter, Commissioner, U.S. Federal Trade Commission 20213

FTC Has Concerns Other Than Privacy

So that there is no uncertainty or doubt, however, Duball4 reports that, while consumer privacy is a chief concern for the commission, it is not the primary concern to the exclusion of other concerns. The commission is also worried about algorithmic bias and “dark patterns” practices. In other words, it is not just about the data, it is about the methods used to “trick” people into giving it up and how it is processed and applied to business decision-making.

Takeaway

My takeaway is that it is time for organizations to take a serious look at revamping their end-to-end processes and tech stacks. This is a C-suite leadership all-hands-on-deck moment. It will take years for the larger organizations to turn their flotilla in the right direction, and for the industry at large to sort everything out. However, rest assured, the empowered person–the self-sovereign individual–is nigh and will sort it out for industry soon enough.

There is time, but not much, maybe three, five, or seven years, before people are equipped with the knowledge and tools to take back control of their data. It is already happening; just look at open banking in the UK. These new tools, aka personal information management systems, will enable people to granularly exchange data on their own terms and for their own stated purpose of use, not the business’s. They will give them the power to process data in a way that protects them from bias or at least helps them know when it is happening.

Why should businesses care about all this? Well, I predict, as do many others (so I’m not too far out on a limb here), that in the not-too-distant future the empowered, connected individual will walk with their wallet and only do business with those institutions that respect their sovereignty, both physically and digitally. So, to all out there: it is time to prepare for the future.

REFERENCES

Duball, Joseph. “On the Horizon: FTC’s Slaughter Maps Data Regulation’s Potential Future.” The Privacy Advisor, November 2021. https://iapp.org/news/a/on-the-horizon-ftcs-slaughter-maps-data-regulations-potential-future/.
Ibid.
Ibid.
Ibid.

The post FTC’s Shot Across the Bow: Purpose and Use Restrictions Could Frame The Future of Personal Data Management appeared first on Identity Praxis, Inc..


Vishal Gupta

THREE core fundamental policy gaps that are clogging the courts of India

No civilisation can exist or prosper without the rule of law. The rule of law cannot exist without a proper justice system. In India, criminals have a free reign as justice system is used as a tool to make victims succumb into unfair settlements or withdrawals. Unfortunately, the Indian justice system has gone caput and remains clogged with over 4+ crore cases pending in courts due to following co

No civilisation can exist or prosper without the rule of law. The rule of law cannot exist without a proper justice system. In India, criminals have free rein, as the justice system is used as a tool to make victims succumb to unfair settlements or withdrawals. Unfortunately, the Indian justice system has gone kaput and remains clogged, with over 4 crore cases pending in courts, due to the following core reasons.

Policy gap 1 — Only in India there is zero deterrence to perjury

a. In India, perjury law is taken very lightly. 99% of cases are full of lies, deceit and mockery of justice. Affidavits in India do not serve much purpose. There have been various judgments and cries from all levels of judiciary but in vain.

b. Perjury makes any case 10x more complex and consumes 20x more time to adjudicate. It is akin to allowing a bullock cart in the middle of an express highway. It is nothing but an attack on the justice delivery system. Historically, perjury used to be punished with a death sentence and was considered as serious a crime as murder.

c. This is against the best international practices and does not make economic sense for India.

Policy gap 2 — India has no equivalent of laws such as U.S. Federal Rule 11, enacted in 1983 (and amended in 1993) to curb the explosion of frivolous litigation.

The entire provision of rule 11 of U.S. federal rules can be summarized in the following manner:

It requires that a district court mandatorily sanction attorneys or parties who submit improper pleadings, such as pleadings with:

Improper purpose
Frivolous arguments
Facts or arguments that have no evidentiary support
Omissions and errors, misleading and crafty language
Baseless or unreasonable denials with non-application of mind
Negligence, failure to appear or unreasonable adjournments

Other developed countries, like the UK and Australia, also have similar laws to combat frivolous litigation.

Policy gap 3 — Only in India are lawyers barred from contingent fee agreements under Bar Council rules. Indian lawyers are incentivized to make cases go longer, add complexity and never end. Consequently, they super-specialize and become innovators in creating alibis so that the case never ends. Lawyers in India do not investigate or legally vet the case before filing. There is no incentive to apply for tort claims or prosecute perjury, and therefore there is no deterrence created. Lawyers are supposed to be gatekeepers who prevent frivolous litigation, but in India it is actually the opposite, due to the perverse policy on contingent fees and torts.

Economic modelling suggests that — contingent fee arrangements reduce frivolous suits when compared to hourly fee arrangements. The reasoning is simple: When an attorney’s compensation is based solely on success, as opposed to hours billed, there is great incentive to accept and prosecute only meritorious cases.

At least one empirical analysis concludes that — “hourly fees encourage the filing of low-quality suits and increase the time to settlement (i.e., contingency fees increase legal quality and decrease the time to settlement).”

Negative impact of the 3 policy gaps

1. Conviction rates in India are abysmally low, below 10%, whereas internationally they range between 60% and 100%.[1]

a. Japan — 99.97%
b. China — 98%
c. Russia — 90%
d. UK — 80%
e. US — between 65% and 80%

2. Globally, 80%–99% of cases get settled before trial, but in India it is actually the opposite, because:

There is absolutely no fear of the law, and perjury is the norm.
Lawyers have no interest in driving a settlement, due to the per-appearance fee system.
There is an artificial limit on fee shifting (awarding costs) and torts, and there is no motivation on the part of judges to create deterrence.

3. Contingent fees and torts cannot be enabled until the fraternity of lawyers can be relied upon for ethical and moral conduct.

— — — — — — — — — — — — — — — — — — — — — —

Easy solution to complex problem

— — — — — — — — — — — — — — — — — — — — — —

Policy 1 — Restricting perjury and making it a non-bailable offence will resolve 50% of cases immediately
(i.e. 1.5 crore cases within 3 months)

India must truly embrace “Satyam ev jayate” and make perjury a non-bailable offence. All lawyers and litigants should be given 3 months’ notice to refile their pleadings or settle the cases. They would then have to face mandatory consequences for false evidence or averments later found to be untrue.

There needs to be a basic expectation reset in Indian court litigation — that the filings are correct, and that the lawyer is responsible for prima facie diligence and candor before the courts. The role of the judiciary is not to distinguish between truth and falsehood but to determine the sequence of events, fix accountability and award penalties. Presenting false evidence or frivolous arguments must be seen as a separate offence in its own right.

At a minimum, Section 195(1)(b)(i) of the CrPC, which states the following, must be removed –
“No Court shall take cognizance- of any offence punishable under any of the following sections of the IPC (45 of 1860), namely, sections 193 to 196 (both inclusive), 199, 200, 205 to 211 (both inclusive) and 228, when such offence is alleged to have been committed in, or in relation to, any proceeding in any Court”

Because:

This necessarily makes the judges party to the complaint, and therefore all judges are reluctant to prosecute perjury.
This estoppel opens up the possibility of corruption in the judiciary and public offices.
The intention of this provision has clearly backfired.
This restriction is unique to India and against international norms.

Policy 2 — Sanctions on lawyers to streamline the perverse incentives that plague the Indian justice system

The courts in the USA are mandated to compulsorily sanction lawyers for various kinds of professional misconduct in litigation (Federal Rule 11 of Civil Procedure).

Remedies and sanctions for lawyer’s misconduct can be categorized into three groups.

Sanctions and remedies for attorney misconduct which are available to public authorities. Such sanctions include professional discipline, criminal liability of lawyers who assist their clients in committing criminal acts, and judicially imposed sanctions such as for contempt of court. Professional discipline is generally the best known sanction for attorney misconduct.
Sanctions which are available to lawyers’ clients. For example, damages for attorney malpractice, forfeiture of an attorney’s fee, and judicial nullification of gifts or business transactions that breach a lawyer’s fiduciary duty to a client.
Remedies that may be available to third parties injured by a lawyer’s conduct on behalf of a client. These include injunctions against representing a client in violation of the lawyer’s duty to a third party, damages for breach of an obligation the attorney assumes to a non-client, and judicial nullification of settlements or jury verdicts obtained by attorney misconduct.

Policy 3 — Contingent fee and tort law, making the judiciary 3x more efficient
(needs perjury law as discussed above to unlock)

In India, lawyers spend more time making cases complex and lengthy, based on the “dehari” (per-appearance fee) system, while their Western counterparts earn far more money by genuinely solving cases and creating real value for the country. The market economics created by India’s judicial policy on regulating the legal profession and ethics is, however, geared towards permanently clogging the system. Three of the four stakeholders benefit by making litigation never end.

There is a dire need for the system to be re-incentivized wherein the legal profession can generate 10x more value for the country in catching and penalizing law abusers. This in turn will also attract and create more talented lawyers because then they will be investing more time in investigating and preparing the cases to win.

Making perjury a non-bailable offence and introducing rules for mandatorily sanctioning lawyer misconduct will unlock contingent fees and tort law in India.

Allowing contingent fees for lawyers has four principal policy justifications.

Firstly, such arrangements

enable the impecunious (having no money) to obtain representation. Such persons cannot afford the costs of litigation unless and until it is successful. Even members of the middle- and upper-socioeconomic classes may find it difficult to pay legal fees in advance of success and collection of judgment. This is particularly so today as litigation has become more complex, often involving suits against multiple parties or multinational entities, and concerning matters requiring expert scientific and economic evidence.

Secondly,

Contingent fee arrangements can help align the interests of lawyer and client, as both will have a direct financial stake in the outcome of the litigation.

Third

By predicating an attorney’s compensation on the success of a suit, the attorney is given incentive to function as gatekeeper, screening cases for both merit and sufficiency of proof, and lodging only those likely to succeed. This provides as an important and genuine signal for litigants to understand the merit of their case.

Fourth

And more generally, all persons of sound mind should be permitted to contract freely, and restrictions on contingent fee arrangements inhibit this freedom.

Three other reasons justify unlocking contingent fees:

Clients, particularly unsophisticated ones, may be unable to determine when an attorney has underperformed or acted irresponsibly; in these instances, an attorney’s reputation would be unaffected, and thus the risk of reputational harm would not adequately protect against malfeasance.
Even when clients are aware of an attorney’s poor performance or irresponsibility, they may lack the means, media, or credibility to effectively harm the attorney’s reputation.
The interests of attorney and client are more closely aligned, ceteris paribus, when fee arrangements are structured so as to minimize perverse incentives.

Why contingent fees reduce caseload:

Fewer cases, better filing

Complainants get genuine advice about their chances of winning. They do not file unless they get the lawyer’s buy-in.

Lawyers screen cases for merit and sufficiency of proof before filing. Lawyers don’t pick up bad cases, to manage their reputation. Comprehensive evidence gathering happens before a lawyer decides to file a case.

Lawyers simplify the case and only allege charges that can be sustained.

They use simple and concise arguments for the judges. Lawyers spend more time working hard outside the courts, thus increasing case quality.

Faster case proceedings

Fewer adjournments and hearings: lawyers prepare better and take fewer adjournments. Multiple steps get completed in single hearings. They create urgency for clients to show up at every hearing. Lawyers do not unnecessarily appeal and stay matters, because they do not get paid per hearing and want quick results.

Case withdrawals

There are fewer takers if a lawyer drops a case when the client springs surprises that will adversely impact the outcome. Lawyers persuade complainants to settle when appropriate.

Conclusions

With 3+ crore cases pending and dysfunctional justice delivery, there is a mass exodus of Ultra High Net Worth Individuals (UHNIs) from India. There is an urgent need for the above reforms before the situation turns into a complete banana republic.

It is an absolute embarrassment and national shame to allow Indians to be blatant liars even in courts. Business and survival in India today is a race to the bottom: because there is no bar on falsehood or corrupt values, it is near impossible to survive without being part of the same culture.

It is prayed that laws even stricter than global norms be enacted, such that Indians can be trusted globally to never lie. It is not just about Indian courts; Indians could then be counted among the most truthful people even in foreign lands and command high respect globally.

Restoring justice delivery: economic benefits of reform in perjury, contingent fees and legal ethics

1. Higher quality of litigation work.

2. Attract better talent to the profession due to higher profitability.

3. Create more jobs in the economy.

4. Offer higher pool of qualified people for judiciary.

5. Drastic reduction in corrupt or criminal activity due to fear of law.

6. Unlocking of trillions of dollars of wealth/resources stuck in litigation.

7. Better investment climate due to more reliability in business transactions.

8. Higher reliability in products and services.

9. Access to justice for all.

10. Restoring faith in judiciary and honour in being Indian.

Further reading

1. Perjury: Important Case Laws Showing How Seriously It is Taken in India! (lawyersclubindia.com)

2. LAW OF PERJURY- (Second Edition) — Indian Bar Association

[1] Comparison of the conviction rates of a few countries of the world | A wide angle view of India (wordpress.com)

Tuesday, 02. November 2021

MyDigitalFootprint

Optimising for “performance” is directional #cop26

In the week where the worlds “leaders” meet to discuss and agree on the future of our climate at  #COP26, I remain sceptical about agreements.  At #COP22 there was an agreement to halve deforestation by 2020; we missed it, so we have moved the target out.   Here is a review of all the past COP meetings and outcomes. It is hard to find any resources to compare previous agreements

In the week where the world’s “leaders” meet to discuss and agree on the future of our climate at #COP26, I remain sceptical about agreements. At #COP22 there was an agreement to halve deforestation by 2020; we missed it, so we have moved the target out. Here is a review of all the past COP meetings and outcomes. It is hard to find any resources that compare previous agreements with achievements. Below is from the UN.



The reason I remain doubtful and sceptical is that the decision of 1.5 degrees is framed. We are optimising for a goal: in this case, we do not want to increase our global temperature beyond 1.5 degrees. Have you ever tried to heat water and stop the heating process such that a temperature target was reached exactly? Critically, you only have one go. Try it: fill a pan with ice and set a target, say 38.4 degrees, use a thermometer, and switch off the heat when you think the final temperature will reach your target. Did you manage to get within 1.5 degrees of your target?

The Peak Paradox framework forces us to think that we cannot optimise for one outcome, one goal, one target or for a one-dimensional framing. To do so would be to ignore other optimisations, visions, ideas or beliefs. When we optimise for one thing, something else will not have an optimal outcome.

In business, we are asked to articulate a single purpose, one mission, the single justification that sets out a reason to exist. The more we as a board or senior leadership team optimise for “performance”, the more we become directional. Performance itself, along with the thinking that drives the best allocation of resources, means we are framed to optimise for an outcome. In social sciences and economics, this is called path dependency. The unintended consequences of our previous decisions that drive efficiency and effectiveness might not directly impact us, but rather another part of an interdependent system which, through several unconnected actions, will feed back into our future decisions and outcomes. Complex systems thinking highlights such causes and effects of positive and negative feedback loops.

For example, dishwasher tablets make me more productive but make the water system less efficient by reducing the effectiveness of the ecosystem. However, I am framed by performance and my own efficiency; therefore, I have to optimise my time, and the dishwasher is a perfect solution. Indeed, we are told by marketing that the dishwasher saves water and energy compared to other washing-up techniques. The single narrow view of optimisation is straightforward and easy to understand. The views we hold on everything from battery cars to mobile phones are framed as being about becoming more productive. Performance as a metric matters more than anything else. Why? Because the story says performance creates economic activity, which creates growth, which means fewer people are in poverty.

“Performance” is a one-dimensional optimisation where economic activity based on financial outcomes wins.  You and I are the agents of any increase in performance, and the losses in the equation of equilibrium are somewhere else in the system. We are framed and educated to doubt the long, complex link between the use of anti-bacterial wipes and someone else’s skin condition. Performance as a dimension for measurement creates an optimal outcome for one and a sub-optimal outcome for someone else. 

Performance as a measure creates an optimal outcome for one and a sub-optimal outcome for someone else. 

If it were just me, then the cause-and-effect relationship is hard to see, but when more of humanity optimises for performance, it is the scale at which we all lose that suddenly comes into effect. Perhaps it is time to question the simple linear ideas of one purpose, one measure, one mission, and try to optimise for different things simultaneously; however, that means simple political messages, tabloid headlines, and social-media-driven advertising will fail. Are leaders ready to lead, or do they enjoy too much power to do the right thing?


Monday, 01. November 2021

Phil Windley's Technometria

Picos at the Edge

Summary: The future of computing is moving from the cloud to the edge. How can we create a decentralized, general-purpose computing mesh? Picos provide a model for exploration. Rainbow's End is one of my favorite books. A work of fiction, Rainbow's End imagines life in a near future world where augmented reality and pervasive IoT technology are the fabric within which people live th

Summary: The future of computing is moving from the cloud to the edge. How can we create a decentralized, general-purpose computing mesh? Picos provide a model for exploration.

Rainbow's End is one of my favorite books. A work of fiction, Rainbow's End imagines life in a near future world where augmented reality and pervasive IoT technology are the fabric within which people live their lives. This book, from 2006, is perhaps where I first began to understand the nature and importance of computing at the edge. A world where computing is ambient and immersive can't rely only on computers in the cloud.

We have significant edge computing now in the form of powerful mobile devices, but that computing is not shared without roundtrips to centralized cloud computing. One of the key components of 5G technology is compute and storage at the edge—on the cell towers themselves—that distributes computing and reduces latency. Akamai, CloudFront, and others have provided these services for years, but still in a data center somewhere. 5G moves it right to the pole in your backyard.

But the vision I've had since reading Rainbow's End is not just distributed, but decentralized, edge computing. Imagine your persistent compute jobs in interoperable containers moving around a mesh of compute engines that live on phones, laptops, servers, or anywhere else where spare cycles exist.

IPFS does this for storage, decentralizing file storage by putting files in shared spaces at the edge. With IPFS, people act as user-operators to host and receive content in a peer-to-peer manner. If a file gets more popular, IPFS attempts to store it in more places and closer to the need.

You can play with this first hand at NoFilter.org, which brands itself as "the world's first unstoppable, uncensorable, undeplatformable, decentralized freedom of speech app." There's no server storing files, just a set of Javascript files that run in your browser. Identity is provided via Metamask, which uses an Ethereum address as your identifier. I created some posts on NoFilter to explore how it works. If you look at the URL for that link, you'll see this:

https://nofilter.org/#/0xdbca72ed00c24d50661641bf42ad4be003a30b84

The portion after the # is the Ethereum address I used at NoFilter. If we look at a single post, you'll see a URL like this:

https://nofilter.org/#/0xdbca72ed00c24d50661641bf42ad4be003a30b84/QmTn2r2e4LQ5ffh86KDcexNrTBaByyTiNP3pQDbNWiNJyt

Note that there's an additional identifier following the slash after my Ethereum address. This is the IPFS hash of the content of that post and is available on IPFS directly. What's stored on IPFS is the JSON of the post that the Javascript renders in the browser.

{ "author": "0xdbca72ed00c24d50661641bf42ad4be003a30b84", "title": "The IPFS Address", "timestamp": "2021-10-25T22:46:46-0-6:720", "body": "<p>If I go here:</p><p><a href=\"https://ipfs.io/ipfs/ QmT57jkkR2sh2i4uLRAZuWu6TatEDQdKN8HnwaZGaXJTrr\";>..." }

As far as I can tell, this is completely decentralized. The identity is just an Ethereum address that anyone can create using Metamask, a Javascript application that runs in the browser. The files are stored on IPFS, decentralized on storage providers around the net. They are rendered using Javascript that runs in the browser. So long as you have access to the Javascript files from somewhere you can write and read articles without reliance on any central server.
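As a small aside, the post JSON above can be fetched directly from any public IPFS HTTP gateway by its content hash. The following is a minimal sketch in C#, assuming the ipfs.io gateway is reachable; a local IPFS node or another gateway would work the same way.

using System.Net.Http;

// CID of the post shown above; the gateway resolves it to the content,
// wherever on the IPFS network it happens to be pinned.
var cid = "QmTn2r2e4LQ5ffh86KDcexNrTBaByyTiNP3pQDbNWiNJyt";

using var client = new HttpClient();
var json = await client.GetStringAsync($"https://ipfs.io/ipfs/{cid}");
Console.WriteLine(json);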

Decentralized Computing

My vision for picos is that they can operate on a decentralized mesh of pico engines in a similar decentralized fashion. Picos are already encapsulations of computation with isolated state and programs that control their operation. There are two primary problems with the current pico engine that have to be addressed to make picos independent of the underlying engine:

Picos are addressed by URL, so the pico engine's host name or IP address becomes part of the pico's address.
Picos have a persistence layer that is currently provided by the engine the pico is hosted on.

The first problem is solvable using DIDs and DIDComm. We've made progress in this area. You can create and use DIDs in a pico. But they are not, yet, the primary means of addressing and communicating with the pico.

The second problem could be addressed with IPFS. We've not done any work in this area yet. So I'm not aware of the pitfalls or problems, but it looks doable.

With these two architectural issues out of the way, implementing a way for picos to move easily between engines would be straightforward. We have import and export functionality already. I'm envisioning something that picos could control themselves, on demand, programmatically. Ultimately, I want the pico to choose where it's hosted based on whatever factors the owner or programmer deems most important. That could be hosting cost, latency, availability, capacity, or other factors. The decentralized directory to discover engines advertising certain features or factors, and a means to pay them, would have to be built—possibly as a smart contract.

A trickier problem is protecting picos from malevolent engines. This is the hardest problem, as far as I can tell. Initially, collections of trusted engines, possibly using staking, could be used.

There are plenty of fun, interesting problems if you'd like to help.

Use Picos

If you're intrigued and want to get started with picos, there's a Quickstart along with a series of lessons. If you need help, contact me and we'll get you added to the Picolabs Slack. We'd love to help you use picos for your next distributed application.

If you're interested in the pico engine itself, it is an open source project licensed under a liberal MIT license. You can see current issues for the pico engine here. Details about contributing to the engine are in the repository's README.

Bonus Material
Rainbows End by Vernor Vinge

The information revolution of the past thirty years blossoms into a web of conspiracies that could destroy Western civilisation. At the centre of the action is Robert Gu, a former Alzheimer's victim who has regained his mental and physical health through radical new therapies, and his family. His son and daughter-in-law are both in the military - but not a military we would recognise - while his middle school-age granddaughter is involved in perhaps the most dangerous game of all, with people and forces more powerful than she or her parents can imagine.

The End of Cloud Computing by Peter Levine

Photo Credit: SiO2 Fracture: Chemomechanics with a Machine Learning Hybrid QM/MM Scheme from Argonne National Laboratory (CC BY-NC-SA 2.0)

Tags: picos cloud edge mesh actors


Doc Searls Weblog

Going west

Long ago a person dear to me disappeared for what would become eight years. When this happened I was given comfort and perspective by a professor of history whose study concentrated on the American South after the Civil War. “You know what the most common record of young men was, after the Civil War?” he […]

Long ago a person dear to me disappeared for what would become eight years. When this happened I was given comfort and perspective by a professor of history whose study concentrated on the American South after the Civil War.

“You know what the most common record of young men was, after the Civil War?” he asked.

“You mean census records?”

“Yes, and church records, family histories, all that.”

“I don’t know.”

“Two words: Went west.”

He then explained that that, except for the natives here in the U.S., nearly all of our ancestors had gone west. Literally or metaphorically, voluntarily or not, they went west.

More importantly, most were not going back. Many, perhaps most, were hardly heard from again in the places they left. The break from the past in countless places was sadly complete for those left behind. All that remained were those two words: went west.

This fact, he said, is at the heart of American rootlessness.

“We are the least rooted civilization on Earth,” he said. “This is why we have the weakest family values in the world.”

This is also why he also thought political talk about “family values” was especially ironic. We may have those values, but they tend not to keep us from going west anyway.

This comes to mind because I just heard Harry Chapin‘s “Cat’s in the Cradle” for the first time in years, and it hurt to hear it. (Give it a whack and try not to be moved. Especially if you also know that Harry—a great songwriter—died in a horrible accident while still a young father.)

You don’t need to grow up in an unhappy family to go west anyway. That happened for me. My family was a very happy one, and when i got out of high school I was eager to go somewhere else anyway. Eventually I went all the way west, from New Jersey, then North Carolina, then Calfornia. After that, also Boston, New York and Bloomington, Indiana. There was westering in all those moves.

Now I’m back in California for a bit, missing all those places, and people in them.

There are reasons for everything, but in most cases those are just explanations. Saul Bellow explains the difference in Mr. Sammler’s Planet:

You had to be a crank to insist on being right. Being right was largely a matter of explanations. Intellectual man had become an explaining creature. Fathers to children, wives to husbands, lecturers to listeners, experts to laymen, colleagues to colleagues, doctors to patients, man to his own soul, explained. The roots of this, the causes of the other, the source of events, the history, the structure, the reasons why. For the most part, in one ear out the other. The soul wanted what it wanted. It had its own natural knowledge. It sat unhappily on superstructures of explanation, poor bird, not knowing which way to fly.

What explains the human diaspora better than our westering tendencies? That we tend to otherize and fight each other? That we are relentlessly ambulatory? Those are surely involved. But maybe there is nothing more human than to say “I gotta go,” without needing a reason beyond the urge alone.

Thursday, 28. October 2021

Phil Windley's Technometria

Token-Based Identity

Summary: Token-based identity systems move us from talking about who, to thinking about what, so that people can operationalize their digital lives. Token-based identity systems support complex online interactions that are flexible, ad hoc, and cross-domain. I've spent some time thinking about this article from PeterVan on Programmable Money and Identity. Peter references a white pap

Summary: Token-based identity systems move us from talking about who, to thinking about what, so that people can operationalize their digital lives. Token-based identity systems support complex online interactions that are flexible, ad hoc, and cross-domain.

I've spent some time thinking about this article from PeterVan on Programmable Money and Identity. Peter references a white paper on central bank digital currencies and one on identity composability by Andrew Hong to lead into a discussion of account- and token-based1 identity. In his article, Peter says:

For Account-based identity, you need to be sure of the identity of the account holder (the User ID / Password of your Facebook-account, your company-network, etc.). For Token-based identity (Certified claim about your age for example) you need a certified claim about an attribute of that identity.

In other words, while account-based identity focuses on linking a person in possession of authentication factors to a trove of information, token-based identity is focused on claims about the subject's attributes. More succinctly: account-based identity focuses on who you are whereas token-based identity is focused on what you are.

One of my favorite scenarios for exploring this is meeting a friend for lunch. You arrive at the restaurant on time and she’s nowhere to be found. You go to the hostess to inquire about the reservation. She tells you that your reservation is correct, and your friend is already there. She escorts you to the table where you greet your friend. You are seated and the hostess leaves you with a menu. Within a few moments, the waitress arrives to take your order. You ask a few questions about different dishes. You both settle on your order and the waitress leaves to communicate with the kitchen. You happily settle in to chat with your friend, while your food is being prepared. Later you might get a refill on a drink, order dessert, and eventually pay.

While you, your friend, the host, and waitstaff recognized, remembered, and interacted with people, places, and things countless times during this scenario, at no time were you required to be identified as a particular person. Even paying with a credit card doesn't require that. Credit cards are a token-based identity system that says something about you rather than who you are. And while you do have an account with your bank, the brilliance of the credit card is that you no longer have to have accounts with every place you want credit. You simply present a token that gives the merchant confidence that they will be paid. Here are a few of the "whats" in this scenario:

My friend
The person sitting at table 3
Over 21
Guest who ordered the medium-rare steak
Someone who needs a refill
Excellent tipper
Person who owes $179.35
Person in possession of a MasterCard

You don't need an account at the restaurant for any of this to work. But you do need relationships. Some, like the relationship with your friend and MasterCard, are long-lived and identified. Most are ephemeral and pseudonymous. While the server at the restaurant certainly "identifies" patrons, they usually forget them as soon as the transaction is complete. And the identification is usually pseudonymous (e.g. "the couple at table three" rather than "Phillip and Lynne Windley").

In the digital realm, we suffer from the problem of not being in proximity to those we're interacting with. As a result, we need a technical means to establish a relationship. Traditionally, we've done that with accounts and identifying, using authentication factors, who is connecting. As a result, all online relationships tend to be long-lived and identified in important ways—even when they don't need to be. This has been a boon to surveillance capitalism.

In contrast, SSI establishes peer-to-peer relationships using peer DIDs (autonomic identifiers) that can be forgotten or remembered as needed. These relationships allow secure communication for issuing and presenting credentials that say something about the subject (what) without necessarily identifying the subject (who). This token-based identity system more faithfully mirrors the way identity works in the physical world.

Account- and token-based identity are not mutually exclusive. In fact, token-based identity often has its roots in an account somewhere, as we discovered about MasterCard. But the key is that you're leveraging that account to avoid being in an administrative relationship in other places. To see that, consider the interactions that happen after an automobile accident.

Account and token interactions after an automobile accident

In this scenario, two drivers, Alice and Bob, have had an accident. The highway patrol has come to the scene to make an accident report. Both Alice and Bob have a number of credentials (tokens) in their digital wallets that they control and will be important in creating the report:

Proof of insurance issued by their respective insurance companies
Vehicle title issued by the state, founded on a vehicle original document from the vehicle's manufacturer
Vehicle registration issued by the Department of Motor Vehicles (DMV)
Driver's license issued by the Department of Public Safety (DPS) in Alice's case and the DMV in Bob's

In addition, the patrol officer has a badge from the Highway Patrol.

Each of these credentials is the fruit of an account of some kind (i.e. the person was identified as part of the process). But the fact that Alice, Bob, and the patrol officer have tokens of one sort or another that stem from those accounts allows them to act autonomously from those administrative systems to participate in a complex, ad hoc, cross-domain workflow that will play out over the course of days or weeks.

Account-based and token-based identity system co-exist in any sufficiently complex ecosystem. Self-sovereign identity (SSI) doesn't replace administrative identity systems, it gives us another tool that enables better privacy, more flexible interactions, and increased autonomy. In the automobile scenario, for example, Alice and Bob will have an ephemeral relationship that lasts a few weeks. They'll likely never see the patrol officer after the initial encounter. Alice and Bob would make and sign statements that everyone would like to have confidence in. The police officer would create an accident report. All of this is so complex and unique that it is unlikely to ever happen within a single administrative identity system or on some kind of platform.

Token-based identity allows people to operationalize their digital lives by supporting online interactions that are multi-source, fluid, multi-pseudonymous, and decentralized. Ensuring that the token-based identity system is also self-sovereign ensures that people can act autonomously without being within someone else's administrative identity system as they go about their online lives. I think of it as digital embodiment—giving people a way to be peers with other actors in online interactions.

Notes

I'm using "token" in the general sense here. I'm not referring to either cryptocurrency or hardware authentication devices specifically.

Photo Credit: Tickets from Clint Hilbert (Pixabay)

Tags: identity ssi tokens surveillance+capitalism relationships


Hans Zandbelt

mod_auth_openidc vs. legacy Web Access Management

A sneak preview of an upcoming presentation about a comparison between mod_auth_openidc and legacy Web Access Management.

A sneak preview of an upcoming presentation about a comparison between mod_auth_openidc and legacy Web Access Management.

Wednesday, 27. October 2021

Identity Praxis, Inc.

A call for New PD&I Exchange Models, The Trust Chain, and A Connected Individual Identity Scoring Scheme: An Interview with Virginie Debris of GMS

Art- A call for New Industry Data Exchange Models, The Trust Chain, and A Connected Individual Transaction And Identity Scoring Scheme: An Interview with Virginie Debris of GMS I recently sat down with Virginie Debris, the Chief Product Officer for Global Messaging Service (GMS) and Board Member of the Mobile Ecosystem Forum, to talk about […] The post A call for New PD&I Exchange Models, Th

Art- A call for New Industry Data Exchange Models, The Trust Chain, and A Connected Individual Transaction And Identity Scoring Scheme: An Interview with Virginie Debris of GMS

I recently sat down with Virginie Debris, the Chief Product Officer for Global Messaging Service (GMS) and Board Member of the Mobile Ecosystem Forum, to talk about personal data and identity (PD&I). We had an enlightening discussion (see video of the interview: 46:16 min). The conversation took us down unexpected paths and brought several insights and recommendations to light.

In our interview, we discussed the role of personal data and identity and how enterprises use it to know and serve their customers and protect the enterprises’ interests. To my delight, we uncovered three ideas that could help us all better protect PD&I and improve the market’s efficiency.

- Idea One: Build out and refine "The Trust Chain", or "chain of trust," a PD&I industry value chain framework envisioned by Virginie.
- Idea Two: Refine PD&I industry practices, optimize all of the data that mobile operators are holding on to, and ensure that appropriate technical, legal, and ethical exchange mechanisms are in place to ensure responsible use of PD&I.
- Idea Three: Standardize a connected individual identity scoring scheme, i.e., a scheme for identity and transaction verification, often centered around mobile data. This scheme is analogous to credit scoring for lending and fraud detection for credit card purchases. It would help enterprises simultaneously better serve their customers, protect PD&I, mitigate fraud, and improve their regulatory compliance efforts.

According to Virginie, a commercial imperative for an enterprise is knowing their customer–verifying the customer’s identity prior to and during engagements. Knowing the customer helps enterprises not only better serve the customer, but also manage costs, reduce waste, mitigate fraud, and stay on the right side of the law and regulations. Virginie remarked that her customers often say, “I want to know who is my end user. Who am I talking to? Am I speaking to the right person in front of me?” This is hard enough in the physical realm, and in the digital realm it is even more difficult. The ideas discussed in this interview can help enterprises answer these questions.

Consumer Identity and the Enterprise

The mobile phone has become a cornerstone for digital identity management and commerce. In fact, Cameron D'Ambrosi, Managing Director of Liminal, has gone as far as to suggest mobile has an irreplaceable role in the digital identity ecosystem.1 Mobile can help enterprises be certain whom they are dealing with, and with this certainty confidently connect with, communicate with, and engage people in nearly any transaction.

To successfully leverage mobile as a tool for customer identity management, which is an enabler of what is known as “know your customer” or KYC, enterprises work with organizations like GMS to integrate mobile identity verification into their commercial workflow. In our interview, Virginie notes that GMS is a global messaging aggregator, the “man in the middle.” It provides messaging and related services powered by personal data and identity to enterprises and mobile operators, including KYC services.

Benefits gained from knowing your customer

There is a wide range of use cases for why an enterprise may want to use services provided by players like GMS. They can:

- Improve customer experience: Knowing the customer and the context of a transaction can help improve the customer experience.
- Maintain data hygiene: Ensuring data in a CRM or customer system of record is accurate can improve marketing, save money, reduce fraud, and more.
- Effectively manage data: Reducing duplicate records, tagging data, and more can reduce costs, create efficiency, and generate new business opportunities (side note: poor data management costs enterprises billions annually).2
- Ensure regulatory compliance: Industry and government best practices, legislation, and regulation are not just nice to have; they are a business requirement. Staying compliant can mitigate risk, build trust, and help organizations differentiate themselves in the market.
- Mitigate cybercrime: Cybercrime is costing industry trillions of dollars a year (Morgan (2020) predicts the tally could be as much as $10.5 trillion annually by 2025).3 These losses can be reduced with an effective strategy.

The connected individual identity scoring scheme

When a consumer signs up for or buys a product or service, an enterprise may prompt them to provide a mobile number and other personal data as part of the maintenance of their profile and to support the transaction. An enterprise working with GMS, in real-time, can ping GMS's network to verify if the consumer-provided mobile number is real, i.e., operational. Moreover, they can ask GMS to predict, with varying levels of accuracy, if a mobile number and PD&I being used in a transaction is associated with a real person. They can also ask if the presumed person conducting the transaction can be trusted or if they might be a fraudster looking to cheat the business. This is a decision based on relevant personal information provided by the individual prior to or during the transaction, as well as data drawn from other sources.

This type of real-time identity and trust verification is made possible by a process Virginie refers to as “scoring.” I refer to it as “the connected individual identity scoring scheme.” Scoring is an intricate and complex choreography of data management and analysis, executed by GMS in milliseconds. This dance consists of pulling together and analyzing a myriad of personal data, deterministic and probabilistic identifiers, and mobile phone signals. The actors in this dance include GMS, the enterprise, the consumer, and GMS’s strategic network of mobile network operators and PD&I aggregator partners.

When asked by an enterprise to produce a score, GMS, in real-time, combines and analyzes enterprise-provided data (e.g., customer name, addresses, phone number, presumed location, etc.), mobile operator signal data (e.g., the actual location of a phone, SIM card, and number forwarding status), and PD&I aggregator supplied data. From this information, it produces a score. This score is used to determine the likelihood that a transaction being initiated by "someone" is legitimate and can be trusted, or not. A perfect score of 1 would suggest that, with one hundred percent certainty, the person is who they say they are and can be trusted, and a score of zero would suggest they are most certainly a cybercriminal.
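To make the idea of a score concrete, here is a minimal, purely illustrative sketch of how a few weighted signals might be combined into a 0-1 trust score. The signal names, weights, and structure are assumptions for illustration only and do not reflect GMS's actual scoring algorithm.

using System;

// Illustrative only: combines a few hypothetical identity signals into a 0-1 score.
// Signal names and weights are assumptions, not GMS's actual model.
public record IdentitySignals(
    bool NumberIsOperational,   // the mobile number is live on an operator network
    bool SimSwappedRecently,    // the SIM was swapped in the recent past
    bool CallForwardingActive,  // calls/SMS are being forwarded elsewhere
    double LocationMatch);      // 0-1 agreement between claimed and observed location

public static class TrustScore
{
    public static double Compute(IdentitySignals s)
    {
        double score = 0.0;
        score += s.NumberIsOperational ? 0.35 : 0.0;
        score += s.SimSwappedRecently ? 0.0 : 0.25;   // no recent SIM swap is a positive signal
        score += s.CallForwardingActive ? 0.0 : 0.15; // no forwarding is a positive signal
        score += 0.25 * Math.Clamp(s.LocationMatch, 0.0, 1.0);
        return score; // 1.0 = high confidence the person is who they claim to be
    }
}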

In our interview, Virginie notes, “nothing is perfect, we need to admit that,” thus suggesting that one should never expect a perfect score. The more certain a business wants to be, i.e. the higher score they require to confirm a transaction, the more the business should expect the possibility of increased transactional costs, time, and friction in the user experience. Keeping this in mind, businesses should develop a risk tolerance matrix, based on the context of a transaction, to determine if they want to accept the current transaction or not. For example, for lower risk or lower cost transactions (e.g., an online pizza order) the business might have a lower assurance tolerance and will accept a lower score. For higher-risk or higher-cost transactions (e.g., a bank wire transfer), they might need a higher assurance tolerance and accept only higher scores.
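A risk tolerance matrix of the kind described above can be as simple as a lookup from transaction context to the minimum acceptable score. The categories and thresholds below are assumptions chosen to mirror the pizza-order and wire-transfer examples, not recommendations.

using System.Collections.Generic;

// Illustrative risk tolerance matrix: maps a transaction context to the minimum
// trust score required to proceed. Categories and thresholds are assumptions.
public enum TransactionRisk { Low, Medium, High }

public static class RiskPolicy
{
    private static readonly Dictionary<TransactionRisk, double> MinimumScore = new()
    {
        { TransactionRisk.Low, 0.4 },    // e.g. an online pizza order
        { TransactionRisk.Medium, 0.7 }, // e.g. changing account contact details
        { TransactionRisk.High, 0.9 }    // e.g. a bank wire transfer
    };

    // Accept the transaction only if the connected individual's score meets the bar.
    public static bool Accept(TransactionRisk risk, double score)
        => score >= MinimumScore[risk];
}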

Example: Detecting fraud in a banking experience

Virginie used a bank transaction as an example. She explained that a bank could check if a customer’s mobile phone is near the expected location of a transaction. If it was not, this might suggest there is a possibility of fraud occurring, which would negatively impact the score.

Mobile scoring happens every day, but not always by this name–others refer to it as mobile signaling or mobile device intelligence. However, Virginie alluded to a challenge. There is no industry standard for scoring, which may lead to inconsistencies in execution and bias across the industry. She suggested that more industry collaboration is needed to prevent this.

The Trust Chain

During our conversation, Virginie proposed a novel idea which frames what we in the industry could do to optimize the PD&I value and use it responsibly. Virginie said we need to build a chain of trust amongst the PD&I actors, “The Trust Chain”.

I have taken poetic license, based on our conversation, and have illustrated The Trust Chain in the figure below. The figure depicts connected individuals* at the center, resting on a bed of industry players linked to enterprises. A yellow band circles them all to illustrate the flow of personal data and identity throughout the chain.

Defining the connected individual and being phygital: It is so easy in business to get distracted by our labels. It is important to remember the terms we use to refer to the people we serve—prospect, consumer, patient, shopper, investor, user, etc.—are contrived and can distract. These terms are all referring to the same thing: a human, an individual, and more importantly, a contextual state or action at some point along the customer journey, i.e., sometimes I am a shopper considering a product, other times I am a consumer using the product. The shopper and the consumer are not always the same person. Understanding this is important to ensure effective engagement in the connected age. In the context of today’s world and this discussion, the individual is connected. They are connected with phones, tablets, smartwatches, cars, and more. These connections have made us “phygital” beings, merging the digital and physical self. Each and every one of these connections is producing data.

According to Virginie, the key to making the industry more effective and efficient is to tap into more and more of the connected individual data held and managed by mobile network operators. This is because, in her own words, “they know everything.” To tap into this data, Virginie said a number of technical, legal, and ethical complexities must be overcome. In addition, an improved model for data exchange amongst the primary actors of the industry—mobile network operators, enterprises, messaging aggregators (like GMS), and PD&I aggregators—needs to be established. In other words, “The Trust Chain” needs to be refined and built. The presumption behind all of this is that the current models of data exchange can be found wanting.

What we need to do next

In summary, the conclusions I draw from my interview with Virginie are that we should come together to tackle:

- The technical, legal, and ethical complexities that stand in the way of more effective access to the treasure trove of data held by the mobile network operators
- The standardization of a connected individual scoring scheme
- The development and integrity of "The Trust Chain"

My takeaway from our discussion is simple: I agree with her ideas. These efforts and more are needed. The use of personal data and identity throughout the industry is accelerating at an exponential rate. To ensure all parties can safely engage, transact, and thrive, it is critical that industry leaders develop a sustainable and responsible marketplace.

I encourage you to watch the full interview here.

Becker, “Mobile’s Irreplaceable Role in the Digital Identity Ecosystem.”↩︎
“Dark Data – Are You at Risk?”↩︎
Morgan, “Cybercrime To Cost The World $10.5 Trillion Annually By 2025.”↩︎

REFERENCES

Becker, Michael. “Mobile’s Irreplaceable Role in the Digital Identity Ecosystem: Liminal’s Cameron D’Ambrosi Speaks to MEF – Blog.” MEF, October 2021. https://mobileecosystemforum.com/2021/10/07/mobiles-irreplaceable-role-in-the-digital-identity-ecosystem-liminals-cameron-dambrosi-speaks-to-mef/.
“Dark Data – Are You at Risk?” Veritas, July 2019. https://www.veritas.com/form/whitepaper/dark-data-risk.
Morgan, Steve. “Cybercrime To Cost The World $10.5 Trillion Annually By 2025.” Cybercrime Magazine, November 2020. https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/.

The post A call for New PD&I Exchange Models, The Trust Chain, and A Connected Individual Identity Scoring Scheme: An Interview with Virginie Debris of GMS appeared first on Identity Praxis, Inc..

Monday, 25. October 2021

Damien Bod

Create and issue verifiable credentials in ASP.NET Core using Azure AD

This article shows how Azure AD verifiable credentials can be issued and used in an ASP.NET Core application. An ASP.NET Core Razor page application is used to implement the credential issuer. To issue credentials, the application must manage the credential subject data as well as require authenticated users who would like to add verifiable credentials […]

This article shows how Azure AD verifiable credentials can be issued and used in an ASP.NET Core application. An ASP.NET Core Razor page application is used to implement the credential issuer. To issue credentials, the application must manage the credential subject data as well as require authenticated users who would like to add verifiable credentials to their digital wallet. The Microsoft Authenticator mobile application is used as the digital wallet.

Code: https://github.com/swiss-ssi-group/AzureADVerifiableCredentialsAspNetCore

Blogs in this series

Getting started with Self Sovereign Identity SSI Challenges to Self Sovereign Identity

Setup

Two ASP.NET Core applications are implemented to issue and verify the verifiable credentials. The credential issuer must administer and authenticate its identities to issue verifiable credentials. A verifiable credential issuer should never issue credentials to unauthenticated subjects of the credential. As the verifier normally only authorizes the credential, it is important to know that the credentials were at least issued correctly. As a verifier, we do not know who, or mostly what, sends the verifiable credentials, but at least we know that the credentials are valid if we trust the issuer. It is possible to use private holder binding for the holder of a wallet, which would increase the trust between the verifier and the issued credentials.

The credential issuer in this demo issues credentials for driving licenses using Azure AD verifiable credentials. The ASP.NET Core application uses Microsoft.Identity.Web to authenticate all identities. In a real application, 2FA would be required for all users; Azure AD supports this well. The administrators would also require admin rights, which could be implemented using Azure security groups or Azure roles that are added to the application as claims after the OIDC authentication flow.
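As a rough sketch of that admin check (the policy and role names here are assumptions for illustration and are not part of the sample), an ASP.NET Core authorization policy could require the Azure AD App Role claim and be applied to the administration Razor pages:

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.RazorPages;

// In ConfigureServices: require an Azure AD App Role (or security group) claim
// for the admin pages. "DrivingLicenseAdmin" is an assumed role/policy name.
services.AddAuthorization(options =>
{
    options.AddPolicy("DrivingLicenseAdmin",
        policy => policy.RequireRole("DrivingLicenseAdmin"));
});

// Applied to the administration page model:
[Authorize(Policy = "DrivingLicenseAdmin")]
public class DriverLicenseAdminModel : PageModel
{
    // admin-only management of driver license data
}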

Any authenticated identity can request credentials (A driving license in this demo) for themselves and no one else. The administrators can create data which is used as the subject, but not issue credentials for others.

Azure AD verifiable credential setup

Azure AD verifiable credentials is set up using the Azure Docs for the Rest API and the Azure verifiable credential ASP.NET Core sample application.

Following the documentation, a display file and a rules file were uploaded for the verifiable credentials created for this issuer. In this demo, two credential subjects are defined to hold the data when issuing or verifying the credentials.

{
  "default": {
    "locale": "en-US",
    "card": {
      "title": "National Driving License VC",
      "issuedBy": "Damienbod",
      "backgroundColor": "#003333",
      "textColor": "#ffffff",
      "logo": {
        "uri": "https://raw.githubusercontent.com/swiss-ssi-group/TrinsicAspNetCore/main/src/NationalDrivingLicense/wwwroot/ndl_car_01.png",
        "description": "National Driving License Logo"
      },
      "description": "Use your verified credential to prove to anyone that you can drive."
    },
    "consent": {
      "title": "Do you want to get your Verified Credential?",
      "instructions": "Sign in with your account to get your card."
    },
    "claims": {
      "vc.credentialSubject.name": {
        "type": "String",
        "label": "Name"
      },
      "vc.credentialSubject.details": {
        "type": "String",
        "label": "Details"
      }
    }
  }
}

The rules file defines the attestations for the credentials. Two standard claims are used to hold the data, the given_name and the family_name. These claims are mapped to our name and details subject claims and hold all the data. Adding custom claims to Azure AD or Azure B2C is not so easy, so I decided that for the demo it would be easier to use standard claims, which work without custom configuration. The data sent from the issuer to the holder of the claims can be set in the application. It should be possible to add credential subject properties without requiring standard AD id_token claims, but I was not able to set this up in the current preview version.

{
  "attestations": {
    "idTokens": [
      {
        "id": "https://self-issued.me",
        "mapping": {
          "name": { "claim": "$.given_name" },
          "details": { "claim": "$.family_name" }
        },
        "configuration": "https://self-issued.me",
        "client_id": "",
        "redirect_uri": ""
      }
    ]
  },
  "validityInterval": 2592001,
  "vc": {
    "type": [ "MyDrivingLicense" ]
  }
}

The rest of the Azure AD verifiable credentials setup follows the documentation exactly.

Administration of the Driving licenses

The verifiable credential issuer application is a Razor page application which uses Entity Framework Core to access a Microsoft SQL Azure database. The administrator of the credentials can assign driving licenses to any user. The DrivingLicenseDbContext class is used to define the DbSet for driver licenses.

public class DrivingLicenseDbContext : DbContext
{
    public DbSet<DriverLicense> DriverLicenses { get; set; }

    public DrivingLicenseDbContext(DbContextOptions<DrivingLicenseDbContext> options)
        : base(options)
    {
    }

    protected override void OnModelCreating(ModelBuilder builder)
    {
        builder.Entity<DriverLicense>().HasKey(m => m.Id);
        base.OnModelCreating(builder);
    }
}
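For completeness, a minimal sketch of how this context might be registered in ConfigureServices, assuming a connection string named "DefaultConnection" (the name is an assumption, not taken from the sample):

// Register the EF Core context against the Azure SQL database.
services.AddDbContext<DrivingLicenseDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));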

A DriverLicense entity contains the information we use to create verifiable credentials.

public class DriverLicense
{
    [Key]
    public Guid Id { get; set; }
    public string UserName { get; set; } = string.Empty;
    public DateTimeOffset IssuedAt { get; set; }
    public string Name { get; set; } = string.Empty;
    public string FirstName { get; set; } = string.Empty;
    public DateTimeOffset DateOfBirth { get; set; }
    public string Issuedby { get; set; } = string.Empty;
    public bool Valid { get; set; }
    public string DriverLicenseCredentials { get; set; } = string.Empty;
    public string LicenseType { get; set; } = string.Empty;
}

Issuing credentials to authenticated identities

When issuing verifiable credentials using Azure AD Rest API, an IssuanceRequestPayload payload is used to request the credentials which are to be issued to the digital wallet. Verifiable credentials are issued to a digital wallet. The credentials are issued for the holder of the wallet. The payload classes are the same for all API implementations apart from the CredentialsClaims class which contains the subject claims which match the rules file of your definition.

public class IssuanceRequestPayload { [JsonPropertyName("includeQRCode")] public bool IncludeQRCode { get; set; } [JsonPropertyName("callback")] public Callback Callback { get; set; } = new Callback(); [JsonPropertyName("authority")] public string Authority { get; set; } = string.Empty; [JsonPropertyName("registration")] public Registration Registration { get; set; } = new Registration(); [JsonPropertyName("issuance")] public Issuance Issuance { get; set; } = new Issuance(); } public class Callback { [JsonPropertyName("url")] public string Url { get; set; } = string.Empty; [JsonPropertyName("state")] public string State { get; set; } = string.Empty; [JsonPropertyName("headers")] public Headers Headers { get; set; } = new Headers(); } public class Headers { [JsonPropertyName("api-key")] public string ApiKey { get; set; } = string.Empty; } public class Registration { [JsonPropertyName("clientName")] public string ClientName { get; set; } = string.Empty; } public class Issuance { [JsonPropertyName("type")] public string CredentialsType { get; set; } = string.Empty; [JsonPropertyName("manifest")] public string Manifest { get; set; } = string.Empty; [JsonPropertyName("pin")] public Pin Pin { get; set; } = new Pin(); [JsonPropertyName("claims")] public CredentialsClaims Claims { get; set; } = new CredentialsClaims(); } public class Pin { [JsonPropertyName("value")] public string Value { get; set; } = string.Empty; [JsonPropertyName("length")] public int Length { get; set; } = 4; } /// Application specific claims used in the payload of the issue request. /// When using the id_token for the subject claims, the IDP needs to add the values to the id_token! /// The claims can be mapped to anything then. public class CredentialsClaims { /// <summary> /// attribute names need to match a claim from the id_token /// </summary> [JsonPropertyName("given_name")] public string Name { get; set; } = string.Empty; [JsonPropertyName("family_name")] public string Details { get; set; } = string.Empty; }

The GetIssuanceRequestPayloadAsync method sets the data for each identity that requested the credentials. Only a signed-in user can request the credentials, and only for themselves. The context.User.Identity is used and the data is selected from the database for the signed-in user. It is important that credentials are only issued to authenticated users. Users and the application must be authenticated correctly, using 2FA and so on. By default, the credentials are only authorized on the verifier, which is probably not enough for most security flows.

public async Task<IssuanceRequestPayload> GetIssuanceRequestPayloadAsync(HttpRequest request, HttpContext context)
{
    var payload = new IssuanceRequestPayload();

    var length = 4;
    var pinMaxValue = (int)Math.Pow(10, length) - 1;
    var randomNumber = RandomNumberGenerator.GetInt32(1, pinMaxValue);
    var newpin = string.Format("{0:D" + length.ToString() + "}", randomNumber);

    payload.Issuance.Pin.Length = 4;
    payload.Issuance.Pin.Value = newpin;
    payload.Issuance.CredentialsType = "MyDrivingLicense";
    payload.Issuance.Manifest = _credentialSettings.CredentialManifest;

    var host = GetRequestHostName(request);
    payload.Callback.State = Guid.NewGuid().ToString();
    payload.Callback.Url = $"{host}:/api/issuer/issuanceCallback";
    payload.Callback.Headers.ApiKey = _credentialSettings.VcApiCallbackApiKey;

    payload.Registration.ClientName = "Verifiable Credential NDL Sample";
    payload.Authority = _credentialSettings.IssuerAuthority;

    var driverLicense = await _driverLicenseService.GetDriverLicense(context.User.Identity.Name);
    payload.Issuance.Claims.Name = $"{driverLicense.FirstName} {driverLicense.Name} {driverLicense.UserName}";
    payload.Issuance.Claims.Details = $"Type: {driverLicense.LicenseType} IssuedAt: {driverLicense.IssuedAt:yyyy-MM-dd}";

    return payload;
}

The IssuanceRequestAsync method gets the payload data, requests credentials from the Azure AD verifiable credentials REST API, and returns this value, which can be scanned as a QR code in the Razor page. The request returns fast. Depending on how the flow continues, a web hook in the application will update the status in a cache. This cache is persisted and polled from the UI. This could be improved by using SignalR; a sketch of that idea follows the issuance-response action below.

[HttpGet("/api/issuer/issuance-request")] public async Task<ActionResult> IssuanceRequestAsync() { try { var payload = await _issuerService.GetIssuanceRequestPayloadAsync(Request, HttpContext); try { var (Token, Error, ErrorDescription) = await _issuerService.GetAccessToken(); if (string.IsNullOrEmpty(Token)) { _log.LogError($"failed to acquire accesstoken: {Error} : {ErrorDescription}"); return BadRequest(new { error = Error, error_description = ErrorDescription }); } var defaultRequestHeaders = _httpClient.DefaultRequestHeaders; defaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", Token); HttpResponseMessage res = await _httpClient.PostAsJsonAsync( _credentialSettings.ApiEndpoint, payload); var response = await res.Content.ReadFromJsonAsync<IssuanceResponse>(); if(response == null) { return BadRequest(new { error = "400", error_description = "no response from VC API"}); } if (res.StatusCode == HttpStatusCode.Created) { _log.LogTrace("succesfully called Request API"); if (payload.Issuance.Pin.Value != null) { response.Pin = payload.Issuance.Pin.Value; } response.Id = payload.Callback.State; var cacheData = new CacheData { Status = IssuanceConst.NotScanned, Message = "Request ready, please scan with Authenticator", Expiry = response.Expiry.ToString() }; _cache.Set(payload.Callback.State, JsonSerializer.Serialize(cacheData)); return Ok(response); } else { _log.LogError("Unsuccesfully called Request API"); return BadRequest(new { error = "400", error_description = "Something went wrong calling the API: " + response }); } } catch (Exception ex) { return BadRequest(new { error = "400", error_description = "Something went wrong calling the API: " + ex.Message }); } } catch (Exception ex) { return BadRequest(new { error = "400", error_description = ex.Message }); } }

The IssuanceResponse is returned to the UI.

public class IssuanceResponse
{
    [JsonPropertyName("requestId")]
    public string RequestId { get; set; } = string.Empty;

    [JsonPropertyName("url")]
    public string Url { get; set; } = string.Empty;

    [JsonPropertyName("expiry")]
    public int Expiry { get; set; }

    [JsonPropertyName("pin")]
    public string Pin { get; set; } = string.Empty;

    [JsonPropertyName("id")]
    public string Id { get; set; } = string.Empty;
}

The IssuanceCallback is used as a web hook for the Azure AD verifiable credentials. When developing or deploying, this web hook needs to have a public IP. I use ngrok to test this. Because the issuer authenticates the identities using an Azure App registration, every time the ngrok URL changes, the redirect URL needs to be updated. Each callback request updates the cache. This API also needs to allow anonymous requests if the rest of the application is authenticated using OIDC. The AllowAnonymous attribute is required if you use an authenticated ASP.NET Core application.

[AllowAnonymous] [HttpPost("/api/issuer/issuanceCallback")] public async Task<ActionResult> IssuanceCallback() { string content = await new System.IO.StreamReader(Request.Body).ReadToEndAsync(); var issuanceResponse = JsonSerializer.Deserialize<IssuanceCallbackResponse>(content); try { //there are 2 different callbacks. 1 if the QR code is scanned (or deeplink has been followed) //Scanning the QR code makes Authenticator download the specific request from the server //the request will be deleted from the server immediately. //That's why it is so important to capture this callback and relay this to the UI so the UI can hide //the QR code to prevent the user from scanning it twice (resulting in an error since the request is already deleted) if (issuanceResponse.Code == IssuanceConst.RequestRetrieved) { var cacheData = new CacheData { Status = IssuanceConst.RequestRetrieved, Message = "QR Code is scanned. Waiting for issuance...", }; _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData)); } if (issuanceResponse.Code == IssuanceConst.IssuanceSuccessful) { var cacheData = new CacheData { Status = IssuanceConst.IssuanceSuccessful, Message = "Credential successfully issued", }; _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData)); } if (issuanceResponse.Code == IssuanceConst.IssuanceError) { var cacheData = new CacheData { Status = IssuanceConst.IssuanceError, Payload = issuanceResponse.Error?.Code, //at the moment there isn't a specific error for incorrect entry of a pincode. //So assume this error happens when the users entered the incorrect pincode and ask to try again. Message = issuanceResponse.Error?.Message }; _cache.Set(issuanceResponse.State, JsonSerializer.Serialize(cacheData)); } return Ok(); } catch (Exception ex) { return BadRequest(new { error = "400", error_description = ex.Message }); } }
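Because the callback allows anonymous requests, it is worth rejecting calls that do not carry the api-key header set in Callback.Headers above. A minimal sketch, intended for the top of the IssuanceCallback action and reusing the VcApiCallbackApiKey setting shown earlier (this check is not part of the code above):

// Sketch: verify the api-key header before trusting the anonymous callback.
Request.Headers.TryGetValue("api-key", out var apiKey);
if (_credentialSettings.VcApiCallbackApiKey != (string)apiKey)
{
    _log.LogWarning("Invalid or missing api-key on issuance callback");
    return Unauthorized();
}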

The IssuanceCallbackResponse is returned to the UI.

public class IssuanceCallbackResponse
{
    [JsonPropertyName("code")]
    public string Code { get; set; } = string.Empty;

    [JsonPropertyName("requestId")]
    public string RequestId { get; set; } = string.Empty;

    [JsonPropertyName("state")]
    public string State { get; set; } = string.Empty;

    [JsonPropertyName("error")]
    public CallbackError? Error { get; set; }
}

The IssuanceResponse method is polled from a Javascript client in the Razor page UI. This method updates the status in the UI using the cache and the database.

[HttpGet("/api/issuer/issuance-response")]
public ActionResult IssuanceResponse()
{
    try
    {
        // the id is the state value initially created when the issuance request was requested from the request API
        // the in-memory database uses this as key to get and store the state of the process so the UI can be updated
        string state = this.Request.Query["id"];
        if (string.IsNullOrEmpty(state))
        {
            return BadRequest(new { error = "400", error_description = "Missing argument 'id'" });
        }

        CacheData value = null;
        if (_cache.TryGetValue(state, out string buf))
        {
            value = JsonSerializer.Deserialize<CacheData>(buf);
            Debug.WriteLine("check if there was a response yet: " + value);

            return new ContentResult { ContentType = "application/json", Content = JsonSerializer.Serialize(value) };
        }

        return Ok();
    }
    catch (Exception ex)
    {
        return BadRequest(new { error = "400", error_description = ex.Message });
    }
}
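As mentioned earlier, this polling could be replaced with SignalR push updates. A rough sketch, assuming a hub named CredentialStatusHub and a client method named "statusChanged" (both names are assumptions, not part of the sample):

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Clients join a group keyed by the callback state; the issuanceCallback action
// then pushes each status change to that group instead of waiting to be polled.
public class CredentialStatusHub : Hub
{
    public Task WatchRequest(string state)
        => Groups.AddToGroupAsync(Context.ConnectionId, state);
}

// In IssuanceCallback, after writing the cache entry:
// await _hubContext.Clients.Group(issuanceResponse.State)
//     .SendAsync("statusChanged", cacheData);
// where _hubContext is an injected IHubContext<CredentialStatusHub>.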

The DriverLicenseCredentialsModel class is used for issuing the credential to the signed-in user. The HTML part of the Razor page contains the Javascript client code, which was implemented using the code from the Microsoft Azure sample.

public class DriverLicenseCredentialsModel : PageModel
{
    private readonly DriverLicenseService _driverLicenseService;

    public string DriverLicenseMessage { get; set; } = "Loading credentials";
    public bool HasDriverLicense { get; set; } = false;
    public DriverLicense DriverLicense { get; set; }

    public DriverLicenseCredentialsModel(DriverLicenseService driverLicenseService)
    {
        _driverLicenseService = driverLicenseService;
    }

    public async Task OnGetAsync()
    {
        DriverLicense = await _driverLicenseService.GetDriverLicense(HttpContext.User.Identity.Name);

        if (DriverLicense != null)
        {
            DriverLicenseMessage = "Add your driver license credentials to your wallet";
            HasDriverLicense = true;
        }
        else
        {
            DriverLicenseMessage = "You have no valid driver license";
        }
    }
}

Testing and running the applications

Ngrok is used to provide a public callback for the Azure AD verifiable credentials callback. When the application is started, you need to create a driving license. This is done in the administration Razor page. Once a driving license exists, the View driver license Razor page can be used to issue a verifiable credential to the logged in user. A QR Code is displayed which can be scanned to begin the issue flow.

Using the Microsoft Authenticator, you can scan the QR Code and add the verifiable credentials to your digital wallet. The credentials can now be used with any verifier which supports the Microsoft Authenticator wallet. The verifier ASP.NET Core application can be used to verify and use the issued verifiable credential from the wallet.

Links:

https://docs.microsoft.com/en-us/azure/active-directory/verifiable-credentials/

https://github.com/Azure-Samples/active-directory-verifiable-credentials-dotnet

https://www.microsoft.com/de-ch/security/business/identity-access-management/decentralized-identity-blockchain

https://didproject.azurewebsites.net/docs/issuer-setup.html

https://didproject.azurewebsites.net/docs/credential-design.html

https://github.com/Azure-Samples/active-directory-verifiable-credentials

https://identity.foundation/

https://www.w3.org/TR/vc-data-model/

https://daniel-krzyczkowski.github.io/Azure-AD-Verifiable-Credentials-Intro/

https://dotnetthoughts.net/using-node-services-in-aspnet-core/

https://identity.foundation/ion/explorer

https://www.npmjs.com/package/ngrok

https://github.com/microsoft/VerifiableCredentials-Verification-SDK-Typescript


Kyle Den Hartog

My Take on the Misframing of the Authentication Problem

First off, the user experience of authenticating on the web has to be joyful first and foremost. Secondly, I think it's important that we recognize that the security of any authentication system is probabilistic, not deterministic.

Prelude: First off, if you haven’t already read The Quest to Replace the Password, stop reading this and give that a read first. To paraphrase my computer security professor: if you haven’t read this paper before you design an authentication system, you’re probably just reinventing something already created or missing a piece of the puzzle. So go ahead and read that paper first before you continue. In fact, I just re-read it before I started working on this post because it does an excellent job of framing the problem.

Over the past few years, I’ve spent a fair amount of time thinking about what the next generation of authentication and authorization systems will look like from a variety of different perspectives. I started out looking at the problem from a user’s perspective, originally just looking to get rid of passwords. Then I looked at it as an attacker, working as an intern penetration tester, which gave me a unique eye into the attacker mindset. Unfortunately, I didn’t enjoy the red team side of security too much (combined with having a 2-year non-compete as an intern - that was a joke in hindsight), so by happenstance I found my way into the standards community where the next generation of authentication (AuthN) and authorization (AuthZ) systems is being built. Combine this with the view of a software engineer attempting to implement the standards being written by some world-class experts with centuries of combined experience. Throughout this experience, I’ve gotten to view the state of the art while also keeping the naivety that comes with being a fairly new engineer relative to some of the experts I get to work with. During this time I’ve become a bit more opinionated on what makes a good authentication system, and this time around I’m going to jot down my current thoughts on what makes something useful.

However, one aspect I think that paper lacks is that it frames the problem of authentication on the web as one where the next goal is to move to something other than passwords; we just haven’t found that better thing yet. While this is probably true for some low-security systems, I think that fundamentally passwords, or more generally “things I know”, are here to stay. The problem is that we haven’t done a good enough job of understanding the requirements the user needs to use the system intuitively, or of making sure that the system sufficiently gets out of the way of the user. As the paper does point out, this is likely due to our specialization bias, which doesn’t allow us to take into consideration the holistic viewpoints necessary to approach the problem. What I’m proposing is, I think, a way we can jump this hurdle through the use of hard data. Read on and let me know if you think this can solve the issue or if I’m just full of my own implicit biases.

So what’re the important insights that I’ve been thinking about lately? First off, the user experience of authenticating on the web has to be joyful first and foremost. If the user doesn’t have a better experience than what they do with passwords, which is a very low bar to beat when considering all the passwords they have to remember, then it’s simply unacceptable and won’t achieve the uptake that’s needed to overtake passwords.

Secondly, I think it’s important that we recognize that the security of any authentication system is probabilistic, not deterministic. Reframing the problem so that each additional security check under the classifications of “what I know”, “what I am”, and “what I have” shifts a probability, rather than flipping a binary switch, allows us to better understand the problem we’re actually trying to solve with security. To put this idea in perspective, think about the problem this way. What’s the probability that a security system will be broken for any particular user during a particular period? For example, a user who chooses to reuse a password set to “#1password” for every website is a lot less likely to stay secure (my intuition - happy to be proven wrong) than a user who can memorize a password like “p2D16U$nClNjqLseKTtnjw” for every website. However, there’s a significant tradeoff on the user’s experience, which is why the case where a user reuses an easy-to-remember password is a lot more likely to occur when studying users than the latter case, even though we know it’s less secure.

So what gives? This all sounds obvious to a semi-well-thought-out engineer, right? The difference is we simply don’t know the probability of a security system failing in an ideal scenario over a pre-determined period. To put this in perspective - can anyone point me to an academic research paper, or even some user research, that tells me the probability that a user’s password will be discovered by an attacker in the next year? What about the probability that the user shares their password with a trusted person because the system wasn’t deployed with a delegation system? Or how the probability shifts as the user reuses their password across many websites? Simply put, I think we’ve been asking the wrong questions here, and until we have hard data on this we can’t make rigorous choices about the acceptable UX/security tradeoffs that are so hard to decide today.
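As a toy illustration of the kind of number I mean (the figures are made up purely for illustration): if we assume a fixed per-site probability p that a reused password is exposed in a given year, the chance that at least one of n sites exposes it is 1 - (1 - p)^n.

using System;

// Toy model only: p is an assumed per-site yearly exposure probability, not real data.
// With password reuse, a single exposure compromises every account sharing that password.
double p = 0.02; // assumed 2% chance per site per year
foreach (int n in new[] { 1, 10, 50 })
{
    double atLeastOne = 1 - Math.Pow(1 - p, n);
    Console.WriteLine($"{n} sites reusing the password: {atLeastOne:P1} chance of at least one exposure");
}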

This isn’t relevant for just passwords either; it extends to many different forms of authentication that fall under the other two authentication classes as well. For example, what’s the probability that a user’s account will be breached when relying on the OpenID Connect protocol rather than a password? Furthermore, what’s the likelihood that the user prefers the OpenID Connect system rather than a password for each website, and is that likelihood worth the increase or decrease in probabilistic security under one or many attack vectors?

The best part of this framing is that it changes how we look at security on the web from the user’s perspective, but that’s not the only part that has to be considered, as is rightly pointed out in the paper. There’s a very important third factor that has to be considered as well: deployability, which I like to reframe as “developer experience” or DevX.

By evaluating the constraints of a system in this way, we reframe the problem into measurable outcomes that can be traded off far more tractably across everything that needs to be considered: the developer deploying or maintaining the system, the user who’s using it, and the resistance to common threats (don’t worry about unrealistic threat models for now - mature them over time) that the user expects the designers of the system to protect them from.

Once we’ve got that data let’s sit down and re-evaluate what are the most important principles of designing the system. I’ll make a few predictions to wrap this up as well.

First prediction: I think once we have this data we’ll see a few things which will be obvious in hindsight. A system that doesn’t prioritize UX over DevX over probabilistic security resilience will be dead in the water, since it goes against the “user should enjoy the authentication experience” principle. Additionally, DevX has to come before security because without a good DevX the system is less likely to be implemented at all, let alone properly.

Second prediction: I’d venture to guess that we’ll learn a few things about the way we frame security on the web, with the clear winner being that we should be designing for MFA systems by default. “What I have” factors need to be the basis of the majority of experiences for the user, with “what I know” factors used as an escalation, and “what I am” factors only needed in the highest-assurance use cases or when more red flags have been raised (e.g. new IP address, new device, etc.), and enforced on-device rather than handled by a remote server.

Final prediction: Recovery is going to be the hardest part of the system to figure out with multi-device flows being only slightly easier to solve. I wouldn’t be surprised if the solution to recovery was actually to not recover and instead make it super easy to “burn and recreate” (as John Jordan has advocated in the decentralized identity community) an account on the web because that’s how hard recovery actually is to get right.

So that’s what I’ve got for now. I’m sure I’m missing something here and I’m sure I’m wrong in a few other cases. Share your comments on this down below or feel free to send me an email and tell me I’m wrong. I do appreciate the thoughtfulness that others put into pointing these things out so let me know what you think and let’s discuss it further. Thanks for reading!

Thursday, 21. October 2021

Mike Jones: self-issued

OpenID and FIDO Presentation at October 2021 FIDO Plenary

I described the relationship between OpenID and FIDO during the October 21, 2021 FIDO Alliance plenary meeting, including how OpenID Connect and FIDO are complementary. In particular, I explained that using WebAuthn/FIDO authenticators to sign into OpenID Providers brings phishing resistance to millions of OpenID Relying Parties without them having to do anything! The presentation […]

I described the relationship between OpenID and FIDO during the October 21, 2021 FIDO Alliance plenary meeting, including how OpenID Connect and FIDO are complementary. In particular, I explained that using WebAuthn/FIDO authenticators to sign into OpenID Providers brings phishing resistance to millions of OpenID Relying Parties without them having to do anything!

The presentation was:

OpenID and FIDO (PowerPoint) (PDF)

MyDigitalFootprint

When does democracy break?

We spend most of our waking hours being held to account between the guardrails of risk-informed and responsible decision making.  Undoubtedly, we often climb over the guardrails and make ill-informed, irresponsible and irrational decisions, but that is human agency. It is also true that we would not innovate, create or discover if we could not explore the other side of our safety guardrails. 

Today’s perception of responsible decision making is different from that held by our grandparents. Our grandchildren will look back at our “risk-informed” decisions with the advantage of hindsight and question why our risk management frameworks were so short-term-focused. However, we need to recognise that our guardrails are established by current political, economic and societal framing.

What are we optimising for?

I often ask the question, “what are we optimising for?” The reason I ask this question is to draw out different viewpoints in a leadership team. The viewpoints that drive individual optimisation are framed by experience, the ability to understand time-frames, and incentives.

Peak Paradox is a non-confrontational framework to explore our different perceptions of what we are optimising for. We need different and diverse views to ensure that our guardrails don’t become so narrow they look like rail tracks, and we repeat the same mistakes because that is what the process determines. Equally, we must agree boundaries of divergence together, which means we can optimise as a team for something that we believe in and that has a purpose shared by us and our stakeholders. Finding this dynamic area of alignment is made easier with the Peak Paradox framework.

However, our model of risk-informed responsible decision making is based on the idea that the majority decides, essentially democracy.  If the “majority” is a supermajority, we need 76%; for a simple majority, we need 51%, and for minority protections less than 10%.  What we get depends on how someone has previously set up the checks, balances, and control system. And then there is the idea of monarchy or the rich and powerful making the decisions for everyone else, the 0.001%.  

However, our guardrails for democratic decision making break down when choices, decisions and judgements have to be made that people do not like. How do we enable better decisions when hard decisions mean you will lose the support of the majority?

We do not all agree about vaccines (even before covid), eating meat, climate change, our government or authority. Vaccines in the current global pandemic period are one such tension of divergence. We see this every day in the news feeds; there is an equal and opposite view for every side. Humanity at large lives at Peak Paradox, but we don’t value everyone’s views equally. Why do we struggle with the idea that individual liberty of choice can be taken away in the interests of everyone?

Climate change is another. Should the government act in the longer term and protect the future, or act in the short term, preserving liberty and ensuring re-election? Protestors with a cause may fight authority and are portrayed as mavericks and disruptors, but history remembers many as pioneers, martyrs and great leaders. Those who protested in the 1960s and 1970s against nuclear energy may look back today and think that they should have been fighting fossil fuels. Such pioneers are optimising for something outside of the normal guardrails. Whilst they appear to be living outside the accepted guardrails, they can see the guardrails that should be adopted.

Our guardrails work well when we can agree on the consequences of risk-informed and responsible decision making; however, in an age when information is untrusted and who is responsible is questioned, we find that we all have different guardrails. Living at Peak Paradox means we have to accept that we will never agree on what a responsible decision is, and that our iteration of democracy, which maintains power and control, is likely to end in a revolution, if only we could agree on what to optimise for.



@_Nat Zone

[Seminar on November 12] Latest Trends in Authentication, API Authorization, and Consent Processes in Financial and Payment Services

From Open Banking to GAIN. This coming November… The post [Seminar on November 12] Latest Trends in Authentication, API Authorization, and Consent Processes in Financial and Payment Services first appeared on @_Nat Zone.

From Open Banking to GAIN

On November 12, I plan to give a (paid) seminar titled "Latest Trends and Future Outlook for Authentication, API Authorization, and Consent Processes in Financial and Payment Services: From Open Banking to GAIN", assuming enough attendees register. It is positioned as a follow-up to the seminar I gave in 2019. It may be hard to tell from the agenda below, but this time I particularly want to spotlight "OIDC for Identity Assurance", which is important for realizing decentralized identity, and GAIN (Global Assured Identity Network), the global framework announced in Germany in September under which mainly financial institutions take on the role of attribute providers (Identity Information Providers).

This time the seminar will also be a hybrid event, both remote and in person. I will be attending on site, so I look forward to meeting you if you come. You can register via the link below.

https://seminar-info.jp/entry/seminars/view/1/5474

Agenda

1. The "authentication" problem for financial services and digital identity
(1) Authentication challenges and the problems of fraudulent account use and fraudulent transfers, with case studies
(2) What is digital identity, the core strategy of GAFA?
(3) Identity management frameworks
(4) Q&A

2. Overview of OpenID Connect, the digital identity standard
(1) The evolution of login models
(2) Overview of OpenID Connect
(3) The three attribute-sharing models of OpenID Connect
(4) Q&A

3. Open Banking and the OpenID extension specifications
(1) What is Open Banking? What is FAPI?
(2) The global spread of FAPI
(3) CIBA, Grant Management, OIDC4IDA
(4) Q&A

4. Toward realizing digital identity
(1) Benefits for financial institutions
(2) Where to start
(3) Development and implementation considerations
(4) Q&A

5. The future of digital identity
(1) Respect for privacy and digital identity
(2) Selective attribute disclosure and OpenID Connect as a decentralized identity foundation
(3) The GAIN trust framework
(4) Q&A

6. Q&A
(1) Across the whole seminar
(2) Q&A session on the book "Digital Identity" (デジタルアイデンティティー)

The post [Seminar on November 12] Latest Trends in Authentication, API Authorization, and Consent Processes in Financial and Payment Services first appeared on @_Nat Zone.

Wednesday, 20. October 2021

Werdmüller on Medium

Reconfiguring

A short story Continue reading on Medium »

MyDigitalFootprint

Climate impact #COP26

Are the consequence of a ½ baked decision that we created (the mess we are in)squared This article joins how the #climate outcomes we get may be related to the evidence requirements we set.  The audience for this viewpoint is those who are thinking about the long term consequences of our current decisions and the evidence we use to support those decisions. We are hoping to bring a sense of
Are the consequence of a ½ baked decision that we created (the mess we are in)squared

This article joins up how the #climate outcomes we get may be related to the evidence requirements we set. The audience for this viewpoint is those who are thinking about the long term consequences of our current decisions and the evidence we use to support those decisions. We are hoping to bring a sense of clarity to our community on why we feel frustrated and lost. You should read this because it will make you think, and it will raise questions we need to debate over coffee as we search to become better versions of ourselves.

@yaelrozencwajg @yangbo @tonyfish

The running order is: Which camp are you in on the positioning of the crisis: known and accepted, still questioning, or denial? What are the early approaches to solutions? What are policymakers doing, and what is their perspective? The action is to accept the invitation to debate at the end.


Part 1. Sustainability set up

The world appears more opinionated and divided about everything. Climate change: real or not. Vaccination for COVID19: conspiracy and control, or in the public best interest. Space travel for billionaires, or feeding those in need. Universal basic income policy vs ignoring those aspects of society we find uncomfortable. Equality creates a fairer society, or leave us alone. So many votes are for self-interest, “it is fairer to me.” Transparency will hold those in power to account, or it will only make it worse. Open networks might create new business models on the web, but will they be sustainable? Sustainability is a false claim, or it is our only option. Like books and publications before it, complexity is now the tool that ensures power remains with the few.

We need to unpack the conflictual and tension-filled gaps in our beliefs, opinions and judgment, because we depend on evidence to change our views. How evidence is presented, and the systematic squeezing out of curiosity, frame us and our current views.

Evidence, in this context, is actually a problem, as we have a very divided idea of what evidence is. For some, evidence is a social media post from an influencer with 10 million followers (how can everyone else be wrong). For others, a headline on the front of a tabloid newspaper is truth (it is printed). For others, a statistical peer-reviewed leading journal publication that is cited 100 times is evidence. In terms of evidence for decision making, there is a gap between the evidence requirements for research and the evidence requirements for business decisions. To be clear, it is not that either is better; it is how we frame evidence that matters. The danger is being framed to believe something because there is a mismatch in the evidence requirements for a decision. A single 100% influencer claim without statistical proof, say about “fertility”, versus a statistical trial with probabilities highlights both that a claim is only a claim and that many will not understand what evidence is.

Why is this important? Because the evidence we see in journals, TV, media, books and publications has different criteria and credentials to the evidence that informs business decisions. Where is the environmental action being decided? In the board rooms! Is this gap in evidence leading to a sustainability gap?


Part 2. Analysis from Kahan 

Here is the rub: it turns out that how scientific evidence is presented matters, as its very presentation creates division.

Dan Kahan, a Yale behavioural economist, has spent the last decade studying whether the use of reason aggravates or reduces “partisan” beliefs. His research papers are here. His research shows that aggravation and alienation easily win, irrespective of being more liberal or conservative. The more we use our faculties for scientific thought, the more likely we will take a strong position that aligns with our (original) political group (or thought).

A way through this could be to copy “solutions journalism”, which reports on ways people and governments meaningfully respond to difficult problems, and not on what the data says the problem is. Rather than use our best insights, analysis and thinking to reach a version of the “truth”, we use data to find ways to agree with others’ opinions in our communities. We help everyone to become curious. Tony Fish has created the Peak Paradox framework as an approach to remaining curious by identifying where we are aligned and where there is a delta in views, without conflict.

When we use data and science in our arguments and explain the problem, the individuals will selectively credit and discredit information in patterns that reflect their commitment to certain values. They (we) (I) assimilate what they (we)(I) want.

Kahan, in 2014, asked over 1,500 respondents whether they agreed or disagreed with the following statement: “There is solid evidence of recent global warming due mostly to human activity such as burning fossil fuels.” They collected information on individuals’ political beliefs and rated their science intelligence. The analysis found that those with the least science intelligence actually hold less partisan positions than those with the most. A Conservative with strong science intelligence will use their skills to find evidence against human-caused global warming, while a Liberal will find evidence for it (cognitive bias).

In the chart above, the y-axis represents the probability of a person agreeing that human activity caused climate change. The x-axis represents the percentile a person scored on the scientific knowledge test. The width of the bars shows the confidence interval for that probability.


Part 3. Are our policies being formed by the evidence we like or evidence we have?

Governments, activists, and the media have gotten better at holding corporations accountable for the societal repercussions of their actions. A plethora of groups score businesses based on their ESG performance, and despite often dubious methodology, these rankings are gathering a lot of attention. As a result, ESG has emerged as an unavoidable concern for corporate executives worldwide. ESG should sit with boards in the “purpose”, or “are we doing the right thing”, camp, but instead has ended up in the compliance camp: do the minimum, tick the box.

For decades businesses have addressed sustainability as an end-of-pipe problem or afterthought. Rather than fundamentally altering their models to recognise that sustainability and wellbeing are critical parts of long-term success, boards have typically delegated social issues to corporate social responsibility, compliance policies, or charitable foundations and associations, and thus they publish their findings (which are not evidence) in annual reports. The issue is that neither investors nor stakeholders read these sustainability reports. Actually, they shouldn’t either.

Although investors’ thinking on sustainability has evolved substantially over the past few decades, sustainability and efficiency leaders have used strategies to pressure corporations to advance a wide range of social concerns, such as the SDGs, across industries and supply chains, regardless of the financial considerations. As ESG assessments, sustainability reports and guidelines have become more rigorous, this accountability raises essential biases. This “pressure” has resulted in many types of actions and raised many concerns, including operational efficiencies that are supposed to reduce the use of energy and natural resources at the expense of profitability.

Whilst it is still unclear whether most investors utilise ESG factors in their investment selection process based on evidence, it is clear that we do side with what we want to hear and not with the science. Dan Kahan, in Part 2 above, was right.


Part 4. Null Hypothesis H(0): A lack of headspace for most people to think about the complexity of these issues, due to meeting performance targets, means leadership has to make time. And if it does not, we become the problem.

At what point do people care about something bigger than themselves? This means you as a person have the headspace to move from survival towards thriving. (PeakParadox.com)

If ⅓ of the world don’t know where the next meal today comes from, they will not have the headspace to worry about sustainability.

If the next additional ⅓ of the world don’t know where the food will come from for tomorrow, they will not have the headspace to worry about sustainability.

If the next ⅙ of the world will run out of food and money in 4 weeks - worrying about sustainability is not their most significant concern.

Less than ⅙ of the world can survive and think beyond four weeks - is that enough to make a difference, and are these people in roles that count?  

Between 0.1% and 1% of the world (8 million and 80 million people) should be able to consider global complexity on the basis that they will never have money or food issues (over $1m in assets), but are they acting together, and is their voice enough to make a difference?
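As a quick sanity check on these fractions, here is a small back-of-envelope calculation; the world-population figure is an assumption (roughly 8 billion) and the numbers are illustrative rather than sourced.

    # Back-of-envelope arithmetic for the headspace argument above.
    # Assumes a world population of roughly 8 billion; figures are illustrative.
    world = 8_000_000_000
    no_meal_today     = world // 3        # ~2.7bn unsure of today's meal
    no_food_tomorrow  = world // 3        # the next third, unsure of tomorrow's food
    four_week_runway  = world // 6        # ~1.3bn run out of food and money in ~4 weeks
    beyond_four_weeks = world // 6        # those able to think beyond four weeks
    wealthy           = (int(world * 0.001), int(world * 0.01))  # 0.1%-1% = 8m-80m people

    print(no_meal_today, four_week_runway, wealthy)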

Is leadership's first priority to ensure that the first ⅚ have enough to survive, and to worry on their behalf - and is leadership able to manage this conflict? Which group has the headspace to cope with recycling? What is remarkable is that the majority of those who care about the environment, sustainability and recycling have created headspace irrespective of their situation. The argument above was designed to frame your thinking; the reality is we don't create headspace because we are too busy.

Part 5.  The imposter syndrome: followers are not followers

Politics (leadership), business (leadership), quangos/NGOs (leadership), individuals (leadership), influencers (leadership) - all have different agendas and demand different outcomes, because their incentives pull in different directions. We lack sustainable leadership that drives in one direction.

For a leader and opinion former, the most troubling finding should be that individuals with more “scientific intelligence” are the quickest to take partisan positions on subjects they know nothing about. In one experiment, Kahan analysed how people's opinions on an unfamiliar subject are affected when given some basic scientific information, along with details about what people in their self-identified political group tend to believe about that subject. It turned out that those with the strongest scientific reasoning skills were most likely to use the information to develop partisan opinions.

Critically Kahan’s research shows that people that score well on a measure called “scientific curiosity” actually show less partisanship, and it is this aspect we need to use.

Do we need to move away from “truth”, “facts”, “data” and “right decisions” if we want a board and senior team that can become aligned? We need to present ideas, concepts and examples of how others are finding solutions, and make our teams more curious. Being curious appears to be the best way to bring us together, however counterintuitive that is. But to do that, we have to give up on filling time, productivity, efficiency, effectiveness and keeping people busy, and give more people time to escape survival and work together for the greater good.

There is a systematic squeezing out of curiosity in our current system. Are schooling, education and search engines to blame? Have we forgotten how to be curious when the facts and truths presented to us either simply align with our natural bias or challenge us? Do we spend sufficient time with others' views to be able to improve our own? Have individualism and personalisation created and reinforced the opinion that our own views are correct? Does the advertising model depend on this divide?

Part 6. Conclusion, rationality and irrationality 

There is a clear message to those in leadership: stop using evidence to create division, push people away or shore up your own camp. How do we take in (all) the evidence and use it to ask questions that bring us together for a common purpose?

Politics becomes irrational as we focus more on the individual and less on society and community. Politicians and their policies need to be voted in, which means they mislead and misrepresent populations who are acting in their own interests: we therefore find evidence to support decisions that deliver short-term gain based on individual preferences rather than long-term community benefit - this is obvious but has to be said. The same is happening in many corporations.

Anger is often seen as a rational emotion, but that is because we focus on the evidence we want in order to justify the action. When you feel under-represented, threatened or in harm's way, the evidence you want will fit like a glove. Understanding how evidence frames us is what brings value to the process.


Part 7. Call to action: The Road to Sustainability webinar series

We believe we have to communicate better, talk openly, listen more, debate to appreciate, be curious and find a route to collaborate. The best way to start is to do something small. Sign up for the sessions below and bring your evidence, but be prepared to take away different evidence so we can make better decisions together.

The Road to Sustainability is a content and tool platform launched in October 2020 that started as a weekly email newsletter providing approaches and strategies for planning sustainability and innovation. We are launching the third edition of our webinar series, following the two previous successful editions of “From chaos to recovery: gateway to sustainability”.

This new series will run every Monday for five meetings, from October 4th to November 8th, with a possible extension.

The schedule is based on our approach, "Roadmap and product management - the new framework for sustainability conversations". The sessions are informative and offer sets of criteria to help organisations move their operations towards sustainability.

Please register here: https://event.theroadtosustainability.com.



Identity Praxis, Inc.

Mobile Marketing in a Privacy-First World – Mobile Marketing Expert, Michael Becker

I thoroughly enjoyed my interview with Nishant Garg with WhatMarketWants. Here’s the abstract for the interview: “What’s your company’s approach to Mobile Marketing in a Privacy-First World? Mobile Marketing Expert Michael Becker talks about what’s mobile marketing, the potential of mobile marketing, mobile responsive content, mobile-optimized website, marketing in a privacy-centric world, the fut

I thoroughly enjoyed my interview with Nishant Garg with WhatMarketWants.

Here’s the abstract for the interview:

“What’s your company’s approach to Mobile Marketing in a Privacy-First World? Mobile Marketing Expert Michael Becker talks about what’s mobile marketing, the potential of mobile marketing, mobile responsive content, mobile-optimized website, marketing in a privacy-centric world, the future of marketing, and much more” (Garg, 2021).

WhatMarketWants Interview with Michael Becker

REFERENCES
Garg, N. (2021). Mobile Marketing in a Privacy-First World – Mobile Marketing Expert, Michael Becker. Retrieved October 22, 2021, from https://www.youtube.com/watch?v=IjIIPsI5Uoc

The post Mobile Marketing in a Privacy-First World – Mobile Marketing Expert, Michael Becker appeared first on Identity Praxis, Inc.

Tuesday, 19. October 2021

Werdmüller on Medium

The corpus at the end of the world

A short story about machine learning, the climate crisis, and change Continue reading on Medium »

A short story about machine learning, the climate crisis, and change

Continue reading on Medium »

Monday, 18. October 2021

Webistemology - John Wunderlich

The Vaccine Certificate Experience

Version 1 of the Ontario COVID Vaccine Certificate is a cumbersome experience that needs some work
"It was hard to write, it should be hard to use"

The quote above was something a programmer friend of mine used to say in the '90s and, despite all the advances in user experience or UX design, it appears to remain effectively true. Let's look at the experience over this past weekend with the newly rolled out enhanced vaccine certificate in Ontario.

Let me preface this by saying that I appreciate that this may be version 1 of a certificate with a QR code and that my comments are intended for an improved version 1.x or 2 of the proof of vaccination. I should also note that CBC has already pointed out that Ontario's enhanced vaccine certificate system is not accessible to marginalized people.

Downloading the Certificate

You will need to go to https://covid-19.ontario.ca/get-proof/ and answer the following questions:

How many doses of the COVID-19 vaccine do you currently have? (required)
Did you get all your doses in Ontario? (required)
Select which health card you have (required)
Do you identify as First Nations, Inuit, or Métis? (required) If you are a non-Indigenous partner or household member of someone in this group, select "Yes."

A couple more clicks (get the certificate through the website, get it by mail, print it at a local library or ServiceOntario location, or call a friend) and you will be asked to agree to the Terms of Service, which includes the following:
By inputting your personal information and personal health information into the COVID-19 Vaccination Services you are agreeing to the ministry's collection, use and disclosure of this information for the purpose of researching, investigating, eliminating or reducing the current COVID-19 outbreak and as permitted or required by law in accordance with PHIPA as set out above. You also agree that your information will be made available to the Public Health Unit(s) in Ontario responsible for your geographic area for the same purpose.
More specifically, by using COVID-19 Vaccination Services, you consent to the ministry collecting identifying information, including personal health information, about you that you submit through the patient verification page so that the Ministry can ensure that it correctly identifies you for the purpose of administering the COVID-19 vaccination program.
Neither the ministry nor the Public Health Unit(s) in Ontario will further use or disclose your personal information or personal health information except for the purposes set out above.

When you click on the "Get a copy" button you may be asked to wait because a virtual queue is being used to throttle traffic. When you get through you will be asked to provide information from your health card. I have a green OHIP card with a photo so I get this screen:

Assuming the information is correct, you will be shown your new certificate, including a QR code (more on this later), as a PDF. At this point, it is on you to print out copies of the two-page certificate and carry them with you. I took it upon myself to crop the certificate to include just my name, birth date & the QR code, and to print it out small enough that I could put it in a laminating pouch and carry it in my wallet. This enables me to go to my wallet, pull out the laminate and my driving license, and have both ready for presentation. For me, that is the simplest and easiest way. Your mileage may vary, since my approach requires being comfortable with basic PDF or graphic editing and having access to a printer. I also put electronic copies on my phone so that I could use that option as well.

Verifying the certificate

Here is what I observed at brunch (shout out to the Sunset Grill on the Danforth).

Staff person asks for Proof of Vaccine
Customer digs out their phone or paper copy of the certificate
The staff person looks at it (or uses the Ontario Verify App)
Staff person asks for a government-issued ID
Customer digs out their driver's license
The staff person looks at the ID and verifies the name

We can do better

What I observed is NOT a user-friendly experience for either the customer or the business. For the experience to improve, it needs to be a single presentation of either a paper or digital certificate that the business can verify in one step. Here's an example that I mocked up some months ago (the picture is from https://thispersondoesnotexist.com/):

This provides the following functionality:

The existing QR code will return the same (i.e. green checkmark in the event of a good code)
The verifier (staff person) can compare the photo on the card with the person in front of them rather than asking for a government ID.
Digital verification may have the option of showing the verifier the picture on the verify app for increased assurance. Note that the Ministry of Health is already authorized to have pictures for the current health card.

Privacy and Security

The advantage of a paper and ID card presentation ritual is that it is difficult to hack. So if we are going to improve the presentation with a single credential as above, privacy and security MUST be protected. This is why a version 1 that is paper/PDF only is not a bad security and privacy choice. On the Verify Ontario app side, both the terms of use and privacy statement are reasonably clear (although the choice to use Google Analytics could be questioned) and make the right commitments.
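For the curious, here is a minimal sketch of what sits behind the QR code, assuming Ontario's enhanced certificate follows the SMART Health Cards (SHC) format it appeared to use at launch. This only decodes the payload for inspection; it does NOT verify the signature, which is the Verify Ontario app's job.

    # Hedged sketch: decode an SHC-style QR payload (shc:/ followed by digit pairs)
    # into its JSON claims. Inspection only; no signature verification is done here.
    import base64, json, zlib

    def decode_shc(qr_text: str) -> dict:
        assert qr_text.startswith("shc:/")
        digits = qr_text[len("shc:/"):]
        # Each pair of digits encodes one character of a compact JWS (ASCII code = pair + 45).
        jws = "".join(chr(int(digits[i:i + 2]) + 45) for i in range(0, len(digits), 2))
        header_b64, payload_b64, _signature_b64 = jws.split(".")
        payload_b64 += "=" * (-len(payload_b64) % 4)               # restore base64url padding
        compressed = base64.urlsafe_b64decode(payload_b64)
        return json.loads(zlib.decompress(compressed, wbits=-15))  # raw DEFLATE per the SHC spec

    # claims = decode_shc(text_read_from_the_qr)  # issuer, issuance date, and a FHIR bundle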

Recommendations

Provide retailers with a verifier

It's nice that the Ontario Verify app is freely downloadable. I used it to check that the laminated cards that I made from my own certificate were readable. But this puts the burden on the retailer and their staff. When I saw someone come in with their QR code, the waitress had to dig out her personal phone and use that. Not a good solution. Either the provincial government or public health should provide retailers with a low to zero cost option to procure their own tablets for use on entry to the store.

Provide Ontarians with options

For example:

I'm relatively tech-savvy, so I'd be happy with a QR code/certificate I could add to a wallet app on my phone for easy display without fumbling around.
ServiceOntario should provide a service to produce laminated wallet cards WITH photos to any Ontarian who shows up at a ServiceOntario site.
On a go-forward basis, ensure that people attending vaccination clinics get printouts of their QR code based certificates WHEN they get vaccinated, since the certificate includes the date of vaccination and presumably the QR code won't return a "Green" until the appropriate date.

With all of the above said, I have to say I'm happy that Ontario's first steps for vaccine certificates appear to have respected Ontarians' privacy and look to be built securely. I look forward to the next couple of weeks because I'm sure that security people will be pounding on the service to find flaws. WHEN they find flaws, let's hope that the province is responsive so that we can all benefit.  

Friday, 15. October 2021

Doc Searls Weblog

On solving the worldwide shipping crisis

The worldwide shipping crisis is bad. Here are some reasons: “Just in time” manufacturing, shipping, delivery, and logistics. For several decades, the whole supply system has been optimized for “lean” everything. On the whole, no part of it fully comprehends breakdowns outside the scope of immediate upstream or downstream dependencies. The pandemic, which has been […]


The worldwide shipping crisis is bad. Here are some reasons:

“Just in time” manufacturing, shipping, delivery, and logistics. For several decades, the whole supply system has been optimized for “lean” everything. On the whole, no part of it fully comprehends breakdowns outside the scope of immediate upstream or downstream dependencies.
The pandemic, which has been depriving nearly every sector of labor, intelligence, leadership, data, and much else, since early last year.
Catastrophes. The largest of these was the 2021 Suez Canal Obstruction, which has had countless effects upstream and down.
Competing narratives. Humans can’t help reducing all complex situations to stories, all of which require protagonists, problems, and movement toward resolution. It’s how our minds are built, and why it’s hard to look more deeply and broadly at any issue and why it’s here. (For more on that, see Where Journalism Fails.)
Corruption. This is endemic to every complex economy: construction, online advertising, high finance, whatever. It happens here too. (And, like incompetence, it tends to worsen in a crisis.)
Bureaucracies & non-harmonized regulations. More about this below*.
Complicating secondary and tertiary effects. The most obvious of these is inflation. Says here, “the spot rate for a 40-foot shipping container from Shanghai to Los Angeles rising from about $3,500 last year to $12,500 as of the end of September.” I’ve since heard numbers as high as $50,000. And, of course, inflation also happens for other reasons, which further complicates things.

To wrap one’s head around all of those (and more), it might help to start with Aristotle’s four “causes” (which might also be translated as “explanations”). Wikipedia illustrates these with a wooden dining table:

Its material cause is wood. Its efficient cause is carpentry. Its final cause is dining. Its formal cause (what gives it form) is design.

Of those, formal cause is what matters most. That’s because, without knowledge of what a table is, it wouldn’t get made.

But the worldwide supply chain (which is less a single chain than braided rivers spreading outward from many sources through countless deltas) is impossible to reduce to any one formal cause. Mining, manufacturing, harvesting, shipping on sea and land, distribution, wholesale and retail sales are all involved, and specialized in their own ways, dependencies withstanding.

I suggest, however, that the most formal of the supply chain problem’s causes is also what’s required to sort out and solve it: digital technology and the Internet. From What does the Internet make of us?, sourcing the McLuhans:

“People don’t want to know the cause of anything”, Marshall said (and Eric quotes, in Media and Formal Cause). “They do not want to know why radio caused Hitler and Gandhi alike. They do not want to know that print caused anything whatever. As users of these media, they wish merely to get inside…”

We are all inside a digital environment that is making each of us while also making our systems. This can’t be reversed. But it can be understood, at least to some degree. And that understanding can be applied.

How? Well, Marshall McLuhan—who died in 1980—saw in the rise of computing the retrieval of what he called “perfect memory—total and exact.” (Laws of Media, 1988.) So, wouldn’t it be nice if we could apply that power to the totality of the world’s supply chains, subsuming and transcending the scope and interests of any part, whether those parts be truckers, laws, standards, and the rest—and do it in real time? Global aviation has some of this, but it’s also a much simpler system than the braided rivers between global supply and global demand.

Is there something like that? I don’t yet know. Closest I’ve found is the UN’s IMO (International Maritime Organization), and that only covers “the safety and security of shipping and the prevention of marine and atmospheric pollution by ships.” Not very encompassing, that. If any of ya’ll know more, fill us in.

[*Added 18 October] Just attended a talk by Oswald Kuyler, Managing Director of the International Chamber of Commerce‘s Digital Standards initiative, on an “Integrated Approach” by his and allied organizations that addresses “digital islands,” “no single view of available standards” both open and closed, “limited investments into training, change management and adoption,” “lack of enabling rules and regulations,” “outdated regulation,” “privacy law barriers,” “trade standard adoption gaps,” “costly technical integration,” “fragmentation” that “prevents paperless trade,” and other factors. Yet he also says the whole thing is “bent but not broken,” and that (says one slide) “trade and supply chain prove more resilient than imagined.”

Another relevant .org is the International Chamber of Shipping.

By the way, Heather Cox Richardson (whose newsletter I highly recommend) yesterday summarized what the Biden administration is trying to do about all this:

Biden also announced today a deal among a number of different players to try to relieve the supply chain slowdowns that have built up as people turned to online shopping during the pandemic. Those slowdowns threaten the delivery of packages for the holidays, and Biden has pulled together government officials, labor unions, and company ownership to solve the backup.

The Port of Los Angeles, which handles 40% of the container traffic coming into the U.S., has had container ships stuck offshore for weeks. In June, Biden put together a Supply Chain Disruption Task Force, which has hammered out a deal. The port is going to begin operating around the clock, seven days a week. The International Longshore and Warehouse Union has agreed to fill extra shifts. And major retailers, including Walmart, FedEx, UPS, Samsung, Home Depot, and Target, have agreed to move quickly to clear their goods out of the dock areas, speeding up operations to do it and committing to putting teams to work extra hours.

“The supply chain is essentially in the hands of the private sector,” a White House official told Donna Littlejohn of the Los Angeles Daily News, “so we need the private sector…to help solve these problems.” But Biden has brokered a deal among the different stakeholders to end what was becoming a crisis.

Hopefully helpful, but not sufficient.

Bonus link: a view of worldwide marine shipping. (Zoom in and out, and slide in any direction for a great way to spend some useful time.)

The photo is of Newark’s container port, viewed from an arriving flight at EWR, in 2009.

Thursday, 14. October 2021

Mike Jones: self-issued

Proof-of-possession (pop) AMR method added to OpenID Enhanced Authentication Profile spec

I’ve defined an Authentication Method Reference (AMR) value called “pop” to indicate that Proof-of-possession of a key was performed. Unlike the existing “hwk” (hardware key) and “swk” (software key) methods, it is intentionally unspecified whether the proof-of-possession key is hardware-secured or software-secured. Among other use cases, this AMR method is applicable whenever a WebAuthn

I’ve defined an Authentication Method Reference (AMR) value called “pop” to indicate that Proof-of-possession of a key was performed. Unlike the existing “hwk” (hardware key) and “swk” (software key) methods, it is intentionally unspecified whether the proof-of-possession key is hardware-secured or software-secured. Among other use cases, this AMR method is applicable whenever a WebAuthn or FIDO authenticator is used.
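To make the usage concrete, here is a rough, non-normative sketch of ID token claims asserting the new value; the issuer, subject, audience and timestamps are placeholders, not taken from the specification.

    # Illustrative only: ID token claims in which the OpenID Provider asserts that
    # proof-of-possession of a key was performed ("pop"). Values are placeholders.
    id_token_claims = {
        "iss": "https://op.example.com",
        "sub": "248289761001",
        "aud": "client-123",
        "iat": 1634199000,
        "exp": 1634202600,
        "amr": ["pop"],   # the new Authentication Method Reference value
    }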

The specification is available at these locations:

https://openid.net/specs/openid-connect-eap-acr-values-1_0-01.html
https://openid.net/specs/openid-connect-eap-acr-values-1_0.html

Thanks to Christiaan Brand for suggesting this.

Wednesday, 13. October 2021

MyDigitalFootprint

Do shareholders have to live in a Peak Paradox?

For the past 50 years, most business leaders in free market-based economies have been taught or trained in a core ideology; "that the purpose of business is to serve only the shareholder". The source was #MiltonFriedman. Whilst shareholder primacy is simple and allows businesses to optimise for one thing, which has an advantage in terms of decision making, it has also created significant damage. I

For the past 50 years, most business leaders in free market-based economies have been taught or trained in a core ideology: "that the purpose of business is to serve only the shareholder". The source was #MiltonFriedman. Whilst shareholder primacy is simple and allows businesses to optimise for one thing, which is an advantage in terms of decision making, it has also created significant damage. It does not stand up to modern critique and challenges such as ESG. Importantly, there is an assumption that a shareholder group is a united and unified collective.
There is an assumption: shareholders are united and unified
The reality is that in a public or investor-led company, the shareholders are as diverse in terms of vision, rationale, purpose and expectation as any standard group of associated parties. Shareholders as a group rarely have an entirely homogeneous objective. Given the dominance of shareholder primacy, how do we know this concept of non-homogeneous actors is genuine? Because conflict among shareholders is so common that it constitutes a topic of high practical relevance and academic interest. Two areas dominate research in this field: board performance and board dynamics. Particular attention has been given to conflict that arises when ownership is shared between a dominant, controlling shareholder and minority shareholders.




The majority and dominant shareholders have incentives to pursue personal goals through the business as they disproportionately gain the benefits but do not fully bear the economic risks and can misuse their power to exploit minority shareholders. That is, the majority shareholder may push management and the board to pursue objectives that align with their own priorities but that are detrimental for the minority shareholder. Studies have shown that such conflicts negatively affect a firm's performance, valuation, and innovation. In listed companies, market regulators, using minority protection clauses, try to avoid this abuse of power, but in private equity markets, there is no regulator to prevent this conflict.

Surprisingly, however, far less research has examined whether and how different types of shareholders (even stakeholders) can complement each other so that mutual benefits arise.

A recent study examining shareholder relationships in privately held firms compared the outcomes of private equity investments in privately held family businesses. The research hypothesis was that the objectives of professional investors and family owners differ. PE investors focus on maximising financial returns through a medium-term exit and generally have lower levels of risk aversion. Family shareholders, in contrast, generally have most of their wealth concentrated in a single firm, hold longer time horizons, and are often particularly concerned about non-economic benefits the firm brings to the family (e.g., reputation in the community). What is clear is that the reason to trade and have a purpose is critical for any alignment to emerge.

We know that boards and management need to be informed and review shareholders' objectives. Shareholders need to be informed and review the extent to which they are still aligned in terms of time horizons, risk preferences, need for cash, prioritising financial goals, and whether control or dominance of an agenda is constructive or destructive. Which is the segue to the Peak Paradox mapping above.






The fundamental issue is that we cannot make decisions at Peak Paradox; we have to move towards a peak to determine what we are optimising for. Shareholders who optimise for Peak Individual Purpose will want to use a dominant position, or minority protections, to force the agenda towards their own goals. A board perceived as high performing will optimise towards Peak Work Purpose. This includes commercial and non-commercial organisations, where commercial boards will optimise for a single objective such as shareholder primacy. What becomes evident at this point is the question of diversity. Dominant shareholders can create environments where a lack of diversity of thinking, experience, motivation, purpose and incentives serves them and their objectives. This will also show up as a board that does not ask questions, cannot deal with conflict and avoids tension. Finally, if we move towards optimising for a better society (Peak Social Purpose), we find that we move towards serving stakeholders.

Will society be better off because this business exists?
The critical point here is that stakeholders cannot live with decision-making at Peak Paradox and have to find something to optimise for and align to. This will not be a three- or five-year plan but a reason for the business to exist. At Peak Paradox the most important questions get asked, such as "Will society be better off because this business exists?" The same data can support both a yes and a no answer, and the board needs to justify why it believes the answer is yes.

Tuesday, 12. October 2021

Mike Jones: self-issued

OpenID Connect Presentation at IIW XXXIII

I gave the following invited “101” session presentation at the 33rd Internet Identity Workshop (IIW) on Tuesday, October 12, 2021: Introduction to OpenID Connect (PowerPoint) (PDF) The session was well attended. There was a good discussion about the use of passwordless authentication with OpenID Connect.

I gave the following invited “101” session presentation at the 33rd Internet Identity Workshop (IIW) on Tuesday, October 12, 2021:

Introduction to OpenID Connect (PowerPoint) (PDF)

The session was well attended. There was a good discussion about the use of passwordless authentication with OpenID Connect.


Werdmüller on Medium

Checking in, checking out

A short story about the future of work Continue reading on Medium »

A short story about the future of work

Continue reading on Medium »

Monday, 11. October 2021

reb00ted

What does a personal home page look in the Metaverse? A prototype

At IndieWeb Create Day this past weekend, I created a prototype for what an IndieWeb-style personal home page could look like in the metaverse. Here’s a video of the demo: The source is here, in case you want to play with it.

At IndieWeb Create Day this past weekend, I created a prototype for what an IndieWeb-style personal home page could look like in the metaverse.

Here’s a video of the demo:

The source is here, in case you want to play with it.


It's been 15 years of Project VRM: Here's a collection of use cases and requirements identified over the years

Today’s Project VRM meeting marks the project’s 15 years anniversary. A good opportunity to list the uses cases that have emerged over the years. To make them more manageable, I categorize them by the stage of the relationship between customer and vendor: Category 1: Establishing the relationship What happens when a Customer or a Vendor wishes to initiate a relationship, or wishes to modify th

Today’s Project VRM meeting marks the project’s 15-year anniversary. A good opportunity to list the use cases that have emerged over the years. To make them more manageable, I categorize them by the stage of the relationship between customer and vendor:

Category 1: Establishing the relationship

What happens when a Customer or a Vendor wishes to initiate a relationship, or wishes to modify the terms of the relationship.

1.1 Customer initiates a new relationship with a Vendor

“As a Customer, I want to offer initiating a new relationship with a Vendor.”

Description:

The Customer encounters the Vendor’s electronic presence (e.g. their website)
The Customer performs a gesture on the Vendor’s site that indicates their interest in establishing a relationship
As part of the gesture, the Customer’s proposed terms are conveyed to the Vendor
In response, the Vendor provides acceptance of the proposed relationship and offered terms, or offers alternate terms in return.
If the offered terms are different from the proposed terms, the Customer has the opportunity to propose alternate terms; this continues until both parties agree on the terms or abort the initiation.
Once the terms have been agreed on, both sides record their agreement.

Notes:

To make this “consumer-grade”, much of the complexity of such concepts as “proposed terms” needs to be hidden behind reasonable defaults.

1.2 Vendor initiates a new relationship with a Customer

“As a Vendor, I want to offer initiating a new relationship with a Customer.”

Similar as for “Customer initiates a new relationship with a Vendor”, but with reversed roles.
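As a purely illustrative aside (not part of the Project VRM write-up itself), the proposal/counter-proposal ceremony in 1.1 and 1.2 could be modelled as data roughly like this; every class and field name here is hypothetical.

    # Hypothetical sketch of the terms-negotiation ceremony described above.
    from dataclasses import dataclass, field

    @dataclass
    class Terms:
        allowed_uses: list[str]              # e.g. ["monthly product updates"]
        shared_data: list[str]               # e.g. ["email address"]

    @dataclass
    class RelationshipProposal:
        proposer: str                        # "customer" or "vendor"
        terms: Terms
        history: list[Terms] = field(default_factory=list)

        def counter(self, new_terms: Terms) -> "RelationshipProposal":
            # Each counter-offer keeps the prior terms until both parties agree.
            return RelationshipProposal(self.proposer, new_terms, self.history + [self.terms])

A real implementation would also need identity, signatures and revocation, which the later use cases touch on.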

1.3 Customer and Vendor agree on a closer relationship

“The Customer and the Vendor agree on a closer relationship.”

Description:

The Customer and the Vendor have been in a relationship governed by certain terms for some time. Now, either the Customer or the Vendor propose new terms to the other party, and the other party accepts. The new terms permit all activities permitted by the old terms, plus some additional ones.

Example:

The Customer has agreed with the Vendor that the Vendor may send the Customer product updates once a month. For that purpose, the Customer has provided Vendor an e-mail address (but no physical address). Now, the Customer has decided to purchase a product from Vendor. To ship it, the Vendor needs to have the Customer’s shipping address. New terms that also include the shipping address are being negotiated.

1.4 A Customer wants a more distant relationship

“As a Customer, I want to limit the Vendor to more restrictive terms.”

Description:

The Customer and the Vendor have been in a relationship governed by certain terms for some time. Now, the Customer wishes to disallow certain activities previously allowed by the terms, without terminating the relationship. The Customer offers new terms, which the Vendor may or may not accept. The Vendor may offer alternate terms in turn. This negotiation continues until either mutually acceptable terms are found, or the relationship terminates.

Examples:

The Customer has agreed with the Vendor that the Vendor may send the Customer product updates. Now the Customer decides that they do not wish to receive product updates more frequently than once a quarter.
The Customer has agreed to behavioral profiling when visiting Vendor’s website. Now, while the Customer still wishes to use the website, they no longer consent to the behavioral profiling.

2. Category: Ongoing relationship

What happens during a relationship after it has been established and while no party has the intention of either modifying the strength of, or even leaving the relationship.

2.1 Intentcasting

“As a Customer, I want to publish, to a selection of Vendors that I trust, that I wish to purchase a product with description X”

Benefits:

Convenience for Customer
Potential for a non-standard deal (e.g. I am a well-known influencer, and the seller makes Customer a special deal)

Features:

It’s basically a shopping list: free-form text plus “terms” (FIXME: open issue)
Retailers can populate product alternatives for each item in the list
This might simply be a search at the retailer site, but terms need to be computable
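To make the point that "terms need to be computable" concrete, here is one purely hypothetical shape an intentcast message could take; none of these field names come from a published VRM schema.

    # Hypothetical intentcast payload: a shopping-list item plus machine-readable terms.
    intentcast = {
        "customer": "did:example:alice",                       # pseudonymous identifier
        "items": [{"description": "energy-efficient washing machine", "quantity": 1}],
        "terms": {"no_resale_of_data": True, "offers_expire": "2021-11-01"},
        "audience": ["trusted-vendor-1", "trusted-vendor-2"],  # only vendors I trust
    }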

Issues:

what if a retailer lies and spams with unrelated products, or unavailable products?

2.2 Automated intentcasting on my behalf

“As a Customer, I want an ‘AI’ running on my behalf to issue Intentcasts when it is in my interest”

Description:

This is similar to functionality deployed by some retailers today: “We noticed you have not bought diapers in the last 30 days. Aren’t you about to run low? Here are some offers”. But this functionality runs on my behalf, and takes data from all sorts of places into account.

Benefits:

Convenience, time savings

2.3 Contextual product reviews and ratings

“As a Customer, I want to see reviews and ratings of the product and seller alternatives for any of my shopping list items.”

Benefits:

Same as product reviews in silos, but without the silos: have access to more reviews and ratings by more people
Seller alternatives give Customer more choice

2.4 Filter offers by interest

“As a Customer, I want to receive offers that match my implicitly declared interest”

Description:

In Intentcasting, I actively publish my intent to meet a need I know I have by purchasing a product
This is about offers in response to needs, or benefits, that I have not explicitly declared but that can be inferred.
Example: if I purchased a laser printer 6 months ago, and I have not purchased replacement toner cartridges nor declared an intent to purchase some, I would appreciate offers for such toner cartridges (but not for inkjet cartridges)

Benefits:

Convenience for Customer
Better response rate for Vendor

2.5 Full contact record

“As a Customer, I want a full record of all interactions between Customer and each Vendor accessible in a single place.”

Benefits:

Simplicity & Convenience
Trackability
Similar to CRM

Notes:

This should cover all modes of communication, from e-mail to trouble tickets, voice calls and home visits.

2.6 Manage trusted Vendors in a single place

“As a Customer, I want to see and manage my list of trusted Vendors in a single place”

Benefits:

Simplicity
Transparency (to Customer)

Notes:

Probably should also have a non-trusted Vendors list, so my banned vendors are maintained in the same place; which subset is being displayed is just a filter function
Probably should have a list of all Vendors ever interacted with

2.7 Notify of changes about products I’m interested in

“As a Customer, I want be notified of important changes (e.g. price) in products that I’m interested in.”

Benefits:

can use my shopping cart as a “price watch list” and purchase when I think the price is right

Notes:

This should apply to items in my shopping cart, but also items in product lists that I might have created (“save for later” lists)

2.8 Personal wallet

“As a Customer, I want my own wallet that I can use with any Vendor”

Benefits:

Simplicity & Convenience
Unified billing

Notes:

Unified ceremony
Should be able to delegate to the payment network of my choice

2.9 Preventative maintenance

“As a Customer, I would like to be notified of my options when a product I own needs maintenance”

Description:

If I have a water heater, and it is about to fail, I would like to be notified that it is about to fail, with offers for what to do about it. It’s a kind of intentcasting but the intent is inferred from the fact that I own the product, the product is about to fail, that I don’t want it to fail and I am willing to entertain offers from the vendor I bought it from and others.

Benefits:

Convenience for Customer
No service interruption

2.10 Product clouds

“Each product instance has its own cloud”

Benefits:

Collects information over the lifetime of the product instance
Product instance-specific
Can change ownership with the product
Does not disappear with the vendor

Example:

My water heater has its own cloud. It knows usage, and maintenance events. It continues to work even if the vendor of the water heater goes out of business.

2.11 Product info in context

“As a Customer, I want to access product documentation, available instruction manuals etc in the context of the purchase that I made”

Benefits:

Simplicity & Convenience
Updated documentation, new materials etc show up automatically

2.12 Set and monitor terms for relationships

“As a Customer, I want to set terms for my relationship with a Vendor in a single place”

Benefits:

Simplicity & Convenience
Privacy & Agency

Notes:

Originally this was only about terms for provided personal data, but it appears this is a broader issue: I also want to set terms for, say, dispute resolution (“I never consent to arbitration”) or customer support (“must agree to never let Customer wait on the phone for more than 30min”)

2.13 Update information in a single place only

“As a Customer, I (only) want to update my personal contact information in one place that I control”

Benefits:

for Customer: convenience
for Vendor: more accurate information

Issues:

Should that be a copy and update-copy process (push), or a copy and update-on-need process (pull with copy) or a fetch-and-discard process (pull without copy)?

Notes:

Originally phrased as only about contact info (name, married name, shipping address, phone number etc), this probably applies to other types of information as well, such as, say, credit card numbers, loyalty club memberships, even dietary preferences or interests (“I gave up stamp collecting”)

2.14 Single shopping cart

“As a Customer, I want to use a single shopping cart for all e-commerce sites on the web.”

Benefits:

I don’t need to create accounts on many websites, or log into many websites
I decide when the collection of items in the cart expires, and I don’t lose work
It makes it easier for Customer to shop at more sites, and I can more easily buy from the long tail of sites

Features:

It shows product and seller
It may show alternate sellers and difference in terms (e.g. price, shipping, speed)
We may also want to have product lists that aren’t a shopping cart (“save for later” lists)

2.15 Unified communications/notifications preferences

“As a Customer, I want to manage my communication/notification preferences with all Vendors in a single place”

Notes:

In a single place, and in a single manner. I should not have to do things differently to unsubscribe from the product newsletter of vendors X and Y.

Benefits:

Simplicity & Convenience

2.16 Unified product feedback

“As a Customer, I want a uniform way to submit (positive and negative) product feedback (and receive responses) with any Vendor”

Benefits:

Simplicity & Convenience
Trackability
Similar to CRM

Notes:

Should be easy to do this either privately or publicly

2.17 Unified purchase history

“As a Customer, I want to have a record of all my product purchases in a single place”

Benefits:

Simplicity & Convenience
If I wish to re-order a product I purchased before, I can easily find it and the vendor that I got it from

2.18 Unified subscriptions management

“As a Customer, I want to manage all my ongoing product subscriptions in a single place”

Benefits:

Simplicity & Convenience
Expense management

Note:

This is implied by the source, not explicitly mentioned.

Future:

Opens up possibilities for subscription bundle business models

2.19 Unified support experience

Benefits:

Simplicity & Convenience
Trackability
Similar to CRM

Notes:

Should be multi-modal: trouble tickets, chat, e-mail, voice etc

3. Category: Beyond binary relationships

Use cases that involve more than one Customer, or more than one Vendor, or both.

3.1 Proven capabilities

“As a Vendor, I want to give another party (Customer or Vendor) the capabilities to perform certain operations”

Description:

This is SSI Object Capabilities
Example: I want to give my customer the ability to open a locker
The scenario should be robust with respect to confidentiality and accuracy.

3.2 Silo-free product reviews

“As a Customer, I want to publish my reviews and ratings about products I own so they can be used by any other Customer at any point of purchase”

Benefits:

Same as product reviews in silos, but without the silos: broader distribution of my review for more benefit by more people

Notes:

Rephrased from “express genuine loyalty”

3.3 Monitoring of terms violation

“As a Customer, I want to be notified if other Customers interacting with a Vendor report a violation of their terms”

Description:

I have a relationship with a Vendor, and we have agreed to certain terms. If the Vendor breaks those terms, and other Customers in a similar relationship with the Vendor notice that, I want to be notified.

Benefits:

Trust, security, safety

Notes:

This can of course be abused through fake reports, so suitable measures must be taken.

3.4 Unified health and wellness records

“As a Customer, I want my health and wellness records from all sources to be aggregated in a place that I control.”

Benefits:

Survives disappearance of the vendor
Privacy
Allows cross-data-source personal analytics and insights
Integration across healthcare (regulated industry) and wellness (consumer)

3.5 Verified credentials

“As a Customer, I want to be able to tell Vendor 1 that Vendor 2 makes a claim about Customer.”

Description:

This is the SSI verified credential use case
Example: I want to tell a potential employer that I have earned a certain degree from a certain institution
The scenario should be robust with respect to confidentiality and accuracy.

4. Category: Ending the relationship

What happens when one of the parties wishes to end the relationship.

4.1 Banning the vendor

“As a Customer, I want to permanently ban a Vendor from doing business with Customer ever again.”

Description:

This form of ending the relationship means that I don’t want to be told of offers or responses to Intentcasts etc. by this vendor ever again.

4.2 Disassociating from the vendor

“As a Customer, I want to stop interacting with a vendor at this time but I am open to future interactions.”

Description:

This probably means that data sharing and other interaction reset to the level of how it was before the relationship was established. However, the Customer and the Vendor do not need to go through the technical introduction ceremony again when the relationship is revived.

4.3 Firing the customer

“As a Vendor, I want to stop interacting with a particular Customer.”

Description:

For customers (or non-customers) that the Vendor does not wish to serve (e.g. because of excessive support requests), the Vendor may unilaterally terminate the relationship with the Customer.

Sunday, 10. October 2021

Werdmüller on Medium

The first wave, an army

A short story Continue reading on Medium »

Saturday, 09. October 2021

@_Nat Zone

A conversation with Tatsuya Kurosaka in IT批評 (IT Criticism)

The theme was digital identity, but the conversation… The post A conversation with Tatsuya Kurosaka in IT批評 (IT Criticism) first appeared on @_Nat Zone.

The theme was digital identity, but the conversation happened to take place just as the controversy over JR East installing cameras inside its stations was heating up, so from there the discussion ranged widely and deeply.

The write-up is very carefully done, so I think you will be able to enjoy reading it and come away understanding how Kurosaka-san and I think about digital identity and privacy, and what concerns us. It is quite substantial and has been published in two parts.

Table of contents

Part 1

The essence of the JR East surveillance camera problem

Japanese people are negative about advertising but tolerant of surveillance cameras

Japanese companies would rather dig their own wells than build shared water and sewer infrastructure

The My Number identity register is missing something it needs

Unless we take stock of the essence of the failure, we will repeat it

A lack of accountability is damaging trust

Part 2

Why bank account registration with My Number still has not happened

GAIN: an initiative to provide a venue for trustworthy data exchange

The risks of depending on GAFAM for identity management

Personal information and data privacy are a state, not a structure

The rationality of installing surveillance cameras should be weighed in the balance

Identity is the capital that drives the Fourth Industrial Revolution

The post A conversation with Tatsuya Kurosaka in IT批評 (IT Criticism) first appeared on @_Nat Zone.

Friday, 08. October 2021

reb00ted

What is the metaverse? A simple definition.

Mark Zuckerberg recently dedicated a long interview to the subject, so the metaverse must be a thing. Promptly, the chatter is exploding. Personally I believe the metaverse is currently underhyped: it will be A Really Big Thing, for consumers, and for businesses, and in particular for commerce and collaboration, far beyond what we can grasp in video games and the likes today. So what is it, this

Mark Zuckerberg recently dedicated a long interview to the subject, so the metaverse must be a thing. Promptly, the chatter is exploding. Personally I believe the metaverse is currently underhyped: it will be A Really Big Thing, for consumers, and for businesses, and in particular for commerce and collaboration, far beyond what we can grasp in video games and the likes today.

So what is it, this metaverse thing? I want to share my definition, because I think it is simple, and useful.

It’s best expressed in a two-by-two table:

One axis: where it is (in physical or virtual space). The other axis: how you access it (physically vs. virtually).

The virtual world, accessed physically: Augmented Reality
The virtual world, accessed virtually: today's internet and future virtual worlds
The physical world, accessed physically: our ancestors exclusively lived here
The physical world, accessed virtually: the Internet of Things

By way of explanation:

There is physical space – the physical world around us – and virtual space, the space of pure information, that only exists on computers.

Our ancestors, and we so far, have mostly been interacting with physical space by touching it physically. But in recent decades, we have learned to access it from the information sphere, and that’s usually described as the Internet of Things: if you run an app to control your lights, or open your garage door, that’s what I’m talking about.

So far, we have mostly interacted with virtual space through special-purpose devices that form the gateway to the virtual space: first computers, now phones and in the future: headsets. I call this accessing virtual space virtually.

And when we don our smart glasses, and wave our arms, we interact with virtual space from physical space, which is the last quadrant.

In my definition, those four quadrants together form the metaverse: the metaverse is a superset of both meatspace and cyberspace, so to speak.
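Restating the two-by-two as a toy data structure (purely illustrative; the names are mine, not part of any spec):

    # Toy restatement of the definition: where something is x how you access it.
    from enum import Enum

    class Space(Enum):
        PHYSICAL = "physical"
        VIRTUAL = "virtual"

    QUADRANTS = {
        (Space.VIRTUAL,  Space.PHYSICAL): "Augmented Reality",
        (Space.VIRTUAL,  Space.VIRTUAL):  "today's internet and future virtual worlds",
        (Space.PHYSICAL, Space.PHYSICAL): "our ancestors exclusively lived here",
        (Space.PHYSICAL, Space.VIRTUAL):  "the Internet of Things",
    }

    def quadrant(where: Space, accessed: Space) -> str:
        # The metaverse, in this definition, is the union of all four quadrants.
        return QUADRANTS[(where, accessed)]

    print(quadrant(Space.VIRTUAL, Space.PHYSICAL))   # Augmented Reality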

This definition has been quite helpful for me to understand what various projects are working on.

Thursday, 07. October 2021

Werdmüller on Medium

Productivity score

You’ve got to work, bitch. Continue reading on Medium »

You’ve got to work, bitch.

Continue reading on Medium »


MyDigitalFootprint

What are we asking the questions for?

What are we asking the questions for? This link gives you access to all the articles and archives for </Hello CDO> This article unpacks questions and framing as I tend to focus on the conflicts, tensions, and compromises that face any CDO in the first 100 days — ranging from the effects of a poor job description to how a company’s culture means that data-led decisions are not decisions.
What are we asking the questions for?

This link gives you access to all the articles and archives for </Hello CDO>

This article unpacks questions and framing as I tend to focus on the conflicts, tensions, and compromises that face any CDO in the first 100 days — ranging from the effects of a poor job description to how a company’s culture means that data-led decisions are not decisions.




I love this TED talk from Dana Kanze at LSE. Dana's talk builds on the research of Tory Higgins, who is credited with creating the social theory of "Regulatory Focus". This is a good summary if you have not run into it before.

Essentially, the idea behind Regulatory Focus is to explore motivations and routes to getting the outcome you want. The context in this article is how the framing of questions creates biased outcomes. One framing in Regulatory Focus centres on a "promotion focus", which looks for gain and can be translated as seeking hope, advancement and accomplishment. The counter is a "prevention focus", which centres on losses and looks for safety, responsibility and security.

In Dana's research, which is the basis for her talk about why women get less venture funding, she categorises "promotion" questions as ones that focus on GAIN. When talking about customers, they seek data about acquisition; on income, data to confirm sales; on the market, its size; on the balance sheet, assets; on projections, growth; and on strategy, the vision.

Dana's research shows that "prevention" questions are framed around LOSSES: a focus on customers seeks data on retention, income questions look for data on margin, market questions focus on shape, balance sheet questions centre on liabilities, projection questions want to confirm stability, and strategy questions look for execution capability.

Dana collected this data as part of a noble and useful study of why women get less funding from venture capital: investors tend to be biased and ask women prevention questions, framed away from gain and towards losses and downside risk. However, we should reflect on this data and research for a little longer.

Does your executive team tend to focus on promotion or prevention questions? Do different executive roles tend to frame their questions based on gain (upside) or loss prevention (risk)? Is the person leading the questions critical to the decision-making unit, and will their questions frame promotion or prevention for the company? Whilst a rounded team should ask both, does it?

As the CDO, our role is to find "all the data", not data to fit a question. Data that shows how your company frames questions for prevention or promotion is just data, but it will help you as a team make better decisions.

A specific challenge in the first 100 days is to determine what questions we, as a leadership team, ask ourselves and how our framing is biasing our choices and decisions.  We are unlikely to like this data and how power aligns with those who prevent or promote a proposal or recommendation.  

Based on listening to questions asked in meetings and in communications, I have been building an ontology of questions and categorising them for a while now. I am interested in the absolute bias, the bias towards people, the bias towards project types, and how the bias changes depending on who is leading the project and the questions. Based on a sample size that is not statistically significant, I believe that Dana's work has far wider implications than just venture funding.
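By way of a rough illustration only (the keyword lists are my own guesses, not the ontology described above), a first pass at categorising questions might look like this:

    # Hypothetical sketch: tag questions as promotion- or prevention-framed using
    # the gain/loss keywords from Dana Kanze's categories quoted above.
    PROMOTION = {"acquisition", "sales", "size", "assets", "growth", "vision"}
    PREVENTION = {"retention", "margin", "shape", "liability", "stability", "execution"}

    def frame_of(question: str) -> str:
        words = {w.strip("?,.").lower() for w in question.split()}
        if words & PROMOTION:
            return "promotion (gain)"
        if words & PREVENTION:
            return "prevention (loss)"
        return "unclassified"

    print(frame_of("What is your customer acquisition cost?"))  # promotion (gain)
    print(frame_of("How will you protect margin next year?"))   # prevention (loss)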

---

Whilst our ongoing agile iteration into information beings is never-ending, there are the first 100 days in the new role. But what to focus on? Well, that rose-tinted period of conflicting priorities is what </Hello, CDO!> is all about. Maintaining sanity when all else has been lost to untested data assumptions is a different problem entirely.

Wednesday, 06. October 2021

Mike Jones: self-issued

Server-contributed nonces added to OAuth DPoP

The latest version of the “OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP)” specification adds an option for servers to supply a nonce value to be included in the DPoP proof. Both authorization servers and resource servers can provide nonce values to clients. As described in the updated Security Considerations, the nonce prevents […]

The latest version of the “OAuth 2.0 Demonstration of Proof-of-Possession at the Application Layer (DPoP)” specification adds an option for servers to supply a nonce value to be included in the DPoP proof. Both authorization servers and resource servers can provide nonce values to clients.

As described in the updated Security Considerations, the nonce prevents a malicious party in control of the client (who might be a legitimate end-user) from pre-generating DPoP proofs to be used in the future and exfiltrating them to a machine without the DPoP private key. When server-provided nonces are used, actual possession of the proof-of-possession key is being demonstrated — not just possession of a DPoP proof.
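As a rough, non-normative illustration, the claims of a DPoP proof that echoes a server-supplied nonce look something like this (all values are placeholders):

    # Illustrative only: DPoP proof JWT claims including a server-provided nonce.
    dpop_proof_claims = {
        "jti": "e1j3V_bKic8-LAEB",                  # unique identifier for this proof
        "htm": "POST",                              # HTTP method of the request it covers
        "htu": "https://server.example.com/token",  # HTTP URI of the request it covers
        "iat": 1633460760,
        "nonce": "eyJ7S_zG.eyJH0-Z.HX4w-7v",        # the value the server supplied earlier
    }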

The specification is available at:

https://tools.ietf.org/id/draft-ietf-oauth-dpop-04.html

Werdmüller on Medium

The mirror of infinite worlds

We can reach into the multiverse and find your happiest self. Continue reading on Medium »

We can reach into the multiverse and find your happiest self.

Continue reading on Medium »

Tuesday, 05. October 2021

Heather Vescent

Three Governments enabling digital identity interoperability

Photo by Andrew Coop on Unsplash Since 2016, a growing number of digital identity experts have worked together to create privacy preserving, digitally native, trusted credentials, which enable the secure sharing of data from decentralized locations. U.S., Canadian, and European Governments see how this technology can provide superior data protection for its citizens, while enabling global data
Photo by Andrew Coop on Unsplash

Since 2016, a growing number of digital identity experts have worked together to create privacy preserving, digitally native, trusted credentials, which enable the secure sharing of data from decentralized locations. U.S., Canadian, and European Governments see how this technology can provide superior data protection for its citizens, while enabling global data sharing — that’s why they have invested more than $16 million USD into this space over the past several years.

On September 15, 2021, I moderated a panel with representatives from the United States Government, the Canadian Government, and the European Commission. Below is an edited excerpt from the panel that included:

Anil John, Technical Director of the Silicon Valley Innovation Program, which has invested $8 million in R&D, proofs of concept, product development, refinement, and a digital wallet UI competition.
Tim Bouma, Senior Policy Analyst for identity management at the Treasury Board Secretariat of the Government of Canada. The User-centric Verifiable Digital Credentials Challenge has awarded $4 million CAD in two phases.
Olivier Bringer, Head of the Next-Generation Internet at the European Commission; under his program, he has awarded about €6 million through three open calls for eID and SSI solutions.
Heather Vescent, Co-Chair of the Credentials Community Group, which incubates many of the open standards in this space, and an author of The Comprehensive Guide to Self Sovereign Identity.

Policy and Technology

Heather Vescent: Policy tends to be squishy while technology and especially technology standards must be precise. What challenges do you face implementing policy decisions into technology?

Anil John: Our two primary work streams come out of the oldest parts of the U.S. government: U.S. Citizenship and Immigration Service, and U.S. Customs and Border Protection.

Immigration credentials must be available to anyone regardless of their technical savvy or infrastructure availability. The USCIS team is focused on leaving nobody behind, and do not have the luxury of pivoting to a digital only model. Technology has to provide credentials in electronic format, as well as on paper — each with a high degree of verification validation.

U.S. Customs has to deal with every single entity that is shipping goods into the U.S. We don’t have any choice but to be globally interoperable, because while we may be the largest customs organization on the planet, we do not want to mandate a single platform or technology stack. Interoperability is critical so that everybody has a choice in the technology that they are using.

It is easy to get pushback about why we are doing this long public process, rather than putting money into a vendor and buying their technology. One of the reasons we made the decision to work in public, under the remit of a global standards organization, was to ensure that we were not repeating past mistakes where we were locked into particular vendors or platforms. In the past, we were locked into proprietary APIs, with high switching costs, left with the care and feeding of legacy systems that were uniquely government-centric. There are benefits to developing technology in public, from both a solution-choice and a public-interest perspective.

Tim Bouma: The challenge is that you have to deal with short term exigencies, and combine that with a long-term vision to come up with requirements that are fairly timeless. Because once you develop the requirements, they have to last for a decade or more. This forces you to think of what the timeless requirements might be. In order to do that, you have to understand the technology very deeply. You have to understand the abstraction, so you can come up with language that can serve the test of time.

Olivier Bringer: I agree that it can be a challenge to implement policy choices into technology; but it can also be an opportunity. Innovators have not waited for the European General Data Protection Regulation before developing privacy preserving technologies. This is an opportunity for companies, an opportunity for administration, an opportunity for innovators to develop new technologies and new business models.

“Policy is an opportunity to implement fundamental rights into technology.” — Olivier Bringer

In NGI we try to take the policy development, the regulations that we have, as an opportunity to implement our fundamental rights, an opportunity to implement the law into technology. Our program supports innovators, the adoption of their technologies, their solutions, and their integration into standards. We do that firstly in Europe, but our ambition is to link to the global environment and work in cooperation with others.

A Benefit to Citizens

Vescent: What would you tell your citizens is the most important reason to invest in this infrastructure and businesses that use it?

Bringer: First that we are funding technologies, which are, this is our motto, human centric. So returning to your first question, going beyond pure policy development, we think it’s really important to develop the technology. So next to the regulation that we put in place in the field of electronic identity, in the field of AI, in the field of data protection, it’s important to fund the technologies that will implement these policies and regulation. This is really what we try to do when we build an internet that’s more trustworthy, that gives more control to the users in terms of the data they disclose, in terms of control of their identity online, in terms of including everyone in this increasingly important digital environment. It is technology geared towards the citizen.

John: I’m going to answer for myself, rather than speak for my organization. Having said that, I think what I would tell [citizens] would be that we want to make your life more secure, and more privacy-respecting, without leaving anybody behind. These are the first technologies that have come along that give us a hope in ensuring that we’re not trapped by nefarious or corporate or money interests. That there is a choice in the marketplace in what is available to our citizens in how they access it. And we absolutely are not ignoring the people who may not have a level of comfort with digital technologies and leaving them behind, but ensuring that there is a clear bridge with this technology to what their level of comfort is.

Bouma: We are moving to a notion of a digital economy, and it’s more than some slogan. We are building a digital infrastructure that is becoming a critical infrastructure. It is important that we understand that we develop the capabilities that ensure that the citizens actually feel safe. This is as fundamental as having safe drinking water and a regulated electricity supply. So now we need to start thinking about the digital capabilities, verifiable credentials included, as part of the national and international infrastructure.

We’re doing proof of concepts and pilots on national digital infrastructure, to understand what it means, to use a Canadian metaphor, create a “set of rails” that goes across the country. What would that look like? What are the capabilities? At the end of the day, we need to build services that can be trusted by Canadians and everyone. And there’s a lot of engineering and a lot of policy work that has to go into that.

Most Surprising Lesson

Vescent: What’s the most surprising thing you’ve learned since the inception of your investment programs?

Bouma: I had a major shift in perspective. There are other technical ecosystems we have to take into account, like the mobile driver’s license, the digital travel credentials, and others. So, we have to figure out how to incorporate all those requirements. The mantra I’ve been using is, we’ve got these different technical ecosystems but we have to focus on the human being — the point of integration is at the digital wallet. So that’s reframed my policy thinking — there’s a multiplicity of technical ecosystems that we have to account from a policy point of view.

Bringer: I’m impressed by the quality of the innovators. We have people who are very good in technology, who understand the political challenges and the policy context in which they intervene, who are able to make excellent contributions to our own policy, and who are really dedicated to our human centric vision.

John: I’ll give you one positive one, one negative. On the positive side, I am happy there is a community of people that understands there is value in working together to ensure the shared infrastructure that we are all using has a common foundation of security, privacy, and interoperability — that it is not a zero-sum game. They can compete on top of a common foundation.

On the negative side, I’m fascinated by the shenanigans being pulled by people who use the theater of interoperability in order to peddle proprietary solutions. Fortunately, I think a lot of the public sector entities are getting more educated so that they can see through the interoperability theater.

Curious for more? Watch the full panel. Or click through for a playlist with technology demos showing interoperability.

Learn more by the author

Heather Vescent is a co-chair of the W3C Credentials Community Group, and an author of the Comprehensive Guide to Self-Sovereign Identity. Curious about the Credentials Community Group or want to get started with decentralized identity technology? Get in touch for a personal introduction.

Three Governments enabling digital identity interoperability was originally published in In Present Tense on Medium, where people are continuing the conversation by highlighting and responding to this story.


Jon Udell

My own personal AWS S3 bucket

I’ve just rediscovered two digital assets that I’d forgotten about. 1. The Reddit username judell, which I created in 2005 and never used. When you visit the page it says “hmm… u/judell hasn’t posted anything” but also reports, in my Trophy Case, that I belong to the 15-year club. 2. The Amazon AWS S3 bucket … Continue reading My own personal AWS S3 bucket

I’ve just rediscovered two digital assets that I’d forgotten about.

1. The Reddit username judell, which I created in 2005 and never used. When you visit the page it says “hmm… u/judell hasn’t posted anything” but also reports, in my Trophy Case, that I belong to the 15-year club.

2. The Amazon AWS S3 bucket named simply jon, which I created in 2006 for an InfoWorld blog post and companion column about the birth of Amazon Web Services. As Wikipedia’s timeline shows, AWS started in March of that year.

Care to guess the odds that I could still access both of these assets after leaving them in limbo for 15 years?

Spoiler alert: it was a coin flip.

I’ve had no luck with Reddit so far. The email account I signed up with no longer exists. The support folks kindly switched me to a current email but it’s somehow linked to Educational_Elk_7869 not to judell. I guess we may still get it sorted but the point is that I was not at all surprised by this loss of continuity. I’ve lost control of all kinds of digital assets over the years, including the above-cited InfoWorld article which only Wayback (thank you as always!) now remembers.

When I turned my attention to AWS S3 I was dreading a similar outcome. I’d gone to Microsoft not long after I made that AWS developer account; my early cloud adventures were all in Azure; could I still access those long-dormant AWS resources? Happily: yes.

Here’s the backstory from that 2006 blog post:

Naming

The name of the bucket is jon. The bucket namespace is global which means that as long as jon is owned by my S3 developer account, nobody else can use that name. Will this lead to a namespace land grab? We’ll see. Meanwhile, I’ve got mine, and although I may never again top Jon Stewart as Google’s #1 Jon, his people are going to have to talk to my people if they want my Amazon bucket.

I’m not holding my breath waiting for an offer. Bucket names never mattered in the way domain names do. Still, I would love to be pleasantly surprised!

My newfound interest in AWS is, of course, because Steampipe wraps SQL around a whole bunch of AWS APIs including the one for S3 buckets. So, for example, when exactly did I create that bucket? Of course I can log into the AWS console and click my way to the answer. But I’m all about SQL lately so instead I can do this.

> select name, arn, creation_date from aws_s3_bucket
+-------+--------------------+---------------------+
| name  | arn                | creation_date       |
+-------+--------------------+---------------------+
| jon   | arn:aws:s3:::jon   | 2006-03-16 08:16:12 |
| luann | arn:aws:s3:::luann | 2007-04-26 14:47:45 |
+-------+--------------------+---------------------+

Oh, and there’s the other one I made for Luann the following year. These are pretty cool ARNs (Amazon Resource Names)! I should probably do something with them; the names you can get nowadays are more like Educational_Elk_7869.

Anyway I’m about to learn a great deal about the many AWS APIs that Steampipe can query, check for policy compliance, and join with the APIs of other services. Meanwhile it’s fun to recall that I wrote one of the first reviews of the inaugural AWS product and, in the process, laid claim to some very special S3 bucket names.

Monday, 04. October 2021

Doc Searls Weblog

Where the Intention Economy Beats the Attention Economy

There’s an economic theory here: Free customers are more valuable than captive ones—to themselves, to the companies they deal with, and to the marketplace. If that’s true, the intention economy will prove it. If not, we’ll stay stuck in the attention economy, where the belief that captive customers are more valuable than free ones prevails. Let […]

There’s an economic theory here: Free customers are more valuable than captive ones—to themselves, to the companies they deal with, and to the marketplace. If that’s true, the intention economy will prove it. If not, we’ll stay stuck in the attention economy, where the belief that captive customers are more valuable than free ones prevails.

Let me explain.

The attention economy is not native to human attention. It’s native to businesses that  seek to grab and manipulate buyers’ attention. This includes the businesses themselves and their agents. Both see human attention as a “resource” as passive and ready for extraction as oil and coal. The primary actors in this economy—purveyors and customers of marketing and advertising services—typically talk about human beings not only as mere “users” and “consumers,” but as “targets” to “acquire,” “manage,” “control” and “lock in.” They are also oblivious to the irony that this is the same language used by those who own cattle and slaves.

While attention-grabbing has been around for as long as we’ve had yelling, in our digital age the fields of practice (abbreviated martech and adtech) have become so vast and varied that nobody (really, nobody) can get their head around everything that’s going on in them. (Examples of attempts are here, here and here.)

One thing we know for sure is that martech and adtech rationalize taking advantage of absent personal privacy tech in the hands of their targets. What we need there are the digital equivalents of the privacy tech we call clothing and shelter in the physical world. We also need means to signal our privacy preferences, to obtain agreements to those, and to audit compliance and resolve disputes. As it stands in the attention economy, privacy is a weak promise made separately by websites and services that are highly incentivised not to provide it. Tracking prophylaxis in browsers is some help, but it works differently for every browser and it’s hard to tell what’s actually going on.

Another thing we know for sure is that the attention economy is thick with fraud, malware, and worse. For a view of how much worse, look at any adtech-supported website through PageXray and see the hundreds or thousands of ways the site and its invisible partners are trying to track you. (For example, here’s what Smithsonian Magazine‘s site does.)

We also know that lawmaking to stop adtech’s harms (e.g. GDPR and CCPA) has thus far mostly caused inconvenience for you and me (how many “consent” notices have interrupted your web surfing today?)—while creating a vast new industry devoted to making tracking as easy as legally possible. Look up GDPR+compliance and you’ll get way over 100 million results. Almost all of those will be for companies selling other companies ways to obey the letter of privacy law while violating its spirit.

Yet all that bad shit is also a red herring, misdirecting attention away from the inefficiencies of an economy that depends on unwelcome surveillance and algorithmic guesswork about what people might want.

Think about this: even if you apply all the machine learning and artificial intelligence in the world to all the personal data that might be harvested, you still can’t beat what’s possible when the targets of that surveillance have their own ways to contact and inform sellers of what they actually want and don’t want, plus ways to form genuine relationships and express genuine (rather than coerced) loyalty, and to do all of that at scale.

We don’t have that yet. But when we do, it will be an intention economy. Here are the opening paragraphs of The Intention Economy: When Customers Take Charge (Harvard Business Review Press, 2012):

This book stands with the customer. This is out of necessity, not sympathy. Over the coming years, customers will be emancipated from systems built to control them. They will become free and independent actors in the marketplace, equipped to tell vendors what they want, how they want it, where and when—even how much they’d like to pay—outside of any vendor’s system of customer control. Customers will be able to form and break relationships with vendors, on customers’ own terms, and not just on the take-it-or-leave-it terms that have been pro forma since Industry won the Industrial Revolution.

Customer power will be personal, not just collective.  Each customer will come to market equipped with his or her own means for collecting and storing personal data, expressing demand, making choices, setting preferences, proffering terms of engagement, offering payments and participating in relationships—whether those relationships are shallow or deep, and whether they last for moments or years. Those means will be standardized. No vendor will control them.

Demand will no longer be expressed only in the forms of cash, collective appetites, or the inferences of crunched data over which the individual has little or no control. Demand will be personal. This means customers will be in charge of personal information they share with all parties, including vendors.

Customers will have their own means for storing and sharing their own data, and their own tools for engaging with vendors and other parties.  With these tools customers will run their own loyalty programs—ones in which vendors will be the members. Customers will no longer need to carry around vendor-issued loyalty cards and key tags. This means vendors’ loyalty programs will be based on genuine loyalty by customers, and will benefit from a far greater range of information than tracking customer behavior alone can provide.

Thus relationship management will go both ways. Just as vendors today are able to manage relationships with customers and third parties, customers tomorrow will be able to manage relationships with vendors and fourth parties, which are companies that serve as agents of customer demand, from the customer’s side of the marketplace.

Relationships between customers and vendors will be voluntary and genuine, with loyalty anchored in mutual respect and concern, rather than coercion. So, rather than “targeting,” “capturing,” “acquiring,” “managing,” “locking in” and “owning” customers, as if they were slaves or cattle, vendors will earn the respect of customers who are now free to bring far more to the market’s table than the old vendor-based systems ever contemplated, much less allowed.

Likewise, rather than guessing what might get the attention of consumers—or what might “drive” them like cattle—vendors will respond to actual intentions of customers. Once customers’ expressions of intent become abundant and clear, the range of economic interplay between supply and demand will widen, and its sum will increase. The result we will call the Intention Economy.

This new economy will outperform the Attention Economy that has shaped marketing and sales since the dawn of advertising. Customer intentions, well-expressed and understood, will improve marketing and sales, because both will work with better information, and both will be spared the cost and effort wasted on guesses about what customers might want, and flooding media with messages that miss their marks. Advertising will also improve.

The volume, variety and relevance of information coming from customers in the Intention Economy will strip the gears of systems built for controlling customer behavior, or for limiting customer input. The quality of that information will also obsolete or re-purpose the guesswork mills of marketing, fed by crumb-trails of data shed by customers’ mobile gear and Web browsers. “Mining” of customer data will still be useful to vendors, though less so than intention-based data provided directly by customers.

In economic terms, there will be high opportunity costs for vendors that ignore useful signaling coming from customers. There will also be high opportunity gains for companies that take advantage of growing customer independence and empowerment.

But this hasn’t happened yet. Why?

Let’s start with supply and demand, which is roughly about price. Wikipedia: “the relationship between the price of a given good or product and the willingness of people to either buy or sell it.” But that wasn’t the original idea. “Supply and demand” was first expressed as “demand and supply” by Sir James Denham-Steuart in An Inquiry into the Principles of Political Oeconomy, written in 1767. To Sir James, demand and supply wasn’t about price. Specifically, “it must constantly appear reciprocal. If I demand a pair of shoes, the shoemaker either demands money or something else for his own use.” Also, “The nature of demand is to encourage industry.”

Nine years later, in The Wealth of Nations, Adam Smith, a more visible bulb in the Scottish Enlightenment, wrote, “The real and effectual discipline which is exercised over a workman is that of his customers. It is the fear of losing their employment which restrains his frauds and corrects his negligence.” Again, nothing about price.

But neither of those guys lived to see the industrial age take off. When that happened, demand became an effect of supply, rather than a cause of it. Supply came to run whole markets on a massive scale, with makers and distributors of goods able to serve countless customers in parallel. The industrial age also ubiquitized standard-form contracts of adhesion binding all customers to one supplier with a single “agreement.”

But, had Sir James and Adam lived into the current millennium, they would have seen that it is now possible, thanks to digital technologies and the Internet, for customers to achieve scale across many companies, with efficiencies not imaginable in the pre-digital industrial age.

For example, it should be possible for a customer to express her intentions—say, “I need a stroller for twins downtown this afternoon”—to whole markets, but without being trapped inside any one company’s walled garden. In other words, not only inside Amazon, eBay or Craigslist. This is called intentcasting, and among its virtues is what Kim Cameron calls “minimum disclosure for constrained purposes” to “justifiable parties” through a choice among a “plurality of operators.”

Likewise, there is no reason why websites and services can’t agree to your privacy policy, and your terms of engagement. In legal terms, you should be able to operate as the first party, and to proffer your own terms, to which sites and services can agree (or, as privacy laws now say, consent) as second parties. That this is barely thinkable is a legacy of a time that has sadly not yet left us: one in which only companies can enjoy that kind of scale. Yet it would clearly be a convenience to have privacy as normalized in the online world as it is in the offline one. But we’re slowly getting there; for example with Customer Commons’ P2B1, aka #NoStalking term, which readers can proffer and publishers can agree to. It says “Just give me ads not based on tracking me.” Also with the IEEE’s P7012 Standard for Machine Readable Personal Privacy Terms working group.

Same with subscriptions. A person should be able to keep track of all her regular payments for subscription services, to keep track of new and better deals as they come along, to express to service providers her intentions toward those new deals, and to cancel or unsubscribe. There are lots of tools for this today, for example Truebill, Bobby, Money Dashboard, Mint, Subscript Me, BillTracker Pro, Trim, Subby, Card Due, Sift, SubMan, and Subscript Me. There are also subscription management systems offered by Paypal, Amazon, Apple and Google (e.g. with Google Sheets and Google Doc templates). But all of them to one degree or another are based more on the felt need by those suppliers for customer captivity than for customer independence.

As Customer Commons unpacks it here, there are many largely or entirely empty market spaces that are wide open for free and independent customers: identity, shopping (e.g. with shopping carts of your own to take from site to site), loyalty (of the genuine kind), property ownership (the real Internet of Things), and payments, for example.

It is possible to fill all those spaces if we have the capacity to—as Sir James put it—encourage industry, restrain fraud and correct negligence. While there is some progress in some of those areas, the going is still slow on the global scale. After all, The Intention Economy is nine years old and we still don’t have it yet. Is it just not possible, or are we starting in the wrong places?

I think it’s the latter.

Way back in 1995, when the Internet first showed up on both of our desktops, my wife Joyce said, “The sweet spot of the Internet isn’t global. It’s local.” That was the gist of my TEDx Santa Barbara talk in 2018. It’s also why Joyce and I are now in Bloomington, Indiana, working with the Ostrom Workshop at Indiana University on deploying a new way for demand and supply to inform each other and get business rolling—and to start locally. It’s called the Byway, and it works outside of the old supply-controlled industrial model. Here’s an FAQ. Please feel free to add questions in the comments here.

The title image is by the great Hugh Macleod, and was commissioned in 2004 for a startup that he and I both served and that is now long gone.



Werdmüller on Medium

The dark flood


What I did when the world went away

Continue reading on Medium »

Sunday, 03. October 2021

Randall Degges

How I Converted a REST API I Don't Control to GraphQL

I’ve been building and working with REST APIs for many years now. Recently, however, I’ve been spending more and more of my time working with (and building) GraphQL-based APIs. While GraphQL has generally made my life easier, especially as I’ve been building and consuming more data-heavy APIs, there is still one extremely annoying problem I’ve run into over and over again: lack of GraphQL s

I’ve been building and working with REST APIs for many years now. Recently, however, I’ve been spending more and more of my time working with (and building) GraphQL-based APIs.

While GraphQL has generally made my life easier, especially as I’ve been building and consuming more data-heavy APIs, there is still one extremely annoying problem I’ve run into over and over again: lack of GraphQL support for third-party APIs I need to consume.

If I’m trying to use a third-party API service that doesn’t have a GraphQL endpoint, I either need to:

1. Download, install, configure, and learn how to use their custom REST API clients (which takes a lot of time and means my codebase is now a bit cluttered), or

2. Build my own GraphQL proxy for their service… But this is a big task. I’ve got to read through all their REST API docs and carefully define GraphQL schemas for everything, learning all the ins and outs of their API as I go.

In short: it’s a lot of work either way, but if I really want to use GraphQL everywhere I have to work for it.

In an ideal world, every API service would have a GraphQL endpoint, this way I could just use a single GraphQL library to query all the API services I need to talk to.

Luckily, one of my favorite developer tools, StepZen (disclaimer: I advise them), has made this problem a lot less painful.

What StepZen Does

StepZen is a platform that lets you host a GraphQL endpoint to use in your applications. But, more importantly, they’ve designed a schema system that lets you import (or build your own) GraphQL wrappers for just about anything: REST APIs, databases, etc. It’s really neat!

Using the StepZen CLI, for example, I can create a new GraphQL endpoint that allows me to talk with the Airtable and FedEx APIs, neither of which support GraphQL natively. The beautiful thing is, I don’t even need to write a GraphQL wrapper for this myself since someone else already did!

Here’s what this looks like (assuming you’ve already got the StepZen CLI installed and initialized):

$ stepzen import airtable
$ stepzen import fedex
$ stepzen start

Using the import command, StepZen will download the publicly available GraphQL schemas for these services (Airtable and FedEx) to your local directory. The start command then launches a GraphQL explorer on localhost, port 5000, which you can use to interactively query the Airtable and FedEx APIs using GraphQL! Pretty neat!
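
Once stepzen start is running, the endpoint behaves like any other GraphQL API. The sketch below is only a rough illustration: the local URL path and the airtable_records field are hypothetical placeholders, since the real path and field names depend on your StepZen folder name and the imported schema (use whatever stepzen start prints).

# Rough illustration: POST a GraphQL query to the locally running StepZen endpoint.
# The URL path and the "airtable_records" field are hypothetical placeholders.
import requests

query = """
{
  airtable_records {
    id
  }
}
"""

resp = requests.post(
    "http://localhost:5000/api/my-graphql-endpoint",  # placeholder; check the stepzen start output
    json={"query": query},
)
print(resp.json())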

How to Access Public GraphQL Schemas

So, let’s say you want to use GraphQL to query an API service that doesn’t support GraphQL. If you’re using StepZen, you can find all the publicly available (official) schemas on the StepZen Schemas repo on GitHub. This repo is namespaced, so if you see a folder in the project, you can use the stepzen import command on it directly. At the time of writing, there are 24 publicly available GraphQL schemas you can instantly use.

StepZen currently has support for lots of popular developer services: Disqus, GitHub, Shopify, etc.

You can, of course, create your own GraphQL schemas as well.

The Problem I Ran Into with GraphQL

Several months ago I was working on a simple user-facing web app. The entire backend of the app was using GraphQL and I was trying to keep the codebase as pure as possible, which meant not using any REST APIs directly.

In all my years of building web apps, I’ve always used a REST API at some point. So this was a bit of an experiment to see whether or not I could build my app without cluttering my codebase with REST API clients or requests.

As I was working on the app, I went to use one of my favorite free API services, Random User. If you haven’t heard of it before, it’s a publicly available REST API you can hit to generate realistic-looking fake users. It’s incredible for building a development environment, using seed data in an app, or creating real-world-looking MVPs. I use it all the time.
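
For context, hitting the API directly is a one-liner. The sketch below (Python with the requests library) shows roughly what the JSON response contains; the field names follow the API's documented shape, but treat it as an illustration rather than a contract.

# Sketch: fetch a few fake users straight from the public Random User REST API.
import requests

resp = requests.get("https://randomuser.me/api/", params={"results": 3})
resp.raise_for_status()

for user in resp.json()["results"]:
    name = user["name"]  # {"title": ..., "first": ..., "last": ...}
    print(name["first"], name["last"], user["email"])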

I knew going into this process that the Random User API didn’t have a GraphQL endpoint, so I figured I’d spend some time creating one for them.

How to Convert a REST API to GraphQL

My goal, as I mentioned above, was to build a GraphQL endpoint for the Random User REST API.

In order to make this work, what you have to do is translate your REST API so the StepZen service can understand which endpoints you are hitting and what types of inputs and outputs they require.

Once you’ve defined all this, StepZen will be able to create a GraphQL endpoint for your app. When your app queries the StepZen GraphQL endpoint, StepZen will translate your GraphQL request into the proper REST API call, execute the request, then translate the response and send it back to your app in standard GraphQL format.
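
Conceptually, the work being automated for each field looks something like the hand-rolled resolver below. To be clear, this is not StepZen's code or API, just a sketch of the argument-to-REST-call-to-response mapping that your schema definitions let StepZen generate for you; the field and key names are hypothetical.

# Conceptual sketch only, not StepZen internals: what a GraphQL-to-REST
# translation layer does for one hypothetical field, randomUsers(results: Int).
import requests


def resolve_random_users(results: int = 1):
    # 1. Translate the GraphQL arguments into the equivalent REST request.
    resp = requests.get("https://randomuser.me/api/", params={"results": results})
    resp.raise_for_status()

    # 2. Reshape the REST payload into the structure the GraphQL type promises.
    return [
        {"firstName": u["name"]["first"], "lastName": u["name"]["last"], "email": u["email"]}
        for u in resp.json()["results"]
    ]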

Here’s how you can convert any REST API to GraphQL using StepZen, step-by-step.

Step 1: Read Through the REST API Docs and Identify Endpoints to Convert

The first part of converting any REST API to GraphQL is to fully understand what endpoints you want to convert from REST to GraphQL.

In my case, the Random User API only has a single endpoint, so this isn’t complicated. But, let’s say you’re converting a large, popular API like Twilio that has many endpoints… This can be much more difficult.

To make things simpler, here’s what I recommend: when converting a REST API to GraphQL, identify the endpoints you actually need and ignore the rest. Trying to build an entire abstraction layer for a massive API like Twilio would be difficult, so instead, focus only on the key endpoints you need to use. In the future, if you need to use additional endpoints, you can always add support for them.

NOTE: This is especially true if you’re using StepZen. It’s incredibly easy to support additional endpoints in StepZen once you have a basic schema going, so don’t feel bad about focusing on the important ones and ignoring the rest.

Following this just-in-time pattern will help save you time and get you back to working on the important stuff.

Step 2: Understand the REST Endpoint’s Schema