Last Update 5:41 AM June 24, 2021 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Thursday, 24. June 2021

John Philpin : Lifestream

Florida is tied with Texas for having the most people facing

Florida is tied with Texas for having the most people facing charges linked to the Capitol riot.

Those two states are also top for ‘net population growth’. That is 7 and 5 people (respectively) arriving in each of those states for every 1 that leaves.


Ben Werdmüller

Five years on, I’m still horribly angry ...

Five years on, I’m still horribly angry about Brexit, and still taking it personally that I’m not legally allowed to live in the country where I grew up. On the plus side, California has much better weather.


John Philpin : Lifestream

Uberall raises $115M, acquires MomentFeed to scale up its lo

Uberall raises $115M, acquires MomentFeed to scale up its location marketing services.

They might think it clever to play off the ‘success’ of Uber … but history shows that ‘Uber Alles’ might not be a good line to use.


”My considered opinion, after long reflection, is that whi

”My considered opinion, after long reflection, is that whilst in many places the effect of Ulysses on the reader undoubtedly is somewhat emetic, nowhere does it tend to be an aphrodisiac. Ulysses may, therefore, be admitted to the United States.”

John M. Woolsey

via


This Summer, Beer Goes Soft-Serve Wrong - at so many leve

This Summer, Beer Goes Soft-Serve

Wrong - at so many levels!


How Media Consumption Evolved Throughout COVID-19 Another

How Media Consumption Evolved Throughout COVID-19

Another example of why I hate generational categories.

It would also be useful to flip it so that I can see the media category - and then who is ‘consuming’ it - allowing me to self-manage my generational fit!


Mississippi: The Model Republican State Dana Blankenhorn

If In Doubt, Begin I get it. Sound advice at first pass.

If In Doubt, Begin

I get it. Sound advice at first pass. But I do see downsides.

Wednesday, 23. June 2021

Mike Jones: self-issued

Second Version of FIDO2 Client to Authenticator Protocol (CTAP) Now a Standard

The FIDO Alliance has completed the CTAP 2.1 Specification. This follows the publication of the closely-related second version of the W3C Web Authentication (WebAuthn) specification.

Today’s FIDO Alliance announcement describes the enhancements in the second version as follows:

Enhancements to FIDO standards to accelerate passwordless in the enterprise

The FIDO Alliance has announced enhancements to its FIDO2 specifications, which include several new features that will be helpful for passwordless enterprise deployments and other complex security applications. Both FIDO2 specifications were recently updated by their governing bodies – with the World Wide Web Consortium (W3C) approving WebAuthn Level 2 and FIDO doing the same for CTAP 2.1.

Key to these enhancements is enterprise attestation, which provides enterprise IT with improved management of FIDO authenticators used by employees. Enterprise attestation enables better binding of an authenticator to an account, assists with usage tracking and other management functions including credential and pin management, and biometric enrollment required in the enterprise.

Other updates include support for cross-origin iFrames and Apple attestation, as well as improvements to resident credentials. More details on these and other FIDO specification enhancements are available here.
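
As a rough illustration of what enterprise attestation looks like from a relying party's side, the sketch below requests it through the standard WebAuthn browser API. This is my sketch, not code from the FIDO announcement; the rp, user, and challenge values are placeholders, and a real deployment would also need browser or platform policy that permits enterprise attestation for the origin in question.

// Minimal sketch: a relying party asking the browser for a credential with enterprise
// attestation, one of the WebAuthn Level 2 / CTAP 2.1 additions described above.
// Placeholder values throughout; recent DOM typings are assumed for "enterprise".
async function registerWithEnterpriseAttestation(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // normally issued by the server
      rp: { name: "Example Corp", id: "example.com" },       // placeholder relying party
      user: {
        id: new TextEncoder().encode("employee-1234"),       // placeholder user handle
        name: "alice@example.com",
        displayName: "Alice Example",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      // "enterprise" asks the authenticator for individually identifying attestation;
      // browsers typically honor it only where enterprise policy allows it.
      attestation: "enterprise",
    },
  });
}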


Ben Werdmüller

NGL, having bright blue hair is making me really happy every time I see it.


Do Chance Meetings at the Office Boost Innovation? There’s No Evidence of It

“Remote work, though, can enable ideas to bubble up from people with different backgrounds. Online, people who are not comfortable speaking up in an in-person meeting may feel more able to weigh in. Brainstorming sessions using apps like Slack can surface many more perspectives by including people who wouldn’t have been invited to a meeting, like interns or employees in other departments.”

[Link]


@_Nat Zone

Identiverse: 7 Principles of Digital Being

4:00 a.m., July 1, Japan time (Wednesday, June 30, 1:00pm – 1:30pm MDT)

In 2020, the migration from the physical world to the cyber world advanced at an unprecedented speed. COVID-19 forced that transition. Some say that five to ten years' worth of change took place within a few months. Stay-at-home policies severely limited our ability to communicate in the physical world, making us heavily dependent on cyber communication. In cyberspace, all communication happens through our Digital Being, our digital representation in cyberspace. But because a Digital Being is intangible, it lacks many of the properties of physical existence that we take for granted and that are essential to our “safe lives.” This session gives an overview of the seven principles of Digital Being that are needed to make our digital lives safe and secure.

The post Identiverse: 7 Principles of Digital Being first appeared on @_Nat Zone.

Tuesday, 22. June 2021

Ben Werdmüller

What’s the Difference Between a ‘Borb’ and a ‘Floof’?

“Let us now apply this logic. Borbs as a category heavily intersect with birbs, defined as both are by roundness. But just as every bird is not a birb, every birb is not a borb. Some birds naturally have deep chests and short necks, easily securing their borbness: chickadees, European Robins, and Bearded Tits, the last of which seems to be the poster child for the type. Other clear borbs include pigeons, thrushes, warblers, game birds, small parrots, most owls, and penguins.”

[Link]


Simon Willison

A framework for building Open Graph images

GitHub's new social preview images are generated by a Node.js script that fetches data from their GraphQL API, generates an HTML version of the card and then grabs a PNG snapshot of it using Puppeteer. It takes an average of 280ms to serve an image and generates around 2 million unique images a day. Interestingly, they found that bumping the available RAM from 512MB up to 513MB had a big effect on performance, because Chromium detects devices on 512MB or less and switches some processes from parallel to sequential.
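
The linked post describes GitHub's pipeline rather than publishing its code, so here is a rough sketch of the general HTML-to-PNG approach with Node.js and Puppeteer (my illustration, not GitHub's actual script). The card markup, filenames, and 1200×630 dimensions are illustrative assumptions.

// Sketch only: render an HTML "social card" and capture it as a PNG with Puppeteer.
import puppeteer from "puppeteer";
import { writeFile } from "fs/promises";

async function renderCard(title: string, subtitle: string): Promise<void> {
  // Hypothetical card template; GitHub builds theirs from GraphQL API data.
  const html = `
    <html><body style="margin:0;width:1200px;height:630px;
      display:flex;flex-direction:column;justify-content:center;
      font-family:sans-serif;padding:60px;box-sizing:border-box">
      <h1 style="font-size:64px;margin:0">${title}</h1>
      <p style="font-size:32px;color:#57606a">${subtitle}</p>
    </body></html>`;

  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 }); // common Open Graph card size
  await page.setContent(html, { waitUntil: "networkidle0" });
  const png = await page.screenshot({ type: "png" });
  await browser.close();
  await writeFile("og-card.png", png);
}

renderCard("my-repo", "An example Open Graph card").catch(console.error);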


Ben Werdmüller

Records Show Nearly 900 Secret Service Employees Got COVID

“The records show that of the 881 positive test results recorded between March 1, 2021 and March 9, 2021, the majority, 477, came from employees working as special agents, and 249 were from members of the uniformed division.”

[Link]


Doc Searls Weblog

Dialog with a robot

So I ordered one of these yesterday…

…and got a notice that said,

That was to my building in Manhattan.

I’m on the third floor, so I went down to the front, and saw it wasn’t there. Since the chance that an Amazon package sitting in front of a Manhattan building’s front door will not be stolen is above zero, I wanted to know if I should be worried. So I went to Amazon’s Contact Us page and got this:

Then, after I clicked on the big yellow button, a dialog ensued:

Hi! It’s Amazon’s messaging assistant again.
Amazon eero mesh WiFi system – 2 pack Is this what you need help with?

12:49 PM

Yes, that’s it

12:50 PM

Messaging Assistant | Customer Service

Looks like this item was delivered on Tuesday, Jun 22. Could you confirm what went wrong, so I can help?

12:50 PM

Didn’t get it

12:50 PM

Messaging Assistant | Customer Service

Sorry to hear that. Tell me a little more about what happened.

12:50 PM

Package never arrived

12:50 PM

Messaging Assistant | Customer Service

OK, looks like the delivery status may have been updated too soon. The good news is it should arrive soon. Give us until end of day Wednesday, Jun 23. Really appreciate your patience with this. Is there anything else I can help with?

12:50 PM

No, that’s all

12:50 PM

Messaging Assistant | Customer Service

Thanks for choosing Amazon. I’m here to help whenever you have an issue.

12:50 PM

Messaging Assistant has left. Thanks for contacting Amazon!

This kind of thing has happened before; not just with Amazon, and not just here. In fact, it has happened enough times for me to wonder whether there is a method to the madness.

I mean, is “the delivery status may have been updated too soon” an occurrence that’s common for a reason? Also, might that reason be technical, human, neither or both?

To be fair, this is not a big deal. Progress is how the miraculous becomes mundane, and this is a good example of the mundane miraculous at work. “No sooner said than shipped” is the new norm. So are conversations with robots.

I’d love it if one could give me answers below, but that feature isn’t working right now. So feel free to tweet yours or something. Thanks!


Simon Willison

What I've learned about data recently

Laurie Voss talks about the structure of data teams, based on his experience at npm and more recently Netlify. He suggests that Airflow and dbt are the data world's equivalent of frameworks like Rails: opinionated tools that solve core problems and which mean that you can now hire people who understand how your data pipelines work on their first day on the job.

Via @seldo


Ben Werdmüller

When an Eel Climbs a Ramp to Eat Squid From a Clamp, That’s a Moray

“Moray eels can hunt on land, and footage from a recent study highlights how they accomplish this feat with a sneaky second set of jaws.” Also: perfect headline, well done.

[Link]


@_Nat Zone

Identiverse: Where are we with SIOP and DID?

Wednesday, June 23
7:30am – 8:20am MDT (1:30pm – 2:20pm UTC)

Since last year, DIF and the OIDF have been partnering on work to connect the DID/VC world to the OpenID Connect world. Through this process, the WG has been tackling many of the hard problems that decentralized identity systems face.

In this panel, together with three distinguished panelists, we will discuss the state of the specifications and the latest thinking on these “hard problems.”

Kim Cameron, author of the Laws of Identity, Identity Blog
Kristina Yasuda, Identity Standards Architect, Microsoft
Tobias Looker, Technical Standards Architect, Mattr

In the panel discussion, Kim will first frame the problem at a high level. Next, Kristina, who chairs the SIOP Special Call within the OpenID AB/Connect WG, will describe the current state of the group's work.

And this is where the fun begins. As the moderator, I will work through the “hard questions.” I don't know how the virtual conference technology will hold up, but if possible we may also have David Waite of Ping Identity pose questions.

If you are interested, please come join us.

You can register for this event at the following link: https://pheedloop.com/identiverse2021/site/. Use the discount code “speaker1492” to get 10% off the registration fee.

The post Identiverse: Where are we with SIOP and DID? first appeared on @_Nat Zone.

19th International Conference on Applied Cryptography and Network Security

On 2021-06-22 I gave a keynote at the 19th International Conference on Applied Cryptography and Network Security. The title was “Digital Being.” It is the full version of the talk I will give at Identiverse next week, on June 30.

I am grateful for this opportunity.

The post 19th International Conference on Applied Cryptography and Network Security first appeared on @_Nat Zone.

Simon Willison

GitLab Culture: The phases of remote adaptation

GitLab claim to be "the world's largest all-remote company" - 1300 employees across 65 countries, with not a single physical office. Lots of interesting thinking in this article about different phases a company can go through to become truly remote-first. "Maximally efficient remote environments will do as little work as possible synchronously, instead focusing the valuable moments where two or more people are online at the same time on informal communication and bonding." They also expire their Slack messages after 90 days to force critical project information into documents and issue threads.


Ben Werdmüller

I'm contemplating writing a guide to what ...

I'm contemplating writing a guide to what to expect when a parent is diagnosed with familial pulmonary fibrosis. I'm not a doctor, but there's a lot to subjectively describe. If you were doing this, what platform would you use? Straight website? Something else?


Kyle Den Hartog

Common Delegation Patterns in the Verifiable Credential Ecosystem

Here are three ways you can utilize VCs and DIDs to enable delegation and attenuated delegation for more complex scenarios.

It’s commonly understood that verifiable credentials are a useful data model for expressing provenance and authority of data. In other words, they’re great for solving the “who says what” problem in a digital ecosystem. However, did you know that there are three ways in which you can utilize VCs and DIDs to enable delegation and attenuated delegation for more complex scenarios? In this blog post, I’ll cover the three patterns you can use with examples to help you figure out some of the more advanced capabilities of the VC data model. See below for more details!

Terminology

Since it’s quite common to see these terms within the IAM space, I figured it would be useful to first cover what each term means in simple terms.

Delegation: This is the ability for someone to share their abilities with another user within the system. There are multiple ways this can be done, all with different tradeoffs.

1. A user who shares their password with another user is performing delegation, but not in a way that allows an authorization endpoint (often called a “verifier”) to uniquely differentiate between the two users. This inability to differentiate at the authorization endpoint often leads to concerns around the confused deputy problem.

2. This is when the system has been designed to allow the user to share their abilities with another user. The most common way this shows up today is with Google document links. When a user “allows anyone with this link to view/comment/edit”, they’re granting abilities to other users in a delegated way. The difference is that the system can identify these unique users, which is commonly seen via further delegation based upon an email address or organization.

Attenuated delegation: This is when the user opts to share only a portion of the abilities that they have. Looking at our options for delegation, option 1 would not allow this because the user is sharing all the same abilities that they have when logging into an account. So if an admin shares their password, the person they share the password with has admin abilities as well. One of the more common examples of attenuated delegation in use today is when a valet key is given to a valet. This key gives the valet access to drive the vehicle, but doesn’t allow them to open the glovebox or trunk, for example. In this case, the driver of the vehicle is able to share only a portion of their abilities (driving) without sharing all of their abilities (opening the glovebox or trunk). In almost every system I’ve seen, this has to be intentionally designed into the system.

Delegation without attenuation by using DID Documents

As an example, let’s say we have the two following DID Documents:

Alice’s DID Document:

{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/suites/ed25519-2020/v1" ], "id": "did:example:alice", "verificationMethod": [ { "id": "did:example:alice#aliceKey1", "type": "Ed25519VerificationKey2020", "controller": "did:example:alice", "publicKeyMultibase": "zH3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV" }, { "id": "did:example:bob#bobKey1", "type": "Ed25519VerificationKey2020", "controller": "did:example:bob", "publicKeyMultibase": "z9hFgmPVfmBZwRvFEyniQDBkz9LmV7gDEqytWyGZLmDXE" } ], "assertionMethod": [ "did:example:alice#aliceKey1", "did:example:bob#bobKey1" ] }

Bob’s DID Document:

{ "@context": [ "https://www.w3.org/ns/did/v1", "https://w3id.org/security/suites/ed25519-2020/v1" ], "id": "did:example:bob", "verificationMethod": [ { "id": "did:example:bob#bobKey1", "type": "Ed25519VerificationKey2020", "controller": "did:example:bob", "publicKeyMultibase": "z9hFgmPVfmBZwRvFEyniQDBkz9LmV7gDEqytWyGZLmDXE" } ], "assertionMethod": ["did:example:bob#bobKey1"] }

In this example we can see that Alice has the capability to delegate to Bob the ability to assert on her behalf. Given the following verifiable credential that was delegated to him, Bob could then create a valid verifiable presentation:

{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.gov/credentials/3732", "type": ["VerifiableCredential", "UniversityDegreeCredential"], "issuer": { "id": "did:example:issuer" }, "issuanceDate": "2020-03-10T04:24:12.164Z", "credentialSubject": { "id": "did:example:alice", "degree": { "type": "BachelorDegree", "name": "Bachelor of Science and Arts" } }, "proof": { "type": "JsonWebSignature2020", "created": "2020-03-21T17:51:48Z", "verificationMethod": "did:example:issuer#credentialIssuanceKey", "proofPurpose": "assertionMethod", "jws": "eyJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdLCJhbGciOiJFZERTQSJ9..OPxskX37SK0FhmYygDk-S4csY_gNhCUgSOAaXFXDTZx86CmI5nU9xkqtLWg-f4cqkigKDdMVdtIqWAvaYx2JBA" } }

Where the verifiablePresentation looks like so:

{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://example.com/credentials/latest" ], "type": ["VerifiablePresentation"], "verifiableCredential": [ { "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.gov/credentials/3732", "type": ["VerifiableCredential", "UniversityDegreeCredential"], "issuer": { "id": "did:example:issuer" }, "issuanceDate": "2020-03-10T04:24:12.164Z", "credentialSubject": { "id": "did:example:alice", "degree": { "type": "BachelorDegree", "name": "Bachelor of Science and Arts" } }, "proof": { "type": "JsonWebSignature2020", "created": "2020-03-21T17:51:48Z", "verificationMethod": "did:example:issuer#credentialIssuanceKey", "proofPurpose": "assertionMethod", "jws": "eyJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdLCJhbGciOiJFZERTQSJ9..OPxskX37SK0FhmYygDk-S4csY_gNhCUgSOAaXFXDTZx86CmI5nU9xkqtLWg-f4cqkigKDdMVdtIqWAvaYx2JBA" } } ], "id": "ebc6f1c2", "holder": "did:example:alice", "proof": { "type": "Ed25519Signature2020", "created": "2019-12-11T03:50:55Z", "verificationMethod": "did:example:bob#bobKey1", "proofPurpose": "authentication", "challenge": "123", "proofValue": "z5LgJQhEvrLoNqXSbBzFR6mqmBnUefxX6dBjn2A4FYmmtB3EcWC41RmvHARgHwZyuMkR9xMbMCY7Ch4iRr9R8o1JffWY63FRfX3em8f3avb1CU6FaxiMjZdNegc" } }

This is effectively a method whereby Alice adds Bob’s public key to her DID document and gives him a copy of the verifiable credential, delegating him to act on her behalf. However, because this pattern relies on Alice granting Bob full authority to perform any action on her behalf, it provides delegation but lacks attenuation.
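
To make the verifier's side of this pattern concrete, here is a minimal sketch (mine, not from the post) of the extra check a verifier performs: resolve the credential subject's DID Document and confirm that the key that signed the presentation is listed there (under assertionMethod in the example above), even though the key belongs to a different controller. A real verifier would of course also verify both signatures, the challenge, expiry, and so on; the resolver here is an assumed interface.

// Sketch only: checking that the presentation's signing key is authorized by the
// credential subject's DID Document (the delegation-without-attenuation pattern).

interface DidDocument {
  id: string;
  assertionMethod?: string[]; // verification-method references, e.g. "did:example:bob#bobKey1"
}

// Assumed resolver interface; in practice this would come from a DID resolver library.
type Resolver = (did: string) => Promise<DidDocument>;

async function isDelegatedSigner(
  subjectDid: string,          // e.g. "did:example:alice" from credentialSubject.id
  presentationKeyId: string,   // e.g. "did:example:bob#bobKey1" from the presentation proof
  resolve: Resolver
): Promise<boolean> {
  const subjectDoc = await resolve(subjectDid);
  // The key is acceptable if the subject's own DID Document lists it, even though it is
  // controlled by a different DID (Bob), which is exactly the authorization Alice set up.
  return (subjectDoc.assertionMethod ?? []).includes(presentationKeyId);
}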

Delegation by VCs

Additionally, there’s a well-documented way to establish delegation using just VCs, without the ability to attenuate the data. See Appendix C.5 for more details on this pattern.

Attenuated Delegation by VCs

Further, by combining the pattern described in Appendix C.5 with a selective disclosure scheme like BBS signatures, we can enable an attenuated delegation pattern.

This is done by modifying the type of proof used to sign the credential and then utilizing selective disclosure to limit the data. Start with the following original verifiable credential:

{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/citizenship/v1", "https://w3id.org/security/bbs/v1" ], "id": "https://issuer.oidp.uscis.gov/credentials/83627465", "type": ["VerifiableCredential", "PermanentResidentCard"], "issuer": "did:example:issuer", "identifier": "83627465", "name": "Permanent Resident Card", "description": "Government of Example Permanent Resident Card.", "issuanceDate": "2019-12-03T12:19:52Z", "expirationDate": "2029-12-03T12:19:52Z", "credentialSubject": { "id": "did:example:b34ca6cd37bbf23", "type": ["PermanentResident", "Person"], "givenName": "Alice", "familyName": "SMITH", "gender": "Female", "image": "data:image/png;base64,iVBORw0KGgokJggg==", "residentSince": "2015-01-01", "lprCategory": "C09", "lprNumber": "999-999-999", "commuterClassification": "C1", "birthCountry": "Bahamas", "birthDate": "1958-07-17" }, "proof": { "type": "BbsBlsSignature2020", "created": "2020-10-16T23:59:31Z", "proofPurpose": "assertionMethod", "proofValue": "kAkloZSlK79ARnlx54tPqmQyy6G7/36xU/LZgrdVmCqqI9M0muKLxkaHNsgVDBBvYp85VT3uouLFSXPMr7Stjgq62+OCunba7bNdGfhM/FUsx9zpfRtw7jeE182CN1cZakOoSVsQz61c16zQikXM3w==", "verificationMethod": "did:example:issuer#test" } }

Then the following derived proof could be used to attenuate the data while also delegating the credential capabilities to a new holder:

{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "did:example:76e12ec21ebhyu1f712ebc6f1z2", "type": ["VerifiablePresentation"], "verifiableCredential": [ { "@context": [ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/citizenship/v1", "https://w3id.org/security/bbs/v1" ], "id": "https://issuer.oidp.uscis.gov/credentials/83627465", "type": ["PermanentResidentCard", "VerifiableCredential"], "description": "Government of Example Permanent Resident Card.", "identifier": "83627465", "name": "Permanent Resident Card", "credentialSubject": { "id": "did:example:alice", "type": ["Person", "PermanentResident"], "familyName": "SMITH", "gender": "Female", "givenName": "Alice" }, "expirationDate": "2029-12-03T12:19:52Z", "issuanceDate": "2019-12-03T12:19:52Z", "issuer": "did:example:issuer", "proof": { "type": "BbsBlsSignatureProof2020", "nonce": "wrmPiSRm+iBqnGBXz+/37LLYRZWirGgIORKHIkrgWVnHtb4fDe/4ZPZaZ+/RwGVJYYY=", "proofValue": "ABkB/wbvt6213E9eJ+aRGbdG1IIQtx+IdAXALLNg2a5ENSGOIBxRGSoArKXwD/diieDWG6+0q8CWh7CViUqOOdEhYp/DonzmjoWbWECalE6x/qtyBeE7W9TJTXyK/yW6JKSKPz2ht4J0XLV84DZrxMF4HMrY7rFHvdE4xV7ULeC9vNmAmwYAqJfNwY94FG2erg2K2cg0AAAAdLfutjMuBO0JnrlRW6O6TheATv0xZZHP9kf1AYqPaxsYg0bq2XYzkp+tzMBq1rH3tgAAAAIDTzuPazvFHijdzuAgYg+Sg0ziF+Gw5Bz8r2cuvuSg1yKWqW1dM5GhGn6SZUpczTXuZuKGlo4cZrwbIg9wf4lBs3kQwWULRtQUXki9izmznt4Go98X/ElOguLLum4S78Gehe1ql6CXD1zS5PiDXjDzAAAACWz/sbigWpPmUqNA8YUczOuzBUvzmkpjVyL9aqf1e7rSZmN8CNa6dTGOzgKYgDGoIbSQR8EN8Ld7kpTIAdi4YvNZwEYlda/BR6oSrFCquafz7s/jeXyOYMsiVC53Zls9KEg64tG7n90XuZOyMk9RAdcxYRGligbFuG2Ap+rQ+rrELJaW7DWwFEI6cRnitZo6aS0hHmiOKKtJyA7KFbx27nBGd2y3JCvgYO6VUROQ//t3F4aRVI1U53e5N3MU+lt9GmFeL+Kv+2zV1WssScO0ZImDGDOvjDs1shnNSjIJ0RBNAo2YzhFKh3ExWd9WbiZ2/USSyomaSK4EzdTDqi2JCGdqS7IpooKSX/1Dp4K+d8HhPLGNLX4yfMoG9SnRfRQZZQ==", "verificationMethod": "did:example:issuer#test", "proofPurpose": "assertionMethod", "created": "2020-10-16T23:59:31Z" } }, { "@context": [ "https://www.w3.org/2018/credentials/v1", "https://w3id.org/citizenship/v1", "https://w3id.org/security/v3-unstable" ], "id": "https://issuer.oidp.uscis.gov/credentials/83627465", "type": ["PermanentResidentCard", "VerifiableCredential"], "description": "Government of Example Permanent Resident Card.", "identifier": "83627465", "name": "Permanent Resident Card", "credentialSubject": { "id": "did:example:bob", "type": ["Person", "PermanentResident"], "familyName": "SMITH", "gender": "Female", "givenName": "Alice" }, "expirationDate": "2029-12-03T12:19:52Z", "issuanceDate": "2019-12-03T12:19:52Z", "issuer": "did:example:alice", "proof": { "type": "Ed25519Signature2020", "created": "2019-12-11T03:50:55Z", "proofValue": "z5LgmVhjjPTEzGL31k2eEde8bdr4MAzxQv87AmdHt5Usd1uGK1Ae88NoZ5jgTLKS6sJCZnQNthR3qAbyRMxvkqSkss2WtyKLa9rqhJmR6YEBkiuUtxawhrscWXm", "proofPurpose": "assertionMethod", "verificationMethod": "did:example:alice#aliceKey1" } } ], "proof": [ { "type": "Ed25519Signature2020", "created": "2018-06-18T21:19:10Z", "proofPurpose": "assertionMethod", "verificationMethod": "did:example:bob#bobKey1", "challenge": "c0ae1c8e-c7e7-469f-b252-86e6a0e7387e", "jws": "BavEll0/I1..W3JT24=" } ] } Conclusion

So there are a few takeaways from this that should be highlighted. First and foremost, VCs and DIDs have some really interesting capabilities that enable delegation and attenuated delegation. However, delegation is not always advantageous, and it requires a thorough understanding of the use case being built with DIDs and VCs and of whether delegation needs to be enabled or disabled. In some cases it’s left enabled even when there’s an expectation that a credential can only be used by a single holder. Finally, it should be noted that while VCs are capable of being used for authorization systems, it’s generally not a good idea to do so. The complexity of the authorization system (based on the number of checks) is quite high, and the semantics have not been designed in a way that aligns well with most authorization systems, which means a vulnerability is more likely to be introduced. If you’re looking for these capabilities, I would recommend the ZCAP-LD data model, which is designed especially for these concepts. And if you’re still confused and would like some help, please reach out and I’ll see how I can help.

Monday, 21. June 2021

Ben Werdmüller

’Nightmare Scenario’ fresh details on chaos, conflicts inside Trump’s pandemic response

"In the early days of the coronavirus pandemic, as White House officials debated whether to bring infected Americans home for care, President Donald Trump suggested his own plan for where to send them, eager to suppress the numbers on U.S. soil. “Don’t we have an island that we own?” the president reportedly asked those assembled in the Situation Room in February 2020, before

"In the early days of the coronavirus pandemic, as White House officials debated whether to bring infected Americans home for care, President Donald Trump suggested his own plan for where to send them, eager to suppress the numbers on U.S. soil. “Don’t we have an island that we own?” the president reportedly asked those assembled in the Situation Room in February 2020, before the U.S. outbreak would explode. “What about Guantánamo?”"

[Link]


Happy summer 2020!



Phil Windley's Technometria

Reciprocal Negotiated Accountability

Summary: The self-sovereign internet underscores a tension between those who desire perfect oversight of society and those who desire perfect privacy. In this post, I explore a method for conditional legibility of private communications.

In Self-Sovereign Communication, Oskar Van Deventer discusses the communications layer enabled by DIDs. This is the same layer that I've labeled the self-sovereign internet.

Oskar lays out nine requirements for self-sovereign communications (emphasis added):

1. The communication channel shall be usable for machine-readable issuer-holder-verifier interactions.
2. The communication channel shall be protected against eavesdropping, impersonation, message modification and repudiation.
3. Parties shall be able to digitally find each other and to establish a communication channel.
4. The communication channel between counterparties shall be persistent.
5. The communication channel shall be intrinsically symmetrical.
6. The communication channel shall not unnecessarily disclose information between counterparties or to third parties.
7. The communication channel shall be unilaterally closable.
8. The communication channel shall not depend on third parties more than needed.
9. The communication channel shall enable compliance with legal requirements, like legal intercept.

I was pleased to see these principles laid out clearly because many of them are often discussed (including by me) as properties of DIDComm, without the precision Oskar imposes.

The last, as Oskar concedes, is likely to be the most controversial. Indeed, when I read it my first reaction was to start arguing. If complying with legal requirements means creating backdoors to DIDComm, I'd oppose it.

The problem with backdoors for complying with legal requirements is that now developers and cloud operators are left with the task of determining who the good guys are. The whole point of decentralized communication systems is to avoid the kind of centralized, single-point-of-failure that backdoors imply.

Reciprocal Negotiated Accountability

In Reciprocal Negotiated Accountability, Daniel Hardman proposes an alternative to backdoors.

Daniel's idea is to combine two capabilities to create a decentralized system for enabling accountability.

The first is digital watermarks and data terms of service. The watermark is a cryptographically signed addition to the original document that states the terms behind the sharing. For example, a sales agreement could include data sharing terms that state the recipient may not disclose named aspects of the document except under legal subpoena.

The second is provisional anonymity, where identifying information is encrypted and the encrypted package is shared with the recipient. The keys to decrypt the identifying information are shared with a third party under escrow, with legal requirements that the keys only be revealed to the recipient under specific conditions.
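
As a rough sketch of the provisional-anonymity idea (my illustration, not Daniel's implementation), the identifying payload can be encrypted with a fresh symmetric key, and that key in turn encrypted to an escrow agent, so the recipient holds the data but cannot read the identity until the escrow releases the key under the agreed conditions. The names and payload below are placeholders.

// Sketch only: provisional anonymity via a symmetric key held in escrow (Node.js crypto).
import {
  randomBytes,
  createCipheriv,
  publicEncrypt,
  generateKeyPairSync,
} from "crypto";

// Escrow agent's key pair, generated here only to keep the sketch self-contained.
const escrow = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Hypothetical identifying information the sender wants to keep provisionally anonymous.
const identity = Buffer.from(
  JSON.stringify({ name: "Alice Example", did: "did:example:alice" })
);

// 1. Encrypt the identifying information with a fresh symmetric key.
const key = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update(identity), cipher.final()]);
const authTag = cipher.getAuthTag();

// 2. Encrypt the symmetric key to the escrow agent, who releases it to the recipient
//    only when the negotiated conditions in the data terms are met.
const escrowedKey = publicEncrypt(escrow.publicKey, key);

// The recipient stores { iv, authTag, ciphertext }; the escrow agent stores escrowedKey.
console.log({ ciphertextBytes: ciphertext.length, escrowedKeyBytes: escrowedKey.length });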

Daniel combines these into a decentralized system of opt-in agreements between parties that are tailored to the context and circumstances of the specific communications channel and data sharing. The legal agreement defines the requirements that must be met for access.

Daniel calls this "reciprocal negotiated accountability" because both parties negotiate an agreement about how shared data will be treated.

Daniel's solution won't make those who wish for unfettered access to communications channels happy. But it represents an alternative to backdoors that solves many of the problems backdoors present while protecting privacy for legitimate uses–as negotiated by the parties sharing data.

Photo Credit: 3D Tin Can Phones from Chris Potter (CC BY 2.0)

Tags: identity ssi didcomm privacy cryptography


The Self-Sovereign Internet

Summary: I attended the Hyperledger Global Forum this last week–all virtual. I spoke on DIDComm and the Self-Sovereign Internet. If you're not up to speed with DIDComm and how it creates a secure overlay network on TCP/IP, you should be. I consider it one of the most important things to come out of the SSI movement.

DIDs and DID-based Communication or DIDComm form the second layer of the SSI stack, providing a secure communications layer for the exchange of identity information via verifiable credentials. But, because of its flexibility and the ability to define protocols on top of DIDComm messaging, it promises to be as important as the identity layer it enables.

Autonomic Identifiers

The foundation of the self-sovereign internet is built on autonomic identifiers. I'm going to speak about Peer DIDs here, but KERI and other systems provide autonomic identifiers that serve just as well.

Identity systems provide the means to remember, recognize, and rely on other parties in a relationship. To do so, they use identifiers, convenient handles that name the thing being remembered.

Identifiers are issued to or created by a controller (e.g. Alice) who, by virtue of knowing the authentication factors (e.g. password, key fob, cryptographic key), can make authoritative statements about the identifier (e.g. claim it by logging in).

Bindings between Alice, a Peer DID, and the public key it points to. (click to enlarge)

In an autonomic identity architecture, the controller, Alice, generates a public-private key pair, derives a globally unique identifier, and shares the identifier and the currently associated public key with others to create relationships. Alice uses her private key to sign statements that authenticate herself and authorize use of the identifier. A digital signature also provides the means for Alice to cryptographically respond to challenges so she can prove she controls the identifier. These self-authentication and self-authorization capabilities make the identifier self-certifying and self-managing, meaning that there is no external third party, not even a ledger, needed for Alice to manage and use the identifier and prove to others the integrity of the bindings between herself and the identifier.

Any entity can create and establish control over an identifier in a manner that is independent, interoperable, and portable without recourse to any central authority. Autonomic identity systems rely solely on self-sovereign authority.

Alice can create as many Peer DIDs as she needs, each pointing to a public key that Alice controls. Alice can rotate the keys underneath the Peer DID anytime without impacting the identifier or her ability to prove she controls it. She keeps track of these key events in a key event log. The key event log is a chain of signed change records that can be cryptographically verified. Alice can use it to prove the provenance of her control of the identifier from its inception to the present.
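
As a simplified sketch of what such a log might look like (illustrative only; the actual Peer DID and KERI event formats differ in detail), each entry names the key that takes effect next, is hash-chained to the previous entry, and is signed with the key that was current before the event, so anyone holding the log can replay it from the inception key.

// Sketch only: verifying a simplified key event log with Node's ed25519 support.
import { createHash, createPublicKey, verify, KeyObject } from "crypto";

interface KeyEvent {
  seq: number;
  nextPublicKeyPem: string; // key that becomes current after this event
  prevDigest: string;       // SHA-256 hex of the previous event body ("" for the first event)
  signature: Buffer;        // made with the key that was current before this event
}

function eventBody(e: KeyEvent): Buffer {
  return Buffer.from(
    JSON.stringify({ seq: e.seq, nextPublicKeyPem: e.nextPublicKeyPem, prevDigest: e.prevDigest })
  );
}

// Replay the log: each event must chain to its predecessor and be signed by the key
// that the predecessor (or the inception key) put in force.
function verifyKeyEventLog(log: KeyEvent[], inceptionKey: KeyObject): boolean {
  let currentKey = inceptionKey;
  let prevDigest = "";
  for (const e of log) {
    if (e.prevDigest !== prevDigest) return false;
    const body = eventBody(e);
    if (!verify(null, body, currentKey, e.signature)) return false; // ed25519 takes a null digest
    currentKey = createPublicKey(e.nextPublicKeyPem);
    prevDigest = createHash("sha256").update(body).digest("hex");
  }
  return true;
}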

Peer DID Exchange

Alice can exchange Peer DIDs with Bob to create a relationship. Because DIDs are associated with public keys, the exchange ensures that Alice and Bob have each other's public keys. They share key event logs (using a CRDT) for each identifier. If either updates the keys associated with the DID, the other is informed of the change.

Alice and Bob exchange Peer DIDs to create a relationship (click to enlarge)

Having exchanged DIDs, Alice and Bob can now exchange signed and encrypted messages with each other using DIDComm. DIDComm is a messaging protocol that rides on top of this Peer DID relationship.

Alice and Bob use DIDComm to exchange messages (click to enlarge)

Alice and Bob are using digital wallets to store the keys (and manage their key event logs). The DIDComm messages are being exchanged using software agents that understand the DIDComm messaging protocol and use the keys in the wallet.

Alice can have DID-based relationships with multiple people, organizations, and even things. Each relationship includes a secure DIDComm-based messaging capability.

Some of Alice's Relationships (click to enlarge)

This network of DID-based relationships forms an overlay network. An overlay network comprises virtual links that correspond to a path in the underlying network. Secure overlay networks rely on an identity layer based on asymmetric key cryptography to ensure message integrity, non-repudiation, and confidentiality.

DIDComm messaging has several important properties that, taken together, provide a generative, secure network overlay for the Internet.

Secure - DID-based relationships are mutually authenticating.
Private - messages can be encrypted.
Interoperable - messages can be exchanged between any agents that support the DIDComm protocol.
Transport-agnostic - DIDComm does not rely on any specific network technology–it is as happy on Bluetooth as on TCP/IP or anything else.
Extensible - DIDComm is designed to support other protocols riding on top of its general secure messaging infrastructure.

Protocological Power

The extensibility of DIDComm is one of its most powerful features because it makes DIDComm generative–just like the Internet itself.

Protocols describe the rules for a set of interactions, specifying the kinds of interactions that can happen without being overly prescriptive about their nature or content. Protocols formalize workflows for specific interactions like ordering food at a restaurant, playing a game, or applying for college.

The Hyperledger Aries project has a collection of RFCs that describe protocols for DIDComm messaging. While we have come to think of SSI agents as being strictly about exchanging peer DIDs to create a connection, requesting and issuing a credential, or proving things using credentials, these are merely specific protocols defined to run over the DIDComm messaging protocol. The following specifications describe the protocols for these three core applications of DIDComm:

Connecting with others
Requesting and issuing credentials
Proving things using credentials

Dozens, even hundreds, of other protocols are possible.

Daniel Hardman has provided a comprehensive tutorial on defining protocols on DIDComm. One of the Aries RFCs is a sample protocol definition for playing TicTacToe over DIDComm messaging.

Alice and Bob play Tic Tac Toe (click to enlarge)

The TicTacToe protocol defines the types of messages that are allowed, the game state, and which messages are allowed in each game state. I recommend it as a way to understand DIDComm protocols since it's familiar and easy to understand. Bruce Conrad, who works on picos with me, implemented the TicTacToe protocol for picos, which act as DIDComm agents.

Generativity

In 2005, Jonathan Zittrain wrote a compelling and prescient examination of the generative capacity of the Internet and its tens of millions of attached PCs. Zittrain defined generativity thus:

Generativity denotes a technology's overall capacity to produce unprompted change driven by large, varied, and uncoordinated audiences. From The Generative Internet
Referenced 2021-06-14T13:41:18-0600

Generative systems use a few basic rules, structures, or features to yield behaviors that can be extremely varied and unpredictable. Zittrain goes on to lay out the criteria for evaluating the generativity of a technology:

Generativity is a function of a technology's capacity for leverage across a range of tasks, adaptability to a range of different tasks, ease of mastery, and accessibility.

I have made the case elsewhere that the self-sovereign internet meets Zittrain's criteria for generativity.

Generativity provides decentralized actors with the ability to create cooperating, complex structures and behavior. No one person or group can or will think of all the possible uses, but each is free to adapt the system to their own use. The architecture of the self-sovereign internet enables adaptation of DIDComm messaging to any circumstance.

I am bullish on the possibilities for verifiable credentials to allow people to live digital lives with dignity and effectiveness, address the problems of social inclusion, and support economic equality for everyone around the globe. With all that, I believe the possibilities for the self-sovereign internet are even larger, promising a more secure and private, albeit no less useful, internet for tomorrow. DIDComm may turn out to be the most important part of self-sovereign identity.

Tags: ssi identity didcomm ssiot


Ben Werdmüller

Fears for future of American journalism as hedge funds flex power

"According to a recent analysis, hedge funds or private equity firms now control half of US daily newspapers, including some of the largest newspaper groups in the country: Tribune, McClatchy and MediaNews Group." [Link]

"According to a recent analysis, hedge funds or private equity firms now control half of US daily newspapers, including some of the largest newspaper groups in the country: Tribune, McClatchy and MediaNews Group."

[Link]

Sunday, 20. June 2021

reb00ted

Everybody has our data ... except us

Google has all my e-mail. (And I don’t. They merely let me access it with a browser.)

Facebook has the list of all of my friends and what I said to them. (And I don’t.)

LinkedIn has all of my business contacts. (Repeat after me: and I don’t.)

Instagram has all my photos. Well, the Instagram department of Facebook does. (Chorus now: and I don’t.)

Amazon has the list of all my purchases, and knows what products I was interested in but didn’t buy after all. (AND I DON’T.)

The list goes on.

Does this sound right to you? It sounds quite wrong to me.

But maybe it doesn’t matter. Things go well in this arrangement, don’t they?

Not so fast. Let’s start with losing access to my accounts. Maybe I forgot my password and failed to reset it. Or maybe I managed to get a password reset e-mail but my primary e-mail account was hacked, and now the attacker controls my Facebook account and I don’t. Maybe Google decided to disable my account, and given that there is no appeals process, that’s the end of me and my e-mail. The end of a 20-year-long record of all important things I wrote to others and they wrote to me. In the blink of an eye, because they have my data and I don’t.

But even if everything is fine, and I don’t forget my passwords and don’t get hacked and won’t get locked out: it’s a bit like being a really unimportant guest living in a fancy hotel on a short-term lease. They can kick me out any time, and keep all my furniture and other possessions, no questions asked. Thank you, I prefer my very own home, where nobody can kick me out, or at least renter protection laws, which don’t exist online.

We got to get our data back.


SSI Ambassador

Digital Identity Wallet: A place for your self-sovereign identity

Illustration: Digital Identity Wallet

This article explains what a wallet is, how it works and how you can use it for managing your digital identity. While there are also wallets for institutions, this article only focuses on wallets for end users. The article also takes into consideration the revised eIDAS regulation including the ‘European Digital Identity Wallet’.

An Introduction

A digital wallet is a key management application, which provides a user with a graphical interface to store, manage and secure digital keys. These keys can be used to sign transactions, statements, credentials, documents or claims.

A digital identity wallet enables a user to establish relationships and interact with third parties in a trusted manner. While the wallet aspect is mainly dealing with key management, storage aspects and the graphical interface (UI = User Interface), the third party interactions are rather organised by your agent, which is a part of your wallet. Your agent handles third-party interactions in your name and acts in your interest. It is a piece of software, which helps you to stay in control of your messaging, security, health records, privacy, purchases etc.

A digital identity wallet based on self-sovereign identity (SSI) principles.

Not all wallets are the same! A wallet is a piece of technology, which can be implemented in different ways. While it can leverage decentralised identifiers (DIDs) and verified credentials, it doesn’t necessarily need to follow the principles of SSI. There are cloud wallets, which are hosted and provided by a third party. A good comparison is a wallet for cryptocurrencies — think of an exchange like Coinbase or Binance. These companies offer you a hosted wallet, which you don’t control. You might have the authentication means (password + second factor) to access your wallet, but it’s not yours. In this case, you are the owner, but not the possessor and you don’t have data sovereignty. This is in stark contrast to so-called self-custody wallets, which you install on your device. These wallets randomly create a private key, which only you know. The key point is that your wallet creates keys for you, which only you know and not a third party.

A digital identity wallet that follows the principles of SSI enables the user to have data sovereignty and complete control as well as data portability. It provides you with the necessary autonomy to be independent of a third party. It not only enables ownership but also possession. It enables transparency and explicit consent when sharing information. It's also vendor-independent, meaning you can export your data and import it into another wallet of your choice, since it is built on open standards, which are widely used. It also creates peer identifiers for every new contact or interaction, instead of having an identifier assigned to it.

The core functions of an SSI wallet

Please note that the described characteristics below are not universally valid for all (SSI) wallets. It’s rather a desired status.

Interactions in an SSI ecosystem, picture by Adrian Doerk from Lissi.

Establish trusted relationships with third parties:
The wallet can create an encrypted communication channel to exchange information between you and a (trusted) third party. This communication channel is based on a unique identifier, which you control; hence it is not hosted by a third party. This in turn gives you portability of communication channels: if you switch from one wallet to another, you can keep using the communication channels you already created, without relying on any third-party platform.

Store, organise and present verified data:
The user can store and manage (verified) credentials, among other information, within the wallet. Once in the wallet, credentials can be used to answer a proof request from any connection. The wallet creates a verifiable presentation, which the user can choose to send, or the user can decline the proof request instead. Users are also able to verify the identity of the other party, effectively establishing a trusted relationship, which can be leveraged to share and receive information within a defined trust framework such as the eIDAS regulation. This trust might be based on an electronic seal or similar trust mechanisms. However, this might not be the case for all contacts.

The information exchanged can be verified, but it can also be self-attested, or it can simply prove a certain threshold without revealing the exact information, such as proving you are 18 or older without revealing your actual age.
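
To make the threshold idea concrete, here is a hypothetical proof request, loosely modeled on AnonCreds-style predicates rather than on any specific wallet's API: the verifier asks for a predicate over an attribute instead of the attribute itself, and the wallet answers with a presentation that proves only that the predicate holds. All names and types below are illustrative assumptions.

// Illustrative only: the shape of a predicate-based proof request asking a wallet to
// prove "18 or older" without disclosing the underlying attribute value.
interface PredicateRequest {
  attributeName: string;              // attribute inside the credential
  predicate: ">=" | "<=" | ">" | "<"; // comparison the wallet must prove
  value: number;                      // threshold, never the raw attribute value
  acceptedCredentialTypes: string[];  // which credentials may satisfy the request
}

const ageOver18Request: PredicateRequest = {
  attributeName: "age",
  predicate: ">=",
  value: 18,
  acceptedCredentialTypes: ["PermanentResidentCard", "NationalIdCredential"],
};

// A compliant wallet would respond with a zero-knowledge presentation asserting only
// that the predicate is satisfied, plus the cryptographic material needed to verify it.
console.log(JSON.stringify(ageOver18Request, null, 2));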

A transparent history of shared data:
Since the wallet usually keeps a history of interactions, the user can track who shared what data and when. This leads to greater transparency for the user and helps to better exercise data protection rights. A framework for the integration of detailed consent forms is currently under development.

Self-custody: With great control comes great responsibility

These digital wallets run locally as an application on the device of the user. From a technical perspective, these wallets are similar to self-custody wallets for cryptocurrencies. As with those wallets, the user has the responsibility to make a backup.

Identification, authentication and authorization:
Before diving deeper into the core functions of a digital wallet we need to understand the differences between the three words above. The questions are from the perspective of the verifier or issuer.

Identification answers the question: "Who is it I'm talking to?"
Authentication answers the question: “Is it you (whom I’ve identified already) again?”
Authorization answers the question: “What rights do I want to grant you?”

The wallet can enable a variety of additional functions and thus serves as a central point for the user to manage and access services. For example, the wallet can be used to replace traditional authentication methods such as passwords with single sign-on (SSO) functionality. Furthermore, existing standards such as the OpenID Connect protocol can also be connected to enable communication with existing infrastructure. Hence, once widely adopted, the wallet will completely replace passwords for you and enable you to identify and authenticate yourself, identify third parties, and authorise third parties to use your data according to your permissions.

The European Digital Identity Wallet

Illustration: European Digital Identity Wallet

The main regulation within the European Union, which addresses the topic of identification (among other topics) is the regulation on electronic identification and trust services for electronic transactions in the internal market better known as eIDAS regulation. The regulation just went through a major revision and now includes several aspects regarding self-sovereign identities in its draft version. It also includes the aspect of a European Digital Identity Wallet, which “is a product and service that allows the user to store identity data, credentials and attributes linked to her/his identity, to provide them to relying parties on request and to use them for authentication, online and offline, for a service in accordance with Article 6a; and to create qualified electronic signatures and seals”.

In the following, the European Digital Identity Wallet is referred to as “EU Wallet” for simplicity.

Functions of the EU Wallet:

It should technically enable the selective disclosure of attributes to relying parties. Member States should also reach an agreement on common elements of a business model and fee structure of the EU Wallets. The EU Wallet will furthermore enable citizens to create qualified electronic signatures that can facilitate political participation among other functions.

EU Wallets shall enable the user to:
- securely request and obtain, store, select, combine and share, in a manner that is transparent to and traceable by the user, the necessary legal personal identification data and electronic attestation of attributes to authenticate online and offline in order to use online public and private services;
- sign statements or contracts utilising qualified electronic signatures (QES);
- have full control of the EU Wallet; the issuer of the EU Wallet shall not collect information about the use of the EU Wallet which is not necessary for the provision of the EU Wallet service;
- use it free of charge (natural persons);
- access the wallet for persons with disabilities.

The EU Wallets shall provide a common interface:
- to qualified and non-qualified trust service providers issuing qualified and non-qualified electronic attestations of attributes;
- for relying parties to request and validate personal identification data and electronic attestations of attributes;
- for the presentation to relying parties of personal identification data, electronic attestation of attributes or other data such as credentials, in local mode, not requiring internet access for the wallet.

Furthermore, the EU Wallets need to:
- ensure that trust service providers of qualified attestations of attributes cannot receive any information about the use of these attributes;
- meet the requirements with regard to assurance level "high", in particular as applied to the requirements for identity proofing and verification, and electronic identification means management and authentication;
- provide a mechanism to ensure that the relying party is able to authenticate the user and to receive electronic attestations of attributes;
- ensure that the personal identification data uniquely and persistently represents the natural or legal person associated with it.

An EU Wallet shall be issued:

(a) by a Member State;
(b) under a mandate from a Member State;
(c) independently but recognised by a Member State.

The draft regulation requires Member States to issue a EU Wallet under a notified eID scheme to common technical standards following a compulsory compliance assessment and voluntary certification within the European cybersecurity certification framework, as established by the Cybersecurity Act.

Service providers should communicate their intent to rely on the EU Wallets to the Member States. Relying on the level of assurance “high”, the EU Wallets should benefit from the potential offered by tamper-proof solutions such as secure elements.

The conformity of EU Wallets should be certified by accredited public or private sector bodies designated by Member States. Certification should in particular rely on the relevant European cybersecurity certification schemes. The ‘EU Wallet Trust Mark’ means an indication in a simple, recognisable and clear manner that an EU Wallet has been issued in accordance with this Regulation.

The Commission shall establish, publish and maintain a list of certified European Digital Identity Wallets.

Where are we heading? A personal opinion:

A wallet acts as a central point for managing interactions in a digital world. It will become the standard for identity management and be ubiquitous in everyday life. Technology is moving fast. While standardisation is taking quite a while, institutions and governments have recognised the need for innovation in this area to secure data sovereignty and avoid dependence on a foreign oligopoly of companies with irresponsible business practices. Dozens of use cases from different industries are being explored in parallel and will become available in the coming months. Over time, digital wallets for pure identity management and for financial applications (e.g. cryptocurrencies) are likely to merge. The draft of the revised eIDAS regulation of the European Union is the most comprehensive and ambitious step towards SSI internationally. While the exact implementation and technical specifications of the EU Wallets are still to be determined, their compulsory issuance by the EU Member States as well as their compulsory acceptance by big platform providers will have a tremendous international impact. Key management is an issue that still needs to be solved: the user also has the responsibility of keeping a secure backup to restore the wallet in case access is lost.

Sources used:

- Amending Regulation (EU) No 910/2014 as regards establishing a framework for a European digital Identity (eIDAS Draft)
- The current and future state of digital wallets, Darrell O’Donnell
- Making sense of digital wallets, Digital Identification and Authentication Council of Canada (DIACC)
- Digital wallet and digital agents, Self-sovereign identity, Manning Publications
- What is a wallet, Kaliya Young
- What goes in a wallet, Daniel Hardman, W3C CCG
- Security, Siloes and sovereignty, Daniel Hardman

About the author:

Adrian Doerk is a trained IT-System Electrician and has a degree in international business. He focuses on data sovereignty for individuals, institutions and governments. Currently, he works at main incubator where he leads the business development for Lissi and is also responsible for the communication at IDunion. He’s also active in foundations such as DIF and ToIP. SSI Ambassador is a private educational endeavour.

Disclaimer:

This article does not represent the official view of any entity, which is mentioned in this article or which is affiliated with the author. It solely represents the opinion of the author.


John Philpin : Lifestream

Why on earth when you visit a web site would it tell you to

Why on earth when you visit a web site would it tell you to come back during ‘normal working hours’ - and then provide the times!!

Why on earth when you visit a web site would it tell you to come back during ‘normal working hours’ - and then provide the times!!


Consensus … a great reminder …   Read The Art

Consensus … a great reminder …   Read The Article

Consensus … a great reminder …

 

Read The Article


What the UK considers to be a ‘test’ …  

What the UK considers to be a ‘test’ …  

What the UK considers to be a ‘test’ …

 


The State of California   … specifically Lake

The State of California   … specifically Lake Oroville - north of Sacramento. The Weather Channel (video)

The State of California

 

… specifically Lake Oroville - north of Sacramento.

The Weather Channel (video)

Saturday, 19. June 2021

Simon Willison

Joining CSV and JSON data with an in-memory SQLite database

The new sqlite-utils memory command can import CSV and JSON data directly into an in-memory SQLite database, combine and query it using SQL and output the results as CSV, JSON or various other formats of plain text tables. sqlite-utils memory The new feature is part of sqlite-utils 3.10, which I released this morning. I've recorded this video demonstrating the new feature - with full accompa

The new sqlite-utils memory command can import CSV and JSON data directly into an in-memory SQLite database, combine and query it using SQL and output the results as CSV, JSON or various other formats of plain text tables.

sqlite-utils memory

The new feature is part of sqlite-utils 3.10, which I released this morning.

I've recorded this video demonstrating the new feature - with full accompanying notes below.

sqlite-utils already offers a mechanism for importing CSV and JSON data into a SQLite database file, in the form of the sqlite-utils insert command. Processing data with this involves two steps: first import it into a temp.db file, then use sqlite-utils query to run queries and output the results.

Using SQL to re-shape data is really useful - since sqlite-utils can output in multiple different formats, I frequently find myself loading in a CSV file and exporting it back out as JSON, or vice-versa.

This week I realized that I had most of the pieces in place to reduce this to a single step. The new sqlite-utils memory command (full documentation here) operates against a temporary, in-memory SQLite database. It can import data, execute SQL and output the result in a one-liner, without needing any temporary database files along the way.
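If you're working from Python rather than the command line, the sqlite-utils library can do roughly the same thing against an in-memory database. A minimal sketch (assuming sqlite-utils is installed and the GitHub API is reachable; this is not taken from the post itself):

import json
import urllib.request

import sqlite_utils

# Fetch the same JSON the curl examples below use
with urllib.request.urlopen("https://api.github.com/users/dogsheep/repos") as response:
    repos = json.load(response)

# Throwaway in-memory SQLite database - nothing is written to disk
db = sqlite_utils.Database(memory=True)
db["stdin"].insert_all(repos)

# Same query as the CLI examples
for row in db.query(
    "select full_name, forks_count, stargazers_count as stars "
    "from stdin order by stars desc limit 3"
):
    print(row)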

Here's an example. My Dogsheep GitHub organization has a number of repositories. GitHub make those available via an authentication-optional API endpoint at https://api.github.com/users/dogsheep/repos - which returns JSON that looks like this (simplified):

[ { "id": 197431109, "name": "dogsheep-beta", "full_name": "dogsheep/dogsheep-beta", "size": 61, "stargazers_count": 79, "watchers_count": 79, "forks": 0, "open_issues": 11 }, { "id": 256834907, "name": "dogsheep-photos", "full_name": "dogsheep/dogsheep-photos", "size": 64, "stargazers_count": 116, "watchers_count": 116, "forks": 5, "open_issues": 18 } ]

With sqlite-utils memory we can see the 3 most popular repos by number of stars like this:

$ curl -s 'https://api.github.com/users/dogsheep/repos' \
  | sqlite-utils memory - '
    select full_name, forks_count, stargazers_count as stars
    from stdin order by stars desc limit 3
  ' -t
full_name                     forks_count    stars
--------------------------  -------------  -------
dogsheep/twitter-to-sqlite             12      225
dogsheep/github-to-sqlite              14      139
dogsheep/dogsheep-photos                5      116

We're using curl to fetch the JSON and pipe it into sqlite-utils memory - the - means "read from standard input". Then we pass the following SQL query:

select full_name, forks_count, stargazers_count as stars from stdin order by stars desc limit 3

stdin is the temporary table created for the data piped into the tool. The query selects three of the JSON properties, renames stargazers_count to stars, sorts by stars and returns the first three.

The -t option here means "output as a formatted table" - without that option we get JSON:

$ curl -s 'https://api.github.com/users/dogsheep/repos' \
  | sqlite-utils memory - '
    select full_name, forks_count, stargazers_count as stars
    from stdin order by stars desc limit 3
  '
[{"full_name": "dogsheep/twitter-to-sqlite", "forks_count": 12, "stars": 225},
 {"full_name": "dogsheep/github-to-sqlite", "forks_count": 14, "stars": 139},
 {"full_name": "dogsheep/dogsheep-photos", "forks_count": 5, "stars": 116}]

Or we can use --csv to get back CSV:

$ curl -s 'https://api.github.com/users/dogsheep/repos' \
  | sqlite-utils memory - '
    select full_name, forks_count, stargazers_count as stars
    from stdin order by stars desc limit 3
  ' --csv
full_name,forks_count,stars
dogsheep/twitter-to-sqlite,12,225
dogsheep/github-to-sqlite,14,139
dogsheep/dogsheep-photos,5,116

The -t option supports a number of different formats, specified using --fmt. If I wanted to generate a LaTeX table of the top repos by stars I could do this:

$ curl -s 'https://api.github.com/users/dogsheep/repos' \
  | sqlite-utils memory - '
    select full_name, forks_count, stargazers_count as stars
    from stdin order by stars desc limit 3
  ' -t --fmt=latex
\begin{tabular}{lrr}
\hline
 full\_name                  &   forks\_count &   stars \\
\hline
 dogsheep/twitter-to-sqlite  &             12 &     225 \\
 dogsheep/github-to-sqlite   &             14 &     139 \\
 dogsheep/dogsheep-photos    &              5 &     116 \\
\hline
\end{tabular}

We can run aggregate queries too - let's add up the total size and total number of stars across all of those repositories:

$ curl -s 'https://api.github.com/users/dogsheep/repos' \
  | sqlite-utils memory - '
    select sum(size), sum(stargazers_count)
    from stdin
  ' -t
  sum(size)    sum(stargazers_count)
-----------  -----------------------
        843                      934

(I believe size here is measured in kilobytes: the GitHub API documentation isn't clear on this point.)

Joining across different files

All of these examples have worked with JSON data piped into the tool - but you can also pass one or more files, of different formats, in a way that lets you execute joins against them.

As an example, let's combine two sources of data.

The New York Times publish a us-states.csv file with Covid cases and deaths by state over time.

The CDC have an undocumented JSON endpoint (which I've been archiving here) tracking the progress of vaccination across different states.

We're going to run a join from that CSV data to that JSON data, and output a table of results.

First, we need to download the files. The CDC JSON data isn't quite in the right shape for our purposes:

{ "runid": 2023, "vaccination_data": [ { "Date": "2021-06-19", "Location": "US", "ShortName": "USA", ...

sqlite-utils expects a flat JSON array of objects - we can use jq to re-shape the data like so:

$ curl https://covid.cdc.gov/covid-data-tracker/COVIDData/getAjaxData?id=vaccination_data \
  | jq .vaccination_data > vaccination_data.json

The New York Times data is good as is:

$ wget 'https://github.com/nytimes/covid-19-data/raw/master/us-states.csv'

Now that we have the data locally, we can run a join to combine it using the following command:

$ sqlite-utils memory us-states.csv vaccination_data.json "
  select
    max(t1.date),
    t1.state,
    t1.cases,
    t1.deaths,
    t2.Census2019,
    t2.Dist_Per_100K
  from t1
    join t2 on t1.state = replace(t2.LongName, 'New York State', 'New York')
  group by t1.state
  order by Dist_Per_100K desc
" -t
max(t1.date)    state                    cases    deaths    Census2019    Dist_Per_100K
--------------  ----------------------  -------  --------  ------------  ---------------
2021-06-18      District of Columbia      49243      1141        705749           149248
2021-06-18      Vermont                   24360       256        623989           146257
2021-06-18      Rhode Island             152383      2724       1059361           141291
2021-06-18      Massachusetts            709263     17960       6892503           139692
2021-06-18      Maryland                 461852      9703       6045680           138193
2021-06-18      Maine                     68753       854       1344212           136894
2021-06-18      Hawaii                    35903       507       1415872           136024
...

I'm using automatically created numeric aliases t1 and t2 for the files here, but I can also use their full table names "us-states" (quotes needed due to the hyphen) and vaccination_data instead.

The replace() operation there is needed because the vaccination_data.json file calls New York "New York State" while the us-states.csv file just calls it "New York".

The max(t1.date) and group by t1.state is a useful SQLite trick: if you perform a group by and then ask for the max() of a value, the other columns returned from that table will be the columns for the row that contains that maximum value.
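You can convince yourself of that bare-column behaviour with nothing but Python's built-in sqlite3 module - a tiny illustrative example with made-up data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (date text, state text, cases integer)")
conn.executemany(
    "insert into t values (?, ?, ?)",
    [
        ("2021-06-17", "Vermont", 24000),
        ("2021-06-18", "Vermont", 24360),
        ("2021-06-17", "Maine", 68000),
        ("2021-06-18", "Maine", 68753),
    ],
)

# The bare "cases" column is taken from the row that holds max(date) in each group
for row in conn.execute(
    "select max(date), state, cases from t group by state order by state"
):
    print(row)
# ('2021-06-18', 'Maine', 68753)
# ('2021-06-18', 'Vermont', 24360)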

This demo is a bit of a stretch - once I reach this level of complexity I'm more likely to load the files into a SQLite database file on disk and open them up in Datasette - but it's a fun example of a more complex join in action.

Also in sqlite-utils 3.10

The sqlite-utils memory command has another new trick up its sleeve: it automatically detects which columns in a CSV or TSV file contain integer or float values and creates the corresponding in-memory SQLite table with the correct types. This ensures max() and sum() and order by work in a predictable manner, without accidentally sorting 1 as higher than 11.
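The detection logic is conceptually straightforward - something along these lines, though this sketch is not the actual sqlite-utils implementation:

def detect_column_type(values):
    # Guess INTEGER, FLOAT or TEXT for a column of CSV string values
    def all_castable(cast):
        non_blank = [v for v in values if v != ""]
        for value in non_blank:
            try:
                cast(value)
            except ValueError:
                return False
        return bool(non_blank)

    if all_castable(int):
        return "INTEGER"
    if all_castable(float):
        return "FLOAT"
    return "TEXT"

print(detect_column_type(["1", "11", "2"]))      # INTEGER - so 11 sorts above 1
print(detect_column_type(["1.5", "2", ""]))      # FLOAT - blanks don't rule a type out
print(detect_column_type(["Vermont", "Maine"]))  # TEXT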

I didn't want to break backwards compatibility for existing users of the sqlite-utils insert command so I've added type detection there as a new option, --detect-types or -d for short:

$ sqlite-utils insert my.db us_states us-states.csv --csv -d
  [####################################]  100%
$ sqlite-utils schema my.db
CREATE TABLE "us_states" (
   [date] TEXT,
   [state] TEXT,
   [fips] INTEGER,
   [cases] INTEGER,
   [deaths] INTEGER
);

There's more in the changelog.

Releases this week

- sqlite-utils: 3.10 - (78 releases total) - 2021-06-19 - Python CLI utility and library for manipulating SQLite databases
- dogsheep-beta: 0.10.2 - (20 releases total) - 2021-06-13 - Build a search index across content from multiple SQLite database tables and run faceted searches against it using Datasette
- yaml-to-sqlite: 1.0 - (5 releases total) - 2021-06-13 - Utility for converting YAML files to SQLite
- markdown-to-sqlite: 1.0 - (2 releases total) - 2021-06-13 - CLI tool for loading markdown files into a SQLite database

TIL this week

- Mouse support in vim

Doc Searls Weblog

Redux 001: BuzzPhrasing

Since I’m done with fighting in the red ocean of the surveillance-dominated Web, I’ve decided, while busy working in the blue ocean (on what for now we’re calling i-commerce), to bring back, in this blog, some of the hundreds of things I’ve written over the last 30+ years. I’m calling it the Redux series. To qualify, […]

Since I’m done with fighting in the red ocean of the surveillance-dominated Web, I’ve decided, while busy working in the blue ocean (on what for now we’re calling i-commerce), to bring back, in this blog, some of the hundreds of things I’ve written over the last 30+ years. I’m calling it the Redux series. To qualify, these should still ring true today, or at least provide some history. This early one is still on the Web, here at BuzzPhraser.com. I’ve made only two small edits, regarding dates. (And thanks to Denise Caruso for reminding me that this thing started out on paper, very long ago.)

The original BuzzPhraser was created in 1990, or perhaps earlier, as a spreadsheet, then a HyperCard stack; and it quickly became one of the most-downloaded files on AOL and Compuserve. For years after that it languished, mostly because I didn’t want to re-write the software. But when the Web came along, I knew I had to find a way to re-create it. The means didn’t find that end, however, until Charles Roth grabbed the buzzwords by their serifs and made it happen, using a bit of clever Javascript. Once you start having fun with the new BuzzPhraser, I’m sure you’ll thank him as much as I do.

The story that follows was written for the original BuzzPhraser. I thought it would be fun to publish it unchanged.

—Doc, sometime in the late ’90s

BuzzPhrases are built with TechnoLatin, a non-language that replaces plain English nouns with vague but precise-sounding substitutes.  In TechnoLatin, a disk drive is a “data management solution.”  A network is a “workgroup productivity platform.”  A phone is a “telecommunications device”.

The virtue of TechnoLatin is that it describes just about anything technical.  The vice of TechnoLatin is that it really doesn’t mean anything.  This is because TechnoLatin is comprised of words that are either meaningless or have been reduced to that state by frequent use.  Like the blank tiles in Scrabble, you can put them anywhere, but they have no value.  The real value of TechnoLatin is that it sounds precise while what it says is vague as air.  And as easily inflated.

Thanks to TechnoLatin, today’s technology companies no longer make chips, boards, computers, monitors or printers.  They don’t even make products.  Today everybody makes “solutions” that are described as “interoperable,” “committed,” “architected,” “seamless” or whatever.  While these words sound specific, they describe almost nothing.  But where they fail as description they succeed as camouflage: they conceal meaning, vanish into surroundings and tend to go unnoticed.

Take the most over-used word in TechnoLatin today: solution.  What the hell does “solution” really mean?  Well, if you lift the camouflage, you see it usually means “product.”  Try this: every time you run across “solution” in a technology context, substitute “product.”  Note that the two are completely interchangeable.  The difference is, “product” actually means something, while “solution” does not.  In fact, the popularity of “solution” owes to its lack of specificity.  While it presumably suggests the relief of some “problem,” it really serves only to distance what it labels from the most frightening risk of specificity: the clarity of actual limits.

The fact is, most vendors of technology products don’t like to admit that their creations are limited in any way.  Surely, a new spreadsheet — the labor of many nerd/years — is something more than “just a spreadsheet.”  But what?  Lacking an available noun, it’s easy to build a suitable substitute with TechnoLatin.  Call it an “executive information matrix.”  Or a “productivity enhancement engine.”  In all seriousness, many companies spend months at this exercise.  Or even years.  It’s incredible.

There is also a narcotic appeal to buzzphrasing in TechnoLatin.  It makes the abuser feel as if he or she is really saying something, while in fact the practice only mystifies the listener or reader.  And since buzzphrasing is so popular, it gives the abuser a soothing sense of conformity, like teenagers get when they speak slang.  But, like slang, TechnoLatin feels better than it looks.  In truth, it looks suspicious.  And with good reason.  TechnoLatin often does not mean what it says, because the elaborate buzzphrases it builds are still only approximations.

But who cares? Buzzphrasing is epidemic.  You can’t get away from it.  Everybody does it.  There is one nice thing about Everybody, however: they’re a big market.

So, after studying this disease for many years, I decided, like any self-respecting doctor, to profit from the problem.  And, like any self-respecting Silicon Valley entrepreneur, I decided to do this with a new product for which there was absolutely no proven need, in complete faith that people would buy it.  Such is the nature of marketing in the technology business.

But, lacking the investment capital required to generate demand where none exists, I decided on a more generous approach: to give it away, in hope that even if I failed to halt the epidemic, at least I could get people to talk about it.

With this altruistic but slightly commercial goal in mind, I joined farces with Ray Miller of Turtlelips Services to create a product that would encourage and support the narcotic practice of buzzphrasing.  Being the brilliant programmer he is, Ray hacked it into a stack in less time than it took for me to write this prose.  And now here it is, free as flu, catching on all over the damn place.

What made BuzzPhraser possible as a product is that the practice of buzzphrasing actually has rules.  Like English, TechnoLatin is built around nouns.  It has adjectives to modify those nouns.  And adverbs to modify the adjectives.  It also has a class of nouns that modify other nouns — we call them “adnouns.”  And it has a nice assortment of hyphenated prefixes and suffixes (such as “multi-” and “-driven”) that we call “hyphixes.”

Since the TechnoLatin lexicon is filled with meaningless words in all those categories, the words that comprise TechnoLatin buzzphrases can be assembled in just about any number or order, held together as if by velcro.  These are the rules:

- adverbs modify adjectives
- adjectives modify adnouns, nouns or each other
- adnouns modify nouns or other adnouns
- nouns are modified by adnouns or adjectives
- prefixes modify all adjectives
- suffixes qualify all adnouns

Here is a diagram that shows how the rules work:

As with English, there are many exceptions.  But, as with programming, we don’t make any.  So cope with it.

With one adverb, one adjective, two adnouns, a noun and a prefix, you get “backwardly architected hyper-intelligent analysis inference leader.”  With an adjective and two nouns, you get “interactive leverage module.”  Put together buzzphrases of almost any shape and length:

“Breakthrough-capable technology market”
“Primarily distinguished optional contingency philosophy control power environment”
“Executive inference server”
“Evidently complete key business manipulation capacity method”
“Incrementally intelligent workgroup process topology vendor”

The amazing thing is that all of these sound, as we say in TechnoLatin, “virtually credible.”  And one nice thing about the computer business is — thanks largely to the brain-softening results of prolonged TechnoLatin abuse — “virtually credible” is exactly what it means in plain English: close enough.

BuzzPhraser makes “close enough” easy to reach by substituting guesswork for thinking.  Just keep hitting the button until the right buzzphrase comes along.  Then use that buzzphrase in faith that at least it sounds like you know what you’re saying.  And hey, in this business, isn’t that virtually credible?
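In that spirit, here's a minimal sketch of a generator that follows the rules above - the word lists are invented stand-ins, not BuzzPhraser's actual TechnoLatin lexicon:

import random

ADVERBS = ["incrementally", "backwardly", "evidently"]
ADJECTIVES = ["intelligent", "interactive", "seamless"]
ADNOUNS = ["workgroup", "inference", "leverage"]
NOUNS = ["solution", "platform", "module"]
PREFIXES = ["hyper-", "multi-"]
SUFFIXES = ["-driven", "-capable"]

def buzzphrase():
    words = []
    if random.random() < 0.5:                  # adverbs modify adjectives
        words.append(random.choice(ADVERBS))
    adjective = random.choice(ADJECTIVES)
    if random.random() < 0.3:                  # prefixes modify all adjectives
        adjective = random.choice(PREFIXES) + adjective
    words.append(adjective)
    for _ in range(random.randint(1, 2)):      # adnouns modify nouns or other adnouns
        adnoun = random.choice(ADNOUNS)
        if random.random() < 0.3:              # suffixes qualify all adnouns
            adnoun += random.choice(SUFFIXES)
        words.append(adnoun)
    words.append(random.choice(NOUNS))         # nouns are modified by adnouns or adjectives
    return " ".join(words)

print(buzzphrase())  # e.g. "evidently seamless inference-capable workgroup platform"

Keep hitting the button, as it were.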

Acknowledgements

Thanks to:

Stewart Alsop II, who published “Random Strings of TechnoLatin” along with the original Generic Description Table in both the Preceedings and Proceedings of Agenda 90; and who would like an e-mail front end that automatically discards any message with too many TechnoLatin words and buzzphrases.

Spencer F. Katt of PC Week, who devoted parts of two consecutive rumor columns to the Table, and posted it on the magazine’s CompuServe bulletin board, from which so many people copied it that I thought there might be something going on here.

Guy Kawasaki, who told me “this needs to be a product.”

Bob LeVitus, who told me “you ought to get this hacked into a stack.”

And Ray Miller, who did it.  Beautifully.

Doc Searls
Palo Alto, California
March 7, 1991


John Philpin : Lifestream

How many TLDs are there these days? … and .sucks ? Rea

How many TLDs are there these days? … and .sucks ? Really?

How many TLDs are there these days?

… and .sucks ?

Really?


Amazon tracks warehouse workers’ every move because Jeff Bez

Amazon tracks warehouse workers’ every move because Jeff Bezos thinks people are inherently lazy. .. but those people do actually work Jeff.

A JetBlue passenger said the flight crew threatened to kick

A JetBlue passenger said the flight crew threatened to kick him off the plane after a first-class passenger gave him an eye mask but an attendant demanded he give it back. What is wrong with the world?

Identity Woman

Fake Students

It became clear to me again today why we here in California need Verifiable Credentials. I teach in a CCC – a California Community College. This summer I have a class and right now is the “census deadline” to drop students who haven’t been attending class. Below is the note we were sent regarding fraudulent […] The post Fake Students appeared first on Identity Woman.

It became clear to me again today why we here in California need Verifiable Credentials. I teach in a CCC – a California Community College. This summer I have a class and right now is the “census deadline” to drop students who haven’t been attending class. Below is the note we were sent regarding fraudulent […]

The post Fake Students appeared first on Identity Woman.


John Philpin : Lifestream

🎶🎼 Foo Fighters Cover Bee Gees for Record Store Day LP …

🎶🎼 Foo Fighters Cover Bee Gees for Record Store Day LP … didn’t see that one coming!

🎶🎼 Foo Fighters Cover Bee Gees for Record Store Day LP

… didn’t see that one coming!

Friday, 18. June 2021

John Philpin : Lifestream

Uber and Lyft Donated to Community Groups Who Then Pushed th

Uber and Lyft Donated to Community Groups Who Then Pushed the Companies’ Agenda. … over and over. They didn’t even change the words of the circulated templates. You don’t easily change the corporate DNA when you change the CEO.

Uber and Lyft Donated to Community Groups Who Then Pushed the Companies’ Agenda. … over and over.

They didn’t even change the words of the circulated templates.

You don’t easily change the corporate DNA when you change the CEO.

Thursday, 17. June 2021

Simon Willison

Multi-region PostgreSQL on Fly

Multi-region PostgreSQL on Fly Really interesting piece of architectural design from Fly here. Fly can run your application (as a Docker container run using Firecracker) in multiple regions around the world, and they've now quietly added PostgreSQL multi-region support. The way it works is that all-but-one region can have a read-only replica, and requests sent to application servers can perform

Multi-region PostgreSQL on Fly

Really interesting piece of architectural design from Fly here. Fly can run your application (as a Docker container run using Firecracker) in multiple regions around the world, and they've now quietly added PostgreSQL multi-region support. The way it works is that all-but-one region can have a read-only replica, and requests sent to application servers can perform read-only queries against their local region's replica. If a request needs to execute a SQL update your application code can return a "fly-replay: region=scl" HTTP header and the Fly CDN will transparently replay the request against the region containing the leader database. This also means you can implement tricks like setting a 10s expiring cookie every time the user performs a write, such that their requests in the next 10s will go straight to the leader and avoid them experiencing any replication lag that hasn't caught up with their latest update.
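A rough sketch of what that write-replay pattern might look like in application code, using Flask purely for illustration - the fly-replay header name comes from Fly, but the region names, route, cookie and status code here are all invented:

from flask import Flask, make_response

app = Flask(__name__)

PRIMARY_REGION = "scl"   # region running the PostgreSQL leader (example value)
CURRENT_REGION = "sjc"   # region this instance is running in (example value)

@app.route("/comments", methods=["POST"])
def create_comment():
    if CURRENT_REGION != PRIMARY_REGION:
        # Ask Fly's proxy to replay this write in the leader's region;
        # the header is what matters here, not the status code
        response = make_response("", 409)
        response.headers["fly-replay"] = f"region={PRIMARY_REGION}"
        return response
    # ... perform the INSERT against the local (leader) database here ...
    response = make_response("created", 201)
    # Short-lived cookie so this user's next requests can be routed to the
    # leader, hiding replication lag for their own write
    response.set_cookie("prefer-leader", "1", max_age=10)
    return response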

Via @mrkurt

Wednesday, 16. June 2021

John Philpin : Lifestream

Tim Berners-Lee auctioning original World Wide Web source co

Tim Berners-Lee auctioning original World Wide Web source code as NFT. FINALLY An NFT worth having.

Roger Waters Says He Turned Down Instagram Offer to Use Pink

Roger Waters Says He Turned Down Instagram Offer to Use Pink Floyd’s “Another Brick in the Wall” in Advertisement. I wonder if anybody at Facetagram knows - or even looked up - the lyrics?

Tuesday, 15. June 2021

John Philpin : Lifestream

Yup - about right.

Yup - about right.

Yup - about right.


Very neat, cool and tidy - but at $399 … I’ll take a pass!

Very neat, cool and tidy - but at $399 … I’ll take a pass!

Very neat, cool and tidy - but at $399 … I’ll take a pass!


Hyperonomy Digital Identity Lab

The Verifiable Economy: Fully Decentralized Object (FDO) Example: Bob’s UDID Document

Strongly-typed Code to Generate Bob’s UDID Document Bob’s UDID Document

Strongly-typed Code to Generate Bob’s UDID Document

Bob’s UDID Document

{ "CellId": 6601258412767401213, "CredentialCore": { "udid": "did:svrn:credential:FD54/8F2B/8D61/9C5B", "context": [ "https://www.sovrona.com/ns/svrn/v1" ], "claims": [ { "key": "authentication", "attribute": [ { "key": "publicKey", "value": "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ" }, { "key": "id", "value": "#pubkey1" }, { "key": "type", "value": "AUTHN-KEY" } ] }, { "key": "service", "attribute": [ { "key": "serviceEndPoint", "value": "http://localhost:5304/" }, { "key": "id", "value": "#sep1" }, { "key": "type", "value": "SEP-TCS" } ] }, { "key": "testkey1", "value": "testvalue1" }, { "key": "testkey2", "attributes": [ [ { "key": "publicKey", "value": "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ" }, { "key": "id", "value": "#pubkey1" }, { "key": "type", "value": "AUTHN-KEY" } ], [ { "key": "publicKey", "value": "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ" }, { "key": "id", "value": "#pubkey2" }, { "key": "type", "value": "AUTHN-KEY" } ] ] } ] }, "Envelope": { "kind": "UDIDDocument", "encryptionFlag": "NotEncrypted", "hashedThumbprint64": "MpUTVq+AYTMEucWUFfMWfsWJRQ6tmO6FGzjAJGMN4T0=", "signedHashSignature64": "CLFgZCLJPzozxwB+JjJr7xQdZxgcwbEX4XBsujD+1rCW0sd6T4JFMVFTb86H50HQZ6h7myUld+9pIlbNWS3IPIg11uwYjlzMe32AO+ETCMSEJQJAPN9IJB//C4J2SkAdkK9OszStVsA/GYYtKZQdYSTdDESQCDVw6292N92bIJY=", "comments": [ "Bob's UDID Document", "It works!", "Created by TDW.TCSServer at 2021-06-15 07:07:09Z" ] } }

John Philpin : Lifestream

I’m only 25 minutes in, but so far the @gruber WWDC show is

I’m only 25 minutes in, but so far the @gruber WWDC show is a better ad for Apple than Apple manages! Hope he gets to something useful soon.

I’m only 25 minutes in, but so far the @gruber WWDC show is a better ad for Apple than Apple manages!

Hope he gets to something useful soon.

Monday, 14. June 2021

John Philpin : Lifestream

There’s so much Automattic could have done to improve Tumblr

There’s so much Automattic could have done to improve Tumblr - they didn’t … I guess the same will go for DayOne.

There’s so much Automattic could have done to improve Tumblr - they didn’t …

I guess the same will go for DayOne.

Sunday, 13. June 2021

reb00ted

Colorado is now the 3rd US state with modern privacy legislation, with a twist

This past week, the Colorado legislature passed the “Colorado Privacy Act”, which makes Colorado the 3rd US state with modern privacy legislation, following the CCPA in California and the CDPA in Virginia. It will start going into effect about 24 months from now. Its provisions are similar to California’s and Virginia’s, except for this one, which goes substantially further: A CONTROLL

This past week, the Colorado legislature passed the “Colorado Privacy Act”, which makes Colorado the 3rd US state with modern privacy legislation, following the CCPA in California and the CDPA in Virginia. It will start going into effect about 24 months from now.

Its provisions are similar to California’s and Virginia’s, except for this one, which goes substantially further:

A CONTROLLER THAT PROCESSES PERSONAL DATA FOR PURPOSES OF TARGETED ADVERTISING OR THE SALE OF PERSONAL DATA SHALL ALLOW CONSUMERS TO EXERCISE THE RIGHT TO OPT OUT OF THE PROCESSING OF PERSONAL DATA CONCERNING THE CONSUMER FOR PURPOSES OF TARGETED ADVERTISING OR THE SALE OF PERSONAL DATA … BY CONTROLLERS THROUGH A USER-SELECTED UNIVERSAL OPT-OUT MECHANISM THAT MEETS THE TECHNICAL SPECIFICATIONS ESTABLISHED BY THE ATTORNEY GENERAL…

In other words, Do Not Track – or something very much like it – is back in Colorado, and ignoring the setting, like companies did widely when Do Not Track was created, is not an option any more. The technical details will need to be figured out between now and when this provision goes into effect, which is two and a half years away. So plenty of time to get this right.
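One existing candidate for that kind of user-selected universal signal is the Global Privacy Control header. A server-side check could be as small as the sketch below, though whether GPC will satisfy the Attorney General's eventual technical specification is an open question:

def wants_universal_opt_out(headers):
    # Global Privacy Control sends "Sec-GPC: 1" on requests from users
    # who have switched the signal on in their browser or extension
    return headers.get("Sec-GPC", "").strip() == "1"

print(wants_universal_opt_out({"Sec-GPC": "1"}))  # True -> don't process for targeted ads
print(wants_universal_opt_out({}))                # False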

Progress! Which US state is next? The IAPP has a great chart showing the state of privacy legislation in states around the US.

Here is the full text of the Colorado bill.

Saturday, 12. June 2021

Simon Willison

Best Practices Around Production Ready Web Apps with Docker Compose

Best Practices Around Production Ready Web Apps with Docker Compose I asked on Twitter for some tips on Docker Compose and was pointed to this article by Nick Janetakis, which has a whole host of useful tips and patterns I hadn't encountered before. Via @mwarkentin

Best Practices Around Production Ready Web Apps with Docker Compose

I asked on Twitter for some tips on Docker Compose and was pointed to this article by Nick Janetakis, which has a whole host of useful tips and patterns I hadn't encountered before.

Via @mwarkentin


Weeknotes: New releases across nine different projects

A new release and security patch for Datasette, plus releases of sqlite-utils, datasette-auth-passwords, django-sql-dashboard, datasette-upload-csvs, xml-analyser, datasette-placekey, datasette-mask-columns and db-to-sqlite. Datasette 0.57 with a security fix I started the week by wrapping up work on Datasette 0.57, which I previewed here last week. While upgrading the ?_trace=1 debug mechan

A new release and security patch for Datasette, plus releases of sqlite-utils, datasette-auth-passwords, django-sql-dashboard, datasette-upload-csvs, xml-analyser, datasette-placekey, datasette-mask-columns and db-to-sqlite.

Datasette 0.57 with a security fix

I started the week by wrapping up work on Datasette 0.57, which I previewed here last week.

While upgrading the ?_trace=1 debug mechanism to work with streaming CSV files (in order to fix a nasty CSV performance issue) I spotted a severe XSS security hole. I fixed this in 0.57 and also pushed out a 0.56.1 release with the same fix, to cover users who weren't ready to test and upgrade the larger changes in the minor point release.

GitHub have a neat mechanism for documenting security vulnerabilities which helps alert other repositories that are relying on a piece of software. They also offer a tool for requesting a CVE - I used this, and a couple of days later GitHub's security team reviewed and issued CVE-2021-32670 (and in NIST) - Datasette's first entry in the CVE database!

I don't believe anyone found this hole before me, so it's unlikely to have been exploited in the wild in the past, but if you use Datasette with an authentication plugin such as datasette-auth-passwords, or deploy on a domain that also hosts other authenticated web applications, you should upgrade urgently.

Security aside, my favourite features in the 0.57 release are:

- If you try to execute a SQL query containing a syntax error, Datasette will re-display your query rather than expecting you to click the "back" button - demo here. This is a huge usability improvement that's been far too long coming.
- You can now use ?_col= and ?_nocol= parameters to select which columns are displayed for a table, either by URL hacking or using the new "Hide this column" / "Show all columns" cog menu options. These selections also affect JSON and CSV output.
- Facet results now have a "... more" link which links to the ?_facet_size=max version of the page, showing up to 1,000 results - so you can finally facet across all U.S. states or all world countries without being truncated at 30. Here's a demo.

I also released a 0.57.1 bug fix and a 0.58a0 alpha previewing a small improvement to the menu_links(), table_actions() and database_actions() plugin hooks. I've decided to start releasing plugin hook changes as alphas as early as possible, since it makes it easier for people (including myself) to try them out in plugins.

sqlite-utils 3.8 and 3.9

Two minor releases each with two new features.

sqlite-utils 3.8 introduces a sqlite-utils indexes my.db command-line tool for viewing the indexes in a database, and a new Python library table.xindexes introspection property which returns more details than the existing table.indexes - I decided to implement a new property rather than break backwards compatibility for existing code using table.indexes.

Similarly, sqlite-utils 3.9 introduces a sqlite-utils schema my.db command and accompanying db.schema property that exposes the SQL schema of the entire database - as opposed to table.schema which just returns the schema for one specific table.
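A quick sketch of the Python side of both releases, assuming an existing my.db file that contains a us_states table:

import sqlite_utils

db = sqlite_utils.Database("my.db")

# New in 3.9: schema for the whole database...
print(db.schema)

# ...versus the schema for one specific table
print(db["us_states"].schema)

# From 3.8: introspect indexes, either with "sqlite-utils indexes my.db"
# on the command line or via table.indexes / table.xindexes in Python
for index in db["us_states"].indexes:
    print(index)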

django-sql-dashboard 0.16

The first new feature contributed to this package by another developer: Atul Varma upgraded the description on saved dashboards to render as Markdown, which makes sense since Markdown is packaged with Django SQL Dashboard already - as seen in this delightfully gnarly query which dynamically constructs a Markdown summary of all of the groups and permissions configured for a Django application.

db-to-sqlite 1.4

db-to-sqlite is my command-line tool for importing data from any SQLAlchemy-supported relational database into a SQLite database. It's a quick and easy way to load data into a format that can be handled by Datasette.

The last release was nearly a year ago. I dropped into the project to wrap up a pull request from six months ago and ended up migrating CI from Travis to GitHub Actions and fixing a few other long-standing bugs and feature requests too.

Thanks to Jeremy Dormitzer for implementing the new --postgres-schema feature, and sorry it took so long to land in a release!

And the others

- datasette-auth-passwords simply bumps the Datasette dependency to 0.56.1 or later, to ensure it isn't run against an older insecure Datasette version.
- datasette-upload-csvs fixes a bug where this plugin broke against Datasette versions newer than 0.54.
- xml-analyser 1.1 added support for reading from standard input, e.g. cat x.xml | xml-analyzer - - I wanted this for some XML exploration I was doing, then I added some sorting improvements in this and in a 1.2 follow-up release.
- datasette-placekey is a tiny new plugin which adds SQL functions for working with placekeys - a way of encoding addresses which looks like this: 222-222@63s-spp-nbk. The plugin provides SQL functions including geo_to_placekey(latitude, longitude), placekey_to_geo() and placekey_to_h3().
- datasette-mask-columns 0.2.1 simply removes a stale pinned version of Datasette.

TIL this week

- Building a Markdown summary of Django group permissions
- Exporting Amplitude events to SQLite
- Running a MySQL server using Homebrew

Friday, 11. June 2021

John Philpin : Lifestream

Is it technology or the carriers that stops me from having m

Is it technology or the carriers that stops me from having my 🍎watch as my prime cellular connection and have my Mac/‘phone’/iPad wifi camped on the watch’s hot spot?

Is it technology or the carriers that stops me from having my 🍎watch as my prime cellular connection and have my Mac/‘phone’/iPad wifi camped on the watch’s hot spot?


If you’re in England, the government is planning to hand you

If you’re in England, the government is planning to hand your private health records to people you don’t know – again. OPT OUT

Trump Is Increasingly Consumed With Ballot Audits I keep

Trump Is Increasingly Consumed With Ballot Audits I keep reading that ‘45’ is crazy. We should stop it. I don’t want that as his Defence in a court of law.

Trump Is Increasingly Consumed With Ballot Audits

I keep reading that ‘45’ is crazy. We should stop it. I don’t want that as his Defence in a court of law.


Kerri Lemole

Open Badges as Verifiable Credentials

In the W3C VC-EDU call on June 7, 2021 we discussed Open Badges asserted as W3C Verifiable Credentials (VCs). This call began the public discussion of Open Badges as Native VCs (potentially as Open Badges 3.0) to inform the IMS Open Badges Working Group. Why are we discussing this? Why does it matter? How will it work? Feedback from folks in the community have suggested that it would be helpful to

In the W3C VC-EDU call on June 7, 2021 we discussed Open Badges asserted as W3C Verifiable Credentials (VCs). This call began the public discussion of Open Badges as Native VCs (potentially as Open Badges 3.0) to inform the IMS Open Badges Working Group. Why are we discussing this? Why does it matter? How will it work? Feedback from folks in the community have suggested that it would be helpful to answer these questions first from a conceptual standpoint. In a later post, we can outline what the structural changes could look like.

Open Badges are digital credentials that can recognize learning, achievements, and even memberships. They look like an image, but inside the image are metadata properties (it works much like how digital photos have metadata properties that explain the data of the photo, location, etc. so that when the photo is uploaded, applications understand the data). The metadata describes why the badge was issued, when, the recipient (typically an email address), the description and criteria of the achievement, and, critically, how this digital credential can be verified.

The verification of an Open Badge is dependent on the Issuer of the badge. It can be done in one of two ways:

1. Hosted Verification — One of the metadata properties of an Open Badge is the assertion id, which is a URL that hosts the metadata in a .json file. If that URL can’t be found, then the badge is not verifiable.
2. Signed Verification — An Issuer can digitally sign the metadata of a badge. This uses cryptography to ensure that the badge data has not changed and that the issuer was the entity that issued the badge. The public key used to sign the badge must be traceable back to the issuer.

Primarily, Open Badges platforms are issuing hosted badges. This means the verifying party is dependent on the issuer to host the data. Also, the issuer has some ability to track when the badge data has been accessed and potentially by who (or at least by IP address).
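Conceptually, a hosted check boils down to fetching the assertion URL and confirming it is still there and still describes itself as the same assertion. A rough sketch, illustrative only and far from a spec-complete verifier:

import json
import urllib.request

def verify_hosted_badge(assertion_id):
    # assertion_id is the badge's id URL; a real verifier also checks the
    # badge class, issuer profile, recipient identity and revocation status
    try:
        with urllib.request.urlopen(assertion_id) as response:
            assertion = json.load(response)
    except Exception:
        return False  # URL unreachable or gone -> the badge is not verifiable
    return assertion.get("id") == assertion_id

# The verifying party depends entirely on the issuer keeping this URL online:
# verify_hosted_badge("https://example.org/assertions/abc123.json")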

For 99% of the badges that have been issued to date, this is fine. In fact, badges are often shared via web pages that attractively display the image and data (not by the .json files that contain the data or the baked image with the data inside). So while Open Badges are both human readable and machine readable, the human readable approach is what is used most often. Signed badges are closer to Verifiable Credentials because they also rely on cryptographic proof.

Neither of these approaches give learners control of their data and this is the overall conceptual shift between Open Badges 2.0 and Open Badges as Verifiable Credentials.

Verifiable Credentials, like signed Open Badges, are signed by the issuer and verified cryptographically. Decentralized Identifiers (DIDs) can be used to identify the issuer and recipient (using DIDs in Open Badges 2.0 has been prototyped). Also, Verifiable Credentials have a concept called “Presentations” which can be digitally signed by the recipient to prove that they are the recipient of the verifiable credential(s) being presented. All in all, this means that the issuer is verified and also the recipient.
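To make that concrete, an Open Badge asserted as a Verifiable Credential might be shaped something like the sketch below. This is purely illustrative - the actual vocabulary for a native-VC Open Badge is exactly what the working group discussion is meant to settle:

# Illustrative only; property names beyond the core VC data model are invented
badge_as_vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "OpenBadgeCredential"],
    "issuer": "did:example:university",      # issuer identified by a DID
    "issuanceDate": "2021-06-11T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner",         # recipient identified by a DID
        "achievement": {
            "name": "Communication 101",
            "description": "Completed a course on communication",
            "criteria": "Attended the course and passed the final assessment",
        },
    },
    # Added by the issuer's signing key; verifiable without contacting the issuer
    "proof": {"type": "Ed25519Signature2020", "...": "..."},
}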

Verifiable Credentials can be displayed, shared, and presented from wallet applications that are web, mobile, or desktop based (most are mobile right now). Recipients can present one or more of their credentials to verifying third-parties who can verify the credentials without depending on the issuers and consume the data. Not only can the participant manage which credentials are being verified, they can control what aspects of their data are being shared.

For example, a student’s digital wallet may contain a digital passport, an Open Badge representing a completed course, and an Open Badge representing a student ID (all as Verifiable Credentials). A relying third-party, such as a potential employer seeking to fill an internship role, may need to verify that a student is over 18, has completed a course on communication, and is a current student. The employer’s website application can ask the student to provide this discrete information using their wallet and the student can do this without revealing any of the other information in those credentials like their date of birth, address, photo, or even the name of the school they attend. And all of this information can be verified without contacting the issuers of those credentials. Open Badges 2.0 can’t do this.

Verifiable Credentials put learners in the center of a trust triangle with issuers and verifiers. They also add an additional layer of verification for the recipients. Open Badges can take advantage of this, be the first education-focused digital credential spec to promote personal protection of and access to data, and be part of the growing ecosystem that is exchanging Verifiable Credentials.

It’s worth noting that the human readable aspect of Open Badges would not change in a VC version. Issuers can still display web pages for the issued badges and allow recipients to share their badges online from those pages. The difference is that those web pages would no longer be relied on for machine verification or consumption.

Join us for this continuing discussion in the next VC-EDU call this coming Monday June 14 (and most Mondays) at 8am PDT / 11am EDT / 4pm BST / 5pm CEST. This call is public and all are welcome to join. For this Monday's call (6/14), the zoom info has changed:

https://us02web.zoom.us/j/89426035740?pwd=dmQ0QS9wMUdFaGo0eFcwbkplV3RjUT09

Thursday, 10. June 2021

Bill Wendel's Real Estate Cafe

Peak real estate dysfunction & the K-shaped housing emergency, where do you stand?

Nearly four months ago, economist John Wake (who’s joined one of our Friday RECALL Roundtables), warned that the nation was facing a pandemic-induced, National Housing Emergency. … The post Peak real estate dysfunction & the K-shaped housing emergency, where do you stand? first appeared on Real Estate Cafe.

Nearly four months ago, economist John Wake (who’s joined one of our Friday RECALL Roundtables), warned that the nation was facing a pandemic-induced, National Housing Emergency. …

The post Peak real estate dysfunction & the K-shaped housing emergency, where do you stand? first appeared on Real Estate Cafe.


Damien Bod

Integration Testing for ASP.NET Core using EF Core Cosmos with XUnit and Azure DevOps

This article shows how integration tests could be implemented for an ASP.NET Core application which uses EF Core and Azure Cosmos. The database tests can be run locally or in an Azure DevOps build using the Azure Cosmos emulator. XUnit is used to implement the tests. Code: https://github.com/damienbod/AspNetCoreEfCoreCosmosTesting EF Core is used to the access […]

This article shows how integration tests could be implemented for an ASP.NET Core application which uses EF Core and Azure Cosmos. The database tests can be run locally or in an Azure DevOps build using the Azure Cosmos emulator. XUnit is used to implement the tests.

Code: https://github.com/damienbod/AspNetCoreEfCoreCosmosTesting

EF Core is used to access the Azure Cosmos database. An EF Core DbContext was created to access Cosmos. This is like any EF Core context, with the DbSet definitions as required. Some Cosmos-specific definitions are added using the OnModelCreating method. See the Cosmos-specific model customization for more details.

public class CosmosContext : DbContext
{
    public CosmosContext(DbContextOptions<CosmosContext> options)
        : base(options) { }

    public DbSet<MyData> MyData { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.HasDefaultContainer("MyDataStore");

        modelBuilder.Entity<MyData>()
            .ToContainer("MyDataItems");

        modelBuilder.Entity<MyData>()
            .HasPartitionKey(o => o.PartitionKey);

        modelBuilder.Entity<MyData>()
            .Property(d => d.ETag)
            .IsETagConcurrency();
    }
}

The MyData class is used to model the Cosmos documents. It has a PartitionKey and also an ETag, which can be used for optimistic concurrency validation.

public class MyData
{
    public string Id { get; set; }
    public string PartitionKey { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public string ETag { get; set; }
}

The MyDataService service class is used to access the context and implement some query logic as required. I like to keep this simple and not separate the specification of the queries from the business logic or the LINQ statements. This reduces the amount of code, keeps the data access and business logic simple, and makes it easy to adapt.

public class MyDataService
{
    private CosmosContext _cosmosContext;

    public MyDataService(CosmosContext cosmosContext)
    {
        _cosmosContext = cosmosContext;
    }

    public void EnsureCreated()
    {
        _cosmosContext.Database.EnsureCreated();
    }

    public async Task CreateAsync(MyData myData)
    {
        await _cosmosContext.MyData.AddAsync(myData);
        await _cosmosContext.SaveChangesAsync(false);
    }

    public async Task<MyData> Get(string id)
    {
        return await _cosmosContext.MyData.FirstAsync(d => d.Id == id);
    }

    public async Task<IList<MyData>> NameContains(string name)
    {
        return await _cosmosContext.MyData
            .Where(d => d.Name.Contains(name)).ToListAsync();
    }
}

The ConfigureServices method adds the services required to use EF Core and Cosmos DB. The services are used in a Razor page application, but this could be any web application, ASP.NET Core API or ASP.NET Core Blazor.

public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<CosmosContext>(options =>
    {
        options.UseCosmos(
            "AccountEndpoint=https://localhost:8081/;AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
            databaseName: "MyDataDb"
        );
    });

    services.AddScoped<MyDataService>();
    services.AddRazorPages();
}

The service needs to be tested. Instead of mocking away the database or using separate specification classes as parameters, the service can be tested as a whole using the Azure Cosmos emulator and EF Core. We use the framework tools to test our code. An EF Core in-memory database could also be used instead of the Azure Cosmos emulator, but we use the emulator for these tests.

The tests are set up to add the services to the IoC container and build them. The code can then be run and asserted as required. To run locally in development, the Azure Cosmos emulator needs to be started first.

using AspNetCoreCosmos.DataAccess;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using System;
using System.Threading.Tasks;
using Xunit;

namespace AspNetCoreCosmos.DbTests
{
    public class MyDataTests : IAsyncLifetime
    {
        private ServiceProvider _serviceProvider;

        public ServiceProvider ServiceProvider { get; set; }

        [Fact]
        public async Task MyDataCreateAsync()
        {
            using (var scope = _serviceProvider.CreateScope())
            {
                // Arrange
                var myData = new MyData
                {
                    Id = Guid.NewGuid().ToString(),
                    PartitionKey = "Test",
                    Name = "testData",
                    Description = "test description"
                };

                var myDataService = scope.ServiceProvider.GetService<MyDataService>();
                myDataService.EnsureCreated();

                // Act
                await myDataService.CreateAsync(myData);
                var first = await myDataService.Get(myData.Id);

                // Assert
                Assert.Equal(myData.Id, first.Id);
            }
        }

        public Task InitializeAsync()
        {
            var serviceCollection = new ServiceCollection();
            serviceCollection.AddDbContext<CosmosContext>(options =>
            {
                options.UseCosmos(
                    "AccountEndpoint=https://localhost:8081/;AccountKey=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==",
                    databaseName: "MyDataDb"
                );
            });
            serviceCollection.AddScoped<MyDataService>();

            _serviceProvider = serviceCollection.BuildServiceProvider();
            return Task.CompletedTask;
        }

        public Task DisposeAsync()
        {
            return Task.CompletedTask;
        }
    }
}

The integration tests can be run in Azure DevOps CI. I used a yaml file for this and added it to my Azure DevOps build. This was a little bit tricky to set up because I did not easily find any working docs. The Microsoft.Azure.CosmosDB.Emulator is started using PowerShell, and then the tests can be run.

Note: Cosmos db emulator is pre-installed on the windows-latest vm hosted image on Azure DevOps . Docs here – https://github.com/actions/virtual-environments/blob/main/images/win/Windows2019-Readme.md.


Thank you Lohith for researching this!

trigger:
- main

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
  vmImage: 'windows-latest'

stages:
- stage: Build
  displayName: Build .NET sln
  pool:
    vmImage: $(vmImage)
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImage)
    steps:
    - task: NuGetToolInstaller@1
    - task: NuGetCommand@2
      inputs:
        restoreSolution: '$(solution)'
    - task: VSBuild@1
      inputs:
        solution: '$(solution)'
        msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:DesktopBuildPackageLocation="$(build.artifactStagingDirectory)\WebApp.zip" /p:DeployIisAppPath="Default Web Site"'
        platform: '$(buildPlatform)'
        configuration: '$(buildConfiguration)'
    - task: VSTest@2
      inputs:
        platform: '$(buildPlatform)'
        configuration: '$(buildConfiguration)'

- stage: IntegrationTests
  displayName: Integration Tests
  dependsOn: Build
  pool:
    vmImage: $(vmImage)
  jobs:
  - job: intgrationtests
    displayName: Run integration tests
    steps:
    - task: DotNetCoreCLI@2
      displayName: Restore
      inputs:
        command: 'restore'
    - task: PowerShell@2
      displayName: 'Starting Cosmos Emulator'
      inputs:
        targetType: 'inline'
        workingDirectory: $(Pipeline.Workspace)
        script: |
          Write-Host "Starting CosmosDB Emulator"
          Import-Module "C:/Program Files/Azure Cosmos DB Emulator/PSModules/Microsoft.Azure.CosmosDB.Emulator"
          Start-CosmosDbEmulator
    - task: DotNetCoreCLI@2
      displayName: "Cosmos Database Tests"
      inputs:
        command: test
        projects: "**/*.DbTests.csproj"

You can add the yaml pipeline to your Azure DevOps build and it will run according to the triggers defined or your Azure DevOps policies.

This works well, but you have to be careful when preparing the tests and running them in parallel. Implementing the tests like this means you have less code in your application and you can still fully test all your code. A disadvantage of this approach is that the tests take longer to run compared to unit tests without the emulator.

Links

https://docs.microsoft.com/en-us/ef/core/providers/cosmos/

https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests

https://docs.microsoft.com/en-us/azure/cosmos-db/

https://dev.azure.com/

https://xunit.net/

https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator

Wednesday, 09. June 2021

Matt Flynn: InfoSec | IAM

Bell Labs, the Colonial Pipeline and Multi-Factor Authentication (MFA)

A simple technology invented by Bell Labs over 20 years ago (and widely used today) could have prevented the Colonial Pipeline attack. In 1880, the French government awarded Alexander Graham Bell roughly the equivalent of $300K as a prize for inventing the telephone. He used the award to fund the research laboratory that became colloquially known as Bell Labs. If you’re not familiar with Bell

A simple technology invented by Bell Labs over 20 years ago (and widely used today) could have prevented the Colonial Pipeline attack.

In 1880, the French government awarded Alexander Graham Bell roughly the equivalent of $300K as a prize for inventing the telephone. He used the award to fund the research laboratory that became colloquially known as Bell Labs. If you’re not familiar with Bell Labs, you should be. In the 140+ years that followed, researchers at Bell Labs invented radio astronomy, transistors, lasers, solar cells, information theory, and UNIX, just to name a few of the many accomplishments. Among the many prestigious awards granted to Bell Labs researchers are nine Nobel prizes and twenty-two IEEE Medals of Honor.

In 1998, I joined AT&T Labs, which was a research group that the company retained when they spun out most of Bell Labs to Lucent Technologies in 1996. I was a Web Application developer; one of the least technical roles in the Labs. If I ever thought for a moment that I knew technology, I was quickly humbled when I built an app that tracked the Labs' actually important projects. The experience of working in the Labs stuck with me in the form of humility and curiosity. I accepted that I may never be the foremost expert in any given technology and I assumed the mindset of a forever student. Even today, I constantly question what I think I know because there are always holes in my knowledge or perspectives that I haven’t seen.

1998 was the same year that researchers at AT&T Labs were issued a patent (filed in 1995) for what became known in our industry as Multi-Factor Authentication (MFA). As a Product Manager at a tech firm, I don’t review patents for legal reasons. But I recently saw an excerpt of the abstract for the AT&T patent and there was one line that I found entertaining: “A preferred method of alerting the customer and receiving a confirmation to authorize the transaction back from the customer is illustratively afforded by conventional two-way pagers.” Not much has changed in 23 years. Pagers have been largely replaced by SMS but text messaging through the telecom provider’s network remains one of the most popular delivery mechanisms for MFA (despite some potential security flaws). 

I have no personal insight into AT&T’s motivations at the time, but I read Kevin Mitnick’s book a few years ago (Ghost in the Wires) and can’t help but wonder if AT&T was at the forefront of developing security technologies because they were such a target of hackers for so many years. I also reached out to Steve Greenspan, one of the inventors named in the patent to get his thoughts on the project. He noted:

"Two-way pagers had just come out (1994-1995), and our cybersecurity friends were debating whether quantum computing would undermine password-based security. The goal was to explore business applications for two-way pagers and to put humans in-the-loop for secure access."

Quantum computing is a pretty interesting business driver for MFA, especially in the mid-1990s. The concern is even more relevant today as we inch closer to quantum compute becoming a practical reality. Today's authentication systems should store password data in non-reversible hashes (theoretically preventing the quantum threat), but it's clear that credentials are being stolen all the time (often via large databases that are just left unprotected) and MFA remains a top solution to mitigate the damage. Steve and team were clearly on the right track when they dreamed up out-of-band authentication and deserve some credit and recognition for the foresight.
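To make the "non-reversible hashes" point concrete, here is a minimal sketch of salted password hashing in Python. It assumes a generic service rather than any particular product; the iteration count and example passwords are illustrative assumptions only.

from __future__ import annotations
import hashlib, hmac, os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest            # store both; the password itself is never stored

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)

Because the stored digest cannot be reversed, a stolen database yields salted hashes rather than passwords, which is exactly why MFA plus hashed storage limits the damage of a breach.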

You may be wondering how this relates to the pipeline attack that led to fuel shortages across the U.S. East Coast. Bloomberg reported that the Colonial Pipeline, which is the largest fuel pipeline in the country, was taken down by a single compromised password. That should never happen given the variety of tools available to limit and control access, starting with MFA – a relatively simple solution that would likely have prevented the attack. The entry point to the system was a Virtual Private Network (VPN) account. If you’re using a VPN and expose anything sensitive inside the VPN, you should implement strong authentication that includes at least two authentication factors (something you know, something you have, something you are). These are widely available technologies that are very effective against lost or stolen credentials.
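As a rough illustration of what a second factor adds, here is a hedged sketch of a time-based one-time password (TOTP, RFC 6238) generator of the kind authenticator apps use. The shared secret below is a made-up example, and a real deployment would use a vetted library rather than hand-rolled code.

from __future__ import annotations
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, at: float | None = None, step: int = 30, digits: int = 6) -> str:
    # HOTP over a time-based counter, per RFC 6238 (SHA-1, 6 digits by default)
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"          # hypothetical secret shared at enrollment
print(totp(SECRET))                  # the 6-digit code the authenticator app would show
assert totp(SECRET, at=31) == totp(SECRET, at=59)  # same 30-second window, same code

The point of the sketch is that the code is derived from a secret the attacker does not have, so a stolen VPN password alone is no longer enough.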

Of course, authentication isn’t the end of the story. Today’s widely distributed and highly dynamic environments require multiple layers of security. We all know how popular email and phishing attacks have become. It only takes one person inside a network to open an email, click a link, or logon to a phishing site to give an adversary a foothold in the network. We have to assume that will happen and build layers of strong security between any one user and the potential targets.

To illustrate the point, here’s a quick example:

Grocery stores who sell small, high-value items have traditionally struggled with theft. (Ask me over a beer sometime about how I helped take down a recurring thief when I worked at a grocery store.) If the only answer was to authenticate users (check ID) on the way into the store, it wouldn't be enough. Once inside, someone can still pocket items and walk out without paying. If you walk into a grocery store today, you’ll see cameras in the healthcare aisle where small, expensive medications line the shelves. But that’s not enough either. Each item is also locked in an anti-theft device that’s removed at the register. And some items are found in a locked cabinet that requires employee assistance. Theft still happens, but each layer reduces the risk. Our IT environments are much more complicated in terms of the various pathways to theft and our responses to reduce risk typically require more than a few layers of security.

Sensitive data should only be stored in a secure area of the network with access controls and Least Privilege enforcement. Access should be limited to specific hosts or networks. Data should be encrypted (inside the file when possible - so if the file is stolen, the data is still unusable). There should be strong authentication to get into the network and monitoring of all activity. There should be alerts on unusual behavior and Data Loss Prevention (DLP) to evaluate the sensitivity of data moving across the network. The environment should be scanned regularly for vulnerabilities and misconfigurations. And on and on. Any one of these security mechanisms alone is not enough. This multi-layered approach to security is critical in developing a strong security posture that minimizes risk.

We could argue about where to start or which security controls are most important. But, it seems like a no-brainer to implement MFA for employees accessing corporate data and applications. Microsoft, who deals with 300 million fraudulent sign-in attempts daily, concluded that “MFA can block over 99.9 percent of account compromise attacks.” That sounds about right. While targeted attacks have increased in prevalence, most attacks are not targeted at specific companies or individuals. Most start with automated scripting or broad-scale phishing attacks that span across potentially thousands of companies and/or millions of people at the same time. When a foothold is found (a script finds a vulnerability or an open port, a user enters credentials into the phishing site, etc.), the attack begins. Implementing a few simple security technologies like automated vulnerability scanning and MFA can prevent most attacks before they begin. Even if a sophisticated phishing attack succeeds despite MFA, the credentials will not be very useful beyond the initial session (which should be limited in scope by other controls).

No single technology will solve all cybersecurity problems. But, implementing MFA is low-cost, easy-to-implement, and highly effective. It may even make life easier for end-users. Password requirements can be loosened because there’s less risk associated with cracked passwords. And there are numerous implementations of passwordless authentication that, while they may not always meet the strict definition of MFA, provide similar (sometimes higher) levels of security as MFA without requiring a password. Combined with context-aware adaptive security (that verifies device, network, location, time-of-day, etc.), these passwordless authentication options may provide the right level of balance between security and user experience. At this point, this isn’t scare tactics or FUD. Attacks on National infrastructure or other high-profile targets can impact the lives of millions with a single execute command. MFA is an easy layer to add to improve security and it’s commonly included with authentication solutions, so there’s really no excuse. It’s time to get it done.


Werdmüller on Medium

Ma.

We lost my mother, Deborah Monas, on Sunday evening. I was giving her a head rub; my sister Hannah held her hands; my dad Oscar and her brother Steve were at her feet. How is this possible? The world doesn’t feel real. When I was a small child, we used to pull out the sofabed in our postgraduate student flat and watch Doctor Who together under a blanket. What I would give for a TARDIS

We lost my mother, Deborah Monas, on Sunday evening. I was giving her a head rub; my sister Hannah held her hands; my dad Oscar and her brother Steve were at her feet.

How is this possible? The world doesn’t feel real.

When I was a small child, we used to pull out the sofabed in our postgraduate student flat and watch Doctor Who together under a blanket. What I would give for a TARDIS now: a way to correct the fabric of time and space. Clearly something is badly wrong with it.

I want to have something perfect to say, some beautiful encapsulation of who she was, but the truth is that nothing could be perfect enough. There’s no way to write a biography that isn’t an imperfect model. Nothing can adequately describe my beautiful mother: her overwhelming kindness, her sense of justice, her irreverence for tradition and institutions, the love that our house was always, always filled with.

I’ve written a lot about her health in this space. She had dyskeratosis congenita, which first expressed itself as pulmonary fibrosis. She fought it hard. When she began to use supplementary oxygen, my sister and I moved to California to be closer. The double lung transplant she received at UCSF gave us almost an extra decade of time with her, and it was a privilege to be by her side, with my dad and my sister, on this journey.

She loved reading, and both Hannah and I got to share those experiences with her. Hannah read to Ma for years: books by authors like Wade Davis, Octavia Butler, and Tommy Orange. Ma and I shared book recommendations; this year we read Caste and The Nickel Boys, among others. Even as it became harder to read in print, she picked up audiobooks, and kept going, recording what she’d read in a notebook she kept by her bed.

Years ago, we were gifted a Sunfish sailing boat by our family friend Pammy Biscoe. The two of them had sailed on it when they were much younger. Now it became her and my thing: she was happiest out on Waquoit Bay, our striped sails catching the wind just right. She called it a Whoosh. I’ll remember her in an infinity of ways, in an infinity of moments, but being out on the water with her, watching her smile as we picked up speed, is one I’ll treasure the most.

When my parents met, in Berkeley in the 1970s, she worked to fight for affirmative action and tenants’ rights. Her belief and support for progressive causes was an anchor throughout her life. We discovered this week that she had quietly made over 178 progressive donations last year alone, without any of us really knowing. A list of some of the causes she consistently supported follows at the end of this post; in lieu of flowers, we’re asking people to contribute if they have the means.

I want to honor her by furthering what she put into the world. The loving, non-conformist, irreverent, equity-minded spirit that she embodied.

As she lay in her hospital bed, we read Ma messages from people who loved her throughout her life. One, from Hannah’s friend Anita Hurrell, particularly sums up our collective childhood, and the sensibility I want to take forward in my own life, inspired by my mother. I’ll end by sharing it here, with kind permission.

Dearest Deb,

One time you drove us in the van to the seaside and we ate sandwiches with cucumber in them and I thought they tasted delicious and I felt this strong sense of deep content sitting with Hannah in the back listening to her singing and humming for the whole journey. I have no idea where we went, and in my head it was nowhere in England, but rather part of the big-hearted, loving, funny, relaxed, non-conformist world of your family in my childhood — full of your laughter and your enormous kindness. Sitting on the bench in your house in Marston I recall a moment of feeling complete certainty that your pirozhki were the yummiest food that had ever been made. Staying in Wheatley when my mum and dad had gone away we ate popcorn and I felt safe even though at the time I had lost Blue Bear. I remember calling you my second mummy. I’m not sure I was always a very nice kid and was probably very ungrateful then, but now I wish I could convey how I revere you, Oscar, Ben and Hannah in my thoughts and how lucky I feel to keep your example with me. I look back and see how talented Hannah already was then when we were so little and she could just sing and sing and draw in a magic way, how cool she was (her doll Manuela had beautiful black wild curly hair while I chose Charlotte with twee clothes and ridiculous ringlets), what a true feminist you were, how much of parenting you seemed to do much better than we do these days, how generous and homemade and fun and kind the world you and Oscar made was.

You are an asset to the universe. I will always love you very much.

Anita

Causes Ma consistently supported:

ACLU — donate

Planned Parenthood — donate

Progressive Turnout Project — donate

NARAL Pro-Choice America — donate

314 Action Fund — donate

Stop Republicans — donate

National Democratic Training Committee — donate

End Citizens United — donate

KQED Public Media — donate

The Squad Victory Fund — donate

BOLD Democrats PAC — donate

National Bail Out — donate

Phil Arballo for Congress (CA-22) — donate

Scott Sifton for Senate (MO) — donate

Originally published at https://werd.io on June 9, 2021.


Simon Willison

I saw millions compromise their Facebook accounts to fuel fake engagement

I saw millions compromise their Facebook accounts to fuel fake engagement Sophie Zhang, ex-Facebook, describes how millions of Facebook users have signed up for "autolikers" - programs that promise likes and engagement for their posts, in exchange for access to their accounts which are then combined into the larger bot farm and used to provide likes to other posts. "Self-compromise was a widespr

I saw millions compromise their Facebook accounts to fuel fake engagement

Sophie Zhang, ex-Facebook, describes how millions of Facebook users have signed up for "autolikers" - programs that promise likes and engagement for their posts, in exchange for access to their accounts which are then combined into the larger bot farm and used to provide likes to other posts. "Self-compromise was a widespread problem, and possibly the largest single source of existing inauthentic activity on Facebook during my time there. While actual fake accounts can be banned, Facebook is unwilling to disable the accounts of real users who share their accounts with a bot farm."

Tuesday, 08. June 2021

Simon Willison

Quoting Brendan Gregg

When I was a performance consultant I'd show up to random companies who wanted me to fix their computer performance issues. If they trusted me with a login to their production servers, I could help them a lot quicker. To get that trust I knew which tools looked but didn't touch: Which were observability tools and which were experimental tools. "I'll start with observability tools only" is somethi

When I was a performance consultant I'd show up to random companies who wanted me to fix their computer performance issues. If they trusted me with a login to their production servers, I could help them a lot quicker. To get that trust I knew which tools looked but didn't touch: Which were observability tools and which were experimental tools. "I'll start with observability tools only" is something I'd say at the start of every engagement.

Brendan Gregg


Timothy Ruff

The EU Announcement is the Biggest Ever in SSI

On June 3, 2021 a significant thing happened for SSI (self-sovereign identity): The European Union announced a plan for an EU-wide digital wallet that would support drivers licenses, tax filings, opening bank accounts, prescriptions, academic achievements, seamless travel experiences and proving one’s age to enter a nightclub, among many other possibilities.* It’s easily the biggest thing to ever

On June 3, 2021 a significant thing happened for SSI (self-sovereign identity): The European Union announced a plan for an EU-wide digital wallet that would support drivers licenses, tax filings, opening bank accounts, prescriptions, academic achievements, seamless travel experiences and proving one’s age to enter a nightclub, among many other possibilities.*

It’s easily the biggest thing to ever happen to SSI, and arguably the biggest to ever happen to identity generally, digital or otherwise.

Read more at my new blog location, here: https://credentialmaster.com/the-eu-announcement-is-the-biggest-ever-in-ssi/


MyDigitalFootprint

Day 0 CDO language. The translator, interpreter and go-between

Whilst our ongoing agile iteration into information beings is never-ending, there are the first 100 days. But what to focus on? Well, that rose-tinted period of conflicting optimisation is what </Hello, CDO!> is all about. Maintaining sanity when all else has been lost to untested data assumptions is a different problem entirely. On Day zero of being a #CDO, you have to be ready and pre

Whilst our ongoing agile iteration into information beings is never-ending, there are the first 100 days. But what to focus on? Well, that rose-tinted period of conflicting optimisation is what </Hello, CDO!> is all about. Maintaining sanity when all else has been lost to untested data assumptions is a different problem entirely.

On Day zero of being a #CDO, you have to be ready and prepared as a translator, interpreter and go-between. Yes, the essential “translation” of business needs into information requires identifying the appropriate data, the relevant analysis, and the correct interpretations, but that is not what I am talking about. There is a different kind of translation: appropriately modelled, described and analysed data offers a language that enables siloed departments in organisations to talk to each other.

The CDO must have translation skills to help other executives talk about what data means to them and that each party leaves with a common understanding. Exceptional executives can ensure that key concepts from one domain or department are appropriately translated into language and concepts in the other domain.

By example,

Data in the language of the finance department has to do with how to position data revenue and costs in the activities the finance team uses. Getting the budget spent on data, algorithms and analysis into the proper R&D tax claim, the capitalisation table and the correct allocation for depreciation takes time. How and when should data revenue be recognised? How should the margin for data be calculated? It has nothing to do with what the data does. Understanding what KPIs the CFO is tasked with and how you can help them be achieved makes for an excellent working relationship. If the CFO is tasked with improving the balance sheet, how you capitalise spending is critical.

Data in the language of the legal department is a focus on the contractual and regulatory obligations. Understanding what the General Counsel is responsible for is critical, as it has nothing to do with the value data creates for the organisation. If you want certain protections for data, data use and data sharing, working with the legal team to craft holistic on- and offline terms that spread across all contracts is critical.

Data in the language of the marketing team can be about privacy-preserving data sharing and trust, leading to additional brand value. Sitting down with the senior leadership team in marketing to understand their KPIs and BSC can help you translate data into what they need to achieve their goals. It has nothing to do with the value of the data; how they use data is a different issue. However, the CDO must be on top of conflicts that arise with sales targets driven by incentives, KPIs, opt-in demands and contractual boundaries.

Being the translator, interpreter and go-between makes you a bridge, but, as already articulated, it also means you are the new corporate punchbag.

Note to the CEO

The CDO must have translation skills to help your executives speak with other disciplines about data and enable each party to leave with a common understanding. All CDOs will find the language to translate business context into data-enabled initiatives and ensure that key concepts from one domain or department are appropriately translated into language and concepts in the other domain.

We focus on the skills to “translate business needs” into data and information needs during interviews and the recruitment cycle. The additional translator, interpreter and go-between skills described in this article are built in for exceptional CDOs. However, if there is someone already in the team with this strength, it is not essential, but it does create a better working culture. Should you need these skills, they must be proven to you before you offer the job; they are not ones to learn on the job.

Monday, 07. June 2021

Phil Windley's Technometria

Building an SSI Ecosystem: MemberPass and Credit Unions

Summary: How does a functioning credential ecosystem get started? This post looks at MemberPass, the credit union credential ecosystem and explores the strategies credit unions are using to gain broader adoption. My work in self-sovereign identity began with credit unions. It was March of 2016 and I was having a conversation with Timothy Ruff and Jason Law of Evernym about how difficult i

Summary: How does a functioning credential ecosystem get started? This post looks at MemberPass, the credit union credential ecosystem and explores the strategies credit unions are using to gain broader adoption.

My work in self-sovereign identity began with credit unions. It was March of 2016 and I was having a conversation with Timothy Ruff and Jason Law of Evernym about how difficult it would be to get a multi-sided verifiable credential market going. Timothy's response was "You've got to come to Denver next week!" I showed up at a hotel ballroom in Denver to find almost 100 executives from credit unions all across the US clamoring (no, really) for verifiable credentials. I was hooked.

Over five years later, with a few fits and starts, credit unions are deploying credential-based identification systems for their members. To date, seven credit unions have issued credentials to over 22,000 members or about 2% of the eligible membership of those same credit unions.

Why do credit unions care? One word: fraud. Or maybe two: fraud reduction.

It's All About Authentication

Credit unions and their members face the threat of fraud on all sides. And credit unions employ lots of tools to fight it. But ultimately, the problem comes down to the member and credit union authenticating each other. The problem is that doing this securely annoys people.

None of us like to spend a minute–or more–answering security questions at the start of a customer service call. And SMS-based multi-factor authentication is becoming increasingly fraught. Is that text you just got warning you about fraudulent charges on your credit card really from the credit union? It's hard to tell.

Early on, a few intrepid people in the credit union industry recognized that self-sovereign identity (SSI) offered a way out of this mess. Credit unions are often small and band together to form credit union service organizations (CUSOs) that provide them the services they can't build on their own. They formed a CUSO called CULedger (later renamed Bonifii) to make that vision a reality. Bonifii offers an SSI-based solution for credit unions called MemberPass.

MemberPass Trust Triangle

MemberPass allows credit unions to offer their members a verifiable credential that they can use to prove their member number to the credit union. Initially, the MemberPass credential schema is fairly simple, containing only the following attributes:

CredentialDescription
CredentialId
MemberSince
MemberNumber
CredentialName
Institution

Of course, credentials could be much more complicated than this, but this simple schema is sufficient for a member to prove they are in possession of a credential for a specific member number. Members use the MemberPass wallet to connect to the credit union and hold the MemberPass credential.
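For a concrete, if hypothetical, picture of what that schema holds, the sketch below fills the listed attributes with invented values. A real MemberPass credential is issued and cryptographically signed by the credit union through the MemberPass platform rather than assembled as a plain dictionary.

# Hypothetical values for the MemberPass schema attributes listed above.
member_pass_credential = {
    "CredentialDescription": "Proof of credit union membership",
    "CredentialId": "cred-0001",              # invented identifier
    "MemberSince": "2014-03-17",
    "MemberNumber": "123456",
    "CredentialName": "MemberPass",
    "Institution": "Example Credit Union",    # invented institution
}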

MemberPass relies on Bonifii's partner Evernym for technical services. Credit unions integrate their back office applications with the MemberPass platform at Bonifii which relies on cloud services provided by Evernym.

MemberPass Architecture

Growing Adoption

While much of the response to fraud is reactive, MemberPass is proactive. Credit unions work to get members using MemberPass as an active measure to prevent fraud. As I said earlier, to date, seven credit unions have issued credentials to over 22,000 members or about 2% of the eligible membership of those same credit unions. Julie Esser, Bonifii's SVP of Client Engagement expects the number of credit unions using MemberPass to more than double in 2021 and the number of eligible members to jump by almost an order of magnitude.

Increasing the number of credit unions using MemberPass is the first segment in the adoption journey. MemberPass is already integrated with some of the back office platforms that credit unions use, easing the journey. Bonifii is also working with third party integrators to ensure they're technically ready to do the integrations for the rest.

The second segment of the adoption journey is increasing the percentage of members enrolled from the current 2% to 5% and then 10% over the next year. To do that, Bonifii works with credit unions to train frontline staff in the enrollment process. Early enrollments are happening in the branch. But enrollment can also happen on the phone. The phone enrollment process takes 3-5 minutes. The member receives the MemberPass credential while they're on the phone so the call center agent can help with any problems.

First Education Credit Union's President, Jim Yates, says that most new members are signing up. Signing up the larger body of existing members will likely require a move to self-enrollment since many never come into a branch. Self-enrollment is possible within the authenticated context of the credit union's web site. If the member chooses to enroll, they'll be directed to download the MemberPass app and then scan a QR code. This establishes a secure DIDComm connection. The credit union can then make the MemberPass credential offer. UNIFY Financial Credit Union allows self-enrollment now through their online banking application.

Once a member is enrolled, the credential can be used in-person at the branch, in the drive-thru lane (with or without interactive teller machines), on the phone, or online. This is not only more secure, but often more convenient as well. For example, someone going through the drive-thru lane can authenticate without passing plastic credentials back and forth. Logging in no longer involves receiving a text and then typing in the code. And calling into the call center no longer requires answering a series of questions of questionable value.

Instead, a push notification on the member's phone asks them to verify they're the one transacting with the teller, call-center employee, or web site. The member clicks "accept" and they're done. Behind the scenes, this is a proof request made through the already established DID connection. By clicking "accept", the member is responding to the request and proving attributes from their MemberPass verifiable credential.
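To make that flow a little more tangible, here is a deliberately simplified, SDK-agnostic sketch of the proof-request check. The class and field names are assumptions for illustration; a real agent verifies a cryptographic presentation over the DIDComm connection rather than a boolean flag.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class ProofRequest:
    requested_attributes: list[str]   # e.g. ["MemberNumber"]
    credential_name: str              # only accept this credential type

@dataclass
class Presentation:
    revealed: dict[str, str]          # attribute -> value, backed by a real proof in practice
    credential_name: str
    signature_valid: bool             # stand-in for cryptographic verification

def verify_member(request: ProofRequest, presentation: Presentation) -> bool:
    # Accept only a valid presentation of the right credential that reveals every requested attribute
    if not presentation.signature_valid:
        return False
    if presentation.credential_name != request.credential_name:
        return False
    return all(attr in presentation.revealed for attr in request.requested_attributes)

# The credit union asks the member's wallet to prove their member number:
request = ProofRequest(["MemberNumber"], credential_name="MemberPass")
presentation = Presentation({"MemberNumber": "123456"}, "MemberPass", signature_valid=True)
assert verify_member(request, presentation)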

And it's a win for the credit unions too. Desert Financial's EVP Ron Amstutz says it's an important step in reducing fraud. Desert Financial knows they're talking to a member and the member knows they're talking to Desert Financial. Desert Financial is initially recruiting members for the program who call into the call center frequently since that's a big pain point.

Zach Eychaner from 4Front Credit Union says the call center is the first focus for them as well. They are able to shave 30-40 seconds off of each call. With 20,000 calls a year, that time adds up.

The Road Ahead

The MemberPass credential with its limited set of attributes is just a start. The future could include using MemberPass at an ATM or to open an account at another credit union. Bonifii's Esser says "Once they get used to MemberPass, members will expect to use it everywhere."

Here are a few things that credit unions could do to make more use of credentials and SSI:

As we've seen, the current MemberPass schema is very simple–it doesn't even include the member's name. A schema with more information in it–information that's been validated by the credit union–would make it usable outside the narrow use case of authenticating the member to the credit union and offer more value to members.

Credit unions could offer a pre-approval credential for loans that the member could hold and then use when they were ready for a loan.

Bonifii could issue a credential for KYC use at credit unions, banks, and in other financial transactions.

Shared branching is a hot topic in the credit union industry right now. Twenty-three thousand branches looks like a mega bank. But the identity fraud problems are even harder to solve across credit unions. MemberPass can help make shared branching a reality.

Employers and employee groups historically make up the foundation of credit unions. Credit unions could partner with employers to create a credential ecosystem.

The DIDComm connection is a secure messaging system. Credit unions can use this secure channel for sending notifications to members, or for customer service.

The lessons from MemberPass and the credit union industry are important for anyone launching a credential effort:

Pay attention to the process and tailor it to your industry. Fraud reduction is the focus. Credit unions are evolving their enrollment process and targeting the parts of the process where they can get the most leverage.

Start simple. MemberPass is a simple credential but it serves an important purpose: reliably authenticating the member to reduce fraud.

Plan for the future, but don't get distracted. There are a thousand use cases for credentials in financial services. Get some early wins with your simple "MVVC", minimum viable verifiable credential, before you move on to the rest.

Stay the course. Building a credential ecosystem is more about human factors than technology. In the words of Julie Esser "The technology is baked." But that's just the start. The MemberPass ecosystem is complicated by regulation, scale, and a decentralized collection of players, each with their own problems and goals. Building an ecosystem in this environment isn't easy, but it's where the reward is.

When the Covid-19 pandemic caused credit union branches to close, call center volume skyrocketed and drive-thru lanes were crowded. As a result, fraud also increased. This created a heightened awareness of the importance of digital identity across the credit union industry. But while the pandemic might have pushed things along, many in the credit union industry had already concluded that self-sovereign identity was an answer that was not only flexible, interoperable, and secure, but also one that was aligned with the values of the member-owned cooperatives that make up the credit union industry.

Tags: identity ssi use+cases verifiable+credentials finance

Sunday, 06. June 2021

Simon Willison

An incomplete list of skills senior engineers need, beyond coding

An incomplete list of skills senior engineers need, beyond coding By Camille Fournier, author of my favourite book on engineering management "The Manager's Path". Number one is "How to run a meeting, and no, being the person who talks the most in the meeting is not the same thing as running it".

An incomplete list of skills senior engineers need, beyond coding

By Camille Fournier, author of my favourite book on engineering management "The Manager's Path". Number one is "How to run a meeting, and no, being the person who talks the most in the meeting is not the same thing as running it".


Apple’s tightly controlled App Store is teeming with scams

Apple’s tightly controlled App Store is teeming with scams I'm quoted in an article in the Washington Post today (linked at the top of the homepage!) explaining how I got scammed on the App Store and spent $19 on a TV remote app with a similar name to the official Samsung app. I mistakenly assumed that the App Store review process wouldn't allow an app called "Smart Things" to show up in search

Apple’s tightly controlled App Store is teeming with scams

I'm quoted in an article in the Washington Post today (linked at the top of the homepage!) explaining how I got scammed on the App Store and spent $19 on a TV remote app with a similar name to the official Samsung app. I mistakenly assumed that the App Store review process wouldn't allow an app called "Smart Things" to show up in search when I was looking for SmartThings, the official name - and assumed that Samsung were nickel-and-diming their customers rather than expecting the App Store review process to have failed so obviously.


The humble hash aggregate

The humble hash aggregate Today I learned that "hash aggregate" is the name for the algorithm where you split a list of tuples on a common key, run an aggregation against each resulting group and combine the results back together again - I'd previously thought if this in terms of map/reduce but hash aggregate is a much older term used widely by SQL engines - I've seen it come up in PostgreSQL ex

The humble hash aggregate

Today I learned that "hash aggregate" is the name for the algorithm where you split a list of tuples on a common key, run an aggregation against each resulting group and combine the results back together again - I'd previously thought of this in terms of map/reduce but hash aggregate is a much older term used widely by SQL engines - I've seen it come up in PostgreSQL explain query output (for GROUP BY) before but didn't know what it meant.

Via @vboykis
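For anyone who, like me, first met this idea as map/reduce, here is a tiny sketch of the hash aggregate pattern in Python. The data is made up, and a real SQL engine does this in a streaming, memory-aware way rather than building full lists per group.

# Group rows by key with a hash table, aggregate each group, combine the results -
# roughly what a SQL engine's hash aggregate does for GROUP BY.
from collections import defaultdict

rows = [("uk", 3), ("us", 5), ("uk", 2), ("fr", 7), ("us", 1)]

def hash_aggregate(pairs, agg=sum):
    groups = defaultdict(list)
    for key, value in pairs:       # build phase: hash each row into its bucket
        groups[key].append(value)
    return {key: agg(values) for key, values in groups.items()}   # aggregate phase

print(hash_aggregate(rows))        # {'uk': 5, 'us': 6, 'fr': 7}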

Saturday, 05. June 2021

Simon Willison

Reflected cross-site scripting issue in Datasette

Reflected cross-site scripting issue in Datasette Here's the GitHub security advisory I published for the XSS hole in Datasette. The fix is available in versions 0.57 and 0.56.1, both released today. Via @simonw

Reflected cross-site scripting issue in Datasette

Here's the GitHub security advisory I published for the XSS hole in Datasette. The fix is available in versions 0.57 and 0.56.1, both released today.

Via @simonw


Datasette 0.57

Datasette 0.57 Released today, Datasette 0.57 has new options for controlling which columns are visible on a table page, a way to show more than the default 30 facet results, a whole bunch of smaller improvements and a fix for a severe cross-site scripting security vulnerability.

Datasette 0.57

Released today, Datasette 0.57 has new options for controlling which columns are visible on a table page, a way to show more than the default 30 facet results, a whole bunch of smaller improvements and a fix for a severe cross-site scripting security vulnerability.

Friday, 04. June 2021

MyDigitalFootprint

The shadowy hierarchy

I remain curious about how I can make better or wiser decisions.  I am sharing this as part of my journey as I unpack my own boundaries and models that prevent me from making better decisions.   Context I have personally, and will always, dislike and distrust “traditional” hierarchy, probably because I perceived that the “power” wielded on me would never be available to me.   I
I remain curious about how I can make better or wiser decisions.  I am sharing this as part of my journey as I unpack my own boundaries and models that prevent me from making better decisions.  
Context

I personally dislike and distrust, and always will, “traditional” hierarchy, probably because I perceived that the “power” wielded on me would never be available to me. I was always on the outside; it is the joy of neuro-diversity that you become aware at an early age that to fit in with the system and structure, you have to align to it, and for me there was no natural alignment. You either fight to fit in, fight the system or create your own. For many, fitting in is natural; for me it never happened, and I stupidly opted for creating my own. I rebelled against the system and structures as I could only see hierarchy as a method of control in service of something I did not align to - telling me to do things that made no sense. Write with your right hand as a lefty.

I am not alone; from Machiavelli to Second Lieutenant Boris Drubetskoy in Tolstoy's War and Peace, many have realised that there are two structures of power. There is an obvious hierarchy of control, reporting and subordination, and there is a second unsaid and unwritten one of lobby and access. The obvious one is easy to rebel against. The lobby and access inner ring is where true power rests; it is the one to try and join. However, it is invitation-only, and many will sacrifice more than they realise in terms of ethics, morals and beliefs to be at the table and remain part of it. When thinking about data, bias and decision making, the incentive to join and the incentives once in the club are critically important drivers. This unsaid shadowy hierarchy creates outcomes we did not plan for, as some will jeopardise judgment and trust to be rewarded. I would not align to this one either, and it was self-evident that you can only become a partner in a big firm when you have a big mortgage, depending on a big salary. The significant debt and dependence on salary provide leverage. The humour in “Yes Minister”, the 1980s BBC political satire sitcom, exposed much of this.

Why does this matter? We increasingly talk about the equality and diversity (race, gender and neuro) agendas at the board and in the senior leadership team, but it is increasingly evident that there is an inner circle, which means we are not making smarter or better decisions. The targets for transparency and equality are just that: targets. (says the white over-50 male!)

I appreciate that it is difficult to separate the two dependent and co-joined aspects of the hierarchy. One aspect represents power and control, which is realised in terms of budget, size, scope, authority, and political dominance. The second is how and where important decisions sit, but those who lobby may be outside of the insights that data now provides for complex decision making.   

The separate but parallel structures have worked for a long time as the consequences of long term decisions and intergenerational outcomes were not rewarded or measured. ESG, climate and the brutality of our assault on our environment, coupled with data, means this is all changing, but those who gave everything to join the inner circle have an incentive to maintain the status quo for as long as possible. 

What ghosts are in the system that mean the decisions we make, with the best intentions, do not create the outcomes we desire?

Take a moment to reflect.  Are there separate hierarchies in your experience, or are they the same thing? It matters today more than ever before as we now have to make long term decisions against the backdrop of our systems of short term incentives and rewards.  The inner ring becomes blind to a structure that always gives the feel of power as their short term decisions are highly rewarded through immediate incentives, which means being in the club is valuable for the few.  

This viewpoint is looking at these two hierarchies (formal and informal) but positioning where either can be in the shadow of the other. Why am I writing this? Because we appear to find short term incentive-driven decisions easy but struggle to make long term judgments. I do not doubt the integrity of leaders who want to make better long term decisions using data and be better ancestors but get frustrated that it does not work. There are likely ghosts from the old decision making and lobby hierarchy in our current heuristics that create outcomes that were not planned.

Note: The term “Shadow Hierarchy” is used in management textbooks to describe the difference between the formal, published and public structure and the one that actually holds power and makes decisions. Googling “The Shadow of Hierarchy” will lead you to the 2008 paper from Adrienne Héritier and Dirk Lehmkuhl, which was part of an EU funded project looking at “new models of governance.”


Our past is not easy to face up to. 

There appears to be a much longer story that starts when decisions, power and control were all united. Over the past 5000 years, we have become increasingly short term focused and separated the decision-making process for the long term from power and control, based on the science of management incentive and short term goals. There is no doubt that the structure of economics, along with other biases, comes into play. However, we have created data technology on top of complex relational dependencies, which means signals that should become noise through the layers of analysis in fact become significant, distracting “interruptions” at the board. Whilst in the old system, noise and signals were hidden by the informal lobby that prevented more thoughtful long term decision making based on data, today they (signals and noise) create paralysis at all levels in structures as everyone knows everything.

Should the head of a religious movement be leading planning for the next 100 years or focussing on whether the budget is spent according to plan this quarter? Should our political leaders face the daily news headlines and respond, or ensure we are equipped to face a global pandemic? Should the head of state be allowed to focus on sustainability, climate and global concerns, or on defending their grandchildren’s choices? Noise and news come to dominate the interruptions. In a joined power and decision hierarchy structure a long time ago, a few individuals could make those decisions and choices, and lobby worked. Today our hierarchies, analogous to decision making, have become paralysed, confused and ineffective as the volume of data and signals mixed with noise has risen to a level where our ability to know what is the right thing to do is broken. Currently, we have not transitioned to something that works. Indeed lobby has also failed as it has become increasingly linked to short term reward and incentives.

The figure below captures this concept on a two-axis chart of scale and impact.  The bottom right being an idea, it has no scale and little impact.  The top right is big government, global businesses, global NGO’s and charities, global religion and large scale movements.   On the journey from the idea to scale, we either transition from our ability to make long term decisions to focus on a quarter by quarter reporting justifying the delta between actual and plan *or* hold onto inter-generational consequences. Risk, funding, capital and markets have a significant impact on the loss of that long term thinking, as the rewards for the players become aligned to short term incentives. Whilst the long term hierarchies become corrupted, the shadow hierarchy of lobby gives way to a different incentive and power game.  Impact and scale create the same problems irrespective of the organisation; short-termism can be recognised and rewarded.   

Joseph, of Bible story and multicoloured dream coat fame, aka the Tim Rice and Andrew Lloyd Webber musical, offers one of the earliest written examples of man’s capability for longer-term planning. It was a 14-year cycle, so not long term: 7 years of abundance followed by 7 of famine. Grow, harvest, store and distribute later. 4,000 years on, a 14-year strategic planning cycle looks massive but is still short compared to the famed 100-years-plus Chinese and Japanese plans. I may have rose-tinted glasses that we were once better at long-range forecasting, but this is a good piece from three world-leading experts: “is humanity, in fact, unable to successfully plan for the long-term future?” In the context of our currently limiting systems - yes. We have to break the system. I smell revolution.

Complexity of relationships

No doubt, a small part of the issue with our inability to long-range plan is the management of the complexity in relationships. The web of collaborative relationships that we need to consider only ever gets more strained, detailed, involved, dependent, and unbalanced, as each party aligns to their incentives and rewards. Critically, the unbalanced nature of the relationship means it is increasingly difficult to predict outcomes and reactions. (search: emergent complex systems). Our commercial framing is explored in the figure below.

The shareholder has a dependency on the board to make decisions that support their funding of the capital. In contrast, the directors are accountable and responsible for their decisions, including unlimited liability. Everyone now has a voice that can affect choices and actions. The director/regulator/ecosystem axis is dominated by who has the accountability and responsibilities, which are different for each stakeholder in an ecosystem and are often driving in different directions because of Balanced Scorecards (BSC), KPIs, incentives and being in the club. OKRs (objectives and key results) are no better, and the distinction between build vs operate is a false one for the long term. Building the wrong thing can be rewarded very well in OKR land.

Continuing around the model, there is a remarkably fluid relationship between the executive team and the board, where the board depends on the exec team. Still, the exec team is accountable to the board. (The level of fluidity varies by nation and company law.) The relationships between and with the customer are particularly misunderstood, but ultimately the law says the directors are held accountable. Each of the roles in the chain of relationships from shareholders to the executive team requires individuals capable of dealing with complex judgement. However, this assumes that the first part of this viewpoint is null. Let’s expand.

We face three connected issues, which are not mutually exclusive because the individuals involved are part of a team or an ecosystem.

Some individuals in positions of influence and power are part of the club or want to be in the club and therefore have incentives to join the club and stay in the club. Their judgement and actions are aligned with the incentives of being in the club.

Some individuals have reached influence and power but cannot grasp the new mental models and skills for complex judgment.  Their judgement and actions are aligned with their experience.

Some individuals can understand the complexity and seek to explain and justify decisions and actions. 

Taking this thinking and plotting this on skills/ ability vs decision environment.

The diagram above shows a decreasing number of skilled individuals who can cope with complexity. This is because the training system is about effectiveness and efficiency and not about determining if you are doing the right thing (efficacy).  Many arrive at senior roles and find they have to shift their mindsets; some do, and some don’t, but they both now have decision capacity. 

For example, we know that systems that create inequality, insecurity, and unsustainable practices are not easily transformed.  Think of our government and economy. We have a system where 95% of the world live meal by meal, day by day, week by week or month by month.  An additional 4.99% can survive for 6 to 8 months on the available cash and funds. Less than 0.01% (80 Million) of the world’s population can plan for more than a year.  When you have those few in power and lobby, will they ever need to vote for change?  Worth following and reading Umair Haque; I love this essay “How the Economy is Designed to Keep You Poor and Powerless.” 

The laws of 1% explore outcomes if we change everything by + or - 1% and what it means to human behaviours.

What do “ghosts in the system” look like? 

Imagine you are at the fairground, and there is one of the stalls where you get to throw something to win a prize. In the UK, we have the Coconut Shy. You pay to get three balls or bags, stand behind the line and throw them one at a time at your coconut of choice. Knock a coconut off, and you win it. A simple game of skill (apparently). However, when there is a ghost in the system, it is not so simple. You line up your ball on coconut number one (it is the biggest one) and throw it with all your skill. As the ball approaches, the coconut moves and your ball sails past. You line up ball 2, aiming again at the largest coconut (the biggest prize); this time, you miss, but coconut number 4 wobbles. Your last chance on this budget. Lining up coconut number one for the last time, you hit it, but it does not fall off; coconut number 6, the smallest one, does. The ghosts win. Your chosen target and intended outcome was coconut number 1; you got number 6. It was not your lack of skill; coconut number 1 is glued on. Different motivations and rewards.

The ghosts make signals and noise. 

This viewpoint started from hierarchy, and I want to return to thinking about the two hierarchies, one of decision and one of power, and unpack the issues that the abundance of data has created for us. The diagram below sits with the idea that there is a natural order of decision making and power. The movement from the bottom to the peak is a move in the decision time horizon. Overlaid on this model are VUCA (volatile, uncertain, complex and ambiguous) situations. VUCA came from preparing/training the soldier who would have to face a situation where they had to decide.

The hierarchy is worth a paragraph to unpack, as the context of VUCA is situational. Front line workers are trained to be reactive. They know how to act based not on scenario planning but on the situation (police, army, emergency, fire, medical, call centre, customer-facing representatives). How they react is a matter of rules and heuristics, repeated numerous times. As the NIKE brand says - “just do it”, which is what you can do when you have trained to do the same thing for 10,000 hours - you don’t have to think; it is a reaction. Above this line is the management layer, which ensures the environment the workers operate in has the lowest possible risk. Above them is a further management layer, which thinks about “how do we adapt to new situations and threats?” Above them is the strategy layer, which considers “what skills do we need for the next five years, where are the gaps, and how do we access the skills?” For most companies, this takes us to the CEO. However, in public operations, there are two more layers. The transformational one is thinking about the infrastructure for the next twenty years and, finally, the policymakers. The policy leadership should be thinking 50 years hence and considering what policy we will need to form and how. Even at this simple layering, we can see that global leaders from presidents, prime ministers, and heads of state struggle to plan for 5 years yet are tasked with 50. We are not 10x better; we have created a system 10 times worse.

What we should be witnessing is that one layer’s signal is another layer’s noise. Each layer takes all the signal from below, which becomes its noise, but detects new signals that it works to. An upward flow - not downward instruction. As the figure below shows, each layer has to apply different skills to find the signals needed to do their role. Interestingly, the lowest layers have the most operational discretion as they are exposed to the highest operational risks. Most diversity is welcome, but not from those who cannot do the same thing every time. Innovation may not be your friend in the depths. The strategy layer has peak decision discretion. Innovation is critical, and so is diversity. At the pinnacle is policy discretion, where there is the least personal risk; diversity is essential, but so is adherence to a north star and a single vision - so less diversity might help. Diversity is about situational improvement in decision making for better outcomes. Ouch.

However, this is not what we are witnessing right now - the above is theoretical rubbish. What we are feeling is summed up in the diagram below. There are signals from the bottom, creating signals at the top, with every layer adding more signals and noise. VUCA has gone from situational layers to the entire organisation, where everyone is reacting to everything. The ghosts of old processes and previous decisions are no longer limited to a layer, but everyone owns everything and has to react and understand. To repeat a previous line: Today our hierarchies, analogous to decision making, have become paralysed, confused and ineffective as the volume of data and signals mixed with noise has risen to a level where our ability to know what is the right thing to do is broken. Currently, we have not transitioned to something that works.

The observation took time, but the question for us all is: do we believe it, and if so, what can we do? Whether you believe it should be reflected on within the context of being in the club and how much you have already compromised. Our Millennials are not in the club and will not compromise; ask them.

Our stability has vanished, and our tools have broken.

Even to a casual observer, we live in turbulent times, which is seen through an increase in VUCA. The difficulty facing boards, who carry responsibility for their decisions, cannot be overstated. We have to deal with the ghosts of the past, the voice of the present and the spirits of the future. The diagram below brings out some of the tensions and conflicts being faced as we struggle to determine what we are optimising for. The two axes are communication (said and unsaid) and status (known and unknown). The unsaid is that which is not written or spoken but assumed, often to avoid more conflict.

We have to deal with the ghosts of the past, the voice of the present and the spirits of the future.

Top right (said and known). These are the day to day operational aspects of the board and senior leadership team. There are two camps (supporter and action owner) at the board, with 10 people involved in decision making. The focus is on management, KPIs, BSC and reporting. The status is known, backed by data, and everyone is able and capable of engaging in the topics. We are married to this quadrant as it is easy and pays the remuneration, and we are incentivised to focus on it. This quadrant has traditionally depended on stability, but with VUCA disrupting that and the volume of signal and noise rising, we have to spend all our time here as that is all we have time for.

The bottom right is said and unknown.  In this situation, everyone has a view about the unknown resulting in 10 camps (personal opinions and experience) with the 10 people.  However, communication tends to be frank and honest. We go to this quadrant every now and again but quickly withdraw to safer grounds.  

The known and unsaid, top left. The unsaid here are the assumptions that we all make about everyone else in the room and their viewpoints. This time there are 100 camps, as we all make assumptions about what everyone else thinks about everyone, because there is no communication. The principal reasons not to communicate are conflict, dominance and leadership style. We are trapped by the debt we have (mortgage, credit, school fees, lifestyle, divorce) and need the salary; therefore, there is a degree of control. We try to avoid this quadrant but every now and then, we end up there because of other stresses, pressures and the need for a diversion or to win a political game.

The bottom left, the unknown and unsaid.  The ecosystem has a voice, the partners of the directors have a voice, everyone has a voice, and many, due to previous flights and allegiances, are in several camps at the same time.  The crux is that we are human and bring our differences, but it is very messy because it is unknown and unsaid. But this quadrant represents the volatility, uncertainty, complexity and ambiguity we are currently facing, and we don’t have the tools to deal with it. We have a preference based on skills, experience and incentives to focus on the top right. 

Our processes and methods enable only certain decisions

One critical aspect of being in a leadership capacity is to question and determine how our processes and methods guide and frame certain decisions, which means we have to unpack legacy and find the ghosts in the system.

Legacy within this framing is threefold. Decisions. Decisions. Decisions. These are:

Previous incentives and power games created decisions that created processes, methods and rules; they are now ghosts in the system. These decisions were taken so long ago that no one knows why, how or when they were decided. It is the way we do it; it is our IP, our brand.

Decisions that created “information and technology debt” included embedded and baked-in systems, hidden and no-longer supported code, and automation based on tools and data biased when created.

Decisions that created noise in the hierarchy to lose or filter signals that someone did not want to hear. It was the creation of layers, reports, practices, structural regulation and unchallenged assumptions.

Unpacking legacy questions will take time. It is worth asking questions about legacy when you are new to a company and then verifying them over time as we become blind to the tools that mould us.

I am focused on this because I want to determine how I can make smarter/better decisions with data. For that, I need a data decision framework. Therefore I tend to ask: what one thing do we as a leadership team want from our data? The responses vary but include:

Evidence-based, actionable insights

What should we automate?

How do we know we are doing the right thing?

Where are there efficiencies to be gained?

What do customers really want?

How to manipulate customers to increase margin and revenues?

Where are risks that we cannot see?

What is being hidden that we cannot see?

If you look at this list in the context of what tools and decisions already frame the response, are these what we are looking for data to answer, or are we looking to data to affirm/justify what we have already decided? A different response, which no one will say out loud, is “to justify what we are already doing!” This fits into the known/unsaid quadrant in the previous matrix: the top left, the one we avoid opening up.

Data has bias because of previous decisions.  Or we can write, “the ghost of previous decisions will guide our current decisions”.  Importantly, our data, which we trust, is only the representation of the past, which means our tools fail.

Therefore, as a leadership team, we have to find non-data tools to check what decisions from the past are biasing the current data, processes and tools. We cannot usefully answer the question we have set ourselves, "What is the one thing that we, as a team and organisation, want our data to drive, deliver or provide?", without understanding the situation.

The CTO knows that they have to build a new platform when the bug list, new feature development and maintenance costs are bigger and will take more time than developing a new platform — this is the technology debt question. The CIO, or the newly created #CDO role, has to understand what the information debt is. The CTO will struggle as there is no clear path from policy to code. Similarly, the CIO/CDO struggles with no clear path from policy (what one thing) to better data for the decisions we require. The data leadership team inherit and are now accountable for previous ghosts and decisions, which constrain what is now possible as the biased tool has created what we have. The costs of collecting, labelling, holding, sorting and creating training data continually increase, creating a more significant gap and misalignment in values and expectations from data.

“We become what we behold. We shape our tools, and then our tools shape us” is often mistakenly attributed to Marshall McLuhan and called McLuhan Law. The quote was actually written by Father John Culkin, SJ, a Professor of Communication at Fordham University in New York and friend of McLuhan. Such is the problem with data.

As we have separate decisions, power and now data, perhaps we should reflect on these questions as a leadership team.

What do we want to become, and what tools and data will help us?

What tools do we use, and what will they enable us to become?

What is the minimum viable data set required to give the best value?

Do our tools and data trap us?

Is the work of data to “detect and measure” or to “enable change and transform”? 



The laws of 1% - how far before you reach a revolution?

What happens when we decrease life support by 1%

Taking water down by 1% per day, once you pass a threshold of low water intake, death is a certainty in the short term (blue line).  Taking food down by 1% per day, once you pass a threshold, death is a certainty, but it is slower to take you from life than reducing water (green line). Taking lifestyle or experience down by 1% per day is more challenging and more complex; you will likely die earlier, but death is not facing you (black line).  We all die, but lack of water means it happens now, lack of food means it happens soon, and lack of lifestyle means life could have been longer (black line).  That leaves purple as the optimal.

When reducing water by 1%, you reach a point very quickly where revolution is worth it as there is nothing to lose.   When reducing food by 1%, you reach a point slowly where revolution is worth it as there is less to lose. Still, if you go too far past the point where sufficient food gives you agency and capacity, you lose the ability to win the revolution; and if you have just enough, why give it up?  When reducing lifestyle by 1%, revolution may happen when enough people have nothing left to lose, and that point may never come. 

Taking away can lead to revolution. How do you prevent revolution? Improve by 1%.

What happens when we increase life support by 1%

Increasing water by 1% per day, once you pass a threshold of high water intake, death is a certainty from drowning (blue line), but it is very hard to enforce.  Increasing food by 1% per day, there is no threshold where death is a certainty because it is so slow. Increasing lifestyle or experience by 1% per day is more straightforward; you will still die from something, but death is not facing you (black line).  Whilst we all die, too much water, food or lifestyle puts death into the distance, and you have agency and choice. 

 

Making Stability look like 1% per year

When change happens at 1% per day, we notice. When does 1% look like stability?  Over a week, a month or a year?  If I change water, food or lifestyle up or down by 1% per year, life will not change much. However, if it goes down, you will reach a point in 20 years where you can smell revolution. If it goes up, do we become comfortably numb and depend on hope rather than change? 
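To put rough numbers on the 1% intuition, here is a minimal Python sketch, assuming "1% per day" (or per year) means a compounding 1% of the previous period's level, which the text does not state explicitly:

level = 1.0
days = 0
while level > 0.5:      # how long until the level has halved at 1% per day
    level *= 0.99
    days += 1
print(days)             # 69 days

print(0.99 ** 20)       # ~0.82: a 1% yearly decline leaves about 82% after 20 years

On that assumption, a daily 1% cut halves things in a little over two months, while a yearly 1% cut takes two decades to shave off roughly a fifth, which is about the 20-year horizon mentioned above.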

Is growth just a mask for the prevention of a power change?  How much will you have to lose before a revolution?

If the climate changes up by 1% per year - how long will it take?  If the climate changes down by 1% - how long will it take?

Thursday, 03. June 2021

Hyperonomy Digital Identity Lab

Hydroponic Pods


Simon Willison

Weeknotes: Docker architectures, sqlite-utils 3.7, nearly there with Datasette 0.57


This week I learned a whole bunch about using Docker to emulate different architectures, released sqlite-utils 3.7 and made a ton of progress towards the almost-ready-to-ship Datasette 0.57.

Development environments on the Mac M1

We added another full-time developer to the VIAL project this week, which came with a new requirement: we needed our development environment to work on an M1 Mac.

"Can't be that hard", I thought.

After several rounds of attempted fixes, I decided I needed an M1 for myself. I ended up hiring an M1 Mac Mini from MacStadium, and dove into the horror show that is getting a relatively straight-forward Django app with a small but crucial number of C dependencies (most notably psycopg2 and cryptography) running in a Docker container on an M1 machine.

Having almost beaten all of the dependencies into shape, I hit a free(): invalid pointer error. With no idea how to even start debugging that, I wrote up what I had learned so far and declared defeat on Twitter.

the possibly-good news is that you can emulate aarch64-linux on other platforms Docker runs on

... said Mo McRoberts. And it turns out you can!

The short version: I'm using Docker Compose, and it turned out all I had to do was add platform: linux/amd64 to my docker-compose.yml file and Docker built and ran my container under that architecture, using QEMU as an emulation layer. And just like that, everything on the M1 started working!
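For reference, a minimal sketch of what that docker-compose.yml change looks like; the service name, build context and ports here are placeholders, not the VIAL configuration:

services:
  web:
    build: .
    platform: linux/amd64   # run this service under x86-64 emulation (QEMU) on the M1
    ports:
      - "8000:8000"

With that in place, docker-compose build and docker-compose up produce and run the amd64 image through the emulation layer.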

Performance isn't brilliant, but it works. In the longer run it would be great to get the whole environment running on the native architecture for the M1, but it's great to have found an escape hatch.

More QEMU: testing Datasette on different architectures

Blair Drummond contributed a patch for Datasette that enables building the official Datasette Docker container across multiple architectures, using the Docker buildx mechanism.

It successfully builds and publishes images, which is cool... but what's a good way to test them?

Again, the --platform option to Docker holds the key. Here's a one-liner that runs the Datasette image built for linux/s390x on my laptop, via an emulation layer:

% docker run --platform linux/s390x \
    -v `pwd`:/mnt -p 8001:8001 \
    datasetteproject/datasette:0.57a1 datasette -p 8001 -h 0.0.0.0

And here's a recipe that confirms the platform by outputting the result of Python's platform.uname() function:

% docker run -it --platform linux/s390x -v `pwd`:/mnt \
    -p 8001:8001 datasetteproject/datasette:0.57a1 \
    python -c 'import platform; print(platform.uname())'
uname_result(
    system='Linux',
    node='d14916ca91df',
    release='4.19.121-linuxkit',
    version='#1 SMP Thu Jan 21 15:36:34 UTC 2021',
    machine='s390x'
)

Ideally I'd like to run the full Datasette test suite inside the container. That requires some extra dependencies, plus mounting the Datasette root folder (complete with tests) inside the container. Here's a hacky way to do that:

docker run -it -v `pwd`:/mnt --platform linux/amd64 \
    datasetteproject/datasette:0.57a1 bash -c '
    pip install "pytest>=5.2.2,<6.3.0" \
        "pytest-xdist>=2.2.1,<2.3" \
        "pytest-asyncio>=0.10,<0.16" \
        "beautifulsoup4>=4.8.1,<4.10.0" \
        "black==21.5b1" \
        "pytest-timeout>=1.4.2,<1.5" \
        "trustme>=0.7,<0.8" \
    && cd /mnt && pytest'

Before I release 0.57 I want to use this trick to make sure I'm only building official Docker containers for platforms on which the test suite passes.

sqlite-utils 3.7

This was an absolutely classic case of yak shaving. I noticed that exporting CSV data from covid-19.datasettes.com/covid/ny_times_us_counties - now with 1,378,190 rows - was running really slowly.

That page is sorted by date, descending - and Datasette large exports work using keyset pagination, which means they execute 1,300 SQL queries (1000 rows per query) ordered by date, descending with a filter and a limit.

There was an index on date but it was in ascending order. SQLite also supports descending order indexes, and a micro-benchmark suggested that this could speed things up - each query could take 10ms instead of ~200ms.
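As a rough illustration of the idea in plain SQLite, rather than the sqlite-utils API itself, here is what a descending index and the kind of keyset-pagination query it serves look like (the county and cases columns are assumed for the example, and real keyset pagination would also tie-break on a second column):

import sqlite3

conn = sqlite3.connect("covid.db")

# An index in descending order matches queries that ORDER BY date DESC.
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_counties_date_desc "
    "ON ny_times_us_counties (date DESC)"
)

# Keyset pagination: fetch the next batch of 1,000 rows that come after
# the last date seen on the previous page.
rows = conn.execute(
    "SELECT date, county, cases FROM ny_times_us_counties "
    "WHERE date < ? ORDER BY date DESC LIMIT 1000",
    ("2021-03-01",),
).fetchall()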

So I needed to teach sqlite-utils to create descending order indexes. And then since I had a bunch of functionality accumulated since version 3.6 back in February, I cleaned those up, fixed an additional bug and shipped a 3.7 release.

Then when I applied the fix to my covid-19 project it made almost no difference to the performance at all! It turned out I had been shaving entirely the wrong yak.

The real problem was that each page of results was unnecessarily calculating facets, suggested facets and a full table count - potentially a thousand times when returning a million streaming rows. Which leads me to...

Progress towards Datasette 0.57

This was the yak I needed to shave all along. Some highlights from the past 8 days:

- I landed ?_col= and ?_nocol= - you can now select exactly which columns you would like to see on the table page (and in the CSV and JSON exports).
- A new ?_facet_size=max option (#1337), which is now linked to from the … shown at the bottom of a truncated list of facet results. This is particularly useful for things like lists of states - 50 gets truncated to 30, but the new link lets you see all 50 in one place.
- You can now opt-out of both facets and table counts with the new ?_nofacet=1 and ?_nocount=1 options - both of which are used to greatly speed up large CSV exports.
- And a fix for an issue first opened in November 2019 - if your SQL query throws an error, Datasette now shows you the error in context with the SQL query so you can edit it without having to hit "back" to recover it! This may seem like a small thing but it was a surprisingly fiddly fix - I'm so glad to finally have it done though.
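Put together, a large export can now skip most of that per-page overhead. A hypothetical CSV export of the counties table might combine the options described above like /covid/ny_times_us_counties.csv?_nofacet=1&_nocount=1&_col=date&_col=county&_col=cases (the table and column names here are illustrative, not a documented URL).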

I hope to release 0.57 final later this week. In the meantime some of the above is available in the 0.57a1 alpha.

TIL this week
- Turning an array of arrays into objects with jq
- Running Docker on an M1 Mac
- Finding CSV files that start with a BOM using ripgrep

Releases this week
- sqlite-utils: 3.7 - (74 releases total) - 2021-05-29
  Python CLI utility and library for manipulating SQLite databases
- datasette: 0.57a1 - (87 releases total) - 2021-05-27
  An open source multi-tool for exploring and publishing data
- django-sql-dashboard: 0.15 - (29 releases total) - 2021-05-25
  Django app for building dashboards using raw SQL queries

Wednesday, 02. June 2021

Phil Windley's Technometria

SSI Interaction Patterns


Summary: Interaction patterns in SSI differ from what we see in a traditional IAM system that is performing authentication and authorization. This post discusses three different authentication and authorization patterns and then shows how they are all specializations of the general verifiable credential exchange pattern.

Last year, I wrote about how digital relationships are operationalized in response to a post from Doc Searls about SSI wallets. The wallet (and the agent it is paired with) is a key player in SSI workflows. A recent exchange with the good folks at TechVision Research made me realize that I hadn't ever written about the patterns that an SSI wallet uses to realize operational digital relationships. Today I'm going to take a stab at three simple authentication and authorization patterns in SSI to show the interactions necessary to accomplish these foundational workflows. Finally, I'll show how all three are just specializations of the standard verifiable credential exchange pattern.

DID Authentication Pattern

The simplest authentication pattern uses decentralized identifiers (DIDs) as autonomic identifiers to establish a peer relationship. Because of their mutual authentication capabilities, DID relationships can be used for authentication.

Simple DID Authn Interaction Pattern

This pattern has two parties:

- Alice has an SSI wallet on her mobile phone.
- Bravo Corp has an enterprise wallet tied to an IAM system that is protecting some resource.

The interaction pattern has the following steps:

1. Alice and Bravo establish a Peer DID relationship (blue arrow). This means that they each generate a Peer DID and send it to the other, along with the associated public key. These identifiers are self-certifying and each party can use the information associated with the DID to authenticate the other.
2. Alice tries to access the protected resource (red arrow). The request is intermediated by Bravo's IAM system. As part of this request, Alice makes her DID known. There are a number of sub-scenarios for the different ways this may happen. For example, she could scan a QR code or enter an associated human-readable identifier.
3. The IAM system, working in concert with Bravo's enterprise wallet, issues a DID Auth challenge to Alice's wallet through her phone.
4. Alice is notified by her wallet of the challenge and approves the response from her wallet to Bravo.
5. Bravo verifies Alice's response.
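As a purely conceptual sketch of steps 3-5 (not DIDComm, not any particular SSI library, and with PyNaCl Ed25519 keys standing in for the keys referenced by Alice's Peer DID), the challenge-response at the heart of DID Auth reduces to signing a nonce and verifying it with the public key the other party already holds:

import secrets
from nacl.signing import SigningKey

# Alice's side: the key pair her wallet bound to the Peer DID she gave Bravo.
alice_signing_key = SigningKey.generate()
alice_verify_key = alice_signing_key.verify_key   # Bravo learned this during DID exchange

# Step 3: Bravo's IAM system issues a DID Auth challenge (a random nonce).
challenge = secrets.token_bytes(32)

# Step 4: Alice's wallet signs the challenge and returns the response.
signed_response = alice_signing_key.sign(challenge)

# Step 5: Bravo verifies the response against the key from Alice's Peer DID.
assert alice_verify_key.verify(signed_response) == challenge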

A few things to note about this interaction:

- Because Alice and Bravo are using Peer DIDs, no ledger is involved in the authentication. In a Peer DID relationship, both parties keep the other informed of relevant key events (e.g. key rotation) and store that information in a cryptographic key event log.
- Any authorization would have to be done based on information the IAM system has from another source. For example, if the Peer DID relationship were established within a different authenticated context, Alice could have been assigned a group for RBAC or other attributes could have been associated with Alice's DID within Bravo's IAM system.
- The interaction pattern shown here leaves out a number of details. Markus Sabadello identifies ten different variations of this pattern in his talk Introduction to DID Auth for SSI.

Single-Party Credential Authorization Pattern

While the DID Authn pattern is simple, it is not as flexible as we need in some situations. For more complicated scenarios, we can use verifiable credentials. The first scenario we'll consider is where the same organization is issuing and verifying the credential.

Single-Party Credential-Based Authn Pattern

The parties in this scenario are the same: Alice and Bravo Corp.

The interaction pattern proceeds as follows:

1. Since Bravo Corp will be issuing a credential, they write a Public DID and credential definition to the ledger. They might also write a schema and revocation registry, if necessary.
2. Alice and Bravo establish a Peer DID relationship (blue arrow). Note that the DID that Bravo uses for this relationship is not the public DID created in (1); instead Bravo creates a Peer DID especially for the relationship with Alice.
3. Bravo issues a credential to Alice (green arrow). The nature, content, and context of this credential issuance depend on Bravo and Alice's specific needs. Bravo is the credential issuer and Alice is the credential holder.
4. Alice tries to access a protected resource (red arrow). The request is intermediated by Bravo's IAM system. Like the DID Authn pattern, the IAM system is working in concert with an enterprise wallet.
5. Bravo is using a policy-based access control (PBAC) system that relies on knowing attributes about Alice. The IAM system makes a credential request to Alice that asks for specific attributes based on the attributes needed by the policy for the resource Alice is accessing.
6. Alice sees the request and authorizes her wallet to issue a proof of attributes based on the credential she holds. The response contains only the attributes that Bravo needs, not the entire credential, to minimize the information that is shared.
7. The PBAC system uses the attributes in the proof presentation to authorize Alice's access.

A few things to note:

- Bravo does not need to access the ledger to verify the credential because it is their own credential; they already know the information necessary to perform the validation. Even so, Bravo writes the public DID and credential definition to the ledger so that Alice can present the credential to others who can verify it, supporting use cases beyond Bravo's.
- Using a credential held by Alice to validate her authority to access the protected resource is more flexible for Bravo, and potentially more reliable, than a centralized attribute store. Rather than building a central attribute store and linking every system in the enterprise to it, each system can stand alone from the central store and make decisions based on the policies in place for that system.
- Astute readers will read the last bullet and think "but don't they all have to be linked to the same digital wallet to take advantage of the Peer DID relationship?" The answer is "no." Each service can have its own Peer DID relationship with Alice, verify the attributes from the credential, and know it's Alice. The only thing they need to know is the public DID their organization uses and the credential definition for the credential.

Multi-Party Credential Authorization Pattern

We can extend the single-party pattern to include multiple parties. In this pattern, one entity, Bravo Corp, is issuing credentials, but another entity, Certiphi Corp, is verifying the credential and using its attributes to authorize Alice's access to a resource.

Multi-Party Credential-Based Authn Pattern

The interaction proceeds as follows:

1. Since Bravo Corp is issuing a credential, they write a Public DID and credential definition to the ledger. Again, they might also write a schema and revocation registry, if needed.
2. Alice and Bravo establish a Peer DID relationship (blue arrow).
3. Bravo issues a credential to Alice (green arrow).
4. Alice and Certiphi establish a Peer DID relationship.
5. Alice tries to access the protected resource (red arrow) at Certiphi. The request is intermediated by Certiphi's IAM system.
6. Certiphi is using a policy-based access control system, so the IAM system makes a credential request to Alice that asks for the specific attributes needed by the policy for access to the resource.
7. Alice sees the request and authorizes her wallet to issue a proof of attributes based on the credentials she holds. The wallet automatically chooses the credential from Bravo since it has the attributes needed to satisfy Certiphi's request.
8. Certiphi cryptographically validates the fidelity of the proof to ensure it's from Bravo, is about Alice, hasn't been tampered with, and hasn't been revoked. They might also need to validate the provenance of the attributes in the proof. Certiphi is the credential verifier in this pattern.
9. The PBAC system uses the attributes in the proof presentation to authorize Alice's access.

A few things to note:

- The DID relationship Alice and Certiphi create in (4) could be ephemeral; it needn't be permanent unless the parties need it to be.
- There is no direct connection or link between Bravo Corp and Certiphi Corp. They needn't have any pre-existing business or technical relationship. Certiphi needn't connect to a Bravo Corp API.
- The primary difference between the single-party and multi-party patterns is step (8), checking the fidelity and provenance of the credential. The fidelity check can be done automatically using cryptography. Determining provenance is not a trivial thing since it involves Certiphi determining they can trust attributes attested by Bravo. This is a governance issue, not a technical one.
- The governance issue could be simple or complex. Perhaps Bravo is known to Certiphi (e.g., a local business next to a large university). Certiphi might ask Bravo to prove things about itself using credentials issued to Bravo by someone Certiphi already trusts (e.g., the government). Bravo and Certiphi might already be part of some established governance framework (e.g., a university accreditation organization).

Generalized Trustworthy Data Transfer Pattern

Authentication and authorization are table stakes for any identity interaction. The general data transfer pattern moves beyond simple authentication and authorization patterns to using identity data in workflows.

Credential-Based Data Transfer Pattern

In this pattern, all of the interactions are identical to the pattern for multi-party authorization in the last section with a few exceptions:

- Alice is accessing a web service that needs data to proceed in (5) rather than a protected resource.
- The web service uses the data from the proof presentment as part of its workflow (e.g. fill out a form).

We can view all of the previous patterns as specializations of this pattern:

- The Peer DID relationship provides a mutually authenticated communications channel in every case that can always be used to know that you're talking to the entity with whom the relationship was originally established–the core requirement for any authentication.
- Transferring attributes using verifiable credentials for PBAC is just a special case of transferring attribute data in a trustworthy manner. The difference is the end-use of the attributes: the PBAC system or some other service.
- There's no need for the data transferred in the general pattern to come from a single credential. In fact, the service can ask for attributes without knowing what credentials Alice holds. Alice's wallet will match the requested attributes to the credentials Alice holds. Alice can choose which credentials to use for specific attributes (e.g. date of birth) if she wants.
- While the figure shows Alice accessing a web service, this can be further generalized beyond the web. Any data transfer for an online workflow can happen using verifiable credentials.
- While this pattern involves Alice and two organizations, there's no reason why people can't be credential issuers and verifiers. Indeed, any party in these diagrams could play any of the roles.

Viewing traditional IAM functions like authentication and authorization as special purpose data transfers broadens SSI significantly beyond what we have traditionally seen as "digital identity." The uses for verifiable credentials are vast and include many things we may not think of as "credentials". While this expanded view of digital identity may make some uncomfortable, I think it is perfectly aligned with my belief that we build identity systems to manage relationships, not identities. Every relationship is unique. Flexible, trustworthy digital credentials serve that uniqueness and introduce the means of moving digital identity beyond just authentication and authorization.

Photo Credit: Phone Icon from Fast Icon Design (Linkware)

Tags: identity ssi decentralized+identifiers verifiable+credentials


Jon Udell

Dear background mind: please think about Postgres and Clojure


I used a Lisp variant in my first programming job, so I have some appreciation for the “code as data” power of that language. Nowadays I’m building an analytics system that combines Postgres and two of its procedural languages, pl/pgsql and pl/python, with a sense-making tool called Metabase that’s written in a Lisp variant called Clojure.

In Postgres I’m able to wrap SQL queries in functions; they compose with other functions that do things like cache results in materialized views and aggregate subgroups. It all feels very dynamic and functional, which are two of Clojure’s main calling cards, so this makes me wonder about Clojure as another Postgres procedural language.

For the pl/python functions in my system, “code as data” looks like building SQL statements that swap variables into SQL templates and combine them. This string-building approach is an anti-pattern for Clojure folk. They don’t want to work with fragments of SQL text, they want to work with Clojure maps. Here’s an example from Honey SQL, the library that Metabase uses to build SQL texts from structured data.

(def sqlmap {:select [:a :b :c]
:from [:foo]
:where [:= :f.a "baz"]})
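For contrast, here is a minimal sketch of the string-building approach described above, with an invented table and filter purely for illustration; in pl/python the final statement would typically go through plpy.prepare and plpy.execute rather than a driver cursor:

# "code as text": swap variables into a SQL template and combine the fragments
base = "select course, count(*) as n from annotations where {where} group by course"

filters = []
params = {}
if True:  # imagine a caller asked to restrict results to one assignment
    filters.append("assignment = %(assignment)s")
    params["assignment"] = "essay-1"

where_clause = " and ".join(filters) or "true"
sql = base.format(where=where_clause)   # the query now exists only as a string
# cursor.execute(sql, params)

The Honey SQL map above keeps the same query as a data structure that can be inspected and composed before it is ever rendered to text.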

This seems like an important power to be able to wield in the hybrid programming environment that Postgres provides. I can imagine making very good use of it.

But I know very little yet about Clojure. Is this really an idea worth exploring? How else would it differ from what I’m now able to do in Python? To learn more about Clojure I’ve been watching talks by its creator, Rich Hickey. Most are tech-heavy but one of them isn’t like the others. In Hammock-driven development he lays out a process for creative problem solving that coordinates the waking mind (at the computer doing critical thinking and analysis) with the background mind (on the hammock doing abstraction, analogy, and synthesis). The coordination is explicit: You use the waking mind to feed work to the background mind which is “the solver of most non-trivial problems”; you weight your inputs to the background mind in order to influence their priority in its thinking.

I guess I’ve done that kind of thing implicitly from time to time, but never in such an intentional way. So, is Clojure-in-Postgres worth exploring? Perhaps by writing this I’ll prime my background mind and will receive more clarity in the morning.

Tuesday, 01. June 2021

Simon Willison

Quoting Tim Bray


I’m pretty convinced that the biggest single contributor to improved software in my lifetime wasn’t object-orientation or higher-level languages or functional programming or strong typing or MVC or anything else: It was the rise of testing culture.

Tim Bray


MyDigitalFootprint

The Paradox that war and monopoly are the same thing.


Peak Paradox is a framework to help you reclaim clarity and control as we believe that if you cannot see the paradox, you have been framed and are in a model where someone else is in control.  When you can see the paradox in the data, information, recommendation or decision, you will have improved your clarity.  This is a guide on how to apply the framework to unlock business models, concepts and complexity.

Most aspects of everyday business and life can be questioned within the boundaries of the Peak Paradox Framework.  I have been exploring the fact that we do sometimes go out of bounds, but we find it too hard to stay there for long; movement outside of the boundary tends to be a transitional state.  The thick red line illustrates the usual boundaries in the figure below.  This post explored how to make better decisions using the framework and laid down the idea that some decisions can occur off-piste, outside of the normal boundaries. It presented the reasons why we need to minimise shared decision making, find followers and create shared passion if we want to make a difference.  Indeed, it is no longer about enemies in the camp but about identifying those who share the belief. We must have diversity, but not diversification away from the shared belief.  

I explored the concept that we are transitioning from physical beings to information beings.  That article proposed that our existing models for oversight and governance are breaking and that we need to think about addressing this emerging gap.  In an earlier post, I explained that many models take us to the boundaries of the Peak Paradox framework, such as cults, politics, beliefs and religion.  This post presents the thinking that the economic model of monopoly and the political activity of war sit outside the framework and represent a transitional state. 

As physical beings 

In the figure below, war and monopoly sit outside the boundary between Peak Individual Purpose and Peak Work Purpose.  War is nasty, inhumane, and complex, but viewed through a historical lens it served as a tool to acquire more land. Control of land gave you access to resources, people and materials.  In straightforward terms, the more land and resources you had access to, the more it enabled a few to realise their Peak Individual Purpose (become more sovereign) and others to realise Peak Work Purpose through growth. 

A more modern version of war is the economics of monopoly, duopoly, or cartel, where a few seek to control resources for their personal gain, enabling them to achieve Peak Individual Purpose through their companies having control.  

Both war and monopoly have become increasingly regulated over time. We have created, as humans, complex rules of engagement, governance and oversight, in part to ensure control and power do not fall to one person in a democracy, to protect the vulnerable and to respect life.  Whether they work or not is another story. 

As information beings

It is not that the physical has vanished or will vanish; we will co-exist as both physical and informational. Our physical being is still very much here, but we live in an emergent digital/data augmented world.  War has become cyberwarfare, and the monopoly of land and people has become the monopoly of data, which indirectly controls people and land, often through an instrument such as debt. 

The question being asked is, “have we changed more than just the nature of war and monopoly as we expand from physical beings to information beings?”  Our rules, methods, insights and policy work for us as physical beings, but do the same systems and structures work in an information age?  Is the paradox that we have to deal with the same outcomes from war and monopolies (control and power), but there is a shift?  What is this shift?

Yes, some cybercrime and cyber warfare is a replication of the historical war framing, insomuch that an individual, cause or nation wants to use data/digital as a method of control.  But this is just the natural advancement of physical beings adapting to new technology; gunpowder is another example.  However, information-being cyberwar is a fight or rage against the machine, trying to bring about a better life for everyone and expose injustice - this is different. Past freedom fighters against oppression look similar.

A monopoly of data is different to a monopoly of physical resources because data is data.  Data is, by its nature, non-rivalrous and non-excludable. The simple fact is that you cannot declare ownership of data (though many people try); you cannot control it; you lose nothing when you copy it. That is why data is data.   Data can bring about better health care; data can bring about better education; data can bring transparency, and data can bring equality.  

An AI could make better decisions, if we knew what "better" is, without the bias of human politics. However, it is far too easy to keep our physical-being lens on everything because we like it, we are used to it, our systems are aligned to it, and we can ignore or rubbish the emergence and augmentation of data in our decision methodology.  A critical point is that what we had as oversight and controls for war and monopoly worked for physical beings. The same systems break for informational beings because there is a shift in purpose and even outcomes. What is hard to comprehend is that this is not a transition; it is the emergence of information beings, and both will co-exist. We need to keep adapting our physical rules but need new thinking for informational beings.    

What does the Peak Paradox framework highlight about this situation? There is an assumption that we are aligned (what type of being are we, or are we both?) and that we understand whether there is alignment in the team.  Because this is hard and subtle, we prefer to assume and make jumps, because curiosity and questioning expose ghosts in our thinking and the system, like conflict, tension and power. We don't like facing up to these as they are hard and reflective. The Peak Paradox framework will surface the ghosts and determine if there is alignment or a delta in the team, project, mission or purpose.  If we are only physical beings in our views, we will ignore digital and data and the coming information being.   If we are information beings, we see that the controls don’t work, and we are free to exploit an emergent space.  The current data monopoly holders don’t look like monopolies through the lens of physical beings.


War and monopoly exist in both the physical and the data/digital/information world. Still, we cannot apply the same system of physical-world controls, developed over generations, to the digital/data/information world - it is not that they break, it is that they don’t work.   The system of control fails because the purpose for which war and monopoly exist is the same, yet a new space has been created.  We use the exact same words, but the meaning has changed.  Why is monopoly different in a data/digital information world? Because it looks competitive, data is movable, and users have a choice.  Why is war in a data/digital information world different? Because the outcome is not about control.  However, the physical still exists, which can confuse and create paradoxes.

 


Monday, 31. May 2021

Doc Searls Weblog

Comparing cameras


On the top left is a photo taken with my trusty old (also much used and abused) Canon 5D Mark III. On the top right is one taken by a borrowed new Sony a7Riii. Below both are cropped close-ups of detail. The scene is in a room illuminated by incandescent track lighting. It is not an art shot, though it does contain photo art by our good friend Marian Crostic, whose Sony a7R she is kindly remanding to my custody tomorrow. (Her main camera is now an a7Riii like the borrowed one I used here.)

Both photos were shot with Canon and Sony’s best 24-105 f4 zoom lenses, at the 105mm end. Both were also set to automatic, meaning the camera chooses all the settings. In both cases the camera chose ISO 3200 at f4. The only difference was shutter speed: 1/125 sec on the Canon and 1/160 sec on the Sony. While 3200 is not the prettiest ISO, I wanted to compare both cameras indoors under less than ideal lighting, because that’s typical of situations where I shoot a lot of people.

One difference between these cameras is the pixel density of the sensor: the Canon’s shot is 5760 x 3840 pixels, while the Sony’s is 7952 x 5304. While that difference accounts for some of the higher detail in the Sony’s shot, it’s clear to me that the Sony lens is simply sharper, as Ken Rockwell kinda promised in this glowing review. (Also, to be fair, the Canon lens has had a lot of use.)

All the images above are screen shots of RAW versions of the photos (.CR2 for the Canon and .ARW for the Sony). Though I don’t have the time or patience to show differences in the .JPG versions of these photos, it’s clear to me that the Canon’s JPGs look less artifacted by compression. The obvious artifacts in the Sony shots have me thinking I may only shoot RAW with the a7R, though I’ll need to test it out first.

The main difference overall, at least in this setting, is in the warmth of the color. There the Canon has a huge advantage. I could say it’s also because the Sony is slightly less exposed (by the higher shutter speed); but I noticed the same difference in test shots I took outdoors as well, under both overcast and sunlit skies, and at ISO 100. The Canon seems warmer, though the Sony has far more detail one can pull out of shadows.

I should add that neither camera got the color of the wall (a creamy white) right in these photos, with the Canon leaning hot and the Sony leaning cool.

Anyway, I just thought I’d share that much before I pick up the a7R, and start using it to shoot stuff in New York, where I’m headed Wednesday night after more than a year away.

 

 


Damien Bod

Verify vaccination data using Zero Knowledge Proofs with ASP.NET Core and MATTR


This article shows how Zero Knowledge Proof (ZKP) verifiable credentials can be used to verify a person's vaccination data, implemented in ASP.NET Core and MATTR. The ZKP BBS+ verifiable credentials are issued and stored on a digital wallet using a Self-Issued Identity Provider (SIOP) and Open ID Connect. The data can then be used to verify if the holder has the required credentials, but only the required data is used and returned to the verification application. The holder of the data who owns the wallet can consent, or not consent, to allow the verification application to see and use the vaccination data. Auth0 is used to implement the identity provider.

Code https://github.com/swiss-ssi-group/MattrZeroKnowledgeProofsAspNetCore

Blogs in the series

- Getting started with Self Sovereign Identity SSI
- Create an OIDC credential Issuer with MATTR and ASP.NET Core
- Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR
- Verify vaccination data using Zero Knowledge Proofs with ASP.NET Core and MATTR

What are Zero Knowledge Proof enabled credentials

Zero Knowledge Proof enabled credentials allow you to selectively disclose claims from a verifiable credential without disclosing all of the information in the verifiable credential. They also make it possible to verify data without having to share the sensitive data required to verify something. In this post, we will just selectively request part of the data from a verifiable credential. This would make it possible to implement business flows without having to share or copy the sensitive data.

Setup ZKP Vaccination Data Issue and Verify

The demo application implements a covid vaccination data process. A number of components and applications are required to implement these flows. The idea is that when a person is vaccinated, the authority responsible for this could add the vaccination data to the person's identity. In this demo, that data would be added to the Auth0 service and can be accessed in the id_token claims.

The vaccination data organization can use the credentials issuer application to create a DID credential issuer using ZKP verifiable credentials. The end user can use a digital wallet to add his or her credentials using the SIOP flow, which gets the claims from the IDP and adds the data to the digital wallet.

The verification application has to create a presentation template defining the claims which are required for verification. Once created, a new verification request can be created and used to verify the vaccination data from the user. The user scans the QR Code presented by the verifier application and is shown a verify request in the digital wallet. Once consented, the data is returned to the verifier application using an API. The data can be processed and the UI is updated with the verified result, or not. The blockchain ledger is abstracted away and used indirectly through MATTR, which has APIs for the self-sovereign identity specifications.

Issuing Zero Knowledge Proof enabled credentials

The VaccineCredentialsIssuer ASP.NET Core application is used to create the credential issuer and present this as a QR Code for the user to add vaccination Zero Knowledge Proof verifiable credentials. The flow implemented is very similar to the flow used in the previous blog Create an OIDC credential Issuer with MATTR and ASP.NET Core. A DID is created to use a BLS key type which supports BBS+ signatures for issuing ZKP-enabled credentials.

public class MattrOptions
{
    /// <summary>
    /// The supported key types for the DIDs are ed25519 and bls12381g2.
    /// If the keyType is omitted, the default key type that will be used is ed25519.
    ///
    /// If the keyType in options is set to bls12381g2 a DID will be created with
    /// a BLS key type which supports BBS+ signatures for issuing ZKP-enabled credentials.
    /// </summary>
    public string keyType { get; set; } = "bls12381g2";
}

The DID is used to create a credential issuer for the ZKP credentials. The CreateMattrCredentialIssuer method takes the DID created with the bls12381g2 key and creates the OIDC credential issuer.

private async Task<V1_CreateOidcIssuerResponse> CreateMattrCredentialIssuer(
    HttpClient client, V1_CreateDidResponse did)
{
    // create vc, post to credentials api
    // https://learn.mattr.global/tutorials/issue/oidc-bridge/setup-issuer
    var createCredentialsUrl = $"https://{_mattrConfiguration.TenantSubdomain}/ext/oidc/v1/issuers";

    var payload = new MattrOpenApiClient.V1_CreateOidcIssuerRequest
    {
        Credential = new Credential
        {
            IssuerDid = did.Did,
            Name = "VaccinationCertificate7",
            Context = new List<Uri>
            {
                new Uri("https://schema.org"),
                new Uri("https://www.w3.org/2018/credentials/v1")
            },
            Type = new List<string> { "VerifiableCredential" }
        },
        ClaimMappings = new List<ClaimMappings>
        {
            new ClaimMappings{ JsonLdTerm="family_name", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/family_name"},
            new ClaimMappings{ JsonLdTerm="given_name", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/given_name"},
            new ClaimMappings{ JsonLdTerm="date_of_birth", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/date_of_birth"},
            new ClaimMappings{ JsonLdTerm="medicinal_product_code", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/medicinal_product_code"},
            new ClaimMappings{ JsonLdTerm="number_of_doses", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/number_of_doses"},
            new ClaimMappings{ JsonLdTerm="total_number_of_doses", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/total_number_of_doses"},
            new ClaimMappings{ JsonLdTerm="vaccination_date", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/vaccination_date"},
            new ClaimMappings{ JsonLdTerm="country_of_vaccination", OidcClaim=$"https://{_mattrConfiguration.TenantSubdomain}/country_of_vaccination"}
        },
        FederatedProvider = new FederatedProvider
        {
            ClientId = _configuration["Auth0Wallet:ClientId"],
            ClientSecret = _configuration["Auth0Wallet:ClientSecret"],
            Url = new Uri($"https://{_configuration["Auth0Wallet:Domain"]}"),
            Scope = new List<string> { "openid", "profile", "email" }
        }
    };

    var payloadJson = JsonConvert.SerializeObject(payload);
    var uri = new Uri(createCredentialsUrl);

    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var createOidcIssuerResponse = await client.PostAsync(uri, content);
        if (createOidcIssuerResponse.StatusCode == System.Net.HttpStatusCode.Created)
        {
            var v1CreateOidcIssuerResponse = JsonConvert.DeserializeObject<V1_CreateOidcIssuerResponse>(
                await createOidcIssuerResponse.Content.ReadAsStringAsync());

            return v1CreateOidcIssuerResponse;
        }

        var error = await createOidcIssuerResponse.Content.ReadAsStringAsync();
    }

    throw new Exception("whoops something went wrong");
}

The data from the MATTR response is used to create the callback for the credentials issuer. This is persisted to a database as this needs to be created only once and can be re-used.

public async Task<string> CreateCredentialsAndCallback(string name)
{
    // create a new one
    var vaccinationDataCredentials = await CreateMattrDidAndCredentialIssuer();
    vaccinationDataCredentials.Name = name;
    await _vaccineCredentialsIssuerCredentialsService.CreateVaccinationData(vaccinationDataCredentials);

    var callback = $"https://{_mattrConfiguration.TenantSubdomain}/ext/oidc/v1/issuers/{vaccinationDataCredentials.OidcIssuerId}/federated/callback";
    return callback;
}

The data is displayed as a QR Code in an ASP.NET Core Razor Pages application. This can be scanned and the credentials will be added to your digital wallet, if the Open ID Connect server has the claims for the identity required by this issuer.

Auth0 is used to add the identity data for the claims. An Auth0 pipeline rule was created and used to add these claims to the id_tokens.

function (user, context, callback) {
    const namespace = 'https://damianbod-sandbox.vii.mattr.global/';

    context.idToken[namespace + 'date_of_birth'] = user.user_metadata.date_of_birth;
    context.idToken[namespace + 'family_name'] = user.user_metadata.family_name;
    context.idToken[namespace + 'given_name'] = user.user_metadata.given_name;
    context.idToken[namespace + 'medicinal_product_code'] = user.user_metadata.medicinal_product_code;
    context.idToken[namespace + 'number_of_doses'] = user.user_metadata.number_of_doses;
    context.idToken[namespace + 'total_number_of_doses'] = user.user_metadata.total_number_of_doses;
    context.idToken[namespace + 'vaccination_date'] = user.user_metadata.vaccination_date;
    context.idToken[namespace + 'country_of_vaccination'] = user.user_metadata.country_of_vaccination;

    callback(null, user, context);
}

The data needs to be added to each user in Auth0. If using this in a real application, a UI could be created and used to add the specific data for each user. The credential issuer is tightly coupled through the data with the IDP. So each credential issuer which creates verifiable credentials would require its own identity provider and full access to update the profiles. The IDP contains the business data required to issue the credentials. Auth0 might not be a good choice for this; maybe something like IdentityServer or OpenIddict would be a better choice, because you could implement custom UIs with ASP.NET Core Identity and the complete UIs for the credential issuing flows.

When the credential issuer is scanned by the digital wallet, the user logs into the OIDC server and gets the data for the ZKP verifiable credentials. In a MATTR wallet, this is displayed with the Privacy enhancing credential information.

Verifying the credentials

Before the ZKP credentials can be verified, a presentation template is created to define the required credentials to verify. The DID ID from the credential issuer is used to find the DID in the ledger. The CreateMattrPresentationTemplate method creates the template using the QueryByFrame so that the exact claims can be defined. The context must use the https://w3c-ccg.github.io/ldp-bbs2020/context/v1 namespace to use the ZKP BBS+ credentials in MATTR. The type must be VerifiableCredential.

private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate(
    HttpClient client, string didId)
{
    // create presentation, post to presentations templates api
    // https://learn.mattr.global/tutorials/verify/presentation-request-template
    // https://learn.mattr.global/tutorials/verify/presentation-request-template#create-a-privacy-preserving-presentation-request-template-for-zkp-enabled-credentials
    var createPresentationsTemplatesUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/presentations/templates";

    var additionalPropertiesCredentialSubject = new Dictionary<string, object>();
    additionalPropertiesCredentialSubject.Add("credentialSubject", new VaccanationDataCredentialSubject
    {
        Explicit = true
    });

    var additionalPropertiesCredentialQuery = new Dictionary<string, object>();
    additionalPropertiesCredentialQuery.Add("required", true);

    var additionalPropertiesQuery = new Dictionary<string, object>();
    additionalPropertiesQuery.Add("type", "QueryByFrame");
    additionalPropertiesQuery.Add("credentialQuery", new List<CredentialQuery2>
    {
        new CredentialQuery2
        {
            Reason = "Please provide your vaccination data",
            TrustedIssuer = new List<TrustedIssuer>
            {
                new TrustedIssuer
                {
                    Required = true,
                    Issuer = didId // DID use to create the oidc
                }
            },
            Frame = new Frame
            {
                Context = new List<object>
                {
                    "https://www.w3.org/2018/credentials/v1",
                    "https://w3c-ccg.github.io/ldp-bbs2020/context/v1",
                    "https://schema.org",
                },
                Type = "VerifiableCredential",
                AdditionalProperties = additionalPropertiesCredentialSubject
            },
            AdditionalProperties = additionalPropertiesCredentialQuery
        }
    });

    var payload = new MattrOpenApiClient.V1_CreatePresentationTemplate
    {
        Domain = _mattrConfiguration.TenantSubdomain,
        Name = "zkp-certificate-presentation-11",
        Query = new List<Query>
        {
            new Query
            {
                AdditionalProperties = additionalPropertiesQuery
            }
        }
    };

    var payloadJson = JsonConvert.SerializeObject(payload);
    var uri = new Uri(createPresentationsTemplatesUrl);

    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var presentationTemplateResponse = await client.PostAsync(uri, content);
        if (presentationTemplateResponse.StatusCode == System.Net.HttpStatusCode.Created)
        {
            var v1PresentationTemplateResponse = JsonConvert
                .DeserializeObject<MattrOpenApiClient.V1_PresentationTemplateResponse>(
                    await presentationTemplateResponse.Content.ReadAsStringAsync());

            return v1PresentationTemplateResponse;
        }

        var error = await presentationTemplateResponse.Content.ReadAsStringAsync();
    }

    throw new Exception("whoops something went wrong");
}

The VaccanationDataCredentialSubject class defines the specific claims to use for the verification.

public class VaccanationDataCredentialSubject
{
    [Newtonsoft.Json.JsonProperty("@explicit", Required = Newtonsoft.Json.Required.Always)]
    public bool Explicit { get; set; }

    [Newtonsoft.Json.JsonProperty("family_name", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object FamilyName { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("given_name", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object GivenName { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("date_of_birth", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object DateOfBirth { get; set; } = new object();

    [Newtonsoft.Json.JsonProperty("medicinal_product_code", Required = Newtonsoft.Json.Required.Always)]
    [System.ComponentModel.DataAnnotations.Required]
    public object MedicinalProductCode { get; set; } = new object();
}

Verifying is very similar to the blog Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR. A new DID of type ed25519 is used to invoke a verify request and also to sign the request. The verify flow in the application presents a QR Code using the redirect URL technique because the signed request is too long to present as a QR Code. This request returns a 302 with the full jws.

The application needs to be started on a public domain because the digital wallet will call back to the API with the data. I use ngrok to test locally. The verifier application can be started, and the verify process begins by clicking the verify button, which displays the QR Code to verify.

Start the application and start ngrok

ngrok http http://localhost:5000

The QR Code can be scanned to verify.

In the digital wallet, the verification request for the vaccination data can be viewed and if ok sent. The digital wallet displays the data which is disclosed and the data which is not. When the user clicks send, the data is validated and the API from the verifier application is called.

When the Verify application receives the callback from the digital wallet, the data is validated and the challenge ID is used to notify the user of a successful verification. The data is saved to a database and ASP.NET Core SignalR is used to update the UI. When the message from SignalR is sent, the user is redirected to the success page using the challenge ID and the data is displayed with the success image.

Notes

Now we have a full create, hold, verify process implemented for Zero Knowledge Proof verifiable credentials using covid vaccination data. OIDC is used to authenticate and create the claims used for the credentials. The OIDC server or identity provider is tightly coupled to the credential issuer because the business uses the data from the id_token. When using SIOP, I would use ASP.NET Core Identity and either OpenIddict or IdentityServer4 to implement this as part of the credential issuer. You need full control of the claims, so using Auth0, Azure AD or Azure B2C would probably be a bad choice here. You could federate from the credential issuer to one of these to use the profiles as required. Each vaccinated user would also require a user account.

ZKP verifiable credentials make it possible to support user privacy better and to mix claims from different credentials more easily.

Another problem with this solution is vendor lock-in. This is a problem with any self sovereign identity solution at the moment. Even though the specifications are all standard, unless you want to implement this completely yourself, you would choose a vendor-specific implementation which locks you into a specific wallet or a specific feature set. Interop does not seem to work at the moment. This is a problem with all security solutions at present, not just SSI: software producers and security services use security reasoning as an excuse to try to force you into lock-in with their specific product. You can see this with most of the existing OIDC solutions and services. A typical quote for this is "You can use any OIDC client, but we recommend our OIDC client…"

SSI will open possibilities for new security solutions and it will be very interesting to see how application security develops in the next five years.

Links

https://mattr.global/

https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e

https://mattr.global/get-started/

https://learn.mattr.global/

https://keybase.io/

Generating a ZKP-enabled BBS+ credential using the MATTR Platform

https://learn.mattr.global/tutorials/dids/did-key

https://gunnarpeipman.com/httpclient-remove-charset/

https://auth0.com/

Where to begin with OIDC and SIOP

https://anonyome.com/2020/06/decentralized-identity-key-concepts-explained/

Verifiable-Credentials-Flavors-Explained

Sunday, 30. May 2021

Doc Searls Weblog

Apple vs (or plus) Adtech, Part II

My post yesterday saw action on Techmeme (as I write this, it’s at #2) and on Twitter (from Don Marti, Augustine Fou, et. al.), and in thoughtful blog posts by John Gruber in Daring Fireball and Nick Heer in Pixel Envy. All pushed back on at least some of what I said. Here are some […]

My post yesterday saw action on Techmeme (as I write this, it’s at #2) and on Twitter (from Don Marti, Augustine Fou, et. al.), and in thoughtful blog posts by John Gruber in Daring Fireball and Nick Heer in Pixel Envy. All pushed back on at least some of what I said. Here are some excerpts, with my responses. First, John:

Doc Searls:

Here’s what’s misleading about this message: Felix would have had none of those trackers following him if he had gone into Settings → Privacy → Tracking, and pushed the switch to off […].

Key fact: it is defaulted to on. Meaning Apple is not fully serious about privacy. If Apple was fully serious, your iPhone would be set to not allow tracking in the first place. All those trackers would come pre-vaporized.

For all the criticism Apple has faced from the ad tech industry over this feature, it’s fun to see criticism that Apple isn’t going far enough. But I don’t think Searls’s critique here is fair. Permission to allow tracking is not on by default — what is on by default is permission for the app to ask. Searls makes that clear, I know, but it feels like he’s arguing as though apps can track you by default, and they can’t.

But I don’t think Searls’s critique here is fair. Permission to allow tracking is not on by default — what is on by default is permission for the app to ask. Searls makes that clear, I know, but it feels like he’s arguing as though apps can track you by default, and they can’t.

I’m not arguing that. But let’s dig down a bit on all this.

What Apple has here is a system for asking in both directions (apps asking to track, and users asking apps not to track). I think this is weird and unclear, while simply disallowing tracking globally would be clear. So would a setting that simply turns off all apps’ ability to track. But that’s not what we have.

Or maybe we do.

To review… in Settings → Privacy → Tracking is a single OFF/ON switch for "Allow Apps to Request to Track." It is by default set to ON. (I called AppleCare to be sure about this. The guy I spoke to said yes, it is.) Below that setting is a bit of explanatory text with a "Learn more" link that goes to this long column of text one swipes down four times (at least on my phone) to read:

Okay, now look in the fifth paragraph (three up from where you’re reading now). There it says that by turning the setting to OFF, “all apps…will be blocked from accessing the device’s Advertising Identifier.” Maybe I’m reading this wrong, but it seems plain to me that this will at least pre-vaporize trackers vectored on the device identifier (technically called IDFA: ID For Advertisers).

After explaining why he thinks the default setting to ON is the better choice, and why he likes it that way (e.g. he can see what apps want to track, surprisingly few do, and he knows which they are), John says this about the IDFA:

IDFA was well-intentioned, but I think in hindsight Apple realizes it was naive to think the surveillance ad industry could be trusted with anything.

And why “ask” an app not to track? Why not “tell”? Or, better yet, “Prevent Tracking By This App”? Does asking an app not to track mean it won’t?

This is Apple being honest. Apple can block apps from accessing the IDFA identifier, but there’s nothing Apple can do to guarantee that apps won’t come up with their own device fingerprinting schemes to track users behind their backs. Using “Don’t Allow Tracking” or some such label instead of “Ask App Not to Track” would create the false impression that Apple can block any and all forms of tracking. It’s like a restaurant with a no smoking policy. That doesn’t mean you won’t go into the restroom and find a patron sneaking a smoke. I think if Apple catches applications circumventing “Ask App Not to Track” with custom schemes, they’ll take punitive action, just like a restaurant might ask a patron to leave if they catch them smoking in the restroom — but they can’t guarantee it won’t happen. (Joanna Stern asked Craig Federighi about this in their interview a few weeks ago, and Federighi answered honestly.)

If Apple could give you a button that guaranteed an app couldn’t track you, they would, and they’d label it appropriately. But they can’t so they don’t, and they won’t exaggerate what they can do.

On Twitter Don Marti writes,

Unfortunately it probably has to be “ask app not to track” because some apps will figure out ways around the policy (like all mobile app store policies). Probably better not to give people a false sense of security if they are suspicious of an app

—and then points to P&G Worked With China Trade Group on Tech to Sidestep Apple Privacy Rules, subtitled “One of world’s largest ad buyers spent years building marketing machine reliant on digital user data, putting it at odds with iPhone maker’s privacy moves” in The Wall Street Journal. In it is this:

P&G marketing chief Marc Pritchard has advocated for a universal way to track users across platforms, including those run by Facebook and Alphabet Inc.’s Google, that protects privacy while also giving marketers information to better hone their messages.

Frustrated with what it saw as tech companies’ lack of transparency, P&G began building its own consumer database several years ago, seeking to generate detailed intelligence on consumer behavior without relying on data gathered by Facebook, Google and other platforms. The information is a combination of anonymous consumer IDs culled from devices and personal information that customers share willingly. The company said in 2019 that it had amassed 1.5 billion consumer identifications world-wide.

China, where Facebook and Google have a limited presence, is P&G’s most sophisticated market for using that database. The company funnels 80% of its digital-ad buying there through “programmatic ads” that let it target people with the highest propensity to buy without presenting them with irrelevant or excessive ads, P&G Chief Executive Officer David Taylor said at a conference last year.

“We are reinventing brand building, from wasteful mass marketing to mass one-to-one brand building fueled by data and technology,” he said. “This is driving growth while delivering savings and efficiencies.”

In response to that, I tweeted,

Won't app makers find ways to work around the no tracking ask, regardless of whether it's a global or a one-at-a-time setting? That seems to be what the @WSJ is saying about @ProcterGamble's work with #CAID device fingerprinting.

Don replied,

Yes. Some app developers will figure out a way to track you that doesn’t get caught by the App Store review. Apple can’t promise a complete “stop this app from tracking me” feature because sometimes it will be one of those apps that’s breaking the rules

Then Augustine Fou replied,

of course, MANY ad tech companies have been working on fingerprinting for years, as a work around to browsers (like Firefox) allowing users to delete cookies many years ago. Fingerprinting is even more pernicious because it is on server-side and out of control of user entirely

That last point is why I’ve long argued that we have a very basic problem with the client server model itself: that it all but guarantees a feudal system in which clients are serfs and site operators (and Big Tech in general) are their lords and masters. Though my original metaphor for client-server (which I have been told was originally a euphemism for slave-master) was calf-cow:

Here’s more on that one, plus some other metaphors as well:

A sense of bewronging (from 2011)

Stop making cows. Quit being calves (from 2012)

Is being less tasty vegetables our best strategy? (2021)

Thinking outside the browser (2021)

Toward e-commerce 2.0 (2021)

How the cookie poisoned the Web (2021)

I’ll pick up that thread after visiting what Nick says about fingerprinting:

There are countless ways that devices can be fingerprinted, and the mandated use of IDFA instead of those surreptitious methods makes it harder for ad tech companies to be sneaky. It has long been possible to turn off IDFA or reset the identifier. If it did not exist, ad tech companies would find other ways of individual tracking without users’ knowledge, consent, or control.

And why “ask” an app not to track? Why not “tell”? Or, better yet, “Prevent Tracking By This App”? Does asking an app not to track mean it won’t?

History has an answer for those questions.

Remember Do Not Track? Invented in the dawn of tracking, back in the late ’00s, it’s still a setting in every one of our browsers. But it too is just an ask — and ignored by nearly every website on Earth.

Much like Do Not Track, App Tracking Transparency is a request — verified as much as Apple can by App Review — to avoid false certainty. Tracking is a pernicious reality of every internet-connected technology. It is ludicrous to think that any company could singlehandedly find and disable all forms of fingerprinting in all apps, or to guarantee that users will not be tracked.

I agree. This too is a problem with the feudal system that the Web + app world has become, and Nick is right to point it out. He continues,

The thing that bugs me is that Searls knows all of this. He’s Doc Searls; he has an extraordinary thirteen year history of writing about this stuff. So I am not entirely sure why he is making arguments like the ones above that, with knowledge of his understanding of this space, begin to feel disingenuous. I have been thinking about this since I read this article last night and I have not come to a satisfactory realistic conclusion.

Here’s a realistic conclusion (or at least the one that’s in my head right now): I was mistaken to assume that Apple has more control here than it really does, and it’s right for all these guys (Nick, John, Augustine, Don and others) to point that out. Hey, I gave in to wishful thinking and unconscious ad hominem argumentation. Mea bozo. I sit corrected.

He continues,

Apple is a big, giant, powerful company — but it is only one company that operates within the realities of legal and technical domains. We cannot engineer our way out of the anti-privacy ad tech mess. The only solution is regulatory. That will not guarantee that bad actors do not exist, but it could create penalties for, say, Google when it ignores users’ choices or Dr. B when it warehouses medical data for unspecified future purposes.

We’ve had the GDPR and the CCPA in enforceable forms for awhile now, and the main result, for us mere “data subjects” (GDPR) and “consumers” (CCPA) is a far worse collection of experiences in using the Web.

At this point my faith in regulation (which I celebrated, at least in the GDPR case, when it went into force) is less than zero. So is my faith in tech, within the existing system.

So I’m moving on, and working on a new approach, outside the whole feudal system, which I describe in A New Way. It’s truly new and small, but I think it can be huge: much bigger than the existing system, simply because we on the demand side will have better ways of informing supply (are you listening, Mark Pritchard?) than even the best surveillance systems can guess at.

Saturday, 29. May 2021

Doc Searls Weblog

Apple vs (or plus) Adtech, Part I

This piece has had a lot of very smart push-back (and forward, but mostly back). I respond to it in Part II, here. If you haven’t seen it yet, watch Apple’s Privacy on iPhone | tracked ad. In it a guy named Felix (that’s him, above) goes from a coffee shop to a waiting room […]

This piece has had a lot of very smart push-back (and forward, but mostly back). I respond to it in Part II, here.

If you haven’t seen it yet, watch Apple’s Privacy on iPhone | tracked ad. In it a guy named Felix (that’s him, above) goes from a coffee shop to a waiting room somewhere, accumulating a vast herd of hangers-on along the way. The herd represents trackers in his phone, all crowding his personal space while gathering private information about him. The sound track is “Mind Your Own Business,” by Delta 5. Lyrics:

Can I have a taste of your ice cream?
Can I lick the crumbs from your table?
Can I interfere in your crisis?

No, mind your own business
No, mind your own business

Can you hear those people behind me?
Looking at your feelings inside me
Listen to the distance between us

Why don’t you mind your own business?
Why don’t you mind your own business?

Can you hear those people behind me?
Looking at your feelings inside me
Listen to the distance between us

Why don’t you mind your own business?
Why don’t you mind your own business?

The ad then shows a series of screens as Felix checks his phone from the crowded room filled with people spying on his life. When he presses "Ask App Not to Track," all the hangers-on go pop and turn to dust, and the last remaining one gets popped too, meaning he doesn't want any one of those trackers in his life. The final image is the one at the top.

Here's what's misleading about this message: Felix would have had none of those trackers following him if he had gone into Settings → Privacy → Tracking, and pushed the switch to off, like I've done here:

Key fact: it is defaulted to on. Meaning Apple is not fully serious about privacy. If Apple was fully serious, your iPhone would be set to not allow tracking in the first place. All those trackers would come pre-vaporized. And Apple never would have given every iPhone an IDFA—ID For Advertisers—in the first place. And never mind that they created IDFA back in 2013 partly to wean advertisers from tracking and targeting phones' UDIDs (unique device IDs).

Defaulting the master Tracking setting to ON means Felix has to tap “Ask App Not To Track” for every single one of those hangers-on. Meaning that one click won’t vaporize all those apps at once. Just one at a time. This too is misleading as well as unserious.

And why “ask” an app not to track? Why not “tell”? Or, better yet, “Prevent Tracking By This App”? Does asking an app not to track mean it won’t?

History has an answer for those questions.

Remember Do Not Track? Invented in the dawn of tracking, back in the late ’00s, it’s still a setting in every one of our browsers. But it too is just an ask—and ignored by nearly every website on Earth.

Here is how the setting looks, buried deep on Google’s Chrome:

It’s hardly worth bothering to turn that on (it’s defaulted to off), because it became clear long ago that Do Not Track was utterly defeated by the adtech biz and its dependents in online publishing. The standard itself was morphed to meaninglessness at the W3C, where by the end (in 2019) it got re-branded “Tracking Preference Expression.” (As if any of us has a preference for tracking other than to make it not happen or go away.)

By the way, thanks to adtech's defeat of Do Not Track in 2014, people took matters into their own hands by installing ad and tracking blockers en masse, turning ad blocking, an option that had been lying around since 2004, into the biggest boycott in world history by 2015.

And now we have one large company, Apple, making big and (somewhat, as we see above) bold moves toward respecting personal privacy. That’s good as far as it goes. But how far is that, exactly? To see how far, here are some questions:

Will “asking” apps not to track on an iPhone actually make an app not track? How will one be able to tell? What auditing and accounting mechanisms are in place—on your phone, on the apps’ side, or at Apple?

As for people’s responses to Apple’s new setting, here are some numbers for a three-week time frame: April 26 to May 16. They come from FLURRY, a subsidiary of Verizon Media, which is an adtech company. I’ll summarize:

The "Worldwide daily opt-in rate after iOS 14.5 launch across all apps," expressed as "% of mobile active app users who allow app tracking among users who have chosen to either allow or deny tracking," started at 11% and rose to 15%.

The "U.S. daily opt-in rate after iOS launch across all apps," expressed as "% of mobile active app users who allow app tracking among users who have chosen to either allow or deny tracking," started at 2% and rose to 6%.

The "Worldwide daily opt-in rate across apps that have displayed the prompt," expressed as "% of mobile active app users who allow app tracking among users who have chosen to either allow or deny tracking," started at 31% and went down to 24%.

The "Worldwide daily share of mobile app users with 'restricted' app tracking" (that's where somebody goes into Settings → Privacy → Tracking and switches off "Allow Apps to Request to Track"), expressed as "% of mobile active app users who cannot be tracked by default and don't have a choice to select a tracking option," started and stayed within a point of 5%.

And the "U.S. daily share of mobile app users with 'restricted' app tracking," expressed as "% of mobile active app users who cannot be tracked by default and don't have a choice to select a tracking option," started at 4% and ended at 3%, with some dips to 2%.

Clearly tracking isn’t popular, but those first two numbers should cause concern for those who want tracking to stay unpopular. The adtech business is relentless in advocacy of tracking, constantly pitching stories about how essential tracking-based “relevant,” “personalized” and “interest-based” advertising is—for you, and for the “free” Web and Internet.

It is also essential to note that Apple does advertising as well. Here’s Benedict Evans on a slope for Apple that is slippery in several ways:

Apple has built up its own ad system on the iPhone, which records, tracks and targets users and serves them ads, but does this on the device itself rather than on the cloud, and only its own apps and services. Apple tracks lots of different aspects of your behaviour and uses that data to put you into anonymised interest-based cohorts and serve you ads that are targeted to your interests, in the App Store, Stocks and News apps. You can read Apple’s description of that here – Apple is tracking a lot of user data, but nothing leaves your phone. Your phone is tracking you, but it doesn’t tell anyone anything.

This is conceptually pretty similar to Google’s proposed FLoC, in which your Chrome web browser uses the web pages you visit to put you into anonymised interest-based cohorts without your browsing history itself leaving your device. Publishers (and hence advertisers) can ask Chrome for a cohort and serve you an appropriate ad rather than tracking and targeting you yourself. Your browser is tracking you, but it doesn’t tell anyone anything -except for that anonymous cohort.

Google, obviously, wants FLoC to be a generalised system used by third-party publishers and advertisers. At the moment, Apple runs its own cohort tracking, publishing and advertising as a sealed system. It has begun selling targeted ads inside the App Store (at precisely the moment that it crippled third party app install ads with IDFA), but it isn’t offering this tracking and targeting to anyone else. Unlike FLoC, an advertiser, web page or app can’t ask what cohort your iPhone has put you in – only Apple’s apps can do that, including the app store.

So, the obvious, cynical theory is that Apple decided to cripple third-party app install ads just at the point that it was poised to launch its own, and to weaken the broader smartphone ad model so that companies would be driven towards in-app purchase instead. (The even more cynical theory would be that Apple expects to lose a big chunk of App Store commission as a result of lawsuits and so plans to replace this with app install ads. I don’t actually believe this – amongst other things I think Apple believes it will win its Epic and Spotify cases.)

Much more interesting, though, is what happens if Apple opens up its cohort tracking and targeting, and says that apps, or Safari, can now serve anonymous, targeted, private ads without the publisher or developer knowing the targeting data. It could create an API to serve those ads in Safari and in apps, without the publisher knowing what the cohort was or even without knowing what the ad was. What if Apple offered that, and described it as a truly ‘private, personalised’ ad model, on a platform with at least 60% of US mobile traffic, and over a billion global users?…

Apple has a tendency to build up strategic assets in discrete blocks and small parts of products, and then combine them into one. It’s been planning to shift the Mac to its own silicon for close to a decade, and added biometrics to its products before adding Apple Pay and then a credit card. Now it has Apple Pay and ‘Sign in with Apple’ as new building blocks on the web, that might be combined into other things. It seems pretty obvious that Privacy is another of those building blocks, deployed step by step in lots of different places. Privacy has been good business for Apple, and advertising is a bigger business than all of those.

All of which is why I’ve lately been thinking that privacy is a losing battle on the Web. And that we need to start building a byway around the whole mess: one where demand can signal supply about exactly what it wants, rather than having demand constantly being spied on and guessed at by adtech’s creepy machinery.

Friday, 28. May 2021

Simon Willison

explain.dalibo.com

explain.dalibo.com By far the best tool I've seen for turning the output of PostgreSQL EXPLAIN ANALYZE into something I can actually understand - produces a tree visualization which includes clear explanations of what each step (such as a "Index Only Scan Node") actually means. Via EXPLAIN ANALYZE in PostgreSQL and how to interpret it

explain.dalibo.com

By far the best tool I've seen for turning the output of PostgreSQL EXPLAIN ANALYZE into something I can actually understand - produces a tree visualization which includes clear explanations of what each step (such as a "Index Only Scan Node") actually means.

Via EXPLAIN ANALYZE in PostgreSQL and how to interpret it


Phil Windley's Technometria

Comparing X.509 Certificates with SSI

Summary: X.509 certificates have been around for 40 years and have proven to be a trustworthy means of exchanging data. So, what are the differences between X.509 ceritificates and SSI? And what are the advantages of each? I sometimes talk to people who ask "Why do we need SSI? What's wrong with X.509 certificates?" Here's some thoughts. X.509 is a standard that defines th

Summary: X.509 certificates have been around for 40 years and have proven to be a trustworthy means of exchanging data. So, what are the differences between X.509 certificates and SSI? And what are the advantages of each?

I sometimes talk to people who ask "Why do we need SSI? What's wrong with X.509 certificates?" Here are some thoughts.

X.509 is a standard that defines the format for public key certificates. Public key certificates can be used to tie a public key to other information. The most common use, by far, is TLS/SSL, the basis for trust in HTTPS, the protocol that secures the Web. In TLS, the certificate binds a public key to a domain name (and perhaps other information).

The first challenge for many people is determining whether X.509 certificates are more like verifiable credentials or DIDDocs. This is understandable since X.509 combines the functions of these two separate SSI standards. X.509 certificates themselves are like DIDDocs in that they bind information to a public key. But the hierarchical public key infrastructure (PKI) of X.509 is meant to attest to the veracity of the X.509 certificate. And X.509 extensions allow other information to be included. So, X.509 certificates also bind the public key (as an identifier) to real-world attributes. DIDDocs don't have anything like PKI. Rather, SSI uses verifiable credentials to assert information about a decentralized identifier in a trustworthy way.

Another important difference between X.509 certificates and DIDDocs is that the primary purpose of the DIDDoc is to bind the public key in the DIDDoc to a decentralized identifier, or DID, whereas X.509 certificates can bind the public key to a subject name and other information like a domain name. Extensions to the certificate allow it to also bind the public key to other information. The important distinction is that the DID is required and represents a unique name for the DIDDoc. A DID must have some means of resolving to the DIDDoc [1]. The DID provides a level of indirection to the public key. Consequently, the public key associated with a DID can be rotated without changing the DID and so it can be used as a permanent identifier. I won't get into the details around how this is done securely, but you can read far more detail at The Architecture of Identity Systems if you're curious.
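
For illustration, here is a minimal DID document sketched as a Python dict (my own example, loosely following the W3C DID Core vocabulary; the method, identifier and key value are all made up). The point is the indirection: the DID in the id field stays stable while the key material listed under verificationMethod can be rotated.

# A minimal, illustrative DID document (structure follows W3C DID Core; all values are made up).
did = "did:example:123456789abcdefghi"

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": did + "#key-1",
        "type": "Ed25519VerificationKey2018",
        "controller": did,
        "publicKeyBase58": "H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV",  # example key material
    }],
    "authentication": [did + "#key-1"],
}

# Key rotation: publish an updated DID document whose verificationMethod holds the new key.
# did_document["id"] does not change, so relationships that reference the DID stay valid.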

The veracity of an X.509 certificate is usually determined from the strictly hierarchical public key infrastructure (PKI). For example, when you visit a web site, your browser uses the X.509 certificate from the web site to establish a secure connection. If you click on the lock, you'll see information about that certificate. The web site's certificate was signed by some organization that is attesting to the information in the certificate. You can use the signing organization's certificate to get its public key and do the check. But how do you know that certificate is valid? It's signed using the private key whose public key is in yet another certificate, and so on. Eventually this has to stop, and it does when you get to a certificate that was stored in the browser when it was built. The CA/Browser Forum is the organization that determines which certificates are worthy to be inside browsers.
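
To make the chain-following concrete, here is a minimal sketch (mine, not from the post) using Python's cryptography library. It assumes RSA keys and two hypothetical PEM files, site.pem and issuer.pem, and verifies one link of the chain by checking the site certificate's signature with the issuer's public key.

# Verify one link of an X.509 chain with the issuer's public key (RSA assumed).
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def load(path):
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

site = load("site.pem")      # the web site's certificate (hypothetical file name)
issuer = load("issuer.pem")  # the certificate that claims to have signed it

# The issuer's public key must validate the signature over the child's "to be signed" bytes.
issuer.public_key().verify(
    site.signature,
    site.tbs_certificate_bytes,
    padding.PKCS1v15(),
    site.signature_hash_algorithm,
)
print("signature checks out; repeat up the chain until a certificate in the trust store is reached")

A browser performs essentially this check at every link until it reaches a root it already trusts.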

Showing the Certificate Hierarchy in Brave

In contrast, the veracity of the DID and associated DIDDoc is ascertained by a heterarchical method. The DID and DIDDoc are self-asserted and self-certifying. You can use cryptographic means to determine that the binding asserted in the DIDDoc has not been tampered with, but the DID infrastructure itself does nothing to tell you who or what the DID is bound to in a verifiable way. For that, we use verifiable credentials.

Suppose the DID in question is one Alice generated to give to Bravo Corp, her mortgage processor. Bravo knows nothing about the DID they've received except that it's bound, in the associated DIDDoc, with a specific public key (and possibly an endpoint of some kind). They ask Alice to prove things about herself as part of the mortgage application process and over time learn quite a bit. Alice proves her name and date of birth using a verifiable credential representing her driver's license. She proves her income using a verifiable credential from her employer, and her banking information using a verifiable credential from her bank. The information in each of these verifiable credentials is attested by its issuer: the DMV, the employer, and the bank. Bravo's reasons for trusting these organizations are up to Bravo:

They may be well known.

Bravo may have a prior relationship with them.

Bravo might ask them to prove things about themselves (using verifiable credentials, of course).

Or they may belong to a trust framework that Bravo can access publicly.

Furthermore, zero-knowledge proofs (ZKPs) [2] allow Alice to combine the attributes in these various credentials (and others) in a way that only discloses what Bravo is asking for and nothing more. And her digital wallet was able to do this for her automatically without Alice having to pick and choose the various attributes from various credentials herself. The proof shows that the information from these three credentials is all bound to the person who controls the DID that Alice gave to Bravo. The proof also shows that these credentials have not been revoked.
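
As a rough illustration of the data-minimization idea, here is a toy sketch (mine; it models none of the actual ZKP cryptography and no vendor's wallet API) of a wallet assembling a presentation that discloses only the attributes Bravo asked for, drawn from three credentials:

# Toy sketch of selective disclosure: pick only the requested attributes out of several
# credentials. Real ZKP presentations (e.g. BBS+) also prove the issuers' signatures and
# the holder binding cryptographically; none of that is modeled here.
credentials = {
    "drivers_license": {"issuer": "DMV", "name": "Alice", "date_of_birth": "1990-01-01"},
    "employment":      {"issuer": "Employer", "name": "Alice", "annual_income": 82000},
    "bank":            {"issuer": "Bank", "iban": "DE89370400440532013000", "account_holder": "Alice"},
}

proof_request = {
    "drivers_license": ["name", "date_of_birth"],
    "employment": ["annual_income"],
    "bank": ["iban"],
}

presentation = {
    cred: {attr: credentials[cred][attr] for attr in attrs}
    for cred, attrs in proof_request.items()
}
print(presentation)  # only the requested attributes are revealed to Bravo

A real ZKP-based presentation would additionally prove, cryptographically, that each disclosed value comes from a validly signed, unrevoked credential bound to Alice's DID.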

You can imagine Alice having X.509 certificates from the DMV, her employer, and her bank that attest these same things (through X.509 extensions). She would also have a personal certificate with her public key that she used to anchor each of these other certificates. The X.509 certificates are not linked in any way other than Alice's public key. She has to use the same public key in all of them so they can be correlated. She uses her personal certificate to prove she's in control of the public key she provides to the DMV, employer, and bank. If she changes her public key, she has to get new certificates. This is a good example of the dual nature of X.509 certificates. Alice's personal certificate looks like a DIDDoc, but the certificates with extensions look like verifiable credentials.

There's no easy way for Alice to restrict what attributes she shares when she shares these certificates. She has to share the entire certificate. Bravo would trust these certificates in the same way your browser does, by following the chain to some smallish set of trusted certificate authorities for each kind of certificate (driver's license, employer, or bank). Bravo would also check certificate revocation lists for each certificate to ensure they're still valid.

The advantage of X.509 certificates is that the technology, processes, and governance behind them are well-known and understood. No small thing. The public key infrastructure is well developed with a long history of securely communicating trustworthy public keys. DIDs and verifiable credentials are relatively new. Although standards, open source code, and multiple vendors exist, they are unproven compared to X.509.

So, why do something new? DIDs, DIDDocs, and verifiable credentials have several advantages over X.509 certificates:

DIDs are more secure. DIDs allow public keys to be rotated in a trustworthy manner. Consequently, Alice can rotate the key underlying the DID at will without having to get new credentials. The identifier lives as long as Alice needs it to. Alice won't be tempted to hold onto a potentially compromised key because she's worried about the inconvenience.

SSI uses the right tools for each part of the process. The SSI architecture cleanly separates providing an identifier for Alice from proving things about Alice. The binding between the DID and its associated public key can be verified cryptographically without relying on a hierarchical chain of authorities. The fidelity of the credential exchange can be verified cryptographically using information in a public credential registry (often a ledger of some sort). This separation allows the methods and tools to be crafted to the needs of each kind of document.

Verifiable credentials minimize information disclosure. Sharing only what's necessary protects Alice's privacy. This Webinar on ZKP-oriented Credentials from Daniel Hardman is an excellent, approachable tutorial on the many benefits of ZKPs for credential exchange.

SSI data sharing UX is safer. ZKPs provide convenience for Alice, saving her time and reducing the chance of her oversharing through human error (i.e. they are safer from a privacy perspective).

SSI has a consistent UX. SSI wallets and agents provide a good user experience for managing relationships, storing credentials, and responding to proof requests. As far as I know, X.509 certificate wallets do not exist as such, so they would need to be developed to provide a comparable user experience.

Verifiable credentials provide better interoperability. Alice is able to use multiple credentials from different issuers and prove things to many verifiers because of standards, not just for data formats, but also protocols for issuance and presentment. I know of no standards for how X.509 certificates can be used to prove the kind of information in the mortgage example in an interoperable way. They have been around for over 40 years and yet they are almost exclusively used for TLS and nothing else.

The high-level goals of X.509 certificates are similar to those of DIDs and verifiable credentials. But DIDs and verifiable credentials represent an innovation that takes learnings from 40 years of experience and new developments in cryptography into account to provide a better, more flexible solution to the problem of exchanging data in a trustworthy way. SSI in the form of DIDs and verifiable credentials promise a global, interoperable data exchange metasystem that is cryptographically sound with an excellent user experience.

Notes

[1] The resolution need not be global or public. For Peer DIDs, the resolution is local.

[2] Note that not all credential exchange methods use ZKPs. They should.

Tags: ssi identity x.509 pki decentralized+identifiers

Wednesday, 26. May 2021

Simon Willison

M1RACLES: M1ssing Register Access Controls Leak EL0 State

M1RACLES: M1ssing Register Access Controls Leak EL0 State You need to read (or at least scan) all the way to the bottom: this security disclosure is a masterpiece. It not only describes a real flaw in the M1 silicon but also deconstructs the whole culture of over-hyped name-branded vulnerability reports. The TLDR is that you don't really need to worry about this one, and if you're writing this k

M1RACLES: M1ssing Register Access Controls Leak EL0 State

You need to read (or at least scan) all the way to the bottom: this security disclosure is a masterpiece. It not only describes a real flaw in the M1 silicon but also deconstructs the whole culture of over-hyped name-branded vulnerability reports. The TLDR is that you don't really need to worry about this one, and if you're writing this kind of thing up for a news article you should read all the way to the end first!


FACILELOGIN

Introducing Keto, the open source implementation of Google’s Zanzibar

Zanzibar is Google's consistent, global authorization system; a project started at Google in 2012. It primarily plays two roles. It acts as a storage system for access control lists and groups used by hundreds of Google projects (Maps, Google Drive, Calendar, and so on) to store permissions of digital objects. Also, Zanzibar a

Zanzibar is Google's consistent, global authorization system; a project started at Google in 2012. It primarily plays two roles. It acts as a storage system for access control lists and groups used by hundreds of Google projects (Maps, Google Drive, Calendar, and so on) to store permissions of digital objects. Also, Zanzibar acts as an authorization engine, which those Google products query to check whether a given request is authorized or not.
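
To make the two roles concrete, here is a small conceptual sketch (mine, not Google's or Keto's actual API) of Zanzibar-style relation tuples of the form object#relation@user, together with a naive check against a stored set of tuples:

# Conceptual sketch of Zanzibar-style relation tuples and a naive check.
# Real systems (Zanzibar, Keto) also handle subject sets, namespace configuration
# rewrites and consistency tokens ("zookies"); none of that is modeled here.
tuples = {
    ("doc:readme", "owner", "user:alice"),
    ("doc:readme", "viewer", "user:bob"),
    ("group:eng", "member", "user:carol"),
}

def check(obj: str, relation: str, user: str) -> bool:
    """Return True if the tuple store says `user` has `relation` on `obj`."""
    return (obj, relation, user) in tuples

print(check("doc:readme", "viewer", "user:bob"))    # True
print(check("doc:readme", "viewer", "user:carol"))  # False (no tuple, and no group expansion here)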

At the moment Zanzibar manages 2 trillion+ tuples across 1,500+ namespaces, replicated in several dozen locations globally over 10,000+ servers. It handles 10 million+ client queries per second.

I recently learnt about Keto, an open source implementation of Zanzibar, from a Hacker News thread, and thought it would be a very interesting topic to discuss in our Silicon Valley IAM Meetup, so I invited Patrik Neu, the core maintainer of the Keto project, to share his insights with our meetup community. Patrik is based out of Munich, Germany, and has been contributing to open source projects since 2017.

Ory is the company behind Keto, and they are also in the process of building a SaaS offering on top of it.

Introducing Keto, the open source implementation of Google’s Zanzibar was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 25. May 2021

Mike Jones: self-issued

OpenID Connect Federation updated in preparation for third Implementer’s Draft review

The OpenID Connect Federation specification has been updated to add Security Considerations text. As discussed in the recent OpenID Connect working group calls, we are currently reviewing the specification in preparation for it becoming the third and possibly last Implementer’s Draft. Working group members (and others!) are encouraged to provide feedback on the draft soon […]

The OpenID Connect Federation specification has been updated to add Security Considerations text. As discussed in the recent OpenID Connect working group calls, we are currently reviewing the specification in preparation for it becoming the third and possibly last Implementer’s Draft.

Working group members (and others!) are encouraged to provide feedback on the draft soon before we start the foundation-wide review. We will probably decide next week to progress the draft to foundation-wide review. In particular, there’s been interest recently in both Entity Statements and Automatic Registration among those working on Self-Issued OpenID Provider extensions. Reviews of those features would be particularly welcome.

The updated specification is published at:

https://openid.net/specs/openid-connect-federation-1_0-16.html

Special thanks to Roland Hedberg for the updates!


Phil Windley's Technometria

Life-Like Anonymity and the Poison Web

Summary: Natural anonymity comes from our ability to recognize others without the aid of an external identity system. Online interactions will only be able to mirror life-like anonymity when we can use decentralized identity systems that don't force all interactions to be under the purview of centralized, administrative identity systems. Doc Searls published a piece last week entitl

Summary: Natural anonymity comes from our ability to recognize others without the aid of an external identity system. Online interactions will only be able to mirror life-like anonymity when we can use decentralized identity systems that don't force all interactions to be under the purview of centralized, administrative identity systems.

Doc Searls published a piece last week entitled "How the Cookie Poisoned the Web". Doc points to various privacy ills of Web 2.0 and in each instance says "Blame the cookie." Doc's larger point is that the web started out as a peer-to-peer publishing system that was wholly decentralized and gave everyone equal voice.

Doc continues:

But gradually a poison disabled personal agency. That poison was the cookie.

Very few web sites in the early web had identity systems. For the peer-to-peer sharing of documents and discovery via embedded links, none were needed. HTTP, the foundational protocol of the web is stateless, meaning the HTTP server does not know whether any two requests are related to each other.

Stateless is fine for document sharing and linking using hypertext. But it makes building a shopping cart really hard. Back in the mid-90's figuring out how to build a functional shopping cart was on everyone's mind, mine included. I was the cofounder and CTO of an early ecommerce site, imall.com. Without changing HTTP, the most promising strategy was to include a correlation identifier in all the links generated by the site, so we'd know who was making the request. But this was buggy and caused lots of customer support issues.

A correlation identifier is a unique string that can be used to link requests. Ultimately, the HTTP community added a correlation identifier called a "cookie" (which took its name from a correlation identifier used in Unix). HTTP cookies are generated by the server and stored on the browser. Whenever the browser makes a request to the server, it sends back the cookie, allowing the server to correlate all requests from that browser.

That all sounds innocuous enough and in theory, it is. But the devil is in the details. If I'm shopping on imall.com, I want the site to keep track of me because that provides utility and convenience. But it turns out that most web pages are not a single chunk of HTML that the server sends down. They have lots of other things, like javascript files and images, embedded in them too. These other things don't have to be from the same server. Each of those servers can set a cookie as well. And since they know where they were linked from, they can correlate activity across multiple websites.

This is how (simple) ad tracking works. When you see an ad on web site A, it's being served from a server owned by an ad company that web site A has an agreement with. The ad server plants a cookie in your browser. Now you visit web site B that also includes ads from the same ad server. Your browser dutifully reports the ad server cookie back to the ad server along with the information that the ad was on web site B. The company running the ad server now knows you were on web site A and web site B (along with lots of other metadata). Rather than correlating requests on a single web site, they are using cookies to correlate your activity across the web.
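
A minimal sketch of that mechanism (mine, not from the post), using only the Python standard library: a pretend third-party ad server that sets a cookie the first time a browser shows up and then logs which referring sites each cookie is seen on.

# Toy third-party "ad server": one cookie per browser, logged against the
# Referer of each page that embeds it. This is the whole cross-site trick.
import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

seen = {}  # cookie id -> set of referring sites that embedded this "ad"

class AdHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        uid = cookie["uid"].value if "uid" in cookie else str(uuid.uuid4())
        referer = self.headers.get("Referer", "unknown")
        seen.setdefault(uid, set()).add(referer)

        self.send_response(200)
        self.send_header("Set-Cookie", f"uid={uid}")  # the browser returns this on every later request
        self.end_headers()
        print(uid, "has now been seen on:", sorted(seen[uid]))

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AdHandler).serve_forever()

Every page that embeds a resource from such a server leaks its own URL via the Referer header, which is all the correlation the tracker needs.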

This is the poison Doc is talking about. The web cookie, as designed, goes well beyond correlating activity on a single web site for purposes of creating some utility like a shopping cart or a chat server. The web cookie allows correlating activities of people across the web. And it doesn't stop with your browsing history. The ad company starts knowing other things about you (because the web sites you visit tell them) and soon they can develop a comprehensive dossier.

Life-Like Anonymity and the Administrative Internet

In real life, we often interact with others—both people and institutions—with relative anonymity. For example, if I go to the store and buy a Coke with cash, there is no exchange of identity information necessary. Even if I use a credit card it's rarely the case that the entire transaction happens under the administrative authority of the identity system inherent in the credit card. Only the financial part of the transaction takes place in that identity system. This is true of most interactions in real life.

In contrast, in the digital world, very few meaningful transactions are done outside of some administrative identity system. There are several reasons why identity is so important in the digital world:

Continuity: While web sessions can be pseudonymous, as we've seen, they are often correlated across multiple independent sessions and devices using an authenticated correlation identifier. This allows, for example, the customer to have a shopping cart that not only persists across time but also on different devices.

Convenience: So long as the customer is authenticating, we might as well further store additional information like addresses and credit card numbers for their convenience, to extend the shopping example. Storing these allows the customer to complete transactions without having to enter the same information over and over.

Trust: There are some actions that should only be taken by certain people, or people in certain roles, or with specific attributes. Once a shopping site has stored my credit card, for example, I ought to be the only one who can use it. Identity systems provide authentication mechanisms as the means of knowing who is at the other end of the wire so that we know what actions they're allowed to take. This places identifiers in context so they can be trusted.

Surveillance: Unfortunately, identity systems also provide the means of tracking individuals across transactions for purposes of gathering data about them. This data gathering may be innocuous or nefarious, but there is no doubt that it is enabled by identity systems in use on the internet.

In real life, we do without identity systems for most things. You don't have to identify yourself to the movie theater to watch a movie or log into some system to sit in a restaurant and have a private conversation with friends. In real life, we act as embodied, independent agents. Our physical presence and the laws of physics have a lot to do with our ability to function with workable anonymity across many domains.

So, how did we get surveillance and its attendant effects on natural anonymity as an unintended, but oft-exploited, feature of administrative digital identity systems? Precisely because they are administrative.

Legibility

Legibility is a term used to describe how administrative systems make things governable by simplifying, inventorying, and rationalizing things around them. James C. Scott's seminal book, Seeing Like a State, nicely analyzes legibility and its unintended consequences. Venkatesh Rao has a great summary if you'd like the TL;DR.

Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed by James C. Scott

In this wide-ranging and original book, James C. Scott analyzes failed cases of large-scale authoritarian plans in a variety of fields. Centrally managed social plans misfire, Scott argues, when they impose schematic visions that do violence to complex interdependencies that are not—and cannot—be fully understood. Further, the success of designs for social organization depends upon the recognition that local, practical knowledge is as important as formal, epistemic knowledge.

Identity systems make people legible in order to offer continuity, convenience, and trust. But, as we've seen, that legibility also allows surveillance. In some respects, this is the trade off we always get with administrative systems. By creating legibility, administrative systems threaten privacy.

Administrative systems are centralized. They are owned. They are run for the purposes of their owners, not the purposes of the people or things being administered. They are bureaucracies for governing something. They rely on rules, procedures, and formal interaction patterns. Need a new password? Be sure to follow the password rules of what ever administrative system you're in.

Every interaction you have online happens under the watchful eye of a bureaucracy built to govern the system and the people using it. The bureaucracy may be benevolent, benign, or malevolent but it controls the interaction and people pay the price of the interpretive work necessary to figure out how it functions.

Real Life is Decentralized

On the other hand, in real life we interact as peers. We do interact with administrative systems of various sorts, but no one would describe that as real life. When I go to a store, I don't think about shopping within their administrative system. Rather, I walk in, look at stuff, talk to people, put things in a cart, and check out. The administrative system is there, but it's for governing the store, not the customers.

We can't have online interactions that feel like real life until we redecentralize the internet. The internet started out decentralized. The early web was decentralized. But the need for continuity, convenience, and trust led more and more interactions to happen within someone's administrative system.

Most online administrative systems make themselves as unobtrusive as they can. But there's no getting around the fact that every move we make is within a system that knows who we are and monitors what we're doing. In real life, I don't rely on the administrative system of the restaurant to identify the people I'm having dinner with. The restaurant doesn't need to check our IDs or surveil us in order to create an environment where we can talk and enjoy a meal together.

The good news is that we're finally developing the tools necessary to create decentralized online experiences. What if you could interact with your friends online on the basis of an identity that they bring to you directly—one that you could recognize and trust? You wouldn't need Facebook or WhatsApp to identify and track your friends for you.

Decentralized identity is the foundation for a decentralized web—a web that flexibly supports the kind of ad hoc interactions people have with each other all the time in real life. Until we have that, we'll never get an online world that mirrors real life and its natural anonymity.

Photo Credit: Poison Apple Sugar Cookies from Angelica Made Me (unknown)

Tags: identity web surveillance+capitalism decentralization


MyDigitalFootprint

Do our tools mould our outcomes and decisions?

Day zero of being a #CDO is probably not the best day to ask difficult questions; however, sometimes, there is no better day.  The first question to ask the executive leadership team as you walk around being introduced might be: “What is the one thing that we, as a team and organisation, want our data to drive, deliver or provide?” You might want to wait to ask this question and first de

Day zero of being a #CDO is probably not the best day to ask difficult questions; however, sometimes, there is no better day.  The first question to ask the executive leadership team as you walk around being introduced might be: “What is the one thing that we, as a team and organisation, want our data to drive, deliver or provide?”

You might want to wait to ask this question and first determine what tools are being used. This will frame what outcomes and decisions are being supported. The question and answers allow you to determine if there is an alignment or gap between "What is the one thing that we, as a team and organisation, want our data to drive, deliver or provide?" and what will happen anyway because of tools, processes and legacy. One critical aspect of being a #CDO is determining how our processes and methods only enable certain decisions to be made, but we have to unpack legacy.

Legacy within this framing is threefold. Decisions. Decisions. Decisions. These are:  

Decisions that created processes, methods and rules which were created by previous incentives and power games; and are now ghosts in the systems. These decisions were taken so long ago that no one knows why, how or when it was decided.  It is the way we do it; it is our IP, our brand.

Decisions that created "information and technology debt", including embedded and baked-in systems, hidden and no-longer-supported code, and automation based on tools and data that were biased when created.

Decisions that created noise in the hierarchy to lose or filter signals that someone did not want to hear.  It was the creation of layers, reports, practices, structural regulation and unchallenged assumptions.

Unpacking legacy questions will take time, and you will not be ready on Day-0. Therefore any responses you get have to be framed within such understanding when it arrives.  It is worth asking the question and then verifying before becoming blind to the tools that mould you. 

"What is the one thing that we, as a team and organisation, want our data to drive, deliver or provide?" Well, it could be:

Evidence-based, actionable insights

What should we automate?

How do we know we are doing the right thing?

Where are there efficiencies to be gained?

What do customers really want?

How to manipulate customers to increase margin and revenues?

Where are risks that we cannot see?

What is being hidden that we cannot see?

If you look at this list in the context of the tools and decisions that already frame the response, are these the questions we are looking to data to answer, or are we looking to data to affirm and justify what we have already decided? The one response no one will give is "to justify what we are already doing!"

Data has bias because of previous decisions. Therefore, a #CDO has to find non-data tools to check what decisions from the past are biasing the current data, processes and tools. We cannot usefully answer the question we have set ourselves, "What is the one thing that we, as a team and organisation, want our data to drive, deliver or provide?", without understanding the situation.

The CTO knows that they have to build a new platform when the bug list, new feature development and maintenance will cost more and take more time than developing a new platform - this is the technology debt question. As the #CDO, you have to understand what your information debt is. The CTO will struggle because there is no clear path from policy to code. Similarly, as the CDO, we struggle with no clear path from policy (the one thing) to better data for the decisions we require. You inherit, and are now accountable for, previous decisions, which constrain what is now possible because the biased tools have created what we have. The costs of collecting, labelling, holding, sorting and creating training data continually increase, creating a more significant gap and misalignment in values and expectations from data.

“We become what we behold. We shape our tools, and then our tools shape us”  is often mistakenly attributed to Marshall McLuhan (as in the image above) and called McLuhan Law. The quote was actually written by Father John Culkin, SJ, a Professor of Communication at Fordham University in New York and friend of McLuhan. Such is the problem with data. 

Perhaps the opening question is the wrong one? Perhaps we should reflect on these questions as a leadership team.

What do we want to become, and what tools and data will help us?

What tools do we use, and what will they enable us to become? 

What is the minimum viable data set required to give the best value? 

Do our tools and data trap us?



Hans Zandbelt

Using an OAuth 2.0 Resource Server with Certificate-Bound Access Tokens

ZmartZone has implemented OAuth 2.0 Resource Server functionality in Apache/NGINX modules so these components can be used as a reverse proxy in front of APIs or other backends. In such a setup the backend does not have to deal with … Continue reading →

ZmartZone has implemented OAuth 2.0 Resource Server functionality in Apache/NGINX modules so these components can be used as a reverse proxy in front of APIs or other backends. In such a setup the backend does not have to deal with security but outsources it to a proxy sitting in front of it in a similar way that TLS termination is often offloaded to a load-balancer.

Most of the OAuth 2.0 deployments today use so-called bearer access tokens that are easy to deploy and use. This type of access token is not bound to the Client presenting the token, which means that an attacker intercepting an access token can just use that token to get access to the resources/APIs/services as if it were the Client. So-called Proof-of-Possession semantics for access tokens prevent that type of attack and provide a more secure setup, but such a system is typically harder to implement, deploy and maintain.

A relatively simple variant of Proof-of-Possession for access tokens is specified in RFC 8705 OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens. This specification leverages a (possibly self-signed) certificate held by the Client to bind an access token cryptographically to the private key associated with that certificate.
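
As a rough sketch of what that binding check looks like on the resource server side (my own illustration, not liboauth2 code), RFC 8705 says the base64url-encoded SHA-256 thumbprint of the client certificate presented on the TLS connection must match the x5t#S256 value in the access token's cnf claim:

# Sketch of the RFC 8705 binding check: the base64url-encoded SHA-256 hash of the
# DER-encoded client certificate must equal the token's cnf["x5t#S256"] claim.
# Assumes the JWT has already been signature-verified and decoded into `claims`.
import base64
import hashlib

def cert_thumbprint_s256(client_cert_der: bytes) -> str:
    digest = hashlib.sha256(client_cert_der).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

def token_is_bound_to_cert(claims: dict, client_cert_der: bytes) -> bool:
    expected = claims.get("cnf", {}).get("x5t#S256")
    return expected is not None and expected == cert_thumbprint_s256(client_cert_der)

# Hypothetical usage:
# claims = {"sub": "client-123", "cnf": {"x5t#S256": "A4DtL2JmUMhAsvJj5tKyn64SqzmuXbMrJa0n761y5v0"}}
# token_is_bound_to_cert(claims, open("client.der", "rb").read())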

This specification is implemented in liboauth2 1.4.1 which is used in the Apache module mod_oauth2 3.2.1. This means that you can now require and verify OAuth 2.0 certificate bound access tokens for your API in a very simple way that is easy to deploy. All it takes is an Apache server in front of your API, configured with something like:

AuthType oauth2
OAuth2TokenVerify jwk "{\"kty\":\"RSA\",\"kid\":\"one\",\"use\":\"sig\",\"n\":\"...\",\"e\":\"AQAB\" }" type=mtls&mtls.policy=optional
SSLVerifyClient optional_no_ca

Acknowledgement: this work was performed in a partnership with Connect2ID. Thanks Vladimir Dzhuvinov.


Simon Willison

Weeknotes: Spinning back up on Datasette

I've been somewhat distracted from Datasette for the past couple of months, thanks to my work on VIAL and the accompanying open source project django-sql-dashboard. This week I scraped back some time to work on Datasette. ?_facet_size=100 This was a feature suggestion by Marjorie Roswell. Datasette defaults to showing a maximum of 30 results for a requested facet, which can be inconvenient if

I've been somewhat distracted from Datasette for the past couple of months, thanks to my work on VIAL and the accompanying open source project django-sql-dashboard. This week I scraped back some time to work on Datasette.

?_facet_size=100

This was a feature suggestion by Marjorie Roswell. Datasette defaults to showing a maximum of 30 results for a requested facet, which can be inconvenient if you are faceting on something like U.S. states where you'd ideally like to see all 50 at once.

You can customize this with the default_facet_size setting, but now you can also customize it for a specific page using the new ?_facet_size=100 query string parameter. You can request up to max_returned_rows results, which defaults to 1000.
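
As a quick usage example (mine, against a hypothetical Datasette instance, and assuming the default table JSON shape with its facet_results key), the same parameters work on the JSON API as on the HTML table view:

# Fetch up to 100 facet values for a "state" column from a hypothetical Datasette
# instance's JSON API; the _facet and _facet_size parameters match the HTML view.
import requests

resp = requests.get(
    "https://example-datasette.example.com/mydb/mytable.json",
    params={"_facet": "state", "_facet_size": 100},
)
print(resp.json().get("facet_results"))  # facet values, up to the requested size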

You can try this feature out, along with a bunch of dependency upgrades and a minor bug fix, in the new 0.57a0 alpha release.

apt-get install now works in the Docker container

The datasetteproject/datasette Docker container had a bug in it where it was not possible to use apt-get install to add extra packages to containers that used it as a base - a problem if you want to use a plugin such as datasette-ripgrep which depends on ripgrep being available.

Brandon Roberts pointed this out along with a suggested fix, which I've adopted and will become available after the next non-alpha Datasette release.

Show/hide columns

I'm developing this in a branch at the moment, but I'm finally getting close to a fix for issue #615 (opened in November 2019) that will let the Datasette table view show and hide specific columns, using new ?_col= and ?_nocol= query string arguments. Here's an animated demo of my work so far:

Releases this week

django-sql-dashboard: 0.15 - (29 releases total) - 2021-05-25 - Django app for building dashboards using raw SQL queries
datasette: 0.57a0 - (86 releases total) - 2021-05-23 - An open source multi-tool for exploring and publishing data
geojson-to-sqlite: 0.3.1 - (7 releases total) - 2021-05-20 - CLI tool for converting GeoJSON files to SQLite (with SpatiaLite)

TIL this week

Switching between gcloud accounts
Finding duplicate records by matching name and nearby distance
Docker Compose for Django development

Monday, 24. May 2021

Simon Willison

HackSoft Django styleguide: services and selectors

HackSoft Django styleguide: services and selectors

HackSoft's Django styleguide uses the terms "services" and "selectors". Services are functions that live in services.py and perform business logic operations such as creating new entities that might span multiple Django models. Selectors live in selectors.py and perform more complex database read operations, such as returning objects in a way that respects visibility permissions.
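
To make the split concrete, here is a minimal Django sketch of the pattern as described; the function and field names are invented for illustration and are not taken from HackSoft's guide:

# services.py - write operations that hold the business logic, possibly spanning several models.
from django.contrib.auth.models import User


def create_account(*, email: str) -> User:
    user = User.objects.create(username=email, email=email)
    # ... create any related objects (profile, team membership, etc.) here ...
    return user


# selectors.py - read operations that encapsulate visibility rules.
from django.db.models import QuerySet


def visible_users(*, viewer: User) -> QuerySet:
    queryset = User.objects.all()
    if not viewer.is_staff:
        # non-staff viewers only see active accounts
        queryset = queryset.filter(is_active=True)
    return queryset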


How to look at the stack with gdb

How to look at the stack with gdb

Useful short tutorial on gdb from first principles.


MyDigitalFootprint

What occurs when physical beings transition to information beings?

You are standing in front of Deep Thought, the AI in The Hitchhiker's Guide to the Galaxy. You are tasked with asking the famous question: “What is the answer to the world, the universe and everything?” In Douglas Adams' original work, Deep Thought responds by saying “it will take some time”, before coming back sometime later with the answer “42.” However, that was when data was a statistic, and we were “physical beings” who had not considered what experience would emerge as data became dynamic and we transitioned to “information beings.”

In our new context as informational beings, Deep Thought would not just take our question and go off to think about it, but would respond. Deep Thought would do what we do: ask for clarification and check its understanding of the question. I can imagine Deep Thought asking, “I just want to check what you mean by ‘the answer to the world, the universe and everything’, and do you want a brave, comfortable, or courageous answer? I can do a kind, generous or safe one too. What type of answer do you want?”

This is the problem we are facing. The questions that we have learnt for governance and oversight over millennia were fabulous for determining who decides, and who decides who decides, in a physical world where a human was held responsible and accountable. The issue of oversight becomes very different when automation and machines/AI make decisions for us: how do we know who decides? Our questions will not cut it.

When we apply our old governance, old oversight, old accountability models and old frameworks to new problems, it feels like there is a disconnect and a level of dysfunctionality, because the old model was not designed for the new problems. There is forward capability, insomuch as old problems can be addressed better on the new systems, but the old systems cannot help solve or answer new questions. There is a backwards capability issue (not backwards compatibility), just as you cannot run the latest software on the earliest computers.

Right now we appear to be lost (which the media portrays as division or civilisation collapse, creating divides to gain clicks and generate revenue). In reality, we are transitioning from "physical" to "information" beings. I take comfort that the best minds at MIT cannot decide either: atoms to bits, or bits to atoms. If MIT is wrestling with it, we know we are in a transition.

The transition to information beings is reaching a chasm or hurdle, as the old model we depended on for governance and oversight is at the edge of its limits. Data can perfectly tell us how effective and efficient everything is, including the deviation from plan, but data cannot tell us if the plan was right or if we are doing the right thing (efficacy). We depended on the human oversight model for this. The former, efficiency, is easy; the latter is really quite tricky, as it is so contextual to the moment in time and the values/principles we currently hold. Data can tell us from where we came, and our models based on that data can suggest where we will go, but as yet we cannot determine if we should. As an adventurer, I tend to do it anyway, ignoring "should", because it is there, and my scars are evidence that it does not always work out well.

The data, models, algorithms, ML and AI we have now created are capable of informing us if it is the right thing to do, but how do we know?

Writing things down, TV, radio, social media, books, screens and printing created threats to established power structures that the old system of control adapted to embrace over time. However, I believe that the current oversight, governance, compliance and regulatory controls, which are based on the last 5,000 years of adaptation as physical beings and are not bad in truth, CANNOT adapt to the next evolution from physical to information. Indeed our infrastructure gave rise to data, which is creating us as information beings. Our current models for helping us determine if we are doing the right thing fall apart as we evolve to become information beings, because of a shift in power and control. Our current oversight was not designed for autonomous decision-making by a non-human, and whilst the old methods have adapted to many situations, this move to information beings appears, right now, to break the capabilities of current governance.

We feel the gap when we read the news but don't as yet have the language or words to describe it. We deny and hide from this reality by focussing on back-to-normal growth and trying to create new models.

Governance follows change; it does not lead change. Governance and oversight help set up stability when we arrive at a new place. It is a flywheel for path dependency until someone breaks the axle. We are lucky to be alive now, seeing and witnessing this transition, and can apply our minds to support it.

As physical beings, over generations we learnt that:

power and control matter

monopoly and scale in a vertical matter

accountability and responsibility limited to a very narrow scope works, and we can ignore any wider consequences

relationships have limited context

values and principles change slowly, allowing for rules to create an impression of stability

governance and oversight work to determine if we are doing the right thing by slowing decision time, filtering noise and using hierarchy to create signals


As informational beings, we have not quite worked out the new game; however:

information and data must have attestation and rights as they matter

real-time learning models with computational power really matters

dependent and inter-dependence ecosystems ensure consequences matter

commons and the quality of the human experience on earth matter

collective and individual accountability and responsibility enable oversight in real-time


I believe we feel lost and vulnerable as we are starting to see that our lack of diversity, biased education, retention of power and control mechanisms, along with oversight and governance, worked for some of us some of the time as physical beings.  However, they are not suitable when we arrive at "information beings," wanting everything to work for most people most of the time. Utopia is still a dream.  We are at the start of a journey, and right now, we lack language, words, stories, rhetoric, meaning, models, structures, frameworks and a north star. 

We know that growth fuels increasing salaries, which is a mechanism for control by creating more debt, as you can borrow more. Growth hides and maintains the lie about change for longer. It is the flywheel. Increasing lifestyles create more people who have to remain aligned: the balance of sufficient salary against enough debt that you remain conformist and will do as needed. Data means we can now pick up signals from employees about who is conformist because they are trapped by debt. They are the easiest to corrupt, will be the weak link in the security chain, and allow someone to control or exert undue influence - keep the model and the belief. The wheels are coming off, and the axles are breaking.

What happens when an AI, which is not paid and is not aligned to the power plays, makes decisions for the best outcome? How will you control the AI, the way we use salary to control humans?

Such interesting times



The diminishing value of a data set

Source: worth observing that this Dilbert was from 1993!

Radar and sonar are incredible inventions as they allow us to perceive what cannot be seen with the naked eye. As technology has advanced and big data analysis has emerged, we have gone from a simple echo to high-quality resolution. However, the peak value of radar is that it informs you something is there, which requires low resolution and very little data. As radar resolution has improved we can get direction and speed, which requires a little more time; this new information definitely adds value to any required reactive decision. Identifying what the actual object is through increased resolution has incremental value, but not as much as knowing that it is there, in what direction and at what speed. Such information can lead to a better decision, but suddenly there is an economics of cost compared to the incremental improvement in outcome. Knowing the type of bird by species, or the plane by manufacturer, adds cost and data set size but does not add any value in terms of the decision requirement.

Today, data scientists, enthusiasts and idealists are asking you to store vast quantities of data from your customers and ecosystem in the hope that at some point algorithms and AI technology will give you new rich and deep insights - the radar equivalent of resolving which rivet types were used in the construction of the inbound missile. We have to question at what point, even though it is technically possible, the extra resolution stops adding value.

"The more data you collect, the better the decisions you will be able to make in the future" is a lie that is driving boards to take on data costs and data risk beyond the scope of any calculable return. "Collect all the data and store it, as we don't know what we might discover" is hope and choice; it is not a rational business decision. The hope is wrapped up in a power pitch that our competitive advantage will be the discovery of, and ability to perform, magical acts of mind manipulation which will supercharge sales through the control of customers. Or is it fear that everyone else will have this and we will have no future - the FOMO of the big data promise?

Investing in the future is a fantastic idea when collecting moon or Martian dust and storing it until we improve our science tools and instruments, but for a company right now this is a lie. We are collecting and storing data in the hope that an algorithm will be able to mine more value than the cost of collection, storage and algorithm development. Right now there is little evidence from all the work done that using data to change behaviour works at scale. This is a real cost today for "hope", which the CFO has not found an ethical way to put on the balance sheet.

We have been sold a vision that the more data we have, the better the decisions and insights we will have. Any data scientist or statistician will confirm that the resolution of a decision or insight will not substantially improve beyond a certain data set size. You are not keeping and storing data for better decision-making and insight today, just in the hope of new insights tomorrow. But the COO, CIO, CDO or CTO who is empire-building, and who has determined that size equates to power within your culture, depends on fear, uncertainty and doubt, and on directors insufficiently skilled to challenge this. The tech looks like magic, and every consulting company sees this as a revenue stream and writes reports to support this view.
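
The diminishing-returns claim is easy to check for the simplest case of estimating an average: the standard error shrinks only with the square root of the sample size, so each tenfold increase in data buys roughly a threefold improvement in resolution. A small, self-contained Python illustration using synthetic numbers rather than any real business data:

import math
import random

random.seed(42)

# Estimate a population mean from samples of increasing size. The point of the
# illustration: each 10x increase in data shrinks the error by only ~sqrt(10),
# so the marginal value of more data falls off quickly.
population_mean, population_sd = 100.0, 15.0

for n in [100, 1_000, 10_000, 100_000, 1_000_000]:
    sample = [random.gauss(population_mean, population_sd) for _ in range(n)]
    estimate = sum(sample) / n
    standard_error = population_sd / math.sqrt(n)
    print(f"n={n:>9,}  estimate={estimate:8.3f}  approx. error={standard_error:.3f}")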

If you are a sceptic of the big data story, this article nicely aligns with your affinity and belief, and you will see your next action as stopping the ever-increasing data spend and focusing on traditional business. Yet, whilst difficult to understand, data is the right answer to business problems. The takeaway should be to question: Where is the diminishing return on data for your organisation? How do you know the data is good? How do you know the analysis is good? How do you know if what you are doing is the right thing to do? How do you incentivise the right actions?

Data is not the problem; too much data will not create the return we hope for given the cost we carry, but we have to remain focussed on the fact that data is essential. How do we make data work for most of the people most of the time, and not for a few people on the odd occasion?


Damien Bod

Secure an Angular SPA and an ASP.NET Core API using Auth0

This article shows how to implement an Angular single page application with an ASP.NET Core API and secured using the Open ID Connect code flow with PKCE and OAuth JWT Bearer tokens to protect the API. The identity provider is implemented using Auth0. The flow uses refresh tokens to renew the SPA session and the revocation endpoint is used to clean up the refresh tokens on logout.

Code: https://github.com/damienbod/Auth0AngularAspNetCoreApi

Setup

The solution consists of three parts: an ASP.NET Core API which provides the data in a secure way, an Angular application which uses the data, and the Auth0 service which is used as the identity provider. Both applications are registered in Auth0 and refresh tokens are configured for the SPA. The API can be used from the SPA application.

Angular SPA Code flow PKCE with refresh tokens

The Angular Open ID Connect client is implemented using the npm package angular-auth-oidc-client. The Auth0 client requires two special configurations to use an API. The audience is added as a custom parameter in the authorize request so that the required API can be used. The customParamsRefreshToken is used to add the scope parameter to the refresh request which is required by Auth0. The rest is standard Open ID Connect settings used for code flow using PKCE and refresh tokens.

import { APP_INITIALIZER, NgModule } from '@angular/core';
import { AuthModule, LogLevel, OidcConfigService } from 'angular-auth-oidc-client';

export function configureAuth(oidcConfigService: OidcConfigService) {
  return () =>
    oidcConfigService.withConfig({
      stsServer: 'https://dev-damienbod.eu.auth0.com',
      redirectUrl: window.location.origin,
      postLogoutRedirectUri: window.location.origin,
      clientId: 'Ujh5oSBAFr1BuilgkZPcMWEgnuREgrwU',
      scope: 'openid profile offline_access auth0-user-api-spa',
      responseType: 'code',
      silentRenew: true,
      useRefreshToken: true,
      logLevel: LogLevel.Debug,
      customParams: {
        audience: 'https://auth0-api-spa', // API app in Auth0
      },
      customParamsRefreshToken: {
        scope: 'openid profile offline_access auth0-user-api-spa',
      },
    });
}

@NgModule({
  imports: [AuthModule.forRoot()],
  providers: [
    OidcConfigService,
    {
      provide: APP_INITIALIZER,
      useFactory: configureAuth,
      deps: [OidcConfigService],
      multi: true,
    },
  ],
  exports: [AuthModule],
})
export class AuthConfigModule {}

An AuthInterceptor class is used to add the access token to requests sent to the secured APIs. It is important that the access token is only sent to the intended API and not with every outgoing HTTP request.

import { HttpInterceptor, HttpRequest, HttpHandler } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { AuthService } from './auth.service';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  private secureRoutes = ['https://localhost:44390'];

  constructor(private authService: AuthService) {}

  intercept(request: HttpRequest<any>, next: HttpHandler) {
    if (!this.secureRoutes.find((x) => request.url.startsWith(x))) {
      return next.handle(request);
    }

    const token = this.authService.token;
    if (!token) {
      return next.handle(request);
    }

    request = request.clone({
      headers: request.headers.set('Authorization', 'Bearer ' + token),
    });

    return next.handle(request);
  }
}

ASP.NET Core API OAuth

The ASP.NET Core API allows requests from the calling SPA application. CORS is enabled for the application. The AddAuthentication method is used to add JWT bearer token security and the policies are added to verify the access token. The UseAuthentication method is used to add the security middleware.

public void ConfigureServices(IServiceCollection services)
{
    // ...
    JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();
    // IdentityModelEventSource.ShowPII = true;

    // only needed for browser clients
    services.AddCors(options =>
    {
        options.AddPolicy("AllowAllOrigins",
            builder =>
            {
                builder
                    .AllowCredentials()
                    .WithOrigins("https://localhost:4204")
                    .SetIsOriginAllowedToAllowWildcardSubdomains()
                    .AllowAnyHeader()
                    .AllowAnyMethod();
            });
    });

    services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    }).AddJwtBearer(options =>
    {
        options.Authority = "https://dev-damienbod.eu.auth0.com/";
        options.Audience = "https://auth0-api-spa";
    });

    services.AddSingleton<IAuthorizationHandler, UserApiScopeHandler>();

    services.AddAuthorization(policies =>
    {
        policies.AddPolicy("p-user-api-auth0", p =>
        {
            p.Requirements.Add(new UserApiScopeHandlerRequirement());
            // Validate id of application for which the token was created
            p.RequireClaim("azp", "Ujh5oSBAFr1BuilgkZPcMWEgnuREgrwU");
        });
    });

    services.AddControllers(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    });
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // ...
    app.UseCors("AllowAllOrigins");
    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

The UserApiScopeHandler class implements the AuthorizationHandler to require the UserApiScopeHandlerRequirement requirement which is used as the policy.

public class UserApiScopeHandler : AuthorizationHandler<UserApiScopeHandlerRequirement>
{
    protected override Task HandleRequirementAsync(AuthorizationHandlerContext context,
        UserApiScopeHandlerRequirement requirement)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));
        if (requirement == null)
            throw new ArgumentNullException(nameof(requirement));

        var scopeClaim = context.User.Claims.FirstOrDefault(t => t.Type == "scope");
        if (scopeClaim != null)
        {
            var scopes = scopeClaim.Value.Split(" ", StringSplitOptions.RemoveEmptyEntries);
            if (scopes.Any(t => t == "auth0-user-api-spa"))
            {
                context.Succeed(requirement);
            }
        }

        return Task.CompletedTask;
    }
}

The UserOneController class uses the policy which validates the access token and the claims from the token.

[SwaggerTag("User access token protected using Auth0")]
[Authorize(Policy = "p-user-api-auth0")]
[ApiController]
[Route("api/[controller]")]
public class UserOneController : ControllerBase
{
    /// <summary>
    /// returns data if the correct Auth0 access token is used.
    /// </summary>
    /// <returns>protected data</returns>
    [HttpGet]
    [ProducesResponseType(StatusCodes.Status200OK)]
    [ProducesResponseType(StatusCodes.Status401Unauthorized)]
    public IEnumerable<string> Get()
    {
        return new List<string> { "user one data" };
    }
}

Problems, notes, Improvements

Auth0 supports the revocation endpoint, which is really good, so the refresh token can be revoked when the Angular application is logged out. This is really a MUST, I think, if using refresh tokens in the browser. It is not possible to revoke the access tokens, so these remain valid after the SPA app logs out. You could reduce the lifespan of the access tokens, which would improve this a bit. Auth0 does not support reference tokens and introspection, which I would always use for SPA authentication. Introspection could be supported by using a different identity provider. Refresh token rotation is really important when using refresh tokens in the browser; this should also be configured.
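
For reference, the revocation call itself is a plain POST to the provider's revocation endpoint in the style of RFC 7009; the SPA performs it on logout, but the shape of the request is easy to see from any HTTP client. A rough Python sketch with made-up tenant, client id and token values (Auth0 documents the endpoint as /oauth/revoke; check your tenant's documentation for the exact parameters it accepts):

import json
import urllib.request

# Hypothetical tenant domain, client id and token values, shown only to illustrate the call.
tenant = "https://dev-example.eu.auth0.com"
body = json.dumps({
    "client_id": "YOUR_SPA_CLIENT_ID",
    "token": "THE_REFRESH_TOKEN_TO_REVOKE",
}).encode("utf-8")

request = urllib.request.Request(
    tenant + "/oauth/revoke",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# A 200 response with an empty body indicates the refresh token can no longer be used.
with urllib.request.urlopen(request) as response:
    print(response.status)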

Using Auth0 with an SPA means you cannot fully log out. The tokens are also stored somewhere in the browser, but at least the refresh token can be revoked, which is really important. To improve security, you could switch to a BFF architecture and remove the tokens from the browser. Then it would also be possible to fully log out. The BFF also allows for client authentication and other security features which are not possible with an SPA.

Links

https://auth0.com/docs/quickstart/webapp/aspnet-core

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction

Open ID Connect

https://www.npmjs.com/package/angular-auth-oidc-client

Friday, 21. May 2021

Hyperonomy Digital Identity Lab

NETAGO Downtime – May 20-21, 2021

Hardwired ThinkPad laptop: Windows 10 Pro, version 20H2, OS build 19042.928: 716 failures, 75.67% downtime

WiFi wireless Lenovo laptop: Windows 10 Pro, version 20H2, OS build 19042.985: 753 failures, 6.86% downtime

Log files: netuptime-20210519-223828-tp-hardwired, netuptime-20210518-051106-lenovo-wifi

Thursday, 20. May 2021

MyDigitalFootprint

What Happens When the (commercial) Model Wins?

source : https://dilbert.com/strip/2021-05-20

The Insurance model wins through fear, which makes you play it safe and limit risk, as they only pay out on sure bets.

The Banking model wins when growth fuels a salary increase and you borrow more than you have or save. 

The Food production model wins when you stop cooking and you become addicted to sugar and salt (convenience) as they control all the supply.

The Supplements model wins by fear and misinformation because you are not cooking. 

The Logistics and transport model wins with long complex supply chains due to higher levels of specialism creating better EOS.

The Legal market wins every way no matter what anyone else does, but especially the wealthy.

The Pharma Industry wins when you medicate and hide the issue rather than solve root causes.

The Consulting market wins because they promote that someone else is winning which drives the fear of losing your job because the banking industry has control by indebtedness of you both.

The Compute/ tech/ IT model wins by remaining sufficiently advanced to appear like magic.

All the Extraction models win by creating demand for the new and throwing away the old to recycle, which only reduces their extraction costs.

The Waste model wins by media, Instagram, images, lifestyle and no one incentivized to recycle or repair.

The Energy model wins when its efficiency works and everyone else's fails, or when we are too lazy.

The Media industry wins by division, snippets, shortness, news and fueling opinion.

The Travel and entertainment market wins because it enables us to avoid for a moment that we are losing to the models that control us.

The Economic and money model wins because we all believe it.

The Species model wins by growth, and our interdependent economic models work together without having to worry if it is doing the right thing. 

Ecology wins and loses together but has an ebb and flow of changing balance

The Health and medical model wins as the ebb and flow have become more ebb and less flow.

The Incentive model wins as we are trapped by the other models.

The political model wins because we cannot hold to account anyone who decides and cannot determine who decides who decides because they make us think we did. 

Every one of the eight bn people on earth interprets each model based on bias, framed by their context, experience, and beliefs. 

So what happens when the model wins? Surely something else must come second, lose or be compromised.  Right now, as our models win the Earth is not compromised, as the Earth will remain here for another few billion years irrespective of what we do - it will just have a different balance. The compromise is the burden of our future generations from whom we have removed choice.  We have not questioned and determined what lifestyle they want; we have just focussed on the one we can get. 

Can the Earth also win if the model wins? Definitely. However, can future generations enjoy and have the life we expect if our models remain? That is a hope we need to question.

 


@_Nat Zone

[Talk announcement] Trust formation after COVID (2021-05-21)

I will be speaking at the 25th Shirahama Symposium on Cybercrime, whose theme is "trust" in cyberspace, now more than ever: cloud security and zero trust networks. My talk is titled "Trust formation after COVID" (21 May 2021, 13:55-14:35). Afterwards I will join a panel discussion titled "Security in an era without trust", moderated by Professor Tetsutaro Uehara, together with Shoji Kono (Microsoft Japan) and Mitsuhiko Maruyama (PwC Consulting LLC).
The full programme is available at https://sccs-jp.org/symposium25/lecture/#lecture03. (The time listed for the panel is wrong, though...)

See you online, then.

Day 2, 13:55-14:35

Trust formation after COVID

Nat Sakimura (NAT Consulting LLC)

The COVID-19 pandemic has forced many people, whether they like it or not, to shift the centre of their lives onto the internet: a forced migration to the cyber continent (the eighth continent). As a result, problems with the way "trust" has worked under our existing customs have come pouring out.

In this session I will pick out and introduce the main issues among these, and consider how they are thought best solved, drawing on developments in other countries and in standardisation.

Wednesday, 19. May 2021

Doc Searls Weblog

Making useful photographs

What does it mean when perhaps hundreds of thousands of one’s photos appear in articles, essays and posts all over the Web?

It means they’re useful. That’s why I posted the originals in the first place, and licensed them to require only attribution. Because of that, I can at least guess at how many have been put to use.

For one example subject, take Lithium, a metal in the periodic table. Lithium is making news these days, because it’s both scarce and required for the batteries of electric and hybrid vehicles. At issue especially is how and where lithium is extracted from the Earth. As Ivan Penn and Eric Lipton put it in The Lithium Gold Rush: Inside the Race to Power Electric Vehicles (6 May in The New York Times), extraction “might not be very green.”

But it is blue. Or turquoise. Or aqua. Or whatever colors you see in the photo above.

I took that shot on a 2010 flight over Nevada. Looking out the window, it’s hard to miss lakes of bright colors on the desert floor, looking like stained glass windows into the Earth. I didn’t know at the time that the puddles were lithium, but I did know they’d be useful when I published them, along with whatever information a little bit of research would reveal about them. After I did the research, I put 17 photos in an album on Flickr titled Lithium Mines in Nevada and added the same set to another album called Mines and Mining, which is now 329 photos long.

Also on that flight, which produced 130 photos now in an album called 2010_08_06 rno-phx-bos, other topics of interest are the Shoshone Mountains, Yucca Lake and Yucca Flat (with "subsidence craters" over underground nuclear bomb explosions), the Nevada Test Site (where hundreds of atomic bomb tests took place and, among other interesting things, "Doom Town" on Frenchman Lake), Groom Lake in Area 51, Creech Air Force Base (from which military drones are remotely controlled), the Grand Canyon, and Buffalo at night. None of the photos of mine at those links (all in Wikipedia) are especially artistic. In fact most of them make me cringe today, because I hadn't yet mastered Photoshop when I posted them in the first place. Back then I shot only .jpgs, rather than RAW photos, which means I can't go back and do much to improve them. But all are useful, especially to writers and publications covering the topic of lithium mining. For example, my photos of those lithium lakes appear in:

Biden clean energy talk fuels mining reform bills in E&E
What an ancient lake in Nevada reveals about the future of tech in Fast Company
TRANSITION TO ELECTRIC CARS NEED NOT DEMAND A TOXIC LITHIUM LEGACY, in Energy Mix
Leading the Charge…To Lithium And Beyond? in Nevada Forward
Lithium: Commodity Overview in Geology for Investors
Lithium mining innovators secure investment from Bill Gates-led fund in Mining Technology
The Path to Lithium Batteries: Friend or Foe? in Treehugger

And those are just the first six among 23,200 results in a search for my name + lithium. And those results are just from pubs that have bothered to obey my Creative Commons license, which only requires attribution. Countless others don’t.

Google also finds 57,400 results for my name + mining. On top of those, there are also thousands of other results for potash, river, geology, mining, mountains, dunes, desert, beach, ocean, hebrides, glacier, and other landforms sometimes best viewed from above. And that’s on top of more than 1500 photos of mine parked in Wikimedia Commons, of which many (perhaps most) are already in Wikipedia (sometimes in multiple places) or on their way there.

And those are just a few of the many subjects I’ve shot, posted and annotated to make them useful to the world. Which is why I’m guessing the number of photos actually being used is in the hundreds of thousands by now.

I have placed none of those photos in any of those places. I just put them up where they can easily be found and put to use. For example, when I shot Thedford, Nebraska, I knew somebody would find the photo and put it in Wikipedia.

Shots like these are a small percentage of all the photos I’ve taken over many decades. In fact, most of my photography is of people and scenes, not stuff like you find in the links above.

But apparently my main calling as a photographer is to push useful photos to the edge of the public domain, and to describe and tag them in ways that make them easy for researchers and journalists to find and use. And so far that has been a very successful strategy.

Addendum:::

So I have a camera question for the fellow photographers out there.

My main camera is a 2012-vintage Canon 5D Mark III, which replaced a 2005-vintage Canon 5D (source of the lithium lake shots), which replaced a Canon 30D of the same generation, and a Nikon Coolpix before that. All of these are retired or beat up now. Being, um, resource constrained, every camera and lens I've used in this millennium I've either rented or bought used.

Now, out of great kindness, an old friend is giving me a Sony a7R that has been idle since she replaced it with a Sony a7Riii. I’ve played with her newer Sony, and really like how much lighter mirrorless full-frames can be. (And the a7R is lighter than the a7Riii.) The question now is what kind of lens I want to start with here, given that my budget is $0 (though I will spend more than that). The Sony equivalent of the lens I use most, a Canon 24-105 f4 L, runs >$1000, even used.

I suppose I could get non-Sony lenses for less, but … I’m not sure that’s the way to go. I’m kinda tempted to get a telephoto zoom or prime for the Sony and keep using the Canon for everything else. But then I’m carrying two cameras everywhere.

But I just looked at Ken Rockwell’s take on the Sony 24-105mm f/4 FE G OSS Full-Frame E-Mount, which appears to outperform the Canon equivalent (two links back) so maybe I’ll find a bullet to bite, and spend the grand.

[25 May…] And I did that. The lens just arrived. Now I just need to match it up with a7R, which will probably happen next Tuesday. I trust you’ll see some results soon after that.


@_Nat Zone

Panel discussion with the President of Bangkok Bank and others at the IIF APAC Summit (May 20-21)

I will be appearing at the Asia Pacific Summit of the Institute of International Finance (IIF): the last session of day one, 11:20-12:00 (JST) on May 20, a panel discussion among a distinguished group including the President of Bangkok Bank. (The session is in English.)

"The Future of E-Commerce"

COVID-19 has dramatically accelerated trends such as online commerce, the adoption of new payment methods, and major shifts in consumer preferences. This raises important questions about how long these changes will last, or whether they will keep evolving, about companies' ability to bring new technologies to market, and about the tools and technologies small and medium-sized businesses need as they move into or rebuild e-commerce businesses. The session also considers the "last mile" challenges of delivering digitised services to end users and consumers, the future of physical stores, and how e-commerce and traditional retail will converge in a post-pandemic world.

Brad Carr, Managing Director, Digital Finance, IIF (Moderator)
Nat Sakimura, Chairman, OpenID Foundation
Chartsiri Sophonpanich, President, Bangkok Bank Public Company Limited
Kati Suominen, Founder and Chief Executive Officer, Nextrade Group

The summit has plenty of highlights beyond this panel. For example, first thing on the morning of day two, Commissioner Himino of the Financial Services Agency takes the stage.

You can register via the link at the very bottom of this page → registration page

Programme → programme (PDF)

Phil Windley's Technometria

Can the Digital Future Be Our Home?

Summary: This post features three fantastic books from three great, but quite different, authors on the subject of Big Tech, surveillance capitalism, and what's to be done about it.

I recently read Shoshana Zuboff's book on surveillance capitalism. Not only is the book thought provoking, but Zuboff's writing verges on the poetic at times, making it a delightful read. In her opening chapter she asks the question "Can the digital future be our home?"

This question is perhaps one of the most important of our age. More and more of our lives are being intermediated by digital systems. And yet those systems are not ours, but rather belong to the companies that provide them. And our experience on them is predicated on the goals, desires, and needs of those companies, not ours. I call these systems "administrative" because they are built to administer our experience in a particular domain for the administrator's specific purposes.

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff

The challenges to humanity posed by the digital future, the first detailed examination of the unprecedented form of power called "surveillance capitalism," and the quest by powerful corporations to predict and control our behavior. In this masterwork of original thinking and research, Shoshana Zuboff provides startling insights into the phenomenon that she has named surveillance capitalism. The stakes could not be higher: a global architecture of behavior modification threatens human nature in the twenty-first century just as industrial capitalism disfigured the natural world in the twentieth.

Zuboff makes a number of compelling arguments about why surveillance capitalism represents a significant threat to humanity's future. An overarching conclusion is that by putting everyone inside their administrative systems to make our lives legible to their surveillance, these companies become tyrants.

[T]yranny is the obliteration of politics. It is founded on its own strain of radical indifference in which every person, except the tyrant, is understood as an organism among organisms in an equivalency of Other-Ones.

Contrary to what many might believe, the obliteration of politics is not a good thing. As we discovered a few issues ago (see Legitimacy and Decentralized Systems), politics is how decentralized, democratic systems achieve legitimacy and coherence. Getting rid of politics requires putting everyone and everything in the centralized administrative system of the surveillance capitalist—making them subject to the dictates of the tyrant who has radical indifference to their autonomy, individuality, and humanity.

Zuboff's statement echoes David Graeber's discussion of bureaucracy in The Utopia of Rules. Bureaucratic interactions are simple and predictable. But they are soulless. They are transactional and cannot provide the basis for authentic digital relationships (see Authentic Digital Relationships).

The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy by David Graeber

Where does the desire for endless rules, regulations, and bureaucracy come from? How did we come to spend so much of our time filling out forms? And is it really a cipher for state violence? To answer these questions, the anthropologist David Graeber—one of our most important and provocative thinkers—traces the peculiar and unexpected ways we relate to bureaucracy today, and reveals how it shapes our lives in ways we may not even notice...though he also suggests that there may be something perversely appealing—even romantic—about bureaucracy.

Living our lives inside the administrative systems of Big Tech is akin to living your life inside an amusement park. Not altogether unpleasant, but a far cry from authentic. Stippled with moments of joy, but devoid of real happiness and freedom. Treated identically and transactionally despite pretensions to personalization.

In How to Destroy Surveillance Capitalism, Cory Doctorow argues that while Zuboff's observations are not incorrect, her conclusions about what constitutes surveillance capitalism's real dangers are mistaken. Where Zuboff sees companies who are getting better and better at predicting and controlling our actions, Doctorow sees companies selling the power to persuade, poorly. The real harm is the surveillance, not mind control.

How to Destroy Surveillance Capitalism by Cory Doctorow

For years, we've been hearing about the ills of surveillance capitalism --- the business of extracting, collecting, and selling vast reams of user data that exploded with the rise of tech giants like Google, Facebook, and Amazon. But what if everything we've been hearing is wrong? What if surveillance capitalism is not some rogue capitalism or a wrong turn taken by some misguided corporations? What if the system is working exactly as intended—and the only hope of restoring an open web is to take the fight directly to the system itself?

Zuboff's conclusion that surveillance capitalism is a new "rogue" form of capitalism leaves us with little recourse but to regulate the ills that surveillance capitalists bring about. Not unreasonably, Zuboff's prescription for this predicament is to protect, trust, and utilize democratic processes—to collectively push back. To not let our cynicism dissuade us or cause us to lose hope.

But, merely regulating a big monopoly only further entrenches it, locking the world into the status quo. If we want to destroy surveillance capitalism, Cory argues, we have to break it up and decentralize, making "big tech small again." Ultimately, the choice is to fix Big Tech or fix the internet. Cory argues for the second and I'm on board.

Fixing the internet is hard, but not impossible. Cory references Lawrence Lessig, saying "our lives are regulated by four forces: law (what's legal), code (what's technologically possible), norms (what's socially acceptable), and markets (what's profitable)." We can bring all four to bear on this problem.

Antitrust, post Reagan, has lost its teeth and come to focus only on consumer harm instead of other anti-competitive behaviors like buying up large rivals and new competitors. If the problem with "Big Tech" is that it is "big" then restructuring antitrust laws to break up large tech companies is a critical tool.

Many will fear that breaking up big tech will diminish the fruits of the digital world we've come to enjoy, and even rely on. Centralization, they will say, is the only way to safely and efficiently build messaging platforms, app stores, social networks, and other features of Web 2.0 that we've come to enjoy.

This is where Lessig's other three forces come into play. As I've written, in numerous ways, the means exist to decentralize most of the centralized Web 2.0 platforms (i.e. it's "technologically possible" in Lessig's words). The internet itself and more recent decentralized networks like Bitcoin and Ethereum show that large, decentralized systems can achieve legitimacy to accomplish global goals.

Beyond tech, I have hope that norms are changing. People are more aware and wary of the dangers of surveillance and the need for better online privacy. Collecting data is becoming less socially acceptable. Security breaches affect more and more people, waking them up to the problem of companies collecting and holding large caches of personal data. And competitors to big tech with decentralized solutions are always emerging. A little antitrust help could be what it takes to make them viable.

There's no single act that's going to change the way things work now. Getting Congress to act on antitrust requires a big shift in norms. Changing norms requires new technical possibilities, new applications, and, frankly, more privacy problems. Change is predicated on a web of interrelated actions that we must iterate over.

Returning to Zuboff's opening question: "Can the digital future be our home?" Fixing Big Tech just leaves us where we're at, with slightly fewer problems. It's a dead end road that doesn't lead to a digital home. But fixing the internet, redecentralizing it, promises a future where we can live authentic digital lives that complement our physical lives. I choose to fight for that future.

Tags: identity surveillance+capitalism decentralization


Simon Willison

Flat Data

Flat Data

New project from the GitHub OCTO (the Office of the CTO, love that backronym) somewhat inspired by my work on Git scraping: I'm really excited to see GitHub embracing git for CSV/JSON data in this way. Flat incorporates a reusable Action for scraping and storing data (using Deno), a VS Code extension for setting up those workflows and a very nicely designed Flat Viewer web app for browsing CSV and JSON data hosted on GitHub.

Tuesday, 18. May 2021

Phil Windley's Technometria

Building an SSI Ecosystem: Digital Staff Passports at the NHS

Summary: How does a functioning credential ecosystem get started? This post goes deep on Manny Nijjar’s work to create a program for using digital staff passports in the sprawling UK NHS bureaucracy.

Dr Manny Nijjar is an infectious disease doctor with Whipps Cross Hospital in the UK. He’s also an innovator who quickly saw how verifiable credentials could be applied to health care. I first met Manny at the launch of Sovrin Foundation in London in September 2016. He’s been working to bring this vision to life with his company Truu, ever since.

SSI For Healthcare: Lessons from the NHS Frontline

In this video, Manny discusses why he became interested in digital credentials. He also speaks to the influence medical ethics has had on his journey. In 2015, he was training to become an infectious disease specialist. Manny was the most senior clinician on site in the evenings, in charge of about 500 beds.

Manny kept getting called by, and about, a temporary agency doctor every night. Manny and other medical staff had questions about this doctor’s skills, qualifications, and the decisions he was making. But there were shortages and the hospital needed to fill the gap. Manny was so discouraged by seeing an unqualified physician slip through the cracks, that he was about to quit his career, but instead he determined to do something about it.

Serendipitously, Manny came across self-sovereign identity (SSI) at the same time and, as I said, spoke at the launch of Sovrin Foundation. Over the next several years, Manny and his partners worked to create an SSI solution that the National Health Service in the UK could use to instantly verify the identity and skills of temporary and permanent clinical staff. There were three primary problems that this solves:

Patient Safety - Verifying the identity and skills of temporary and permanent clinical staff.
Burden on Clinical Staff - Admin time for repeated identity and pre-employment checks.
Organizational Risk and Operational Inefficiencies - Failure of manual checks. Time and cost to onboard healthcare staff.

Manny’s first thought had been to use a traditional, administrative scheme using usernames and passwords. But he saw the problems with that. He realized a digital credential was a better answer. And his journey into self-sovereign identity commenced.

Manny's paper credentials

Over the past five years, Manny and his team at Truu have worked with clinicians, various parts of the NHS, employers, HR departments, and locum agencies to understand their needs and build a solution that fits.

In 2019, Truu conducted a pilot with the NHS where the General Medical Council (GMC) issued “license to practice” credentials to SSI wallets controlled by medical staff. Medical staff could present that credential to Blackpool Teaching Hospitals. The hospital, in turn, issued a “sign in” credential to the staff member who could then use it to log into clinical systems at the hospital.

Digital Credentials for People and Organizations

The Covid-19 pandemic increased the pressure on the NHS, making the need to easily move staff between facilities acute. Truu worked with NHS to use this critical moment to shift to digital credentials and to do it in the right way. Truu’s early work, including the pilot, positioned the idea so that it could be quickly adopted when it was needed most. Digital credentialing in healthcare simplifies onboarding for providers, enables the secure expansion of telehealth services, and enhances information exchange—providing a path to interoperability for healthcare data.

The National Health Service in the UK has a program to issue staff passports to medical personnel, confirming their qualifications and ability to work. NHS staff passports are based on verifiable credentials. Eighty-four NHS organizations are participating to date.

Locations of Participating Organizations in the NHS Staff Passport Program in April 2021

The work that Manny, his team at Truu, and partners like Evernym have done has already had a big impact. The UK Department of Health and Social Care recognized the importance of the program, promising to expand the use of staff passports in their Busting Bureaucracy report. They said:

NHSE/I, NHSX and HEE are working to provide multiple staff groups with access to digital staff passports in line with People Plan commitments to improve workforce agility and to support staff training and development.

Junior doctors, who frequently rotate to different healthcare providers, are being prioritized and the ambition is that they will have access to staff passports in 2021/22. The passports will hold digital credentials representing their skills, competencies and occupational health checks. Other target groups include specialists such as maternity and stroke care staff who often need to be rapidly deployed to a neighboring hospital or care home. The use of digital staff passports will save agency fees and release time for care.

Medical staff passports are catching on in the UK where they are solving real problems that ultimately impact patient care, staff fatigue, and patient access and privacy. The journey hasn’t been short, but the NHS Staff Passport program is illustrative of a successful credential ecosystem.

Related Videos

In this 11 minute video, I explain how trust frameworks function in an ecosystem like the one that the NHS has created.

Phil Windley on Trust Frameworks

In this hour-long meetup, Drummond Reed talks with CU Ledger (now Bonifii), about their work to establish a trust framework for credit union credentials. I’ll be writing more about the credit union industry’s MemberPass credential in a future newsletter.

Trust Frameworks and SSI: An Interview with CULedger on the Credit Union MyCUID Trust Framework

A version of this article was previously published in the Technometria Newsletter, Issue #9, May 4, 2021.

Images are from the SSI For Healthcare: Lessons from the NHS Frontline video referenced above.

Tags: ssi identity use+cases verifiable+credentials healthcare


Simon Willison

Weeknotes: Velma, more Django SQL Dashboard

Matching locations for Vaccinate The States, fun with GeoJSON and more improvements to Django SQL Dashboard.

Velma

I described a few weeks ago part of the process we've been using to build Vaccinate The States - a map of every COVID vaccine location in the USA (now at just over 70,000 markers and counting).

Short version: we have scrapers and data ingesters for a whole bunch of different sources (see the vaccine-feed-ingest repository).

Part of the challenge here is how to deal with duplicates - with multiple sources of data, chances are high that the same location will show up in more than one of our input feeds.

So in the past weeks we've been building a new tool code-named Velma to help handle this. It shows our volunteers a freshly scraped location and asks them to either match it to one of our existing locations (based on automated suggestions) or use it to create a brand new location in our database.
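
The matching step itself can be thought of as a scoring problem: two records that are physically close and have similar names are probably the same place. Here is a rough, hypothetical Python sketch of that idea; it is not the actual Velma implementation, and the thresholds are invented:

import math
from difflib import SequenceMatcher


def distance_km(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance between two points, in kilometres.
    radius = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))


def looks_like_same_location(scraped, existing, max_km=0.5, min_name_similarity=0.6):
    # Crude candidate-match heuristic: nearby and similarly named.
    close_enough = distance_km(scraped["lat"], scraped["lon"],
                               existing["lat"], existing["lon"]) <= max_km
    name_similarity = SequenceMatcher(
        None, scraped["name"].lower(), existing["name"].lower()).ratio()
    return close_enough and name_similarity >= min_name_similarity


# Example: a freshly scraped pharmacy compared against an existing record.
scraped = {"name": "Walgreens #123", "lat": 37.7793, "lon": -122.4193}
existing = {"name": "Walgreens Pharmacy", "lat": 37.7795, "lon": -122.4190}
print(looks_like_same_location(scraped, existing))  # True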

I've been working exclusively on the backend APIs for Velma: APIs that return new scraped data and accept and process the human matching decisions from our volunteers.

This week we've been expanding Velma to also cover merging potential duplicate locations within our existing corpus, so I've been building out the APIs for that effort as well.

I've also been working on new export code for making our entire set of locations available to partners and interested outside developers. We hope to launch that fully in the next few days.

geojson-to-sqlite

One of the export formats we are working with is GeoJSON. I have a tool called geojson-to-sqlite which I released last year: this week I released an updated version with the ability to create SpatiaLite indexes and a --nl option for consuming newline-delimited GeoJSON, contributed by Chris Amico.

I've also been experimenting with SpatiaLite's KNN mechanism using geojson-to-sqlite to load in data - here's a TIL showing how to use those tools together.

Django SQL Dashboard

I released the first non-alpha version of this last week and it's started to gain some traction: I've heard from a few people who are trying it out on their projects and it seems to work, so that's good!

I released version 0.14 yesterday with a bunch of fixes based on feedback from users, plus a security fix that closes a hole where users without the execute_sql permission but with access to the Django Admin could modify the SQL in saved dashboards and hence execute their own custom queries.

I also made a bunch of improvements to the documentation, including adding screenshots and demo links to the widgets page.

TIL this week

The Wikipedia page stats API
Vega-Lite bar charts in the same order as the data
Enabling a gin index for faster LIKE queries
KNN queries with SpatiaLite
Django data migration using a PostgreSQL CTE

Releases this week

geojson-to-sqlite: 0.3 - (6 releases total) - 2021-05-17 - CLI tool for converting GeoJSON files to SQLite (with SpatiaLite)
django-sql-dashboard: 0.14 - (28 releases total) - 2021-05-16 - Django app for building dashboards using raw SQL queries

Monday, 17. May 2021

Simon Willison

No feigning surprise

No feigning surprise

Don't feign surprise if someone doesn't know something that you think they should know. Even better: even if you are surprised, don't let them know! "When people feign surprise, it’s usually to make them feel better about themselves and others feel worse."

Via @cameronbardell


Damien Bod

Securing OAuth Bearer tokens from multiple Identity Providers in an ASP.NET Core API

This article shows how to secure and use different APIs in an ASP.NET Core API which supports OAuth access tokens from multiple identity providers. Access tokens from Azure AD and from Auth0 can be used to access data from the service. Each API only supports a specific token from the specific identity provider. Microsoft.Identity.Web is used to implement the access token authorization for the Azure AD tokens and the default authorization is used to support the Auth0 access tokens.

Code: https://github.com/damienbod/SeparatingApisPerSecurityLevel

Blogs in this series

Securing multiple Auth0 APIs in ASP.NET Core using OAuth Bearer tokens Securing OAuth Bearer tokens from multiple Identity Providers in an ASP.NET Core API

Setup

An ASP.NET Core API application is created to implement the multiple APIs and accept access tokens created by Auth0 and Azure AD. The access tokens need to be validated and should only work for the intended purpose for which the access token was created. The Azure AD API is used by an ASP.NET Core Razor page application which requests a user access token with the correct scope to access the API. Two Azure AD App registrations are used to define the Azure AD setup. The Auth0 application is implemented using a Blazor server hosted application and accesses the two Auth0 APIs. See the previous post for details.

To support the multiple identity providers, multiple schemes are used. The Auth0 APIs use the default scheme definition for JWT Bearer tokens and the Azure AD API uses a custom named scheme. It does not matter which scheme is used for which provider, as long as the correct scheme is defined on the controller securing the API. The AddMicrosoftIdentityWebApiAuthentication method takes the scheme and the configuration name as optional parameters. The Azure AD configuration is defined like any standard Azure AD API in ASP.NET Core.

public void ConfigureServices(IServiceCollection services)
{
    // Adds Microsoft Identity platform (AAD v2.0)
    // support to protect this Api
    services.AddMicrosoftIdentityWebApiAuthentication(
        Configuration, "AzureAd", "myADscheme");

    // Auth0 API configuration => default scheme
    services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    }).AddJwtBearer(options =>
    {
        options.Authority = "https://dev-damienbod.eu.auth0.com/";
        options.Audience = "https://auth0-api1";
    });

    services.AddSingleton<IAuthorizationHandler, UserApiScopeHandler>();

    // authorization definitions for the multiple Auth0 tokens
    services.AddAuthorization(policies =>
    {
        policies.AddPolicy("p-user-api-auth0", p =>
        {
            p.Requirements.Add(new UserApiScopeHandlerRequirement());
            // Validate id of application for which the token was created
            p.RequireClaim("azp", "AScjLo16UadTQRIt2Zm1xLHVaEaE1feA");
        });

        policies.AddPolicy("p-service-api-auth0", p =>
        {
            // Validate id of application for which the token was created
            p.RequireClaim("azp", "naWWz6gdxtbQ68Hd2oAehABmmGM9m1zJ");
            p.RequireClaim("gty", "client-credentials");
        });
    });

    services.AddControllers(options =>
    {
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();
        options.Filters.Add(new AuthorizeFilter(policy));
    });
}

The Configure method uses the UseAuthentication method to add the middleware for the APIs.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // ...
    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

The AzureADUserOneController class is used to implement the API for the Azure AD access tokens. The AuthorizeForScopes attribute from Microsoft.Identity.Web is used to validate the Azure AD App registration access token and define the scheme required for the validation. The scope name must match the Azure App registration definition.

using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Microsoft.Identity.Web;

namespace MyApi.Controllers
{
    [AuthorizeForScopes(
        Scopes = new string[] { "api://72286b8d-5010-4632-9cea-e69e565a5517/user_impersonation" },
        AuthenticationScheme = "myADscheme")]
    [ApiController]
    [Route("api/[controller]")]
    public class AzureADUserOneController : ControllerBase
    {
        private readonly ILogger<UserOneController> _logger;

        public AzureADUserOneController(ILogger<UserOneController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new List<string> { "AzureADUser one data" };
        }
    }
}

The UserOneController implements the Auth0 user access token API. Since the default scheme is used, no scheme definition is required. The authorization policy is used to secure the API which validates the scope and the claims for this API.

using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

namespace MyApi.Controllers
{
    [Authorize(Policy = "p-user-api-auth0")]
    [ApiController]
    [Route("api/[controller]")]
    public class UserOneController : ControllerBase
    {
        private readonly ILogger<UserOneController> _logger;

        public UserOneController(ILogger<UserOneController> logger)
        {
            _logger = logger;
        }

        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new List<string> { "user one data" };
        }
    }
}

When the API application is started, the APIs can be used, and a Swagger UI implemented using Swashbuckle displays the different APIs. Each API will only work with the correct access token. The different UIs can call the APIs and the data is returned.

Links

https://auth0.com/docs/quickstart/webapp/aspnet-core

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction

Open ID Connect

Securing Blazor Web assembly using Cookies and Auth0


Simon Willison

geocode-sqlite

Neat command-line Python utility by Chris Amico: point it at a SQLite database file and it will add latitude and longitude columns and populate them by geocoding one or more of the other fields, using your choice from four currently supported geocoders.

Sunday, 16. May 2021

Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-15 & 16

Hardwired ThinkPad laptop: 81.01% downtime (same laptop used for prior NUM reports)
WiFi Wireless Lenovo laptop: 3.78% downtime
netuptime-20210516-075453-tp-hardwired (download)
netuptime-20210514-061223-lenovo-wifi (download)

Friday, 14. May 2021

Doc Searls Weblog

How the cookie poisoned the Web

Have you ever wondered why you have to consent to terms required by the websites of the world, rather than the other way around? Or why you have no record of what you have accepted or agreed to?

Blame the cookie.

Have you wondered why you have no more privacy on the Web than what other parties grant you (which is none at all), and that you can only opt in or out of choices that others provide—while the only controls you have over your privacy are to skulk around like a criminal (thank you, Edward Snowden and Russell Brand, for that analogy) or to stay offline completely?

Blame the cookie.

And have you paused to wonder why Europe’s GDPR regards you as a mere “data subject” while assuming that the only parties qualified to be “data controllers” and “data processors” are the sites and services of the world, leaving you with little more agency than those sites and services allow, or provide you?

Blame the cookie.

Or why California’s CCPA regards you as a mere “consumer” (not a producer, much less a complete human being), and only gives you the right to ask the sites and services of the world to give back data they have gathered about you, or not to “sell” that personal data, whatever the hell that means?

Blame the cookie.

There are more examples, but you get the point: this situation has become so established that it’s hard to imagine any other way for the Web to operate.

Now here’s another point: it didn’t have to be that way.

The World Wide Web that Tim Berners-Lee invented didn’t have cookies. It also didn’t have websites. It had pages one could publish or read, at any distance across the Internet.

This original Web was simple and peer-to-peer. It was meant to be personal as well, meaning an individual could publish with a server or read with a browser. One could also write pages easily with an HTML editor, which was also easy to invent and deploy.

It should help to recall that the Apache Web server, which has published most of the world's Web pages across most of the time the Web has been around, was meant originally to work as a personal server. That's because the original design assumption was that anyone, from individuals to large enterprises, could have a server of their own, and publish whatever they wanted on it. The same went for people reading pages on the Web.

Back in the 90s my own website, searls.com, ran on a box under my desk. It could do that because, even though my connection was just dial-up speed, it was on full time over its own static IP address, which I easily rented from my ISP. In fact, I had sixteen of those addresses, so I could operate another server in my office for storing and transferring articles and columns I wrote to Linux Journal. Every night a cron utility would push what I wrote to the magazine itself. Both servers ran Apache. And none of this was especially geeky. (I'm not a programmer and the only code I know is Morse.)

My point here is that the Web back then was still peer-to-peer and welcoming to individuals who wished to operate at full agency. It even stayed that way through the Age of Blogs in the early ’00s.

But gradually a poison disabled personal agency. That poison was the cookie.

Technically a cookie is a token—a string of text—left by one computer program with another, to help the two remember each other. These are used for many purposes in computing.

But computing for the Web got a special kind of cookie called the HTTP cookie. This, Wikipedia says (at that link)

…is a small piece of data stored on the user's computer by the web browser while browsing a website. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember pieces of information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers.

It also says,

Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with.

This, however, was not the original idea, which Lou Montulli came up with in 1994. Lou’s idea was just for a server to remember the last state of a browser’s interaction with it. But that one move—a server putting a cookie inside every visiting browser—crossed a privacy threshold: a personal boundary that should have been clear from the start but was not.

Once that boundary was crossed, and the number and variety of cookies increased, a snowball started rolling, and whatever chance we had to protect our privacy behind that boundary, was lost.

Today that snowball is so large that nearly all personal agency on the Web happens within the separate silos of every website, and compromised by whatever countless cookies and other tracking methods are used to keep track of, and to follow, the individual.

This is why most of the great stuff you can do on the Web is by grace of Google, Apple, Facebook, Amazon, Twitter, WordPress and countless others, including those third parties.

Bruce Schneier calls this a feudal system:

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals.

Bruce wrote that in 2012, about the time we invested hope in Do Not Track, which was designed as a polite request one could turn on in a browser, and servers could obey.

Alas, the tracking-based online advertising business and its dependents in publishing dismissed Do Not Track with contempt.

Starting in 2013, we serfs fought back, by the hundreds of millions, blocking ads and tracking: the biggest boycott in world history. This, however, did nothing to stop what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity.

Today our poisoned minds can hardly imagine having native capacities of our own that can operate at scale across all the world’s websites and services. To have that ability would also be at odds with the methods and imperatives of personally targeted advertising, which requires cookies and other tracking methods. One of those imperatives is making money: $Trillions of it.

The business itself (aka adtech) is extremely complex and deeply corrupt: filled with fraud, botnets and malware. Most of the money spent on adtech also goes to intermediaries and not to the media you (as they like to say) consume. It's a freaking fecosystem, and every participant's dependence on it is extreme.

Take, for example, Vizio TVs. As Samuel Axon puts it in Ars Technica, Vizio TV buyers are becoming the product Vizio sells, not just its customers: Vizio's ads, streaming, and data business grew 133 percent year over year.

Without cookies and the cookie-like trackers by which Vizio and its third parties can target customers directly, that business wouldn’t be there.

As a measure of how far this poisoning has gone, dig this: FouAnalytics PageXray says the Ars Technica story above comes to your browser with all this spyware you don't ask for or expect when you click on that link:

Adserver Requests: 786
Tracking Requests: 532
Other Requests: 112

I’m also betting that nobody reporting for a Condé Nast publication will touch that third rail, which I have been challenging journalists to do in 139 posts, essays, columns and articles, starting in 2008.

(Please prove me wrong, @SamuelAxon—or any reporter other than Farhad Manjoo, who so far is the only journalist from a major publication I know to have bitten the robotic hand that feeds them. I also note that the hand in his case is The New York Times‘, and that it has backed off a great deal in the amount of tracking it does. Hats off for that.)

At this stage of the Web's moral devolution, it is nearly impossible to think outside the cookie-based fecosystem. If we could, we would get back the agency we lost, and the regulations we're writing would respect and encourage that agency as well.

But that’s not happening, in spite of all the positive privacy moves Apple, Brave, Mozilla, Consumer Reports, the EFF and others are making.

My hat’s off to all of them, but let’s face it: the poisoning is too far advanced. After fighting it for more than 22 years (dating from publishing The Cluetrain Manifesto in 1999), I’m moving on.

To here.


Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-14 AM

Hardwired ThinkPad laptop: 77.96% downtime (same laptop used for prior NUM reports)
WiFi Wireless Lenovo laptop: 2.99% downtime
netuptime-20210513-230229-tp-hardwired (download)
netuptime-20210513-230235-lenovo-wifi (download)

Simon Willison

Powering the Python Package Index in 2021

PyPI now serves "nearly 900 terabytes over more than 2 billion requests per day". Bandwidth is donated by Fastly, a value estimated at 1.8 million dollars per month! Lots more detail about how PyPI has evolved over the past years in this post by Dustin Ingram.

Thursday, 13. May 2021

Simon Willison

Quoting Brian LeRoux

Folks think s3 is static assets hosting but really it's a consistent and highly available key value store with first class blob support

Brian LeRoux


Here's Tom with the Weather

Betty's Funeral Service

My mother Betty passed away Friday night and her service is Saturday at 2pm at the Kingwood Funeral Home. This is one of my favorite pictures of her with her mother and my brother David in downtown San Francisco.

Growing up, I always remember that there was a set of golf clubs in the garage but it seemed like they were never used. Luckily, this week, I listened to a Christmas audio message from 1971 that my parents had sent to my grandparents. My mom said that the day before, Baxter had taken her to the golf course at Heather Farm and they golfed nine holes. She said on the first hole, she amused several bystanders trying to get out of the sand trap and they decided to quit keeping score from there. She said she learned golf was not her sport and would find something else. She did find tennis. I was glad to play that sport with her and the family.


Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-13 AM – 88.81%

Over the last approximately 7.5 hours, the NETAGO internet service in Bindloss, Alberta was down 88.81% of the time according to the Net Uptime Monitor (NUM) app. This is not acceptable.

Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Michael Herman

=======================================

2021-05-12 10:19:45 PM Log Start

Failure Start Length
2021-05-12 10:20:33 PM 0:00:05
2021-05-12 10:20:52 PM 0:03:23
2021-05-12 10:24:47 PM 0:01:32
2021-05-12 10:26:54 PM 0:04:16
2021-05-12 10:31:32 PM 0:07:08
2021-05-12 10:38:48 PM 0:01:30
2021-05-12 10:40:26 PM 0:10:28
2021-05-12 10:51:22 PM 0:03:33
2021-05-12 10:55:03 PM 0:06:38
2021-05-12 11:01:50 PM 0:13:03
2021-05-12 11:16:13 PM 0:09:05
2021-05-12 11:26:56 PM 0:10:07
2021-05-12 11:37:12 PM 0:04:32
2021-05-12 11:44:15 PM 0:12:41
2021-05-12 11:58:03 PM 0:03:08
2021-05-13 12:05:52 AM 0:04:00
2021-05-13 12:10:00 AM 0:09:12
2021-05-13 12:19:20 AM 0:03:58
2021-05-13 12:23:26 AM 0:06:59
2021-05-13 12:32:56 AM 0:13:24
2021-05-13 12:47:53 AM 0:12:56
2021-05-13 1:00:57 AM 0:01:34
2021-05-13 1:03:24 AM 0:01:54
2021-05-13 1:07:04 AM 0:21:49
2021-05-13 1:29:07 AM 0:00:23
2021-05-13 1:29:39 AM 0:02:10
2021-05-13 1:31:57 AM 0:01:56
2021-05-13 1:34:01 AM 0:01:06
2021-05-13 1:35:15 AM 0:00:30
2021-05-13 1:36:19 AM 0:05:39
2021-05-13 1:42:07 AM 0:00:36
2021-05-13 1:42:51 AM 0:12:26
2021-05-13 1:56:36 AM 0:04:57
2021-05-13 2:01:41 AM 0:11:08
2021-05-13 2:12:58 AM 0:04:35
2021-05-13 2:20:10 AM 0:21:12
2021-05-13 2:41:44 AM 0:06:45
2021-05-13 2:48:44 AM 0:06:04
2021-05-13 2:57:25 AM 0:05:49
2021-05-13 3:04:02 AM 0:00:33
2021-05-13 3:04:56 AM 0:00:05
2021-05-13 3:05:23 AM 0:04:44
2021-05-13 3:10:22 AM 0:07:22
2021-05-13 3:17:52 AM 0:07:44
2021-05-13 3:27:15 AM 0:18:57
2021-05-13 3:46:20 AM 0:09:16
2021-05-13 3:58:39 AM 0:00:51
2021-05-13 4:01:55 AM 0:06:26
2021-05-13 4:08:42 AM 0:19:29
2021-05-13 4:28:20 AM 0:10:05
2021-05-13 4:40:36 AM 0:11:36
2021-05-13 4:52:21 AM 0:02:01
2021-05-13 4:54:36 AM 0:03:54
2021-05-13 4:58:39 AM 0:03:10
2021-05-13 5:01:57 AM 0:07:02
2021-05-13 5:09:33 AM 0:07:25
2021-05-13 5:17:19 AM 0:02:02
2021-05-13 5:19:36 AM 0:00:33
2021-05-13 5:20:17 AM 0:01:09
2021-05-13 5:23:24 AM 0:01:32
2021-05-13 5:25:05 AM 0:03:57
2021-05-13 5:29:23 AM 0:01:28
2021-05-13 5:30:59 AM 0:10:35
2021-05-13 5:42:40 AM 0:01:57
2021-05-13 5:46:10 AM 0:00:51
2021-05-13 5:47:09 AM 0:00:13

2021-05-13 5:48:22 AM 0:08:06

2021-05-13 5:56:36 AM Log End


Monitor Duration 7:36:50
Failure Summary:
Count 67
Total Downtime 6:45:44
% Downtime 88.81
Minimum Length 0:00:05
Maximum Length 0:21:49
Average Length 0:06:03

Wednesday, 12. May 2021

Simon Willison

Quoting Using async and await in Flask 2.0

Async functions require an event loop to run. Flask, as a WSGI application, uses one worker to handle one request/response cycle. When a request comes in to an async view, Flask will start an event loop in a thread, run the view function there, then return the result.

Each request still ties up one worker, even for async views. The upside is that you can run async code within a view, for example to make multiple concurrent database queries, HTTP requests to an external API, etc. However, the number of requests your application can handle at one time will remain the same.

Using async and await in Flask 2.0


New Major Versions Released! Flask 2.0, Werkzeug 2.0, Jinja 3.0, Click 8.0, ItsDangerous 2.0, and MarkupSafe 2.0

Huge set of releases from the Pallets team. Python 3.6+ required and comprehensive type annotations. Flask now supports async views, Jinja async templates (used extensively by Datasette) "no longer requires patching", Click has a bunch of new code around shell tab completion, ItsDangerous supports key rotation and so much more.


MyDigitalFootprint

who wins when our diversity creates less diversity?

The Media wins by playing with us
When Education wins, everyone wins
I can win but the self interest destroys more
We win by being one together in our diversity
It is not that we lose by doing nothing; someone else gains more
Our paradox is that the more diversity we have, the less diverse we become.

Hyperonomy Digital Identity Lab

NETAGO Downtime – 2021-05-12 Early Morning – 93.6%

Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Michael Herman

=======================================

2021-05-12 3:24:40 AM Log Start

Failure Start Length
2021-05-12 3:26:48 AM 0:00:05
2021-05-12 3:27:08 AM 0:00:16
2021-05-12 3:28:06 AM 0:06:22
2021-05-12 3:34:36 AM 0:01:04
2021-05-12 3:35:48 AM 0:05:23
2021-05-12 3:43:30 AM 0:27:34
2021-05-12 4:11:13 AM 0:00:29
2021-05-12 4:11:56 AM 0:02:18
2021-05-12 4:14:55 AM 0:02:05
2021-05-12 4:17:09 AM 0:05:56
2021-05-12 4:23:13 AM 0:56:42
2021-05-12 5:20:23 AM 0:00:23
2021-05-12 5:21:46 AM 0:00:33
2021-05-12 5:22:27 AM 0:07:16
2021-05-12 5:29:51 AM 0:04:38
2021-05-12 5:34:38 AM 0:17:12
2021-05-12 5:51:58 AM 0:07:41
2021-05-12 5:59:54 AM 0:00:15
2021-05-12 6:02:15 AM 0:00:27
2021-05-12 6:02:50 AM 0:04:07
2021-05-12 6:07:37 AM 0:21:43

2021-05-12 6:29:28 AM 0:07:42

2021-05-12 6:37:22 AM Log End


Monitor Duration 3:12:41
Failure Summary:
Count 22
Total Downtime 3:00:21
% Downtime 93.60
Minimum Length 0:00:05
Maximum Length 0:56:42
Average Length 0:08:11


Net Uptime Monitor (NUM)

The Net Uptime Monitor


What it does…


Is your internet connection unreliable? You’ve probably called your internet provider’s support line and maybe they were able to help you, maybe they even sent out a tech to look at it. But all too often the response is “Well, it’s working fine now!”


The Net Uptime Monitor alerts you to failures in your internet connection and documents the exact time and length of those failures. This failure log will help your provider troubleshoot the problem – after it helps you convince them it’s not your imagination! Net Uptime Monitor is designed to be as simple as possible and accomplish this one purpose accurately and thoroughly with the least effort from you.


How it works…


Net Uptime Monitor (NUM) uses the “Ping” command to test the response from three public servers operated by Google, Level 3, and OpenDNS. (See “What’s a Ping?” below for an explanation.) Each server is pinged in turn at an interval that you can set – normally five seconds. By default, NUM waits 200 milliseconds (2/10 of a second) for the server to respond – at least 3 times as long as a typical broadband internet connection should take.


NUM pings one server at a time; if the server responds, NUM waits the test interval, then pings the next server. If the server doesn’t respond, NUM immediately tries the next server, then the next. If any of the servers respond, then your connection must be working. Only when all three servers fail to respond does NUM determine that your connection is down.


By using three servers, NUM ensures that the problem isn’t just with the server or with some connection on the way to that server, or that the server isn’t momentarily slow or congested.


NUM can detect failures as short as a couple of seconds in length, but you can decide how long a failure must be before it really counts. A very short failure of a second or so is not likely to affect your use of the net and isn’t of any real concern. You can set how long a failure must be before NUM alerts you to it and records the failure in its failure log.
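To make that logic concrete, here is a minimal Python sketch of the checking loop described above. This is not NUM's actual code: the server addresses, ping flags and timings are illustrative assumptions, and for simplicity it always starts from the first server instead of rotating between them.

import subprocess
import time

# Illustrative values mirroring the description above (not NUM's real configuration).
SERVERS = ["8.8.8.8", "4.2.2.2", "208.67.222.222"]  # e.g. Google, Level 3, OpenDNS
TEST_INTERVAL = 5  # seconds between tests while the connection is up

def ping(host):
    # Send a single echo request and report whether a reply came back.
    # Flags are the Linux ones; Windows uses -n and -w instead.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def connection_is_up():
    # Try each server in turn; the connection only counts as down
    # when all three fail to respond.
    return any(ping(server) for server in SERVERS)

while True:
    if connection_is_up():
        time.sleep(TEST_INTERVAL)
    else:
        failure_start = time.time()
        while not connection_is_up():
            time.sleep(1)  # keep retrying until any server answers again
        print(f"Failure lasted {time.time() - failure_start:.0f} seconds")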


Connection is up, no previous failures…

Connection is down, one previous failure…

The display shows the names and IP addresses of each server. The indicator “light” flashes yellow when the ping is sent and shows green for a successful response. The response time of the last ping is shown. When the response time exceeds the time set for “Wait for Ping Response”, the indicator turns red to show no response from that server.


If your connection fails, the current fail length is displayed in red. When the length of the failure exceeds your setting for “Log Failure If Longer Than”, NUM plays an alert sound and writes the failure information into its log.


The display also shows the monitored time (how long the monitor has been running), the time since the last logged failure (up time), the start time and length of the last logged failure, and the total count of logged failures since NUM was started. The current settings for the test interval and the minimum failure length to be logged are shown at the bottom of the display.


Click the minimize button on the NUM window to hide the display. NUM disappears into your system tray in the “notifications area”. The NUM icon is shown in the notification – you can hover over the icon to see the current time since the last failure (“Up Time”) or click the icon to restore the display. In the Settings, you can choose to have a “failure alert” sound play, and/or have the NUM window “pop up”, if a connection failure longer than your minimum setting occurs.


The Log


NUM keeps a log of results in a text file. You can view the current log at any time by clicking the “View Log” button. The log is displayed in a separate window. NUM will continue to update the log even while you are viewing it.
Because the log is a plain text file, you can open it outside of the NUM program. It will open in Notepad or your default text editor, so you can easily edit or print the log.


The log records the start and end time of the monitoring and each failure start time and length. A summary shows the total monitoring time, failure count, total down time, percentage of down time, and the minimum, maximum, and average failure lengths. Here’s an example:

Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Example User

=======================================

8/17/2015 8:44:28 AM Log Start

Failure Start Length
8/17/2015 1:44:25 PM 0:00:44
8/17/2015 1:49:53 PM 0:00:36

8/17/2015 1:52:39 PM 0:01:59

8/18/2015 12:13:17 AM Log End
Monitor Duration 15:28:46
Failure Summary:
Count 3
Total Downtime 0:03:20
% Downtime 0.36
Minimum Length 0:00:36
Maximum Length 0:01:59

Average Length 0:01:06

The example shows date and time in US English format; your log will use the format for your region.
The log files are saved in a folder of your choice; the default is your Documents folder. You can choose a different folder in the Settings.


Also in the Settings, there are two options for the log file:
1) New File Each Run
A new file is created each time NUM starts. Each log file is named with the date and time NUM was started so that they will appear in your directory in chronological order. The file name is in the form of "NetUptime 20150810 134243.txt". In this example, the date is August 10, 2015 (20150810) and the time is 1:42:43 PM (134243).
2) Add to Existing File
Each new log is added to the same single file. The file name is always NetUptime.txt. As long as that file exists in the folder where you have chosen to save the log file, NUM will keep adding to it. If the file doesn’t exist, i.e. it’s been deleted, moved, or renamed, NUM will start a new file.


The Settings

Click the “Change Settings” button on the NUM display to open the Settings window. There are several settings available:


Startup Settings…


· Start when Windows Starts? – Check the box and NUM will automatically start when your computer starts. Uncheck the box and you can start NUM when you want by clicking its desktop icon. The default on installation is checked – NUM starts automatically.
· Start Minimized in Tray? – Check the box and NUM will be minimized in the system tray automatically when it starts. The default on installation is unchecked – NUM starts with the main form displayed.
Test Settings…
· Test Interval – how many seconds between ping tests when the servers are responding. Five seconds is the default. It is possible that NUM will miss a failure that is shorter than the time between tests, so if your connection has very frequent failures of just a few seconds you might choose a shorter test interval. If you don’t have many failures, you may want to test less often. Most connection problems result in less frequent but longer failures, so five seconds is a good choice for most users.
· Wait for Ping Response – the length of time NUM waits for a response after sending a ping. The default setting of 200 milliseconds is recommended for normal situations. If you have a slower internet connection, such as a dialup or mobile connection, or are in a remote area where response is typically slow, you can set the wait time for up to 1500 milliseconds (1.5 seconds). To help you find the best setting for your situation, set the wait time to 1500 milliseconds and observe the ping response times NUM displays when your connection is working normally. Set the wait time to about 1.5 times the typical ping response times you see for efficient detection of outages.
· Change Target Servers – click to open the Target Servers window.

You can edit the IP Address and Name of any of the three servers. Click the Test button to try that server, verifying that it responds and checking the response time.


The default target servers (Google, Level 3, OpenDNS) were selected for their performance and very high reliability. You should only use a different server if you find that one of these servers does not respond reliably in your particular situation. Click “Restore Defaults” to reset the Target Servers to their original values. Changes to the Target Servers take effect the next time the program starts.


Alert and Log Settings…


· Pop Up on Failure? – Check the box and the NUM form will pop up from the system tray when there is a failure. Uncheck the box and NUM will continue to log and alert but it will stay minimized during a failure. The default on installation is checked – if NUM is minimized to the system tray, the main NUM form will be displayed when a failure is logged.
· Alert and Log Failure If Longer Than – the minimum failure length that will be counted, both for the log and the alert of a failure. Five seconds is the default setting.
· Log File Location – the folder where the logs will be stored. Click the “Select Folder” button to browse to the folder you want. The log for the current run of NUM is already started, so a change in this setting will not take effect until the next time you run NUM.
· Log File Option – New File Each Run (the default) or Add to Existing File. See previous section “The Log” for a more detailed explanation.
· Choose Failure Alert Sound – choose the sound NUM makes when a failure is counted. The sound plays when you choose its button so you can preview each one. Choose “None” to silence the alert. Choose “Custom” and click the Select File button to use any .WAV sound file on your system. The default on installation is the “Short” sound.
· Play Reconnect Sound – NUM can play a sound when your internet reconnects after a failure. Choose “None” to silence the reconnect sound. Choose “Custom” and click the Select File button to use any .WAV sound file on your system.


Combine Settings for “Invisible” Operation


NUM can do its job without showing itself or alerting the user to its operation in any way. Choose these settings:
· Start when Windows Starts? – checked.
· Start Minimized in Tray? – checked.
· Pop Up On Failure – unchecked.
· Choose Failure Alert Sound – None.
· Choose Reconnect Sound – None.
With this combination of settings, the user need never be aware of NUM. This is useful in a support situation where you are installing NUM on a computer you aren’t personally using.


What’s a Ping?


“Ping” is a command available on all kinds of computers that tests whether another computer on the network will respond to your computer. It’s named after the sound of submarine sonar systems – they send out a “ping” sound which bounces off their target and they listen for that echo, locating their target. The internet “ping” works in a similar way. You name your target, an internet server, and “ping” it. The ping command and response looks like this (in a DOS command window):


C:\ ping google.com

Pinging google.com [74.125.224.84] with 32 bytes of data:
Reply from 74.125.224.84: bytes=32 time=30ms TTL=54
Reply from 74.125.224.84: bytes=32 time=31ms TTL=54
Reply from 74.125.224.84: bytes=32 time=31ms TTL=54
Reply from 74.125.224.84: bytes=32 time=31ms TTL=54

Ping statistics for 74.125.224.84:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 30ms, Maximum = 31ms, Average = 30ms

A ping command actually generates four requests and the server replies four times. Each response is timed in thousandths of a second (ms = milliseconds). Here we see that the server at google.com responded in about 31/1000 or 3/100 of a second. The internet is fast! – when everything’s working.


Licensing


A license for Net Uptime Monitor removes the time limits from the trial version and lets you use the full program on one computer. To purchase a license or register your license, just click “Trial Version – Click to Register or Purchase License” at the bottom of the NUM main form. If you have your license, enter the License Key code you’ve received and click Register. If you need a license, click Purchase a License to visit our web site and make your purchase.
If you have already registered your copy of NUM, your name and email are shown on the main form. Click the License Info button to see your license key.


Moving to a New Computer or Installing a New Operating System


You must unregister your license before you replace your computer or install a new version of Windows. This will make your license key available again to use on your new system. Just click License Info, click Print This Form to make sure you’ll have the license key, then click Unregister License. The program will go back to Trial mode. You can then reuse your license key to register NUM on any computer.


NETAGO Downtime – 2021-05-11 – 93%

Net Uptime Monitor Failure Log (NetUptimeMonitor.com)
Licensed to Michael Herman

=======================================

2021-05-11 7:03:07 AM Log Start

Failure Start Length
2021-05-11 7:03:38 AM 0:00:46
2021-05-11 7:06:43 AM 0:00:05
2021-05-11 7:07:43 AM 0:02:57
2021-05-11 7:10:49 AM 0:36:32
2021-05-11 7:47:29 AM 0:05:17
2021-05-11 7:53:40 AM 0:07:14
2021-05-11 8:01:02 AM 0:01:42
2021-05-11 8:02:53 AM 0:03:00
2021-05-11 8:06:01 AM 0:00:50
2021-05-11 8:07:18 AM 0:02:56
2021-05-11 8:10:23 AM 0:06:16
2021-05-11 8:17:07 AM 0:06:48
2021-05-11 8:24:49 AM 0:02:43
2021-05-11 8:27:40 AM 0:05:36
2021-05-11 8:33:50 AM 0:00:24
2021-05-11 8:35:28 AM 0:10:01
2021-05-11 8:46:23 AM 0:21:58
2021-05-11 9:09:15 AM 0:13:49
2021-05-11 9:23:39 AM 0:02:06
2021-05-11 9:28:10 AM 0:06:14
2021-05-11 9:35:57 AM 0:12:14
2021-05-11 9:49:11 AM 0:02:55
2021-05-11 9:53:13 AM 0:10:16
2021-05-11 10:03:43 AM 0:07:44
2021-05-11 10:13:39 AM 0:00:24
2021-05-11 10:16:27 AM 0:03:55
2021-05-11 10:20:56 AM 0:07:14
2021-05-11 10:28:24 AM 0:00:20
2021-05-11 10:28:53 AM 0:00:11
2021-05-11 10:29:13 AM 0:00:11
2021-05-11 10:30:11 AM 0:00:41
2021-05-11 10:31:20 AM 0:01:01
2021-05-11 10:32:48 AM 0:06:12
2021-05-11 10:40:07 AM 0:01:14
2021-05-11 10:41:29 AM 0:04:03
2021-05-11 10:46:00 AM 0:08:14
2021-05-11 10:55:34 AM 0:03:37
2021-05-11 11:00:56 AM 0:01:54
2021-05-11 11:03:05 AM 0:06:05
2021-05-11 11:09:38 AM 0:14:24
2021-05-11 11:25:15 AM 0:00:15
2021-05-11 11:25:38 AM 0:04:01
2021-05-11 11:30:08 AM 0:00:25
2021-05-11 11:31:28 AM 0:03:26
2021-05-11 11:35:08 AM 0:03:37
2021-05-11 11:39:33 AM 0:01:45
2021-05-11 11:41:32 AM 0:05:13
2021-05-11 11:47:20 AM 0:16:20
2021-05-11 12:05:19 PM 0:18:31
2021-05-11 12:24:11 PM 0:12:56
2021-05-11 12:37:15 PM 0:08:06
2021-05-11 12:45:30 PM 0:01:18
2021-05-11 12:48:01 PM 0:01:58
2021-05-11 12:50:08 PM 0:05:59
2021-05-11 12:56:15 PM 0:29:48
2021-05-11 1:26:18 PM 0:06:12
2021-05-11 1:32:45 PM 0:16:43
2021-05-11 1:49:43 PM 0:00:11
2021-05-11 1:50:02 PM 0:32:03
2021-05-11 2:22:13 PM 0:17:54
2021-05-11 2:40:41 PM 0:00:06
2021-05-11 2:40:55 PM 0:16:30
2021-05-11 2:59:04 PM 0:00:54
2021-05-11 3:01:44 PM 0:01:18
2021-05-11 3:03:10 PM 0:18:05
2021-05-11 3:21:36 PM 0:01:21
2021-05-11 3:23:51 PM 0:10:10
2021-05-11 3:34:10 PM 0:03:28
2021-05-11 3:39:16 PM 0:06:33
2021-05-11 3:46:24 PM 0:16:42
2021-05-11 4:03:14 PM 0:19:54
2021-05-11 4:25:39 PM 0:12:58
2021-05-11 4:40:09 PM 0:04:53
2021-05-11 4:45:10 PM 0:01:45
2021-05-11 4:47:03 PM 0:06:46
2021-05-11 4:53:58 PM 0:01:08
2021-05-11 4:55:14 PM 0:38:56
2021-05-11 5:34:25 PM 0:13:51
2021-05-11 5:48:30 PM 0:35:29
2021-05-11 6:26:18 PM 0:01:45
2021-05-11 6:28:17 PM 0:09:13
2021-05-11 6:37:45 PM 0:01:16
2021-05-11 6:39:10 PM 0:01:36
2021-05-11 6:42:25 PM 0:34:53
2021-05-11 7:17:27 PM 0:08:36
2021-05-11 7:26:37 PM 0:15:57
2021-05-11 7:42:43 PM 0:04:15
2021-05-11 7:47:06 PM 0:35:10
2021-05-11 8:22:37 PM 0:09:37
2021-05-11 8:32:23 PM 0:13:20

2021-05-11 8:45:51 PM 0:24:59

2021-05-11 9:10:58 PM Log End
Monitor Duration 14:07:50
Failure Summary:
Count 91
Total Downtime 13:08:53
% Downtime 93.05
Minimum Length 0:00:05
Maximum Length 0:38:56

Average Length 0:08:40

Tuesday, 11. May 2021

Phil Windley's Technometria

Decentralized System in a Box

Summary: I’ve been a beekeeper for many years. I love the honey, but I love watching the bees even more. They are a fascinating example of a natural, decentralized system.

I installed a package of bees in a hive over the weekend. You buy bees in packages that contain 15-20 thousand bees and a queen. The queen is in a cage so she is easy to find. Queens give off a pheromone that attracts the other bees in the hive. The queen is the secret to creating legitimacy for the hive (see Legitimacy and Decentralized Systems for more on legitimacy). If the queen is in the new hive, chances are the other bees will see it as their legitimate home and stick around.

Queen in a cage (click to enlarge)

I placed the queen cage in the hive, using a rubber band to fix the cage to one of the frames that the bees make honeycomb on. I replaced the cork in the cage with a candy stopper. The bees eat through the candy over the course of a few days and free the queen. Hopefully, by that time, the hive is established and the bees stick around.

After placing the queen cage in the hive, you just dump the bees out on top of the frames. I love this part because thousands of bees are flying everywhere trying to make sense of what just happened. But over the course of an hour or two, the hive coalesces on the queen and most of the bees are inside, getting adjusted to their new home.

Bees on top of the hive frames (click to enlarge)
About an hour after the bees get their new home, they're out on the porch, fanning and taking orientation flights. (click to enlarge)

Besides providing a basis for hive legitimacy, the queen is also the sole reproductive individual, responsible for laying every egg that will be raised in the hive. This is a big job. During the summer, she will lay about 2000 eggs per day and the hive will swell to multiple tens of thousands of bees. But beyond this, the queen’s role is limited. She doesn’t direct the actions of the members of the hive. No one does.

Thermoregulation

So, how does the hive function without central direction? Thermoregulation provides an example. Despite the fact that bees themselves are not homeothermic, the hive is. The bees manage to keep the hive at 93-94°F (34°C) regardless of the outside air temperature.

How do the bees do that? The straightforward answer is that some bees go to the entrance of the hive and fan air to increase circulation when the internal temperature gets too high. When it gets too low, bees cluster in the center and generate heat by shivering.

The more interesting question is "how do the bees know to do that?" All the bees have similar genetic programming (algorithmic governance). But the tasks that they're inclined to do depend on their age. The youngest workers clean cells, then move on to nursing functions, mortuary activities, guarding the hive, and finally, in the last weeks of their lives, to foraging for water, nectar, and pollen.

Bees have a genetic threshold for carrying out these tasks. The threshold changes as they age. A young bee has a very high threshold for foraging that decreases over her life. Further, these thresholds vary by patriline (even though every bee in the hive has the same mother, there are many fathers), providing diversity.

So as the temperature in the hive climbs, a few bees go down to the hive entrance and fan. As it gets hotter, even more bees will take up the task, depending on their internal threshold. Their genetic programming, combined with the diversity in their thresholds, promotes an even response to temperature swings that could damage the hive. You can read more about hive thermoregulation in an earlier blog post I wrote on the topic.
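As a rough illustration of how a spread of individual thresholds produces this graded, decentralized response, here is a toy Python simulation. The numbers and the uniform threshold distribution are invented for the example; they are not taken from this post or from real bee data.

import random

random.seed(42)

TARGET_TEMP = 34.0  # degrees C, the brood-nest temperature the hive maintains

# Give each of 1,000 workers her own fanning threshold, somewhere between the
# target and 2 degrees above it. The spread stands in for the genetic diversity
# across patrilines described above.
thresholds = [TARGET_TEMP + random.uniform(0.0, 2.0) for _ in range(1000)]

def fanners(hive_temp):
    # Count the bees whose personal threshold tells them to fan at this temperature.
    return sum(1 for t in thresholds if hive_temp >= t)

for temp in [34.0, 34.5, 35.0, 35.5, 36.0]:
    print(f"hive at {temp:.1f} C -> {fanners(temp)} bees fanning")

Because the thresholds differ, the number of fanning bees rises smoothly as the hive warms instead of every bee switching on at once, which is the even response described above.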

Swarming and Protecting Against Byzantine Failure

An even more interesting phenomenon is how bees decide to swarm. Because the hive is a super organism, the queen’s efforts to reproduce don’t result in a new hive unless there’s a swarm. Swarming is how new hives are created.

Bees swarm in response to stresses like insufficient food supply, too little space, and so on. But no one really knows how a hive decides it’s time to swarm. In preparation for a swarm, the hive starts to raise new queens. Whether an egg grows into a worker, drone, or queen is determined by how the larva is fed by nurse bees. At some point the bees collectively determine to swarm and the queen produces a pheromone that broadcasts that decision.

The swarm consists of the current queen (and her powerful pheromones), some of the worker bees, and a portion of the honey stores. The swarm leaves the hive and the remaining bees raise the new queen and carry on. The swarm flies a short distance and settles down on some convenient structure to decide where to make their permanent home. Again the swarm centers on the queen. This is where the fun starts.

Thomas Seeley of Cornell has been studying swarms for his entire career. In the following video he describes how bees use collective decision making to choose their new home.

Cornell professor, biologist and beekeeper Thomas Seeley (click to view)

There are several interesting features in this process. First, Seeley has determined that bees don’t just make a good decision, but the best possible decision. I think that’s amazing. Several hundred bees leave the swarm to search for a new home and participate in a debate to choose one of the available sites and settle on the best choice.

This is a process that is potentially subject to byzantine failure. Not that the bees are malicious; in fact, they're programmed to accurately represent their findings. But they can report faulty information based on their judgment of the suitability of a candidate site. The use of reputation signals for sites and voting by multiple inspectors allows the bees to avoid bad decisions even in the face of false signals.

Swarm lodged in a fruit tree in my garden (click to enlarge)

The process is further protected from error because bees are programmed to only advertise sites they’ve actually visited. Again, they don’t have the ability to be malicious. Each bee advertising a potential site has done the work of flying to the site and inspecting it. As bees signal their excitement for that site in a waggle dance, even more bees will fly out to it, perform an inspection, and return to advertise their findings. I don’t know if I’d characterize this as proof of work, but it does ensure that votes are based on real information. Once a quorum of bees in the swarm reach consensus about a particular site, the swarm departs and takes up residence in their new home.

Honeybee Democracy by Thomas D. Seeley

Honeybees make decisions collectively--and democratically. Every year, faced with the life-or-death problem of choosing and traveling to a new home, honeybees stake everything on a process that includes collective fact-finding, vigorous debate, and consensus building. In fact, as world-renowned animal behaviorist Thomas Seeley reveals, these incredible insects have much to teach us when it comes to collective wisdom and effective decision making.

You may not be thrilled if a swarm determines the best new home is in your attic, but you can be thrilled with the knowledge that ten thousand decentralized bees with sophisticated algorithmic programming achieved consensus and ranked it #1.

The hive is a super organism with its intelligence spread out among its tens of thousands of members. Life and death decisions are made on a daily basis in a completely decentralized fashion. Besides thermoregulation of the hive and finding a new home, the bees in a hive autonomously make millions of other decentralized decisions every day that result in the hive not only surviving but thriving in hostile conditions. I find that remarkable.

Tags: decentralization identity legitimacy


MyDigitalFootprint

Dashboards - we love them, but why do they love us?


Subject: Agenda item for our away day on strategy and scenarios

To: CEO and senior exec team

We should congratulate ourselves on the progress made, however as your CDO, I am now going to make a case that we measure too much, have too much data and that as a team, we should reflect on the next thing that data can support us in!

We have bought into "Data is the new oil," and whilst we know the analogy breaks down below the veneer, the message is beautifully simple and has empowered the change to a data and digital business. The global pandemic has accelerated our adoption and transformation, and we are in a better place than March 2020. However, sticking with oil, we know that the extraction process has downsides, including carbon release, mess, and the difficulty of locating economic wells. Amongst data's most significant downsides are legal liabilities, noise and the wrong data.

I can easily hide data's downsides through dashboards. Our dashboards are based on trickle-down KPIs and objectives from our strategic plan. We hand out SMART objectives, but such objectives fundamentally assume that we are doing, and will continue to do, the right thing. We gather data and, post-analysis, present it as trending graphs or red, amber, green dashboards, aiming for everything to be going in the right direction or green. Green only means we have no negative variance between our plan and the actual. Organisationally we are incentivised to achieve green at any unknown cost or consequence. Trending analysis can always be generated through the selection of data to fit the story.

Right now, we are using data to measure, analyse and inform. We have better control of variance across our entire operations and ecosystem than at any previous point in our history. We increasingly have to depend on dashboards to manage the complicated and complex, remain informed, and determine where our energies should be focussed. Without a doubt, this should remain and continue to be improved, but we are already seeing diminishing returns from the data we collect, due to the increase in noise over signal, and should aim to cull rather than add.

As an agenda item: should we introduce the colour BLUE into our reporting, trends, and dashboards to reduce reporting and data? The traditional traffic lights remain the same; blue does not replace any of them. It can become the colour that tells us we are doing the right thing, rather than just doing the wrong thing that is in the plan in the most efficient and effective way (which can show as green or red).

As a team, we have to feed our sense of enquiry, and our existing data gathering, analysis, and reporting do not do justice to the complexity we are faced with. More data does not solve complexity. Data has allowed us to evolve beyond finance data as the most critical decision-making source and to become more sensitive. Whilst we have far more data, it is still narrow, and we should consider how we prepare for the next evolution, which will not be more of the same data. Following on from customer data and the start in ESG, the next data set we are being mandated to report on is Human Capital.

Human capital reporting opens up our ability to sense what our employees and ecosystem are sensing, seeing and implying, helping us to determine, using technologies such as sentiment analysis, whether we are doing the right thing. Where are issues occurring that are not on the dashboard? What as-yet-unidentified tensions and conflicts are created by our current trickle-down objective/incentive system? However, as you can imagine, big brother, privacy, and trust are foundational issues we need to discuss up front before this next evolutionary step hits us. This next phase of our data journey means we will find it harder to hide in the selection of data for trends and dashboards, or to just seek the right trend or green. This is more data, but different data, and it will fill a gap in our knowledge, meaning we will be better informed about complex decisions.

I would like to present for 15 minutes on this topic and host a 45-minute debate with your approval.

Your CDO


Monday, 10. May 2021

Simon Willison

Django SQL Dashboard

I've released the first non-alpha version of Django SQL Dashboard, which provides an interface for running arbitrary read-only SQL queries directly against a PostgreSQL database, protected by the Django authentication scheme. It can also be used to create saved dashboards that can be published or shared internally.

I started building this tool back in March as part of my work to port VaccinateCA away from Airtable to a custom Django backend. One of the strengths of Airtable is that it allows ad-hoc data exploration and reporting, and I wanted to provide an alternative to that for the new Django backend.

I also wanted to try out some new ideas for Datasette, which doesn't (yet) work with PostgreSQL.

First, a demo

I recorded this three minute video demo introducing the software, using my blog's database as an example.

In the video I run the following SQL queries to explore the many-to-many table that maps tags to my blog entries:

select * from blog_entry_tags;

The table starts out looking like this - not particularly interesting:

Then I run this query to join it against the blog_tag table and get the details of each tag:

select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id

This is a bit more useful. I then click on the "count" link at the top of that "tag" column. This constructs a SQL query for me that uses a count(*) and group by to return a count of each value in that column:

select "tag", count(*) as n from ( select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id ) as results group by "tag" order by n desc

Then I demonstrate some of the default widget visualizations that come with Django SQL Dashboard. If I rewrite the query to return columns called bar_label and bar_quantity the tool will render the results as a bar chart:

select "tag" as bar_label, count(*) as bar_quantity from ( select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id ) as results group by "tag" order by bar_quantity desc

Next, I demonstrate a similar trick that instead produces a word cloud by aliasing the columns to wordcloud_word and wordcloud_count:

select "tag" as wordcloud_word, count(*) as wordcloud_count from ( select * from blog_entry_tags join blog_tag on blog_tag.id = blog_entry_tags.tag_id ) as results group by "tag" order by wordcloud_count desc

Finally, I show how that query can be turned into a saved dashboard and made available to the public. Here's the saved dashboard I created in the video:

https://simonwillison.net/dashboard/tag-cloud/

This illustrates a key idea underlying both Django SQL dashboard and Datasette: a complete application can be defined as a SQL query!

Much of the work we do as web application developers can be boiled down to constructing a SQL query and hooking it up to output to a web page. If you can safely execute SQL queries from page query strings this means you can build custom applications that exist entirely as bookmarkable URLs.

My draw-a-shape-on-a-map application for searching mini parks in California from a few months ago is another example of this pattern in action.

Custom widgets

Building new custom widgets for this tool is extremely easy - hence the word cloud widget which I actually built specially for this demo. All you need to provide is a single Django template file.

If your widget is going to respond to returned columns wordcloud_word and wordcloud_count the name of that template is those columns, sorted alphabetically and joined with hyphens:

wordcloud_count-wordcloud_word.html

Place that in a django_sql_dashboard/widgets template directory and the new widget will be ready to use. Here's the full implementation of the word cloud widget.
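To make the naming convention concrete, here is a tiny hypothetical Python helper (not part of the package's API) that derives such a template name:

def widget_template_name(columns):
    # Sort the returned column names and join them with hyphens, as described above.
    return "-".join(sorted(columns)) + ".html"

widget_template_name(["wordcloud_word", "wordcloud_count"])
# -> 'wordcloud_count-wordcloud_word.html'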

Named parameter support

This is a feature I lifted directly from Datasette. You can construct SQL queries that look like this:

select * from blog_entry where id = %(id)s

This uses psycopg2 syntax for named parameters. The value will be correctly quoted and escaped, so this is a useful tool for avoiding SQL injection attacks.
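As a rough sketch of the idea (the regex and names here are illustrative, not the package's actual code), the placeholders can be pulled out of the SQL to build form fields, while the submitted values are passed to the database driver separately so they are never interpolated into the query string:

import re
from django.db import connection

sql = "select * from blog_entry where id = %(id)s"

# Find the %(name)s placeholders so they can be rendered as form fields.
parameter_names = set(re.findall(r"%\((\w+)\)s", sql))  # {'id'}

# Execute with a dictionary of values; psycopg2 quotes and escapes them.
with connection.cursor() as cursor:
    cursor.execute(sql, {"id": 7991})
    rows = cursor.fetchall()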

Django SQL Dashboard spots these parameters and turns them into form fields. Here's what that looks like in the interface:

These forms submit using GET, so the result can be bookmarked. Here's a saved dashboard you can use to retrieve the details of any of my blog entries by their numeric ID:

https://simonwillison.net/dashboard/blog-entry-by-id/?id=7991

You can include multiple SQL parameters on a single dashboard, and any form parameters will be made available to all of those queries.

This means you can build dashboards that run multiple queries against the same arguments. Imagine for example you want to build a report about a specific user's activity across multiple tables - you can accept their user ID as a parameter, then display the output of multiple queries (including custom visualizations) that each refer to that parameter.

Export through copy and paste

I love copy and paste as a mechanism for exporting data from a system. Django SQL Dashboard embraces this in a couple of ways:

Results from SQL queries can be copied out as TSV from an expandable textarea below the table - up to 1,000 rows. I like this format because you can paste it directly into Google Sheets or Excel to get the data correctly split into cells. Any time JSON is returned as a value from PostgreSQL, a "copy to clipboard" icon is shown next to the JSON. I use this a lot: both for JSON stored in PostgreSQL as well as the output from JSON aggregation functions.

Export all query results as CSV/TSV

This comes up a lot at Vaccinate CA: we do a lot of data analysis where we need to work with other tools or send data to partners, and having a way to export the full set of results for a query (rather than truncating at the first thousand to avoid crashing the user's browser) was a frequent need.

Django SQL Dashboard provides this option using a combination of Django's streaming HTTP response mechanism and PostgreSQL server-side cursors to efficiently stream large amounts of data without running out of resources.
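As a minimal sketch of that general pattern (a hypothetical view, not the package's actual implementation), a Django StreamingHttpResponse can be fed from a psycopg2 server-side (named) cursor so rows are fetched in batches rather than loaded into memory all at once:

import csv
from django.db import connection, transaction
from django.http import StreamingHttpResponse

class Echo:
    # Pseudo-buffer: csv.writer only needs an object whose write() method
    # returns the value, so each formatted row can be yielded directly.
    def write(self, value):
        return value

def export_tsv(request):
    # Hypothetical view sketching the pattern; the real implementation
    # also handles permissions, query validation and so on.
    sql = "select id, title from blog_entry order by id"
    writer = csv.writer(Echo(), delimiter="\t")

    def rows():
        with transaction.atomic():
            # A named cursor on the underlying psycopg2 connection is a
            # PostgreSQL server-side cursor; rows stream in batches of itersize.
            with connection.connection.cursor(name="dashboard_export") as cursor:
                cursor.itersize = 1000
                cursor.execute(sql)
                yield writer.writerow(["id", "title"])
                for row in cursor:
                    yield writer.writerow(row)

    response = StreamingHttpResponse(rows(), content_type="text/tab-separated-values")
    response["Content-Disposition"] = 'attachment; filename="results.tsv"'
    return response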

A complex example: searching code examples across my blog

I decided to see how far I could push PostgreSQL.

I often include code in my blog entries - examples that are wrapped in a <pre> tag. Within that tag I sometimes apply syntax highlighting (a bunch of <span> elements).

It turns out I've included code snippets in 134 different blog entries:

select count(*) from blog_entry where body ~ '<pre>.*<pre>'

Can I use regular expressions in PostgreSQL to extract just the code examples, clean them up (removing those spans, reversing HTML entity encoding) and then provide simple search across the text of those examples, all in one query?

It turns out I can!

Here's a saved dashboard you can use to execute searches against just the contents of those <pre> tags across every entry on my blog:

https://simonwillison.net/dashboard/code-examples/?search=select

with results_stripped as ( select id, title, replace(replace(replace(replace(replace(regexp_replace( (regexp_matches(body, '<pre>(.*?)</pre>', 'g'))[1], E'<[^>]+>', '', 'gi' ), '&quot' || chr(59), '"'), '&gt' || chr(59), '>'), '&lt' || chr(59), '<'), '&#039' || chr(59), chr(39)), '&amp' || chr(59), '&' ) as code from blog_entry where body ~ '<pre>.*<pre>' ) select id, title, code, 'https://simonwillison.net/e/' || id as link from results_stripped where code like '%%' || %(search)s || '%%' limit 10

There's a lot going on here. The key component is this bit:

regexp_matches(body, '<pre>(.*?)</pre>', 'g'))[1]

The regexp_matches() function, with the 'g' flag, returns every match for the given regular expression. As part of a larger select query this means that if the expression matches three times you'll get back three rows in the output (in this case with duplicate id and title columns) - which is what I want here.

It's wrapped in a terrifying nest of extra functions. These serve two purposes: they strip out any nested HTML tags, and they un-escape the &quot;, &lt;, &gt;, &amp; and &#039; HTML entities. I did this as a nested block of replace() functions - there's probably a neater solution here.

The chr(59) bits are a hack: Django SQL Dashboard disallows the ; character to ensure people can't execute multiple SQL queries - which could be used to work around some of the per-transaction protective settings applied by the tool.

But I need to search-and-replace &quot; - so I use this pattern to include the semicolon:

replace(text, '&quot' || chr(59), '"')

Where || is the PostgreSQL string concatenation operator.

The search itself is constructed like this:

where code like '%%' || %(search)s || '%%'

This constructs a like query against '%your-search-term%' - the double percentage sign escaping is needed because % has a special meaning here (it's part of the %(search)s named parameter).

One last trick: the final output of the query is produced by this:

select id, title, code, 'https://simonwillison.net/e/' || id as link from results_stripped

results_stripped is a CTE defined earlier - I usually try to wrap up complex weird stuff like those nested replace() calls in a CTE so I can write a simple final query.

The 'https://simonwillison.net/e/' || id as link bit here concatenates together a URL that links to my entry based on its ID. My blog uses /yyyy/Mon/slug/ URLs but generating these from a SQL query against the created column was a little fussy, so I added /e/ID redirecting URLs to make generating links in dashboard queries easier.

Future plans

Django SQL Dashboard has already proved itself invaluable for my current project. I imagine I'll be using it for every Django project I build going forward - being able to query the database like this, create ad-hoc visualizations and then link to them is a huge productivity boost.

The bigger question is how it overlaps with Datasette.

Datasette has been SQLite-only since I started the project three and a half years ago - because I know that building a database abstraction layer is a huge additional commitment and, for Datasette's initial purpose of helping publish read-only data, it didn't feel necessary.

I have a growing suspicion that getting Datasette to work against PostgreSQL (and other database backends) in addition to SQLite is less work than I had originally thought.

Datasette is also built on top of ASGI. Django 3.0 introduced ASGI support, so it's now possible to host ASGI applications like Datasette as part of a unified Django application.

So it's possible that the future of Django SQL Dashboard will be for Datasette to eventually make it obsolete.

That doesn't stop it from being extremely useful today. If you try it out I'd love to hear from you! I'm also keen to see people start to expand it for their own projects, especially via the custom widgets mechanism.

Let me know if you try it out!

TIL this week: Scroll page to form if there are errors
Releases this week: django-sql-dashboard: 0.12 - (22 releases total) - 2021-05-08 - Django app for building dashboards using raw SQL queries

Damien Bod

Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR

This article shows how to use verifiable credentials stored on a digital wallet to verify a digital identity and use it in an application. For this to work, a trust needs to exist between the verifiable credential issuer and the application which requires the verifiable credentials for verification. A decentralized blockchain database is used and MATTR is […]

This article shows how to use verifiable credentials stored on a digital wallet to verify a digital identity and use it in an application. For this to work, a trust needs to exist between the verifiable credential issuer and the application which requires the verifiable credentials for verification. A decentralized blockchain database is used, and MATTR acts as an access layer to this ledger and blockchain. The applications are implemented in ASP.NET Core.

The verifier application Bo Insurance is used to implement the verification process and to create a presentation template. The application sends an HTTP POST request to create a presentation request using the DID Id from the OIDC credential issuer created in the previous article. This DID is created by the National Driving license application which issues verifiable credentials, so a trust needs to exist between the two applications. Once the credentials have been issued to a holder and stored, for example in a digital wallet, the issuer is no longer involved in the process. Verifying the credentials only requires the holder, the verifier, and the decentralized database which holds the digital identities and documents. The verifier application gets the DID from the ledger and signs the verify request. The request can then be presented as a QR Code. The holder can scan this using a MATTR digital wallet and grant consent to share the credentials with the application. The digital wallet calls the callback API defined in the presentation request body and sends the data to the API. The verifier application hosting the API would need to verify the data and can update the application UI using SignalR to continue the business process with the verified credentials.

Code https://github.com/swiss-ssi-group/MattrGlobalAspNetCore

Blogs in the series

Getting started with Self Sovereign Identity SSI
Create an OIDC credential Issuer with MATTR and ASP.NET Core
Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR
Verify vaccination data using Zero Knowledge Proofs with ASP.NET Core and MATTR

Create the presentation template for the Verifiable Credential

A presentation template is required to verify the issued verifiable credentials stored on a digital wallet.

The digital identity (DID) Id of the OIDC credential issuer is all that is required to create a presentation request template. In the application which issues credentials, ie the NationalDrivingLicense, a Razor page was created to view the DID of the OIDC credential issuer.

The DID can be used to create the presentation template. The MATTR documentation is really good here:

https://learn.mattr.global/tutorials/verify/presentation-request-template

A Razor page was created to start this task from the UI. This would normally require authentication as this is an administrator task from the application requesting the verified credentials. The code behind the Razor page takes the DID request parameter and calls the MattrPresentationTemplateService to create the presentation template and persist its ID to a database.

public class CreatePresentationTemplateModel : PageModel { private readonly MattrPresentationTemplateService _mattrVerifyService; public bool CreatingPresentationTemplate { get; set; } = true; public string TemplateId { get; set; } [BindProperty] public PresentationTemplate PresentationTemplate { get; set; } public CreatePresentationTemplateModel(MattrPresentationTemplateService mattrVerifyService) { _mattrVerifyService = mattrVerifyService; } public void OnGet() { PresentationTemplate = new PresentationTemplate(); } public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } TemplateId = await _mattrVerifyService.CreatePresentationTemplateId(PresentationTemplate.DidId); CreatingPresentationTemplate = false; return Page(); } } public class PresentationTemplate { [Required] public string DidId { get; set; } }

The Razor page html template creates a form to post the request to the server rendered page and displays the templateId after, if the creation was successful.

@page @model BoInsurance.Pages.CreatePresentationTemplateModel <div class="container-fluid"> <div class="row"> <div class="col-sm"> <form method="post"> <div> <div class="form-group"> <label class="control-label">DID ID</label> <input asp-for="PresentationTemplate.DidId" class="form-control" /> <span asp-validation-for="PresentationTemplate.DidId" class="text-danger"></span> </div> <div class="form-group"> @if (Model.CreatingPresentationTemplate) { <input class="form-control" type="submit" readonly="@Model.CreatingPresentationTemplate" value="Create Presentation Template" /> } </div> <div class="form-group"> @if (!Model.CreatingPresentationTemplate) { <div class="alert alert-success"> <strong>Mattr Presentation Template created</strong> </div> } </div> </div> </form> <hr /> <p>When the templateId is created, you can use the template ID to verify</p> </div> <div class="col-sm"> <div> <img src="~/ndl_car_01.png" width="200" alt="Driver License"> <div> <b>Driver Licence templateId from presentation template</b> <hr /> <dl class="row"> <dt class="col-sm-4">templateId</dt> <dd class="col-sm-8"> @Model.TemplateId </dd> </dl> </div> </div> </div> </div> </div>

The MattrPresentationTemplateService is used to create the MATTR presentation template. This class uses the MATTR API and sends a HTTP post request with the DID Id of the OIDC credential issuer and creates a presentation template. The service saves the returned payload to a database and returns the template ID as the result. The template ID is required to verify the verifiable credentials.

The MattrTokenApiService is used to request an API token for the MATTR API using the credential of your MATTR account. This service has a simple token cache and only requests new access tokens when no token exists or the token has expired.

The BoInsuranceDbService service is used to access the SQL database using Entity Framework Core. This provides simple methods to persist or select the data as required.

private readonly IHttpClientFactory _clientFactory; private readonly MattrTokenApiService _mattrTokenApiService; private readonly BoInsuranceDbService _boInsuranceDbService; private readonly MattrConfiguration _mattrConfiguration; public MattrPresentationTemplateService(IHttpClientFactory clientFactory, IOptions<MattrConfiguration> mattrConfiguration, MattrTokenApiService mattrTokenApiService, BoInsuranceDbService boInsuranceDbService) { _clientFactory = clientFactory; _mattrTokenApiService = mattrTokenApiService; _boInsuranceDbService = boInsuranceDbService; _mattrConfiguration = mattrConfiguration.Value; } public async Task<string> CreatePresentationTemplateId(string didId) { // create a new one var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(didId); // save to db var drivingLicensePresentationTemplate = new DrivingLicensePresentationTemplate { DidId = didId, TemplateId = v1PresentationTemplateResponse.Id, MattrPresentationTemplateReponse = JsonConvert .SerializeObject(v1PresentationTemplateResponse) }; await _boInsuranceDbService .CreateDriverLicensePresentationTemplate(drivingLicensePresentationTemplate); return v1PresentationTemplateResponse.Id; } private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate(string didId) { HttpClient client = _clientFactory.CreateClient(); var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken"); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json"); var v1PresentationTemplateResponse = await CreateMattrPresentationTemplate(client, didId); return v1PresentationTemplateResponse; }

The CreateMattrPresentationTemplate method sends the HTTP POST request as shown in the MATTR API documentation. Creating the payload for the HTTP POST request using the MATTR Open API definitions is a bit complicated; this could be improved with a better Open API definition. In our use case, we just want to create the default template for the OIDC credential issuer and so only require the DID Id. Most of the other properties are fixed values; see the MATTR API docs for more information.

private async Task<V1_PresentationTemplateResponse> CreateMattrPresentationTemplate( HttpClient client, string didId) { // create presentation, post to presentations templates api // https://learn.mattr.global/tutorials/verify/presentation-request-template var createPresentationsTemplatesUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/presentations/templates"; var additionalProperties = new Dictionary<string, object>(); additionalProperties.Add("type", "QueryByExample"); additionalProperties.Add("credentialQuery", new List<CredentialQuery> { new CredentialQuery { Reason = "Please provide your driving license", Required = true, Example = new Example { Context = new List<object>{ "https://schema.org" }, Type = "VerifiableCredential", TrustedIssuer = new List<TrustedIssuer2> { new TrustedIssuer2 { Required = true, Issuer = didId // DID use to create the oidc } } } } }); var payload = new MattrOpenApiClient.V1_CreatePresentationTemplate { Domain = _mattrConfiguration.TenantSubdomain, Name = "certificate-presentation", Query = new List<Query> { new Query { AdditionalProperties = additionalProperties } } }; var payloadJson = JsonConvert.SerializeObject(payload); var uri = new Uri(createPresentationsTemplatesUrl); using (var content = new StringContentWithoutCharset(payloadJson, "application/json")) { var presentationTemplateResponse = await client.PostAsync(uri, content); if (presentationTemplateResponse.StatusCode == System.Net.HttpStatusCode.Created) { var v1PresentationTemplateResponse = JsonConvert .DeserializeObject<MattrOpenApiClient.V1_PresentationTemplateResponse>( await presentationTemplateResponse.Content.ReadAsStringAsync()); return v1PresentationTemplateResponse; } var error = await presentationTemplateResponse.Content.ReadAsStringAsync(); } throw new Exception("whoops something went wrong"); }

The application can be started and the presentation template can be created. The ID is returned back to the UI for the next step.

Verify the verifiable credentials

Now that a template exists to request the verifiable data from the holder of the data, which is normally stored in a digital wallet, the verifier application can create and start a verification process. A POST request is sent to the MATTR APIs which creates a presentation request using a DID ID and the required template. The application can request the DID from the OIDC credential issuer. The request is signed using the correct key from the DID and the request is published in the UI as a QR Code. A digital wallet is used to scan the code and the user of the wallet can grant consent to share the personal data. The wallet sends an HTTP POST request to the callback API. This API handles the request, validates the data, and updates the UI using SignalR to move to the next step of the business process using the verified data.

Step 1 Invoke a presentation request

The InvokePresentationRequest method implements the presentation request. This method requires the DID Id of the OIDC credential issuer, which will be used to get the data from the holder of the data. The ID of the template created above is also required. A challenge is also used to track the verification. The challenge is a random value and is used when the digital wallet calls the API with the verified data. The callback URL is where the data is returned to; this could be unique for every request or anything you want. The payload is created as the MATTR API docs define. The POST request is sent to the MATTR API and a V1_CreatePresentationRequestResponse is returned if everything is configured correctly.

private async Task<V1_CreatePresentationRequestResponse> InvokePresentationRequest( HttpClient client, string didId, string templateId, string challenge, string callbackUrl) { var createDidUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/presentations/requests"; var payload = new MattrOpenApiClient.V1_CreatePresentationRequestRequest { Did = didId, TemplateId = templateId, Challenge = challenge, CallbackUrl = new Uri(callbackUrl), ExpiresTime = MATTR_EPOCH_EXPIRES_TIME_VERIFIY // Epoch time }; var payloadJson = JsonConvert.SerializeObject(payload); var uri = new Uri(createDidUrl); using (var content = new StringContentWithoutCharset(payloadJson, "application/json")) { var response = await client.PostAsync(uri, content); if (response.StatusCode == System.Net.HttpStatusCode.Created) { var v1CreatePresentationRequestResponse = JsonConvert .DeserializeObject<V1_CreatePresentationRequestResponse>( await response.Content.ReadAsStringAsync()); return v1CreatePresentationRequestResponse; } var error = await response.Content.ReadAsStringAsync(); } return null; }

Step 2 Get the OIDC Issuer DID

The RequestDID method uses the MATTR API to get the DID data from the blockchain for the OIDC credential issuer. Only the DID Id is required.

private async Task<V1_GetDidResponse> RequestDID(string didId, HttpClient client) { var requestUrl = $"https://{_mattrConfiguration.TenantSubdomain}/core/v1/dids/{didId}"; var uri = new Uri(requestUrl); var didResponse = await client.GetAsync(uri); if (didResponse.StatusCode == System.Net.HttpStatusCode.OK) { var v1CreateDidResponse = JsonConvert.DeserializeObject<V1_GetDidResponse>( await didResponse.Content.ReadAsStringAsync()); return v1CreateDidResponse; } var error = await didResponse.Content.ReadAsStringAsync(); return null; }

Step 3 Sign the request using correct key and display QR Code

To verify data using a digital wallet, the payload must be signed using the correct key. The SignAndEncodePresentationRequestBody method uses the DID payload and the request from the presentation request to create the payload to sign. Creating the payload is a bit messy due to the OpenAPI definitions created for the MATTR API. An HTTP POST request with the payload returns the signed JWT wrapped in a strange data format, so we parse this as a string and manually extract the JWT.

private async Task<string> SignAndEncodePresentationRequestBody( HttpClient client, V1_GetDidResponse did, V1_CreatePresentationRequestResponse v1CreatePresentationRequestResponse) { var createDidUrl = $"https://{_mattrConfiguration.TenantSubdomain}/v1/messaging/sign"; object didUrlArray; did.DidDocument.AdditionalProperties.TryGetValue("authentication", out didUrlArray); var didUrl = didUrlArray.ToString().Split("\"")[1]; var payload = new MattrOpenApiClient.SignMessageRequest { DidUrl = didUrl, Payload = v1CreatePresentationRequestResponse.Request }; var payloadJson = JsonConvert.SerializeObject(payload); var uri = new Uri(createDidUrl); using (var content = new StringContentWithoutCharset(payloadJson, "application/json")) { var response = await client.PostAsync(uri, content); if (response.StatusCode == System.Net.HttpStatusCode.OK) { var result = await response.Content.ReadAsStringAsync(); return result; } var error = await response.Content.ReadAsStringAsync(); } return null; }

The CreateVerifyCallback method uses the presentation request, the get-DID request, and the signing HTTP POST request to create a URL which can be displayed in a UI. The challenge is created using the RNGCryptoServiceProvider class, which creates a random string. The access token used to access the API is returned from the client credentials OAuth request or from the in-memory cache. The DrivingLicensePresentationVerify class is persisted to a database and the verify URL is returned so that it can be displayed as a QR Code in the UI.

/// <summary> /// https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e /// </summary> /// <param name="callbackBaseUrl"></param> /// <returns></returns> public async Task<(string QrCodeUrl, string ChallengeId)> CreateVerifyCallback(string callbackBaseUrl) { callbackBaseUrl = callbackBaseUrl.Trim(); if (!callbackBaseUrl.EndsWith('/')) { callbackBaseUrl = $"{callbackBaseUrl}/"; } var callbackUrlFull = $"{callbackBaseUrl}{MATTR_CALLBACK_VERIFY_PATH}"; var challenge = GetEncodedRandomString(); HttpClient client = _clientFactory.CreateClient(); var accessToken = await _mattrTokenApiService.GetApiToken(client, "mattrAccessToken"); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); client.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json"); var template = await _boInsuranceDbService.GetLastDriverLicensePrsentationTemplate(); // Invoke the Presentation Request var invokePresentationResponse = await InvokePresentationRequest( client, template.DidId, template.TemplateId, challenge, callbackUrlFull); // Request DID V1_GetDidResponse did = await RequestDID(template.DidId, client); // Sign and Encode the Presentation Request body var signAndEncodePresentationRequestBodyResponse = await SignAndEncodePresentationRequestBody( client, did, invokePresentationResponse); // fix strange DTO var jws = signAndEncodePresentationRequestBodyResponse.Replace("\"", ""); // save to db // TODO add this back once working var drivingLicensePresentationVerify = new DrivingLicensePresentationVerify { DidId = template.DidId, TemplateId = template.TemplateId, CallbackUrl = callbackUrlFull, Challenge = challenge, InvokePresentationResponse = JsonConvert.SerializeObject(invokePresentationResponse), Did = JsonConvert.SerializeObject(did), SignAndEncodePresentationRequestBody = jws }; await _boInsuranceDbService.CreateDrivingLicensePresentationVerify(drivingLicensePresentationVerify); var qrCodeUrl = $"didcomm://https://{_mattrConfiguration.TenantSubdomain}/?request={jws}"; return (qrCodeUrl, challenge); } private string GetEncodedRandomString() { var base64 = Convert.ToBase64String(GenerateRandomBytes(30)); return HtmlEncoder.Default.Encode(base64); } private byte[] GenerateRandomBytes(int length) { using var randonNumberGen = new RNGCryptoServiceProvider(); var byteArray = new byte[length]; randonNumberGen.GetBytes(byteArray); return byteArray; }

The CreateVerifierDisplayQrCodeModel is the code-behind for the Razor page used to request a verification and also display the verify QR Code for the digital wallet to scan. The CallbackUrl can be set from the UI so that testing is easier. This callback can be any webhook or API you want. To test the application in local development, I used ngrok. The return URL has to match the proxy which tunnels to your PC once you start it. If the API has no public address when debugging, you will not be able to test the callback locally.

public class CreateVerifierDisplayQrCodeModel : PageModel { private readonly MattrCredentialVerifyCallbackService _mattrCredentialVerifyCallbackService; public bool CreatingVerifier { get; set; } = true; public string QrCodeUrl { get; set; } [BindProperty] public string ChallengeId { get; set; } [BindProperty] public CreateVerifierDisplayQrCodeCallbackUrl CallbackUrlDto { get; set; } public CreateVerifierDisplayQrCodeModel(MattrCredentialVerifyCallbackService mattrCredentialVerifyCallbackService) { _mattrCredentialVerifyCallbackService = mattrCredentialVerifyCallbackService; } public void OnGet() { CallbackUrlDto = new CreateVerifierDisplayQrCodeCallbackUrl(); CallbackUrlDto.CallbackUrl = $"https://{HttpContext.Request.Host.Value}"; } public async Task<IActionResult> OnPostAsync() { if (!ModelState.IsValid) { return Page(); } var result = await _mattrCredentialVerifyCallbackService .CreateVerifyCallback(CallbackUrlDto.CallbackUrl); CreatingVerifier = false; QrCodeUrl = result.QrCodeUrl; ChallengeId = result.ChallengeId; return Page(); } } public class CreateVerifierDisplayQrCodeCallbackUrl { [Required] public string CallbackUrl { get; set; } }

The html or template part of the Razor page displays the QR Code from a successful post request. You can set any URL for the callback in the form request. This is really just for testing.

@page @model BoInsurance.Pages.CreateVerifierDisplayQrCodeModel <div class="container-fluid"> <div class="row"> <div class="col-sm"> <form method="post"> <div> <div class="form-group"> <label class="control-label">Callback base URL (ngrok in debug...)</label> <input asp-for="CallbackUrlDto.CallbackUrl" class="form-control" /> <span asp-validation-for="CallbackUrlDto.CallbackUrl" class="text-danger"></span> </div> <div class="form-group"> @if (Model.CreatingVerifier) { <input class="form-control" type="submit" readonly="@Model.CreatingVerifier" value="Create Verification" /> } </div> <div class="form-group"> @if (!Model.CreatingVerifier) { <div class="alert alert-success"> <strong>Ready to verify</strong> </div> } </div> </div> </form> <hr /> <p>When the verification is created, you can scan the QR Code to verify</p> </div> <div class="col-sm"> <div> <img src="~/ndl_car_01.png" width="200" alt="Driver License"> </div> </div> </div> <div class="row"> <div class="col-sm"> <div class="qr" id="qrCode"></div> <input asp-for="ChallengeId" hidden/> </div> </div> </div> @section scripts { <script src="~/js/qrcode.min.js"></script> <script type="text/javascript"> new QRCode(document.getElementById("qrCode"), { text: "@Html.Raw(Model.QrCodeUrl)", width: 400, height: 400, correctLevel: QRCode.CorrectLevel.M }); $(document).ready(() => { }); var connection = new signalR.HubConnectionBuilder().withUrl("/mattrVerifiedSuccessHub").build(); connection.on("MattrCallbackSuccess", function (challengeId) { console.log("received verification:" + challengeId); window.location.href = "/VerifiedUser?challengeid=" + challengeId; }); connection.start().then(function () { //console.log(connection.connectionId); const challengeId = $("#ChallengeId").val(); if (challengeId) { console.log(challengeId); // join message connection.invoke("AddChallenge", challengeId, connection.connectionId).catch(function (err) { return console.error(err.toString()); }); } }).catch(function (err) { return console.error(err.toString()); }); </script> }

Step 4 Implement the Callback and update the UI using SignalR

After a successful verification in the digital wallet, the wallet sends the verified credentials to the API defined in the presentation request. The API handling this needs to update the correct client UI and continue the business process using the verified data. We use SignalR for this with a single client-to-client connection. The SignalR connection for each client is associated with a challenge ID, the same Id we used to create the presentation request. Using this, only the correct client will be notified rather than broadcasting to all clients. The DrivingLicenseCallback takes the body which is specific to the credentials you issued; this always depends on what you request. The data is saved to a database and the client is informed to continue. We send a message directly to the correct client using the connectionId of the SignalR session created for this challenge.

[ApiController] [Route("api/[controller]")] public class VerificationController : Controller { private readonly BoInsuranceDbService _boInsuranceDbService; private readonly IHubContext<MattrVerifiedSuccessHub> _hubContext; public VerificationController(BoInsuranceDbService boInsuranceDbService, IHubContext<MattrVerifiedSuccessHub> hubContext) { _hubContext = hubContext; _boInsuranceDbService = boInsuranceDbService; } /// <summary> /// { /// "presentationType": "QueryByExample", /// "challengeId": "GW8FGpP6jhFrl37yQZIM6w", /// "claims": { /// "id": "did:key:z6MkfxQU7dy8eKxyHpG267FV23agZQu9zmokd8BprepfHALi", /// "name": "Chris", /// "firstName": "Shin", /// "licenseType": "Certificate Name", /// "dateOfBirth": "some data", /// "licenseIssuedAt": "dda" /// }, /// "verified": true, /// "holder": "did:key:z6MkgmEkNM32vyFeMXcQA7AfQDznu47qHCZpy2AYH2Dtdu1d" /// } /// </summary> /// <param name="body"></param> /// <returns></returns> [HttpPost] [Route("[action]")] public async Task<IActionResult> DrivingLicenseCallback([FromBody] VerifiedDriverLicense body) { string connectionId; var found = MattrVerifiedSuccessHub.Challenges .TryGetValue(body.ChallengeId, out connectionId); // test Signalr //await _hubContext.Clients.Client(connectionId).SendAsync("MattrCallbackSuccess", $"{body.ChallengeId}"); //return Ok(); var exists = await _boInsuranceDbService.ChallengeExists(body.ChallengeId); if (exists) { await _boInsuranceDbService.PersistVerification(body); if (found) { //$"/VerifiedUser?challengeid={body.ChallengeId}" await _hubContext.Clients .Client(connectionId) .SendAsync("MattrCallbackSuccess", $"{body.ChallengeId}"); } return Ok(); } return BadRequest("unknown verify request"); } }

The SignalR server is configured in the Startup class of the ASP.NET Core application. The path for the hub is defined in the MapHub method.

public void ConfigureServices(IServiceCollection services) { // ... services.AddRazorPages(); services.AddSignalR(); services.AddControllers(); } public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { // ... app.UseEndpoints(endpoints => { endpoints.MapRazorPages(); endpoints.MapHub<MattrVerifiedSuccessHub>("/mattrVerifiedSuccessHub"); endpoints.MapControllers(); }); }

The Hub implementation requires only one fixed method. The AddChallenge method takes the challenge Id and adds it to an in-memory cache. The controller implemented for the callbacks uses this ConcurrentDictionary to find the correct connectionId, which is mapped to the challenge from the verification.

public class MattrVerifiedSuccessHub : Hub { /// <summary> /// This should be replaced with a cache which expires or something /// </summary> public static readonly ConcurrentDictionary<string, string> Challenges = new ConcurrentDictionary<string, string>(); public void AddChallenge(string challengeId, string connnectionId) { Challenges.TryAdd(challengeId, connnectionId); } }

The JavaScript SignalR client in the browser connects to the SignalR server and registers the connectionId with the challenge ID used for the verification of the verifiable credentials from the holder of the digital wallet. If a client gets a message that a verification has completed successfully and the callback has been called, it redirects to the verified page. The client listens on MattrCallbackSuccess for messages. These messages are sent directly from the callback controller.

<script type="text/javascript"> var connection = new signalR.HubConnectionBuilder() .withUrl("/mattrVerifiedSuccessHub").build(); connection.on("MattrCallbackSuccess", function (challengeId) { console.log("received verification:" + challengeId); window.location.href = "/VerifiedUser?challengeid=" + challengeId; }); connection.start().then(function () { //console.log(connection.connectionId); const challengeId = $("#ChallengeId").val(); if (challengeId) { console.log(challengeId); // join message connection.invoke("AddChallenge", challengeId, connection.connectionId).catch(function (err) { return console.error(err.toString()); }); } }).catch(function (err) { return console.error(err.toString()); }); </script>

The VerifiedUserModel Razor page displays the data and the business process can continue using the verified data.

public class VerifiedUserModel : PageModel { private readonly BoInsuranceDbService _boInsuranceDbService; public VerifiedUserModel(BoInsuranceDbService boInsuranceDbService) { _boInsuranceDbService = boInsuranceDbService; } public string ChallengeId { get; set; } public DriverLicenseClaimsDto VerifiedDriverLicenseClaims { get; private set; } public async Task OnGetAsync(string challengeId) { // user query param to get challenge id and display data if (challengeId != null) { var verifiedDriverLicenseUser = await _boInsuranceDbService.GetVerifiedUser(challengeId); VerifiedDriverLicenseClaims = new DriverLicenseClaimsDto { DateOfBirth = verifiedDriverLicenseUser.DateOfBirth, Name = verifiedDriverLicenseUser.Name, LicenseType = verifiedDriverLicenseUser.LicenseType, FirstName = verifiedDriverLicenseUser.FirstName, LicenseIssuedAt = verifiedDriverLicenseUser.LicenseIssuedAt }; } } } public class DriverLicenseClaimsDto { public string Name { get; set; } public string FirstName { get; set; } public string LicenseType { get; set; } public string DateOfBirth { get; set; } public string LicenseIssuedAt { get; set; } }

Running the verifier

To test the BoInsurance application (the verifier application) locally, ngrok is used so that we have a public address for the callback. I install ngrok using npm. Without a license, you can only run your application over http.

npm install -g ngrok

Run ngrok from the command line using the URL of the application. I start the ASP.NET Core application at localhost port 5000.

ngrok http localhost:5000

You should be able to copy the ngrok URL and use this in the browser to test the verification.

Once running, a verification can be created and you can scan the QR Code with your digital wallet. Once you grant access to your data, the data is sent to the callback API and the UI will be redirected to the success page.

Notes

MATTR APIs work really well and support some of the flows for digital identities. I plan to try out the zero knowledge proof flow next. It is only possible to create verifiable credentials from data from your identity provider using the id_token. To issue credentials, you have to implement your own identity provider and cannot use business data from your application. If you have full control, as with Openiddict, IdentityServer4 or Auth0, this is no problem, just more complicated to implement. If you do not control the data in your identity provider, you would need to create a second identity provider to issue credentials; this then becomes part of your business logic and not just an identity provider. This will always be a problem when using Azure AD or IDPs from large or medium-sized companies. The quality of the verifiable credentials also depends on how well the OIDC credential issuers are implemented, as these are still central databases for these credentials and are still open to all the problems we have today. Decentralized identities have the potential to solve many problems but still have many unsolved problems.

Links

https://mattr.global/

https://learn.mattr.global/tutorials/verify/using-callback/callback-e-to-e

https://mattr.global/get-started/

https://learn.mattr.global/

https://keybase.io/

https://learn.mattr.global/tutorials/dids/did-key

https://gunnarpeipman.com/httpclient-remove-charset/

https://auth0.com/

Friday, 07. May 2021

Doc Searls Weblog

First iPhone mention?

I wrote this fake story on January 24, 2005, in an email to Peter Hirshberg after we jokingly came up with it during a phone call. Far as I know, it was the first mention of the word “iPhone.” Apple introduces one-button iPhone Shuffle To nobody’s surprise, Apple’s long-awaited entry into the telephony market is […]

I wrote this fake story on January 24, 2005, in an email to Peter Hirshberg after we jokingly came up with it during a phone call. Far as I know, it was the first mention of the word “iPhone.”

Apple introduces one-button iPhone Shuffle

To nobody’s surprise, Apple’s long-awaited entry into the telephony market is no less radical and minimalistic than the one-button mouse and the gum-stick-sized music player. In fact, the company’s new cell phone — developed in deeply secret partnership with Motorola — extends the concept behind the company’s latest iPod, as well as its brand identity.

Like the iPod Shuffle, the new iPhone Shuffle has no display. It’s an all-white rectangle with a little green light to show that a call is in progress. While the iPhone Shuffle resembles the iPod Shuffle, its user interface is even more spare. In place of the round directional “wheel” of the iPods, the iPhone Shuffle sports a single square button. When pressed, the iPhone Shuffle dials a random number from its phone book.

“Our research showed that people don’t care who they call as much as they care about being on the phone,” said Apple CEO Steve Jobs. “We also found that most cell phone users hate routine, and prefer to be surprised. That’s just as true for people answering calls as it is for people making them. It’s much more liberating, and far more social, to call people at random than it is to call them deliberately.”

Said (pick an analyst), “We expect the iPhone Shuffle will do as much to change the culture of telephony as the iPod has done to change the culture of music listening.”

Safety was also a concern behind the one-button design. “We all know that thousands of people die on highways every year when they take their eyes off the road to dial or answer a cell phone,” Jobs said. “With the iPhone Shuffle, all they have to do is press one button, simple as that.”

For people who would rather dial contacts in order than at random, the iPhone Shuffle (like the iPod Shuffle) has a switch that allows users to call their phone book in the same order as listings are loaded from the Address Book application.
To accommodate the new product, Apple also released Version 4.0.1 of  Address Book, which now features “phonelists” modeled after the familiar “playlists” in iTunes. These allow the iPhone Shuffle’s phone book to be populated by the same ‘iFill’ system that loads playlists from iTunes into iPod Shuffles.

A number of online sites reported that Apple was negotiating with one of the major cell carriers to allow free calls between members who maintain .Mac accounts and keep their data in Apple’s Address Book. A few of those sites also suggested that future products in the Shuffle line will combine random phone calling and music playing, allowing users to play random music for random phone contacts.

The iPhone Shuffle will be sold at Apple retail stores.

Wednesday, 05. May 2021

Nader Helmy

IIW32: BBS+ and beyond

The Internet Identity Workshop continues to be a central nucleus for thoughtful discussion and development of all things related to digital identity. The most recent workshop, which was held in mid-April, was no exception. Despite the lack of in-person interaction due to the ongoing global pandemic, this IIW was as lively as ever, bringing together a diverse set of stakeholders from across the glo

The Internet Identity Workshop continues to be a central nucleus for thoughtful discussion and development of all things related to digital identity. The most recent workshop, which was held in mid-April, was no exception. Despite the lack of in-person interaction due to the ongoing global pandemic, this IIW was as lively as ever, bringing together a diverse set of stakeholders from across the globe to share experiences, swap perspectives, and engage in healthy debates.

One common theme this year was the continued development and adoption of BBS+ signatures, a type of multi-message cryptographic digital signature that enables selective disclosure of verifiable credentials. We first introduced this technology at IIW30 in April 2020, and have been inspired and delighted by the community’s embrace and contribution to this effort across the board. In the year since, progress has been made in a variety of areas, from standards-level support to independent implementations and advanced feature support.

We thought we’d take a moment to round up some of the significant developments surrounding BBS+ signatures and highlight a few of the top items to pay attention to going forward.

Over the past few months, the linked data proofs reference implementation of BBS+ published a new release that introduces a variety of improvements in efficiency and security, including formal alignment to the W3C CCG Security Vocab v3 definitions. In addition, support for JSON-LD BBS+ signatures was added to the VC HTTP API, making it possible to test this functionality in an interoperable way with other vendors participating in an open environment.

An important element in enabling BBS+ signatures is using what’s known as a pairing-friendly curve; for our purposes we use BLS12-381. We have seen some promising signs of adoption for this key pair, with multiple Decentralized Identifier (DID) methods — both did:indy from Hyperledger and did:ion from DIF — indicating they intend to add or already have support for these keys, allowing BBS+ signatures to be issued across a variety of decentralized networks and ecosystems. This development is possible due to the fact that BBS+ signatures is a ledger-independent approach to selective disclosure, effectively no custom logic or bespoke infrastructure is needed for these digital signatures to be created, used and understood.

In addition, the Hyperledger Aries project has been hard at work developing interoperable and ledger-agnostic capabilities in open source. The method used to track interop targets within the cohort and ultimately measure conformance against Aries standards is what’s known as an Aries Interop Profile (AIP). A major upcoming update to AIP will add support for additional DID methods, key types and credential formats, as well as introducing Aries support for JSON-LD BBS+ signatures as part of AIP 2.0. This will allow Aries-driven credential issuance and presentation protocols to work natively with BBS+ credentials, making that functionality broadly available for those in the Aries community and beyond.

There have also been a number of exciting developments when it comes to independent implementations of BBS+ signatures. Animo Solutions has recently implemented JSON-LD BBS+ signatures support into the popular open-source codebase Hyperledger Aries Cloud Agent Python (ACA-Py). In another independent effort, Trinsic has contributed an implementation of JSON-LD BBS+ credentials which they have demonstrated to be working in tandem with DIDComm v2, a secure messaging protocol based on DIDs. Implementations such as these help to demonstrate that open standards are transparent, can be understood and verified independently, and can be implemented with separate languages and tech stacks. They also set the groundwork for demonstrating real testing-driven interoperability via mechanisms such as the VC HTTP API and AIP 2.0. We are continuously looking to improve the documentation of these specs and standards so that their implications and nuances can be more broadly understood by builders and developers looking to engage with the technology.

On the cryptographic side of things, progress is also being made in hardening the existing BBS+ specification as well as expanding BBS+ to support more advanced privacy-preserving features. A significant development in this area is the work of cryptographer Michael Lodder who has been actively conducting research on an enhanced credential revocation mechanism using cryptographic accumulators with BBS+. This approach presents a promising alternative to existing solutions that allow authoritative issuers to update the status of issued credentials without compromising the privacy of the credential holder or subject who may be presenting the credential. We see this as another application of BBS+ signatures in the context of verifiable credentials that carries a lot of potential in pushing this technology to an even more robust state.

There was also initial discussion and tacit agreement to create a new cryptography-focused working group at Decentralized Identity Foundation. As the new WG drafts its charter, the first work item of this group will be the BBS+ Signatures spec which defines the cryptographic scheme known as BBS+ agnostic of its application in areas such as linked data signatures or verifiable credentials. In the future, this WG will likely evolve to include other crypto-related work items from the community.

This is just the tip of the iceberg when it comes to the momentum and development building around this technology in the community. We couldn’t be more excited about the future of BBS+ signatures, especially as we gear up to tackle the next set of hard problems in this area including privacy-preserving subject authentication and revocation using cryptographic accumulators. If you’re interested we encourage you to get involved, either by contributing to the Linked Data Proofs specification, checking out our reference implementations, or participating in the new WG at DIF, to name but a few of the many ways to engage with this work. We look forward to doing this retrospective at many IIWs to come, documenting the ever-growing community that continues to champion this technology in dynamic and interesting ways.

IIW32: BBS+ and beyond was originally published in MATTR on Medium, where people are continuing the conversation by highlighting and responding to this story.


Doc Searls Weblog

My podcasts of choice

As a follow-up to what I wrote earlier today, here are my own favorite podcasts, in the order they currently appear in my phone’s podcast apps: Radio Open Source (from itself) Bill Simmons (on The Ringer) Fresh Air (from WHYY via NPR) JJ Reddick & Tommy Alter (from ThreeFourTwo) The Mismatch (on The Ringer) The New […]

As a follow-up to what I wrote earlier today, here are my own favorite podcasts, in the order they currently appear in my phone’s podcast apps:

Radio Open Source (from itself)
Bill Simmons (on The Ringer)
Fresh Air (from WHYY via NPR)
JJ Reddick & Tommy Alter (from ThreeFourTwo)
The Mismatch (on The Ringer)
The New Yorker Radio Hour (WNYC via NPR)
Econtalk (from itself)
On the Media (WNYC)
How I Built This with Guy Raz (from NPR)
The Daily (from New York Times)
Reimagining the Internet (from Ethan Zuckerman and UMass Amherst)
Planet Money (from NPR)
Up First (from NPR)
Here’s the thing (WNYC via NPR)
FLOSS Weekly (TWiT)
Reality2.0 (from itself)

Note that I can’t help listening to the last two, because I host one and co-host the other.

There are others I’ll listen to on occasion as well, usually after hearing bits of them on live radio. These include Radiolab, This American Life, 99% Invisible, Snap Judgement, Freakonomics Radio, Hidden Brain, Invisibilia, The Moth, Studio 360. Plus limited run podcasts, such as Serial, S-Town, Rabbit Hole and Floodlines.

Finally, there are others I intend to listen to at some point, such as Footnoting History, Philosophize This, The Infinite Monkey Cage, Stuff You Should Know, The Memory Palace, and Blind Spot.

And those are just off the top of my head. I’m sure there are others I’m forgetting.

Anyway, most of the time I’d rather listen to those than live radio—even though I am a devoted listener to a raft of public stations (especially KCLU, KPCC, KCRW, KQED, WNYC, WBUR and WGBH) and too many channels to mention on SiriusXM, starting with Howard Stern and the NBA channel.

 

Tuesday, 04. May 2021

Justin Richer

Signing HTTP Messages

There’s a new draft in the HTTP working group that deals with signing HTTP messages of all types. Why is it here, and what does that give us? HTTP is irrefutably a fundamental building block of most of today’s software systems. Yet security and identity need to be layered alongside HTTP. The most common of these is simply running the HTTP protocol over an encrypted socket using TLS, resultin

There’s a new draft in the HTTP working group that deals with signing HTTP messages of all types. Why is it here, and what does that give us?

HTTP is irrefutably a fundamental building block of most of today’s software systems. Yet security and identity need to be layered alongside HTTP. The most common of these is simply running the HTTP protocol over an encrypted socket using TLS, resulting in HTTPS. While this is a powerful and important security component, TLS works only by protecting the stream of bits in transit. It does not allow for message-level and application-level security operations. But what if we could sign the messages themselves?

While it is possible to wrap the body of a request in a cryptographic envelope like JOSE or XML DSig, such approaches force developers to ignore most of the power and flexibility of HTTP, reducing it to a dumb transport layer. In order to sign a message but keep using HTTP as it stands, with all the verbs and headers and content types that it gives us, we will need a scheme that allows us to add a detached signature to the HTTP message. The cryptographic elements of the message can then be generated and validated separately from the request itself, providing a layered approach.

There have been numerous attempts at creating detached signature methods for HTTP over the years, one of the most famous being the Cavage draft, which itself started as a community-facing version of Amazon’s SIGv4 method used within AWS. There were several other efforts, and all of them were incompatible with each other in one way or another. To address this, the HTTP Working Group in the IETF stepped up and took on the effort of creating an RFC-track standard for HTTP message signatures that could be used across the variety of use cases.

As of the writing of this post, the specification is at version 04. While it’s not finished yet, it’s recently become a bit more stable and so it’s worth looking at it in greater depth.

Normalizing HTTP

As it turns out, the hardest part of signing HTTP messages isn’t the signing, it’s the HTTP. HTTP is a messy set of specifications, with pieces that have been built up by many authors over many years in ways that aren’t always that consistent. A recent move towards consistency has been the adoption of Structured Field Values for HTTP. In short, structured fields allow HTTP headers to house simple, non-recursive data structures with unambiguous parsing and deterministic serialization. These aspects made it perfect for use within the HTTP message signatures specification.
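For example, a single dictionary-valued structured field (the header name and members here are made up purely for illustration) can carry booleans, strings, numbers and per-member parameters, each with exactly one valid serialization:

Example-Dict: a=?1, b="hello world", c=42;units="items"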

Previous efforts at HTTP message signing concentrated on creating a signature around HTTP headers, and the current draft is no exception in allowing that. On top of that, the current draft also allows for the definition of specialty fields that contain other pieces of constructed information not found in the headers themselves. These covered components are identified and combined with each other into a signature input string. To this string is added a field that includes all of the input parameters to this signature. For example, let’s say we want to sign parts of this HTTP request:

POST /foo?param=value&pet=dog HTTP/1.1
Host: example.com
Date: Tue, 20 Apr 2021 02:07:55 GMT
Content-Type: application/json
Digest: SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
Content-Length: 18

{"hello": "world"}

We choose the components we want to sign, including the target of the request and a subset of the available headers, and create the following signature input string:

@request-target": post /foo?param=value&pet=dog
"host": example.com
"date": Tue, 20 Apr 2021 02:07:55 GMT
"content-type": application/json
"digest": SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
"content-length": 18
"@signature-params": ("@request-target" "host" "date" "content-type" "digest" "content-length");created=1618884475;keyid="test-key-rsa-pss"

With a given HTTP message and a set of input parameters determining which parts of the message are covered with a signature, any party can re-generate this string with a reasonable level of success. Unsigned headers can be added to the message by intermediaries without invalidating the signature, and it’s even possible for an intermediary to add its own signature to the message on the way through — but we’ll get more into that advanced use case in a future post. The result of this is that the signer and verifier will re-create this signature input string independently of each other.
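To make the construction concrete, here is a minimal Python sketch that rebuilds the example signature input string above. The helper and variable names are illustrative only and not part of the draft; a real implementation would also have to canonicalize header values and handle structured fields properly.

# Minimal sketch: assemble a signature input string from covered components.
# The component names and values mirror the example request above.

def signature_input(components: dict, params: str) -> str:
    """Serialize the covered components plus the @signature-params line."""
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

covered = {
    "@request-target": "post /foo?param=value&pet=dog",
    "host": "example.com",
    "date": "Tue, 20 Apr 2021 02:07:55 GMT",
    "content-type": "application/json",
    "digest": "SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=",
    "content-length": "18",
}
params = ('("@request-target" "host" "date" "content-type" "digest" "content-length")'
          ';created=1618884475;keyid="test-key-rsa-pss"')

print(signature_input(covered, params))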

Now that we have a normalized string to sign, how do we actually sign it?

Signing and Verifying Content

Once we have the string, it’s a relatively straightforward matter of applying a key and signature function to the string. Any signature method that takes in a bunch of bytes and spits out a different set of bytes is technically feasible here.

How do the signer and verifier know which algorithm to use for a given message? It turns out that different deployments have drastically different needs in this regard. As a consequence, this is an aspect that the specification leaves application-specific, with several common methods called out; a short sketch of the matching rule follows the list:

The signer and verifier can both be configured to expect only a specific algorithm, or have that algorithm identified by some aspect external to the protocol.

The signer and verifier can identify the key used to do the signing and figure out the signature algorithm based on that. If an application’s using JSON Web Keys, the alg field of the key provides an easy way to identify a signing mechanism.

If the signer and verifier need to signal the algorithm dynamically at runtime, there is an alg field in the signature parameter set itself that points to a new registry.

And if two or more of these methods are applicable to a given message, the answers all have to match, otherwise something fishy is going on and the signature is invalidated.
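That last rule, that every available hint must agree, can be captured in a few lines. This is only a sketch of the check described above; the argument names and how an application discovers each hint are assumptions.

# Sketch of the algorithm-agreement rule: every available hint must name the
# same algorithm, otherwise the signature is rejected.
def resolve_algorithm(configured=None, key_alg=None, signalled=None) -> str:
    candidates = {a for a in (configured, key_alg, signalled) if a is not None}
    if not candidates:
        raise ValueError("no way to determine the signature algorithm")
    if len(candidates) > 1:
        raise ValueError(f"conflicting algorithm hints: {candidates}")
    return candidates.pop()

# For example, resolve_algorithm(key_alg="PS512", signalled="PS512") returns "PS512",
# while resolve_algorithm(key_alg="PS512", signalled="ES256") raises an error.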

Given the above signature input string and an RSA-PSS signing method, we end up with the following Base64-encoded bytes as the signature output:

NtIKWuXjr4SBEXj97gbick4O95ff378I0CZOa2VnIeEXZ1itzAdqTpSvG91XYrq5CfxCmk8zz1Zg7ZGYD+ngJyVn805r73rh2eFCPO+ZXDs45Is/Ex8srzGC9sfVZfqeEfApRFFe5yXDmANVUwzFWCEnGM6+SJVmWl1/jyEn45qA6Hw+ZDHbrbp6qvD4N0S92jlPyVVEh/SmCwnkeNiBgnbt+E0K5wCFNHPbo4X1Tj406W+bTtnKzaoKxBWKW8aIQ7rg92zqE1oqBRjqtRi5/Q6P5ZYYGGINKzNyV3UjZtxeZNnNJ+MAnWS0mofFqcZHVgSU/1wUzP7MhzOKLca1Yg==
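Continuing the Python sketch above, the signing step itself might look something like the following, using the cryptography library. The key file name is an assumption, and RSA-PSS with SHA-512 is simply the variant associated with the draft’s test key; use whatever algorithm your application has agreed on.

# Sign the signature input string from the previous sketch with RSA-PSS.
import base64
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("test-key-rsa-pss.pem", "rb") as f:  # hypothetical key file
    private_key = serialization.load_pem_private_key(f.read(), password=None)

signature_base = signature_input(covered, params)  # string built in the previous sketch

signature = private_key.sign(
    signature_base.encode("ascii"),
    padding.PSS(mgf=padding.MGF1(hashes.SHA512()), salt_length=64),  # salt length = digest length
    hashes.SHA512(),
)
print(base64.b64encode(signature).decode("ascii"))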

This gives us a signed object, and now we need to put that into our HTTP message.

Sending Signatures in Messages

The HTTP message signature specification defines two new headers to carry the signature, Signature and Signature-Input. Both of these use the Dictionary construct from the HTTP Structured Field Values standard to carry a named signature.

But first, why two headers? This construct allows us to easily separate the metadata about the signature — how it was made — from the signature value itself. This separation makes parsing simpler and also allows the HTTP message signatures specification to support multiple independent signatures on a given message.

The Signature-Input header contains all the parameters that went into the creation of the signature, including the list of covered content, identifiers for the key and algorithm, and items like timestamps or other application-specific flags. In fact, this is the same value used as the last line of the signature input string, and so its values are always covered by the signature. The Signature header contains the value of the signature itself as a byte array, encoded in Base64. The signer chooses a name for the signature object and adds both items to the headers. The name has no semantic impact; it just needs to be unique within a given request.

Let’s say this signature is named sig1. The signer adds both headers to the request above, resulting in the following signed request.

POST /foo?param=value&pet=dog HTTP/1.1
Host: example.com
Date: Tue, 20 Apr 2021 02:07:55 GMT
Content-Type: application/json
Digest: SHA-256=X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=
Content-Length: 18
Signature-Input: sig1=("@request-target" "host" "date" "content-type" "digest" "content-length");created=1618884475;keyid="test-key-rsa-pss"
Signature: sig1=:NtIKWuXjr4SBEXj97gbick4O95ff378I0CZOa2VnIeEXZ1itzAdqTpSvG91XYrq5CfxCmk8zz1Zg7ZGYD+ngJyVn805r73rh2eFCPO+ZXDs45Is/Ex8srzGC9sfVZfqeEfApRFFe5yXDmANVUwzFWCEnGM6+SJVmWl1/jyEn45qA6Hw+ZDHbrbp6qvD4N0S92jlPyVVEh/SmCwnkeNiBgnbt+E0K5wCFNHPbo4X1Tj406W+bTtnKzaoKxBWKW8aIQ7rg92zqE1oqBRjqtRi5/Q6P5ZYYGGINKzNyV3UjZtxeZNnNJ+MAnWS0mofFqcZHVgSU/1wUzP7MhzOKLca1Yg==:

{"hello": "world"}

Note that none of the other headers or aspects of the message are modified by the signature process.

The verifier parses both headers, re-creates the signature input string from the request, and verifies the signature value using the identified key and algorithm. But how does the verifier know that this signature is sufficient for this request, and how does the signer know what to sign in the first place?
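Before getting to those questions, the mechanical part of verification can be sketched as follows, continuing the Python examples above. The public key file is an assumption, and the padding and hash parameters must mirror whatever the signer actually used.

# Verify a received signature: rebuild the signature input string from the
# request, decode the Base64 signature value, and check it with the public key.
import base64
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_signature(signature_base: str, signature_b64: str, public_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(public_pem)
    try:
        public_key.verify(
            base64.b64decode(signature_b64),
            signature_base.encode("ascii"),
            padding.PSS(mgf=padding.MGF1(hashes.SHA512()), salt_length=64),
            hashes.SHA512(),
        )
        return True
    except InvalidSignature:
        return False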

Applying the Message Signature Specification

As discussed above, the signer and verifier need to have a way of figuring out which algorithm and keys are appropriate for a given signed message. In many deployments, this information can be gleaned through context and configuration. For example, a key derivation algorithm based on the tenant identifier in the URL can be used to dereference the key needed for a given call. Or an application identifier passed in the body could point to a record giving both the expected algorithm and allowable key material.

In addition to defining a predictable way to determine this, an application of the HTTP message signatures specification also needs to define which parts of the message need to be signed. For example, an API might have very different behaviors based on a Content-Type header but not really care about the Content-Encoding. A security protocol like OAuth or GNAP would require signing the Authorization header that contains the access token as well as the @request-target specialty field.

The HTTP protocol is also designed to allow interception and proxy of requests and responses, with intermediaries fully allowed to alter the message in certain ways. Applications that need to account for such intermediaries can be picky about which headers and components are signed, allowing the signature to survive expected message modifications but protecting against unanticipated changes in transit.

This fundamentally means that no signature method will ever be perfect for all messages — but that’s ok. The HTTP message signature draft instead leans on flexibility, allowing applications to define how best to apply the signature methods to achieve the security needed.

Building the Standard

The HTTP message signatures specification is still a long way from being done. It’s taken in a number of different inputs and many years of collective community experience, and that initially resulted in some major churn in the specification’s syntax and structure. As of version 04 though, the major surgery seems to be behind us. While there will inevitably be some changes to the parameters, names, and possibly even structures, the core of this is pretty solid. It’s time to start implementing it and testing it out with applications of all stripes, and I invite all of you to join me in doing just that.


Doc Searls Weblog

A half-century of NPR

NPR, which turned 50 yesterday, used to mean National Public Radio. It still does, at least legally; but they quit calling it that in 2010. The reason given was “…most of our audience — more than 27 million listeners to NPR member stations and millions more who experience our content on NPR.org and through mobile or […]

NPR, which turned 50 yesterday, used to mean National Public Radio. It still does, at least legally; but they quit calling it that in 2010. The reason given was “…most of our audience — more than 27 million listeners to NPR member stations and millions more who experience our content on NPR.org and through mobile or tablet devices — identify us as NPR.” Translation: We’re not just radio any more.

And they aren’t. Television, newspapers and magazines also aren’t what they were. All of those are now experienced mostly on glowing rectangles connected to the Internet.

Put another way, the Internet is assimilating all of them. On the Internet, radio is also fracturing into new and largely (though not entirely) different spawn. The main two are streaming (for music, live news and events) and podcasting (for talk and news).

This sidelines the radio sources called stations. Think about it: how much these days do you ask yourself “What’s on?” And how much do you listen to an actual radio, or watch TV through an antenna? Do you even have a radio that’s not in a car or stored away in the garage?

If you value and manage your time, chances are you are using apps to store and forward your listening and viewing to later times, when you can easily speed up the program or skip over ads and other “content” you don’t want to “consume.” (I put those in quotes because only the supply side talks that way about what they produce and what you do with it.)

This does not match the legacy structure of radio stations. Especially technically.

See, the purpose of stations is to stand in one place radiating sound (called “programs”) on signals, in real time (called “live”), around the clock, for a limited geography: a city or a region. Key thing: they have to fill that time.

For this stations can get along without studios (like companies in our current plague have found ways to get along without offices). But they still need to maintain transmitters with antennas.

For AM, which was born in the 1920s, the waves are so long that whole towers, or collections of them, radiate the signals. In almost all cases these facilities take up acres of real estate—sometimes dozens of acres. For FM and TV, media born in the 1940s, the waves are short, but need to radiate from high places: atop towers, tall buildings or mountains.

Maintaining these facilities isn’t cheap. In the case of AM stations, it is now common for the land under towers to be worth far more than the stations themselves, which is why so many AM stations are now going off the air or moving off to share other stations’ facilities, usually at the cost of lost coverage.

This is why I am sure that most or all of these facilities will be as gone as horse-drawn carriages and steam engines, sometime in the next few years or decades. Also why I am documenting transmitters that still stand, photographically. You can see a collection of my transmitter and antenna photos here and here. (The image above is what radiates KPCC/89.3 from Mt. Wilson, which overlooks Los Angeles.)

It’s a safe bet, for a few more years at least, that stations will still be around, transmitting to people mostly on the Net. But at some point (probably many points) the transmitters will be gone, simply because they cost too much, don’t do enough—and in one significant way, do too much. Namely, fill the clock, 24/7, with “content.”

To help get our heads around this, consider this: the word station derives from the Latin station- and statio from stare, which means to stand. In a place.

In the terrestrial world, we needed stationary places for carriages, trains and busses to stop. On radio, we used to need what we called a “dial,” where radio stations could be found on stationary positions called channels or frequencies. Now those are numbers that appear in a read-out.

But even those were being obsolesced decades ago in Europe. There a car radio might say the name of a station, which might be received on any number of frequencies, transmitted by many facilities, spread across a region or a country. What makes this possible is a standard called RDS, which uses a function called alternative frequency (AF) to make a radio play a given station on whatever channel sounds best to the radio. This would be ideal for the CBC in Canada and for regional public stations such as WAMC, KPCC, KUER and KCRW, which all have many transmitters scattered around.

Alas, when this standard was being worked out in the ’80s and early ’90s, the North American folks couldn’t imagine one station on many frequencies and in many locations, so they deployed a lesser derivative standard called RBDS, which lacked the AF function.

But this is now, and on its 50th anniversary public radio—and NPR stations especially—are doing well.

In radio ratings for New York, Los Angeles, San Francisco, Washington, San Diego, and dozens of other markets, the top news station is an NPR one. Here in Santa Barbara, about a quarter of all listening goes to non-commercial stations, led by KCLU, the most local of the NPR affiliates with transmitters here. (Best I can tell, Santa Barbara, which I wrote about here in 2019, is still the top market for public radio in the country. Number two is still Vermont.)

But I gotta wonder how long the station-based status quo will remain stationary in our listening habits. To the degree that I’m a one-person bellwether, the prospects aren’t good. Nearly all my listening these days is to podcasts or to streams on the Net. Some of those are from stations, but most are straight from producers, only one of which is NPR. And I listen to few of them live.

Still, it’s a good bet that NPR will do well for decades to come. Its main challenge will be to survive the end of station-based live broadcasting. Because that eventuality is starting to become visible.


MyDigitalFootprint

In leadership, why is recognising paradox critically important?

Source: Wendy Smith  https://www.learninginnovationslab.org/guest-faculty/ The importance of creating or seeing a paradox is that you can understand that the data and facts being presented to you can lead to the recommendation or conclusion being offered, but equally that the same data and facts can equally lead to a different conclusion.   Our problem is that we are not very

Source: Wendy Smith  https://www.learninginnovationslab.org/guest-faculty/


The importance of creating or seeing a paradox is that you can understand that the data and facts being presented to you can lead to the recommendation or conclusion being offered, but equally that the same data and facts can equally lead to a different conclusion.  

Our problem is that we are not very good at finding flaws in our own arguments, if for no other reason than they support our incentives and beliefs. We tend to take it personally when someone attacks our logic, beliefs or method, even if they are searching for the paradox. Equally, the person you are about to question reacts just like you do.  

Searching for the paradox allows you to see the jumps, assumptions and framing in the logic being presented, which lays bare how our thinking and decisions are being manipulated.  Often, it turns out, others are blinded into seeing only one conclusion, and as a leader and executive, your role is to explore and question the flow.  

Logical flow decisions often create paradoxes because of an invalid argument, but they are nevertheless valuable in creating a narrative. We see this as a statement or proposition which, despite sound (or apparently sound) reasoning from acceptable premises, leads to a conclusion that seems logically unacceptable or self-contradictory.  Finding a paradox in non-logical flow decisions reveals errors in definitions that were assumed to be rigorous. Equally, a paradox can be seen in a seemingly absurd or contradictory statement or proposition which, when investigated, proves to be well-founded or true.   What is evident is the need for critical thinking, questions and sensitivity. 

Just because an idea pops into your head during a presentation doesn’t mean it’s true or reasonable. Idea bias is a new skinny belief you have just created, leading you to poor decision making because you have been framed. Often in a meeting, the framing is such that the presenter has set up a story or analogy which you believe in and fail to question (idea bias), as a way to make the logic jumps needed to justify a story.  If you cannot see the paradox, you are in their model, which means you are unlikely to make an unbiased decision.  If you can see the paradox, you have mastered critical thinking and can use tools to ensure you make decisions that lead to outcomes that you want. 

 If you cannot see the paradox, you are in a model.


Decision-making framing itself creates paradoxes for us 

Prevention paradox: For one person to benefit, many people have to change their behaviour — even though they receive no benefit or even suffer, from the change.  An assumption about the adoption of a product.

Decision-making paradox: Picking “the best decision-making method” is a decision problem in itself. Can the tool pick the best tool?  What has your process already framed as a decision method?

Abilene paradox: Making a decision based on what you think others want to do and not on what they actually want to do.  Everybody decides to do something that nobody really wants to do, but only what they thought everybody else wanted to do.  Do we have the agency to make an individual choice in the setting we have?

Inventor’s paradox: It is easier to solve a more general problem that covers the specifics of the sought-after solution.  Have we actually solved the problem?

Willpower paradox: Those who kept their minds open were more goal-directed and more motivated than those who declared their objective to themselves.

Buridan’s ass: Making a rational choice between two outcomes of equal value creates the longest delay in decision making (thanks, Yael).  Better known as Fredkin’s paradox: The more similar two choices are, the more time a decision-making agent spends on deciding.

Navigation paradox: Increased navigational precision may result in increased collision risk.

The paradox of tolerance: Should one tolerate intolerance if intolerance would destroy the possibility of tolerance?

Rule-following paradox: Even though rules are intended to determine actions, “no course of action could be determined by a rule because any course of action can be made out, to accord with the rule.”


A growing list of paradoxes that can help develop critical thinking can be found here.  I am exploring Paradox as I expand on my thinking at www.peakparadox.com 



Voidstar: blog

Dead Lies Dreaming (Laundry Files, 10) by Charles Stross

[from: Librarything]

[from: Librarything]

Harrow the Ninth (The Locked Tomb Trilogy, 2) by Tamsyn Muir

[from: Librarything]

[from: Librarything]

False Value (Rivers of London) by Ben Aaronovitch

[from: Librarything]

[from: Librarything]

Attack Surface by Cory Doctorow

[from: Librarything]

[from: Librarything]

Robot Artists & Black Swans: The Italian Fantascienza Stories by Bruce Sterling

[from: Librarything]

[from: Librarything]

What Abigail Did That Summer by Ben Aaronovitch

[from: Librarything]

[from: Librarything]

Monday, 03. May 2021

Damien Bod

Create an OIDC credential Issuer with MATTR and ASP.NET Core

This article shows how to create and issue verifiable credentials using MATTR and an ASP.NET Core. The ASP.NET Core application allows an admin user to create an OIDC credential issuer using the MATTR service. The credentials are displayed in an ASP.NET Core Razor Page web UI as a QR code for the users of the […]

This article shows how to create and issue verifiable credentials using MATTR and an ASP.NET Core application. The ASP.NET Core application allows an admin user to create an OIDC credential issuer using the MATTR service. The credentials are displayed in an ASP.NET Core Razor Page web UI as a QR code for the users of the application. The user can use a digital wallet from MATTR to scan the QR code, authenticate against an Auth0 identity provider configured for this flow and use the claims from the id token to add the verified credential to the digital wallet. In a follow-up post, a second application will then use the verified credentials to allow access to a second business process.

Code: https://github.com/swiss-ssi-group/MattrGlobalAspNetCore

Blogs in the series

Getting started with Self Sovereign Identity SSI
Create an OIDC credential Issuer with MATTR and ASP.NET Core
Present and Verify Verifiable Credentials in ASP.NET Core using Decentralized Identities and MATTR
Verify vaccination data using Zero Knowledge Proofs with ASP.NET Core and MATTR

Setup

The solution involves a MATTR API which handles all the blockchain identity logic. An ASP.NET Core application is used to create the digital identity and the OIDC credential issuer using the MATTR APIs and also present this as a QR code which can be scanned. An identity provider is required to add the credential properties to the id token. The properties in a verified credential are issued using the claims values from the id token, so a specific identity provider is required for every credential issuer using this technique. Part of the business of this solution is adding business claims to the identity provider. A MATTR digital wallet is required to scan the QR code, authenticate against the OIDC provider, which in our case is Auth0, and then store the verified credentials to the wallet for later use.

MATTR Setup

You need to register with MATTR and create a new account. MATTR will issue you access to your sandbox domain and you will get access data from them plus a link to support.

Once set up, use the OIDC Bridge tutorial to implement the flow used in this demo. The docs are really good but you need to follow them exactly.

https://learn.mattr.global/tutorials/issue/oidc-bridge/issue-oidc

Auth0 Setup

A standard trusted web application which supports the code flow is required so that the MATTR digital wallet can authenticate using the identity provider and use the id token values from the claims which are required in the credential. It is important to create a new application which is only used for this because the client secret is required when creating the OIDC credential issuer and is shared with the MATTR platform. It would probably be better to use certificates instead of a shared secret which is persisted in different databases. We also use a second Auth0 application configuration to sign into the web application but this is not required to issue credentials.

In Auth0, rules are used to extend the id token claims. You need to add your claims as required by the MATTR API and your business logic for the credentials you wish to issue.

function (user, context, callback) {
  const namespace = 'https://--your-tenant--.vii.mattr.global/';
  context.idToken[namespace + 'license_issued_at'] = user.user_metadata.license_issued_at;
  context.idToken[namespace + 'license_type'] = user.user_metadata.license_type;
  context.idToken[namespace + 'name'] = user.user_metadata.name;
  context.idToken[namespace + 'first_name'] = user.user_metadata.first_name;
  context.idToken[namespace + 'date_of_birth'] = user.user_metadata.date_of_birth;
  callback(null, user, context);
}

For every user (holder) who should be able to create verifiable credentials, you must add the credential data to the user profile. This is part of the business process with this flow. If you were to implement this for a real application with lots of users, it would probably be better to integrate the identity provider into the solution issuing the credentials and add a UI for editing the user profile data which is used in the credentials. This would be really easy using ASP.NET Core Identity and for example OpenIddict or IdentityServer4. It is important that the user cannot edit this data. This logic is part of the credential issuer logic and not part of the user profile.

After creating a new MATTR OIDC credential issuer, the callback URL needs to be added to the Open ID connect code flow client used for the digital wallet sign in.

Add the URL to the Allowed Callback URLs in the settings of your Auth0 application configuration for the digital wallet.

Implementing the OpenID Connect credentials Issuer application

The ASP.NET Core application is used to create new OIDC credential issuers and also display the QR code for these so that the verifiable credential can be loaded to the digital wallet. The application requires secrets. The data is stored to a database, so that any credential can be added to a wallet at a later date and also so that you can find the credentials you created. The MattrConfiguration is the data and the secrets you received from MATTR for your account access to the API. The Auth0 configuration is the data required to sign in to the application. The Auth0Wallet configuration is the data required to create the OIDC credential issuer so that the digital wallet can authenticate the identity using the Auth0 application. This data is stored in the user secrets during development.

{
  // use user secrets
  "ConnectionStrings": {
    "DefaultConnection": "--your-connection-string--"
  },
  "MattrConfiguration": {
    "Audience": "https://vii.mattr.global",
    "ClientId": "--your-client-id--",
    "ClientSecret": "--your-client-secret--",
    "TenantId": "--your-tenant--",
    "TenantSubdomain": "--your-tenant-sub-domain--",
    "Url": "http://mattr-prod.au.auth0.com/oauth/token"
  },
  "Auth0": {
    "Domain": "--your-auth0-domain",
    "ClientId": "--your--auth0-client-id--",
    "ClientSecret": "--your-auth0-client-secret--"
  },
  "Auth0Wallet": {
    "Domain": "--your-auth0-wallet-domain",
    "ClientId": "--your--auth0-wallet-client-id--",
    "ClientSecret": "--your-auth0-wallet-client-secret--"
  }
}

Accessing the MATTR APIs

The MattrConfiguration DTO is used to fetch the MATTR account data for the API access and to use in the application.

public class MattrConfiguration
{
    public string Audience { get; set; }
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
    public string TenantId { get; set; }
    public string TenantSubdomain { get; set; }
    public string Url { get; set; }
}

The MattrTokenApiService is used to acquire an access token, which is used for the MATTR API access. The token is stored to a cache and only fetched again if the old one has expired or is not available.

public class MattrTokenApiService
{
    private readonly ILogger<MattrTokenApiService> _logger;
    private readonly MattrConfiguration _mattrConfiguration;

    private static readonly Object _lock = new Object();
    private IDistributedCache _cache;
    private const int cacheExpirationInDays = 1;

    private class AccessTokenResult
    {
        public string AcessToken { get; set; } = string.Empty;
        public DateTime ExpiresIn { get; set; }
    }

    private class AccessTokenItem
    {
        public string access_token { get; set; } = string.Empty;
        public int expires_in { get; set; }
        public string token_type { get; set; }
        public string scope { get; set; }
    }

    private class MattrCrendentials
    {
        public string audience { get; set; }
        public string client_id { get; set; }
        public string client_secret { get; set; }
        public string grant_type { get; set; } = "client_credentials";
    }

    public MattrTokenApiService(
        IOptions<MattrConfiguration> mattrConfiguration,
        IHttpClientFactory httpClientFactory,
        ILoggerFactory loggerFactory,
        IDistributedCache cache)
    {
        _mattrConfiguration = mattrConfiguration.Value;
        _logger = loggerFactory.CreateLogger<MattrTokenApiService>();
        _cache = cache;
    }

    public async Task<string> GetApiToken(HttpClient client, string api_name)
    {
        var accessToken = GetFromCache(api_name);

        if (accessToken != null)
        {
            if (accessToken.ExpiresIn > DateTime.UtcNow)
            {
                return accessToken.AcessToken;
            }
            else
            {
                // remove => NOT Needed for this cache type
            }
        }

        _logger.LogDebug($"GetApiToken new from oauth server for {api_name}");

        // add
        var newAccessToken = await GetApiTokenClient(client);
        AddToCache(api_name, newAccessToken);

        return newAccessToken.AcessToken;
    }

    private async Task<AccessTokenResult> GetApiTokenClient(HttpClient client)
    {
        try
        {
            var payload = new MattrCrendentials
            {
                client_id = _mattrConfiguration.ClientId,
                client_secret = _mattrConfiguration.ClientSecret,
                audience = _mattrConfiguration.Audience
            };

            var authUrl = "https://auth.mattr.global/oauth/token";
            var tokenResponse = await client.PostAsJsonAsync(authUrl, payload);

            if (tokenResponse.StatusCode == System.Net.HttpStatusCode.OK)
            {
                var result = await tokenResponse.Content.ReadFromJsonAsync<AccessTokenItem>();
                DateTime expirationTime = DateTimeOffset.FromUnixTimeSeconds(result.expires_in).DateTime;

                return new AccessTokenResult
                {
                    AcessToken = result.access_token,
                    ExpiresIn = expirationTime
                };
            }

            _logger.LogError($"tokenResponse.IsError Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}");
            throw new ApplicationException($"Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}");
        }
        catch (Exception e)
        {
            _logger.LogError($"Exception {e}");
            throw new ApplicationException($"Exception {e}");
        }
    }

    private void AddToCache(string key, AccessTokenResult accessTokenItem)
    {
        var options = new DistributedCacheEntryOptions().SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

        lock (_lock)
        {
            _cache.SetString(key, JsonConvert.SerializeObject(accessTokenItem), options);
        }
    }

    private AccessTokenResult GetFromCache(string key)
    {
        var item = _cache.GetString(key);
        if (item != null)
        {
            return JsonConvert.DeserializeObject<AccessTokenResult>(item);
        }

        return null;
    }
}

Generating the API DTOs using Nswag

The MattrOpenApiClientSevice file was generated using Nswag and the Open API file provided by MATTR here. We only generated the DTOs from this and then access the API using an HttpClient instance. The Open API file used in this solution is deployed in the git repo.

Creating the OIDC credential issuer

The MattrCredentialsService is used to create an OIDC credentials issuer using the MATTR APIs. This is implemented using the CreateCredentialsAndCallback method. The created callback is returned so that it can be displayed in the UI and copied to the specific Auth0 application configuration.

private readonly IConfiguration _configuration;
private readonly DriverLicenseCredentialsService _driverLicenseService;
private readonly IHttpClientFactory _clientFactory;
private readonly MattrTokenApiService _mattrTokenApiService;
private readonly MattrConfiguration _mattrConfiguration;

public MattrCredentialsService(IConfiguration configuration,
    DriverLicenseCredentialsService driverLicenseService,
    IHttpClientFactory clientFactory,
    IOptions<MattrConfiguration> mattrConfiguration,
    MattrTokenApiService mattrTokenApiService)
{
    _configuration = configuration;
    _driverLicenseService = driverLicenseService;
    _clientFactory = clientFactory;
    _mattrTokenApiService = mattrTokenApiService;
    _mattrConfiguration = mattrConfiguration.Value;
}

public async Task<string> CreateCredentialsAndCallback(string name)
{
    // create a new one
    var driverLicenseCredentials = await CreateMattrDidAndCredentialIssuer();
    driverLicenseCredentials.Name = name;
    await _driverLicenseService.CreateDriverLicense(driverLicenseCredentials);

    var callback = $"https://{_mattrConfiguration.TenantSubdomain}/ext/oidc/v1/issuers/{driverLicenseCredentials.OidcIssuerId}/federated/callback";
    return callback;
}

The CreateMattrDidAndCredentialIssuer method implements the different steps described in the MATTR API documentation for this. An access token for the MATTR API is created or retrieved from the cache, a DID is created, and the id from the DID post response is used to create the OIDC credential issuer. The DriverLicenseCredentials object is returned, which is persisted to a database, and the callback is created using this object.

private async Task<DriverLicenseCredentials> CreateMattrDidAndCredentialIssuer()
{
    HttpClient client = _clientFactory.CreateClient();
    var accessToken = await _mattrTokenApiService
        .GetApiToken(client, "mattrAccessToken");

    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);
    client.DefaultRequestHeaders
        .TryAddWithoutValidation("Content-Type", "application/json");

    var did = await CreateMattrDid(client);
    var oidcIssuer = await CreateMattrCredentialIssuer(client, did);

    return new DriverLicenseCredentials
    {
        Name = "not_named",
        Did = JsonConvert.SerializeObject(did),
        OidcIssuer = JsonConvert.SerializeObject(oidcIssuer),
        OidcIssuerId = oidcIssuer.Id
    };
}

The CreateMattrDid method creates a new DID as specified by the API. The MattrOptions class is used to create the request object. This is serialized using the StringContentWithoutCharset class due to a bug in the MATTR API validation. I created this class using the blog from Gunnar Peipman.

private async Task<V1_CreateDidResponse> CreateMattrDid(HttpClient client)
{
    // create did, post to dids
    // https://learn.mattr.global/api-ref/#operation/createDid
    // https://learn.mattr.global/tutorials/dids/use-did/
    var createDidUrl = $"https://{_mattrConfiguration.TenantSubdomain}/core/v1/dids";

    var payload = new MattrOpenApiClient.V1_CreateDidDocument
    {
        Method = MattrOpenApiClient.V1_CreateDidDocumentMethod.Key,
        Options = new MattrOptions()
    };

    var payloadJson = JsonConvert.SerializeObject(payload);
    var uri = new Uri(createDidUrl);

    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var createDidResponse = await client.PostAsync(uri, content);

        if (createDidResponse.StatusCode == System.Net.HttpStatusCode.Created)
        {
            var v1CreateDidResponse = JsonConvert.DeserializeObject<V1_CreateDidResponse>(
                await createDidResponse.Content.ReadAsStringAsync());

            return v1CreateDidResponse;
        }

        var error = await createDidResponse.Content.ReadAsStringAsync();
    }

    return null;
}

The MattrOptions DTO is used to create a default DID using the key type “ed25519”. See the MATTR API docs for further details.

public class MattrOptions
{
    /// <summary>
    /// The supported key types for the DIDs are ed25519 and bls12381g2.
    /// If the keyType is omitted, the default key type that will be used is ed25519.
    ///
    /// If the keyType in options is set to bls12381g2 a DID will be created with
    /// a BLS key type which supports BBS+ signatures for issuing ZKP-enabled credentials.
    /// </summary>
    public string keyType { get; set; } = "ed25519";
}

The CreateMattrCredentialIssuer method creates the post request for the OIDC credential issuer. The request properties need to be set up for your credential properties and must match claims from the id token of the Auth0 user profile. This is where the OIDC client for the digital wallet is set up and also where the credential claims are specified. If this is set up incorrectly, loading the data into your wallet will fail. The HTTP request and the response DTOs are implemented using the Nswag generated classes.

private async Task<V1_CreateOidcIssuerResponse> CreateMattrCredentialIssuer(
    HttpClient client, V1_CreateDidResponse did)
{
    // create vc, post to credentials api
    // https://learn.mattr.global/tutorials/issue/oidc-bridge/setup-issuer
    var createCredentialsUrl = $"https://{_mattrConfiguration.TenantSubdomain}/ext/oidc/v1/issuers";

    var payload = new MattrOpenApiClient.V1_CreateOidcIssuerRequest
    {
        Credential = new Credential
        {
            IssuerDid = did.Did,
            Name = "NationalDrivingLicense",
            Context = new List<Uri>
            {
                new Uri("https://schema.org") // Only this is supported
            },
            Type = new List<string> { "nationaldrivinglicense" }
        },
        ClaimMappings = new List<ClaimMappings>
        {
            new ClaimMappings { JsonLdTerm = "name", OidcClaim = $"https://{_mattrConfiguration.TenantSubdomain}/name" },
            new ClaimMappings { JsonLdTerm = "firstName", OidcClaim = $"https://{_mattrConfiguration.TenantSubdomain}/first_name" },
            new ClaimMappings { JsonLdTerm = "licenseType", OidcClaim = $"https://{_mattrConfiguration.TenantSubdomain}/license_type" },
            new ClaimMappings { JsonLdTerm = "dateOfBirth", OidcClaim = $"https://{_mattrConfiguration.TenantSubdomain}/date_of_birth" },
            new ClaimMappings { JsonLdTerm = "licenseIssuedAt", OidcClaim = $"https://{_mattrConfiguration.TenantSubdomain}/license_issued_at" }
        },
        FederatedProvider = new FederatedProvider
        {
            ClientId = _configuration["Auth0Wallet:ClientId"],
            ClientSecret = _configuration["Auth0Wallet:ClientSecret"],
            Url = new Uri($"https://{_configuration["Auth0Wallet:Domain"]}"),
            Scope = new List<string> { "openid", "profile", "email" }
        }
    };

    var payloadJson = JsonConvert.SerializeObject(payload);
    var uri = new Uri(createCredentialsUrl);

    using (var content = new StringContentWithoutCharset(payloadJson, "application/json"))
    {
        var createOidcIssuerResponse = await client.PostAsync(uri, content);

        if (createOidcIssuerResponse.StatusCode == System.Net.HttpStatusCode.Created)
        {
            var v1CreateOidcIssuerResponse = JsonConvert.DeserializeObject<V1_CreateOidcIssuerResponse>(
                await createOidcIssuerResponse.Content.ReadAsStringAsync());

            return v1CreateOidcIssuerResponse;
        }

        var error = await createOidcIssuerResponse.Content.ReadAsStringAsync();
    }

    throw new Exception("whoops something went wrong");
}

Now the service is completely ready to generate credentials. This can be used in any Blazor UI, Razor page or MVC view in ASP.NET Core. The services are added to the DI in the startup class. The callback URL is displayed in the UI if the application successfully creates a new OIDC credential issuer.

public class AdminModel : PageModel
{
    private readonly MattrCredentialsService _mattrCredentialsService;

    public bool CreatingDriverLicense { get; set; } = true;
    public string Callback { get; set; }

    [BindProperty]
    public IssuerCredential IssuerCredential { get; set; }

    public AdminModel(MattrCredentialsService mattrCredentialsService)
    {
        _mattrCredentialsService = mattrCredentialsService;
    }

    public void OnGet()
    {
        IssuerCredential = new IssuerCredential();
    }

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        Callback = await _mattrCredentialsService
            .CreateCredentialsAndCallback(IssuerCredential.CredentialName);

        CreatingDriverLicense = false;
        return Page();
    }
}

public class IssuerCredential
{
    [Required]
    public string CredentialName { get; set; }
}

Adding credentials to your wallet

After the callback URL has been added to the Auth0 callback URLs, the credentials can be used to add verifiable credentials to your wallet. This is fairly simple. The Razor Page uses the data from the database and generates a URL using the MATTR specification and the id from the created OIDC credential issuer. The claims from the id token or the profile data are just used to display the data for the user signed into the web application. This is not the same data which is used by the digital wallet. If the same person logs into the digital wallet, then the data is the same. The wallet authenticates the identity separately.

public class DriverLicenseCredentialsModel : PageModel
{
    private readonly DriverLicenseCredentialsService _driverLicenseCredentialsService;
    private readonly MattrConfiguration _mattrConfiguration;

    public string DriverLicenseMessage { get; set; } = "Loading credentials";
    public bool HasDriverLicense { get; set; } = false;
    public DriverLicense DriverLicense { get; set; }
    public string CredentialOfferUrl { get; set; }

    public DriverLicenseCredentialsModel(DriverLicenseCredentialsService driverLicenseCredentialsService,
        IOptions<MattrConfiguration> mattrConfiguration)
    {
        _driverLicenseCredentialsService = driverLicenseCredentialsService;
        _mattrConfiguration = mattrConfiguration.Value;
    }

    public async Task OnGetAsync()
    {
        //"license_issued_at": "2021-03-02",
        //"license_type": "B1",
        //"name": "Bob",
        //"first_name": "Lammy",
        //"date_of_birth": "1953-07-21"

        var identityHasDriverLicenseClaims = true;
        var nameClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/name");
        var firstNameClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/first_name");
        var licenseTypeClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/license_type");
        var dateOfBirthClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/date_of_birth");
        var licenseIssuedAtClaim = User.Claims.FirstOrDefault(t => t.Type == $"https://{_mattrConfiguration.TenantSubdomain}/license_issued_at");

        if (nameClaim == null
            || firstNameClaim == null
            || licenseTypeClaim == null
            || dateOfBirthClaim == null
            || licenseIssuedAtClaim == null)
        {
            identityHasDriverLicenseClaims = false;
        }

        if (identityHasDriverLicenseClaims)
        {
            DriverLicense = new DriverLicense
            {
                Name = nameClaim.Value,
                FirstName = firstNameClaim.Value,
                LicenseType = licenseTypeClaim.Value,
                DateOfBirth = dateOfBirthClaim.Value,
                IssuedAt = licenseIssuedAtClaim.Value,
                UserName = User.Identity.Name
            };

            // get per name
            //var offerUrl = await _driverLicenseCredentialsService.GetDriverLicenseCredentialIssuerUrl("ndlseven");

            // get the last one
            var offerUrl = await _driverLicenseCredentialsService.GetLastDriverLicenseCredentialIssuerUrl();

            DriverLicenseMessage = "Add your driver license credentials to your wallet";
            CredentialOfferUrl = offerUrl;
            HasDriverLicense = true;
        }
        else
        {
            DriverLicenseMessage = "You have no valid driver license";
        }
    }
}

The data is displayed using Bootstrap. If you use a MATTR wallet to scan the QR Code shown underneath, you will be redirected to authenticate against the specified Auth0 application. If you have the claims, you can add the verifiable claims to your digital wallet.

Notes

The MATTR API has some problems and stricter validation would help a lot. But MATTR support is awesome, the team are really helpful, and you will end up with a working solution. It would also be awesome if the Open API file could be used without changes to generate a client and the DTOs. It would make sense if you could issue credential data from the data in the credential issuer application and not from the id token of the user profile. I understand that in some use cases you would like to protect against any wallet taking credentials for other identities, but I as a credential issuer cannot always add my business data to user profiles on the IDP. The security of this solution all depends on the user profile data. If an unauthorized person can change this data (in this case, this could be the same user), then incorrect verifiable credentials can be created.

Next step is to create an application to verify and use the verifiable credentials created here.

Links

https://mattr.global/

https://mattr.global/get-started/

https://learn.mattr.global/

https://keybase.io/

https://learn.mattr.global/tutorials/dids/did-key

https://gunnarpeipman.com/httpclient-remove-charset/

https://auth0.com/

Thursday, 29. April 2021

Mike Jones: self-issued

OpenID Connect Working Group Presentation at the Third Virtual OpenID Workshop

I gave the following presentation on the OpenID Connect Working Group at the Third Virtual OpenID Workshop on Thursday, April 29, 2021: OpenID Connect Working Group (PowerPoint) (PDF)

I gave the following presentation on the OpenID Connect Working Group at the Third Virtual OpenID Workshop on Thursday, April 29, 2021:

OpenID Connect Working Group (PowerPoint) (PDF)

Bill Wendel's Real Estate Cafe

Homebuyers, let’s use 20th anniversary to call for a Bidding War Bill of Rights!

Kudos to investigative journalists in Canada for putting BLIND bidding wars into the spotlight. Invite readers to visit yesterday’s article and watch the video. Equally… The post Homebuyers, let's use 20th anniversary to call for a Bidding War Bill of Rights! first appeared on Real Estate Cafe.

Kudos to investigative journalists in Canada for putting BLIND bidding wars into the spotlight. Invite readers to visit yesterday’s article and watch the video. Equally…

The post Homebuyers, let's use 20th anniversary to call for a Bidding War Bill of Rights! first appeared on Real Estate Cafe.


MyDigitalFootprint

the journey and the destination

I know the journey is more important than the destination, but destinations provide an essential point as they mark somewhere to head towards.  All journeys start with a single step, and for me, this journey started a little over three years ago. I have spent this past period considering the question, “How do we make Better Decisions.” This question was refined to become “How do we make B

I know the journey is more important than the destination, but destinations provide an essential point as they mark somewhere to head towards.  All journeys start with a single step, and for me, this journey started a little over three years ago. I have spent this past period considering the question, “How do we make Better Decisions.” This question was refined to become “How do we make Better Decisions with data.” This expanded into “How do we make Better Decisions with data and be better ancestors?”  My journey can finally see a destination. 

However, I am now facing a more significant challenge.

Having reached the destination zone, I want to leave a mark, and it's straightforward to imagine planting a flag. The hope is that when the flag is planted some of the team back at home can see that you've reached the final place.  In most circumstances, the destination is not in the Line-of-Sight. Therefore you pick up the flag and wave it, hoping that somebody with binoculars can see you waving your destination flag. If someone sees you, they relay it to the others that the advance party or pioneers (risk-takers and general nutters) have reached the destination. In innovation and invention land, you're tempted to stay at the destination, hoping that the others will follow you to the same place. Waiting there means you eventually run out of resources, this being cash or budget. Having run out of resources, you're faced with the reality you have to head back to where everyone else is as nobody is going to follow you to the destination; it is too risky.  On arriving back at home, there's a bit of a party and a celebration. You relay your stories about the journey and how wonderful the destination is.   After a few hours of partying, everybody heads back to their homes, leaving you wondering how to persuade others to go to the destination as well.

There are a series of videos on YouTube of a single dancer who starts to dance to music, and over periods of 5-minutes to several hours, the dancing crowd grows until everybody is dancing. Early adopters and supporters join the pioneer, and eventually, they are joined by the followers who make the crowd. (2021 #covid19 note, what is a crowd and what is a party?)

The next day after your return, everybody gets up and goes around their jobs and business as usual.  What is now needed is to convert the excitement of the destination into a language and story that the first supporters and early adopters can relate to and join in. What is tremendously difficult is constructing a straightforward linear narrative and step actions, meaning the crowd can also join in and desire to get to a new destination. (crossing the chasm)  For businesses right now, this is the digital transformation, data, circular economy, ESG, climate change, AI, ethics and sustainability.  The pioneers have been to the destination, planted the flags, imagined the better fruits, but we have to drop the complexity of the issues and find a better story for us all to get there.

The journey I have been on for the last few years is understanding how we make better decisions.  This means I have had to unpack numerous topics and dependencies deep in the messiness of models and complexity. Unsurprisingly, there are only a few who are also motivated to understand the complexity and philosophy of better decisions; it is my burden. We are converting what we have found at the destination (how to make better decisions with data and be better ancestors) into a language that fellow supporters and early adopters can engage with and discuss. 

BUT

I am now desperate for expert support to convert the ideas into a straightforward narrative and actions that the crowd in the town can take on board and start their journey, knowing that where they are going has already been discovered. My struggle is that I find it hard to dumb down the complex into a tweet as it cannot possibly embrace the complexity. I continually fight myself as I can see the simple is not truly representative. The PhD virologist of 30 years is given 1 minute to explain virus mutation on the news.  The simple becomes fodder to the vocal and easily upset in social media land as it is easily misunderstood and taken out of context. I spend more time trying to justify (to myself) why I am using simple concepts to explain complex ideas.

Tribe, I need some help.  Is there anybody out there (a nod to Pink Floyd) who is willing to spend the time in a small community helping to refine the words, language, stories, ideas and concepts into a straightforward linear narrative that portrays actions and activities, which means we can all take steps on a journey to making Better Decisions? This is not a job; it is a vocation. Journeys are so much more fun when we do them together, and the next part of the journey is to do it all together. I'm looking for early adopters and supporters who can share their destinations so that together we can create a better narrative about decision making, governance and oversight. 

Are you willing and able to help form and write a narrative to enable others to come on our journey?



Day 0 - as the CDO, you are now the new corporate punch bag.

In commercial land, the axis of power has tended to rest with the CEO/ CFO relationship.   There is always a myriad of other political triangles that lobby and wrestle for power and sway decisions.  Given that decisions are increasingly reliant on evidence which is data, the CDO gets dragged into everyone's battles, which are not always in the best interest of the business, customer,

In commercial land, the axis of power has tended to rest with the CEO/CFO relationship.   There is always a myriad of other political triangles that lobby and wrestle for power and sway decisions.  Given that decisions are increasingly reliant on evidence, which is data, the CDO gets dragged into everyone's battles, which are not always in the best interest of the business, customer, ecosystem or society - such are incentive schemes.

What everyone else in the senior team does not want to recognise is that the data they use as evidence and proof is equally supportive or detrimental to everyone else's cause.  Whilst everyone else on the leadership team gets to pick and bias what they foreground and promote, the CDO has to keep their mind open and judge all data with the same level of critical thinking.  This tends to mean the CDO becomes the punch bag when data either supports a decision or otherwise, which in reality is a political lobby for power which the data may not fully support. However, we are all fallible, and data is not the evidence we want it to be. 

we are all fallible, and data is not the evidence we want it to be. 

Even the most highly skilled data scientists, incentivised to come to the most accurate results, can create a broad range of conclusions when given the same data and hypothesis.  This means that senior leadership teams don’t know if the data they have and the conclusion they have reached is correct. 

A recent and significant published paper gave 73 teams the same data and research question. The answers varied widely, and very little variation was easily explained. What the paper goes on to say is that competencies and potential confirmation biases do not explain the broad variation in outcomes. 



The paper concludes that if any given study had been conducted by a different (set of) researcher(s), perhaps even the same researchers at a different time, its results may well have varied for reasons that cannot be reduced to easily observable analytical choices or biases.  They conclude that steps in the research process remain undisclosed in the standard presentation and consumption of scientific results and have exposed a “hidden universe” of idiosyncratic research and researcher variability. An important takeaway here is that the reality of researcher decisions cannot easily be simulated.

Therefore, there are at least two forces at play in data leading to a recommendation or decision.  One, the bias and foregrounding of data outcomes by a member of the leadership team is for a reason. Two, the same data, same tools and the same team can generate more than one recommendation.  How to determine what is at play is a modern-day skill the CDO must have. 

As the CDO, you have to gain the trust of all your senior team and work with them to determine what is incentive biased, desired outcome-driven, seeking support or driven from the data set where there is naturally more than one possible conclusion.  In doing this, you have to be capable of assessing the alignment or divergence from others in the team who have come to different conclusions.   This becomes more complex if the data set is public, purchased, or gathered from the ecosystem with partners, as others outside of the organisation can create different conclusions. You have to be prepared to justify your reason, rationale and processes.   The skill you need to demonstrate is one of consistency in finding a route to align any data conclusions to the company's purpose and agreed strategic goals and not to the forces of lobby. Leave those calls to the CEO knowing the CEO has your back or get out, as being the new corporate punch bag is not fun.   


Note to the CEO

Each leadership team member is playing their own game and is looking to the CDO to find and support data for their cause, lobby, decision or budget.  This means that the CDO becomes the corporate punch bag and police, taking over from HR. The CDO has to navigate the path of varying conclusions and desired outcomes from the same data set, which are in conflict as they meet individuals' agendas.   As the CEO, you have to be aware of this and the game your CFO will play.  The power axis of the CEO/CFO relationship itself comes under stress as the CDO can give you more insights into decisions “presented because of incentives and self-interest” than anyone else, but HR will still want to own it.  If you alienate the CDO, you will lose that linkage, which is exactly what others want.  However, first check that the CDO has the trust of the team and that your CDO has the capability and capacity to manage this modern-day leadership challenge.  If not, it might be time to upgrade your CDO with new skills or find a new version. 






If your strategic plan is based on data, have you considered the consequences?

source: accenture https://www.accenture.com/_acnmedia/PDF-108/Accenture-closing-data-value-gap-fixed.pdf Several generations ago, the incentives in your organisation mean that those who collected and analysed old data created bias. Such bias occurred as people in the system favoured specific incentives, rewards and recommendations.  The decisions made created certain processes a

source: accenture https://www.accenture.com/_acnmedia/PDF-108/Accenture-closing-data-value-gap-fixed.pdf


Several generations ago, the incentives in your organisation meant that those who collected and analysed old data created bias.

Such bias occurred as people in the system favoured specific incentives, rewards and recommendations. 

The decisions made created certain processes and rules to hide the maintenance of those incentives and biases.

The biases worked to favour certain (the same) groups and outcomes, which have, over time, become part of the culture, reinforcing the processes and rules.

How do you know, today, what bias there is in your strategic plan? What framing and blindness are created because of the ghosts in your system?   

If you cannot see, touch and feel equality and balance in gender, race and neuro-diversity, it is likely that the bias is still there.  Whilst it might feel good to get to a target, that does not mean the systems, rules and processes are free of those same biases.   It took generations to build in; it takes far more effort than a target to bring about better decisions. 

How do you know your data set has the views of everyone who is critical to your business today and in the future? How do you know the tools you use give equal weight to everyone needed to make your business thrive?  How do you know if the recommendation was written before the analysis? How do your incentives create a new bias?

Is the consequence of your beautiful strategic data-led plan that you get precisely what the biased data wants?

In any framework where data leads to decisions, strategy or automation, first understand how you might be reinforcing something you are trying to eliminate.


Wednesday, 28. April 2021

Mike Jones: self-issued

Passing the Torch at the OpenID Foundation


Today marks an important milestone in the life of the OpenID Foundation and the worldwide digital identity community. Following Don Thibeau’s decade of exemplary service to the OpenID Foundation as its Executive Director, today we welcomed Gail Hodges as our new Executive Director.

Don was instrumental in the creation of OpenID Connect, the Open Identity Exchange, the OpenID Certification program, the Financial-grade API (FAPI), and its ongoing worldwide adoption. He’s created and nurtured numerous liaison relationships with organizations and initiatives advancing digital identity and user empowerment worldwide. And thankfully, Don intends to stay active in digital identity and the OpenID Foundation, including supporting Gail in her new role.

Gail’s Twitter motto is “Reinventing identity as a public good”, which I believe will be indicative of the directions in which she’ll help lead the OpenID Foundation. She has extensive leadership experience in both digital identity and international finance, as described in her LinkedIn profile. The board is thrilled to have her on board and looks forward to what we’ll accomplish together in this next exciting chapter of the OpenID Foundation!

I encourage all of you to come meet Gail at the OpenID Foundation Workshop tomorrow, where she’ll introduce herself to the OpenID community.


Phil Windley's Technometria

Legitimacy and Decentralized Systems


Summary: Why are some decentralized systems accepted and widely used while others wither? Why do some “hard forks” succeed while others fail? It all comes down to legitimacy.

As an undergraduate engineering major, I recall being surprised by the so-called three body problem. In Newtonian mechanics, there are nice closed-form solutions to problems involving the motion of two interacting bodies, given their initial position and velocity. This isn’t true of systems with three or more points. How can adding just one more point to the system make it unsolvable?

N-body systems are chaotic for most initial conditions and their solution involves numerical methods—simulation—rather than nice, undergraduate-level math. In other words, it’s messy. Humans like simple solutions.

Like the n-body problem, decentralized systems are chaotic and messy. Humans aren’t good at reasoning about emergent behavior from the coordinated, yet autonomous, behavior of interacting agents. We build bureaucracies and enact laws to try to make chaotic systems legible. The internet was our first, large-scale technical system where decentralization and governance clashed. I remember people in the 90’s asking “Who’s in charge of the internet?”

In The Most Important Scarce Resource is Legitimacy, Vitalik Buterin, the creator of Ethereum, discusses why legitimacy is crucial for the success of any decentralized endeavor. He says:

[T]he Bitcoin and Ethereum ecosystems are capable of summoning up billions of dollars of capital, but have strange and hard-to-understand restrictions on where that capital can go. From The Most Important Scarce Resource is Legitimacy
Referenced 2021-04-26T14:46:43-0600

These “strange and hard to understand restrictions” are rooted in legitimacy. Decentralized systems must be considered legitimate in order to thrive. That legitimacy is tied to how well the systems and people enabling them, like programmers and miners, are seen to be following “the rules” both written and unwritten. Legitimacy isn’t a technical issue, but a social one.

Wikipedia defines legitimacy as

the right and acceptance of an authority, usually a governing law or a regime.

While this is most often applied to governments, I think we can rightly pose legitimacy questions for technical systems, especially those that have large impacts on people and society.

With respect to legitimacy, Philip Bobbitt says:

The defining characteristic … of a constitutional order is its basis for legitimacy. The constitutional order of the industrial nation state, within which we currently live, promised: give us power and we will improve the material well-being of the nation.

In other words, legitimacy comes from the constitutional order: the structure of the governance and its explicit and implicit promises. People grant legitimacy to constitutional orders that meet their expectations by surrendering part of their sovereignty to them. In the quote from Vitalik above, the "strange and hard to understand restrictions" are promises that members of the Bitcoin or Ethereum ecosystems believe those constitutional orders have made. And if they're broken, the legitimacy of those systems is threatened.

Talking about “legitimacy” and “constitutional orders” for decentralized systems like Bitcoin, Ethereum, or your favorite NFT might feel strange, but I believe these are critical tools for understanding why some thrive and others wither. Or why some hard forks succeed and others don't.

In Bobbitt’s theory of constitutional orders, transitions from one constitutional order to a new one always require war. While people seeking legitimacy for one decentralized system or another might not use tanks or missiles, a hard fork is essentially just that: a war fought to cause the transition from one constitutional order to another over a question of legitimacy. For example, Vitalik describes how the Steem community did a hard fork to create Hive, leaving Steem’s founder (and his tokens) behind, because the constitutional order he represented lost its legitimacy once people believed it could no longer keep its promises.

So when you hear someone talking about a decentralized system and starting sentences with phrases like “Somebody should…” or “Why do we let them…” or “Who’s in charge of…”, beware. Unlike most of the easy-to-understand systems we’re familiar with, decentralized systems are heterarchical, not hierarchical. Thus the means of their control is political, not authoritarian. No one allows these systems to exist; they're called "permissionless" for a reason. They simply are, by virtue of their legitimacy in the eyes of the people who use and support them.

This doesn’t mean decentralized systems are unassailable, but changing them is slower and less sure than most people would like. When you “know” the right way to do something, you want a boss who can dictate the change. Changing decentralized systems is a political process that sometimes requires war. As Clausewitz said “War is the continuation of politics by other means.”

There are no closed-form solutions to the n-body problems represented by decentralized systems. They are messy and chaotic. I’m not sure people will ever get more comfortable with decentralization or understand it well enough to reason about it carefully. But one thing is for sure: decentralized systems don’t care. They simply are.

A version of this article was previously published in Technometria Newsletter, Issue #6, April 13, 2021.

Photo Credit: Major General Andrew Jackson and his Soldiers claim a victory in the Battle of New Orleans during the War of 1812. from Georgia National Guard (CC BY 2.0)

Tags: legitimacy decentralization

Monday, 26. April 2021

Hyperonomy Digital Identity Lab

The Verifiable Economy Architecture Reference Model (VE-ARM): Fully Decentralized Object (FDO) Model


Michael Herman
Hyperonomy Digital Identity Lab
Trusted Digital Web Project
Parallelspace Corporation

NOTE: This article supersedes an older version of this article:

The Verifiable Economy: Architecture Reference Model (VE-ARM) 0.1: Original Concepts [OLD]

1. Introduction

1.1 Goals

The goals of this article are three-fold:

1. Introduce the concept of a Verifiable Capability Authorization (VCA) and how VCAs can be used to implement controls over which specific methods a particular party is allowed to execute against a particular instance of a Fully Decentralized Object (FDO). VCAs are both delegatable and attenuatable.
2. Illustrate how #graphitization techniques can be used for modeling and visualizing:
   - Trusted Decentralized Identifiers (DIDs)
   - DID Documents
   - Trusted Digital Agents (and their Service Endpoints (SEPs))
   - Verifiable Credentials (VCs)
   - Verifiable Capability Authorizations (VCAs) and,
   - most importantly, their myriad of interrelationships.
3. Use the above 2 goals to further detail and describe how to use the VE-ARM model for implementing trusted, reliable, efficient, frictionless, standards-based, global-scale software systems based on Fully Decentralized Objects (FDOs).

1.2 Purpose

This article takes the following “All-in” graph view of The Verifiable Economy Architecture Reference Model (VE-ARM) and partitions it into a series of subgraphs that depict the key elements of the overall architecture reference model for FDOs. Each subgraph is documented with a narrative that is mapped to the numbered blue targets used to identify each element in each subgraph.

Figure 1. Subgraph 0. The Verifiable Economy Architecture Reference Model (VE-ARM)

The above graphitization is the result of several iterations of validating The Verifiable Economy Architecture Reference Model (VE-ARM) against the following live scenario:

Erin acquiring a personal DID and DID Document to enable Erin to acquire a Province of Sovronia Driver’s License (SDL) (represented as an FDO) and hold the SDL in Erin’s digital wallet.

TDW Glossary: Self-Sovereign Identity (SSI) User Scenarios: Erin Buys a Car in Sovronia (3 User Scenarios)

A Fully Decentralized Object (FDO) is comprised of the following minimal elements:

- DID (and corresponding DID Document)
- Master Verifiable Capability Authorization (MVCA) for the object’s DID and DID Document
- Zero or more Verifiable Capability Authorizations (VCAs) linked to the above MVCA for the object (recursively)
- A Property Set for the FDO:
  - Property Set DID (and corresponding DID Document)
  - Property Set MVCA that is issued when the Property Set’s DID and DID Document are issued
  - Property Set Verifiable Credential (VC) that is issued to hold the object’s properties and their values
  - Zero or more Verifiable Capability Authorizations (VCAs) linked to the FDO’s Property Set MVCA (recursively)
- A Trusted Digital Agent registered with a Service Endpoint (SEP) in the object’s DID Document that implements the VCA-controlled methods for accessing and interacting with the object and/or its property set. Control over which methods are invokable by a party is governed by the respective MVCAs and a Delegated Directed Graph of VCAs (if there are any).

The goal and purpose of the VE-ARM is to describe a Fully-Decentralized Object (FDO) model that unites the following concepts into a single integrated, operational model:

- Verifiable Identifiers, Decentralized Identifiers (DIDs), and DID Documents;
- Verifiable Claims, Relationships, and Verifiable Credentials (VCs);
- Master Verifiable Capability Authorizations (MVCA) (Master Proclamations), Verifiable Capability Authorizations (VCAs) (Proclamations), Verifiable Capability Authorization Method Invocations (MIs); and
- Trusted Digital Agents (TDAs).

1.3 Background

The scenario used to model the VE-ARM is an example of a citizen (Erin) of a fictional Canadian province called Sovronia holding a valid physical Sovronia Driver’s License (Erin RW SDL) as well as a digital, verifiable Sovronia Driver’s License (Erin SDL).

Figure 2. Erin’s “Real World” Sovronia Driver’s License (Erin RW SDL)

1.4 Graphitization of the Verifiable Economy Architecture Reference Model (VE-ARM)

The underlying model was built automatically using a series of Neo4j Cypher queries running against a collection of actual DID Document, Verifiable Credential, and Verifiable Capability Authorization JSON files. The visualization was laid out using the Neo4j Browser. The resulting layout was manually optimized to produce the final version of the graphitization used in this article. The numbered targets used to identify each element in each subgraph were added using Microsoft PowerPoint.

2. Organization of this Article

Following a list of Key Definitions, the remainder of this article is organized as a series of increasingly more detailed explanations of the VE-ARM model. The overall model is partitioned into a collection of (overlapping) subgraphs.

Each subgraph is described by a narrative that explains the purpose of each element in the particular subgraph. Each narrative is organized as a list of numbered bullets that further describe the corresponding numbered blue targets used to identify each element in each subgraph.

A narrative is a story. It recounts a series of events that have taken place. … These essays are telling a story in order to drive a point home. Narration, however, is the act of telling a story.

Examples of Narration: 3 Main Types in Literature
2.1 Table of Subgraphs

- Subgraph F1 – Erin’s DID Document (DD) Neighborhood
- Subgraph F2 – Erin’s DD Master Verifiable Capability Authorization (MVCA) Neighborhood
- Subgraph F3 – Province of Sovronia DID Document (DD) Neighborhood
- Subgraph F4 – Province of Sovronia DD Master Verifiable Capability Authorization (MVCA) Neighborhood
- Subgraph F5 – DID Documents (DDs) and Master Verifiable Capability Authorizations (MVCAs) Neighborhood
- Subgraph F6 – Erin’s Sovronia Drivers License (SDL) Property Set Verifiable Credential (VC) Neighborhood
- Subgraph F7 – Erin’s SDL Property Set Delegated Directed Graph of Verifiable Capability Authorizations Neighborhood
- Subgraph F8 – Erin “Real World” Neighborhood
- Subgraph F9 – SOVRONA Trusted Decentralized Identity Provider (TDIDP) Neighborhood
- Subgraph F10 – The Verifiable Economy “All-In” Graph View

Figure 4. Subgraph 0. Table of Subgraphs

3. Key Definitions

Several of the following definitions (those related to the concept of verifiable capability authorizations) are inspired by the following RWoT5 article:

Linked Data Capabilities by Christopher Lemmer Webber and Mark S. Miller

Additional context can be found in Authorization Capabilities for Linked Data v0.3.

3.1 VE-ARM Verifiable Capability Authorization (VCA) Model

The VE-ARM Verifiable Capability Authorization (VCA) model is used to grant the authority to specific parties to invoke specific methods against an instance of a Fully Decentralized Object (FDO). The VE-ARM VCA model is based, in part, on the Object-Capability Model. The VE-ARM VCA model supports Delegation and Attenuation.

3.2 Object Capability Model

The object-capability model is a computer security model. A capability describes a transferable right to perform one (or more) operations on a given object. It can be obtained by the following combination:

– An unforgeable reference (in the sense of object references or protected pointers) that can be sent in messages.
– A message that specifies the operation to be performed.

Object-Capability Model (https://en.wikipedia.org/wiki/Object-capability_model)
3.3 VCA Model Principles: Delegation and Attenuation

With delegation, a capability holder can transfer his capability to another entity, whereas with attenuation he can confine a capability before delegating it.

Capability-based access control for multi-tenant systems using OAuth 2.0 and Verifiable Credentials
3.4 Fully Decentralized Object (FDO)

In The Verifiable Economy, a Fully Decentralized Object (FDO) is comprised of the following minimal elements:

- DID (and corresponding DID Document)
- Master Verifiable Capability Authorization (MVCA) for the object’s DID and DID Document
- Zero or more Verifiable Capability Authorizations (VCAs) linked to the above MVCA for the object (recursively)
- A Property Set for the FDO:
  - Property Set DID (and corresponding DID Document)
  - Property Set MVCA that is issued when the Property Set’s DID and DID Document are issued
  - Property Set Verifiable Credential (VC) that is issued to hold the object’s properties and their values
  - Zero or more Verifiable Capability Authorizations (VCAs) linked to the FDO’s Property Set MVCA (recursively)
- A Trusted Digital Agent registered with a Service Endpoint (SEP) in the object’s DID Document that implements the VCA-controlled methods for accessing and interacting with the object and/or its property set. Control over which methods are invokable by a party is governed by the respective MVCAs and a Delegated Directed Graph of VCAs (if there are any).

3.5 Fully Decentralized Object (FDO) Model

A complete decentralized object system based on the concept of FDOs.
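Putting the minimal elements of an FDO together, the following is a small C# sketch of how such an object might be represented in code. It is an illustration only: the type and property names below are assumptions made for this sketch and are not defined by the VE-ARM.

using System.Collections.Generic;

// Illustrative composition of a Fully Decentralized Object (FDO), mirroring the minimal
// elements listed in Section 3.4. The VE-ARM works with DIDs, DID Documents and JSON-LD
// structures; plain strings are used here for brevity and all names are assumptions.
public record FullyDecentralizedObject(
    string Did,                         // the object's DID (resolving to its DID Document)
    string MasterVcaId,                 // MVCA created when the DID and DID Document were issued
    IReadOnlyList<string> VcaIds,       // zero or more VCAs chained (recursively) to the MVCA
    FdoPropertySet PropertySet,         // the FDO's Property Set
    string AgentServiceEndpoint);       // SEP of the Trusted Digital Agent implementing the methods

public record FdoPropertySet(
    string Did,                         // the Property Set's own DID and DID Document
    string MasterVcaId,                 // MVCA issued together with the Property Set's DID
    string VerifiableCredentialId,      // VC holding the object's properties and their values
    IReadOnlyList<string> VcaIds);      // zero or more VCAs chained to the Property Set MVCA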

3.6 Verifiable Capability Authorization (VCA)

A Verifiable Capability Authorization (VCA) is a JSON-LD structure that grants (or restricts) a specific party (the controller of a key (grantedKey)) the ability to invoke specific methods against a specific instance of a Fully Decentralized Object (FDO). A VCA typically has a type of Proclamation (unless it is a Method Invocation VCA).

A VCA has the following properties:

- id – trusted, verifiable decentralized identifier for the VCA
- type – “Proclamation”
- parent – trusted, verifiable decentralized identifier for a parent VCA whose control supersedes this current VCA.
- subject – trusted, verifiable decentralized identifier of the specific instance of the FDO.
- grantedKey – trusted, verifiable key of the party to whom the specified capabilities are being granted specifically with respect to the specific instance of the FDO.
- caveat – the collection of specific capabilities the party represented by grantedKey is granted (or restricted) from invoking against a specific instance of the FDO identified by the subject identifier.
- signature – trusted, verifiable proof that this VCA is legitimate.

NOTE: The current VCA’s capabilities must be equal to or an attenuation of the parent VCA’s capabilities. This part of the VCA model is recursive.

NOTE: An FDO can be an object or a service represented as an object.
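To make the shape described above concrete, here is a minimal C# sketch of a VCA. The VE-ARM defines VCAs as JSON-LD structures, so the record, the field types and the sample values below are illustrative assumptions rather than the specification's wire format.

using System.Collections.Generic;

// Minimal sketch of a Proclamation-type VCA as a C# record. Field names mirror the
// properties listed above; the types and the sample values are illustrative assumptions.
public record VerifiableCapabilityAuthorization(
    string Id,                      // decentralized identifier for the VCA itself
    string Type,                    // "Proclamation" (an MI uses "Invocation" instead)
    string? Parent,                 // parent VCA superseding this one; null for an MVCA (see 3.7)
    string Subject,                 // DID of the FDO instance the VCA applies to
    string GrantedKey,              // key of the party granted (or restricted to) the capabilities
    IReadOnlyList<string> Caveat,   // capabilities granted or restricted, e.g. "RestrictToMethod(Present)"
    string Signature)               // proof that the VCA is legitimate
{
    // Hypothetical example loosely following the Erin SDL scenario; every value is a placeholder.
    public static VerifiableCapabilityAuthorization ErinSdlExample() => new(
        Id: "did:svrn:vca:example-vca",
        Type: "Proclamation",
        Parent: "did:svrn:vca:example-mvca",
        Subject: "did:svrn:vc:example-sdl-prop-set",
        GrantedKey: "did:svrn:person:example#key-1",
        Caveat: new[] { "RestrictToMethod(Present)" },
        Signature: "example-signature");
}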

The following is an example of a VCA associated with Erin and Erin’s Sovronia Driver’s License Property Set.

Snippet 1. Verifiable Capability Authorization (VCA) Example

3.7 Master Verifiable Capability Authorization (MVCA)

A Master Verifiable Capability Authorization (MVCA) is a Proclamation-type VCA that is created for every FDO at the time that the DID and DID Document for the FDO are issued by a Trusted Decentralized Identity Provider (TDIDP) (e.g. SOVRONA).

That is, a new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA typically grants authorization for any and all methods to the controller of the DID. (This is the essence of the self-sovereign identity principle.)

An MVCA has the following properties:

- id – trusted, verifiable decentralized identifier for the VCA
- type – “Proclamation” (or “Invocation”)
- subject – trusted, verifiable decentralized identifier of the specific instance of the FDO. An FDO can be an object or a service represented as an object.
- grantedKey – trusted, verifiable key of the party to whom the specified capabilities are being granted specifically with respect to the specific instance of the FDO.
- caveat – the collection of specific capabilities the party represented by grantedKey is granted (or restricted) from invoking against a specific instance of the FDO identified by the subject identifier. Typically, this is set to RestrictToMethod( * ), granting the controller of the grantedKey the ability to execute any and all methods against the subject. (This is where and how the essence of the self-sovereign identity principle is realized.)
- signature – trusted, verifiable proof that this VCA is legitimate.

NOTE: An MVCA has no parent property because an MVCA always represents the top-level root VCA in a Delegated Directed Graph of Verifiable Capability Authorizations (see below).
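The rule that every newly issued DID and DID Document comes with a root MVCA can be sketched as follows. This is a hedged illustration: the type and method names, and the wildcard caveat string, are assumptions based on the description above rather than VE-ARM-defined APIs.

using System;

// Sketch only: a TDIDP minting the root MVCA alongside a newly issued DID and DID Document.
public record MasterVca(string Id, string Subject, string GrantedKey, string Caveat, string Signature);

public static class TdidpSketch
{
    // Issues a DID (and, implicitly, its DID Document) together with the MVCA that grants
    // every method on the new subject to the DID controller.
    public static (string Did, MasterVca Mvca) IssueDid(string controllerKey)
    {
        var did = $"did:svrn:example:{Guid.NewGuid()}";
        var mvca = new MasterVca(
            Id: $"did:svrn:vca:{Guid.NewGuid()}",
            Subject: did,                        // the newly issued DID / DID Document
            GrantedKey: controllerKey,           // the controller of the new DID
            Caveat: "RestrictToMethod( * )",     // any and all methods, as described above
            Signature: "example-signature");     // would be produced with the TDIDP's signing key
        return (did, mvca);
    }
}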

The following is an example of an MVCA for Erin’s Sovronia Drivers License Property Set. This MVCA is the parent of the above VCA.

Snippet 2. Master Verifiable Capability Authorization (MVCA) Example

3.8 VCA Method Invocation (MI)

A VCA Method Invocation (MI) is a JSON-LD structure that attempts to invoke a specific method against a specific instance of a Fully Decentralized Object (FDO) on behalf of a specific invoking party. An MI is of type Invocation (not Proclamation).

An MI has the following properties:

- id – trusted, verifiable decentralized identifier for the MI
- type – “Invocation”
- proclamation – trusted, verifiable decentralized identifier for the VCA to be used for this MI against the specific instance of an FDO by a specific party (Proclamation VCA).
- method – specific name of the method to be invoked against the specific instance of an FDO by a specific party.
- usingKey – trusted, verifiable key of the party to be used to attempt the invocation of the above method against a specific instance of the FDO.
- signature – trusted, verifiable proof that this MI is legitimate.

NOTE: An MI doesn’t have a subject property. The target object is specified by the subject property of the proclamation VCA.

A very important point you make is, “NOTE: An MI doesn’t have a subject property. The target object is specified by the subject property of the proclamation VCA.”  That point is so important, not separating designation from authorization, that I’d like to see it in bold.

Alan Karp alanhkarp@gmail.com, May 17, 2021 CCG Mailing List
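A minimal sketch of an MI's shape, making the point above explicit in code, might look like this. It is an illustrative C# record, not the VE-ARM's JSON-LD wire format, and the names are assumptions.

// Illustrative sketch of a Method Invocation (MI). Note that there is no subject field:
// the target object is designated by the subject of the referenced proclamation VCA.
public record MethodInvocation(
    string Id,            // decentralized identifier for the MI
    string Type,          // always "Invocation"
    string Proclamation,  // id of the VCA authorizing this invocation (and naming the subject)
    string Method,        // e.g. "Present"
    string UsingKey,      // key of the invoking party
    string Signature);    // proof that the MI is legitimate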

The following is an example of an MI that attempts to invoke the Present method on behalf of Erin against Erin’s Sovronia Drivers License Property Set. The referenced VCA is the VCA example from above.

Snippet 3. Verifiable Capability Authorization Method Invocation (MI) Example

3.9 Delegated Directed Graph of Verifiable Capability Authorizations

A Delegated Directed Graph of Verifiable Capability Authorizations is a directed list of VCAs that starts with an MVCA as its top-level, root VCA. Each VCA in the graph points to the previous VCA in the graph via its parent property. An MI, in turn, refers to a single VCA in the graph via the MI’s proclamation property. The capabilities in effect are those that are specifically listed in the target VCA’s caveat property. While there is no inheritance of capabilities in this model, the capabilities specified by each VCA must be equal to or less than (a subset of) the capabilities of its parent VCA (see the definition of the Principles of Delegation and Attenuation).

The above examples of an MVCA, a VCA, and an MI, taken together, form an example of a Delegated Directed Graph of Verifiable Capability Authorizations.

Figure 3. Delegated Directed Graph of Verifiable Capability Authorizations Example
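As a sketch of how an agent might validate such a graph before honouring an MI, the code below walks the parent chain from the VCA referenced by the MI up to the root MVCA and checks the attenuation rule at each step. This validation logic is inferred from the description above and is an illustrative assumption, not code from the VE-ARM.

using System.Collections.Generic;

// Minimal node type for this sketch: a VCA (or the root MVCA when Parent is null)
// reduced to the fields needed for attenuation checking.
public record CapNode(string Id, string? Parent, HashSet<string> Caveat);

public static class DelegationChainSketch
{
    // Returns true if every VCA in the chain, starting from the one referenced by an MI's
    // proclamation property, is an attenuation (a subset) of its parent, up to the root MVCA.
    // (Wildcard caveats such as RestrictToMethod( * ) would need special handling; omitted here.)
    public static bool IsValidChain(string startVcaId, IReadOnlyDictionary<string, CapNode> nodesById)
    {
        if (!nodesById.TryGetValue(startVcaId, out var current)) return false;

        while (current.Parent is not null)
        {
            if (!nodesById.TryGetValue(current.Parent, out var parent)) return false;

            // Attenuation rule: a child's capabilities must be equal to or a subset of its parent's.
            if (!current.Caveat.IsSubsetOf(parent.Caveat)) return false;

            current = parent;
        }

        // A node with no parent is, by definition, the root MVCA.
        return true;
    }
}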

3.8.1 Narrative

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

18. Erin SDL VCA. Erin SDL VCA is the Verifiable Capability Authorization (VCA) created for Erin’s SDL Prop Set DD. The VCA was issued by the Province of Sovronia authorizing Erin to be able to present the properties (and their values) of Erin’s SDL to a third party using the Present method associated with Erin’s SDL Prop Set and supported (implemented) by Erin’s AGENT. The parent of Erin’s SDL VCA is the Erin SDL MVCA.

19. Erin SDL VCA MI. Erin SDL VCA MI is an example of a VCA Method Invocation (MI) that uses the Erin SDL VCA, which authorizes the potential execution of the Present method by Erin against Erin’s SDL Prop Set.

3.10 Resource Servers and Authentication Servers

A resource server that hosts a protected resource owned by a resource owner, a client wishing to access that resource, and an authorization server responsible for generating access tokens. Access tokens are granted to clients authorized by the resource owner: client authorization is proven using an authorization grant. In our system we are using the ‘client credentials’ grant. As it can be seen from Fig. 1, when this type of grant is used, a resource owner configures the authentication server with the credentials of the authorized clients; a client authenticates to the authorization server and receives an access token, then it uses the access token to access the protected resource.

Capability-based access control for multi-tenant systems using OAuth 2.0 and Verifiable Credentials

Although these terms are not currently used in the VE-ARM, the resource server role is assigned to the FDO AGENT specified in the subject’s DID document. The authorization server role is assigned to the actor who is responsible for creating Verifiable Capability Authorizations (VCAs). In the current example, SOVRONA hosts the authorization server on behalf of either the Province of Sovronia or Erin.

4. VE-ARM Principles

The following principles are used to guide The Verifiable Economy Architecture Reference Model (VE-ARM):

- DD MVCA Principle. Every DID (and DID Document) has a corresponding Master Verifiable Capability Authorization (MVCA). Whenever a DID and corresponding DID Document are issued, a corresponding Master Verifiable Capability Authorization (MVCA) is automatically created. See F2 in Figure 1. Snippet 4 is an example of a DID Document Master Verifiable Capability Authorization (DD MVCA).
- Property Set VC Principle. All of the properties (and their values) of a particular decentralized object, its Property Set, are stored in a Verifiable Credential (VC) that has an id value equal to the DID id of the decentralized object. See F6 in Figure 6. Snippet 5 is a partial example of a Property Set Verifiable Credential (PS VC).

Snippet 4. DID Document Master Verifiable Capability Authorization (MVCA) Example

Snippet 5. Partial Property Set Verifiable Credential (VC) Example

NOTE: Additional architecture and design principles need to be added to this section.
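As a small illustration of the Property Set VC Principle, the check below simply compares the identifier of a Property Set VC with the DID of its object. The types and names are assumptions made for this sketch, not VE-ARM definitions.

using System.Collections.Generic;

// Illustrative check for the Property Set VC Principle: the VC holding an object's
// properties carries the same id as the object's DID.
public record PropertySetVc(string Id, IReadOnlyDictionary<string, string> Claims);

public static class VeArmPrincipleChecks
{
    public static bool SatisfiesPropertySetVcPrinciple(string objectDid, PropertySetVc vc)
        => vc.Id == objectDid;
}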

5. Erin’s DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

Erin Amanda Lee Anderson is a Person, a Citizen of Sovronia, and a Sovronia Driver’s License holder. The following is a graphitization of Erin’s DID and DID Document and the corresponding Master Verifiable Capability Authorization (MVCA).

Figure 5. Subgraphs F1 and F2: Erin’s DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

5.1 Erin’s DID Document Narrative (F1)

1. Erin. Erin is a RW_PERSON (“Real World” Person) and a citizen of the Province of Sovronia. Erin also holds a (valid) Sovronia Driver’s License (SDL) and controls a “Real World” Wallet (RW_WALLET) as well as a Digital Wallet (PDR).

2. Erin D Wallet. Erin D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by Erin, a Person.

3. Erin DD. Erin DD is the primary DIDDOC (DID Document) for Erin, a Person. It is issued by SOVRONA who records it on the SOVRONA VDR and it is also held in the Erin DD Wallet.

4. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1 is the identifier for the primary AGENT for Erin, a Person.

5. http://services.sovronia.ca/agent. http://services.sovronia.ca/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by the Province of Sovronia, an Organization. This includes all of the DID(s) and DID Document(s) associated with Erin.

6. SOVRONA VDR. SOVRONA VDR is the primary VDR (Verifiable Data Registry) controlled by SOVRONA, an Organization. The SOVRONA VDR is used to host the SVRN DID Method.

5.2 Erin’s DD Master Capability Authorization Narrative (F2)

7. Erin DD MVCA. Erin DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s DID Document at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to Erin.)

6. Province of Sovronia DID Document (DD) and DD Master Verifiable Capability Authorization (MVCA) Neighborhood

Province of Sovronia is an Organization and a “Real World” Nation State (sovronia.ca). The following is a graphitization of the Province of Sovronia’s DID and DID Document and its corresponding Master Verifiable Capability Authorization (MVCA).

Figure 6. Subgraphs F3 and F4: Province of Sovronia DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhood

6.1 Province of Sovronia DID Document (DD) Narrative (F3)

6. SOVRONA VDR. SOVRONA VDR is the primary VDR (Verifiable Data Registry) controlled by SOVRONA, an Organization. The SOVRONA VDR is used to host the SVRN DID Method.

8. PoS RW Nation State. The Province of Sovronia is a (fictitious) Province (RW_NATIONSTATE (“Real World” Nation State)) in Canada and the legal government jurisdiction for the citizens of the province. The Province of Sovronia is an Organization. The Province of Sovronia issues “Real World” Sovronia Driver’s Licenses (SDLs) but relies on SOVRONA to issue digital, verifiable SDLs.

9. PoS D Wallet. PoS D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by the Province of Sovronia, an Organization.

10. PoS DD. PoS DD is the primary DIDDOC (DID Document) for the Province of Sovronia, an Organization. It is issued by SOVRONA who records it on the SOVRONA VDR and it is held in the PoS D Wallet.

11. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1 is the identifier for the primary AGENT for the Province of Sovronia, an Organization.

12. http://services.sovrona.com/agent. http://services.sovrona.com/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by SOVRONA, an Organization.

6.2 Province of Sovronia DD Master Capability Authorization Neighborhood (F4)

13. PoS DD MVCA. PoS DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for the Province of Sovronia’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for the Province of Sovronia. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to the Province of Sovronia.)

7. DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. This subgraph highlights that with every new DID and DID Document, a corresponding MVCA is issued at the same time. The graphitization includes all of the DIDs in the Subgraph 0 scenario (plus their corresponding MVCAs).

Figure 7. DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhoods

7.1 DID Documents (DDs) and Master Verifiable Capability Authorizations (MVCAs) Narratives (F5)

3. Erin DD. Erin DD is the primary DIDDOC (DID Document) for Erin, a Person. It is issued by SOVRONA who records it on the SOVRONA VDR and it is also held in the Erin DD Wallet.

7. Erin DD MVCA. Erin DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s DID Document at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to Erin.)

10. PoS DD. PoS DD is the primary DIDDOC (DID Document) for the Province of Sovronia, an Organization. It is issued by SOVRONA who records it on the SOVRONA VDR and it is held in the PoS D Wallet.

13. PoS DD MVCA. PoS DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for the Province of Sovronia’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for the Province of Sovronia. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to the Province of Sovronia.)

14. Erin SDL DD. Erin SDL DD is the primary DIDDOC (DID Document) for Erin’s digital, verifiable SDL.

15. Erin SDL MVCA. Erin SDL MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

8. Erin’s Sovronia Drivers License Property Set DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhood

Subgraph F6 illustrates how a Property Set for an FDO is realized by a Verifiable Credential (VC). The following is a graphitization of Erin’s Sovronia Driver’s License Property Set.

NOTE: All the properties of an FDO (an FDO Property Set) are represented by one or more Verifiable Credentials associated with the FDO’s DID. A Property Set is associated with an FDO by creating a Verifiable Credential that holds the properties (and their values) that is linked to the FDO’s DID.

VE-ARM Principles
Figure 8. Subgraphs F6. Erin’s Sovronia Drivers License Property Set DID Document (DD) and Master Verifiable Capability Authorization (MVCA) Neighborhood

8.1 Erin’s Sovronia Drivers License Property Set Verifiable Credential (VC) Narrative (F6)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

20. Erin SDL Prop Set VC. Erin SDL Prop Set VC is the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set VC, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

9. Erin’s Sovronia Drivers License Property Set Delegated Directed Graph of Verifiable Capability Authorizations Neighborhood

This subgraph illustrates what a Delegated Directed Graph of Verifiable Capability Authorizations looks like. The graphitization of the Delegated Directed Graph of VCAs applies to Erin’s Sovronia Drivers License Property Set.

The Delegated Directed Graph of VCAs, in this scenario, consists of:

- Erin’s Sovronia Drivers License Property Set MVCA
- One VCA linked back to the MVCA
- One VCA Method Invocation (MI) linked back to the VCA

Figure 9. Subgraphs F7. Erin’s Sovronia Drivers License Property Set Delegated Directed Graph of Verifiable Capability Authorizations Neighborhood

9.1 Erin’s SDL Property Set Delegated Directed Graph of Verifiable Capability Authorizations Narrative (F7)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

18. Erin SDL VCA. Erin SDL VCA is the Verifiable Capability Authorization (VCA) created for Erin’s SDL Prop Set DD. The VCA was issued by the Province of Sovronia authorizing Erin to be able to present the properties (and their values) of Erin’s SDL to a third party using the Present method associated with Erin’s SDL Prop Set and supported (implemented) by Erin’s AGENT. The parent of Erin’s SDL VCA is the Erin SDL MVCA.

19. Erin SDL VCA MI. Erin SDL VCA MI is an example of a VCA Method Invocation (MI) that uses the Erin SDL VCA, which authorizes the potential execution of the Present method by Erin against Erin’s SDL Prop Set.

10. SOVRONA Trusted Decentralized Identity Provider (TDIDP) DID Document (DD), DD Master Verifiable Capability Authorization (MVCA) and Erin “Real World” Neighborhoods

Subgraph F8 is a visualization of:

- Erin’s “Real World” objects
  - Erin’s “Real World” Wallet (Erin RW (Leather) Wallet)
  - Erin’s “Real World” Sovronia Drivers License (Erin RW SDL)
- SOVRONA’s DID and DID Document (and corresponding MVCA)

Figure 10. SOVRONA TDIDP DID Document (DD), DD Master Verifiable Capability Authorization (MVCA) and Erin “Real World” Neighborhoods

10.1 Erin’s “Real World” Narrative (F9)

1. Erin. Erin is a RW_PERSON (“Real World” Person) and a citizen of the Province of Sovronia. Erin also holds a (valid) Sovronia Driver’s License (SDL) and controls a “Real World” Wallet (RW_WALLET) as well as a Digital Wallet (PDR).

8. PoS RW Nation State. The Province of Sovronia is a (fictitious) Province (RW_NATIONSTATE (“Real World” Nation State)) in Canada and the legal government jurisdiction for the citizens of the province. The Province of Sovronia is an Organization. The Province of Sovronia issues “Real World” Sovronia Driver’s Licenses (SDLs) but relies on SOVRONA to issue digital, verifiable SDLs.

22. Erin RW Wallet. Erin RW Wallet is a RW_WALLET (“Real World” (Leather) Wallet) and it is used to hold Erin’s “Real World” Sovronia Driver’s License (Erin RW SDL). Erin RW Wallet is owned and controlled by Erin.

23. Erin RW SDL. Erin RW SDL is Erin’s RW_SDL (“Real World” Sovronia Driver’s License) and it is held by Erin in Erin’s RW Wallet.

10.2 SOVRONA TDIDP Narrative (F10)

12. http://services.sovrona.com/agent. http://services.sovrona.com/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by SOVRONA, an Organization.

24. SOVRONA Organization. SOVRONA is an Organization and the primary “Real World” TDIDP (RW_DIDPROVIDER) for the citizens and government of Sovronia, a fictitious province in Canada. SOVRONA controls a Digital Wallet (PDR (Personal Data Registry)), SOVRONA D Wallet, as well as the SOVRONA Verifiable Data Registry (VDR).

25. SOVRONA D Wallet. SOVRONA D Wallet is a Digital Wallet (PDR (Private Data Registry)) that is controlled by SOVRONA, an Organization.

26. SOVRONA DD. SOVRONA DD is the primary DIDDOC (DID Document) for SOVRONA, an Organization.

27. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1 is the identifier for the primary AGENT for SOVRONA, an Organization.

28. SOVRONA DD MVCA. SOVRONA DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for SOVRONA’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for SOVRONA’s DD. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is SOVRONA, the Organization.)

11. VE-ARM “All-In” Graph View

The following is a depiction of the “All-In” view of The Verifiable Economy Architecture Reference Model (VE-ARM) graph. This graph view represents the union of all of the previous subgraphs.

Figure 11. Subgraph F10. The Verifiable Economy “All-In” Graph View

11.1 Narrative

1. Erin. Erin is a RW_PERSON (“Real World” Person) and a citizen of the Province of Sovronia. Erin also holds a (valid) Sovronia Driver’s License (SDL) and controls a “Real World” Wallet (RW_WALLET) as well as a Digital Wallet (PDR).

2. Erin D Wallet. Erin D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by Erin, a Person.

3. Erin DD. Erin DD is the primary DIDDOC (DID Document) for Erin, a Person. It is issued by SOVRONA who records it on the SOVRONA VDR and it is also held in the Erin DD Wallet.

4. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1. DID:SVRN:PERSON:04900EEF-38E7-487E-8D6F-09D6C95D9D3E#fdom1 is the identifier for the primary AGENT for Erin, a Person.

5. http://services.sovronia.ca/agent. http://services.sovronia.ca/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by the Province of Sovronia, an Organization. This includes all of the DID(s) and DID Document(s) associated with Erin.

6. SOVRONA VDR. SOVRONA VDR is the primary VDR (Verifiable Data Registry) controlled by SOVRONA, an Organization. The SOVRONA VDR is used to host the SVRN DID Method.

7. Erin DD MVCA. Erin DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s DID Document at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to Erin.)

8. PoS RW Nation State. The Province of Sovronia is a (fictitious) Province (RW_NATIONSTATE (“Real World” Nation State)) in Canada and the legal government jurisdiction for the citizens of the province. The Province of Sovronia is an Organization. The Province of Sovronia issues “Real World” Sovronia Driver’s Licenses (SDLs) but relies on SOVRONA to issue digital, verifiable SDLs.

9. PoS D Wallet. PoS D Wallet is a Digital Wallet (PDR (Private Data Registry)) controlled by the Province of Sovronia, an Organization.

10. PoS DD. PoS DD is the primary DIDDOC (DID Document) for the Province of Sovronia, an Organization. It is issued by SOVRONA who records it on the SOVRONA VDR and it is held in the PoS D Wallet.

11. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1. DID:SVRN:ORG:0E51593F-99F7-4722-9139-3E564B7B8D2B#fdom1 is the identifier for the primary AGENT for the Province of Sovronia, an Organization.

12. http://services.sovrona.com/agent. http://services.sovrona.com/agent is the primary SEP (Service Endpoint) for accessing the AGENT(s) associated with the DID(s) and DID Document(s) issued by SOVRONA, an Organization.

13. PoS DD MVCA. PoS DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for the Province of Sovronia’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for the Province of Sovronia. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods to the Province of Sovronia.)

14. Erin SDL DD. Erin SDL DD is the primary DIDDOC (DID Document) for Erin’s digital, verifiable SDL.

15. Erin SDL MVCA. Erin SDL MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

16. Erin SDL Prop Set DD. Erin SDL Prop Set DD is the primary DIDDOC (DID Document) for the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

17. Erin SDL Prop Set MVCA. Erin SDL Prop Set MVCA is the Master Verifiable Capability Authorization (MVCA) created for Erin’s SDL Prop Set (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of the Province of Sovronia for Erin’s SDL. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is the Province of Sovronia.)

18. Erin SDL VCA. Erin SDL VCA is the Verifiable Capability Authorization (VCA) created for Erin’s SDL Prop Set DD. The VCA was issued by the Province of Sovronia authorizing Erin to be able to present the properties (and their values) of Erin’s SDL to a third party using the Present method associated with Erin’s SDL Prop Set and supported (implemented) by Erin’s AGENT. The parent of Erin’s SDL VCA is the Erin SDL MVCA.

19. Erin SDL VCA MI. Erin SDL VCA MI is an example of a VCA Method Invocation (MI) that uses the Erin SDL VCA, which authorizes the potential execution of the Present method by Erin against Erin’s SDL Prop Set.

20. Erin SDL Prop Set VC. Erin SDL Prop Set VC is the Verifiable Credential (VC) that is used to represent the properties of Erin’s digital, verifiable SDL (and their values). The properties (and their values) are represented in Erin SDL Prop Set VC, a Verifiable Credential associated with the DID in Erin SDL Prop Set DD.

21. DID:SVRN:VC:0B114A04-2559-4C68-AE43-B7004646BD76#fdom1. DID:SVRN:VC:0B114A04-2559-4C68-AE43-B7004646BD76#fdom1 is the identifier for the primary AGENT for Erin SDL Property Set DD.

22. Erin RW Wallet. Erin RW Wallet is a RW_WALLET (“Real World” (Leather) Wallet) and it is used to hold Erin’s “Real World” Sovronia Driver’s License (Erin RW SDL). Erin RW Wallet is owned and controlled by Erin.

23. Erin RW SDL. Erin RW SDL is Erin’s RW_SDL (“Real World” Sovronia Driver’s License) and it is held by Erin in Erin’s RW Wallet.

24. SOVRONA Organization. SOVRONA is an Organization and the primary “Real World” TDIDP (RW_DIDPROVIDER) for the citizens and government of Sovronia, a fictitious province in Canada. SOVRONA controls a Digital Wallet (PDR (Personal Data Registry)), SOVRONA D Wallet, as well as the SOVRONA Verifiable Data Registry (VDR).

25. SOVRONA D Wallet. SOVRONA D Wallet is a Digital Wallet (PDR (Private Data Registry)) that is controlled by SOVRONA, an Organization.

26. SOVRONA DD. SOVRONA DD is the primary DIDDOC (DID Document) for SOVRONA, an Organization.

27. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1. DID:SVRN:ORG:01E9CFEA-E36D-4111-AB68-D99AE9D86D51#fdom1 is the identifier for the primary AGENT for SOVRONA, an Organization.

28. SOVRONA DD MVCA. SOVRONA DD MVCA is the Master Verifiable Capability Authorization (MVCA) created for SOVRONA’s DID Document (DD) at the time that the DID and DID Document were first issued by SOVRONA on behalf of itself for SOVRONA’s DD. (A new MVCA is created whenever a new DID and DID Document are issued by a TDIDP. The MVCA grants authorization for any and all methods defined for the subject to the effective issuer. In this case, the effective issuer is SOVRONA, the Organization.)

29. DID:SVRN:LICENSE:999902-638#fdom1. DID:SVRN:LICENSE:999902-638#fdom1 is the identifier for the primary AGENT for Erin SDL DD.

12. Conclusions

The goals of this article are three-fold:

1. Introduce the concept of a Verifiable Capability Authorization (VCA) and how VCAs can be used to implement controls over which specific methods a particular party is allowed to execute against a particular instance of a Fully Decentralized Object (FDO). VCAs are both delegatable and attenuatable.
2. Illustrate how #graphitization techniques can be used for visualizing:
   - Trusted Decentralized Identifiers (DIDs)
   - DID Documents
   - Trusted Digital Agents (and their Service Endpoints (SEPs))
   - Verifiable Credentials (VCs)
   - Verifiable Capability Authorizations (VCAs) and,
   - most importantly, their myriad of interrelationships.
3. Use the above 2 goals to further detail and describe how to use the VE-ARM model for implementing trusted, reliable, efficient, frictionless, standards-based, global-scale software systems based on Fully Decentralized Objects (FDOs).

This article described The Verifiable Economy Architecture Reference Model (VE-ARM) using a #graphitization approach for modeling and visualization. The resulting overall graph was partitioned into a series of subgraphs that depict the key elements of the architecture reference model. Each subgraph was documented with a narrative that is mapped to the numbered blue targets used to identify each element in each subgraph.


Damien Bod

Securing an ASP.NET Core app and web API using windows authentication


This post shows how an ASP.NET Core Web API and an ASP.NET Core Razor Page application can be implemented to use Windows authentication. The Razor Page application uses Javascript to display an autocomplete control which gets its data indirectly from the service API, which is protected using Windows authentication. The Razor Page application uses the API to get the auto-complete suggestions data. Both applications are protected using Windows authentication.

Code: https://github.com/damienbod/PoCWindowsAuth

Setup the API

The ASP.NET Core demo API is set up to use Windows authentication. The launch settings windowsAuthentication property is set to true and the anonymousAuthentication property to false. The application host file settings on your development PC would also need to be configured to allow Windows authentication, which is disabled by default. See the Stack Overflow link at the bottom for more information.

{ "iisSettings": { "windowsAuthentication": true, "anonymousAuthentication": false, "iisExpress": { "applicationUrl": "https://localhost:44364", "sslPort": 44364 } },

The Startup ConfigureServices method is configured to require authentication using the IISDefaults.AuthenticationScheme scheme. This would need to be changed if you were using a different hosting model.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(IISDefaults.AuthenticationScheme);

    services.AddControllers().AddJsonOptions(option =>
        option.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase);
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseHttpsRedirection();
    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

The API is protected using the Authorize attribute. This example returns the user name from the Windows authentication.

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class MyDataController : ControllerBase
{
    private readonly ILogger<MyDataController> _logger;

    public MyDataController(ILogger<MyDataController> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new List<string> { User.Identity.Name };
    }
}

Implement the ASP.NET Core Razor pages

The application calling the API also requires Windows authentication and requests the data from the API project. The HttpClient instance requesting the data from the API project must send the default credentials with each API call. An HttpClientHandler is used to implement this. The HttpClientHandler is registered for a named HttpClient using AddHttpClient, which can then be used anywhere in the application.

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(IISDefaults.AuthenticationScheme);

    services.AddHttpClient();

    HttpClientHandler handler = new HttpClientHandler()
    {
        UseDefaultCredentials = true
    };

    services.AddHttpClient("windowsAuthClient", c => { })
        .ConfigurePrimaryHttpMessageHandler(() => handler);

    services.AddScoped<MyDataClientService>();

    services.AddRazorPages().AddJsonOptions(option =>
        option.JsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase);
}

A client service is implemented to call the API from the second project. This client uses the IHttpClientFactory to create instances of the HttpClient. The CreateClient method is used to create an instance using the named client which was configured in the Startup class. This instance will send credentials to the API.

public MyDataClientService(
    IConfiguration configurations,
    IHttpClientFactory clientFactory)
{
    _configurations = configurations;
    _clientFactory = clientFactory;
    _jsonSerializerOptions = new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true,
    };
}

public async Task<List<string>> GetMyData()
{
    try
    {
        var client = _clientFactory.CreateClient("windowsAuthClient");
        client.BaseAddress = new Uri(_configurations["MyApiUrl"]);

        var response = await client.GetAsync("api/MyData");
        if (response.IsSuccessStatusCode)
        {
            var data = await JsonSerializer.DeserializeAsync<List<string>>(
                await response.Content.ReadAsStreamAsync());
            return data;
        }

        var error = await response.Content.ReadAsStringAsync();
        throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}, Message: {error}");
    }
    catch (Exception e)
    {
        throw new ApplicationException($"Exception {e}");
    }
}

JavaScript UI

If using JavaScript to call the API protected with Windows authentication, things can become a bit tricky because of CORS. I prefer to avoid this and instead proxy the calls from my trusted backend to the API. The OnGetAutoCompleteSuggest method is used to call the API. This would also make it easy to map DTOs from my API to my view DTOs as required.

public class IndexModel : PageModel
{
    private readonly ILogger<IndexModel> _logger;
    private readonly MyDataClientService _myDataClientService;

    public List<string> DataFromApi;
    public string SearchText { get; set; }
    public List<PersonCity> PersonCities;

    public IndexModel(MyDataClientService myDataClientService,
        ILogger<IndexModel> logger)
    {
        _myDataClientService = myDataClientService;
        _logger = logger;
    }

    public async Task OnGetAsync()
    {
        DataFromApi = await _myDataClientService.GetMyData();
    }

    public async Task<ActionResult> OnGetAutoCompleteSuggest(string term)
    {
        PersonCities = await _myDataClientService.Suggest(term);
        SearchText = term;
        return new JsonResult(PersonCities);
    }
}

The Razor Page underneath uses an autocomplete implemented in JavaScript to suggest data requested from the API. Any JavaScript framework can be used in this way.

@page "{handler?}" @model IndexModel @{ ViewData["Title"] = "Home page"; } <div class="text-center"> <p>Data from API:</p> @foreach (string item in Model.DataFromApi) { <p>@item</p><br /> } </div> <hr /> <fieldset class="form"> <legend>Search for a person in the search engine</legend> <table width="500"> <tr> <th></th> </tr> <tr> <td> <input class="form-control" id="autocomplete" type="text" style="width:500px" /> </td> </tr> </table> </fieldset> <br /> <div class="card" id="results"> <h5 class="card-header"> <span id="docName"></span> <span id="docFamilyName"></span> </h5> <div class="card-body"> <p class="card-text"><span id="docInfo"></span></p> <p class="card-text"><span id="docCityCountry"></span></p> <p class="card-text"><span id="docWeb"></span></p> </div> </div> @section scripts { <script type="text/javascript"> var items; $(document).ready(function () { $("#results").hide(); $("input#autocomplete").autocomplete({ source: function (request, response) { $.ajax({ url: "Index/AutoCompleteSuggest", dataType: "json", data: { term: request.term, }, success: function (data) { var itemArray = new Array(); for (i = 0; i < data.length; i++) { itemArray[i] = { label: data[i].name + " " + data[i].familyName, value: data[i].name + " " + data[i].familyName, data: data[i] } } console.log(itemArray); response(itemArray); }, error: function (data, type) { console.log(type); } }); }, select: function (event, ui) { $("#results").show(); $("#docNameId").text(ui.item.data.id); $("#docName").text(ui.item.data.name); $("#docFamilyName").text(ui.item.data.familyName); $("#docInfo").text(ui.item.data.info); $("#docCityCountry").text(ui.item.data.cityCountry); $("#docWeb").text(ui.item.data.web); console.log(ui.item); } }); }); </script> }

If everything is set up correctly, the ASP.NET Core application displays the API data, which is protected using Windows authentication.

CSRF

If using Windows authentication, you need to protect against CSRF like any application using cookies. It is also recommended NOT to use Windows authentication on the public internet; modern security architectures like OpenID Connect should be used whenever possible. Windows authentication works well on intranets, or for making changes to existing applications which already use it in secure networks.

Links:

https://stackoverflow.com/questions/36946304/using-windows-authentication-in-asp-net

https://docs.microsoft.com/en-us/aspnet/web-api/overview/security/integrated-windows-authentication

Saturday, 24. April 2021

DustyCloud Brainstorms

Beyond the shouting match: what is a blockchain, really?

If there's one thing that's true about the word "blockchain", it's that these days people have strong opinions about it. Open your social media feed and you'll see people either heaping praises on blockchains, calling them the saviors of humanity, or condemning them as destroying and burning down the planet …

If there's one thing that's true about the word "blockchain", it's that these days people have strong opinions about it. Open your social media feed and you'll see people either heaping praises on blockchains, calling them the saviors of humanity, or condemning them as destroying and burning down the planet and making the rich richer and the poor poorer and generally all the other kinds of fights that people like to have about capitalism (also a quasi-vague word occupying some hotly contested mental real estate).

There are good reasons to hold opinions about various aspects of what are called "blockchains", and I too have some pretty strong opinions I'll be getting into in a followup article. The followup article will be about "cryptocurrencies", which many people also seem to think of as synonymous with "blockchains", but this isn't particularly true either, but we'll deal with that one then.

In the meanwhile, some of the fighting on the internet is kind of confusing, but even more importantly, kind of confused. Some of it might be what I call "sportsballing": for whatever reason, for or against blockchains has become part of your local sportsball team, and we've all got to be team players or we're gonna let the local team down already, right? And the thing about sportsballing is that it's kind of arbitrary and it kind of isn't, because you might pick a sportsball team because you did all your research or you might have picked it because that just happens to be the team in your area or the team your friends like, but god almighty once you've picked your sportsball team let's actually not talk against it because that might be giving in to the other side. But sportsballing kind of isn't arbitrary either because it tends to be initially connected to real communities of real human beings and there's usually a deeper cultural web than appears at surface level, so when you're poking at it, it appears surface-level shallow but there are some real intricacies beneath the surface. (But anyway, go sportsball team.)

But I digress. There are important issues to discuss, yet people aren't really discussing them, partly because people mean different things. "Blockchain" is a strange term that encompasses a wide idea space, and what people consider or assume essential to it vary just as widely, and thus when two people are arguing they might not even be arguing about the same thing. So let's get to unpacking.

"Blockchain" as handwaving towards decentralized networks in general

Years ago I was at a conference about decentralized networked technology, and I was having a conversation with someone I had just met. This person was telling me how excited they were about blockchains... finally we have decentralized network designs, and so this seems really useful for society!

I paused for a moment and said yes, blockchains can be useful for some things, though they tend to have significant costs or at least tradeoffs. It's good that we also have other decentralized network technology; for example, the ActivityPub standard I was involved in had no blockchains but did rely on the much older "classic actor model."

"Oh," the other person said, "I didn't know there were other kinds of decentralized network designs. I thought that 'blockchain' just meant 'decentralized network technology'."

It was as if a light had turned on and illuminated the room for me. Oh! This explained so many conversations I had been having over the years. Of course... for many people, blockchains like Bitcoin were the first ever exposure they had (aside from email, which maybe they never gave much thought to as being decentralized) of something that involved a decentralized protocol. So for many people, "blockchain" and "decentralized technology" are synonyms, if not in technical design, but in terms of understanding of a space.

Mark S. Miller, who was standing next to me, smiled and gave a very interesting followup: "There is only one case in which you need a blockchain, and that is in a decentralized system which needs to converge on a single order of events, such as a public ledger dealing with the double spending problem."

Two revelations at once. It was a good conversation... it was a good start. But I think there's more.

Blockchains are the "cloud" of merkle trees

As time has gone on, the discourse over blockchains has gotten more dramatic. This is partly because what a "blockchain" is hasn't been well defined.

All terminology exists in an ever-present battle between fuzziness and crispness, with some terms being much clearer than others. The term "boolean" has a fairly crisp definition in computer science, but if I ask you to show me your "stove", the device you show me today may be incomprehensible to someone's definition a few centuries ago, particularly in that today it might not involve fire. Trying to define a term purely in terms of its functionality can also cause confusion: if I asked you to show me a stove, and you showed me a computer processor or a car engine, I might be fairly confused, even though technically people enjoy showing off that they can cook eggs on both of these devices when they get hot enough. (See also: Identity is a Katamari, language is a Katamari explosion.)

Still, some terms are fuzzier than others, and as far as terms go, "blockchain" is quite fuzzy. Hence my joke: "Blockchains are the 'cloud' of merkle trees."

This ~joke tends to get a lot of laughs out of a particular kind of audience, and confused looks from others, so let me explain. The one thing everyone seems to agree on is that it's a "chain of blocks", but all that really seems to mean is that it's a merkle tree... really, just an immutable datastructure where one node points at the parent node which points at the parent node all the way up. The joke then is not that this merkle tree runs on a cloud, but that "cloud computing" means approximately nothing: it's marketing speak for some vague handwavey set of "other peoples' computers are doing computation somewhere, possibly on your behalf sometimes." Therefore, "cloud of merkle trees" refers to the vagueness of the situation. (As everyone knows, jokes are funnier when fully explained, so I'll turn on my "STUDIO LAUGHTER" sign here.)

So, a blockchain is a chain of blocks, ie a merkle tree, and I mean, technically speaking, that means that Git is a blockchain (especially if the commits are signed), but when you see someone arguing on the internet about whether or not blockchains are "good" or "bad", they probably weren't thinking about git, which aside from having a high barrier of entry in its interface and some concerns about the hashing algorithm used, isn't really something likely to drag you into an internet flamewar.
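
To make the "chain of blocks, ie a merkle tree" point concrete, here is a minimal sketch in Python (my own illustration, not code from Git or any particular blockchain): every block commits to its parent by hash, so rewriting any earlier block invalidates everything built on top of it.

import hashlib
import json

def make_block(parent_hash, payload):
    """Create a block that commits to its parent by hash."""
    block = {"parent": parent_hash, "payload": payload}
    block_bytes = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(block_bytes).hexdigest()
    return block

# Build a tiny chain: genesis <- block1 <- block2
genesis = make_block(parent_hash=None, payload="genesis")
block1 = make_block(genesis["hash"], "second entry")
block2 = make_block(block1["hash"], "third entry")

def verify(chain):
    """Recompute each hash and check that the parent links line up."""
    for parent, child in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(json.dumps(
            {"parent": parent["parent"], "payload": parent["payload"]},
            sort_keys=True).encode("utf-8")).hexdigest()
        if recomputed != parent["hash"] or child["parent"] != parent["hash"]:
            return False
    return True

print(verify([genesis, block1, block2]))  # True, until someone edits history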

"Blockchain" is to "Bitcoin" what "Roguelike" is to "Rogue"

These days it's common to see people either heaping praises on blockchains or criticizing them, and those people tend to be shouting past one another. I'll save unpacking that for another post. In the meanwhile though, it's worth noting that people might not be talking about the same things.

What isn't in doubt is whether or not Bitcoin is a blockchain... trying to understand and then explore the problem space around Bitcoin is what created the term "blockchain". It's a bit like the video game genre of roguelikes, which started with the game Rogue, particularly explored and expanded upon in NetHack, and then suddenly exploding into the indie game scene as a "genre" of its own. Except the genre has become fuzzier and fuzzier as people have explored the surrounding space. What is essential? Is a grid based layout essential? Is a non-euclidean grid acceptable? Do you have to provide an ascii or ansi art interface so people can play in their terminals? Dare we allow unicode characters? What if we throw out terminals altogether and just play on a grid of 2d pixelart? What about 3d art? What about permadeath? What about the fantasy theme? What about random level generation? What are the key features of a roguelike?

Well now we're at the point where I pick up a game like Blazing Beaks and it calls itself a "roguelite", which I guess is embracing the point that terminology has gotten extremely fuzzy... this game feels more like Robotron than Rogue.

So... if "blockchain" is to Bitcoin what "roguelike" is to Rogue, then what's essential to a blockchain? Does the blockchain have to be applied to a financial instrument, or can it be used to store updateable information about eg identity? Is global consensus required? Or what about a "trusted quorum" of nodes, such as in Hyperledger? Is "mining" some kind of asset a key part of the system? Is proof of work acceptable, or is proof of stake okay? What about proof of space, proof of space-time, proof of pudding?

On top of all this, some of the terms around blockchains have been absorbed as if into them. For instance, I think to many people, "smart contract" means something like "code which runs on a blockchain" thanks to Ethereum's major adoption of the term, but the E programming language described "smart contracts" as the "likely killer app of distributed capabilities" all the way back in 1999, and was borrowing the term from Nick Szabo, but really the same folks working on E had described many of those same ideas in the Agoric Papers back in 1988. Bitcoin wasn't even a thing at all until at least 2008, so depending on how you look at it, "smart contracts" precede "blockchains" by one or two decades. So "blockchain" has somehow even rolled up terms outside of its space as if within it. (By the way, I don't think anyone has given a good and crisp definition for "smart contract" either despite some of these people trying to give me one, so let me give you one that I think is better and embraces its fuzziness: "Smart contracts allow you to do the kinds of things you might do with legal contracts, but relying on networked computation instead of a traditional state-based legal system." It's too bad more people also don't know about the huge role that Mark Miller's "split contracts" idea plays into this space because that's what makes the idea finally makes sense... but that's a conversation for another time.) (EDIT: Well, after I wrote this, Kate Sills lent me her definition, which I think is the best one: "Smart contracts are credible commitments using technology, and outside a state-provided legal system." I like it!)

So anyway, the point of this whole section is to say that kind of like roguelike, people are thinking of different things as essential to blockchains. Everyone roughly agrees on the jumping-off point of ideas but since not everyone agrees from there, it's good to check in when we're having the conversation. Wait, you do/don't like this game because it's a roguelike? Maybe we should check in on what features you mean. Likewise for blockchains. Because if you're blaming blockchains for burning down the planet, more than likely you're not condemning signed git repositories (or at least, if you're condemning them, you're probably doing so about it from an aspect that isn't the fundamental datastructure... probably).

This is an "easier said than done" kind of thing though, because of course, I'm kind of getting into some "in the weeds" level of details here... but it's the "in the weeds" where all the substance of the disagreements really are. The person you are talking with might not actually even know or consider the same aspects to be essential that you consider essential though, so taking some time to ask which things we mean can help us lead to a more productive conversation sooner.

"Blockchain" as an identity signal

First, a digression. One thing that's kind of curious about the term "virtue signal" is that in general it tends to be used as a kind of virtue signal. It's kind of like the word hipster in the previous decade, which weirdly seemed to be obsessively and pejoratively used by people who resembled hipsters more than anyone else. Hence I used to make a joke called "hipster recursion", which is that since hipsters seem more obsessed with pejorative labeling of hipsterism than anyone else, there's no way to call someone a "hipster" without yourself taking on hipster-like traits, and so inevitably even this conversation is N-levels deep into hipster recursion for some numerical value of N.

"Virtue signaling" appears similar, but even more ironically so (which is a pretty amazing feat given how much of hipsterdom seems to surround a kind of inauthentic irony). When I hear someone say "virtue signaling" with a kind of sneer, part of that seems to be acknowledging that other people are sending signals merely to impress others that they are some kind of the same group but it seems as if it's being raised as in a you-know-and-I-know-that-by-me-acknowledging-this-I'm-above-virtue-signaling kind of way. Except that by any possible definition of virtue signaling, the above appears to be a kind of virtue signaling, so now we're into virtue signaling recursion.

Well, one way to claw our way out of the rabbithole of all this is to drop the pejorative aspect of it and just acknowledge that signaling is something that everyone does. Hence me saying "identity signaling" here. You can't really escape identity signaling, or even sportsballing, but you can acknowledge that it's a thing that we all do, and there's a reason for it: people only have so much time to find out information about each other, so they're searching for clues that they might align and that, if they introduce you to their peer group, that you might align with them as well, without access to a god-like view of the universe where they know exactly what you think and exactly what kinds of things you've done and exactly what way you'll behave in the future or whether or not you share the same values. (After all, what else is virtue ethics but an ethical framework that takes this in its most condensed form as its foundation?) But it's true that at its worst, this seems to result in shallow, quick, judgmental behavior, usually based on stereotypes of the other side... which can be unfortunate or unfair to whomever is being talked about. But also on the flip side, people also do identity signal to each other because they want to create a sense of community and bonding. That's what a lot of culture is. It's worth acknowledging then that this occurs, recognizing its use and limitations, without pretending that we are above it.

So wow, that's quite a major digression, so now let's get back to "identity signaling". There is definitely a lot of identity signaling that tends to happen around the word "blockchain", for or against. Around the critiques of the worst of this, I tend to agree: I find much of the machismo hyper-white-male-privilege that surrounds some of the "blockchain" space uncomfortable or cringey.

But I also have some close friends who are not male and/or are people of color and those ones tend to actually suffer the worst of it from these communities internally, but also seem to find things of value in them, but particularly seem to feel squeezed externally when the field is reduced to these kinds of (anti?-)patterns. There's something sad about that, where I see on the one hand friends complaining about blockchain from the outside on behalf of people who on the inside seem to be both struggling internally but then kind of crushed by being lumped into the same identified problems externally. This is hardly a unique problem but it's worth highlighting for a moment I think.

But anyway, I've taken a bunch of time on this, more than I care to, maybe because (irony again?) I feel that too much of public conversation is also hyperfocusing on this aspect... whether there's a subculture around blockchain, whether or not that subculture is good or bad, etc. There's a lot worthwhile in unpacking this discourse-wise, but some of the criticisms of blockchains as a technology (to the extent it even is coherently one) seem to get lumped up into all of this. It's good to provide thoughtful cultural critique, particularly one which encourages healthy social change. And we can't escape identity signaling. But as someone who's trying to figure out what properties of networked systems we do and don't want, I feel like I'm trying to navigate the machine and for whatever reason, my foot keeps getting caught in the gears here. Well, maybe that itself is pointing to some architectural mistakes, but socially architectural ones. But it's useful to also be able to draw boundaries around it so that we know where this part of the conversation begins and ends.

"Blockchain" as "decentralized centralization" (or "decentralized convergence")

One of the weird things about people having the idea of "blockchains" as being synonymous with "decentralization" is that it's kind of both very true and very untrue, depending on what abstraction layer you're looking at.

For a moment, I'm going to frame this in harsh terms: blockchains are decentralized centralization.

What? How dare I! You'll notice that this section is in harsh contrast to the "blockchain as handwaving towards decentralized networks in general" section... well, I am acknowledging the decentralized aspect of it, but the weird thing about a blockchain is that it's a decentralized set of nodes converging on (creating a centrality of!) a single abstract machine.

Contrast with classic actor model systems like CapTP in Spritely Goblins, or as less good examples (because they aren't quite as behavior-oriented as they are correspondence-oriented, usually) ActivityPub or SMTP (ie, email). All of these systems involve decentralized computation and collaboration stemming from sending messages to actors (aka "distributed objects"). Of CapTP this is especially clear and extreme: computations happen in parallel across many collaborating machines (and even better, many collaborating objects on many collaborating machines), and the behavior of other machines and their objects is often even opaque to you. (CapTP survives this in a beautiful way, being able to do well on anonymous, peer to peer, "mutually suspicious" networks. But maybe read my rambling thoughts about CapTP elsewhere.)

While to some degree there are some very clever tricks in the world of cryptography where you may be able to get back some of the opacity, this tends to be very expensive, adding an expensive component to the already inescapable additional expenses of a blockchain. A multi-party blockchain with some kind of consensus will always, by definition be slower than a single machine operating alone.

If you are irritated by this framing: good. It's probably good to be irritated by it at least once, if you can recognize the portion of truth in it. But maybe that needs some unpacking to get there. It might be better to say "blockchains are decentralized convergence", but I have some other phrasing that might be helpful.

"Blockchain" as "a single machine that many people run"

There's value in having a single abstract machine that many people run. The most famous source of value is in the "double spending problem". How do we make sure that when someone has money, they don't spend that money twice?

Traditional accounting solves this with a linear, sequential ledger, and it turns out that the right solution boils down to the same thing in computers. Emphasis on sequential: in order to make sure money balances out right, we really do have to be able to order things.

Here's the thing though: the double spending problem was in a sense solved in terms of single-computers a long time ago in the object capability security community. Capability-based Financial Instruments was written about a decade before blockchains even existed and showed off how to make a "mint" (kind of like a fiat-currency bank) that can be implemented in about 25 lines of code in the right architecture (I've ported it to Goblins, for instance) and yet has both distributed accounts and is robust against corruption on errors.

However, this seems to be running on a "single-computer based machine", and again operates like a fiat currency. Anyone can create their own fiat currency like this, and they are cheap, cheap, cheap (and fast!) to make. But it does rely on sequentiality to some degree to operate correctly (avoiding a class of attacks called "re-entrancy attacks").
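
For a feel of the pattern being described, here is a rough Python analogue of that mint/purse design (my own sketch with invented names, not the original E or Goblins code, and it omits the sealer/unsealer tricks that make the real version capability-secure): only the mint can create purses with new money, and balances only move between purses of the same mint, one sequential operation at a time.

class Purse:
    def __init__(self, mint, balance=0):
        self._mint = mint          # the issuing mint; used to check provenance
        self.balance = balance

    def deposit(self, amount, src):
        """Move `amount` from purse `src` into this purse."""
        if src._mint is not self._mint:
            raise ValueError("purse from a different mint")
        if not (0 <= amount <= src.balance):
            raise ValueError("invalid amount")
        src.balance -= amount
        self.balance += amount

class Mint:
    """Whoever holds the mint can issue new currency out of thin air."""
    def make_purse(self, initial_balance):
        return Purse(self, initial_balance)

carol_mint = Mint()
alice = carol_mint.make_purse(100)
bob = carol_mint.make_purse(0)
bob.deposit(40, alice)
print(alice.balance, bob.balance)  # 60 40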

But this "single-computer based machine" might bother you for a couple reasons:

We might be afraid the server might crash and service will be interrupted, or worse yet, we will no longer be able to access our accounts.

Or, even if we could trade these on an open market, and maybe diversify our portfolio, maybe we don't want to have to trust a single operator or even some appointed team of operators... maybe we have a lot of money in one of these systems and we want to be sure that it won't suddenly vanish due to corruption.

Well, if our code operates deterministically, then what if from the same initial conditions (or saved snapshot of the system) we replay all input messages to the machine? Functional programmers know: we'll end up with the same result.
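
A tiny sketch of that claim, modeling the "machine" as a pure step function folded over its input log (the toy ledger example is mine, purely for illustration):

from functools import reduce

def step(state, message):
    """One deterministic state transition: here, a toy account ledger."""
    account, amount = message
    new_state = dict(state)
    new_state[account] = new_state.get(account, 0) + amount
    return new_state

inputs = [("alice", 100), ("bob", 25), ("alice", -40)]

# Replaying the same inputs from the same initial state always converges
# on the same result, no matter which machine does the replaying.
run_1 = reduce(step, inputs, {})
run_2 = reduce(step, inputs, {})
assert run_1 == run_2 == {"alice": 60, "bob": 25}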

So okay, we might want to be sure this doesn't accidentally get corrupted, maybe for backup reasons. So maybe we submit the input messages to two computers, and then if one crashes, we just continue on with the second one until the other comes up, and then we can restore the first one from the progress the second machine made while the first one was down.

Oh hey, this is already technically a blockchain. Except our trust model is that we implicitly trust both machines.

Hm. Maybe we're now worried that we might have top-down government pressure to coerce some behavior on one of our nodes, or maybe we're worried that someone at a local datacenter is going to flip some bits to make themselves rich. So we actually want to spread this abstract machine out over three countries. So okay, we do that, and now we set a rule agreeing on what all the series of input messages are... if two of three nodes agree, that's good enough. Oh hey look, we've just invented the "small-quorum-style" blockchain/ledger!

(And yes, you can wire up Goblins to do just this; a hint as to how is seen in the Terminal Phase time travel demo. Actually, let's come back to that later.)
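
As a toy illustration of that "two of three nodes agree" rule (my own sketch, far simpler than any real consensus protocol, with no handling of equivocation, retries, or view changes): each node proposes what it thinks the next input message is, and the entry is only appended to the shared log if a quorum proposed the same thing.

from collections import Counter

def next_entry(proposals, quorum=2):
    """Return the proposed entry if at least `quorum` nodes agree, else None."""
    value, count = Counter(proposals).most_common(1)[0]
    return value if count >= quorum else None

log = []
proposals = ["transfer 10 from alice to bob",       # node in country A
             "transfer 10 from alice to bob",       # node in country B
             "transfer 999 from alice to mallory"]  # tampered node in country C
entry = next_entry(proposals)
if entry is not None:
    log.append(entry)
print(log)  # the two honest nodes outvote the tampered one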

Well, okay. This is probably good enough for a private financial asset, but what about if we want to make something more... global? Where nobody is in charge!

Well, we could do that too. Here's what we do.

First, we need to prevent a "swarming attack" (okay, this is generally called a "sybil attack" in the literature, but for a multitude of reasons I won't get into, I don't like that term). If a global set of peers are running this single abstract machine, we need to make sure there aren't invocations filling up the system with garbage, since we all basically have to keep that information around. Well... this is exactly where those proof-of-foo systems come in for the first time; in fact Proof of Work's origin is in something called Hashcash which was designed to add "friction" to disincentivize spam for email-like systems. If we don't do something friction-oriented in this category, our ledger is going to be too easily filled with garbage too fast. We also need to agree on what the order of messages is, so we can use this mechanism in conjunction with a consensus algorithm.
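
Roughly, that Hashcash-style friction looks like this (a toy sketch with an arbitrary difficulty value; real systems tune the difficulty and differ in the details): to submit a message you must find a nonce whose hash over the message starts with some number of zero bits, which is expensive to find but cheap for everyone else to verify.

import hashlib
from itertools import count

DIFFICULTY = 12  # number of leading zero bits required; toy value

def pow_ok(message, nonce, difficulty=DIFFICULTY):
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    bits = int.from_bytes(digest, "big")
    return bits >> (256 - difficulty) == 0

def mine(message):
    """Search nonces until one satisfies the difficulty target."""
    for nonce in count():
        if pow_ok(message, nonce):
            return nonce

nonce = mine("append this transaction please")
assert pow_ok("append this transaction please", nonce)  # cheap for everyone to check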

When are new units of currency issued? Well, in our original mint example, the person who set up the mint was the one given the authority to make new money out of thin air (and they can hand out attenuated versions of that authority to others as they see fit). But what if instead of handing this capability out to individuals we handed it out to anyone who can meet an abstract requirement? For instance, in zcap-ld an invoker can be any kind of entity which is specified with linked data proofs, meaning those entities can be something other than a single key... for instance, what if we delegated to an abstract invoker that was specified as being "whoever can solve the state of the machine's current proof-of-work puzzle"? Oh my gosh! We just took our 25-line mint and extended it for mining-style blockchains. And the fundamental design still applies!

With these two adjustments, we've created a "public blockchain" akin to bitcoin. And we don't need to use proof-of-work for either technically... we could swap in different mechanisms of friction / qualification.

If the set of inputs are stored as a merkle tree, then all of the system types we just looked at are technically blockchains:

A second machine as failover in a trusted environment

Three semi-trusted machines with small-scale private consensus

A public blockchain without global trust, with swarming-attack resistance and an interesting abstract capability accessible to anyone who can meet the abstract requirement (in this case, to issue some new currency).

The difference for choosing any of the above is really a question of: "what is your trust/failover requirements?"

Blockchains as time travel plus convergent inputs

If this doesn't sound believable to you, that you could create something like a "public blockchain" on top of something like Goblins so easily, consider how we might extend time travel in Terminal Phase to add multiplayer. As a reminder, here's an image:

Now, a secret thing about Terminal Phase is that the gameplay is deterministic (the random starfield in the background is not, but the gameplay is) and runs on a fixed frame-rate. This means that given the same set of keyboard inputs, the game will always play the same, every time.

Okay, well let's say we wanted to hand someone some way to replay our last game. Chess games can be fully replayed with a very condensed syntax, meaning that merely by handing someone a short list of codes, they can precisely replay the same game, every time, deterministically.

Well okay, as a first attempt at thinking this through, what if for some game of Terminal Phase I played we wrote down each keystroke I entered on my keyboard, on every tick of the game? Terminal Phase runs at 30 ticks per second. So okay, if you replay these, each one at 30 ticks per second, then yeah, you'd end up with the same gameplay every time.

It would be simple enough for me to encode these as a linked list (cons, cons, cons!) and hand them to you. You could descend all the way to the root of the list and start playing them back up (ie, play the list in reverse order) and you'd get the same result as I did. I could even stream new events to you by giving you new items to tack onto the front of the list, and you could "watch" a game I was playing live.

So now imagine that you and I want to play Terminal Phase together, over the network. Let's imagine there are two ships, and for simplicity, we're playing cooperatively. (The same ideas can be extended to competitive, but for narrating how real-time games work it's easier to start with a cooperative assumption.)

We could start out by wiring things up on the network so that I am allowed to press certain keys for player 1 and you are allowed to press certain keys for player 2. (Now it's worth noting that a better way to do this doesn't involve keys on the keyboard but capability references, and really that's how we'd do things if we were to bring this multiplayer idea live, but I'm trying to provide a metaphor that's easy to think about without introducing the complicated sounding kinds of terms like "c-lists" and "vat turns" that we ocap people seem to like.) So, as a first attempt, maybe if we were playing on a local area network or something, we could synchronize at every game tick: I share my input with you and you share yours, and then and only then do both of our systems actually input them into that game-tick's inputs. We'll have achieved a kind of "convergence" as to the current game state on every tick. (EDIT: I wrote "a kind of consensus" instead of "a kind of convergence" originally, and that was an error, because it misleads on what consensus algorithms tend to do.)

Except this wouldn't work very well if you and I were living far away from each other and playing over the internet... the lag time for doing this for every game tick might slow the system to a crawl... our computers wouldn't get each others' inputs as fast as the game was moving along, and would have to pause until we received each others' moves.

So okay, here's what we'll do. Remember the time-travel GUI above? As you can see, we're effectively restoring from an old snapshot. Oh! So okay. We could save a snapshot of the game every second, and then both get each other our inputs to each other as fast as we can, but knowing it'll lag. So, without having seen your inputs yet, I could move my ship up and to the right and fire (and send that I did that to you). My game would be in a "dirty state"... I haven't actually seen what you've done yet. Now suddenly I get the last set of moves you did over the network... in the last five frames, you move down and to the left and fire. Now we've got each others' inputs... what our systems can do is secretly time travel behind the scenes to the last snapshot, then fast forward, replaying both of our inputs on each tick up until the latest state where we've both seen each others' moves (but we wouldn't show the fast forward process, we'd just show the result with the fast forward having been applied). This can happen fast enough that I might see your ship jump forward a little, and maybe your bullet will kill the enemy instead of mine and the scores shift so that you actually got some points that for a moment I thought I had, but this can all happen in realtime and we don't need to slow down the game at all to do it.
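
A compressed sketch of that snapshot-and-fast-forward trick (toy Python of my own, not how Terminal Phase or Goblins actually wires it up, and using the keystroke metaphor rather than capability references): keep a snapshot, apply your own inputs optimistically, and when the remote inputs arrive, roll back to the snapshot and replay the merged, tick-ordered inputs.

import copy

def step(state, tick_inputs):
    """Deterministic per-tick update: each input moves a player's ship."""
    for player, (dx, dy) in tick_inputs.items():
        x, y = state[player]
        state[player] = (x + dx, y + dy)
    return state

state = {"p1": (0, 0), "p2": (10, 0)}
snapshot = copy.deepcopy(state)          # taken at tick 0

local = {1: {"p1": (1, 0)}, 2: {"p1": (1, 0)}}   # my inputs, applied optimistically
for tick in (1, 2):
    state = step(state, local[tick])              # "dirty" predicted state

remote = {1: {"p2": (0, -1)}, 2: {"p2": (-1, 0)}}  # your inputs arrive late

# Roll back to the snapshot and fast-forward with both players' inputs merged.
state = copy.deepcopy(snapshot)
for tick in (1, 2):
    merged = {**local.get(tick, {}), **remote.get(tick, {})}
    state = step(state, merged)

print(state)  # {'p1': (2, 0), 'p2': (9, -1)}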

Again, all the above can be done, but with actual wiring of capabilities instead of the keystroke metaphor... and actually, the same set of ideas can be done with any kind of system, not just a game.

And oh hey, technically, technically, technically if we both hashed each of our previous messages in the linked list and signed each one, then this would qualify as a merkle tree and then this would also qualify as a blockchain... but wait, this doesn't have anything to do with cryptocurrencies! So is it really a blockchain?

"Blockchain" as synonym for "cryptocurrency" but this is wrong and don't do this one

By now you've probably gotten the sense that I really was annoyed with the first section of "blockchain" as a synonym for "decentralization" (especially because blockchains are decentralized centralization/convergence) and that is completely true. But even more annoying to me is the synonym of "blockchain" with "cryptocurrency".

"Cryptocurrency" means "cryptographically based currency" and it is NOT synonymous with blockchains. Digicash precedes blockchains by a dramatic amount, but it is a cryptocurrency. The "simple mint" type system also precedes blockchains and while it can be run on a blockchain, it can also run on a solo computer/machine.

But as we saw, we could perceive multiplayer Terminal Phase as technically, technically a blockchain, even though it has nothing to do with currencies whatsoever.

So again a blockchain is just a single, abstract, sequential machine, run by multiple parties. That's it. It's more general than cryptocurrencies, and it's not exclusive to implementing them either. One is a kind of programming-plus-cryptography-use-case (cryptocurrencies), the other one is a kind of abstracted machine (blockchains).

So please. They are frequently combined, but don't treat them as the same thing.

Blockchains as single abstract machines on a wider network

One of my favorite talks is Mark Miller's Programming Secure Smart Contracts talk. Admittedly, I like it partly because it well illustrates some of the low-level problems I've been working on, and that might not be as useful to everyone else. But it has this lovely diagram in it:

This is better understood by watching the video, but the abstraction layers described here are basically as follows:

"Machines" are the lowest layer of abstraction on the network, but there a variety of kinds of machines. Public blockchains are one, quorum blockchains are another, solo computer machines yet another (and the simplest case, too). What's interesting then is that we can see public chains and quorums abstractly demonstrated as machines in and of themselves... even though they are run by many parties.

Vats are the next layer of abstraction, these are basically the "communicating event loops"... actors/objects live inside them, and more or less these things run sequentially.

Replace "JS ocaps" with "language ocaps" and you can see actors/objects in both Javascript and Spritely living here.

Finally, at the top are "erights" and "smart contracts", which feed into each other... "erights" are "exclusive electronic rights", and "smart contracts" are generally patterns of cooperation involving achieving mutual goals despite suspicion, generally involving the trading of these erights things (but not necessarily).

Okay, well cool! This finally explains the worldview I see blockchains on. And we can see a few curious things:

The "public chain" and "quorum" kinds of machines still boil down to a single, sequential abstract machine.

Object connections exist between the machines... ocap security. No matter whether it's run by a single computer or multiple.

Public blockchains, quorum blockchains, solo-computer machines all talk to each other, and communicate between object references on each other.

Blockchains are not magical things. They are abstracted machines on the network. Some of them have special rules that let whoever can prove they qualify for them access some well-known capabilities, but really they're just abstracted machines.

And here's an observation: you aren't ever going to move all computation to a single blockchain. Agoric's CEO, Dean Tribble, explained beautifully why on a recent podcast:

One of the problems with Ethereum is it is as tightly coupled as possible. The entire world is a single sequence of actions that runs on a computer with about the power of a cell phone. Now, that's obviously hugely valuable to be able to do commerce in a high-integrity fashion, even if you can only share a cell phone's worth of compute power with the entire rest of the world. But that's clearly gonna hit a brick wall. And we've done lots of large-scale distributed systems whether payments or cyberspace or coordination, and the fundamental model that covers all of those is islands of sequential programming in a sea of asynchronous communication. That is what the internet is about, that's what the interchain is about, that's what physics requires you to do if you want a system to scale.

Put this way, it should be obvious: are we going to replace the entire internet with something that has the power of a cell phone? To ask the question is to know the answer: of course not. Even when we do admit blockchain'y systems into our system, we're going to have to have many of them communicating with each other.

Blockchains are just machines that many people/agents run. That's it.

Some of these are encoded with some nice default programming to do some useful things, but all of them can be done in non-blockchain systems because communicating islands of sequential processes is the generalization. You might still want a blockchain, ie you might want multiple parties running one of those machines as a shared abstract machine, but how you configure that blockchain from there might depend on your trust and integrity requirements.

What do I think of blockchains?

I've covered a wide variety of perspectives of "what is a blockchain" in this article.

On the worse end of things are the parts involving hand-wavey confusion about decentralization, mistaken ideas of them being tied to cryptocurrencies, marketing hype, cultural assumptions, and some real, but not intrinsic, cultural problems.

In the middle, I am particularly keen on highlighting the similarity between the term "blockchain" and the term "roguelike", how both of them might boil down to some key ideas or not, but more importantly they're both a rough family of ideas that diverge from one highly influential source (Bitcoin and Rogue respectively). This is also the source of much of the "shouting past each other", because many people are referring to different components that they view as essential or inessential. Many of these pieces may be useful or harmful in isolation, in small amounts, in large amounts, but much of the arguing (and posturing) involves highlighting different things.

On the better end of things is a revelation, that blockchains are just another way of abstracting a computer so that multiple parties can run it. The particular decisions and use cases layered on top of this fundamental design are highly variant.

Having made the waters clear again, we could muddy them. A friend once tried to convince me that all computers are technically blockchains, that blockchains are the generalization of computing, and the case of a solo computer is merely one where a blockchain is run only by one party and no transaction history or old state is kept around. Maybe, but I don't think this is very useful. You can go in either direction, and I think the time travel and Terminal Phase section maybe makes that clear to me, but I'm not so sure how it lands with others I suppose. But a term tends to be useful in terms of what it introduces, and calling everything a blockchain seems to make the term even less useful than it already is. While a blockchain could be one or more parties running a sequential machine as the generalization, I suggest we stick to two or more.

Blockchains are not magic pixie dust, putting something on a blockchain does not make it work better or more decentralized... indeed, what a blockchain really does is converge (or re-centralize) a machine from a decentralized set of computers. And it always does so with some cost, some set of overhead... but what those costs and overhead are varies depending on what the configuration decisions are. Those decisions should always stem from some careful thinking about what those trust and integrity needs are... one of the more frustrating things about blockchains being a technology of great hype and low understanding is that such care is much less common than it should be.

Having a blockchain, as a convergent machine, can be useful. But how that abstracted convergent machine is arranged can diverge dramatically; if we aren't talking about the same choices, we might shout past each other. Still, it may be an unfair ask to request that those without a deep technical background go into technical specifics, and I recognize that, and in a sense there can be some amount gained from speaking towards broad-sweeping, fuzzy sets and the patterns they seem to be carrying. A gut-sense assertion from a set of loosely observed behaviors can be a useful starting point. But to get at the root of what those gut senses actually map to, we will have to be specific, and we should encourage that specificity where we can (without being rude about it) and help others see those components as well.

But ultimately, as convergent machines, blockchains will not operate alone. I think the system that will hook them all together should be CapTP. But no matter the underlying protocol abstraction, blockchains are just abstract machines on the network.

Having finally disentangled what blockchains are, I think soon I would like to move onto what cryptocurrencies are. Knowing that they are not necessarily tied to blockchains opens us up to considering an ecosystem, even an interoperable and exchangeable one, of varying cryptographically based financial instruments, and the different roles and uses they might play. But that is another post of its own, for whenever I can get to it, I suppose.

ADDENDUM: After writing this post, I had several conversations with several blockchain-oriented people. Each of them roughly seemed to agree that Bitcoin was roughly the prototypical "blockchain", but each of them also seemed to highlight different things they thought were "essential" to what a "blockchain" is: some kinds of consensus algorithms being better than others, that kinds of social arrangements are enabled, whether transferrable assets are encoded on the chain, etc. To start with, I feel like this does confirm some of the premise of this post, that Bitcoin is the starting point, but like Rogue and "roguelikes", "blockchains" are an exploration space stemming from a particular influential technical piece.

However my friend Kate Sills (who also gave me a much better definition for "smart contracts", added above) highlighted something that I hadn't talked about much in my article so far, which I do agree deserves expansion. Kate said: "I do think there is something huge missing from your piece. Bitcoin is amazing because it aligns incentives among actors who otherwise have no goals in common."

I agree that there's something important here, and this definition of "blockchain" maybe does explain why while from a computer science perspective, perhaps signed git trees do resemble blockchains, they don't seem to fit within the realm of what most people are thinking about... while git might be a tool used by several people with aligned incentives, it is not generally itself the layer of incentive-alignment.

Thursday, 22. April 2021

FACILELOGIN

My Personal Brand is My Professional Success Story!

This blog is the script of a talk I did internally at WSO2 in 2015 to inspire the team to build their personal brand. Found this today, buried in my Google Docs, and thought of sharing it publicly (unedited), in case someone finds it helpful! Good Morning folks, thanks for joining in — it’s my great pleasure to do this session on ‘My Personal Brand is My Professional Success Story’. …

This blog is the script of a talk I did internally at WSO2 in 2015 to inspire the team to build their personal brand. Found this today, buried in my Google Docs, and thought of sharing it publicly (unedited), in case someone finds it helpful!

Good Morning folks, thanks for joining in — it’s my great pleasure to do this session on ‘My Personal Brand is My Professional Success Story’.

First of all I must thank Asanka, Zai, Charitha, Usama and the entire marketing team for giving me the opportunity to present on this topic. At first sight, I thought it would be tough to present on a topic that I have not consciously focused on, or purposely tried to achieve myself. Then again, thinking further on the title, I realized that, whether we like it or not, each one of us has a personal brand.

https://miro.medium.com/max/800/1*Yp9bj3XCL0s_fbGWUmGA2g.jpeg

Your personal brand is the image of you that you cultivate in others’ minds. In other words, it is how others think about you. This raises a question in all ‘radical’ minds — why do we have to care about what others think about us? We do our bit in the way we want, so should we care about personal branding? It is extremely important to find answers to this question, because if we are not convinced of something, we will never do it.

In my view there are no individuals — there is a bigger, greater team behind each individual. This bigger, greater team includes your parents, siblings, spouse, kids, relations, friends, colleagues and many more. Whether you like it or not, more or less you are a reflection of this team behind you. As we grow up as human beings the team behind us — or the team which influences us, widens. It would not just include well-wishers, but also haters, competitors and many more. But still, you as an individual are a reflection of this team. Sometimes — or even in most of the cases, the haters could motivate you more than the well-wishers. This team also includes people you have never talked to — people you have never seen — people who never existed, like some characters in books. This is the team behind you; at the same time, whether you like it or not, you become a member of the team behind another individual or set of individuals. In other words, you get influenced by a team and then again you influence another set of individuals.

Let me take a quick example. Everyone knows Mahatma Gandhi. In his own words, Gandhi once said — “Three moderns have left a deep impress on my life and captivated me. Raychandbhai by his living contact; Tolstoy by his book, “The Kingdom of God is within you”; and Ruskin by his “Unto This Last”. That was what influenced him — today there are countless individuals who are influenced by Gandhi.

Arguably, CNBC, in 2014, named Steve Jobs as the most influential person of the last 25 years. Thousands of people are influenced by Steve Jobs; at the same time, there are many other people who influenced Jobs — Edwin H. Land, who co-founded Polaroid and made a number of significant advancements in the field of photography, is one of those Jobs used to talk about.

In short, whether you like it or not, more or less, you get influenced by others and then again you influence others. Now it is a question of how much of an impact you want to make on the rest of the world before you die, to make this world a better place than it is today.

If you want to make a good/positive impact on others, you need to care about what they think of you. If you cannot build a positive image of yourself in their minds, you will find it extremely hard to make a positive impact in their lives. The positive image of you is the reflection of your character. If you have a bad character, it is extremely hard to build a good image out of it, not quite impossible though. But, if your character is good, the positive image is the bonus you get for it. Personal branding requires little more than having a good image — you need to learn to express yourself — not to market yourself — but to express yourself. Everyone from history who has made a positive impact on the world has expressed themselves. The way Gandhi chose to express himself is not the same one Steve Jobs picked.

The rest of the talk from here onwards, is about, how to build a good image and then how to express your character to the rest, to build a positive personal brand.

In my view, every one of us should have a vision for life. The vision for your life is what drives you into the future; it is what motivates you to wake up every morning. If you don’t have one — start thinking about it today. Think about what motivates you to do what you do every day. Having a good vision is the core of building a great image.

The vision has to be inspirational — a great vision statement inspires and moves us. It is a motivational force that compels action. You recognize a great vision statement when you find it difficult not to be inspired.

The vision has to be challenging — the best vision statements challenge us to become better. In this way, a vision statement requires us to stretch ourselves in pursuit of the vision we seek to achieve. The vision is not an ‘easy target’; it is something that, if achieved, would represent a sense of pride and fulfillment.

The vision has to be achievable — the vision must not be so far-fetched that it is outside of our reach. It must be conceivably possible, though not probable without additional effort.

When we start working for a company, we get used to spending most of our time working there. If your vision for life does not match the vision of the company you work for — there could be many conflicts and you won’t be productive. If your vision for life is to make this world a better place, you cannot work for a company which produces cigarettes or weapons.

The second most important thing in building a good image is your integrity. Oprah Winfrey, who is a well-respected TV talk show host, actress, producer and philanthropist, says, “Real integrity is doing the right thing, knowing that nobody’s going to know whether you did it or not.” I don’t think there is a better way of explaining ‘integrity’ than this. It captures everything it needs to.

I have visited and talked with many WSO2 customers over the last eight years. We never talk to a customer with the intention of selling a product. The first thing we do is listen to them and learn from them, then we all work towards the best solution to the problem they have. Finally we see how WSO2 could fit into the solution. If it is not a perfect fit — we never lie — we identify the gaps — and find a way to move forward by filling those gaps. Most of the time we win the customer at the second stage, when we build the solution for them, and in many cases they agree to go ahead with us even if we are not the perfect match for their problem. That is mainly because of the level of integrity we demonstrate as a company.

No one is perfect — that also implies everyone makes mistakes. A person with a high level of integrity would never hide a mistake, but rather would accept it, apologize for it and fix it. Never would he lie — never would he say something to one person and something else to another. Mark Twain once said, “If you tell the truth, you don’t have to remember anything.”

In short, vision for life will drive you to the future, while the integrity is the cornerstone of your personal brand.

The third most important thing you should do in building a positive image is to raise yourself against negativity. Do not let negativity kill your productivity, enthusiasm, passion and spirit. People who spread negativity are the people who feel extremely insecure in their current state. They only have complaints — no suggestions. Their feedback is negative, not constructive. They see only bad — not a single bit of good. Identifying this type of people is not that hard — first you need to shield yourself from negativity — then you need to protect your team. Even just by keeping silent when you hear something negative, you indirectly contribute to spreading it — fix it at that very point. If you are closely following the US presidential election campaign, you might have noticed that Donald Trump, who is the Republican front runner at the moment, is being heavily criticized for being silent and not correcting a question raised by someone in his political campaign, where the questioner said — ‘Muslims is a problem and Barack Obama is a Muslim’. Even though Trump is still the frontrunner, his popularity has gone down considerably after these exchanges.

The fourth most important thing you should do in building a positive image is this: when you do something, do it to a level where it can make an impact. If you believe something is right, go for it and make it happen. At the end of the day you may fail — but look back and see whether you have contributed your best — if so, you will never be frustrated — no regrets.

Expressing an idea is important — but representing an idea is much more important. When you represent something you own it. If you want to do something to make an impact, you must own it. You should not be someone who talks the talk but does not walk the walk.

Tolerating criticism and accepting constructive feedback is another key aspect of building a positive image. There is no better source than criticism to validate the direction we are heading in and to learn from. Bill Gates once said, ‘Your most unhappy customers are your greatest source of learning’.

So far we have discussed the need to build a positive image and how to do it. Next we will focus on how to build a personal brand by expressing yourself. As we discussed before, personal branding requires little more than having a good image — you need to learn to express yourself — not to market yourself. If you already have a positive image, being a little expressive will build you a positive personal brand. If you already have a negative image, being a little expressive will build you a negative personal brand. The image you build is a reflection of your character. That includes your roles as a father, son, brother, husband, colleague, friend, mentor, developer, architect and many more. You can build an image as a good father and a bad son — or as a good son and a bad brother — or as a good friend and a bad developer — likewise any combination. But, more or less, ultimately your true image is how you do overall. You can be the best developer in the company, but then again if you do not understand the value of respecting each other’s religions and cultural values — or, in a single word, if you are a racist — your top skills as a developer are worthless.

You need to pick how you want to impact the world — or how you want the world to see you. That’s your personal brand — and you build it on top of your character or image. Your overall character is the shelter for your personal brand. If you do not build it right — if you find holes in it — you cannot protect your brand, even from a light shower. That’s why building the right character comes well before building a personal brand.

In my view, the area you can make the most impact to the world, in its positive direction, is the area that you are most passionate about. If you are extremely worried and angry about child labour — you can be a thought leader in protesting against child labour. If you are extremely worried and angry about human rights violations — you can be a thought leader in protecting human rights. If you are extremely passionate about integration technologies, you can be a thought leader in integration space. If you are extremely passionate about machine learning, you can be a thought leader in machine learning space. If you are passionate about APIs — you can be a thought leader in API space. If you are passionate about Big Data, you can be a thought leader in Big Data space. If you are passionate about Identity and Access Management, you can be a thought leader in Identity and Access Management space. Opportunities are limitless — but remember our ground rules — if you do something — do it to a level where it can make a positive impact. You do not need to worry about being a thought leader, but when you make a good positive impact, you will become a thought leader automatically.

Once you decide the area where you want to make an impact, the rest depends on how good you are as a communicator. Communication is critically important because that’s the only way you can reach your audience. Content marketing is the best way to build a brand and reputation online; when people look for information, they tend to go back to sources that were helpful to them. If you can become a trusted source of information through your content, over time you’ll become collectively known as the expert in your specific field. It’s best to start your own blog and update it on a regular basis — at least weekly. If you do not update regularly you lose your audience. At the start it will be tough — but once you make it a practice, it will start to happen effortlessly. Another key principle I would like to highlight here is the difference between good and great. Most of the time the difference between good and great lies heavily in how you do the tiny, little things better. You may spend hours writing a blog post — finding the content, validating the content and getting it all in order. But we are a bit lazy to put in another five to ten minutes of effort to format the blog post, publish it in DZone and other blog aggregators, share it on social media sites — and do a little more. This additional ten minutes of effort could easily take your blog post from being a good one to a great one — and would also attract a larger audience.

Regularly participating in mailing lists related to the subject of your interest is another way of passing your message to the rest of the world. These mailing lists may be within WSO2 or even outside. Look for standards bodies like W3C, IETF and OASIS — and any other communities that share your interests — and eagerly participate in related discussions. Beyond the mailing lists, look for interesting groups on Facebook, LinkedIn, StackOverflow and wherever possible, and make your mark.

Webinars at WSO2 are another key medium to pass your message to the audience of your interest. If you want to be a thought leader in your product space, then your responsibility does not end at the moment you release the product. You need to come up with a plan for evangelization — and webinars can be extremely helpful.

At WSO2 you get a massive number of opportunities to build your brand. Your personal brand is important to you as well as to the company you serve. A few years back, we had a VP of Marketing called Katie Poplin — and I personally got motivated by some of the concepts she put forward. One thing she believed was that in the open source community, the brand value of individuals is much higher and more trustworthy than that of companies. People tend to think everything a company shares is part of its marketing propaganda — which may not reflect the real value. But what individuals share is their first-hand experience. We also had monthly awards for best blogger, best evangelist and best article. If I remember correctly I won both the best blogger and best evangelist awards in a couple of months and it was fun :-).

Then again, don’t just get constrained by the opportunities you get from WSO2. Always look for what is happening outside. Try to get your articles published on external portals. Also look into writing books. Writing a book is not as hard as it looks. First you need to come up with a proposal, with an attractive topic in your domain of expertise, and then submit it to a few publishers. Most publishers accept book proposals, and if you go to their websites you will find everything you need to know about writing books — go for it!

Conferences and meetups are another way to establish yourself as a prominent speaker in the corresponding domain. Then again, getting a speaking opportunity will depend on how well you have done your homework.

These are only a few techniques to build your brand, in the domain you are interested in, on top of your personal image or character. Building your personal brand is a focused exercise, not just a piece of cake. It’s a journey, not a destination. Once you have built it, maintaining it and protecting it is much harder. As we discussed before, your image or character is the shelter or shield of your personal brand. If you build your character consciously, that will help you in protecting your brand.

Finally, to wrap up: in this session we discussed the importance of brand building, how to build your character and image, and how to build a personal brand under the shelter of your character. Thank you very much.

My Personal Brand is My Professional Success Story! was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 21. April 2021

Here's Tom with the Weather

Vaccination Achievement Unlocked

I am grateful and feel fortunate to have received my 2nd Moderna shot today. I hope the vaccinations become more widely available around the world.

I am grateful and feel fortunate to have received my 2nd Moderna shot today. I hope the vaccinations become more widely available around the world.


Mike Jones: self-issued

OpenID Connect Presentation at IIW XXXII

I gave the following invited “101” session presentation at the 32nd Internet Identity Workshop (IIW) on Tuesday, April 20, 2021: Introduction to OpenID Connect (PowerPoint) (PDF) The session was well attended. There was a good discussion about uses of Self-Issued OpenID Providers.

I gave the following invited “101” session presentation at the 32nd Internet Identity Workshop (IIW) on Tuesday, April 20, 2021:

Introduction to OpenID Connect (PowerPoint) (PDF)

The session was well attended. There was a good discussion about uses of Self-Issued OpenID Providers.


OAuth 2.0 JWT Secured Authorization Request (JAR) sent back to the RFC Editor

As described in my last post about OAuth JAR, after it was first sent to the RFC Editor, the IESG requested an additional round of IETF feedback. I’m happy to report that, having addressed this feedback, the spec has now been sent back to the RFC Editor. As a reminder, this specification takes the JWT […]

As described in my last post about OAuth JAR, after it was first sent to the RFC Editor, the IESG requested an additional round of IETF feedback. I’m happy to report that, having addressed this feedback, the spec has now been sent back to the RFC Editor.

As a reminder, this specification takes the JWT Request Object from Section 6 of OpenID Connect Core (Passing Request Parameters as JWTs) and makes this functionality available for pure OAuth 2.0 applications – and does so without introducing breaking changes. This is one of a series of specifications bringing functionality originally developed for OpenID Connect to the OAuth 2.0 ecosystem. Other such specifications included OAuth 2.0 Dynamic Client Registration Protocol [RFC 7591] and OAuth 2.0 Authorization Server Metadata [RFC 8414].

The specification is available at:

https://tools.ietf.org/html/draft-ietf-oauth-jwsreq-33

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-oauth-jwsreq-33.html

Monday, 19. April 2021

Bill Wendel's Real Estate Cafe

What will happen to housing prices when artificially low inventory hits a tipping point?

Real Estate Cafe has used the hashtag #Covid_ImpactRE to tweet about market distortions during the totally artificial housing market of the past year. Yesterday’s tweet… The post What will happen to housing prices when artificially low inventory hits a tipping point? first appeared on Real Estate Cafe.

Real Estate Cafe has used the hashtag #Covid_ImpactRE to tweet about market distortions during the totally artificial housing market of the past year. Yesterday’s tweet…

The post What will happen to housing prices when artificially low inventory hits a tipping point? first appeared on Real Estate Cafe.


Damien Bod

Securing multiple Auth0 APIs in ASP.NET Core using OAuth Bearer tokens

This article shows a strategy for securing multiple APIs which have different authorization requirements but whose tokens are issued by the same authority. Auth0 is used as the identity provider. A user API and a service API are implemented in the ASP.NET Core API project. The access token for the user API data is created […]

This article shows a strategy for securing multiple APIs which have different authorization requirements but whose tokens are issued by the same authority. Auth0 is used as the identity provider. A user API and a service API are implemented in the ASP.NET Core API project. The access token for the user API data is created using an Open ID Connect code flow with PKCE authentication, and the service API access token is created using the client credentials flow in the trusted backend of the Blazor application. It is important that each access token will only work for its intended API.

Code: https://github.com/damienbod/SeparatingApisPerSecurityLevel

Blogs in this series

Securing multiple Auth0 APIs in ASP.NET Core using OAuth Bearer tokens
Securing OAuth Bearer tokens from multiple Identity Providers in an ASP.NET Core API

Setup

The projects are set up to use a Blazor WASM application hosted in ASP.NET Core, secured using the Open ID Connect code flow with PKCE and the BFF pattern. Cookies are used to persist the session. This application uses two separate APIs, a user data API and a service API. The access token from the OIDC authentication is used to access the user data API and a client credentials flow is used to get an access token for the service API. Auth0 is set up using a regular web application and an API configuration. A scope was added to the API which is requested in the client application and validated in the API project.

Implementing the APIs in ASP.NET Core

OAuth2 JwtBearer auth is used to secure the APIs. As we use the same Authority and the same Audience, a single scheme can be used for both applications. We use the default JwtBearerDefaults.AuthenticationScheme.

services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(options =>
{
    options.Authority = "https://dev-damienbod.eu.auth0.com/";
    options.Audience = "https://auth0-api1";
});

The AddAuthorization method is used to set up the policies so that each API can authorize that the correct token was used to request the data. Two policies are added, one for the user access token and one for the service access token. The access token created using the client credentials flow with Auth0 can be authorized using the azp claim and the Auth0 gty claim. The API client-id is validated using the token claims. The user access token is validated using an IAuthorizationHandler implementation. A default policy is added in the AddControllers method to require an authenticated user, meaning a valid access token.

services.AddSingleton<IAuthorizationHandler, UserApiScopeHandler>();

services.AddAuthorization(policies =>
{
    policies.AddPolicy("p-user-api-auth0", p =>
    {
        p.Requirements.Add(new UserApiScopeHandlerRequirement());
        // Validate id of application for which the token was created
        p.RequireClaim("azp", "AScjLo16UadTQRIt2Zm1xLHVaEaE1feA");
    });

    policies.AddPolicy("p-service-api-auth0", p =>
    {
        // Validate id of application for which the token was created
        p.RequireClaim("azp", "naWWz6gdxtbQ68Hd2oAehABmmGM9m1zJ");
        p.RequireClaim("gty", "client-credentials");
    });
});

services.AddControllers(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Filters.Add(new AuthorizeFilter(policy));
});

Swagger is added with an OAuth UI so that we can add access tokens manually to test the APIs.

services.AddSwaggerGen(c =>
{
    // add JWT Authentication
    var securityScheme = new OpenApiSecurityScheme
    {
        Name = "JWT Authentication",
        Description = "Enter JWT Bearer token **_only_**",
        In = ParameterLocation.Header,
        Type = SecuritySchemeType.Http,
        Scheme = "bearer", // must be lower case
        BearerFormat = "JWT",
        Reference = new OpenApiReference
        {
            Id = JwtBearerDefaults.AuthenticationScheme,
            Type = ReferenceType.SecurityScheme
        }
    };
    c.AddSecurityDefinition(securityScheme.Reference.Id, securityScheme);
    c.AddSecurityRequirement(new OpenApiSecurityRequirement
    {
        { securityScheme, new string[] { } }
    });
    c.SwaggerDoc("v1", new OpenApiInfo
    {
        Title = "My API",
        Version = "v1",
        Description = "My API",
        Contact = new OpenApiContact
        {
            Name = "damienbod",
            Email = string.Empty,
            Url = new Uri("https://damienbod.com/"),
        },
    });
});

The Configure method is used to add the middleware to implement the API application. It is important to use the UseAuthentication middleware; you should have no reason to implement this yourself. If you find yourself implementing some special authentication middleware for whatever reason, your security architecture might be incorrect.

public void Configure(IApplicationBuilder app)
{
    app.UseSwagger();
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", "User API");
        c.RoutePrefix = string.Empty;
    });

    // only needed for browser clients
    // app.UseCors("AllowAllOrigins");

    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}

The UserApiScopeHandler class implements the abstract AuthorizationHandler class. Logic can be implemented here to fulfil the UserApiScopeHandlerRequirement requirement. This requirement is what we use to authorize a request for the user data API. This handler just validates if the required scope exists in the scope claim.

public class UserApiScopeHandler : AuthorizationHandler<UserApiScopeHandlerRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        UserApiScopeHandlerRequirement requirement)
    {
        if (context == null)
            throw new ArgumentNullException(nameof(context));
        if (requirement == null)
            throw new ArgumentNullException(nameof(requirement));

        var scopeClaim = context
            .User
            .Claims
            .FirstOrDefault(t => t.Type == "scope");

        if (scopeClaim != null)
        {
            var scopes = scopeClaim
                .Value
                .Split(" ", StringSplitOptions.RemoveEmptyEntries);

            if (scopes.Any(t => t == "auth0-user-api-one"))
            {
                context.Succeed(requirement);
            }
        }

        return Task.CompletedTask;
    }
}

public class UserApiScopeHandlerRequirement : IAuthorizationRequirement { }

The policies can be applied anywhere within the application, and the authorization logic is not tightly coupled to the business logic of the application. By separating the authorization implementation from the business implementation, it is easier to maintain and understand both. This has worked well for me and I find it easy to test and maintain applications set up like this over long periods of time.

[Authorize(Policy = "p-user-api-auth0")] [ApiController] [Route("api/[controller]")] public class UserOneController : ControllerBase

The p-service-api-auth0 policy is applied to the Service API.

[Authorize(Policy = "p-service-api-auth0")] [ApiController] [Route("api/[controller]")] public class ServiceTwoController : ControllerBase

When the application is started, the swagger UI is displayed and any access token can be pasted into the swagger UI. Both APIs are displayed in the swagger and both APIs require a different access token.

Calling the clients from ASP.NET Core

A Blazor WASM application hosted in ASP.NET Core is used to access the APIs. The application is secured using a trusted server rendered application and the OIDC data is persisted to a secure cookie. The OnRedirectToIdentityProvider method is used to set the audience of the API to request the access token with the required scope. The scopes are added to the OIDC options.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(options =>
{
    options.Cookie.Name = "__Host-BlazorServer";
    options.Cookie.SameSite = SameSiteMode.Lax;
})
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = $"https://{Configuration["Auth0:Domain"]}";
    options.ClientId = Configuration["Auth0:ClientId"];
    options.ClientSecret = Configuration["Auth0:ClientSecret"];
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.Scope.Clear();
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("email");
    options.Scope.Add("auth0-user-api-one");
    options.CallbackPath = new PathString("/signin-oidc");
    options.ClaimsIssuer = "Auth0";
    options.SaveTokens = true;
    options.UsePkce = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters.NameClaimType = "name";
    options.Events = new OpenIdConnectEvents
    {
        // handle the logout redirection
        OnRedirectToIdentityProviderForSignOut = (context) =>
        {
            var logoutUri = $"https://{Configuration["Auth0:Domain"]}/v2/logout?client_id={Configuration["Auth0:ClientId"]}";
            var postLogoutUri = context.Properties.RedirectUri;
            if (!string.IsNullOrEmpty(postLogoutUri))
            {
                if (postLogoutUri.StartsWith("/"))
                {
                    // transform to absolute
                    var request = context.Request;
                    postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase + postLogoutUri;
                }
                logoutUri += $"&returnTo={ Uri.EscapeDataString(postLogoutUri)}";
            }

            context.Response.Redirect(logoutUri);
            context.HandleResponse();
            return Task.CompletedTask;
        },
        OnRedirectToIdentityProvider = context =>
        {
            // The context's ProtocolMessage can be used to pass along additional query parameters
            // to Auth0's /authorize endpoint.
            //
            // Set the audience query parameter to the API identifier to ensure the returned Access Tokens can be used
            // to call protected endpoints on the corresponding API.
            context.ProtocolMessage.SetParameter("audience", "https://auth0-api1");
            return Task.FromResult(0);
        }
    };
});

Calling the User API

A user API client service is used to request the data from the ASP.NET Core API. The access token is passed as a parameter and the IHttpClientFactory is used to create the HttpClient.

/// <summary>
/// setup to oidc client in the startup correctly
/// https://auth0.com/docs/quickstart/webapp/aspnet-core#enterprise-saml-and-others-
/// </summary>
public class MyApiUserOneClient
{
    private readonly IConfiguration _configurations;
    private readonly IHttpClientFactory _clientFactory;

    public MyApiUserOneClient(
        IConfiguration configurations,
        IHttpClientFactory clientFactory)
    {
        _configurations = configurations;
        _clientFactory = clientFactory;
    }

    public async Task<List<string>> GetUserOneApiData(string accessToken)
    {
        try
        {
            var client = _clientFactory.CreateClient();
            client.BaseAddress = new Uri(_configurations["MyApiUrl"]);
            client.SetBearerToken(accessToken);

            var response = await client.GetAsync("api/UserOne");
            if (response.IsSuccessStatusCode)
            {
                var data = await JsonSerializer.DeserializeAsync<List<string>>(
                    await response.Content.ReadAsStreamAsync());
                return data;
            }

            throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
        }
        catch (Exception e)
        {
            throw new ApplicationException($"Exception {e}");
        }
    }
}

The user access token is saved to the HttpContext after a successful sign-in and the GetTokenAsync method with the “access_token” parameter is used to retrieve the user access token.

private readonly MyApiUserOneClient _myApiUserOneClient;

public CallUserApiController(
    MyApiUserOneClient myApiUserOneClient)
{
    _myApiUserOneClient = myApiUserOneClient;
}

[HttpGet]
public async Task<IActionResult> GetAsync()
{
    // call user API
    string accessToken = await HttpContext.GetTokenAsync("access_token");
    var userData = await _myApiUserOneClient.GetUserOneApiData(accessToken);
    return Ok(userData);
}

Calling the Service API

Using a service API requires requesting an access token using the OAuth client credentials flow. This flow can only be used in a trusted backend and a secret is required to request an access token. No user is involved. This is a machine to machine request. The access token is persisted to a distributed cache.

public class Auth0CCTokenApiService
{
    private readonly ILogger<Auth0CCTokenApiService> _logger;
    private readonly Auth0ApiConfiguration _auth0ApiConfiguration;
    private static readonly Object _lock = new Object();
    private IDistributedCache _cache;
    private const int cacheExpirationInDays = 1;

    private class AccessTokenResult
    {
        public string AcessToken { get; set; } = string.Empty;
        public DateTime ExpiresIn { get; set; }
    }

    private class AccessTokenItem
    {
        public string access_token { get; set; } = string.Empty;
        public int expires_in { get; set; }
        public string token_type { get; set; }
        public string scope { get; set; }
    }

    public Auth0CCTokenApiService(
        IOptions<Auth0ApiConfiguration> auth0ApiConfiguration,
        IHttpClientFactory httpClientFactory,
        ILoggerFactory loggerFactory,
        IDistributedCache cache)
    {
        _auth0ApiConfiguration = auth0ApiConfiguration.Value;
        _logger = loggerFactory.CreateLogger<Auth0CCTokenApiService>();
        _cache = cache;
    }

    public async Task<string> GetApiToken(HttpClient client, string api_name)
    {
        var accessToken = GetFromCache(api_name);
        if (accessToken != null)
        {
            if (accessToken.ExpiresIn > DateTime.UtcNow)
            {
                return accessToken.AcessToken;
            }
            else
            {
                // remove => NOT Needed for this cache type
            }
        }

        _logger.LogDebug($"GetApiToken new from oauth server for {api_name}");

        // add
        var newAccessToken = await GetApiTokenClient(client);
        AddToCache(api_name, newAccessToken);
        return newAccessToken.AcessToken;
    }

    private async Task<AccessTokenResult> GetApiTokenClient(HttpClient client)
    {
        try
        {
            var payload = new Auth0ClientCrendentials
            {
                client_id = _auth0ApiConfiguration.ClientId,
                client_secret = _auth0ApiConfiguration.ClientSecret,
                audience = _auth0ApiConfiguration.Audience
            };

            var authUrl = _auth0ApiConfiguration.Url;
            var tokenResponse = await client.PostAsJsonAsync(authUrl, payload);

            if (tokenResponse.StatusCode == System.Net.HttpStatusCode.OK)
            {
                var result = await tokenResponse.Content.ReadFromJsonAsync<AccessTokenItem>();
                DateTime expirationTime = DateTimeOffset.FromUnixTimeSeconds(result.expires_in).DateTime;
                return new AccessTokenResult
                {
                    AcessToken = result.access_token,
                    ExpiresIn = expirationTime
                };
            }

            _logger.LogError($"tokenResponse.IsError Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}");
            throw new ApplicationException($"Status code: {tokenResponse.StatusCode}, Error: {tokenResponse.ReasonPhrase}");
        }
        catch (Exception e)
        {
            _logger.LogError($"Exception {e}");
            throw new ApplicationException($"Exception {e}");
        }
    }

    private void AddToCache(string key, AccessTokenResult accessTokenItem)
    {
        var options = new DistributedCacheEntryOptions().SetSlidingExpiration(TimeSpan.FromDays(cacheExpirationInDays));

        lock (_lock)
        {
            _cache.SetString(key, System.Text.Json.JsonSerializer.Serialize(accessTokenItem), options);
        }
    }

    private AccessTokenResult GetFromCache(string key)
    {
        var item = _cache.GetString(key);
        if (item != null)
        {
            return System.Text.Json.JsonSerializer.Deserialize<AccessTokenResult>(item);
        }

        return null;
    }
}

The MyApiServiceTwoClient service uses the client credentials token client to get the access token and request data from the service API.

public class MyApiServiceTwoClient
{
    private readonly IConfiguration _configurations;
    private readonly IHttpClientFactory _clientFactory;
    private readonly Auth0CCTokenApiService _auth0TokenApiService;

    public MyApiServiceTwoClient(
        IConfiguration configurations,
        IHttpClientFactory clientFactory,
        Auth0CCTokenApiService auth0TokenApiService)
    {
        _configurations = configurations;
        _clientFactory = clientFactory;
        _auth0TokenApiService = auth0TokenApiService;
    }

    public async Task<List<string>> GetServiceTwoApiData()
    {
        try
        {
            var client = _clientFactory.CreateClient();
            client.BaseAddress = new Uri(_configurations["MyApiUrl"]);

            var access_token = await _auth0TokenApiService.GetApiToken(client, "ServiceTwoApi");
            client.SetBearerToken(access_token);

            var response = await client.GetAsync("api/ServiceTwo");
            if (response.IsSuccessStatusCode)
            {
                var data = await JsonSerializer.DeserializeAsync<List<string>>(
                    await response.Content.ReadAsStreamAsync());
                return data;
            }

            throw new ApplicationException($"Status code: {response.StatusCode}, Error: {response.ReasonPhrase}");
        }
        catch (Exception e)
        {
            throw new ApplicationException($"Exception {e}");
        }
    }
}

The services are added to the default IoC in ASP.NET Core so that construction injection can be used.

services.AddHttpClient();
services.AddOptions();

services.Configure<Auth0ApiConfiguration>(Configuration.GetSection("Auth0ApiConfiguration"));
services.AddScoped<Auth0CCTokenApiService>();
services.AddScoped<MyApiServiceTwoClient>();
services.AddScoped<MyApiUserOneClient>();

The service can be used anywhere in the code as required.

private readonly MyApiServiceTwoClient _myApiClientService;

public CallServiceApiController(
    MyApiServiceTwoClient myApiClientService)
{
    _myApiClientService = myApiClientService;
}

[HttpGet]
public async Task<IActionResult> GetAsync()
{
    // call service API
    var serviceData = await _myApiClientService.GetServiceTwoApiData();
    return Ok(serviceData);
}

You can test the APIs in the swagger UI. I added a breakpoint to my application and copied the access token. I added the token to the swagger UI.

If you send an HTTP request using the wrong token for the intended API, the request will be rejected and a 401 or 403 is returned. Without the extra authorization logic implemented with the policies, this request would not have failed.
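As an illustration only (this sketch is not part of the original post; the base address and token value are placeholders), a small console client can show the policy enforcement by calling both controllers with the same user access token:

// Minimal sketch: call both APIs with the same user access token.
// The user API should return 200; the service API should be rejected with a 401 or 403,
// because the "p-service-api-auth0" policy requires a client credentials token.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class TokenPolicyCheck
{
    static async Task Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("https://localhost:44390/") }; // placeholder
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<user-access-token>"); // placeholder

        var userResponse = await client.GetAsync("api/UserOne");
        var serviceResponse = await client.GetAsync("api/ServiceTwo");

        Console.WriteLine($"User API: {(int)userResponse.StatusCode}");       // expect 200
        Console.WriteLine($"Service API: {(int)serviceResponse.StatusCode}"); // expect 401 or 403
    }
}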

Notes

It is really important to validate that only access tokens created for the specific API will work. There are different ways of implementing this. If using service APIs which are probably solution internal, you could possibly use network security as well to separate these into different security zones. It is really important to validate the no-access (non-functional) use case when the same identity provider is used to create access tokens for different APIs, or when the identity provider produces access tokens for different applications which will probably have different security requirements. For high security requirements, you could use sender constrained tokens.

Links

https://auth0.com/docs/quickstart/webapp/aspnet-core

https://docs.microsoft.com/en-us/aspnet/core/security/authorization/introduction

Open ID Connect

Securing Blazor Web assembly using Cookies and Auth0

Wednesday, 14. April 2021

Bill Wendel's Real Estate Cafe

#LetUsDream: Do we dwell together to make money or is this a community?

What is the meaning of this city?Do you huddle together because youlove each other?What will you answer?“We all dwell together to make moneyfrom each other”?… The post #LetUsDream: Do we dwell together to make money or is this a community? first appeared on Real Estate Cafe.

What is the meaning of this city?Do you huddle together because youlove each other?What will you answer?“We all dwell together to make moneyfrom each other”?…

The post #LetUsDream: Do we dwell together to make money or is this a community? first appeared on Real Estate Cafe.


Mike Jones: self-issued

Second Version of W3C Web Authentication (WebAuthn) Now a Standard

The World Wide Web Consortium (W3C) has published this Recommendation for the Web Authentication (WebAuthn) Level 2 specification, meaning that it is now a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them user verification enhancements, manageability, enterprise features, and an Apple attestation format. The compani

The World Wide Web Consortium (W3C) has published this Recommendation for the Web Authentication (WebAuthn) Level 2 specification, meaning that it is now a completed standard. While remaining compatible with the original standard, this second version adds additional features, among them user verification enhancements, manageability, enterprise features, and an Apple attestation format. The companion second FIDO2 Client to Authenticator Protocol (CTAP) specification is also approaching becoming a completed standard.

See the W3C announcement of this achievement. Also, see Tim Cappalli’s summary of the changes in the second versions of WebAuthn and FIDO2.


Karyl Fowler

Takeaways from the Suez Canal Crisis

An Appeal for Supply Chain Agility — Powered by Verifiable Credentials Ever Given — Wikimedia Commons The Suez Canal debacle had a massive impact on global supply chains — estimated at >$9B in financial hits each day the Ever Given was stuck, totaling at nearly $54B in losses in stalled cargo shipments alone. And it’s no secret that the canal, which sees >12% of global trade move through it
An Appeal for Supply Chain Agility — Powered by Verifiable Credentials

Ever Given — Wikimedia Commons

The Suez Canal debacle had a massive impact on global supply chains — estimated at >$9B in financial hits each day the Ever Given was stuck, totaling at nearly $54B in losses in stalled cargo shipments alone. And it’s no secret that the canal, which sees >12% of global trade move through it annually, dealt an especially brutal blow to the oil and gas industry while blocked (given it represents the primary shipping channel for nearly 10% of gas and 8% of natural gas).

While the Ever Given itself was a container ship, likely loaded with finished goods versus raw materials or commodities, the situation has already — and will continue to — have a massive negative impact on totally unrelated industries…for months to come. Here’s an example of the resulting impact on steel and aluminum prices; this had related impact again to oil and gas (steel pipes flow oil) as well as infrastructure and…finished goods (like cars). And the costs continue to climb as the drama unfolds with port authorities and insurers battling over what’s owed to who.

Transmute is a software company — a verifiable credentials as a service company to be exact — and we’ve been focused specifically on the credentials involved in moving steel assets around the globe alongside our customers at DHS SVIP and CBP for the last couple years now. Now, there’s no “silver bullet” for mitigating the fiscal impact of the Ever Given on global trade, and ships who arrived the day it got stuck or shortly after certainly faced a tough decision — sail around the Cape of Africa for up to ~$800K [fuel costs alone] + ~26 days to trip or wait it out at an up to $30K per day demurrage expense [without knowing it’d only be stuck for 6 days or ~$180,000].

So what if you’re a shipping manager and you can make this decision faster? Or, make the call before your ship arrives at the canal? [Some did make this decision, by the way]. What if your goods are stuck on the Ever Given — do you wait it out? Switching suppliers is costly, and you’ve likely got existing contracts in place for much of the cargo. Even if you could fulfill existing contracts and demand on time with a new supplier, what do you do with the delayed cargo expense? What if you’re unsure whether you can sell the duplicate and delayed goods when they reach their originally intended destination?

Well, verifiable credentials — a special kind of digital document that’s cryptographically provable, timestamped and anchored to an immutable ledger at the very moment in time it’s created — can give companies the kind of data needed to make these sorts of decisions. With use over time for trade data, verifiable credentials build a natural reputation for all the things the trade documents are about: suppliers, products, contracts, ports, regulations, tariffs, time between supply chain handoff points, etc.

This type of structured data is of such high integrity that supply chain operators can rely on it and feel empowered to make decisions based on it.

What I’m hoping comes from this global trade disaster is a change in the way supply chain operators make critical decisions. Supply chains of the future will be powered by verifiable credentials, which seamlessly bridge all the data silos that exist today — whether software-created silos or even the paper-based manual, offline silos.

Today, it’s possible to move from a static, critical chain style of management where we often find ourselves in a reactive position to supply chains that look more like an octopus. High integrity data about suppliers and products enables proactive, dynamic decision making in anticipation of and in real time response to shifts in the market — ultimately capturing more revenue opportunities and mitigating risk at the same time.

Takeaways from the Suez Canal Crisis was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Aaron Parecki

How to Sign Users In with IndieAuth

This post will show you step by step how you can let people log in to your website with their own IndieAuth website so you don't need to worry about user accounts or passwords.

This post will show you step by step how you can let people log in to your website with their own IndieAuth website so you don't need to worry about user accounts or passwords.

What is IndieAuth? IndieAuth is an extension of OAuth 2.0 that enables an individual website like someone's WordPress, Gitea or OwnCast instance to become its own identity provider. This means you can use your own website to sign in to other websites that support IndieAuth.

You can learn more about the differences between IndieAuth and OAuth by reading OAuth for the Open Web.

What You'll Need

You'll need a few tools and libraries to sign users in with IndieAuth.

An HTTP client.
A URL parsing library.
A hashing library that supports SHA256.
A library to find <link> tags in HTML.
The ability to show an HTML form to the user.

IndieAuth Flow Summary

Here is a summary of the steps to let people sign in to your website with IndieAuth. We'll dive deeper into each step later in this post.

Present a sign-in form asking the user to enter their server address.
Fetch the URL to discover their IndieAuth server.
Redirect them to their IndieAuth server with the details of your sign-in request in the query string.
Wait for the user to be redirected back to your website with an authorization code in the query string.
Exchange the authorization code for the user's profile information by making an HTTP request to their IndieAuth server.

Step by Step

Let's dive into the details of each step of the flow. While this is meant to be an approachable guide to IndieAuth, eventually you'll want to read the spec to make sure you're handling all the edge cases you might encounter properly.

Show the Sign-In Form

First you'll need to ask the user to enter their server address. You should show a form with a single HTML field, <input type="url">. You need to know at least the server name of the user's website.

To improve the user experience, you should add some JavaScript to automatically add the https:// scheme if the user doesn't type it in.

The form should submit to a route on your website that will start the flow. Here's a complete example of an IndieAuth sign-in form.

<form action="/indieauth/start" method="post"> <input type="url" name="url" placeholder="example.com"> <br> <input type="submit" value="Sign In"> </form>

When the user submits this form, you'll start with the URL they enter and you're ready to begin the IndieAuth flow.

Discover the IndieAuth Server

There are potentially two URLs you'll need to find at the URL the user entered in order to complete the flow: the authorization endpoint and token endpoint.

The authorization endpoint is where you'll redirect the user to so they can sign in and approve the request. Eventually they'll be redirected back to your app with an authorization code in the query string. You can take that authorization code and exchange it for their profile information. If your app wanted to read or write additional data from their website, such as when creating posts using Micropub, it could exchange that code at the second endpoint (the token endpoint) to get an access token.

To find these endpoints, you'll fetch the URL the user entered (after validating and normalizing it first) and look for <link> tags on the web page. Specifically, you'll be looking for <link rel="authorization_endpoint" href="..."> and <link rel="token_endpoint" href="..."> to find the endpoints you need for the flow. You'll want to use an HTML parser or a link rel parser library to find these URLs.
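As a rough C# sketch of what this discovery step could look like (this code is not from the original post, the helper name is made up, and a real client should use a proper HTML or link-rel parsing library rather than this simplified regular expression):

// Minimal discovery sketch, assuming the entered URL has already been validated and normalized.
// A production client should use a real HTML / link-rel parser instead of this regex.
using System;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

public static class IndieAuthDiscovery
{
    public static async Task<string> DiscoverAuthorizationEndpointAsync(HttpClient http, string userUrl)
    {
        var html = await http.GetStringAsync(userUrl);

        // Look for <link rel="authorization_endpoint" href="...">
        var match = Regex.Match(html,
            "<link[^>]+rel=[\"']authorization_endpoint[\"'][^>]+href=[\"']([^\"']+)[\"']",
            RegexOptions.IgnoreCase);

        if (!match.Success)
        {
            return null; // no IndieAuth server advertised at this URL
        }

        // Resolve relative hrefs against the page URL
        return new Uri(new Uri(userUrl), match.Groups[1].Value).ToString();
    }
}

The token endpoint can be found the same way by matching rel="token_endpoint".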

Start the Flow by Redirecting the User

Now you're ready to send the user to their IndieAuth server to have them log in and approve your request.

You'll need to take the authorization endpoint you discovered in the previous request and add a bunch of parameters to the query string, then redirect the user to that URL. Here is the list of parameters to add to the query string:

response_type=code - This tells the server you are doing an IndieAuth authorization code flow.
client_id= - Set this value to the home page of your website the user is signing in to.
redirect_uri= - This is the URL where you want the user to be returned to after they log in and approve the request. It should have the same domain name as the client_id value.
state= - Before starting this step, you should generate a random value for the state parameter and store it in a session and include it in the request. This is for CSRF protection for your app.
code_challenge= - This is the base64-urlencoded SHA256 hash of a random string you will generate. We'll cover this in more detail below.
code_challenge_method=S256 - This tells the server which hashing method you used, which will be SHA256 or S256 for short.
me= - (optional) You can provide the URL the user entered in your sign-in form as a parameter here which can be a hint to some IndieAuth servers that support multiple users per server.
scope=profile - (optional) If you want to request the user's profile information such as their name, photo, or email, include the scope parameter in the request. The value of the scope parameter can be either profile or profile email. (Make sure to URL-encode the value when including it in a URL, so it will end up as profile+email or profile%20email.)

Calculating the Code Challenge

The Code Challenge is a hash of a secret (called the Code Verifier) that you generate before redirecting the user. This lets the server know that the thing that will later make the request for the user's profile information is the same thing that started the flow. You can see the full details of how to create this parameter in the spec, but the summary is:

Create a random string (called the Code Verifier) between 43-128 characters long
Calculate the SHA256 hash of the string
Base64-URL encode the hash to create the Code Challenge

The part that people most often make a mistake with is the Base64-URL encoding. Make sure you are encoding the raw hash value, not a hex representation of the hash like some hashing libraries will return.
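For illustration, here is a small C# sketch of these three steps (not from the original post); note that the raw hash bytes are Base64-URL encoded, never a hex string:

// Sketch of the PKCE values for the IndieAuth flow: a random Code Verifier and its S256 Code Challenge.
using System;
using System.Security.Cryptography;
using System.Text;

public static class Pkce
{
    public static (string Verifier, string Challenge) Create()
    {
        // 1. Random string between 43 and 128 characters (64 random bytes -> 86 base64url chars)
        var bytes = new byte[64];
        RandomNumberGenerator.Fill(bytes);
        var verifier = Base64UrlEncode(bytes);

        // 2. SHA256 hash of the verifier string
        using var sha256 = SHA256.Create();
        var hash = sha256.ComputeHash(Encoding.ASCII.GetBytes(verifier));

        // 3. Base64-URL encode the raw hash bytes (NOT a hex representation of the hash)
        var challenge = Base64UrlEncode(hash);

        return (verifier, challenge);
    }

    private static string Base64UrlEncode(byte[] data) =>
        Convert.ToBase64String(data).TrimEnd('=').Replace('+', '-').Replace('/', '_');
}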

Once you're ready with all these values, add them all to the query string of the authorization endpoint you previously discovered. For example if the user's authorization endpoint is https://indieauth.rocks/authorize because their website is https://indieauth.rocks, then you'd add these parameters to the query string to create a URL like:

https://indieauth.rocks/authorize?response_type=code
  &client_id=https://example-app.com
  &redirect_uri=https://example-app.com/redirect
  &state=a46a0b27e67c0cb53
  &code_challenge=eBKnGb9SEoqsi0RGBv00dsvFDzJNQOyomi6LE87RVSc
  &code_challenge_method=S256
  &me=https://indieauth.rocks
  &scope=profile

Note: The user's authorization endpoint might not be on the same domain as the URL they entered. That's okay! That just means they have delegated their IndieAuth handling to an external service.

Now you can redirect the user to this URL so that they can approve this request at their own IndieAuth server.

Handle the Redirect Back

You won't see the user again until after they've logged in to their website and approved the request. Eventually the IndieAuth server will redirect the user back to the redirect_uri you provided in the authorization request. The authorization server will add two query parameters to the redirect: code and state. For example:

https://example-app.com/redirect?code=af79b83817b317afc9aa &state=a46a0b27e67c0cb53

First you need to double check that the state value in the redirect matches the state value that you included in the initial request. This is a CSRF protection mechanism. Assuming they match, you're ready to exchange the authorization code for the user's profile information.

Exchange the Authorization Code for the User's Profile Info

Now you'll need to make a POST request to exchange the authorization code for the user's profile information. Since this code was returned in a redirect, the IndieAuth server needs an extra confirmation that it was sent back to the right thing, which is what the Code Verifier and Code Challenge are for. You'll make a POST request to the authorization endpoint with the following parameters:

grant_type=authorization_code
code= - The authorization code as received in the redirect.
client_id= - The same client_id as was used in the original request.
redirect_uri= - The same redirect_uri as was used in the original request.
code_verifier= - The original random string you generated when calculating the Code Challenge.

This is described in additional detail in the spec.
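A minimal C# sketch of this exchange might look like the following (not from the original post; it assumes the state value has already been checked and that the endpoint, code, client_id, redirect_uri and code verifier come from the earlier steps):

// Sketch: exchange the authorization code for the user's profile information.
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class IndieAuthTokenExchange
{
    public static async Task<JsonDocument> ExchangeCodeAsync(
        HttpClient http, string authorizationEndpoint,
        string code, string clientId, string redirectUri, string codeVerifier)
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "authorization_code",
            ["code"] = code,
            ["client_id"] = clientId,
            ["redirect_uri"] = redirectUri,
            ["code_verifier"] = codeVerifier
        });

        var response = await http.PostAsync(authorizationEndpoint, form);
        response.EnsureSuccessStatusCode();

        // Contains "me" and, if the profile scope was granted, a "profile" object.
        return JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    }
}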

Assuming everything checks out, the IndieAuth server will respond with the full URL of the user, as well as their stated profile information if requested. The response will look like the below:

{ "me": "https://indieauth.rocks/", "profile": { "name": "IndieAuth Rocks", "url": https://indieauth.rocks/" "photo": "https://indieauth.rocks/profile.jpg" } }

Wait! We're not done yet! Just because you get information in this response doesn't necessarily mean you can trust it yet! There are two important points here:

1. The information under the profile object must ALWAYS be treated as user-supplied data, not treated as canonical or authoritative in any way. This means for example not de-duping users based on the profile.url field or profile.email field.
2. If the me URL is not an exact match of the URL the user initially entered, you need to re-discover the authorization endpoint of the me URL returned in this response and make sure it matches exactly the authorization server you found in the initial discovery step.

You can perform the same discovery step as in the beginning, but this time using the me URL returned in the authorization code response. If that authorization endpoint matches the same authorization endpoint that you used when you started the flow, everything is fine and you can treat this response as valid.

This last validation step is critical, since without it, anyone could set up an authorization endpoint claiming to be anyone else's server. More details are available in the spec.
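A short sketch of that check, reusing the hypothetical DiscoverAuthorizationEndpointAsync helper from the discovery sketch above (again an illustration, not the post's own code):

// Sketch: re-discover the authorization endpoint for the returned "me" URL and
// require it to match the endpoint that was used at the start of the flow.
using System.Net.Http;
using System.Threading.Tasks;

public static class IndieAuthVerification
{
    public static async Task<bool> VerifyReturnedMeAsync(
        HttpClient http, string me, string initialAuthorizationEndpoint)
    {
        var confirmedEndpoint =
            await IndieAuthDiscovery.DiscoverAuthorizationEndpointAsync(http, me);

        // Reject the login if the "me" URL does not delegate to the same server that issued the code.
        return confirmedEndpoint != null && confirmedEndpoint == initialAuthorizationEndpoint;
    }
}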

Now you're done!

The me URL is the value you should use as the canonical and stable identifier for this user. You can use the information in the profile object to augment this user account with information like the user's name or profile information. If the user logs in again later, look up the user from their me URL and update their name/photo/email with the most recent values in the profile object to keep their profile up to date.

Testing Your IndieAuth Client

To test your IndieAuth client, you'll need to find a handful of IndieAuth providers in the wild you can use to sign in to it. Here are some to get you started:

Micro.blog - All micro.blog accounts are IndieAuth identities as well. You can use a free account for testing.
WordPress - With the IndieAuth plugin installed, a WordPress site can be its own IndieAuth server as well.
Drupal - The IndieWeb module for Drupal will let a Drupal instance be its own IndieAuth server.
Selfauth - Selfauth is a single PHP file that acts as an IndieAuth server.

Eventually I will get around to finishing the test suite at indieauth.rocks so that you have a testing tool readily available, but in the mean time the options above should be enough to get you started.

Getting Help

If you get stuck or need help, feel free to drop by the IndieWeb chat to ask questions! Myself and many others are there all the time and happy to help troubleshoot new IndieAuth implementations!

Tuesday, 13. April 2021

MyDigitalFootprint

What superpowers does a CDO need?

Below are essential characteristics any CDO needs, ideal for a job description. After the list, I want to expand on one new superpower all CDOs need, where, oddly, less data is more powerful. Image Source: https://openpolicy.blog.gov.uk/2020/01/17/lab-long-read-human-centred-policy-blending-big-data-and-thick-data-in-national-policy/ Day 0 a CDO must: BE a champion of fac

Below are essential characteristics any CDO needs, ideal for a job description. After the list, I want to expand on one new superpower all CDOs need, where, oddly, less data is more powerful.

Image Source: https://openpolicy.blog.gov.uk/2020/01/17/lab-long-read-human-centred-policy-blending-big-data-and-thick-data-in-national-policy/

Day 0 a CDO must:

BE a champion of fact-based, data-driven decision making. However, complex decisions based on experience, gut instinct, leadership and opinions still play a role, but most decisions can now be underpinned with a firmer foundation.
BE curious about how the business operates and makes money and its drivers of cost, revenue, and customer satisfaction through the lens of data and analytical models.
BE an ambassador of change. Data uncovers assumptions that unpack previous political decisions and moves power. Data does not create change but will create conflict — how this is managed is a critical CDO skill.
BE a great storyteller.
KNOW who is the smartest data scientist in the company, where the most sophisticated models are, and understand and appreciate what those data teams do and how they do it. Managing and getting the best from these teams is a skill everyone needs.
FIGURE out and articulate the value your team can deliver to the business in the next week, month, and quarter. As the CDO, what is the value you bring to your peers and shareholders in the next 5 years?
IMPROVE decision making using data day to day, learn how to reduce risk, and inform the company on achieving and adapting its strategy.
BUILD relationships to source data both within your business and the wider ecosystem. This is both to determine the quality of the data and to be able to better use data and/or roll out solutions that improve quality and decision-making.
KNOW what technical questions to ask and be able to live with the complexity involved in the delivery.

Decision making is a complex affair, and as CDO’s we are there to support. Decisions are perceived to be easier when there is lots of data, and the signal is big, loud and really clear. Big data has a place, but we must not forget small signals from ethnographic data sources. Leadership often does not know what to do with critical and challenging small data, especially when it challenges easy assumptions that big data justifies.

A CDO superpower is to shine a light on all data, without bias

Our superpower is to shine a light on all data, without bias, and to help strategic thinkers, who often put a higher value on quantitative data. They often don't know how to handle data that isn't easily measurable or does not show up in existing paid-for reports. Ethnographic work has a serious perception problem in a data-driven decision world. A key role of the CDO is to uncover all data and its value, not to be biased towards the bigger data set — that is just lazy. I love this image from @triciawang, where the idea of the critical small data set is represented as “thick data.” Do follow her work https://www.triciawang.com/ or that of Genevieve Bell, Kate Crawford and danah boyd (@zephoria).

Source: Nokia’s experience of ignoring small data

Note to the CEO

Digital transformation has built a dependence on data, and the bigger the data set, the more weight it is assumed to have. Often, there is a dangerous assumption made that the risk in a decision is reduced because of the data set's size. It may be true for operational issues and automated decision making but not necessarily for strategy.

As the CEO, you need to determine the half-life of the data used to justify or solidify a decision. Half-life in science is when more than 50 per cent of a substance has undergone a radical change; in business terms, this is when half the value of the data is lost or the error has doubled. The bigger the data set, the quicker (shorter) the half-life will be. Indeed some data's half-life is less than the time it took to collect and store it. It is big but it really has no value. For small data sets, such as ethnographic data, the half-life can be longer than a 3 to 5 year strategic planning cycle. Since some data might be small and could be a signal to your future, supporting a CDO who puts equal weight on all data is critical to success.

Monday, 12. April 2021

Damien Bod

Securing Blazor Web assembly using Cookies and Auth0

The article shows how an ASP.NET Core Blazor web assembly UI hosted in an ASP.NET Core application can be secured using cookies. Auth0 is used as the identity provider. The trusted application is protected using the Open ID Connect code flow with a secret and using PKCE. The API calls are protected using the secure […]

The article shows how an ASP.NET Core Blazor web assembly UI hosted in an ASP.NET Core application can be secured using cookies. Auth0 is used as the identity provider. The trusted application is protected using the Open ID Connect code flow with a secret and using PKCE. The API calls are protected using the secure cookie and anti-forgery tokens to protect against CSRF. This architecture is also known as the Backends for Frontends (BFF) Pattern.

Code: https://github.com/damienbod/SeparatingApisPerSecurityLevel

Blogs in this series

Securing Blazor Web assembly using Cookies and Azure AD
Securing Blazor Web assembly using Cookies and Auth0

The application was built as described in the previous blog in this series. Please refer to that blog for implementation details about the WASM application, user session and anti-forgery tokens. Setting up the Auth0 authentication and the differences are described in this blog.

An Auth0 account is required and a Regular Web Application was setup for this. This is not an SPA application and must always be deployed with a backend which can keep a secret. The WASM client can only use the APIs on the same domain and uses cookies. All application authentication is implemented in the trusted backend and the secure data is encrypted in the cookie.

The Microsoft.AspNetCore.Authentication.OpenIdConnect Nuget package is used to add the authentication to the ASP.NET Core application. User secrets are used for the configuration which holds the sensitive Auth0 data.

<Project Sdk="Microsoft.NET.Sdk.Web"> <PropertyGroup> <TargetFramework>net5.0</TargetFramework> <WebProject_DirectoryAccessLevelKey>1</WebProject_DirectoryAccessLevelKey> <UserSecretsId>de0b7f31-65d4-46d6-8382-30c94073cf4a</UserSecretsId> </PropertyGroup> <ItemGroup> <ProjectReference Include="..\Client\BlazorAuth0Bff.Client.csproj" /> <ProjectReference Include="..\Shared\BlazorAuth0Bff.Shared.csproj" /> </ItemGroup> <ItemGroup> <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.Server" Version="5.0.5" /> <PackageReference Include="Microsoft.AspNetCore.Authentication.JwtBearer" Version="5.0.5" NoWarn="NU1605" /> <PackageReference Include="Microsoft.AspNetCore.Authentication.OpenIdConnect" Version="5.0.5" NoWarn="NU1605" /> <PackageReference Include="IdentityModel" Version="5.1.0" /> <PackageReference Include="IdentityModel.AspNetCore" Version="3.0.0" /> </ItemGroup> </Project>

The ConfigureServices method in the Startup class of the ASP.NET Core Blazor server application is used to add the authentication. The Open ID Connect code flow with PKCE and a client secret is used for the default challenge and a cookie is used to persist the tokens if authenticated. The Blazor client WASM uses the cookie to access the API.

The Open ID Connect client is configured to match the Auth0 settings for the application. A client secret is required and used to authenticate the application. The PKCE option is set explicitly to use PKCE with the client configuration. The required scopes are set so that the profile and an email are returned. These are standard OIDC scopes. The user profile API is used to return the profile data and so keep the id_token small. The tokens are persisted. If the sign-in is successful, the data is persisted to an identity cookie. The logout client is configured as documented by Auth0 in its example.

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie(options =>
{
    options.Cookie.Name = "__Host-BlazorServer";
    options.Cookie.SameSite = SameSiteMode.Lax;
})
.AddOpenIdConnect(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    options.Authority = $"https://{Configuration["Auth0:Domain"]}";
    options.ClientId = Configuration["Auth0:ClientId"];
    options.ClientSecret = Configuration["Auth0:ClientSecret"];
    options.ResponseType = OpenIdConnectResponseType.Code;
    options.Scope.Clear();
    options.Scope.Add("openid");
    options.Scope.Add("profile");
    options.Scope.Add("email");
    options.CallbackPath = new PathString("/signin-oidc");
    options.ClaimsIssuer = "Auth0";
    options.SaveTokens = true;
    options.UsePkce = true;
    options.GetClaimsFromUserInfoEndpoint = true;
    options.TokenValidationParameters.NameClaimType = "name";
    options.Events = new OpenIdConnectEvents
    {
        // handle the logout redirection
        OnRedirectToIdentityProviderForSignOut = (context) =>
        {
            var logoutUri = $"https://{Configuration["Auth0:Domain"]}/v2/logout?client_id={Configuration["Auth0:ClientId"]}";
            var postLogoutUri = context.Properties.RedirectUri;
            if (!string.IsNullOrEmpty(postLogoutUri))
            {
                if (postLogoutUri.StartsWith("/"))
                {
                    // transform to absolute
                    var request = context.Request;
                    postLogoutUri = request.Scheme + "://" + request.Host + request.PathBase + postLogoutUri;
                }
                logoutUri += $"&returnTo={ Uri.EscapeDataString(postLogoutUri)}";
            }

            context.Response.Redirect(logoutUri);
            context.HandleResponse();
            return Task.CompletedTask;
        }
    };
});

The Configure method is implemented to require authentication. The UseAuthentication extension method is required. Our endpoints are added as in the previous blog.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // IdentityModelEventSource.ShowPII = true;
    JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();

    app.UseHttpsRedirection();
    app.UseBlazorFrameworkFiles();
    app.UseStaticFiles();
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapRazorPages();
        endpoints.MapControllers();
        endpoints.MapFallbackToPage("/_Host");
    });
}

The Auth0 configuration can be downloaded with the sample application, or you can configure this directly in the Auth0 UI and copy the values. Three properties are required. I added these to the user secrets during development. If I deployed this to Azure, I would add these to an Azure Key Vault and use managed identities to access the secrets (a sketch follows the configuration below).

"Auth0": { "Domain": "your-domain-in-auth0", "ClientId": "--in-secrets--", "ClientSecret": "--in-secrets--" }

Now everything runs and ASP.NET Core Blazor BFF can be used with Auth0. No access tokens are needed in the browser. This was really simple to configure, and only standard ASP.NET Core Nuget packages are used. Auth0 supports security best practices and is really easy to set up. In production I would enforce MFA and, if possible, FIDO2.

Links

Securing Blazor Web assembly using Cookies and Azure AD

https://auth0.com/

https://docs.microsoft.com/en-us/aspnet/core/blazor/components/prerendering-and-integration?view=aspnetcore-5.0&pivots=webassembly#configuration

https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery

https://docs.microsoft.com/en-us/aspnet/core/blazor/security

https://docs.microsoft.com/en-us/aspnet/core/blazor/security/webassembly/additional-scenarios

Sunday, 11. April 2021

Virtual Democracy

On Science Preprints: academic publishing takes a quantum leap into the present

Academic journals are becoming the vacuum tubes of the Academy 2.0 enterprise; they are already described and defined more by their limitations than by their advantages. In their early decades, they served us well, until they didn’t. After the transition to an academy-internal publication economy, powered by ePrint services hosted across the planet, journals will not be missed. That individual academic libraries should need to continue to pony up for thousands of journal subscriptions for decades to come is now an idea only in the Xeroxed business models of for-profit publishers. Everyone else is looking for a way out; and the internet awaits.

Saturday, 10. April 2021

Bill Wendel's Real Estate Cafe

Housing recovery or iCovery? 10 iFactors driving unsustainable price spikes


Anyone reading the Boston Globe’s Spring House Hunt articles this week online or in print this weekend?  To put them into context, sharing my comment…

The post Housing recovery or iCovery? 10 iFactors driving unsustainable price spikes first appeared on Real Estate Cafe.

Tuesday, 06. April 2021

The Dingle Group

SSI in IoT, The SOFIE Project


Decentralized Identifiers and Verifiable Credentials are starting to make their way into the world of IoT. There are many ongoing research projects funded by EU and private sector organizations as well as an increasing number of DLT based IoT projects that are including DIDs and VCs as a core component of their solutions.

For the 22nd Vienna Digital Identity Meetup* we hosted three of the lead researchers from the EU H2020 funded The SOFIE Project. The SOFIE Project wrapped up at the end of last year; a key part of this research focused on the use of SSI concepts in three IoT sectors (energy, supply chain, and mixed reality gaming), targeting the integration of SSI without requiring changes to the existing IoT systems.

Our three presenters were from two European research universities: Aalto University (Dr. Dmitrij Lagutin and Dr. Yki Kortesniemi) and Athens University of Economics and Business (Dr. Nikos Fotiou).

The presentation covered four areas of interest for SSI in the IoT sector:

DIDs and VCs on constrained devices

Access control using the W3C Web of Things (WoT) Thing Description

did:self method

Ephemeral DIDs and Ring signatures

Each of these research areas is integrated into real-world use cases and connected to the sectors that were part of the SOFIE project’s mandate.

(Note: There were some ‘technical issues’ at the start of the event and the introduction part of the presentation has been truncated, but the good news is that all of our presenters’ content is there.)

To listen to a recording of the event please check out the link: https://vimeo.com/530442817

Time markers:

0:00:00 - SOFIE Project Introduction, (Dr. Dmitrij Lagutin)

0:02:33 - DIDs and VCs on constrained devices

0:14:00 - Access Control for WoT using VCs (Dr. Nikos Fotiou)

0:33:23 - did:self method

0:46:00 - Ephemeral DIDs and Ring Signatures (Dr. Yki Kortesniemi)

1:07:29 - Wrap-up & Upcoming Events


Resources

The SOFIE Project Slide deck: download

And as a reminder, we continue to hold online-only events.

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*The Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, legal, societal and technology stakeholders on the new opportunities that arise with a high-assurance digital identity created by reduced risk and strengthened provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.

Monday, 05. April 2021

Damien Bod

Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic


This article shows how verifiable credentials can be created in ASP.NET Core for decentralized identities using the Trinsic platform, which is a Self-Sovereign Identity implementation with APIs to integrate. The verifiable credentials can be downloaded to your digital wallet if you have access and can be used in a separate application which understands the Trinsic APIs.

Code: https://github.com/swiss-ssi-group/TrinsicAspNetCore

Blogs in this series

Getting started with Self Sovereign Identity SSI

Creating Verifiable credentials in ASP.NET Core for decentralized identities using Trinsic

Verifying Verifiable Credentials in ASP.NET Core for Decentralized Identities using Trinsic

Setup

We want to implement the flow shown in the following figure. The National Driving License application is responsible for issuing driver licenses and administering licenses for users who have authenticated correctly. The user can see his or her driver license and a verifiable credential displayed as a QR code, which can be used to add the credential to a digital wallet. When the application generates the credential, it adds the credential DID to the blockchain ledger with the cryptographic proof of the issuer and the document. When you scan the QR code, the DID gets validated and is added to the wallet along with the requested claims. The digital wallet must be able to find the DID and the schema on the correct network and needs to search for the ledger on the correct blockchain. A good wallet should take care of this for you. The schema is required so that the data in the DID document can be understood.

Trinsic Setup

Trinsic is used to connect to the blockchain and to create the DIDs and credentials in this example. Trinsic provides good getting-started docs.

In Trinsic, you need to create an organisation for the Issuer application.

Click on the details of the organisation to get the API key. This is required for the application. This API key cannot be replaced or updated, so if you make a mistake and lose it, or commit it to source code, you would have to create a new organisation. It is also important to note the network. This is where the DIDs for the credentials produced by this issuer can be found.

To issue credentials, you need to create a template or schema with the claims that are issued in the credential using the template. The issuer application provides the values for these claims.

Implementing the ASP.NET Core Issuer

The verifiable credentials issuer is implemented in an ASP.NET Core application using Razor Pages and Identity. This application needs to authenticate the users before issuing a verifiable credential for the user. FIDO2 with the correct authentication flow would be a good choice, as this would protect against phishing. You could use credentials as well, if the users of the application had a trusted ID; you would still have to protect against phishing. The quality of the credentials issued depends on the security of the issuing application. If the application has weak user authentication, then the credentials cannot be trusted. For banks, government IDs or driving licenses, a high level of security is required. OpenID Connect FAPI with FIDO2 would make a good solution to authenticate the user. A user with a trusted government-issued credential together with FIDO2 would also be good.

The ASP.NET Core application initializes the services and adds the Trinsic client using the API key from the organisation which issues the credentials. The Trinsic.ServiceClients Nuget package is used for the Trinsic integration. ASP.NET Core Identity is used to add and remove users and to add driving licenses for the users in the administration part of the application. MFA should be set up, but as this is a demo, I have not enforced it.

public void ConfigureServices(IServiceCollection services)
{
    services.AddScoped<TrinsicCredentialsService>();
    services.AddScoped<DriverLicenseService>();

    services.AddTrinsicClient(options =>
    {
        // For CredentialsClient and WalletClient
        // API key of National Driving License (Organisation which does the verification)
        options.AccessToken = Configuration["Trinsic:ApiKey"];
        // For ProviderClient
        // options.ProviderKey = providerKey;
    });

    services.AddDbContext<ApplicationDbContext>(options =>
        options.UseSqlServer(
            Configuration.GetConnectionString("DefaultConnection")));

    services.AddDatabaseDeveloperPageExceptionFilter();

    services.AddIdentity<IdentityUser, IdentityRole>(
            options => options.SignIn.RequireConfirmedAccount = false)
        .AddEntityFrameworkStores<ApplicationDbContext>()
        .AddDefaultTokenProviders();

    services.AddSingleton<IEmailSender, EmailSender>();
    services.AddScoped<IUserClaimsPrincipalFactory<IdentityUser>, AdditionalUserClaimsPrincipalFactory>();

    services.AddAuthorization(options =>
    {
        options.AddPolicy("TwoFactorEnabled", x => x.RequireClaim("amr", "mfa"));
    });

    services.AddRazorPages();
}

User secrets are used to add the secrets required for the application in development. The secrets can be added to the JSON secrets file instead of the source code. If deploying this to Azure, the secrets would be read from Azure Key Vault. The application requires the Trinsic API key and the credential template definition ID created in Trinsic Studio.

{ "ConnectionStrings": { "DefaultConnection": "--db-connection-string--" }, "Trinsic": { "ApiKey": "--your-api-key-organisation--", "CredentialTemplateDefinitionId": "--Template-credential-definition-id--" } }

The driving license service is responsible for creating a driver license for each user. This is just example logic and is not related to SSI. (A sketch of the DriverLicense entity used here follows the service code below.)

using Microsoft.EntityFrameworkCore;
using NationalDrivingLicense.Data;
using System.Threading.Tasks;

namespace NationalDrivingLicense
{
    public class DriverLicenseService
    {
        private readonly ApplicationDbContext _applicationDbContext;

        public DriverLicenseService(ApplicationDbContext applicationDbContext)
        {
            _applicationDbContext = applicationDbContext;
        }

        public async Task<bool> HasIdentityDriverLicense(string username)
        {
            if (!string.IsNullOrEmpty(username))
            {
                var driverLicense = await _applicationDbContext.DriverLicenses.FirstOrDefaultAsync(
                    dl => dl.UserName == username && dl.Valid == true);

                if (driverLicense != null)
                {
                    return true;
                }
            }

            return false;
        }

        public async Task<DriverLicense> GetDriverLicense(string username)
        {
            var driverLicense = await _applicationDbContext.DriverLicenses.FirstOrDefaultAsync(
                dl => dl.UserName == username && dl.Valid == true);

            return driverLicense;
        }

        public async Task UpdateDriverLicense(DriverLicense driverLicense)
        {
            _applicationDbContext.DriverLicenses.Update(driverLicense);
            await _applicationDbContext.SaveChangesAsync();
        }
    }
}
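
The DriverLicense entity itself is not shown in this post. Based on the properties used in the service, the credentials service and the Razor page, a sketch could look like the following (the property types and the key are assumptions):

using System;

namespace NationalDrivingLicense.Data
{
    // Sketch of the DriverLicense entity inferred from the properties used in this post;
    // the exact types and any additional properties are assumptions.
    public class DriverLicense
    {
        public Guid Id { get; set; }   // assumed key
        public string UserName { get; set; }
        public string Name { get; set; }
        public string FirstName { get; set; }
        public DateTime DateOfBirth { get; set; }
        public DateTime IssuedAt { get; set; }
        public string LicenseType { get; set; }
        public string Issuedby { get; set; }
        public bool Valid { get; set; }
        public string DriverLicenseCredentials { get; set; }
    }
}

The DriverLicenseCredentials property caches the Trinsic offer URL so the credential is only created once, which matches the check in the credentials service below.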

The Trinsic credentials service is responsible for creating the verifiable credentials. It uses the user’s driver license and creates a new credential through the Trinsic client API using the CreateCredentialAsync method. The claims must match the template created in the studio. A Trinsic-specific URL is returned, which can be used to create a QR code that can be scanned with a Trinsic digital wallet.

public class TrinsicCredentialsService
{
    private readonly ICredentialsServiceClient _credentialServiceClient;
    private readonly IConfiguration _configuration;
    private readonly DriverLicenseService _driverLicenseService;

    public TrinsicCredentialsService(ICredentialsServiceClient credentialServiceClient,
        IConfiguration configuration,
        DriverLicenseService driverLicenseService)
    {
        _credentialServiceClient = credentialServiceClient;
        _configuration = configuration;
        _driverLicenseService = driverLicenseService;
    }

    public async Task<string> GetDriverLicenseCredential(string username)
    {
        if (!await _driverLicenseService.HasIdentityDriverLicense(username))
        {
            throw new ArgumentException("user has no valid driver license");
        }

        var driverLicense = await _driverLicenseService.GetDriverLicense(username);

        if (!string.IsNullOrEmpty(driverLicense.DriverLicenseCredentials))
        {
            return driverLicense.DriverLicenseCredentials;
        }

        string connectionId = null; // Can be null | <connection identifier>
        bool automaticIssuance = false;
        IDictionary<string, string> credentialValues = new Dictionary<String, String>()
        {
            {"Issued At", driverLicense.IssuedAt.ToString()},
            {"Name", driverLicense.Name},
            {"First Name", driverLicense.FirstName},
            {"Date of Birth", driverLicense.DateOfBirth.Date.ToString()},
            {"License Type", driverLicense.LicenseType}
        };

        CredentialContract credential = await _credentialServiceClient
            .CreateCredentialAsync(new CredentialOfferParameters
            {
                DefinitionId = _configuration["Trinsic:CredentialTemplateDefinitionId"],
                ConnectionId = connectionId,
                AutomaticIssuance = automaticIssuance,
                CredentialValues = credentialValues
            });

        driverLicense.DriverLicenseCredentials = credential.OfferUrl;
        await _driverLicenseService.UpdateDriverLicense(driverLicense);

        return credential.OfferUrl;
    }
}

The DriverLicenseCredentials Razor page uses the Trinsic service and returns the credential offer URL if the user has a valid driver license.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.RazorPages;
using NationalDrivingLicense.Data;

namespace NationalDrivingLicense.Pages
{
    public class DriverLicenseCredentialsModel : PageModel
    {
        private readonly TrinsicCredentialsService _trinsicCredentialsService;
        private readonly DriverLicenseService _driverLicenseService;

        public string DriverLicenseMessage { get; set; } = "Loading credentials";
        public bool HasDriverLicense { get; set; } = false;
        public DriverLicense DriverLicense { get; set; }
        public string CredentialOfferUrl { get; set; }

        public DriverLicenseCredentialsModel(TrinsicCredentialsService trinsicCredentialsService,
            DriverLicenseService driverLicenseService)
        {
            _trinsicCredentialsService = trinsicCredentialsService;
            _driverLicenseService = driverLicenseService;
        }

        public async Task OnGetAsync()
        {
            DriverLicense = await _driverLicenseService.GetDriverLicense(HttpContext.User.Identity.Name);

            if (DriverLicense != null)
            {
                var offerUrl = await _trinsicCredentialsService
                    .GetDriverLicenseCredential(HttpContext.User.Identity.Name);
                DriverLicenseMessage = "Add your driver license credentials to your wallet";
                CredentialOfferUrl = offerUrl;
                HasDriverLicense = true;
            }
            else
            {
                DriverLicenseMessage = "You have no valid driver license";
            }
        }
    }
}

The Razor page template displays the QR code and information about the driver license issued to the logged-in user.

@page
@model NationalDrivingLicense.Pages.DriverLicenseCredentialsModel
@{ }

<h3>@Model.DriverLicenseMessage</h3>
<br />
<br />
@if (Model.HasDriverLicense)
{
    <div class="container-fluid">
        <div class="row">
            <div class="col-sm">
                <div class="qr" id="qrCode"></div>
            </div>
            <div class="col-sm">
                <div>
                    <img src="~/ndl_car_01.png" width="200" alt="Driver License">
                    <div>
                        <b>Driver Licence: @Html.DisplayFor(model => model.DriverLicense.UserName)</b>
                        <hr />
                        <dl class="row">
                            <dt class="col-sm-4">Issued</dt>
                            <dd class="col-sm-8">
                                @Model.DriverLicense.IssuedAt.ToString("MM/dd/yyyy")
                            </dd>
                            <dt class="col-sm-4">
                                @Html.DisplayNameFor(model => model.DriverLicense.Name)
                            </dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.Name)
                            </dd>
                            <dt class="col-sm-4">First Name</dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.FirstName)
                            </dd>
                            <dt class="col-sm-4">License Type</dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.LicenseType)
                            </dd>
                            <dt class="col-sm-4">Date of Birth</dt>
                            <dd class="col-sm-8">
                                @Model.DriverLicense.DateOfBirth.ToString("MM/dd/yyyy")
                            </dd>
                            <dt class="col-sm-4">Issued by</dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.Issuedby)
                            </dd>
                            <dt class="col-sm-4">
                                @Html.DisplayNameFor(model => model.DriverLicense.Valid)
                            </dt>
                            <dd class="col-sm-8">
                                @Html.DisplayFor(model => model.DriverLicense.Valid)
                            </dd>
                        </dl>
                    </div>
                </div>
            </div>
        </div>
    </div>
}

@section scripts {
    <script src="~/js/qrcode.min.js"></script>
    <script type="text/javascript">
        new QRCode(document.g