Last Update 12:23 AM November 24, 2020 (UTC)

Identity Blog Catcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!!!

Monday, 23. November 2020

John Philpin : Lifestream

Apple’s Head of Security Indicted in Santa Clara County for

Apple’s Head of Security Indicted in Santa Clara County for making Illegal Donations to the Sheriff’s Office to gain a Weapons Permit. Well that can’t be good. Can it?

Ben Werdmüller

TODO

I was awake in the early hours of the morning, staring at nothing, my heart racing. It was a bit like The Queen's Gambit, except instead of a chessboard on the ceiling, there was a kanban board, with a huge backlog of things I knew I'd failed to do.

This morning, I got up, turned my ceiling kanban board into a real one in Notion, and got to work. The world feels more manageable in the cold light of day. I'm making progress.

I'm not sure how to deal with the things that don't quite fit on a task list. You can't drag life from column to column or build it into a database. You've got to live it. Or at least, that's what I've always thought.

I'm more of an intuitive thinker than a planner. I always have been. I go off-recipe when I cook; I meander when I travel; I play with my code. Sometimes it works out and I discover things I never would have encountered otherwise; sometimes it doesn't, and I find myself in a mess of my own making. But I've started to write scripts for the hard stuff, and to my horror, it really helps.

I was stressed out about having to give difficult feedback to a colleague, but I wrote out what I was going to say ahead of time in detail, and it turned out to be nowhere near as bad as I expected it to be. I had to spend a day cold calling customers - an introvert's worst nightmare - but having my script in front of me meant my numbers were startlingly high.

Perhaps the most important thing is that writing the script is a way to cut through the fear. Getting something down in writing works: it's a step in the right direction. Then comes editing, and iteration. Which is far better than staring at the ceiling at 3am beating yourself up for being behind.


Identity Woman

In a digital age, how can we reconnect values, principles and rules?

Who is the “we” – this piece is co-authored by Kaliya Young and Tony Fish who together have worked for over 45 years on identity and personal data. For this article, we are looking at the role of values, principles and rules within the industry and sectors seeking to re-define, re-imagine and create ways for […]

The post In a digital age, how can we reconnect values, principles and rules? appeared first on Identity Woman.


Tim Bouma's Blog

Tony Fish Drummond Reed This is a tricky balance and you are making me aware of a use case I hadn’t…

Tony Fish Drummond Reed This is a tricky balance and you are making me aware of a use case I hadn’t really considered. This is by no means intended to strip away how you can present yourself. If that was the case, we will have lost all of our gains. Rather, there should be a spectrum of how one can present or accept. Unfortunately we know that there is no black and white; some acceptance rules are unreasonably unequivocal and we are forced to bend the rules (haven’t we all?) for a better outcome. But that is also exploited. In some cases it’s worth burning the registry office to keep everyone safe, but I am hoping there is a middle ground where individuals truly control what they have and reveal nothing to their disadvantage.

In the end, the world is not perfect, and we must be diligent that whatever we create, however good it is, can be used for imperfect ends.


John Philpin : Lifestream

Very early days … BUT … if you are interested in joining the

Very early days … BUT … if you are interested in joining the People First Network …. we are here :

https://my.peoplefirst.network/share/hSmPm7enh-CQp6gt

No Facebook, no Twitter, no algorithms …. just people.


Is there any movie that features Miami that doesn’t open wit

Is there any movie that features Miami that doesn’t open with a camera out at sea - rushing toward the shore line across the water at speed?

🎥🎬📽


Leisure centre Oasis took their name from closes down Now

Leisure centre Oasis took their name from closes down

Now if only the band would as well, the world would improve at a stroke.

Sunday, 22. November 2020

John Philpin : Lifestream

Winning Read All About It

Winning Read All About It

Where is the ‘webp’ format suddenly coming from - and if it

Where is the ‘webp’ format suddenly coming from - and if it is THAT standard - why don’t all apps recognize it?


Ben Werdmüller

Unlearning disruption

I want to unlearn the definition of "disruption".

Disruption in the Clayton Christensen sense is all about removing an incumbent business from its perch by reaching an audience it has overlooked and growing from there. It's about building a business by out-competing another business. As far as business goes, it's good strategy, but it doesn't do much to change the status quo. You may have out-innovated someone else's company, but the rules of business remain in place.

The thing is, the rules of business aren't working. The systems we have in place depend on outrageous inequality and are enforced by police who gun down people in the street with relative impunity. They incentivize keeping millions of people homeless so that others can grow wealthier. They enforce a widening divide between people who are inside the system and people who are locked out in such a way that the only way to beat the system for your own well-being is to perpetuate it.

Originally, the web was a movement. It's hard to remember now, but allowing everyone to publish was a major social change; giving everyone a voice was a new idea that many people argued (and continue to argue) against. Previously, publishing was a privilege that was bestowed by a circle of predominantly white men. Now, any old riff-raff (or to put it another way, anyone regardless of whether or not they had received permission) could stick up a website and be read by millions. It was a revolution.

It was that revolution - not the ability to make money or the opportunity to create new businesses - that made me fall in love with the web.

Of course, what came next was hardly revolutionary. The existing gatekeepers fell, and were replaced with yet more gatekeepers, who used the global nature of the web to become bigger and more powerful than their predecessors. The excitement of empowering communities all over the world gave way to a wave of people who were excited about building bigger companies and generating more wealth than ever before. The incumbent structures were disrupted in favor of yet more of the same old business.

I want to return to that revolutionary spirit and reclaim the web's radical core.

That doesn't mean I want to turn back the clock. The web movement was, itself, predominantly white and male. As a direct outcome, it tended to overlook the abuse and systemic oppression overwhelmingly experienced by women, communities of color, and LGBTQIA+ communities. As a whole, it was Euro-centric and dismissive of the global south. It's not revolutionary if the same old faces are in charge: the only way the movement can succeed is through radical inclusion. Leadership must be open to people of all backgrounds and contexts; ownership of the process, as well as its outcomes, must be truly democratic.

But we badly need to get back to the business of disrupting global capitalism itself, in order to create something that truly works for everyone. To do so, we must be informed by the past, but ready to build something genuinely new. In the same way that allowing everyone to publish radically changed the cultural landscape forever, we need to change nothing less than who is allowed to be an owner of the processes that run the world. The flow of money; the flow of political power; the flow of permission. Speech was just the first step.

This North Star of real, radical change is the definition of disruption I want to be governed by. I want to help create a more democratic, more equal world, where authority is devolved to all of us. It's not about getting rich. It's about sharing power.

 

Photo by Gayatri Malhotra on Unsplash


Tim Bouma's Blog

Next Stop: Global Verification Network

Next Stop: A Global Verification Network

Photo by Clem Onojeghuo on Unsplash

Author’s note: This is my opinion only and does not reflect that of my employer or any organization with which I am involved. As this is an opinion, I take full responsibility for any implied, explicit, or unconscious bias. I am open to feedback and correction; this opinion is subject to change at any time.

We’re almost there for truly global trusted interoperability. We almost have all of the networks we need. Let’s go through the networks we already have or will have soon (please note — I am only focusing on electronic networks, not physical or social networks).

Global Communication Network — The Internet as we know it today. Conceptualized as a singular, ubiquitous thing that we take for granted, it is actually a network of networks: an amalgam of protocols and technologies abstracted and unified, bound by a set of rules known as the Internet Protocol. We can just communicate with one another.

Global Location Network — This is the Global Positioning System (GPS). GPS is so embedded in our lives — baked into the chips that we wear and take with us (watches, Fitbits, cycling computers, etc.) — that we no longer notice its presence. We can just know where we are.

Global Monetary Network — This network is still emerging. Bitcoin is the frontrunner, but there are contenders and competitors, such as Central Bank Digital Currencies (CBDCs). However this plays out, we will soon be able to exchange monetary value with one another without the backing of governments and without relying on the financial intermediaries we have used for centuries.

So what is the next stop for the network? It’s this:

Global Verification Network — A network to independently verify without reliance on trusted intermediaries. Simply put, someone presents you with something — a claim, a statement, or whatever — and you will be able to prove that it is true without accepting it at face value or calling home to a centralized system that could deny you service, surveil you, or give you a false confirmation (for whatever reason). The business of trust can then be between you and the presenter, and you decide what you need to independently verify.
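To make that concrete, here is a minimal sketch of local, "no call home" verification, assuming the presenter hands over a claim plus an Ed25519 signature and you have already obtained the issuer's public key through a channel you trust. The function names and the use of Python's cryptography package are purely illustrative, not part of Tim's proposal:

# Hypothetical sketch: verify a presented claim locally, with no central service.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_claim(claim: bytes, signature: bytes, issuer_public_key: bytes) -> bool:
    """Return True only if the claim was signed by the holder of the issuer's key."""
    key = Ed25519PublicKey.from_public_bytes(issuer_public_key)
    try:
        key.verify(signature, claim)  # raises InvalidSignature on failure
        return True
    except InvalidSignature:
        return False

# Nothing here phones home, surveils the presenter, or can deny service;
# deciding which issuer keys to trust stays between you and the presenter.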

The exact capabilities of this global verification network are still to be determined, but they are becoming clearer every day. Much of what is required already exists as siloed, bespoke add-ons to the Internet as we know it today (TLS, etc.). Further, the cryptography that will enable this global verification network has existed for years, if not decades.

The hardest part ahead is not the technology; it’s the wholesale re-conceptualization of what is required for a global verification network that puts the power of the network back into the endpoints — that is, you and me.

In the coming weeks I will be providing more detail, but what I want you to take away from this post is that the next major stop for networks is a global verification network.


John Philpin : Lifestream

”This is not a jungle this is a garden at the Google campu

”This is not a jungle this is a garden at the Google campus.”

I’ll come back with the credit later this week. Until then - any ideas?


reb00ted

How is the climate?

Just ask this ancient time capsule found in Ireland.


Simon Willison

Weeknotes: datasette-indieauth, datasette-graphql, PyCon Argentina

Last week's weeknotes took the form of my Personal Data Warehouses: Reclaiming Your Data talk write-up, which represented most of what I got done that week. This week I mainly worked on datasette-indieauth, but I also gave a keynote at PyCon Argentina and released a version of datasette-graphql with a small security fix.

datasette-indieauth

I wrote about this project in detail in Implementing IndieAuth for Datasette - it was inspired by last weekend's IndieWebCamp East and provides Datasette with a password-less sign in option with the least possible amount of configuration.

Shortly after releasing version 1.0 of the plugin I realized it had a critical security vulnerability, where a malicious authorization server could fake a sign-in as any user! I fixed this in version 1.1 and released that along with a GitHub security advisory: Implementation trusts the "me" field returned by the authorization server without verifying it.

The IndieAuth community has an active #dev chat channel, available in Slack and through IRC and their web chat interface. I've had some very productive conversations there about parts of the specification that I found confusing.

datasette-graphql

This week I also issued a security advisory for my datasette-graphql plugin. This one was thankfully much less severe: I realized that the plugin was leaking details of the schema of otherwise private databases, if they were protected by Datasette's permission system.

Here's the advisory: datasette-graphql leaks details of the schema of private database files. It's important to note that the actual content of the tables was not exposed - just the schema details such as the names of the tables and columns.

To my knowledge no-one has installed that plugin on an internet-exposed Datasette instance that includes private databases, so I don't think anyone was affected by the vulnerability. The fix is available in datasette-graphql 1.2.

Also in that release: I've added table action items that link to an example GraphQL query for each table. This is a pretty neat usability enhancement, since the example includes all of the non-foreign-key columns making it a useful starting point for iterating on a query. You can try that out starting on this page.

Keynoting PyCon Argentina

On Friday I presented a keynote at PyCon Argentina. I actually recorded this several weeks ago, but the keynote was broadcast live on YouTube so I got to watch the talk and post real-time notes and links to an accompanying Google Doc, which I also used for Q&A after the talk.

The conference was really well organized, with top-notch production values. They made a pixel-art version of me for the poster!

The video isn't available yet, but I'll link to it when they share it. I'm particularly excited about the professionally translated subtitles en Español.

Miscellaneous

Since Datasette depends on Python 3.6 these days, I decided to try out f-strings. I used flynt to automatically convert all of my usage of .format() to use f-strings instead. Flynt is built on top of astor, a really neat looking library for more productively manipulating Python source code using Python's AST.
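For example, the kind of mechanical rewrite flynt performs looks roughly like this (an illustrative snippet, not taken from the Datasette codebase):

row_count, table_name = 42, "fires"

# Before: .format() style
message = "Imported {} rows into {}".format(row_count, table_name)

# After flynt's rewrite: an f-string (Python 3.6+)
message = f"Imported {row_count} rows into {table_name}"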

I've long been envious of the JavaScript community's aggressive use of codemods for automated refactoring, so I'm excited to see that kind of thing become more common in the Python community.

datasette-search-all is my plugin that returns search results from ALL attached searchable database tables, using a barrage of fetch() calls. I bumped it to a 1.0 release adding loading indicators, more reliable URL construction (with the new datasette.urls utilities) and a menu item in Datasette's new navigation menu.

Releases in the past two weeks

datasette-graphql 1.2 - 2020-11-21
datasette-indieauth 1.2 - 2020-11-19
datasette-indieauth 1.1 - 2020-11-19
datasette-indieauth 1.0 - 2020-11-18
datasette-indieauth 0.3.2 - 2020-11-18
datasette-indieauth 0.3.1 - 2020-11-18
datasette-indieauth 0.3 - 2020-11-18
datasette-indieauth 0.3a0 - 2020-11-17
datasette-indieauth 0.2a0 - 2020-11-15
datasette-indieauth 0.1a0 - 2020-11-15
datasette-copyable 0.3.1 - 2020-11-14
datasette-search-all 1.0 - 2020-11-12
sqlite-utils 3.0 - 2020-11-08

Saturday, 21. November 2020

Simon Willison

datasette-graphql 1.2

datasette-graphql 1.2

A new release of the datasette-graphql plugin, fixing a minor security flaw: previous versions of the plugin could expose the schema (but not the actual data) of tables in databases that were otherwise protected by Datasette's permission system.

Via @simonw


Ben Werdmüller

Where's my flying car?

The question is a trope of the modern age. We were promised flying cars half a century ago; where are they? Every so often someone even tries to answer that question. Passenger drones, manned quadcopters, and even jetpacks have tried to bring this 1950s vision of the future to life. The truth is, they're all doomed to fail. Flying cars are the modern equivalent of a faster horse. They're not w

The question is a trope of the modern age. We were promised flying cars half a century ago; where are they?

Every so often someone even tries to answer that question. Passenger drones, manned quadcopters, and even jetpacks have tried to bring this 1950s vision of the future to life.

The truth is, they're all doomed to fail. Flying cars are the modern equivalent of a faster horse. They're not what we really want. It's not the right question.

Go deeper. Ask why we want flying cars. To get there faster? Avoid traffic? Have a greater sense of freedom? Feel like we're living in the future?

If we find those questions and solve for them, we're likely to arrive at more interesting solutions. Less commuting; more geographic diversity; new kinds of mass transit; ideas we haven't conceived of yet.

It's a far more interesting, and more fruitful, way to look at the world. The science fiction of the past can give us hints about how we might solve these deeper needs, but they don't absolve us of having to discover them.


Identity Woman

Self Sovereign Identity Critique, Critique.

I have been asked by many people my opinion about Philip Sheldrake’s so called critique of SSI that went from a mailing list thread on VRM that began over a year ago to a twitter thread at some point and then a presentation at IIW and blog post he wrote. Below is the letter I […]

The post Self Sovereign Identity Critique, Critique. appeared first on Identity Woman.


Simon Willison

I Lived Through A Stupid Coup. America Is Having One Now

I Lived Through A Stupid Coup. America Is Having One Now

If, like me, you have been avoiding the word "coup" since it feels like a clear over-reaction to what's going on, I challenge you to read this piece and not change your mind.

Via Harper Reed


Mike Jones: self-issued

Concise Binary Object Representation (CBOR) Tags for Date is now RFC 8943

The Concise Binary Object Representation (CBOR) Tags for Date specification has now been published as RFC 8943. In particular, the full-date tag requested for use by the ISO Mobile Driver’s License specification in the ISO/IEC JTC 1/SC 17 “Cards and security devices for personal identification” working group has been created by this RFC. The abstract of the RFC is:


The Concise Binary Object Representation (CBOR), as specified in RFC 7049, is a data format whose design goals include the possibility of extremely small code size, fairly small message size, and extensibility without the need for version negotiation.


In CBOR, one point of extensibility is the definition of CBOR tags. RFC 7049 defines two tags for time: CBOR tag 0 (date/time string as per RFC 3339) and tag 1 (POSIX “seconds since the epoch”). Since then, additional requirements have become known. This specification defines a CBOR tag for a date text string (as per RFC 3339) for applications needing a textual date representation within the Gregorian calendar without a time. It also defines a CBOR tag for days since the date 1970-01-01 in the Gregorian calendar for applications needing a numeric date representation without a time. This specification is the reference document for IANA registration of the CBOR tags defined.
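As a rough illustration (my own example, not from the RFC): the two new tags — 100 for "days since 1970-01-01" and 1004 for an RFC 3339 full-date string — can be produced with the third-party cbor2 Python library like this.

# Illustrative only: encoding RFC 8943's two date forms with the cbor2 package.
from datetime import date
from cbor2 import dumps, loads, CBORTag

full_date = dumps(CBORTag(1004, "2020-11-24"))  # tag 1004: RFC 3339 full-date text

days = (date(2020, 11, 24) - date(1970, 1, 1)).days
numeric_date = dumps(CBORTag(100, days))        # tag 100: days since 1970-01-01

decoded = loads(full_date)
print(decoded.tag, decoded.value)               # 1004 2020-11-24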

Note that a gifted musical singer/songwriter appears in this RFC in a contextually appropriate fashion, should you need an additional incentive to read the specification. ;-)

Friday, 20. November 2020

John Philpin : Lifestream

Dear LowerMyBills.com You sent the email, so you know the

Dear LowerMyBills.com

You sent the email, so you know the address and if you don’t, why on earth am I going to give it to you now to add it to all the other lists.

I do know where you got this email from, message marked as spam, domain reported.

Have a good day.


Simon Willison

Quoting Joe Morrison

The open secret Jennings filled me in on is that OpenStreetMap (OSM) is now at the center of an unholy alliance of the world’s largest and wealthiest technology companies. The most valuable companies in the world are treating OSM as critical infrastructure for some of the most-used software ever written. The four companies in the inner circle— Facebook, Apple, Amazon, and Microsoft— have a combined market capitalization of over six trillion dollars.

Joe Morrison


The trouble with transaction.atomic

The trouble with transaction.atomic

David Seddon provides a detailed explanation of Django's nestable transaction.atomic() context manager and describes a gotcha that can occur if you lose track of whether your code is already running in a transaction block, since you may be working with savepoints instead - along with some smart workarounds.
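A rough sketch of the situation being described, with made-up model and function names (my illustration, not code from the article):

# Illustrative sketch only; the account objects and risky_update() are made up.
from django.db import transaction

def transfer(source, destination, amount):
    # With no transaction open, this starts one and commits on exit.
    # If a caller is already inside transaction.atomic(), Django gives this
    # block a savepoint instead: leaving it releases the savepoint, but
    # nothing is durable until the outermost block commits.
    with transaction.atomic():
        source.balance -= amount
        destination.balance += amount
        source.save()
        destination.save()

def caller(source, destination):
    with transaction.atomic():
        transfer(source, destination, 100)  # runs on a savepoint here
        risky_update()                      # if this raises, the "committed"
                                            # transfer above is rolled back too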

Via @adamchainz


John Philpin : Lifestream

And while we are talking pencils … and iPad one … air or sma

And while we are talking pencils … and iPad one … air or small pro … I’ve read the differences … and .. well is the pro worth the extra. What are you really getting?


File this one in the stupid question bucket. I know the A

File this one in the stupid question bucket.

I know the Apple Pencil won’t work with the iPhone … but … would it work just as a simple pointer .. like the old adonit and similar styluses used to work?

Thursday, 19. November 2020

John Philpin : Lifestream

Trump administration in ‘staggering’ isolation at UN on heal

Trump administration in ‘staggering’ isolation at UN on health issues

“Outgoing Trump administration’s final days at the United Nations have resulted in a deepening of US isolation on social and health issues, with only a handful of allies including Russia, Belarus and Syria.”


BuzzFeed agrees to buy HuffPost in latest online media merge

BuzzFeed agrees to buy HuffPost in latest online media merger

More proof that (-1) + (-1) = (+1)

Simon Willison

Internet Archive Software Library: Flash

Internet Archive Software Library: Flash

A fantastic new initiative from the Internet Archive: they're now archiving Flash (.swf) files and serving them for modern browsers using Ruffle, a Flash Player emulator written in Rust and compiled to WebAssembly. They are fully interactive and audio works too. Considering the enormous quantity of creative material released in Flash over the decades this helps fill a big hole in the Internet's cultural memory.

Via Jason Scott


Security vulnerability in datasette-indieauth: Implementation trusts the "me" field returned by the authorization server without verifying it

Security vulnerability in datasette-indieauth: Implementation trusts the "me" field returned by the authorization server without verifying it

I spotted a critical security vulnerability in my new datasette-indieauth plugin: it accepted the "me" profile URL value returned from the authorization server in the final step of the IndieAuth flow without verifying it, which means a malicious server could imitate any user. I've shipped 1.1 with a fix and posted a security advisory to the GitHub repository.

Wednesday, 18. November 2020

Simon Willison

Implementing IndieAuth for Datasette

IndieAuth is a spiritual successor to OpenID, developed and maintained by the IndieWeb community and based on OAuth 2. This weekend I attended IndieWebCamp East Coast and was inspired to try my hand at an implementation. datasette-indieauth is the result, a new plugin which enables IndieAuth logins to a Datasette instance.

Surprisingly this was my first IndieWebCamp - I've been adjacent to that community for over a decade, but I'd never made it to one of their in-person events before. Now that everything's virtual I didn't even have to travel anywhere, so I finally got to break my streak of non-attendance.

Understanding IndieAuth

The key idea behind IndieAuth is to provide federated login based on URLs. Users enter a URL that they own (e.g. simonwillison.net), and the protocol then derives their identity provider, redirects the user there, waits for them to sign in and get redirected back and then uses tokens passed in the redirect to prove the user's ownership of the URL and sign them in.

Here's what that authentication flow looks like, using this demo of the plugin:

IndieAuth works by scanning the linked page for a <link rel="authorization_endpoint" href="https://indieauth.com/auth"> HTML element which indicates a service that should be redirected to in order to authenticate the user.
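As a rough illustration of that discovery step (my sketch, not the plugin's actual code, which also handles details like HTTP Link headers and multi-valued rel attributes):

# Hypothetical sketch: find the authorization_endpoint advertised by a profile URL.
from html.parser import HTMLParser
from urllib.request import urlopen

class AuthEndpointParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "authorization_endpoint":
            self.endpoint = attrs.get("href")

def discover_authorization_endpoint(profile_url):
    html = urlopen(profile_url).read().decode("utf-8", "replace")
    parser = AuthEndpointParser()
    parser.feed(html)
    return parser.endpoint  # e.g. "https://indieauth.com/auth"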

I'm using IndieAuth.com for my own site's authorization endpoint, an identity provider run by IndieAuth spec author Aaron Parecki. IndieAuth.com implements RelMeAuth.

RelMeAuth is a neat hack where the authentication provider can scan the user's URL for a <link href="https://github.com/simonw" rel="me"> element, confirm that the GitHub profile in question links back to the same page, and then delegate to GitHub authentication for the actual sign-in.

Why implement this for Datasette?

A key goal of Datasette is to reduce the friction involved in publishing data online as much as possible.

The datasette publish command addresses this by providing a single CLI command for publishing a SQLite database to the internet and assigning it a new URL.

datasette publish cloudrun ca-fires.db \
    --service ca-fires \
    --title "Latest fires in California"

This command will create a new Google Cloud Run service, package up the ca-fires.db (created in this talk) along with the Datasette web application, and deploy the resulting site using Google Cloud Run.

It will output a URL that looks like this: https://ca-fires-j7hipcg4aq-uc.a.run.app

Datasette is unauthenticated by default - anyone can view the published data. If you want to add authentication you can do so using a plugin, for example datasette-auth-passwords.

Authentication without passwords is better. The datasette-auth-github plugin implements single-sign-on against the GitHub API, but comes with a slight disadvantage: you need to register and configure your application with GitHub in order to configure things like the redirect URL needed for authentication.

For most applications this isn't a problem, but when you're deploying dozens or potentially hundreds of applications with Datasette - each with initially unpredictable URLs - this can add quite a bit of friction.

The joy of IndieAuth (and OpenID before it) is that there's no centralized authority to register with. You can deploy an application to any URL, install the datasette-indieauth plugin and users can start authenticating with your site.

Even better... IndieAuth means you can grant people permission to access a site without them needing to create an account, provided they have their own domain with IndieAuth setup.

I took advantage of that in the design of datasette-indieauth. Say you want to publish a Datasette that only I can access - you can do that using the restrict_access plugin configuration setting like so:

datasette publish cloudrun simon-only.db \
    --service simon-only \
    --title "For Simon's eye only" \
    --install datasette-indieauth \
    --plugin-secret datasette-indieauth \
        restrict_access https://simonwillison.net/

The resulting Datasette instance will require the user to authenticate in order to view it - and will only allow access to the user who can use IndieAuth to prove that they are the owner of simonwillison.net.

Next steps

There are two sides to the IndieAuth specification: client sites that allow sign-in with IndieAuth, and authorization providers that handle that authentication.

datasette-indieauth currently acts as a client, allowing sign-in with IndieAuth.

I'm considering extending the plugin to act as an authorization provider as well. This is a bit more challenging as authentication providers need to maintain some small aspects of session state, but it would be good for the IndieAuth ecosystem for there to be more providers. The most widely used provider at the moment is the excellent IndieAuth WordPress plugin, which I used while testing my Datasette plugin and really was just a one-click install from the WordPress plugin directory.

datasette-indieauth has 100% test coverage, and I wrote the bulk of the logic in a standalone utils.py module which could potentially be extracted out of the plugin and used to implement IndieAuth in Python against other frameworks. A Django IndieAuth provider is another potential project, which could integrate directly with my Django blog.

Addendum: what about OpenID?

From 2006 to 2010 I was a passionate advocate for OpenID. It was clear to me that passwords were an increasingly unpleasant barrier to secure usage of the web, and that some form of federated sign-in was inevitable. I was terrified that Microsoft Passport would take over all authentication on the web!

With hindsight that's not quite what happened: for a while it looked like Facebook would win instead, but today it seems to be a fairly even balance between Facebook, Google, community-specific authentication providers like GitHub and Apple's iPhone-monopoly-enforced Sign in with Apple.

OpenID as an open standard didn't really make it. The specification grew in complicated new directions (Yadis, XRDS, i-names, OpenID Connect, OpenID 2.0) and it never quite overcame the usability hurdle of users having to understand URLs as identifiers.

IndieAuth is a much simpler specification, based on lessons learned from OAuth. I'm still worried about URLs as identifiers, but helping people reclaim their online presence and understand those concepts is core to what the IndieWeb movement is all about.

IndieAuth also has some clever additional tricks up its sleeve. My favourite is that IndieAuth can return an identifier for the user that's different from the one they typed in the box. This means that if a top-level domain with many users supports IndieAuth, each user can learn to just type example.com in (or click a branded button) to start the authentication flow - they'll be signed in as example.com/users/simonw based on who they authenticated as. This feels like an enormous usability improvement to me, and one that could really help avoid users having to remember their own profile URLs.

OpenID was trying to solve authentication for every user of the internet. IndieAuth is less ambitious - if it only takes off with the subset of people who embrace the IndieWeb movement I think that's OK.

The datasette-indieauth project is yet another example of the benefit of having a plugin ecosystem around Datasette: I can add support for technologies like IndieAuth without baking them into Datasette's core, which almost eliminates the risk to the integrity of the larger project of trying out something new.


Heather Vescent

On Being Self Less

Photo by Linus Nylund on Unsplash

The Supply Chain of You

As I dig into the Buddhist rabbit hole of emptiness, one thing comes up over and over — this idea of selflessness. As I struggled to understand the Buddhist description of selflessness, I realized I was limited by an existing indoctrination that had a very different (mis?) interpretation of selflessness, that comes from my midwestern religious upbringing.

Selflessness

I was taught selflessness as some aspect of Christianity, that Jesus was selfless and we should be too. And I learned that this kind of selflessness was about thinking about others, considering others, putting others before you. And I was indoctrinated that I should put others before me.

Well this didn’t work so great for me. While I was busy putting others before me, I did not receive the same treatment. I was expected to put others before me, AND ALSO meet all my own needs. Hello burnout, not to mention unfairness.

Selfish

I rebelled against this and went to take care of my needs first (well first I had to figure out what I needed cause I’d been brainwashed by society). This caused the Christians in my life to call me selfish. But if I didn’t take care of my needs, and other people didn’t take care of my needs either, where did that leave me? Was I supposed to be ok with that?

Selflessness Redux

As I’ve gone deeper into the Tibetan Mahayana Buddhist canon of emptiness, I came up against “selflessness” again. Selflessness is supposedly one of the easiest ways to understand the weight and importance of emptiness (I disagree, but that is a post for another day), because by starting with selflessness, you supposedly realize emptiness directly, in relation to yourself. (There are also some epic mental logistics!)

Selflessness in the Buddhist canon means you (and everyone) have an ever-changing “I” — aka this thing you think is you, that you ascribe an identity to, is not solid or everlasting; it is constantly changing based on the context and relation to the world. A bunch of years ago, I gave a talk called “How We Create I(dentity)” which talked about how we each have complex identities that change based on context.

Remember the uproar when Facebook was like you can only have one identity there, and it has to be the legal you? That’s the wrong view of identity. Your FB identity is constrained to the FB platform. (And teaser: your FB identity is dependent on the platform too.)

Dependent Arising

This “you” is dependent on many things. It’s dependent on the evolution of DNA so you have a brain and a body to move around in this world. It’s dependent on the social structure and education and global politics and the food you eat and what your ancestors ate and food is itself dependent on the earth and the sun.

We have a business description for this: the supply chain.

Think about it. You go to a store, you have shelves of products. Those products didn’t just magically appear out of nothing! They don’t exist on their own. The box of cookies exists because of all the steps it took along the supply chain, from the growing of the farmer’s wheat, the harvest tracked on an IoT connected John Deere tractor, the grain shipped cross border (and taxed) to a factory, which makes, say, King Arthur flour, which then is sold to bakeries, and those bakeries use human and machine labor to combine the flour with eggs, sugar, butter, baking soda, and chocolate, which is baked, packaged, marketed, shipped to your store where you can buy and eat them during a global pandemic.

This is called “dependent arising.” Which basically means, the thing does not, can not, exist on its own, of its own accord.

The box of cookies does not magically pop into existence from emptiness. It comes into existence bit by bit as it moves along the supply chain. We Create the Identity (and the product) of the thing.

The Supply Chain of You

Now, apply this scenario to you. You are the box of cookies. Your existence is dependent on many things. You can not exist separately. You are dependent arising. If you understand and accept that you are a result of the “supply chain of you” and you are constantly changing based on context, you may also accept/realize that you are not as solid as you think you are.

This is selflessness. It is the understanding that your existence is dependent on the world around you AND this constantly changing you is not a solid everlasting thing. You have no self to center on — because the self is a constantly changing projection. (I like to think about the self as disco lights at the club creating the ambiance of the dancefloor.)

Resolution?

My Christian understanding of selflessness is about putting others first because ??? IDK Jesus said so?

My Buddhist understanding of selflessness is about realizing I could not exist without everything in the world, and that I do not exist — my identity/self does not exist — outside of the world. And so in this understanding I see how I connect and am created/influenced by everything.

In the Christian sense, if I do not put others first, I am “bad,” but this is self alienating. Whereas in the Buddhist sense, who I am is created by the world, and thus, I have the power to influence and create others as much as they have to influence and create me. So I can authentically consider others in order to influence and co-create who they are. Which if I think about it, is perhaps not so different from intention of considering others in the first place, with one key difference, considering others as if they are you instead of not-you, because together We Create (individual & collective) I(dentity).


John Philpin : Lifestream

Citibank. Seriously?

Citibank. Seriously?

Tuesday, 17. November 2020

John Philpin : Lifestream

Day 17. Doubling up as a record for the day AND using all

Day 17.

Doubling up as a record for the day AND using all 17 words to date in reverse order.

 

I need to train my memory so I don’t get too far behind these daily Microblogvember posts, but also not too far ahead - because that would be spooky and I guess the judge probably wouldn’t wear it - what say you @macgenie ? or do the elderly get a free pass - no pressure, no force, just asking, because I don’t want to put you in a bind if you have to inflate the stats - I mean that would just be too puzzling, besides I don’t want to stoop that low in trying to make this work, I mean I am near - but so far its nothing that might astonish you because it’s hard to concentrate and this is the most exciting thing I’ve written today … I am that dreary.

 

See The Series


Simon Willison

Amstelvar

Amstelvar

A real showcase of what variable fonts can do: this open source font by David Berlow has 17 different variables controlling many different aspects of the font.

Via @markboulton

Monday, 16. November 2020

Simon Willison

Ok Google: please publish your DKIM secret keys

Ok Google: please publish your DKIM secret keys

The DKIM standard allows email providers such as Gmail to include cryptographic headers that protect against spoofing, proving that an email was sent by a specific host and has not been tampered with. But it has an unintended side effect: if someone's email is leaked (as happened to John Podesta in 2016) DKIM headers can be used to prove the validity of the leaked emails. This makes DKIM an enabling factor for blackmail and other security breach related crimes. Matthew Green proposes a neat solution: providers like Gmail should rotate their DKIM keys frequently and publish the PRIVATE key after rotation. By enabling spoofing of past email headers they would provide deniability for victims of leaks, fixing this unintended consequence of the DKIM standard.
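To make the unintended side effect concrete: anyone holding a leaked raw message can check its DKIM signature themselves, because verification only needs the sender's public key, which is published in DNS. A minimal sketch using the third-party dkimpy package (the filename is hypothetical):

# Rough illustration with dkimpy: verifying a leaked message needs nothing
# but the raw email itself plus a DNS lookup of the signer's public key.
import dkim  # pip install dkimpy

with open("leaked_message.eml", "rb") as f:
    raw_message = f.read()

# True means the signature checks out - exactly the third-party "proof of
# validity" the post worries about; publishing rotated private keys would
# make such proofs deniable after the fact.
print(dkim.verify(raw_message))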

Via @matthew_d_green


Identity Praxis, Inc.

An Interview on Self-sovereign Identity with Kaliya Young: The Identity Woman

Understanding the future of the Internet and the flows of identity & personal information

 

I enjoyed interviewing Kaliya Young, the Identity Woman, last week, during an Identity Praxis Experts Corner Interview on November 13, 2020.

About Kaliya and Her Purpose

 

Kaliya, “The Identity Woman,” is a preeminent expert in all things identity and personal information management standards, protocols, resources, and relationships.

If you listen to our interview, you’ll hear about her purpose and passion, how she is living it, and how she helps guide the world down a path toward self-sovereignty. At the end of this path is the promise that one day people—you, me…everyone—will have control, i.e., self-determination and agency, over their identity and personal information.

Kaliya’s purpose is to answer this profound question: “How do we own, control, manage, and represent ourselves in the digital world, independently of the BigTech companies (Facebook, Google, etc.)?”

For the last 15+ years, Kaliya has been at the center of the self-sovereign identity (SSI) movement.

The SSI movement is all about creating open standards and technologies that change how identity and personal information are collected, managed, and exchanged throughout society. In other words, SSI is about getting our identity out of the grip of BigTech and into the hands of the individual, the data subject. But, SSI is so much more; it is also about evolving the Internet, making it more efficient and secure, and creating new opportunities for businesses to innovate and to forge new and lasting relationships with the people they serve.

The Opportunities from SSI

In our interview, Kaliya highlights several opportunities that SSI can make possible, including:

- The availability of new protocols will make it possible to securely and efficiently move data across the Internet without it being centralized in a few players’ hands.
- The possibility for people to have control over how their identity and data are created and moved, rather than it being in the control of BigTech like Facebook and Google (see Kaliya’s Jan. 2020 interview in Wired, where she discusses making Facebook obsolete).
- The chance for businesses to build a new kind of trusted, secure, and transparent relationship with their customers, and to do so while saving time, money, and reducing risk (Kaliya recommends that you check out DIDComm Messaging, an emerging protocol that is promised to bring this opportunity to light).

SSI Use Cases

It is still early days for SSI, but people worldwide are working diligently to create the foundation of SSI so that a wealth of privacy and people-centric services can come to light.

Following my interview with Kaliya, I took a look at the W3C Verifiable Credentials Working Group (VCWG), as recommended by Kaliya. The W3C VCWG is a team diligently working on adding a new, secure identity management layer to the Internet. Among other efforts that they are working on, I found that, in Sept. 2020, they released a list of use cases and technical requirements for self-sovereign identity, see the Use Cases and Requirements for Decentralized Identifiers draft spec.

Here is a list of highlighted use cases:

- The online shopper is assured that the product they’re buying is authentic.
- Owners of manufactured goods, e.g., a car, can track the product’s ownership history and the provenance of every part (original and replaced) while preserving players’ anonymity up and down the supply chain for the life of the product.
- Support for data vaults, aka personal data stores; people can securely store their data in the cloud and be confident that they and only they have access to it and that they can offer fine-grained access to their data when they want to. For example, they can securely share their age in a way that someone else can verify and trust without needing the person’s actual birth date or any additional information.
- Track verifiable credentials back to their issuer; people or organizations looking to verify someone’s data will be able to track a verifiable credential or piece of data back to a trusted source, like a chamber of commerce, bank, or DMV.
- New and improved data exchange consent management, not just for data exchange but to manage online tracking to power personalization and analytics.
- Power secure and anonymous payments.
- Secure physical and digital identity cards or licensing credentials that allow for fine-grained control of exactly what data is shared when the proof of identity or licensee rights need to be verified.

The Challenges for SSI

According to Kaliya, the most significant challenges that we must overcome to make SSI a reality include:

- Counteracting the inertia of the status quo; people don’t like change; they are used to existing knowledge-based authentication practices, processes, and systems.
- People’s awareness generation and adoption of new SSI-empowered services.
- Education for everyone: developers, users, executives, customers, clients, investors, regulators, and so much more.
- The new market model, i.e., we now face a three-sided market, and all the business models and systems operations must evolve to accommodate the new use cases.

Kaliya: A wealth of resources and knowledge

Kaliya is a wealth of knowledge.

She knows the SSI industry structure. She knows the leading players. She can point you to the right resources to introduce you to people that can help you understand, plan, and execute SSI solutions and services.

To put a fine point on it, working with Kaliya can save you months, if not years, of stumbling around in the dark as you look to figure out what SSI can do for you and your business.

Here are just a few of the people and resources that Kaliya highlighted and alluded to in our interview:

- Decentralized Identity Foundation, a leading industry group spearheading SSI standards.
- W3C Credentials community group, a leading industry group spearheading SSI standards.
- Trust over IP Foundation, a leading industry group spearheading SSI standards and governance models.
- Kim Cameron and the 7 Laws of Identity, a godfather and visionary in all things identity.
- DIDComm Messaging Protocol, a protocol for trusted data exchange.
- Internet Identity Workshop, a bi-annual unconference where all things SSI are discussed.
- Domains of Identity, a book she wrote on identity.
- Comprehensive Guide to Self Sovereign Identity, a book she wrote on self-sovereign identity.

That’s it for now. Enjoy. We’ll be sure to bring Kaliya back soon.

The post An Interview on Self-sovereign Identity with Kaliya Young: The Identity Woman appeared first on Identity Praxis, Inc.


infominer

Leveling Up - What I’ve been working on lately.

It’s been a year since my last post… Overwhelmed by trying to keep up with the fast flows of information, and my own internal processes, I took a break from social media, stopped working on most of the projects I had begun, and turned inward. I’ve also been busy leveling up, and discovering my potential.

So what exactly have I been doing with all that time?

Managing Emotions

If you haven’t been following my story, so far, the short version is that I quit drinking a few years ago, and all of this info-gathering has been an important part of my recovery process, and a path to creating a new life.

All the same, however far along I had come in my work and self-education, I still had challenges processing everyday emotions, which I’d been primarily avoiding by processing as much information on valuable topics as possible.

Turning that around, I shifted most of my info-gathering from blockchain, cryptocurrencies and decentralized-identity, to focus more on mental hygiene and emotional agility.

The most valuable information I’ve encountered in that regard is Marshall Rosenberg’s Nonviolent Communication.

We’re interested, in nonviolent communication, with the kind of honesty that supports people connecting with each other in a way that makes compassionate giving inevitable, that makes it enjoyable for people to contribute to each other’s well being. - Marshall Rosenberg

I’ve got a couple projects brewing on that topic, and will write more about them later, so I don’t want to take too much space for that now.

In brief, I can say that spending time with the teachings of Marshall Rosenberg has made a significant contribution to my mental health and emotional wellbeing.

If you’re curious to learn more, I called a session on the topic at my first IIW; you can check out the notes for that session on the IIW Wiki, which provide a high-level overview and lots of links.

Developing the backend for a sustainable weekly newsletter

I’d been chatting with Kaliya Identity Woman for around a year, after contacting her about the potential for our collaborating on decentralized identity. At some point, she proposed the idea of writing a newsletter together, under the Identosphere.net domain.

Instead of jumping in head first, like I usually do, we’ve spent a lot of time figuring out how to run a newsletter, sustainably, with as few third party services as possible, while I’m learning my way around various web-tools.

We’re tackling a field that touches every domain, has a deep history, and is currently growing faster than anyone can keep up with. But this problem of fast-moving information streams isn’t unique to digital identity, and I’d like to share this process for others to benefit from.

GitHub Pages

Once my ‘Awesome List’ outgrew the Awesome format, I began learning to create static web-sites with GitHub Pages and Jekyll.

GitHub Pages Starter Pack (a resource I’ve created along that journey)

Static Websites are great for security and easy to set up, but if you’re an indie hacker, you’re gonna want some forms so you can begin collecting subscribers! Forms are not supported natively through Jekyll or GitHub Pages.

Enter Staticman

Staticman is a comments engine for static websites, but can be used for any kind of form, with the proper precautions.

It can be deployed to Heroku with a click of a button, made into a GitHub App, or run on your own server. Once set up, it will submit a pull-request to your repository with the form details (and an optional mailgun integration).

I set it up on my own server and created a bot account on GitHub with permissions to a private repository, which the Staticman app updates with subscription e-mails.

I then made the form, and added a staticman.yml config file in the root of the private repository where I’m collecting e-mail addresses.

The Subscription Form

<center>
  <h3>Subscribe for Updates</h3>
  <form class="staticman" method="POST" action="https://identosphere.net/staticman/v2/entry/infominer33/subscribe/master/subscribe">
    <input name="options[redirect]" type="hidden" value="https://infominer.xyz/subscribed">
    <input name="options[slug]" type="hidden" value="infohub">
    <input name="fields[name]" type="text" placeholder="Name (optional)"><br>
    <input name="fields[email]" type="email" placeholder="Email"><br>
    <input name="fields[message]" type="text" placeholder="Areas of Interest (optional)"><br>
    <input name="links" type="hidden" placeholder="links">
    <button type="submit">Subscribe</button>
  </form>
</center>

The staticman.yml config in the root of my private subscribe repo

subscribe:
  allowedFields: ["name", "email", "message"]
  allowedOrigins: ["infominer.xyz", "identosphere.net"]
  branch: "master"
  commitMessage: "New subscriber: {fields.name}"
  filename: "subscribe-{@timestamp}"
  format: "yaml"
  generatedFields:
    date:
      type: "date"
      options:
        format: "iso8601"
  moderation: false
  name: "infominer.xyz"
  path: "{options.slug}"
  requiredFields: ["email"]

Staticman seems to be struggling with GitHub’s recent move to change the name of the default branch from master to main (for new repositories). So, unfortunately, I had to re-create a master branch to get it running.

Planet Pluto Feed Reader

One of the most promising projects I found, in pursuit of keeping up with all the info, is Planet Pluto Feed Reader, by Gerald Bauer.

In online media a planet is a feed aggregator application designed to collect posts from the weblogs of members of an internet community and display them on a single page. - Planet (Software)

For the uninitiated, I should add that websites generate RSS feeds that can be read by a newsreader, allowing users to keep up with posts from multiple locations without needing to visit each site individually. You very likely use RSS all the time without knowing, for example, your podcast player depends on RSS feeds to bring episodes directly to your phone.
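
If you want to see what a newsreader does under the hood, here is a minimal sketch (my own illustration, not something from the original post) using the feedparser library to pull the latest entries from a feed; the feed URL is a placeholder.

import feedparser  # pip install feedparser

# Parse a feed and print its newest entries. Any RSS or Atom URL works here;
# this one is a placeholder, not a real feed address.
feed = feedparser.parse("https://example.com/feed.xml")
print(feed.feed.get("title", "untitled feed"))
for entry in feed.entries[:5]:
    print("-", entry.title, entry.link)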

Pluto Feed Reader works just like your podcast app, except that instead of an application on your phone that only you can browse, it builds a simple webpage from the feeds you add to it, which can be published on GitHub, your favorite static web-hosting service, or your own server in the cloud.

Pluto is built with Ruby, using the ERB templating language for web-page design.

One of the cool things about ERB is it lets you use any ruby function in your web-page template, supporting any capability you might want to enable while rendering your feed. This project has greatly helped me to learn the basics of Ruby while customizing its templates to suit my needs.

Feed Search

I use the RSSHub Radar browser extension to find feeds for sites while I’m browsing. However, this would be a lot of work when I want to get feeds for a number of sites at once.

I found a few simple python apps that find feeds for me. They aren’t perfect, but they do allow me to find feeds for multiple sites at the same time; all I have to do is format the query and hit enter.

As you can see below, these are not fully formed applications, just a few lines of code. To run them, it’s necessary to install Python, install the package with pip (pip install feedsearch-crawler), and type python at the command prompt, which takes you to a Python terminal that will recognize these commands.

From there you can type or paste python commands for demonstration, practice, or for simple scripts like this. I could also put the following scripts into their own feedsearch.py file and type python feedsearch.py, but I haven’t gotten around to doing anything like that.

Depending on the site, and the features you’re interested in, either of these feed seekers has their merits.

Feedsearch Crawler (DBeath/feedsearch-crawler)

import logging
from feedsearch_crawler import search, output_opml

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logger = logging.getLogger("feedsearch_crawler")

sites = ["http://bigfintechmedia.com/Blog/", "http://blockchainespana.com/", "http://blog.deanland.com/"]
for site in sites:
    feeds = search(site)
    print(output_opml(feeds).decode())

Feed seeker (mitmedialab/feed_seeker)

from feed_seeker import generate_feed_urls

sites = ["http://bigfintechmedia.com/Blog/", "http://blockchainespana.com/", "http://blog.deanland.com/"]
for site in sites:
    for url in generate_feed_urls(site):
        print(url)

GitHub Actions

Pluto Feed Reader is great, but I needed to find a way for it to run on a regular schedule, so I wouldn’t have to run the command every time I wanted to check for new feeds. For this, I’ve used GitHub Actions.

This is an incredible feature of GitHub that allows you to spin up a virtual machine, install an operating system and the dependencies supporting your application, and run whatever commands you’d like, on a schedule.

name: Build BlogCatcher
on:
  schedule:
    # This action runs 4x a day.
    - cron: '0/60 */4 * * *'
  push:
    paths:
      # It also runs whenever I add a new feed to Pluto's config file.
      - 'planetid.ini'
jobs:
  updatefeeds:
    # Install Ubuntu
    runs-on: ubuntu-latest
    steps:
      # Access my project repo to apply updates after pluto runs
      - uses: actions/checkout@v2
      - name: Set up Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: 2.6
      - name: Install dependencies
        # Download and install SQLite (needed for Pluto), then delete downloaded installer
        run: |
          wget http://security.ubuntu.com/ubuntu/pool/main/s/sqlite3/libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
          sudo dpkg -i libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
          rm libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
          gem install pluto && gem install nokogiri && gem install sanitize
      - name: build blogcatcher
        # This is the command I use to build my pluto project
        run: pluto b planetid.ini -t planetid -o docs
      - name: Deploy Files
        # This one adds the updates to my project
        run: |
          git remote add gh-token "https://github.com/identosphere/identity-blogcatcher.git"
          git config user.name "github-actions[bot]"
          git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
          git add .
          git commit -a -m "update blogcatcher"
          git pull
          git push gh-token master

Identosphere Blogcatcher

Identosphere Blogcatcher (source) is a feed aggregator for personal blogs of people who’ve been working on digital identity through the years, inspired by the original Planet Identity.

We also have a page for companies, and another for organizations working in the field.

Identosphere Weekly Highlights

Last month, Kaliya suggested that since we have these pages up and running smoothly, we were ready to start our newsletter. This is just a small piece of the backend information portal we’re working towards, and not yet enough to make the process as painless and comprehensive as we’d like, but it was enough to get started.

Every weekend we get together, browse the BlogCatcher, and share essential content others in our field will appreciate.

We’ll be publishing our 6th edition at the start of next week, and our numbers are doing well!

This newsletter is free, and a great opportunity for us to work together on something consistent while developing a few other ideas.

identosphere.substack.com

Setting up a newsletter without third-party intermediaries is more of a challenge than I’m currently up for, so we’ve settled on Substack for now, which seems to be a trending platform for tech newsletters.

It has a variety of options for both paid and free content, and you can read our content before subscribing.

Support us on Patreon

While keeping the newsletter free, we are accepting contributions via Patreon. (yes another intermediary, but we can draw upon a large existing userbase, and it’s definitely easier than setting up a self-hosted alternative.)

So far, we have enough to cover a bit more than server costs, and this will ideally grow to support our efforts, and enable us to sustainably continue developing these open informational projects.

Python, Twitter Api, and GitHub Actions

Since we’re publishing this newsletter, and I’ve gotten a better handle on my inner state, I decided it was time to come back to twitter. However, I knew I couldn’t do it the old way, where I manually re-tweeted everything of interest, spending hours a day scrolling multiple accounts trying to stay abreast of important developments.

Instead, I dove into the twitter api. The benefits of using twitter programmatically can’t be overstated. For my first project, I decided to try an auto-poster, which could enable me to keep an active twitter account without having to regularly pay attention to twitter.

I found a simple guide How To Write a Twitter Bot with Python and tweepy composed of a dozen lines of python. That simple script posts a tweet to your account, but I wanted to post from a pre-made list, and so figured out how to read from a yaml file, and then used GitHub actions to run the script on a regular schedule.
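
For illustration, here is a rough sketch of that kind of auto-poster. It is my own reconstruction rather than the actual script: the YAML layout, file name, and environment variable names are all assumptions.

import os
import yaml    # pip install pyyaml
import tweepy  # pip install tweepy

# Authenticate with credentials stored in environment variables (names are assumptions).
auth = tweepy.OAuthHandler(os.environ["TWITTER_API_KEY"], os.environ["TWITTER_API_SECRET"])
auth.set_access_token(os.environ["TWITTER_ACCESS_TOKEN"], os.environ["TWITTER_ACCESS_SECRET"])
api = tweepy.API(auth)

# tweets.yml is assumed to be a plain YAML list of pre-written tweets.
with open("tweets.yml") as f:
    queue = yaml.safe_load(f) or []

if queue:
    api.update_status(queue.pop(0))  # post the oldest queued tweet
    with open("tweets.yml", "w") as f:
        yaml.safe_dump(queue, f)     # keep the remaining queue for the next scheduled run

Run on a schedule (cron or a GitHub Actions workflow), a script like this posts one queued tweet per run and shrinks the queue file as it goes.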

While that didn’t result in anything I’m ready to share here, quite yet, somewhere during that process I realized that I could write python. After playing around with Ruby, in ERB, to build the BlogCatcher, and running various python scripts that other people wrote, tinkering where necessary, eventually I had pieced together enough knowledge I could actually write my own code!

Decentralized ID Weekly Twitter Collections

With that experience as a foundation I knew I was ready to come back to Twitter, begin trying to make more efficient use of its wealth of knowledge, and see about keeping up my accounts without losing too much hair.

I made a script that searches twitter for a variety of keywords related to decentralized identity, and writes the tweet text and some other attributes to a csv file. From there, I can sort through those tweets, save only the most relevant, and publish a few hundred tweets about decentralized identity to a weekly twitter collection, which makes our job a lot easier than going to hundreds of websites to find out what’s happening. :D
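
A sketch of what such a keyword-to-CSV collector can look like (again my own reconstruction, assuming an authenticated tweepy API object like the one in the earlier sketch; the column choices are arbitrary):

import csv
import tweepy

def collect_tweets(api, keywords, out_path="did_tweets.csv", per_term=200):
    """Search Twitter for each keyword and write matching tweets to a CSV for review."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["keyword", "created_at", "user", "text", "url"])
        for term in keywords:
            for tweet in tweepy.Cursor(api.search, q=term, tweet_mode="extended").items(per_term):
                writer.writerow([
                    term,
                    tweet.created_at,
                    tweet.user.screen_name,
                    tweet.full_text.replace("\n", " "),
                    "https://twitter.com/{}/status/{}".format(tweet.user.screen_name, tweet.id),
                ])

# collect_tweets(api, ["decentralized identity", "#SSI", "verifiable credentials"])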

Soon, these will be regularly published to decentralized-id.com, which I found out is an approved method of re-publishing tweets, unlike the ad hoc method I was using before: sharing them to discord channels (which grab the metadata and display the preview image \ text), exporting their contents, and re-publishing that.

I do intend to share my source for all that after I’ve gotten the kinks worked out, and set it running on an action.

Twitter collections I’ve made so far:

Self Sovereign ID 101
October Week 5 #SSI #DID
November Week 1 #SSI #DID

Decentralized-ID.com

Now that we have a newsletter and are seeking patrons, it seemed appropriate to work on developing decentralized-id.com, cleaning up some of its hastily thrown together parts, and adding a bunch of content to better represent the decentralized identity space.

That said, I’ve not given up on Bitcoin, or Crypto. On the contrary, I’m sure this work is only expanding my future capacity to continue working to fulfil the original vision of those resources.

Web-Work.Tools

With the information and skills I’ve gathered over the past year, web-work.tools was starting to look pretty out of date, relative to the growth of my understanding.

I updated GitHub Pages Starter Pack \ Extended Resources quite a lot to reflect those learnings, and separate the wheat from the chaff of the links I gathered when I was first figuring out how to use GitHub pages.

Thanks for stopping by!

John Philpin : Lifestream

Day 15.

My memory falls back to a guess as what to wear for a ‘far out’ spooky party

See The Series


Day 16.

My memory falls back to a guess as what to wear for a ‘far out’ spooky party

See The Series


Day 13.

My memory falls back to a guess as what to wear for a ‘far out’ spooky party

See The Series


Day 14.

My memory falls back to a guess as what to wear for a ‘far out’ spooky party

See The Series


Day 12.

My memory falls back to a guess as what to wear for a ‘far out’ spooky party

See The Series


Here's Tom with the Weather

Sunday, 15. November 2020

Simon Willison

CoronaFaceImpact

Variable fonts are fonts that can be customized by passing in additional parameters, which is done in CSS using the font-variation-settings property. Here's a ​variable font that shows multiple effects of Covid-19 lockdown on a bearded face, created by Friedrich Althausen.

Via Kevin Marks


Tim Bouma's Blog

Trust Frameworks? Standards Matter.

Photo by Tekton on Unsplash

Note: This post is the author’s opinion only and does not represent the opinion of the author’s employer, or any organizations with which the author is involved.

Over the past few years, and especially in the face of COVID-19, there has been a proliferation of activity in developing digital identity trust frameworks. Trust frameworks are being developed by the private sector and the public sector, as collaborative or sector-specific efforts. Trust mark and trust certification programs are also emerging alongside trust framework development efforts.

These trust framework development efforts are worthy undertakings and the results of these efforts should automatically engender trust. But the problem that we are now faced with, all good intentions aside, is — how do we truly trust a trust framework?

The answer is simple — with standards.

Trust frameworks need standards to be trusted.

Within the Canadian context, a standard is defined by the Standards Council of Canada, as:

“a document that provides a set of agreed-upon rules, guidelines or characteristics for activities or their results. Standards establish accepted practices, technical requirements, and terminologies for diverse fields.”

This standard definition might sound straightforward — making a “standard” might sound easy — but the hard part is all the work leading up to agreeing on those things that are part of a standard: the agreed-upon rules, guidelines or characteristics for activities or their results.

That’s where trust frameworks come into play. Much of the work that eventually ends up in a standard is years if not decades in the making. For years I have been part of developing the Public Sector Profile of the Pan-Canadian Trust Framework. This work started in earnest in early 2015, building on work that goes as far back as 2007 (you can find a lot of the historical material in the docs folder of the PCTF repository on GitHub).

What has come out of all of this work is a trust framework — a set of agreed-upon principles, definitions, standards, specifications, conformance criteria, and an assessment approach.

This definition of a trust framework sounds pretty much like a standard, doesn’t it? Yes and no. What the trust framework has not gone through is a standards development process that respects and safeguards the interests of all stakeholders affected by the standard. Within the Canadian context, that’s where the Standards Council of Canada comes into play, by specifying how standards should be developed and how to accredit certain bodies as standards development organizations.

So trust frameworks, however good and complete they are, still need to go through the step of becoming an official standard. Fortunately, this is the case in Canada, where the Public Sector Profile of the Pan-Canadian Trust Framework was used to develop CAN/CIOSC 103–1:2020 Digital trust and Identity — Part 1: Fundamentals. This standard was developed by the CIO Strategy Council, a standards development organization accredited by the Standards Council of Canada.

In closing, there are lots of trust frameworks being developed today. But to be truly trusted, a trust framework needs to either apply existing standards or become a standard itself. In Canada, we have been extremely fortunate to see the good work that we have done in the public sector to be transformed into a national standard that serves the interests of all Canadians.

Saturday, 14. November 2020

reb00ted

On Tim Hwang's book: Subprime Attention Crisis

My friend Doc Searls has been talking about this book repeatedly in recent months, as have many others interested in rolling back surveillance capitalism, improving privacy and user agency, and cleaning up the unholy mess that on-line advertising has become. Finally I have read the book, and here are a few notes.

Tim Hwang makes three core points:

1. Programmatic, on-line advertising is fundamentally, irredeemably broken.
2. It’s not a matter of whether it will implode, but just when.
3. Apply the lessons from the 2008 subprime mortgage crisis: advertising inventory is a different asset class, but the situation is fundamentally the same: eroding fundamentals in the face of an opaque, overhyped market, which will lead to a crash with similarly major consequences when it occurs.

I buy his first point. I mostly buy his second, but there are too many important differences with the market for collateralized mortgages in 2008 for me to buy his third. Ultimately that parallel isn’t that important, however: if he’s right that programmatic on-line advertising is headed for something dramatic, whether it’s like 2008 subprime mortgages or some other crash doesn’t matter in the end.

Why would anybody say programmatic, on-line advertising is broken? He has many examples (go read the book), but let me mention my favorite, from personal experience: the ads Spotify shows me.

Spotify, for a long time, advertised joining the Marine Corps to me. I should be flattered how young, vigorous, and gung-ho they consider me, but hmm, I don’t think so. This must be because they have some wrong data about me, and while Spotify got the Marine Corps' money all the same, the Marine Corps totally wasted their spend.

While this example is particularly egregious, Hwang has many other examples, which argue that this is a major and pervasive problem.

I recently downloaded the personal data Spotify have about me, as I can because we have the CCPA in California. Looking at the advertising subjects they have tagged me with, guess what?

It was worse than I was afraid of. I loaded the tags into a spreadsheet, and categorized them into three groups:

Interests I definitely have. Example: “Computers and software high spender”. Guilty as charged.

Interests I definitely do not have. Example: “March Madness Basketball Fan”. What? Never watched basketball in my life. I don’t actually know what “March Madness” might even be and I’m disinclined to look it up.

Interests that I might or might not have, Meh so to speak. Example: “Vitamin C category purchasers”. Maybe I bought some one day. I don’t remember.

How do you think these categories break down? The largest group (30 of 66, almost half) of the tags Spotify has about me is in the Meh category. Will I buy more Vitamin C if they advertise it to me? Maybe, but it’s quite unlikely. Consider the ad spend in this category mostly wasted on me.

But this is the kicker: 24 of the remaining tags were “definitely not” and only 12 were “definitely yes”. Twice as many categories about me were absolutely wrong as were correct!!

Only 18% of the total categories were clearly correct, and worth spending ad money on to target me.

Eighteen.

From the name of the tags in the Spotify export, I guess most of them were purchased from third parties. (Makes sense: how would Spotify know I’m interested in Vitamin C, or not?) In other words, 18% of the data they purchased about me was correct, 36% incorrect, and the rest more or less random. No wonder Hwang immediately thinks of junk mortgage bonds with numbers like these.

But as he points out, advertisers keep spending money, however. Why? I suggest the answer is very simple: because of a lack of alternatives.

If you stop advertising on-line, what are you going to do instead? As long as there isn’t a better alternative, it’s a better plan to pinch your nose and go to your CEO and say, yes, I know that today, not just half but a full 82% of our advertising money is wasted, but it’s better to waste all that money than not to advertise at all. I can understand that. Terrible, but reality.

So, for me, the more interesting question is: “How can we do better?” And I think the times are getting ripe for doing something better… stay tuned :-)


FACILELOGIN

The Role of CIAM in Digital Transformation

Companies and organizations have strategic decisions to make on the Customer Identity & Access Management (CIAM) front. First, they have to decide whether to invest in a dedicated CIAM solution or to build on existing infrastructure. If there is already a foundation, what should their next steps be to have a mature CIAM strategy in place? If they do not have a CIAM solution, where do they start? Applications, systems, and identities tend to be siloed, but as a business grows, it’s imperative that they are cohesive and well-integrated in order to provide a superior customer experience.

An effective CIAM solution will help connect various applications and systems such as CRM, data management, analytics and marketing platforms. This helps to move towards a 360-degree view of the customer which is a key prerequisite for successful digital transformation.

In the following webinar recording, I join KuppingerCole Senior Analyst and Lead Advisor Matthias Reinwarth to explain how CIAM helps to achieve digital transformation, best practices in CIAM, and pitfalls to avoid. We also talk about the five pillars of CIAM essential for your CIAM strategy, and maturity models to determine your stage of growth.

The Role of CIAM in Digital Transformation was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


Simon Willison

The Cleanest Trick for Autogrowing Textareas

This is a very clever trick. Textarea content is mirrored into a data attribute using a JavaScript one-liner, then a visibility: hidden ::after element clones that content using content: attr(data-replicated-value). The hidden element exists in a CSS grid with the textarea which allows the textarea to resize within the grid when the hidden element increases its height.

Via @chriscoyier


Hunting for Malicious Packages on PyPI

Jordan Wright installed all 268,000 Python packages from PyPI in containers, and ran Sysdig to capture syscalls made during installation to see if any of them were making extra network calls or reading or writing from the filesystem. Absolutely brilliant piece of security engineering and research.

Via @di_codes


Personal Data Warehouses: Reclaiming Your Data

I gave a talk yesterday about personal data warehouses for GitHub's OCTO Speaker Series, focusing on my Datasette and Dogsheep projects. The video of the talk is now available, and I'm presenting that here along with an annotated summary of the talk, including links to demos and further information.

There's a short technical glitch with the screen sharing in the first couple of minutes of the talk - I've added screenshots to the notes which show what you would have seen if my screen had been correctly shared.

I'm going to be talking about personal data warehouses, what they are, why you want one, how to build them and some of the interesting things you can do once you've set one up.

I'm going to start with a demo.

This is my dog, Cleo - when she won first place in a dog costume competition here, dressed as the Golden Gate Bridge!

So the question I want to answer is: How much of a San Francisco hipster is Cleo?

I can answer it using my personal data warehouse.

I have a database of ten years' worth of my checkins on Foursquare Swarm - generated using my swarm-to-sqlite tool. Every time I check in somewhere with Cleo I use the Wolf emoji in the checkin message.

I can filter for just checkins where the checkin message includes the wolf emoji.

Which means I can see just her checkins - all 280 of them.

If I facet by venue category, I can see she's checked in at 57 parks, 32 dog runs, 19 coffee shops and 12 organic groceries.

Then I can facet by venue category and filter down to just her 19 checkins at coffee shops.

Turns out she's a Blue Bottle girl at heart.

Being able to build a map of the coffee shops that your dog likes is obviously a very valuable reason to build your own personal data warehouse.

Let's take a step back and talk about how this demo works.

The key to this demo is this web application I'm running called Datasette. I've been working on this project for three years now, and the goal is to make it as easy and cheap as possible to explore data in all sorts of shapes and sizes.

Ten years ago I was working for the Guardian newspaper in London. One of the things I realized when I joined the organization is that newspapers collect enormous amounts of data. Any time they publish a chart or map in the newspaper someone has to collect the underlying information.

There was a journalist there called Simon Rogers who was a wizard at collecting any data you could think to ask for. He knew exactly where to get it from, and had collected a huge number of brilliant spreadsheets on his desktop computer.

We decided we wanted to publish the data behind the stories. We started something called the Data Blog, and aimed to accompany our stories with the raw data behind them.

We ended up using Google Sheets to publish the data. It worked, but I always felt like there should be a better way to publish this kind of structured data in a way that was as useful and flexible as possible for our audience.

Fast forward to 2017, when I was looking into this new thing called "serverless" hosting - in particular one called Zeit Now, which has since rebranded as Vercel.

My favourite aspect of Serverless is "Scale to zero" - the idea that you only pay for hosting when your project is receiving traffic.

If you're like me, and you love building side-projects but you don't like paying $5/month for them for the rest of your life, this is perfect.

The catch is that serverless providers tend to charge you extra for databases, or require you to buy a hosted database from another provider.

But what if your database doesn't change? Can you bundle your database in the same container as your code?

This was the initial inspiration behind creating Datasette.

Here's another demo. The World Resources Institute maintain a CSV file of every power plant in the world.

Like many groups, they publish that data on GitHub.

I have a script that grabs their most recent data and publishes it using Datasette.

Here's the contents of their CSV file published using Datasette

Datasette supports plugins. You've already seen this plugin in my demo of Cleo's coffee shops - it's called datasette-cluster-map and it works by looking for tables with a latitude and longitude column and plotting the data on a map.

Straight away looking at this data you notice that there's a couple of power plants down here in Antarctica. This is McMurdo station, and it has a 6.6MW oil generator.

And oh look, there's a wind farm down there too on Ross Island knocking out 1MW of electricity.

But this is also a demonstration of faceting. I can slice down to just the nuclear power plants in France and see those on a map.

And anything I can see in the interface, I can get out as JSON. Here's a JSON file showing all of those nuclear power plants in France.

And here's a CSV export which I can use to pull the data into Excel or other CSV-compatible software.

If I click "view and edit SQL" to get back the SQL query that was used to generate the page - and I can edit and re-execute that query.

I can get those custom results back as CSV or JSON as well!

In most web applications this would be seen as a terrifying security hole - it's a SQL injection attack, as a documented feature!

A couple of reasons this isn't a problem here:

Firstly, this is setup as a read-only database: INSERT and UPDATE statements that would modify it are not allowed. There's a one second time limit on queries as well.

Secondly, everything in this database is designed to be published. There are no password hashes or private user data that could be exposed here.

This also means we have a JSON API that lets JavaScript execute SQL queries against a backend! This turns out to be really useful for rapid prototyping.

It's worth talking about the secret sauce that makes this all possible.

This is all built on top of SQLite. Everyone watching this talk uses SQLite every day, even if you don't know it.

Most iPhone apps use SQLite, many desktop apps do, it's even running inside my Apple Watch.

One of my favourite features is that a SQLite database is a single file on disk. This makes it easy to copy, send around and also means I can bundle data up in that single file, include it in a Docker file and deploy it to serverless hosts to serve it on the internet.
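
To make the "single file on disk" point concrete, here is a tiny sketch of my own (not from the talk) using Python's standard-library sqlite3 module; the table and file names are made up.

import sqlite3

# Connecting creates power.db as one ordinary file on disk.
conn = sqlite3.connect("power.db")
conn.execute("create table if not exists plants (name text, country text, capacity_mw real)")
conn.execute("insert into plants values (?, ?, ?)", ("McMurdo Station", "Antarctica", 6.6))
conn.commit()
conn.close()
# power.db can now be copied, committed to git, or baked into a Docker image and deployed.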

Here's another demo that helps show how GitHub fits into all of this.

Last year PG&E - the power company that covers much of California - turned off the power to large swathes of the state.

I got lucky: six months earlier I had started scraping their outage map and recording the history to a GitHub repository.

simonw/pge-outages is a git repository with 34,000 commits tracking the history of outages that PG&E had published on their outage map.

You can see that two minutes ago they added 35 new outages.

I'm using this data to publish a Datasette instance with details of their historic outages. Here's a page showing their current outages ordered by the most customers affected by the outage.

Read Tracking PG&E outages by scraping to a git repo for more details on this project.

I recently decided to give this technique a name. I'm calling it Git scraping - the idea is to take any data source on the web that represents a point-in-time and commit it to a git repository that tells the story of the history of that particular thing.

Here's my article describing the pattern in more detail: Git scraping: track changes over time by scraping to a Git repository.
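
A bare-bones version of the pattern might look like this (my own illustration, not Simon's actual scraper), meant to run on a schedule from inside a git checkout:

import subprocess
import requests

# The CAL FIRE endpoint used later in this talk; any point-in-time JSON source works.
URL = "https://www.fire.ca.gov/umbraco/Api/IncidentApi/GetIncidents"

with open("incidents.json", "w") as f:
    f.write(requests.get(URL).text)

subprocess.run(["git", "add", "incidents.json"], check=True)
# git commit exits non-zero when nothing changed, so don't raise in that case.
subprocess.run(["git", "commit", "-m", "Latest incident data"])
subprocess.run(["git", "push"])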

This technique really stood out just last week during the US election.

This is the New York Times election scraper website, built by Alex Gaynor and a growing team of contributors. It scrapes the New York Times election results and uses the data over time to show how the results are trending.

It uses a GitHub Actions script that runs on a schedule, plus a really clever Python script that turns it into a useful web page.

You can find more examples of Git scraping under the git-scraping topic on GitHub.

I'm going to do a bit of live coding to show you how this stuff works.

This is the incidents page from the state of California CAL FIRE website.

Any time I see a map like this, my first instinct is to open up the browser developer tools and try to figure out how it works.

If I open the network tab, refresh the page and then filter to just XHR requests.

A neat trick is to order by size - because inevitably the thing at the top of the list is the most interesting data on the page.

This appears to be a JSON file telling me about all of the current fires in the state of California!

(I set up a Git scraper for this a while ago.)

Now I'm going to take this a step further and turn it into a Datasette instance.

It looks like the AllYearIncidents key is the most interesting bit here.

I'm going to use curl to fetch that data, then pipe it through jq to filter for just that AllYearIncidents array.

curl 'https://www.fire.ca.gov/umbraco/Api/IncidentApi/GetIncidents' \
  | jq .AllYearIncidents

Now I have a list of incidents for this year.

Next I'm going to pipe it into a tool I've been building called sqlite-utils - it's a suite of tools for manipulating SQLite databases.

I'm going to use the "insert" command and insert the data into a ca-fires.db in an incidents table.

curl 'https://www.fire.ca.gov/umbraco/Api/IncidentApi/GetIncidents' \
  | jq .AllYearIncidents \
  | sqlite-utils insert ca-fires.db incidents -

Now I've got a ca-fires.db file. I can open that in Datasette:

datasette ca-fires.db -o

And here it is - a brand new database.

You can straight away see that one of the rows has a bad location, hence it appears in Antarctica.

But 258 of them look like they are in the right place.

I can also facet by county, to see which county had the most fires in 2020 - Riverside had 21.

I'm going to take this a step further and put it on the internet, using a command called datasette publish.

Datasette publish supports a number of different hosting providers. I'm going to use Vercel.

I'm going to tell it to publish that database to a project called "ca-fires" - and tell it to install the datasette-cluster-map plugin.

datasette publish vercel ca-fires.db \
  --project ca-fires \
  --install datasette-cluster-map

This then takes that database file, bundles it up with the Datasette application and deploys it to Vercel.

Vercel gives me a URL where I can watch the progress of the deploy.

The goal here is to have as few steps as possible between finding some interesting data, turning it into a SQLite database you can use with Datasette and then publishing it online.

And this here is that database I just created - available for anyone on the internet to visit and build against.

https://ca-fires.vercel.app/ca-fires/incidents
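
Because every Datasette page has a JSON equivalent, the freshly deployed database can be queried from code straight away. Here is a small sketch of mine; the _shape, _size and sql parameters are standard Datasette query-string options, but the County column name is an assumption about the CAL FIRE data.

import requests

# Fetch table rows as a plain JSON array (up to the instance's max page size).
rows = requests.get(
    "https://ca-fires.vercel.app/ca-fires/incidents.json",
    params={"_shape": "array", "_size": "max"},
).json()
print(len(rows), "incidents")

# Run an arbitrary read-only SQL query against the same database.
top_counties = requests.get(
    "https://ca-fires.vercel.app/ca-fires.json",
    params={
        "sql": "select County, count(*) as n from incidents group by County order by n desc limit 5",
        "_shape": "array",
    },
).json()
print(top_counties)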

I've given you a whistle-stop tour of Datasette for the purposes of publishing data, and hopefully doing some serious data journalism.

So what does this all have to do with personal data warehouses?

Last year, I read this essay by Stephen Wolfram: Seeking the Productive Life: Some Details of My Personal Infrastructure. It's an incredible exploration of forty years of productivity hacks that Stephen Wolfram has applied to become the CEO of a 1,000-person company that works remotely. He's optimized every aspect of his professional and personal life.

It's a lot.

But there was one part of this that really caught my eye. He talks about a thing he calls a "metasearcher" - a search engine on his personal homepage that searches every email, journals, files, everything he's ever done - all in one place.

And I thought to myself, I really want THAT. I love this idea of a personal portal to my own stuff.

And because it was inspired by Stephen Wolfram, but I was planning on building a much less impressive version, I decided to call it Dogsheep.

Wolf, ram. Dog, sheep.

I've been building this over the past year.

So essentially this is my personal data warehouse. It pulls in my personal data from as many sources as I can find and gives me an interface to browse that data and run queries against it.

I've got data from Twitter, Apple HealthKit, GitHub, Swarm, Hacker News, Photos, a copy of my genome... all sorts of things.

I'll show a few more demos.

Here's another one about Cleo. Cleo has a Twitter account, and every time she goes to the vet she posts a selfie and says how much she weighs.

Here's a SQL query that finds every tweet that mentions her weight, pulls out her weight in pounds using a regular expression, then uses the datasette-vega charting plugin to show a self-reported chart of her weight over time.

select
  created_at,
  regexp_match('.*?(\d+(\.\d+))lb.*', full_text, 1) as lbs,
  full_text,
  case
    when (media_url_https is not null)
    then json_object('img_src', media_url_https, 'width', 300)
  end as photo
from tweets
  left join media_tweets on tweets.id = media_tweets.tweets_id
  left join media on media.id = media_tweets.media_id
where
  full_text like '%lb%'
  and user = 3166449535
  and lbs is not null
group by tweets.id
order by created_at desc
limit 101

I did 23AndMe a few years ago, so I have a copy of my genome in Dogsheep. This SQL query tells me what colour my eyes are.

Apparently they are blue, 99% of the time.

select
  rsid,
  genotype,
  case genotype
    when 'AA' then 'brown eye color, 80% of the time'
    when 'AG' then 'brown eye color'
    when 'GG' then 'blue eye color, 99% of the time'
  end as interpretation
from genome
where rsid = 'rs12913832'

I have HealthKit data from my Apple Watch.

Something I really like about Apple's approach to this stuff is that they don't just upload all of your data to the cloud.

This data lives on your watch and on your phone, and there's an option in the Health app on your phone to export it - as a zip file full of XML.

I wrote a script called healthkit-to-sqlite that converts that zip file into a SQLite database, and now I have tables for things like my basal energy burned, my body fat percentage, flights of stairs I've climbed.

But the really fun part is that it turns out any time you track an outdoor workout on your Apple Watch it records your exact location every few seconds, and you can get that data back out again!

This is a map of my exact route for the San Francisco Half Marathon three years ago.

I've started tracking an "outdoor walk" every time I go on a walk now, just so I can get the GPS data out again later.

I have a lot of data from GitHub about my projects - all of my commits, issues, issue comments and releases - everything I can get out of the GitHub API using my github-to-sqlite tool.

So I can do things like see all of my commits across all of my projects, search and facet them.

I have a public demo of a subset of this data at github-to-sqlite.dogsheep.net.

I can search my commits for any commit that mentions "pytest".

I have all of my releases, which is useful for when I write my weeknotes and want to figure out what I've been working on.

Apple Photos is a particularly interesting source of data.

It turns out the Apple Photos app uses a SQLite database, and if you know what you're doing you can extract photo metadata from it.

They actually run machine learning models on your own device to figure out what your photos are of!

You can use the machine learning labels to see all of the photos you have taken of pelicans. Here are all of the photos I have taken that Apple Photos have identified as pelicans.

It also turns out they have columns called things like ZOVERALLAESTHETICSCORE, ZHARMONIOUSCOLORSCORE, ZPLEASANTCAMERATILTSCORE and more.

So I can sort my pelican photos with the most aesthetically pleasing first!

I wrote more about this on my blog: Using SQL to find my best photo of a pelican according to Apple Photos.

And a few weeks ago I finally got around to building the thing I'd always wanted: the search engine.

I called it Dogsheep Beta, because Stephen Wolfram has a search engine called Wolfram Alpha.

This is pun-driven development: I came up with this pun a while ago and liked it so much I committed to building the software.

I wanted to know when the last time I had eaten a waffle-fish ice cream was. I knew it was in Cupertino, so I searched Dogsheep Beta for Cupertino and found this photo.

I hope this illustrates how much you can do if you pull all of your personal data into one place!

The GDPR law that passed in Europe a few years ago really helps with this stuff.

Companies have to provide you with access to the data that they store about you.

Many big internet companies have responded to this by providing a self-service export feature, usually buried somewhere in the settings.

You can also request data directly from companies, but the self-service option helps them keep their customer support costs down.

This stuff becomes easier over time as more companies build out these features.

The other challenge is how we democratize access to this.

Everything I've shown you today is open source: you can install this software and use it yourself, for free.

But there's a lot of assembly required. You need to figure out authentication tokens, find somewhere to host it, set up cron jobs and authentication.

But this should be accessible to regular non-uber-nerd humans!

Expecting regular humans to run a secure web server somewhere is pretty terrifying. I've been looking at WireGuard and Tailscale to help make secure access between devices easier, but that's still very much for super-users only.

Running this as a hosted service doesn't appeal: taking responsibility for people's personal data is scary, and it's probably not a great business.

I think the best options are to run on people's own personal devices - their mobile phones and their laptops. I think it's feasible to get Datasette running in those environments, and I really like the idea of users being able to import their personal data onto a device that they control and analyzing it there.

I invite you to try this all out for yourself!

datasette.io for Datasette

github.com/dogsheep and dogsheep.github.io for Dogsheep

simonwillison.net is my personal blog

twitter.com/simonw is my Twitter account

The Dogsheep GitHub organization has most of the tools that I've used to build out my personal Dogsheep warehouse - many of them using the naming convention of something-to-sqlite.

Q&A, from this Google Doc

Q: Is there/will there be a Datasette hosted service that I can pay $ for? I would like to pay $5/month to get access to the latest version of Dogsheep with all the latest plugins!

I don’t want to build a hosting site for personal private data because I think people should stay in control of that themselves, plus I don’t think there’s a particularly good business model for that.

Instead, I’m building a hosted service for Datasette (called Datasette Cloud) which is aimed at companies and organizations. I want to be able to provide newsrooms and other groups with a private, secure, hosted environment where they can share data with each other and run analysis.

Q: How do you sync your data from your phone/watch to the data warehouse? Is it a manual process?

The health data is manual: the iOS Health app has an export button which generates a zip file of XML which you can then AirDrop to a laptop. I then run my healthkit-to-sqlite script against it to generate the DB file and SCP that to my Dogsheep server.

Many of my other Dogsheep tools use APIs and can run on cron, to fetch the most recent data from Swarm and Twitter and GitHub and so on.

Q: When accessing Github/Twitter etc do you run queries against their API or you periodically sync (retrieve mostly I guess) the data to the warehouse first and then query locally?

I always try to get ALL the data so I can query it locally. The problem with APIs that let you run queries is that inevitably there’s something I want to do that can’t be done via the API - so I’d much rather suck everything down into my own database so I can write my own SQL queries.

Here's an example of my swarm-to-sqlite script, pulling in just checkins from the past two weeks (using authentication credentials from an environment variable).

swarm-to-sqlite swarm.db --since=2w

Here's a redacted copy of my Dogsheep crontab.

Q: Have you explored doing this as a single page app so that it is possible to deploy this as a static site? What are the constraints there?

It’s actually possible to query SQLite databases entirely within client-side JavaScript using SQL.js (SQLite compiled to WebAssembly)

This Observable notebook is an example that uses this to run SQL queries against a SQLite database file loaded from a URL.

Datasette’s JSON and GraphQL APIs mean it can easily act as an API backend to SPAs

I built this site to offer a search engine for trees in San Francisco. View source to see how it hits a Datasette API in the background: https://sf-trees.com/?q=palm

You can use the network pane to see that it's running queries against a Datasette backend.

Here's the JavaScript code which calls the API.

This demo shows Datasette’s GraphQL plugin in action.

Q: What possibilities for data entry tools do the writable canned queries open up?

Writable canned queries are a relatively recent Datasette feature that allow administrators to configure a UPDATE/INSERT/DELETE query that can be called by users filling in forms or accessed via a JSON API.

The idea is to make it easy to build backends that handle simple data entry in addition to serving read-only queries. It’s a feature with a lot of potential but so far I’ve not used it for anything significant.

Currently it can generate a VERY basic form (with single-line input values, similar to this search example) but I hope to expand it in the future to support custom form widgets via plugins for things like dates, map locations or autocomplete against other tables.

Q: For the local version where you had a 1-line push to deploy a new datasette: how do you handle updates? Is there a similar 1-line update to update an existing deployed datasette?

I deploy a brand new installation every time the data changes! This works great for data that only changes a few times a day. If I have a project that changes multiple times an hour I’ll run it as a regular VPS instead rather than use a serverless hosting provider.

Friday, 13. November 2020

Tim Bouma's Blog

Self-Sovereign Identity: Interview with Tim Bouma

An interview by SSI_Ambassador, a Twitter account with educational content about self-sovereign identity with a focus on the European Union. The SSI_Ambassador account is managed by Adrian Doerk, and the interview was conducted as part of Adrian’s Bachelor’s thesis. I have asked Adrian’s permission to post this material and he has graciously granted it. The post is a lightly edited version of the interview transcript. The interview took place in September 2020.

Note: All views and opinions expressed are mine only and do not represent that of my employer or organizations with whom I am involved.

Photo by Lili Popper on Unsplash

“The growth factors of Self-Sovereign Identity Solutions in Europe”

Adrian Doerk: My research question is concerned about the growth factors of self-sovereign identity solutions in Europe. You as somebody who is very familiar with the topic of SSI, what would you think about, when you read the term growth factor of self-sovereign identity, what comes to your mind?

Tim Bouma: I believe the main growth factor is going to be adoption by users and it has to be really easy. Another growth factor is that SSI will need to be part of an infrastructure. I’m not sure if SSI is viable being marketed as a separate product because I don’t think end-users really understand it. The growth factor is going to be similar to plumbing — some additional standardized capabilities that we need to build. It will be as exciting as buying a 1/4 inch washer and bolt. It will just be part of the infrastructure and the demand will be from higher-order products, not for SSI itself. I’d say most people won’t even know what it is, nor should they know about it. It’s not that different from the markets in the early days of PC networking. Remember you had your choice of drivers and different companies providing those things and after a while, it just gets baked into the operating system and people don’t even know that they’re using it. As for it being a discrete market, I see it very quickly being subsumed by higher-order products, subsumed into mobile operating systems, into desktop devices, tablets, etc. It’s not that different from how a lot of other products or technologies evolved over time.

Adrian Doerk: We as SSI for German consortia we want to build infrastructure for Europe, so you might have read our press release. Probably not — no worries. So basically, our idea is to come up with a base layer infrastructure which is used as a public utility as defined in the Trust over IP stack level one with a European scope in terms of the governance and a worldwide usage. So considering this plan as public private partnership. What would be your recommendations for the governance for this network?

Tim Bouma: Well, you are totally aligned with my thinking. In fact, we’re about to announce a challenge. There’s a couple of things going on within the government, Canada. We’re launching a technology challenge (note: since this interview the challenge has been launched.) to figure out exactly what layer one would be for the digital infrastructure with the standards, and also what specifically is the scope of layer one and I can point you to that link afterwards, but that’s what I’ve been working on. We were just awarding the contracts as we speak. We’re getting six vendors to help us out. I think to answer your questions, I have some good ideas, but I’m not 100% sure because it is relatively new area and I think we need to be quite open on having our assumptions challenged and change during the course, but I see a very clear differentiation between the technical interoperability and the business interoperability, and in fact the challenge that I’m doing We’ve got six different use cases ranging from government security clearances to issuing of cannabis licenses to name a few. I’m not concerned about the content of the credential because that’s more business interoperability. I’m concerned that whatever credential, SSI credential or whatever is being issued into the system can actually be verified from the system irrespective of what’s inside. I hope I’m not losing track your question here. I see a very clear division of the private sector operating that system. I don’t see why government needs to build it and operate it. We don’t do that for networks, we don’t do that for payment rails. It has to be done in a way that governments have optionality that if a new operator comes along that’s more trustworthy or has different characteristics, there’s no reason why they can’t be used. There’s a risk. Maybe it’s not a risk for this to turn into a natural monopoly if we aren’t careful to make sure that we don’t have the standards 100% right? We have to be very, very careful that we want to have a plurality of operators. But that doesn’t mean a whole lot of them. I see that there were probably only for national infrastructure that maybe one or two domestic operators. And then probably, you know there’s going to be some international operators, but they need to work together so that’s a choice.

Adrian Doerk: Who exactly do you mean with operators? Do you mean the Stewards?

Tim Bouma: OK, so there’s two different things. There’s a steward, the governance which and again this is going to be a tricky and I’ve noticed that the Trust over IP Foundation revised their model that you could have governance at each of the layers. And so the question is governance at which layer and then what’s the composition of that governance? I would see at layer one. It’s largely a technical issue. It could be just part predominantly private sector players, maybe some government or nonprofit, but I just don’t know yet. I think where a government really will play is not in the infrastructure itself, but how that infrastructure is used and relied on for doing administration of programs. Provision of services. You know it could be passports. It can be currency. It could be educational credentials or whatever. I think government needs to be concerned at that level, but less so at the lower levels. But having confidence in those lower levels.

Adrian Doerk: When we speak about adoption, one of the big topics is use cases in general. We think that, more or less, the low-hanging fruit, which is really easy to implement, is where the issuer is also the relying party. For example, a university which issues a student ID and then checks it again to issue the student some other credentials. What do you think would be good use cases to start with? Let me rephrase the question briefly: what are your recommendations for use cases to start with? Which is the best one?

Tim Bouma: We had six vendors propose to us, and they came up with six different use cases, and they're quite varied. I don't think I can say which one is going to take off in terms of adoption, but there are government security clearances, there's cannabis licensing, there's one for having your digital birth certificate, and there's one for a job site permit that came from oil and gas. I'm not so sure which one is going to play out. I think what's more important is really having a crystal clear understanding of the digital infrastructure that can serve all of those use cases. That's where my thinking is: what's the absolute minimum that needs to be built so that it could be an infrastructure? I think any one of these use cases can take off, but I think that model of issuer, holder, verifier (and we've generalized it to methods) is the key. It doesn't have to be a blockchain; it could be a database; it could be different ways of doing it. There's a super pattern there that will serve all the use cases, and this is where I've been putting a lot of intellectual effort, just on my own time, to understand the parallels between digital currency and digital identity. It all boils down to a similar idea: I need to independently verify something, and I need to do it in a way that's as flexible as possible, and then I need some additional functions. For digital currency you need a transfer capability; for digital identity or digital verification, I don't think you need that. What are the absolute minimal requirements for this digital infrastructure? It's kind of like standardizing on paper and ink for doing contracts. You need paper and you need ink. What should we all standardize on? 8 1/2 by 11 or A4, and a special type of ink that you need to use, or just ink? It can't be pencil or graphite or crayon. And that's good enough to move on to all the other various use cases. I don't know which use case is going to take off. I think the important thing for us to do is the critical thinking to figure out what the common patterns underneath are that are going to apply in all of those use cases. And as I said, my working hypothesis now is that issuer, holder, verifier, with some ornamentation, will do the job.

Adrian Doerk: Considering your knowledge of the Pan-Canadian Trust Framework: you, as a policymaker, what would be your recommendation for policymakers in the European Union who work, for example, on the European Self-Sovereign Identity Framework?

Tim Bouma: It's interesting, 'cause I actually had a call on this very same issue. I think policymakers actually have to go back to the drawing board, take a look at all the concepts, and see if they have the right concepts to actually build out a framework and regulation. That's what we've been doing with the Pan-Canadian Trust Framework. What we've tried to do is ingest all the latest concepts, such as issuer, holder, verifier, and credentials, and express them in a way that does not limit them by assumption. For example, you don't assume the credential is a document, or a physical document, or that it's manifested only as a physical document. A credential is a claim that can be independently verified. Coming up with those concepts means that when you're actually building the frameworks and regulations, you have a robust framework that doesn't constrain you to a particular technological approach. There may be new technologies that come along that you didn't even anticipate, but if you've done your critical thinking up front, there should be no reason why you can't adopt them. I think we're at an interesting point right now. We have an opportunity to go back to the drawing board. And this is not just an issue of updating eIDAS or other regulations and tweaking them a bit. It's going back to the drawing board and asking: do we have the right policy constructs, which then could become regulatory requirements or legislative requirements? I think we're building the next generation of solutions here, and it's really important that we have the right constructs going forward. I do have good confidence, because I've looked at my evolution of thinking. I really started to get deep into the space in 2016 and spent a lot of time internalizing the concepts. It's been a lot of iterations, but I feel like we're in a good spot now to actually have a conversation about what these frameworks and regulations might be. It's not just taking a paper analogue, or a document analogue, and saying let's do a digital equivalent of that. We have to think about it differently.

Adrian Doerk: Then I would like to come to my last question. What do you think will be the negative sides, or the dangers, of SSI?

Tim Bouma: Aside from all the hype and blue-sky stuff that has no merit: you see this often with any type of new technology, for example claims that SSI will solve hunger or solve society's problems. First of all, it's about making sure it doesn't get implicated in outrageous claims; those are deeper problems to solve. So, as Gartner calls it, there's the hype cycle, and when you have the hype cycle you get what I call the allergic reaction, where people say, "We're not going to use it because it's got a bad name." The other thing that we need to be concerned with, or cognizant of, is that we could build some capabilities that are outside of the state's control, and I don't know how that would manifest itself. The great example is the Bitcoin blockchain. It is basically a system that just runs on its own, and no one can stop it because of the way it's structured: there's no corporation or operator that you can take down, the algorithm is proof of work, and it's all open and permissionless. People are valuing whatever is associated with their Bitcoin address because they value it, and there's basically no way that a state or large actor can control that. And that's not necessarily a bad thing. The way I've been describing it is that, in the Bitcoin context, from the economic context, we may have a new macroeconomic factor coming on the horizon that we need to work into our models: proof of work turning energy into a digital asset. How that plays out, I don't know. So I think some of the downsides might be that there may be some key capabilities that could be built that could be viewed as illegal or unlawful in certain contexts, and so they get banned outright. So I think we have to be very careful with this new technology to make sure that we bring the stakeholders along so we can embrace the positive side of the technology. Every technology is a two-edged sword: gunpowder, guns, anything. There's an upside and there's a downside, right? And I think that's something that we have to be very cognizant of. In the mid-90s you had the crypto wars with the Clipper chip; you could only export with certain key strengths, and that caused a reaction. So we have to be careful that we don't get caught in those same traps of us against the government, or government against them. I think we have to figure out how to work this out together.

Thursday, 12. November 2020

John Philpin : Lifestream

GAH - whatever I do :-( Not Alone

GAH - whatever I do :-(

Not Alone


Calling Wordpress peeps … #help I am using the 2020 templ

Calling Wordpress peeps … #help

I am using the 2020 template and my HR separator defaults to this …

Whatever happened to good old plain lines that I can manage with CSS - my only choice as an alternative seems to be dots … does anyone know how I can go back to a simple line that I can style myself?

I have googled but all that I can find is what I know … I am guessing that I need to define it in the bowels of WP somewhere?

My thanks in anticipation.


Identity Woman

Self-Sovereign Identity Critique, Critique /7

This is the 7th of 8 posts addressing the accusation by Philip Sheldrake that SSI is dystopian. We have now gotten to the Buckminster Fuller section of the document. I <3 Bucky. He was an amazing visionary and, like Douglas Engelbart, who I had the good fortune to meet and have lunch with, dedicated his life to […]

The post Self-Sovereign Identity Critique, Critique /7 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /6

So Philip here is where you go off the rails to make the assertion that we working on SSI are trying to ‘encompass all of what it means to be human and have an identity’ with our technologies. it’s time to explore a possible categorization of all things ‘identity’ that will help throw some more […]

The post Self-Sovereign Identity Critique, Critique /6 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /5

This is part 5 of 8 posts critiquing Philip’s assertion that all of SSI is a Dystopian effort when its really the work of a community of practical idealists who really want to build real things in the real world and do the right thing. This volume focuses on this quote that draws on Lawrence […]

The post Self-Sovereign Identity Critique, Critique /5 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /4

Philip's essay has so many flaws that I have had to continue to pull it apart in a series. Below is a quote from Philip's critique and I am so confused – What are you talking about? Who has built this system with SSI that you speak of? It just doesn't exist yet. AND […]

The post Self-Sovereign Identity Critique, Critique /4 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /3

I will continue to lay into Philip for making broad, sweeping generalizations that are simply not true and create misinformation in our space. He goes on in his piece to say this: When the SSI community refers to an 'identity layer', its subject is actually a set of algorithms and services designed to […]

The post Self-Sovereign Identity Critique, Critique /3 appeared first on Identity Woman.


Self-Sovereign Identity Critique, Critique /2

At one point in my career I would have been considered "non-technical". This however is no longer the case. I don't write code and I don't as yet write specs. I do understand this technology as deeply as anyone who isn't writing the code can. I co-chair a technical working group developing standards for […]

The post Self-Sovereign Identity Critique, Critique /2 appeared first on Identity Woman.


Simon Willison

Intent to Remove: HTTP/2 and gQUIC server push

Intent to Remove: HTTP/2 and gQUIC server push

The Chrome / Blink team announce their intent to remove HTTP/2 server push support, where servers can start pushing an asset to a client before it has been requested. It's been in browsers for over five years now and adoption is terrible. "Over the past 28 days [...] 99.97% of connections never received a pushed stream that got matched with a request [...] These numbers are exactly the same as in June 2019". Datasette serves redirects with Link: preload headers that cause smart proxies (like Cloudflare) to push the redirected page to the client along with the redirect, but I don't expect to miss that optimization if it quietly stops working.

Via @cramforce


Phil Windley's Technometria

DIDComm and the Self-Sovereign Internet

Summary: DIDComm is the messaging protocol that provides utility for DID-based relationships. DIDComm is more than just a way to exchange credentials, it's a protocol layer capable of supporting specialized application protocols for specific workflows. Because of its general nature and inherent support for self-sovereign relationships, DIDComm provides a basis for a self-sovereign internet much more private, enabling, and flexible than the one we've built using Web 2.0 technologies.

DID-based relationships are the foundation of self-sovereign identity (SSI). The exchange of DIDs to form a connection with another party gives both parties a relationship that is self-certifying and mutually authenticated. Further, the connection forms a secure messaging channel called DID Communication or DIDComm. DIDComm messaging is more important than most understand, providing a secure, interoperable, and flexible general messaging overlay for the entire internet.

Most people familiar with SSI equate DIDComm with verifiable credential exchange, but it's much more than that. Credential exchange is just one of an infinite variety of protocols that can ride on top of the general messaging protocol that DIDComm provides. Comparing DIDComm to the venerable TCP/IP protocol suite does not go too far. Just as numerous application protocols ride on top of TCP/IP, so too can various application protocols take advantage of DIDComm's secure messaging overlay network. The result is more than a secure messaging overlay for the internet, it is the foundation for a self-sovereign internet with all that that implies.

DID Communications Protocol

DIDComm messages are exchanged between software agents that act on behalf of the people or organizations that control them. I often use the term "wallet" to denote both the wallet and agent, but in this post we should distinguish between them. Agents are rule-executing software systems that exchange DIDComm messages. Wallets store DIDs, credentials, personally identifying information, cryptographic keys, and much more. Agents use wallets for some of the things they do, but not all agents need a wallet to function.

For example, imagine Alice and Bob wish to play a game of TicTacToe using game software that employs DIDComm. Alice's agent and Bob's agent will exchange a series of messages. Alice and Bob may be using a game UI and be unaware of the details, but the agents are preparing plaintext JSON messages [1] for each move using a TicTacToe protocol that describes the format and appropriateness of a given message based on the current state of the game.

Alice and Bob play TicTacToe over DIDComm messaging
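To make this concrete, the sketch below shows roughly what one of those plaintext JSON messages might look like before it is encrypted. The general shape (@type, @id, ~thread) follows DIDComm v1 conventions, but the tictactoe message type URI and its fields are illustrative assumptions, not a published protocol definition.

```python
import json
import uuid

# A hypothetical TicTacToe "move" message in the general shape of a DIDComm v1
# plaintext message. Field names other than @type, @id, and ~thread are
# illustrative only.
move_message = {
    "@type": "https://didcomm.org/tictactoe/1.0/move",  # illustrative protocol URI
    "@id": str(uuid.uuid4()),                           # unique message id
    "~thread": {"thid": "game-7c3f"},                   # ties this move to one game
    "me": "X",                                          # which mark the sender is playing
    "moves": ["X:B2"],                                  # the move being made
    "comment": "Your turn!",
}

print(json.dumps(move_message, indent=2))
```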

When Alice places an X in a square in the game interface, her agent looks up Bob's DID Document. She received this when she and Bob exchanged DIDs and it's kept up to date by Bob whenever he rotates the keys underlying the DID [2]. Alice's agent gets two key pieces of information from the DID Document: the endpoint where messages can be sent to Bob and the public key Bob's agent is using for the Alice:Bob relationship.
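A heavily simplified sketch of what Alice's agent reads out of Bob's DID Document might look like the following. Real DID Documents carry more structure, and the identifier, key, and endpoint shown here are placeholders, not values from any actual system.

```python
# A deliberately minimal DID Document for Bob, showing only the two pieces of
# information Alice's agent needs: a key for the Alice:Bob relationship and a
# service endpoint where Bob's agent receives DIDComm messages.
bob_did_doc = {
    "id": "did:peer:bob-placeholder",
    "keyAgreement": [{
        "id": "did:peer:bob-placeholder#key-1",
        "type": "X25519KeyAgreementKey2019",
        "controller": "did:peer:bob-placeholder",
        "publicKeyBase58": "<Bob's public key, base58-encoded>",
    }],
    "service": [{
        "id": "did:peer:bob-placeholder#didcomm",
        "type": "DIDCommMessaging",
        "serviceEndpoint": "https://agents.example.com/bob/inbox",
    }],
}

endpoint = bob_did_doc["service"][0]["serviceEndpoint"]
bob_public_key = bob_did_doc["keyAgreement"][0]["publicKeyBase58"]
```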

Alice's agent uses Bob's public key to encrypt the JSON message to ensure only Bob's agent can read it and adds authentication using the private key Alice uses in the Alice:Bob relationship. Alice's agent arranges to deliver the message to Bob's agent through whatever means are necessary given his choice of endpoint. DIDComm messages are often routed through other agents under Alice and Bob's control.

Once Bob's agent receives the message, it authenticates that it came from Alice and decrypts it. For a game of TicTacToe, it would ensure the message complies with the TicTacToe protocol given the current state of play. If it complies, the agent would present Alice's move to Bob through the game UI and await his response so that the process could continue. But different protocols could behave differently. For example, not all protocols need to take turns like the TicTacToe protocol does.
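The pack-and-unpack step can be illustrated with ordinary public-key cryptography. The sketch below uses PyNaCl's Box construction to stand in for DIDComm's envelope: the message is encrypted so only Bob can read it, and a successful decryption also tells Bob it came from the holder of Alice's private key. The actual DIDComm packing format is more elaborate; this only shows the idea.

```python
import json
from nacl.public import PrivateKey, Box

# Keys for the Alice:Bob relationship. In an SSI system these would live in
# Alice's and Bob's wallets and be bound to their peer DIDs.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice packs a move: encrypted so only Bob can read it and authenticated with
# Alice's private key (Box provides both at once).
move = json.dumps({"@type": "https://didcomm.org/tictactoe/1.0/move", "moves": ["X:B2"]})
sending_box = Box(alice_private, bob_private.public_key)
packed = sending_box.encrypt(move.encode())  # ciphertext sent to Bob's endpoint

# Bob unpacks: decryption only succeeds if the message came from the holder of
# Alice's private key and wasn't tampered with in transit.
receiving_box = Box(bob_private, alice_private.public_key)
unpacked = json.loads(receiving_box.decrypt(packed).decode())
print(unpacked["moves"])
```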

DIDComm Properties

The DIDComm Protocol is designed to be

Secure
Private
Interoperable
Transport-agnostic
Extensible

Secure and private follow from the protocol's support for heterarchical (peer-to-peer) connections and decentralized design along with its use of end-to-end encryption.

As an interoperable protocol, DIDComm is not dependent on a specific operating system, programming language, vendor, network, hardware platform, or ledger [3]. While DIDComm was originally developed within the Hyperledger Aries project, it aims to be the common language of any secure, private, self-sovereign interaction on, or off, the internet.

In addition to being interoperable, DIDComm should be able to make use of any transport mechanism including HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, NFC, Signal, email, push notifications to mobile devices, Ham radio, multicast, snail mail, and more.

DIDComm is an asynchronous, simplex messaging protocol that is designed for extensibility by allowing for protocols to be run on top of it. By using asynchronous, simplex messaging as the lowest common denominator, almost any other interaction pattern can be built on top of DIDComm. Application-layer protocols running on top of DIDComm allow extensibility in a way that also supports interoperability.
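One common way richer patterns are layered on one-way messages is to correlate them with a thread identifier, so a reply carries a pointer back to the message it answers. The sketch below illustrates the idea; the message types and fields are assumptions for illustration, not taken from a published protocol.

```python
import uuid

# Two simplex messages that together form a request/response interaction.
# The response points back at the request via a thread id; this is how
# higher-level interaction patterns are built on one-way DIDComm messages.
request = {
    "@type": "https://didcomm.org/menu/1.0/request",  # illustrative
    "@id": str(uuid.uuid4()),
}

response = {
    "@type": "https://didcomm.org/menu/1.0/offer",    # illustrative
    "@id": str(uuid.uuid4()),
    "~thread": {"thid": request["@id"]},              # correlates the reply to the request
    "items": ["pad thai", "green curry"],
}

assert response["~thread"]["thid"] == request["@id"]
```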

DIDComm Protocols

Protocols describe the rules for a set of interactions, specifying the kinds of interactions that can happen without being overly prescriptive about their nature or content. Protocols formalize workflows for specific interactions like ordering food at a restaurant, playing a game, or applying for college. DIDComm and its application protocols are one of the cornerstones of the SSI metasystem, giving rise to a protocological culture within the metasystem that is open, agentic, inclusive, flexible, modular, and universal.

While we have come to think of SSI agents being strictly about exchanging peer DIDs to create a connection, request and issue a credential, or prove things using credentials, these are merely specific protocols defined to run over the DIDComm messaging protocol. Many others are possible. The following specifications describe the protocols for these three core applications of DIDComm:

Connecting with others
Requesting and issuing credentials
Proving things using credentials

There's a protocol for agents to discover the protocols that another agent supports. And another for one agent to make an introduction [4] of one agent to another. The TicTacToe game Alice and Bob played above is enabled by a protocol for TicTacToe. Bruce Conrad, who works on picos with me, implemented the TicTacToe protocol for picos, which support DIDComm.

Daniel Hardman has provided a comprehensive tutorial on defining protocols on DIDComm. We can imagine a host of DIDComm protocols for all kinds of specialized interactions that people might want to undertake online including the following:

Delegating
Commenting
Notifying
Buying and selling
Negotiating
Enacting and enforcing contracts
Putting things in escrow (and taking them out again)
Transferring ownership
Scheduling
Auditing
Reporting errors

As you can see from this partial list, DIDComm is not just a secure, private way to connect and exchange credentials. Rather DIDComm is a foundation protocol that provides a secure and private overlay to the internet for carrying out almost any online workflow. Consequently, agents are more than the name "wallet" would imply, although that's a convenient shorthand for the common uses of DIDComm today.

A Self-Sovereign Internet

Because of the self-sovereign nature of agents and the flexible, interoperable characteristics they gain from DIDComm, they form the basis for a new, more empowering internet. While self-sovereign identity is the current focus of DIDComm, its capabilities exceed what many think of as "identity." When you combine the vast landscape of potential verifiable credentials with DIDComm's ability to create custom message-based workflows to support very specific interactions, it's easy to imagine that the DIDComm protocol and the heterarchical network of agents it enables will have an impact as large as the web, perhaps the internet itself.

Notes

1. DIDComm messages do not strictly have to be formatted as JSON.
2. Alice's agent can verify that it has the right DID Document and the most recent key by requesting a copy of Bob's key event log (called deltas for peer DIDs) and validating it. This is the basis for saying peer DIDs are self-certifying.
3. I'm using "ledger" as a generic term for any algorithmically controlled, distributed, consensus-based datastore including public blockchains, private blockchains, distributed file systems, and others.
4. Fuse with Two Owners shows an introduction protocol for picos I used in building Fuse in 2014, before picos used DIDComm. I'd like to revisit this as a way of building introductions into picos using a DIDComm protocol.

Photo Credit: Mesh from Adam R (Pixabay)

Tags: decentralized+identifiers didcomm protocol ssi identity credentials self-sovereign vrm me2b

Wednesday, 11. November 2020

Phil Windley's Technometria

Authentic Digital Relationships

Summary: Self-sovereign identity, supported by a heterarchical identity metasystem, creates a firm foundation for rich digital relationships that allow people to be digitally embodied so they can act online as autonomous agents.

An earlier blog post, Relationships and Identity proposed that we build digital identity systems to create and manage relationships—not identities—and discussed the nature of digital relationships in terms of their integrity, lifespan, and utility. You should read that post before this one.

In his article Architecture Eats Culture Eats Strategy, Tim Bouma makes the point that the old management chestnut Culture Eats Strategy leaves open the question: how do we change the culture? Tim's point is that architecture (in the general sense) is the upstream predator to culture. Architecture is a powerful force that drives culture and therefore determines what strategies will succeed—or, more generally, what use cases are possible.

Following on Tim's insight, my thesis is that identity systems are the foundational layer of our digital ecosystem and therefore the architecture of digital identity systems drives online culture and ultimately what we can do and what we can't. Specifically, since identity systems are built to create and manage relationships, their architecture deeply impacts the kinds of relationships they support. And the quality of those relationships determines whether or not we live effective lives in the digital sphere.

Administrative Identity Systems Create Anemic Relationships

I was the founder and CTO of iMall, an early, pioneering ecommerce tools vendor. As early as 1996 we determined that we not only needed a shopping cart that kept track of a shopper's purchases in a single session, but one that knew who the shopper was from visit to visit so we could keep the shopping cart and pre-fill forms with shipping and billing addresses. Consequently, we built an identity system. In the spirit of the early web, it was a one-off, written in Perl and storing personal data in Berkeley DB. We did hash the passwords—we weren't idiots [1].

Early Web companies had a problem: we needed to know things about people and there was no reliable way for them to tell us who they were. So everyone built an identity system and thus began my and your journey to collecting thousands of identifiers as the Web expanded and every single site needed its own way to know things about us.

Administrative identity systems, as these kinds of identity systems are called, create a relationship between the organization operating the identity system and the people who are their customers, citizens, partners, and so on. They are, federation notwithstanding, largely self-contained and put the administrator at the center as shown in Figure 1. This is their fundamental architecture.

Figure 1: Administrative identity systems put the administrator at the center.

Administrative identity systems are owned. They are closed. They are run for the purposes of their owners, not the purposes of the people or things being administered. They provision and permission. They are bureaucracies for governing something. They rely on rules, procedures, and formal interaction patterns. Need a new password? Be sure to follow the password rules of whatever administrative system you're in. Fail to follow the company's terms of service? You could lose your account without recourse.

Administrative identity systems use a simple schema, containing just the attributes that the administrator needs to serve their purposes and reduce risk. The problem I and others were solving back in the 90's was legibility [2]. Legibility is a term used to describe how administrative systems make things governable by simplifying, inventorying, and rationalizing things around them. Identity systems make people legible in order to offer them continuity and convenience while reducing risk for the administrator.
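To make the point about minimal schemas concrete, here is a hypothetical sketch of the kind of account record such a system might keep. The field names are invented for illustration and are not taken from any actual system.

```python
from dataclasses import dataclass

# A hypothetical, deliberately minimal account record of the kind an early
# ecommerce identity system might have kept: just enough to recognize a
# returning shopper and pre-fill checkout, and nothing about the person
# beyond what the administrator needs.
@dataclass
class Account:
    account_id: int
    email: str
    password_hash: str        # hashed, never stored in the clear
    shipping_address: str
    billing_address: str
    marketing_opt_in: bool = False
```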

Administrative identity systems give rise to a systematic inequality in the relationships they manage. Administrative identity systems create bureaucratic cultures. Every interaction you have online happens under the watchful eye of a bureaucracy built to govern the system and the people using it. The bureaucracy may be benevolent, benign, or malevolent but it controls the interaction.

Designers of administrative identity systems do the imaginative work of assigning identifiers, defining the administrative schemas and processes, and setting the purpose of the identity system and the relationships it engenders. Because of the systematic imbalance of power that administrative identity systems create, administrators can afford to be lazy. To the administrator, everyone is structurally the same, being fit into the same schema. This is efficient because they can afford to ignore all the qualities that make people unique and concentrate on just their business. Meanwhile, subjects are left to perform the "interpretive labor," as David Graeber calls it, of understanding the system, what it allows or doesn't, and how it can be bent to accomplish their goals. Subjects have few tools for managing these relationships because each one is a little different, not only technically, but procedurally as well. There is no common protocol or user experience. Consequently, subjects have no way to operationalize the relationship except in whatever manner the administrator allows.

Given that the architecture of administrative identity systems gives rise to a bureaucratic culture, what kinds of strategies or capabilities does that culture engender? Quoting David Graeber from The Utopia of Rules (pg 152):

Cold, impersonal, bureaucratic relations are much like cash transactions, and both offer similar advantages and disadvantages. On the one hand they are soulless. On the other, they are simple, predictable, and—within certain parameters, at least—treat everyone more or less the same.

I argue that this is the kind of thing the internet is best at. Our online relationships with ecommerce companies, social media providers, banks, and others are cold and impersonal, but also relatively efficient. In that sense, the web has kept its promise. But the institutionalized frame of action that has come to define it alienates its subjects in two ways:

They are isolated and estranged from each other.
They surrender control over their online activity and the associated data within a given domain to the administrator of that domain.

The administrative architecture and the bureaucratic culture it creates have several unavoidable, regrettable outcomes:

Anemic relationships that limit the capabilities of the systems they support. For example, social media platforms are designed to allow people to form a link (symmetrical or asymmetrical) to others online. But it is all done within the sphere of the administrative domain of the system provider. The relationships in these systems are like two-dimensional cardboard cutouts of the real relationships they mirror. We inhabit multiple walled gardens that no more reflect real life than do the walled gardens of amusement parks.

A surveillance economy that relies on the weak privacy provisions that administrative systems create to exploit our online behavior as the raw material for products that not only predict, but attempt to manipulate, our future behaviors. [3] Many administrative relationships are set up to harvest data about our online behavior. The administrator controls the nature of these relationships: what is allowed and what behavior is rewarded.

Single points of failure where key parts of our lives are contained within the systems of companies that will inevitably cease to exist someday. In the words of Craig Burton: "It's about choice: freedom of choice vs. prescribed options. Leadership shifts. Policies expire. Companies fail. Systems decay. Give me the freedom of choice to minimize these hazards."

The Self-Sovereign Alternative

Self-sovereign identity (SSI) systems offer an alternative model that supports richer relationships. Rather than provisioning identifiers and accounts in an administrative system where the power imbalance assures that one party to the relationship can dictate the terms of the interaction, SSI is founded on peer relationships that are co-provisioned by the exchange of decentralized identifiers. This architecture implies that both parties will have tools that speak a common protocol.

Figure 2: Self-Sovereign Identity Stack

Figure 2 shows the self-sovereign identity stack. The bottom two layers, the Verifiable Data Repositories and the Peer-to-Peer Agents make up what we refer to as the Identity Metasystem. The features of the metasystem architecture are our primary interest. I have written extensively about the details of the architecture of the metasystem in other posts (see The Sovrin SSI Stack and Decentralized Identifiers).

The architecture of the metasystem has several important features:

Mediated by protocol—Instead of being intermediated by an intervening administrative authority, activities in the metasystem are mediated through peer-to-peer protocol. Protocols are the foundation of interoperability and allow for scale. Protocols describe the rules for a set of interactions, specifying the kinds of interactions that can happen without being overly prescriptive about their nature or content. Consequently, the metasystem supports a flexible set of interactions that can be adapted for many different contexts and needs.

Heterarchical—Interactions in the metasystem are peer-to-peer rather than hierarchical. They are not just distributed, but decentralized. Decentralization enables autonomy and flexibility and assures the metasystem's independence from the influence of any single actor. No centralized system can anticipate all the various use cases. And no single actor should be allowed to determine who uses the system or for what purposes.

Consistent user experience—A consistent user experience doesn't mean a single user interface. Rather the focus is on the experience. As an example, consider an automobile. My grandfather, who died in 1955, could get in a modern car and, with only a little instruction, successfully drive it. Consistent user experiences let people know what to expect so they can intuitively understand how to interact in any given situation regardless of context.

Polymorphic—The information we need in any given relationship varies widely with context. The content that an identity metasystem carries must be flexible enough to support many different situations.

These architectural features give rise to a culture that I describe as protocological. The protocological culture of the identity metasystem has the following properties:

Open and permissionless—The metasystem has the same three virtues of the Internet that Doc Searls and Dave Weinberger enumerated as NEA: No one owns it, Everyone can use it, and Anyone can improve it. Special care is taken to ensure that the metasystem is censorship resistant so that everyone has access. The protocols and code that enable the metasystem are open source and available for review and improvement.

Agentic—The metasystem allows people to act as autonomous agents, under their self-sovereign authority. The most vital value proposition of self-sovereign identity is autonomy—not being inside someone else's administrative system where they make the rules in a one-sided way. Autonomy requires that participants interact as peers in the system, which the architecture of the metasystem supports.

Inclusive—Inclusivity is more than being open and permissionless. Inclusivity requires design that ensures people are not left behind. For example, some people cannot act for themselves for legal (e.g. minors) or other (e.g. refugees) reasons. Support for digital guardianship ensures that those who cannot act for themselves can still participate.

Flexible—The metasystem allows people to select appropriate service providers and features. No single system can anticipate all the scenarios that will be required for billions of individuals to live their own effective lives. A metasystem allows for context-specific scenarios.

Modular—An identity metasystem can't be a single, centralized system from a single vendor with limited pieces and parts. Rather, the metasystem will have interchangeable parts, built and operated by various parties. Protocols and standards enable this. Modularity supports substitutability, a key factor in autonomy and flexibility.

Universal—Successful protocols eat other protocols until only one survives. An identity metasystem based on protocol will have network effects that drive interoperability leading to universality. This doesn't mean that one organization will have control, it means that one protocol will mediate all interaction and everyone in the ecosystem will conform to it.

Supporting Authentic Relationships

Self-sovereign identity envisions digital life that cannot be supported with traditional identity architectures. The architecture of self-sovereign identity and the culture that springs from it support richer, more authentic relationships:

Self-sovereign identity provides people with the means of operationalizing their online relationships by providing them the tools for acting online as peers and managing the relationships they enter into.

Self-sovereign identity, through protocol, allows ad hoc interactions that were not, or could not be, imagined a priori.

The following subsections give examples for each of these.

Disintermediating Platforms

Many real-world experiences have been successfully digitized, but the resulting intermediation opens us to exploitation despite the conveniences. We need digitized experiences that respect human dignity and don't leave us open to being exploited for some company's advantage. As an example consider how the identity metasystem could be the foundation for a system that disintermediates the food delivery platforms. Platform companies have been very successful in intermediating these exchanges and charging exorbitant rents for what ought to be a natural interaction among peers.

That's not to say platforms provide no value. The problem isn't that they charge for services, but that their intervening position gives them too much power to make markets and set prices. Platforms provide several things that make them valuable to participants: a means of discovering relevant service providers, a system to facilitate the transaction, and a trust framework to help participants make the leap over the trust gap, as Rachel Botsman puts it. An identity metasystem supporting self-sovereign identity provides a universal trust framework for building systems that can serve as the foundation for creating markets without intermediaries. Such a system with support for a token can even facilitate the transaction without anyone having an intervening position.

Disintermediating platforms requires creating a peer-to-peer marketplace on top of the metasystem. While the metasystem provides the means of creating and managing the peer-to-peer relationship, defining this marketplace requires determining the messages to be exchanged between participants and creating the means of discovery. These messages might be simple or complex depending on the market and could be exchanged using DIDComm, or even ride on top of a verifiable credential exchange. There might be businesses that provide discovery, but they don't intermediate, they sit to the side of the interaction providing a service. For example, such a business might provide a service that allows a restaurant to define its menu, create a shopping cart, and provide for discovery, but the merchant could replace it with a similar service, providing competition, because the trust interaction and transaction are happening via a protocol built on a universal metasystem.

Building markets without intermediaries greatly reduces the cost of participating in the market and frees participants to innovate. Because these results are achieved through protocol, we do not need to create new regulations that stifle innovation and lock in incumbents by making it difficult for new entrants to comply. And these systems preserve human dignity and autonomy by removing administrative authorities.

Digitizing Auto Accidents

As an example of the kinds of interactions that people have every day that are difficult to bring into the administrative sphere, consider the interactions that occur between various participants and their representatives following an auto accident. Because these interactions are ad hoc, large parts of our lives have yet to enjoy the convenience of being digitized. In You've Had an Automobile Accident, I imagine a digital identity system that enables the kinds of ad hoc, messy, and unpredictable interactions that happen all the time in the physical world.

In this scenario, two drivers, Alice and Bob, have had an accident. Fortunately, no one was hurt, but the highway patrol has come to the scene to make an accident report. Both Alice and Bob have a number of credentials that will be necessary to create the report:

Proof of insurance issued by their respective insurance companies
Vehicle title from the state
Vehicle registration issued by the Department of Motor Vehicles (DMV), potentially in different states
Driver's license, potentially from a different agency than the one that registers cars, and potentially in different states

In addition, the police officer has credentials from the Highway Patrol, Alice and Bob will make and sign statements, and the police officer will create an accident report. What's more, the owners of the vehicles may not be the drivers.

Now imagine you're building a startup to solve the "car accident use case." You imagine a platform to relate to all these participants and intermediate the exchange of all this information. To have value, it has to do more than provide a way to exchange PDFs and most if not all of the participants have to be on it. The system has to make the information usable. How do you get all the various insurance companies, state agencies, to say nothing of the many body shops and hospitals, fire departments, and ambulance companies on board? And yet, these kinds of ad hoc interactions confront us daily.

Taking our Rightful Place in the Digital Sphere

Devon Loffreto said something recently that made me think:

You do not have an accurate operational relationship with your Government.

My thought was "not just government". The key word is "operational". People don't have operational relationships anywhere online. [4] We have plenty of online relationships, but they are not operational because we are prevented from acting by their anemic natures. Our helplessness is the result of the power imbalance that is inherent in bureaucratic relationships. The solution to the anemic relationships created by administrative identity systems is to provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. Scenarios like the ones envisioned in the preceding section happen all the time in the physical world—in fact they're the norm. When we dine at a restaurant or shop at a store in the physical world, we do not do so under some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. An identity metasystem provides people with the tools they need to be "embodied" in the digital world and act autonomously.

Time and again, various people have tried to create decentralized marketplaces or social networks only to fail to gain traction. These systems fail because they are not based on a firm foundation that allows people to act in relationships with sovereign authority in systems mediated through protocol rather than companies. We have a fine example of a protocol mediated system in the internet, but we've failed to take up the daunting task of building the same kind of system for identity. Consequently, when we act, we do so without firm footing or sufficient leverage.

Ironically, the internet broke down the walled gardens of CompuServe and Prodigy with a protocol-mediated metasystem, but surveillance capitalism has rebuilt them on the web. No one could live an effective life in an amusement park. Similarly, we cannot function as fully embodied agents in the digital sphere within the administrative systems of surveillance capitalists, despite their attractions. The emergence of self-sovereign identity, agreements on protocols, and the creation of a metasystem to operationalize them promises a digital world where decentralized interactions create life-like online experiences. The identity metasystem and the richer relationships that result from it promise an online future that gives people the opportunity to act for themselves as autonomous human beings and supports their dignity so that they can live an effective online life.

End Notes

1. Two of my friends at the time, Eric Thompson and Stacey Son, were playing with FPGAs that could crack hashed passwords, so we were aware of the problems and did our best to mitigate them.
2. See Venkatesh Rao's nice summary of James C. Scott's seminal book on legibility and its unintended consequences, Seeing Like a State, for more on this idea.
3. See The Age of Surveillance Capitalism by Shoshana Zuboff for a detailed (705 page) exploration of this idea.
4. The one exception I can think of to this is email. People act through email all the time in ways that aren't intermediated by their email provider. Again, it's a result of the architecture of email, set up over four decades ago, and the culture that architecture supports.

Photo Credit: Two Yellow Bees from Pikrepo (public)

Tags: ssi identity relationships administrative+identity sovereign+source


John Philpin : Lifestream

Now THAT is Visual Thinking

Now THAT is Visual Thinking


Question for the Apple cognoscenti. How much email storag

Question for the Apple cognoscenti.

How much email storage can you get with Apple Mail?

Eg; If you have signed up for (say) a couple of Tera Wotsits in icloud, does that mean your Mac mail system can get pretty giant?

UPDATE : email is limited only by your icloud storage.


Day 10. The elderly say that youth is wasted on the young

Day 10.

The elderly say that youth is wasted on the young.

See The Series


Day 11. The youth say nothing … because they don’t hear t

Day 11.

The youth say nothing … because they don’t hear the elderly.

See The Series

Tuesday, 10. November 2020

Phil Windley's Technometria

Operationalizing Digital Relationships

Summary: An SSI wallet provides a place for people to stand in the digital realm. Using the wallet, people can operationalize their digital relationships as peers with others online. The result is better, more authentic, digital relationships, more flexible online interactions, and the preservation of human freedom, privacy, and dignity.

Recently, I've been making the case for self-sovereign identity and why it is the correct architecture for online identity systems.

In Relationships and Identity, I discuss why identity systems are really built to manage relationships rather than identities. I also show how the architecture of the identity system affects the three important properties relationships must have to function effectively: integrity, lifespan, and utility.

In The Architecture of Identity Systems, I introduced a classification scheme that broadly categorized identity systems into one of three broad architectures: administrative, algorithmic, or autonomic.

In Authentic Digital Relationships, I discuss how the architecture of an identity system affects the authenticity of the relationships it manages.

This post focuses on how people can operationalize the relationships they are party to and become full-fledged participants online. Last week, Doc Searls posted What SSI Needs where he recalls how the graphical web browser was the catalyst for making the web real. Free (in both the beer and freedom senses) and substitutable, browsers provided the spark that gave rise to the generative qualities that ignited an explosion of innovation. Doc goes on to posit that the SSI equivalent of the web browser is what we have called a "wallet" since it holds, analogously to your physical wallet, credentials.

The SSI wallet Doc is discussing is the tool people use to operationalize their digital relationships. I created the following picture to help illustrate how the wallet fulfills that purpose.

Relationships and Interactions in Sovrin Network

This figure shows the relationships and interactions in SSI networks enabled by the Hyperledger Indy and Aries protocols. The most complete example of these protocols in production is the Sovrin Network.

In the figure, Alice has an SSI wallet [1]. She uses the wallet to manage her relationships with Bob and Carol as well as a host of organizations. Bob and Carol also have wallets. They have a relationship with each other and Carol has a relationship with Bravo Corp, just as Alice does [2]. These relationships are enabled by autonomic identifiers in the form of peer DIDs (blue arrows). The SSI wallet each participant uses provides a consistent user experience, like the browser did for the Web. People using wallets don't see the DIDs (identifiers) but rather the connections they have to other people, organizations, and things.

These autonomic relationships are self-certifying, meaning they don't rely on any third party for their trust basis. They are also mutually authenticating: each of the parties in the relationship can authenticate the other. Further, these relationships create a secure communications channel using the DIDComm protocol. Because of the built-in mutual authentication, DIDComm messaging creates a batphone-like experience wherein each participant knows they are communicating with the right party without the need for further authentication. As a result, Alice has trustworthy communication channels with everyone with whom she has a peer DID relationship.

Alice, as mentioned, also has a relationship with various organizations. One of them, Attester Org, has issued a verifiable credential to Alice. They issued the credential (green arrows) using the Aries credential exchange protocol that runs on top of the DIDComm-based communication channel enabled by the peer DID relationship Alice has with Attester. The credential they issue is founded on the credential definition and public DID (an algorithmic identifier) that Attester Org wrote to the ledger.

When Alice later needs to prove something (e.g. her address) to Certiphi Corp, she presents the proof over the DIDComm protocol, again enabled by the peer DID relationship she has with Certiphi Corp. Certiphi is able to validate the fidelity of the credential by reading the credential definition from the ledger, retrieving Attester Org's public DID from the credential definition, and resolving it to get Attester Org's public key to check the credential's signature. At the same time, Certiphi can use cryptography to know that the credential is being presented by the person it was issued to and that it hasn't been revoked.
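The verification flow can be sketched with ordinary digital signatures. The code below simulates the ledger as a dictionary mapping Attester Org's public DID to its verification key, issues a credential by signing its contents with an Ed25519 key (via PyNaCl), and then verifies it roughly the way Certiphi would: resolve the issuer's DID, check the signature, and check a revocation list. Real Indy/Aries credentials use more sophisticated cryptography (CL signatures and zero-knowledge proofs of holder binding), so this is an illustration of the pattern, not the actual mechanism, and all names and values are placeholders.

```python
import json
from nacl.signing import SigningKey, VerifyKey
from nacl.encoding import HexEncoder
from nacl.exceptions import BadSignatureError

# --- Attester Org (issuer): publish a public DID and verification key to a stand-in "ledger" ---
issuer_key = SigningKey.generate()
ledger = {
    "did:example:attester-org": issuer_key.verify_key.encode(encoder=HexEncoder).decode()
}
revoked_credentials = set()  # stand-in for a revocation registry

# --- Issuance: Attester Org signs a credential about Alice ---
claims = {"issuer": "did:example:attester-org", "subject": "Alice", "address": "123 Main St"}
payload = json.dumps(claims, sort_keys=True).encode()
credential = {"id": "cred-001", "claims": claims, "signature": issuer_key.sign(payload).signature.hex()}

# --- Verification: roughly what Certiphi Corp does when Alice presents the credential ---
def verify(credential, ledger, revoked_credentials):
    claims = credential["claims"]
    verkey_hex = ledger.get(claims["issuer"])  # resolve the issuer's public DID on the ledger
    if verkey_hex is None or credential["id"] in revoked_credentials:
        return False
    verify_key = VerifyKey(verkey_hex.encode(), encoder=HexEncoder)
    payload = json.dumps(claims, sort_keys=True).encode()
    try:
        verify_key.verify(payload, bytes.fromhex(credential["signature"]))
        return True
    except BadSignatureError:
        return False

print(verify(credential, ledger, revoked_credentials))  # True
```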

This diagram has elements of each architectural style described in The Architecture of Identity Systems.

Alice has relationships with five different entities: her friends Bob and Carol as well as three different organizations. These relationships are based on autonomic identifiers in the form of peer DIDs.

All of the organizations use enterprise wallets to manage autonomic relationships [4].

As a credential issuer, Attester Org has an algorithmic identifier in the form of a public DID that has been recorded on the ledger. The use of algorithmic identifiers on a ledger [3] allows public discovery of the credential definition and the public DID by Certiphi Corp when it validates the credential. The use of a ledger for this purpose is not optional unless we give up the loose coupling it provides. Loose coupling provides scalability, flexibility, and isolation. Isolation is critical to the privacy protections that verifiable credential exchange via Aries promises.

Each company will keep track of attributes and other properties they need for the relationship to provide the needed utility. These are administrative systems since they are administered by the organization for their own purpose and their root of trust is a database managed by the organization. The difference between these administrative systems and those common in online identity today is that only the organization is depending on them. People have their own autonomic root of trust in the key event log that supports their wallet.

Alice's SSI wallet allows her to create, manage, and utilize secure, trustworthy communications channels with anyone online without reliance on any third party. Alice's wallet is also the place where specific, protocol-enabled interactions like credential exchange happen. The wallet is a flexible tool that Alice uses to manage her digital life.

We have plenty of online relationships today, but they are not operational because we are prevented from acting by their anemic natures. Our helplessness is the result of the power imbalance that is inherent in bureaucratic relationships. The solution to the anemic relationships created by administrative identity systems is to provide people with the tools they need to operationalize their self-sovereign authority and act as peers with others online. When we dine at a restaurant or shop at a store in the physical world, we do not do so within some administrative system. Rather, as embodied agents, we operationalize our relationships, whether they be long-lived or nascent, by acting for ourselves. The SSI wallet is the platform upon which people can stand and become digitally embodied to operationalize their digital life as full-fledged participants in the digital realm.

Alice's SSI wallet is like other wallets she has on her phone with several important differences. First, it is enabled by open protocols and second, it is entirely under her control. I'm using the term "wallet" fairly loosely here to denote not only the wallet but also the agent necessary for the interactions in an SSI ecosystem. For purposes of this post, delineating them isn't important. In particular, Alice may not be aware of the agent, but she will know about her wallet and see it as the tool she uses. Note that Bob doesn't have a relationship with any of the organizations shown. Each participant has the set of relationships they choose to have to meet their differing circumstances and needs. I'm using "ledger" as a generic term for any algorithmically controlled distributed consensus-based datastore including public blockchains, private blockchains, distributed file systems, and others. Enterprise wallets speak the same protocols as the wallets people use, but are adapted to the increased scale an enterprise would likely need and are designed to be intergrated with the enterprise's other administrative systems.

Photo Credit: Red leather wallet on white paper from Pikrepo (CC0)

Tags: aries indy ssi autonomic algorithmic decentralized+identifiers credentials identity vrm me2b


Identity Woman

Self-Sovereign Identity Critique, Critique /8


Now we are in the Meg Wheatley section of the article. I’ve been reading Meg’s book since I read Leadership and the New Science 25 years ago. Relationships are the pathways for organizing, required for the creation and transformation of information, the expansion of the organizational identity, and accumulation of wisdom. Relationships are formed with […]

The post Self-Sovereign Identity Critique, Critique /8 appeared first on Identity Woman.

Monday, 09. November 2020

reb00ted

The better, and harder, plan!

I would give credit if I knew to whom.



Phil Windley's Technometria

Relationships and Identity


Summary: We build digital identity systems to create and manage relationships—not identities. We need our digital relationships to have integrity and to be useful over a specified lifetime. Identity systems should provide relationship integrity and utility to participants for the appropriate length of time. Participants should be able to create relationships with whatever party will provide utility. SSI provides improved support for creating, managing, and using digital relationships.

The most problematic word in the term Self-Sovereign Identity (SSI) isn't "sovereign" but "identity" because whenever you start discussing identity, the conversation is rife with unspoken assumptions. Identity usually conjures thoughts of authentication and various federation schemes that purport to make authentication easier. I frequently point out that even though SSI has "identity" in its name, there's no artifact in SSI called an "identity." Instead the user experience in an SSI system is based on forming relationships and using credentials.

I've been thinking a lot lately about the role of relationships in digital identity systems and have come to the conclusion that we've been working on building systems that support digital relationships without acknowledging the importance of relationships or using them in our designs and architectures. The reason we build identity systems isn't to manage identities, it's to support digital relationships.

I recently read and then reread a 2011 paper called Identities Evolve: Why Federated Identity is Easier Said than Done from Steve Wilson. Despite being almost ten years old, the paper is still relevant, full of good analysis and interesting ideas. Among those is the idea that the goal of using federation schemes to create a few identities that serve all purposes is deeply flawed. Steve's point is that we have so many identities because we have lots of relationships. The identity data for a given relationship is contextual and highly evolved to fit its specific niche.

Steve's discussion reminded me of a talk Sam Ramji used to give about speciation of APIs. Sam illustrated his talk with a picture from Encyclopedia Britannica to show adaptive radiation in Galapagos finches in response to evolutionary pressure. These 14 different finches all share a common ancestor, but ended up with quite different features because of specialization for a particular niche.

Adaptive radiation in Galapagos finches (click to enlarge)

In the same way, each of us has hundreds, even thousands, of online relationships. They each have a common root but are highly contextualized. Some are long-lived, some are ephemeral. Some are personal, some are commercial. Some are important, some are trivial. Still, we have them. The information about ourselves, what many refer to as identity data, that we share with each one is just as adapted to the specific niche that the relationship represents as the Galapagos finches are to their niches. Once you realize this, the idea of creating a few online identities to serve all needs becomes preposterous.

Not only is each relationship evolved for a particular niche, but it is also constantly changing. Often those changes are just more of what's already there. For example, my Netflix account represents a relationship between me and Netflix. It's constantly being updated with my viewing data but not changing dramatically in structure. But some changes are larger. For example, it also allows me to create additional profiles which makes the relationship specialized for the specific members of my household. And when Netflix moved from DVDs only to streaming, the nature of my relationship with Netflix changed significantly.

I'm convinced that identity systems would be better architected if we were more intentional about their need to support specialized relationships spanning millions of potential relationship types. This article is aimed at better understanding the nature of digital relationships.

Relationships

One of the fundamental problems of internet identity is proximity. Because we're not interacting with people physically, our natural means of knowing who we're dealing with are useless. Joe Andrieu defines identity as "how we recognize, remember, and respond" to another entity. These three activities correspond to three properties digital relationships must have to overcome the proximity problem:

Integrity—we want to know that, from interaction to interaction, we're dealing with the same entity we were before. In other words, we want to identify them.
Lifespan—normally we want relationships to be long-lived, although we also create ephemeral relationships for short-lived interactions.
Utility—we create online relationships in order to use them within a specific context.

We'll discuss each of these in detail below, followed by a discussion of risk in digital relationships.

Relationship Integrity

Without integrity, we cannot recognize the other party to the relationship. Consequently, all identity systems manage relationship integrity as a foundational capability. Federated identity systems improve on one-off, often custom identity systems by providing integrity in a way that reduces user management overhead for the organization, increases convenience for the user, and increases security by eliminating the need to create one-off, proprietary solutions. SSI aims to establish integrity with the convenience of the federated model but without relying on an intervening IdP in order to provide autonomy and privacy.

A relationship has two parties, let's call them P1 and P2.1 P1 is connecting with P2 and, as a result, P1 and P2 will have a relationship. P1 and P2 could be people, organizations, or things represented by a website, app, or service. Recognizing the other party in an online relationship relies on being able to know that you're dealing with the same entity each time you encounter them.

In a typical administrative identity system, when P1 initiates a relationship with P2, P2 typically uses usernames and passwords to ensure the integrity of the relationship. By asking for a username to identify P1 and a password to ensure that it's the same P1 as before, P2 has some assurance that they are interacting with P1. In this model, P1 and P2 are not peers. Rather P2 controls the system and determines how and for what it is used.
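To make the administrative pattern concrete, here is a minimal sketch, in Python using only the standard library, of how P2 might bind P1's username to a password and later check it. The storage layout and function names are illustrative assumptions, not any particular product's implementation; the point is that the entire record lives in P2's database, which is one reason P1 and P2 are not peers.

import hashlib, hmac, secrets

accounts = {}  # P2's administrative database: username -> (salt, password hash)

def register(username, password):
    # P2 records the binding between the identifier (username) and the
    # authentication factor (a salted password hash).
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    accounts[username] = (salt, digest)

def authenticate(username, password):
    # P2 decides whether this is "the same P1 as before".
    if username not in accounts:
        return False
    salt, stored = accounts[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

register("p1", "correct horse battery staple")
assert authenticate("p1", "correct horse battery staple")
assert not authenticate("p1", "guess")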

In a federated flow, P1 is usually called the subject or consumer and P2 is called the relying party (RP). When the consumer visits the RP's site or opens their app, they are offered the opportunity to establish a relationship through an identity provider (IdP) whom the RP trusts. The consumer may or may not have a relationship with an IdP the RP trusts. RPs pick well-known IdPs with large numbers of users to reduce friction in signing up. The consumer chooses which IdP they want to use from the relying party's menu and is redirected to the IdP's identity service where they present a username and password to the IdP, are authenticated, and redirected back to the RP. As part of this flow, the RP gets some kind of token from the IdP that signifies that the IdP will vouch for this person. They may also get attributes that the IdP has stored for the consumer.2
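The sketch below compresses that redirect dance into plain function calls so the roles are easier to see. The class names, token format, and attribute payload are illustrative assumptions rather than any specific federation protocol such as OpenID Connect or SAML; what matters is that the IdP, not the RP, checks P1's credentials and then vouches for P1.

import secrets

class IdentityProvider:
    # The IdP holds the credentials and attributes; the RP never sees them.
    def __init__(self):
        self.users = {"p1@example.com": "p1-password"}
        self.attributes = {"p1@example.com": {"name": "P1", "email": "p1@example.com"}}
        self.tokens = {}

    def login(self, username, password):
        # P1 is redirected here and authenticates to the IdP.
        if self.users.get(username) == password:
            token = secrets.token_urlsafe(16)
            self.tokens[token] = username
            return token  # carried back to the RP in the redirect
        return None

    def verify(self, token):
        # The RP presents the token; the IdP vouches for P1 and may share attributes.
        username = self.tokens.get(token)
        return self.attributes.get(username) if username else None

class RelyingParty:
    def __init__(self, idp):
        self.idp = idp
        self.sessions = {}

    def callback(self, token):
        claims = self.idp.verify(token)
        if claims is None:
            return False
        self.sessions[claims["email"]] = claims  # relationship integrity rests on the IdP
        return True

idp = IdentityProvider()
rp = RelyingParty(idp)
token = idp.login("p1@example.com", "p1-password")  # consumer authenticates at the IdP
assert rp.callback(token)                           # RP relies on the IdP's assertion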

In the federated model, the IdP is identifying the person and attesting the integrity of the relationship to the RP. The IdP is a third party, P3, who acts as an intervening administrative authority. Without their service, the RP may not have an alternative means of assuring themselves that the relationship has integrity over time. On the other hand, the person gets no assurance from the identity system about relationship integrity in this model. For that they must rely on TLS, which is visible in a web interaction, but largely hidden inside an app on a mobile device. P1 and P2 are not peers in the federated model. Instead, P1 is subject to the administrative control of both the IdP and the RP. Further, the RP is subject to the administrative control of the IdP.

In SSI, a relationship is initiated when P1 and P2 exchange decentralized identifiers (DIDs). For example, when a person visits a web site or app, they are presented with a connection invitation. When they accept the invitation, they use a software agent to share a DID that they created. In turn, they receive a DID from the web site, app, or service. We call this a "connection" since DIDs are cryptographically based and thus provide a means of both parties mutually authenticating. The user experience does not necessarily surface all this activity to the user. To get a feel for the user experience, run through the demo at Connect.me3.

In contrast to the federated model, the participants in SSI mutually authenticate and the relationship has integrity without the intervention of a third party due to the self-certifying nature of the identifiers. By exchanging DIDs both parties have also exchanged public keys. They can consequently use cryptographic means to ensure they are interacting with the party who controls the DID they received when the relationship was initiated. Mutual authentication, based on self-certifying DIDs provides SSI relationships with inherent integrity. P1 and P2 are peers in SSI since they both have equal control over the relationship.
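Here is a rough sketch of that mutual, challenge-response authentication, assuming Ed25519 key pairs (via the third-party cryptography package) stand in for the keys exchanged along with the DIDs. The identifier format and helper names are illustrative assumptions, not the Aries or DIDComm wire protocol.

import hashlib, secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

class Peer:
    def __init__(self):
        self._key = Ed25519PrivateKey.generate()  # private key never leaves the peer
        self.public_bytes = self._key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        # Identifier derived from the public key, making it self-certifying
        # (the format here is illustrative, not a registered DID method).
        self.did = "did:peer:" + hashlib.sha256(self.public_bytes).hexdigest()[:16]

    def sign(self, challenge):
        return self._key.sign(challenge)

def verify(public_bytes, challenge, signature):
    try:
        Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

p1, p2 = Peer(), Peer()
# After DID exchange each peer holds the other's DID and public key, so each
# can challenge the other; no IdP sits in the middle.
challenge = secrets.token_bytes(32)
assert verify(p2.public_bytes, challenge, p2.sign(challenge))  # P1 authenticates P2
challenge = secrets.token_bytes(32)
assert verify(p1.public_bytes, challenge, p1.sign(challenge))  # P2 authenticates P1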

In addition to removing the need for intermediaries to vouch for the integrity of the relationship, the peer nature of relationships in SSI also means that neither party has access to the authentication credentials of the other. Mutual authentication means that each party manages their own keys and never shares the private key with another party. Consequently, attacks, like the recent attack on Twitter accounts can't happen in SSI because there's no administrator who has access to the credentials of everyone using the system.

Relationship Lifespan

Whether in the physical world or digital, relationships have lifespans. Some relationships are long-lived, some are short-term, and others are ephemeral, existing only for the duration of a single interaction. We typically don't think of it this way, but every interaction we have in the physical world, no matter for what purpose or how short, sets up a relationship. So too in the digital world, although our tools, especially as people online, have been sorely lacking.

I believe one of the biggest failings of modern digital identity systems is our failure to recognize that people often want, even need, short-lived relationships. Think about your day for a minute. How many people and organizations did you interact with in the physical world4 where you established a permanent relationship? What if whenever you stopped in the convenience store for a cup of coffee, you had to create a permanent relationship with the coffee machine, the cashier, the point of sale terminal, and the customers in line ahead and behind you? Sounds ridiculous. But that's what most digital interactions require. At every turn we're asked to establish permanent accounts to transact and interact online.

There are several reasons for this. The biggest one is that every Web site, app, or service wants to send you ads, at best, or track you on other sites, at worst. Unneeded, long-lived relationships are the foundation of the surveillance economy that has come to define the modern online experience.

There are a number of services I want a long-lived relationship with. I value my relationships with Amazon and Netflix, for example. But there are many things I just need to remember for the length of the interaction. I ordered some bolts for a car top carrier from a specialty place a few weeks ago. I don't order bolts all the time, so I just want to place my order and be done. I want an ephemeral relationship. The Internet of Things will increase the need for ephemeral relationships. When I open a door with my digital credential, I don't want the hassle of creating a long-term relationship with it; I just want to open the door and then forget about it.

Digital relationships should be easy to set up and tear down. They should allow for the relationship to grow over time, if both parties desire. While they exist, they should be easily manageable while providing all the tools for the needed utility. Unless there's long-term benefit to me, I shouldn't need to create a long-term relationship. And when a digital relationship ends, it should really be over.

Relationship Utility

Obviously, we don't create digital relationships just so we can authenticate the other party. Integrity is a necessary, but insufficient, condition for an identity system. This is where most identity models fall short. We can understand why this is so given the evolution of the modern Web. For the most part, user-centric identity really took off when the web gave people reasons to visit places they didn't have a physical relationship with, like an ecommerce web site. Once the identity system had established the integrity of the relationship, at least from the web site's perspective, we expected HTTP would provide the rest.

Most Identity and Access Management systems don't provide much beyond integrity except possibly access control. Once Facebook has established who you are, it knows just what resources to let you see or change. But as more and more of our lives are intermediated by digital services, we need more utility from the identity system than simple access control. The most that an IdP can provide in the federated model is integrity and, perhaps, a handful of almost-always-self-asserted attributes in a static, uncustomizable schema. But rich relationships require much more than that.

Relationships are established to provide utility. An ecommerce site wants to sell you things. A social media site wants to show you ads. Thus, their identity systems, built around the IAM system, are designed to do far more than just establish the integrity of the relationship. They want to store data about you and your activities. For the most part this is welcome. I love that Amazon shows me my past orders, Netflix remembers where I was in a series, and Twitter keeps track of my followers and past tweets.

The true identity system is much, much larger and specialized than the IAM portion. All of the account or profile data these companies use is properly thought of as part of the identity system that they build and run. Returning to Joe Andrieu:

Identity systems acquire, correlate, apply, reason over, and govern [the] information assets of subjects, identifiers, attributes, raw data, and context.

Regardless of whether or not they outsource the integrity of their relationships (and notice that none of the companies I list above do), companies still have to keep track of the relationships they have with customers or users in order to provide the service they promise. They can't outsource this to a third party because the data in their identity system has evolved to suit the needs of the specific relationship. We'll never have a single identity that serves all relationships because they are unique contexts that demand their own identity data. Rip out the identity system from a Netflix or Amazon and it won't be the same company anymore.

This leads us to a simple, but important conclusion: You can't outsource a relationship. Online apps and services decorate the relationship with information they observe, and use that information to provide utility to the relationships they administer. Doing this, and doing it well is the foundation of the modern web.

Consequently, the bad news is that SSI is not going to reduce the need for companies to build, manage, and use identity systems. Their identity systems are what make them what they are—there is no "one size fits all" model. The good news is that SSI makes online relationships richer. Relationships are easier and less expensive to manage, not just for companies, but for people too. Here are some of the ways SSI will enhance the utility of digital relationships:

Richer, more trustworthy data—Relationships change over time because the parties change. We want to reliably ascertain facts about the other party in a relationship, not just from direct observation, but also from third parties in a trustworthy manner to build confidence in the actions of the other party within a specific context. Verifiable credentials, self-issued or from others, allow relationships to be enhanced incrementally as the relationship matures or changes.
Autonomy and choice through peer relationships—Peer relationships give each party more autonomy than traditional administrative identity systems have provided. And, through standardization and substitutability, SSI gives participants choice in what vendors and agents they employ to manage their relationships. The current state of digital identity is asymmetric, providing companies with tools that are difficult or unwieldy for people to use. People like Doc Searls and organizations like Project VRM and the Me2B Alliance argue that people need tools for managing online relationships too. SSI provides people with tools and autonomy to manage their online relationships with companies and each other.
Better, more secure communications—DID exchange provides a private, secure messaging channel using the DIDComm protocol. This messaging channel is mutually recognized, authenticated, and encrypted.
Unifying, interoperable protocol for data transmission—DIDs, Verifiable Credentials, and DIDComm provide a standardized, secure means of interaction. Just like URLs, HTML, and HTTP provided for an interoperable web that led to an explosion of uses, the common protocol of SSI will ensure everyone benefits from the network effects.
Consistent user experience—Similarly, SSI provides a consistent user experience, regardless of what digital wallet people use. SSI's user experience is centered on relationships and credentials, not arcane addresses and keys. The user experience mirrors the experience people have of managing credentials in the physical world.
Support for ad hoc digital interactions—The real world is messy and unpredictable. SSI is flexible enough to support the various ad hoc scenarios that the world presents us and supports sharing multiple credentials from various authorities in the ways the scenario demands.

These benefits are delivered using an architecture that provides a common, interoperable layer, called the identity metasystem, upon which anyone can build the identity systems they need. A ubiquitous identity layer for the internet must be a metasystem that provides the building blocks and protocols necessary for others to build identity systems that meet the needs of any specific context or domain. An identity metasystem is a prerequisite for an online world where identity is as natural as it is in the physical world. An identity metasystem can remove friction, decrease cognitive overload, and make online interactions more private and secure.

Relationships, Risk, and Trust

Trust is a popular term in the SSI community. People like Steve Wilson and Kaliya Young rightly ask about risk whenever someone in the identity community talks about trust. Because of the proximity problem, digital relationships are potentially risky. One of the goals of an identity system is to provide evidence that can be used in the risk calculation.

In their excellent paper, Risk and Trust, Nickel and Vaesen define trust as the "disposition to willingly rely on another person or entity to perform actions that benefit or protect oneself or one’s interests in a given domain." From this definition, we see why crypto-proponents often say "To trust is good, but to not trust is better." The point being that not having to rely on some other human, or human-mediated process is more likely to result in a beneficial outcome because it reduces the risk of non-performance.

Relationships imply a shared domain, context, and set of activities. We often rely on third parties to tell us things relevant to the relationship. Our vulnerability, and therefore our risk, depends on the degree of reliance we have on another party's performance. Relationships can never be "no trust" because of the very reasons we create relationships. Bitcoin, and similar systems, can be low or no trust precisely because the point of the system is to reduce the reliance on any relationship at all. The good news is that the architecture of the SSI stack significantly limits the ways we must rely on external parties for the exchange of information via verifiable credentials and thus reduces the vulnerability of parties inside and outside of the relationship. The SSI identity metasystem clearly delineates the parts of the system that are low trust and those where human processes are still necessary.

The exchange of verifiable credentials can be split into two distinct parts as shown in the following diagram. SSI reduces risk in remote relationships using the properties of these two layers to combine cryptographic processes with human processes.

SSI Stack (click to enlarge)

The bottom layer, labeled Identity Metasystem, comprises two components: a set of verifiable data repositories for storing metadata about credentials and a processing layer supported by protocols and practices for the transport and validation of credentials. Timothy Ruff uses the analogy of shipping containers to describe the identity metasystem. Verifiable credentials are analogous to shipping containers and the metasystem is analogous to the shipping infrastructure that makes intermodal shipping so efficient and secure. The Identity Metasystem provides a standardized infrastructure that similarly increases the efficiency and security of data interchange via credentials.

The top layer, labeled Identity Systems, is where people and organizations determine what credentials to issue, determine what credentials to seek and hold, and determine which credentials to accept. This layer comprises the individual credential exchange ecosystems that spring up and the governance processes for managing those credential ecosystems. In Timothy's analogy to shipping containers, this layer is about the data—the cargo—that the credential is carrying.

The Identity Metasystem allows verifiable credentials to be cryptographically checked to ensure four key properties that relate to the risk profile:

Who issued the credential?
Was the credential issued to the party presenting it?
Has the credential been tampered with?
Has the credential been revoked?

These checks show the fidelity of the credential and are done without the verifier needing a relationship with the issuer. And because they're automatically performed by the Identity Metasystem, they significantly reduce the risk related to using data transferred using verifiable credentials. This is the portion of credential exchange that could be properly labeled "low or no trust" since the metasystem is built on standards that ensure the cryptographic verifiability of fidelity without reliance on humans and human-mediated processes.
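The following sketch shows what those four fidelity checks might look like from a verifier's point of view. The credential layout is a deliberately simplified stand-in, and an HMAC plays the role of the issuer's digital signature; none of this is the W3C Verifiable Credentials data model or AnonCreds, just an illustration of the checks themselves.

import hashlib, hmac

def check_fidelity(credential, issuer_keys, revoked, presenter):
    payload = repr(sorted(credential["claims"].items())).encode()
    key = issuer_keys.get(credential["issuer"])
    # 1. Who issued the credential? Check the proof against the issuer's key,
    #    discovered through the issuer's public identifier.
    # 3. Has the credential been tampered with? The same check covers the claims.
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest() if key else ""
    issued_and_untampered = bool(key) and hmac.compare_digest(expected, credential["proof"])
    # 2. Was the credential issued to the party presenting it?
    issued_to_presenter = credential["subject"] == presenter
    # 4. Has the credential been revoked?
    not_revoked = credential["id"] not in revoked
    return issued_and_untampered and issued_to_presenter and not_revoked

issuer_keys = {"did:example:attester-org": b"attester-signing-key"}
credential = {
    "id": "cred-42",
    "issuer": "did:example:attester-org",
    "subject": "did:peer:alice",
    "claims": {"employee": True},
}
credential["proof"] = hmac.new(
    issuer_keys[credential["issuer"]],
    repr(sorted(credential["claims"].items())).encode(),
    hashlib.sha256).hexdigest()

assert check_fidelity(credential, issuer_keys, set(), "did:peer:alice")
assert not check_fidelity(credential, issuer_keys, {"cred-42"}, "did:peer:alice")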

The upper, Identity Systems, layer is different. Here we are very much relying on the credential issuer. Some of the questions we might ask include:

Is the credential issuer, as shown by the identifier in the credential, the organization we think they are?
Is the organization properly accredited in whatever domain they're operating in?
What data did the issuer include in the credential?
What is the source of that data?
How has that data been processed and protected?

These questions are not automatically verifiable in the way we can verify the fidelity of a credential. They are different for each domain and perhaps different for each type of relationship based on the vulnerability of the parties to the data in the credential and their appetite for risk. Their answers depend on the provenance of the data in the credential. We would expect credential verifiers to perform provenance checks by answering these and other questions during the process they use to establish trust in the issuer. Once the verifier has established this trust, the effort needed to evaluate the provenance of the data in a credential should cease or be greatly reduced.

As parties in a relationship share data with each other, the credential verifier will spend effort evaluating the provenance of issuers of credentials they have not previously evaluated. Once that is done, the metasystem will provide fidelity checks on each transaction. For the most part, SSI does not impose new ways of making these risk evaluations. Rather, most domains already have processes for establishing the provenance of data. For example, we know how to determine if a bank is a bank, the accreditation status of a university, and the legitimacy of a business. Companies with heavy reliance on properties of their supply chain, like pharmaceuticals, already have processes for establishing the provenance of the supply chain. For the most part, verifiable credential exchange will faithfully present digital representations of the kinds of physical world processes, credentials, and data we already know how to evaluate. Consequently, SSI promises to significantly reduce the cost of reducing risk in remote relationships without requiring wholesale changes to existing business practices.

Conclusion

Relationships have always been the reason we created digital identity systems. Our historic focus on relationship integrity and IAM made modern Web 2.0 platforms and services possible, but has limited use cases, reduced interoperability, and left people open to privacy and security breaches. By focusing on peer relationships supported by an Identity Metasystem, SSI not only improves relationship integrity but also better supports flexible relationship lifecycles, provides more functional, trustworthy relationship utility, and gives participants tools to correctly gauge, respond to, and reduce the risks inherent in remote relationships.

Notes

1. For simplicity, I'm going to limit this discussion to two-party relationships, not groups.
2. I've described federation initialization very generally here and left out a number of details that distinguish various federation architectures.
3. Connect.me is just one of a handful of digital wallets that support the same SSI user experience. Other vendors include Trinsic, ID Ramp, and eSatus AG.
4. Actually, imagine a day before Covid-19 lockdown.

Photo Credit: Two People Holding Hands from Albert Rafael (Free to use)

Tags: sovrin ssi decentralized+identifiers identity relationships


The Architecture of Identity Systems


Summary: The architecture of an identity system has a profound impact on the nature of the relationships it supports. This post categorizes the high-level architecture of identity systems, discusses the properties of each category to understand architectural influences, and explores what their respective architectures mean to their legitimacy as a basis for online life.

Introductory note: I recently read a paper from Sam Smith, Key Event Receipt Infrastructure, that provided inspiration for a way to think about and classify identity systems. In particular his terminology was helpful to me. This blog post uses terminology and ideas from Sam's paper to classify and analyze three different identity system architectures. I hope it provides a useful model for thinking about identity online.

John Locke was an English philosopher who thought a lot about power: who had it, how it was used, and how it impacted the structure of society. Locke’s theory of mind forms the foundation for our modern ideas about identity and independence. Locke argued that "sovereign and independent" was man’s natural state and that we gave up freedom, our sovereignty, in exchange for something else, protection, sociality, and commerce, among others. This grand bargain forms the basis for any society.

This question of power and authority is vital in identity systems. We can ask "what do we give up and to whom in a given identity system?" More succinctly we ask: who controls what? In Authentic Digital Relationships I made the argument that self-sovereign identity, supporting heterarchical (peer-to-peer) interaction, enables rich digital relationships that allow people to be digitally embodied so they can act online as autonomous agents. I argued that the architecture of SSI, its structure, made those relationships more authentic.

In this post, I intend to explore the details of that architecture so we can better understand the legitimacy of SSI as an identity system for online interaction. Wikipedia defines legitimacy as

the right and acceptance of an authority, usually a governing law or a regime.

While the idea of legitimacy is most often applied to governments, I think we can rightly pose legitimacy questions for technical systems, especially those that function in an authoritative manner and have large impacts on people and society. Without legitimacy, people and organizations using an identity system will be unable to act because anyone they interact with will not see that action as authorized. My thesis is that SSI provides a more legitimate basis for online identity than administrative identity systems of the past.

Terminology

While we properly talk of identity systems, identity systems do not manage identities, but rather relationships. Identity systems provide the means necessary for remembering, recognizing, and relying on the other parties to the relationship. To do so, they use identifiers, convenient handles that name the thing being remembered. Identifiers are unique within some namespace. The namespace gives context to the identifiers since the same string of characters might be a phone number in one system and a product ID in another.

Figure 1: Binding of controller, authentication factors, and identifiers in identity systems. (click to enlarge)

Identifiers are issued to or created by a controller who by virtue of knowing the authentication factors can make authoritative statements about the identifier (e.g. claiming it by logging in). The controller might be a person, organization, or software system. The controller might be the subject that the identifier refers to, but not necessarily. The authentication factors might be a password, key fob, cryptographic keys, or something else. The strength and nature of the bindings between the controller, authentication factors, and identifier determine the strength and nature of the relationships built on top of them.

To understand why that's so, we introduce the concept of a root of trust1. A root of trust is a foundational component or process in the identity system that is relied on by other components of the system and whose failure would compromise the integrity of the bindings. A root of trust might be primary or secondary depending on whether or not it is replaceable. Primary roots of trust are irreplaceable. Together, the roots of trust form the trust basis for the system.

The trust basis enabled by the identity system underlies a particular trust domain. The trust domain is the set of digital activities that depend on the binding of the controller to the identifier. For example, binding a customer to an identifier allows Amazon to trust that the actions linked to the identifier are authorized by the controller. Another way to look at this is that the strength of the binding between the identifier and customer (controller) determines the risk that Amazon assumes in honoring those actions.

The strength of the controller-identifier binding depends on the strength of the binding between the controller and the authentication factors and between the authentication factors and the identifier. Attacking either of those bindings reduces the trust we have in the controller-identifier binding and increases the risk that actions taken through a particular identifier are unauthorized.

Identity Architectures

We can broadly classify identity systems into one of three types based on their architectures and primary root of trust:

Administrative
Algorithmic
Autonomic

Both algorithmic and autonomic are SSI systems. They are distinguished by their trust bases. Some SSI systems use one or the other and some (like Sovrin) are hybrid, employing each for different purposes. We'll discuss the properties of the trust basis for each of these in an effort to understand the comparative legitimacy of SSI solutions to traditional administrative ones.

These architectures differ in who controls what and that is the primary factor in determining the basis for trust in them. We call this control authority. The entity with control authority takes action through operations that affect the creation (inception), updating, rotation, revocation, deletion, and delegation of the authentication factors and their relation to the identifier. How these events are ordered and their dependence on previous operations is important. The record of these operations is the source of truth for the identity system.

Administrative Architecture

Identity systems with an administrative architecture rely on an administrator to bind the identifier to the authentication factors. The administrator is the primary root of trust for any domain with an administrative architecture. Almost every identity system in use today has an administrative architecture and their trust basis is founded on the administrator.

Figure 2: The trust basis in administrative identity systems. (click to enlarge)

Figure 2 shows the interactions between the controller, identifier and authentication factors in an administrative identity system, the role of the administrator, and the impact these have on the strength of the bindings. The controller usually generates the authentication factors by choosing a password, linking a two-factor authentication (2FA) mechanism, or generating keys.

Even though the identifier might be the controller's email address, phone number, public key, or other ID, the administrator "assigns" the identifier to the controller because it is their policy that determines which identifiers are allowed, whether they can be updated, and their legitimacy within the identity system's domain. The administrator "owns" the identifier within the domain. The administrator also asserts the binding between the identifier and the authentication factors. An employee's mistake, a policy change, or a hack could affect the binding between the identifier and authentication factors or the identifier and the controller. Consequently, these bindings are relatively weak. Only the binding between the controller and authentication factors is strong because the controller generates them.

The administrator's primary duty is to authoritatively assert the controller-identifier binding. Authoritative control statements about the identifier are recorded in the administrator's database, the source of truth in the system, subject to change by employees and hackers. The administrator might be an ecommerce site that maintains an identity system as the basis for its customer's account. In this case the binding is private, and its integrity is of interest only to the web site and the customer. Alternatively, the administrator might provide federated login services. In this case the administrator is asserting the controller-identifier binding in a semi-public manner to anyone who relies on the federated login. A certificate authority is an example of an administrator who publicly asserts the controller-identifier binding, signing a certificate to that effect.

Because the administrator is responsible for binding the identifier to both the authentication factors and the controller, the administrator is the primary root of trust and thus the basis for trust in the overall system. Regardless of whether the binding is private, semi-public, or public, the integrity of the binding is entirely dependent on the administrator and the strength of their infrastructure, policies, employees, and continued existence. The failure of any of those can jeopardize the binding, rendering the identity system unusable by those who rely on it.

Algorithmic Architecture

Identity systems that rely on a ledger have an algorithmic architecture. I'm using "ledger" as a generic term for any algorithmically controlled distributed consensus-based datastore including public blockchains, private blockchains, distributed file systems, and others. Of course, it's not just algorithms. Algorithms are embodied in code, written by people, running on servers. How the code is written, its availability to scrutiny, and the means by which it is executed all impact the trust basis for the system. "Algorithmic" is just shorthand for all of this.

Figure 3: The trust basis in algorithmic identity systems. (click to enlarge)

Figure 3 shows how the controller, authentication factors, identifier, and ledger are bound in an identity system with an algorithmic architecture. As in the administrative identity system, the controller generates the authentication factors, albeit in the form of a public-private key pair. The controller keeps and does not share the private key. The public key, on the other hand, is used to derive an identifier (at least in well-designed SSI systems) and both are registered on the ledger. This registration is the inception of the controller-identifier binding since the controller can use the private key to assert her control over the identifier as registered on the ledger. Anyone with access to the ledger can determine algorithmically that the controller-identifier binding is valid.

The controller makes authoritative control statements about the identifier. The events marking these operations are recorded on the ledger which becomes the source of truth for anyone interested in the binding between the identifier and authentication factors.
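A small sketch of that inception and lookup appears below, with a Python list standing in for the ledger and a hash of the public key standing in for the derived identifier. Both are illustrative assumptions, not a particular DID method or ledger implementation; the third-party cryptography package supplies the key pair.

import hashlib, json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ledger = []  # stand-in for an algorithmically controlled, consensus-based datastore

def incept():
    private_key = Ed25519PrivateKey.generate()  # kept, never shared, by the controller
    public_bytes = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    # The identifier is derived from the public key (format is illustrative).
    identifier = "did:example:" + hashlib.sha256(public_bytes).hexdigest()[:16]
    # Registering the inception event is what creates the controller-identifier binding.
    ledger.append({"op": "inception", "id": identifier, "key": public_bytes.hex()})
    return private_key, identifier

def resolve(identifier):
    # Anyone with access to the ledger can check the binding algorithmically.
    events = [e for e in ledger if e["id"] == identifier]
    return events[-1]["key"] if events else None

controller_key, did = incept()
print(json.dumps(ledger[-1], indent=2))
assert resolve(did) is not None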

In an identity system with an algorithmic trust basis, computer algorithms create a ledger that records the key events. The point of the ledger is that no party has the power to unilaterally decide whether these records are made, modified, or deleted and how they're ordered. Instead, the system relies on code executed in a decentralized manner to make these decisions. The nature of the algorithm, the manner in which the code is written, and the methods and rules for its execution all impact the integrity of the algorithmic identity system and consequently any bindings that it records.

Autonomic Architecture

Identity systems with an autonomic architecture function similarly to those with an algorithmic architecture. As shown in Figure 4, the controller generates a public-private key pair, derives a globally unique identifier, and shares the identifier and the currently associated public key with anyone.

Figure 4: Trust basis in autonomic identity systems. (click to enlarge)

The controller uses her private key to authoritatively and non-repudiably sign statements about the operations on the keys and their binding to the identifier, storing those in an ordered key event log2. One of the important realizations that make autonomic identity systems possible is that the key event log must only be ordered in the context of a single identifier, not globally. So, a ledger is not needed for recording operations on identifiers that are not public. The key event log can be shared with and verified by anyone who cares to see it.

The controller also uses the private key to sign statements that authenticate herself and authorize use of the identifier. A digital signature also provides the means of cryptographically responding to challenges to prove her control of the identifier. These self-authentication and self-authorization capabilities make the identifier self-certifying and self-managing, meaning that there is no external third party, not even a ledger, needed for the controller to manage and use the identifier and prove to others the integrity of the bindings between herself and the identifier. Thus anyone (any entity) can create and establish control over an identifier namespace in a manner that is independent, interoperable, and portable without recourse to any central authority. Autonomic identity systems rely solely on self-sovereign authority.
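A minimal sketch of such a key event log follows: an inception event establishes the first key, a rotation event signed by the prior key establishes the next one, and anyone holding the log can verify the chain without consulting a ledger. The event fields and chaining rule are illustrative assumptions, not the KERI specification; the third-party cryptography package supplies the signatures.

import hashlib, json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def pub_hex(private_key):
    return private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw).hex()

def sign_event(signing_key, event):
    body = json.dumps(event, sort_keys=True).encode()
    return {"event": event, "sig": signing_key.sign(body).hex()}

# Inception: the controller creates keys and derives the identifier from them.
k0 = Ed25519PrivateKey.generate()
identifier = "did:peer:" + hashlib.sha256(bytes.fromhex(pub_hex(k0))).hexdigest()[:16]
kel = [sign_event(k0, {"seq": 0, "op": "inception", "id": identifier, "key": pub_hex(k0)})]

# Rotation: the event establishing the new key is signed with the old key.
k1 = Ed25519PrivateKey.generate()
kel.append(sign_event(k0, {"seq": 1, "op": "rotation", "id": identifier, "key": pub_hex(k1)}))

def verify_log(kel):
    # Anyone handed the log can verify it with no ledger and no third party:
    # each event must be signed by the key established by the event before it
    # (the inception event is checked against its own, self-certifying key).
    current = bytes.fromhex(kel[0]["event"]["key"])
    for entry in kel:
        body = json.dumps(entry["event"], sort_keys=True).encode()
        try:
            Ed25519PublicKey.from_public_bytes(current).verify(
                bytes.fromhex(entry["sig"]), body)
        except InvalidSignature:
            return False
        current = bytes.fromhex(entry["event"]["key"])
    return True

assert verify_log(kel)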

Autonomic identifiers have a number of advantages:

Self-Certification—autonomic identifiers are self-certifying so there is no reliance on a third party.
Self-Administration—autonomic identifiers can be independently administered by the controller.
Cost—autonomic identifiers are virtually free to create and manage.
Security—because the keys are decentralized, there is no trove of secrets that can be stolen.
Regulatory—since autonomic identifiers need not be publicly shared or stored in an organization’s database, regulatory concern over personal data can be reduced.
Scale—autonomic identifiers scale with the combined computing capacity of all participants, not some central system.
Independent—autonomic identifiers are not dependent on any specific technology or even being online.

Algorithmic and Autonomic Identity In Practice

We are all familiar with administrative identity systems. We use them all the time. Less familiar are algorithmic and autonomic identity systems. Their use is emerging under the title of self-sovereign identity.

There are several parallel development efforts supporting algorithmic and autonomic identifiers. The Decentralized Identifier specification is the primary guide to algorithmic identifiers that live on a ledger. The DID specification provides for many DID methods that allow DIDs to live on a variety of data stores. There's nothing in the DID specification itself that requires that the data store be a blockchain or ledger, but that is the primary use case.

I've written about the details of decentralized identifiers before. DIDs have a number of important properties that make them ideal as algorithmic identifiers. Specifically, they are non-reassignable, resolvable, cryptographically verifiable, and decentralized.

As algorithmic identifiers, DIDs allow the controller to cryptographically make authoritative statements about the identifier and the keys it is bound to. Those statements are recorded on a ledger or blockchain to provide a record of the key events that anyone with access to the ledger can evaluate. The record is usually public since the purpose of putting them on a ledger is to allow parties who don't have an existing relationship to evaluate the identifier and its linkage to the controller and public keys.

There are two related efforts for autonomic identifiers. Key Event Receipt Infrastructure is a general-purpose self-certifying system for autonomic identifiers. KERI identifiers are strongly bound at inception to a public-private key pair. All operations on the identifier are recorded in a cryptographic key event log. KERI has strong security and accountability properties. Drummond Reed has made a proposal that would allow KERI autonomic identifiers to be used with any DID method.

The second option is Peer DIDs. The vast majority of relationships between people, organizations, and things need not be public and thus have no need for the ability to publicly resolve the DID. Peer DIDs fill this need with the benefits of autonomic identifiers listed above.

Like KERI, Peer DIDs maintain a key event log (called "deltas") that records the relevant operations on the keys in a cryptographic manner. The Peer DID key event log can be shared with other parties in the relationship over DIDComm, a protocol that allows parties to a relationship to securely and privately share authenticated messages. The security and authority of a DIDComm channel are rooted in DIDs and their associated authentication factors. DIDComm can be used over a wide variety of transports.

The vast majority of digital relationships are peer to peer and should use autonomic identifiers. Algorithmic identifiers allow for public discovery of identifier properties when relationships are not peer to peer. In the Sovrin Network3, the ledger records public DIDs for verifiable credential issuers. But people, organizations, and things form relationships using peer DIDs without need for the ledger. This hybrid use of both algorithmic and autonomic identity systems was designed so that credential exchange would be practical, secure, and private while reducing the correlation that might occur if individuals used a few DIDs on a ledger.

Comparing Identity Architectures

Table 1 summarizes the architectural properties of identity systems with administrative, algorithmic, and autonomic bases of trust.

Table 1: Architectural properties of administrative, algorithmic, and autonomic identity systems (click to enlarge)

The table shows how the locus of control, source of truth, root of trust, and trust basis differ for each of our three architectures. For Administrative systems, the administrator is directly in control of all four of these. In an algorithmic architecture, the controller is the locus of control because the ledger is set up to allow the controller to be in charge of all key operations. Sometimes this is done using special administrative keys instead of the keys associated with the identifier. The organizations or people operating nodes on the ledger never have access to the keys necessary to unilaterally change the record of operations. No third party is involved in autonomic identity systems.

Table 2 summarizes the trust bases of administrative, algorithmic, and autonomic identity systems4.

Table 2: Summarizing the trust bases of administrative, algorithmic, and autonomic identity systems (click to enlarge)

We can see from the evaluation that algorithmic and autonomic architectures are decentralized while the administrative system has a single point of failure—the third party administrator. As a result administrative systems are less secure since an attack on one party can yield a trove of valuable information. Administrative systems also rely on privacy by policy rather than having privacy preserving features built into the architecture. And, as we've seen, all too often privacy is in direct conflict with the administrator's profit motive leading to weak privacy policies.

Power and Legitimacy

I started this post by talking about power and legitimacy. From our discussion and the summary tables above, we know that power is held very differently in these three systems. In an administrative system, the administrator holds all the power. I argued in Authentic Digital Relationships that the architecture of our identity systems directly impacts the quality and utility of the digital relationships they support. Specifically, the power imbalance inherent in administrative identity systems yields anemic relationships. In contrast, the balance of power engendered by SSI systems (both algorithmic and autonomic) yields richer relationships since all parties can contribute to it.

Clearly, administrative identity systems have legitimacy—if they didn't, no one would use or trust them. As new architectures, algorithmic and autonomic systems have yet to prove themselves through usage. But we can evaluate each architecture in terms of the promises it makes and how well it does in the purposes of an identity system: recognizing, remembering, and relying on other parties in the relationship. These are largely a function of the trust basis for the system.

Administrative systems promise an account for taking actions that the administrator allows. They also promise these accounts will be secure and private. But people and organizations are increasingly concerned with privacy and seemingly non-stop security breaches are chipping away at that legitimacy. As noted above, the privacy promise is often quite limited. Since the administrator is the basis for trust, administrative systems allow the administrator to recognize, remember, and rely on the identifier depending on their security. But the account holder does not get any support from the administrative system in recognizing, remembering, or relying on the administrator. The relationship is strictly one-way and anemic.

SSI systems promise to give anyone the means to securely and privately create online relationships and trustworthily share self-asserted and third-party-attested attributes with whomever they choose. These promises are embodied in the property I call fidelity. To the extent that algorithmic and autonomic identity systems deliver on these promises, they will be seen as legitimate.

Both algorithmic and autonomic identity systems provide strong means for recognizing, remembering, and relying on the identifiers in the relationship. For an algorithmic system, we must trust the ledger as the primary root of trust and the trust basis. Clearly our trust in the ledger will depend on many factors, including the code and the governance that controls its operation.

The trust basis for an autonomic identity system is cryptography. This implies that digital key management will become an important factor in its legitimacy. If people and organizations cannot easily manage the keys in such a system, then it will not be trusted. There is hope that key management can be solved since the primary artifacts that people using an SSI system manipulate are relationships and credentials, not keys and secrets. By supporting a consistent user experience rooted in familiar artifacts from the physical world, SSI promises to make cryptography a usable technology by the majority of people on the Internet.

Conclusion

In this article, we've explored the high-level architectures for the identity systems in use today as well as new designs that promise richer, more authentic online relationships, better security, and more privacy. By exploring the trust bases that arise from these architectures we've been able to explore the legitimacy of these architectures as a basis for online identity. My belief is that a hybrid system that combines algorithmic public identifiers with autonomic private identifiers can provide a universal identity layer for the Internet, increasing security and privacy, reducing friction, and providing new and better online experiences.

Notes

1. Often the term root of trust is used to refer to a hardware device or some trusted hardware component in the computer. The usage here is broader and refers to anything that is relied on for trust in the identity system. Trust anchor is a term that has sometimes been used in the cryptography community to refer to this same concept.
2. A number of cryptographic systems are trivially self-certifying (e.g. PGP, Ethereum, Bitcoin, etc.). What sets the autonomic identity systems described here apart is the key event log. Sam Smith calls these identifiers “autonomic identifiers” to set them apart from their less capable counterparts and emphasize their ability to self-manage keys without recourse to a third party system.
3. The Sovrin Network is an operational instance of the code found in the Hyperledger Indy and Aries projects.
4. This table is borrowed, with permission, from Chapter 10 of the upcoming Manning publication Self-Sovereign Identity by Drummond Reed and Alex Preukschat.

Photo Credit: Skysong Center from Phil Windley (CC BY-SA 3.0 US)

Tags: identity identifiers relationships ssi root+of+trust namespaces architecture trust sovrin


Boris Mann's Blog

Went out for a bike ride along Arbutus Greenway. Beautiful sun and a stop at Beaucoup Bakery on the way back, plus all the cyclists waiting for the train at Union.


Sunday, 08. November 2020

Boris Mann's Blog

I got sent this Azerbaijani Country Life Vlog video blog by @florence_ann and ended up watching the entire 45 minute video. Added to @atbrecipes.



Doc Searls Weblog

We’re in the epilogue now



The show is over. Biden won. Trump lost.

Sure, there is more to be said, details to argue. But the main story—Biden vs. Trump, the 2020 Presidential Election, is over. So is the Trump presidency, now in the lame duck stage.

We’re in the epilogue now.

There are many stories within and behind the story, but this was the big one, and it had to end. Enough refs calling it made the ending official. President Trump will continue to fight, but the outcome won’t change. Biden will be the next president. The story of the Trump presidency will end with Biden’s inauguration.

The story of the Biden presidency began last night. Attempts by Trump to keep the story of his own presidency going will be written in the epilogue, heard in the coda, the outro, the postlude.

Fox News, which had been the Trump administration’s house organ, concluded the story when it declared Biden the winner and moved on to covering him as the next president.

This is how stories go.

This doesn’t mean that the story was right in every factual sense. Stories aren’t.

As a journalist who has covered much and has been covered as well, I can vouch for the inevitability of inaccuracy, of overlooked details, of patches, approximations, compressions, misquotes and summaries that are more true to story, arc, flow and narrative than to all the facts involved, or the truths that might be told.

Stories have loose ends, and big stories like this one have lots of them. But they are ends. And The End is here.

We are also at the beginning of something new that isn’t a story, and does not comport with the imperatives of journalism: of storytelling, of narrative, of characters with problems struggling toward resolutions.

What’s new is the ground on which all the figures in every story now stand. That ground is digital. Decades old at most, it will be with us for centuries or millennia. Arriving on digital ground is as profound a turn in the history of our species on Earth as the one our distant ancestors faced when they waddled out of the sea and grew lungs to replace their gills.

We live in the digital world now, in addition to the physical one where I am typing and you are reading, as embodied beings.

In this world we are not just bodies. We are something and somewhere else, in a place that isn't a place: one without distance or gravity, where the only preposition that applies without stretch or irony is with. (Because the others—over, under, beside, around, through, within, upon, etc.—pertain too fully to positions and relationships in the physical world.)

Because the digital world is ground and not figure (here’s the difference), it is as hard for us to make full sense of being there as it was for the first fish to do the same with ocean or for our amphibian grandparents to make sense of land. (For some help with this, dig David Foster Wallace’s This is water.)

The challenge of understanding digital life will likely not figure in the story of Joe Biden’s presidency. But nothing is more important than the ground under everything. And this ground is the same as the one without which we would not have had an Obama or a Trump presidency. It will at least help to think about that.

 

Saturday, 07. November 2020

Identity Woman

Interdisciplinary Expertise for Self-Sovereign Identity

Philip makes this point in his latest post. And on twitter asserts that I have narrow limited expertise in our twitter thread about it today. Below the quote from his piece is my response. I know from experience that this isn’t an easy subject to talk about — ‘digital identity’ is such a broad, deep […]

The post Interdisciplinary Expertise for Self-Sovereign Identity appeared first on Identity Woman.


Simon Willison

Weeknotes: sqlite-utils 3.0 alpha, Git scraping in the zeitgeist

Natalie and I decided to escape San Francisco for election week, and have been holed up in Fort Bragg on the Northern California coast. I've mostly been on vacation, but I did find time to make some significant changes to sqlite-utils. Plus notes on an exciting Git scraping project.

Better search in the sqlite-utils 3.0 alpha

I practice semantic versioning with sqlite-utils, which means it only gets a major version bump if I break backwards compatibility in some way.

My goal is to avoid breaking backwards compatibility as much as possible, and I was proud to have made it all the way to version 2.23 representing 23 new feature releases since the 2.0 release without breaking any documented features!

Sadly this run has come to an end: I realized that the table.search() method was poorly designed, and I also needed to grab back the -c command-line option (a shortcut for --csv output) to be used for another purpose.

The chances that either of these changes will break anyone are pretty small, but semantic versioning dictates a major version bump so here we are.

I shipped a 3.0 alpha today, which should hopefully become a stable release very shortly (milestone here).

The big new feature is sqlite-utils search - a command-line tool for executing searches against a full-text search enabled table:

$ sqlite-utils search 24ways-fts4.db articles maps -c title
[{"rowid": 163, "title": "Get To Grips with Slippy Maps", "rank": -10.028754920576421},
 {"rowid": 220, "title": "Finding Your Way with Static Maps", "rank": -9.952534352591737},
 {"rowid": 27, "title": "Putting Design on the Map", "rank": -5.667327088267961},
 {"rowid": 168, "title": "Unobtrusively Mapping Microformats with jQuery", "rank": -4.662224207228984},

Here's full documentation for the new command.

Notably, this command works against both FTS4 and FTS5 tables in SQLite - despite FTS4 not shipping with a built-in ranking function. I'm using my sqlite-fts4 package for this, which I described back in January 2019 in Exploring search relevance algorithms with SQLite.
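If you prefer the Python API to the command-line tool, a minimal sketch of the same workflow might look like this (assuming a hypothetical articles.db with title and body columns; in the 3.0 redesign, table.search() yields matching rows ordered by relevance):

import sqlite_utils

db = sqlite_utils.Database("articles.db")
# Enable full-text search on the columns we want to query
db["articles"].enable_fts(["title", "body"], fts_version="FTS4")
# search() returns matching rows, best match first
for row in db["articles"].search("maps"):
    print(row["title"])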

Git scraping to predict the election

It's not quite over yet but the end is in sight, and one of the best tools to track the late arriving vote counts is this Election 2020 results site built by Alex Gaynor and a growing cohort of contributors.

The site is a beautiful example of Git scraping in action, and I'm thrilled that it links to my article in the README!

Take a look at the repo to see how it works. Short version: this GitHub Action workflow grabs the latest snapshot of this undocumented New York Times JSON API once every five minutes and commits it to the repository. It then runs this Python script which iterates through the Git history and generates an HTML summary showing the different batches of new votes that were reported and their impact on the overall race.

The resulting report is published to GitHub pages - resulting in a site that can handle a great deal of traffic and is updated entirely by code running in scheduled actions.

This is a perfect use-case for Git scraping: it takes a JSON endpoint that represents the current state of the world and turns it into a sequence of historic snapshots, then uses those snapshots to build a unique and useful new source of information to help people understand what's going on.
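The core pattern is small enough to sketch in a few lines of Python. This isn't the code the repository actually uses - just an illustration of the idea, with a placeholder endpoint URL; a scheduled GitHub Action (or cron job) would run it every few minutes inside a checkout of the repository:

import subprocess
import urllib.request

URL = "https://example.com/election/results.json"  # placeholder endpoint

def snapshot():
    # Fetch the current state of the world and write it to a tracked file
    data = urllib.request.urlopen(URL).read()
    with open("results.json", "wb") as f:
        f.write(data)
    subprocess.run(["git", "add", "results.json"], check=True)
    # git commit exits non-zero if nothing changed, so don't treat that as an error
    subprocess.run(["git", "commit", "-m", "Latest snapshot"], check=False)

if __name__ == "__main__":
    snapshot()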

Releases this week

sqlite-utils 3.0a0 - 2020-11-07
sqlite-fts4 1.0.1 - 2020-11-06
sqlite-fts4 1.0 - 2020-11-06
csvs-to-sqlite 1.2 - 2020-11-03
datasette 0.51.1 - 2020-11-01

Friday, 06. November 2020

Ben Werdmüller

Winning

When she was twelve years old, my aunt escaped from the concentration camp where her mother and siblings were interned. She swam through the sewers, found food, and returned. My grandmother collected snails and cooked them out of sight of the Japanese guards. Around them, people were tortured and killed on a daily basis.

On paper, the Allies won the war. For my family, it continued to rage.

To this day, trauma has rippled from generation to generation. That simple act of ripping my aunts from their lives mid-education has led to cycles of poverty and misery that continue to this day. Some of my family, like my father, were able to break the cycle. Some were not. The recent history of my family runs the gamut from stability to crime and heroin addiction. At every end of the spectrum, a culture of anxiety - you've got to be safe; get an education because they can't take that away from you - underpinned every choice.

Joe Biden has won the Presidency. The cruelest policies of the Trump administration will likely come to an end; we will need to be vigilant and apply pressure so that the long tail of cruelty is replaced by a dogma of inclusion and care.

For some families, the effects of the Trump administration will be felt not just for years, but for generations. Family separation, willful mismanagement of the pandemic, and sanctioned police brutality have created centers of trauma that are difficult to escape from. These are Trump's victims. For them, regardless of the result of this election, Trump won.

They will need help to escape the cycle. We owe it to them because we did this to them. Regardless of their nationality or context, they are our responsibility.

The real work begins now. It starts by setting things right. And then we start to build the society we actually want.


Simon Willison

nyt-2020-election-scraper

nyt-2020-election-scraper

Brilliant application of git scraping by Alex Gaynor and a growing team of contributors. Takes a JSON snapshot of the NYT's latest election poll figures every five minutes, then runs a Python script to iterate through the history and build an HTML page showing the trends, including what percentage of the remaining votes each candidate needs to win each state. This is the perfect case study in why it can be useful to take a "snapshot of the world right now" data source and turn it into a git revision history over time.

Thursday, 05. November 2020

Simon Willison

Learning from Mini Apps

Learning from Mini Apps

WeChat, Baidu, Alipay and Douyin in China are all examples of "Super apps" that can host "Mini apps" written in HTML and JavaScript by other developers and installed via in-app search or through scanning a QR code. Mini apps are granted (permission-gated) access to further system APIs via a JavaScript bridge. It's a fascinating developer ecosystem, explored in detail here by Thomas Steiner.

Via @thomaswilburn


CSVs: The good, the bad, and the ugly

CSVs: The good, the bad, and the ugly

Useful, thoughtful summary of the pros and cons of the most common format for interchanging data.

Via @alex_gaynor


Doc Searls Weblog

How the once mighty fall

For many decades, one of the landmark radio stations in Washington, DC was WMAL-AM (now re-branded WSPN), at 630 on (what in pre-digital times we called) the dial. As AM listening faded, so did WMAL, which moved its talk format to 105.9 FM in Woodbridge and its signal to a less ideal location, far out to the northwest of town.

They made the latter move because the 75 acres of land under the station’s four towers in Bethesda had become far more valuable than the signal. So, like many other station owners with valuable real estate under legacy transmitter sites, Cumulus Media sold the old site for $74 million. Nice haul.

I’ve written at some length about this here and here in 2015, and here in 2016. I’ve also covered the whole topic of radio and its decline here and elsewhere.

I only bring the whole mess up today because it’s a five-year story that ended this morning, when WMAL’s towers were demolished. The Washington Post wrote about it here, and provided the video from which I pulled the screen-grab above. Pedestrians.org also has a much more complete video on YouTube, here. WRC-TV, channel 4, has a chopper view (best I’ve seen yet) here. Spake the Post,

When the four orange and white steel towers first soared over Bethesda in 1941, they stood in a field surrounded by sparse suburbs emerging just north of where the Capital Beltway didn’t yet exist. Reaching 400 feet, they beamed the voices of WMAL 630 AM talk radio across the nation’s capital for 77 years.

As the area grew, the 75 acres of open land surrounding the towers became a de facto park for runners, dog owners and generations of teenagers who recall sneaking smokes and beer at “field parties.”

Shortly after 9 a.m. Wednesday, the towers came down in four quick controlled explosions to make way for a new subdivision of 309 homes, taking with them a remarkably large piece of privately owned — but publicly accessible — green space. The developer, Toll Brothers, said construction is scheduled to begin in 2021.

Local radio buffs say the Washington region will lose a piece of history. Residents say they’ll lose a public play space that close-in suburbs have too little of.

After seeing those towers fall, I posted this to a private discussion among broadcast engineers (a role I once played, briefly and inexpertly, many years ago):

It’s like watching a public execution.

I’m sure that’s how many of us who have spent our lives looking at and maintaining these things feel at a sight like this.

It doesn’t matter that the AM band is a century old, and that nearly all listening today is to other media. We know how these towers make waves that spread like ripples across the land and echo off invisible mirrors in the night sky. We know from experience how the inverse square law works, how nulls and lobes are formed, how oceans and prairie soils make small signals large and how rocky mountains and crappy soils are like mud to a strong signal’s wheels. We know how and why it is good to know these things, because we can see an invisible world where other people only hear songs, talk and noise.

We also know that, in time, all these towers are going away, or repurposed to hold up antennas sending and receiving radio frequencies better suited for carrying data.

We know that everything ends, and in that respect AM radio is no different than any other medium.

What matters isn’t whether it ends with a bang (such as here with WMAL’s classic towers) or with a whimper (as with so many other stations going dark or shrinking away in lesser facilities). It’s that there’s still some good work and fun in the time this old friend still has left.

Wednesday, 04. November 2020

FACILELOGIN

What’s new in OAuth 2.1?

The OAuth 2.1 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a…

Continue reading on FACILELOGIN »
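One of the headline changes in OAuth 2.1 is that PKCE is required for every client using the authorization code flow, not just native apps. A minimal sketch of building such an authorization request in Python - the endpoint, client_id and redirect_uri below are placeholders, not taken from the article:

import base64
import hashlib
import secrets
from urllib.parse import urlencode

# PKCE (RFC 7636): a random verifier plus its SHA-256 challenge
code_verifier = secrets.token_urlsafe(64)
code_challenge = base64.urlsafe_b64encode(
    hashlib.sha256(code_verifier.encode()).digest()
).rstrip(b"=").decode()

params = {
    "response_type": "code",
    "client_id": "my-client-id",                     # placeholder
    "redirect_uri": "https://app.example/callback",  # placeholder
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
    "scope": "read",
    "state": secrets.token_urlsafe(16),
}
print("https://auth.example/authorize?" + urlencode(params))  # placeholder endpoint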


Boris Mann's Blog

Yesterday’s lunch at Laksa King was a bowl of laksa, a bowl of mohinga (Fernando got a picture of it), and an order of roti canai

Tuesday, 03. November 2020

reb00ted

IndieWebCamp East is in cyberspace - join me?

I’m registered. It’s free, friendly, #decentralized, non-#SurveillanceCapitalism, slightly #geeky and fun.


Jon Udell

Moonstone Beach Breakdown

Travel always has its ups and downs but I don’t think I’ve ever experienced both at the same time as intensely as right now.

I’m at Moonstone Beach in Cambria, just south of San Simeon, in a rented camper van. After a walk on the beach I hop in, reverse, clip a rock, blow a tire, and come to rest alongside the guard rail facing the ocean.

I call roadside assistance; they can deliver a tire but not until tomorrow morning.

I may be about to win the road trip breakdown lottery. I’m snuggled in my two-sleeping-bag nest on the air mattress in the back of the van, on a bluff about 25 feet above the beach, with the van’s big side door open, watching and hearing the tide roll in.

The worst and best parts of my trip are happening at the same time. I screwed up, am stuck, cannot go anywhere. But of all the places I could have been stuck on this trip, I’m stuck in the place I most want to be.

The sign says the gate closes at 6, but nobody has shown up by 7 when everyone else is gone. I can’t reach the authorities. This would be the campsite of my dreams if I’m allowed to stay.

The suspense is killing me.

Eventually a cop shows up, agrees that I can’t go anywhere, and gives me permission to stay for the night. I win the lottery! Nobody ever gets to stay here overnight. But here I am.

We’re all stuck in many ways for many reasons. A road trip during the final week before the election seemed like a way to silence the demons. Roaming around the state didn’t really help. But this night on the bluff over Moonstone Beach most certainly will.

In the light of the full moon, the crests of the waves are sometimes curls of silver, sometimes wraiths of foam that drift slowly south, continually morphing.

I don’t know how we’re all going to get through this winter. I don’t know what comes next. I don’t even have a plan for tomorrow. But I am so grateful to be here now.


Ben Werdmüller

And here we are

We all know this, but today is life or death for a great many people. The stakes are sky high. I don’t know what will happen afterwards, but I do know I’ve been waiting for today for four very long years.

And I have hope.

If you're an American, you've been bombarded by messages urging you to vote. Over a hundred million people have already taken advantage of early and distance voting. If you're reading this and you have the right to vote in this election, you probably have. I have too.

So I'm not going to tell you to vote. It's too late for that.

My father is a concentration camp survivor. I remember my grandmother's nightmares. I see the generational ripples of trauma in my family that continue to this day. Trump's rhetoric of nationalist division has already created those ripples for an untold number of families. The caravans of armed supporters, the racist police brutality, the Blue Lives Matter flag as a fascist symbol - all of these things will grow into something worse if left unchecked. There is no way to support Trump in 2020 and not be a fascist. There is no way for Trump to win and not further transform America into a fascist country.

This is what's at stake. All we can do now is cross our fingers and see what happens.

Monday, 02. November 2020

Simon Willison

selenium-wire

selenium-wire

Really useful scraping tool: enhances the Python Selenium bindings to run against a proxy which then allows Python scraping code to look at captured requests - great for if a site you are working with triggers Ajax requests and you want to extract data from the raw JSON that came back.
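A minimal usage sketch, assuming Chrome and a placeholder URL (selenium-wire exposes the captured traffic on driver.requests):

from seleniumwire import webdriver  # pip install selenium-wire

driver = webdriver.Chrome()
driver.get("https://example.com/page-with-ajax")  # placeholder URL

# Every request the page made passes through selenium-wire's proxy
for request in driver.requests:
    if request.response and "/api/" in request.url:
        print(request.url, request.response.status_code)
        raw_body = request.response.body  # raw bytes of the Ajax response, e.g. JSON

driver.quit()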


Boris Mann's Blog

I made a chocolate cake to use up sourdough starter discard.

It’s not a pretty cake, and making an entire sheet cake to use up a little sourdough is perhaps overkill, but I’m happy with how it turned out.

Sunday, 01. November 2020

Just a Theory

Central Park Autumn

A couple photos of the gorgeous fall colors over The Pool in Central Park.

Autumn colors over The Pool © 2020 David E. Wheeler

It’s that most magical time of year in Central Park: Autumn. I spent a lot of time wandering around The Pool yesterday. Lots of folks were about, taking in the views, shooting photos. The spectacular foliage photographed best backlit by the sun. Here’s another one.

Hard to go wrong with these colors. © 2020 David E. Wheeler

Both shot with an iPhone 12 Pro.


Ben Werdmüller

Reading, watching, playing, using: October 2020

This is my monthly roundup of the tech and media I consumed and found interesting. Here's my list for October.

This month I've changed my process a little: I save my links to a Notion database, export them at the end of the month, and convert them into a blog post using a small script. Instead of taking a couple of hours at the end of the month to put the post together, I save my thoughts on each link as I read it, and collation at the end (in iA Writer) takes much less time.
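The post doesn't include the script itself, but the shape of it is straightforward. A rough sketch, assuming the Notion export is a CSV with hypothetical Name, URL and Notes columns:

import csv

# Turn a Notion CSV export of saved links into a Markdown-ready list
with open("links.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

lines = [f"[{row['Name']}]({row['URL']}). {row['Notes']}" for row in rows]
print("\n\n".join(lines))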

Streaming

The Queen's Gambit. A beautifully written, impeccably acted drama with gorgeous cinematography and superb attention to detail. I'm still working through it, but I can't recommend it enough.

Borat Subsequent Moviefilm. It's been pretty controversial, and it's often in hugely bad taste, but I found the broad humor at the expense of American bigotry to be cathartic. And yes, "that" Giuliani scene is everything it's been reported to be.

Seven Seconds. This new-to-me police drama based on a Russian movie is really well done: a story about police corruption and how our criminal justice system fails the people who need it the most.

Ted Lasso. Sure, it's ostensibly about sports, which isn't usually my thing. But it's also a really optimistic, funny comedy that mines a lot of humor from the cultural differences between the US and UK, which is really my thing. Very occasionally it goes broader than it needs to, particularly in its first few minutes, but there's something here for everyone.

Notable Articles

Business

How we built a $1m ARR SaaS startup. I’m always interested to read peoples’ journeys. This one is very clearly written, with lots to think about.

The end of the American internet. “80-90% of internet users are now outside the USA, there are more smartphone users in China than in the USA and western Europe combined, and the creation of venture-based startups has gone global.” This is a broadly good thing: the internet was always American-led, for better or for worse. As the platforms that dominate it become more internationally-based, it becomes less of a monoculture.

Why the Survival of the Airlines Depends on Frequent Flyer Programs. “The Financial Times pegs the value of Delta’s loyalty program at a whopping $26 billion, American Airlines at $24 billion, and United at $20 billion. All of these valuations are comfortably above the market capitalization of the airlines themselves — Delta is worth $19 billion, American $6 billion, and United $10 billion. In other words, if you take away the loyalty program, Delta’s real-world airline operation — with hundreds of planes, a world-beating maintenance operation, landing rights, brand recognition, and experienced executives — is worth roughly negative $7 billion.”

Facebook Just Forced Its Most Powerful Critics Offline. “The Real Facebook Oversight Board, a group established last month in response to the tech giant’s failure to get its actual Oversight Board up and running before the presidential election, was forced offline on Wednesday night after Facebook wrote to the internet service provider demanding the group’s website — realfacebookoversight.org — be taken offline.” Ridiculously petty.

How Clubhouse brought the culture war to Silicon Valley’s venture capital community. "I am convinced that most people in the tech world do not understand the role of a free media in a liberal society."

San Francisco Apartment Rents Crater Up to 31%, Most in U.S. During Covid. “One-bedroom rents in San Francisco fell 24% and two-bedrooms were down 21%, to $2,873 and $3,931 a month, respectively.” Still way too high.

How to Stay Sane While Working at Home. “Staying happy, healthy and productive requires effort when you’re working at home. This essay provides five suggestions for keeping things on an even keel.”

The warmth/competence matrix for women, from the West Wing to the workplace. "The warmth / competence matrix is a useful tool to optimize a leader’s influence in the workplace, especially during a crisis." I found this to be a fascinating insight into how women are stereotyped and held back at work.

U.S. Accuses Google of Illegally Protecting Monopoly. “The Justice Department accused Google of maintaining an illegal monopoly over search and search advertising in a lawsuit filed on Tuesday, the government’s most significant legal challenge to a tech company’s market power in a generation.” The most significant lawsuit in the tech industry since Microsoft’s own antitrust suit. Whatever happens here, it will remake the internet industry forever.

Surveillance Startup Used Own Cameras to Harass Coworkers. "The big picture for me having worked at the company is that it has opened my eyes to how surveillance can be abused by the people in power."

Reviewing Ben Thompson’s Stratechery. “Competition driven by quality reflects what antitrust and net neutrality advocates want competition to look like — i.e. the better product wins, instead of whomever owns the pipes (or the channels). But that doesn’t mean it is what competition actually does look like, even on the internet.”

Culture

Correction by Hal Maclean. We regret the error.

Work, Float, Eat, Dream: Life on the International Space Station. “You need to become an extraterrestrial person.” First-hand descriptions of what it’s like to live on the ISS. What an adventure.

A Book Of Beasts – an accumulation of things. A very sweet modern bestiary.

Praise Song for the Kitchen Ghosts. “Remembering her grandmother’s jam cake, biscuits, and sweet black tea, Crystal Wilkinson evokes a legacy of joy, love, and plenty in the culinary traditions of Black Appalachia.”

Inside Creative Growth, the Always Inspiring Oakland-Based Incubator For Artists With Disabilities. My friend Madelyn works at Creative Growth. As well as the insightful New Yorker profile, it’s fun to see examples of the art made there. This organization is a gem, and we need more like it.

How a Revered Studio for Artists with Disabilities Is Surviving at a Distance. “As their artists endure month after month of quarantine, Creative Growth faces an extreme version of the dilemmas that other arts organizations and educational institutions have struggled with during the pandemic: if your purpose is to foster the ideal conditions for learning and making things together, how do you proceed when those conditions are suddenly impossible?”

The return of Spitting Image shows how toothless British satire has become. When I was growing up, Spitting Image was an important part of the social landscape. The satire was biting. This modern reboot sounds rubbish. In life and comedy, the rule is: always punch up.

His Writing Radicalized Young Hackers. Now He Wants to Redeem Them. "Doctorow says that the intention of Attack Surface wasn’t to swing in the other direction on the spectrum between “nerd triumphalism” and “nerd despair,” as he puts it. Instead, it’s to find a more nuanced middle ground, one that acknowledges that technology can win some battles, but that others must be won with human willpower and political struggle, sometimes with the aim of controlling technology’s most dangerous applications."

Noelle Stevenson Shares Her Coming Out Story in an Original Comic. This is completely lovely on every level.

Easily Diminished at the Edges by Amanda Hollander. “Fay had expected many different emotions in the wake of the aliens arriving, but she had not anticipated the ennui.”

Shonda Rhimes Is Ready to "Own Her S***": The Game-Changing Showrunner on Leaving ABC, "Culture Shock" at Netflix and Overcoming Her Fears. “Shonda Rhimes was tired of the battles. She was producing some 70 hours of annual television in 256 territories; she was making tens of millions of dollars for herself and more than $2 billion for Disney, and still there were battles with ABC. They'd push, she'd push back. Over budget. Over content. Over an ad she and the stars of her series — Grey's Anatomy, Scandal and How to Get Away With Murder — made for then-presidential nominee Hillary Clinton.” A fascinating portrait of an inspiring creator.

About Face. A remarkable graphic essay about authoritarian cultural signifiers, conformity, and an alarming breakdown in American society.

WarGames: A Look Back at the Film That Turned Geeks and Phreaks Into Stars. The film that got me - and so many other people - into computing. It’s a fundamentally ethical, anti nuclear war film.

The NYT Best-Seller List Has an Awful Lot of Right-Wing Trump-Loving Conservative Authors. ... and they’re buying their way there. This is a bigger problem than political books, but it’s clear that conservative authors in particular are purchasing legitimacy.

Media

UK gov report links local newspaper circulation and voter turnout: Absence of journalism in some areas potentially 'catastrophic'. "Government -backed research has found that for every percentage point growth in a local daily newspaper’s circulation, electoral turnout on its patch goes up by 0.37 percentage points."

James Murdoch: Rebellious Scion. “A contest of ideas shouldn’t be used to legitimize disinformation. And I think it’s often taken advantage of. And I think at great news organizations, the mission really should be to introduce fact to disperse doubt — not to sow doubt, to obscure fact.”

Kat Downs Mulder named managing editor/digital of The Washington Post. It’s exciting to see a product leader take on this kind of role in media.

The Problem of Free Speech in an Age of Disinformation. “Other democracies, in Europe and elsewhere, have taken a different approach. Despite more regulations on speech, these countries remain democratic; in fact, they have created better conditions for their citizenry to sort what’s true from what’s not and to make informed decisions about what they want their societies to be. Here in the United States, meanwhile, we’re drowning in lies.”

Facebook Stymied Traffic to Left-Leaning News Outlets. “Mother Jones CEO Monika Bauerlein expressed frustration with Facebook in a Twitter thread Friday, explaining that the loss of traffic had “real effects” on the organization. Mother Jones saw a roughly $400,000 drop in the site’s annual revenue, and couldn’t fill positions or pursue certain projects as a result, she said.” No single company should ever have this kind of power.

Climate news Trump can use. “The vast majority of news stories published about Biden’s climate plan since Thursday’s presidential debate have adopted the Trump campaign’s framing of the conflict. They focus solely on Trump’s attacks on Biden’s climate plan, and ignore the fact that Trump doesn’t have a climate plan at all.” Infuriating when so much is at stake.

Politics

‘Where are all of the arrests?’: Trump demands Barr lock up his foes. “Donald Trump mounted an overnight Twitter blitz demanding to jail his political enemies and call out allies he says are failing to arrest his rivals swiftly enough.” Seems like a normal thing that definitely happens in a democratic society.

Is America in Decline?. A fascinating discussion between J. Bradford DeLong and Om Malik on Pairagraph, which seems like an interesting platform for intellectual debates.

The Swamp That Trump Built. “An investigation by The Times found over 200 companies, special-interest groups and foreign governments that patronized Mr. Trump’s properties while reaping benefits from him and his administration. Nearly a quarter of those patrons have not been previously reported.”

Don’t know any COVID-19 patients who’ve died or been in the hospital? That may explain a lot. “Other research suggests that a failure to embrace COVID-19 restrictions may be fueled by a lack of empathy, in the same way that someone in rural Pennsylvania may not view urban gun violence as an urgent problem, or that those without military family members may give less thought to the ongoing toll of combat.”

As Trump Flouts Safety Protocols, News Outlets Balk at Close Coverage. “Among the concerns raised by reporters: Many flight attendants and Secret Service agents on Air Force One have not worn masks; White House aides who tested positive for the coronavirus, or were potentially exposed, are returning to work before the end of a two-week quarantine; and the campaign has instituted few restrictions at the raucous rallies that Mr. Trump is now pledging to hold on a regular basis until Election Day.”

Inside the Fall of the CDC. “How the world’s greatest public health organization was brought to its knees by a virus, the president and the capitulation of its own leaders, causing damage that could last much longer than the coronavirus.”

HHS halts a taxpayer-funded advertising effort that aimed to ‘defeat despair, inspire hope’ on the pandemic by using Santa and celebrities like Dennis Quaid. The single most insane scandal of the Trump administration. I can’t stop laughing about it. Don’t miss the audio.

Judge cites Trump tweets in restricting feds at protests. “A federal judge found Friday that tweets by President Donald Trump helped incite improper conduct by federal officers responding to racial justice demonstrations in Portland, Oregon.” Finally.

Biden Camp Cancels Austin, Texas Event After Pro-Trump ‘Ambush’ on Campaign Bus. ““We’ve got you now,” the man shouted. “You’re going to vote for Trump whether you like it or not, you’ve got no choice.”” This whole account is genuinely frightening, not just in itself, but for the implications.

Society

Stay-at-home orders cut noise exposure nearly in half. “People’s exposure to environmental noise dropped nearly in half during the early months of the coronavirus pandemic, according to University of Michigan researchers who analyzed data from the Apple Hearing Study.”

How Teens Handled Quarantine. “The percentage of teens who were depressed or lonely was actually lower in 2020 than in 2018, and the percentage who were unhappy or dissatisfied with life was only slightly higher.” It turns out that making kids go to school at ungodly hours has a negative effect. Who knew?

8 Million Have Slipped Into Poverty Since May as Federal Aid Has Dried Up. “The number of poor people has grown by eight million since May, according to researchers at Columbia University, after falling by four million at the pandemic’s start as a result of a $2 trillion emergency package known as the Cares Act.”

Megan Thee Stallion: Why I Speak Up for Black Women. “Wouldn’t it be nice if Black girls weren’t inundated with negative, sexist comments about Black women? If they were told instead of the many important things that we’ve achieved?”

Exam Surveillance Tools Monitor, Record Students During Tests. “On one occasion, I was ‘flagged’ for movement and obscuring my eyes. I have trichotillomania triggered by my anxiety, which is why my hand was near my face. Explaining this to my professor was nightmarish.” It’s absurd that students are so afraid that they’re not using their real names. Abolish surveillance - at school and everywhere.

The House on Blue Lick Road. I should probably start a “weird” category. Don’t miss this link.

'We are broken': Montana health care workers battle growing Covid outbreak. “If I have to stay late after working, if it means doing it on my day off. They're not going to pass alone on my unit. Again. None of them.” Healthcare workers are superheroes and I’m grateful for all of them.

Technology

What Working At Stripe Has Been Like. A pretty great summary of working for Stripe during a period of hypergrowth from Patrick McKenzie, who famously was a successful sole operator beforehand.

SpaceX Is Building a Military Rocket to Ship Weapons Anywhere in the World. "SpaceX and the Pentagon just signed a contract to jointly develop a new rocket that can launch into space and deliver up to 80 tons of cargo and weaponry anywhere in the world — in just one hour." We should not be uncritically cheerleading for this company.

Cory Doctorow: ‘Technologists have failed to listen to non-technologists’. "Technologists have failed to listen to non-technologists. In technological circles, there’s a quantitative fallacy that if you can’t do maths on it, you can just ignore it. And so you just incinerate the qualitative elements and do maths on the dubious quantitative residue that remains. This is how you get physicists designing models for reopening American schools – because they completely fail to take on board the possibility that students might engage in, say, drunken eyeball-licking parties, which completely trips up the models."

Git scraping: track changes over time by scraping to a Git repository. A really smart way to track changes to a website or dataset over time and commit it to git. The example, using fire data, is brilliant.

Data & Society — Good Intentions, Bad Inventions. “Lenhart and Owens break down 4 common “healthy tech” myths by explaining where they come from, what they obscure, and how we can move beyond them. Intended for those designing, developing, and regulating emerging technologies, the primer provides teams with fresh ideas for how to analyze and improve user well-being.”

When It Rains, Rotterdam’s Bikers Get To Go Through Lights Faster. “Now, when it starts to shower, the traffic lights prioritize cyclists so they don’t wait so long to cross. At the same time, car drivers need to wait a little longer, because they are inside and can stay dry.” I think this is the coolest thing.

Various first words. “The first characters sent on ARPANET, the predecessor to the internet, by Charley Kline, 1969: lo – for “login,” but it crashed.”

How Google Drive Can Make Every Corner of Your Life Easier. An absolutely epic guide to the platform, with full instructions for every tip.

Something Awful, a Cornerstone of Internet Culture, Is Under New Ownership. It was (1) a hugely important source of early internet culture, (2) a cesspool.

50 years ago, I helped invent the internet. How did it go so wrong? “When I was a young scientist working on the fledgling creation that came to be known as the internet, the ethos that defined the culture we were building was characterized by words such as ethical, open, trusted, free, shared. None of us knew where our research would lead, but these words and principles were our beacon.”

Flamethrowers and Fire Extinguishers – a review of “The Social Dilemma”. This is how I felt about The Social Dilemma, too. It’s an important problem that needs to be discussed. But I wouldn’t trust the people who claim to have the solutions here. Not at all.

Moxie Marlinspike Has a Plan to Reclaim Our Privacy. Moxie is a hero of mine, and Signal is one of the most important apps and projects on the internet. This portrait only increased my respect for him.

Animals Keep Evolving Into Crabs, Which Is Somewhat Disturbing. I’m ready.

Dutch Ethical Hacker Logs into Trump’s Twitter Account. The President of the United States had set his password to “maga2020!”

Apple, Google and a Deal That Controls the Internet. “Apple now receives an estimated $8 billion to $12 billion in annual payments — up from $1 billion a year in 2014 — in exchange for building Google’s search engine into its products. It is probably the single biggest payment that Google makes to anyone and accounts for 14 to 21 percent of Apple’s annual profits.”

Police are using facial recognition for minor crimes because they can. “Law enforcement is tapping the tech for low-level crimes like shoplifting, because there are no limits. But the tool often makes errors.”

I became an unwanted woman in tech. “There is something innately different now about my words. They’ve not changed, but their context has entirely shifted. It’s as though I walk around now with a badge that invites dismissal and disrespect. That badge is called womanhood.”


Simon Willison

Datasette 0.51 (plus weeknotes)

I shipped Datasette 0.51 today, with a new visual design, plugin hooks for adding navigation options, better handling of binary data, URL building utility methods and better support for running Datasette behind a proxy. It's a lot of stuff! Here are the annotated release notes.

New visual design

Datasette is no longer white and grey with blue and purple links! Natalie Downe has been working on a visual refresh, the first iteration of which is included in this release. (#1056)

It's about time Datasette grew beyond its clearly-designed-by-a-mostly-backend-engineer roots. Natalie has been helping me start adding some visual polish: we've started with an update to the colour scheme and will be continuing to iterate on the visual design as the project evolves towards the 1.0 release.

The new design makes the navigation bar much more obvious, which is important for this release since the new navigation menu (tucked away behind a three-bar icon) is a key new feature.

Plugins can now add links within Datasette

A number of existing Datasette plugins add new pages to the Datasette interface, providing tools for things like uploading CSVs, editing table schemas or configuring full-text search.

Plugins like this can now link to themselves from other parts of the Datasette interface. The menu_links(datasette, actor) hook (#1064) lets plugins add links to Datasette's new top-right application menu, and the table_actions(datasette, actor, database, table) hook (#1066) adds links to a new "table actions" menu on the table page.

This feature has been a long time coming. I've been writing an increasing number of plugins that add new pages to Datasette, and so far the main way of using them has been to memorise and type in their URLs!

The new navigation menu (which only displays if it has something in it) provides a global location to add new links. I've already released several plugin updates that take advantage of this.

The new "table actions" menu imitates Datasette's existing column header menu icon - it's a cog. Clicking it opens a menu of actions relating to the current table.

Want to see a demo?

The demo at latest.datasette.io now includes some example plugins. To see the new table actions menu first sign into that demo as root and then visit the facetable table to see the new cog icon menu at the top of the page.

Here's an animated GIF demo showing the new menus in action.

Binary data

SQLite tables can contain binary data in BLOB columns. Datasette now provides links for users to download this data directly from Datasette, and uses those links to make binary data available from CSV exports. See Binary data for more details. (#1036 and #1034).

I spent a ton of time on this over the past few weeks. The initial impetus was a realization that Datasette CSV exports included ugly Python b'\x15\x1c\x02\xc7\xad\x05\xfe' strings, which felt like the worst possible way to display binary in a CSV file, out of universally bad options.

Datasette's main interface punted on binary entirely - it would show a <Binary data: 7 bytes> label which didn't help much either.

The only way to get at binary data stored in a Datasette instance was to request the JSON version and then manually decode the Base-64 value within it!

This is now fixed: binary columns can be downloaded directly to your computer, using a new .blob output renderer. The approach is described on this new page in the documentation.

Security was a major consideration when building this feature. Allowing the download of arbitrary byte payloads from a web server is dangerous business: it can easily result in XSS holes where HTML with dangerous <script> content can end up hosted on the primary domain.

After some research, I decided to serve up binary content for download using the following headers:

content-type: application/binary
x-content-type-options: nosniff
content-disposition: attachment; filename="data-f30889.blob"

application/binary is a safer Content-Type option than the more common application/octet-stream, according to Michal Zalewski's renowned web application security book The Tangled Web (quoted here)

x-content-type-options: nosniff disables the XSS-tastic content sniffing feature in older versions of Internet Explorer, where IE would helpfully guess that you intended to serve HTML based on the first few bytes of the response.

The content-disposition: attachment header causes the browser to show a "download this file" dialog, using the suggested filename.

If you know of a reason that this isn't secure enough, please let me know!

URL building

The new datasette.urls family of methods can be used to generate URLs to key pages within the Datasette interface, both within custom templates and Datasette plugins. See Building URLs within plugins for more details. (#904)

Datasette's base_url configuration setting was the forcing factor around this piece of work.

It allows you to configure Datasette to serve content starting at a path other than / - for example:

datasette --config base_url:/path-to-datasette/

This will serve all Datasette pages at locations starting with /path-to-datasette/.

Why would you want to do this? It's useful if you are proxying traffic to Datasette from within the URL hierarchy of an existing website.

The feature didn't work properly, and enough people care about it that I had a steady stream of bug reports. For 0.51 I gathered them all into a single giant tracking issue and worked through them all one by one.

It quickly became apparent that the key challenge was building URLs within Datasette - not just within HTML template pages, but also for things like HTTP redirects.

Datasette itself needed to generate URLs that took the base_url setting into account, but so do Datasette plugins. So I built a new datasette.urls collection of helper methods and made them part of the documented internals API for plugins. The Building URLs within plugins documentation shows how these should be used.

I also added documentation on Running Datasette behind a proxy with example configs (tested on my laptop) for both nginx and Apache.

The datasette.client mechanism from Datasette 0.50 allows plugins to make calls to Datasette's internal JSON API without the overhead of an HTTP request. This is another place where plugins need to be able to construct valid URLs to internal Datasette pages.

I added this example to the documentation showing how the two features can work together:

table_json = (
    await datasette.client.get(
        datasette.urls.table("fixtures", "facetable", format="json")
    )
).json()

One final weird detail on this: Datasette now has various methods that automatically add the base_url prefix to a URL. I got worried about what would happen if these were applied more than once (as above, where datasette.urls.table() applies the prefix and so does datasette.client.get()).

I fixed this using the same trick that Django and Jinja use to avoid applying auto-escaping twice to content that will be displayed in HTML: the datasette.urls methods actually return a PrefixedUrlString object which is a subclass of str that knows that the prefix has been applied! Code for that lives here.
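The trick is small enough to sketch. This isn't Datasette's actual implementation, just an illustration of the str-subclass idea:

class PrefixedUrlString(str):
    """A str that remembers the base_url prefix has already been applied."""

def prefix_url(path, base_url="/path-to-datasette/"):
    if isinstance(path, PrefixedUrlString):
        return path  # already prefixed - applying it again would double the prefix
    return PrefixedUrlString(base_url.rstrip("/") + "/" + path.lstrip("/"))

url = prefix_url("fixtures/facetable")
assert prefix_url(url) == url  # a second application is a no-op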

Smaller changes

A few highlights from the "smaller changes" in Datasette 0.51:

Wide tables shown within Datasette now scroll horizontally (#998). This is achieved using a new <div class="table-wrapper"> element which may impact the implementation of some plugins (for example this change to datasette-cluster-map).

I think this is a big improvement: if your database table is too wide, it now scrolls horizontally on the page (rather than blowing the entire page out to a wider width). You can see that in action on the global-power-plants demo.

New debug-menu permission. (#1068)

If you are signed in as root the new navigation menu links to a whole plethora of previously-undiscoverable Datasette debugging tools. This new permission controls the display of those items.

Link: HTTP header pagination. (#1014)

Inspired by GitHub and WordPress, which both use the HTTP Link header in this way. It's an optional extra though: Datasette will always offer in-JSON pagination information.

Edit SQL button on canned queries. (#1019)

Suggested by Jacob Fenton in this issue. The implementation had quite a few edge cases since there are certain categories of canned query that can't be executed as custom SQL by the user. See the issue comments for details and a demo.

--load-extension=spatialite shortcut. (#1028)

Inspired by a similar feature in sqlite-utils.

datasette -o option now opens the most relevant page. (#976)

This is a fun little feature. If your Datasette only loads a single database, and that database only has a single table (common if you've just run a single CSV import) then running this will open your browser directly to that table page:

datasette data.db -o

datasette --cors option now enables access to /database.db downloads. (#1057)

This was inspired by Mike Bostock's Observable Notebook that uses the Emscripten-compiled JavaScript version of SQLite to run queries against SQLite database files.

It turned out you couldn't use that notebook against SQLite files hosted in Datasette because they weren't covered by Datasette's CORS option. Now they are!

New documentation on Designing URLs for your plugin. (#1053)

Recommendations for plugin authors, inspired by a question from David Kane on Twitter. David has been building datasette-reconcile, a Datasette plugin that offers a reconciliation API endpoint that can be used with OpenRefine. What a brilliant idea!

datasette-edit-templates (almost)

Inspired by a conversation with Jesse Vincent, I also spent some time experimenting with the idea of a plugin that can load and edit templates from the database - which would turn a personal Datasette into a really fun interface hacking environment. I nearly got this working, and even shipped a preview of a load_template() plugin hook in the Datasette 0.51a2 alpha... before crashing into a roadblock when I realized that it also needed to work with Jinja's {% extends %} and {% include %} template tags, and the loaders for those don't currently support async functions.

In exploring this I also realized that my load_template() plugin hook wasn't actually necessary - if I'm going to solve this problem with Jinja loaders I can do so using the existing prepare_jinja2_environment(env) hook.

My not-yet-functional prototype for this is called datasette-edit-templates. I'm pretty confident I can get it working against the old plugin hook with a little more work.

Other weeknotes

Most of my time this week was spent on Datasette 0.51 - but I did find a little bit of time for other projects.

I finished recording my talk for PyCon Argentina. It will air on November 20th.

sqlite-utils 2.23 is out, with a .m2m() bug fix from Adam Wolf and the new ability to display progress bars when importing TSV and CSV files.

Releases this week

Several of these are updates to take advantage of the new navigation plugin hooks introduced in Datasette 0.51.

datasette-configure-fts 1.1 - 2020-11-01
datasette-graphql 1.1 - 2020-11-01
datasette-edit-schema 0.4 - 2020-10-31
datasette-upload-csvs 0.6 - 2020-10-31
datasette 0.51 - 2020-10-31
datasette 0.51a2 - 2020-10-30
datasette-edit-schema 0.4a0 - 2020-10-30
datasette 0.51a1 - 2020-10-30
datasette-render-markdown 1.2 - 2020-10-28
sqlite-utils 2.23 - 2020-10-28

TIL this week

Decorators with optional arguments
Dropdown menu with details summary

Saturday, 31. October 2020

Nicholas Rempel

Migrating This Site Away From Gatsby

Well it's happened again. I've done yet another rebuild of this website using Django and Wagtail. For my previous build, I used Gatsby to build a static site which I hosted on Netlify. The dynamic content of the site was stored in an instance of Ghost which was then pulled in at build-time.

Overall, I was quite disappointed with Gatsby. I found the framework to be quite over-engineered. At one point, I found myself trying to resize a photo or something and it required some (to me at least) very complex configuration and a fancy GraphQL query just to get the image to load. Another issue I have is with the offline plugin. Gatsby uses service workers to cache your entire site offline, so it can still be served in case your users have spotty internet. Or no internet. The problem comes with removing this service worker once you move away from the framework. I needed to do quite a bit of research to figure out how to delete the service worker once I moved over to the new site, since Gatsby chooses to cache very aggressively, which is not recommended.

I understand why Gatsby makes these choices. Their goal is to make websites load insanely fast, which I think they accomplish at a great cost of complexity. And they work hard to ensure that Gatsby sites score highly on the Google Lighthouse test. I have two issues with these goals. First, I'm not sure pursuing a high score on Lighthouse is a good goal to optimize for. Making a website load quickly is important, but Lighthouse will ding you for things like using a particular JavaScript library instead of its recommended smaller alternative. This is a game of cat-and-mouse which has little benefit beyond speeding up your website (which is important). Second, the amount of complexity that Gatsby introduces will likely slow down the development of your site and stifle your creativity. I can't comment on the experience of using Gatsby in the context of a larger team, but for me I know it led to making fewer improvements to the site because of the complexity.

As for Ghost as a headless CMS – I think there are some limitations there. Overall, Ghost is a great platform. It's reliable and usable and has a good writing experience. I do think that their push into the JAMStack fad was mostly a marketing play since some limitations were never addressed even after several years. I would definitely use Ghost again, just not as a headless CMS. I would use it how it was primarily designed to be used.

So why did I choose Django and Wagtail? Well, I've been using Django on and off for a long time. Probably 7 years at this point. The framework is reliable and seriously productive. Wagtail is built as a Django "app" so it's very familiar to me and it's very powerful. Overall, I'm content with the developer experience so far. I think the admin interface that is used to publish content could use some work. It's functional but frankly it's pretty ugly. Especially compared to something like Statamic which has been getting some attention lately.

It's been a while since I've run my blog with server-rendered pages with dynamic content as opposed to a static site published to a CDN. I think the fact that I can log into a dashboard and publish content is so much more ergonomic from a writing and publishing perspective that I'm more likely to write more. Compared to the process of building a static site and deploying an update, it's quite nice. The downside, obviously, is that I need to worry again about load on the site and downtime during traffic spikes. I'm hopeful that this is something that I can avoid by using caching and a CDN. The site now has virtually no javascript. Compared to Gatsby this is a big change. We'll see how it turns out but my theory is that I should be able to achieve similar speeds (good enough anyway) using some clever caching and a CDN. Browsers are pretty good at rendering plain HTML after all.
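For the caching piece, Django makes per-view caching close to a one-liner. A sketch assuming a hypothetical blog_post view - this isn't necessarily how this site is actually configured:

from django.shortcuts import render
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)  # let the server (and any CDN in front of it) reuse the response for 15 minutes
def blog_post(request, slug):
    # Hypothetical view: render plain server-side HTML for the requested post
    return render(request, "blog/post.html", {"slug": slug})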

Friday, 30. October 2020

Margo Johnson

did:(customer)

Transmute’s evolving criteria for matching DID methods to business requirements. Photo by Louis Hansel @shotsoflouis on Unsplash

Transmute builds solutions that solve real business problems. For this reason, we support a number of different decentralized identifier (DID) methods. While we are committed to providing optionality to our customers, it’s equally important to communicate the selection criteria behind these options so that customers can consider the tradeoffs of underlying DID-methods alongside the problem set they’re solving for. Essentially, we help them pick the right tool for the job.

In the spirit of sharing and improving as an industry, here are the work-in-progress criteria we use to help customers assess what DID method is best for their use case:

Interoperability

This DID method meets the interoperability requirements of my business, for example:

Other parties can verify my DID method.
I can switch out this DID method in the future if my business needs change.

Security

This DID method meets the security requirements of my business, such as:

Approved cryptography for jurisdiction/industry
Ledger/anchoring preferences
Key rotation/revocation

Privacy

This DID method meets privacy requirements relevant to my use case, for example:

Identifiers of individuals (data privacy and consent priorities)
Identifiers for companies (organization identity and legal protection priorities)
Identifiers for things (scaling, linking, and selective sharing priorities)

Scalability

This DID method meets the scalability needs of my business use case, for example:

Speed
Cost
Stability/maturity

Root(s) of Trust

This DID method appropriately leverages existing roots of trust that have value for my business or network (or it is truly decentralized). For example:

Trusted domain
Existing identifiers/identity systems
Existing credentials

We are currently using and improving these criteria as we co-design and implement solutions with customers.

For example, our commercial importer customers care a lot about ensuring that their ecosystem can efficiently use the credentials they issue (interoperability) without disclosing sensitive trade information (privacy). Government entities emphasize interoperability and accepted cryptography. Use cases that include individual consumers focus more on data privacy regulation and control/consent. In some instances where other standardized identifiers already exist, DIDs may not make sense as primary identifiers at all.

Examples of DID methods Transmute helps customers choose from today include: Sidetree Element (did:elem, Ethereum anchoring), Sidetree Ion (did:ion, Bitcoin anchoring), Sidetree Photon (did:photon, Amazon QLDB anchoring), did:web (ties to trusted domains), did:key (testing and hardware-backed keys), and more.

How do you think about selecting the right DID method for the job?

Let’s improve this framework together.


Thursday, 29. October 2020

Ally Medina - Blockchain Advocacy

CA’s 2020 Blockchain Legislative Roundup


After a cup of menstrual blood went flying across the Senate floor, I had assumed 2019 would be California’s wildest legislative session for a while. Covid-19 proved me unfortunately wrong. The legislative process, calendar, and agenda were quickly thrown into the dumpster fire of March, and everyone turned back to the whiteboard.

When the Legislature returned from its two-month long “stay at home” recess in May, it passed a stripped-down state budget which reflected lower revenues given the pandemic-induced recession. They then began prioritizing and shelving hundreds of bills that would no longer make the cut in the truncated legislative calendar. Faced with less time to hold hearings and less money to spend on new proposals, legislators shelved an estimated three-quarters of the bills introduced at the beginning of the two-year session.

Here’s what happened for BAC’s blockchain/crypto sponsored bills:

AB 953 (Ting, San Francisco), which would have allowed state and local taxes to be paid with stablecoins, was sadly withdrawn, lacking a clear Covid-19 nexus.

AB 2004 (Calderon, Whittier) marked the first time verifiable credentials saw legislative debate. The bill, which would allow the use of verifiable credentials for Covid-19 test results and other medical records, made it through both houses with bipartisan support. Due to state budget constraints, it was ultimately vetoed; however, the concept quickly gained significant legislative momentum. We are actively working on our strategy for verifiable credentials policy next year.

AB 2150 (Calderon, Whittier) spun through several dizzying iterations. The Blockchain Advocacy Coalition worked closely with Assemblymember Calderon’s office to suggest amended language that would have directed the Department of Business Oversight to study the applicability of SEC Commissioner Hester Peirce’s Proposal to an intrastate safe harbor. An idea that in previous years seemed far-fetched suddenly had political legs and was well received by the agency and several committees. It died in the Senate Appropriations Committee along with nearly everything else that had a significant price tag or wasn’t urgently related to the pandemic.

Fruitful discussions with the DBO about crypto regulation were well timed, however. Given the microscope on consumer protections due to the economic distress caused by the pandemic, the agency received a $19.2 million allocation and a new name: California Department of Financial Protection and Innovation (CDFPI- much worse acronym imo).

HERE’S THE PART YOU NEED TO PAY ATTENTION TO:

With this new agency budget/mission comes a very likely change in the way cryptocurrency is regulated in CA. AB 1864 does a few things:

Establishes a Financial Technology Innovation Office based in San Francisco
Requires the department to promulgate rules regarding registration requirements
Charges this department with regulating currently unregulated financial services, including issuers of stored value or such business

This marks a departure from the agency’s previous approach. Virtual currency businesses did not have any separate registration requirements or the need to apply for a money transmitter license. BAC participated in stakeholder calls this summer about the agency’s expansion and we are continuing to engage with the agency about how these registration requirements will be created. Cryptocurrency businesses need to understand that the agency has been given the authority to create these standards without going back to the legislature, so early engagement is key.

Interested in joining our coalition and having a seat at the table? Contact: ally@blockadvocacy.org

BAC has previously facilitated educational workshops with the Department of Business Oversight and hosted roundtables with Gov. Newsom, Treasurer Ma and the Legislature to build an understanding of the importance of the blockchain industry in CA.


Ben Werdmüller

Vacation


I took this week off so I could spend a little more time with my mother. Originally, it was also because I was in danger of burning out, and because I wanted to help get out the vote.

Today I was going to gently take her to the ocean - she can't walk far, but at least she could see the waves. It would have been a nice day for it.

Instead, this week she's had a medical procedure of some kind every single day, for five days running. She's about to have a multi-unit blood transfusion because her hemoglobin levels have plummeted. Afterwards, she'll likely want to sleep, in the same way she does after the dialysis sessions she has three times a week.

I'm very glad I'm here with my parents: I've been sheltering in place with them throughout the pandemic, which has allowed me to spend more time with them, and do what I can to help my dad, who is my mother's primary carer. This month alone events have included evacuating from a fire that miraculously stopped a block away from the house, and a mix of emergency and planned hospital visits. It's a lot, and I'm exhausted.

This is all quality time, but not the kind I was hoping for: there are fewer long talks, and far more feeding tube flushes and wound cleanings. I really hope it's not too late for those conversations. I'll be here regardless; I'm grateful for my family, and I'll take all the time I can get.

I could, however, use another vacation.


Simon Willison

Defining Data Intuition


Defining Data Intuition

Ryan T. Harter, Principal Data Scientist at Mozilla, defines data intuition as "a resilience to misleading data and analyses". He also introduces "data-stink" as a counterpart to "code smell": your intuition should lead you to distrust analysis that exhibits certain characteristics before you even dig in further. I strongly believe that data reports should include a link to the raw methodology and numbers so they can be more easily vetted - so that data-stink can be investigated with the least amount of resistance.


Quoting Michael Hobbes


Seniors generally report having more trust in the people around them, a characteristic that may make them more credulous of information that comes from friends and family. There is also the issue of context: Misinformation appears in a stream that also includes baby pictures, recipes and career updates. Users may not expect to toggle between light socializing and heavy truth-assessing when they’re looking at their phone for a few minutes in line at the grocery store.

Michael Hobbes


Mike Jones: self-issued

Second OpenID Foundation Virtual Workshop


Like the First OpenID Foundation Virtual Workshop, I was once again pleased by the usefulness of the discussions at the Second OpenID Foundation Virtual Workshop held today. Many leading identity engineers and businesspeople participated, with valuable conversations happening both via the voice channel and in the chat. Topics included current work in the working groups, such as eKYC-IDA, FAPI, MODRNA, FastFed, EAP, Shared Signals and Events, and OpenID Connect, plus OpenID Certification, OpenID Connect Federation, and Self-Issued OpenID Provider (SIOP) extensions.

Identity Standards team colleagues Kristina Yasuda and Tim Cappalli presented respectively on Self-Issued OpenID Provider (SIOP) extensions and Continuous Access Evaluation Protocol (CAEP) work. Here are my presentations on the OpenID Connect working group (PowerPoint) (PDF) and the Enhanced Authentication Profile (EAP) working group (PowerPoint) (PDF). I’ll add links to the other presentations when they’re posted.


Ben Werdmüller

Introducing ben.lol


I've spent a few hours here and there over the last few months building a text adventure game.

I grew up with adventure games. The Secret of Monkey Island was foundational for me: an irreverent point-and-click story with an anarchic sense of humor that completely appealed to my twelve-year-old self. Slide across a telegraph wire using a rubber chicken with a pulley in the middle? Sure. (Sorry for the spoiler.)

But long before SCUMM caught my imagination, I spent many hours with interactive fiction games written by companies like Infocom.

Douglas Adams was co-author of Infocom's Hitchhiker's Guide to the Galaxy game. (You can play a version of it on the BBC website.) It was every bit as funny and confounding as the books, and it worked because it was described entirely with prose.

Graham Nelson's Inform language is an expressive way to build these kinds of interactive fiction games. It's a programming language built for writers, which is fascinating in itself: you define the world using complete, declarative sentences. Emily Short in particular has done amazing work with the language, which is now on its seventh version.

And I thought I'd muck around with it. It's a work in progress in the truest sense of the word; far more exploration than game. Almost every dream I have is set in a consistent universe, with a dream London, a dream Edinburgh, and so on, and I thought it would be fun to set it there.

For now, it lives at ben.lol, a domain name I bought for silly experiments, and should work on every browser. I'll keep playing around with it.

Let me know what you think!

Wednesday, 28. October 2020

The Dingle Group

Bridging to Self-Sovereign Identity


How to enable the Enterprise to move from existing centralized or federated identity access management systems to a decentralized model was the topic of the 15th Vienna Digital Identity Meetup*. In both the private and public sector, capital investments in IAMs run into the billions of dollars; for decentralized identity models to make serious inroads in this sector, a roadmap or transition journey that both educates and enables the enterprise to make the move is required.

In our 15th event we discussed the routes being taken by Raonsecure and IdRamp. Both companies have been successful in introducing decentralized identity concepts to the market and helping their customers start on this transition journey.

Alex David (Senior Manager, Raonsecure) started with a brief update on the forces driving decentralized identifiers in the Korean market and then presented Raonsecure’s Omnione product. In keeping with the theme of ‘bridging’ DIDs and VCs into the market, Alex went through six different pilot and proof-of-concept solutions that Omnione has implemented in the Korean market. These range from working with the Korean Military Manpower Association on DID-based authentication and the issuing of verifiable credentials (VCs) to Korean veterans, to the use of DIDs for driverless car identification in an autonomous vehicle pilot in Sejong, Korea.

Mike Vesey (CEO, IdRamp) introduced IdRamp and discussed their core objective of bringing DIDs and VCs into the enterprise market by creating a ‘non-frightening’ educational path to adoption of decentralized identity. IdRamp is a session-based transactional gateway (no logging) to all things identity, providing common service delivery, compliance, and consent management across identity platforms. They are not an identity service provider but more closely resemble a digital notary in the generation of decentralized identifiers and verifiable credentials. The service integration capability of IdRamp was demonstrated with the use of employee-issued verifiable credentials to authenticate to an enterprise service (in this case a Zoom session login).

Finally, the seed of an upcoming event was planted. As with any new technology, breaking through the ‘noise’ of everyday life is very difficult. This is no different for DIDs and VCs. You will have to watch the recording to get the topic…

For a recording of the event please check out the link: https://vimeo.com/472937478

Time markers:

0:00:00 - Introduction

0:05:04 - State of DIDs in South Korea

0:13:29 - Introducing Omnione

0:24:00 - DIDs and VCs in action in South Korea

0:51:18 - Introduction to IdRamp

1:04:00 - Interactive demo on IdRamp

1:08:00 - Wallets and compatibility

1:11:00 - Service integration demo & discussion

1:31:57 - Upcoming events

For more information on Raonsecure Omnione: https://omnione.net/en/main

For more information on IdRamp: https://idramp.com/

And as a reminder, due to increased COVID-19 infections we are back to online only events. Hopefully we will be back to in person and online soon!

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technologists on the value that a high assurance digital identity creates by reducing risk and strengthening provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.


Tuesday, 27. October 2020

Ben Werdmüller

Principles for the future


Like many people, 2020 has creatively consumed me. It's hard to give your undivided attention to something, or put yourself in a truly creative flow, when so much is going on. The sheer onslaught of new information - some newly jaw-dropping story seems to be showing up four to six times a day - puts my brain in a reactive mode. Instead of being inventive and generative, I'm constantly aghast. I'm hopeful that it will be possible to re-find a sort of mental peace once the election has been and gone, but I'm also a realist. The pandemic will continue; the political clown show will continue; children have been permanently separated from their parents, creating an entire, lost Trump generation; we will not right all the wrongs of the last four years overnight.

I've been thinking it would be an interesting exercise to force myself into a generative mode about the future. Instead of reacting to the onslaught of awfulness and saying this is what I don't want, which is almost a default biological reaction, what if we deliberately and proactively painted a picture of a possible future and said this is what I want?

It's a surprisingly hard thing to do. Even thinking about the form of it - is it a manifesto? a short story? - brings difficult choices. But if we're to be truly successful at building a better world, we need to have a strong idea of what that vision for the future really is.

I think speculative fiction can carry us a long way. But even in this creative realm, it's commonplace to paint dystopias: Black Mirror warnings of what could be rather than optimistic visions of what we might aspire to. I would love to read explorations of utopia, but aside from some facets of Star Trek (which, let's call it out, is united by a militaristic Federation), I don't know where to begin to look.

Rather than a complete imagining of this better future, perhaps it's helpful to start with principles. What is the guiding North Star that will help us make decisions about which paths to take?

I've long had a professional mission: I want to work on products that make the world more equal, informed, and kind. But that's a different exercise to defining principles that guide a positive vision of the future.

This is my attempt to define those - or at least, my representation of how I'm thinking about principles for the future today. I would love to read yours.

Principles for the future: Life, Fairness, Autonomy, and Forward Motion

Life

Everyone - regardless of their background, character, geography, and context - should be able to live a good life.

Not every American, or every person in whichever nation you happen to be reading this from; every person.

Nobody should experience poverty; everyone should have a home; everyone should have enough food to eat; everyone should have the opportunity to receive a great education; nobody should succumb to curable disease; everyone should have mobility. This is the foundation of a society that can provide a good life for all.

These things should not be provided by private businesses. I believe we need private businesses, but the ability to make a profit at scale is not the same thing as being able to provide a fundamental societal foundation. Those things should be provided by a democratically elected government and upheld by an implicit social contract.

Taxation as part of this social contract is a reasonable funding mechanism. Societies with higher levels of progressive taxation turn out to result in a higher quality of life, in part because basic human rights are taken care of. The important thing is not the money you have in your pocket; it's your experience of living.

Wealth is not a shorthand for well-being.

The progress of the world should be measured by the experience of the people living in it rather than their wealth. In turn, we should measure the success of our nations by the quality of the experience of living in them, rather than their output. GDP, which has become a sort of shorthand for national success, was designed to be a measure of wartime productivity and is far less well-suited to domestic life. We need a better model, and a better measure.

For example, the Human Development Index and Gross National Happiness are great steps in this direction. There are other alternative indexes that are worth considering, although I suspect there will need to be a new, humanist measure to guide us. We do need some measure in order to gauge our successes and failures. This measure must be developed in an inclusive way, with ownership shared by communities across society, so as not to privilege one group over another.

Take the climate crisis: GDP doesn't disincentivize pollution, or directly incentivize cleaning up the environment. (It will, later on, when it is much too late.) A quality of life index would take into account the billions of people who are already feeling the effects of climate change.

It would incentivize public art, and underwriting culture, and providing amazing services, and help for people who need it.

Finally, GDP incentivizes building economic markets, whereas quality of life is method agnostic. All that matters is that we are continually improving the experience of being a human.

Fairness

Equity is a fundamental human value. Everyone should have equitable access to opportunities and resources.

Equity is giving everyone what they need to be successful; equality is treating everyone the same regardless of their context. In other words, mere equality perpetuates existing inequities. It isn't always enough.

Imagine if government was truly representative: not just geographically, but intersectionally.

Imagine if everyone had access to the same level of education, regardless of their context. Then imagine if people who didn't come from generational educational success could receive extra help with the implicit ideas and skills, as well as baseline financial resources, that some people arrive at an institution already possessing. All for free. Imagine if everyone could have the opportunity to do well. Imagine how this wider gene pool of ideas would, in turn, benefit all of us.

I believe strongly that private schools and universities shouldn't exist. Finland, which has some of the highest test scores in the world (as well as one of the highest quality of life rankings), does so well precisely because it prioritizes equity. (There are independent schools, but they're state subsidized, too.)

Imagine if everyone had the same opportunities once they entered the workforce. Imagine if maternity and paternity leave were equalized, eliminating tired old arguments for not promoting women. Imagine if the collective paid parental leave was 480 days, as it is in Sweden, allowing for healthier relationships within families with less economic hardship.

Imagine if salaries were required to be published ahead of time, eliminating both the need for negotiation and the possibility of women and people of color being paid less for the same job. (Finland goes a step further and publishes everyone's taxable income once a year.) Imagine if company boards reflected societal diversity. Imagine if conversations about justice were permitted and encouraged at work.

Imagine if businesses did not depend on workers earning poverty-level wages, in any country. Imagine if resources were fairly traded.

Black Lives Matter is needed to undo centuries of generational, institutional discrimination. Likewise, feminism is a crucial ideology of restorative justice. Imagine if these ideas - restorative justice, generational healing, compassion - were core societal values. Imagine, in turn, if misogyny, racism, colonialism, and the broad spectrum of bigotry that has held so many people back were finally thrown to the fire.

In short, imagine if we built our institutions, systems, and processes to uphold fairness for all, rather than to uphold profit or benefit for some. It's not about ensuring equality of outcome (although, of course, everyone has the right to live a good life); it's about ensuring equity of opportunity.

Imagine if we all punched up instead of down.

Autonomy

Everyone has the right to make decisions for themselves and act on them, subject to the social contract we all make with each other.

That means women have the right to choose what they do with their bodies. Abortion must be legal.

That means rather than criminalizing drug addicts, we should provide help.

That means free speech and creative human expression are imperatives - until my speech is in service of rallying others to harm. It means that the right to protest is also an imperative. Sedition is always a bogus charge; government is never a protected group.

That means privacy and freedom from surveillance are human rights.

That means sex workers should be protected rather than demonized.

That means there should be complete freedom of religion (or freedom to practice no religion) - until that religion is used to invade someone else's autonomy, or to create unfair rules elsewhere in society, or to diminish someone else's quality of life.

That means what consenting adults do with each other is not your business, whether in private or public; nor is their decision to marry, for example.

That means you should wear a mask, as it protects others, just as you should wear a seatbelt, because it protects others.

That means building participative, inclusive, democratic governments rather than authoritarian institutions.

That means everyone should have the opportunity and ability to own and maintain property.

That means valuing diversity and inclusion.

That means allowing broad immigration between countries. Ideally you should be able to choose to live in the country whose values most closely align with your own.

That means enacting peaceful, globally democratic foreign policy.

That means accepting that some people will do things that you will not like - and, as long as it does not cause harm, upholding this as an important value by which we can all live.

It also means ensuring that everyone has an equal opportunity to make their own decisions and act on them. It implies a non-aggressive approach to policing, and a community-orientated approach to justice.

Forward motion

We should use our resources, creativity, and expertise to rapidly improve quality of life, fairness, and autonomy.

Basic human needs should be the responsibility of an intersectionally inclusive, democratically-elected government so we can concentrate on advancing human society rather than providing the basics.

By effectively measuring quality of life, inclusive teams should receive support to perform rapid, human-centered experiments within their communities in order to quickly determine how that quality of life can be improved.

Universities and research centers should be well-funded - not just for STEM activities, but also for humanities and cultural research. As this research is publicly funded, it should then be made publicly available, so everyone can benefit from its findings.

Exploration of the universe, and of our own planet, should similarly be owned by all of us. By making the fruits of human endeavor public, we can allow everyone to build on it, snowballing human progress.

Entrepreneurship has an important part to play. Innovation is a driver for progress. We should create a world where everyone has the ability to be an entrepreneur (not just the rich and well-connected), can be supported in doing so, and can build on a rich body of public research to help them succeed.

We should all own the process and the fruits of our communal progress. We should react to harms quickly, and continuously work to improve everyone's quality of life. We should have the space to seek our own individual goals, while valuing the goals of our communities. We should be there for each other.

We should tell stories about the future and try to make them real.

What's next?

This is clearly an incomplete set of principles. They're mine, as written over a set of days in October, 2020. But I think my next step is to stress test them by building a set of possible futures; to tell those utopian stories.

Perhaps your next step could be to build your own set of principles, and use them to tell your own stories. We can build on each other's ideas, as well as the ideas of diverse authors and futurists, to envision the world we want.

And then, of course, we build it.

 

Photo by Sara Kurfeß on Unsplash

Monday, 26. October 2020

Simon Willison

Quoting Apple, Google and a Deal That Controls the Internet


Apple now receives an estimated $8 billion to $12 billion in annual payments — up from $1 billion a year in 2014 — in exchange for building Google’s search engine into its products. It is probably the single biggest payment that Google makes to anyone and accounts for 14 to 21 percent of Apple’s annual profits.

Apple, Google and a Deal That Controls the Internet


Webistemology - John Wunderlich

Privacy in Ontario?

MyData Canada Privacy Law Reform Submission

MyData Canada recently submitted a report to the Government of Ontario in response to its consultation for strengthening privacy protections in Ontario. You can download the submission from the MyData Canada site. I am a board member of MyData Global and a member of the MyData Canada hub. This is a brief summary of some of the recommendations in that report.

Part of what MyData Canada would like the province of Ontario to address is the current ‘gatekeeper model’, where each of us cedes control over information about us under terms or privacy policies based on a flawed consent model. As the report puts it,

Behind each of the gate-keepers (“data controllers” in GDPR terms) in the centralized model are thousands of intermediaries and data brokers invisible to the individuals whose data they process. Individuals’ personal data is used in ways they could not anticipate, and often without their awareness or even the opportunity to meaningfully consent.

This needs to be fixed.

Background

Canada has a multi-jurisdictional privacy environment. That means that both levels of government have privacy commissioners and privacy laws. Ontario, Canada’s most populous province, does not have a private sector privacy law. This leaves a number of categories of persons and organizations uncovered; don’t ask why, it’s a constitutional jurisdictional thing. Thus the consultation and submission. MyData Canada believes that,

…our proposed approach will help accelerate the development and uptake of privacy-focused, human-centric innovation and ultimately serve to regain public trust and confidence in the digital economy.
2-Branch Privacy Reform

MyData Canada proposes a two-branch approach to privacy law reform: a harmonization branch and a transformation branch.

The harmonization branch proposes an incremental approach to enable any new Ontario law to work harmoniously with other regimes, both in Canada and in the rest of the world. This branch is intended to ensure that Ontario is a low-friction end point for cross-border data flows with other data protection regimes. At the same time, this branch will introduce a regulatory framework with functional equivalency to the CCPA in the US and to the GDPR in the EU. In essence, this broad framework skates to where the puck will be with respect to global data protection laws.

The digital transformation branch proposes to simultaneously create a ‘next-generation’ regulatory space within Ontario. This space will allow Ontario-based companies or organizations to create new forward-looking and individually centred solutions. To continue the metaphor, this branch will allow breakaway solutions that will disrupt the current platform information gatekeepers and return autonomy to individuals.

Harmonize Up

Rather than seeking a lowest common denominator or participating in a race to the bottom, MyData recommends harmonizing ‘up’ including the following:

Adopting a principled and risk-based approach to privacy regulation;
Coordinating with other provinces and the federal government, perhaps including a pan-Canadian council of information and privacy commissioners;
Aligning with Convention 108 and 108+;
Increased enforcement powers; and
Implementation support for businesses and organizations for compliance.

Digital Transformation

Create a regulatory environment to reward first movers with privacy enhancing technologies that put people at the centre of their own data. Recommendations include:

Creating a privacy technology incubator;
Hosting regulatory sandboxes and hackathons;
Grants and other incentives for privacy ‘retrofits’;
Creating and supporting a regime for seals, badges, and privacy trust marks;
Fostering interoperability by requiring APIs or similar means to prevent or counter ‘platform dominance’ and network effects; and
Creating up-skilling programs for a multi-disciplinary privacy engineering centre of excellence in Ontario.

Summing up

The above is just a summary of the first recommendations of the report. It includes further recommendations on:

Taking a comprehensive approach to move beyond compliance;
Adopting a Consumer Protection and Human Rights oriented regulatory enforcement model; and
Adopting a multi-stakeholder and inclusive model to spur innovation and open data

If you find this interesting please download and share the report.

Sunday, 25. October 2020

Just a Theory

Automate Postgres Extension Releases on GitHub and PGXN

Go beyond testing and fully automate the release of Postgres extensions on both GitHub and PGXN using GitHub actions.

Back in June, I wrote about testing Postgres extensions on multiple versions of Postgres using GitHub Actions. The pattern relies on a Docker image, pgxn/pgxn-tools, which contains scripts to build and run any version of PostgreSQL, install additional dependencies, build, test, bundle, and release an extension. I’ve since updated it to support testing on the latest development release of Postgres, meaning one can test on any major version from 8.4 to (currently) 14. I’ve also created GitHub workflows for all of my PGXN extensions (except for pgTAP, which is complicated). I’m quite happy with it.

But I was never quite satisfied with the release process. Quite a number of Postgres extensions also release on GitHub; indeed, Paul Ramsey told me straight up that he did not want to manually upload extensions like pgsql-http and PostGIS to PGXN, but for PGXN to automatically pull them in when they were published on GitHub. It’s pretty cool that newer packaging systems like pkg.go.dev auto-index any packages on GitHub. Adding such a feature to PGXN would be an interesting exercise.

But since I’m low on TUITs for such a significant undertaking, I decided instead to work out how to automatically publish a release on GitHub and PGXN via GitHub Actions. After experimenting for a few months, I’ve worked out a straightforward method that should meet the needs of most projects. I’ve proven the pattern via the pair extension’s release.yml, which successfully published the v0.1.7 release today on both GitHub and PGXN. With that success, I updated the pgxn/pgxn-tools documentation with a starter example. It looks like this:

 1  name: Release
 2  on:
 3    push:
 4      tags:
 5        - 'v*' # Push events matching v1.0, v20.15.10, etc.
 6  jobs:
 7    release:
 8      name: Release on GitHub and PGXN
 9      runs-on: ubuntu-latest
10      container: pgxn/pgxn-tools
11      env:
12        # Required to create GitHub release and upload the bundle.
13        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
14      steps:
15      - name: Check out the repo
16        uses: actions/checkout@v2
17      - name: Bundle the Release
18        id: bundle
19        run: pgxn-bundle
20      - name: Release on PGXN
21        env:
22          # Required to release on PGXN.
23          PGXN_USERNAME: ${{ secrets.PGXN_USERNAME }}
24          PGXN_PASSWORD: ${{ secrets.PGXN_PASSWORD }}
25        run: pgxn-release
26      - name: Create GitHub Release
27        id: release
28        uses: actions/create-release@v1
29        with:
30          tag_name: ${{ github.ref }}
31          release_name: Release ${{ github.ref }}
32          body: |
33            Changes in this Release
34            - First Change
35            - Second Change
36      - name: Upload Release Asset
37        uses: actions/upload-release-asset@v1
38        with:
39          # Reference the upload URL and bundle name from previous steps.
40          upload_url: ${{ steps.release.outputs.upload_url }}
41          asset_path: ./${{ steps.bundle.outputs.bundle }}
42          asset_name: ${{ steps.bundle.outputs.bundle }}
43          asset_content_type: application/zip

Here’s how it works:

Lines 4-5 trigger the workflow only when a tag starting with the letter v is pushed to the repository. This follows the common convention of tagging releases with version numbers, such as v0.1.7 or v4.6.0-dev. This assumes that the tag represents the commit for the release.

Line 10 specifies that the job run in the pgxn/pgxn-tools container, where we have our tools for building and releasing extensions.

Line 13 passes the GITHUB_TOKEN variable into the container. This is the GitHub personal access token that’s automatically set for every build. It lets us call the GitHub API via actions later in the workflow.

Step “Bundle the Release”, on Lines 17-19, validates the extension META.json file and creates the release zip file. It does so by simply reading the distribution name and version from the META.json file and archiving the Git repo into a zip file. If your process for creating a release file is more complicated, you can do it yourself here; just be sure to include an id for the step, and emit a line of text so that later actions know what file to release. The output should look like this, with $filename representing the name of the release file, usually $extension-$version.zip:

::set-output name=bundle::$filename

Step “Release on PGXN”, on lines 20-25, releases the extension on PGXN. We take this step first because it’s the strictest, and therefore the most likely to fail. If it fails, we don’t end up with an orphan GitHub release to clean up once we’ve fixed things for PGXN.

With the success of a PGXN release, step “Create GitHub Release”, on lines 26-35, uses the GitHub create-release action to create a release corresponding to the tag. Note the inclusion of id: release, which will be referenced below. You’ll want to customize the body of the release; for the pair extension, I added a simple make target to generate a file, then pass it via the body_path config:

    - name: Generate Release Changes
      run: make latest-changes.md
    - name: Create GitHub Release
      id: release
      uses: actions/create-release@v1
      with:
        tag_name: ${{ github.ref }}
        release_name: Release ${{ github.ref }}
        body_path: latest-changes.md

Step “Upload Release Asset”, on lines 36-43, adds the release file to the GitHub release, using output of the release step to specify the URL to upload to, and the output of the bundle step to know what file to upload.

Lotta steps, but works nicely. I only wish I could require that the testing workflow finish before doing a release, but I generally tag a release once it has been thoroughly tested in previous commits, so I think it’s acceptable.

Now if you’ll excuse me, I’m off to add this workflow to my other PGXN extensions.


reb00ted

Face recognition everywhere, yay!


“Police are using facial recognition for minor crimes because they can.” CNET.


Webistemology - John Wunderlich

A near term future history

This is a flight of fancy written a week and a half before the US election. I hope it proves to be bad speculation.
What if?

Here’s a speculative fiction plot, NOT a prediction, prompted by the news frenzy of US presidential politics, influenced by my own contrarian nature. If you read history and science fiction we live in an interesting time. Whether it will be a historical turning point in US and global history is not something that we are privileged to know at the time. But if this is a historic turning point I would like to plot out a story based on where we are, not much more than a week before the US election between President Trump and former Vice-President Biden. Let’s ignore that reality strains credulity, and proceed from there. Nothing below should be read as representative of my own views or preferences.

Biden wins?

The Democrats win the House and the Senate. Joe Biden wins the popular vote but the electoral college remains unclear pending delayed counts for mail-in ballots. Election night ends with both candidates being declared the victor by different media outlets.

Taking it to the streets

On the day after the election Trump claims to be the legitimate President based on claims of a rigged election. From the White House he asks his supporters to come out to defend his victory. Activists from both sides take to the streets in every major American city. The national guard takes sides, but on different sides in different states or cities, and civic order breaks down.

COVID confusion

The electoral college convenes and declares Biden to be the winner. Trump and his staff leave the White House under political pressure. President-elect Biden is then revealed to be ill with COVID, increasing uncertainty. Before the inauguration, Biden dies in hospital, and Kamala Harris is sworn in as the 46th President of the United States, with Pete Buttigieg as her vice-president. The Joint Chiefs of Staff are prominently on display at the inauguration even as conflict rages on the streets of America.

Consolidation

President Harris deploys the military, aided by private military contractors, to establish order on the streets. Former President Trump flees the country to Moscow, declaring a government in exile. The Harris administration declares a continuing state of military emergency with Democratic majorities in the House and Senate. The state of emergency delays elections until the ‘emergency is over’.

Epilogue

10 years after the start of the Harris dynasty, the United States has become the Democratic Republic of America, run by the United Democratic Party under Grand President Harris. The former Republican Party has been absorbed into the United Democratic Party. Wall Street has become a financial backwater. The US overseas military presence has been dramatically reduced because of domestic military requirements. There is an underground resistance, led by Alexandria Ocasio-Cortez, the most wanted fugitive in America. Climate change is at 2.5 degrees and increasing. China has become the dominant global power, dominating the United Nations, based on its Belt and Road Initiative.

End note

I hope that this will prove to be really bad speculation. The past is prologue but it is our job to create the history we want to inhabit. I wish my American friends a successful and well-run election with a successful and peaceful transition of power if the Biden-Harris ticket wins.


Boris Mann's Blog


One of the few times I’ve stayed somewhere else on #bowenisland. Up on Eagle Cliff, looking out to Strait of Georgia and across to UBC. Wind and whitecaps.

Saturday, 24. October 2020

Simon Willison

Weeknotes: incremental improvements


I've been writing my talk for PyCon Argentina this week, which has proved surprisingly time consuming. I hope to have that wrapped up soon - I'm pre-recording it, which it turns out is much more work than preparing a talk to stream live.

I've made bits and pieces of progress on a whole bunch of different projects. Here are my notes on Datasette, plus an annotated version of my other releases-this-week.

Datasette 0.51a0

Datasette's base_url configuration option is designed to help run Datasette behind a proxy - so you can configure Apache or nginx to proxy /my-datasette/ to a Datasette instance and have every internal link work correctly.

It doesn't completely work. I gathered all of the bugs with it in a tracking issue, addressed as many of them as I could and released Datasette 0.51a0 as a testing alpha.

If you run Datasette behind a proxy, please try out this new alpha and tell me if it works for you! Testing help is requested here.

Also in the alpha:

New datasette.urls URL builder for plugins, see Building URLs within plugins. (#904) (There's a short usage sketch after this list.)
Removed --debug option, which didn't do anything. (#814)
Link: HTTP header pagination. (#1014)
x button for clearing filters. (#1016)
Edit SQL button on canned queries. (#1019)
--load-extension=spatialite shortcut. (#1028)
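
Here's a small sketch of what that first item looks like in practice, using the documented datasette.urls helpers from a plugin hook (the variable names and the fixtures/facetable example are my own illustration, not from the release notes):

    # Sketch of a plugin using the new datasette.urls builder (illustrative names).
    from datasette import hookimpl

    @hookimpl
    def extra_template_vars(datasette):
        # Build base_url-aware links instead of hard-coding "/" paths, so the
        # plugin keeps working when Datasette is mounted behind a proxy prefix.
        return {
            "home_url": datasette.urls.instance(),
            "table_url": datasette.urls.table("fixtures", "facetable"),
        }
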
Other releases this week

sphinx-to-sqlite 0.1a1 and 0.1a

One of the features I'm planning for the official Datasette website is combined search across issues, commits, releases, plugins and documentation - powered by my Dogsheep Beta search engine.

This means I need to load Datasette's documentation into a SQLite database. sphinx-to-sqlite is my new tool for doing that: it uses the optional XML output from Sphinx to create a SQLite table populated with sections from the documentation, since these seem like the right unit for executing search against.
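
Conceptually, the job looks something like this sketch (my own illustration of the idea, not sphinx-to-sqlite's actual code), which walks Sphinx's XML builder output and writes one row per section:

    # Illustrative sketch only; sphinx-to-sqlite's real implementation differs.
    # Assumes the docs were built with Sphinx's XML builder, e.g.:
    #   sphinx-build -b xml docs/ xml-out/
    import sqlite3
    import xml.etree.ElementTree as ET
    from pathlib import Path

    db = sqlite3.connect("docs.db")
    db.execute("CREATE TABLE IF NOT EXISTS sections (page TEXT, title TEXT, content TEXT)")

    for xml_file in Path("xml-out").glob("*.xml"):
        tree = ET.parse(xml_file)
        for section in tree.iter("section"):
            title = section.findtext("title", default="")
            content = "".join(section.itertext())
            db.execute(
                "INSERT INTO sections (page, title, content) VALUES (?, ?, ?)",
                (xml_file.stem, title, content),
            )

    db.commit()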

I'm now using this to build a Datasette instance at latest-docs.datasette.io with the latest documentation on every commit.

datasette-cluster-map 0.14 and 0.13

The default marker popup for datasette-cluster-map is finally a human readable window, not a blob of JSON! You can see that in action on the global-power-plants demo.

inaturalist-to-sqlite 0.2.1, pocket-to-sqlite 0.2.1

I tried out the new PyPI resolver and found that it is a lot less tolerant of ~= vs. >= dependencies, so I pushed out new releases of these two packages.
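
For anyone unfamiliar with the distinction, here's what the two styles look like in a setup.py (the package name is hypothetical; the version operators follow PEP 440):

    # Hypothetical setup.py fragment showing the two dependency styles.
    from setuptools import setup

    setup(
        name="example-to-sqlite",
        install_requires=[
            # Compatible-release pin: means >=2.4.4 and ==2.4.*; the new
            # resolver enforces pins like this strictly across all packages.
            "sqlite-utils~=2.4.4",
            # Looser alternative: any release from 2.4.4 onwards is acceptable.
            # "sqlite-utils>=2.4.4",
        ],
    )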

datasette-json-preview 0.2

I'm using this plugin to preview the new default JSON representation I'm planning for Datasette 1.0. Carl Johnson provided some useful feedback leading to this new iteration, which now looks like this.

github-to-sqlite 2.7

Quoting the release notes:

github-to-sqlite repos command now takes options --readme and --readme-html, which write the README or rendered HTML README into the readme or readme_html columns, respectively. #52

Another feature I need for the Datasette website search engine, described above.

dogsheep-beta 0.9

My personal search engine, described in the latest Datasette Weekly newsletter. This release added facet by date, part of ongoing work on a timeline view.

I also updated it to take advantage of the datasette.client internal API mechanism introduced in Datasette 0.50.

healthkit-to-sqlite 1.0

Another project bumped to 1.0. Only a small bug fix here: this can now import Apple HealthKit data from devices that use languages other than English.

TIL this week

Writing JavaScript that responds to media queries

I finally upgraded this blog to show recently added "Elsewhere" content (bookmarks and quotations) interspersed with my main entries in mobile view. I wrote up this TIL to explain what I did.

Friday, 23. October 2020

Boris Mann's Blog


My parents are moving today. Third move in 47 years they’ve been in Canada.


Simon Willison

OCTO Speaker Series: Simon Willison - Personal Data Warehouses: Reclaiming Your Data


OCTO Speaker Series: Simon Willison - Personal Data Warehouses: Reclaiming Your Data

I'm giving a talk in the GitHub OCTO (Office of the CTO) speaker series about Datasette and my Dogsheep personal analytics project. You can register for free here - the stream will be on Thursday November 12, 2020 at 8:30am PST (4:30pm GMT).

Thursday, 22. October 2020

reb00ted

Disqus is bad for privacy


No surprise here.


Identity Praxis, Inc.

PodCast – On eCommerce


Really enjoyed an engaging interview (09:24 min) on eCommerce with Miguel Arriola, Solutions Architect Manager at Gretrix.

#57 Michael Becker the CEO of Identity Praxis, Inc. tells us his thoughts on eCommerce today, with insights into the future of personal information management exchange.



Simon Willison

CG-SQL


CG-SQL

This is the toolkit the Facebook Messenger team wrote to bring stored procedures to SQLite. It implements a custom version of the T-SQL language which it uses to generate C code that can then be compiled into a SQLite module.

Via @ricardoanderegg


Project LightSpeed: Rewriting the Messenger codebase for a faster, smaller, and simpler messaging app


Project LightSpeed: Rewriting the Messenger codebase for a faster, smaller, and simpler messaging app

Facebook rewrote their iOS messaging app earlier this year, dropping it from 1.7m lines of code to 360,000 and reducing the binary size to a quarter of what it was. A key part of the new app's architecture is much heavier reliance on SQLite to coordinate data between views, and to dynamically configure how different views are displayed. They even built their own custom system to add stored procedures to SQLite so they could execute portable business logic inside the database.

Via @ricardoanderegg


Doc Searls Weblog

On KERI: a way not to reveal more personal info than you need to


You don’t walk around wearing a name badge.  Except maybe at a conference, or some other enclosed space where people need to share their names and affiliations with each other. But otherwise, no.

Why is that?

Because you don’t need a name badge for people who know you—or for people who don’t.

Here in civilization we typically reveal information about ourselves to others on a need-to-know basis: “I’m over 18.” “I’m a citizen of Canada.” “Here’s my Costco card.” “Hi, I’m Jane.” We may or may not present credentials in these encounters. And in most we don’t say our names. “Michael” being a common name, a guy called “Mike” may tell a barista his name is “Clive” if the guy in front of him just said his name is “Mike.” (My given name is David, a name so common that another David re-branded me Doc. Later I learned that his middle name was David and his first name was Paul. True story.)

This is how civilization works in the offline world.

Kim Cameron wrote up how this ought to work, in Laws of Identity, first published in 2004. The Laws include personal control and consent, minimum disclosure for a constrained use, justifiable parties, and plurality of operators. Again, we have those here in the offline world, where your body is reading this on a screen.

In the online world behind that screen, however, you have a monstrous mess. I won’t go into why. The results are what matter, and you already know those anyway.

Instead, I’d like to share what (at least for now) I think is the best approach to the challenge of presenting verifiable credentials in the digital world. It’s called KERI, and you can read about it here: https://keri.one/. If you’d like to contribute to the code work, that’s here: https://github.com/decentralized-identity/keri/.

I’m still just getting acquainted with it, in sessions at IIW. The main thing is that I’m sure it matters. So I’m sharing that sentiment, along with those links.

 


Justin Richer

Filling in the GNAP

Filling in the GNAP About a year ago I wrote an article arguing for creating the next generation of the OAuth protocol. That article, and some of the other writing around it, has been picked up recently, and so people have been asking me what’s the deal with XYZ, TxAuth, OAuth 3.0, and anything else mentioned there. As you can imagine, a lot has happened in the last year and we’re in a very
Filling in the GNAP

About a year ago I wrote an article arguing for creating the next generation of the OAuth protocol. That article, and some of the other writing around it, has been picked up recently, and so people have been asking me what’s the deal with XYZ, TxAuth, OAuth 3.0, and anything else mentioned there. As you can imagine, a lot has happened in the last year and we’re in a very different place.

The short version is that there is now a new working group in the IETF: Grant Negotiation and Authorization Protocol (gnap). The mailing list is still at txauth, and the first WG draft is available online now as draft-ietf-gnap-core-protocol-00.

How Are These All Related?

OK, so there’s GNAP, but now you’re probably asking yourself what’s the difference between GNAP and XYZ, or TxAuth, or OAuth 3.0. With the alphabet soup of names, it’s certainly confusing if you haven’t been following along the story in the last year.

The XYZ project started as a concrete proposal for how a security protocol could work post-OAuth 2.0. It was based on experience with a variety of OAuth-based and non-OAuth-based deployments, and on conversations with developers from many different backgrounds and walks. This started out as a test implementation of ideas, which was later written down into a website and even later incorporated into an IETF individual draft. The most important thing about XYZ is that it has always been implementation-driven: things almost always started with code and moved forward from there.

This led to the project itself being called OAuth.XYZ after the website, and later just XYZ. When it came time to write the specification, this document was named after a core concept in the architecture: Transactional Authorization. The mailing list at IETF that was created for discussing this proposal was named after this draft: TxAuth. As such, the draft, project, and website were all referred to as either XYZ or TxAuth depending on who and when you asked.

After months of discussion and debate (because naming things is really hard), the working group settled on GNAP, and GNAP is now the official name of both the working group and the protocol the group is working on publishing.

As for OAuth 3.0? Simply put, it canonically does not exist. The GNAP work is being done by many members of the OAuth community, but not as part of the OAuth working group. While there may be people who refer to GNAP as OAuth 3.0, and it does represent a shift forward similar to the one OAuth 2.0 made, GNAP is not part of the OAuth protocol family. It’s not out of the question that the OAuth working group decides to adopt GNAP or something else in the future to create OAuth 3.0, but right now that is not on the table.

The GNAP Protocol

Not only is GNAP an official working group, but the GNAP protocol has also been defined in an official working group draft document. This draft represents the output of several months of concerted effort by a design team within the GNAP working group. The protocol in this document is not exactly the same as the earlier XYZ/TxAuth protocol, since it pulled from multiple sources and discussions, but there are some familiar pieces.

The upshot is that GNAP is now an official draft protocol.

GNAP is also not a final protocol by any stretch. If you read through the draft, you’ll notice that there are a large number of things tagged as “Editor’s Notes” and similar commentary throughout, making up a significant portion of the page count. These represent portions of the protocol or document where the design team identified some specific decisions and choices that need to be made by the working group. The goal was to present a set of initial choices along with rationale and context for them.

But that’s not to say that the only flexible portions are those marked in the editor’s notes. What’s important about the gnap-00 document is that it’s a starting point for the working group discussion. It gives the working group something concrete to talk about and debate instead of a blank page of unknown possibilities (and monsters). With this document in hand, the working group can and will change the protocol and how it’s presented over the specification’s lifecycle.

The Immediate Future

Now that GNAP is an active standard under development, XYZ will shift into being an open-source implementation of GNAP from here out. As of the time of publication, we are actively working to implement all of the changes that were introduced during the design team process. Other developers are gearing up to implement the gnap-00 draft as well, and it will be really interesting to try to plug these into each other to test interoperability at a first stage.

TxAuth and Transactional Authorization are functionally retired as names for this work, though the mailing list at IETF will remain txauth so you might still hear reference to that from time to time because of this.

And as stated above, OAuth 3.0 is not a real thing. Which is fine, since OAuth 2.0 isn’t going anywhere any time soon. The work on GNAP is shifting into a new phase that is really just starting. I think we’ve probably got a couple years of active work on this specification, and a few more years after that before anything we do really sees any kind of wide adoption on the internet. These things take a long time and a lot of work, and it’s my hope to see a diverse and engaged group building things out!

Wednesday, 21. October 2020

Boris Mann's Blog

Dude Chilling Park in the fall

Dude Chilling Park in the fall


Identity Praxis, Inc.

Atlanta Innovation Forum Webinar – The Challenging New World of Privacy & Security

An in-depth conversation on privacy & security.  On October 15, 2020, I had a wonderful time discussing privacy and security. Speakers Joining me on the panel were, Carlos J. Bosch, Head of Technology, GSMA North America Matt Littleton, Global Advanced Compliance Specialist, Microsoft Donna Gallaher, President & CEO, New Oceans Enterprises Michael Becker, Founder & […] The post

An in-depth conversation on privacy & security. 

On October 15, 2020, I had a wonderful time discussing privacy and security.

Speakers

Joining me on the panel were,

Carlos J. Bosch, Head of Technology, GSMA North America
Matt Littleton, Global Advanced Compliance Specialist, Microsoft
Donna Gallaher, President & CEO, New Oceans Enterprises
Michael Becker, Founder & CEO, Identity Praxis
Chad Hunt, Supervisory Special Agent, Federal Bureau of Investigation
Julie Meredith, Federal Bureau of Investigation

Key Themes

The key themes that came out of our conversation:

An assessment of corporate and individual threats and attack vectors (e.g. phishing, ransomware, etc.)
People’s sentiment in today’s age: connection, concern, control (compromised by convenience)
Strategies for corporate risk assessment
Strategies for executing corporate privacy & security measures
Relevance of and adherence to privacy regulations (e.g. GDPR, CCPA)
Definitions of key terms, concepts, and nuances: privacy, security, compliance, identity, risk, etc.
A review of key frameworks: the Personal Information Management Triad, the five pillars of digital sovereignty

You can watch our 60-minute discussion below.

 

 

Reference

Bosch, C., Hunt, C., Gallaher, D., Becker, M., & Meredith, J. (2020, October 15). The Challenging New World of Privacy & Security. Atlanta Innovation Forum Webinar, Online. https://www.youtube.com/watch?v=JmlvOKg_dS4

The post Atlanta Innovation Forum Webinar – The Challenging New World of Privacy & Security appeared first on Identity Praxis, Inc..


Simon Willison

Quoting James 'zofrex' Sanderson

Writing the code to sign data with a private key and verify it with a public key would have been easier to get correct than correctly invoking the JWT library. In fact, the iOS app (which gets this right) doesn’t use a JWT library at all, but manages to verify using a public key in fewer lines of code than the Android app takes to incorrectly use a JWT library! — James 'zofrex' Sanderson

Writing the code to sign data with a private key and verify it with a public key would have been easier to get correct than correctly invoking the JWT library. In fact, the iOS app (which gets this right) doesn’t use a JWT library at all, but manages to verify using a public key in fewer lines of code than the Android app takes to incorrectly use a JWT library!

James 'zofrex' Sanderson
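
To make the quoted point concrete, here is a minimal sketch of signing and verifying directly with a key pair in Python, using the cryptography package; the language and library are my own choice for illustration, since the apps in the quote are iOS and Android and use their platform libraries.

# Minimal sketch: sign with a private key, verify with the public key, no JWT library involved.
# Assumes the third-party "cryptography" package is installed.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

payload = b'{"user": "alice", "expires": 1700000000}'
signature = private_key.sign(payload)

try:
    # verify() raises InvalidSignature if the payload or signature was tampered with.
    public_key.verify(signature, payload)
    print("payload verified")
except InvalidSignature:
    print("verification failed")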


Proof of concept: sqlite_utils magic for Jupyter

Proof of concept: sqlite_utils magic for Jupyter Tony Hirst has been experimenting with building a Jupyter "magic" that adds special syntax for using sqlite-utils to insert data and run queries. Query results come back as a Pandas DataFrame, which Jupyter then displays as a table. Via @psychemedia

Proof of concept: sqlite_utils magic for Jupyter

Tony Hirst has been experimenting with building a Jupyter "magic" that adds special syntax for using sqlite-utils to insert data and run queries. Query results come back as a Pandas DataFrame, which Jupyter then displays as a table.

Via @psychemedia
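
For readers who have not used it outside a notebook, here is a tiny sketch of the plain sqlite-utils Python API that such a magic wraps; the database file, table, and column names are made up for illustration.

import sqlite_utils

# Create (or open) a SQLite database file and insert a couple of rows.
db = sqlite_utils.Database("demo.db")
db["people"].insert_all([
    {"name": "Ada", "born": 1815},
    {"name": "Grace", "born": 1906},
])

# Iterate over the rows; each one comes back as a dictionary.
for row in db["people"].rows:
    print(row)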


reb00ted

Universal Declaration of Digital Rights (UDDR)

A draft has been put together.

A draft has been put together.


Amazon will pay you for your shopping data

When you bought from somewhere else than Amazon. Interesting (and, overall, bad for privacy).

When you bought from somewhere else than Amazon. Interesting (and, overall, bad for privacy).


Karyl Fowler

Transmute Closes $2M Seed Round

We’re thrilled to announce the close of Transmute’s $2 million series seed round led by Moonshots Capital, and joined by TMV, Kerr Tech Investments and several strategic angels. Transmute has gained momentum on our mission to be the trusted data exchange platform for global trade. As a byproduct of the pandemic, the world is collectively facing persistent supply chain disruption and unpredictabil

We’re thrilled to announce the close of Transmute’s $2 million series seed round led by Moonshots Capital, and joined by TMV, Kerr Tech Investments and several strategic angels.

Transmute has gained momentum on our mission to be the trusted data exchange platform for global trade. As a byproduct of the pandemic, the world is collectively facing persistent supply chain disruption and unpredictability. This coupled with increasing traceability regulations is driving an urgency for importers to fortify their supply chains. COVID-19 especially has highlighted the need for preventing counterfeit goods and having certainty about your suppliers (and their suppliers).

Transmute Co-founders, Karyl Fowler & Orie Steele @ SXSW 2019

Transmute’s software is upgrading trade documentation today to give importers a competitive edge in an increasingly dynamic, global marketplace. Leveraging decentralized identifier (DID) and verifiable credential (VC) tech with existing cloud-based systems, Transmute is able to offer digital product and supplier credentials that are traceable across an entire logistics ecosystem. From point of origin to end customer, we are unlocking unprecedented visibility into customers’ supplier networks.

Disrupting a highly regulated and old-fashioned industry is complex, and an intentional first step in our go-to-market strategy has been balancing both the needs of regulators and commercial customers.

This is why we’re incredibly proud to join forces with our lead investors at Moonshots Capital, a VC firm focused on investing in extraordinary leaders. We look forward to growing alongside Kelly Perdew (our newest Board of Directors member) and his founding partner Craig Cummings. They’re a team of military veterans and serial entrepreneurs with extensive success selling into government agencies and enterprises.

We are equally proud to be joined by Marina Hadjipateras and the team at TMV, a New York-based firm focused on funding pioneering, early-stage founders. Between their commitment to diverse teams, building sustainable futures and their deep expertise in global shipping and logistics, we feel more than ready to take on global trade with this firm.

The support of Kerr Tech Investments, led by Josh and Michael Kerr, further validates our company’s innovative approach to data exchange. Josh is a seasoned entrepreneur, an e-signature expert and has been advising us since Transmute’s inception.

Closing our seed round coincides with another exciting announcement: our recent launch of Phase II work with the U.S. Department of Homeland Security, Science & Technology’s Silicon Valley Innovation Program (SVIP) to enhance “transparency, automation and security in processing the importation of raw materials” like steel.

Our vision is broader than just improving how trade gets done, and steel imports are just the beginning. We’re inserting revolutionary changes into the fabric of how enterprises manage product and supplier identity, effectively building a bridge — or a fulcrum, rather — towards new revenue streams and business models across industries.

Last — but absolutely not least — I want to give a personal shoutout to my core teammates; startups are a team sport, and our team is stacked! Tremendous congratulations as these backers will accelerate our progress in a huge way. And finally, thanks also to our stellar team of advisors who commit significant time coaching us through blind spots as we bring Transmute’s product to market.

Also, we’re Hiring!

Expanding our capacity to meet customer demand is our top near-term priority. We’re adding a few engineering and product roles to our core team in Austin, TX, so please apply or spread the word!

Transmute Closes $2M Seed Round was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.

Tuesday, 20. October 2020

Boris Mann's Blog

I should log my epic sandwich from yesterday’s lunch. Sourdough from Tall Shadow breads, polish mayo, pickle slice, Roma tomato, havarti, and salami.

I should log my epic sandwich from yesterday’s lunch. Sourdough from Tall Shadow breads, polish mayo, pickle slice, Roma tomato, havarti, and salami.


Spicy udon bowl for lunch today. Onions, ginger, garlic, some diced red peppers & carrots I had laying around. Diced Roma tomatoes. Miso. Soy Sauce. Sambal Oelek. Rice vinegar.

Spicy udon bowl for lunch today. Onions, ginger, garlic, some diced red peppers & carrots I had laying around. Diced Roma tomatoes. Miso. Soy Sauce. Sambal Oelek. Rice vinegar.


Mike Jones: self-issued

OpenID Presentation at IIW XXXI

I gave the following invited “101” session presentation at the 31st Internet Identity Workshop (IIW) on Tuesday, October 20, 2020: Introduction to OpenID Connect (PowerPoint) (PDF) I appreciated learning about how the participants are using or considering using OpenID Connect. The session was recorded and will be available in the IIW proceedings.

I gave the following invited “101” session presentation at the 31st Internet Identity Workshop (IIW) on Tuesday, October 20, 2020:

Introduction to OpenID Connect (PowerPoint) (PDF)

I appreciated learning about how the participants are using or considering using OpenID Connect. The session was recorded and will be available in the IIW proceedings.


reb00ted

Piloting an alternate economy

Grace Rachmany is connecting intentional communities and ecovillages via economics that are not based on money. Interesting experiment.

Grace Rachmany is connecting intentional communities and ecovillages via economics that are not based on money. Interesting experiment.


Boris Mann's Blog

Managed to stop in at the new Radpower Bikes Vancouver showroom right before closing. Snagged a rear rack & basket for Rachael’s new bike.

Managed to stop in at the new Radpower Bikes Vancouver showroom right before closing. Snagged a rear rack & basket for Rachael’s new bike.

Sunday, 18. October 2020

Boris Mann's Blog

When we went for brunch at Forage yesterday, I also bought some of their sourdough starter. Looking forward to waking it up!

When we went for brunch at Forage yesterday, I also bought some of their sourdough starter. Looking forward to waking it up!

Saturday, 17. October 2020

Doc Searls Weblog

Time to unscrew subscriptions

The goal here is to obsolesce this brilliant poster by Despair.com: I got launched on that path a couple months ago, when I got this email from  The_New_Yorker at e-mail.condenast.com: Why did they “need” a “confirmation” to a subscription which, best I could recall, was last renewed early this year? So I looked at the links. […]

The goal here is to obsolesce this brilliant poster by Despair.com:

I got launched on that path a couple months ago, when I got this email from  The_New_Yorker at e-mail.condenast.com:

Why did they “need” a “confirmation” to a subscription which, best I could recall, was last renewed early this year?

So I looked at the links.

The “renew,” “Confirmation Needed” and “Discounted Subscription” links all go to a page with a URL that began with https://subscriptions.newyorker.com/pubs…, followed by a lot of tracking cruft. Here’s a screen shot of that one, cut short of where one filled in a credit card number. Note the price:

I was sure I had been paying $80-something per year, for years. As I also recalled, this was a price one could only obtain by calling the 800 number at NewYorker.com.

Or somewhere. After digging around, I found it at
 https://w1.buysub.com/pubs/N3/NYR/accoun…, which is where the link to Customer Care under My Account on the NewYorker website goes. It also required yet another login.

So, when I told the representative at the call center that I’d rather not “confirm” a year for a “discount” that probably wasn’t, she said I could renew for the $89.99 I had paid in the past, and that the deal would be good  through February of 2022. I said fine, let’s do that. So I gave her my credit card, said this was way too complicated, and added that a single simple subscription price would be better. She replied,  “Never gonna happen.” Let’s repeat that:

Never gonna happen.

Then I got this by email:

This appeared to confirm the subscription I already had. To see if that was the case, I went back to the buysub.com website and looked under the Account Summary tab, where it said this:

I think this means that I last renewed on February 3 of this year, and what I did on the phone in August was commit to paying $89.99/year until February 10 of 2022.

If that’s what happened, all my call did was extend my existing subscription. Which was fine, but why require a phone call for that?

And WTF was that “Account Confirmation Required” email about? I assume it was bait to switch existing subscribers into paying $50 more per year.

Then there was this, at the bottom of the Account summary page:

This might explain why I stopped getting Vanity Fair, which I suppose I should still be getting.

So I clicked on “Reactivate” and got a login page where the login I had used to get this far didn’t work.

After other failing efforts that I neglected to write down, I decided to go back to the New Yorker site and work my way back through two logins to the same page, and then click Reactivate one more time. Voila!

So now I’ve got one page that tells me I’m good to March 2021 next to a link that takes me to another page that says I ordered 12 issues last December and I can “start” a new subscription for $15 that would begin nine months ago. This is how one “reactivates” a subscription?  OMFG.

I’m also not going into the hell of moving the print subscription back and forth between the two places where I live. Nor will I bother now, in October, to ask why I haven’t seen another copy of Vanity Fair. (Maybe they’re going to the other place. Maybe not. I don’t know, and I’m too weary to try finding out.)

I want to be clear here that I am not sharing this to complain. In fact, I don’t want The New Yorker, Vanity Fair, Wired, Condé Nast (their parent company) or buysub.com to do a damn thing. They’re all FUBAR. By design. (Bonus link.)

Nor do I want any action out of Spectrum, SiriusXM, Dish Network or the other subscription-based whatevers whose customer disservice systems have recently soaked up many hours of my life.

See, with too many subscription systems (especially ones for periodicals), FUBAR is the norm. A matter of course. Pro forma. Entrenched. A box outside of which nobody making, managing or working in those systems can think.

This is why, when an alien idea appears, for example from a loyal customer just wanting a single and simple damn price, the response is “Never gonna happen.”

This is also why the subscription fecosystem can only be turned into an ecosystem from the outside. Our side. The subscribers’ side.

I’ll explain how at Customer Commons, which we created for exactly that purpose. Stay tuned for that.

Two exceptions are Consumer Reports and The Sun.


Boris Mann's Blog

Rachael won an overnight stay prize pack from VanMuralFest and the Robson BIA. Finishing our downtown adventure with a nice breakfast at Forage.

Rachael won an overnight stay prize pack from VanMuralFest and the Robson BIA. Finishing our downtown adventure with a nice breakfast at Forage.

Friday, 16. October 2020

Doc Searls Weblog

Higher education adrift

In Your favorite cruise ship may never come back: 23 classic vessels that could be laid-up, sold or scrapped, Gene Sloan (aka @ThePointsGuy) named the Carnival Fantasy as one those that might be headed for the heap. Now, sure enough, there it is, in the midst of being torn to bits (HT 7News, above) in Aliağa, Turkey. Other stories in the […]

In Your favorite cruise ship may never come back: 23 classic vessels that could be laid-up, sold or scrapped, Gene Sloan (aka @ThePointsGuy) named the Carnival Fantasy as one of those that might be headed for the heap. Now, sure enough, there it is, in the midst of being torn to bits (HT 7News, above) in Aliağa, Turkey. Other stories in the same vein are here, here, here, here, here and here.

I’ve been on a number of cruises (here’s one) in the course of my work as a journalist, and I’ve enjoyed them all. I’ve also hung out at a similar number of colleges and universities, and have long found myself wondering whether the former might be a good metaphor for the latter. Both are expensive, well-branded and self-contained structures with a lot of specialized staff and overhead. Both are also vulnerable to pandemics, and in doomed cases their physical components turn out to be worth more than their institutional ones. John Naughton also notes the resemblance. But it’s Scott Galloway who runs all the way with it; first with Higher Ed: Enough Already, and then with a long and research-filled post titled USS University, featuring this title graphic:

Those three schools are adrift across a 2×2 with low value<—>high value on the X axis and high vulnerability<—>low vulnerability on the Y axis. At the lower left are the low-value/high vulnerability schools in a quadrant Scott calls “challenged,” meaning “high admit rates, high tuition, low endowments, dependence on international students, and weak brand equity.” Among those are—

Adelphi, Brandeis, Bard, Dickenson, Dennison, Hofstra, Kent State, Kenyon, LIU, Mt. Holyoke, Old Dominion, Pace, Pacific, Robert Morris, Sarah Lawrence, Seton Hall, Skidmore, Smith, St. John’s (Maryland & New Mexico), The New School, Union, UC Santa Cruz, U Mass Dartmouth, Valparaiso, Wittenberg

— plus a plethora of mostly state-run “directional” schools (e.g. University of Somewhere at Somewhere).

The Hmm here is, How many have more value as real estate than as what they are today?

I started wondering in the same direction in May, when I posted Figuring the Future and Choose One. Both pivoted off this 2×2 by Arnold Kling

On Arnold’s rectangle, D (Fragile/Inessential) is Scott’s “challenged” quadrant. What I’m wondering, now that school is in session and at least some results should be coming in (or at least trending in a direction), is whether any colleges or universities in that group (or in the other quadrants) are already headed toward their own Aliağa.

Thoughts? If so, let me know on Twitter (where I am @dsearls), Facebook (here) or by email (doc at searls dot com). I hope to have comments working again here soon, but for now they don’t, alas.

Thursday, 15. October 2020

Boris Mann's Blog

I’ve been stuck not really reading much. Gave up on the book I was on, and bought Sue Burke’s Interference. I loved the first one, and have been happily reading this before bed. #book #scifi

I’ve been stuck not really reading much. Gave up on the book I was on, and bought Sue Burke’s Interference.

I loved the first one, and have been happily reading this before bed.

#book #scifi

Wednesday, 14. October 2020

@_Nat Zone

[2020-10-15] Fighting Back Against Digital Identity Fraud – Identity Week Asia 2020

「Identity Week Asia 2020」… The post [2020-10-15] Fighting Back Against Digital Identity Fraud - Identity Week Asia 2020 first appeared on @_Nat Zone.

I appeared at Identity Week Asia 2020 as the moderator of a panel discussion titled “Fighting Back Against Digital Identity Fraud.” It was a very interesting panel. Once the video is published, I will share it here.

Fighting Back Against Digital Identity Fraud

Digital Identity in an Online World

The isolation of worldwide lockdowns this year has presented criminals with new opportunities for phishing, and this only adds to already existing trends in the rise of synthetic IDs, account takeover and SIM swap fraud. This panel will explore solutions which utilise digital identity to fight back against the fraudsters.

Moderator: Nat Sakimura, Chairman, OpenID Foundation

Subhashish Bose, Senior Director – Fraud & Security, FICO

Jeremy Grant, Coordinator, Better Identity Coalition

David Turkington, Head of Technology, APAC, GSMA

The post [2020-10-15] Fighting Back Against Digital Identity Fraud - Identity Week Asia 2020 first appeared on @_Nat Zone.


Ben Werdmüller

Great tips! I’m a big fan of ...

Great tips! I’m a big fan of NextJS, and it’s lovely to see webmentions made so easy for that ecosystem.

Great tips! I’m a big fan of NextJS, and it’s lovely to see webmentions made so easy for that ecosystem.


Boris Mann's Blog

The Sifter in Atlas Obscura

The Sifter thesifter.org a multilingual database, currently 130,000-items strong, of the ingredients, techniques, authors, and section titles included in more than 5,000 European and U.S. cookbooks. – A Database of 5,000 Historical Cookbooks Is Now Online, and You Can Help Improve It, Atlas Obscura The data on this site is super interesting. Whatever is running the site is not great.

The Sifter thesifter.org

a multilingual database, currently 130,000-items strong, of the ingredients, techniques, authors, and section titles included in more than 5,000 European and U.S. cookbooks.

A Database of 5,000 Historical Cookbooks Is Now Online, and You Can Help Improve It, Atlas Obscura

The data on this site is super interesting. Whatever is running the site is not great. It’s some sort of out of the box Microsoft thing including default loading animations.

But! It aims to be a sort of Wikipedia. I’ve signed up to be a contributor and I hope that this can be built on and be licensed for re-use.

I did a search for “Vancouver” and found only one entry, so my used cookbook collection may be able to add a handful more. It does say it only wants pre-1940 cookbooks, but it’s unclear why.

It provides a bird’s-eye view of long-term trends in European and American cuisines, from shifting trade routes and dining habits to culinary fads. Search “cupcakes,” for example, and you’ll find the term may have first popped up in Mrs. Putnam’s Receipt Book And Young Housekeeper’s Assistant, a guide for ladies running middle-class households in the 1850s.

Yes! Super interesting to me. Looking forward to see how this evolves.

(From the Gastro Obscura section of A.O.)


reb00ted

No more misleading language in California

At least be honest about the mischief you are about to do.

At least be honest about the mischief you are about to do.

Tuesday, 13. October 2020

@_Nat Zone

[2020-10-13] The video of the IIF Annual Membership Meeting panel has been published

Apologies for the after-the-fact report. Earlier today, the IIF (Interna… The post [2020-10-13] The video of the IIF Annual Membership Meeting panel has been published first appeared on @_Nat Zone.

Apologies for the after-the-fact report. I have just finished appearing on a panel discussion at the Annual Membership Meeting 2020 of the IIF (International Institute of Finance).

There was a mishap in which the moderator’s voice from Madrid kept cutting out on my end, but I think it went fairly well overall.

The video has also been published, so I am sharing it here.

IIF Annual Membership Meeting – Digital Identity panel

In the accelerated move to a digitalized world, new technology is changing how people are identified and what we know about them, in an environment where trust has to be built differently from physical interactions. As well as presenting opportunities for efficiency and access, this also drives debate about roles, responsibilities, standards, and privacy. These issues will grow as new technologies in data, sensors, 5G connected devices, and biometrics come online with platforms of systemic scale. This session will discuss how technology and market trends are changing the costs, responsibilities, business models, and opportunities in identity and authentication, and how financial services firms can build on compliance capabilities to create new services and revenue streams.

Tuesday, October 13, from 1:00pm London / 2:00pm Madrid / 8:00pm Singapore / 9:00pm Tokyo (55 minutes)

Vivienne Artz, Chief Privacy Officer, Refinitiv
Sopnendu Mohanty, Chief FinTech Officer, Monetary Authority of Singapore
Nat Sakimura, Chairman, OpenID Foundation
Victoria Roig, Head of Transformation Office, Santander (Moderator)

If you are interested in the content, please leave a comment in the comments section. If there is enough interest, I will find the time to write it up on this blog.

The post [2020-10-13] The video of the IIF Annual Membership Meeting panel has been published first appeared on @_Nat Zone.


Nicholas Rempel

How to Publish a Python Package to PyPI: A Comprehensive Guide

In this guide, I'm going to show you how you can easily publish a package to the Python Package Index so that others can install and use your work.

In this guide, I'm going to show you how you can easily publish a package to the Python Package Index (PyPI) so that others can install and use your work. Once the package is available on PyPI, anyone will be able to run pip install to install the package on their machine. This allows people to import your code into their own modules as a library, but you can also release command-line tools via PyPI; I will show you how to do both.

Note: these instructions are Mac and Linux centric. I don't think they would be much different on Windows, but I don't do much development work on that platform so I don't know its nuances.

The Code

The first thing you need is some code to publish! I have created a minimal project that you can use as a starting point. You can find the code here. Let's take a look.

First let's walk through the folder structure:

├── README.md
├── package_boilerplate
│   ├── __init__.py
│   └── importable.py
├── requirements.txt
├── scripts
│   └── boilerplate-cli
├── setup.py
└── version.py

These are all of the files inside of the python package. First, we have a directory named package_boilerplate . This directory contains all of the code for our module. All of the files outside of this directory are metadata for the package itself and are used in the package publishing process, for installing dependencies, and stuff like that. We'll go through the other files shortly.

README.md is markdown formatted text containing details about the package. This content is usually rendered as HTML and made visible in places like GitHub and PyPI.

If we look inside of __init__.py , we see the following:

from colored import fg, bg, attr


def main():
    print(f"{fg('dark_orange')}{attr('bold')}You called me!{attr('reset')}")

We're doing a couple of things here:  first, we import a library called colored . Colored is a tool to colourize your program's terminal output. It's fun!

So basically, this file will print some coloured text to the terminal if main is called. If you continue reading, you will see how we can publish this as a command-line tool through pip !

Next, let's look inside of importable.py :

from colored import fg, bg, attr

print(f"{fg('orchid')}{attr('bold')}You imported me!{attr('reset')}")

Again, we just print some coloured text to the terminal, but this time we can import this code in an external module. The text will be printed as soon as the file is imported, since the code is not contained in a function.

Ok, so that is the logic of our program. Let's continue down the folder structure to see what else is needed to make this publishable.

The next file is requirements.txt . If you've ever written a python module that depends on other external python packages, then you may be familiar with this file. The file is used to tell pip which versions of which modules it should install all at once if you run pip install -r requirements.txt . Python packages are no different; if they rely on dependencies, pip needs to know what to install. We'll reference this file in setup.py .
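
For this boilerplate project, requirements.txt can be as small as a single line, since colored is the only third-party package our code imports. The pin below matches the version that appears in the install output later in this guide; an unpinned colored would work too.

colored==1.3.93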

Next, we have the scripts directory. The folder name doesn't matter and isn't required, but I prefer to keep things separate and organized. Inside of this folder we have a file called boilerplate-cli. Notice how we don't give the file an extension like .py and we used the naming convention boilerplate-cli instead of boilerplate_cli. This is because this is the file that will be installed as a command-line program in the user's PATH. We want the user to be able to invoke the program by running boilerplate-cli and not boilerplate_cli.py, because that is the expected convention for command-line programs.

Inside of this file we have this:

#!/usr/bin/env python

from package_boilerplate import main

main()

The first line is important because we have to tell the user's operating system what kind of code this is, since the file doesn't have an extension. This is called a shebang. Next we import the entrypoint into the program from __init__.py, which we named main, and we run the function.

Calling this function kicks off our program! Right now it only prints text to the terminal, but it could be the entrypoint into a more complicated program. We will reference this file in the setup.py section to make it available as a command-line program for the user.

The most important part of this process is the setup.py file (https://docs.python.org/3.7/distutils/setupscript.html). This tells pip how to bundle the package and what all the settings and dependencies are.

"""Module setup."""

import runpy

from setuptools import setup, find_packages

PACKAGE_NAME = "package-boilerplate"

version_meta = runpy.run_path("./version.py")
VERSION = version_meta["__version__"]

with open("README.md", "r") as fh:
    long_description = fh.read()


def parse_requirements(filename):
    """Load requirements from a pip requirements file."""
    lineiter = (line.strip() for line in open(filename))
    return [line for line in lineiter if line and not line.startswith("#")]


if __name__ == "__main__":
    setup(
        name=PACKAGE_NAME,
        version=VERSION,
        packages=find_packages(),
        install_requires=parse_requirements("requirements.txt"),
        python_requires=">=3.6.3",
        scripts=["scripts/boilerplate-cli"],
        description="This is a description.",
        long_description=long_description,
        long_description_content_type="text/markdown",
    )

First we import a few utilities called runpy and setuptools . runpy is built into python, but setuptools needs to be installed which we will go over in the next section.

Next, we get the python metadata out of the file version.py and obtain the current version number VERSION . This file just contains the current version number of the package you are publishing. We'll go over why it's split into its own file in the next section.

Next, long_description is pulled out of the content from README.md . Using the content from the readme means that this description only needs to live in one place.

Then, I've added a helper function called parse_requirements. The python setup.py format expects the install_requires property to contain a list of dependencies that this package relies on. Since it's burdensome to maintain a requirements.txt file with your dependencies as well as the same dependencies in a list here, this parse_requirements function simply imports the dependencies from requirements.txt so that you only need to maintain them in one place.

Finally, we call setup with some values to define the package. The packages=find_packages() argument discovers the package_boilerplate directory automatically. Name and version are used in the PyPI directory listing and are also what people reference when they want to install the package. install_requires pulls in the list of requirements from the requirements.txt file. python_requires=">=3.6.3" is the minimum version of Python the user must have installed. And finally, scripts is where you specify the command-line program that should be installed for the user.

There are other values that you should set as well such as description and author . You can read more about the setup values available here .

The last file, version.py, is quite small. This file is used for two things: first, it is imported in the setup.py module above to use as the package version; and second, the version value is stored in the __version__ attribute. This is recommended so that the package version can be discovered programmatically.
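
For reference, here is roughly what version.py boils down to: a single attribute that setup.py reads via runpy and that your own code can import. The version string matches the 1.0.0 used in the build output below.

"""Package version."""

__version__ = "1.0.0"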

A tip: as you are working, you can install your package locally before you even publish it. This way, you will know that it works before submitting it to PyPI. To install the package locally, you can run pip install -e . in the package directory.

Publishing Your Package

So you've written your package and you want to publish it! We're almost there. There are just a few more steps to publish your package.

The first thing you will want to do is install a tool called twine by running pip install twine. Twine is a tool for publishing Python packages on PyPI. Then, we also need to install setuptools since we are using it in our setup script: pip install setuptools.

Now, we need to create a distribution package that we will publish. To do this, run python setup.py sdist bdist_wheel, which will create build and dist directories.

Once the build has completed, we can check for any errors or warnings using twine . Run twine check dist/*  to check the distribution package for errors. If all goes well, you should see this:

Checking distribution dist/package_boilerplate-1.0.0-py3-none-any.whl: Passed
Checking distribution dist/package-boilerplate-1.0.0.tar.gz: Passed

Once this is done, we want to publish our package to the PyPI test site. We can publish to test.pypi.org before we publish to the real index to make sure everything looks good! To do this, we can instruct twine to use a different repository like so: twine upload --repository-url https://test.pypi.org/legacy/ dist/*

If we run this now, we will see the following:

Enter your username:

Looks like we need to create an account! So head over to the registration page for the PyPI test site and create an account. If you run the command again and enter your username and password hopefully you should see:

Uploading distributions to https://test.pypi.org/legacy/
Uploading package_boilerplate-1.0.0-py3-none-any.whl
100%|█████████████████████████| 4.95k/4.95k [00:00<00:00, 37.2kB/s]
Uploading package-boilerplate-1.0.0.tar.gz
100%|█████████████████████████| 4.11k/4.11k [00:01<00:00, 3.76kB/s]

If you do, congratulations! You have successfully published your package!

Note: make sure your package name is unique!

All that remains is to publish your package to the official pypi.org . To do this, you just need to create another account there and publish your package without specifying a different repository like so: twine upload dist/* .
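
One optional convenience that the steps above don't require: twine can read repository settings from a ~/.pypirc file, so you don't have to retype the test repository URL every time. A sketch, with a placeholder username:

[distutils]
index-servers =
    pypi
    testpypi

[pypi]
username = your-username

[testpypi]
repository = https://test.pypi.org/legacy/
username = your-username

With this in place, twine upload --repository testpypi dist/* targets the test index, while twine upload dist/* continues to target the official one.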

Installing Your Package

Now that your package is published, I bet you want to install it to see how it works! Let's install our package from the PyPI test index: pip install --index-url https://test.pypi.org/simple/ package-boilerplate. In the future, if we don't provide the --index-url parameter, then we would be requesting a package from the official index. If all goes well, you should see this:

Looking in indexes: https://test.pypi.org/simple/
Collecting package-boilerplate
Downloading https://test-files.pythonhosted.org/packages/10/63/208bbd4ea4427f5eb0e4e6248973b387f1dec4ed60333c4daf416c310903/package_boilerplate-1.0.0-py3-none-any.whl
Requirement already satisfied: colored==1.3.93 in /usr/local/lib/python3.7/site-packages (from package-boilerplate) (1.3.93)
Installing collected packages: package-boilerplate
Successfully installed package-boilerplate-1.0.0

If so, then the package is installed! So let's use it. Remember that command-line program we created? Well now we should be able to run it from anywhere by invoking the name of the script:

$ boilerplate-cli

And the text is orange like we wanted! What if we wanted to import the module in a different project? Let's try:

$ python
Python 3.7.3 (default, Mar 27 2019, 09:23:15)
[Clang 10.0.1 (clang-1001.0.46.3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from package_boilerplate import importable
You imported me!
>>>

It works! And we even get the coloured text.

Once you publish your package to the official PyPI index, you will be able to install the package by running pip install package-boilerplate . Of course, you would want to choose a different name for the package throughout the process and use that here.

I hope this guide is helpful. If something doesn't make sense, please let me know.

Friday, 20. April 2018

Transparent Health Blog

Why Transparent Health Was Created

Over the past few years, I've begun or contributed to a number of open source projects.  These have mostly been centered around health technology while supporting the United States federal government agencies such as CMS, HHS, and NIST. TransparentHealth.org and https://github.com/TransparentHealth were first created as a place to consolidate these open source projects into one place.
Over the past few years, I've begun or contributed to a number of open source projects.  These have mostly been centered around health technology while supporting the United States federal government agencies such as CMS, HHS, and NIST.

TransparentHealth.org and https://github.com/TransparentHealth were first created as a place to consolidate these open source projects into one place.  Other people seemed to like the idea.  Not only is open source software a good idea, but so is the adoption of open standards.  We made facilitating the adoption of standards a priority as well.

To ensure the long-term maintenance of these projects and goals, it became increasingly apparent that a software foundation was needed.


Hence Transparent Health was created. In April of 2018, Transparent Health became a non-profit organization.  The directors of the organization are Mark Scrimshire and myself.  Other people are helping out too. We are just getting started!

We encourage your code contributions to our current projects. We also provide a good home for select open source projects. We curate included projects based on their relevance to patient access, openness, and the maturity of the project.

There are a number of reasons why you or your organization might consider contributing code.
Here are a few of those reasons:

You or your organization wants to amplify impact by making open source contributions more visible and in a shared location.
You or your organization would prefer to manage the repository outside of existing accounts. (There are a lot of situations where this makes sense, but I won't go into that here.) You and your organization can still maintain and manage your repository.
You're concerned that a useful repository may be removed from public view and you want to make a copy.
If you would like to discuss your project's inclusion, please send us an email.


Let's all work together to improve the patient experience through sharing, openness and transparency.

Alan Viars, Director Transparent Health


Boris Mann's Blog

Ride out to our usual bench at Burnaby Lake was pretty cold. Got to get gloves and other gear sorted to keep #biking. The new Rad Mini in white has slightly smaller wheels, and of course no extended rear rack.

Ride out to our usual bench at Burnaby Lake was pretty cold. Got to get gloves and other gear sorted to keep #biking.

The new Rad Mini in white has slightly smaller wheels, and of course no extended rear rack.


Pulled out some old achiote and made a turkey thigh stew plus did a trial run of savoury waffles I’m going to serve tomorrow.

Pulled out some old achiote and made a turkey thigh stew plus did a trial run of savoury waffles I’m going to serve tomorrow.

Sunday, 11. October 2020

Boris Mann's Blog

Had these “screw buns” from Dinesty Dumpling House last night, which came with sweetened condensed milk for dipping. So pretty, so tasty!

Had these “screw buns” from Dinesty Dumpling House last night, which came with sweetened condensed milk for dipping. So pretty, so tasty!

Saturday, 10. October 2020

Boris Mann's Blog

I wrote a blog post about joining Social.Coop. It’s a co-op that runs “user-controlled social media” in the form of a Mastodon server. If you’re on Mastodon, please let me know, I’d like to follow you there!

I wrote a blog post about joining Social.Coop. It’s a co-op that runs “user-controlled social media” in the form of a Mastodon server. If you’re on Mastodon, please let me know, I’d like to follow you there!


Jon Udell

Learning analytics for annotated courses

When teachers and students use Hypothesis annotation to explore course readings, certain benefits are immediately obvious. When students find one another in the margins of documents they tell us that the experience of reading is less lonely, which matters now more than ever. When teachers and students discuss key passages, their conversation — linked directly … Continue reading Learning analytics f

When teachers and students use Hypothesis annotation to explore course readings, certain benefits are immediately obvious. When students find one another in the margins of documents they tell us that the experience of reading is less lonely, which matters now more than ever. When teachers and students discuss key passages, their conversation — linked directly to highlights, displayed in the margin — is more focused than in conventional forums.

This experience of social reading can be delivered as an assignment in a learning management system (LMS). Students are expected to read a chapter of a book and make a prescribed number of substantive annotations; teachers grade the exercise.

Or it can happen less formally. A teacher, who may or may not operate in the context of an LMS, can use annotation to open a window into students’ minds, whether or not their annotations influence their grades.

All these scenarios produce data. The annotated corpus has, at its core, a set of documents. Annotation data explicitly records highlighted passages in documents along with the notes linked to those highlights. It also records conversations that flow from those annotated highlights.

Teachers and students, as well as authors and publishers of course readings, can all learn useful things from this data. Teachers and students may want to know who has participated and how often, which are the most highlighted passages across all documents in the course, or which highlights have attracted the longest conversation threads. Authors and publishers will also want to know which passages have attracted the most highlights and discussion.

The annotation data can also support deeper analysis. How often do teachers or students ask questions? How often are questions answered, or not? How often do teachers ask questions answered by students, students ask questions answered by teachers, or students ask questions answered by students?

Analysis of questions and answers

We are providing these views now, on an experimental basis, in order to explore what learning analytics can become in the realm of annotated course reading. Surfacing questions asked by students and answered by students, for example, could be a way to measure the sort of peer interaction that research suggests can improve engagement and outcomes.

Of course the devil is always in the details. Our initial naive approach looks for question marks in annotations, then correlates responses to those annotations. This is convenient because it works with natural discourse, but imprecise because questions are often rhetorical.
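
As a rough illustration of that naive approach, here is a sketch in Python. The annotation fields (id, user, text, and references for thread parents) are assumptions made for the example rather than the exact Hypothesis data model.

def is_question(annotation):
    # Naive heuristic: treat any annotation containing a question mark as a question.
    return "?" in annotation["text"]


def question_answer_pairs(annotations):
    # Correlate replies with the question annotations they respond to.
    by_id = {a["id"]: a for a in annotations}
    pairs = []
    for annotation in annotations:
        for parent_id in annotation.get("references", []):
            parent = by_id.get(parent_id)
            if parent and is_question(parent):
                pairs.append((parent, annotation))
    return pairs


annotations = [
    {"id": "1", "user": "teacher", "text": "Why does the author claim this?", "references": []},
    {"id": "2", "user": "student", "text": "Because of the example in section 2.", "references": ["1"]},
]

for question, answer in question_answer_pairs(annotations):
    print(f"{answer['user']} answered {question['user']}: {answer['text']}")

A real version would also need a way to handle rhetorical questions, which is exactly the imprecision noted above.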

A less convenient but more precise approach would require participants to signal intent using an inline hashtag, or a formal tag on the annotation.

A convenient and precise approach would bake affordances into the annotation tool. For example, there might be reaction icons to say things like:

– I’m asking a question that expects an answer

– I’m confused on this point

– I’m highlighting an example of a rhetorical device

In my view we can’t know, a priori, what those baked-in affordances should be. We’ll need to discover them in collaboration with teachers who are using Hypothesis in their courses, and who are willing and able to explore these ideas before they can be fully codified in software.

If you’re a teacher using Hypothesis actively this term, and you’d like to participate in this research, please let us know. We’ll invite you to try our prototype analytics system and help us evolve it.


Just a Theory

George Washington Bridge Pier

Photo of the Manhattan pier of the George Washington Bridge.

Manhattan pier of the George Washington Bridge. © 2020 David E. Wheeler

View of the Manhattan pier of the George Washington Bridge, Taken on 20 September, 2020 with an iPhone Xs.

More about… New York City, George Washington Bridge, Pier, Photo, Shrubbery, Clouds

Boris Mann's Blog

Check out @rachaelashe’s work at The Art Shop upcoming pop up “Wood Paper Scissors: Raw Materials in Art”. Opening is Oct 23rd, runs until Nov 3rd. The space is on East Hastings, next to Strathcona Brewing and Prototype Coffee.

Check out @rachaelashe’s work at The Art Shop upcoming pop up “Wood Paper Scissors: Raw Materials in Art”. Opening is Oct 23rd, runs until Nov 3rd. The space is on East Hastings, next to Strathcona Brewing and Prototype Coffee.

Friday, 09. October 2020

Boris Mann's Blog

Our new Rad Mini got delivered today! It looks cute and small compared to my Rad Runner, plus a few upgrades: it has gears and a bike computer screen, and an improved front light vs the Runner. This will primarily be Rachael’s bike, but I’ll borrow it on occasion ;)

Our new Rad Mini got delivered today! It looks cute and small compared to my Rad Runner, plus a few upgrades: it has gears and a bike computer screen, and an improved front light vs the Runner.

This will primarily be Rachael’s bike, but I’ll borrow it on occasion ;)


@_Nat Zone

[2020-10-09] Information Security Workshop Echigo-Yuzawa – The “New Normal” of Identity Verification

Apologies for the after-the-fact report. At the Information Security Workshop Echigo-Yuz… The post [2020-10-09] Information Security Workshop Echigo-Yuzawa – The “New Normal” of Identity Verification first appeared on @_Nat Zone.

Apologies for the after-the-fact report.

At the Information Security Workshop Echigo-Yuzawa, I gave a talk titled “The ‘New Normal’ of Identity Verification.” The video editing, done with mmhmm, seems to have been quite well received, and it looks like a few of the other speakers are now fired up to try something similar next time.

One mistake I made during the talk was reading the name of 七十七銀行 (The 77 Bank) as “nanajūnana ginkō.” I am very sorry about that. Please forgive me, as my Japanese is not great (_o_)

As for the content, I will introduce it separately on this page, so stay tuned!

The post [2020-10-09] Information Security Workshop Echigo-Yuzawa – The “New Normal” of Identity Verification first appeared on @_Nat Zone.


Boris Mann's Blog

I have a cute little like green Bodum charcoal BBQ that we can take to the park at the end of the street. But charcoal is a bit of an ordeal so doesn’t get used often. What about a Scottish made gas pizza oven??? The Ooni: youtu.be/7kzenzsML… #cooking

I have a cute little like green Bodum charcoal BBQ that we can take to the park at the end of the street. But charcoal is a bit of an ordeal so doesn’t get used often. What about a Scottish made gas pizza oven???

The Ooni: youtu.be/7kzenzsML…

#cooking

Thursday, 08. October 2020

Nicholas Rempel

How to Create a Docker Development Environment

This guide will show you how to set up a docker development environment which will enable you and new developers to get up and running in minutes with even the most complex system.

If you’ve ever worked on a large piece of software, I’m sure you’ve endured the pain of setting up a complex development environment. Installing and configuring a database, message broker, web server, worker processes, local smtp server, (and who knows what else!) is time consuming for every developer starting on a project. This guide will show you how to set up a docker development environment which will enable you and new developers to get up and running in minutes with even the most complex system. This will make your life much easier in the long run and get new developers up and running on the project much more quickly.

In this guide, we’ll be using Docker Community Edition and Docker Compose . You may want to read up on these tools a bit before proceeding.

The code for the guide is available here .

Installing Docker

Grab Docker for your operating system here . Docker is available for all modern operating systems. For most users, this will also include Docker Compose. Once installed, keep Docker running in the background to use Docker commands!

Dockerfile Example

Your Dockerfile is the blueprint for your container. You'll want to use your Dockerfile to create your desired environment. This includes installing any language runtimes you might need and installing any dependencies your project relies on. Luckily, most languages have a base image that you can inherit. We'll dig into this further with the Dockerfile example below.

Your Dockerfile doesn’t need to include any instructions for installing a database, cache server, or other tools. Each container should be built around a single process. Other processes would normally be defined in other Dockerfiles but you don’t even need to worry about that; in this example, we use 3 readymade containers for our databases and message broker.

Dockerfile

# Inherit from node base image
FROM node

# This is an alternative to mounting our source code as a volume.
# ADD . /app

# Install Yarn repository
RUN curl -sS http://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb http://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list

# Install OS dependencies
RUN apt-get update
RUN apt-get install yarn

# Install Node dependencies
RUN yarn install

The Dockerfile above does a couple of things: first, we inherit from the node base image. This means that it includes the instructions from that image's Dockerfile (including whatever base image it inherits from). Second, I install the Yarn package manager since I prefer it over the default NodeJs package manager. Note that while my preferred language here is NodeJs, this guide is language independent. Set up your container for whatever language runtime you prefer to work in.

Give it a try and run docker-compose build and see what happens.

Docker Compose Example

A few sections ago, I mentioned Docker Compose, which is a tool to declaratively define your container formation. This means that you can define multiple different process types which all run concurrently in different containers and communicate with one another over the network. Docker makes exposing interfaces between containers easier by using what they call links. The beauty here is that it's as simple as working with multiple processes on a single machine, but you can be sure that there are no tightly coupled components that might not work in a production environment!

docker-compose.yml

version: '3'

services:

  ###############################
  # Built from local Dockerfile #
  ###############################

  web:
    # Build the Dockerfile in this directory.
    build: .
    # Mount this directory as a volume at /app
    volumes:
      - '.:/app'
    # Make all commands relative to our application directory
    working_dir: /app
    # The process that runs in the container.
    # Remember, a container runs only ONE process.
    command: 'node server.js'
    # Set some environment variables to be used in the application
    environment:
      PORT: 8080
      # Notice the hostname postgres.
      # This is made available via container links.
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
      RABBIT_URL: 'amqp://rabbitmq'
    # Make the port available on the host machine
    # so that we can navigate there with our web browser.
    ports:
      - '8080:8080'
    # Link this container to other containers to create
    # a network interface.
    links:
      - postgres
      - redis
      - rabbitmq

  clock:
    build: .
    volumes:
      - '.:/app'
    working_dir: /app
    command: 'node clock.js'
    environment:
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
      RABBIT_URL: 'amqp://rabbitmq'
    links:
      - postgres
      - redis
      - rabbitmq

  worker:
    build: .
    volumes:
      - '.:/app'
    working_dir: /app
    command: 'node worker.js'
    environment:
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
      RABBIT_URL: 'amqp://rabbitmq'
    links:
      - postgres
      - redis
      - rabbitmq

  shell:
    build: .
    volumes:
      - '.:/app'
    working_dir: /app
    command: bash
    environment:
      DATABASE_URL: 'postgres://postgres:@postgres:5432/postgres'
      REDIS_URL: 'redis://redis:6379'
    ports:
      - '8080:8080'
    links:
      - postgres
      - redis
      - rabbitmq

  ############################
  # Built from remote images #
  ############################

  postgres:
    # Image name
    image: postgres
    # Expose the port on your local machine.
    # This is not needed to link containers.
    # BUT, it is handy for connecting to your
    # database with something like DataGrip from
    # your local host machine.
    ports:
      - '5432:5432'

  rabbitmq:
    image: rabbitmq
    ports:
      - '5672:5672'

  redis:
    image: redis
    ports:
      - '6379:6379'

Let’s walk through this example:

We have 7 different containers in our formation: web, clock, worker, shell, postgres, rabbitmq, and redis. That’s a lot! In a production environment, these processes might each run on separate physical servers; or, the processes all might run on a single machine.

Notice how the web, clock, worker, and shell containers are all built from the current directory. So each of those 4 processes runs in a container built from the Dockerfile we defined above. The postgres, rabbitmq, and redis containers, on the other hand, are built from prebuilt images found on the Docker Store. Building containers for these tools from images is much quicker than installing each of the tools on your local machine.

Take a look at the volumes key. Here, we mount our current directory at /app. Then the working_dir key indicates that all commands will be run relative to this directory.

Ok. Now, take a look at the links key present on the locally built containers. This exposes a network interface between this container and the containers listed. Notice how we use the name of the link as the hostname in our environment variables. In this example, we link the containers and then we expose the URI for each of our linked services as environment variables.
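If you want to convince yourself that the link hostnames really do resolve, a quick sketch (assuming getent is available in the Debian-based node image, which it normally is) is to look them up from inside a one-off container:

# Resolve the linked service hostnames from inside the app container
docker-compose run shell getent hosts postgres redis rabbitmq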

Try running one of the services: run the command docker-compose up web.
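You can also bring up the whole formation at once; for example (standard docker-compose commands, nothing specific to this project):

# Run every service in the background, then follow the app logs
docker-compose up -d
docker-compose logs -f web worker clock

# Tear everything down again when you're finished
docker-compose down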

Write your application code

Ok, our server architecture includes 3 process types that run your application code; we have our web process that is responsible for serving web requests and pushing work to a job queue; we have our worker process that is responsible for pulling jobs off the queue and doing the work; and we have our clock process, which is effectively a cron runner that pushes work onto our job queue.

Our architecture also includes 3 other services that you commonly see in web server architecture: a Postgres database, a Redis datastore, and a RabbitMQ message broker.

Here’s a minimal implementation of the 3 aforementioned processes that also showcases the usage of our 3 data backends:

clock.js

const SimpleCron = require("simple-cron");
const cron = new SimpleCron();
const amqp = require("amqplib/callback_api");

cron.schedule("* * * * *", () => {
  amqp.connect(process.env.RABBIT_URL, (err, conn) => {
    conn.createChannel((err, ch) => {
      const q = "clock";
      ch.assertQueue(q, { durable: false });
      ch.sendToQueue(q, Buffer.from("hi."));
    });
    console.log("Queuing new job!");
  });
});

cron.run();

server.js

const express = require("express");
const pg = require("pg");
const redis = require("redis");
const amqp = require("amqplib/callback_api");

const app = express();

app.get("/", (req, res) => {
  res.send("Hello World!");
});

// Test Postgres connection
app.get("/postgres/:blurb", (req, res) => {
  const ip = req.connection.remoteAddress;
  const db = new pg.Pool({ connectionString: process.env.DATABASE_URL });
  db.connect((err, client, done) => {
    client.query(
      'create table if not exists "blurbs" ("id" serial primary key, "text" varchar(255))',
      (err, result) => {
        client.query(
          'insert into "blurbs" ("text") values ($1)',
          [req.params.blurb],
          (err, result) => {
            client.query('select * from "blurbs"', (err, result) => {
              const blurbs = result.rows.map(o => o.text);
              res.send(`List of blurbs:\n${blurbs.join(" ")}`);
              client.end();
              done();
            });
          }
        );
      }
    );
  });
});

// Test Redis connection
app.get("/redis", (req, res) => {
  const client = redis.createClient(process.env.REDIS_URL);
  client.incr("count", (err, reply) => {
    res.send(`Request count: ${reply}`);
  });
});

// Test RabbitMQ connection
app.get("/rabbit/:msg", (req, res) => {
  amqp.connect(process.env.RABBIT_URL, (err, conn) => {
    conn.createChannel((err, ch) => {
      const q = "web";
      ch.assertQueue(q, { durable: false });
      ch.sendToQueue(q, Buffer.from(req.params.msg));
    });
    res.send("Message sent to worker process; check your terminal!");
  });
});

app.listen(process.env.PORT, () => {
  console.log(`Example app listening on port ${process.env.PORT}!`);
});

worker.js

const amqp = require("amqplib/callback_api");

amqp.connect(process.env.RABBIT_URL, (err, conn) => {
  conn.createChannel((err, ch) => {
    // Consume messages from web queue
    var q1 = "web";
    ch.assertQueue(q1, { durable: false });
    ch.consume(
      q1,
      msg => {
        console.info(
          "Message received from web process:",
          msg.content.toString()
        );
      },
      { noAck: true }
    );

    // Consume messages from clock queue
    var q2 = "clock";
    ch.assertQueue(q2, { durable: false });
    ch.consume(
      q2,
      msg => {
        console.info(
          "Message received from clock process:",
          msg.content.toString()
        );
      },
      { noAck: true }
    );
  });
});

There are example endpoints for each of the different components of our architecture. Visiting /postgres/:something will insert something into the Postgres database and render a view containing all of the contents. Visiting /redis will count the number of visits to that page and display the count. Visiting /rabbit/:msg will send a message to the worker process and you can check the terminal logs to see the message. The clock process will also run continuously and send a message to the worker process once every minute. Not bad for a one-minute setup!
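To exercise the endpoints, something like the following should work once the stack is running (the blurb and message values here are arbitrary examples, and port 8080 comes from the docker-compose.yml above):

curl http://localhost:8080/                  # Hello World!
curl http://localhost:8080/postgres/hello    # inserts "hello" and lists all blurbs
curl http://localhost:8080/redis             # increments and returns the visit count
curl http://localhost:8080/rabbit/greetings  # queues "greetings" for the worker; watch its logs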

Pull it all together with a bash script

I like to write a simple script so I don’t have to memorize as many commands:

manage.sh

#!/bin/bash
set -e

SCRIPT_HOME="$( cd "$( dirname "$0" )" && pwd )"
cd $SCRIPT_HOME

case "$1" in
  start)
    docker-compose up web worker clock
    ;;
  stop)
    docker-compose stop
    ;;
  build)
    docker-compose build
    ;;
  rebuild)
    docker-compose build --no-cache
    ;;
  run)
    if [ "$#" -lt "2" ]
    then
      echo $"Usage: $0 $1 <command>"
      RETVAL=1
    else
      shift
      docker-compose run shell "$@"
    fi
    ;;
  shell)
    docker-compose run shell
    ;;
  *)
    echo $"Usage: $0 {start|stop|build|rebuild|run}"
    RETVAL=1
esac

cd - > /dev/null

Done! Now, we don’t need to worry about remembering docker-compose commands. To run our entire server stack we now simply run ./manage.sh start. If we need to build our containers again because we changed our Dockerfile or we need to install new dependencies, we can run ./manage.sh build.

Our shell container exists so that we can shell into our container or run one-off commands in the context of our container. Using the script above, you can run ./manage.sh shell to start a terminal session in the container. If you want to run a single command in your container, you can use ./manage.sh run <command>.
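Putting it all together, a typical session with the script might look like this (the one-off command is just an example; anything available in the container will do):

./manage.sh build                # build the application image
./manage.sh start                # run the web, worker, and clock processes
./manage.sh run node --version   # run a one-off command in the shell container
./manage.sh shell                # open an interactive bash session
./manage.sh stop                 # stop all running containers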

If you’re familiar with the difficulty caused by complex development environments running on your local machine, then investigating a Docker-powered development environment could save you time. There is a bit of setup involved, but the productivity gained in the long term by using a tool like Docker pays for itself.

Wednesday, 07. October 2020

Bill Wendel's Real Estate Cafe

Agents debate “Is this a housing boom, housing bubble, or what?”

When Ralph Nader delivered his first speech on real estate 28 years ago in Middlebury, CT he said the best way to learn about an… The post Agents debate "Is this a housing boom, housing bubble, or what?" first appeared on Real Estate Cafe.

When Ralph Nader delivered his first speech on real estate 28 years ago in Middlebury, CT he said the best way to learn about an…

The post Agents debate "Is this a housing boom, housing bubble, or what?" first appeared on Real Estate Cafe.

Tuesday, 06. October 2020

Just a Theory

Blockhouse

A photo of The Blockhouse, a small fort in Central Park, New York City.

The Blockhouse. © 2020 David E. Wheeler

The Blockhouse is a small fort in the North Woods of Central Park, constructed as one of a series of fortifications in northern Manhattan. Photo taken on 6 September 2020 with an iPhone Xs.

More about… New York City Central Park Blockhouse

Monday, 05. October 2020

Virtual Democracy

Toxic cultures plague the academy

Remember that decisions that don’t get made by the people who are supposed to make them get made anyhow by the people who need them. Even the decision not to decide today is made by someone. When decisions are guided by the values and vision of the organization, when the process is transparent, when the conflicts appear on the surface, when failure is just another chance at success, and when leader
Remember that decisions that don’t get made by the people who are supposed to make them get made anyhow by the people who need them. Even the decision not to decide today is made by someone. When decisions are guided by the values and vision of the organization, when the process is transparent, when the conflicts appear on the surface, when failure is just another chance at success, and when leadership opens up in front of those who have proven their worth: that is when institutional guilt has no purchase on the logic of your organization.

Just a Theory

Harlem Hawk

An encounter with a hawk and a squirrel in St. Nicholas Park, Harlem.

Three months into the Covid-19 Pandemic, I had barely left the apartment. But once summer humidity splashed into our little apartment — and it became clear that outdoor spread is almost nonexistent — I started taking daily walks. I quickly expanded my range, delighted to find that one can walk from the south end of Central Park at 59th Street to the northernmost tip of Manhattan almost entirely in parks. It’s really quite stunning, and there’s so much to take in: architecture, views, rivers, flowers and trees, wildlife — the works. Those of you who follow my IG know.

On my jaunt through St. Nicholas Park yesterday, a branch shook vigorously ahead, alerting me to an unusual presence. As I approached, this keen stare greeted me from a mere 5-8m away.

⧉ Hello there, New Yorker. © 2020 David E. Wheeler

I don’t know what I expected to see, but it wasn’t this! I’ve heard that red-tailed hawks1 live in the trees or buildings of the nearby City College of New York, but I never saw one I could recognize, and certainly not this close. Turns out, red-tailed hawks are quite common residents of New York City, committed to pest control and delighting residents and visitors alike. Myself included.

This one, however, paid little attention to me. Rather, it seemed quite curious about this black squirrel running up a tree between us, about a meter away.

⧉ “Be cool, be cool…” © 2020 David E. Wheeler

⧉ I mean look at the cock of the head! © 2020 David E. Wheeler

⧉ Watching the squirrel, watching me. © 2020 David E. Wheeler

The squirrel kept running a loop up and down the tree. It would disappear from sight (mine, not the hawk’s), then reappear further down the trunk and scamper up again. The hawk seemed curious, amused, then, perhaps, bored.

⧉ Tracking the squirrel down the tree. © 2020 David E. Wheeler

⧉ “Where do you think you’re going, rodent?” © 2020 David E. Wheeler

⧉ Around and around and around. © 2020 David E. Wheeler

⧉ Zoooooom! © 2020 David E. Wheeler

Eventually I put my phone away and continued my walk, but kept thinking about this vignette. What story could one tell? Was the squirrel trying to protect its home by distracting the hawk? Was the hawk already well-sated, and now committed to satisfying its intellectual curiosity with a little naturalistic observation? Maybe the hawk and the squirrel were friends and neighbors, happy to enjoy a bit of camaraderie on a beautiful fall day in The City.

When I circled back an hour or so later, the hawk had moved across the path, and now was poking around in the ground cover. It saw me watching. Some ethnography, perhaps?

Naw, it kept picking something up and shaking its head; a dragonfly or cricket I think, but couldn’t get close enough to tell. Could be it was hungry after all.

⧉ Poking among the ground cover, looking for a snack? © 2020 David E. Wheeler

Either way, it made my day. Man I love this city.

At least I think this is a red-tailed hawk. Though I see many bird watchers on my city schlepps, I myself am not one. Please do give me a holler if you happen to know just how mistaken I am. ↩︎

More about… Harlem Hawk Red-Tailed Hawk St. Nicholas Park Photography Nervous Squirrel

Biden on the Green New Deal

The Green New Deal may not be Joe Biden's climate plan, but you better believe he knows it deeply and could dive deep into the details.

This exchange from the first presidential debate a few days ago really struck me (from the Rev transcript):

President Donald J. Trump: (57:56) So why didn’t you get the world… China sends up real dirt into the air. Russia does. India does. They all do. We’re supposed to be good. And by the way, he made a couple of statements. The Green New Deal is a hundred trillion dollars.
Vice President Joe Biden: (58:08) That is not my plan [crosstalk]. The Green New Deal [crosstalk] is not my plan. [crosstalk]—

A hundred trillion dollars? As David Roberts of Vox points out, “US GDP is $21.44 trillion.” But I digress.

A bewildering back and forth followed (something about insulting the military), before moderator Chris Wallace managed to right the ship:

Chris Wallace: (58:53) The Green New Deal and the idea of what your environmental changes will do—
Vice President Joe Biden: (58:57) The Green New Deal will pay for itself as we move forward. We’re not going to build plants that, in fact, are great polluting plants—

This impressed the hell out of me. Shortly after saying the GND isn’t his plan, Biden starts to get into its policy details to defend it? Wow. I mean, he may not agree with it all, but to respond with, “okay, so you wanna talk about the Green New Deal? I’ve got all the details, let’s go!” Props to that level of policy engagement.

But listening again just now, I realize that I missed the next bit:

Chris Wallace: (59:05) So, do you support the Green New Deal?
Vice President Joe Biden: (59:07) Pardon me?
Chris Wallace: (59:08) Do you support the—
Vice President Joe Biden: (59:08) No, I don’t support the Green New Deal.
President Donald J. Trump: (59:10) Oh, you don’t? Oh, well, that’s a big statement.
Vice President Joe Biden: (59:12) I support [crosstalk]—
President Donald J. Trump: (59:13) You just lost the radical left.
Vice President Joe Biden: (59:15) I support [crosstalk] the Biden plan that I put forward.
Chris Wallace: (59:19) Okay.
Vice President Joe Biden: (59:19) The Biden plan, which is different than what he calls the radical Green New Deal.

He explicitly says that the GND is not his plan and he doesn’t support it. When he said, “The Green New Deal will pay for itself as we move forward,” did he mean to say “The Biden Plan”? Digging a little deeper, I don’t think so. From the actual Biden Plan:

Biden believes the Green New Deal is a crucial framework for meeting the climate challenges we face. It powerfully captures two basic truths, which are at the core of his plan: (1) the United States urgently needs to embrace greater ambition on an epic scale to meet the scope of this challenge, and (2) our environment and our economy are completely and totally connected.

So there it is. The GND may not be his plan, but it deeply informs his plan, and I’ve little doubt he could expound on it. GND champion Alexandria Ocasio-Cortez eliminates any doubt in this clap-back to a snarky tweet by Kellyanne Conway:

This isn’t news, Kellyanne.

Our differences are exactly why I joined Biden’s Climate Unity Task Force - so we could set aside our differences & figure out an aggressive climate plan to address the planetary crisis at our feet.

Trump doesn’t even believe climate change is real.

Fantastic! Let’s do this thing.

More about… Politics Joe Biden Green New Deal Debate

Sunday, 04. October 2020

The Dingle Group

Digital Identity in Education

On Monday, September 28 the 14th Vienna Digital Identity Meetup* (link) hosted a focused session on digital identifiers and verifiable credentials in education. We have two great updates from Kim Hamilton Duffy (Architect Digital Credentials Consortium, Chair of the W3C CCG and Verifiable Credentials for Education Task Force) and Lluis Arińo (convenor of Diplomas Use Case at European Blockchai

On Monday, September 28 the 14th Vienna Digital Identity Meetup* (link) hosted a focused session on digital identifiers and verifiable credentials in education. We have two great updates from Kim Hamilton Duffy (Architect Digital Credentials Consortium, Chair of the W3C CCG and Verifiable Credentials for Education Task Force) and Lluis Ariño (convenor of Diplomas Use Case at European Blockchain Service Infrastructure and CIO Rovira i Virgili University, Spain).

Kim gave a great update on what the W3C Verifiable Credentials in Education Task Force is working on, which educational institutions are currently participating in the Task Force, and its objectives. The presentation included some great questions from the event participants, as well as highlighting that the Task Force is interested in the participation of institutions, organizations, or individuals from the Asia region.

Lluis provided an overview of the goals and current projects (Notarisation of Documents, ESSIF, Diplomas management and Trusted Sharing Data) of EBSI (European Blockchain Services Infrastructure). Lluis then gave a detailed overview of the Diploma Use Case, covering architectural elements, the centrality of verifiable credentials to the Use Case, and the paradigm shift from institution centricity to user centricity, making EBSI the common underlying building block for lifecycle management of diplomas.

Recording of the Event: https://vimeo.com/464715275

0:00:00 - Introductions
0:06:00 - Kim Hamilton Duffy - Verifiable Education Task Force
0:48:31 - Lluis Ariño - EBSI Diploma Use Case
1:32:21 - Upcoming schedule

For more information on Education Task Force: https://w3c-ccg.github.io/vc-ed/

For more information on EBSI: https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/Piloting+with+EBSI+explained


Finally we covered the upcoming Vienna Digital identity Events:

- October 26 - Bridging to SSI
- November 9 - High Assurance Digital Identity (HADI) and Pharma
- November 23 - Guardianship
- December 14 - GADI


And as a reminder, due to increased COVID-19 infections we are back to online only events. Hopefully we will be back to in person and online soon!

If interested in getting notifications of upcoming events please join the event group at: https://www.meetup.com/Vienna-Digital-Identity-Meetup/

*Vienna Digital Identity Meetup is hosted by The Dingle Group and is focused on educating business, societal, legal and technologists on the value that a high assurance digital identity creates by reducing risk and strengthening provenance. We meet on the 4th Monday of every month, in person (when permitted) in Vienna and online on Zoom. Connecting and educating across borders and oceans.

Friday, 02. October 2020

Ludo Sketches

Directory Services Containerization Webinar

If you’ve missed the webinar I did this week, with Steve Giovannetti, CTO and Co-Founder of Hub City Media, don’t worry: the webinar is available for replay on demand. In the last part of the webinar, we’ve been trying to… Continue reading →

If you’ve missed the webinar I did this week, with Steve Giovannetti, CTO and Co-Founder of Hub City Media, don’t worry: the webinar is available for replay on demand. In the last part of the webinar, we’ve been trying to address many of the questions we’ve received. But if you have some additional questions, I’d be happy to answer them. Just send them to me.


Jon Udell

A good way to encourage voters

I tried to sign up for phone banking and I just couldn’t see doing it, I don’t answer unknown calls and I wouldn’t expect anyone else to right now. I wound up with the Vote Forward letter-writing system, votefwd.org, which I like for a variety of reasons. It’s really well organized. You get batches of … Continue reading A good way to encourage voters

I tried to sign up for phone banking and I just couldn’t see doing it, I don’t answer unknown calls and I wouldn’t expect anyone else to right now. I wound up with the Vote Forward letter-writing system, votefwd.org, which I like for a variety of reasons.

It’s really well organized. You get batches of five or 20 letters that are for voters who are likely to be under-represented, and/or have not voted recently. The templates just ask the person to vote, not specifying for whom, and provide links to localized voter info.

They also leave space for you to hand-address the recipient, add a handwritten message, and sign your name.

The last couple of batches I prepared are heading to South Carolina. The letters won’t go out until October 17, though, for maximum impact. This was a compromise; the original plan — backed by research — was for letters to arrive closer to the election. But now that the threat model includes USPS sabotage, the date had to be moved earlier.

Vote Forward claims to have evidence showing that this method makes a difference on the margin. I haven’t seen that evidence, and would like to, but it seems plausible. The recipients are getting a hand-addressed stamped envelope with a return address that has my name atop an address in their region, which is slightly deceptive but feels like a good way to get them to open the letter.

You buy your stamps, so there’s a bit of financial contribution that way.

As a bonus I am re-learning cursive handwriting, which I forgot was primarily about speed. My fifth grade teacher, Mrs. Cloud, who tormented me about my handwriting, would approve.

I’m finding the process to be therapeutic, and it was a much better way to spend an hour and a half the other night than watching the so-called debate.

There’s plenty of time until October 17, so if you’ve been looking for a way to do something and haven’t found it yet, I can recommend this one. I’ve never done anything like this before, I hope the fact I am doing it now is representative of what a lot of others are doing in a lot of ways.

Thursday, 01. October 2020

Orie Steele

Transmute Releases Technical Workbenches

Explore the standards-based scalable identifiers and encrypted data storage tools that power Transmute’s product. An example of the Transmute Encrypted Data Vaults Workbench document preview. Transmute is proud to announce the release of several new technical workbenches as a part of our continued commitment to open-standards development, interoperability, and product transparency. Whenever p

Explore the standards-based scalable identifiers and encrypted data storage tools that power Transmute’s product.

An example of the Transmute Encrypted Data Vaults Workbench document preview.

Transmute is proud to announce the release of several new technical workbenches as a part of our continued commitment to open-standards development, interoperability, and product transparency. Whenever possible, our team strives to provide interactive proof of functionality along with standards, specifications, and library support.

This new suite of tools is available for developers to experiment with today and includes:

- Element Ropsten Workbench
- Encrypted Data Vault Workbench
- DID Key Workbench

Transmute leverages these workbenches as part of our global trade solutions, where our customers benefit from verifiable data workflows and integrated capabilities. Reach out to our team here to learn more.

Workbench Details

Read on to learn technical details of what is included in each workbench, and follow the links to see how each works for yourself.

Element Testnet Workbench https://staging.element.transmute.industries/workbench

We’ve updated did:elem to support the latest stable version of the Sidetree protocol, and we’ve reimplemented our block explorer from https://element-did.com to support the new Sidetree filesystem and the latest element dids.

We’ve also added universal wallet* support to the element workbench, so you can create a Sidetree did and control it with the same keys you use for did key or any other universal wallet compatible product.

*The universal wallet is also an official work item of the W3C CCG https://github.com/w3c-ccg/universal-wallet-interop-spec.

Data Vault Workbench
https://staging.date-vault.transmute.industries/workbench

We’ve added support for encrypted data vaults to the universal wallet spec, and provide a developer user interface, similar to a database administration interface, which helps DID controllers explore their vaults, documents, and indexes inside encrypted data vaults.

We also published the first vendor interoperability tests in the Secure Data Store working group: https://github.com/decentralized-identity/secure-data-store. These tests help vendors prove they are interoperable.

Having workbenches like these helps Transmute separate standards, libraries, sample implementations, and demos into microservices which are independently upgradeable and valuable by themselves as standalone products.

For example, our Sidetree node for Element regularly anchors testnet DID activity, and it’s helpful to be able to explore that activity on our block explorer, even if you didn’t use our node to anchor those events… If you want to dig into the Ethereum related details, we happily link you to https://ropsten.etherscan.io/ for more detailed information about the Ethereum transactions and blocks.

Our encrypted data vault workbench demonstrates the concept of “wallet portability” by showing how wallet content can be encrypted client-side and replicated between clients. This demonstrates the value of encrypted data vaults and the universal wallet interop spec at the same time…. It also helps us prove that encrypted data vaults work with did:key and did:elem.

DID Key Workbench
https://did.key.transmute.industries

We’ve added support for BLS12381, which is used to construct zero-knowledge proofs using https://github.com/w3c-ccg/ldp-bbs2020

We’ve also added support for the “NIST Curves” which are legacy elliptic curves that are supported almost everywhere, including natively in web browsers. Not everyone trusts them; you should review https://safecurves.cr.yp.to/. Nonetheless, we have shown them to be working with DID Key, which opens the door for legacy integration and interoperability.

We use DID Key for testing, and because of its simplicity it’s an ideal starting point for learning about DIDs and VCs.

The DID Key Workbench also has the first [to our knowledge] support for content-type and multiple did document representations. Support for multiple representations in the DID Core Specifications is currently being defined and subject to change. Today, there is a lot of language which describes JSON-LD, and almost no examples of JSON or CBOR. We hope that by showing how did:key can support both JSON and JSON-LD we can help the community figure out the representation sufficient for it to be testable.

Transmute Releases Technical Workbenches was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


Identity Praxis, Inc.

Call-for-insights: Latest & Greatest Mobile Marketing Use Cases (Success Stories and Epic Fails), Stats, Tools, and Leaders

I’ll be teaching mobile marketing at National University (www.nu.edu) this term. I’m looking for insights from mobile marketing leaders (brands, agencies, MarTech, etc.) that I can share with my students. I’m looking for the latest on mobile marketing, Use Cases/Case Studies, detailed use cases on mobile marketing programs, both success stories and epic failures (aka […] The post Call-for-insigh

I’ll be teaching mobile marketing at National University (www.nu.edu) this term.

I’m looking for insights from mobile marketing leaders (brands, agencies, MarTech, etc.) that I can share with my students.

I’m looking for the latest on mobile marketing,

- Use Cases/Case Studies: detailed use cases on mobile marketing programs, both success stories and epic failures (aka key learnings)
- Stats: statistics around relevant user behavior, campaign success, etc.
- Challenges: challenges that brands and agencies are trying to overcome to successfully serve people
- Opportunities: opportunities that brands and agencies are trying to realize
- Tools: a list of MarTech tools and services that can help marketers with mobile marketing
- Leaders: a list of people and organizations that are doing an exceptional job with mobile marketing
- Speakers: I’m on the lookout for guest speakers (Oct. 26~Nov. 25, 2020); if you can share your experience and expertise (30~45 min) with us that would be fantastic!

If you have material that supports any of the above, please share it with me by completing my Mobile Marketing Insights Form. My students and I would be grateful.

There are two parts to your submission:

- Info about you (name, title/company, email address, LinkedIn URL, guest speaker availability)
- Your mobile marketing insight and any supporting materials you can provide

I will be using your material to support the class as well as in my blog articles and related publications.

Thank you. I will be grateful for the community’s support with helping me bring mobile marketing excellence to our future marketing and business leaders.

The post Call-for-insights: Latest & Greatest Mobile Marketing Use Cases (Success Stories and Epic Fails), Stats, Tools, and Leaders appeared first on Identity Praxis, Inc..

Wednesday, 30. September 2020

Shingai Thornton

Twitter Lists Are Underrated

Lists can help solve some of Twitter’s major pain points “Unfollowing everyone on Twitter and switching to lists for better control of my feed. Nothing personal, love you all! “ — Github CEO Nat Friedman 9/19/20 Twitter is my favorite social network. It is a treasure trove of enlightening blog posts, news articles, research papers, and video content that I would never see in the
Lists can help solve some of Twitter’s major pain points

“Unfollowing everyone on Twitter and switching to lists for better control of my feed. Nothing personal, love you all! “ — Github CEO Nat Friedman 9/19/20

Twitter is my favorite social network.

It is a treasure trove of enlightening blog posts, news articles, research papers, and video content that I would never see in the stagnant echo chamber that is my Facebook news feed; a place filled with content posted primarily by former classmates who seem to have frighteningly homogenous political opinions and little apparent interest in seriously engaging with opposing viewpoints.

On Twitter, I have the freedom to engage with a heterogeneous network of thinkers from diverse geographical and ideological backgrounds. The platform has vast untapped potential as a tool for research as well as monitoring current events and trends. A 2018 study found compelling evidence that Twitter isn’t very polarized. Average users are exposed to a diverse range of ideas and post content that is more politically moderate than what they receive in their feeds.

Unfortunately, the abundance of content comes at a high psychological cost:

- Information overload makes me feel like I’m drowning as I navigate an infinite array of never-ending newsfeeds. Finding the signal that I’m searching for in a sea of noise takes work.
- Fear of missing out and fear of not knowing compel me to constantly check the application for anything important I may have missed. “Just one more refresh, just a bit more scrolling…”

Twitter lists are an under-appreciated feature that can increase the platform’s potential to facilitate productive social discourse. They help address existing issues by:

- Allowing for the creation of multiple newsfeeds with content from personally curated groups of users, thus reducing information overload. Using Tweetdeck to manage lists provides a much smoother Twitter user experience.
- Providing raw data, via the Twitter API, that can be used in applications that push relevant content to users and reduce the amount of time they spend searching and scrolling.

Lists have helped satisfy my craving for nuanced political and economic discourse that transcends the left/right paradigm and focuses on addressing systemic issues rather than proposing simplistic band-aid solutions. They also help me effectively track technical, social, and cultural developments in the rapidly evolving crypto industry.

Twitter could become a vital part of the digital public square with the help of well-curated lists and better ways of interacting with them. It could help citizens fulfill their obligation to be well informed and perhaps even allow us to escape the bubbles of groupthink that social media has trapped us in.

Jack Dorsey recently announced plans to transform Twitter into an open decentralized platform by using public blockchains to put users in control of their data. This shift will open up new possibilities in terms of what sorts of applications could be built using lists and drastically increase Twitter’s potential to serve the common good.

You can get started with exploring lists by checking out a few that I’ve made:

AI Bitcoin Blockchain Gaming Cryptofunds Cryptogurus Decentralized Finance Ethereum Gen Z Mafia Systems Thinkers

And some great public lists that others have created:

Astronomers Bitcoin Developers Crypto Cryptoart Congress Digital and Social Media Economics Financial News Futurists Health Innovators Investors/VC/Angel Machine Learning Media \\ Content Neuroscience Philosophers Physicists Wall Street Influencers

Tweet me @shingaithornton with any thoughts on lists or suggestions for how to improve the ones I’m working on.

This piece was originally posted on my personal blog.


DustyCloud Brainstorms

Spritely website launches, plus APConf video(s)!

Note: This originally appeared as a post on my Patreon account... thanks to all who have donated to support my work! Hello, hello! Spritely's website has finally launched! Whew... it's been a lot of work to get it to this state! Plus check out our new logo: Not bad, eh …

Note: This originally appeared as a post on my Patreon account... thanks to all who have donated to support my work!

Hello, hello! Spritely's website has finally launched! Whew... it's been a lot of work to get it to this state! Plus check out our new logo:

Not bad, eh? Also with plenty of cute characters on the Spritely site (thank you to David Revoy for taking my loose character sketches and making them into such beautiful paintings!)

But those cute characters are there for a reason! Spritely is quite ambitious and has quite a few subprojects. Here's a video that explains how they all fit together. Hopefully that makes things more clear!

Actually that video is from ActivityPub Conference 2020, the talks of which now all have their videos live! I also moderated the intro keynote panel about ActivityPub authors/editors. Plus there’s an easter egg, the ActivityPub Conference Opening Song! :)

But I can't take credit for APConf 2020... organization and support are thanks to Morgan Lemmer-Webber, Sebastian Lasse, and FOSSHost for hosting the website and BigBlueButton instance and conf.tube for generously hosting all the videos. There's a panel about the organization of APConf you can watch if you're interested in more of that! (And of course, all the other great videos too!)

So... what about that week I was going to work on Terminal Phase? Well... I'm still planning on doing it but admittedly it hasn't happened yet. All of the above took more time than expected. However, today I am working on my talk about Spritely Goblins for RacketCon, and as it turns out, extending Terminal Phase is a big part of that talk. But I'll announce more soon when the Terminal Phase stuff happens.

Onwards and upwards!

Monday, 28. September 2020

Doc Searls Weblog

Remembering Gail Sheehy

It bums me out that Gail Sheehy passed without much notice—meaning I only heard about it in passing. And I didn’t hear about it, actually; I saw it on CBS’ Sunday Morning, where her face passed somewhere between Tom Seaver’s and John Thompson’s in the September 6 show’s roster of the freshly dead. I was […]

It bums me out that Gail Sheehy passed without much notice—meaning I only heard about it in passing. And I didn’t hear about it, actually; I saw it on CBS’ Sunday Morning, where her face passed somewhere between Tom Seaver’s and John Thompson’s in the September 6 show’s roster of the freshly dead. I was shocked: She was older than both those guys and far less done. Or done at all, except technically. Death seems especially out of character for her, of all people.

Credit where due: The New York Times did post a fine obituary, and New York, for which she wrote much, has an excellent remembrance in the magazine by Christopher Bonanos (@heybonanos). Writes Bonanos,

Sheehy had an 18th book in the works, and it would have been — or will be, if someone else takes to the finish line — a fascinating one. Instead of reporting amid her peers (she was a few years older than the boomers, but roughly in their cohort), she set out to write a kind of echo of Passages, but this time about the millennial generation. And I can tell you that she was reveling in the immersion among people 50 and 60 years younger than she. She went to clubs with college guys and got out on the dance floor, and (by her account, at least) they were disarmed and amused by her — which is to say that she’d found every journalist’s sweet spot, where people get loose and comfortable enough to reveal themselves. She was constantly offering bits and pieces of her findings as magazine stories and columns. Some would work as stand-alone pieces and others wouldn’t, but all were tesserae in what was clearly going to be a big ambitious swoop of sociology. At New York, we were so taken with this project that we had also begun work on a profile of her…

@Gail_Sheehy is chronologically uncorrected. Her website, GailSheehy.com, also still speaks of her in the present tense: “For her new book-in-progress, she’s a woman on a mission to redefine the most misunderstood generation: millennials. They are struggling with the rupture in gender roles and a crisis in mental health. But this generation of 20- and 30-somethings is also inventing radically new passages.”

Gail Sheehy’s writing isn’t just solid; it is enviably good. BrainyQuotes has 71 samples, which is far short of sufficient. One goes, It is a paradox that as we reach our prime, we also see there is a place where it finishes. (Tell me about it. I’m only ten years younger than she was.)

As it happens, I’m writing this in the home library of a friend. On its many shelves a single spine stands out: Pathfinders, published in 1981. I pull it down and open it at random, knowing I’ll find something worth sharing from a page there. Here ya go, under the subhead Secrets of Well-being:

Like the dance of brilliant reflections on a clear pond, well-being is a shimmer that accumulates from many important life choices, made over the years by a mind that is not often muddied by pretense or ignorance and a heart that is open enough to sense people in their depths and to intuit the meaning of most situations.

If there is an afterlife, I am sure Gail Sheehy is already reporting on it.

Saturday, 26. September 2020

Doc Searls Weblog

How early is digital life?

Bits don’t leave a fossil record. Well, not quite. They do persist on magnetic, optical and other media, all easily degraded or erased. But how long will those last? Since I’ve already asked that question, I’ll set it aside and ask the one in the headline. Some perspective: depending on when you date its origins, […]

Bits don’t leave a fossil record.

Well, not quite. They do persist on magnetic, optical and other media, all easily degraded or erased. But how long will those last?

Since I’ve already asked that question, I’ll set it aside and ask the one in the headline.

Some perspective: depending on when you date its origins, digital life—the one we live by extension through our digital devices, and on the Internet (which, like The Force, connects nearly all digital things)—is decades old.

It’s an open question how long it will last. But I’m guessing it will be more than a few more decades. More likely we will remain partly digital for centuries or millennia.

So I’m looking for guesses more educated than mine.

Since this blog no longer takes comments from outside of Harvard (and I do want to fix that), please tag your responses, wherever they may be on the Internet with #DigitalLife. Thanks.


Identity Praxis, Inc.

Considerations of a New ‘Ism, Dataism

Understanding the harms caused from datism Yesterday was a big day, Humanity Power™ released their Humanity Power action kit, a step-by-step guide for putting the unity that is within our humanity to work. It is a guide to bring joy, peace, and purpose to your life, the lives of everyone around you and afar, and […] The post Considerations of a New ‘Ism, Dataism appeared first on Identity Praxis

Understanding the harms caused by dataism

Musings on dataism. Image adapted from S. Hermann & F. Richter

Yesterday was a big day, Humanity Power™ released their Humanity Power action kit, a step-by-step guide for putting the unity that is within our humanity to work. It is a guide to bring joy, peace, and purpose to your life, the lives of everyone around you and afar, and to put an end to the ‘isms (sexism, ageism, ableism, classism, racism) that plague humanity; ‘isms that are the root of so many of our individual and social ills.

See my related post and my personal testimonial on the value of the Humanity Power framework.

As I reflect on the powerful framework and message that Humanity Power has built, I’ve not been able to stop considering how it relates to the emerging personal information economy and the role that data plays, both positively and negatively, throughout society.

I believe that dataism, data’s dark side, is one of the many ‘isms that we must strive to eradicate if we are to truly be free to thrive in today’s data-driven society.

What is an ‘ism

Let’s first start with understanding what an ‘ism is and which ‘isms Humanity Power is taking head-on.

Courtesy of Humanity Power: Humanities ‘isms

An ‘ism, by origin, means “to side with.” While there are many ‘isms out there, Humanity Power focuses on ending these specific ‘isms: sexism, ageism, ableism, colorism, classism, and racism. These are the ‘isms that continue to perpetuate discrimination within our society.

Charisse Alayna Fontes
Founder of Humanity Power
September 2020

There is so much we can do with the energy that we’re losing to these ‘isms, energy we can never get back. We need to learn to eradicate these ‘isms that are at work within our society, communities, and ourselves. Let’s band together, eradicate these ‘isms, and put the reclaimed energy to work so that we can join forces, tackle the challenges, and realize the opportunities that lie ahead, together.

Identity Praxis is a proponent of Humanity Power and looks to add one more ‘ism to the conversation: dataism

Identity Praxis is a proponent of Humanity Power, it is through unity that we can all find balance, joy, and achievement of purpose. Also, it is the perfect platform to discuss another ‘ism that is plaguing society, dataism.

David Brooks, in his 2013 article “The Philosophy of Data,” considers dataism an ethical system. And Yuval Noah Harari, in his book Homo Deus, suggested dataism is a new form of religion that reveres big data. I see it as something more sinister. I see it more in the light of the other ‘isms, in that it divides and segregates more than it unites.

Dataism, as an economic and social movement, relates to how people’s data, their identity and personal information, is being used by society’s private and public actors for gain to the detriment of the society-at-large and the individual who is the data subject.

Today, data is being used and co-opted by public (aka government) and private (aka organizational) actors for economic, political, and social gain, a situation often referred to as surveillance capitalism. People, the individual data subjects, are being left out of the data flow equation; they don’t have a seat at the table.

While the flow of data throughout the world brings significant good to our societies and economies, and to the individual in many contexts, the fact that individuals have largely been left out of the decision-making process as to how their data is collected and used is a problem.

As with the other ‘isms, dataism is causing harm to individuals and society at large, in part due to, among other things, an imbalance in the flow of data, the value derived from the flow of data, and the risks generated by the flow of data; and, sadly, most people don’t even know this is happening or comprehend the long-term implications.

It is worth noting that dataism is both independent and interdependent in relation to the other ‘isms. It stands alone, as well as influences and is influenced by the other ‘isms.

At a micro level, dataism is causing harm, both legally recognized data harms, such as identity theft, as well as other data harms that are not strictly measured or recognized, such as increased anxiety, loss of time, money, reputation, discrimination, opportunity (personal, economic, social, influence), and death.

At a macro level, dataism can create class wars, generate division, stimulate distrust, discriminate, silence free speech, suppress minorities, and more. Why? Because dataism is causing a disequilibrium of power (money, resources, and influence), opportunity, social good, risk, and externalities among society’s actors. The lion’s share of the externalities is being borne by people (you, me, and everyone around us), and the majority of the value is going to private institutions, aka “Big Tech”.

Data has power. Used properly data can create great prosperity. Used improperly it can create great harm.

A screen shot of The Opportunity Atlas

Did you know a single data point, a child’s zip code, can predict their future success? See The Opportunity Atlas and learn more.

How does The Opportunity Atlas fit into dataism? Here is just one example. It is quite common for marketers to use geofencing to target their ads. If marketers know that a particular zip code is populated by the economically disadvantaged, for instance, they will hold back their advertising and not reach out to people in those areas. Individuals in those areas who may otherwise value seeing the ad are lumped in with the “poor,” and are hurt by this application of data. While I’m not passing moral judgment on this practice, this is a form of dataism.

Get Started Today

It is time. It is time to eradicate all the ‘isms, including dataism. It is time to give people, the data subjects, agency and control over their data, and to bring unity to our phygital lives.

It is through unity, and the eradication of dataism and all the other ‘isms, that we’ll achieve and sustain The Identity Nexus™, the equilibrium point where the value, risk, social good, and externalities of identity and personal information exchange within society are equitably and fairly shared among public institutions, private organizations, and people.

How can we eradicate dataism? We can empower people and start giving them agency over their identity and personal information. We can start giving them tools, personal information management solutions, so that they can be in control of the flow of their information and determine for themselves who, what, when, where, for how long, and for what purpose their identity and personal information can be used. And, if they’re so inclined, they should be able to charge for their data. They should be able to require organizations to accept their terms of access rather than only being forced to accept the organization’s terms of service.

It is upon us all to create a world worth living, working, and playing in, one that helps people follow their path, one that brings prosperity and joy without causing harm to others. It is time we end dataism.

References

Brooks, D. (2013, February 4). Opinion | The Philosophy of Data. The New York Times. https://www.nytimes.com/2013/02/05/opinion/brooks-the-philosophy-of-data.html

Harari, Y. N. (2018). Homo Deus: A Brief History of Tomorrow (Illustrated Edition). Harper Perennial.

Humanity Power. (n.d.). Humanity Power. Retrieved September 25, 2020, from https://www.humanitypower.co

Gilliland, D. (2019, August 30). Privacy law needs privacy harm [Text]. TheHill. https://thehill.com/opinion/cybersecurity/459427-privacy-law-needs-privacy-harm

Solove, D. (2006). A Taxonomy of Privacy. American Law Register, 154(3), 477–560. https://heinonline.org/HOL/LandingPage?handle=hein.journals/pnlr154&div=20&id=&page=

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (1st Edition). PublicAffairs.

Legal Notices

The Identity Nexus™ is an Identity Praxis, Inc, trademark
The Humanity Power™ is a Culture Circle trademark

Copyright Policy

Please feel free to copy excerpts, embed our infographics, and use our ideas, insights, and data for your own stories. All that we ask is that you include a link or citation to our article and reports. Our projects require a huge amount of resources, time & dedication. Thank you.

Article Citation:

Becker, M. (2020, September 26). Considerations of a New ‘Ism, Dataism. Identity Praxis, Inc. https://identitypraxis.com/2020/09/26/considerations-on-a-new-ism-dataism/

The post Considerations of a New ‘Ism, Dataism appeared first on Identity Praxis, Inc..


Humanity Power Release Its Action Kit & 30-Day Challenge

Putting an end to the ‘isms that plague humanity Today is a big day at Humanity Power™, and for all. Charisse and the team at Humanity Power have launched the Humanity Power action kit, a step-by-step guide for putting the unity that is within our humanity to work. It is a guide to bring joy, […] The post Humanity Power Release Its Action Kit & 30-Day Challenge appeared first on Identity Pra

Putting an end to the ‘isms that plague humanity

Today is a big day at Humanity Power™, and for all.

Charisse and the team at Humanity Power have launched the Humanity Power action kit, a step-by-step guide for putting the unity that is within our humanity to work. It is a guide to bring joy, peace, and purpose to your life, the lives of everyone around you and afar, and to put an end to the ‘isms (sexism, ageism, ableism, classism, racism) that plague humanity; ‘isms that are the root of so many of our individual and social ills.

I encourage you to take the leap into Humanity Power. Follow the six-actions outlined in the action kit, and personally contribute, in your own unique way, to bring joy, prosperity, and purpose to your life and those around you, and afar.

What are humanity’s ‘isms

So, what is an ‘ism and what are the ‘isms Humanity Power is taking head-on?

Courtesy of Humanity Power: Humanities ‘isms

An ‘ism, by origin, means “to side with.” While there are many ‘isms out there, Humanity Power focuses on ending these specific ‘isms: sexism, ageism, ableism, colorism, classism, and racism. These are the ‘isms that continue to perpetuate discrimination within our society.

Charisse Alayna Fontes
Founder of Humanity Power
September 2020

There is so much we can do with the energy that we’re losing to these ‘isms, energy we can never get back. We need to learn to eradicate these ‘isms that are at work within our society, communities, and ourselves. Let’s band together, eradicate these ‘isms, and put the reclaimed energy to work so that we can join forces, tackle the challenges, and realize the opportunities that lie ahead, together.

The Six Actions for Claiming Your Humanity Power

The Humanity Power action kit lays out six actions, steps, you can take to claim your humanity and to help bring prosperity to yourself and the world.

Humanity Power Action List

1. Learn the ‘isms: start by building awareness of the ‘isms, what they mean, and who they affect.
2. Learn about high-vibration Humanity: learn what it means to use your high-vibration Humanity.
3. Buy a Humanity Power T-Shirt: wear the message of Humanity Power and support the mission (T-Shirt proceeds go to the support of the mission).
4. Gift a Humanity Power T-Shirt: give the gift of unity and spread the message of Humanity (T-Shirt proceeds go to the support of the mission).
5. Take the 30-day Humanity Power challenge: put your humanity to action and practice high-vibrational activities to help end the ‘isms.
6. Subscribe and stay connected: we’re just getting started and you are a big part in that. Let’s stay connected so we can experience impact together.

Oh, and be sure to keep an eye out for future actions that you can engage in, like:

- sharing the Humanity the Manatee™ stuffy and book that teaches kids about humanity
- supporting the Humanity Power “3rd Thursday” community enrichment events that provide people in need with a free meal while supporting local small businesses
- taking part in action-based education

More on all of these later.

Personal Testimonial

As for my part, I’m a walking billboard for the impact that Humanity Power can have on your life. Humanity Power has brought me positivity, joy, energy, friends, connections, commercial opportunity, prosperity, and more.

Image by author: Becker wearing Humanity Power T-Shirt
Image by author: Parents wearing Humanity Power T-Shirt

I am using Humanity Power to change my attitude and practice self-care. This has led to me being more positive. I’m writing more. I am more protective and am finding renewed interest in my work.

I’ve finished all but one of the six actions. Action 5 of the action kit, the one I’m most excited about, is next on my plate. I won’t give away the surprise as to what is in the daily challenge; you can take action yourself and download the kit.

Courtesy of Humanity Power: Humanity Power 30 Day Challenge

Get Started Today

It is time. It is time to eradicate all the ‘isms.

It is upon us all to create a world worth living in, one that helps people follow their path, one that brings prosperity and joy without causing harm to others.

If you’re not ready to start the Humanity Power journey just yet, that’s ok. Humanity Power attests that there is unity in humanity. Well, guess what, there is unity in community as well. As a first step, I encourage you to at least join the community, start with action 6, and stay connected with the movement and the mission. If you are ready, jump right in. I encourage you to join the Humanity Power community and take part. You won’t regret it.

For our part at Identity Praxis, we’re focused on helping public institutions, private organizations, and people find an equitable balance as it relates to the exchange of identity and personal information throughout society. Reflecting on the Humanity Power framework brings another ‘ism to mind: dataism. We’ll reflect on dataism in a future post.

References

Humanity Power. (n.d.). Humanity Power. Retrieved September 25, 2020, from https://www.humanitypower.co

Humanity Power™ and Humanity the Manatee™ are Culture Circle trademarks.

The post Humanity Power Release Its Action Kit & 30-Day Challenge appeared first on Identity Praxis, Inc..

Wednesday, 23. September 2020

@_Nat Zone

[2020-09-22] FDX Global Summit Keynote

The U.S. Financial Data Exchange… The post [2020-09-22] FDX Global Summit Keynote first appeared on @_Nat Zone.

Since this was a members-only event of the U.S. Financial Data Exchange (FDX), I did not announce it in advance, but at 10:00 a.m. U.S. Central Time (midnight on the 23rd, Japan time) I gave a keynote speech at FDX Global Summit 2020 Fall.

The title was “Global Adoption of FAPI Among Open Banking Standards.”

It was a speech of about 30 minutes followed by a 15-minute Q&A session, and it seems to have been fairly well received.

The slides are attached below.

20200922-Global-Adoption-of-FAPI-EN

The post [2020-09-22] FDX Global Summit Keynote first appeared on @_Nat Zone.


Doc Searls Weblog

On Moral Politics

I spent 17 minutes while exercising the other day, thinking out loud about what @GeorgeLakoff says in his 1996 book Moral Politics: What Conservatives Know That Liberals Don’t, (also in his expanded 2016 edition, re-subtitled How Liberals and Conservatives Think). I also tweeted about the book this morning here. In it I explain what pretty much nobody else is […]

I spent 17 minutes while exercising the other day, thinking out loud about what @GeorgeLakoff says in his 1996 book Moral Politics: What Conservatives Know That Liberals Don’t, (also in his expanded 2016 edition, re-subtitled How Liberals and Conservatives Think). I also tweeted about the book this morning here. In it I explain what pretty much nobody else is saying about Trump, Biden, and how each actually appeals to their voters (and those in between).

Here’s the audio file:

http://blogs.harvard.edu/doc/files/2020/09/2020_09_09_lakoff-moral-politics.mp3

This blog no longer takes comments, alas, so post a response elsewhere, if you like, or write me. I’m doc @ my surname dot com.

Tuesday, 22. September 2020

Identity Praxis, Inc.

Hold the date: Atlanta Innovation Forum The Challenging New World of Privacy & Security

I’m thrilled to have been invited to speak at the Atlanta Innovation Forum’s live web event, on October 15, 2020, 6:00 PM EST, “The Challenging New World of Privacy & Security,” and to have the opportunity to discuss the state of privacy, self-sovereignty, The Identity Nexus™, and more with my fellow panelists. Program Abstract Keeping […] The post Hold the date: Atlanta Innovation Forum The

I’m thrilled to have been invited to speak at the Atlanta Innovation Forum’s live web event, on October 15, 2020, 6:00 PM EST, “The Challenging New World of Privacy & Security,” and to have the opportunity to discuss the state of privacy, self-sovereignty, The Identity Nexus™, and more with my fellow panelists.

Atlanta Innovation Forum Live Event Program Abstract

Keeping Pace with the Explosion of Online Traffic… and the Surge of Online Threats

Technology has always moved at an accelerating pace. But, since the start of the global pandemic, Internet usage has exploded at a never-before-seen pace, creating an equally explosive opportunity for cybercrime and online security risks.

In 2020:

Internet traffic has increased by 70% this year; streaming has increased by more than 12%
Ecommerce during the first six months increased more than 30% over online sales from the same time period last year (compared to the previous year-to-year same-period growth of 13%)
Ecommerce sales for June 2020 were up 76% over June 2019 sales; for July 2020, they were up 56% from July 2019
10% of shoppers are making an online purchase for the first time this year, due to social distancing concerns

The increase is great news for Internet retailers and online service providers. But, it’s also a great opportunity for online criminals. It’s estimated that online crime in 2020 could result in over $6 trillion in losses this year. So what’s being done to address this? What steps are being taken to make online use by businesses and consumers more secure, to keep pace with the bad guys? What new tech is being employed? How are businesses and technology vendors – many of whom are competitors – working together to protect their consumer and business customers, to reduce losses, protect privacy and confidential data, and to keep the ever-smarter bad guys from getting ahead? Join us as a panel of experts from B2B, B2C, Security and InfoSec explore what “high tech” is doing to thwart “high crime.”

My Contribution

For my part, I’m going to speak about The Identity Nexus. The Identity Nexus is the elusive equilibrium point where mutual and equitable personal information value exchange between people, public, and private institutions can be had. I’ll discuss the trinity of self-sovereignty (Privacy, Security, Compliance) and the solutions needed to help us all thrive in the emerging personal information economy.

IAF Program Speakers

Carlos J. Bosch, Head of Technology, GSMA North America
Matt Littleton, Global Advanced Compliance Specialist, Microsoft
Donna Gallaher, President & CEO, New Oceans Enterprises
Michael Becker, Founder & CEO, Identity Praxis
Chad Hunt, Supervisory Special Agent, Federal Bureau of Investigation

The post Hold the date: Atlanta Innovation Forum The Challenging New World of Privacy & Security appeared first on Identity Praxis, Inc..

Sunday, 20. September 2020

Just a Theory

The Kushner Kakistocracy

An expertly-reported, deeply disturbing piece by Katherine Eban on Jared Kushner's key role in the colossal federal response to the Covid-19 pandemic.

Katherine Eban, in a deeply reported piece for Vanity Fair:

Those representing the private sector expected to learn about a sweeping government plan to procure supplies and direct them to the places they were needed most. New York, home to more than a third of the nation’s coronavirus cases, seemed like an obvious candidate. In turn they came armed with specific commitments of support, a memo on the merits of the Defense Production Act, a document outlining impediments to the private-sector response, and two key questions: How could they best help? And how could they best support the government’s strategy?

According to one attendee, Kushner then began to rail against the governor: “Cuomo didn’t pound the phones hard enough to get PPE for his state…. His people are going to suffer and that’s their problem.”

But wait, it gets worse:

Kushner, seated at the head of the conference table, in a chair taller than all the others, was quick to strike a confrontational tone. “The federal government is not going to lead this response,” he announced. “It’s up to the states to figure out what they want to do.”

One attendee explained to Kushner that due to the finite supply of PPE, Americans were bidding against each other and driving prices up. To solve that, businesses eager to help were looking to the federal government for leadership and direction.

“Free markets will solve this,” Kushner said dismissively. “That is not the role of government.”

Seldom have falser words been spoken. These incompetents conflate their failure to lead with their belief that the government cannot lead. The prophecy fulfills itself.

The same attendee explained that although he believed in open markets, he feared that the system was breaking. As evidence, he pointed to a CNN report about New York governor Andrew Cuomo and his desperate call for supplies.

“That’s the CNN bullshit,” Kushner snapped. “They lie.”

“That’s when I was like, We’re screwed,” the shocked attendee told Vanity Fair.

And indeed we sure have been. Nearly 200,000 have died from Covid-19 in the United States to date, with close to 400,000 deaths forecast by January 1.

I’m restraining myself from quoting more; the reporting is impeccable, and the truth of the situation deeply alarming. Read the whole thing, then maybe go for a long walk and practice deep breathing.

And then Vote. And make sure everyone you know is registered and ready to vote.

More about… Politics, Katherine Eban, Jared Kushner, Covid-19, Kakistocracy

Saturday, 19. September 2020

Doc Searls Weblog

Saving Mount Wilson

This was last night: And this was just before sunset tonight: From the Mt. Wilson Observatory website: Mount Wilson Observatory Status Angeles National Forest is CLOSED due to the extreme fire hazard conditions. To see how the Observatory is faring during the ongoing Bobcat fire, check our Facebook link, Twitter link, or go to the HPWREN Tower Cam and click on […]

This was last night:

And this was just before sunset tonight:

From the Mt. Wilson Observatory website:

Mount Wilson Observatory Status

Angeles National Forest is CLOSED due to the extreme fire hazard conditions. To see how the Observatory is faring during the ongoing Bobcat fire, check our Facebook link, Twitter link, or go to the HPWREN Tower Cam and click on the second frame from the left at the top, which looks east towards the fire (also check the south-facing cam and the recently archived timelapse movies below, which offer a good look at the latest events in 3-hour segments. Note to media: These images can be used with credit to HPWREN). For the latest updates on the Bobcat fire from the U.S. Forest Service, please check out their Twitter page.

Last night the firefighters set a strategic backfire to make a barrier to the fire on our southern flank. To many of us who did not know what was happening it looked like the end. Click here to watch the timelapse. The 12 ground crews up there have now declared us safe. They will remain to make sure nothing gets by as the fires tend to linger in the canyons below. They are our heroes and we owe them our existence. They are true professionals, artists with those backfires, and willing to put themselves at considerable risk to protect us. We thank them!!!!

There will be plenty of stories about how the Observatory and the many broadcast transmitters nearby were saved from the Bobcat Fire. For the curious, start with these links:

Mt. Wilson Observatory website
Tweets from the Mount Wilson Observatory
Tweets from the Angeles National Forest
Space.com’s report on the event

I’ll add some more soon.

Friday, 18. September 2020

Tim Bouma's Blog

A Simple Ecosystem Model

Disclaimer: This is posted by me and does not represent the position of my employer or the working groups of which I am a member. In my never-ending quest to come up with super-simple models I came up with this diagram. This post is a slight editorial refactoring of my recent Twitter thread found here. A simple ecosystem model The above illustration is not intended to be an architectura

Disclaimer: This is posted by me and does not represent the position of my employer or the working groups of which I am a member.

In my never-ending quest to come up with super-simple models I came up with this diagram. This post is a slight editorial refactoring of my recent Twitter thread found here.

A simple ecosystem model

The above illustration is not intended to be an architectural diagram; rather, it helps to 1) clarify conflations, 2) define scope (the dotted box), and 3) understand the motivations of the parties that exist ‘outside of the system’.

For example, ‘Issuer’ usually gets conflated with ‘Authority’. An authority merely ‘Attests’; if you recognize it, then you can assume it is authoritative.

Anyone can attest to anything and issue something. The point of this model is that everything inside the box is neutral to that and solely focused on specific properties everyone needs regardless of intent or role.

The “Verifier” usually gets conflated with Relying Party. But a Verifier could be an off-the-shelf black box with the firmware baked in to verify against the right DIDs, challenging the holder with Bluetooth or NFC. The “Acceptor” could be logic that simply throws a switch to open a secure door. All done on behalf of a Relying Party.

The Holder can be anyone outside the system. An individual, organization or device, that is the ultimate ‘holder’ of secrets or cryptographic keys that is the basis of their power to convey intention.

Finally, the Registrar is anyone or anything that is responsible for the integrity of the ledger (it doesn’t have to be a blockchain). This ledger is responsible for two fundamental interactions: validation and transfer. In the case of a permissionless system, the ‘Registrar’ is actually an agreed-on set of rules and proven (or not yet disproven) cryptographic primitives. For permissioned or centralized systems, it could be a group of people, or even a single person in the back room with an Excel spreadsheet (not a blockchain).

As for the dotted box — you need to determine who/what sits inside or outside of the box. For many outside the box, they may only care about a black box that they trust. This dotted box is also useful when you start thinking about the non-functional properties of the system — black or grey, should it be permissioned, permissionless, restricted access, globally available?

In the end, what I am trying to achieve is the expression of a simple conceptual model to help me express what could serve the wide range of use cases e.g.: opening a door, applying for university, letting someone across the border, etc. The model could also be used to express simply what we need to start building as a new digital infrastructure.
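To make the roles a bit more concrete, here is a minimal, purely illustrative sketch in Python. All of the class and method names below (Ledger, Issuer, Holder, Verifier, Acceptor, issue, present, verify, accept) are invented for illustration; they are not part of any specification or of the model above beyond the role names themselves.

# Purely illustrative sketch; all names and signatures are invented for this example.
from dataclasses import dataclass, field

@dataclass
class Ledger:
    # Inside the dotted box: the Registrar keeps this consistent; it supports
    # the two fundamental interactions: registration/transfer and validation.
    records: dict = field(default_factory=dict)

    def register(self, subject, claim):
        self.records[subject] = claim

    def validate(self, subject, claim):
        return self.records.get(subject) == claim

class Issuer:
    # Attests to something and issues it; not necessarily an "Authority".
    def issue(self, ledger, subject, claim):
        ledger.register(subject, claim)
        return (subject, claim)

class Holder:
    # Anyone outside the system holding the credential (and the keys behind it).
    def __init__(self, credential):
        self.credential = credential

    def present(self):
        return self.credential

class Verifier:
    # Could be an off-the-shelf black box; it only checks against the ledger.
    def verify(self, ledger, presentation):
        subject, claim = presentation
        return ledger.validate(subject, claim)

class Acceptor:
    # Acts on the verification result on behalf of a Relying Party.
    def accept(self, verified):
        return "open the door" if verified else "keep the door shut"

# Usage: issue, hold, present, verify, accept.
ledger = Ledger()
holder = Holder(Issuer().issue(ledger, "alice", "authorized for entry"))
print(Acceptor().accept(Verifier().verify(ledger, holder.present())))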

As always, this is a work-in-progress. Constructive comments welcome.

Thursday, 17. September 2020

@_Nat Zone

[2020-09-17] I Appeared on Citi Talks on Payments

I appeared on the YouTube channel run by Citi… The post [2020-09-17] I Appeared on Citi Talks on Payments first appeared on @_Nat Zone.

I appeared on the YouTube channel run by Citi. It is an interview about OpenID, FAPI, and related topics.

The post [2020-09-17] I Appeared on Citi Talks on Payments first appeared on @_Nat Zone.

Monday, 14. September 2020

Virtual Democracy

The new Nobel: celebrating science events, their teams, and the history of discovery

The idea of giving out prizes is not itself obsolete; yet all award practices need to be refactored occasionally to capture the heart of the process of doing science, as this expands and changes in the coming decades. And, if it’s time to refactor the Nobel Prize, what does that suggest for the prizes your learned society hands out? Adding an ecosystem of badges (to show off skills and accomplishme
The idea of giving out prizes is not itself obsolete; yet all award practices need to be refactored occasionally to capture the heart of the process of doing science, as this expands and changes in the coming decades. And, if it’s time to refactor the Nobel Prize, what does that suggest for the prizes your learned society hands out? Adding an ecosystem of badges (to show off skills and accomplishments) to the recognition landscape helps to replace prizes as a central feature of open science. Since prizes celebrate brilliant work, and as celebrations as a whole add positive affect to your culture, let the prizes continue. But give them some new thought. What is your idea for Nobel 2.0?

Identity Praxis, Inc.

Braze Privacy Data Report

The Braze Privacy Data Report provides useful insights toward understanding U.S. consumers’ (n=2,000) and marketer executives’ (n=500) opinions regarding personal data usage and privacy.  The findings are insightful.  Marketers should start to take action now, and prepare themselves for the rise of the self-sovereign individual.  Individuals expect transparency According to the stud

The Braze Privacy Data Report provides useful insights toward understanding U.S. consumers’ (n=2,000) and marketer executives’ (n=500) opinions regarding personal data usage and privacy. 

The findings are insightful. 

Marketers should start to take action now, and prepare themselves for the rise of the self-sovereign individual. 

Individuals expect transparency

According to the study, individuals want transparency. They want to know:

how their data will be used (94%)
who it will be shared with (74%)
what will be done with it (74%)
what data has been collected (70%)
how long it will be retained (59%)
who is storing it (56%)

Marketers agree (99%), but what consumers and marketers don’t agree on is the value exchange and who should be in control of the policies and rules for privacy and personal information management & oversight.

Ignoring privacy concerns will hurt the bottom line

Marketers beware!

The Braze report notes that 84% of U.S. adults have decided against engaging a brand due to personal information requests, and 71% did so more than once; 75% stopped engaging with a company altogether over privacy concerns. 

The message is clear, properly handling privacy will have a positive impact on the bottom line, and not doing so will be detrimental. 

Value exchange is possible

People are open to exchanging their personal information, but they want something in return; 71% of consumers will share their personal information in exchange for value: 

60% in exchange for cash; a 2018 study by SYZYGY suggested this could be as much as $150 USD (The Price of Personal Data, 2018)
26% for products and incentives
21% for free content. 

We see a gap here between consumers and marketing executives, as only 31% of marketing executives believe consumers should receive cash in exchange for personal data. 

The privacy expectation gap

Again, we have a privacy expectations gap in the U.S.; the report finds that 82% of U.S. adults say privacy is important to them, while only 29% of marketing executives hold the same opinion. 

It is time for the market to listen and begin to respect the sovereignty of the individual.

Rules and regulations

As for who should set the rules and regulations for personal information management & oversight and exchange and privacy protection, it’s not clear.

Marketing executives (88%) find the state-by-state sectoral approach in the US burdensome and believe that federal direction could provide more clarity. And yet, just 52% of marketers and consumers believe the federal government should do more. 

The Bottom line

The bottom line is clear. People are waking up to the fact that their personal information has value. They’re open to exchange, but they expect transparency and compensation. Clearly, from a consumer attitudinal perspective, the personal information economy is at hand, but it is also clear there are industry and regulatory gaps that must first start to close; moreover, we need to develop and deploy the tools to help people safely and securely engage in the exchange of their personal information.  

Reference

Data Privacy Report (pp. 4–12). (2020). Wakefield Research. https://info.braze.com/rs/367-GUY-242/images/BrazeDataPrivacyReport.pdf

The Price of Personal Data (pp. 1–17). (2018). SYZYGY. https://media.szg.io/uploads/media/5b0544ac92c3a/the-price-of-personal-data-syzygy-digital-insight-survey-2018.pdf 

The post Braze Privacy Data Report appeared first on Identity Praxis, Inc..


Protect Your Privacy: Consider Blurring Your House on Google Maps

Proactively managing your privacy is no longer a luxury; it is a must, and it takes conscious effort. One step you may not have considered is blurring your house, face, car or license plate, or some other object on Google Maps. Why would you want to do this, you might ask? Well, here are a […] The post Protect Your Privacy: Consider Blurring Your House on Google Maps appeared first on Identity Pr

Proactively managing your privacy is no longer a luxury; it is a must, and it takes conscious effort. One step you may not have considered is blurring your house, face, car or license plate, or some other object on Google Maps.

Why would you want to do this, you might ask? Well, here are a few reasons; I’m sure there are more.

You have street-facing windows from your bedroom or living room.
You’ve been a victim and want to minimize your digital footprint.
You generally want to maintain your privacy. 

If you decide to do it, it is surprisingly easy.  

Steps to Blur your house on Google Maps

You can follow the steps below to blur your house on Google Maps.

Warning: According to Google, the blurring of an image on Google Maps is permanent; you can’t reverse it. So, be sure this is what you want to do before you proceed. 

1. Go to Google Maps and enter your home address.
2. Click on the picture of your house to enter Street View mode.
3. Click the “Report a problem” link in the bottom-right corner of the screen.
4. Center the red box on the object you want to blur, like your car or home.
5. Click on the object type in the “Request blurring” list.
6. Provide an explanation of why you want to blur the object in the field that appears.
7. Enter your email address and hit Submit.

Google will evaluate your request; it is not clear how long this will take. Once the team (or systems) at Google has finished reviewing your request, they will let you know whether it has been approved or rejected. 

The post Protect Your Privacy: Consider Blurring Your House on Google Maps appeared first on Identity Praxis, Inc..

Sunday, 13. September 2020

DustyCloud Brainstorms

Spritely Goblins v0.7 released!

I'm delighted to say that Spritely Goblins v0.7 has been released! This is the first release featuring CapTP support (ie, "capability-secure distributed/networked programming support"), which is a huge milestone for the project! Okay, caveat... there are still some things missing from the CapTP stuff so far; you can …

I'm delighted to say that Spritely Goblins v0.7 has been released! This is the first release featuring CapTP support (ie, "capability-secure distributed/networked programming support"), which is a huge milestone for the project!

Okay, caveat... there are still some things missing from the CapTP stuff so far; you can only set up a bidirectional connection between two machines, and can't "introduce" capabilities to other machines on the network. Also setting up connections is an extremely manual process. Both of those should be improved in the next release.

But still! Goblins can now be used to easily write distributed programs! And Goblins' CapTP code even includes such wild features as distributed garbage collection!

As an example (also mentioned in a recent blogpost), I recently wrote a short chat program demo. Both the client and server "protocol" code were less than 250 lines of code, despite having such features as authenticating users during subscription to the chatroom and verifying that messages claimed by the chatroom came from the users it said it did. (The GUI code, by contrast, was a little less than 300 lines.) I wrote this up without writing any network code at all and then tested hooking together two clients over Tor Onion Services using Goblins' CapTP support, and it Just Worked (TM):

What's interesting here is that not a single line of code was added to the backend or GUI to accommodate networking; the host and guest modules merely imported the backend and GUI files completely unchanged and did the network wiring there. Yes, that's what it sounds like: in Goblins you can write distributed asynchronous programs without writing any networking code at all.

This is the really significant part of Goblins that's starting to become apparent, and it's all thanks to the brilliant design of CapTP. Goblins continues to stand on the shoulders of giants; thank you to everyone in the ocap community, but especially in this case Michael FIG, Mark S. Miller, Kevin Reid, and Baldur Jóhannsson, all of whom answered an enormous amount of questions (some of them very silly) about CapTP.

There are more people to thank too (too many to list here), and you can see some of them in this monster thread on the captp mailing list which started on May 18th (!!!) as I went through my journey of trying to understand and eventually implement CapTP. I actually started preparing a few weeks before that, which really means that this journey took me about four and a half months to understand and implement. As it turns out, CapTP is a surprisingly simple protocol in its conceptualization once you understand what it's doing (though implementing it is a bit more complex). I do hope to try to build a guide for others to understand and implement on their own systems... but that will probably wait until Goblins is ported to another language (given the relative simplicity of the task due to the language similarities, the current plan is to port to Guile next).

Anyway. This is a big deal, a truly exciting moment for goblinkind. If you're excited yourself, maybe join the #spritely channel on irc.freenode.net.

OH! And also, I can't believe I nearly forgot to say this, but if you want to hear more about Spritely in general (not just Goblins), we just released a Spritely-centric episode of FOSS and Crafts. Maybe take a listen!

Friday, 11. September 2020

Doc Searls Weblog

On fire

The white mess in the image above is the Bobcat Fire, spreading now in the San Gabriel Mountains, against which Los Angeles’ suburban sprawl (that’s it, on the right) reaches its limits of advance to the north. It makes no sense to build very far up or into these mountains, for two good reasons. One […]

The white mess in the image above is the Bobcat Fire, spreading now in the San Gabriel Mountains, against which Los Angeles’ suburban sprawl (that’s it, on the right) reaches its limits of advance to the north. It makes no sense to build very far up or into these mountains, for two good reasons. One is fire, which happens often and awfully. The other is that the mountains are geologically new, and falling down almost as fast as they are rising up. At the mouths of valleys emptying into the sprawl are vast empty reservoirs—catch basins—ready to be filled with rocks, soil and mud “downwasting,” as geologists say, from a range as big as the Smokies, twice as high, ready to burn and shed.

Outside of its northern rain forests and snow-capped mountains, California has just two seasons: fire and rain. Right now we’re in the midst of fire season. Rain is called Winter, and it has been dry since the last one. If the Bobcat fire burns down to the edge of Monrovia, or Altadena, or any of the towns at the base of the mountains, heavy winter rains will cause downwasting in a form John McPhee describes in Los Angeles Against the Mountains:

The water was now spreading over the street. It descended in heavy sheets. As the young Genofiles and their mother glimpsed it in the all but total darkness, the scene was suddenly illuminated by a blue electrical flash. In the blue light they saw a massive blackness, moving. It was not a landslide, not a mudslide, not a rock avalanche; nor by any means was it the front of a conventional flood. In Jackie’s words, “It was just one big black thing coming at us, rolling, rolling with a lot of water in front of it, pushing the water, this big black thing. It was just one big black hill coming toward us.”

In geology, it would be known as a debris flow. Debris flows amass in stream valleys and more or less resemble fresh concrete. They consist of water mixed with a good deal of solid material, most of which is above sand size. Some of it is Chevrolet size. Boulders bigger than cars ride long distances in debris flows. Boulders grouped like fish eggs pour downhill in debris flows. The dark material coming toward the Genofiles was not only full of boulders; it was so full of automobiles it was like bread dough mixed with raisins. On its way down Pine Cone Road, it plucked up cars from driveways and the street. When it crashed into the Genofiles’ house, the shattering of safety glass made terrific explosive sounds. A door burst open. Mud and boulders poured into the hall. We’re going to go, Jackie thought. Oh, my God, what a hell of a way for the four of us to die together.

Three rains ago we had debris flows in Montecito, the next zip code over from our home in Santa Barbara. I wrote about it in Making sense of what happened to Montecito. The flows, which destroyed much of the town and killed about two dozen people, were caused by heavy rains following the Thomas Fire, which at 281,893 acres was the biggest fire in California history at the time. The Camp Fire, a few months later, burned a bit less land but killed 85 people and destroyed more than 18,000 buildings, including whole towns. This year we already have two fires bigger than the Thomas, and at least three more growing fast enough to take the lead. You can see the whole updated list on the Los Angeles Times California Wildfires Map.

For a good high-altitude picture of what’s going on, I recommend NASA’s FIRMS (Fire Information for Resource Management System). It’s a highly interactive map that lets you mix input from satellite photographs and fire detection by orbiting MODIS and VIIRS systems. MODIS is onboard the Terra and Aqua satellites; and VIIRS is onboard the Suomi National Polar-Orbiting Partnership (Suomi NPP) spacecraft. (It’s actually more complicated than that. If you’re interested, dig into those links.) Here’s how the FIRMS map shows the active West Coast fires and the smoke they’re producing:

That’s a lot of cremated forest and wildlife right there.

I just put those two images and a bunch of others up on Flickr, here. Most are of MODIS fire detections superimposed on 3-D Google Earth maps. The main thing I want to get across with these is how large and anomalous these fires are.

True: fire is essential to many of the West’s wild ecosystems. It’s no accident that the California state tree, the Coast Redwood, grows so tall and lives so long: it’s adapted to fire. (One can also make a case that the state flower, the California Poppy, which thrives amidst fresh rocks and soil, is adapted to earthquakes.) But what’s going on here is something much bigger. Explain it any way you like, including strange luck.

Whatever you conclude, it’s a hell of a show. And vice versa.

Thursday, 10. September 2020

Bill Wendel's Real Estate Cafe

Open Letter: Are BLIND Bidding Wars part of unfair & deceptive business practices?

Sent this email two weeks ago, and still have not received a response even though a similar effort caused Barron’s to tone down their headline… The post Open Letter: Are BLIND Bidding Wars part of unfair & deceptive business practices? first appeared on Real Estate Cafe.

Sent this email two weeks ago, and still have not received a response even though a similar effort caused Barron’s to tone down their headline…

The post Open Letter: Are BLIND Bidding Wars part of unfair & deceptive business practices? first appeared on Real Estate Cafe.

Tuesday, 08. September 2020

Altmode

Line voltage fluctuations

This past July, we replaced our roof and at the same time updated our solar panels and inverter (I’ll write about the new solar equipment in the near future). I was monitoring the new equipment somewhat more closely than usual, and noticed on one warm August day that the inverter had shut down due to […]

This past July, we replaced our roof and at the same time updated our solar panels and inverter (I’ll write about the new solar equipment in the near future). I was monitoring the new equipment somewhat more closely than usual, and noticed on one warm August day that the inverter had shut down due to low line voltage. Having home solar generation shut down on a warm day with a high air conditioning load is the opposite of what the utility, Pacific Gas & Electric (PG&E), should want to happen. In addition to shutting down solar power inverters, low line voltage can be hard on power equipment, such as motors.

At a time when our voltage was particularly low, I opened a low line voltage case with PG&E. This resulted in a call from a field technician that told me several things:

PG&E has been aware of the voltage regulation problem in my neighborhood for some time.
The problem is likely to be due to the older 4-kilovolt service in my part of town. Newer areas have 12-kilovolt service that would be expected to have about 1/9 the voltage drop with an equivalent load.
Another possible cause is the pole transformer that feeds our house and nearby neighbors, which the technician told me is overloaded. [Other neighbors that aren’t as close are reporting these problems as well, so they would have to have similarly overloaded transformers.]
Line voltage at my home is supposed to be between 114 and 126 VAC.

Another technician from PG&E came out a couple of days later to install a voltage monitor on the line. But it occurred to me that I have been collecting data since 2007 from my solar inverter that includes voltage data. A total of about 3.2 million data points. So I thought I’d check to see what I can find out from that.

My data are in a MySQL database that I can query easily. So I asked it how many days there have been where the line voltage went below 110 VAC (giving PG&E some margin here) and the solar inverter was fully operating. There were 37 such days, including very brief voltage dips (<10 minutes) and up to over 5 hours of undervoltage on September 2, 2017. The line voltage that day looked like this:

A more recent representative sample is this:

Part of my concern is that this problem seems to be getting worse. Here is a table of the number of days where <110 VAC lasted for more than 10 minutes:

Year | Days with Undervoltage | Undervoltage Minutes
2007 | 0 | 0
2008 | 0 | 0
2009 | 1 | 14
2010 | 0 | 0
2011 | 0 | 0
2012 | 0 | 0
2013 | 0 | 0
2014 | 0 | 0
2015 | 1 | 19
2016 | 1 | 10
2017 | 14 | 1386
2018 | 0 | 0
2019 | 7 | 561
2020 (to June 30) | 2 | 160
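A query along these lines would produce day counts like the ones above; the table and column names (readings, reading_time, ac_voltage) are hypothetical stand-ins, and it assumes roughly one inverter sample per minute, so treat it as a sketch rather than an exact recipe.

# Sketch only: hypothetical schema (readings, reading_time, ac_voltage);
# assumes roughly one sample per minute while the inverter is operating.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="solar",
                               password="secret", database="solar")
cur = conn.cursor()

# For each day, count the samples (roughly minutes) with line voltage below 110 VAC,
# keeping only days with more than 10 such minutes.
cur.execute("""
    SELECT DATE(reading_time) AS day,
           COUNT(*) AS undervoltage_minutes
    FROM readings
    WHERE ac_voltage < 110
    GROUP BY DATE(reading_time)
    HAVING undervoltage_minutes > 10
    ORDER BY day
""")

for day, minutes in cur.fetchall():
    print(day, minutes)

cur.close()
conn.close()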

And as I mentioned above, the problem seems to occur on particularly hot days (which is when others run their air conditioners; we don’t have air conditioning). Fortunately, the NOAA National Centers for Environmental Information provide good historical data on high and low temperatures. I was able to download the data for Los Altos and relate it to the days with the outages. Indeed, the days with the most serious voltage problems are very warm (high of 110 on 9/2/2017 and 100 degrees on 6/3/2020 shown above).

Does that mean we’re seeing purely a temperature effect that is happening more often due to global warming? It doesn’t seem likely because there have been very warm days in past years with little voltage drop. Here’s a day with a recorded high temperature of 108 in 2009:

My street, and the City of Los Altos more generally, has seen a lot of extensive home renovations and tear-down/rebuilds the past few years. The section of the street I live on, which has about 50 homes, currently has three homes being completely rebuilt and currently unoccupied. So this is only going to get worse.

The ongoing renovations and rebuilds in Los Altos are all considerably larger than the homes (built in the 1950s) that they replace, and I expect nearly all have air conditioning while the original homes didn’t. This is resulting in a considerably higher electrical load on infrastructure that wasn’t designed for this. While this is mitigated somewhat by the prevalence of solar panels in our area, the City needs to require that PG&E upgrade its infrastructure before issuing new building permits that will exacerbate this problem.

SolarEdge inverter display

Monday, 07. September 2020

Aaron Parecki

How to make an RTMP Streaming Server and Player with a Raspberry Pi

In this tutorial we'll use a Raspberry Pi to build an RTMP server that plays out any video stream it receives over the Raspberry Pi's HDMI port automatically. This effectively turns a Raspberry Pi into a Blackmagic Streaming Bridge.

In this tutorial we'll use a Raspberry Pi to build an RTMP server that plays out any video stream it receives over the Raspberry Pi's HDMI port automatically. This effectively turns a Raspberry Pi into a Blackmagic Streaming Bridge.

You can use this to stream from OBS or an ATEM Mini across your local network or the internet, and convert that to an HDMI signal in your studio to mix with other HDMI sources locally.

Parts

Here's a list of all the parts you'll need for this.

Of course, you'll need a Raspberry Pi. It doesn't need a ton of RAM, I got one with 4GB but it would probably work fine with 2GB as well. I prefer to buy the parts individually rather than the full kits, but either way is fine. If you get the bare Raspberry Pi you'll need to make sure to get a good quality power supply like this one.

I have tested this on a Raspberry Pi 3, and it does work, but there's much more of a delay, so I definitely recommend doing this with a Raspberry Pi 4 instead.

Get a good quality SD card for the Pi. We won't be doing anything super disk intensive, but it will generally perform a lot better with an SD Card with an "A1" or "A2" rating. You don't need much disk space, 16gb, 32gb or 64gb cards are all good options. The "A1" or "A2" ratings mean the card is optimized for running applications rather than storing photos or videos. I like the Sandisk cards, either the 32gb A1 or the slightly faster 64gb A2.

You'll need a case for the Pi as well. I like this one which is also a giant heat sink so that it's completely silent.

The Raspberry Pi 4 has a Micro HDMI port rather than a full size, so you'll need a cable to plug that in to a regular size HDMI port like this one.

Make sure you have your Raspberry Pi and whatever device you're streaming from connected via a wired network. While this will probably work over wifi, I wouldn't count on wifi to be reliable or fast for this.

Prepare the Raspberry Pi SD Card

First, head over to raspberrypi.org/downloads to download the Raspberry Pi Imager app. This app makes it super easy to create an SD card with the Raspberry Pi OS.

When you choose the operating system to install, select "Other"

then choose "Lite"

We don't need a desktop environment for this so it will be easier to use the command line.

Go ahead and write this to the SD card, then take out the SD card and put it into the Raspberry Pi.

Configure the OS

The first time you boot it up it will take a few seconds and then it will prompt you to log in.

Log in using the default username and password. Enter the username "pi", and then type the password "raspberry". You won't see the password as you're typing it.

It's a good idea to change the password to something else, so go ahead and do that now. Type:

sudo raspi-config

and choose the first option in the menu by pressing "Enter". When you type the new password, you won't see it on the screen, but it will ask you to type it twice to confirm. Press "tab" twice to select "Finish" to close out of this menu.

Next we need to configure the video mode so we know what kind of signal the Raspberry Pi will be sending on the HDMI port. You'll need to edit a text file to make these changes.

sudo nano /boot/config.txt

This will launch a text editor to edit this file. We need to change the following things in the file. These may be commented out with a # so you can delete that character to uncomment the line and make the necessary changes. These options are documented here.

# Make sure the image fits the whole screen
disable_overscan=1

# Set HDMI output mode to Consumer Electronics Association mode
hdmi_group=1

# Enable audio over HDMI
hdmi_drive=2

# Set the output resolution and frame rate to your desired option
# 1920x1080 60fps
hdmi_mode=16
# 1920x1080 25fps
hdmi_mode=33
# 1920x1080 30fps
hdmi_mode=34

To save your changes, press Ctrl+X and then "Y" to confirm, then "Enter". At this point we need to reboot to make the changes take effect, so type:

sudo reboot

and wait a few seconds for it to reboot.

Install and Configure Software

We'll be using nginx with the RTMP module as the RTMP server, and then connect omxplayer to play out the stream over the HDMI port.

Install the necessary software by typing:

sudo apt update
sudo apt install omxplayer nginx libnginx-mod-rtmp

We need to give nginx permission to use the video port, so do that with the following command:

sudo usermod -aG video www-data

Now we need to set up an RTMP server in nginx. Edit the main nginx config file:

sudo nano /etc/nginx/nginx.conf

Scroll all the way to the bottom and copy the below text into the config file:

rtmp {
    server {
        listen 1935;

        application live {
            # Enable livestreaming
            live on;
            # Disable recording
            record off;

            # Allow only this machine to play back the stream
            allow play 127.0.0.1;
            deny play all;

            # Start omxplayer and play the stream out over HDMI
            exec omxplayer -o hdmi rtmp://127.0.0.1:1935/live/$name;
        }
    }
}

The magic sauce here is the exec line that starts omxplayer. omxplayer is an application that can play an RTMP stream out over the Raspberry Pi's HDMI port. The exec line will run this command whenever a new RTMP stream is received. The stream key will be set to the $name variable. Note that this means any stream key will work, there is no access control the way we have it configured here. You can read up on the RTMP module if you'd like to learn how to lock down access to only specific stream keys or if you want to enable recording the stream.
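As one simple way to tighten things up, the RTMP module also supports allow/deny directives for publishing, so you can at least restrict which machines may send a stream to the box. This doesn't validate stream keys, and the subnet below is just a placeholder for your own network:

# Inside the "application live" block: only accept incoming streams from the local subnet
allow publish 192.168.1.0/24;
deny publish all;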

Save this file by pressing ctrl+X, then Y, then enter.

To test the config file for errors, type:

sudo nginx -t

If that worked, you can reload nginx to make your changes take effect:

sudo nginx -s reload

At this point the Raspberry Pi is ready! You can now stream to this box and it will output the received stream over HDMI! Any stream key will work, and you can stream using any sort of device or software like OBS. You'll need to find the IP address of the Raspberry Pi which you can do by typing

hostname -I

To stream to the Raspberry Pi, use the RTMP URL: rtmp://YOUR_IP_ADDRESS/live and anything as the stream key.
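If you want a quick way to test the server without OBS or an ATEM, an ffmpeg command along these lines should work from another machine on the network (assuming ffmpeg is installed, test.mp4 is any local video file, and the stream key "test" is arbitrary):

ffmpeg -re -i test.mp4 -c:v libx264 -preset veryfast -b:v 3000k -c:a aac -b:a 128k -f flv rtmp://YOUR_IP_ADDRESS/live/test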

Setting up the ATEM Mini

We'll now walk through setting up an ATEM Mini Pro to stream to the Raspberry Pi.

If you're familiar with customizing your ATEM Software's Streaming.xml file, you can add a new entry with the Raspberry Pi's IP address. But there is another way which I like better, which is to create a custom streaming file that you can send to a remote guest and they can add it in their Software Control app without needing to edit any XML.

Create a new XML file with the following contents. This is the contents of one of the <service> blocks from the Streaming.xml file, wrapped with a <streaming> element.

<streaming>
  <service>
    <name>Raspberry Pi</name>
    <servers>
      <server>
        <name>Primary</name>
        <url>rtmp://RASPBERRY_PI_IP/live</url>
      </server>
    </servers>
    <profiles>
      <profile>
        <name>Streaming High</name>
        <config resolution="1080p" fps="60">
          <bitrate>9000000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
        <config resolution="1080p" fps="30">
          <bitrate>6000000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
      </profile>
      <profile>
        <name>Streaming Low</name>
        <config resolution="1080p" fps="60">
          <bitrate>4500000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
        <config resolution="1080p" fps="30">
          <bitrate>3000000</bitrate>
          <audio-bitrate>128000</audio-bitrate>
          <keyframe-interval>2</keyframe-interval>
        </config>
      </profile>
    </profiles>
  </service>
</streaming>

Replace the IP address with your own, and you can customize the <name> as well which will show up in the ATEM Software Control. Save this file with an .xml extension.

In the ATEM Software Control app, click "Stream" from the menu bar, choose "Load Streaming Settings", and select the XML file you created.

This will create a new option in the "Live Stream" section where you can stream to the Raspberry Pi instead of YouTube!

Go ahead and enter a streaming key now, it doesn't matter what you enter since there is no access control on the Raspberry Pi. Click "On Air" and in a few seconds you should see the image pop up on the Raspberry Pi!


Identity Praxis, Inc.

Geofencing Warrants

Today in Wired, I read about geofencing warrants. Geofence Warrants are a law enforcement practice. Law enforcement submits a request to tech companies, notably Google, Apple, Uber, Facebook, for a list of all the devices in, at, or near a location during a specific period of time. The objective of this practice is to identify people […] The post Geofencing Warrants appeared first on Identi

Today in Wired, I read about geofencing warrants.

Geofence Warrants are a law enforcement practice. Law enforcement submits a request to tech companies, notably Google, Apple, Uber, Facebook, for a list of all the devices in, at, or near a location during a specific period of time. The objective of this practice is to identify people to be interviewed as part of an investigation. 

For example: to identify devices within a few hundred yards to a mile of a murder scene or accident.
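Conceptually, such a request boils down to a distance-and-time filter over location records. The sketch below is purely illustrative, with an invented data layout; it has nothing to do with any provider's actual systems.

# Conceptual illustration only; invented data layout, not any provider's real API.
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def geofence(pings, center, radius_m, start, end):
    """Return device IDs seen within radius_m of center during [start, end]."""
    return {p["device_id"] for p in pings
            if start <= p["time"] <= end
            and distance_m(p["lat"], p["lon"], *center) <= radius_m}

# Tiny usage example with one made-up location ping.
pings = [{"device_id": "d1", "lat": 37.4220, "lon": -122.0841,
          "time": datetime(2020, 9, 7, 14, 30)}]
print(geofence(pings, (37.4219, -122.0840), 500,
               datetime(2020, 9, 7, 14, 0), datetime(2020, 9, 7, 15, 0)))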

As of the summer of 2020, this practice is coming under scrutiny, as some have raised privacy concerns: the practice could cause harm and have an adverse effect on individuals’ right to privacy and their civil liberties. 

For instance, it could be used

to identify people during a protest, in violation of the protesters’ First Amendment rights, i.e. the right to free speech
in densely populated areas, in violation of individuals’ Fourth Amendment rights, namely their right to be “secure in their persons, houses, papers, and effects, against unreasonable searches and seizures”

I’m all for the development of technology and the use of technology to maintain societal safety and democracy, but I do have my concerns. I want to believe that most people and organizations have well-meaning intentions, that they want to create value for people and themselves, but capabilities like geofencing have so many applications that can be used for malign purposes. We need to be careful.

I think it is crucial that we develop a wide range of technology that leverages personal data, like an individual’s location, to generate value for both society and for the individual, but it is imperative that we find ways to ensure individual users of this technology have the ability to maintain their agency and self-sovereignty.

The post Geofencing Warrants appeared first on Identity Praxis, Inc..

Thursday, 03. September 2020

FACILELOGIN

What is Customer IAM (CIAM)?

Customer Identity and Access Management (CIAM) over the time has become a bit of an overloaded term. There are multiple articles, blogs, analyst reports explaining what CIAM is and defining it in different ways. The objective of this blog is to provide my perspective of CIAM in one-line definition. Before defining what it is, let’s be clear on why we need to worry about CIAM. Why CIAM? Tra

Customer Identity and Access Management (CIAM) has, over time, become a bit of an overloaded term. There are multiple articles, blogs, and analyst reports explaining what CIAM is and defining it in different ways. The objective of this blog is to provide my perspective of CIAM in a one-line definition.

Before defining what it is, let’s be clear on why we need to worry about CIAM.

Why CIAM?

Transforming the customer experience is at the heart of digital transformation. Digital technologies are changing the game of customer interactions, with new rules and possibilities that were unimaginable only a few years back. CIAM is a whole emerging area in the IAM, which is essentially an ingredient for digital customer experience.

The rest of the blog is based on the above statement. I believe that’s fair enough, and I haven’t seen it questioned much. We can safely assume it’s a well-accepted definition of the objective of CIAM. It might not be in the same words, but still, many who talk about CIAM share a similar view.

Gartner Definition

In one of its reports, Gartner defines CIAM in a lighter way as,

In my view it’s not strong enough and does not carry enough depth to reach the objective of CIAM. CIAM is more than managing customer identities in a traditional way. It needs to be the facilitator to leverage identity data to catalyze business growth.

More CIAM Definitions

If you Google, you can find more definitions of CIAM. Not that all of them are wrong, but none of them, in my opinion, put enough weight on the objective of CIAM. Here I list a few of them. Then again, these are different viewpoints, and none of them are wrong or bad.

Customer-focused IAM

Rather than calling CIAM “managing customer identities,” I would like to call it customer-focused IAM. IAM is a well-defined term. As per Gartner,

Customer-focused IAM adds a lot of depth to the definition of IAM. For example, unlike traditional IAM, when you focus on customers, you probably start working with millions of impatient users, who get annoyed by filling out lengthy forms and won’t wait even two seconds to log into a system. At even a small glitch in your system, they will take it to social media and make a big buzz. The slightest leak of customer information could take a big slice off your share price.

Yahoo!, for example, was in the middle of a series of data breaches a few years back that exposed the PII data of more than 1 billion users. That cost the company $350 million: Yahoo! had to lower the sale price of the email and other digital services it sold to Verizon from $4.83 billion to $4.48 billion to account for the potential backlash from the data breaches.

Beyond Customer-focused IAM

Calling CIAM customer-focused IAM still does not put enough emphasis on the expectation that it should catalyze business growth. Unlike traditional IAM, a CIAM system should have the capability to integrate with customer relationship management (CRM) systems, marketing platforms, e-commerce platforms, content management systems (CMS), data management platforms, and many more. A customer-focused IAM system with no business integrations adds little value in terms of the business growth that we expect from having a CIAM solution.

What CIAM is?

Customer-focused IAM does not necessarily mean you only manage customer identities. That’s why I preferred customer-focused IAM over managing customer identities. In a typical CIAM solution, in addition to direct customers, you also need to manage the identities of employees who have direct access to the CIAM solution, or you should integrate with an IAM system that manages employee identities. The latter is the preferred option. Also, not all CIAM solutions are just B2C (business-to-consumer); they can also be B2B (business-to-business) or B2B2C (business-to-business-to-consumer).

What is Customer IAM (CIAM)? was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 02. September 2020

FACILELOGIN

Advanced Authentication Flows with Identity Server

WSO2 Identity Server ships with more than 35 connectors to support different authentication requirements. If you visit store.wso2.com, you can find all of them, and download and install into the product. Just like the product, all these connectors too, are released under the open source Apache 2.0 license. Identity Server supports passwordless authentication with FIDO 2.0 — and mobile push b

WSO2 Identity Server ships with more than 35 connectors to support different authentication requirements. If you visit store.wso2.com, you can find all of them, and download and install into the product. Just like the product, all these connectors too, are released under the open source Apache 2.0 license.

Identity Server supports passwordless authentication with FIDO 2.0, and mobile push-based authentication with Duo and mePin. We have also partnered with Veridium and Aware to support biometric authentication. In addition, Identity Server supports RSA SecurID, TOTP (which you can use with the Google Authenticator mobile app), and OTP over SMS and email.

During a login flow, you can orchestrate between these authenticators by writing an adaptive authentication script in JavaScript. With that you can define how you want to authenticate a user based on environmental attributes (e.g: any HTTP header, geo-location), user attributes / roles (e.g: admins always log with MFA), user behaviors (e.g.: number of failed login attempts, geo-velocity), a risk score and more.

In the above video, I discuss a set of use cases and show you how you can apply adaptive authentication policies to address advanced authentication requirements. If you’d like to know how to set things up from scratch please join our slack channel for any help.

Advanced Authentication Flows with Identity Server was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 31. August 2020

Virtual Democracy

It’s time to eliminate patents in universities: Step up to Open.

“It is true that many people in science will scoff if you try to tell them about a scientific community in which ideas are treated as gifts. This has not been their experience at all. They will tell you a story about a stolen idea. So-and-so invented the famous such and such, but the man … Continue reading It’s time to eliminate patents in universities: Step up to Open.
“It is true that many people in science will scoff if you try to tell them about a scientific community in which ideas are treated as gifts. This has not been their experience at all. They will tell you a story about a stolen idea. So-and-so invented the famous such and such, but the man … Continue reading It’s time to eliminate patents in universities: Step up to Open.

Sunday, 30. August 2020

FACILELOGIN

Speedle+ for Authorization

Speedle+ is a general purpose authorization engine. It allows users to construct their policy model with user-friendly policy definition language and get authorization decision in milliseconds based on the policies. It is based on the Speedle open source project and maintained by previous Speedle maintainers. Speedle was born at Oracle a couple of years back, but didn’t get much community adoption.

Speedle+ is a general purpose authorization engine. It allows users to construct their policy model with a user-friendly policy definition language and get authorization decisions in milliseconds based on the policies. It is based on the Speedle open source project and maintained by previous Speedle maintainers. Speedle was born at Oracle a couple of years back, but didn’t get much community adoption. Both Speedle and Speedle+ try to address a similar set of use cases to the Open Policy Agent (OPA).

In our 31st Silicon Valley IAM meetup, we invited William Cai, the Speedle project lead and the founder of Speedle+ to talk about Speedle+. It was a very informative session, followed by an insightful demo. Please find below the recording of the meetup.

Speedle+ for Authorization was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.

Saturday, 29. August 2020

FACILELOGIN

Five Pillars of CIAM

Transforming the customer experience is at the heart of digital transformation. Digital technologies are changing the game of customer interactions, with new rules and possibilities that were unimaginable only a few years back. Customer Identity and Access Management (CIAM) is a whole emerging area in the IAM, which is essentially an ingredient for digital customer experience. Today’s increasingl

Transforming the customer experience is at the heart of digital transformation. Digital technologies are changing the game of customer interactions, with new rules and possibilities that were unimaginable only a few years back. Customer Identity and Access Management (CIAM) is a whole emerging area in the IAM, which is essentially an ingredient for digital customer experience.

Today’s increasingly sophisticated consumers now view digital interactions as the primary mechanism for interacting with brands and, consequently, expect deeper online relationships delivered simply and unobtrusively. CIAM turns customer data into Gold! Scalability, Security & Privacy, Usability, Extensibility, and APIs & Integration are the five pillars of CIAM. The following video talks about these five pillars in detail.

Five Pillars of CIAM was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


DustyCloud Brainstorms

If you can't tell people anything, can you show them?

The other day I made a sadpost on the fediverse that said: "simultaneously regularly feel like people don't take the directions I'm trying to push seriously enough and that I'm not worth taking seriously". (Similarly, I've also joked that "imposter syndrome and a Cassandra complex are a hell of a …

The other day I made a sadpost on the fediverse that said: "simultaneously regularly feel like people don't take the directions I'm trying to push seriously enough and that I'm not worth taking seriously". (Similarly, I've also joked that "imposter syndrome and a Cassandra complex are a hell of a combo" before.) I got a number of replies from people, both publicly and privately, and the general summary of most of them are, "We do care! The stuff you're working on seems really cool and valuable! I'll admit that I don't really know what it is you're talking about but it sounds important!" (Okay, and I just re-read, and it was only a portion of it that even said the latter part, but of course, what do I emphasize in my brain?) That was nice to hear that people care and are enthusiastic, and I did feel much better, but it did also kind of feel like confirmation that I'm not getting through to people completely either.

But then jfred made an interesting reply:

Yeah, that feels familiar. Impostor syndrome hits hard. You're definitely worth taking seriously though, and the projects you're working on are the most exciting ones I've been following.

As for people not taking the directions you're pushing seriously... I've felt the same at work, and I think part of it is that there's only so much one person can do. But also part of it is: http://habitatchronicles.com/2004/04/you-cant-tell-people-anything/

...it's hard to get ideas across to someone until they can interact with it themselves

So first of all, what a nice post! Second of all, it's kind of funny that jfred replied with this because out of everyone, jfred is one of the people who's picked up and understood what's happening in Spritely Goblins in particular the most, often running or creating demos of things on top of it using things I haven't even documented yet (so definitely not a person I would say isn't taking me seriously or getting what the work is doing).

But third, that link to Habitat Chronicles is right on point for a few reasons: first of all, Spritely is hugely influenced by the various generations of Habitat, from the original first-ever-graphical-virtual-worlds Habitat (premiering on the Commodore 64 in the mid 1980s, of all things!) to Electric Communities Habitat, especially because that's where the E programming language came from, which I think it's safe to say has had a bigger influence on Spritely Goblins than anything (except maybe this paper by Jonathan Rees, which is the first time I realized that "oh, object capability security is just normal programming flow"). But also, that blogpost in particular was so perfect about this subject: You can't tell people anything...!

In summary, the blogpost isn't saying that people are incapable of understanding things, but that people in general don't understand well by "being explained to". What helps people understand is experiences:

Eventually people can be educated, but what you have to do is find a way give them the experience, to put them in the situation. Sometimes this can only happen by making real the thing you are describing, but sometimes by dint of clever artifice you can simulate it.

This really congealed for me and helped me feel justified in an approach I've been taking in the Spritely project. In general, up until now I've spent most of my time between two states: coding the backend super-engineering stuff, and coding demos on top of it. You might in the meanwhile see me post technobabble onto my fediverse or birdsite accounts, but I'm not in general trying too hard to write about the structurally interesting things going on until it comes time to write documentation (whether it be for Goblins, or the immutable storage and mutable storage writeups). But in general, the way that I'm convinced people will get it is not by talk but by first, demonstration, and second, use.

Aside from the few people that have picked up and played with Goblins yet, I don't think I've hit a sufficient amount of "use" yet in Spritely. That's ok, I'm not at that stage yet, and when I am, it'll be fairly clear. (ETA: one year from now.) So let's talk about demonstration.

The first demo I wrote was the Golem demo, that showed roughly that distributed but encrypted storage could be applied to the fediverse. Cute and cool, and that turned the heads of a few fediverse implementers.

But let's face it, the best demo I've done yet was the Terminal Phase time travel demo. And it didn't hurt that it had a cool looking animated GIF to go with it:

Prior to this demo, people would ask me, "What's this Goblins thing?" And I'd try to say a number of things to them... "oh, its a distributed, transactional, quasi-functional distributed programming system safe to run in mutually suspicious networks that follows object capability security and the classic actor model in the style of the E programming language but written in Scheme!" And I'd watch as their eyes glaze over because why wouldn't their eyes glaze over after a statement like that, and then I'd try to explain the individual pieces but I could tell that the person would be losing interest by then and why wouldn't they lose interest but even realizing that I'd kind of feel despair settling in...

But when you show them a pew pew space lasers game and oh wow why is there time travel, how did you add time travel, is it using functional reactive programming or something? (Usually FRP systems are the only other ones where people have seen these kinds of time travel demos.) And I'd say nope! It doesn't require that. Mostly it looks like writing just straightahead code but you get this kind of thing for free. And the person would say, wow! Sounds really cool! How much work does it take to add the time travel into the game? And I just say: no extra work at all. I wrote the whole game without testing anything about time travel or even thinking about it, then later I just threw a few extra lines to write the UI to expose the time travel part and it just worked. And that's when I see peoples' heads explode with wonder and the connections start to be made about what Goblins might be able to do.
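For readers wondering why the time travel can come "for free", the trick is that the engine keeps transactional snapshots of actor state, so rewinding is just re-adopting an earlier snapshot. The sketch below is only an illustrative analogy in Python (it is not Goblins code and not its API), showing how a game loop that commits an immutable snapshot per tick gets rewind essentially for free.

```python
# Illustrative analogy only (not Goblins, not Scheme): if each game tick commits
# an immutable snapshot of the whole state, "time travel" is just picking an
# earlier snapshot back up.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameState:
    tick: int
    ship_x: int
    score: int

def step(state: GameState, move: int) -> GameState:
    # Pure update: returns a new snapshot instead of mutating the old one.
    return replace(state, tick=state.tick + 1,
                   ship_x=state.ship_x + move, score=state.score + 1)

history = [GameState(tick=0, ship_x=40, score=0)]
for move in (1, 1, -2, 0, 3):
    history.append(step(history[-1], move))

# "Rewind" three ticks by adopting an earlier snapshot as the current state.
current = history[-4]
print(current)  # GameState(tick=2, ship_x=42, score=2)
```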

But of course, that's only a partial connection for two reasons. One is that the time travel demo above only shows off a small, minute part of the features of Goblins. And actually, the least interesting of them! It doesn't show off the distributed programming or asynchronous programming parts, it doesn't show off the cool object capability security that's safe to run in mutually suspicious networks. But still: it gave a taste that something cool is happening here. Maybe Chris hasn't just been blowing a bunch of time since finishing the ActivityPub standardization process about two and a half years ago. (Yikes, two and a half years ago!?!)

To complete the rest of that demonstration of the other things in the system requires a different kind of demo. Terminal Phase was a demo to show off the synchronous half of Goblins, but where Goblins really shines is in the asynchronous, distributed programming stuff. That's not ready to show off yet, but I'll give you the first taste of what's in progress:

(Actually a bit more has progressed since I've recorded that GIF, multiple chatrooms and etc, but not really worth bothering to show off quite yet.)

Hmm, that's not really all that thrilling. A chatroom that looks about the same level of featureful, maybe less, than IRC? Well, it could be more exciting if you hear that the full chat protocol implementation is only about 250 lines of code, including authenticating users and posts by users. That's smaller even than its corresponding GUI code, which is less than 300 lines of code. So the exciting thing there is how much heavy lifting Goblins takes care of for you.

But that's hardly razzle-dazzle exciting. In order for me to hint at the rest of what's happening here, we need to put out an asynchronous programming demo that's as or more interesting than the time travel demo. And I expect to do that. I hope soon enough to show off stuff that will make people go, "Oh, what's going on here?"

But even that doesn't complete the connection for people, because showing is one thing but to complete the loop, we need people to use things. We need to get this stuff in the hands of users to play with and experiment themselves. I have plans to do that... and not only that, make this stuff not intimidating for newcomers. When Spritely guides everyday people towards extending Spritely from inside of Spritely as it runs, that's when it'll really click.

And once it clicks sufficiently, it'll no longer become exciting, because people will just come to expect it. A good example of that comes from the aforementioned You can't tell people anything article:

Years ago, before Lucasfilm, I worked for Project Xanadu (the original hypertext project, way before this newfangled World Wide Web thing). One of the things I did was travel around the country trying to evangelize the idea of hypertext. People loved it, but nobody got it. Nobody. We provided lots of explanation. We had pictures. We had scenarios, little stories that told what it would be like. People would ask astonishing questions, like “who’s going to pay to make all those links?” or “why would anyone want to put documents online?” Alas, many things really must be experienced to be understood. We didn’t have much of an experience to deliver to them though — after all, the whole point of all this evangelizing was to get people to give us money to pay for developing the software in the first place! But someone who’s spent even 10 minutes using the Web would never think to ask some of the questions we got asked.

Eventually, if we succeed, the ideas in Spritely will no longer seem exciting... because people will have internalized and come to expect them. Just like hyperlinks on the web today.

But to get there, in the meanwhile, we have to get people interested. To become so successful as to be mundane, we have to first be razzle-dazzle exciting. And to that end, that's why I take the demo approach to Spritely. Because it's hard to tell someone something... but showing them, that's another matter.

PS: It's also not true that people don't get what I'm doing, and that's even been reflected materially. I've been lucky to be supported over the last few years from a combination of a grant from Samsung's Stack Zero and one from NLNet, not to mention quite a few donors on Patreon. I do recognize and appreciate that people are supporting me. In some ways receiving this support makes me feel more seriously about the need to demonstrate and prove that what I'm doing is real. I hope I am doing and will continue to do a sufficient job, and hope that the upcoming demos contribute to that more materially!

PPS: If, in the meanwhile, you're already excited, check out the Goblins documentation. The most exciting stuff is coming in the next major release (which will be out soon), which is when the distributed programming tools will be made available to users of the system for the first time. But if you want to get a head start, the code you'll be writing will mostly work the same between the distributed and non-distributed (as in, distributed across computers/processes) asynchronous stuff, so if you start reading the docs today, most of your code will already just work on the new stuff once released. And if you do start playing around, maybe drop by the #spritely channel on freenode and say hello!


FACILELOGIN

The Integrated Supply Chain for CIAM

Following is the summary of the CIAM maturity model, which I talk about in detail in this blog — and now the question is how we get from level-0 to level-4 or from nonexistent to optimized. That’s where we see the need for a carefully designed integrated supply chain for CIAM. In general, a supply chain is a system of organizations, people, activities, information, and other resources involv

Following is the summary of the CIAM maturity model, which I talk about in detail in this blog — and now the question is how we get from level-0 to level-4 or from nonexistent to optimized. That’s where we see the need for a carefully designed integrated supply chain for CIAM.

In general, a supply chain is a system of organizations, people, activities, information, and other resources involved in supplying a product or service to a consumer, from inception to delivery. In the industrial supply chain, we see five main phases.

Source: https://thenewstack.io/a-successful-api-strategy-needs-a-digital-supply-chain-and-a-thriving-ecosystem/

Under sourcing, you find the raw materials, machinery, and labour you need to build your product. In doing that, you will also find out which suppliers you need to work with. One of the McKinsey reports claims that, on average, an auto manufacturer has around 250 tier-one suppliers.

Then, during the manufacturing phase, you build the product, distribute it, sell it, and finally the consumers start using the product.

We can build a similar analogy in the digital supply chain.

If you are building a CIAM solution, then in the discovery phase, you need to figure out what you need to buy and what you need to build. You need not build everything from scratch.

Uber, for example, uses Google Maps for navigation. It's one of the most critical parts of building a smooth experience for its riders. From 2016 to 2018, Uber paid 58M USD to Google for using Google Maps. But then again, that's peanuts compared with Uber's revenue in 2019, which was 14.15 billion USD. So, you need to make the right decision, and the discovery phase is critical to finding out what's best for you.

In terms of CIAM, during the discovery phase, you need to find out what you want for your IAM system, CRM system, marketing platform, e-commerce platform, fraud detection system, risk engine, CMS, data management platform and so on. For each of these systems, you would need to pick a supplier or vendor. Once again, one of the McKinsey reports claims — technology companies have an average of 125 suppliers in their tier-one group.

Then again, you need not pick everything at once. You can go for a phased approach.

In the development phase, you start building your CIAM solution by integrating multiple systems together, which should finally result in the right level of user experience that helps you drive revenue growth by leveraging identity data to acquire and retain customers.

Then, during the deployment phase, you need to come up with a model to address some of your non-functional requirements, such as scalability, security, and so on. Once the system is up and running, you start onboarding customers. Now you need to start monitoring the customer experience — you need to see how customers use your product, their pain points, and so on — and then the digital supply chain continues.

Then, you go to the next phase, do the discovery based on the services you need, and keep going.

The Integrated Supply Chain for CIAM was originally published in FACILELOGIN on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mike Jones: self-issued

Concise Binary Object Representation (CBOR) Tags for Date progressed to IESG Evaluation

The “Concise Binary Object Representation (CBOR) Tags for Date” specification has completed IETF last call and advanced to evaluation by the Internet Engineering Steering Group (IESG). This is the specification that defines the full-date tag requested for use by the ISO Mobile Driver’s License specification in the ISO/IEC JTC 1/SC 17 “Cards and security devices […]

The “Concise Binary Object Representation (CBOR) Tags for Date” specification has completed IETF last call and advanced to evaluation by the Internet Engineering Steering Group (IESG). This is the specification that defines the full-date tag requested for use by the ISO Mobile Driver’s License specification in the ISO/IEC JTC 1/SC 17 “Cards and security devices for personal identification” working group.

The specification is available at:

https://tools.ietf.org/html/draft-ietf-cbor-date-tag-06

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-cbor-date-tag-06.html
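To give a feel for what the tags look like in practice, here is a small sketch using the Python cbor2 library. The tag numbers used (1004 for an RFC 3339 full-date text string, 100 for days since 1970-01-01) are the ones I recall from the draft; treat them as assumptions and confirm against the published specification.

```python
# Sketch using the cbor2 library. Tag numbers (1004 = RFC 3339 full-date string,
# 100 = days since 1970-01-01) are as recalled from the draft; verify against
# the published spec.
from datetime import date
import cbor2

def encode_full_date(d: date) -> bytes:
    # Tag 1004 wraps a text string such as "2020-08-29".
    return cbor2.dumps(cbor2.CBORTag(1004, d.isoformat()))

def encode_epoch_date(d: date) -> bytes:
    # Tag 100 wraps the integer number of days since 1970-01-01.
    return cbor2.dumps(cbor2.CBORTag(100, (d - date(1970, 1, 1)).days))

encoded = encode_full_date(date(2020, 8, 29))
decoded = cbor2.loads(encoded)
print(decoded.tag, decoded.value)  # 1004 2020-08-29
```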

Thursday, 27. August 2020

Vishal Gupta

Money might be obsolete in 40 years

You can look at money in 3 ways.. A global accounting system — to settle value of transactions between people. A behavior control or gamification — to make people get out of bed and do things. A way to channelize resources efficiently — balance scarcity with demand. However it is a tool that really belongs to the stone age.. why? It fails at global accounting — as it

You can look at money in 3 ways..

A global accounting system — to settle value of transactions between people.

A behavior control or gamification — to make people get out of bed and do things.

A way to channelize resources efficiently — balance scarcity with demand.

However it is a tool that really belongs to the stone age.. why?

It fails at global accounting — as it operates in silos and remains two-dimensional, requiring manual reconciliation, transaction reporting, and taxation.

It fails in behavior control or gamification — as it does not differentiate between good and illegal behavior. It creates perverse incentives and does not prevent money laundering.

It fails at channelizing resources — as it does not account for things like environmental costs or family integrity, and it worsens wealth inequality.

What will replace it?

There will be at least 2 exponential trends.

Robots and AI will take over jobs, and incomes will decrease. Wealth asymmetry will keep increasing.

As autonomous vehicles increase, they change the availability of and access to shared resources. They become a platform for delivering on-demand physical goods and services shared in communities.

As computing power keeps increasing, humanity will adopt new forms of decentralized surveillance and compliance. Reputation will drive human behavior in getting access to community-driven public utilities and services.

The concept of ownership may evaporate as the cost of disposal and environmental damage may be way more than the cost of access to the shared resource.

Most people will be able to live purely on the entitlements they receive for being good citizens, voting, crowdsourcing the right policies, and helping AI to serve better.

The governments will launch more and more robotic services on top of autonomous vehicles as public services.

Wednesday, 26. August 2020

Phil Windley's Technometria

What is SSI?

Summary: If your identity system doesn't use DIDs and verifiable credentials in a way that gives participants autonomy and freedom from intervening administrative authorities, then it's not SSI. A few days ago I was in a conversation with a couple of my identerati friends. When one used the term "SSI", the other asked him to define it since there were so many systems that were claiming

Summary: If your identity system doesn't use DIDs and verifiable credentials in a way that gives participants autonomy and freedom from intervening administrative authorities, then it's not SSI.

A few days ago I was in a conversation with a couple of my identerati friends. When one used the term "SSI", the other asked him to define it since there were so many systems that were claiming to be SSI and yet were seemingly different. That's a fair question. So I thought I'd write down my definition in hopes of stimulating some conversation around the topic.

I think we've arrived at a place where it's possible to define SSI and get broad consensus about it. SSI stands for self-sovereign identity, but that's not really helpful since people have different ideas about what "sovereign" means and what "identity" means. So, rather than try to go down those rabbit holes, let's just stick with "SSI."1

SSI has the following properties:

SSI systems use decentralized identifiers (DIDs) to identify people, organizations, and things. Decentralized identifiers provide a cryptographic basis for the system and can be employed so that they don't require a central administrative system to manage and control the identifiers. Exchanging DIDs is how participants in SSI create relationships, a critical feature.

SSI participants use verifiable credential exchange to share information (attributes) with each other to strengthen or enrich relationships. The system provides the means of establishing credential fidelity.

SSI supports autonomy for participants. The real value proposition of SSI is autonomy—not being inside someone else's administrative system where they make the rules in a one-sided way. Autonomy implies that participants interact as peers in the system. You can build systems that use DIDs and verifiable credentials without giving participants autonomy.
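To make the first two properties concrete, here is a minimal sketch of what a DID and a verifiable credential can look like, loosely following the public W3C examples; the DID method, attribute values, and proof details are illustrative, not prescriptive.

```python
# Minimal, illustrative shapes of a DID and a verifiable credential, loosely
# following the W3C Verifiable Credentials data model. All values are made up.
issuer_did = "did:example:university123"                  # identifies the issuer
subject_did = "did:example:ebfeb1f712ebc6f1c276e12ec21"   # identifies the holder

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AlumniCredential"],
    "issuer": issuer_did,
    "issuanceDate": "2020-08-26T00:00:00Z",
    "credentialSubject": {
        "id": subject_did,
        "alumniOf": "Example University",
    },
    # In practice this is a digital signature made with the issuer's DID key;
    # it is what lets a verifier establish credential fidelity.
    "proof": {"type": "Ed25519Signature2018", "...": "..."},
}
```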

Beyond these, there are lots of choices system architects are making. Debates rage about how specifically credential exchange should work, whether distributed ledgers are necessary, and, if so, how they should be employed. But if you don't use DIDs and verifiable credentials in a way that gives participants autonomy and freedom from intervening administrative authorities, then you're not doing SSI.

As a consequence of these properties, participants in SSI systems use some kind of software agent (typically called a wallet for individuals) to create relationships and exchange credentials. They don't typically see or manage keys or passwords. And there's no artifact called an "identity." The primary artifacts are relationships and credentials. The user experience involves managing these artifacts to share attributes within relationships via credential exchange. This user experience should be common to all SSI systems, although the user interface and what happens under the covers might be different between SSI systems or vendors on those systems.

I'm hopeful that, as we work more on interoperability, the implementation differences will fade away so that we have a single identity metasystem where participants have choice about tools and vendors. An identity metasystem is flexible enough to support the various ad hoc scenarios that the world presents us and will support digital interactions that are life-like.

Notes

1. This is not to say I don't have opinions on what those words mean in this context. I've written about "sovereign" in Cogito, Ergo Sum, On Sovereignty, and Self Sovereign is Not Self Asserted.

Photo Credit: Girl On A Bicycle from alantankenghoe (CC BY 2.0)

Tags: identity ssi self-sovereign credentials sovrin

Monday, 24. August 2020

Kim Cameron's Identity Weblog

Technical naïveté: UK’s Matt Hancock sticks an ignorant finger in the COVID dike

The following letter from a group of UK parliamentarians rings alarm bells that should awaken all of us – I suspect similar things are happening in the shadows well beyond the borders of the United Kingdom… The letter recounts the sad story of one more politician with no need for science or expertise – for … Continue reading Technical naïveté: UK’s Matt Hancock sticks an ignorant finger in the COVI

The following letter from a group of UK parliamentarians rings alarm bells that should awaken all of us – I suspect similar things are happening in the shadows well beyond the borders of the United Kingdom…

The letter recounts the sad story of one more politician with no need for science or expertise – for him, rigorous attention to what systems do to data protection and privacy can simply be dismissed as “bureaucracy”.  Here we see a man in over his head – evidently unaware that failure to follow operational procedures protecting security and privacy introduces great risk and undermines both public trust and national security.  I sincerely hope Mr. Hancock brings in some advisors who have paid their dues and know how this type of shortcut wastes precious time and introduces weakness into our technical infrastructure at a time when cyberattack by organized crime and nation states should get politicians to sober up and get on the case.

Elizabeth Denham CBE, UK Information Commissioner
Information Commissioner’s Office
Wycliffe House
Water Lane
Wilmslow
Cheshire SK9 5AF

Dear Elizabeth Denham,

We are writing to you about the Government’s approach to data protection and privacy during the COVID-19 pandemic, and also the ICO’s approach to ensuring the Government is held to account.
During the crisis, the Government has paid scant regard to both privacy concerns and data protection duties. It has engaged private contractors with problematic reputations to process personal data, as highlighted by Open Democracy and Foxglove. It has built a data store of unproven benefit. It chose to build a contact tracing proximity App that centralised and stored more data than was necessary, without sufficient safeguards, as highlighted by the Human Rights Committee. On releasing the App for trial, it failed to notify yourselves in advance of its Data Protection Impact Assessment – a fact you highlighted to the Human Rights Committee.

Most recently, the Government has admitted breaching their data protection obligations by failing to conduct an impact assessment prior to the launch of their Test and Trace programme. They have only acknowledged this failing in the face of a threat of legal action by Open Rights Group. The Government have highlighted your role at every turn, citing you as an advisor looking at the detail of their work, and using you to justify their actions.

On Monday 20 July, Matt Hancock indicated his disregard for data protection safeguards, saying to Parliament that “I will not be held back by bureaucracy” and claiming, against the stated position of the Government’s own legal service, that three DPIAs covered “all of the necessary”.

In this context, Parliamentarians and the public need to be able to rely on the Regulator. However, the Government not only appears unwilling to understand its legal duties, it also seems to lack any sense that it needs your advice, except as a shield against criticism.
Regarding Test and Trace, it is imperative that you take action to establish public confidence – a trusted system is critical to protecting public health. The ICO has powers to compel documents to understand data processing, contractual relations and the like (Information Notices). The ICO has powers to assess what needs to change (Assessment Notices). The ICO can demand particular changes are made (Enforcement notices).  Ultimately the ICO has powers to fine Government, if it fails to adhere to the standards which the ICO is responsible for upholding.

ICO action is urgently required for Parliament and the public to have confidence that their data is being treated safely and legally, in the current COVID-19 pandemic and beyond.

Signed,
Apsana Begum MP
Steven Bonnar MP
Alan Brown MP
Daisy Cooper MP
Sir Edward Davey MP
Marion Fellows MP
Patricia Gibson MP
Drew Hendry MP
Clive Lewis MP
Caroline Lucas MP
Kenny MacAskill MP
John McDonnell MP
Layla Moran MP
Grahame Morris MP
John Nicholson MP
Sarah Olney MP
Bell Ribeiro-Addy MP
Tommy Sheppard MP
Christopher Stephens MP
Owen Thompson MP
Richard Thomson MP
Philippa Whitford MP

 

[Thanks to Patrick McKenna for keeping me in the loop]


@_Nat Zone

[2020-08-25] I appeared at Fin/Sum | BG2C

A belated report: I appeared at the 'Fi…, co-hosted by the Financial Services Agency and the Nihon Keizai Shimbun. The post [2020-08-25] I appeared at Fin/Sum | BG2C first appeared on @_Nat Zone.

A belated report: I appeared at 'Fin/Sum | BG2C 2020', co-hosted by the Financial Services Agency and the Nihon Keizai Shimbun (Nikkei). It was originally scheduled for March but had been postponed because of the COVID-19 pandemic.

The venue was the Muromachi Mitsui Hall & Conference on the third floor of COREDO Muromachi Terrace in Nihonbashi, Tokyo. There was seating, but social distancing was in place and attendance felt a little sparse; Minister Aso appeared by video message, and most of the other speakers joined remotely.

Photo captions: Minister Aso appearing by video; the main hall during the keynote; everyone except the moderator, Professor Sako of Waseda University, joining remotely; Professor Matsuo of Georgetown University, who came to Japan for this and spent two weeks in quarantine; Room 2; a rare panel with only Japanese panelists.

My session was the panel discussion "Blockchain and Identity," held from 11:20 as part of BG2C. Only I, the moderator, was in the room; the panelists were:

Irene Hernandez, Founder & CEO, GATACA
Maria Vachino, Director, Easy Dynamics Corp.
Kim Cameron, Chief Identity Officer, Convergence Tech (www.identityblog.com)

All of them, naturally, joined remotely, from Spain, the United States, and Canada respectively. For Irene that meant the brutal hour of 3:20 a.m., yet she showed up smartly and stylishly dressed, as did Maria, of course. On the whole, among the overseas panelists the men's hair was a mess while the women looked sharp.

Still, remote panels like this are nerve-wracking. Conditions were not great (the audio from the remote side picked up so much reverb in the hall that it was hard to hear), and on top of that there are all sorts of worries: will everyone actually call in, and will their connections hold up when the time comes?

What with one thing and another, I forgot to take screenshots of my own session. I'm hoping the organizers will release photos and recordings before long…

Incidentally, at Fin/Sum | BG2C I also took part in two other unconference sessions and acted as facilitator for one of them. A remote unconference turned out to be quite an interesting experience as well.

The post [2020-08-25] I appeared at Fin/Sum | BG2C first appeared on @_Nat Zone.


Rebecca Rachmany

Anonymity and Distributed Governance: A Bad Idea

I host a weekly call on distributed governance. This blog provides my personal opinions but by all means view the entire discussion here . One of the big debates in the Genesis DAO started by DAOstack was the question of anonymity. Should people be able to make proposals and ask for budgets without providing a real identity? Part of the problem was a structural problem with DAOstack at the

I host a weekly call on distributed governance. This blog provides my personal opinions, but by all means view the entire discussion here.

One of the big debates in the Genesis DAO started by DAOstack was the question of anonymity. Should people be able to make proposals and ask for budgets without providing a real identity?

Part of the problem was a structural problem with DAOstack at the time: there was no escrow system. You could allocate funds to a project, but you could not hold the funds until the project was complete. In other words, everyone was paid up front for their project as soon as the group approved it.

Another aspect of the problem was human: we all feel a little weird chatting with someone faceless. On the discussion boards, one person could potentially have multiple pseudonyms. If we were discussing something controversial, it would be fairly easy for someone to pretend to be multiple people arguing for or against the proposal. It was also fairly easy to troll the system anonymously. It wasn’t as easy to game the voting, though it was certainly possible.

It Ain’t Real

The example of the Genesis DAO was somewhat trivial, because it was a small number of people who actually did know one another. None of the anonymous people seriously asked for budget (though there was an anonymous troll), the amounts of money in question weren’t huge, and it was a small enough community that everyone pretty much knew one another.

In real life, identity is fundamental to democracy. It amazes me how many people cherish their anonymity so much that this is under debate. Our weekly chat about anonymity was wide-ranging, and as usual, we came to the conclusion that “it depends.”

Does It Really Depend?

Personally, I think it doesn’t depend on the situation at all. At almost all stages of governance, you need to know some information about the person. You almost never need to know their actual name, but you almost always need to know, at a minimum, whether they have the right to influence a particular situation.

My minimum viable values statement about governance is: If the decision will impact you, you should have the ability to influence the decision. How much influence you should have is a different question. For example, if you are not an expert on hydroelectric dams, maybe you don’t get to decide where to build the dam, but if you live near the river, your perspective should be taken into account.

This isn’t the way democracy runs today. Corporations make decisions that impact their workers, customers and the environment with no regard for their opinions. Governments determine foreign policy without having any responsibility for the citizens of foreign countries who will be impacted by those policies. Lawmakers in one state make laws that influence neighboring states. We call that democracy. I digress but it’s important. We are completely normalized to a situation in which, as long as we feel fairly treated inside our organization, the external people’s feelings are irrelevant to what we call fair process.

What’s in a Name?

Throughout most of the decision-making process, therefore, full anonymity is not appropriate. Knowing people’s name isn’t particularly important, but in each stage of the process, some identifying information is helpful to democracy.

In discussions and sentiment signalling, you need to know a person's affiliation and expertise. Are they a resident? Do they work for the solar panel company? Are they an expert in urban planning? Is the electric company going to make a bid to buy up their land if this project is approved? Did they educate themselves and read multiple perspectives on the issue at hand? In the best of cases, you would also show an indicator of their reputation in the domains being discussed.

In problem definition, you need to know the person's sentiment and perspective on the issue as well as something about their cognitive abilities. Are they good at detail or systems-wide analysis? Can they integrate multiple perspectives and listen well to others? Does the makeup of the problem-definition team appropriately represent enough different perspectives on the problem? Are they good at asking deeper questions, or do you use a highly-facilitated problem-definition process?

For proposal-making, while it is often optimal to let everyone propose ideas, it's equally important to have the right experts in the room. Is this person an electrician or architect? Have they done other successful projects in this specific area? Again, the best ideas might come from someone who doesn't have the proper background, but the process of solidifying the proposal needs to be grounded in reality.

For voting you need to know that this is the individual they said they were, and that they are voting in accordance with the rules of the voting system.

For execution of the decision, you need to know the qualifications of the people carrying out the work.

For accountability, oversight and assessment of the process, you need to know the qualifications and the vested interests of the people.

Finally, for whistle-blowing, you need some level of anonymity but you may also need verifiable evidence. Throughout the entire process, there needs to be some mechanism for people to give feedback safely when their own self-interest might be endangered. Scientists at a chemical company are the best qualified to expose if there might be unpublished side-effects to some new product. If there are good enough privacy and anonymity controls, such information could be leaked more transparently while verifying the reliability of the sources.

Collapsing reality and desire

One of the reasons people clamor for anonymity is that the tech collapses our identity, our name, and our private data into one thing. Identity isn't just your name. All of your data doesn't have to be identified in every transaction — collapsing these concepts is sloppy and leads people to think there are only two possibilities: complete anonymity and complete exposure.

Identifying yourself by your name is the easiest way for people to eliminate anonymity, but it exposes more information than is necessary. I think that most of the debate over anonymity is due to the fact that we haven’t found creative ways to de-couple someone’s actual name from other attributes about them.

Technically, it would be possible to create a system where someone has a different name to different people. I would look at all of Alice’s posts and develop an opinion of her. You might be looking at Andrea’s posts — not knowing that Andrea and Alice were actually the same person, but anonymized so that when we met this person in real life, let’s say it’s really Alexa, we wouldn’t be able to attribute that information to her unless she wanted us to. She might appear differently on different forums, because she’s more of an expert in oceanography than in architecture. That’s an extreme implementation, but it’s just one example of how technology can be used to provide us the information we need to form opinions online without compromising someone’s identity.
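One way to build the de-coupling described above is to derive a different, stable pseudonym per context from a single secret, so the same person can appear as Alice on one forum and Andrea on another without the forums being able to correlate the two. The sketch below is only an illustration of the idea, not a recommendation of a particular scheme.

```python
# Illustration only: derive a stable, per-forum pseudonym from one secret so the
# same person appears under unlinkable names in different contexts.
import hashlib
import hmac

FIRST_NAMES = ["Alice", "Andrea", "Alexa", "Avery", "Ash", "Amara", "Arden", "Alba"]

def pseudonym(secret: bytes, context: str) -> str:
    # HMAC ties the name to both the person's secret and the forum, so two
    # forums cannot link the derived names without knowing the secret.
    digest = hmac.new(secret, context.encode(), hashlib.sha256).digest()
    return f"{FIRST_NAMES[digest[0] % len(FIRST_NAMES)]}-{digest.hex()[:6]}"

secret = b"known only to the person (or their wallet)"
print(pseudonym(secret, "oceanography-forum"))   # one name, used only there
print(pseudonym(secret, "architecture-forum"))   # a different, unlinkable name
```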

As soon as we recognize that we can develop solutions for allowing different levels of participation and providing the data we need without exposing something sensitive, we can start to have a conversation about the need for anonymity in specific situations.

Enjoy our talk about anonymity and democracy. Most people don’t agree with me on the call! Feel free to join us. We have new topics every week, and the call is open to all.

Monday, 24. August 2020

Kim Cameron's Identity Weblog

Identity Blog Active Again

Many readers will already know that I retired from Microsoft after twenty years working as Chief Architect of Identity and other related roles. I had a great time there, and Microsoft adopted the Laws of Identity in 2005 at a time when most tech companies were still under dark influence of “Privacy is Dead”, building … Continue reading Identity Blog Active Again

Many readers will already know that I retired from Microsoft after twenty years working as Chief Architect of Identity and other related roles. I had a great time there, and Microsoft adopted the Laws of Identity in 2005 at a time when most tech companies were still under dark influence of “Privacy is Dead”, building systems destined to crash at endless cost into a privacy-enabled future. Microsoft is a big complicated place, but Bill Gates and Satya Nadella were as passionate as me about moving Microsoft and the industry towards digital identity respectful of the rights of individuals and able to empower both individuals and organizations. I thank them and all my wonderful colleagues and friends for a really great ride.

In the last years I led Microsoft to support Decentralized Identity as the best way to recognize the needs and rights of individual people, as well as the way to move enterprises and governments past the security problems, privacy roadblocks and dead ends that had resulted from the backend systems of the last century. Truly exciting, but I needed more time for my personal life.

I love being completely in control of my time, but my interest in digital identity is as keen as ever. So besides working with a small startup in Toronto called Convergence Tech on exciting innovation around Verifiable Credentials and Decentralized Identity, I've decided to start blogging again. I will, as always, attempt to dissuade those responsible for the most egregious assaults on the Laws of Identity. Beyond that, I'll share my thoughts on developments in the world of Decentralized Identity and technology that enfranchises the individual person so each of us can play our role in a democratic and secure digital future.


Matt Flynn: InfoSec | IAM

Addressing the Cloud Security Readiness Gap

Cloud security is about much more than security functionality. The top cloud providers all seem to have a capable suite of security features and most surveyed organizations report that they see all the top cloud platforms as generally secure. So, why do 92% of surveyed organizations still report a cloud security readiness gap? They’re not comfortable with the security implications of moving worklo

Cloud security is about much more than security functionality. The top cloud providers all seem to have a capable suite of security features and most surveyed organizations report that they see all the top cloud platforms as generally secure. So, why do 92% of surveyed organizations still report a cloud security readiness gap? They’re not comfortable with the security implications of moving workloads to cloud even if they believe it’s a secure environment and even if the platform offers a robust set of security features. 

Two contributing factors to that gap include:

78% reported that cloud requires different security than on-prem. With security skills in short supply, the ability to quickly ramp up on a new architecture and a new set of security capabilities can certainly slow progress.
Only 8% of respondents claimed to fully understand the cloud security shared responsibility model; they don't even know what they're responsible for, never mind how to implement the right policies and procedures, hire the right people, or find the right security technologies.

I recently posted about how Oracle is addressing the gap on the Oracle Cloud Security blog. There's a link in the post to a new whitepaper from Dao Research that evaluates the cloud security capabilities offered by Amazon AWS, Google Cloud Platform, Microsoft Azure, and Oracle Cloud Infrastructure.

Oracle took some criticism for arriving late to the game with our cloud infrastructure offering. But, several years of significant investments are paying off. Dao's research concludes that “Oracle has an edge over Amazon, Microsoft, and Google, as it provides a more centralized security configuration and posture management, as well as more automated enforcement of security practices at no additional cost. This allows OCI customers to enhance overall security without requiring additional manual effort, as is the case with AWS, Azure, and GCP.”

A key take-away for me is that sometimes the competitive edge in security is delivered through simplicity and ease of use. We've heard over and over for several years that complexity is the enemy of security. If we can remove human error, bake in security by default, and automate security wherever possible, then the system will be more secure than if we're relying on human effort to properly configure and maintain the system and its security.

Click here to check out the post and the Dao Research whitepaper.

Friday, 21. August 2020

Mike Jones: self-issued

OAuth 2.0 JWT Secured Authorization Request (JAR) sent to the RFC Editor

Congratulations to Nat Sakimura and John Bradley for progressing the OAuth 2.0 JWT Secured Authorization Request (JAR) specification from the working group through the IESG to the RFC Editor. This specification takes the JWT Request Object from Section 6 of OpenID Connect Core (Passing Request Parameters as JWTs) and makes this functionality available for pure […]

Congratulations to Nat Sakimura and John Bradley for progressing the OAuth 2.0 JWT Secured Authorization Request (JAR) specification from the working group through the IESG to the RFC Editor. This specification takes the JWT Request Object from Section 6 of OpenID Connect Core (Passing Request Parameters as JWTs) and makes this functionality available for pure OAuth 2.0 applications – and intentionally does so without introducing breaking changes.
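As a rough illustration of the idea, the authorization request parameters travel inside a signed JWT instead of as bare query parameters. The sketch below uses the PyJWT library; the parameter values, key file name, and key handling are simplified assumptions, not a complete JAR implementation.

```python
# Rough sketch of a JAR-style request object using PyJWT: the OAuth authorization
# request parameters are carried as claims inside a signed JWT. Values, the key
# file, and the endpoint are simplified assumptions.
import jwt  # PyJWT
from urllib.parse import urlencode

claims = {
    "iss": "s6BhdRkqt3",                       # the client_id
    "aud": "https://server.example.com",        # the authorization server
    "response_type": "code",
    "client_id": "s6BhdRkqt3",
    "redirect_uri": "https://client.example.org/cb",
    "scope": "openid email",
    "state": "af0ifjsldkj",
}

with open("client_private_key.pem", "rb") as f:
    request_object = jwt.encode(claims, f.read(), algorithm="RS256")

# The signed JWT is then passed by value (or by reference) to the authorization
# endpoint, for example:
authorize_url = "https://server.example.com/authorize?" + urlencode(
    {"client_id": "s6BhdRkqt3", "request": request_object}
)
print(authorize_url)
```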

This is one of a series of specifications bringing functionality originally developed for OpenID Connect to the OAuth 2.0 ecosystem. Other such specifications included OAuth 2.0 Dynamic Client Registration Protocol [RFC 7591] and OAuth 2.0 Authorization Server Metadata [RFC 8414].

The specification is available at:

https://tools.ietf.org/html/draft-ietf-oauth-jwsreq-28

An HTML-formatted version is also available at:

https://self-issued.info/docs/draft-ietf-oauth-jwsreq-28.html

Again, congratulations to Nat and John and the OAuth Working Group for this achievement!

Thursday, 20. August 2020

Virtual Democracy

Open science badges are coming

Badges give your cultural norms footholds for members to learn and practice “A ‘badge’ is a symbol or indicator of an accomplishment, skill, quality or interest. From the Boy and Girl Scouts, to PADI diving instruction, to the more recently popular geo-location game, Foursquare, badges have been successfully used to set goals, motivate behaviors, represent achievements … Continue reading Open
Badges give your cultural norms footholds for members to learn and practice “A ‘badge’ is a symbol or indicator of an accomplishment, skill, quality or interest. From the Boy and Girl Scouts, to PADI diving instruction, to the more recently popular geo-location game, Foursquare, badges have been successfully used to set goals, motivate behaviors, represent achievements … Continue reading Open science badges are coming

Tuesday, 18. August 2020

Discovering Identity

test post

test post

test post

Monday, 17. August 2020

FACILELOGIN

A Maturity Model for Customer IAM

The main objective of Customer IAM (CIAM) is to drive the revenue growth by leveraging identity data to acquire and retain customers. It… Continue reading on FACILELOGIN »

The main objective of Customer IAM (CIAM) is to drive the revenue growth by leveraging identity data to acquire and retain customers. It…

Continue reading on FACILELOGIN »


Rebecca Rachmany

And the fact that even if you ask for a ballot early, some states won’t send it out until…

And the fact that even if you ask for a ballot early, some states won’t send it out until mid-September for overseas voters.


Saturday, 15. August 2020

Timothy Ruff

Introducing Self-Sovereign Student ID (Part 2 of 2)

Introducing Self-Sovereign Student ID Part 2 of 2: ID Is Only the Beginning. Achievements, Skills, & Competencies For many working with SSI and VCs, exchanging achievements is the top-of-mind use case. By achievements, I mean any kind: diplomas, degrees, certificates, skills, skill shapes, competencies, badges, milestones, grades, awards, commendations, micro-credentials, incremental a
Introducing Self-Sovereign Student ID
Part 2 of 2: ID Is Only the Beginning.
Achievements, Skills, & Competencies

For many working with SSI and VCs, exchanging achievements is the top-of-mind use case. By achievements, I mean any kind: diplomas, degrees, certificates, skills, skill shapes, competencies, badges, milestones, grades, awards, commendations, micro-credentials, incremental achievements, and others.

Students will eventually share their achievements in pursuit of a job, but they also may want to transfer between schools, or reverse transfer credits from their current school to a former one. SSI and VCs are the ideal means of receiving achievements in a form where they can be readily shared again, and instantly trustable without a manual verification process.

But unlike student ID, broadly useful achievements exchange among schools and employers not only requires them to become capable of issuing, holding, and verifying VCs, it also requires them to come to agreement about how the data payload should be arranged. This will happen, but it’s gonna take awhile. Thankfully, there is significant and growing momentum toward precisely that.

For example, serious efforts are underway at the T3 Innovation Network, within the U.S. Chamber of Commerce, in developing Learning and Employment Records, or LERs. LERs are powered by the same VC standards and technologies that enable self-sovereign student ID, with the same issue, hold, verify model to/from an SSI wallet, which they call a “learner wallet” for simplicity. A learner wallet is the same as an SSI wallet, with one important addition: a learner wallet includes in its scope the capability for a student to store some VCs in a cloud-based container with a custodian, in place of or in addition to a personally held wallet, and retain self-sovereign control over them. This is useful with large data sets, infrequently used credentials, and as a backup, and is offered by the likes of ASU’s Trusted Learner Network and the Velocity Network Foundation.

An impressive piece was recently released that everyone interested in interoperable achievements should read, whether in the U.S. or abroad: Applying Self-sovereign Identity Principles To Interoperable Learning Records. The lead author of that piece, Kim Hamilton Duffy, also leads a group called the Digital Credentials Consortium (DCC). DCC includes 14 intrepid schools, including the likes of MIT and Harvard, developing interoperability of achievements that are literally carried by the achievers. They also see VCs as the basis for this interoperability, and are making exciting progress.

My conclusion: VCs are where "the puck is headed" for broad, even global, academic interoperability; they are the containers referred to in these documents that can securely transport the achievement or LER "payload" between issuers and verifiers, via the achiever herself.

By using this same VC technology for student ID, a school does three critical things to lay the foundation for later exchanging achievements:

It puts the tools necessary for exchanging achievements into schools' and students' hands.
It gets schools familiar with working with VCs: issuing, verifying, and managing.
It gets students familiar with using an SSI wallet: making connections, receiving VCs, sharing VCs, communicating, giving consent, etc.

After self-sovereign student ID is in place, issuing an achievement or LER to a student means simply clicking a different button (or two).

The “Digital Experience”

Education is increasingly engaged in digital transformation, from enrollment to instruction to achievement and employment. Through all the schools, programs and other experiences you might have, there is one thing that’s constant: you. You are the ideal courier of your own data whenever it’s useful to prove or qualify for something, if only you could receive your data, have a way to hold it and present it, and it was verifiable when presented. That is precisely what this technology does.

When you realize that self-sovereign student ID is simply a school-issued digital VC held inside a secure wallet capable of also storing verifiable achievements, and that wallet ideally belongs to the student and not the school, it becomes clear how it can become foundational to a lifetime digital learning experience for that learner. In this context, the “Digital Student ID” becomes a part of the digital experience rather than the whole of it.

This also ties into the future of work, where lifelong achievements can be accumulated by the student and later used to prove skills and competencies to prospective employers at a granular level, with the power of selective disclosure enabling strong privacy to avoid oversharing.

Taken together, this is direct application of the “Internet of Education” from the Learning Economy Foundation, a vision that is now feasible and with which self-sovereign student ID is aligned.

Privileges, Perks, & Freebies

Unlike typical digital credentials or even digital student ID, with self-sovereign student ID students can prove their ID and status anywhere, not just in school-approved or school-connected systems. This independence opens up the entire internet and the world itself, to embrace your school’s students as verifiably what they claim to be, and give them whatever benefits and privileges that status might conceivably afford. This could mean, at any non-school organization, online or off:

Formless onboarding & passwordless authentication at websites
Freebies, discounts and special deals anywhere
Access to students-only facilities and events from multiple schools
Access to special loans, grants, scholarships and more

Intuitively, the more benefits you can arrange for your students, the more they will want to become your students; with self-sovereign student ID, you can unlock more benefits than ever before.

Communication & Interaction

This category of capabilities is often overlooked in SSI, but I believe it could become the most used and beneficial class of capabilities that self-sovereign student ID enables. If you think about how much time we spend communicating versus how much time we spend authenticating, you’ll get where I’m coming from.

Before issuing a VC to a student, a direct connection must be established between the student’s chosen wallet and the school. This connection is unlike other connections the school may have with the student, and unlike the connections people have with each other; it is peer-to-peer, private, and encrypted end-to-end.

This connection between school and student isn't ephemeral; it persists until either side breaks it, even for alumni who've long since left the school (useful for helping keep track of grads for annual IPEDS and other accreditation reporting). It is a new, private, digital relationship between school and student that enables interactions of many forms: messages, phone calls, video calls, file exchange, playing games, taking polls, voting, gathering permission or consent (digitally signed by the student), granting the school's consent (digitally signed by the school), and more.

A bit like email, both your school and the student can use different services to help with your/their end of the connection. And these services are substitutable; there is no single vendor in the middle that is connecting both sides, as there is with popular messaging services today. If there were, then self-sovereign independence is lost and most of the benefits listed in this article along with it, replaced with dependence on that intermediary.

Using this capability, schools could do away with proprietary messaging systems they’ve installed to ensure FERPA-protected data, for example, is not shared incorrectly, and instead use a standards-based system that comes for free with self-sovereign student ID.

This communication channel must be respected and not overused, because either side can choose to break it; it’s not like email or a phone number where the other party can simply resume sending messages from another address or device. Reconnection can happen at any time, but both parties must opt in. I particularly love this aspect of SSI, because it is the beginning of the end to spam and phishing, and encourages considerate communications behavior on all sides.

Preventing Fraud & Phishing

Once issued to the student by the school, self-sovereign student ID helps prevent student-related fraud, including with student aid programs the student may apply for with outside organizations, such as government, scholarship programs, and others. Once these organizations realize they can receive cryptographic proof directly from the student, they can lessen their reliance on passwords, social security numbers, and other personal information, bringing us closer to a world devoid of identity theft, where having someone's personal information — even their passwords — is no longer sufficient to impersonate them.

When a student applies and presents their VCs for verification, the benefits offeror, such as FAFSA in the U.S., can instantly and digitally verify, either remotely or in person, the student’s ID and status as a student, even when the organization isn’t connected to the school’s systems. Eventually, as VCs become more prevalent and the student acquires more VCs as they progress in their learner journey, they’ll be able to prove things like their citizenship or visitor status, high school diploma, GED, academic progress, and more, further preventing fraud and accelerating the process of applying for student aid.

Of course this use case requires the benefits offeror to gain the ability to verify VCs, which they could do tomorrow, but in reality may take awhile.

Phishing attempts to impersonate the school in communications with the student can also be detected and prevented, by sending school communications through the private SSI connection or by using it to validate communications sent via other means. And the school isn’t the only one fraudsters may try to impersonate: faculty, staff, tutors, proctors, authorized partners, service providers and more can be strongly and mutually authenticated by using this same capability.

Why Not Embed An SSI Wallet Into Your School’s Existing App?

We hear questions about “embedded wallets” a lot, and for good reason: your school has worked hard to get your official app into as many hands as possible, so adding functionality to it makes sense, whereas asking students to get another ‘app’ — even though an SSI wallet isn’t really an app — seems almost a non-starter.

Well, if a self-sovereign ‘wallet’ were just another app, and intended solely for interacting with your school, this sentiment would make perfect sense. But it isn’t, so it doesn’t, at least in the longer term. But it might in the short term.

We should unpack that a bit.

‘Wallet’ is a woefully inadequate term for what SSI is and does for a person; it is useful because it is an easy to understand metaphor for the most basic building blocks of SSI, but it is ultimately misleading, like mistaking the trunk of an elephant for a snake. SSI is more like a self-sovereign cockpit for consolidating all your relationships, not just your academic ones, and certainly not just one school. SSI consolidates, under your ultimate control, your connections, communications, interactions, agreements, preferences and data, even data not in your physical possession like medical data, which might be best physically stored with a healthcare provider or other custodian. Leaving all that in bespoke, embedded wallets from each provider brings you right back to the status quo, with your relationships, interactions, and data spread out and under the ultimate control of third parties, with all that entails: vendor lock-in; privacy, security, and compliance issues; ID theft; surveillance capitalism; duplicate relationships and data; etc.

Microsoft, Mastercard, IBM, Samsung, the US Credit Union industry and hundreds of others globally are now developing SSI/VC tech for use in many industries, so your school will soon not be the only entity offering SSI-powered benefits to your students, faculty and staff. Imagine if every organization embedded wallets into their own apps rather than working with an external one, or if every payment, ID, and loyalty card you carried required its own separate physical wallet… people would begin to get annoyed, to say the least, and prefer schools and organizations that made life easier, not harder.

All that said, an embedded wallet could be a reasonable tradeoff early on, when SSI is new and its first uses for your students may be limited to your school. So if you jump on self-sovereign student ID quickly as an early adopter, you could embed SSI/VC/wallet tech into your existing app, foregoing self-sovereignty for now without too much of a tradeoff, and still gain several of the key benefits mentioned. Then, as students, faculty, and staff begin to receive SSI connection requests and VC offers from their other relationships in life, and they start wanting to consolidate things, you can make moves toward greater self-sovereignty with less of a dilemma, counting on SSI’s standards-enabled portability.

What’s Ready Now?

What’s Ready

Code, Products, & Services — Open source code; VC-oriented products from Microsoft, Workday, IBM, and dozens of startups.

Compatibility With Existing Federated ID — CAS, Okta, Ping, ForgeRock, etc. for connecting with SAML, OAuth, OIDC and other federation protocols for passwordless login, KBA-free call-in, and cardless walk-in authentication.

Standards Work — W3C, Trust over IP Foundation, DIF

Custodial Solutions — Trusted Learner Network, Velocity Network Foundation

Broad Consensus About VCs — The Verifiable Credential is the only container I’m consistently seeing under consideration for transporting verifiable data between trust domains, which self-sovereign control and trust require, from academia to healthcare to finance and beyond.

Broad Consensus About Individual Control of Data — From academia to healthcare to Europe’s GDPR and the current disdain for big tech and surveillance capitalism, I see broad consensus that control over data must move more and more into the hands of individuals, even data not in their physical possession.

Momentum — Years of global open-source development and standards work for SSI; orgs large and small in many industries are actively participating in developing VC code, standards, use cases and business models; strong support from the T3 Innovation Network in the U.S. Chamber of Commerce.

What’s Not Ready (Yet)

User Experience — The SSI space knows the basics — issue, hold, and verify VCs — but does not yet have the UX figured out. Honestly, the existing SSI wallets I’ve seen are all still a bit clunky and confusing (even though it’s still a much better experience than passwords or answering personal questions), but they do work. Usability must be smoothed and complexity hidden, and access for the disabled, older devices, and more, has yet to be addressed.

Interoperability — Today, standards are ahead of implementations. All the players know the importance of interop but haven’t gotten there yet, though there are serious multi-org testing and development efforts underway to get it resolved. I like the alignment of incentives here: any vendor not interoperable with others will be left on its own technology island.

Communications — While these private, peer-to-peer connections can support any kind of communication, so far I’ve not seen anything other than simple messaging.

Passive Authentication — I look forward to the day when I can be authenticated by those I know and trust passively, by policy, by automatically sharing the appropriate proofs when prompted, without touching my device. As far as I know, only active authentication is now offered.

Embedded Agents in Door Access Readers — Another missing element is embedding SSI agents into NFC (or other tap technology) readers, to make door access compatible and performant.

Ancillary & Rainy Day Use Cases — Most new tech must first nail sunny day scenarios before tackling the rainy day ones. For example, VCs could be used for guardian relationships, children, pets, things, and complex organizational hierarchies, but those haven’t been done anywhere that I’m aware of. VCs could work offline or from a QR code on a card or piece of paper, but no one has gone there yet either, to my knowledge.

Considering what’s ready today, what’s not ready, the long list of benefits for both schools and students, the fraud with existing credentials, and the possibility of eliminating existing costs (see next section), I think it adds up to a compelling case that self-sovereign student ID is ready for piloting.

That said, the pieces that enable self-sovereign student ID are nascent and only recently available; it is a new application of SSI that itself has only been around for about four years, and mostly in the lab and not much in production, though that is changing. Schools considering this in 2020 would be the first, which for those that prefer to lead rather than follow makes for a wonderful opportunity, especially during a pandemic.

Cue the Technology Adoption Lifecycle… welcome, Innovators!

Where to Begin

To get started with self-sovereign student ID, a school needs capabilities to issue and verify VCs, and students need wallets to hold them.

Code for simple issuance tools is available open source, more advanced tools are offered by various SSI service providers, and standards-compliant SSI wallets are available for free in both Apple and Google app stores. Verification is the tricky part, as it requires existing school systems to be adapted to accept VCs for authentication, and later for other purposes¹. Thankfully, some IAM systems commonly found in higher ed are adaptable now:

For schools running CAS, Okta, or any IAM system that facilitates an external Identity Provider, self-sovereign student ID can be integrated relatively painlessly, enabling students to immediately use it in place of passwords, and potentially eliminating the need for a dedicated 2FA provider.

For contact centers running Salesforce Service Cloud, the StudentPass product that Credential Master is developing will integrate natively, enabling students to use their VCs to authenticate when calling in, without answering personal questions.

Of course, integrations can be made with any existing system. Better to start a new identity project with self-sovereign student ID, which can begin to consolidate systems and reduce complexity, than build another layer that may well add to it.

In Conclusion

For those interested primarily in achievements, ID is an “and” and not an “or,” and it should come first, as it lays a foundation of technology and familiarity on which achievements can be issued and quickly made useful. Communications could come soon after ID, because it becomes available as soon as the first connection with a student is created.

Achievements will come later, after the necessary consensus among schools and employers has been sorted to establish semantic meaning and interoperability for exchanged data payloads. Then this model — of people receiving, controlling, and sharing their achievements in a digital, verifiable form — will become the norm, and the future of work will become the present.

Until that day, schools can leap right past proprietary digital ID solutions and go straight to self-sovereign, and reap all its benefits without having to wait for anyone else to agree to anything, giving students a modern digital experience they’ll love.

Part 1: Strong, flexible, digital student ID that’s not bound to your campus, network, or vendor.

¹ Down the road, VCs could also be used for authorization, where granular-level permissions and entitlements are carried and presented by individuals, simplifying the administration of centralized policy-based access control and moving enforcement to the edge.

Special thanks to several helpful reviewers, editors, and contributors: John Phillips, Dr. Phil Windley, Phil Long, Scott Perry, Dr. Samuel Smith, Alan Davies, Taylor Kendal, and Matthew Hailstone.


Introducing Self-Sovereign Student ID (Part 1 of 2)

Introducing Self-Sovereign Student ID Part 1 of 2: Strong, flexible, digital student ID that’s not bound to your campus, network, or vendor. Full Disclosure: The benefits discussed here apply to self-sovereign student ID generally and are not specific to any vendor or product, and do not result in vendor lock-in; any standards-compliant SSI tools for issuance, holding, and verification
Introducing Self-Sovereign Student ID
Part 1 of 2: Strong, flexible, digital student ID that’s not bound to your campus, network, or vendor.

Full Disclosure: The benefits discussed here apply to self-sovereign student ID generally and are not specific to any vendor or product, and do not result in vendor lock-in; any standards-compliant SSI tools for issuance, holding, and verification could deliver these results. As a partner of venture studio Digital Trust Ventures, I am a co-founder of Credential Master, a startup now developing a self-sovereign student ID product.

TL;DR:

Digital student ID: doesn’t exist yet for most students.
Self-sovereign student ID: students independently store verifiable data, and strongly prove things about themselves anywhere, online or off.
Many uses: strong, passwordless login, KBA-free call-in, & cardless walk-in; secure, keyless facility and systems access; prove skills, competencies and achievements; secure peer-to-peer communication and interaction; breakthrough fraud prevention; student-controlled privacy; digitally signed consent; and more.
Open standards: not tied to a specific vendor, usable outside the school’s network.
Schools can start with self-sovereign student ID today, without waiting for collaboration with or agreement from other institutions.
Part 2: ID is Only the Beginning

Note: This article assumes a basic understanding of the concepts of self-sovereign identity (SSI), especially W3C Verifiable Credentials (VCs). The technical specifics of how VCs are issued, revoked, held, verified, and trustworthy are covered extensively across the SSI industry. Basic VC mechanics can be found in my other post: How Verifiable Credentials Bridge Trust Domains.

Digital Student ID

For most students, digital student ID still doesn’t exist.

I’ve been asking experts in academia what comes to mind when I say “digital student ID,” and here is what I learned: other than login credentials for a student portal — which don’t count — a digital student ID still doesn’t exist, at least not for most students in most schools.

The first, still obscure attempts at what I would call real digital student ID have cropped up fairly recently, enabling students to prove their identity in multiple environments and granting them access to electronic systems, software, and even physical facilities. Apple has introduced their version of digital student ID that works exclusively with Apple software and devices, and various smaller companies have launched similarly proprietary software platforms with corresponding apps. Search “student ID” in the Apple or Google app stores and you’ll find dozens of similar offerings.

So why hasn’t digital student ID caught on? I think it’s because available offerings are tied to a single vendor, usable only in systems and facilities where that vendor is installed, and verifiable only by that vendor’s app. In Apple’s case it’s tied to their hardware, too. Even a homegrown digital student ID solution can be verified only by an associated homegrown app. It is vendor lock-in to the extreme, even if that ‘vendor’ is the school itself.

For a school to confidently roll out a more broadly useful digital student ID, it must be with technology that can traverse boundaries between vendors, both within the school’s network and external to it, such as when a student applies for aid. Such technology now exists, and it can do a heckuva lot more than ID.

Introducing a powerful new model for student ID: “self-sovereign” student ID.

Self-Sovereign Student ID

You may have heard of self-sovereign identity, or SSI¹. In this article I’ll explore how self-sovereign student ID can apply SSI capabilities and principles for students, faculty, and staff in an educational environment, primarily higher education.

I recognize that the term “self-sovereign” may not resonate as well with some in academia, which is often dominated by institutions preferring to expand their scope of control and influence, and perceiving self-sovereignty as a threat. However, it is the very act of giving greater control and independence to students that yields most of the benefits listed in this article and, counterintuitively, a closer, richer relationship with those students.

I’ll discuss this more in “Why Self-Sovereign” below.

What Is It?

The term “self-sovereign” is unfortunately not self-explanatory, but when properly understood should feel like a familiar analog to physical identity and credentials, which we already use in a self-sovereign manner well beyond the physical and digital bounds of the organization that issued them to us.

In short, self-sovereign student ID gives students the ability to independently and securely store tamper-resistant, verifiable data related to themselves, and to prove things wherever they want by privately sharing cryptographic (mathematical) proofs from that data, online or off. Importantly, it also enables students to securely, directly, and privately communicate and interact with the school.

Technically, the student has self-sovereign control over three² things:

1. A standardized digital container, for holding verifiable data;
2. Peer-to-peer connections between their container and the containers of other people, organizations, and things³;
3. Verifiable Credentials (VCs) that the student accepts into their wallet, and shares proofs of with others when desired.

Today, that self-sovereign container is typically a standardized digital SSI “wallet”⁴ within a compliant app that the student can see and interact with on a smart device; eventually, wallets will be found anywhere digital things are stored and become part of the everyday life of people, organizations, and things, hidden and integrated into our devices and experiences in a way where we no longer notice or think of them.

One important point that will help readers better understand VCs, both in this document and generally: I believe VCs are misnamed. They are not credentials; they are verifiable containers capable of transporting any data payload, which may or may not be what is typically considered a “credential.” This means that VCs held in a wallet are containers within a larger container, but this is a feature, not a bug; it is how physical goods are transported in meatspace and is very useful. I’ve written about these points in greater detail here.
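For readers who haven’t seen one, here is roughly what a Verifiable Credential looks like as a data structure, expressed as a Python dict. The shape follows the W3C VC data model (context, type, issuer, issuance date, credential subject, proof), but the specific credential type, the attribute names, and the non-identity payload are illustrative assumptions.

```python
# A simplified Verifiable Credential as a plain data structure.
# Shape follows the W3C VC data model; all values here are illustrative.
student_id_vc = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "StudentIDCredential"],  # custom type is hypothetical
    "issuer": "did:example:state-university",
    "issuanceDate": "2020-08-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:student-42",
        "name": "Alex Doe",
        "enrollmentStatus": "active",
        # The payload need not be a traditional "credential" at all:
        "mealPlan": "standard",
        "libraryAccess": True,
    },
    "proof": {
        "type": "Ed25519Signature2018",
        "created": "2020-08-15T00:00:00Z",
        "verificationMethod": "did:example:state-university#key-1",
        "jws": "eyJhbGciOiJFZERTQSJ9..<signature>",  # placeholder
    },
}
```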

Why Self-Sovereign?

Self-sovereignty implies giving students control over their data and how it’s shared. That may seem counterintuitive to the traditional academic notion of in loco parentis, but the truth is that it not only mirrors how physical student IDs work, it also simplifies the IT integration work while expanding the possible use cases.

Importantly, an SSI-based student ID binds the school-issued (and verifiable) information in the ID to a student’s right to use it. Through this binding, students can prove, at their own discretion, that they are enrolled, taking classes, receiving or received grades (and what they are), received a degree, and so on. Giving these facts to students in digital form can make their lives easier, reducing friction and fraud for both students and the school. ID is only one of many VCs that will be issued to the student by the school for various uses.

And once the student has an SSI wallet, it’s not just the school that can now exchange VCs with them. The student might want to connect and exchange VCs with multiple schools, their church, their gym club, their favorite sandwich shop or other entities they interact with. You might think that sounds like an Apple or Samsung wallet, and it kinda does, except:

SSI wallets are portable between vendors, devices, and device types (i.e., move from iPhone to Samsung and back again)
VCs are portable between wallets
VCs can hold any kind of data (i.e., identity, location, degree, favorite pizza)
VCs are cryptographically verifiable
Holders can share only part of a VC, metadata about a VC, or just proof that they have one, without revealing anything else
The protocols that make wallets and VCs portable and interoperable are open and standardized

And of course, Apple and Android wallets don’t enable persistent, encrypted connections with other people, organizations and things for secure peer-to-peer communication and interaction, but I’m getting ahead of myself…

A Thousand Uses

By “useful” I mean it can be used for many important, new, relevant purposes and also make expected and ordinary uses better.

I’ll organize the usefulness of self-sovereign student ID into six categories:

1. Identity & Access
2. Achievements, Skills, & Competencies
3. The “Digital Experience”
4. Privileges, Perks & Freebies
5. Communication & Interaction
6. Preventing Fraud & Phishing

The first category, Identity & Access, will be covered within this Part 1 and is where I’ll delve the most deeply. I’ll touch on the remaining categories in Part 2.

Note: As mentioned above, it is outside the scope of this article to explain the basic mechanics of SSI and VC exchange, such as how the initial encrypted connections are offered and accepted, how VCs are offered and accepted using those connections, how VC presentation and verification occurs, or how those verifications can be trustworthy. Those subjects are covered amply by many other documents, papers, and websites throughout the SSI industry, and are the reason for the industry’s abundant standards activity. Refer to links at the beginning of this article for more information.

Identity & Access

The primary use of self-sovereign student ID — and the gateway to limitless other uses — is as a digital, and therefore more powerful, version of a student ID card, enabling students to instantly and strongly prove their identity and status as a student, online or off, without passwords.

Password Replacement

Perhaps the simplest starting point for self-sovereign student ID is to replace passwords. Passwords have many known problems with both security and UX, and can be replaced with a quick ping to a student’s smartphone. The student responds with a tap, cryptographically sharing their school-issued ID, which the school can instantly verify.

This is a big step up in both security and user experience over passwords and social login, and can be used in conjunction with or as a substitute for 2FA.

For schools running CAS, Okta, or any ID system supportive of a “custom IDP” or “external authentication handler,” password replacement can be implemented quickly.
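For the curious, here is a hedged sketch of what the challenge/response behind that “quick ping” could look like: the school’s IdP sends a one-time nonce, the student’s wallet signs it, and the IdP verifies the signature against the key registered when the student ID was issued. This is a conceptual illustration, not the integration API of CAS, Okta, or any SSI vendor; in practice the exchange runs over the encrypted SSI connection and presents a verifiable credential rather than a bare signature.

```python
# Conceptual password-replacement sketch: nonce challenge + signature check.
# Not the API of any real IdP or SSI wallet; names are illustrative.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair established when the student ID was issued (wallet holds the private key).
wallet_key = Ed25519PrivateKey.generate()
registered_public_key = wallet_key.public_key()  # known to the school's IdP

# 1. IdP generates a one-time challenge and pushes it to the student's wallet.
challenge = secrets.token_bytes(32)

# 2. Wallet prompts the student; a tap approves and signs the challenge.
response = wallet_key.sign(challenge)

# 3. IdP verifies the response; a fresh nonce per login prevents replay.
try:
    registered_public_key.verify(response, challenge)
    print("login approved")
except InvalidSignature:
    print("login rejected")
```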

Identity+

Self-sovereign student ID starts with a student-controlled wallet into which the school can issue VCs containing any kind of data, identity or otherwise. So typical student ID data — name, photo, ID number, etc. — can be supplemented with any other desired information: classes registered for or taken, status and authority as a student leader, entitlements, permissions, preferences, allergies, relationships to other students, contact information, family information, achievements (more on that below), and on and on.

Multi-Factor Authentication+ (MFA+)

The plus in MFA+ means the ability to exchange more factors than is feasible with current tech, while not impairing the user experience, and likely improving it.

Because shared secrets (something the student knows, like a username or password) are replaced with cryptographically secure, digitally signed VCs (something the student has, like a unique key), you can exchange much stronger credentials than passwords. Because they’re digital and easily shared, you can exchange more of them, even dozens. Biometrics, location and more can also be incorporated, either as payloads within VCs or in conjunction with them.

Because VCs can be exchanged actively or passively behind the scenes, they are useful within Zero Trust Architecture (ZTA). For higher-risk applications, multiple signatures can be required from multiple devices and/or multiple individuals. Combining MFA+, ZTA, and multi-device and multi-signature capabilities results in a formidable approach to protecting sensitive systems and facilities.

Vendor Independence

SSI is based on standards, so tools from different vendors can be interoperable. This means that your school isn’t locked into a single vendor and can replace a vendor without reissuing IDs (try that with a traditional ID system). Students, faculty, and staff can choose from multiple wallets to store their credentials, which are portable if they later decide to switch. You might use one vendor to help issue student IDs, another to integrate with the student information system for registration and transcript data, and a third to verify student or staff status at the campus bookstore POS. You don’t have to worry that any vendor selected for digital student IDs can’t support your diverse campus needs, or that you’re stuck with lousy quality or service.

Use of student ID becomes no longer constrained digitally or physically by the boundaries of the school’s trust domain, or the presence of any particular vendor, removing what I believe is the #1 barrier for digital student ID adoption today.

Digital First, Then Physical

Self-sovereign student ID is digital first, but not digital only. With self-sovereign student ID you can have both digital and physical forms in several varieties.

A ‘smart’/chipped student ID card could hold an SSI wallet (note the irony) with some of a student’s VCs, greatly expanding the types of credentials, entitlements, tickets or other items the student can carry and benefit from, and making it much harder for a fraudster to get away with swapping out the name, photo, ID number, etc.

A student could also present a card or a paper with a QR code on it. Scanning the QR code could pull up a verified student ID, including a photo, either from a school-controlled database or from student-controlled storage. QR code-based verification could be restricted to authorized personnel, who could also be required to digitally prove their ID before gaining access.

Combining these capabilities, a school could issue a physical ID card with only three elements: a photo, a QR code, and an embedded chip — no name, no ID number, nothing else. If the chip and QR code worked as described above, even this extreme approach could be more useful than existing student ID cards while being more private for students and more difficult to hack for fraudsters.
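As a small illustration of the QR piece, the sketch below generates a QR code that encodes a verification URL. The URL format and endpoint are assumptions; a production deployment would likely encode a signed or single-use reference rather than a guessable static link.

```python
# Generate a QR code for a (hypothetical) student-ID verification URL.
# Requires the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

# Illustrative only: a real deployment would encode a signed, expiring
# reference rather than a static, guessable URL.
verification_url = "https://id.example.edu/verify/did:example:student-42"

img = qrcode.make(verification_url)
img.save("student-id-qr.png")
print("wrote student-id-qr.png")
```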

Access+

Whether for accessing digital systems, physical facilities or events, self-sovereign student ID can begin to support digital versions of key cards, vouchers, receipts and more, all uniquely associated with the student. A VC can also be issued as a bearer instrument not associated with any individual, like a movie ticket or a coupon. Using geofencing, students could be passively authenticated when entering a secure area, of course subject to their consent to the practice.

Mutual Authentication

Because it enables a bi-directional exchange of VCs, self-sovereign student ID may be the first technology that enables students to authenticate schools as strongly as schools authenticate students, preventing impersonation and phishing.

User Experience

Who doesn’t want fewer usernames and passwords to deal with? With self-sovereign student ID, the student can digitally present their ID and other entitlements and be authenticated more strongly, safely, and quickly than with usernames and passwords. This reduces the incidence of fraud, account take-over, and password reset requests.

Because the school maintains a secure, peer-to-peer connection with the student, it can use this connection to prompt the student for ID when the student calls in or walks in⁵. When calling in, this eliminates the need for knowledge-based authentication questions (birthday, mother’s maiden name, etc.) and speeds up the call; when walking in, this eliminates the need to pull out a physical student ID (useful during a pandemic).

Whether calling in, walking in, or logging in, a student can feel recognized by the school rather than repeatedly treated as a stranger.

Privacy & Compliance

Today, when presenting a physical student ID card, the student divulges everything on it; there’s no way to present only part of it. With selective disclosure enabled by self-sovereign student ID, a student can share only the data required and nothing more, or prove something about their credentials without disclosing any of the data, or just prove that they have it. Some examples are helpful:

Prove status as a currently enrolled student, without revealing name, ID number, or other personal info
Prove age is over a threshold, without revealing actual age or birthday
Prove address on file is within a certain area, city, or building, without revealing the exact location
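To show what a verifier might ask for in such an exchange, here is a hedged sketch of a proof request and a holder-side check, expressed as plain Python. The request format is purely illustrative; real selective-disclosure systems (for example, ZKP-capable credential formats) define their own predicate languages and produce cryptographic proofs rather than returning raw attribute comparisons.

```python
# Illustrative selective-disclosure exchange: the verifier asks for predicates,
# the holder answers true/false without revealing the underlying attributes.
# The request format is invented for illustration; real systems use
# cryptographic predicate proofs, not plaintext comparisons.
from datetime import date

# Data the student holds (never sent to the verifier in this sketch).
held_attributes = {
    "enrollment_status": "active",
    "date_of_birth": date(2001, 3, 9),
    "campus_building": "North Hall",
}

# What the verifier asks to learn -- and nothing more.
proof_request = [
    {"attribute": "enrollment_status", "predicate": "equals", "value": "active"},
    {"attribute": "date_of_birth", "predicate": "older_than_years", "value": 18},
]

def answer(request, attributes, today=date(2020, 8, 15)):
    """Evaluate each predicate locally and disclose only the boolean results."""
    results = []
    for item in request:
        value = attributes[item["attribute"]]
        if item["predicate"] == "equals":
            results.append(value == item["value"])
        elif item["predicate"] == "older_than_years":
            age = (today - value).days // 365  # approximate age is fine here
            results.append(age >= item["value"])
    return results

print(answer(proof_request, held_attributes))  # [True, True]
```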

Selective disclosure is useful for many things, including voting, and can be used online or off. It affords whatever level of privacy the student desires, and satisfies both the spirit and the letter of aggressive privacy regulations such as GDPR and CCPA, while remaining auditable for all interactions between students and the school. And by minimizing the presentation of identity to only the attributes needed, selective disclosure curtails the unchecked growth in the ‘grey market’ of personal data, worth billions of dollars and growing.

And when data is shared, it is shared directly by the student with digitally signed evidence that they did so, a veritable get-out-of-jail free card in today’s PII-sensitive privacy climate.

Consolidation & Simplification

As it scales, federated ID has a tendency to grow into a complex, tangled web of identities, identifiers, vendors, integrations, synchronizations, and registries. Self-sovereign student ID can begin a process of consolidation around a coherent identity meta system, with a reduction in vendors, a reduction in identifiers, and an overall reduction in complexity, without consolidating around a single vendor.

And that’s not a joke.

That said, I’m taking the advice of my business partner, SSI pioneer Dr. Sam Smith, and avoiding diving into how this can occur within this piece. He is the better author for it anyway, as it quickly gets into a technical discussion about self-certifying identifiers, which SSI uses, versus administrative identifiers, which federated ID uses, which is his area of study and expertise. So, Sam can write a separate piece if there is sufficient interest. For readers having a pressing need related to this topic, please get in touch.

ID Is the Low-Hanging Fruit

For most, ID may be the best place to start with VCs. There’s far less complexity than something like achievements, and no other entities need to be consulted before adopting a particular approach; a school catches what it pitches. Plus, the benefits can be broadly and immediately felt, and readily integrated into most IAM systems. And of course, ID is a prerequisite for most other use cases; even if those feel more important or urgent, you usually need to begin by verifying who you’re dealing with.

If you read no further, this should already be apparent: though ID and Access may be a fraction of what self-sovereign student ID can do, it is more than enough to justify serious consideration. The uses discussed in Part 2 are cool and exciting, but just icing on the cake.

If you’re not planning to read Part 2 and are wondering how to operationalize self-sovereign student ID, open Part 2 and skip down to “Where to Begin.”

Part 2: ID is Only the Beginning

¹ One of the earliest/best pieces about SSI, from Christopher Allen: http://www.lifewithalacrity.com/2016/04/the-path-to-self-soverereign-identity.html

² Technically there is a fourth thing a student controls: self-certifying identifiers provide the root of trust, the crypto magic that makes it possible to traverse trust domain boundaries, but they’re “deeper under the hood” and out of scope for this piece. You can learn more about self-certifying identifiers here, here, and here.

³ It’s technically agents that connect, not wallets.

⁴ In the context of student ID, a wallet in the student’s possession is the container that makes the most sense; for large data sets such as entire transcripts or medical data, or infrequently used data, or simply for backup, a student may employ a custodial storage solution in a blockchain or traditional database, while still retaining self-sovereign control over the stored data.

⁵ The Credit Union industry is beginning to deploy MemberPass, which uses this means to streamline incoming calls into service centers.

Special thanks to several helpful reviewers, editors, and contributors: John Phillips, Dr. Phil Windley, Phil Long, Scott Perry, Dr. Samuel Smith, Alan Davies, Taylor Kendal, and Matthew Hailstone.

Friday, 14. August 2020

Mike Jones: self-issued

COSE and JOSE Registrations for Web Authentication (WebAuthn) Algorithms is now RFC 8812

The W3C Web Authentication (WebAuthn) working group and the IETF COSE working group created “CBOR Object Signing and Encryption (COSE) and JSON Object Signing and Encryption (JOSE) Registrations for Web Authentication (WebAuthn) Algorithms” to make some algorithms and elliptic curves used by WebAuthn and FIDO2 officially part of COSE and JOSE. The RSA algorithms are […]

The W3C Web Authentication (WebAuthn) working group and the IETF COSE working group created “CBOR Object Signing and Encryption (COSE) and JSON Object Signing and Encryption (JOSE) Registrations for Web Authentication (WebAuthn) Algorithms” to make some algorithms and elliptic curves used by WebAuthn and FIDO2 officially part of COSE and JOSE. The RSA algorithms are used by TPMs. The “secp256k1” curve registered (a.k.a., the Bitcoin curve) is also used in some decentralized identity applications. The completed specification has now been published as RFC 8812.

As described when the registrations recently occurred, the algorithms registered are:

RS256 – RSASSA-PKCS1-v1_5 using SHA-256 – new for COSE
RS384 – RSASSA-PKCS1-v1_5 using SHA-384 – new for COSE
RS512 – RSASSA-PKCS1-v1_5 using SHA-512 – new for COSE
RS1 – RSASSA-PKCS1-v1_5 using SHA-1 – new for COSE
ES256K – ECDSA using secp256k1 curve and SHA-256 – new for COSE and JOSE

The elliptic curves registered are:

secp256k1 – SECG secp256k1 curve – new for COSE and JOSE

See them in the IANA COSE Registry and the IANA JOSE Registry.
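For readers curious what using the newly registered ES256K algorithm looks like in practice, below is a hedged sketch that signs a compact JWS over secp256k1 with the Python cryptography library. It hand-assembles the JWS for illustration, including converting the DER-encoded ECDSA signature into the raw R || S form that JOSE requires; in production you would use a JOSE library with ES256K support. The claims and key are placeholders.

```python
# Sketch: sign a compact JWS with ES256K (ECDSA over secp256k1 with SHA-256).
# Hand-assembled for illustration; prefer a JOSE library with ES256K support.
import base64
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.utils import decode_dss_signature

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

private_key = ec.generate_private_key(ec.SECP256K1())

header = {"alg": "ES256K", "typ": "JWT"}
payload = {"sub": "example-subject", "iat": 1597363200}  # illustrative claims

signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"

# ECDSA produces a DER-encoded signature; JOSE wants raw R || S, 32 bytes each.
der_signature = private_key.sign(signing_input.encode(), ec.ECDSA(hashes.SHA256()))
r, s = decode_dss_signature(der_signature)
raw_signature = r.to_bytes(32, "big") + s.to_bytes(32, "big")

jws = f"{signing_input}.{b64url(raw_signature)}"
print(jws)
```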


Just a Theory

Flaked, Brewed, and Docked

Sqitch v0.9998: Now with Snowflake support, an improved Homebrew tap, and the quickest way to get started: the new Docker image (https://hub.docker.com/r/sqitch/sqitch/).

I released Sqitch v0.9998 this week. Despite the long list of changes, only one new feature stands out: support for the Snowflake Data Warehouse platform. A major work project aims to move all of our reporting data from Postgres to Snowflake. I asked the team lead if they needed Sqitch support, and they said something like, “Oh hell yes, that would save us months of work!” Fortunately I had time to make it happen.

Snowflake’s SQL interface ably supports all the functionality required for Sqitch; indeed, the implementation required fairly little customization. And while I did report a number of issues and shortcomings to the Snowflake support team, they always responded quickly and helpfully — sometimes revealing undocumented workarounds to solve my problems. I requested that they be documented.

The work turned out well. If you use Snowflake, consider managing your databases with Sqitch. Start with the tutorial to get a feel for it.

Bundle Up

Of course, you might find it a little tricky to get started. In addition to a long list of Perl dependencies, each database engine requires two external resources: a command-line client and a driver library. For Snowflake, that means the SnowSQL client and the ODBC driver. The PostgreSQL engine requires psql and DBD::Pg compiled with libpq. MySQL calls for the mysql client and DBD::mysql compiled with the MySQL connection library. And so on. You likely don’t care what needs to be built and installed; you just want it to work. Ideally, install a binary and go.

I do, too. So I spent a month or so building Sqitch bundling support, to easily install all its Perl dependencies into a single directory for distribution as a single package. It took a while because, sadly, Perl provides no straightforward method to build such a feature without also bundling unneeded libraries. I plan to write up the technical details soon; for now, just know that I made it work. If you Homebrew, you’ll reap the benefits in your next brew install sqitch.

Pour One Out

In fact, the bundling feature enabled a complete rewrite of the Sqitch Homebrew tap. Previously, Sqitch’s Homebrew formula installed the required modules in Perl’s global include path. This pattern violated Homebrew best practices, which prefer that all the dependencies for an app, aside from configuration, reside in a single directory, or “cellar.”

The new formula follows this dictum, bundling Sqitch and its CPAN dependencies into a nice, neat package. Moreover, it enables engine dependency selection at build time. Gone are the separate sqitch_$engine formulas. Just pass the requisite options when you build Sqitch:

brew install sqitch --with-postgres-support --with-sqlite-support

Include as many engines as you need (here’s the list). Find yourself with only Postgres support but now need Oracle, too? Just reinstall:

export HOMEBREW_ORACLE_HOME=$ORACLE_HOME
brew reinstall sqitch --with-postgres-support --with-oracle-support

In fact, the old sqitch_oracle formula hasn’t worked in quite some time, but the new $HOMEBREW_ORACLE_HOME environment variable does the trick (provided you disable SIP; see the instructions for details).

I recently became a Homebrew user myself, and felt it important to make Sqitch build “the right way”. I expect this formula to be more reliable and better maintained going forward.

Still, despite its utility, Homebrew Sqitch lives up to its name: It downloads and builds Sqitch from source. To attract newbies with a quick and easy method to get started, we need something even simpler.

Dock of the Bae

Which brings me to the installer that excites me most: The new Docker image. Curious about Sqitch and want to download and go? Use Docker? Try this:

curl -L https://git.io/JJKCn -o sqitch && chmod +x sqitch
./sqitch help

That’s it. On first run, the script pulls down the Docker image, which includes full support for PostgreSQL, MySQL, Firebird, and SQLite, and weighs in at just 164 MB (54 MB compressed). Thereafter, it works just as if Sqitch were locally installed. It uses a few tricks to achieve this bit of magic:

It mounts the current directory, so it acts on the Sqitch project you intend it to
It mounts your home directory, so it can read the usual configuration files
It syncs the environment variables that Sqitch cares about

The script even syncs your username, full name, and host name, in case you haven’t configured your name and email address with sqitch config. The only outwardly obvious difference is the editor:¹ If you add a change and let the editor open, it launches nano rather than your preferred editor. This limitation allows the image to remain as small as possible.

I invested quite a lot of effort into the Docker image, to make it as small as possible while maximizing out-of-the-box database engine support — without foreclosing support for proprietary databases. To that end, the repository already contains Dockerfiles to support Oracle and Snowflake: simply download the required binary files, build the image, and push it to your private registry. Then set $SQITCH_IMAGE to the image name to transparently run it with the magic shell script.

Docker Docket

I plan to put more work into the Sqitch Docker repository over the next few months. Exasol and Vertica Dockerfiles come next. Beyond that, I envision a matrix of different images, one for each database engine, to minimize download and runtime size for folx who need only one engine — especially for production deployments. Adding Alpine-based images also tempts me; they’d be even smaller, though unable to support most (all?) of the commercial database engines. Still: tiny!

Container size obsession is a thing, right?

At work, we believe the future of app deployment and execution belongs to containerization, particularly on Docker and Kubernetes. I presume that conviction will grant me time to work on these improvements.

Well, that and connecting to a service on your host machine is a little fussy. For example, to use Postgres on your local host, you can’t connect to Unix sockets. The shell script enables host networking, so on Linux, at least, you should be able to connect to localhost to deploy your changes. On macOS and Windows, use the host.docker.internal host name. ↩︎

More about… Sqitch Docker Homebrew Snowflake

Wednesday, 12. August 2020

DustyCloud Brainstorms

Terminal Phase in Linux Magazine (Polish edition)

Hey look at that! My terminal-space-shooter-game Terminal Phase made an appearance in the Polish version of Linux Magazine. I had no idea, but Michal Majchrzak both tipped me off to it and took the pictures. (Thank you!) I don't know Polish but I can see some references to Konami and …

Hey look at that! My terminal-space-shooter-game Terminal Phase made an appearance in the Polish version of Linux Magazine. I had no idea, but Michal Majchrzak both tipped me off to it and took the pictures. (Thank you!)

I don't know Polish but I can see some references to Konami and SHMUP (shoot-em-up game). The screenshot they have isn't the one I published, so I guess the author got it running too... I hope they had fun!

Apparently it appeared in the June 2020 edition:

I guess because print media coverage is rarer these days, it feels cooler to get covered in it somehow?

I wonder if I can find a copy somewhere!

Tuesday, 11. August 2020

Mike Jones: self-issued

Registries for Web Authentication (WebAuthn) is now RFC 8809

The W3C Web Authentication (WebAuthn) working group created the IETF specification “Registries for Web Authentication (WebAuthn)” to establish registries needed for WebAuthn extension points. These IANA registries were populated in June 2020. Now the specification creating them has been published as RFC 8809. Thanks again to Kathleen Moriarty and Benjamin Kaduk for their Area Director […]

The W3C Web Authentication (WebAuthn) working group created the IETF specification “Registries for Web Authentication (WebAuthn)” to establish registries needed for WebAuthn extension points. These IANA registries were populated in June 2020. Now the specification creating them has been published as RFC 8809.

Thanks again to Kathleen Moriarty and Benjamin Kaduk for their Area Director sponsorships of the specification and to Jeff Hodges and Giridhar Mandyam for their work on it.

Monday, 10. August 2020

FACILELOGIN

OpenID Connect Authentication Flows

The OpenID Connect core specification defines three authentication flows: authorization code flow, implicit flow, and hybrid flow. The… Continue reading on FACILELOGIN »

The OpenID Connect core specification defines three authentication flows: authorization code flow, implicit flow, and hybrid flow. The…

Continue reading on FACILELOGIN »
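As a minimal illustration of the first of those flows, an authorization code flow begins with a front-channel request to the OP’s authorization endpoint; the sketch below assembles one. The endpoint, client_id, and redirect URI are placeholders, and the full flow (code exchange, token validation) is covered in the linked post and the OpenID Connect Core spec.

```python
# Assemble an OpenID Connect authorization request for the authorization code flow.
# Endpoint, client_id, and redirect_uri are placeholders.
import secrets
from urllib.parse import urlencode

authorization_endpoint = "https://op.example.com/authorize"

params = {
    "response_type": "code",             # selects the authorization code flow
    "client_id": "example-client-id",
    "redirect_uri": "https://rp.example.org/callback",
    "scope": "openid profile email",
    "state": secrets.token_urlsafe(16),  # CSRF protection, echoed back by the OP
    "nonce": secrets.token_urlsafe(16),  # bound into the ID Token to prevent replay
}

print(f"{authorization_endpoint}?{urlencode(params)}")
```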


Phil Windley's Technometria

Cogito, Ergo Sum

Summary: Sovereign is the right word for describing the essential distinction between our inalienable self and the administrative identifiers and attributes others assign to us online. Descartes didn't say "I have a birth certificate, therefore I am." We do not spring into existence because some administrative system provisions an identifier for us. No single administrative regime, or e

Summary: Sovereign is the right word for describing the essential distinction between our inalienable self and the administrative identifiers and attributes others assign to us online.

Descartes didn't say "I have a birth certificate, therefore I am." We do not spring into existence because some administrative system provisions an identifier for us. No single administrative regime, or even a collection of them, defines us. Doc Searls said this to me recently:

We are, within ourselves, what William Ernest Henley calls “the captain” of “my unconquerable soul”, and what Walt Whitman meant when he said “I know this orbit of mine cannot be swept by a carpenter's compass,” and “I know that I am august. I do not trouble my spirit to vindicate itself or be understood.” Each of us has an inner essence that is who we are.

Even in the digital realm, and limiting ourselves to what Joe Andrieu calls "functional identity", we are more than any single relationship. Our identity is something we are, not something we have. And it's certainly not what someone else provides to us. We are self-sovereign.

Some shrink from the self-sovereign label. There are some good reasons for their reluctance. Self-sovereignty requires some explanation. And it has political overtones that make some uncomfortable. But I've decided to embrace it. Self-sovereign identity is more than decentralized identity. Self-sovereign identity implies autonomy and inalienability.

If our identity is inalienable, then it's not transferable to another and not capable of being taken away or denied. To be inalienable is to be sovereign: to exercise supreme authority over one’s personal sphere—Whitman’s “orbit of mine.” Administrative identifiers, what others choose to call us, are alienable. Relationships are alienable. Most attributes are alienable¹. Who we are, and our right to choose how we present ourselves to the world, is not alienable. The distinction between the inalienable and the alienable, the self-sovereign and the administrative, is essential. Without this distinction, we are constantly at the mercy of the various administrative regimes we interact with.

Self-sovereignty is concerned with relationships and boundaries. When we say a nation is sovereign, we mean that it can act as a peer to other sovereign states, not that it can do whatever it wants. Sovereignty defines the source of our authority to act. Sovereignty defines a boundary, within which the sovereign has complete control and outside of which the sovereign relates to others within established rules and norms. Self-sovereign identity defines the boundary in the digital space, gives tools to people and organizations so they can assert control—their autonomy, and defines the rules for how relationships are formed, authenticated, and used.

In the opening chapter of her groundbreaking book, The Age of Surveillance Capitalism, Shoshana Zuboff asks the question "Can the digital future be our home?" Not if it's based on administrative identity systems and the anemic, ofttimes dangerous, relationships they create. By starting with self-sovereignty, we found our digital relationships on principles that support and preserve human freedom, privacy, and dignity. So, while talking about trust, decentralization, credentials, wallets, and DIDs might help explain how self-sovereign identity works, sovereignty explains why we do it. If self-sovereignty requires explanation, maybe that's a feature, not a bug.

End Notes

¹ I'm distinguishing attributes from traits without going too deep into that idea for now.

Photo Credit: Cogito, Ergo Sum from Latin Quotes (Unknown License)

Tags: identity ssi self-sovereign

Saturday, 08. August 2020

Mike Jones: self-issued

OpenID Connect Logout specs addressing all known issues

I’ve been systematically working through all the open issues filed about the OpenID Connect Logout specs in preparation for advancing them to Final Specification status. I’m pleased to report that I’ve released drafts that address all these issues. The new drafts are: OpenID Connect RP-Initiated Logout 1.0 – draft 01 OpenID Connect Session Management 1.0 […]

I’ve been systematically working through all the open issues filed about the OpenID Connect Logout specs in preparation for advancing them to Final Specification status. I’m pleased to report that I’ve released drafts that address all these issues. The new drafts are:

OpenID Connect RP-Initiated Logout 1.0 – draft 01
OpenID Connect Session Management 1.0 – draft 30
OpenID Connect Front-Channel Logout 1.0 – draft 04
OpenID Connect Back-Channel Logout 1.0 – draft 06
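As a quick illustration of the first of those drafts, an RP-initiated logout is essentially a redirect to the OP’s end_session_endpoint with a few query parameters; the sketch below builds such a URL. The endpoint and values are placeholders, and exact parameter requirements should be checked against the current draft.

```python
# Build an RP-initiated logout URL (OpenID Connect RP-Initiated Logout).
# Endpoint and values are placeholders; consult the spec for parameter details.
from urllib.parse import urlencode

end_session_endpoint = "https://op.example.com/logout"  # from OP discovery metadata

params = {
    "id_token_hint": "eyJhbGciOi...",  # ID Token previously issued to this RP
    "post_logout_redirect_uri": "https://rp.example.org/logged-out",
    "state": "af0ifjsldkj",            # echoed back to the RP after logout
}

logout_url = f"{end_session_endpoint}?{urlencode(params)}"
print(logout_url)
```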

The OpenID Connect working group waited to make these Final Specifications until we received feedback resulting from certification of logout deployments. Indeed, this feedback identified a few ambiguities and deficiencies in the specifications, which have been addressed in the latest edits. You can see the certified logout implementations at https://openid.net/certification/. We encourage you to likewise certify your implementations now.

Please see the latest History entries in the specifications for descriptions of the normative changes made. The history entries list the issue numbers addressed. The issues can be viewed in the OpenID Connect issue tracker, including links to the commits containing the changes that resolved them.

All are encouraged to review these drafts in advance of the formal OpenID Foundation review period for them, which should commence in a few weeks. If you believe that changes are needed before they become Final Specifications, please file issues describing the proposed changes. Discussion on the OpenID Connect mailing list is also encouraged.

Special thanks to Roland Hedberg for writing the initial logout certification tests. And thanks to Filip Skokan for providing resolutions to two of the thornier Session Management issues.

Friday, 07. August 2020

Transparent Health Blog

State API Implementation Playbook Published

This playbook helps states implement the new CMS Interoperability and Patient Access Rule.  https://bit.ly/state-api-playbook    

This playbook helps states implement the new CMS Interoperability and Patient Access Rule. 

https://bit.ly/state-api-playbook  

 



Heather Vescent

This was exactly what I needed. Thank you dear friend.

This was exactly what I needed. Thank you dear friend.


Wednesday, 05. August 2020

Virtual Democracy

Steal like a(n Open) Scientist

Science is give and take “After giving talks about open science I’ve sometimes been approached by skeptics who say, ‘Why would I help out my competitors by sharing ideas and data on these new websites? Isn’t that just inviting other people to steal my data, or to scoop me? Only someone naive could think this will … Continue reading Steal like a(n Open) Scientist

Time for the academy to retire the giants

The academy can’t afford a culture centered on creating giants in their fields “If I have seen further it is by standing on the sholders [sic] of Giants.” Isaac Newton. 1676. Letter to Robert Hooke (before they became bitter enemies). This notion was a commonplace in the 17th Century, with the implications that even a dwarf … Continue reading Time for the academy to retire the giants

Monday, 03. August 2020

Jon Udell

Robert Plomin on heritability

This post is just a record of the key insights I took away from Sam Harris’ enlightening talk with Robert Plomin. I sure wish it were easier to capture and embed these kinds of audio segments, and to do so more effectively — ideally with transcription. It was way too much work to construct the … Continue reading Robert Plomin on heritability

This post is just a record of the key insights I took away from Sam Harris’ enlightening talk with Robert Plomin.

I sure wish it were easier to capture and embed these kinds of audio segments, and to do so more effectively — ideally with transcription. It was way too much work to construct the URLs that deliver these segments into the embedded players I’ve included here, even with the tool I made for that purpose. It’s nice to have a standard embedded player, finally, but since it doesn’t show the endpoints of the segments you can’t tell they’re each just a few minutes. It looks like they all run to the end.

I wound up going to more trouble than it’s probably worth to convey the most memorable parts of that excellent podcast, and I don’t have time to transcribe more fully, but for what it’s worth, here are the segments that I’ll be remembering.

1. “The most important thing we’ve learned from the DNA revolution in the last 10 years is that genetic influences on complex traits are due to thousands of tiny DNA differences.”

2. “These polygenic scores are perfectly normally distributed.”

3. “I’m saying there are no disorders, there are just quantitative dimensions.”


Sunday, 02. August 2020

Just a Theory

We Need to Talk About Ventilation

Zeynep Tufekci on aerosolized Covid-19 transmission and the need for ventilation.

Zeynep Tufekci, in a piece for The Atlantic:

Jimenez also wondered why the National Guard hadn’t been deployed to set up tent schools (not sealed, but letting air in like an outdoor wedding canopy) around the country, and why the U.S. hadn’t set up the mass production of HEPA filters for every classroom and essential indoor space. Instead, one air-quality expert reported, teachers who wanted to buy portable HEPA filters were being told that they weren’t allowed to, because the CDC wasn’t recommending them. It is still difficult to get Clorox wipes in my supermarket, but I went online to check, and there is no shortage of portable HEPA filters. There is no run on them.

It’s the profoundly irresponsible plan to reopen schools without any remotely sufficient attempt to upgrade and modernize the air circulation systems of our dilapidated public school buildings that disturbs me. Meanwhile, school reopening proposals pay undue attention to hygiene theater to assuage fears, while very real risks go largely unaddressed.1 It simply won’t work, and that means disastrous outcomes for communities.

And it’s not like there aren’t ways to get things under better control. Tufekci continues:

However, Japan masked up early, focused on super-spreader events (a strategy it calls “